diff --git "a/corpus/corpus.csv" "b/corpus/corpus.csv" deleted file mode 100644--- "a/corpus/corpus.csv" +++ /dev/null @@ -1,70856 +0,0 @@ -id,document_label,page_content -81D740CEF3967C20721612B7866072EF240484E9,81D740CEF3967C20721612B7866072EF240484E9," Decision Optimization Java models - -You can create and run Decision Optimization models in Java by using the Watson Machine Learning REST API. - -You can build your Decision Optimization models in Java or you can use Java worker to package CPLEX, CPO, and OPL models. - -For more information about these models, see the following reference manuals. - - - -* [Java CPLEX reference documentation](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.cplex.help/refjavacplex/html/overview-summary.html) -* [Java CPO reference documentation](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.cpo.help/refjavacpoptimizer/html/overview-summary.html) -* [Java OPL reference documentation](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.ide.help/refjavaopl/html/overview-summary.html) - - - -To package and deploy Java models in Watson Machine Learning, see [Deploying Java models for Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployJava.html) and the boilerplate provided in the [Java worker GitHub](https://github.com/IBMDecisionOptimization/cplex-java-worker/blob/master/README.md). -" -6DBD14399B24F78CAFEC6225B77DAFAE357DDEE5,6DBD14399B24F78CAFEC6225B77DAFAE357DDEE5," Decision Optimization notebooks - -You can create and run Decision Optimization models in Python notebooks by using DOcplex, a native Python API for Decision Optimization. Several Decision Optimization notebooks are already available for you to use. - -The Decision Optimization environment currently supports Python 3.10. The following Python environments give you access to the Community Edition of the CPLEX engines. The Community Edition is limited to solving problems with up to 1000 constraints and 1000 variables, or with a search space of 1000 X 1000 for Constraint Programming problems. - - - -* Runtime 23.1 on Python 3.10 S/XS/XXS -* Runtime 22.2 on Python 3.10 S/XS/XXS - - - -To run larger problems, select a runtime that includes the full CPLEX commercial edition. The Decision Optimization environment ( DOcplex) is available in the following runtimes (full CPLEX commercial edition): - - - -* NLP + DO runtime 23.1 on Python 3.10 with CPLEX 22.1.1.0 -* DO + NLP runtime 22.2 on Python 3.10 with CPLEX 20.1.0.1 - - - -You can easily change environments (runtimes and Python version) inside a notebook by using the Environment tab (see [Changing the environment of a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.htmlchange-env)). Thus, you can formulate optimization models and test them with small data sets in one environment. Then, to solve models with bigger data sets, you can switch to a different environment, without having to rewrite or copy the notebook code. - -Multiple examples of Decision Optimization notebooks are available in the Samples, including: - - - -* The Sudoku example, a Constraint Programming example in which the objective is to solve a 9x9 Sudoku grid. -* The Pasta Production Problem example, a Linear Programming example in which the objective is to minimize the production cost for some pasta products and to ensure that the customers' demand for the products is satisfied. - - - -These and more examples are also available in the jupyter folder of the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples) - -All Decision Optimization notebooks use DOcplex. -" -277C8CB678CAF766466EDE03C506EB0A822FD400,277C8CB678CAF766466EDE03C506EB0A822FD400," Supported data sources in Decision Optimization - -Decision Optimization supports the following relational and nonrelational data sources on . watsonx.ai. - - - -" -E990E009903E315FA6752E7E82C2634AF4A425B9,E990E009903E315FA6752E7E82C2634AF4A425B9," Ways to use Decision Optimization - -To build Decision Optimization models, you can create Python notebooks with DOcplex, a native Python API for Decision Optimization, or use the Decision Optimization experiment UI that has more benefits and features. -" -8892A757ECB2C4A02806A7B262712FF2E30CE044,8892A757ECB2C4A02806A7B262712FF2E30CE044," OPL models - -You can build OPL models in the Decision Optimization experiment UI in watsonx.ai. - -In this section: - - - -* [Inputs and Outputs](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/OPLmodels.html?context=cdpaas&locale=entopic_oplmodels__section_oplIO) -* [Engine settings](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/OPLmodels.html?context=cdpaas&locale=entopic_oplmodels__engsettings) - - - -To create an OPL model in the experiment UI, select in the model selection window. You can also import OPL models from a file or import a scenario .zip file that contains the OPL model and the data. If you import from a file or scenario .zip file, the data must be in .csv format. However, you can import other file formats that you have as project assets into the experiment UI. You can also import data sets including connected data into your project from the model builder in the [Prepare data view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__section_preparedata). - -For more information about the OPL language and engine parameters, see: - - - -* [OPL language reference manual](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.ide.help/OPL_Studio/opllangref/topics/opl_langref_modeling_language.html) -* [OPL Keywords](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.ide.help/OPL_Studio/opllang_quickref/topics/opl_keywords_top.html) -" -8E56F0EFD08FF4A97E439EA3B8DE2B7AF1A302C9_0,8E56F0EFD08FF4A97E439EA3B8DE2B7AF1A302C9," Visualization view - -With the Decision Optimization experiment Visualization view, you can configure the graphical representation of input data and solutions for one or several scenarios. - -Quick links: - - - -* [Visualization view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=entopic_visualization__section-dashboard) -* [Table search and filtering](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=entopic_visualization__section_tablefilter) -* [Visualization widgets syntax](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=entopic_visualization__section_widgetssyntax) -* [Visualization Editor](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=entopic_visualization__viseditor) -* [Visualization pages](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=entopic_visualization__vispages) - - - -The Visualization view is common to all scenarios in a Decision Optimization experiment. - -For example, the following image shows the default bar chart that appears in the solution tab for the example that is used in the tutorial [Solving and analyzing a model: the diet problem](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/solveModel.htmltask_mtg_n3q_m1b). - -![Visualization panel showing solution in table and bar chart](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/Cloudvisualization.jpg) - -" -8E56F0EFD08FF4A97E439EA3B8DE2B7AF1A302C9_1,8E56F0EFD08FF4A97E439EA3B8DE2B7AF1A302C9,"The Visualization view helps you compare different scenarios to validate models and business decisions. - -For example, to show the two scenarios solved in this diet example tutorial, you can add another bar chart as follows: - - - -1. Click the chart widget and configure it by clicking the pencil icon. -2. In the Chart widget editor, select Add scenario and choose scenario 1 (assuming that your current scenario is scenario 2) so that you have both scenario 1 and scenario 2 listed. -3. In the Table field, select the Solution data option and select solution from the drop-down list. -4. In the bar chart pane, select Descending for the Category order, Y-axis for the Bar type and click OK to close the Chart widget editor. A second bar chart is then displayed showing you the solution results for scenario 2. -5. Re-edit the chart and select @Scenario in the Split by field of the Bar chart pane. You then obtain both scenarios in the same bar chart: - - - -![Chart with two scenarios displayed in one chart.](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/ChartVisu2Scen.png). - -You can select many different types of charts in the Chart widget editor. - -Alternatively using the Vega Chart widget, you can similarly choose Solution data>solution to display the same data, select value and name in both the x and y fields in the Chart section of the Vega Chart widget editor. Then, in the Mark section, select @Scenario for the color field. This selection gives you the following bar chart with the two scenarios on the same y-axis, distinguished by different colors. - -![Vega chart showing 2 scenarios](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/VegaChart2Scen.jpg). - -If you re-edit the chart and select @Scenario for the column facet, you obtain the two scenarios in separate charts side-by-side as follows: - -![Vega charts showing 2 scenarios side by side.](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/VegaChart2Scen2.jpg) - -" -8E56F0EFD08FF4A97E439EA3B8DE2B7AF1A302C9_2,8E56F0EFD08FF4A97E439EA3B8DE2B7AF1A302C9,"You can use many different types of charts that are available in the Mark field of the Vega Chart widget editor. - -You can also select the JSON tab in all the widget editors and configure your charts by using the JSON code. A more advanced example of JSON code is provided in the [Vega Chart widget specifications](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=entopic_visualization__section_hdc_5mm_33b) section. - -The following widgets are available: - - - -* [Notes widget](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=entopic_visualization__section_edc_5mm_33b) - -Add simple text notes to the Visualization view. -* [Table widget](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=entopic_visualization__section_fdc_5mm_33b) - -Present input data and solution in tables, with a search and filtering feature. See [Table search and filtering](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=entopic_visualization__section_tablefilter). -* [Charts widgets](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=entopic_visualization__section_alh_lfn_l2b) - -Present input data and solution in charts. -" -8E56F0EFD08FF4A97E439EA3B8DE2B7AF1A302C9_3,8E56F0EFD08FF4A97E439EA3B8DE2B7AF1A302C9,"* [Gantt chart widget](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=entopic_visualization__section_idc_5mm_33b) - -Display the solution to a scheduling problem (or any other type of suitable problem) in a Gantt chart. - -This widget is used automatically for scheduling problems that are modeled with the Modeling Assistant. You can edit this Gantt chart or create and configure new Gantt charts for any problem even for those models that don't use the Modeling Assistant. -" -33923FE20855D3EA3850294C0FB447EC3F1B7BDF_0,33923FE20855D3EA3850294C0FB447EC3F1B7BDF," Decision Optimization experiments - -If you use the Decision Optimization experiment UI, you can take advantage of its many features in this user-friendly environment. For example, you can create and solve models, produce reports, compare scenarios and save models ready for deployment with Watson Machine Learning. - -The Decision Optimization experiment UI facilitates workflow. Here you can: - - - -* Select and edit the data relevant for your optimization problem, see [Prepare data view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__section_preparedata) -* Create, import, edit and solve Python models in the Decision Optimization experiment UI, see [Decision Optimization notebook tutorial](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/solveModel.htmltask_mtg_n3q_m1b) -* Create, import, edit and solve models expressed in natural language with the Modeling Assistant, see [Modeling Assistant tutorial](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.htmlcogusercase) -* Create, import, edit and solve OPL models in the Decision Optimization experiment UI, see [OPL models](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/OPLmodels.htmltopic_oplmodels) -* Generate a notebook from your model, work with it as a notebook then reload it as a model, see [Generating a notebook from a scenario](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__generateNB) and [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__section_overview) -" -33923FE20855D3EA3850294C0FB447EC3F1B7BDF_1,33923FE20855D3EA3850294C0FB447EC3F1B7BDF,"* Visualize data and solutions, see [Explore solution view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__solution) -* Investigate and compare solutions for multiple scenarios, see [Scenario pane](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__scenariopanel) and [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__section_overview) -* Easily create and share reports with tables, charts and notes using widgets provided in the [Visualization Editor](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.htmltopic_visualization) -* Save models that are ready for deployment in Watson Machine Learning, see [Scenario pane](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__scenariopanel) and [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__section_overview) - - - -See the [Decision Optimization experiment UI comparison table](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DOintro.htmlDOIntro__comparisontable) for a list of features available with and without the Decision Optimization experiment UI. - -See [Views and scenarios](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface) for a description of the user interface and scenario management. -" -497007D0D0ABAC3202BBF912A15BFC389066EBDA_0,497007D0D0ABAC3202BBF912A15BFC389066EBDA," Configuring environments and adding Python extensions - -You can change your default environment for Python and CPLEX in the experiment Overview. - -" -497007D0D0ABAC3202BBF912A15BFC389066EBDA_1,497007D0D0ABAC3202BBF912A15BFC389066EBDA," Procedure - -To change the default environment for DOcplex and Modeling Assistant models: - - - -1. Open the Overview, click ![information icon](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/infoicon.jpg) to open the Information pane, and select the Environments tab. - -![Environment tab of information pane](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/overviewinfoenvirons.png) -2. Expand the environment section according to your model type. For Python and Modeling Assistant models, expand Python environment. You can see the default Python environment (if one exists). To change the default environment for OPL, CPLEX, or CPO models, expand the appropriate environment section according to your model type and follow this same procedure. -3. Expand the name of your environment, and select a different Python environment. -4. Optional: To create a new environment: - - - -1. Select New environment for Python. A new window opens for you to define your new environment. ![New environment window showing empty fields](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/overviewinfonewenv1.png) -2. Enter a name, and select a CPLEX version, hardware specification, copies (number of nodes), Python version and (optionally) you can set Associate a Python extension to On to include any Python libraries that you want to add. -3. Click New Python extension. -4. Enter a name for your extension in the new Create a Python extension window that opens, and click Create. -5. In the new Configure Python extension window that opens, you can set YAML code to On and enter or edit the provided YAML code.For example, use the provided template to add the custom libraries: - - Modify the following content to add a software customization to an environment. - To remove an existing customization, delete the entire content and click Apply. - - Add conda channels on a new line after defaults, indented by two spaces and a hyphen. -channels: -- defaults - - To add packages through conda or pip, remove the comment on the following line. - dependencies: - -" -497007D0D0ABAC3202BBF912A15BFC389066EBDA_2,497007D0D0ABAC3202BBF912A15BFC389066EBDA," Add conda packages here, indented by two spaces and a hyphen. - Remove the comment on the following line and replace sample package name with your package name: - - a_conda_package=1.0 - - Add pip packages here, indented by four spaces and a hyphen. - Remove the comments on the following lines and replace sample package name with your package name. - - pip: - - a_pip_package==1.0 - -You can also click Browse to add any Python libraries. - -For example, this image shows a dynamic programming Python library that is imported and YAML code set to On.![Configure Python extension window showing YAML code and a Dynamic Programming library included](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/PythonExtension.png) - -Click Done. -6. Click Create in the New environment window. - - - -Your chosen (or newly created) environment appears as ticked in the Python environments drop-down list in the Environments tab. The tick indicates that this is the default Python environment for all scenarios in your experiment. -5. Select Manage experiment environments to see a detailed list of all existing environments for your experiment in the Environments tab.![Manage experiment environment with two environments and drop-down menu.](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/manageenvextn.png) - -You can use the options provided by clicking the three vertical dots next to an environment to Edit, Set as default, Update in a deployment space or Delete the environment. You can also create a New environment from the Manage experiment environments window, but creating a new environment from this window does not make it the default unless you explicitly set is as the default. - -Updating your environment for Python or CPLEX versions: Python versions are regularly updated. If however you have explicitly specified an older Python version in your model, you must update this version specification or your models will not work. You can either create a new Python environment, as described earlier, or edit one from Manage experiment environments. This is also useful if you want to select a different version of CPLEX for your default environment. -" -497007D0D0ABAC3202BBF912A15BFC389066EBDA_3,497007D0D0ABAC3202BBF912A15BFC389066EBDA,"6. Click the Python extensions tab. - -![Python extensions tab showing created extension](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/manageenvpyextn.png) - -Here you can view your Python extensions and see which environment it is used in. You can also create a New Python extension or use the options to Edit, Download, and Delete existing ones. If you edit a Python extension that is used by an experiment environment, the environment will be re-created. - -You can also view your Python environments in your deployment space assets and any Python extensions you have added will appear in the software specification. - - - - - -" -497007D0D0ABAC3202BBF912A15BFC389066EBDA_4,497007D0D0ABAC3202BBF912A15BFC389066EBDA," Selecting a different run environment for a particular scenario - -You can choose different environments for individual scenarios on the Environment tab of the Run configuration pane. - -" -497007D0D0ABAC3202BBF912A15BFC389066EBDA_5,497007D0D0ABAC3202BBF912A15BFC389066EBDA," Procedure - - - -1. Open the Scenario pane and select your scenario in the Build model view. -2. Click the Configure run icon next to the Run button to open the Run configuration pane and select the Environment tab. -" -5788D38721AEAE446CFAD7D9288B6BAB33FA1EF9,5788D38721AEAE446CFAD7D9288B6BAB33FA1EF9," Sample models and notebooks for Decision Optimization - -Several examples are presented in this documentation as tutorials. You can also use many other examples that are provided in the Decision Optimization GitHub, and in the Samples. - -Quick links: - - - -* [Examples used in this documentation](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/docExamples.html?context=cdpaas&locale=enExamples__docexamples) -* [Decision Optimization experiment samples (Modeling Assistant, Python, OPL)](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/docExamples.html?context=cdpaas&locale=enExamples__section_modelbuildersamples) -" -167D5677958594BA275E34B8748F7E8091782560_0,167D5677958594BA275E34B8748F7E8091782560," Decision Optimization experiment views and scenarios - -The Decision Optimization experiment UI has different views in which you can select data, create models, solve different scenarios, and visualize the results. - -Quick links to sections: - - - -* [ Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=enModelBuilderInterface__section_overview) -* [Hardware and software configuration](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=enModelBuilderInterface__section_environment) -* [Prepare data view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=enModelBuilderInterface__section_preparedata) -* [Build model view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=enModelBuilderInterface__ModelView) -* [Multiple model files](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=enModelBuilderInterface__section_g21_p5n_plb) -* [Run models](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=enModelBuilderInterface__runmodel) -* [Run configuration](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=enModelBuilderInterface__section_runconfig) -" -167D5677958594BA275E34B8748F7E8091782560_1,167D5677958594BA275E34B8748F7E8091782560,"* [Run environment tab](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=enModelBuilderInterface__envtabConfigRun) -* [Explore solution view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=enModelBuilderInterface__solution) -* [Scenario pane](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=enModelBuilderInterface__scenariopanel) -* [Generating notebooks from scenarios](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=enModelBuilderInterface__generateNB) -* [Importing scenarios](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=enModelBuilderInterface__p_Importingscenarios) -* [Exporting scenarios](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=enModelBuilderInterface__p_Exportingscenarios) - - - -Note: To create and run Optimization models, you must have both a Machine Learning service added to your project and a deployment space that is associated with your experiment: - - - -" -167D5677958594BA275E34B8748F7E8091782560_2,167D5677958594BA275E34B8748F7E8091782560,"1. Add a [Machine Learning service](https://cloud.ibm.com/catalog/services/machine-learning) to your project. You can either add this service at the project level (see [Creating a Watson Machine Learning Service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-instance.html)), or you can add it when you first create a new Decision Optimization experiment: click Add a Machine Learning service, select, or create a New service, click Associate, then close the window. -2. Associate a [deployment space](https://dataplatform.cloud.ibm.com/ml-runtime/spaces) with your Decision Optimization experiment (see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.htmlcreate)). A deployment space can be created or selected when you first create a new Decision Optimization experiment: click Create a deployment space, enter a name for your deployment space, and click Create. For existing models, you can also create, or select a space in the [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__section_overview) information pane. - - - -When you add a Decision Optimization experiment as an asset in your project, you open the Decision Optimization experiment UI. - -With the Decision Optimization experiment UI, you can create and solve prescriptive optimization models that focus on the specific business problem that you want to solve. To edit and solve models, you must have Admin or Editor roles in the project. Viewers of shared projects can only see experiments, but cannot modify or run them. - -You can create a Decision Optimization model from scratch by entering a name or by choosing a .zip file, and then selecting Create. Scenario 1 opens. - -" -167D5677958594BA275E34B8748F7E8091782560_3,167D5677958594BA275E34B8748F7E8091782560,"With the Decision Optimization experiment UI, you can create several scenarios, with different data sets and optimization models. Thus, you, can create and compare different scenarios and see what impact changes can have on a problem. - -For a step-by-step guide to build, solve and deploy a Decision Optimization model, by using the user interface, see the [Quick start tutorial with video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html). - -For each of the following views, you can organize your screen as full-screen or as a split-screen. To do so, hover over one of the view tabs ( Prepare data, Build model, Explore solution) for a second or two. A menu then appears where you can select Full Screen, Left or Right. For example, if you choose Left for the Prepare data view, and then choose Right for the Explore solution view, you can see both these views on the same screen. -" -1C20BD9F24D670DD18B6BC28E020FBB23C742682_0,1C20BD9F24D670DD18B6BC28E020FBB23C742682," Creating advanced custom constraints with Python - -This Decision Optimization Modeling Assistant example shows you how to create advanced custom constraints that use Python. - -" -1C20BD9F24D670DD18B6BC28E020FBB23C742682_1,1C20BD9F24D670DD18B6BC28E020FBB23C742682," Procedure - -To create a new advanced custom constraint: - - - -1. In the Build model view of your open Modeling Assistant model, look at the Suggestions pane. If you have Display by category selected, expand the Others section to locate New custom constraint, and click it to add it to your model. Alternatively, without categories displayed, you can enter, for example, custom in the search field to find the same suggestion and click it to add it to your model.A new custom constraint is added to your model. - -![New custom constraint in model, with elements highlighted to be completed by user.](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/newcustomconstraint.jpg) -2. Click Enter your constraint. Use [brackets] for data, concepts, variables, or parameters and enter the constraint you want to specify. For example, type No [employees] has [onCallDuties] for more than [2] consecutive days and press enter.The specification is displayed with default parameters (parameter1, parameter2, parameter3) for you to customize. These parameters will be passed to the Python function that implements this custom rule. - -![Custom constraint expanded to show default parameters and function name.](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/customconstraintFillParameters.jpg) -3. Edit the default parameters in the specification to give them more meaningful names. For example, change the parameters to employees, on_call_duties, and limit and click enter. -4. Click function name and enter a name for the function. For example, type limitConsecutiveAssignments and click enter.Your function name is added and an Edit Python button appears. - -![Custom rule showing customized parameters and Edit Python button.](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/customconstraintParameters.jpg) -" -1C20BD9F24D670DD18B6BC28E020FBB23C742682_2,1C20BD9F24D670DD18B6BC28E020FBB23C742682,"5. Click the Edit Python button.A new window opens showing you Python code that you can edit to implement your custom rule. You can see your customized parameters in the code as follows: - -![Python code showing block to be customized](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/CustomRulePythoncode.jpg) - -Notice that the code is documented with corresponding data frames and table column names as you have defined in the custom rule. The limit is not documented as this is a numerical value. -6. Optional: You can edit the Python code directly in this window, but you might find it useful to edit and debug your code in a notebook before using it here. In this case, close this window for now and in the Scenario pane, expand the three vertical dots and select Generate a notebook for this scenario that contains the custom rule. Enter a name for this notebook.The notebook is created in your project assets ready for you to edit and debug. Once you have edited, run and debugged it you can copy the code for your custom function back into this Edit Python window in the Modeling Assistant. -7. Edit the Python code in the Modeling Assistant custom rule Edit Python window. For example, you can define the rule for consecutive days in Python as follows: - -def limitConsecutiveAssignments(self, mdl, employees, on_call_duties, limit): -global helper_add_labeled_cplex_constraint, helper_get_index_names_for_type, helper_get_column_name_for_property -print('Adding constraints for the custom rule') -for employee, duties in employees.associated(on_call_duties): -duties_day_idx = duties.join(Day) Retrieve Day index from Day label -for d in Day['index']: -end = d + limit + 1 One must enforce that there are no occurence of (limit + 1) working consecutive days -" -1C20BD9F24D670DD18B6BC28E020FBB23C742682_3,1C20BD9F24D670DD18B6BC28E020FBB23C742682,"duties_in_win = duties_day_idx[((duties_day_idx'index'] >= d) & (duties_day_idx'index'] <= end)) | (duties_day_idx'index'] <= end - 7)] -" -C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0_0,C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0," Adding multi-concept constraints and custom decisions: shift assignment - -This Decision Optimization Modeling Assistant example shows you how to use multi-concept iterations, the associated keyword in constraints, how to define your own custom decisions, and define logical constraints. For illustration, a resource assignment problem, ShiftAssignment, is used and its completed model with data is provided in the DO-samples. - -" -C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0_1,C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0," Procedure - -To download and open the sample: - - - -1. Download the ShiftAssignment.zip file from the Model_Builder subfolder in the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples). Select the relevant product and version subfolder. -2. Open your project or create an empty project. -3. On the Manage tab of your project, select the Services and integrations section and click Associate service. Then select an existing Machine Learning service instance (or create a new one ) and click Associate. When the service is associated, a success message is displayed, and you can then close the Associate service window. -4. Select the Assets tab. -5. Select New asset > Solve optimization problems in the Work with models section. -6. Click Local file in the Solve optimization problems window that opens. -7. Browse locally to find and choose the ShiftAssignment.zip archive that you downloaded. Click Open. Alternatively use drag and drop. -8. Associate a Machine Learning service instance with your project and reload the page. -9. If you haven't already associated a Machine Learning service with your project, you must first select Add a Machine Learning service to select or create one before you choose a deployment space for your experiment. -10. Click Create.A Decision Optimization model is created with the same name as the sample. -11. Open the scenario pane and select the AssignmentWithOnCallDuties scenario. - - - - - -" -C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0_2,C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0," Using multi-concept iteration - -" -C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0_3,C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0," Procedure - -To use multi-concept iteration, follow these steps. - - - -1. Click Build model in the sidebar to view your model formulation.The model formulation shows the intent as being to assign employees to shifts, with its objectives and constraints. -2. Expand the constraint For each Employee-Day combination , number of associated Employee-Shift assignments is less than or equal to 1. - - - - - - - -" -C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0_4,C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0," Defining custom decisions - -" -C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0_5,C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0," Procedure - -To define custom decisions, follow these steps. - - - -1. Click Build model to see the model formulation of the AssignmentWithOnCallDuties Scenario.![Build model view showing Shift Assignment formulation](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/CloudStaffAssignRunModel.png) - -The custom decision OnCallDuties is used in the second objective. This objective ensures that the number of on-call duties are balanced over Employees. - -The constraint ![On call duty constraint](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/StaffAssignOncallDuty.jpg) ensures that the on-call duty requirements that are listed in the Day table are satisfied. - -The following steps show you how this custom decision OnCallDuties was defined. -2. Open the Settings pane and notice that the Visualize and edit decisions is set to true (or set it to true if it is set to the default false). - -This setting adds a Decisions tab to your Add to model window. - -![Decisions tab of the Add to Model pane showing two intents](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/DecisionsTab.jpg) - -Here you can see OnCallDuty is specified as an assignment decision (to assign employees to on-call duties). Its two dimensions are defined with reference to the data tables Day and Employee. This means that your model will also assign on-call duties to employees. The Employee-Shift assignment decision is specified from the original intent. -3. Optional: Enter your own text to describe the OnCallDuty in the [to be documented] field. -" -C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0_6,C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0,"4. Optional: To create your own decision in the Decisions tab, click the enter name, type in a name and click enter. A new decision (intent) is created with that name with some highlighted fields to be completed by using the drop-down menus. If you, for example, select assignment as the decision type, two dimensions are created. As assignment involves assigning at least one thing to another, at least two dimensions must be defined. Use select a table fields to define the dimensions. - - - - - - - -" -C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0_7,C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0," Using logical constraints - -" -C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0_8,C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0," Procedure - -To use logical constraints: - - - -" -0EFC1AA12637C84918CEF9FA5DE5DA424822330C,0EFC1AA12637C84918CEF9FA5DE5DA424822330C," Formulating and running a model: house construction scheduling - -This tutorial shows you how to use the Modeling Assistant to define, formulate and run a model for a house construction scheduling problem. The completed model with data is also provided in the DO-samples, see [Importing Model Builder samples](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/docExamples.htmlExamples__section_modelbuildersamples). - -In this section: - - - -* [Modeling Assistant House construction scheduling tutorial](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.html?context=cdpaas&locale=encogusercase__section_The_problem) -" -312E91752782553D39C335D0DAAF189025739BB4,312E91752782553D39C335D0DAAF189025739BB4," Modeling Assistant models - -You can model and solve Decision Optimization problems using the Modeling Assistant (which enables you to formulate models in natural language). This requires little to no knowledge of Operational Research (OR) and does not require you to write Python code. The Modeling Assistant is only available in English and is not globalized. - -The basic workflow to create a model with the Modeling Assistant and examine it under different scenarios is as follows: - - - -1. Create a project. -2. Add a Decision Optimization experiment (a scenario is created by default in the experiment UI). -3. Add and import your data into the scenario. -4. Create a natural language model in the scenario, by first selecting your decision domain and then using the Modeling Assistant to guide you. -5. Run the model to solve it and explore the solution. -6. Create visualizations of solution and data. -7. Copy the scenario and edit the model and/or the data. -8. Solve the new scenario to see the impact of these changes. - - - -![Workflow showing the previously mentioned steps](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/new_overviewcognitive-3.jpg) - -This is demonstrated with a simple [planning and scheduling example ](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.htmlcogusercase). - -For more information about deployment see . -" -2746F2E53D41F5810D92D843AF8C0AB2B36A0D47,2746F2E53D41F5810D92D843AF8C0AB2B36A0D47," Selecting a Decision domain in the Modeling Assistant - -There are different decision domains currently available in the Modeling Assistant and you can be guided to choose the right domain for your problem. - -Once you have added and imported your data into your model, the Modeling Assistant helps you to formulate your optimization model by offering you suggestions in natural language that you can edit. In order to make intelligent suggestions using your data, and to ensure that the proposed model formulation is well suited to your problem, you are asked to start by selecting a decision domain for your model. - -If you need a decision domain that is not currently supported by the Modeling Assistant, you can still formulate your model as a Python notebook or as an OPL model in the experiment UI editor. -" -F37BD72C28F0DAC8D9478ECEABA4F077ABCDE0C9_0,F37BD72C28F0DAC8D9478ECEABA4F077ABCDE0C9," Create new scenario - -To solve with different versions of your model or data you can create new scenarios in the Decision Optimization experiment UI. - -" -F37BD72C28F0DAC8D9478ECEABA4F077ABCDE0C9_1,F37BD72C28F0DAC8D9478ECEABA4F077ABCDE0C9," Procedure - -To create a new scenario: - - - -1. Click the Open scenario pane icon ![Open scenario pane button](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/CPDscenariomanage.jpg) to open the Scenario panel. -2. Use the Create Scenario drop-down menu to create a new scenario from the current one. -3. Add a name for the duplicate scenario and click Create. -4. Working in your new scenario, in the Prepare data view, open the diet_food data table in full mode. -5. Locate the entry for Hotdog at row 9, and set the qmax value to 0 to exclude hot dog from possible solutions. -" -056E37762231E9E32F0F443987C32ACF7BF1AED4,056E37762231E9E32F0F443987C32ACF7BF1AED4," Working with multiple scenarios - -You can generate multiple scenarios to test your model against a wide range of data and understand how robust the model is. - -This example steps you through the process to generate multiple scenarios with a model. This makes it possible to test the performance of the model against multiple randomly generated data sets. It's important in practice to check the robustness of a model against a wide range of data. This helps ensure that the model performs well in potentially stochastic real-world conditions. - -The example is the StaffPlanning model in the DO-samples. - -The example is structured as follows: - - - -* The model StaffPlanning contains a default scenario based on two default data sets, along with five additional scenarios based on randomized data sets. -* The Python notebookCopyAndSolveScenarios contains the random generator to create the new scenarios in the StaffPlanning model. - - - -For general information about scenario management and configuration, see [Scenario pane](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__scenariopanel) and [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__section_overview). - -For information about writing methods and classes for scenarios, see the [ Decision Optimization Client Python API documentation](https://ibmdecisionoptimization.github.io/decision-optimization-client-doc/). -" -3BEB81A5A5953CD570FA673B2496F8AF98725438_0,3BEB81A5A5953CD570FA673B2496F8AF98725438," Generating multiple scenarios - -This tutorial shows you how to generate multiple scenarios from a notebook using randomized data. Generating multiple scenarios lets you test a model by exposing it to a wide range of data. - -" -3BEB81A5A5953CD570FA673B2496F8AF98725438_1,3BEB81A5A5953CD570FA673B2496F8AF98725438," Procedure - -To create and solve a scenario using a sample: - - - -1. Download and extract all the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples) on to your machine. You can also download just the StaffPlanning.zip file from the Model_Builder subfolder for your product and version, but in this case do not extract it. -2. Open your project or create an empty project. -3. On the Manage tab of your project, select the Services and integrations section and click Associate service. Then select an existing Machine Learning service instance (or create a new one ) and click Associate. When the service is associated, a success message is displayed, and you can then close the Associate service window. -4. Select the Assets tab. -5. Select New asset > Solve optimization problems in the Work with models section. -6. Click Local file in the Solve optimization problems window that opens. -7. Browse to choose the StaffPlanning.zip file in the Model_Builder folder. Select the relevant product and version subfolder in your downloaded DO-samples. -8. If you haven't already associated a Machine Learning service with your project, you must first select Add a Machine Learning service to select or create one before you choose a deployment space for your experiment. -" -DECCA51BACC7BE33F484D36177B24C4BD0FE4CFD,DECCA51BACC7BE33F484D36177B24C4BD0FE4CFD," Input and output data - -You can access the input and output data you defined in the experiment UI by using the following dictionaries. - -The data that you imported in the Prepare data view in the experiment UI is accessible from the input dictionary. You must define each table by using the syntax inputs['tablename']. For example, here food is an entity that is defined from the table called diet_food: - -food = inputs['diet_food'] - -Similarly, to show tables in the Explore solution view of the experiment UI you must specify them using the syntax outputs['tablename']. For example, - -outputs['solution'] = solution_df - -defines an output table that is called solution. The entity solution_df in the Python model defines this table. - -You can find this Diet example in the Model_Builder folder of the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples). To import and run (solve) it in the experiment UI, see [Solving and analyzing a model: the diet problem](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/solveModel.htmltask_mtg_n3q_m1b). -" -726175290D457B10A02C27F08ECA1F6546E64680,726175290D457B10A02C27F08ECA1F6546E64680," Python DOcplex models - -You can solve Python DOcplex models in a Decision Optimization experiment. - -The Decision Optimization environment currently supports Python 3.10. The default version is Python 3.10. You can modify this default version on the Environment tab of the [Run configuration pane](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__section_runconfig) or from the [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__section_overview) information pane. - -The basic workflow to create a Python DOcplex model in Decision Optimization, and examine it under different scenarios, is as follows: - - - -1. Create a project. -2. Add data to the project. -3. Add a Decision Optimization experiment (a scenario is created by default in the experiment UI). -4. Select and import your data into the scenario. -5. Create or import your Python model. -6. Run the model to solve it and explore the solution. -7. Copy the scenario and edit the data in the context of the new scenario. -8. Solve the new scenario to see the impact of the changes to data. - - - -![Workflow showing previously mentioned steps](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/new_overviewcognitive-3.jpg) -" -2E1F6D5703CE75AF284903C20E5DBDFA1AE706B4_0,2E1F6D5703CE75AF284903C20E5DBDFA1AE706B4," Solving and analyzing a model: the diet problem - -This example shows you how to create and solve a Python-based model by using a sample. - -" -2E1F6D5703CE75AF284903C20E5DBDFA1AE706B4_1,2E1F6D5703CE75AF284903C20E5DBDFA1AE706B4," Procedure - -To create and solve a Python-based model by using a sample: - - - -1. Download and extract all the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples) on to your computer. You can also download just the diet.zip file from the Model_Builder subfolder for your product and version, but in this case, do not extract it. -2. Open your project or create an empty project. -3. On the Manage tab of your project, select the Services and integrations section and click Associate service. Then select an existing Machine Learning service instance (or create a new one ) and click Associate. When the service is associated, a success message is displayed, and you can then close the Associate service window. -4. Select the Assets tab. -5. Select New asset > Solve optimization problems in the Work with models section. -6. Click Local file in the Solve optimization problems window that opens. -7. Browse to find the Model_Builder folder in your downloaded DO-samples. Select the relevant product and version subfolder. Choose the Diet.zip file and click Open. Alternatively use drag and drop. -8. If you haven't already associated a Machine Learning service with your project, you must first select Add a Machine Learning service to select or create one before you choose a deployment space for your experiment. -9. Click New deployment space, enter a name, and click Create (or select an existing space from the drop-down menu). -10. Click Create.A Decision Optimization model is created with the same name as the sample. -11. In the Prepare data view, you can see the data assets imported.These tables represent the min and max values for nutrients in the diet (diet_nutrients), the nutrients in different foods (diet_food_nutrients), and the price and quantity of specific foods (diet_food). - -![Tables of input data in Prepare data view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/Cloudpreparedata2.png) -12. Click Build model in the sidebar to view your model.The Python model minimizes the cost of the food in the diet while satisfying minimum nutrient and calorie requirements. - -" -2E1F6D5703CE75AF284903C20E5DBDFA1AE706B4_2,2E1F6D5703CE75AF284903C20E5DBDFA1AE706B4,"![Python model for diet problem displayed in Run model view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/newrunmodel3.png) - -" -D51AD51E5407BF4EFAE5C97FE7E031DB56CF8733,D51AD51E5407BF4EFAE5C97FE7E031DB56CF8733," Run parameters and Environment - -You can select various run parameters for the optimization solve in the Decision Optimization experiment UI. - -Quick links to sections: - - - -* [CPLEX runtime version](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_RunParameters/runparams.html?context=cdpaas&locale=enRunConfig__cplexruntime) -* [Python version](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_RunParameters/runparams.html?context=cdpaas&locale=enRunConfig__pyversion) -" -C6EE4CACFC1E29BAFBB8ED5D98521EA68388D0CB,C6EE4CACFC1E29BAFBB8ED5D98521EA68388D0CB," Decision Optimization - -IBM® Decision Optimization gives you access to IBM's industry-leading solution engines for mathematical programming and constraint programming. You can build Decision Optimization models either with notebooks or by using the powerful Decision Optimization experiment UI (Beta version). Here you can import, or create and edit models in Python, in OPL or with natural language expressions provided by the intelligent Modeling Assistant (Beta version). You can also deploy models with Watson Machine Learning. - -Data format -: Tabular: .csv, .xls, .json files. See [Prepare data view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__section_preparedata) - -Data from [Connected data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html) - -For deployment see [Model input and output data file formats](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIOFileFormats.html) - -" -E45F37BDDB38D6656992642FBEA2707FE34E942A,E45F37BDDB38D6656992642FBEA2707FE34E942A," Delegating the Decision Optimization solve to run on Watson Machine Learning from Java or .NET CPLEX or CPO models - -You can delegate the Decision Optimization solve to run on Watson Machine Learning from your Java or .NET (CPLEX or CPO) models. - -Delegating the solve is only useful if you are building and generating your models locally. You cannot deploy models and run jobs Watson Machine Learning with this method. For full use of Java models on Watson Machine Learning use the Java™ worker Important: To deploy and test models on Watson Machine Learning, use the Java worker. For more information about deploying Java models, see the [Java worker GitHub](https://github.com/IBMDecisionOptimization/cplex-java-worker/blob/master/README.md).For the library and documentation for: - - - -" -5BC48AB9A35E2E8BAEA5204C4406835154E2B836,5BC48AB9A35E2E8BAEA5204C4406835154E2B836," Deployment steps - -With IBM Watson Machine Learning you can deploy your Decision Optimization prescriptive model and associated common data once and then submit job requests to this deployment with only the related transactional data. This deployment can be achieved by using the Watson Machine Learning REST API or by using the Watson Machine Learning Python client. - -See [REST API example](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployModelRest.htmltask_deploymodelREST) for a full code example. See [Python client examples](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployPythonClient.htmltopic_wmlpythonclient) for a link to a Python notebook available from the Samples. -" -134EB5D79038B55A3A6AC019016A21EC2B6A1917,134EB5D79038B55A3A6AC019016A21EC2B6A1917," Deploying Java models for Decision Optimization - -You can deploy Decision Optimization Java models in Watson Machine Learning by using the Watson Machine Learning REST API. - -With the Java worker API, you can create optimization models with OPL, CPLEX, and CP Optimizer Java APIs. Therefore, you can easily create your models locally, package them and deploy them on Watson Machine Learning by using the boilerplate that is provided in the public [Java worker GitHub](https://github.com/IBMDecisionOptimization/cplex-java-worker/blob/master/README.md). - -The Decision Optimization[Java worker GitHub](https://github.com/IBMDecisionOptimization/cplex-java-worker/blob/master/README.md) contains a boilerplate with everything that you need to run, deploy, and verify your Java models in Watson Machine Learning, including an example. You can use the code in this repository to package your Decision Optimization Java model in a .jar file that can be used as a Watson Machine Learning model. For more information about Java worker parameters, see the [Java documentation](https://github.com/IBMDecisionOptimization/do-maven-repo/blob/master/com/ibm/analytics/optim/api_java_client/1.0.0/api_java_client-1.0.0-javadoc.jar). - -You can build your Decision Optimization models in Java or you can use Java worker to package CPLEX, CPO, and OPL models. - -For more information about these models, see the following reference manuals. - - - -* [Java CPLEX reference documentation](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.cplex.help/refjavacplex/html/overview-summary.html) -" -B92F42609B54B82BFE38A69B781052E876258C2C_0,B92F42609B54B82BFE38A69B781052E876258C2C," REST API example - -You can deploy a Decision Optimization model, create and monitor jobs and get solutions using the Watson Machine Learning REST API. - -" -B92F42609B54B82BFE38A69B781052E876258C2C_1,B92F42609B54B82BFE38A69B781052E876258C2C," Procedure - - - -1. Generate an IAM token using your [IBM Cloud API key](https://cloud.ibm.com/iam/apikeys) as follows. - -curl ""https://iam.bluemix.net/identity/token"" --d ""apikey=YOUR_API_KEY_HERE&grant_type=urn%3Aibm%3Aparams%3Aoauth%3Agrant-type%3Aapikey"" --H ""Content-Type: application/x-www-form-urlencoded"" --H ""Authorization: Basic Yng6Yng="" - -Output example: - -{ -""access_token"": "" obtained IAM token "", -""refresh_token"": """", -""token_type"": ""Bearer"", -""expires_in"": 3600, -""expiration"": 1554117649, -""scope"": ""ibm openid"" -} - -Use the obtained token (access_token value) prepended by the word Bearer in the Authorization header, and the Machine Learning service GUID in the ML-Instance-ID header, in all API calls. -2. Optional: If you have not obtained your SPACE-ID from the user interface as described previously, you can create a space using the REST API as follows. Use the previously obtained token prepended by the word bearer in the Authorization header in all API calls. - -curl --location --request POST -""https://api.dataplatform.cloud.ibm.com/v2/spaces"" --H ""Authorization: Bearer TOKEN-HERE"" --H ""ML-Instance-ID: MACHINE-LEARNING-SERVICE-GUID-HERE"" --H ""Content-Type: application/json"" ---data-raw ""{ -""name"": ""SPACE-NAME-HERE"", -""description"": ""optional description here"", -""storage"": { -""resource_crn"": ""COS-CRN-ID-HERE"" -}, -""compute"": [{ -""name"": ""MACHINE-LEARNING-SERVICE-NAME-HERE"", -" -B92F42609B54B82BFE38A69B781052E876258C2C_2,B92F42609B54B82BFE38A69B781052E876258C2C,"""crn"": ""MACHINE-LEARNING-SERVICE-CRN-ID-HERE"" -}] -}"" - -For Windows users, put the --data-raw command on one line and replace all "" with "" inside this command as follows: - -curl --location --request POST ^ -""https://api.dataplatform.cloud.ibm.com/v2/spaces"" ^ --H ""Authorization: Bearer TOKEN-HERE"" ^ --H ""ML-Instance-ID: MACHINE-LEARNING-SERVICE-GUID-HERE"" ^ --H ""Content-Type: application/json"" ^ ---data-raw ""{""name"": ""SPACE-NAME-HERE"",""description"": ""optional description here"",""storage"": {""resource_crn"": ""COS-CRN-ID-HERE"" },""compute"": [{""name"": ""MACHINE-LEARNING-SERVICE-NAME-HERE"",""crn"": ""MACHINE-LEARNING-SERVICE-CRN-ID-HERE"" }]}"" - -Alternatively put the data in a separate file.A SPACE-ID is returned in id field of the metadata section. - -Output example: - -{ -""entity"": { -""compute"": [ -{ -""crn"": ""MACHINE-LEARNING-SERVICE-CRN"", -""guid"": ""MACHINE-LEARNING-SERVICE-GUID"", -""name"": ""MACHINE-LEARNING-SERVICE-NAME"", -""type"": ""machine_learning"" -} -], -""description"": ""string"", -""members"": [ -{ -""id"": ""XXXXXXX"", -""role"": ""admin"", -""state"": ""active"", -""type"": ""user"" -} -], -""name"": ""name"", -""scope"": { -""bss_account_id"": ""account_id"" -}, -""status"": { -""state"": ""active"" -} -}, -""metadata"": { -""created_at"": ""2020-07-17T08:36:57.611Z"", -""creator_id"": ""XXXXXXX"", -" -B92F42609B54B82BFE38A69B781052E876258C2C_3,B92F42609B54B82BFE38A69B781052E876258C2C,"""id"": ""SPACE-ID"", -""url"": ""/v2/spaces/SPACE-ID"" -} -} - -You must wait until your deployment space status is ""active"" before continuing. You can poll to check for this as follows. - -curl --location --request GET ""https://api.dataplatform.cloud.ibm.com/v2/spaces/SPACE-ID-HERE"" --H ""Authorization: bearer TOKEN-HERE"" --H ""Content-Type: application/json"" -3. Create a new Decision Optimization model - -All API requests require a version parameter that takes a date in the format version=YYYY-MM-DD. This code example posts a model that uses the file create_model.json. The URL will vary according to the chosen region/location for your machine learning service. See [Endpoint URLs](https://cloud.ibm.com/apidocs/machine-learningendpoint-url). - -curl --location --request POST -""https://us-south.ml.cloud.ibm.com/ml/v4/models?version=2020-08-01"" --H ""Authorization: bearer TOKEN-HERE"" --H ""Content-Type: application/json"" --d @create_model.json - -The create_model.json file contains the following code: - -{ -""name"": ""ModelName"", -""description"": ""ModelDescription"", -""type"": ""do-docplex_22.1"", -""software_spec"": { -""name"": ""do_22.1"" -}, -""custom"": { -""decision_optimization"": { -""oaas.docplex.python"": ""3.10"" -} -}, -""space_id"": ""SPACE-ID-HERE"" -} - -The Python version is stated explicitly here in a custom block. This is optional. Without it your model will use the default version which is currently Python 3.10. As the default version will evolve over time, stating the Python version explicitly enables you to easily change it later or to keep using an older supported version when the default version is updated. Currently supported versions are 3.10. - -" -B92F42609B54B82BFE38A69B781052E876258C2C_4,B92F42609B54B82BFE38A69B781052E876258C2C,"If you want to be able to run jobs for this model from the user interface, instead of only using the REST API , you must define the schema for the input and output data. If you do not define the schema when you create the model, you can only run jobs using the REST API and not from the user interface. - -You can also use the schema specified for input and output in your optimization model: - -{ -""name"": ""Diet-Model-schema"", -""description"": ""Diet"", -""type"": ""do-docplex_22.1"", -""schemas"": { -""input"": [ -{ -""id"": ""diet_food_nutrients"", -""fields"": -{ ""name"": ""Food"", ""type"": ""string"" }, -{ ""name"": ""Calories"", ""type"": ""double"" }, -{ ""name"": ""Calcium"", ""type"": ""double"" }, -{ ""name"": ""Iron"", ""type"": ""double"" }, -{ ""name"": ""Vit_A"", ""type"": ""double"" }, -{ ""name"": ""Dietary_Fiber"", ""type"": ""double"" }, -{ ""name"": ""Carbohydrates"", ""type"": ""double"" }, -{ ""name"": ""Protein"", ""type"": ""double"" } -] -}, -{ -""id"": ""diet_food"", -""fields"": -{ ""name"": ""name"", ""type"": ""string"" }, -{ ""name"": ""unit_cost"", ""type"": ""double"" }, -{ ""name"": ""qmin"", ""type"": ""double"" }, -{ ""name"": ""qmax"", ""type"": ""double"" } -] -}, -{ -""id"": ""diet_nutrients"", -""fields"": -{ ""name"": ""name"", ""type"": ""string"" }, -{ ""name"": ""qmin"", ""type"": ""double"" }, -{ ""name"": ""qmax"", ""type"": ""double"" } -] -} -], -""output"": [ -{ -""id"": ""solution"", -""fields"": -" -B92F42609B54B82BFE38A69B781052E876258C2C_5,B92F42609B54B82BFE38A69B781052E876258C2C,"{ ""name"": ""name"", ""type"": ""string"" }, -{ ""name"": ""value"", ""type"": ""double"" } -] -} -] -}, -""software_spec"": { -""name"": ""do_22.1"" -}, -""space_id"": ""SPACE-ID-HERE"" -} - -When you post a model you provide information about its model type and the software specification to be used.Model types can be, for example: - - - -* do-opl_22.1 for OPL models -* do-cplex_22.1 for CPLEX models -* do-cpo_22.1 for CP models -* do-docplex_22.1 for Python models - - - -Version 20.1 can also be used for these model types. - -For the software specification, you can use the default specifications using their names do_22.1 or do_20.1. See also [Extend software specification notebook](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployPythonClient.htmltopic_wmlpythonclient__extendWML) which shows you how to extend the Decision Optimization software specification (runtimes with additional Python libraries for docplex models). - -A MODEL-ID is returned in id field in the metadata. - -Output example: - -{ -""entity"": { -""software_spec"": { -""id"": ""SOFTWARE-SPEC-ID"" -}, -""type"": ""do-docplex_20.1"" -}, -""metadata"": { -""created_at"": ""2020-07-17T08:37:22.992Z"", -""description"": ""ModelDescription"", -""id"": ""MODEL-ID"", -""modified_at"": ""2020-07-17T08:37:22.992Z"", -""name"": ""ModelName"", -""owner"": """", -""space_id"": ""SPACE-ID"" -} -} -" -B92F42609B54B82BFE38A69B781052E876258C2C_6,B92F42609B54B82BFE38A69B781052E876258C2C,"4. Upload a Decision Optimization model formulation ready for deployment.First compress your model into a (tar.gz, .zip or .jar) file and upload it to be deployed by the Watson Machine Learning service.This code example uploads a model called diet.zip that contains a Python model and no common data: - -curl --location --request PUT -""https://us-south.ml.cloud.ibm.com/ml/v4/models/MODEL-ID-HERE/content?version=2020-08-01&space_id=SPACE-ID-HERE&content_format=native"" --H ""Authorization: bearer TOKEN-HERE"" --H ""Content-Type: application/gzip"" ---data-binary ""@diet.zip"" - -You can download this example and other models from the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples). Select the relevant product and version subfolder. -5. Deploy your modelCreate a reference to your model. Use the SPACE-ID, the MODEL-ID obtained when you created your model ready for deployment and the hardware specification. For example: - -curl --location --request POST ""https://us-south.ml.cloud.ibm.com/ml/v4/deployments?version=2020-08-01"" --H ""Authorization: bearer TOKEN-HERE"" --H ""Content-Type: application/json"" --d @deploy_model.json - -The deploy_model.json file contains the following code: - -{ -""name"": ""Test-Diet-deploy"", -""space_id"": ""SPACE-ID-HERE"", -""asset"": { -""id"": ""MODEL-ID-HERE"" -}, -""hardware_spec"": { -""name"": ""S"" -}, -""batch"": {} -} - -The DEPLOYMENT-ID is returned in id field in the metadata. Output example: - -{ -""entity"": { -""asset"": { -""id"": ""MODEL-ID"" -}, -""custom"": {}, -" -B92F42609B54B82BFE38A69B781052E876258C2C_7,B92F42609B54B82BFE38A69B781052E876258C2C,"""description"": """", -""hardware_spec"": { -""id"": ""HARDWARE-SPEC-ID"", -""name"": ""S"", -""num_nodes"": 1 -}, -""name"": ""Test-Diet-deploy"", -""space_id"": ""SPACE-ID"", -""status"": { -""state"": ""ready"" -} -}, -""metadata"": { -""created_at"": ""2020-07-17T09:10:50.661Z"", -""description"": """", -""id"": ""DEPLOYMENT-ID"", -""modified_at"": ""2020-07-17T09:10:50.661Z"", -""name"": ""test-Diet-deploy"", -""owner"": """", -""space_id"": ""SPACE-ID"" -} -} -6. Once deployed, you can monitor your model's deployment state. Use the DEPLOYMENT-ID.For example: - -curl --location --request GET ""https://us-south.ml.cloud.ibm.com/ml/v4/deployments/DEPLOYMENT-ID-HERE?version=2020-08-01&space_id=SPACE-ID-HERE"" --H ""Authorization: bearer TOKEN-HERE"" --H ""Content-Type: application/json"" - -Output example: -7. You can then Submit jobs for your deployed model defining the input data and the output (results of the optimization solve) and the log file.For example, the following shows the contents of a file called myjob.json. It contains (inline) input data, some solve parameters, and specifies that the output will be a .csv file. For examples of other types of input data references, see [Model input and output data adaptation](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIODataDefn.htmltopic_modelIOAdapt). - -{ -""name"":""test-job-diet"", -""space_id"": ""SPACE-ID-HERE"", -""deployment"": { -" -B92F42609B54B82BFE38A69B781052E876258C2C_8,B92F42609B54B82BFE38A69B781052E876258C2C,"""id"": ""DEPLOYMENT-ID-HERE"" -}, -""decision_optimization"" : { -""solve_parameters"" : { -""oaas.logAttachmentName"":""log.txt"", -""oaas.logTailEnabled"":""true"" -}, -""input_data"": [ -{ -""id"":""diet_food.csv"", -""fields"" : ""name"",""unit_cost"",""qmin"",""qmax""], -""values"" : -""Roasted Chicken"", 0.84, 0, 10], -""Spaghetti W/ Sauce"", 0.78, 0, 10], -""Tomato,Red,Ripe,Raw"", 0.27, 0, 10], -""Apple,Raw,W/Skin"", 0.24, 0, 10], -""Grapes"", 0.32, 0, 10], -""Chocolate Chip Cookies"", 0.03, 0, 10], -""Lowfat Milk"", 0.23, 0, 10], -""Raisin Brn"", 0.34, 0, 10], -""Hotdog"", 0.31, 0, 10] -] -}, -{ -""id"":""diet_food_nutrients.csv"", -""fields"" : ""Food"",""Calories"",""Calcium"",""Iron"",""Vit_A"",""Dietary_Fiber"",""Carbohydrates"",""Protein""], -""values"" : -""Spaghetti W/ Sauce"", 358.2, 80.2, 2.3, 3055.2, 11.6, 58.3, 8.2], -""Roasted Chicken"", 277.4, 21.9, 1.8, 77.4, 0, 0, 42.2], -""Tomato,Red,Ripe,Raw"", 25.8, 6.2, 0.6, 766.3, 1.4, 5.7, 1], -""Apple,Raw,W/Skin"", 81.4, 9.7, 0.2, 73.1, 3.7, 21, 0.3], -""Grapes"", 15.1, 3.4, 0.1, 24, 0.2, 4.1, 0.2], -""Chocolate Chip Cookies"", 78.1, 6.2, 0.4, 101.8, 0, 9.3, 0.9], -" -B92F42609B54B82BFE38A69B781052E876258C2C_9,B92F42609B54B82BFE38A69B781052E876258C2C,"""Lowfat Milk"", 121.2, 296.7, 0.1, 500.2, 0, 11.7, 8.1], -""Raisin Brn"", 115.1, 12.9, 16.8, 1250.2, 4, 27.9, 4], -""Hotdog"", 242.1, 23.5, 2.3, 0, 0, 18, 10.4] -] -}, -{ -""id"":""diet_nutrients.csv"", -""fields"" : ""name"",""qmin"",""qmax""], -""values"" : -""Calories"", 2000, 2500], -""Calcium"", 800, 1600], -""Iron"", 10, 30], -""Vit_A"", 5000, 50000], -""Dietary_Fiber"", 25, 100], -""Carbohydrates"", 0, 300], -""Protein"", 50, 100] -] -} -], -""output_data"": [ -{ -""id"":""..csv"" -} -] -} -} - -This code example posts a job that uses this file myjob.json. - -curl --location --request POST ""https://us-south.ml.cloud.ibm.com/ml/v4/deployment_jobs?version=2020-08-01&space_id=SPACE-ID-HERE"" --H ""Authorization: bearer TOKEN-HERE"" --H ""Content-Type: application/json"" --H ""cache-control: no-cache"" --d @myjob.json - -A JOB-ID is returned. Output example: (the job is queued) - -{ -""entity"": { -""decision_optimization"": { -""input_data"": [{ -""id"": ""diet_food.csv"", -""fields"": ""name"", ""unit_cost"", ""qmin"", ""qmax""], -" -B92F42609B54B82BFE38A69B781052E876258C2C_10,B92F42609B54B82BFE38A69B781052E876258C2C,"""values"": ""Roasted Chicken"", 0.84, 0, 10], ""Spaghetti W/ Sauce"", 0.78, 0, 10], ""Tomato,Red,Ripe,Raw"", 0.27, 0, 10], ""Apple,Raw,W/Skin"", 0.24, 0, 10], ""Grapes"", 0.32, 0, 10], ""Chocolate Chip Cookies"", 0.03, 0, 10], ""Lowfat Milk"", 0.23, 0, 10], ""Raisin Brn"", 0.34, 0, 10], ""Hotdog"", 0.31, 0, 10]] -}, { -""id"": ""diet_food_nutrients.csv"", -""fields"": ""Food"", ""Calories"", ""Calcium"", ""Iron"", ""Vit_A"", ""Dietary_Fiber"", ""Carbohydrates"", ""Protein""], -""values"": ""Spaghetti W/ Sauce"", 358.2, 80.2, 2.3, 3055.2, 11.6, 58.3, 8.2], ""Roasted Chicken"", 277.4, 21.9, 1.8, 77.4, 0, 0, 42.2], ""Tomato,Red,Ripe,Raw"", 25.8, 6.2, 0.6, 766.3, 1.4, 5.7, 1], ""Apple,Raw,W/Skin"", 81.4, 9.7, 0.2, 73.1, 3.7, 21, 0.3], ""Grapes"", 15.1, 3.4, 0.1, 24, 0.2, 4.1, 0.2], ""Chocolate Chip Cookies"", 78.1, 6.2, 0.4, 101.8, 0, 9.3, 0.9], ""Lowfat Milk"", 121.2, 296.7, 0.1, 500.2, 0, 11.7, 8.1], ""Raisin Brn"", 115.1, 12.9, 16.8, 1250.2, 4, 27.9, 4], ""Hotdog"", 242.1, 23.5, 2.3, 0, 0, 18, 10.4]] -}, { -""id"": ""diet_nutrients.csv"", -""fields"": ""name"", ""qmin"", ""qmax""], -" -B92F42609B54B82BFE38A69B781052E876258C2C_11,B92F42609B54B82BFE38A69B781052E876258C2C,"""values"": ""Calories"", 2000, 2500], ""Calcium"", 800, 1600], ""Iron"", 10, 30], ""Vit_A"", 5000, 50000], ""Dietary_Fiber"", 25, 100], ""Carbohydrates"", 0, 300], ""Protein"", 50, 100]] -}], -""output_data"": [ -{ -""id"": ""..csv"" -} -], -""solve_parameters"": { -""oaas.logAttachmentName"": ""log.txt"", -""oaas.logTailEnabled"": ""true"" -}, -""status"": { -""state"": ""queued"" -} -}, -""deployment"": { -""id"": ""DEPLOYMENT-ID"" -}, -""platform_job"": { -""job_id"": """", -""run_id"": """" -} -}, -""metadata"": { -""created_at"": ""2020-07-17T10:42:42.783Z"", -""id"": ""JOB-ID"", -""name"": ""test-job-diet"", -""space_id"": ""SPACE-ID"" -} -} -8. You can also monitor job states. Use the JOB-IDFor example: - -curl --location --request GET -""https://us-south.ml.cloud.ibm.com/ml/v4/deployment_jobs/JOB-ID-HERE?version=2020-08-01&space_id=SPACE-ID-HERE"" --H ""Authorization: bearer TOKEN-HERE"" --H ""Content-Type: application/json"" - -Output example: (job has completed) - -{ -""entity"": { -""decision_optimization"": { -""input_data"": [{ -""id"": ""diet_food.csv"", -""fields"": ""name"", ""unit_cost"", ""qmin"", ""qmax""], -" -B92F42609B54B82BFE38A69B781052E876258C2C_12,B92F42609B54B82BFE38A69B781052E876258C2C,"""values"": ""Roasted Chicken"", 0.84, 0, 10], ""Spaghetti W/ Sauce"", 0.78, 0, 10], ""Tomato,Red,Ripe,Raw"", 0.27, 0, 10], ""Apple,Raw,W/Skin"", 0.24, 0, 10], ""Grapes"", 0.32, 0, 10], ""Chocolate Chip Cookies"", 0.03, 0, 10], ""Lowfat Milk"", 0.23, 0, 10], ""Raisin Brn"", 0.34, 0, 10], ""Hotdog"", 0.31, 0, 10]] -}, { -""id"": ""diet_food_nutrients.csv"", -""fields"": ""Food"", ""Calories"", ""Calcium"", ""Iron"", ""Vit_A"", ""Dietary_Fiber"", ""Carbohydrates"", ""Protein""], -""values"": ""Spaghetti W/ Sauce"", 358.2, 80.2, 2.3, 3055.2, 11.6, 58.3, 8.2], ""Roasted Chicken"", 277.4, 21.9, 1.8, 77.4, 0, 0, 42.2], ""Tomato,Red,Ripe,Raw"", 25.8, 6.2, 0.6, 766.3, 1.4, 5.7, 1], ""Apple,Raw,W/Skin"", 81.4, 9.7, 0.2, 73.1, 3.7, 21, 0.3], ""Grapes"", 15.1, 3.4, 0.1, 24, 0.2, 4.1, 0.2], ""Chocolate Chip Cookies"", 78.1, 6.2, 0.4, 101.8, 0, 9.3, 0.9], ""Lowfat Milk"", 121.2, 296.7, 0.1, 500.2, 0, 11.7, 8.1], ""Raisin Brn"", 115.1, 12.9, 16.8, 1250.2, 4, 27.9, 4], ""Hotdog"", 242.1, 23.5, 2.3, 0, 0, 18, 10.4]] -}, { -""id"": ""diet_nutrients.csv"", -""fields"": ""name"", ""qmin"", ""qmax""], -" -B92F42609B54B82BFE38A69B781052E876258C2C_13,B92F42609B54B82BFE38A69B781052E876258C2C,"""values"": ""Calories"", 2000, 2500], ""Calcium"", 800, 1600], ""Iron"", 10, 30], ""Vit_A"", 5000, 50000], ""Dietary_Fiber"", 25, 100], ""Carbohydrates"", 0, 300], ""Protein"", 50, 100]] -}], -""output_data"": [{ -""fields"": ""Name"", ""Value""], -""id"": ""kpis.csv"", -""values"": ""Total Calories"", 2000], ""Total Calcium"", 800.0000000000001], ""Total Iron"", 11.278317739831891], ""Total Vit_A"", 8518.432542485823], ""Total Dietary_Fiber"", 25], ""Total Carbohydrates"", 256.80576358904455], ""Total Protein"", 51.17372234135308], ""Minimal cost"", 2.690409171696264]] -}, { -""fields"": ""name"", ""value""], -""id"": ""solution.csv"", -""values"": ""Spaghetti W/ Sauce"", 2.1551724137931036], ""Chocolate Chip Cookies"", 10], ""Lowfat Milk"", 1.8311671008899097], ""Hotdog"", 0.9296975991385925]] -}], -""output_data_references"": [], -""solve_parameters"": { -""oaas.logAttachmentName"": ""log.txt"", -""oaas.logTailEnabled"": ""true"" -}, -""solve_state"": { -""details"": { -""KPI.Minimal cost"": ""2.690409171696264"", -""KPI.Total Calcium"": ""800.0000000000001"", -""KPI.Total Calories"": ""2000.0"", -""KPI.Total Carbohydrates"": ""256.80576358904455"", -""KPI.Total Dietary_Fiber"": ""25.0"", -""KPI.Total Iron"": ""11.278317739831891"", -""KPI.Total Protein"": ""51.17372234135308"", -" -B92F42609B54B82BFE38A69B781052E876258C2C_14,B92F42609B54B82BFE38A69B781052E876258C2C,"""KPI.Total Vit_A"": ""8518.432542485823"", -""MODEL_DETAIL_BOOLEAN_VARS"": ""0"", -""MODEL_DETAIL_CONSTRAINTS"": ""7"", -""MODEL_DETAIL_CONTINUOUS_VARS"": ""9"", -""MODEL_DETAIL_INTEGER_VARS"": ""0"", -""MODEL_DETAIL_KPIS"": ""[""Total Calories"", ""Total Calcium"", ""Total Iron"", ""Total Vit_A"", ""Total Dietary_Fiber"", ""Total Carbohydrates"", ""Total Protein"", ""Minimal cost""]"", -""MODEL_DETAIL_NONZEROS"": ""57"", -""MODEL_DETAIL_TYPE"": ""LP"", -""PROGRESS_CURRENT_OBJECTIVE"": ""2.6904091716962637"" -}, -""latest_engine_activity"": [ -""2020-07-21T16:37:36Z, INFO] Model: diet"", -""2020-07-21T16:37:36Z, INFO] - number of variables: 9"", -""2020-07-21T16:37:36Z, INFO] - binary=0, integer=0, continuous=9"", -""2020-07-21T16:37:36Z, INFO] - number of constraints: 7"", -""2020-07-21T16:37:36Z, INFO] - linear=7"", -""2020-07-21T16:37:36Z, INFO] - parameters: defaults"", -""2020-07-21T16:37:36Z, INFO] - problem type is: LP"", -""2020-07-21T16:37:36Z, INFO] Warning: Model: ""diet"" is not a MIP problem, progress listeners are disabled"", -""2020-07-21T16:37:36Z, INFO] objective: 2.690"", -""2020-07-21T16:37:36Z, INFO] ""Spaghetti W/ Sauce""=2.155"", -""2020-07-21T16:37:36Z, INFO] ""Chocolate Chip Cookies""=10.000"", -""2020-07-21T16:37:36Z, INFO] ""Lowfat Milk""=1.831"", -""2020-07-21T16:37:36Z, INFO] ""Hotdog""=0.930"", -" -B92F42609B54B82BFE38A69B781052E876258C2C_15,B92F42609B54B82BFE38A69B781052E876258C2C,"""2020-07-21T16:37:36Z, INFO] solution.csv"" -], -""solve_status"": ""optimal_solution"" -}, -""status"": { -""completed_at"": ""2020-07-21T16:37:36.989Z"", -""running_at"": ""2020-07-21T16:37:35.622Z"", -""state"": ""completed"" -} -}, -""deployment"": { -""id"": ""DEPLOYMENT-ID"" -} -}, -""metadata"": { -""created_at"": ""2020-07-21T16:37:09.130Z"", -""id"": ""JOB-ID"", -""modified_at"": ""2020-07-21T16:37:37.268Z"", -""name"": ""test-job-diet"", -""space_id"": ""SPACE-ID"" -} -} -9. Optional: You can delete jobs as follows: - -curl --location --request DELETE ""https://us-south.ml.cloud.ibm.com/ml/v4/deployment_jobs/JOB-ID-HERE?version=2020-08-01&space_id=SPACE-ID-HERE&hard_delete=true"" --H ""Authorization: bearer TOKEN-HERE"" - -" -DEB599F49C3E459A08E8BF25304B063B50CAA294_0,DEB599F49C3E459A08E8BF25304B063B50CAA294," Deploying a Decision Optimization model by using the user interface - -You can save a model for deployment in the Decision Optimization experiment UI and promote it to your Watson Machine Learning deployment space. - -" -DEB599F49C3E459A08E8BF25304B063B50CAA294_1,DEB599F49C3E459A08E8BF25304B063B50CAA294," Procedure - -To save your model for deployment: - - - -1. In the Decision Optimization experiment UI, either from the Scenario or from the Overview pane, click the menu icon ![Scenario menu icon](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/images/scenariomenu.jpg) for the scenario that you want to deploy, and select Save for deployment -2. Specify a name for your model and add a description, if needed, then click Next. - - - -1. Review the Input and Output schema and select the tables you want to include in the schema. -2. Review the Run parameters and add, modify or delete any parameters as necessary. -3. Review the Environment and Model files that are listed in the Review and save window. -4. Click Save. - - - -The model is then available in the Models section of your project. - - - -To promote your model to your deployment space: - - - -3. View your model in the Models section of your project.You can see a summary with input and output schema. Click Promote to deployment space. -4. In the Promote to space window that opens, check that the Target space field displays the name of your deployment space and click Promote. -5. Click the link deployment space in the message that you receive that confirms successful promotion. Your promoted model is displayed in the Assets tab of your Deployment space. The information pane shows you the Type, Software specification, description and any defined tags such as the Python version used. - - - -To create a new deployment: - - - -6. From the Assets tab of your deployment space, open your model and click New Deployment. -7. In the Create a deployment window that opens, specify a name for your deployment and select a Hardware specification.Click Create to create the deployment. Your deployment window opens from which you can later create jobs. - - - - - -" -DEB599F49C3E459A08E8BF25304B063B50CAA294_2,DEB599F49C3E459A08E8BF25304B063B50CAA294," Creating and running Decision Optimization jobs - -You can create and run jobs to your deployed model. - -" -DEB599F49C3E459A08E8BF25304B063B50CAA294_3,DEB599F49C3E459A08E8BF25304B063B50CAA294," Procedure - - - -1. Return to your deployment space by using the navigation path and (if the data pane isn't already open) click the data icon to open the data pane. Upload your input data tables, and solution and kpi output tables here. (You must have output tables defined in your model to be able to see the solution and kpi values.) -2. Open your deployment model, by selecting it in the Deployments tab of your deployment space and click New job. -3. Define the details of your job by entering a name, and an optional description for your job and click Next. -4. Configure your job by selecting a hardware specification and Next.You can choose to schedule you job here, or leave the default schedule option off and click Next. You can also optionally choose to turn on notifications or click Next. -5. Choose the data that you want to use in your job by clicking Select the source for each of your input and output tables. Click Next. -" -95689297B729A4186914E81A59FFB3A09289F8D8,95689297B729A4186914E81A59FFB3A09289F8D8," Python client examples - -You can deploy a Decision Optimization model, create and monitor jobs, and get solutions by using the Watson Machine Learning Python client. - -To deploy your model, see [Model deployment](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelDeploymentTaskCloud.html). - -For more information, see [Watson Machine Learning Python client documentation](https://ibm.github.io/watson-machine-learning-sdk/core_api.htmldeployments). - -See also the following sample notebooks located in the jupyter folder of the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples). Select the relevant product and version subfolder.. - - - -* Deploying a DO model with WML -* RunDeployedModel -* ExtendWMLSoftwareSpec - - - -The Deploying a DO model with WML sample shows you how to deploy a Decision Optimization model, create and monitor jobs, and get solutions by using the Watson Machine Learning Python client. This notebook uses the diet sample for the Decision Optimization model and takes you through the whole procedure without using the Decision Optimization experiment UI. - -The RunDeployedModel shows you how to run jobs and get solutions from an existing deployed model. This notebook uses a model that is saved for deployment from a Decision Optimization experiment UI scenario. - -The ExtendWMLSoftwareSpec notebook shows you how to extend the Decision Optimization software specification within Watson Machine Learning. By extending the software specification, you can use your own pip package to add custom code and deploy it in your model and send jobs to it. - -You can also find in the samples several notebooks for deploying various models, for example CPLEX, DOcplex and OPL models with different types of data. -" -135AD82FAAA11FD4FEC7CE7A31516E98EE3D0EA5_0,135AD82FAAA11FD4FEC7CE7A31516E98EE3D0EA5," Solve parameters - -To control solve behavior, you can specify Decision Optimization solve parameters in your request as named value pairs. - -For example: - -""solve_parameters"" : { -""oaas.logAttachmentName"":""log.txt"", -""oaas.logTailEnabled"":""true"" -} - -You can use this code to collect the engine log tail during the solve and the whole engine log as output at the end of the solve. - -You can use these parameters in your request. - - - - Name Type Description - - oaas.timeLimit Number You can use this parameter to set a time limit in milliseconds. - oaas.resultsFormat Enum



* JSON
* CSV
* XML
* TEXT
* XLSX


Specifies the format for returned results. The default formats are as follows:



* CPLEX - .xml
* CPO - .json
* OPL - .csv
* DOcplex - .json



Other formats might or might not be supported depending on the application type. - oaas.oplRunConfig String Specifies the name of the OPL run configuration to be executed. - oaas.docplex.python 3.10 You can use this parameter to set the Python version for the run in your deployed model. If not specified, 3.10 is used by default. - oaas.logTailEnabled Boolean Use this parameter to include the log tail in the solve status. - oaas.logAttachmentName String If defined, engine logs will be defined as a job output attachment. -" -135AD82FAAA11FD4FEC7CE7A31516E98EE3D0EA5_1,135AD82FAAA11FD4FEC7CE7A31516E98EE3D0EA5," oaas.engineLogLevel Enum



* OFF
* SEVERE
* WARNING
* INFO
* CONFIG
* FINE
* FINER
* FINEST


You can use this parameter to define the level of detail that is provided by the engine log. The default value is INFO. - oaas.logLimit Number Maximum log-size limit in number of characters. - oaas.dumpZipName Can be viewed as Boolean (see Description) If defined, a job dump (inputs and outputs) .zip file is provided with this name as a job output attachment. The name can contain a placeholder ${job_id}. If defined with no value, dump_${job_id}.zip attachmentName is used. If not defined, by default, no job dump .zip file is attached. -" -135AD82FAAA11FD4FEC7CE7A31516E98EE3D0EA5_2,135AD82FAAA11FD4FEC7CE7A31516E98EE3D0EA5," oaas.dumpZipRules String If defined, ta .zip file is generated according to specific job rules (RFC 1960-based Filter). It must be used in conjunction with the {@link DUMP_ZIP_NAME} parameter. Filters can be defined on the duration and the following {@link com.ibm.optim.executionservice.model.solve.SolveState} properties:



* duration
* solveState.executionStatus
* solveState.interruptionStatus
* solveState.solveStatus
* solveState.failureInfo.type



Example:

(duration>=1000) or (&(duration<1000)(!(solveState.solveStatus=OPTIMAL_SOLUTION))) or ( (solveState.interruptionStatus=OUT_OF_MEMORY) (solveState.failureInfo.type=INFRASTRUCTURE))

(duration>=1000) or (&(duration<1000)(!(solveState.solveStatus=OPTIMAL_SOLUTION))) or (|(solveState.interruptionStatus=OUT_OF_MEMORY) (solveState.failureInfo.type=INFRASTRUCTURE)) -" -939233F807850AE8D28246ADE7FDCCDA66E9DF03_0,939233F807850AE8D28246ADE7FDCCDA66E9DF03," Model deployment - -To deploy a Decision Optimization model, create a model ready for deployment in your deployment space and then upload your model as an archive. When deployed, you can submit jobs to your model and monitor job states. - -" -939233F807850AE8D28246ADE7FDCCDA66E9DF03_1,939233F807850AE8D28246ADE7FDCCDA66E9DF03," Procedure - -To deploy a Decision Optimization model: - - - -1. Package your Decision Optimization model formulation with your common data (optional) ready for deployment as a tar.gz, .zip, or .jar file. Your archive can include the following optional files: - - - -1. Your model files -2. Settings (For more information, see [ Solve parameters](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeploySolveParams.htmltopic_deploysolveparams) ) -3. Common data - - - -Note: For Python models with multiple .py files, put all files in the same folder in your archive. The same folder must contain a main file called main.py. Do not use subfolders. -2. Create a model ready for deployment in Watson Machine Learning providing the following information: - - - -* Machine Learning service instance -* Deployment space instance -* Software specification ( Decision Optimizationruntime version): - - - -* do_ 22.1 runtime is based on CPLEX 22.1.1.0 -* do_ 20.1 runtime is based on CPLEX 20.1.0.1 - - - -You can extend the software specification provided by Watson Machine Learning. See the [ExtendWMLSoftwareSpec](https://github.com/IBMDecisionOptimization/DO-Samples/blob/watson_studio_cloud/jupyter/watsonx.ai%20and%20Cloud%20Pak%20for%20Data%20as%20a%20Service/ExtendWMLSoftwareSpec.ipynb) notebook in the jupyter folder of the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples). - -Updating CPLEX runtimes: - -" -939233F807850AE8D28246ADE7FDCCDA66E9DF03_2,939233F807850AE8D28246ADE7FDCCDA66E9DF03,"If you previously deployed your model with a CPLEX runtime that is no longer supported, you can update your existing deployed model by using either the [ REST API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.htmlupdate-soft-specs-api) or the [UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.htmldiscont-soft-spec). -* The model type: - - - -* opl (do-opl_) -* cplex (do-cplex_) -* cpo (do-cpo_) -* docplex (do-docplex_) using Python 3.10 - - - -(The Runtime version can be one of the available runtimes so, for example, an opl model with runtime 22.1 would have the model type do-opl_ 22.1.) - - - -You obtain a MODEL-ID. Your Watson Machine Learning model can then be used in one or multiple deployments. -3. Upload your model archive (tar.gz, .zip, or .jar file) on Watson Machine Learning. See [Model input and output data file formats](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIOFileFormats.htmltopic_modelIOFileFormats) for information about input file types. -4. Deploy your model by using the MODEL-ID, SPACE-ID, and the hardware specification for the available configuration sizes (small S, medium M, large L, extra large XL). See [configurations](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/Paralleljobs.htmltopic_paralleljobs__34c6).You obtain a DEPLOYMENT-ID. -5. Monitor the deployment by using the DEPLOYMENT-ID. Deployment states can be: initializing, updating, ready, or failed. -" -02C5718919D676E7EA14D16AC226407CC675C95E,02C5718919D676E7EA14D16AC226407CC675C95E," Model execution - -Once your model is deployed, you can submit Decision Optimization jobs to this deployment. - -You can submit jobs specifying the: - - - -* Input data: the transaction data used as input by the model. This can be inline or referenced -* Output data: to define how the output data is generated by model. This is returned as inline or referenced data. -* Solve parameters: to customize the behavior of the solution engine - - - -For more information see [Model input and output data adaptation](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIODataDefn.htmltopic_modelIOAdapt) - -After submitting a job, you can use the job-id to poll the job status to collect the: - - - -* Job execution status or error message -* Solve execution status, progress and log tail -* Inline or referenced output data - - - -Job states can be : queued, running, completed, failed, canceled. -" -E9E9556CA0C7B258D910BB31222A78BEABB46A48_0,E9E9556CA0C7B258D910BB31222A78BEABB46A48," Model input and output data adaptation - -When submitting your job you can include your data inline or reference your data in your request. This data will be mapped to a file named with data identifier and used by the model. The data identifier extension will define the format of the file used. - -The following adaptations are supported: - - - -* Tabular inline data to embed your data in your request. For example: - -""input_data"": [{ -""id"":""diet_food.csv"", -""fields"" : ""name"",""unit_cost"",""qmin"",""qmax""], -""values"" : -""Roasted Chicken"", 0.84, 0, 10] -] -}] - -This will generate the corresponding diet_food.csv file that is used as the model input file. Only csv adaptation is currently supported. -* Inline data, that is, non-tabular data (such as an OPL .dat file or an .lpfile) to embed data in your request. For example: - -""input_data"": [{ -""id"":""diet_food.csv"", -""content"":""Input data as a base64 encoded string"" -}] -* URL referenced data allowing you to reference files stored at a particular URL or REST data service. For example: - -""input_data_references"": { -""type"": ""url"", -""id"": ""diet_food.csv"", -""connection"": { -""verb"": ""GET"", -""url"": ""https://myserver.com/diet_food.csv"", -""headers"": { -""Content-Type"": ""application/x-www-form-urlencoded"" -} -}, -""location"": {} -} - -This will copy the corresponding diet_food.csv file that is used as the model input file. -* Data assets allowing you to reference any data asset or connected data asset present in your space and benefit from the data connector integration capabilities. For example: - -""input_data_references"": [{ -""name"": ""test_ref_input"", -""type"": ""data_asset"", -""connection"": {}, -""location"": { -" -E9E9556CA0C7B258D910BB31222A78BEABB46A48_1,E9E9556CA0C7B258D910BB31222A78BEABB46A48,"""href"": ""/v2/assets/ASSET-ID?space_id=SPACE-ID"" -} -}], -""output_data_references"": [{ -""type"": ""data_asset"", -""connection"": {}, -""location"": { -""href"": ""/v2/assets/ASSET-ID?space_id=SPACE-ID"" -} -}] - -With this data asset type there are many different connections available. For more information, see [Batch deployment details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.htmldo). -* Connection assets allowing you to reference any data and then refer to the connection, without having to specify credentials each time. For more information, see [Supported data sources in Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DOconnections.html). Referencing a secure connection without having to use inline credentials in the payload also offers you better security. For more information, see [Example connection_asset payload](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.htmlconnection_asset_payload).For example, to connect to a COS/S3 via a Connection asset: - -{ -""type"" : ""connection_asset"", -""id"" : ""diet_food.csv"", -""connection"" : { -""id"" : -}, -""location"" : { -""file_name"" : ""FILENAME.csv"", -""bucket"" : ""BUCKET-NAME"" -} -} - -For information about the parameters used in these examples, see [Deployment job definitions](https://cloud.ibm.com/apidocs/machine-learning-cpdeployment-job-definitions-create). - -Another example showing you how to connect to a DB2 asset via a connection asset: - -{ -""type"" : ""connection_asset"", -" -E9E9556CA0C7B258D910BB31222A78BEABB46A48_2,E9E9556CA0C7B258D910BB31222A78BEABB46A48,"""id"" : ""diet_food.csv"", -""connection"" : { -""id"" : -}, -""location"" : { -""table_name"" : ""TABLE-NAME"", -""schema_name"" : ""SCHEMA-NAME"" -} -} - - - -With this connection asset type there are many different connections available. For more information, see [Batch deployment details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.htmldo). - -You can combine different adaptations in the same request. For more information about data definitions see [Adding data to an analytics project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html). -" -977988398EFBDCD10DB4ACED047D8D864883614A_0,977988398EFBDCD10DB4ACED047D8D864883614A," Model input and output data file formats - -With your Decision Optimization model, you can use the following input and output data identifiers and extension combinations. - -This table shows the supported file type combinations for Decision Optimization in Watson Machine Learning: - - - - Model type Input file type Output file type Comments - - cplex .lp
.mps
.sav
.feasibility
.prm

.jar for Java™
models .xml
.json

The name of the output file must be solution The output format can be specified by using the API.

Files of type .lp, .mps, and .sav can be compressed by using gzip or bzip2, and uploaded as, for example, .lp.gz or .sav.bz2.

The schemas for the CPLEX formats for solutions, conflicts, and feasibility files are available for you to download in the cplex_xsds.zip archive from the [Decision Optimization github](https://github.com/IBMDecisionOptimization/DO-Samples/blob/watson_studio_cloud/resources/cplex_xsds.zip). - cpo .cpo

.jar for Java
models .xml
.json

The name of the output file must be solution The output format can be specified by using the solve parameter.

For the native file format for CPO models, see: [CP Optimizer file format syntax](https://www.ibm.com/docs/en/icos/20.1.0?topic=manual-cp-optimizer-file-format-syntax). -" -977988398EFBDCD10DB4ACED047D8D864883614A_1,977988398EFBDCD10DB4ACED047D8D864883614A," opl .mod
.dat
.oplproject
.xls
.json
.csv

.jar for Java
models .xml
.json
.txt
.csv
.xls The output format is consistent with the input type but can be specified by using the solve parameter if needed. To take advantage of data connectors, use the .csv format.

Only models that are defined with tuple sets can be deployed; other OPL structures are not supported.

To read and write input and output in OPL, see [OPL models](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/OPLmodels.htmltopic_oplmodels). - docplex .py
. (input data) Any output file type that is specified in the model. Any format can be used in your Python code, but to take advantage of data connectors, use the .csv format.

To read and write input and output in Python, use the commands get_input_stream(""filename"") and get_output_stream(""filename""). See [DOcplex API sum example](https://ibmdecisionoptimization.github.io/docplex-doc/2.23.222/mp/docplex.util.environment.html) - - - -Data identifier restrictions -: A file name has the following restrictions: - - - -* Is limited to 255 characters -* Can include only ASCII characters -" -D476F3E93D23F52EF1D5079343D92DB793E3AD5E,D476F3E93D23F52EF1D5079343D92DB793E3AD5E," Output data definition - -When submitting your job you can define what output data you want and how you collect it (as either inline or referenced data). - -For more information about output file types and names see [Model input and output data file formats](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIOFileFormats.htmltopic_modelIOFileFormats). - -Some output data definition examples: - - - -* To collect solution.csv output as inline data: - -""output_data"": [{ -""id"":""solution.csv"" -}] -* Regexp can be also used as an identifier. For example to collect all csv output files as inline data: - -""output_data"": [{ -""id"":""..csv"" -}] -* Similarly for reference data, to collect all csv files in COS/S3 in job specific folder, you can combine regexp and ${job_id} and ${ attachment_name } place holder - -""output_data_references"": [{ -""id"":""..csv"", -""type"": ""connection_asset"", -""connection"": { -""id"" : -}, -""location"": { -""bucket"": ""XXXXXXXXX"", -""path"": ""${job_id}/${attachment_name}"" } -}] - -For example, here if you have a job with identifier to generate a solution.csv file, you will have in your COS/S3 bucket, a XXXXXXXXX / solution.csv file. -" -693BC91EAADEAE664982AA88A372590A6758F294_0,693BC91EAADEAE664982AA88A372590A6758F294," Running jobs - -Decision Optimization uses Watson Machine Learning asynchronous APIs to enable jobs to be run in parallel. - -To solve a problem, you can create a new job from the model deployment and associate data to it. See [Deployment steps](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployIntro.htmltopic_wmldeployintro) and the [REST API example](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployModelRest.htmltask_deploymodelREST). You are not charged for deploying a model. Only the solving of a model with some data is charged, based on the running time. - -To solve more than one job at a time, specify more than one node when you create your deployment. For example in this [REST API example](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployModelRest.htmltask_deploymodelREST__createdeploy), increment the number of the nodes by changing the value of the nodes property: ""nodes"" : 1. - - - -1. The new job is sent to the queue. -2. If a POD is started but idle (not running a job), it immediately begins processing this job. -3. Otherwise, if the maximum number of nodes is not reached, a new POD is started. (Starting a POD can take a few seconds). The job is then assigned to this new POD for processing. -4. Otherwise, the job waits in the queue until one of the running PODs has finished and can pick up the waiting job. - - - -The configuration of PODs of each size is as follows: - - - -Table 1. T-shirt sizes for Decision Optimization - - Definition Name Description - - 2 vCPU and 8 GB S Small - 4 vCPU and 16 GB M Medium - 8 vCPU and 32 GB L Large - 16 vCPU and 64 GB XL Extra Large - - - -For all configurations, 1 vCPU and 512 MB are reserved for internal use. - -" -693BC91EAADEAE664982AA88A372590A6758F294_1,693BC91EAADEAE664982AA88A372590A6758F294,"In addition to the solve time, the pricing depends on the selected size through a multiplier. - -In the deployment configuration, you can also set the maximal number of nodes to be used. - -Idle PODs are automatically stopped after some timeout. If a new job is submitted when no PODs are up, it takes some time (approximately 30 seconds) for the POD to restart. -" -73DEFA42948BBE878834CA4B7C9B0395F44B9B90_0,73DEFA42948BBE878834CA4B7C9B0395F44B9B90," Changing Python version for an existing deployed model with the REST API - -You can update an existing Decision Optimization model using the Watson Machine Learning REST API. This can be useful, for example, if in your model you have explicitly specified a Python version that has now become deprecated. - -" -73DEFA42948BBE878834CA4B7C9B0395F44B9B90_1,73DEFA42948BBE878834CA4B7C9B0395F44B9B90," Procedure - -To change Python version for an existing deployed model: - - - -1. Create a revision to your Decision Optimization model - -All API requests require a version parameter that takes a date in the format version=YYYY-MM-DD. This code example posts a model that uses the file update_model.json. The URL will vary according to the chosen region/location for your machine learning service. - -curl --location --request POST -""https://us-south.ml.cloud.ibm.com/ml/v4/models/MODEL-ID-HERE/revisions?version=2021-12-01"" --H ""Authorization: bearer TOKEN-HERE"" --H ""Content-Type: application/json"" --d @revise_model.json - -The revise_model.json file contains the following code: - -{ -""commit_message"": ""Save current model"", -""space_id"": ""SPACE-ID-HERE"" -} - -Note the model revision number ""rev"" that is provided in the output for use in the next step. -2. Update an existing deployment so that current jobs will not be impacted: - -curl --location --request PATCH -""https://us-south.ml.cloud.ibm.com/ml/v4/deployments/DEPLOYMENT-ID-HERE?version=2021-12-01&space_id=SPACE-ID-HERE"" --H ""Authorization: bearer TOKEN-HERE"" --H ""Content-Type: application/json"" --d @revise_deploy.json - -The revise_deploy.json file contains the following code: - -[ -{ -""op"": ""add"", -""path"": ""/asset"", -""value"": { -""id"":""MODEL-ID-HERE"", -""rev"":""MODEL-REVISION-NUMBER-HERE"" -} -} -] -3. Patch an existing model to explicitly specify Python version 3.10 - -curl --location --request PATCH -" -73DEFA42948BBE878834CA4B7C9B0395F44B9B90_2,73DEFA42948BBE878834CA4B7C9B0395F44B9B90,"""https://us-south.ml.cloud.ibm.com/ml/v4/models/MODEL-ID-HERE?rev=MODEL-REVISION-NUMBER-HERE&version=2021-12-01&space_id=SPACE-ID-HERE"" --H ""Authorization: bearer TOKEN-HERE"" --H ""Content-Type: application/json"" --d @update_model.json - -The update_model.json file, with the default Python version stated explicitly, contains the following code: - -[ -{ -""op"": ""add"", -""path"": ""/custom"", -""value"": { -""decision_optimization"":{ -""oaas.docplex.python"": ""3.10"" -} -} -} -] - -Alternatively, to remove any explicit mention of a Python version so that the default version will always be used: - -[ -{ -""op"": ""remove"", -""path"": ""/custom/decision_optimization"" -} -] -4. Patch the deployment to use the model that was created for Python to use version 3.10 - -curl --location --request PATCH -""https://us-south.ml.cloud.ibm.com/ml/v4/deployments/DEPLOYMENT-ID-HERE?version=2021-12-01&space_id=SPACE-ID-HERE"" --H ""Authorization: bearer TOKEN-HERE"" --H ""Content-Type: application/json"" --d @update_deploy.json - -The update_deploy.json file contains the following code: - -[ -{ -""op"": ""add"", -""path"": ""/asset"", -""value"": { ""id"":""MODEL-ID-HERE""} -" -1BB1684259F93D91580690D898140D98F12611ED,1BB1684259F93D91580690D898140D98F12611ED," Decision Optimization - -When you have created and solved your Decision Optimization models, you can deploy them using Watson Machine Learning. - -See the [Decision Optimization experiment UI](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/buildingmodels.htmltopic_buildingmodels) for building and solving models. The following sections describe how you can deploy your models. -" -A255BB890CA287C5A91765B71832DAA45BA4132B_0,A255BB890CA287C5A91765B71832DAA45BA4132B," Global visualization preferences - -You can override the default settings for titles, range slider, grid lines, and mouse tracking. You can also specify a different color scheme template. - - - -1. In Visualizations, click the Global visualization preferences control in the Actions section. - -The Global visualization preferences dialog provides the following settings. - -Titles -: Provides global chart title settings. - -Global titles -: Enables or disables the global titles for all charts. - -Global primary title -: Enables or disables the display of global, primary chart titles. When enabled, the top-level chart title that you enter here is applied to all chart's, effectively overriding each chart's individual Primary title setting. - -Global subtitle -: Enables or disables the display of global chart subtitles. When enabled, the chart subtitle that you enter here is applied to all chart's, effectively overriding each chart's individual Subtitle setting. - -Default titles -: Enables or disables the default titles for all charts. - -Title alignment -: Provides the title alignment options Left, Center (the default setting), and Right. - -Tools -: Provides options that control chart behavior. - -Range slider -: Enables or disables the range slider for each chart. When enabled, you can control the amount of chart data that displays with a range slider that is provided for each chart. - -Grid lines -: Controls the display of X axis (vertical) and Y axis (horizontal) grid lines. - -Mouse tracker -: When enabled, the mouse cursor location, in relation to the chart data, is tracked and displayed when placed anywhere over the chart. - -Toolbox -: Enables or disables the toolbox for each chart. Depending on the chart type, the toolbox on the right of the screen provides tools such as zoom, save as image, restore, select data, and clear selection. - -ARIA -: When enabled, web content and web applications are more accessible to users with disabilities. - -Filter out null -: Enables or disables the filtering of null chart data. - -X axis on zero -: When enabled, the X axis lies on the other's origin position. When not enabled, the X axis always starts at 0. - -Y axis on zero -" -A255BB890CA287C5A91765B71832DAA45BA4132B_1,A255BB890CA287C5A91765B71832DAA45BA4132B,": When enabled, the Y axis lies on the other's origin position. When not enabled, the Y axis always starts at 0. - -Show xAxis Label -: Enables or disables the xAxis label. - -Show yAxis Label -: Enables or disables the yAxis label. - -Show xAxis Line -: Enables or disables the xAxis line. - -Show yAxis Line -: Enables or disables the yAxis line. - -Show xAxis Name -: Enables or disables the xAxis name. - -Show yAxis Name -: Enables or disables the yAxis name. - -yAxis Name Location -: The drop-down list provides options for specifying the yAxis name location. Options include Start, Middle, and End. - -Truncation length -: The specified value sets the string length. Strings that are longer than the specified length are truncated. The default value is 10. When 0 is specified, truncation is turned off. - -xAxis tick label decimal -: Sets the tick label decimal value for the xAxis. The default value is 3. - -yAxis tick label decimal -: Sets the tick label decimal value for the yAxis. The default value is 3. - -xAxis tick label rotate -: Sets the xAxis tick label rotation value. The default value is 0 (no rotation). You can specify value in the range -90 to 90 degrees. - -Theme -" -5D043091B2F2398611A819743FC83688D7658B22,5D043091B2F2398611A819743FC83688D7658B22," Visualizations layout and terms - -Canvas -: The canvas is the area of the Visualizations dialog where you build the chart. - -Chart type -: Lists the available chart types. The graphic elements are the items in the chart that represent data (bars, points, lines, and so on). - -Details pane -: The Details pane provides the basic chart building blocks. - -Chart settings -: Provides options for selecting which variables are used to build the chart, distribution method, title and subtitle fields, and so on. Depending on the selected chart type, the Details pane options might vary. For more information, see [Chart types](https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_charttypes.html). - -" -9F5D44B3A96F8418BE317AD258E4932E468551BE,9F5D44B3A96F8418BE317AD258E4932E468551BE," 3D charts - -3D charts are commonly used to represent multiple-variable functions and include a z-axis variable that is a function of both the x and y-axis variables. -" -823EB607207DFD62D80671AF48451CCE1C44153F,823EB607207DFD62D80671AF48451CCE1C44153F," Bar charts - -Bar charts are useful for summarizing categorical variables. For example, you can use a bar chart to show the number of men and the number of women who participated in a survey. You can also use a bar chart to show the mean salary for men and the mean salary for women. -" -BECCA4C839A0BCF01ADCB6A5CE31A3B1168D3548,BECCA4C839A0BCF01ADCB6A5CE31A3B1168D3548," Box plots - -A box plot chart shows the five statistics (minimum, first quartile, median, third quartile, and maximum). It is useful for displaying the distribution of a scale variable and pinpointing outliers. -" -5466D9A71E87BB01000DC957683E9CD3C10AD8BC,5466D9A71E87BB01000DC957683E9CD3C10AD8BC," Bubble charts - -Bubble charts display categories in your groups as nonhierarchical packed circles. The size of each circle (bubble) is proportional to its value. Bubble charts are useful for comparing relationships in your data. -" -F7D94E6CD13F36EB9B1FE7653C436DC5745250B1,F7D94E6CD13F36EB9B1FE7653C436DC5745250B1," Candlestick charts - -Candlestick charts are a style of financial charts that are used to describe price movements of a security, derivative, or currency. Each candlestick element typically shows one day. A one-month chart might show the 20 trading days as 20 candlesticks elements. Candlestick charts are most often used in the analysis of equity and currency price patterns and are similar to box plots. - -The data set that is used to create a candlestick chart must contain open, high, low, and close values for each time period you want to display. -" -2C9D0D0309E01FF2EE0D298A16011857DE068038,2C9D0D0309E01FF2EE0D298A16011857DE068038," Chart types - -The gallery contains a collection of the most commonly used charts. -" -035430AFAC1E73483636073C5BF48BCF8B4F5E1D,035430AFAC1E73483636073C5BF48BCF8B4F5E1D," Circle packing charts - -Circle packing charts display hierarchical data as a set of nested areas to visualize a large amount of hierarchically structured data. It's similar to a treemap, but uses circles instead of rectangles. Circle packing charts use containment (nesting) to display hierarchy data. -" -49724D4B7690D4B215FE6F1C0A49C8B347F0C9A1,49724D4B7690D4B215FE6F1C0A49C8B347F0C9A1," Custom charts - -The custom charts option provides options for pasting or editing JSON code to create the wanted chart. -" -91B834E69C2153740973C59CF6B4D66260640342,91B834E69C2153740973C59CF6B4D66260640342," Dendrogram charts - -Dendrogram charts are similar to tree charts and are typically used to illustrate a network structure (for example, a hierarchical structure). Dendrogram charts consist of a root node that is connected to subordinate nodes through edges or branches. The last nodes in the hierarchy are called leaves. -" -2910B7C4CD65F8E4ADD1607791DD22BED468B61D,2910B7C4CD65F8E4ADD1607791DD22BED468B61D," Dual Y-axes charts - -A dual Y-axes chart summarizes or plots two Y-axes variables that have different domains. For example, you can plot the number of cases on one axis and the mean salary on another. This chart can also be a mix of different graphic elements so that the dual Y-axes chart encompasses several of the different chart types. Dual Y-axes charts can display the counts as a line and the mean of each category as a bar. -" -97492A97F355A95D56BCF768A62CA7FD75718086,97492A97F355A95D56BCF768A62CA7FD75718086," Error bar charts - -Error bar charts represent the variability of data and indicate the error (or uncertainty) in a reported measurement. Error bars help determine whether differences are statistically significant. Error bars can also suggest goodness of fit for a specific function. -" -41167E3AD363B416D508B03A300E5ACFAF83F042,41167E3AD363B416D508B03A300E5ACFAF83F042," Evaluation charts - -Evaluation charts are similar to histograms or collection graphs. Evaluation charts show how accurate models are in predicting particular outcomes. They work by sorting records based on the predicted value and confidence of the prediction, splitting the records into groups of equal size (quantiles), and then plotting the value of the criterion for each quantile, from highest to lowest. Multiple models are shown as separate lines in the plot. - -Outcomes are handled by defining a specific value or range of values as a ""hit"". Hits usually indicate success of some sort (such as a sale to a customer) or an event of interest (such as a specific medical diagnosis). - -Flag -: Output fields are straightforward; hits correspond to true values. - -Nominal -: For nominal output fields, the first value in the set defines a hit. - -Continuous -: For continuous output fields, hits equal values greater than the midpoint of the field's range. - -Evaluation charts can also be cumulative so that each point equals the value for the corresponding quantile plus all higher quantiles. Cumulative charts usually convey the overall performance of models better, whereas noncumulative charts often excel at indicating particular problem areas for models. -" -57AB3726FA10435D26878C626F61988F7305B9E8,57AB3726FA10435D26878C626F61988F7305B9E8," Building a chart from the chart type gallery - -Use chart type gallery for building charts. Following are general steps for building a chart from the gallery. - - - -1. In the Chart Type section, select a chart category. A preview version of the selected chart type is shown on the chart canvas. -2. If the canvas already displays a chart, the new chart replaces the chart's axis set and graphic elements. - - - -1. Depending on the selected chart type, the available variables are presented under a number of different headings in the Details pane (for example, Category for bar charts, X-axis and Y-axis for line charts). Select the appropriate variables for the selected chart type. - - - -" -CC0ADF041F1628221CAC49A1BAEC1D497D762DC4,CC0ADF041F1628221CAC49A1BAEC1D497D762DC4," Heat map charts - -Heat map charts present data where the individual values that are contained in a matrix are represented as colors. -" -1453D1CAD565842EEA24C8D92963BD73338EF0F1,1453D1CAD565842EEA24C8D92963BD73338EF0F1," Histogram charts - -A histogram is similar in appearance to a bar chart, but instead of comparing categories or looking for trends over time, each bar represents how data is distributed in a single category. Each bar represents a continuous range of data or the number of frequencies for a specific data point. - -Histograms are useful for showing the distribution of a single scale variable. Data are binned and summarized by using a count or percentage statistic. A variation of a histogram is a frequency polygon, which is like a typical histogram except that the area graphic element is used instead of the bar graphic element. - -Another variation of the histogram is the population pyramid. Its name is derived from its most common use: summarizing population data. When used with population data, it is split by gender to provide two back-to-back, horizontal histograms of age data. In countries with a young population, the shape of the resulting graph resembles a pyramid. - -Footnote -: The chart footnote, which is placed beneath the chart. - -XAxis label -: The x-axis label, which is placed beneath the x-axis. - -" -9DF72C2325CE5BACA0CC7D2A884695D115557C40,9DF72C2325CE5BACA0CC7D2A884695D115557C40," Line charts - -A line chart plots a series of data points on a graph and connects them with lines. A line chart is useful for showing trend lines with subtle differences, or with data lines that cross one another. You can use a line chart to summarize categorical variables, in which case it is similar to a bar chart (see [Bar charts](https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_barcharts.htmlchart_creation_barcharts) ). Line charts are also useful for time-series data. -" -F5AF4BCC2D0168D2698BEB2A858C24F81A476610,F5AF4BCC2D0168D2698BEB2A858C24F81A476610," Map charts - -Map charts are commonly used to compare values and show categories across geographical regions. Map charts are most beneficial when the data contains geographic information (countries, regions, states, counties, postal codes, and so on). -" -0C836867DD758509B908532F35CFC5E160D81A19,0C836867DD758509B908532F35CFC5E160D81A19," Math curve charts - -A math curve chart plots mathematical equation curves that are based on user-entered expressions. -" -66E7B1F986535FCE165F0CB5C553A6305339204E,66E7B1F986535FCE165F0CB5C553A6305339204E," Scatter matrix charts - -Scatter plot matrices are a good way to determine whether linear correlations exist between multiple variables. -" -3094E343D06DA6AE0D0D5D4865C7B0D806DC61A1,3094E343D06DA6AE0D0D5D4865C7B0D806DC61A1," Multi-chart charts - -Multi-chart charts provide options for creating multiple charts. The charts can be of the same or different types, and can include different variables from the same data set. -" -E777A9C7D0450D572431F168374224179C1AE7C4,E777A9C7D0450D572431F168374224179C1AE7C4," Multiple series charts - -Multiple series charts are similar to line charts, with the exception that you can chart multiple variables on the Y-axis. -" -DE359E77F61C11B6F759E8DFE8EA69AAC3D0514A,DE359E77F61C11B6F759E8DFE8EA69AAC3D0514A," Parallel charts - -Parallel charts are useful for visualizing high dimensional geometry and for analyzing multivariate data. Parallel charts resemble line charts for time-series data, but the axes do not correspond to points in time (a natural order is not present). -" -6B4213FC5352021865E77592EBC27242E746B5AA,6B4213FC5352021865E77592EBC27242E746B5AA," Pareto charts - -Pareto charts contain both bars and a line graph. The bars represent individual variable categories and the line graph represents the cumulative total. -" -A2B0DB014389285D9ABCA9FE0D4035F85DE6D102,A2B0DB014389285D9ABCA9FE0D4035F85DE6D102," Pie charts - -A pie chart is useful for comparing proportions. For example, you can use a pie chart to demonstrate that a greater proportion of Europeans is enrolled in a certain class. -" -81F297B28D1978EB0D0B1985D6F44B45DFE53542,81F297B28D1978EB0D0B1985D6F44B45DFE53542," Population pyramid charts - -Population pyramid charts (also known as ""age-sex pyramids"") are commonly used to present and analyze population information based on age and gender. -" -BA8A6820B3DBFAA703679B19BE070F7BD0CCA3D1,BA8A6820B3DBFAA703679B19BE070F7BD0CCA3D1," Q-Q plots - -Q-Q (quantile-quantile) plots compare two probability distributions by plotting their quantiles against each other. A Q–Q plot is used to compare the shapes of distributions, providing a graphical view of how properties such as location, scale, and skewness are similar or different in the two distributions. -" -61F714F5629AD260B0D9776FC53CDA2EAA10DF24,61F714F5629AD260B0D9776FC53CDA2EAA10DF24," Radar charts - -Radar charts compare multiple quantitative variables and are useful for visualizing which variables have similar values, or if outliers exist among the variables. Radar charts consists of a sequence of spokes, with each spoke representing a single variable. Radar Charts are also useful for determining which variables are scoring high or low within a data set. -" -5A812008B8370853F0C151FDE4DFEDA4A39193CB,5A812008B8370853F0C151FDE4DFEDA4A39193CB," Relationship charts - -A relationship chart is useful for determining how variables relate to each other. -" -67C56AAC7DA2232E4DA2B8AEDEC41B9D8755E22A,67C56AAC7DA2232E4DA2B8AEDEC41B9D8755E22A," Scatter plots and dot plots - -Several broad categories of charts are created with the point graphic element. - -Scatter plots -: Scatter plots are useful for plotting multivariate data. They can help you determine potential relationships among scale variables. A simple scatter plot uses a 2-D coordinate system to plot two variables. A 3-D scatter plot uses a 3-D coordinate system to plot three variables. When you need to plot more variables, you can try overlay scatter plots and scatter plot matrices (SPLOMs). An overlay scatter plot displays overlaid pairs of X-Y variables, with each pair distinguished by color or shape. A SPLOM creates a matrix of 2-D scatter plots, with each variable plotted against every other variable in the SPLOM. - -Dot plots -: Like histograms, dot plots are useful for showing the distribution of a single scale variable. The data are binned, but, instead of one value for each bin (like a count), all of the points in each bin are displayed and stacked. These graphs are sometimes called density plots. - -Summary point plots -: Summary point plots are similar to bar charts, except that points are drawn in place of the top of the bars. For more information, see [Bar charts](https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_barcharts.htmlchart_creation_barcharts). - -" -7B3616D29E7AC720B73EF3E24C9C807DA05C4DA3,7B3616D29E7AC720B73EF3E24C9C807DA05C4DA3," Series array charts - -Series array charts include individual sub charts and display the Y-axis for all sub charts in the legend. -" -5CF2FE478862FCAA1745D5B0770CE6486B3B71F8,5CF2FE478862FCAA1745D5B0770CE6486B3B71F8," Sunburst charts - -A sunburst chart is useful for visualizing hierarchical data structures. A sunburst chart consists of an inner circle that is surrounded by rings of deeper hierarchy levels. The angle of each segment proportional to either a value or divided equally under its inner segment. The chart segments are colored based on the category or hierarchical level to which they belong. -" -BAE3302FC87E1BBFA604BAA2D003069E4233A517,BAE3302FC87E1BBFA604BAA2D003069E4233A517," Theme River charts - -A theme river is a specialized flow graph that shows changes over time. -" -B49F37BD511123A94FCAD3C6E826E60FC61DB446,B49F37BD511123A94FCAD3C6E826E60FC61DB446," Time plots - -Time plots illustrate data points at successive intervals of time. The time series you plot must contain numeric values and are assumed to occur over a range of time in which the periods are uniform. Time plots provide a preliminary analysis of the characteristics of time series data on basic statistics and test, and thus generate useful insights about your data before modeling. Time plots include analysis methods such as decomposition, augmented Dickey-Fuller test (ADF), correlations (ACF/PACF), and spectral analysis. -" -D872C74770B5729E037E841679F741CF3D8C20AD,D872C74770B5729E037E841679F741CF3D8C20AD," Tree charts - -Tree charts represent hierarchy in a tree-like structure. The structure of a Tree chart consists of a root node (has no parent node), line connections (named branches), and leaf nodes (have no child nodes). Line connections represent the relationships and connections between the members. -" -9B6386C6C291665ACA0892481681A94A70185E9D,9B6386C6C291665ACA0892481681A94A70185E9D," Treemap charts - -Treemap charts are an alternative method for visualizing the hierarchical structure of tree diagrams while also displaying quantities for each category. Treemap charts are useful for identifying patterns in data. Tree branches are represented by rectangles, with each sub branch represented by smaller rectangles. -" -99B0C1C962E0642E5B877747ED37E9BB27238664,99B0C1C962E0642E5B877747ED37E9BB27238664," t-SNE charts - -T-distributed Stochastic Neighbor Embedding (t-SNE) is a machine learning algorithm for visualization. t-SNE charts model each high-dimensional object by a two-or-three dimensional point in such a way that similar objects are modeled by nearby points and dissimilar objects are modeled by distant points with high probability. -" -3873A285DCB38EF4B4ED663BFA0DF4047AB7692D,3873A285DCB38EF4B4ED663BFA0DF4047AB7692D," Word cloud charts - -Word cloud charts present data as words, where the size and placement of any individual word is determined by how it is weighted. -" -3BB91EBACC556700F955C3E6E01D90E5256207CF,3BB91EBACC556700F955C3E6E01D90E5256207CF," Visualizing your data - -You can discover insights from your data by creating visualizations. By exploring data from different perspectives with visualizations, you can identify patterns, connections, and relationships within that data and quickly understand large amounts of information. - -Data format -: Tabular: Avro, CSV, JSON, Parquet, TSV, SAV, Microsoft Excel .xls and .xlsx files, SAS, delimited text files, and connected data. - -For more information about supported data sources, see [Connectors](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html). - -Data size -: No limit - -You can create graphics similar to the following example that shows how humidity values over time. - -![Example visualization](https://dataplatform.cloud.ibm.com/docs/content/dataview/viz_main.png) -" -9D9188E6383DB5F7038B98A688CB2DC9CF5A336C,9D9188E6383DB5F7038B98A688CB2DC9CF5A336C," watsonx.governance on IBM® watsonx -" -CF88BCC09A32B2D6D65F2C2A831E2960ACA1E347,CF88BCC09A32B2D6D65F2C2A831E2960ACA1E347," Cloud Object Storage on IBM® watsonx -" -59DF73D502B5F62E3837464E81AC6BC9FDF07014_0,59DF73D502B5F62E3837464E81AC6BC9FDF07014," IBM Cloud services in the IBM watsonx services catalog - -You can provision IBM® Cloud service instances for the watsonx platform. - -The IBM watsonx.ai component provides the following services that provide key functionality, including tools and compute resources: - - - -* [Watson™ Studio](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/wsl.html) -* [Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/wml.html) - - - -If you signed up for watsonx.ai, you already have these services. Otherwise, you can create instances of these services from the Services catalog. - -If you signed up for watsonx.governance, you already have this service. Otherwise, you can create an instance of this service from the Services catalog. - -The [IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-object-storage.html) provides storage for projects and deployment spaces on the IBM watsonx platform. - -The [Secure Gateway](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/secure-gateway.html) service provides secure connections to on-premises date sources. - -These services provide databases that you can access in IBM watsonx by creating connections: - - - -* [IBM Analytics Engine](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/spark.html) -* [Cloudant](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloudant.html) -* [Databases for Elasticsearch](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/elasticsearch.html) -* [Databases for EDB](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/edb.html) -* [Databases for MongoDB](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/mongodb.html) -" -59DF73D502B5F62E3837464E81AC6BC9FDF07014_1,59DF73D502B5F62E3837464E81AC6BC9FDF07014,"* [Databases for PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/postgresql.html) -" -A56686454E771E5FDDA0315DD38313F9FCB31AAC,A56686454E771E5FDDA0315DD38313F9FCB31AAC," Cloudant on IBM® watsonx -" -F3BA8CCB1E55BB6535944CB5ACDB19EFAEB1C3F9,F3BA8CCB1E55BB6535944CB5ACDB19EFAEB1C3F9," Db2 on IBM watsonx -" -E81F1FD08E472AF1516E6C6B0C936A2DCA55CC20,E81F1FD08E472AF1516E6C6B0C936A2DCA55CC20," Db2 Warehouse on IBM watsonx -" -32217F5F0DEE4A95C64B2BD92C25366706CC7E0C,32217F5F0DEE4A95C64B2BD92C25366706CC7E0C," Databases for EDB on IBM watsonx -" -868801EC73691D31B90C8611E934AA5DD3B17EA7,868801EC73691D31B90C8611E934AA5DD3B17EA7," Databases for Elasticsearch on IBM® watsonx -" -408FDAB4F452AB2C207EE3416332D315598E3456,408FDAB4F452AB2C207EE3416332D315598E3456," Databases for MongoDB on IBM watsonx -" -649119A6EF3F5AA2B1B0C63E0973532D4C950F48,649119A6EF3F5AA2B1B0C63E0973532D4C950F48," Databases for PostgreSQL on IBM® watsonx -" -B9D44BBCF205103BF01619D31CFEBE31A725BA5A,B9D44BBCF205103BF01619D31CFEBE31A725BA5A," Secure Gateway on IBM® watsonx -" -6AC4A29FEBF419002BDBA62D99D997CF55E9FCF2,6AC4A29FEBF419002BDBA62D99D997CF55E9FCF2," IBM Analytics Engine on IBM® watsonx -" -40DEFBE604B3629CAF8855A6D00EC14A0A6C92F3,40DEFBE604B3629CAF8855A6D00EC14A0A6C92F3," Watson Machine Learning on IBM watsonx - -Watson Machine Learning is part of IBM® watsonx.ai. Watson Machine Learning provides a full range of tools for your team to build, train, and deploy Machine Learning models. You can choose the tool with the level of automation or autonomy that matches your needs. Watson Machine Learning provides the following tools: - - - -* AutoAI experiment builder for automatically processing structured data to generate model-candidate pipelines. The best-performing pipelines can be saved as a machine learning model and deployed for scoring. -" -C4BB814768F5D91D2C6AA90B34FDDD944AA1EB91,C4BB814768F5D91D2C6AA90B34FDDD944AA1EB91," Watson Studio on IBM watsonx -" -189F970CF3B162E67B98B2A928B36193169E3CAF,189F970CF3B162E67B98B2A928B36193169E3CAF," Working with your data - -To see a quick sample of a flow's data, right-click a node a select Preview. To more thoroughly examine your data, use a Charts node to launch the chart builder. - -With the chart builder, you can use advanced visualizations to explore your data from different perspectives and identify patterns, connections, and relationships within your data. You can also visualize your data with these same charts in a Data Refinery flow. - -Figure 1. Sample visualizations available for a flow - -![Shows four example charts available in Visualizations](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/charts_thumbnail4.png) - -For more information, see [Visualizing your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/visualizations.html). -" -6A32659DF809F04F9A670634129FC75CC9140729,6A32659DF809F04F9A670634129FC75CC9140729," Setting properties for flows - -You can specify properties to apply to the current flow. - -To set flow properties, click the Flow Properties icon:![Flow properties icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/flow_properties.png) - -The following properties are available. -" -81045ED1B34827B3BD74D2546185C3BD3163B37E,81045ED1B34827B3BD74D2546185C3BD3163B37E," Flow scripting - -You can use scripts to customize operations within a particular flow, and they're saved with that flow. For example, you might use a script to specify a particular run order for terminal nodes. You use the flow properties page to edit the script that's saved with the current flow. - -To access scripting in a flow's properties: - - - -1. Right-click your flow's canvas and select Flow properties. -2. Open the Scripting section to work with scripts for the current flow. - - - -Tips: - - - -* By default, the Python scripting language is used. If you'd rather use a scripting language unique to old versions of SPSS Modeler desktop, select Legacy. -* For complete details about scripting, see the [Scripting and automation](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/scripting_overview.html) guide. - - - -You can specify whether or not the script runs when the flow runs. To run the script each time the flow runs, respecting the run order of the script, select Run the script. This setting provides automation at the flow level for quicker model building. Or, to ignore the script, you can select the option to only Run all terminal nodes when the flow runs. - -The script editor includes the following features that help with script authoring: - - - -* Syntax highlighting; keywords, literal values (such as strings and numbers), and comments are highlighted -* Line numbering -* Block matching; when the cursor is placed by the start of a program block, the corresponding end block is also highlighted -* Suggested auto-completion - - - -A list of suggested syntax completions can be accessed by selecting Auto-Suggest from the context menu, or pressing Ctrl + Space. Use the cursor keys to move up and down the list, then press Enter to insert the selected text. To exit from auto-suggest mode without modifying the existing text, press Esc. -" -D3084BFB07D425EBACE9F538D800E08DAEA97594,D3084BFB07D425EBACE9F538D800E08DAEA97594," Flow scripting example - -You can use a flow to train a model when it runs. Normally, to test the model, you might run the modeling node to add the model to the flow, make the appropriate connections, and run an Analysis node. - -Using a script, you can automate the process of testing the model nugget after you create it. For example, you might use a script such as the following to train a neural network model: - -stream = modeler.script.stream() -neuralnetnode = stream.findByType(""neuralnetwork"", None) -results = [] -neuralnetnode.run(results) -appliernode = stream.createModelApplierAt(results[0], ""Drug"", 594, 187) -analysisnode = stream.createAt(""analysis"", ""Drug"", 688, 187) -typenode = stream.findByType(""type"", None) -stream.linkBetween(appliernode, typenode, analysisnode) -analysisnode.run([]) - -The following bullets describe each line in this script example. - - - -* The first line defines a variable that points to the current flow. -* In line 2, the script finds the Neural Net builder node. -* In line 3, the script creates a list where the execution results can be stored. -* In line 4, the Neural Net model nugget is created. This is stored in the list defined on line 3. -* In line 5, a model apply node is created for the model nugget and placed on the flow canvas. -* In line 6, an analysis node called Drug is created. -* In line 7, the script finds the Type node. -* In line 8, the script connects the model apply node created in line 5 between the Type node and the Analysis node. -* Finally, the Analysis node runs to produce the Analysis report. - - - -Tips: - - - -" -C8B4A993CB8642BC87432FCB305EEE744C16A154_0,C8B4A993CB8642BC87432FCB305EEE744C16A154," Importing an SPSS Modeler stream - -You can import a stream ( .str) that was created in SPSS Modeler Subscription or SPSS Modeler client. - - - -1. From your project's Assets tab, click . -2. Select Local file, select the .str file you want to import, and click Create. - - - -If the imported stream contains one or more source (import) or export nodes, you'll be prompted to convert the nodes. Watsonx.ai will walk you through the migration process. - -Watch the following video for an example of this easy process: - -This video provides a visual method to learn the concepts and tasks in this documentation. - -Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. - -[https://www.ustream.tv/embed/recorded/127732173](https://www.ustream.tv/embed/recorded/127732173) - -If the stream contains multiple import nodes that use the same data file, then you must first add that file to your project as a data asset before migrating because the conversion can't upload the same file to more than one import node. After adding the data asset to your project, reopen the flow and proceed with the migration using the new data asset. Nodes with the same name will be automatically mapped to project assets. - -Configure export nodes to export to your project or to a connection. The following export nodes are supported: - - - -Table 1. Export nodes that can be migrated - - Supported SPSS Modeler export nodes - - Analytic Server - Database - Flat File - Statistics Export - Data Collection - Excel - IBM Cognos Analytics Export - TM1 Export - SAS - XML Export - - - -Notes: Keep the following information in mind when migrating nodes. - - - -* When migrating export nodes, you're converting node types that don't exist in watsonx.ai. The nodes are converted to Data Asset export nodes or a connection. Due to a current limitation for automatically migrating nodes, only existing project assets or connections can be selected as export targets. These assets will be overwritten during export when the flow runs. -* To preserve any type or filter information, when an import node is replaced with Data Asset nodes, they're converted to a SuperNode. -" -C8B4A993CB8642BC87432FCB305EEE744C16A154_1,C8B4A993CB8642BC87432FCB305EEE744C16A154,"* After migration, you can go back later and use the Convert button if you want to migrate a node that you skipped previously. -* If the stream you imported uses scripting, you may encounter an error when you run the flow even after completing a migration. This could be due to the flow script containing a reference to an unsupported import or export node. To avoid such errors, you must remove the scripting code that references the unsupported node. -" -B851271C134A1B282412BD7A667C1C9813B4E8B2,B851271C134A1B282412BD7A667C1C9813B4E8B2," Text Mining model nuggets - -You can run a Text Mining node to automatically generate a concept model nugget using the Generate directly option in the node settings. Or you can use a more hands-on, exploratory approach using the Build interactively mode to generate category model nuggets from within the Text Analytics Workbench. -" -BBD1F022A8393101199ABB731534C10BE99CF1E4,BBD1F022A8393101199ABB731534C10BE99CF1E4," Mining for concepts and categories - -The Text Mining node uses linguistic and frequency techniques to extract key concepts from the text and create categories with these concepts and other data. Use the node to explore the text data contents or to produce either a concept model nugget or category model nugget. - -![Text Mining node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/ta_textmining.png)When you run this node, an internal linguistic extraction engine extracts and organizes the concepts, patterns, and categories by using natural language processing methods. Two build modes are available in the Text Mining node's properties: - - - -* The Generate directly (concept model nugget) mode automatically produces a concept or category model nugget when you run the node. -* The Build interactively (category model nugget) is a more hands-on, exploratory approach. You can use this mode to not only extract concepts, create categories, and refine your linguistic resources, but also run text link analysis and explore clusters. This build mode launches the Text Analytics Workbench. - - - -And you can use the Text Mining node to generate one of two text mining model nuggets: - - - -* Concept model nuggets uncover and extract important concepts from your structured or unstructured text data. -* Category model nuggets score and assign documents and records to categories, which are made up of the extracted concepts (and patterns). - - - -The extracted concepts and patterns and the categories from your model nuggets can all be combined with existing structured data, such as demographics, to yield better and more-focused decisions. For example, if customers frequently list login issues as the primary impediment to completing online account management tasks, you might want to incorporate ""login issues"" into your models. -" -D73C52B16EC33CAA6D1F51EFFA5A6E37052D6110,D73C52B16EC33CAA6D1F51EFFA5A6E37052D6110," Nodes palette - -The following sections describe all the nodes available on the palette in SPSS Modeler. Drag-and-drop or double-click a node in the list to add it to your flow canvas. You can then double-click any node icon in your flow to set its properties. Hover over a property to see information about it, or click the information icon to see Help. - -When first creating a flow, you select which runtime to use. By default, the flow will use the IBM SPSS Modeler runtime. If you want to use native Spark algorithms instead of SPSS algorithms, select the Spark runtime. Properties for some nodes will vary depending on which runtime option you choose. -" -D04178DDE54F21A248DAFF3F1582EB4BF1E9AC43_0,D04178DDE54F21A248DAFF3F1582EB4BF1E9AC43," Text Analytics - -SPSS Modeler offers nodes that are specialized for handling text. - -The Text Analytics nodes offer powerful text analytics capabilities, using advanced linguistic technologies and Natural Language Processing (NLP) to rapidly process a large variety of unstructured text data and, from this text, extract and organize the key concepts. Text Analytics can also group these concepts into categories. - -Around 80% of data held within an organization is in the form of text documents—for example, reports, web pages, e-mails, and call center notes. Text is a key factor in enabling an organization to gain a better understanding of their customers' behavior. A system that incorporates NLP can intelligently extract concepts, including compound phrases. Moreover, knowledge of the underlying language allows classification of terms into related groups, such as products, organizations, or people, using meaning and context. As a result, you can quickly determine the relevance of the information to your needs. These extracted concepts and categories can be combined with existing structured data, such as demographics, and applied to modeling in SPSS Modeler to yield better and more-focused decisions. - -Linguistic systems are knowledge sensitive—the more information contained in their dictionaries, the higher the quality of the results. Text Analytics provides a set of linguistic resources, such as dictionaries for terms and synonyms, libraries, and templates. These nodes further allow you to develop and refine these linguistic resources to your context. Fine-tuning of the linguistic resources is often an iterative process and is necessary for accurate concept retrieval and categorization. Custom templates, libraries, and dictionaries for specific domains, such as CRM and genomics, are also included. - -Tips for getting started: - - - -* Watch the following video for an overview of Text Analytics. -* See the [Hotel satisfaction example for Text Analytics](https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_ta_hotel.html). - - - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -D04178DDE54F21A248DAFF3F1582EB4BF1E9AC43_1,D04178DDE54F21A248DAFF3F1582EB4BF1E9AC43,"Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. - -[https://video.ibm.com/embed/channel/23952663/video/spss-text-analytics-workbench](https://video.ibm.com/embed/channel/23952663/video/spss-text-analytics-workbench) -" -42E228E8218A4FDEF9F2CA0DB53B5B594A475B88,42E228E8218A4FDEF9F2CA0DB53B5B594A475B88," About text mining - -Today, an increasing amount of information is being held in unstructured and semi-structured formats, such as customer e-mails, call center notes, open-ended survey responses, news feeds, web forms, etc. This abundance of information poses a problem to many organizations that ask themselves: How can we collect, explore, and leverage this information? - -Text mining is the process of analyzing collections of textual materials in order to capture key concepts and themes and uncover hidden relationships and trends without requiring that you know the precise words or terms that authors have used to express those concepts. Although they are quite different, text mining is sometimes confused with information retrieval. While the accurate retrieval and storage of information is an enormous challenge, the extraction and management of quality content, terminology, and relationships contained within the information are crucial and critical processes. -" -3602C22051EA1148B07446605DD3C57BF7830C3A,3602C22051EA1148B07446605DD3C57BF7830C3A," How categorization works - -When creating category models in Text Analytics, there are several different techniques you can choose from to create categories. Because every dataset is unique, the number of techniques and the order in which you apply them may change. - -Since your interpretation of the results may be different from someone else's, you may need to experiment with the different techniques to see which one produces the best results for your text data. In Text Analytics, you can create category models in a workbench session in which you can explore and fine-tune your categories further. - -In this documentation, category building refers to the generation of category definitions and classification through the use of one or more built-in techniques, and categorization refers to the scoring, or labeling, process whereby unique identifiers (name/ID/value) are assigned to the category definitions for each record or document. - -During category building, the concepts and types that were extracted are used as the building blocks for your categories. When you build categories, the records or documents are automatically assigned to categories if they contain text that matches an element of a category's definition. - -Text Analytics offers you several automated category building techniques to help you categorize your documents or records quickly. -" -F976E639BDE8A2B880E46D94F4C832B6ED9A9303,F976E639BDE8A2B880E46D94F4C832B6ED9A9303," How extraction works - -During the extraction of key concepts and ideas from your responses, Text Analytics relies on linguistics-based text analysis. This approach offers the speed and cost effectiveness of statistics-based systems. But it offers a far higher degree of accuracy, while requiring far less human intervention. Linguistics-based text analysis is based on the field of study known as natural language processing, also known as computational linguistics. - -Understanding how the extraction process works can help you make key decisions when fine-tuning your linguistic resources (libraries, types, synonyms, and more). Steps in the extraction process include: - - - -* Converting source data to a standard format -* Identifying candidate terms -* Identifying equivalence classes and integration of synonyms -* Assigning a type -" -B0B80EB59E769546EEDF8CA32A493BF38C6A9707,B0B80EB59E769546EEDF8CA32A493BF38C6A9707," Export - -Export nodes provide a mechanism for exporting data in various formats to interface with your other software tools. -" -B6DC074F83F9E8984B9CD3A3BF5B392BC4A61844,B6DC074F83F9E8984B9CD3A3BF5B392BC4A61844," Extension nodes - -SPSS Modeler supports the languages R and Apache Spark (via Python). - -To complement SPSS Modeler and its data mining abilities, several Extension nodes are available to enable expert users to input their own R scripts or Python for Spark scripts to carry out data processing, model building, and model scoring. - - - -* The Extension Import node is available under Import on the Node Palette. See [Extension Import node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/extension_importer.html). -* The Extension Model node is available under Modeling on the Node Palette. See [Extension Model node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/extension_build.html). -" -4E571695FB4E12489157704D87F89DF5DAD1A580,4E571695FB4E12489157704D87F89DF5DAD1A580," Field Operations - -After an initial data exploration, you will probably need to select, clean, or construct data in preparation for analysis. The Field Operations palette contains many nodes useful for this transformation and preparation. - -For example, using a Derive node, you might create an attribute that is not currently represented in the data. Or you might use a Binning node to recode field values automatically for targeted analysis. You will probably find yourself using a Type node frequently—it allows you to assign a measurement level, values, and a modeling role for each field in the dataset. Its operations are useful for handling missing values and downstream modeling. -" -2AEC614E6CBE5D4963D53DEC7E22877D5A1BEDE8,2AEC614E6CBE5D4963D53DEC7E22877D5A1BEDE8," Graphs - -Several phases of the data mining process use graphs and charts to explore data brought in to watsonx.ai. - -For example, you can connect a Plot or Distribution node to a data source to gain insight into data types and distributions. You can then perform record and field manipulations to prepare the data for downstream modeling operations. Another common use of graphs is to check the distribution and relationships between newly derived fields. -" -A9FA1D31F4CC6018DAF5B927908210846B082675,A9FA1D31F4CC6018DAF5B927908210846B082675," Import - -Use Import nodes to import data stored in various formats, or to generate your own synthetic data. -" -7E30541B3A12F403ADCB02F90BC96134CE6B6386,7E30541B3A12F403ADCB02F90BC96134CE6B6386," Modeling - -Watsonx.ai offers a variety of modeling methods taken from machine learning, artificial intelligence, and statistics. - -The methods available on the palette allow you to derive new information from your data and to develop predictive models. Each method has certain strengths and is best suited for particular types of problems. For more information about modeling, see [Creating SPSS Modeler flows](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.htmlspss-modeler). -" -A56C821C7EE483D01E4338397F62DDD6CB6D5E9F,A56C821C7EE483D01E4338397F62DDD6CB6D5E9F," Outputs - -Output nodes provide the means to obtain information about your data and models. They also provide a mechanism for exporting data in various formats to interface with your other software tools. -" -09BB38FB6DF4C562A478D6D3DC54D22823F922FB,09BB38FB6DF4C562A478D6D3DC54D22823F922FB," Record Operations - -Record Operations nodes are useful for making changes to data at the record level. These operations are important during the data understanding and data preparation phases of data mining because they allow you to tailor the data to your particular business need. - -For example, based on the results of a data audit conducted using the Data Audit node (Outputs palette), you might decide that you would like to merge customer purchase records for the past three months. Using a Merge node, you can merge records based on the values of a key field, such as Customer ID. Or you might discover that a database containing information about web site hits is unmanageable with over one million records. Using a Sample node, you can select a subset of data for use in modeling. -" -8435D88B7DC8317B982E1EAA57FA55B8391D00CF,8435D88B7DC8317B982E1EAA57FA55B8391D00CF," Aggregate node - -Aggregation is a data preparation task frequently used to reduce the size of a dataset. Before proceeding with aggregation, you should take time to clean the data, concentrating especially on missing values. A aggregation, potentially useful information regarding missing values may be lost. - -You can use an Aggregate node to replace a sequence of input records with summary, aggregated output records. For example, you might have a set of input sales records such as those shown in the following table. - - - -Sales record input example - -Table 1. Sales record input example - - Age Sex Region Branch Sales - - 23 M S 8 4 - 45 M S 16 4 - 37 M S 8 5 - 30 M S 5 7 - 44 M N 4 9 - 25 M N 2 11 - 29 F S 16 6 - 41 F N 4 8 - 23 F N 6 2 - 45 F N 4 5 - 33 F N 6 10 - - - -You can aggregate these records with Sex and Region as key fields. Then choose to aggregate Age with the mode Mean and Sales with the mode Sum. Select the Include record count in field aggregate node option and your aggregated output will be similar to the following table. - - - -Aggregated record example - -Table 2. Aggregated record example - - Age (mean) Sex Region Sales (sum) Record Count - - 35.5 F N 25 4 - 29 F S 6 1 - 34.5 M N 20 2 - 33.75 M S 20 4 - - - -From this you learn, for example, that the average age of the four female sales staff in the North region is 35.5, and the sum total of their sales was 25 units. - -Note: Fields such as Branch are automatically discarded when no aggregate mode is specified. -" -6D7B948346F167B5390A0E56E1B6DE83AE31A19A,6D7B948346F167B5390A0E56E1B6DE83AE31A19A," Analysis node - -With the Analysis node, you can evaluate the ability of a model to generate accurate predictions. Analysis nodes perform various comparisons between predicted values and actual values (your target field) for one or more model nuggets. You can also use Analysis nodes to compare predictive models to other predictive models. - -When you execute an Analysis node, a summary of the analysis results is automatically added to the Analysis section on the Summary tab for each model nugget in the executed flow. The detailed analysis results appear on the Outputs tab of the manager window or can be written directly to a file. - -Note: Because Analysis nodes compare predicted values to actual values, they are only useful with supervised models (those that require a target field). For unsupervised models such as clustering algorithms, there are no actual results available to use as a basis for comparison. -" -35A87CAEDB1F1B6739159B9C7A31CCE7C8978431_0,35A87CAEDB1F1B6739159B9C7A31CCE7C8978431," Anomaly node - -Anomaly detection models are used to identify outliers, or unusual cases, in the data. Unlike other modeling methods that store rules about unusual cases, anomaly detection models store information on what normal behavior looks like. This makes it possible to identify outliers even if they do not conform to any known pattern, and it can be particularly useful in applications, such as fraud detection, where new patterns may constantly be emerging. Anomaly detection is an unsupervised method, which means that it does not require a training dataset containing known cases of fraud to use as a starting point. - -While traditional methods of identifying outliers generally look at one or two variables at a time, anomaly detection can examine large numbers of fields to identify clusters or peer groups into which similar records fall. Each record can then be compared to others in its peer group to identify possible anomalies. The further away a case is from the normal center, the more likely it is to be unusual. For example, the algorithm might lump records into three distinct clusters and flag those that fall far from the center of any one cluster. - -Each record is assigned an anomaly index, which is the ratio of the group deviation index to its average over the cluster that the case belongs to. The larger the value of this index, the more deviation the case has than the average. Under the usual circumstance, cases with anomaly index values less than 1 or even 1.5 would not be considered as anomalies, because the deviation is just about the same or a bit more than the average. However, cases with an index value greater than 2 could be good anomaly candidates because the deviation is at least twice the average. - -Anomaly detection is an exploratory method designed for quick detection of unusual cases or records that should be candidates for further analysis. These should be regarded as suspected anomalies, which, on closer examination, may or may not turn out to be real. You may find that a record is perfectly valid but choose to screen it from the data for purposes of model building. Alternatively, if the algorithm repeatedly turns up false anomalies, this may point to an error or artifact in the data collection process. - -" -35A87CAEDB1F1B6739159B9C7A31CCE7C8978431_1,35A87CAEDB1F1B6739159B9C7A31CCE7C8978431,"Note that anomaly detection identifies unusual records or cases through cluster analysis based on the set of fields selected in the model without regard for any specific target (dependent) field and regardless of whether those fields are relevant to the pattern you are trying to predict. For this reason, you may want to use anomaly detection in combination with feature selection or another technique for screening and ranking fields. For example, you can use feature selection to identify the most important fields relative to a specific target and then use anomaly detection to locate the records that are the most unusual with respect to those fields. (An alternative approach would be to build a decision tree model and then examine any misclassified records as potential anomalies. However, this method would be more difficult to replicate or automate on a large scale.) - -Example. In screening agricultural development grants for possible cases of fraud, anomaly detection can be used to discover deviations from the norm, highlighting those records that are abnormal and worthy of further investigation. You are particularly interested in grant applications that seem to claim too much (or too little) money for the type and size of farm. - -Requirements. One or more input fields. Note that only fields with a role set to Input using a source or Type node can be used as inputs. Target fields (role set to Target or Both) are ignored. - -Strengths. By flagging cases that do not conform to a known set of rules rather than those that do, Anomaly Detection models can identify unusual cases even when they don't follow previously known patterns. When used in combination with feature selection, anomaly detection makes it possible to screen large amounts of data to identify the records of greatest interest relatively quickly. -" -F05134C8C952A7585B82A042B14BCF1234AF9329,F05134C8C952A7585B82A042B14BCF1234AF9329," Anonymize node - -With the Anonymize node, you can disguise field names, field values, or both when working with data that's to be included in a model downstream of the node. In this way, the generated model can be freely distributed (for example, to Technical Support) with no danger that unauthorized users will be able to view confidential data, such as employee records or patients' medical records. - -Depending on where you place the Anonymize node in your flow, you may need to make changes to other nodes. For example, if you insert an Anonymize node upstream from a Select node, the selection criteria in the Select node will need to be changed if they are acting on values that have now become anonymized. - -The method to be used for anonymizing depends on various factors. For field names and all field values except Continuous measurement levels, the data is replaced by a string of the form: - -prefix_Sn - -where prefix_ is either a user-specified string or the default string anon_, and n is an integer value that starts at 0 and is incremented for each unique value (for example, anon_S0, anon_S1, etc.). - -Field values of type Continuous must be transformed because numeric ranges deal with integer or real values rather than strings. As such, they can be anonymized only by transforming the range into a different range, thus disguising the original data. Transformation of a value x in the range is performed in the following way: - -A(x + B) - -where: - -A is a scale factor, which must be greater than 0. - -B is a translation offset to be added to the values. -" -4C83F9C21CA1E70077C8004BD26FE5FB0FC947EB,4C83F9C21CA1E70077C8004BD26FE5FB0FC947EB," Append node - -You can use Append nodes to concatenate sets of records. Unlike Merge nodes, which join records from different sources together, Append nodes read and pass downstream all of the records from one source until there are no more. Then the records from the next source are read using the same data structure (number of records, number of fields, and so on) as the first, or primary, input. When the primary source has more fields than another input source, the system null string ($null$) will be used for any incomplete values. - -Append nodes are useful for combining datasets with similar structures but different data. For example, you might have transaction data stored in different files for different time periods, such as a sales data file for March and a separate one for April. Assuming that they have the same structure (the same fields in the same order), the Append node will join them together into one large file, which you can then analyze. - -Note: To append files, the field measurement levels must be similar. For example, a Nominal field cannot be appended with a field whose measurement level is Continuous. -" -E14741F9A90592B67437AAED4B7042CD3DC268A8,E14741F9A90592B67437AAED4B7042CD3DC268A8," Extension model nugget - -The Extension model nugget is generated and placed on your flow canvas after running the Extension Model node, which contains your R script or Python for Spark script that defines the model building and model scoring. - -By default, the Extension model nugget contains the script that's used for model scoring, options for reading the data, and any output from the R console or Python for Spark. Optionally, the Extension model nugget can also contain various other forms of model output, such as graphs and text output. After the Extension model nugget is generated and added to your flow canvas, an output node can be connected to it. The output node is then used in the usual way within your flow to obtain information about the data and models, and for exporting data in various formats. -" -9346A72CFCD74DFDA05213A2A321BF9CFB823358,9346A72CFCD74DFDA05213A2A321BF9CFB823358," Apriori node - -The Apriori node discovers association rules in your data. - -Association rules are statements of the form: - -if antecedent(s) then consequent(s) - -For example, if a customer purchases a razor and after shave, then that customer will purchase shaving cream with 80% confidence. Apriori extracts a set of rules from the data, pulling out the rules with the highest information content. The Apriori node also discovers association rules in the data. Apriori offers five different methods of selecting rules and uses a sophisticated indexing scheme to efficiently process large data sets. - -Requirements. To create an Apriori rule set, you need one or more Input fields and one or more Target fields. Input and output fields (those with the role Input, Target, or Both) must be symbolic. Fields with the role None are ignored. Fields types must be fully instantiated before executing the node. Data can be in tabular or transactional format. - -Strengths. For large problems, Apriori is generally faster to train. It also has no arbitrary limit on the number of rules that can be retained and can handle rules with up to 32 preconditions. Apriori offers five different training methods, allowing more flexibility in matching the data mining method to the problem at hand. -" -27091A60BA512E180C699261ECFFDC3A621418A5_0,27091A60BA512E180C699261ECFFDC3A621418A5," Association Rules node - -Association rules associate a particular conclusion (the purchase of a particular product, for example) with a set of conditions (the purchase of several other products, for example). - -For example, the rule - -beer <= cannedveg & frozenmeal (173, 17.0%, 0.84) - -states that beer often occurs when cannedveg and frozenmeal occur together. The rule is 84% reliable and applies to 17% of the data, or 173 records. Association rule algorithms automatically find the associations that you could find manually using visualization techniques, such as the Web node. - -The advantage of association rule algorithms over the more standard decision tree algorithms (C5.0 and C&R Trees) is that associations can exist between any of the attributes. A decision tree algorithm will build rules with only a single conclusion, whereas association algorithms attempt to find many rules, each of which may have a different conclusion. - -The disadvantage of association algorithms is that they are trying to find patterns within a potentially very large search space and, hence, can require much more time to run than a decision tree algorithm. The algorithms use a generate and test method for finding rules--simple rules are generated initially, and these are validated against the dataset. The good rules are stored and all rules, subject to various constraints, are then specialized. Specialization is the process of adding conditions to a rule. These new rules are then validated against the data, and the process iteratively stores the best or most interesting rules found. The user usually supplies some limit to the possible number of antecedents to allow in a rule, and various techniques based on information theory or efficient indexing schemes are used to reduce the potentially large search space. - -" -27091A60BA512E180C699261ECFFDC3A621418A5_1,27091A60BA512E180C699261ECFFDC3A621418A5,"At the end of the processing, a table of the best rules is presented. Unlike a decision tree, this set of association rules cannot be used directly to make predictions in the way that a standard model (such as a decision tree or a neural network) can. This is due to the many different possible conclusions for the rules. Another level of transformation is required to transform the association rules into a classification rule set. Hence, the association rules produced by association algorithms are known as unrefined models. Although the user can browse these unrefined models, they cannot be used explicitly as classification models unless the user tells the system to generate a classification model from the unrefined model. This is done from the browser through a Generate menu option. - -Two association rule algorithms are supported: - - - -" -1ACF5ED461253F09DB844C2D84C1AE21277BC1E6_0,1ACF5ED461253F09DB844C2D84C1AE21277BC1E6," Auto Classifier node - -The Auto Classifier node estimates and compares models for either nominal (set) or binary (yes/no) targets, using a number of different methods, enabling you to try out a variety of approaches in a single modeling run. You can select the algorithms to use, and experiment with multiple combinations of options. For example, rather than choose between Radial Basis Function, polynomial, sigmoid, or linear methods for an SVM, you can try them all. The node explores every possible combination of options, ranks each candidate model based on the measure you specify, and saves the best models for use in scoring or further analysis. - -Example -: A retail company has historical data tracking the offers made to specific customers in past campaigns. The company now wants to achieve more profitable results by matching the appropriate offer to each customer. - -Requirements -: A target field with a measurement level of either Nominal or Flag (with the role set to Target), and at least one input field (with the role set to Input). For a flag field, the True value defined for the target is assumed to represent a hit when calculating profits, lift, and related statistics. Input fields can have a measurement level of Continuous or Categorical, with the limitation that some inputs may not be appropriate for some model types. For example, ordinal fields used as inputs in C&R Tree, CHAID, and QUEST models must have numeric storage (not string), and will be ignored by these models if specified otherwise. Similarly, continuous input fields can be binned in some cases. The requirements are the same as when using the individual modeling nodes; for example, a Bayes Net model works the same whether generated from the Bayes Net node or the Auto Classifier node. - -Frequency and weight fields -" -1ACF5ED461253F09DB844C2D84C1AE21277BC1E6_1,1ACF5ED461253F09DB844C2D84C1AE21277BC1E6,": Frequency and weight are used to give extra importance to some records over others because, for example, the user knows that the build dataset under-represents a section of the parent population (Weight) or because one record represents a number of identical cases (Frequency). If specified, a frequency field can be used by C&R Tree, CHAID, QUEST, Decision List, and Bayes Net models. A weight field can be used by C&RT, CHAID, and C5.0 models. Other model types will ignore these fields and build the models anyway. Frequency and weight fields are used only for model building, and are not considered when evaluating or scoring models. - -Prefixes -: If you attach a table node to the nugget for the Auto Classifier Node, there are several new variables in the table with names that begin with a $ prefix. -: The names of the fields that are generated during scoring are based on the target field, but with a standard prefix. Different model types use different sets of prefixes. -" -3A9DC582441C2474E183DA0E7DAC20FB182842C2,3A9DC582441C2474E183DA0E7DAC20FB182842C2," Auto Cluster node - -The Auto Cluster node estimates and compares clustering models that identify groups of records with similar characteristics. The node works in the same manner as other automated modeling nodes, enabling you to experiment with multiple combinations of options in a single modeling pass. Models can be compared using basic measures with which to attempt to filter and rank the usefulness of the cluster models, and provide a measure based on the importance of particular fields. - -Clustering models are often used to identify groups that can be used as inputs in subsequent analyses. For example, you may want to target groups of customers based on demographic characteristics such as income, or based on the services they have bought in the past. You can do this without prior knowledge about the groups and their characteristics -- you may not know how many groups to look for, or what features to use in defining them. Clustering models are often referred to as unsupervised learning models, since they do not use a target field, and do not return a specific prediction that can be evaluated as true or false. The value of a clustering model is determined by its ability to capture interesting groupings in the data and provide useful descriptions of those groupings. - -Requirements. One or more fields that define characteristics of interest. Cluster models do not use target fields in the same manner as other models, because they do not make specific predictions that can be assessed as true or false. Instead, they are used to identify groups of cases that may be related. For example, you cannot use a cluster model to predict whether a given customer will churn or respond to an offer. But you can use a cluster model to assign customers to groups based on their tendency to do those things. Weight and frequency fields are not used. - -Evaluation fields. While no target is used, you can optionally specify one or more evaluation fields to be used in comparing models. The usefulness of a cluster model may be evaluated by measuring how well (or badly) the clusters differentiate these fields. -" -FD94481E337829121072F5E46CC39B6290E43B44_0,FD94481E337829121072F5E46CC39B6290E43B44," Auto Data Prep node - -Preparing data for analysis is one of the most important steps in any project—and traditionally, one of the most time consuming. Automated Data Preparation (ADP) handles the task for you, analyzing your data and identifying fixes, screening out fields that are problematic or not likely to be useful, deriving new attributes when appropriate, and improving performance through intelligent screening techniques. You can use the algorithm in fully automatic fashion, allowing it to choose and apply fixes, or you can use it in interactive fashion, previewing the changes before they are made and accept or reject them as you want. - -Using ADP enables you to make your data ready for model building quickly and easily, without needing prior knowledge of the statistical concepts involved. Models will tend to build and score more quickly - -Note: When ADP prepares a field for analysis, it creates a new field containing the adjustments or transformations, rather than replacing the existing values and properties of the old field. The old field is not used in further analysis; its role is set to None. - -Example. An insurance company with limited resources to investigate homeowner's insurance claims wants to build a model for flagging suspicious, potentially fraudulent claims. Before building the model, they will ready the data for modeling using automated data preparation. Since they want to be able to review the proposed transformations before the transformations are applied, they will use automated data preparation in interactive mode. - -An automotive industry group keeps track of the sales for a variety of personal motor vehicles. In an effort to be able to identify over- and underperforming models, they want to establish a relationship between vehicle sales and vehicle characteristics. They will use automated data preparation to prepare the data for analysis, and build models using the data ""before"" and ""after"" preparation to see how the results differ. - -What is your objective? Automated data preparation recommends data preparation steps that will affect the speed with which other algorithms can build models and improve the predictive power of those models. This can include transforming, constructing and selecting features. The target can also be transformed. You can specify the model-building priorities that the data preparation process should concentrate on. - - - -" -FD94481E337829121072F5E46CC39B6290E43B44_1,FD94481E337829121072F5E46CC39B6290E43B44,"* Balance speed and accuracy. This option prepares the data to give equal priority to both the speed with which data are processed by model-building algorithms and the accuracy of the predictions. -* Optimize for speed. This option prepares the data to give priority to the speed with which data are processed by model-building algorithms. When you are working with very large datasets, or are looking for a quick answer, select this option. -" -9D9C67189BE5D6DB22575CF01A75BD5826B92074_0,9D9C67189BE5D6DB22575CF01A75BD5826B92074," Auto Numeric node - -The Auto Numeric node estimates and compares models for continuous numeric range outcomes using a number of different methods, enabling you to try out a variety of approaches in a single modeling run. You can select the algorithms to use, and experiment with multiple combinations of options. For example, you could predict housing values using neural net, linear regression, C&RT, and CHAID models to see which performs best, and you could try out different combinations of stepwise, forward, and backward regression methods. The node explores every possible combination of options, ranks each candidate model based on the measure you specify, and saves the best for use in scoring or further analysis. - -Example -: A municipality wants to more accurately estimate real estate taxes and to adjust values for specific properties as needed without having to inspect every property. Using the Auto Numeric node, the analyst can generate and compare a number of models that predict property values based on building type, neighborhood, size, and other known factors. - -Requirements -: A single target field (with the role set to Target), and at least one input field (with the role set to Input). The target must be a continuous (numeric range) field, such as age or income. Input fields can be continuous or categorical, with the limitation that some inputs may not be appropriate for some model types. For example, C&R Tree models can use categorical string fields as inputs, while linear regression models cannot use these fields and will ignore them if specified. The requirements are the same as when using the individual modeling nodes. For example, a CHAID model works the same whether generated from the CHAID node or the Auto Numeric node. - -Frequency and weight fields -" -9D9C67189BE5D6DB22575CF01A75BD5826B92074_1,9D9C67189BE5D6DB22575CF01A75BD5826B92074,": Frequency and weight are used to give extra importance to some records over others because, for example, the user knows that the build dataset under-represents a section of the parent population (Weight) or because one record represents a number of identical cases (Frequency). If specified, a frequency field can be used by C&R Tree and CHAID algorithms. A weight field can be used by C&RT, CHAID, Regression, and GenLin algorithms. Other model types will ignore these fields and build the models anyway. Frequency and weight fields are used only for model building and are not considered when evaluating or scoring models. - -Prefixes -: If you attach a table node to the nugget for the Auto Numeric Node, there are several new variables in the table with names that begin with a $ prefix. -: The names of the fields that are generated during scoring are based on the target field, but with a standard prefix. Different model types use different sets of prefixes. -" -0294AB8C0FBC393F5C227A0F8BEBCCDC67B78B1D,0294AB8C0FBC393F5C227A0F8BEBCCDC67B78B1D," Balance node - -You can use Balance nodes to correct imbalances in datasets so they conform to specified test criteria. - -For example, suppose that a dataset has only two values--low or high--and that 90% of the cases are low while only 10% of the cases are high. Many modeling techniques have trouble with such biased data because they will tend to learn only the low outcome and ignore the high one, since it is more rare. If the data is well balanced with approximately equal numbers of low and high outcomes, models will have a better chance of finding patterns that distinguish the two groups. In this case, a Balance node is useful for creating a balancing directive that reduces cases with a low outcome. - -Balancing is carried out by duplicating and then discarding records based on the conditions you specify. Records for which no condition holds are always passed through. Because this process works by duplicating and/or discarding records, the original sequence of your data is lost in downstream operations. Be sure to derive any sequence-related values before adding a Balance node to the data stream. -" -1D5D80DFF65EE4195713EEEB43F1291B79779A6B_0,1D5D80DFF65EE4195713EEEB43F1291B79779A6B," Bayes Net node - -The Bayesian Network node enables you to build a probability model by combining observed and recorded evidence with ""common-sense"" real-world knowledge to establish the likelihood of occurrences by using seemingly unlinked attributes. The node focuses on Tree Augmented Naïve Bayes (TAN) and Markov Blanket networks that are primarily used for classification. - -Bayesian networks are used for making predictions in many varied situations; some examples are: - - - -* Selecting loan opportunities with low default risk. -* Estimating when equipment will need service, parts, or replacement, based on sensor input and existing records. -* Resolving customer problems via online troubleshooting tools. -* Diagnosing and troubleshooting cellular telephone networks in real-time. -* Assessing the potential risks and rewards of research-and-development projects in order to focus resources on the best opportunities. - - - -A Bayesian network is a graphical model that displays variables (often referred to as nodes) in a dataset and the probabilistic, or conditional, independencies between them. Causal relationships between nodes may be represented by a Bayesian network; however, the links in the network (also known as arcs) do not necessarily represent direct cause and effect. For example, a Bayesian network can be used to calculate the probability of a patient having a specific disease, given the presence or absence of certain symptoms and other relevant data, if the probabilistic independencies between symptoms and disease as displayed on the graph hold true. Networks are very robust where information is missing and make the best possible prediction using whatever information is present. - -" -1D5D80DFF65EE4195713EEEB43F1291B79779A6B_1,1D5D80DFF65EE4195713EEEB43F1291B79779A6B,"A common, basic, example of a Bayesian network was created by Lauritzen and Spiegelhalter (1988). It is often referred to as the ""Asia"" model and is a simplified version of a network that may be used to diagnose a doctor's new patients; the direction of the links roughly corresponding to causality. Each node represents a facet that may relate to the patient's condition; for example, ""Smoking"" indicates that they are a confirmed smoker, and ""VisitAsia"" shows if they recently visited Asia. Probability relationships are shown by the links between any nodes; for example, smoking increases the chances of the patient developing both bronchitis and lung cancer, whereas age only seems to be associated with the possibility of developing lung cancer. In the same way, abnormalities on an x-ray of the lungs may be caused by either tuberculosis or lung cancer, while the chances of a patient suffering from shortness of breath (dyspnea) are increased if they also suffer from either bronchitis or lung cancer. - -Figure 1. Lauritzen and Spegelhalter's Asia network example - -![Lauritzen and Spegelhalter's Asia network example](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/bn_asia.jpg) - -There are several reasons why you might decide to use a Bayesian network: - - - -* It helps you learn about causal relationships. From this, it enables you to understand a problem area and to predict the consequences of any intervention. -* The network provides an efficient approach for avoiding the overfitting of data. -* A clear visualization of the relationships involved is easily observed. - - - -Requirements. Target fields must be categorical and can have a measurement level of Nominal, Ordinal, or Flag. Inputs can be fields of any type. Continuous (numeric range) input fields will be automatically binned; however, if the distribution is skewed, you may obtain better results by manually binning the fields using a Binning node before the Bayesian Network node. For example, use Optimal Binning where the Supervisor field is the same as the Bayesian Network node Target field. - -" -1D5D80DFF65EE4195713EEEB43F1291B79779A6B_2,1D5D80DFF65EE4195713EEEB43F1291B79779A6B,"Example. An analyst for a bank wants to be able to predict customers, or potential customers, who are likely to default on their loan repayments. You can use a Bayesian network model to identify the characteristics of customers most likely to default, and build several different types of model to establish which is the best at predicting potential defaulters. - -Example. A telecommunications operator wants to reduce the number of customers who leave the business (known as ""churn""), and update the model on a monthly basis using each preceding month's data. You can use a Bayesian network model to identify the characteristics of customers most likely to churn, and continue training the model each month with the new data. -" -8B5211BC5AC76B26C8C102E576F0AF560DFBCBC2,8B5211BC5AC76B26C8C102E576F0AF560DFBCBC2," Binning node - -The Binning node enables you to automatically create new nominal fields based on the values of one or more existing continuous (numeric range) fields. For example, you can transform a continuous income field into a new categorical field containing income groups of equal width, or as deviations from the mean. Alternatively, you can select a categorical ""supervisor"" field in order to preserve the strength of the original association between the two fields. - -Binning can be useful for a number of reasons, including: - - - -* Algorithm requirements. Certain algorithms, such as Naive Bayes and Logistic Regression, require categorical inputs. -* Performance. Algorithms such as multinomial logistic may perform better if the number of distinct values of input fields is reduced. For example, use the median or mean value for each bin rather than using the original values. -* Data Privacy. Sensitive personal information, such as salaries, may be reported in ranges rather than actual salary figures in order to protect privacy. - - - -A number of binning methods are available. After you create bins for the new field, you can generate a Derive node based on the cut points. -" -C5673E6023D99F8354E9B61DA2D2F1B58FBC970F_0,C5673E6023D99F8354E9B61DA2D2F1B58FBC970F," C5.0 node - -This node uses the C5.0 algorithm to build either a decision tree or a rule set. A C5.0 model works by splitting the sample based on the field that provides the maximum information gain. Each sub-sample defined by the first split is then split again, usually based on a different field, and the process repeats until the subsamples cannot be split any further. Finally, the lowest-level splits are reexamined, and those that do not contribute significantly to the value of the model are removed or pruned. - -Note: The C5.0 node can predict only a categorical target. When analyzing data with categorical (nominal or ordinal) fields, the node is likely to group categories together. - -C5.0 can produce two kinds of models. A decision tree is a straightforward description of the splits found by the algorithm. Each terminal (or ""leaf"") node describes a particular subset of the training data, and each case in the training data belongs to exactly one terminal node in the tree. In other words, exactly one prediction is possible for any particular data record presented to a decision tree. - -In contrast, a rule set is a set of rules that tries to make predictions for individual records. Rule sets are derived from decision trees and, in a way, represent a simplified or distilled version of the information found in the decision tree. Rule sets can often retain most of the important information from a full decision tree but with a less complex model. Because of the way rule sets work, they do not have the same properties as decision trees. The most important difference is that with a rule set, more than one rule may apply for any particular record, or no rules at all may apply. If multiple rules apply, each rule gets a weighted ""vote"" based on the confidence associated with that rule, and the final prediction is decided by combining the weighted votes of all of the rules that apply to the record in question. If no rule applies, a default prediction is assigned to the record. - -" -C5673E6023D99F8354E9B61DA2D2F1B58FBC970F_1,C5673E6023D99F8354E9B61DA2D2F1B58FBC970F,"Example. A medical researcher has collected data about a set of patients, all of whom suffered from the same illness. During their course of treatment, each patient responded to one of five medications. You can use a C5.0 model, in conjunction with other nodes, to help find out which drug might be appropriate for a future patient with the same illness. - -Requirements. To train a C5.0 model, there must be one categorical (i.e., nominal or ordinal) Target field, and one or more Input fields of any type. Fields set to Both or None are ignored. Fields used in the model must have their types fully instantiated. A weight field can also be specified. - -Strengths. C5.0 models are quite robust in the presence of problems such as missing data and large numbers of input fields. They usually do not require long training times to estimate. In addition, C5.0 models tend to be easier to understand than some other model types, since the rules derived from the model have a very straightforward interpretation. C5.0 also offers the powerful boosting method to increase accuracy of classification. - -Tip: C5.0 model building speed may benefit from enabling parallel processing. Note: When first creating a flow, you select which runtime to use. By default, flows use the IBM SPSS Modeler runtime. If you want to use native Spark algorithms instead of SPSS algorithms, select the Spark runtime. Properties for this node will vary depending on which runtime option you choose. -" -DE6C4CB72844FC59FD80FC0B26ACC8C94A3BA994,DE6C4CB72844FC59FD80FC0B26ACC8C94A3BA994," Caching options for nodes - -To optimize the running of flows, you can set up a cache on any nonterminal node. When you set up a cache on a node, the cache is filled with the data that passes through the node the next time you run the data flow. From then on, the data is read from the cache (which is stored temporarily) rather than from the data source. - -Caching is most useful following a time-consuming operation such as a sort, merge, or aggregation. For example, suppose that you have an import node set to read sales data from a database and an Aggregate node that summarizes sales by location. You can set up a cache on the Aggregate node rather than on the import node because you want the cache to store the aggregated data rather than the entire data set. Note: Caching at import nodes, which simply stores a copy of the original data as it is read into SPSS Modeler, won't improve performance in most circumstances. - -Nodes with caching enabled are displayed with a special circle-backslash icon. When the data is cached at the node, the icon changes to a check mark. - -Figure 1. Node with empty cache vs. node with full cache - -![Shows a node with an empty cache and a node with a full cache](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/cache_nodes.png) - -A circle-backslash icon by node indicates that its cache is empty. When the cache is full, the icon becomes a check mark. If you want to replace the contents of the cache, you must first flush the cache and then re-run the data flow to refill it. - -In your flow, right-click the node and select . -" -D43DE202E6D3EEE211893585616BDA7EB09211C4_0,D43DE202E6D3EEE211893585616BDA7EB09211C4," Continuous machine learning - -As a result of IBM research, and inspired by natural selection in biology, continuous machine learning is available for the Auto Classifier node and the Auto Numeric node. - -An inconvenience with modeling is models getting outdated due to changes to your data over time. This is commonly referred to as model drift or concept drift. To help overcome model drift effectively, SPSS Modeler provides continuous automated machine learning. - -What is model drift? When you build a model based on historical data, it can become stagnant. In many cases, new data is always coming in—new variations, new patterns, new trends, etc.—that the old historical data doesn't capture. To solve this problem, IBM was inspired by the famous phenomenon in biology called the natural selection of species. Think of models as species and think of data as nature. Just as nature selects species, we should let data select the model. There's one big difference between models and species: species can evolve, but models are static after they're built. - -There are two preconditions for species to evolve; the first is gene mutation, and the second is population. Now, from a modeling perspective, to satisfy the first precondition (gene mutation), we should introduce new data changes into the existing model. To satisfy the second precondition (population), we should use a number of models rather than just one. What can represent a number of models? An Ensemble Model Set (EMS)! - -The following figure illustrates how an EMS can evolve. The upper left portion of the figure represents historical data with hybrid partitions. The hybrid partitions ensure a rich initial EMS. The upper right portion of the figure represents a new chunk of data that becomes available, with vertical bars on each side. The left vertical bar represents current status, and the right vertical bar represents the status when there's a risk of model drift. In each new round of continuous machine learning, two steps are performed to evolve your model and avoid model drift. - -" -D43DE202E6D3EEE211893585616BDA7EB09211C4_1,D43DE202E6D3EEE211893585616BDA7EB09211C4,"First, you construct an ensemble model set (EMS) using existing training data. After that, when a new chunk of data becomes available, new models are built against that new data and added to the EMS as component models. The weights of existing component models in the EMS are reevaluated using the new data. As a result of this reevaluation, component models having higher weights are selected for the current prediction, and component models having lower weights may be deleted from the EMS. This process refreshes the EMS for both model weights and model instances, thus evolving in a flexible and efficient way to address the inevitable changes to your data over time. - -Figure 1. Continuous auto machine learning - -![Continuous auto machine learning](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/caml.png) - -The ensemble model set (EMS) is a generated auto model nugget, and there's a refresh link between the auto modeling node and the generated auto model nugget that defines the refresh relationship between them. When you enable continuous auto machine learning, new data assets are continuously fed to auto modeling nodes to generate new component models. The model nugget is updated instead of replaced. - -The following figure provides an example of the internal structure of an EMS in a continuous machine learning scenario. Only the top three component models are selected for the current prediction. For each component model (labeled as M1, M2, and M3), two kinds of weights are maintained. Current Model Weight (CMW) describes how a component model performs with a new chunk of data, and Accumulated Model Weight (AMW) describes the comprehensive performance of a component model against recent chunks of data. AMW is calculated iteratively via CMW and previous values of itself, and there's a hyper parameter beta to balance between them. The formula to calculate AMW is called exponential moving average. - -" -D43DE202E6D3EEE211893585616BDA7EB09211C4_2,D43DE202E6D3EEE211893585616BDA7EB09211C4,"When a new chunk of data becomes available, first SPSS Modeler uses it to build a few new component models. In this example figure, model four (M4) is built with CMW and AMW calculated during the initial model building process. Then SPSS Modeler uses the new chunk of data to reevaluate measures of existing component models (M1, M2, and M3) and update their CMW and AMW based on the reevaluation results. Finally, SPSS Modeler might reorder the component models based on CMW or AMW and select the top three component models accordingly. - -In this figure, CMW is described using normalized value (sum = 1) and AMW is calculated based on CMW. In SPSS Modeler, the absolute value (equal to evaluation-weighted measure selected - for example, accuracy) is chosen to represent CMW and AMW for simplicity. - -Figure 2. EMS structure - -![EMS structure](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/caml_details.png)Note that there are two types of weights defined for each EMS component model, both of which could be used for selecting top N models and component model drop out: - - - -* Current Model Weight (CMW) is computed via evaluation against the new data chunk (for example, evaluation accuracy on the new data chunk). -* Accumulated Model Weight (AMW) is computed via combining both CMW and existing AMW (for example, exponentially weighted moving average (EWMA). - -Exponential moving average formula for calculating AMW: -![Exponential moving average formula for calculating AMW](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/caml_alg.png) - - - -In SPSS Modeler, after running an Auto Classifier node to generate a model nugget, the following model options are available for continuous machine learning: - - - -* Enable continuous auto machine learning during model refresh. Select this option to enable continuous machine learning. Keep in mind that consistent metadata (data model) must be used to train the continuous auto model. If you select this option, other options are enabled. -" -D43DE202E6D3EEE211893585616BDA7EB09211C4_3,D43DE202E6D3EEE211893585616BDA7EB09211C4,"* Enable automatic model weights reevaluation. This option controls whether evaluation measures (accuracy, for example) are computed and updated during model refresh. If you select this option, an automatic evaluation process will run after the EMS (during model refresh). This is because it's usually necessary to reevaluate existing component models using new data to reflect the current state of your data. Then the weights of the EMS component models are assigned according to reevaluation results, and the weights are used to decide the proportion a component model contributes to the final ensemble prediction. This option is selected by default. - -Figure 3. Model settings - -![Model settings](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/caml_settings.png) - -Figure 4. Flag target - -![Flag target](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/caml_models.png)Following are the supported CMW and AMW for the Auto Classifier node: - - - -Table 1. Supported CMW and AMW - - Target type CMW AMW - - flag target Overall Accuracy
Area Under Curve Accumulated Accuracy
Accumulated AUC - set target Overall Accuracy Accumulated Accuracy - - - -The following three options are related to AMW, which is used to evaluate how a component model performs during recent data chunk periods: -* Enable accumulated factor during model weights reevaluation. If you select this option, AMW computation will be enabled during model weights reevaluation. AMW represents the comprehensive performance of an EMS component model during recent data chunk periods, related to the accumulated factor β defined in the AMW formula listed previously, which you can adjust in the node properties. When this option isn't selected, only CMW will be computed. This option is selected by default. -" -D43DE202E6D3EEE211893585616BDA7EB09211C4_4,D43DE202E6D3EEE211893585616BDA7EB09211C4,"* Perform model reduction based on accumulated limit during model refresh. Select this option if you want component models with an AMW value below the specified limit to be removed from the auto model EMS during model refresh. This can be helpful in discarding component models that are useless to prevent the auto model EMS from becoming too heavy.The accumulated limit value evaluation is related to the weighted measure used when Evaluation-weighted voting is selected as the ensemble method. See the following. - -Figure 5. Set and flag targets - -![Set and flag targets](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/caml_targets.png) - -Note that if you select Model Accuracy for the evaluation-weighted measure, models with an accumulated accuracy below the specified limit will be deleted. And if you select Area under curve for the evaluation-weighted measure, models with an accumulated AUC below the specified limit will be deleted. - -By default, Model Accuracy is used for the evaluation-weighted measure for the Auto Classifier node, and there's an optional AUC ROC measure in the case of flag targets. -* Use accumulated evaluation-weighted voting. Select this option if you want AMW to be used for the current scoring/prediction. Otherwise, CMW will be used by default. This option is enabled when Evaluation-weighted voting is selected for the ensemble method. - -Note that for flag targets, by selecting this option, if you select Model Accuracy for the evaluation-weighted measure, then Accumulated Accuracy will be used as the AMW to perform the current scoring. Or if you select Area under curve for the evaluation-weighted measure, then Accumulated AUC will be used as the AMW to perform the current scoring. If you don't select this option and you select Model Accuracy for the evaluation-weighted measure, then Overall Accuracy will be used as the CMW to perform the current scoring. If you select Area under curve, Area under curve will be used as the CMW to perform the current scoring. - -" -D43DE202E6D3EEE211893585616BDA7EB09211C4_5,D43DE202E6D3EEE211893585616BDA7EB09211C4,"For set targets, if you select this Use accumulated evaluation-weighted voting option, then Accumulated Accuracy will be used as the AMW to perform the current scoring. Otherwise, Overall Accuracy will be used as the CMW to perform the current scoring. - - - -With continuous auto machine learning, the auto model nugget is evolving all the time by rebuilding the auto model, which ensures that you get the most updated version reflecting the current state of your data. SPSS Modeler provides the flexibility for different top N component models in the EMS to be selected according to their current weights, which keeps pace with varying data during different periods. - -Note: The Auto Numeric node is a much simpler case, providing a subset of the options in the Auto Classifier node. -" -461D1A8F855174F44550531EF8BE6E67C29D3E3B,461D1A8F855174F44550531EF8BE6E67C29D3E3B," CARMA node - -The CARMA node uses an association rules discovery algorithm to discover association rules in the data. - -Association rules are statements in the form: - -if antecedent(s) then consequent(s) - -For example, if a Web customer purchases a wireless card and a high-end wireless router, the customer is also likely to purchase a wireless music server if offered. The CARMA model extracts a set of rules from the data without requiring you to specify input or target fields. This means that the rules generated can be used for a wider variety of applications. For example, you can use rules generated by this node to find a list of products or services (antecedents) whose consequent is the item that you want to promote this holiday season. Using watsonx.ai, you can determine which clients have purchased the antecedent products and construct a marketing campaign designed to promote the consequent product. - -Requirements. In contrast to Apriori, the CARMA node does not require Input or Target fields. This is integral to the way the algorithm works and is equivalent to building an Apriori model with all fields set to Both. You can constrain which items are listed only as antecedents or consequents by filtering the model after it is built. For example, you can use the model browser to find a list of products or services (antecedents) whose consequent is the item that you want to promote this holiday season. - -To create a CARMA rule set, you need to specify an ID field and one or more content fields. The ID field can have any role or measurement level. Fields with the role None are ignored. Field types must be fully instantiated before executing the node. Like Apriori, data may be in tabular or transactional format. - -Strengths. The CARMA node is based on the CARMA association rules algorithm. In contrast to Apriori, the CARMA node offers build settings for rule support (support for both antecedent and consequent) rather than antecedent support. CARMA also allows rules with multiple consequents. Like Apriori, models generated by a CARMA node can be inserted into a data stream to create predictions. -" -37D9428BD2E4A45CA968DAD59D1005FB5FC4DE9C,37D9428BD2E4A45CA968DAD59D1005FB5FC4DE9C," C&R Tree node - -The Classification and Regression (C&R) Tree node is a tree-based classification and prediction method. Similar to C5.0, this method uses recursive partitioning to split the training records into segments with similar output field values. The C&R Tree node starts by examining the input fields to find the best split, measured by the reduction in an impurity index that results from the split. The split defines two subgroups, each of which is subsequently split into two more subgroups, and so on, until one of the stopping criteria is triggered. All splits are binary (only two subgroups). -" -D64140C0B8D4187B49046528FF61A54D77A99223,D64140C0B8D4187B49046528FF61A54D77A99223," CHAID node - -CHAID, or Chi-squared Automatic Interaction Detection, is a classification method for building decision trees by using chi-square statistics to identify optimal splits. - -CHAID first examines the crosstabulations between each of the input fields and the outcome, and tests for significance using a chi-square independence test. If more than one of these relations is statistically significant, CHAID will select the input field that is the most significant (smallest p value). If an input has more than two categories, these are compared, and categories that show no differences in the outcome are collapsed together. This is done by successively joining the pair of categories showing the least significant difference. This category-merging process stops when all remaining categories differ at the specified testing level. For nominal input fields, any categories can be merged; for an ordinal set, only contiguous categories can be merged. - -Exhaustive CHAID is a modification of CHAID that does a more thorough job of examining all possible splits for each predictor but takes longer to compute. - -Requirements. Target and input fields can be continuous or categorical; nodes can be split into two or more subgroups at each level. Any ordinal fields used in the model must have numeric storage (not string). If necessary, the Reclassify node can be used to convert them. - -Strengths. Unlike the C&R Tree and QUEST nodes, CHAID can generate nonbinary trees, meaning that some splits have more than two branches. It therefore tends to create a wider tree than the binary growing methods. CHAID works for all types of inputs, and it accepts both case weights and frequency variables. -" -54EE0BB6FBD2E35C46C41D0065C299408F5AB0A5,54EE0BB6FBD2E35C46C41D0065C299408F5AB0A5," Characters - -Characters (usually shown as CHAR) are typically used within a CLEM expression to perform tests on strings. - -For example, you can use the function isuppercode to determine whether the first character of a string is uppercase. The following CLEM expression uses a character to indicate that the test should be performed on the first character of the string: - -isuppercode(subscrs(1, ""MyString"")) - -To express the code (in contrast to the location) of a particular character in a CLEM expression, use single backquotes of the form . For example, A , Z . - -Note: There is no CHAR storage type for a field, so if a field is derived or filled with an expression that results in a CHAR, then that result will be converted to a string. -" -3C1D83E94DDC08D7A6229AEDC49C895E86E660BF,3C1D83E94DDC08D7A6229AEDC49C895E86E660BF," CLEM datatypes - -This section covers CLEM datatypes. - -CLEM datatypes can be made up of any of the following: - - - -* Integers -* Reals -* Characters -* Strings -* Lists -" -6F900078FD88E14400807E571E1F3A24C633C2DC,6F900078FD88E14400807E571E1F3A24C633C2DC," CLEM examples - -The example expressions in this section illustrate correct syntax and the types of expressions possible with CLEM. - -Additional examples are discussed throughout this CLEM documentation. See [CLEM (legacy) language reference](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_language_reference.htmlclem_language_reference) for more information. -" -628354B3F2FA792B938756225315E3B4024DCC0E_0,628354B3F2FA792B938756225315E3B4024DCC0E," Functions reference - -This section lists CLEM functions for working with data in SPSS Modeler. You can enter these functions as code in various areas of the user interface, such as Derive and Set To Flag nodes, or you can use the Expression Builder to create valid CLEM expressions without memorizing function lists or field names. - - - -CLEM functions for use with SPSS Modeler data - -Table 1. CLEM functions for use with SPSS Modeler data - - Function Type Description - - [Information](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_information.htmlclem_function_ref_information) Used to gain insight into field values. For example, the function is_string returns true for all records whose type is a string. - [Conversion](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_conversion.htmlclem_function_ref_conversion) Used to construct new fields or convert storage type. For example, the function to_timestamp converts the selected field to a timestamp. - [Comparison](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_comparison.htmlclem_function_ref_comparison) Used to compare field values to each other or to a specified string. For example, <=is used to compare whether the values of two fields are lesser or equal. - [Logical](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_logical.htmlclem_function_ref_logical) Used to perform logical operations, such as if, then, else operations. -" -628354B3F2FA792B938756225315E3B4024DCC0E_1,628354B3F2FA792B938756225315E3B4024DCC0E," [Numeric](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_numeric.htmlclem_function_ref_numeric) Used to perform numeric calculations, such as the natural log of field values. - [Trigonometric](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_trigonometric.htmlclem_function_ref_trigonometric) Used to perform trigonometric calculations, such as the arccosine of a specified angle. - [Probability](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_probability.htmlclem_function_ref_probability) Returns probabilities that are based on various distributions, such as probability that a value from Student's t distribution is less than a specific value. - [Spatial](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_spatial.htmlclem_function_ref_spatial) Used to perform spatial calculations on geospatial data. - [Bitwise](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_bitwise.htmlclem_function_ref_bitwise) Used to manipulate integers as bit patterns. - [Random](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_random.htmlclem_function_ref_random) Used to randomly select items or generate numbers. -" -628354B3F2FA792B938756225315E3B4024DCC0E_2,628354B3F2FA792B938756225315E3B4024DCC0E," [String](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_string.htmlclem_function_ref_string) Used to perform various operations on strings, such as stripchar, which allows you to remove a specified character. - [SoundEx](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_soundex.htmlclem_function_ref_soundex) Used to find strings when the precise spelling is not known; based on phonetic assumptions about how certain letters are pronounced. - [Date and time](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_datetime.htmlclem_function_ref_datetime) Used to perform various operations on date, time, and timestamp fields. - [Sequence](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_sequence.htmlclem_function_ref_sequence) Used to gain insight into the record sequence of a data set or perform operations that are based on that sequence. - [Global](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_global.htmlclem_function_ref_global) Used to access global values that are created by a Set Globals node. For example, @MEAN is used to refer to the mean average of all values for a field across the entire data set. -" -1D1659B46A454170A597B0450FD99C16EEC5B1AD_0,1D1659B46A454170A597B0450FD99C16EEC5B1AD," Bitwise integer operations - -These functions enable integers to be manipulated as bit patterns representing two's-complement values, where bit position N has weight 2N. - -Bits are numbered from 0 upward. These operations act as though the sign bit of an integer is extended indefinitely to the left. Thus, everywhere above its most significant bit, a positive integer has 0 bits and a negative integer has 1 bit. - - - -CLEM bitwise integer operations - -Table 1. CLEM bitwise integer operations - - Function Result Description - - INT1 Integer Produces the bitwise complement of the integer INT1. That is, there is a 1 in the result for each bit position for which INT1 has 0. It is always true that INT = –(INT + 1). - INT1 INT2 Integer The result of this operation is the bitwise ""inclusive or"" of INT1 and INT2. That is, there is a 1 in the result for each bit position for which there is a 1 in either INT1 or INT2 or both. - INT1 /& INT2 Integer The result of this operation is the bitwise ""exclusive or"" of INT1 and INT2. That is, there is a 1 in the result for each bit position for which there is a 1 in either INT1 or INT2 but not in both. - INT1 && INT2 Integer Produces the bitwise ""and"" of the integers INT1 and INT2. That is, there is a 1 in the result for each bit position for which there is a 1 in both INT1 and INT2. - INT1 && INT2 Integer Produces the bitwise ""and"" of INT1 and the bitwise complement of INT2. That is, there is a 1 in the result for each bit position for which there is a 1 in INT1 and a 0 in INT2. This is the same as INT1&& (INT2) and is useful for clearing bits of INT1 set in INT2. - INT << N Integer Produces the bit pattern of INT1 shifted left by N positions. A negative value for N produces a right shift. -" -1D1659B46A454170A597B0450FD99C16EEC5B1AD_1,1D1659B46A454170A597B0450FD99C16EEC5B1AD," INT >> N Integer Produces the bit pattern of INT1 shifted right by N positions. A negative value for N produces a left shift. - INT1 &&=_0 INT2 Boolean Equivalent to the Boolean expression INT1 && INT2 /== 0 but is more efficient. - INT1 &&/=_0 INT2 Boolean Equivalent to the Boolean expression INT1 && INT2 == 0 but is more efficient. - integer_bitcount(INT) Integer Counts the number of 1 or 0 bits in the two's-complement representation of INT. If INT is non-negative, N is the number of 1 bits. If INT is negative, it is the number of 0 bits. Owing to the sign extension, there are an infinite number of 0 bits in a non-negative integer or 1 bits in a negative integer. It is always the case that integer_bitcount(INT) = integer_bitcount(-(INT+1)). - integer_leastbit(INT) Integer Returns the bit position N of the least-significant bit set in the integer INT. N is the highest power of 2 by which INT divides exactly. -" -5FE3DE32EFB5DEA4094DCA22CBC77E24D23EF67A,5FE3DE32EFB5DEA4094DCA22CBC77E24D23EF67A," Functions handling blanks and null values - -Using CLEM, you can specify that certain values in a field are to be regarded as ""blanks,"" or missing values. - -The following functions work with blanks. - - - -CLEM blank and null value functions - -Table 1. CLEM blank and null value functions - - Function Result Description - - @BLANK(FIELD) Boolean Returns true for all records whose values are blank according to the blank-handling rules set in an upstream Type node or Import node (Types tab). - @LAST_NON_BLANK(FIELD) Any Returns the last value for FIELD that was not blank, as defined in an upstream Import or Type node. If there are no nonblank values for FIELD in the records read so far, $null$ is returned. Note that blank values, also called user-missing values, can be defined separately for each field. - @NULL(FIELD) Boolean Returns true if the value of FIELD is the system-missing $null$. Returns false for all other values, including user-defined blanks. If you want to check for both, use @BLANK(FIELD) and@NULL(FIELD). - undef Any Used generally in CLEM to enter a $null$ value—for example, to fill blank values with nulls in the Filler node. - - - -Blank fields may be ""filled in"" with the Filler node. In both Filler and Derive nodes (multiple mode only), the special CLEM function @FIELD refers to the current field(s) being examined. -" -32A79D23C94FB1920DB500D2DD9464C1316C62A5_0,32A79D23C94FB1920DB500D2DD9464C1316C62A5," Comparison functions - -Comparison functions are used to compare field values to each other or to a specified string. - -For example, you can check strings for equality using =. An example of string equality verification is: Class = ""class 1"". - -For purposes of numeric comparison, greater means closer to positive infinity, and lesser means closer to negative infinity. That is, all negative numbers are less than any positive number. - - - -CLEM comparison functions - -Table 1. CLEM comparison functions - - Function Result Description - - count_equal(ITEM1, LIST) Integer Returns the number of values from a list of fields that are equal to ITEM1 or null if ITEM1 is null. - count_greater_than(ITEM1, LIST) Integer Returns the number of values from a list of fields that are greater than ITEM1 or null if ITEM1 is null. - count_less_than(ITEM1, LIST) Integer Returns the number of values from a list of fields that are less than ITEM1 or null if ITEM1 is null. - count_not_equal(ITEM1, LIST) Integer Returns the number of values from a list of fields that aren't equal to ITEM1 or null if ITEM1 is null. - count_nulls(LIST) Integer Returns the number of null values from a list of fields. - count_non_nulls(LIST) Integer Returns the number of non-null values from a list of fields. - date_before(DATE1, DATE2) Boolean Used to check the ordering of date values. Returns a true value if DATE1 is before DATE2. - first_index(ITEM, LIST) Integer Returns the index of the first field containing ITEM from a LIST of fields or 0 if the value isn't found. Supported for string, integer, and real types only. - first_non_null(LIST) Any Returns the first non-null value in the supplied list of fields. All storage types supported. - first_non_null_index(LIST) Integer Returns the index of the first field in the specified LIST containing a non-null value or 0 if all values are null. All storage types are supported. -" -32A79D23C94FB1920DB500D2DD9464C1316C62A5_1,32A79D23C94FB1920DB500D2DD9464C1316C62A5," ITEM1 = ITEM2 Boolean Returns true for records where ITEM1 is equal to ITEM2. - ITEM1 /= ITEM2 Boolean Returns true if the two strings are not identical or 0 if they're identical. - ITEM1 < ITEM2 Boolean Returns true for records where ITEM1 is less than ITEM2. - ITEM1 <= ITEM2 Boolean Returns true for records where ITEM1 is less than or equal to ITEM2. - ITEM1 > ITEM2 Boolean Returns true for records where ITEM1 is greater than ITEM2. - ITEM1 >= ITEM2 Boolean Returns true for records where ITEM1 is greater than or equal to ITEM2. - last_index(ITEM, LIST) Integer Returns the index of the last field containing ITEM from a LIST of fields or 0 if the value isn't found. Supported for string, integer, and real types only. - last_non_null(LIST) Any Returns the last non-null value in the supplied list of fields. All storage types supported. - last_non_null_index(LIST) Integer Returns the index of the last field in the specified LIST containing a non-null value or 0 if all values are null. All storage types are supported. - max(ITEM1, ITEM2) Any Returns the greater of the two items: ITEM1 or ITEM2. - max_index(LIST) Integer Returns the index of the field containing the maximum value from a list of numeric fields or 0 if all values are null. For example, if the third field listed contains the maximum, the index value 3 is returned. If multiple fields contain the maximum value, the one listed first (leftmost) is returned. - max_n(LIST) Number Returns the maximum value from a list of numeric fields or null if all of the field values are null. - member(ITEM, LIST) Boolean Returns true if ITEM is a member of the specified LIST. Otherwise, a false value is returned. A list of field names can also be specified. - min(ITEM1, ITEM2) Any Returns the lesser of the two items: ITEM1 or ITEM2. -" -32A79D23C94FB1920DB500D2DD9464C1316C62A5_2,32A79D23C94FB1920DB500D2DD9464C1316C62A5," min_index(LIST) Integer Returns the index of the field containing the minimum value from a list of numeric fields or 0 if all values are null. For example, if the third field listed contains the minimum, the index value 3 is returned. If multiple fields contain the minimum value, the one listed first (leftmost) is returned. - min_n(LIST) Number Returns the minimum value from a list of numeric fields or null if all of the field values are null. -" -8CE325D8AFC27359968A8799D58EF4BF0C57D68E_0,8CE325D8AFC27359968A8799D58EF4BF0C57D68E," Conversion functions - -With conversion functions, you can construct new fields and convert the storage type of existing files. - -For example, you can form new strings by joining strings together or by taking strings apart. To join two strings, use the operator ><. For example, if the fieldSite has the value""BRAMLEY"", then ""xx"" >< Site returns ""xxBRAMLEY"". The result of>< is always a string, even if the arguments aren't strings. Thus, if field V1 is 3 and field V2 is 5, then V1 >< V2 returns ""35"" (a string, not a number). - -Conversion functions (and any other functions that require a specific type of input, such as a date or time value) depend on the current formats specified in the flow properties. For example, if you want to convert a string field with values Jan 2021, Feb 2021, and so on, select the matching date format MON YYYY as the default date format for the flow. - - - -CLEM conversion functions - -Table 1. CLEM conversion functions - - Function Result Description - - ITEM1 >< ITEM2 String Concatenates values for two fields and returns the resulting string as ITEM1ITEM2. - to_integer(ITEM) Integer Converts the storage of the specified field to an integer. - to_real(ITEM) Real Converts the storage of the specified field to a real. - to_number(ITEM) Number Converts the storage of the specified field to a number. - to_string(ITEM) String Converts the storage of the specified field to a string. When a real is converted to string using this function, it returns a value with 6 digits after the radix point. - to_time(ITEM) Time Converts the storage of the specified field to a time. - to_date(ITEM) Date Converts the storage of the specified field to a date. - to_timestamp(ITEM) Timestamp Converts the storage of the specified field to a timestamp. - to_datetime(ITEM) Datetime Converts the storage of the specified field to a date, time, or timestamp value. -" -8CE325D8AFC27359968A8799D58EF4BF0C57D68E_1,8CE325D8AFC27359968A8799D58EF4BF0C57D68E," datetime_date(ITEM) Date Returns the date value for a number, string, or timestamp. Note this is the only function that allows you to convert a number (in seconds) back to a date. If ITEM is a string, creates a date by parsing a string in the current date format. The date format specified in the flow properties must be correct for this function to be successful. If ITEM is a number, it's interpreted as a number of seconds since the base date (or epoch). Fractions of a day are truncated. If ITEM is a timestamp, the date part of the timestamp is returned. If ITEM is a date, it's returned unchanged. - stb_centroid_latitude(ITEM) Integer Returns an integer value for latitude corresponding to centroid of the geohash argument. -" -D1FAFA3A73F77B401F49CC641BE44D61BC9C0689_0,D1FAFA3A73F77B401F49CC641BE44D61BC9C0689," Date and time functions - -CLEM includes a family of functions for handling fields with datetime storage of string variables representing dates and times. - -The formats of date and time used are specific to each flow and are specified in the flow properties. The date and time functions parse date and time strings according to the currently selected format. - -When you specify a year in a date that uses only two digits (that is, the century is not specified), SPSS Modeler uses the default century that's specified in the flow properties. - - - -CLEM date and time functions - -Table 1. CLEM date and time functions - - Function Result Description - - @TODAY String If you select Rollover days/mins in the flow properties, this function returns the current date as a string in the current date format. If you use a two-digit date format and don't select Rollover days/mins, this function returns $null$ on the current server. - to_time(ITEM) Time Converts the storage of the specified field to a time. - to_date(ITEM) Date Converts the storage of the specified field to a date. - to_timestamp(ITEM) Timestamp Converts the storage of the specified field to a timestamp. - to_datetime(ITEM) Datetime Converts the storage of the specified field to a date, time, or timestamp value. - datetime_date(ITEM) Date Returns the date value for a number, string, or timestamp. Note this is the only function that allows you to convert a number (in seconds) back to a date. If ITEM is a string, creates a date by parsing a string in the current date format. The date format specified in the flow properties must be correct for this function to be successful. If ITEM is a number, it's interpreted as a number of seconds since the base date (or epoch). Fractions of a day are truncated. If ITEM is timestamp, the date part of the timestamp is returned. If ITEM is a date, it's returned unchanged. -" -D1FAFA3A73F77B401F49CC641BE44D61BC9C0689_1,D1FAFA3A73F77B401F49CC641BE44D61BC9C0689," date_before(DATE1, DATE2) Boolean Returns a value of true if DATE1 represents a date or timestamp before that represented by DATE2. Otherwise, this function returns a value of 0. - date_days_difference(DATE1, DATE2) Integer Returns the time in days from the date or timestamp represented by DATE1 to that represented by DATE2, as an integer. If DATE2 is before DATE1, this function returns a negative number. - date_in_days(DATE) Integer Returns the time in days from the baseline date to the date or timestamp represented by DATE, as an integer. If DATE is before the baseline date, this function returns a negative number. You must include a valid date for the calculation to work appropriately. For example, you should not specify 29 February 2001 as the date. Because 2001 isn't a leap year, this date doesn't exist. - date_in_months(DATE) Real Returns the time in months from the baseline date to the date or timestamp represented by DATE, as a real number. This is an approximate figure based on a month of 30.4375 days. If DATE is before the baseline date, this function returns a negative number. You must include a valid date for the calculation to work appropriately. For example, you should not specify 29 February 2001 as the date. Because 2001 isn't a leap year, this date doesn't exist. - date_in_weeks(DATE) Real Returns the time in weeks from the baseline date to the date or timestamp represented by DATE, as a real number. This is based on a week of 7.0 days. If DATE is before the baseline date, this function returns a negative number. You must include a valid date for the calculation to work appropriately. For example, you should not specify 29 February 2001 as the date. Because 2001 isn't a leap year, this date doesn't exist. -" -D1FAFA3A73F77B401F49CC641BE44D61BC9C0689_2,D1FAFA3A73F77B401F49CC641BE44D61BC9C0689," date_in_years(DATE) Real Returns the time in years from the baseline date to the date or timestamp represented by DATE, as a real number. This is an approximate figure based on a year of 365.25 days. If DATE is before the baseline date, this function returns a negative number. You must include a valid date for the calculation to work appropriately. For example, you should not specify 29 February 2001 as the date. Because 2001 isn't a leap year, this date doesn't exist. - date_months_difference (DATE1, DATE2) Real Returns the time in months from the date or timestamp represented by DATE1 to that represented by DATE2, as a real number. This is an approximate figure based on a month of 30.4375 days. If DATE2 is before DATE1, this function returns a negative number. - datetime_date(YEAR, MONTH, DAY) Date Creates a date value for the given YEAR, MONTH, and DAY. The arguments must be integers. - datetime_day(DATE) Integer Returns the day of the month from a given DATE or timestamp. The result is an integer in the range 1 to 31. - datetime_day_name(DAY) String Returns the full name of the given DAY. The argument must be an integer in the range 1 (Sunday) to 7 (Saturday). - datetime_hour(TIME) Integer Returns the hour from a TIME or timestamp. The result is an integer in the range 0 to 23. - datetime_in_seconds(TIME) Real Returns the seconds portion stored in TIME. - datetime_in_seconds(DATE), datetime_in_seconds(DATETIME) Real Returns the accumulated number, converted into seconds, from the difference between the current DATE or DATETIME and the baseline date (1900-01-01). - datetime_minute(TIME) Integer Returns the minute from a TIME or timestamp. The result is an integer in the range 0 to 59. -" -D1FAFA3A73F77B401F49CC641BE44D61BC9C0689_3,D1FAFA3A73F77B401F49CC641BE44D61BC9C0689," datetime_month(DATE) Integer Returns the month from a DATE or timestamp. The result is an integer in the range 1 to 12. - datetime_month_name (MONTH) String Returns the full name of the given MONTH. The argument must be an integer in the range 1 to 12. - datetime_now Timestamp Returns the current time as a timestamp. - datetime_second(TIME) Integer Returns the second from a TIME or timestamp. The result is an integer in the range 0 to 59. - datetime_day_short_name(DAY) String Returns the abbreviated name of the given DAY. The argument must be an integer in the range 1 (Sunday) to 7 (Saturday). - datetime_month_short_name(MONTH) String Returns the abbreviated name of the given MONTH. The argument must be an integer in the range 1 to 12. - datetime_time(HOUR, MINUTE, SECOND) Time Returns the time value for the specified HOUR, MINUTE, and SECOND. The arguments must be integers. - datetime_time(ITEM) Time Returns the time value of the given ITEM. - datetime_timestamp(YEAR, MONTH, DAY, HOUR, MINUTE, SECOND) Timestamp Returns the timestamp value for the given YEAR, MONTH, DAY, HOUR, MINUTE, and SECOND. - datetime_timestamp(DATE, TIME) Timestamp Returns the timestamp value for the given DATE and TIME. - datetime_timestamp(NUMBER) Timestamp Returns the timestamp value of the given number of seconds. - datetime_weekday(DATE) Integer Returns the day of the week from the given DATE or timestamp. - datetime_year(DATE) Integer Returns the year from a DATE or timestamp. The result is an integer such as 2021. -" -D1FAFA3A73F77B401F49CC641BE44D61BC9C0689_4,D1FAFA3A73F77B401F49CC641BE44D61BC9C0689," date_weeks_difference(DATE1, DATE2) Real Returns the time in weeks from the date or timestamp represented by DATE1 to that represented by DATE2, as a real number. This is based on a week of 7.0 days. If DATE2 is before DATE1, this function returns a negative number. - date_years_difference (DATE1, DATE2) Real Returns the time in years from the date or timestamp represented by DATE1 to that represented by DATE2, as a real number. This is an approximate figure based on a year of 365.25 days. If DATE2 is before DATE1, this function returns a negative number. - date_from_ywd(YEAR, WEEK, DAY) Integer Converts the year, week in year, and day in week, to a date using the ISO 8601 standard. - date_iso_day(DATE) Integer Returns the day in the week from the date using the ISO 8601 standard. - date_iso_week(DATE) Integer Returns the week in the year from the date using the ISO 8601 standard. - date_iso_year(DATE) Integer Returns the year from the date using the ISO 8601 standard. - time_before(TIME1, TIME2) Boolean Returns a value of true if TIME1 represents a time or timestamp before that represented by TIME2. Otherwise, this function returns a value of 0. - time_hours_difference (TIME1, TIME2) Real Returns the time difference in hours between the times or timestamps represented by TIME1 and TIME2, as a real number. If you select Rollover days/mins in the flow properties, a higher value of TIME1 is taken to refer to the previous day. If you don't select the rollover option, a higher value of TIME1 causes the returned value to be negative. - time_in_hours(TIME) Real Returns the time in hours represented by TIME, as a real number. For example, under time format HHMM, the expression time_in_hours('0130') evaluates to 1.5. TIME can represent a time or a timestamp. -" -D1FAFA3A73F77B401F49CC641BE44D61BC9C0689_5,D1FAFA3A73F77B401F49CC641BE44D61BC9C0689," time_in_mins(TIME) Real Returns the time in minutes represented by TIME, as a real number. TIME can represent a time or a timestamp. - time_in_secs(TIME) Integer Returns the time in seconds represented by TIME, as an integer. TIME can represent a time or a timestamp. -" -299CEE894DFF422AAC8BF49B53CAC700DE1B172D,299CEE894DFF422AAC8BF49B53CAC700DE1B172D," Global functions - -The functions @MEAN, @SUM, @MIN, @MAX, and @SDEV work on, at most, all of the records read up to and including the current one. In some cases, however, it is useful to be able to work out how values in the current record compare with values seen in the entire data set. Using a Set Globals node to generate values across the entire data set, you can access these values in a CLEM expression using the global functions. - -For example, - -@GLOBAL_MAX(Age) - -returns the highest value of Age in the data set, while the expression - -(Value - @GLOBAL_MEAN(Value)) / @GLOBAL_SDEV(Value) - -expresses the difference between this record's Value and the global mean as a number of standard deviations. You can use global values only after they have been calculated by a Set Globals node. - - - -CLEM global functions - -Table 1. CLEM global functions - - Function Result Description - - @GLOBAL_MAX(FIELD) Number Returns the maximum value for FIELD over the whole data set, as previously generated by a Set Globals node. FIELD must be the name of a numeric, date/time/datetime, or string field. If the corresponding global value has not been set, an error occurs. - @GLOBAL_MIN(FIELD) Number Returns the minimum value for FIELD over the whole data set, as previously generated by a Set Globals node. FIELD must be the name of a numeric, date/time/datetime, or string field. If the corresponding global value has not been set, an error occurs. - @GLOBAL_SDEV(FIELD) Number Returns the standard deviation of values for FIELD over the whole data set, as previously generated by a Set Globals node. FIELD must be the name of a numeric field. If the corresponding global value has not been set, an error occurs. -" -C6379E4ACDD7B1C335E9944B8D9DBB08DB220420,C6379E4ACDD7B1C335E9944B8D9DBB08DB220420," Information functions - -You can use information functions to gain insight into the values of a particular field. They're typically used to derive flag fields. - -For example, the @BLANK function creates a flag field indicating records whose values are blank for the selected field. Similarly, you can check the storage type for a field using any of the storage type functions, such as is_string. - - - -CLEM information functions - -Table 1. CLEM information functions - - Function Result Description - - @BLANK(FIELD) Boolean Returns true for all records whose values are blank according to the blank-handling rules set in an upstream Type node or source node (Types tab). - @NULL(ITEM) Boolean Returns true for all records whose values are undefined. Undefined values are system null values, displayed in SPSS Modeler as $null$. - is_date(ITEM) Boolean Returns true for all records whose type is a date. - is_datetime(ITEM) Boolean Returns true for all records whose type is a date, time, or timestamp. - is_integer(ITEM) Boolean Returns true for all records whose type is an integer. - is_number(ITEM) Boolean Returns true for all records whose type is a number. - is_real(ITEM) Boolean Returns true for all records whose type is a real. - is_string(ITEM) Boolean Returns true for all records whose type is a string. -" -A67EA42903BF8BE22AEB379891B7E1CA3EB2E4D1,A67EA42903BF8BE22AEB379891B7E1CA3EB2E4D1," Logical functions - -CLEM expressions can be used to perform logical operations. - - - -CLEM logical functions - -Table 1. CLEM logical functions - - Function Result Description - - COND1 and COND2 Boolean This operation is a logical conjunction and returns a true value if both COND1 and COND2 are true. If COND1 is false, then COND2 is not evaluated; this makes it possible to have conjunctions where COND1 first tests that an operation in COND2 is legal. For example, length(Label) >=6 and Label(6) = 'x'. - COND1 or COND2 Boolean This operation is a logical (inclusive) disjunction and returns a true value if either COND1 or COND2 is true or if both are true. If COND1 is true, COND2 is not evaluated. - not(COND) Boolean This operation is a logical negation and returns a true value if COND is false. Otherwise, this operation returns a value of 0. -" -EEC0EB0502DEF7B7ADB112F8D7D4C38E1F6D9170_0,EEC0EB0502DEF7B7ADB112F8D7D4C38E1F6D9170," Numeric functions - -CLEM contains a number of commonly used numeric functions. - - - -CLEM numeric functions - -Table 1. CLEM numeric functions - - Function Result Description - - –NUM Number Used to negate NUM. Returns the corresponding number with the opposite sign. - NUM1 + NUM2 Number Returns the sum of NUM1 and NUM2. - NUM1 –NUM2 Number Returns the value of NUM2 subtracted from NUM1. - NUM1 * NUM2 Number Returns the value of NUM1 multiplied by NUM2. - NUM1 / NUM2 Number Returns the value of NUM1 divided by NUM2. - INT1 div INT2 Number Used to perform integer division. Returns the value of INT1 divided by INT2. - INT1 rem INT2 Number Returns the remainder of INT1 divided by INT2. For example, INT1 – (INT1 div INT2) INT2. - BASE POWER Number Returns BASE raised to the power POWER, where either may be any number (except that BASE must not be zero if POWER is zero of any type other than integer 0). If POWER is an integer, the computation is performed by successively multiplying powers of BASE. Thus, if BASE is an integer, the result will be an integer. If POWER is integer 0, the result is always a 1 of the same type as BASE. Otherwise, if POWER is not an integer, the result is computed as exp(POWER * log(BASE)). - abs(NUM) Number Returns the absolute value of NUM, which is always a number of the same type. - exp(NUM) Real Returns e raised to the power NUM, where e is the base of natural logarithms. - fracof(NUM) Real Returns the fractional part of NUM, defined as NUM–intof(NUM). - intof(NUM) Integer Truncates its argument to an integer. It returns the integer of the same sign as NUM and with the largest magnitude such that abs(INT) <= abs(NUM). -" -EEC0EB0502DEF7B7ADB112F8D7D4C38E1F6D9170_1,EEC0EB0502DEF7B7ADB112F8D7D4C38E1F6D9170," log(NUM) Real Returns the natural (base e) logarithm of NUM, which must not be a zero of any kind. - log10(NUM) Real Returns the base 10 logarithm of NUM, which must not be a zero of any kind. This function is defined as log(NUM) / log(10). - negate(NUM) Number Used to negate NUM. Returns the corresponding number with the opposite sign. - round(NUM) Integer Used to round NUM to an integer by taking intof(NUM+0.5) if NUM is positive or intof(NUM–0.5) if NUM is negative. - sign(NUM) Number Used to determine the sign of NUM. This operation returns –1, 0, or 1 if NUM is an integer. If NUM is a real, it returns –1.0, 0.0, or 1.0, depending on whether NUM is negative, zero, or positive. - sqrt(NUM) Real Returns the square root of NUM. NUM must be positive. - sum_n(LIST) Number Returns the sum of values from a list of numeric fields or null if all of the field values are null. -" -29DEEC30687F805460A83DD924D2F119274D25F8,29DEEC30687F805460A83DD924D2F119274D25F8," Probability functions - -Probability functions return probabilities based on various distributions, such as the probability that a value from Student's t distribution will be less than a specific value. - - - -CLEM probability functions - -Table 1. CLEM probability functions - - Function Result Description - - cdf_chisq(NUM, DF) Real Returns the probability that a value from the chi-square distribution with the specified degrees of freedom will be less than the specified number. - cdf_f(NUM, DF1, DF2) Real Returns the probability that a value from the F distribution, with degrees of freedom DF1 and DF2, will be less than the specified number. -" -9789F3A8936AD06C653C1C7AEB421C70FFD7C3E1,9789F3A8936AD06C653C1C7AEB421C70FFD7C3E1," Random functions - -The functions listed on this page can be used to randomly select items or randomly generate numbers. - - - -CLEM random functions - -Table 1. CLEM random functions - - Function Result Description - - oneof(LIST) Any Returns a randomly chosen element of LIST. List items should be entered as [ITEM1,ITEM2,...,ITEM_N]. Note that a list of field names can also be specified. -" -BACAF30043E33912E3D7F174B3F8CF858CB3093A,BACAF30043E33912E3D7F174B3F8CF858CB3093A," Sequence functions - -For some operations, the sequence of events is important. - -The application allows you to work with the following record sequences: - - - -* Sequences and time series -* Sequence functions -* Record indexing -* Averaging, summing, and comparing values -* Monitoring change—differentiation -* @SINCE -* Offset values -* Additional sequence facilities - - - -For many applications, each record passing through a stream can be considered as an individual case, independent of all others. In such situations, the order of records is usually unimportant. - -For some classes of problems, however, the record sequence is very important. These are typically time series situations, in which the sequence of records represents an ordered sequence of events or occurrences. Each record represents a snapshot at a particular instant in time; much of the richest information, however, might be contained not in instantaneous values but in the way in which such values are changing and behaving over time. - -Of course, the relevant parameter may be something other than time. For example, the records could represent analyses performed at distances along a line, but the same principles would apply. - -Sequence and special functions are immediately recognizable by the following characteristics: - - - -* They are all prefixed by @ -* Their names are given in uppercase - - - -Sequence functions can refer to the record currently being processed by a node, the records that have already passed through a node, and even, in one case, records that have yet to pass through a node. Sequence functions can be mixed freely with other components of CLEM expressions, although some have restrictions on what can be used as their arguments. -" -88E4E066B89D0A6993F31EA337930D962B76D6D1,88E4E066B89D0A6993F31EA337930D962B76D6D1," SoundEx functions - -SoundEx is a method used to find strings when the sound is known but the precise spelling isn't known. - -Developed in 1918, the method searches out words with similar sounds based on phonetic assumptions about how certain letters are pronounced. SoundEx can be used to search names in a database (for example, where spellings and pronunciations for similar names may vary). The basic SoundEx algorithm is documented in a number of sources and, despite known limitations (for example, leading letter combinations such as ph and f won't match even though they sound the same), is supported in some form by most databases. - - - -CLEM soundex functions - -Table 1. CLEM soundex functions - - Function Result Description - -" -2C0EBF0CCB497F41C14A5895EF97C01864BFC3D2,2C0EBF0CCB497F41C14A5895EF97C01864BFC3D2," Spatial functions - -Spatial functions can be used with geospatial data. For example, they allow you to calculate the distances between two points, the area of a polygon, and so on. - -There can also be situations that require a merge of multiple geospatial data sets that are based on a spatial predicate (within, close to, and so on), which can be done through a merge condition. - -Notes: - - - -* These spatial functions don't apply to three-dimensional data. If you import three-dimensional data into a flow, only the first two dimensions are used by these functions. The z-axis values are ignored. -* Geospatial functions aren't supported. - - - - - -CLEM spatial functions - -Table 1. CLEM spatial functions - - Function Result Description - - close_to(SHAPE,SHAPE,NUM) Boolean Tests whether 2 shapes are within a certain DISTANCE of each other. If a projected coordinate system is used, DISTANCE is in meters. If no coordinate system is used, it is an arbitrary unit. - crosses(SHAPE,SHAPE) Boolean Tests whether 2 shapes cross each other. This function is suitable for 2 linestring shapes, or 1 linestring and 1 polygon. - overlap(SHAPE,SHAPE) Boolean Tests whether there is an intersection between 2 polygons and that the intersection is interior to both shapes. - within(SHAPE,SHAPE) Boolean Tests whether the entirety of SHAPE1 is contained within a POLYGON. - area(SHAPE) Real Returns the area of the specified POLYGON. If a projected system is used, the function returns meters squared. If no coordinate system is used, it is an arbitrary unit. The shape must be a POLYGON or a MULTIPOLYGON. -" -4058D0B5222F1C34ABF1737A10DA705E27480606,4058D0B5222F1C34ABF1737A10DA705E27480606," Special fields - -Special functions are used to denote the specific fields under examination, or to generate a list of fields as input. - -For example, when deriving multiple fields at once, you should use @FIELD to denote perform this derive action on the selected fields. Using the expression log(@FIELD) derives a new log field for each selected field. - - - -CLEM special fields - -Table 1. CLEM special fields - - Function Result Description - - @FIELD Any Performs an action on all fields specified in the expression context. - @TARGET Any When a CLEM expression is used in a user-defined analysis function, @TARGET represents the target field or ""correct value"" for the target/predicted pair being analyzed. This function is commonly used in an Analysis node. - @PREDICTED Any When a CLEM expression is used in a user-defined analysis function, @PREDICTED represents the predicted value for the target/predicted pair being analyzed. This function is commonly used in an Analysis node. - @PARTITION_FIELD Any Substitutes the name of the current partition field. - @TRAINING_PARTITION Any Returns the value of the current training partition. For example, to select training records using a Select node, use the CLEM expression: @PARTITION_FIELD = @TRAINING_PARTITION This ensures that the Select node will always work regardless of which values are used to represent each partition in the data. - @TESTING_PARTITION Any Returns the value of the current testing partition. - @VALIDATION_PARTITION Any Returns the value of the current validation partition. - @FIELDS_BETWEEN(start, end) Any Returns the list of field names between the specified start and end fields (inclusive) based on the natural (that is, insert) order of the fields in the data. -" -9A83A33ABB4C6A12A7457D3711C2511EB3982B2C_0,9A83A33ABB4C6A12A7457D3711C2511EB3982B2C," String functions - -With CLEM, you can run operations to compare strings, create strings, or access characters. - -In CLEM, a string is any sequence of characters between matching double quotation marks (""string quotes""). Characters (CHAR) can be any single alphanumeric character. They're declared in CLEM expressions using single back quotes in the form of , such as z , A , or 2 . Characters that are out-of-bounds or negative indices to a string will result in undefined behavior. - -Note: Comparisons between strings that do and do not use SQL pushback may generate different results where trailing spaces exist. - - - -CLEM string functions - -Table 1. CLEM string functions - - Function Result Description - - allbutfirst(N, STRING) String Returns a string, which is STRING with the first N characters removed. - allbutlast(N, STRING) String Returns a string, which is STRING with the last characters removed. - alphabefore(STRING1, STRING2) Boolean Used to check the alphabetical ordering of strings. Returns true if STRING1 precedes STRING2. - count_substring(STRING, SUBSTRING) Integer Returns the number of times the specified substring occurs within the string. For example, count_substring(""foooo.txt"", ""oo"") returns 3. - endstring(LENGTH, STRING) String Extracts the last N characters from the specified string. If the string length is less than or equal to the specified length, then it is unchanged. - hasendstring(STRING, SUBSTRING) Integer This function is the same as isendstring(SUBSTRING, STRING). - hasmidstring(STRING, SUBSTRING) Integer This function is the same as ismidstring(SUBSTRING, STRING) (embedded substring). - hasstartstring(STRING, SUBSTRING) Integer This function is the same as isstartstring(SUBSTRING, STRING). - hassubstring(STRING, N, SUBSTRING) Integer This function is the same as issubstring(SUBSTRING, N, STRING), where N defaults to 1. -" -9A83A33ABB4C6A12A7457D3711C2511EB3982B2C_1,9A83A33ABB4C6A12A7457D3711C2511EB3982B2C," hassubstring(STRING, SUBSTRING) Integer This function is the same as issubstring(SUBSTRING, 1, STRING), where N defaults to 1. - isalphacode(CHAR) Boolean Returns a value of true if CHAR is a character in the specified string (often a field name) whose character code is a letter. Otherwise, this function returns a value of 0. For example, isalphacode(produce_num(1)). - isendstring(SUBSTRING, STRING) Integer If the string STRING ends with the substring SUBSTRING, then this function returns the integer subscript of SUBSTRING in STRING. Otherwise, this function returns a value of 0. - islowercode(CHAR) Boolean Returns a value of true if CHAR is a lowercase letter character for the specified string (often a field name). Otherwise, this function returns a value of 0. For example, both () and islowercode(country_name(2)) are valid expressions. - ismidstring(SUBSTRING, STRING) Integer If SUBSTRING is a substring of STRING but does not start on the first character of STRING or end on the last, then this function returns the subscript at which the substring starts. Otherwise, this function returns a value of 0. - isnumbercode(CHAR) Boolean Returns a value of true if CHAR for the specified string (often a field name) is a character whose character code is a digit. Otherwise, this function returns a value of 0. For example, isnumbercode(product_id(2)). - isstartstring(SUBSTRING, STRING) Integer If the string STRING starts with the substring SUBSTRING, then this function returns the subscript 1. Otherwise, this function returns a value of 0. - issubstring(SUBSTRING, N, STRING) Integer Searches the string STRING, starting from its Nth character, for a substring equal to the string SUBSTRING. If found, this function returns the integer subscript at which the matching substring begins. Otherwise, this function returns a value of 0. If N is not given, this function defaults to 1. -" -9A83A33ABB4C6A12A7457D3711C2511EB3982B2C_2,9A83A33ABB4C6A12A7457D3711C2511EB3982B2C," issubstring(SUBSTRING, STRING) Integer Searches the string STRING. If found, this function returns the integer subscript at which the matching substring begins. Otherwise, this function returns a value of 0. - issubstring_count(SUBSTRING, N, STRING) Integer Returns the index of the Nth occurrence of SUBSTRING within the specified STRING. If there are fewer than N occurrences of SUBSTRING, 0 is returned. - issubstring_lim(SUBSTRING, N, STARTLIM, ENDLIM, STRING) Integer This function is the same as issubstring, but the match is constrained to start on STARTLIM and to end on ENDLIM. The STARTLIM or ENDLIM constraints may be disabled by supplying a value of false for either argument—for example, issubstring_lim(SUBSTRING, N, false, false, STRING) is the same as issubstring. - isuppercode(CHAR) Boolean Returns a value of true if CHAR is an uppercase letter character. Otherwise, this function returns a value of 0. For example, both () and isuppercode(country_name(2)) are valid expressions. - last(STRING) String Returns the last character CHAR of STRING (which must be at least one character long). - length(STRING) Integer Returns the length of the string STRING (that is, the number of characters in it). -" -9A83A33ABB4C6A12A7457D3711C2511EB3982B2C_3,9A83A33ABB4C6A12A7457D3711C2511EB3982B2C," locchar(CHAR, N, STRING) Integer Used to identify the location of characters in symbolic fields. The function searches the string STRING for the character CHAR, starting the search at the Nth character of STRING. This function returns a value indicating the location (starting at N) where the character is found. If the character is not found, this function returns a value of 0. If the function has an invalid offset (N) (for example, an offset that is beyond the length of the string), this function returns $null$.
For example, locchar(n, 2, web_page) searches the field called web_page for the n character beginning at the second character in the field value.
Be sure to use single back quotes to encapsulate the specified character. - locchar_back(CHAR, N, STRING) Integer Similar to locchar, except that the search is performed backward starting from the Nth character. For example, locchar_back(n, 9, web_page) searches the field web_page starting from the ninth character and moving backward toward the start of the string. If the function has an invalid offset (for example, an offset that is beyond the length of the string), this function returns $null$. Ideally, you should use locchar_back in conjunction with the function length() to dynamically use the length of the current value of the field. For example, locchar_back(n, (length(web_page)), web_page). - lowertoupper(CHAR)lowertoupper (STRING) CHAR or String Input can be either a string or character, which is used in this function to return a new item of the same type, with any lowercase characters converted to their uppercase equivalents. For example, lowertoupper(a), lowertoupper(“My string”), and lowertoupper(field_name(2)) are all valid expressions. -" -9A83A33ABB4C6A12A7457D3711C2511EB3982B2C_4,9A83A33ABB4C6A12A7457D3711C2511EB3982B2C," matches Boolean Returns true if a string matches a specified pattern. The pattern must be a string literal; it can't be a field name containing a pattern. You can include a question mark (?) in the pattern to match exactly one character; an asterisk () matches zero or more characters. To match a literal question mark or asterisk (rather than using these as wildcards), use a backslash () as an escape character. - replace(SUBSTRING, NEWSUBSTRING, STRING) String Within the specified STRING, replace all instances of SUBSTRING with NEWSUBSTRING. - replicate(COUNT, STRING) String Returns a string that consists of the original string copied the specified number of times. - stripchar(CHAR,STRING) String Enables you to remove specified characters from a string or field. You can use this function, for example, to remove extra symbols, such as currency notations, from data to achieve a simple number or name. For example, using the syntax stripchar($, 'Cost') returns a new field with the dollar sign removed from all values.
Be sure to use single back quotes to encapsulate the specified character. - skipchar(CHAR, N, STRING) Integer Searches the string STRING for any character other than CHAR, starting at the Nth character. This function returns an integer substring indicating the point at which one is found or 0 if every character from the Nth onward is a CHAR. If the function has an invalid offset (for example, an offset that is beyond the length of the string), this function returns $null$.
locchar is often used in conjunction with the skipchar functions to determine the value of N (the point at which to start searching the string). For example, skipchar(s, (locchar(s, 1, ""MyString"")), ""MyString""). - skipchar_back(CHAR, N, STRING) Integer Similar to skipchar, except that the search is performed backward, starting from the Nth character. -" -9A83A33ABB4C6A12A7457D3711C2511EB3982B2C_5,9A83A33ABB4C6A12A7457D3711C2511EB3982B2C," startstring(N, STRING) String Extracts the first N characters from the specified string. If the string length is less than or equal to the specified length, then it is unchanged. - strmember(CHAR, STRING) Integer Equivalent to locchar(CHAR, 1, STRING). It returns an integer substring indicating the point at which CHAR first occurs, or 0. If the function has an invalid offset (for example, an offset that is beyond the length of the string), this function returns $null$. - subscrs(N, STRING) CHAR Returns the Nth character CHAR of the input string STRING. This function can also be written in a shorthand form as STRING(N). For example, lowertoupper(“name”(1)) is a valid expression. - substring(N, LEN, STRING) String Returns a string SUBSTRING, which consists of the LEN characters of the string STRING, starting from the character at subscript N. - substring_between(N1, N2, STRING) String Returns the substring of STRING, which begins at subscript N1 and ends at subscript N2. - textsplit(STRING, N, CHAR) String textsplit(STRING,N,CHAR) returns the substring between the Nth-1 and Nth occurrence of CHAR. If N is 1, then it will return the substring from the beginning of STRING up to but not including CHAR. If N-1 is the last occurrence of CHAR, then it will return the substring from the Nth-1 occurrence of CHAR to the end of the string. - trim(STRING) String Removes leading and trailing white space characters from the specified string. - trimstart(STRING) String Removes leading white space characters from the specified string. - trimend(STRING) String Removes trailing white space characters from the specified string. - unicode_char(NUM) CHAR Input must be decimal, not hexadecimal values. Returns the character with Unicode value NUM. -" -2904E26946523BB3E78975F68A822F5F2A32B9F5,2904E26946523BB3E78975F68A822F5F2A32B9F5," Trigonometric functions - -All of the functions in this section either take an angle as an argument or return one as a result. - - - -CLEM trigonometric functions - -Table 1. CLEM trigonometric functions - - Function Result Description - - arccos(NUM) Real Computes the arccosine of the specified angle. - arccosh(NUM) Real Computes the hyperbolic arccosine of the specified angle. - arcsin(NUM) Real Computes the arcsine of the specified angle. - arcsinh(NUM) Real Computes the hyperbolic arcsine of the specified angle. - arctan(NUM) Real Computes the arctangent of the specified angle. - arctan2(NUM_Y, NUM_X) Real Computes the arctangent of NUM_Y / NUM_X and uses the signs of the two numbers to derive quadrant information. The result is a real in the range - pi < ANGLE <= pi (radians) – 180 < ANGLE <= 180 (degrees) - arctanh(NUM) Real Computes the hyperbolic arctangent of the specified angle. - cos(NUM) Real Computes the cosine of the specified angle. - cosh(NUM) Real Computes the hyperbolic cosine of the specified angle. - pi Real This constant is the best real approximation to pi. - sin(NUM) Real Computes the sine of the specified angle. - sinh(NUM) Real Computes the hyperbolic sine of the specified angle. -" -621083EB36CF3896B77D22EDBCC23FD2716F6B4A,621083EB36CF3896B77D22EDBCC23FD2716F6B4A," Converting date and time values - -Note that conversion functions (and any other functions that require a specific type of input, such as a date or time value) depend on the current formats specified in the flow properties. - -For example, if you have a field named DATE that's stored as a string with values Jan 2021, Feb 2021, and so on, you could convert it to date storage as follows: - -to_date(DATE) - -For this conversion to work, select the matching date format MON YYYY as the default date format for the flow. - -Dates stored as numbers. Note that DATE in the previous example is the name of a field, while to_date is a CLEM function. If you have dates stored as numbers, you can convert them using the datetime_date function, where the number is interpreted as a number of seconds since the base date (or epoch). - -datetime_date(DATE) - -By converting a date to a number of seconds (and back), you can perform calculations such as computing the current date plus or minus a fixed number of days. For example: - -datetime_date((date_in_days(DATE)-7)606024) -" -ADBEF9D5635EB271A8BD78B23064DCBA1A1915A6,ADBEF9D5635EB271A8BD78B23064DCBA1A1915A6," CLEM (legacy) language reference - -This section describes the Control Language for Expression Manipulation (CLEM), which is a powerful tool used to analyze and manipulate the data used in SPSS Modeler flows. - -You can use CLEM within nodes to perform tasks ranging from evaluating conditions or deriving values to inserting data into reports. CLEM expressions consist of values, field names, operators, and functions. Using the correct syntax, you can create a wide variety of powerful data operations. - -Figure 1. Expression Builder - -![Expression Builder](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/expressionbuilder_full.png) -" -88467827811ED045A648A3C215F5B91D43EB49CD,88467827811ED045A648A3C215F5B91D43EB49CD," Working with multiple-response data - -You can analyze multiple-response data using a number of comparison functions. - -Available comparison functions include: - - - -* value_at -* first_index / last_index -* first_non_null / last_non_null -* first_non_null_index / last_non_null_index -* min_index / max_index - - - -For example, suppose a multiple-response question asked for the first, second, and third most important reasons for deciding on a particular purchase (for example, price, personal recommendation, review, local supplier, other). In this case, you might determine the importance of price by deriving the index of the field in which it was first included: - -first_index(""price"", [Reason1 Reason2 Reason3]) - -Similarly, suppose you asked customers to rank three cars in order of likelihood to purchase and coded the responses in three separate fields, as follows: - - - -Car ranking example - -Table 1. Car ranking example - - customer id car1 car2 car3 - - 101 1 3 2 - 102 3 2 1 - 103 2 3 1 - - - -In this case, you could determine the index of the field for the car they like most (ranked #1, or the lowest rank) using the min_index function: - -min_index(['car1' 'car2' 'car3']) - -See [Comparison functions](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_comparison.htmlclem_function_ref_comparison) for more information. -" -BC314650433831859C400BFFEFE5F919ED8735EA,BC314650433831859C400BFFEFE5F919ED8735EA," Working with numbers - -Numerous standard operations on numeric values are available in SPSS Modeler. - - - -* Calculating the sine of the specified angle—sin(NUM) -* Calculating the natural log of numeric fields—log(NUM) -* Calculating the sum of two numbers—NUM1 + NUM2 - - - -See [Numeric functions](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_numeric.htmlclem_function_ref_numeric) for more information. -" -595BB1738027C777C1EB5A69631587923690ABC4_0,595BB1738027C777C1EB5A69631587923690ABC4," Working with strings - -There are a number of operations available for strings. - - - -* Converting a string to uppercase or lowercase—uppertolower(CHAR). -* Removing specified characters, such as ID_ or $ , from a string variable—stripchar(CHAR,STRING). -* Determining the length (number of characters) for a string variable—length(STRING). -* Checking the alphabetical ordering of string values—alphabefore(STRING1, STRING2). -* Removing leading or trailing white space from values—trim(STRING), trim_start(STRING), or trimend(STRING). -* Extract the first or last n characters from a string—startstring(LENGTH, STRING) or endstring(LENGTH, STRING). For example, suppose you have a field named item that combines a product name with a four-digit ID code (ACME CAMERA-D109). To create a new field that contains only the four-digit code, specify the following formula in a Derive node: - -endstring(4, item) -* Matching a specific pattern—STRING matches PATTERN. For example, to select persons with ""market"" anywhere in their job title, you could specify the following in a Select node: - -job_title matches ""market"" -* Replacing all instances of a substring within a string—replace(SUBSTRING, NEWSUBSTRING, STRING). For example, to replace all instances of an unsupported character, such as a vertical pipe ( | ), with a semicolon prior to text mining, use the replace function in a Filler node. Under Fill in fields in the node properties, select all fields where the character may occur. For the Replace condition, select Always, and specify the following condition under Replace with. - -replace('|',';',@FIELD) -* Deriving a flag field based on the presence of a specific substring. For example, you could use a string function in a Derive node to generate a separate flag field for each response with an expression such as: - - - -" -595BB1738027C777C1EB5A69631587923690ABC4_1,595BB1738027C777C1EB5A69631587923690ABC4,"hassubstring(museums,""museum_of_design"") - -See [String functions](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_string.htmlclem_function_ref_string) for more information. -" -D1FEF8C7F5BE28316CAA952CCC76281E6F3FE12F,D1FEF8C7F5BE28316CAA952CCC76281E6F3FE12F," Summarizing multiple fields - -The CLEM language includes a number of functions that return summary statistics across multiple fields. - -These functions may be particularly useful in analyzing survey data, where multiple responses to a question may be stored in multiple fields. See [Working with multiple-response data](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_overview_multiple_response_data.htmlclem_overview_multiple_response_data) for more information. -" -DAD2EDE59535330241F2FEBDF9BF99E21DEB4393,DAD2EDE59535330241F2FEBDF9BF99E21DEB4393," Working with times and dates - -Time and date formats may vary depending on your data source and locale. The formats of date and time are specific to each flow and are set in the flow properties. - -The following examples are commonly used functions for working with date/time fields. -" -0F686BF5943844896A5385E01D440548081D2688,0F686BF5943844896A5385E01D440548081D2688," Handling blanks and missing values - -Replacing blanks or missing values is a common data preparation task for data miners. CLEM provides you with a number of tools to automate blank handling. - -The Filler node is the most common place to work with blanks; however, the following functions can be used in any node that accepts CLEM expressions: - - - -* @BLANK(FIELD) can be used to determine records whose values are blank for a particular field, such as Age. -* @NULL(FIELD) can be used to determine records whose values are system-missing for the specified field(s). In SPSS Modeler, system-missing values are displayed as $null$ values. - - - -See [Functions handling blanks and null values](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_blanksnulls.htmlclem_function_ref_blanksnulls) for more information. -" -23296AAD76933152D5D3E9DD875EBBD3FB7575EA,23296AAD76933152D5D3E9DD875EBBD3FB7575EA," Building CLEM (legacy) expressions -" -7B9348596E2F005F89842D1B997FA09BDCBE8F06,7B9348596E2F005F89842D1B997FA09BDCBE8F06," Conventions in function descriptions - -This page describes the conventions used throughout this guide when referring to items in a function. - - - -Conventions in function descriptions - -Table 1. Conventions in function descriptions - - Convention Description - - BOOL A Boolean, or flag, such as true or false. - NUM, NUM1, NUM2 Any number. - REAL, REAL1, REAL2 Any real number, such as 1.234 or –77.01. - INT, INT1, INT2 Any integer, such as 1 or –77. - CHAR A character code, such as A . - STRING A string, such as ""referrerID"". - LIST A list of items, such as [""abc"" ""def""] or [A1, A2, A3] or [1 2 4 16]. - ITEM A field, such as Customer or extract_concept. - DATE A date field, such as start_date, where values are in a format such as DD-MON-YYYY. - TIME A time field, such as power_flux, where values are in a format such as HHMMSS. - - - -Functions in this guide are listed with the function in one column, the result type (integer, string, and so on) in another, and a description (where available) in a third column. For example, following is a description of the rem function. - - - -rem function description - -Table 2. rem function description - - Function Result Description - - INT1 rem INT2 Number Returns the remainder of INT1 divided by INT2. For example, INT1 – (INT1 div INT2) INT2. - - - -Details on usage conventions, such as how to list items or specify characters in a function, are described elsewhere. See [CLEM datatypes](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_datatypes.htmlclem_datatypes) for more information. -" -C2185A8C9156C6B38D76BD3FD29A833D96A5762B,C2185A8C9156C6B38D76BD3FD29A833D96A5762B," Dates - -Date calculations are based on a ""baseline"" date, which is specified in the flow properties. The default baseline date is 1 January 1900. - -The CLEM language supports the following date formats. - - - -CLEM language date formats - -Table 1. CLEM language date formats - - Format Examples - - DDMMYY 150163 - MMDDYY 011563 - YYMMDD 630115 - YYYYMMDD 19630115 - YYYYDDD Four-digit year followed by a three-digit number representing the day of the year—for example, 2000032 represents the 32nd day of 2000, or 1 February 2000. - DAY Day of the week in the current locale—for example, Monday, Tuesday, ..., in English. - MONTH Month in the current locale—for example, January, February, …. - DD/MM/YY 15/01/63 - DD/MM/YYYY 15/01/1963 - MM/DD/YY 01/15/63 - MM/DD/YYYY 01/15/1963 - DD-MM-YY 15-01-63 - DD-MM-YYYY 15-01-1963 - MM-DD-YY 01-15-63 - MM-DD-YYYY 01-15-1963 - DD.MM.YY 15.01.63 - DD.MM.YYYY 15.01.1963 - MM.DD.YY 01.15.63 - MM.DD.YYYY 01.15.1963 - DD-MON-YY 15-JAN-63, 15-jan-63, 15-Jan-63 - DD/MON/YY 15/JAN/63, 15/jan/63, 15/Jan/63 - DD.MON.YY 15.JAN.63, 15.jan.63, 15.Jan.63 - DD-MON-YYYY 15-JAN-1963, 15-jan-1963, 15-Jan-1963 - DD/MON/YYYY 15/JAN/1963, 15/jan/1963, 15/Jan/1963 - DD.MON.YYYY 15.JAN.1963, 15.jan.1963, 15.Jan.1963 - MON YYYY Jan 2004 -" -FE88457CA86FFE3BE30873156A7A0A4FD12975AF,FE88457CA86FFE3BE30873156A7A0A4FD12975AF," Accessing the Expression Builder - -The Expression Builder is available in all nodes where CLEM expressions are used, including Select, Balance, Derive, Filler, Analysis, Report, and Table nodes. - -You can open it by double-clicking the node to open its properties, then click the calculator button by the formula field. -" -56EA4620B049A9E291BF198E71D0C58C2018686D,56EA4620B049A9E291BF198E71D0C58C2018686D," Checking CLEM expressions - -Click Validate in the Expression Builder to validate an expression. - -Expressions that haven't been checked are displayed in red. If errors are found, a message indicating the cause is displayed. - -The following items are checked: - - - -* Correct quoting of values and field names -* Correct usage of parameters and global variables -* Valid usage of operators -* Existence of referenced fields -* Existence and definition of referenced globals - - - -If you encounter errors in syntax, try creating the expression using the lists and operator buttons rather than typing the expression manually. This method automatically adds the proper quotes for fields and values. - -Note: Field names that contain separators must be surrounded by single quotes. To automatically add quotes, you can create expressions using the lists and operator buttons rather than typing expressions manually. The following characters in field names may cause errors: * ! "" $% & '() = |-^ ¥ @"" ""+ "" ""<>? . ,/ :; →(arrow mark), □ △ (graphic mark, etc.) -" -6FD8A950F1EBE6B021EA9D4C775A5CA8660A1101,6FD8A950F1EBE6B021EA9D4C775A5CA8660A1101," Creating expressions - -The Expression Builder provides not only complete lists of fields, functions, and operators but also access to data values if your data is instantiated. -" -B8044B03933E3FCEA5BCF6362199ED083EC2F20F,B8044B03933E3FCEA5BCF6362199ED083EC2F20F," Database functions - -You can run an SPSS Modeler desktop stream file ( .str) that contains database functions. - -But database functions aren't available in the Expression Builder user interface, and you can't edit them. -" -841465AD74B0AFDBEC9EAFF7B038AFC4C000E96C,841465AD74B0AFDBEC9EAFF7B038AFC4C000E96C," Selecting fields - -The field list displays all fields available at this point in the data stream. Double-click a field from the list to add it to your expression. - -After selecting a field, you can also select an associated value from the value list. -" -0093065541AA4C3E90E47E3ACE89596155EA1735_0,0093065541AA4C3E90E47E3ACE89596155EA1735," Selecting functions - -The function list displays all available SPSS Modeler functions and operators. Scroll to select a function from the list, or, for easier searching, use the drop-down list to display a subset of functions or operators. Available functions are grouped into categories for easier searching. - -Most of these categories are described in the Reference section of the CLEM language description. For more information, see [Functions reference](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref.htmlclem_function_ref). - -The other categories are as follows. - - - -* General Functions. Contains a selection of some of the most commonly-used functions. -* Recently Used. Contains a list of CLEM functions used within the current session. -* @ Functions. Contains a list of all the special functions, which have their names preceded by an ""@"" sign. Note: The @DIFF1(FIELD1,FIELD2) and @DIFF2(FIELD1,FIELD2) functions require that the two field types are the same (for example, both Integer or both Long or both Real). -* Database Functions. If the flow includes a database connection, this selection lists the functions available from within that database, including user-defined functions (UDFs). For more information, see [Database functions](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/expressionbuild_database_functions.htmlexpressionbuild_database_functions). -* Database Aggregates. If the flow includes a database connection, this selection lists the aggregation options available from within that database. These options are available in the Expression Builder of the Aggregate node. -* Built-In Aggregates. Contains a list of the possible modes of aggregation that can be used. -* Operators. Lists all the operators you can use when building expressions. Operators are also available from the buttons in the center of the dialog box. -" -0093065541AA4C3E90E47E3ACE89596155EA1735_1,0093065541AA4C3E90E47E3ACE89596155EA1735,"* All Functions. Contains a complete list of available CLEM functions. - - - -Double-click a function to insert it into the expression field at the position of the cursor. -" -C89753519B91F85DC9E0ED54A3248CD82D5F2A9E,C89753519B91F85DC9E0ED54A3248CD82D5F2A9E," The Expression Builder - -You can type CLEM expressions manually or use the Expression Builder, which displays a complete list of CLEM functions and operators as well as data fields from the current flow, allowing you to quickly build expressions without memorizing the exact names of fields or functions. - -In addition, the Expression Builder controls automatically add the proper quotes for fields and values, making it easier to create syntactically correct expressions. - -Notes: - - - -" -F4F623D5A7C8913E227E962BD1F347B36AAB7B51,F4F623D5A7C8913E227E962BD1F347B36AAB7B51," Expressions and conditions - -CLEM expressions can return a result (used when deriving new values). - -For example: - -Weight * 2.2 -Age + 1 -sqrt(Signal-Echo) - -Or, they can evaluate true or false (used when selecting on a condition). For example: - -Drug = ""drugA"" -Age < 16 -not(PowerFlux) and Power > 2000 - -You can combine operators and functions arbitrarily in CLEM expressions. For example: - -sqrt(abs(Signal))* max(T1, T2) + Baseline - -Brackets and operator precedence determine the order in which the expression is evaluated. In this example, the order of evaluation is: - - - -* abs(Signal) is evaluated, and sqrt is applied to its result -* max(T1, T2) is evaluated -* The two results are multiplied: x has higher precedence than + -* Finally, Baseline is added to the result - - - -The descending order of precedence (that is, operations that are performed first to operations that are performed last) is as follows: - - - -* Function arguments -* Function calls -* xx -* x / mod div rem -* + – -* > < >= <= /== == = /= - - - -If you want to override precedence, or if you're in any doubt of the order of evaluation, you can use parentheses to make it explicit. For example: - -sqrt(abs(Signal))* (max(T1, T2) + Baseline) -" -85F8B4292483C5747AB2436A2D5D5377F1F6CAB9,85F8B4292483C5747AB2436A2D5D5377F1F6CAB9," Viewing or selecting values - -You can view field values from the Expression Builder. Note that data must be fully instantiated in an Import or Type node to use this feature, so that storage, types, and values are known. - -To view values for a field from the Expression Builder, select the required field and then use the Value list or perform a search with the Find in column Value field to find values for the selected field. You can then double-click a value to insert it into the current expression or list. - -For flag and nominal fields, all defined values are listed. For continuous (numeric range) fields, the minimum and maximum values are displayed. -" -B69246113E589F088E8E1302B32B57720BD27720,B69246113E589F088E8E1302B32B57720BD27720," Fields - -Names in CLEM expressions that aren’t names of functions are assumed to be field names. - -You can write these simply as Power, val27, state_flag, and so on, but if the name begins with a digit or includes non-alphabetic characters, such as spaces (with the exception of the underscore), place the name within single quotation marks (for example, 'Power Increase', '2nd answer', '101', '$P-NextField'). - -Note: Fields that are quoted but undefined in the data set will be misread as strings. -" -C528D240892080AECE146D29FB3496DDD0F1FD48_0,C528D240892080AECE146D29FB3496DDD0F1FD48," Find - -In the Expression Builder, you can search for fields, values, or functions. - -For example, to search for a value, place your cursor in the Find in column Value field and enter the text you want to search for. - -You can also search on special characters such as tabs or newline characters, classes or ranges of characters such as a through d, any digit or non-digit, and boundaries such as the beginning or end of a line. The following types of expressions are supported. - - - -Character matches - -Table 1. Character matches - - Characters Matches - - x The character x - \\ The backslash character - \0n The character with octal value 0n (0 <= n <= 7) - \0nn The character with octal value 0nn (0 <= n <= 7) - \0mnn The character with octal value 0mnn (0 <= m <= 3, 0 <= n <= 7) - \xhh The character with hexadecimal value 0xhh - \uhhhh The character with hexadecimal value 0xhhhh - \t The tab character ('\u0009') - \n The newline (line feed) character ('\u000A') - \r The carriage-return character ('\u000D') - \f The form-feed character ('\u000C') - \a The alert (bell) character ('\u0007') - \e The escape character ('\u001B') - \cx The control character corresponding to x - - - - - -Matching character classes - -Table 2. Matching character classes - - Character classes Matches - - [abc] a, b, or c (simple class) - [^abc] Any character except a, b, or c (subtraction) - [a-zA-Z] a through z or A through Z, inclusive (range) - [a-d[m-p]] a through d, or m through p (union). Alternatively this could be specified as [a-dm-p] - [a-z&&[def]] a through z, and d, e, or f (intersection) -" -C528D240892080AECE146D29FB3496DDD0F1FD48_1,C528D240892080AECE146D29FB3496DDD0F1FD48," [a-z&&[^bc]] a through z, except for b and c (subtraction). Alternatively this could be specified as [ad-z] - [a-z&&[^m-p]] a through z, and not m through p (subtraction). Alternatively this could be specified as [a-lq-z] - - - - - -Predefined character classes - -Table 3. Predefined character classes - - Predefined character classes Matches - - . Any character (may or may not match line terminators) - \d Any digit: [0-9] - \D A non-digit: [^0-9] - \s A white space character: [ \t\n\x0B\f\r] - \S A non-white space character: [^\s] - \w A word character: [a-zA-Z_0-9] - \W A non-word character: [^\w] - - - - - -Boundary matches - -Table 4. Boundary matches - - Boundary matchers Matches - - ^ The beginning of a line - $ The end of a line - \b A word boundary - \B A non-word boundary - \A The beginning of the input -" -C1324A359A58B4D399C10BC59AE94E7E0723836D,C1324A359A58B4D399C10BC59AE94E7E0723836D," Integers - -Integers are represented as a sequence of decimal digits. - -Optionally, you can place a minus sign (−) before the integer to denote a negative number (for example, 1234, 999, −77). - -The CLEM language handles integers of arbitrary precision. The maximum integer size depends on your platform. If the values are too large to be displayed in an integer field, changing the field type to Real usually restores the value. -" -D05F366AFC5726DC1A258EDC3689067381EFDECC,D05F366AFC5726DC1A258EDC3689067381EFDECC," About CLEM - -The Control Language for Expression Manipulation (CLEM) is a powerful language for analyzing and manipulating the data that streams through an SPSS Modeler flow. Data miners use CLEM extensively in flow operations to perform tasks as simple as deriving profit from cost and revenue data or as complex as transforming web log data into a set of fields and records with usable information. - -CLEM is used within SPSS Modeler to: - - - -* Compare and evaluate conditions on record fields -* Derive values for new fields -* Derive new values for existing fields -* Reason about the sequence of records -* Insert data from records into reports - - - -CLEM expressions are indispensable for data preparation in SPSS Modeler and can be used in a wide range of nodes—from record and field operations (Select, Balance, Filler) to plots and output (Analysis, Report, Table). For example, you can use CLEM in a Derive node to create a new field based on a formula such as ratio. - -CLEM expressions can also be used for global search and replace operations. For example, the expression @NULL(@FIELD) can be used in a Filler node to replace system-missing values with the integer value 0. (To replace user-missing values, also called blanks, use the @BLANK function.) - -More complex CLEM expressions can also be created. For example, you can derive new fields based on a conditional set of rules, such as a new value category created by using the following expressions: If: CardID = @OFFSET(CardID,1), Then: @OFFSET(ValueCategory,1), Else: 'exclude'. - -This example uses the @OFFSET function to say: If the value of the field CardID for a given record is the same as for the previous record, then return the value of the field named ValueCategory for the previous record. Otherwise, assign the string ""exclude."" In other words, if the CardIDs for adjacent records are the same, they should be assigned the same value category. (Records with the exclude string can later be culled using a Select node.) -" -B93F8A3A1CED22CF84C45B552D5040A4A17FDB60,B93F8A3A1CED22CF84C45B552D5040A4A17FDB60," Lists - -A list is an ordered sequence of elements, which may be of mixed type. Lists are enclosed in square brackets ([ ]). - -Examples of lists are [1 2 4 16] and [""abc"" ""def""] and [A1, A2, A3]. Lists are not used as the value of SPSS Modeler fields. They are used to provide arguments to functions, such as member and oneof. - -Notes: - - - -" -9455A31E5D6C749F3028F9F5E5F758F713C09973_0,9455A31E5D6C749F3028F9F5E5F758F713C09973," CLEM operators - -This page lists the available CLEM language operators. - - - -CLEM language operators - -Table 1. CLEM language operators - - Operation Comments Precedence (see next section) - - or Used between two CLEM expressions. Returns a value of true if either is true or if both are true. 10 - and Used between two CLEM expressions. Returns a value of true if both are true. 9 - = Used between any two comparable items. Returns true if ITEM1 is equal to ITEM2. 7 - == Identical to =. 7 - /= Used between any two comparable items. Returns true if ITEM1 is not equal to ITEM2. 7 - /== Identical to /=. 7 - > Used between any two comparable items. Returns true if ITEM1 is strictly greater than ITEM2. 6 - >= Used between any two comparable items. Returns true if ITEM1 is greater than or equal to ITEM2. 6 - < Used between any two comparable items. Returns true if ITEM1 is strictly less than ITEM2 6 - <= Used between any two comparable items. Returns true if ITEM1 is less than or equal to ITEM2. 6 - &&=_0 Used between two integers. Equivalent to the Boolean expression INT1 && INT2 = 0. 6 - &&/=_0 Used between two integers. Equivalent to the Boolean expression INT1 && INT2 /= 0. 6 - + Adds two numbers: NUM1 + NUM2. 5 - >< Concatenates two strings; for example, STRING1 >< STRING2. 5 - - Subtracts one number from another: NUM1 - NUM2. Can also be used in front of a number: - NUM. 5 - * Used to multiply two numbers: NUM1 * NUM2. 4 - && Used between two integers. The result is the bitwise 'and' of the integers INT1 and INT2. 4 - && Used between two integers. The result is the bitwise 'and' of INT1 and the bitwise complement of INT2. 4 - Used between two integers. The result is the bitwise 'inclusive or' of INT1 and INT2. 4 -" -9455A31E5D6C749F3028F9F5E5F758F713C09973_1,9455A31E5D6C749F3028F9F5E5F758F713C09973," Used in front of an integer. Produces the bitwise complement of INT. 4 - /& Used between two integers. The result is the bitwise 'exclusive or' of INT1 and INT2. 4 - INT1 << N Used between two integers. Produces the bit pattern of INT shifted left by N positions. 4 - INT1 >> N Used between two integers. Produces the bit pattern of INT shifted right by N positions. 4 - / Used to divide one number by another: NUM1 / NUM2. 4 - Used between two numbers: BASE ** POWER. Returns BASE raised to the power POWER. 3 -" -185C42AB06DE9FF515DCD03213F5C4608C6FAEBF,185C42AB06DE9FF515DCD03213F5C4608C6FAEBF," Reals - -Real refers to a floating-point number. Reals are represented by one or more digits followed by a decimal point followed by one or more digits. CLEM reals are held in double precision. - -Optionally, you can place a minus sign (−) before the real to denote a negative number (for example, 1.234, 0.999, −77.001). Use the form e to express a real number in exponential notation (for example, 1234.0e5, 1.7e−2). When SPSS Modeler reads number strings from files and converts them automatically to numbers, numbers with no leading digit before the decimal point or with no digit after the point are accepted (for example, 999. or .11). However, these forms are illegal in CLEM expressions. - -Note: When referencing real numbers in CLEM expressions, a period must be used as the decimal separator, regardless of any settings for the current flow or locale. For example, specify - -Na > 0.6 - -rather than - -Na > 0,6 - -This applies even if a comma is selected as the decimal symbol in the flow properties and is consistent with the general guideline that code syntax should be independent of any specific locale or convention. -" -385DEC32600A9DED58FEDE3E98568FED789A400A,385DEC32600A9DED58FEDE3E98568FED789A400A," Strings - -Generally, you should enclose strings in double quotation marks. Examples of strings are ""c35product2"" and ""referrerID"". - -To indicate special characters in a string, use a backslash (for example, ""$65443""). (To indicate a backslash character, use a double backslash, \.) You can use single quotes around a string, but the result is indistinguishable from a quoted field ('referrerID'). See [String functions](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_string.htmlclem_function_ref_string) for more information. -" -839B16AC73C000ECE7BAC7D50BAF6F7E37F2CAD9,839B16AC73C000ECE7BAC7D50BAF6F7E37F2CAD9," Time - -The CLEM language supports the time formats listed in this section. - - - -CLEM language time formats - -Table 1. CLEM language time formats - - Format Examples - - HHMMSS 120112, 010101, 221212 - HHMM 1223, 0745, 2207 - MMSS 5558, 0100 - HH:MM:SS 12:01:12, 01:01:01, 22:12:12 - HH:MM 12:23, 07:45, 22:07 - MM:SS 55:58, 01:00 - (H)H:(M)M:(S)S 12:1:12, 1:1:1, 22:12:12 - (H)H:(M)M 12:23, 7:45, 22:7 - (M)M:(S)S 55:58, 1:0 - HH.MM.SS 12.01.12, 01.01.01, 22.12.12 - HH.MM 12.23, 07.45, 22.07 - MM.SS 55.58, 01.00 - (H)H.(M)M.(S)S 12.1.12, 1.1.1, 22.12.12 -" -F975B9964D088181CF34A1341083BC82053812D8,F975B9964D088181CF34A1341083BC82053812D8," Values and data types - -CLEM expressions are similar to formulas constructed from values, field names, operators, and functions. The simplest valid CLEM expression is a value or a field name. - -Examples of valid values are: - -3 -1.79 -'banana' - -Examples of field names are: - -Product_ID -'$P-NextField' - -where Product is the name of a field from a market basket data set, '$P-NextField' is the name of a parameter, and the value of the expression is the value of the named field. Typically, field names start with a letter and may also contain digits and underscores (_). You can use names that don't follow these rules if you place the name within quotation marks. CLEM values can be any of the following: - - - -* Strings (for example, ""c1"", ""Type 2"", ""a piece of free text"") -* Integers (for example, 12, 0, –189) -* Real numbers (for example, 12.34, 0.0, –0.0045) -* Date/time fields (for example, 05/12/2002, 12/05/2002, 12/05/02) - - - -It's also possible to use the following elements: - - - -* Character codes (for example, a or 3) -* Lists of items (for example, [1 2 3], ['Type 1' 'Type 2']) - - - -Character codes and lists don't usually occur as field values. Typically, they're used as arguments of CLEM functions. -" -EE838EA978F9A0B0265A8D2B35FF2F64D00A1738,EE838EA978F9A0B0265A8D2B35FF2F64D00A1738," Collection node - -Collections are similar to histograms, but collections show the distribution of values for one numeric field relative to the values of another, rather than the occurrence of values for a single field. A collection is useful for illustrating a variable or field whose values change over time. - -Using 3-D graphing, you can also include a symbolic axis displaying distributions by category. Two-dimensional collections are shown as stacked bar charts, with overlays where used. -" -5A8AA187972BA8A711AC91447F668B233E580C8C_0,5A8AA187972BA8A711AC91447F668B233E580C8C," Cox node - -Cox Regression builds a predictive model for time-to-event data. The model produces a survival function that predicts the probability that the event of interest has occurred at a given time t for given values of the predictor variables. The shape of the survival function and the regression coefficients for the predictors are estimated from observed subjects; the model can then be applied to new cases that have measurements for the predictor variables. - -Note that information from censored subjects, that is, those that do not experience the event of interest during the time of observation, contributes usefully to the estimation of the model. - -Example. As part of its efforts to reduce customer churn, a telecommunications company is interested in modeling the time to churn in order to determine the factors that are associated with customers who are quick to switch to another service. To this end, a random sample of customers is selected, and their time spent as customers (whether or not they are still active customers) and various demographic fields are pulled from the database. - -Requirements. You need one or more input fields, exactly one target field, and you must specify a survival time field within the Cox node. The target field should be coded so that the ""false"" value indicates survival and the ""true"" value indicates that the event of interest has occurred; it must have a measurement level of Flag, with string or integer storage. (Storage can be converted using a Filler or Derive node if necessary. ) Fields set to Both or None are ignored. Fields used in the model must have their types fully instantiated. The survival time can be any numeric field. Note: On scoring a Cox Regression model, an error is reported if empty strings in categorical variables are used as input to model building. Avoid using empty strings as input. - -" -5A8AA187972BA8A711AC91447F668B233E580C8C_1,5A8AA187972BA8A711AC91447F668B233E580C8C,"Dates & Times. Date & Time fields cannot be used to directly define the survival time; if you have Date & Time fields, you should use them to create a field containing survival times, based upon the difference between the date of entry into the study and the observation date. - -Kaplan-Meier Analysis. Cox regression can be performed with no input fields. This is equivalent to a Kaplan-Meier analysis. -" -67B99E436854F015A9DB19C775639BA4BB4D5F9B,67B99E436854F015A9DB19C775639BA4BB4D5F9B," CPLEX Optimization node - -With the CPLEX Optimization node, you can use complex mathematical (CPLEX) based optimization via an Optimization Programming Language (OPL) model file. - -For more information about CPLEX optimization and OPL, see the [IBM ILOG CPLEX Optimization Studio documentation](https://www.ibm.com/support/knowledgecenter/SSSA5P). - -When outputting the data generated by the CPLEX Optimization node, you can output the original data from the data sources together as single indexes, or as multiple dimensional indexes of the result. - -Note: - - - -* When running a flow containing a CPLEX Optimization node, the CPLEX library has a limitation of 1000 variables and 1000 constraints. -" -9FA71067981E4FC0D6F68A14C91C694DC4C2AF25,9FA71067981E4FC0D6F68A14C91C694DC4C2AF25," Data Asset Export node - -You can use the Data Asset Export node to write to remote data sources using connections or write data to a project (delimited or . sav). - -Double-click the node to open its properties. Various options are available, described as follows. - -After running the node, you can find the data at the export location you specified. -" -C70BB33E4E6792511DC4E7D88536017E64BCD0F1,C70BB33E4E6792511DC4E7D88536017E64BCD0F1," Data Asset node - -You can use the Data Asset import node to pull in data from remote data sources using connections or from your local computer. First, you must create the connection. - -Note for connections to a Planning Analytics database, you must choose a view (not a cube). - -You can also pull in data from a local data file ( .csv, .txt, .json, .xls, .xlsx, .sav, and .sas are supported). Only the first sheet is imported from spreadsheets. In the node's properties, under DATA, select one or more data files to upload. You can also simply drag-and-drop the data file from your local file system onto your canvas. - -Note: You can import a stream ( .str) into watsonx.ai that was created in SPSS Modeler Subscription or SPSS Modeler client. If the imported stream contains one or more import or export nodes, you'll be prompted to convert the nodes. See [Importing an SPSS Modeler stream](https://dataplatform.cloud.ibm.com/docs/content/wsd/migration.html). -" -7F4648FD3E7F8564C98CF142E0E09E23E8097A9E,7F4648FD3E7F8564C98CF142E0E09E23E8097A9E," Data Audit node - -The Data Audit node provides a comprehensive first look at the data you bring to SPSS Modeler, presented in an interactive, easy-to-read matrix that can be sorted and used to generate full-size graphs. - -When you run a Data Audit node, interactive output is generated that includes: - - - -* Information such as summary statistics, histograms, box plots, bar charts, pie charts, and more that may be useful in gaining a preliminary understanding of the data. -* Information about outliers, extremes, and missing values. - - - -Figure 1. Data Audit node output example - -![Data Audit node output example](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/spss_data_audit_1.png) - -Figure 2. Data Audit node output example - -![Data Audit node output example](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/spss_data_audit_2.png) - -Figure 3. Data Audit node output example - -![Data Audit node output example](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/spss_data_audit_3.png) - -Figure 4. Data Audit node output example - -![Data Audit node output example](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/spss_data_audit_4.png) - -Figure 5. Data Audit node output example - -![Data Audit node output example](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/spss_data_audit_5.png) -" -1A5F15E64AABDCA9E2785588E76F3EBE22A1C426,1A5F15E64AABDCA9E2785588E76F3EBE22A1C426," Decision List node - -Decision List models identify subgroups or segments that show a higher or lower likelihood of a binary (yes or no) outcome relative to the overall sample. - -For example, you might look for customers who are least likely to churn or most likely to say yes to a particular offer or campaign. The Decision List Viewer gives you complete control over the model, enabling you to edit segments, add your own business rules, specify how each segment is scored, and customize the model in a number of other ways to optimize the proportion of hits across all segments. As such, it is particularly well-suited for generating mailing lists or otherwise identifying which records to target for a particular campaign. You can also use multiple mining tasks to combine modeling approaches—for example, by identifying high- and low-performing segments within the same model and including or excluding each in the scoring stage as appropriate. -" -4D299EFFF5B982097A5B9D48EA16041E4820A8BB,4D299EFFF5B982097A5B9D48EA16041E4820A8BB," Derive node - -One of the most powerful features in watsonx.ai is the ability to modify data values and derive new fields from existing data. During lengthy data mining projects, it is common to perform several derivations, such as extracting a customer ID from a string of Web log data or creating a customer lifetime value based on transaction and demographic data. All of these transformations can be performed, using a variety of field operations nodes. - -Several nodes provide the ability to derive new fields: - - - -* The Derive node modifies data values or creates new fields from one or more existing fields. It creates fields of type formula, flag, nominal, state, count, and conditional. -* The Reclassify node transforms one set of categorical values to another. Reclassification is useful for collapsing categories or regrouping data for analysis. -* The Binning node automatically creates new nominal (set) fields based on the values of one or more existing continuous (numeric range) fields. For example, you can transform a continuous income field into a new categorical field containing groups of income as deviations from the mean. After you create bins for the new field, you can generate a Derive node based on the cut points. -* The Set to Flag node derives multiple flag fields based on the categorical values defined for one or more nominal fields. -* The Restructure node converts a nominal or flag field into a group of fields that can be populated with the values of yet another field. For example, given a field named payment type, with values of credit, cash, and debit, three new fields would be created (credit, cash, debit), each of which might contain the value of the actual payment made. - - - -Tip: The Control Language for Expression Manipulation (CLEM) is a powerful tool you can use to analyze and manipulate the data used in your flows. For example, you might use CLEM in a node to derive values. For more information, see the [CLEM (legacy) language reference](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_language_reference.html). -" -20CFE34D5494AB0AE2EF8B6F65396EDBF667F688,20CFE34D5494AB0AE2EF8B6F65396EDBF667F688," Space-Time-Boxes node - -Space-Time-Boxes (STB) are an extension of Geohashed spatial locations. More specifically, an STB is an alphanumeric string that represents a regularly shaped region of space and time. - -For example, the STB dr5ru7|2013-01-01 00:00:00|2013-01-01 00:15:00 is made up of the following three parts: - - - -* The geohash dr5ru7 -* The start timestamp 2013-01-01 00:00:00 -* The end timestamp 2013-01-01 00:15:00 - - - -As an example, you could use space and time information to improve confidence that two entities are the same because they are virtually in the same place at the same time. Alternatively, you could improve the accuracy of relationship identification by showing that two entities are related due to their proximity in space and time. - -In the node properties, you can choose the Individual Records or Hangouts mode as appropriate for your requirements. Both modes require the same basic details, as follows: - -Latitude field. Select the field that identifies the latitude (in WGS84 coordinate system). - -Longitude field. Select the field that identifies the longitude (in WGS84 coordinate system). - -Timestamp field. Select the field that identifies the time or date. -" -909B04011F4C2211D6D945EC82217E3F89A79BD7,909B04011F4C2211D6D945EC82217E3F89A79BD7," Disabling nodes in a flow - -You can disable process nodes that have a single input so that they're ignored when the flow runs. This saves you from having to remove or bypass the node and means you can leave it connected to the remaining nodes. - -You can still open and edit the node settings; however, any changes will not take effect until you enable the node again. - -For example, you might use a Filter node to filter several fields, and then build models based on the reduced data set. If you want to also build the same models without fields being filtered, to see if they improve the model results, you can disable the Filter node. When you disable the Filter node, the connections to the modeling nodes pass directly through from the Derive node to the Type node. -" -338F12B976B522389F5FABE438280565490FB280,338F12B976B522389F5FABE438280565490FB280," Discriminant node - -Discriminant analysis builds a predictive model for group membership. The model is composed of a discriminant function (or, for more than two groups, a set of discriminant functions) based on linear combinations of the predictor variables that provide the best discrimination between the groups. The functions are generated from a sample of cases for which group membership is known; the functions can then be applied to new cases that have measurements for the predictor variables but have unknown group membership. - -Example. A telecommunications company can use discriminant analysis to classify customers into groups based on usage data. This allows them to score potential customers and target those who are most likely to be in the most valuable groups. - -Requirements. You need one or more input fields and exactly one target field. The target must be a categorical field (with a measurement level of Flag or Nominal) with string or integer storage. (Storage can be converted using a Filler or Derive node if necessary. ) Fields set to Both or None are ignored. Fields used in the model must have their types fully instantiated. - -Strengths. Discriminant analysis and Logistic Regression are both suitable classification models. However, Discriminant analysis makes more assumptions about the input fields—for example, they are normally distributed and should be continuous, and they give better results if those requirements are met, especially if the sample size is small. -" -5C597F82EC8484220A6FB3193DC78B878E8698F6_0,5C597F82EC8484220A6FB3193DC78B878E8698F6," Distinct node - -Duplicate records in a data set must be removed before data mining can begin. For example, in a marketing database, individuals may appear multiple times with different address or company information. You can use the Distinct node to find or remove duplicate records in your data, or to create a single, composite record from a group of duplicate records. - -To use the Distinct node, you must first define a set of key fields that determine when two records are considered to be duplicates. - -If you do not pick all your fields as key fields, then two ""duplicate"" records may not be truly identical because they can still differ in the values of the remaining fields. In this case, you can also define a sort order that is applied within each group of duplicate records. This sort order gives you fine control over which record is treated as the first within a group. Otherwise, all duplicates are considered to be interchangeable and any record might be selected. The incoming order of the records is not taken into account, so it doesn't help to use an upstream Sort node (see ""Sorting records within the Distinct node"" on this page). - -Mode. Specify whether to create a composite record, or to either include or exclude (discard) the first record. - - - -* Create a composite record for each group. Provides a way for you to aggregate non-numeric fields. Selecting this option makes the Composite tab available where you specify how to create the composite records. -* Include only the first record in each group. Selects the first record from each group of duplicate records and discards the rest. The first record is determined by the sort order defined under the setting Within groups, sort records by, and not by the incoming order of the records. -* Discard only the first record in each group. Discards the first record from each group of duplicate records and selects the remainder instead. The first record is determined by the sort order defined under the setting Within groups, sort records by, and not by the incoming order of the records. This option is useful for finding duplicates in your data so that you can examine them later in the flow. - - - -" -5C597F82EC8484220A6FB3193DC78B878E8698F6_1,5C597F82EC8484220A6FB3193DC78B878E8698F6,"Key fields for grouping. Lists the field or fields used to determine whether records are identical. You can: - - - -* Add fields to this list using the field picker button. -* Delete fields from the list by using the red X (remove) button. - - - -Within groups, sort records by. Lists the fields used to determine how records are sorted within each group of duplicates, and whether they are sorted in ascending or descending order. You can: - - - -* Add fields to this list using the field picker button. -* Delete fields from the list by using the red X (remove) button. -* Move fields using the up or down buttons, if you are sorting by more than one field. - - - -You must specify a sort order if you have chosen to include or exclude the first record in each group, and it matters to you which record is treated as the first. - -You may also want to specify a sort order if you have chosen to create a composite record, for certain options on the Composite tab. - -Specify whether, by default, records are sorted in Ascending or Descending order of the sort key values. -" -570AF2AAF268A3DF1D959D54A5BE1790DC43EAD5,570AF2AAF268A3DF1D959D54A5BE1790DC43EAD5," Distribution node - -A distribution graph or table shows the occurrence of symbolic (non-numeric) values, such as mortgage type or gender, in a dataset. A typical use of the Distribution node is to show imbalances in the data that you can rectify by using a Balance node before creating a model. You can automatically generate a Balance node using the Generate menu in the distribution graph or table window. - -Note: To show the occurrence of numeric values, you should use a Histogram node. -" -D5D31FDA0EEBFCDD87005ED54EBEDFD164FA073B,D5D31FDA0EEBFCDD87005ED54EBEDFD164FA073B," Charts node - -With the Charts node, you can launch the chart builder and create chart definitions to save with your flow. Then when you run the node, chart output is generated. - -The Charts node is available under the Graphs section on the node palette. After adding a Charts node to your flow, double-click it to open the properties pane. Then click Launch Chart Builder to open the chart builder and create one or more chart definitions to associate with the node. See [Visualizing your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/visualizations.html) for details about creating charts. - -Figure 1. Example charts - -![Shows four example charts](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/charts_thumbnail4.png) Notes: - - - -* When you create a chart, it uses a sample of your data. After clicking Save and close to save the chart definition and return to your flow, the Charts node will then use all of your data when you run it. -* Chart definitions are listed in the node properties panel, with icons available for editing them or removing them. -" -8C53BD47030C9BF4E7DBF1EA482CDED9CC8ABAD4,8C53BD47030C9BF4E7DBF1EA482CDED9CC8ABAD4," Ensemble node - -The Ensemble node combines two or more model nuggets to obtain more accurate predictions than can be gained from any of the individual models. By combining predictions from multiple models, limitations in individual models may be avoided, resulting in a higher overall accuracy. Models combined in this manner typically perform at least as well as the best of the individual models and often better. - -This combining of nodes happens automatically in the Auto Classifier and Auto Numeric automated modeling nodes. - -After using an Ensemble node, you can use an Analysis node or Evaluation node to compare the accuracy of the combined results with each of the input models. To do this, make sure the Filter out fields generated by ensembled models option is not selected in the Ensemble node settings. -" -4F733928B0F749FFDDF2E6DAEF646A0524C54D67,4F733928B0F749FFDDF2E6DAEF646A0524C54D67," Evaluation node - -The Evaluation node offers an easy way to evaluate and compare predictive models to choose the best model for your application. Evaluation charts show how models perform in predicting particular outcomes. They work by sorting records based on the predicted value and confidence of the prediction, splitting the records into groups of equal size (quantiles), and then plotting the value of the business criterion for each quantile, from highest to lowest. Multiple models are shown as separate lines in the plot. - -Outcomes are handled by defining a specific value or range of values as a hit. Hits usually indicate success of some sort (such as a sale to a customer) or an event of interest (such as a specific medical diagnosis). You can define hit criteria under the OPTIONS section of the node properties, or you can use the default hit criteria as follows: - - - -* Flag output fields are straightforward; hits correspond to true values. -* For Nominal output fields, the first value in the set defines a hit. -* For Continuous output fields, hits equal values greater than the midpoint of the field's range. - - - -There are six types of evaluation charts, each of which emphasizes a different evaluation criterion. - -Evaluation charts can also be cumulative, so that each point equals the value for the corresponding quantile plus all higher quantiles. Cumulative charts usually convey the overall performance of models better, whereas noncumulative charts often excel at indicating particular problem areas for models. - -Note: The Evaluation node doesn't support the use of commas in field names. If you have field names containing commas, you must either remove the commas or surround the field name in quotes. -" -F5A6D2AE83A7989E17704E69F0A640368C676594,F5A6D2AE83A7989E17704E69F0A640368C676594," Expression Builder - -You can type CLEM expressions manually or use the Expression Builder, which displays a complete list of CLEM functions and operators as well as data fields from the current flow, allowing you to quickly build expressions without memorizing the exact names of fields or functions. - -The Expression Builder controls automatically add the proper quotes for fields and values, making it easier to create syntactically correct expressions. - -Notes: - - - -* The Expression Builder is not supported in parameter settings. -* If you want to change your datasource, before changing the source you should check that the Expression Builder can still support the functions you have selected. Because not all databases support all functions, you may encounter an error if you run against a new datasource. -* You can run an SPSS Modeler desktop stream file ( .str) that contains database functions. But they aren't yet available in the Expression Builder user interface. - - - -Figure 1. Expression Builder - -![Expression Builder](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/expressionbuilder_full.png) -" -9DA0D100A88228AB463CB9B1B6CF1C051253911A_0,9DA0D100A88228AB463CB9B1B6CF1C051253911A," Selecting functions - -The function list displays all available CLEM functions and operators. Scroll to select a function from the list, or, for easier searching, use the drop-down list to display a subset of functions or operators. - -The following categories of functions are available: - - - -Table 1. CLEM functions for use with your data - - Function type Description - - Operators Lists all the operators you can use when building expressions. Operators are also available from the buttons. - Information Used to gain insight into field values. For example, the function is_string returns true for all records whose type is a string. - Conversion Used to construct new fields or convert storage type. For example, the function to_timestamp converts the selected field to a timestamp. - Comparison Used to compare field values to each other or to a specified string. For example, <= is used to compare whether the values of two fields are lesser or equal. - Logical Used to perform logical operations, such as if, then, else operations. - Numeric Used to perform numeric calculations, such as the natural log of field values. - Trigonometric Used to perform trigonometric calculations, such as the arccosine of a specified angle. - Probability Returns probabilities that are based on various distributions, such as probability that a value from Student's t distribution is less than a specific value. - Spatial Functions Used to perform spatial calculations on geospatial data. - Bitwise Used to manipulate integers as bit patterns. - Random Used to randomly select items or generate numbers. - String Used to perform various operations on strings, such as stripchar, which allows you to remove a specified character. - Date and time Used to perform various operations on date, time, and timestamp fields. - Sequence Used to gain insight into the record sequence of a data set or perform operations that are based on that sequence. - Global Used to access global values that are created by a Set Globals node. For example, @MEAN is used to refer to the mean average of all values for a field across the entire data set. -" -9DA0D100A88228AB463CB9B1B6CF1C051253911A_1,9DA0D100A88228AB463CB9B1B6CF1C051253911A," Blanks and Null Used to access, flag, and frequently fill user-specified blanks or system-missing values. For example, @BLANK(FIELD) is used to raise a true flag for records where blanks are present. - Special Fields Used to denote the specific fields under examination. For example, @FIELD is used when deriving multiple fields. - - - -After you select a group of functions, double-click to insert the functions into the Expression box at the point indicated by the position of the cursor. -" -1B0AB9084C7DD9546BDC2F376B58E32C0ECFEE85,1B0AB9084C7DD9546BDC2F376B58E32C0ECFEE85," Extension Model node - -With the Extension Model node, you can run R scripts or Python for Spark scripts to build and score models. - -After adding the node to your canvas, double-click the node to open its properties. -" -6402316FEBFAD11A582D9C567811003F4BEE596A,6402316FEBFAD11A582D9C567811003F4BEE596A," Extension Export node - -You can use the Extension Export node to run R scripts or Python for Spark scripts to export data. -" -378F6A8306234029DE1642CBFF8E44ED6848BF74,378F6A8306234029DE1642CBFF8E44ED6848BF74," Extension Import node - -With the Extension Import node, you can run R scripts or Python for Spark scripts to import data. - -After adding the node to your canvas, double-click the node to open its properties. -" -97FA49D526786021CF325FF9AFF15646A8270B48,97FA49D526786021CF325FF9AFF15646A8270B48," Native Python APIs - -You can invoke native Python APIs from your scripts to interact with SPSS Modeler. - -The following APIs are supported. - -To see an example, you can download the sample stream [python-extension-str.zip](https://github.com/IBMDataScience/ModelerFlowsExamples/blob/main/samples) and import it into SPSS Modeler (from your project, click New asset, select SPSS Modeler, then select Local file). Then open the Extension node properties in the flow to see example syntax. -" -1D46D1240377AEA562F14A560CB9F24DF33EDF88,1D46D1240377AEA562F14A560CB9F24DF33EDF88," Extension Output node - -With the Extension Output node, you can run R scripts or Python for Spark scripts to produce output. - -After adding the node to your canvas, double-click the node to open its properties. -" -FF6C435ADBD62DE03C06CE4F90343D3CD04F9E8F,FF6C435ADBD62DE03C06CE4F90343D3CD04F9E8F," Extension Transform node - -With the Extension Transform node, you can take data from an SPSS Modeler flow and apply transformations to the data using R scripting or Python for Spark scripting. - -When the data has been modified, it's returned to the flow for further processing, model building, and model scoring. The Extension Transform node makes it possible to transform data using algorithms that are written in R or Python for Spark, and enables you to develop data transformation methods that are tailored to a particular problem. - -After adding the node to your canvas, double-click the node to open its properties. -" -63C0DFB695860E1DA7981D86959D998BEBC2DD03,63C0DFB695860E1DA7981D86959D998BEBC2DD03," Python for Spark scripts - -SPSS Modeler supports Python scripts for Apache Spark. - -Note: - - - -* Python nodes depend on the Spark environment. -* Python scripts must use the Spark API because data is presented in the form of a Spark DataFrame. -" -17470065AFC59337B207721AB539B4622BBB3055,17470065AFC59337B207721AB539B4622BBB3055," Scripting with Python for Spark - -SPSS Modeler can run Python scripts using the Apache Spark framework to process data. This documentation provides the Python API description for the interfaces provided. - -The SPSS Modeler installation includes a Spark distribution. -" -7436F8933CA1DD44E05CD59F8E2CB13052763643,7436F8933CA1DD44E05CD59F8E2CB13052763643," Date, time, timestamp - -For operations that use date, time, or timestamp type data, the value is converted to the real value based on the value 1970-01-01:00:00:00 (using Coordinated Universal Time). - -For the date, the value represents the number of days, based on the value 1970-01-01 (using Coordinated Universal Time). - -For the time, the value represents the number of seconds at 24 hours. - -For the timestamp, the value represents the number of seconds based on the value 1970-01-01:00:00:00 (using Coordinated Universal Time). -" -835B998310E6E268F648D4AA28528190EBBB48CA,835B998310E6E268F648D4AA28528190EBBB48CA," Examples - -This section provides Python for Spark scripting examples. -" -AD61BC1B395A071D8850BC2405A8C311CFDC931F,AD61BC1B395A071D8850BC2405A8C311CFDC931F," Exceptions - -This section describes possible exception instances. They are all a subclass of python exception. -" -450CAAACD51ABDEDAB940CAFB4BC47EBFBCBBA67,450CAAACD51ABDEDAB940CAFB4BC47EBFBCBBA67," Data metadata - -This section describes how to set up the data model attributes based on pyspark.sql.StructField. -" -B98506EB96C587BDFD06CBF67617E25D9DAE8E60,B98506EB96C587BDFD06CBF67617E25D9DAE8E60," R scripts - -SPSS Modeler supports R scripts. -" -50636405C61E0AF7D2EE0EE31256C4CD0F6C5DED,50636405C61E0AF7D2EE0EE31256C4CD0F6C5DED," PCA/Factor node - -The PCA/Factor node provides powerful data-reduction techniques to reduce the complexity of your data. Two similar but distinct approaches are provided. - - - -* Principal components analysis (PCA) finds linear combinations of the input fields that do the best job of capturing the variance in the entire set of fields, where the components are orthogonal (perpendicular) to each other. PCA focuses on all variance, including both shared and unique variance. -* Factor analysis attempts to identify underlying concepts, or factors, that explain the pattern of correlations within a set of observed fields. Factor analysis focuses on shared variance only. Variance that is unique to specific fields is not considered in estimating the model. Several methods of factor analysis are provided by the Factor/PCA node. - - - -For both approaches, the goal is to find a small number of derived fields that effectively summarize the information in the original set of fields. - -Requirements. Only numeric fields can be used in a PCA-Factor model. To estimate a factor analysis or PCA, you need one or more fields with the role set to Input fields. Fields with the role set to Target, Both, or None are ignored, as are non-numeric fields. - -Strengths. Factor analysis and PCA can effectively reduce the complexity of your data without sacrificing much of the information content. These techniques can help you build more robust models that execute more quickly than would be possible with the raw input fields. -" -9E1CDB994E758D43D9D8CDC5D88E2B5C7E0088D7_0,9E1CDB994E758D43D9D8CDC5D88E2B5C7E0088D7," Feature Selection node - -Data mining problems may involve hundreds, or even thousands, of fields that can potentially be used as inputs. As a result, a great deal of time and effort may be spent examining which fields or variables to include in the model. To narrow down the choices, the Feature Selection algorithm can be used to identify the fields that are most important for a given analysis. For example, if you are trying to predict patient outcomes based on a number of factors, which factors are the most likely to be important? - -Feature selection consists of three steps: - - - -* Screening. Removes unimportant and problematic inputs and records, or cases such as input fields with too many missing values or with too much or too little variation to be useful. -* Ranking. Sorts remaining inputs and assigns ranks based on importance. -* Selecting. Identifies the subset of features to use in subsequent models—for example, by preserving only the most important inputs and filtering or excluding all others. - - - -In an age where many organizations are overloaded with too much data, the benefits of feature selection in simplifying and speeding the modeling process can be substantial. By focusing attention quickly on the fields that matter most, you can reduce the amount of computation required; more easily locate small but important relationships that might otherwise be overlooked; and, ultimately, obtain simpler, more accurate, and more easily explainable models. By reducing the number of fields used in the model, you may find that you can reduce scoring times as well as the amount of data collected in future iterations. - -" -9E1CDB994E758D43D9D8CDC5D88E2B5C7E0088D7_1,9E1CDB994E758D43D9D8CDC5D88E2B5C7E0088D7,"Example. A telephone company has a data warehouse containing information about responses to a special promotion by 5,000 of the company's customers. The data includes a large number of fields containing customers' ages, employment, income, and telephone usage statistics. Three target fields show whether or not the customer responded to each of three offers. The company wants to use this data to help predict which customers are most likely to respond to similar offers in the future. - -Requirements. A single target field (one with its role set to Target), along with multiple input fields that you want to screen or rank relative to the target. Both target and input fields can have a measurement level of Continuous (numeric range) or Categorical. -" -38D24508B131BEB6138652C2FD1E0380A001BB54_0,38D24508B131BEB6138652C2FD1E0380A001BB54," Filler node - -Filler nodes are used to replace field values and change storage. You can choose to replace values based on a specified CLEM condition, such as @BLANK(FIELD). Alternatively, you can choose to replace all blanks or null values with a specific value. Filler nodes are often used in conjunction with the Type node to replace missing values. - -Fill in fields. Select fields from the dataset whose values will be examined and replaced. The default behavior is to replace values depending on the specified Condition and Replace with expressions. You can also select an alternative method of replacement using the Replace options. - -Note: When selecting multiple fields to replace with a user-defined value, it is important that the field types are similar (all numeric or all symbolic). - -Replace. Select to replace the values of the selected field(s) using one of the following methods: - - - -* Based on condition. This option activates the Condition field and Expression Builder for you to create an expression used as a condition for replacement with the value specified. -* Always. Replaces all values of the selected field. For example, you could use this option to convert the storage of income to a string using the following CLEM expression: (to_string(income)). -* Blank values. Replaces all user-specified blank values in the selected field. The standard condition @BLANK(@FIELD) is used to select blanks. Note: You can define blanks using the Types tab of the source node or with a Type node. -* Null values. Replaces all system null values in the selected field. The standard condition @NULL(@FIELD) is used to select nulls. -* Blank and null values. Replaces both blank values and system nulls in the selected field. This option is useful when you are unsure whether or not nulls have been defined as missing values. - - - -Condition. This option is available when you have selected the Based on condition option. Use this text box to specify a CLEM expression for evaluating the selected fields. Click the calculator button to open the Expression Builder. - -" -38D24508B131BEB6138652C2FD1E0380A001BB54_1,38D24508B131BEB6138652C2FD1E0380A001BB54,"Replace with. Specify a CLEM expression to give a new value to the selected fields. You can also replace the value with a null value by typing undef in the text box. Click the calculator button to open the Expression Builder. - -Note: When the field(s) selected are string, you should replace them with a string value. Using the default 0 or another numeric value as the replacement value for string fields will result in an error.Note that use of the following may change row order: - - - -* Running in a database via SQL pushback -" -EED64F79EBFDD957DEEBEC6261B3A70A248F3D35,EED64F79EBFDD957DEEBEC6261B3A70A248F3D35," Filter node - -You can rename or exclude fields at any point in a flow. For example, as a medical researcher, you may not be concerned about the potassium level (field-level data) of patients (record-level data); therefore, you can filter out the K (potassium) field. This can be done using a separate Filter node or using the Filter tab on an import or output node. The functionality is the same regardless of which node it's accessed from. - - - -* From import nodes, you can rename or filter fields as the data is read in. -* Using a Filter node, you can rename or filter fields at any point in the flow. -" -B8522E9801281DD4118A5012ACF885A7EC2354E4,B8522E9801281DD4118A5012ACF885A7EC2354E4," GenLin node - -The generalized linear model expands the general linear model so that the dependent variable is linearly related to the factors and covariates via a specified link function. Moreover, the model allows for the dependent variable to have a non-normal distribution. It covers widely used statistical models, such as linear regression for normally distributed responses, logistic models for binary data, loglinear models for count data, complementary log-log models for interval-censored survival data, plus many other statistical models through its very general model formulation. - -Examples. A shipping company can use generalized linear models to fit a Poisson regression to damage counts for several types of ships constructed in different time periods, and the resulting model can help determine which ship types are most prone to damage. - -A car insurance company can use generalized linear models to fit a gamma regression to damage claims for cars, and the resulting model can help determine the factors that contribute the most to claim size. - -Medical researchers can use generalized linear models to fit a complementary log-log regression to interval-censored survival data to predict the time to recurrence for a medical condition. - -Generalized linear models work by building an equation that relates the input field values to the output field values. After the model is generated, you can use it to estimate values for new data. For each record, a probability of membership is computed for each possible output category. The target category with the highest probability is assigned as the predicted output value for that record. - -Requirements. You need one or more input fields and exactly one target field (which can have a measurement level of Continuous or Flag) with two or more categories. Fields used in the model must have their types fully instantiated. - -Strengths. The generalized linear model is extremely flexible, but the process of choosing the model structure is not automated and thus demands a level of familiarity with your data that is not required by ""black box"" algorithms. -" -CF6FE4E4058C24F0BEB94D379FB9E820C09456D2,CF6FE4E4058C24F0BEB94D379FB9E820C09456D2," GLE node - -The GLE model identifies the dependent variable that is linearly related to the factors and covariates via a specified link function. Moreover, the model allows for the dependent variable to have a non-normal distribution. It covers widely used statistical models, such as linear regression for normally distributed responses, logistic models for binary data, loglinear models for count data, complementary log-log models for interval-censored survival data, plus many other statistical models through its very general model formulation. - -Examples. A shipping company can use generalized linear models to fit a Poisson regression to damage counts for several types of ships constructed in different time periods, and the resulting model can help determine which ship types are most prone to damage. - -A car insurance company can use generalized linear models to fit a gamma regression to damage claims for cars, and the resulting model can help determine the factors that contribute the most to claim size. - -Medical researchers can use generalized linear models to fit a complementary log-log regression to interval-censored survival data to predict the time to recurrence for a medical condition. - -GLE models work by building an equation that relates the input field values to the output field values. After the model is generated, you can use it to estimate values for new data. - -For a categorical target, for each record, a probability of membership is computed for each possible output category. The target category with the highest probability is assigned as the predicted output value for that record. - -Requirements. You need one or more input fields and exactly one target field (which can have a measurement level of Continuous, Categorical, or Flag) with two or more categories. Fields used in the model must have their types fully instantiated. - -Note: When first creating a flow, you select which runtime to use. By default, flows use the IBM SPSS Modeler runtime. If you want to use native Spark algorithms instead of SPSS algorithms, select the Spark runtime. Properties for this node will vary depending on which runtime option you choose. -" -B561F461842BB0D185F097E0ADB8D3AC13266172_0,B561F461842BB0D185F097E0ADB8D3AC13266172," GLMM node - -This node creates a generalized linear mixed model (GLMM). - -Generalized linear mixed models extend the linear model so that: - - - -* The target is linearly related to the factors and covariates via a specified link function -* The target can have a non-normal distribution -* The observations can be correlated - - - -Generalized linear mixed models cover a wide variety of models, from simple linear regression to complex multilevel models for non-normal longitudinal data. - -Examples. The district school board can use a generalized linear mixed model to determine whether an experimental teaching method is effective at improving math scores. Students from the same classroom should be correlated since they are taught by the same teacher, and classrooms within the same school may also be correlated, so we can include random effects at school and class levels to account for different sources of variability. - -Medical researchers can use a generalized linear mixed model to determine whether a new anticonvulsant drug can reduce a patient's rate of epileptic seizures. Repeated measurements from the same patient are typically positively correlated so a mixed model with some random effects should be appropriate. The target field – the number of seizures – takes positive integer values, so a generalized linear mixed model with a Poisson distribution and log link may be appropriate. - -Executives at a cable provider of television, phone, and internet services can use a generalized linear mixed model to learn more about potential customers. Since possible answers have nominal measurement levels, the company analyst uses a generalized logit mixed model with a random intercept to capture correlation between answers to the service usage questions across service types (tv, phone, internet) within a given survey responder's answers. - -In the node properties, data structure options allow you to specify the structural relationships between records in your dataset when observations are correlated. If the records in the dataset represent independent observations, you don't need to specify any data structure options. - -" -B561F461842BB0D185F097E0ADB8D3AC13266172_1,B561F461842BB0D185F097E0ADB8D3AC13266172,"Subjects. The combination of values of the specified categorical fields should uniquely define subjects within the dataset. For example, a single Patient ID field should be sufficient to define subjects in a single hospital, but the combination of Hospital ID and Patient ID may be necessary if patient identification numbers are not unique across hospitals. In a repeated measures setting, multiple observations are recorded for each subject, so each subject may occupy multiple records in the dataset. - -A subject is an observational unit that can be considered independent of other subjects. For example, the blood pressure readings from a patient in a medical study can be considered independent of the readings from other patients. Defining subjects becomes particularly important when there are repeated measurements per subject and you want to model the correlation between these observations. For example, you might expect that blood pressure readings from a single patient during consecutive visits to the doctor are correlated. - -All of the fields specified as subjects in the node properties are used to define subjects for the residual covariance structure, and provide the list of possible fields for defining subjects for random-effects covariance structures on the Random Effect Block. - -Repeated measures. The fields specified here are used to identify repeated observations. For example, a single variable Week might identify the 10 weeks of observations in a medical study, or Month and Day might be used together to identify daily observations over the course of a year. - -Define covariance groups by. The categorical fields specified here define independent sets of repeated effects covariance parameters; one for each category defined by the cross-classification of the grouping fields. All subjects have the same covariance type, and subjects within the same covariance grouping will have the same values for the parameters. - -Spatial covariance coordinates. The variables in this list specify the coordinates of the repeated observations when one of the spatial covariance types is selected for the repeated covariance type. - -Repeated covariance type. This specifies the covariance structure for the residuals. The available structures are: - - - -* First-order autoregressive (AR1) -* Autoregressive moving average (1,1) (ARMA11) -" -B561F461842BB0D185F097E0ADB8D3AC13266172_2,B561F461842BB0D185F097E0ADB8D3AC13266172,"* Compound symmetry -* Diagonal -* Scaled identity -* Spatial: Power -* Spatial: Exponential -* Spatial: Gaussian -* Spatial: Linear -* Spatial: Linear-log -* Spatial: Spherical -* Toeplitz -" -E6B5EAD096E68A255C5526ADD4C828534891C090,E6B5EAD096E68A255C5526ADD4C828534891C090," Gaussian Mixture node - -A Gaussian Mixture© model is a probabilistic model that assumes all the data points are generated from a mixture of a finite number of Gaussian distributions with unknown parameters. - -One can think of mixture models as generalizing k-means clustering to incorporate information about the covariance structure of the data as well as the centers of the latent Gaussians.^1^ - -The Gaussian Mixture node in watsonx.ai exposes the core features and commonly used parameters of the Gaussian Mixture library. The node is implemented in Python. - -For more information about Gaussian Mixture modeling algorithms and parameters, see [Gaussian Mixture Models](http://scikit-learn.org/stable/modules/mixture.html) and [Gaussian Mixture](https://scikit-learn.org/stable/modules/generated/sklearn.mixture.GaussianMixture.html). ^2^ - -^1^ [User Guide.](https://scikit-learn.org/stable/modules/mixture.html)Gaussian mixture models. Web. © 2007 - 2017. scikit-learn developers. - -^2^ [Scikit-learn: Machine Learning in Python](http://jmlr.csail.mit.edu/papers/v12/pedregosa11a.html), Pedregosa et al., JMLR 12, pp. 2825-2830, 2011. -" -A1FE4B06DB60F8A9C916FBEAF5C7482155BD62E3,A1FE4B06DB60F8A9C916FBEAF5C7482155BD62E3," HDBSCAN node - -Hierarchical Density-Based Spatial Clustering (HDBSCAN)© uses unsupervised learning to find clusters, or dense regions, of a data set. - -The HDBSCAN node in watsonx.ai exposes the core features and commonly used parameters of the HDBSCAN library. The node is implemented in Python, and you can use it to cluster your dataset into distinct groups when you don't know what those groups are at first. Unlike most learning methods in watsonx.ai, HDBSCAN models do not use a target field. This type of learning, with no target field, is called unsupervised learning. Rather than trying to predict an outcome, HDBSCAN tries to uncover patterns in the set of input fields. Records are grouped so that records within a group or cluster tend to be similar to each other, but records in different groups are dissimilar. The HDBSCAN algorithm views clusters as areas of high density separated by areas of low density. Due to this rather generic view, clusters found by HDBSCAN can be any shape, as opposed to k-means which assumes that clusters are convex shaped. Outlier points that lie alone in low-density regions are also marked. HDBSCAN also supports scoring of new samples.^1^ - -To use the HDBSCAN node, you must set up an upstream Type node. The HDBSCAN node will read input values from the Type node (or from the Types of an upstream import node). - -For more information about HDBSCAN clustering algorithms, see the [HDBSCAN documentation](http://hdbscan.readthedocs.io/en/latest/). ^1^ - -^1^ ""User Guide / Tutorial."" The hdbscan Clustering Library. Web. © 2016, Leland McInnes, John Healy, Steve Astels. -" -13F7C9C7B52EC7152F2B3D81B6EB42DB0319A6F4,13F7C9C7B52EC7152F2B3D81B6EB42DB0319A6F4," Histogram node - -Histogram nodes show the occurrence of values for numeric fields. They are often used to explore the data before manipulations and model building. Similar to the Distribution node, Histogram nodes are frequently used to reveal imbalances in the data. - -Note: To show the occurrence of values for symbolic fields, you should use a Distribution node. -" -00205C92C52FA28DB619EE1F9C8D76FE8564DB88,00205C92C52FA28DB619EE1F9C8D76FE8564DB88," History node - -History nodes are most often used for sequential data, such as time series data. - -They are used to create new fields containing data from fields in previous records. When using a History node, you may want to use data that is presorted by a particular field. You can use a Sort node to do this. -" -1BC1FE73146C70FA2A76241470314A4732EFD918,1BC1FE73146C70FA2A76241470314A4732EFD918," Isotonic-AS node - -Isotonic Regression belongs to the family of regression algorithms. The Isotonic-AS node in watsonx.ai is implemented in Spark. - -For details, see [Isotonic regression](https://spark.apache.org/docs/2.2.0/mllib-isotonic-regression.html). ^1^ - -^1^ ""Regression - RDD-based API."" Apache Spark. MLlib: Main Guide. Web. 3 Oct 2017. -" -22A8F7539D1374784E9BF247B1370C430910F43D,22A8F7539D1374784E9BF247B1370C430910F43D," KDE node - -Kernel Density Estimation (KDE)© uses the Ball Tree or KD Tree algorithms for efficient queries, and walks the line between unsupervised learning, feature engineering, and data modeling. - -Neighbor-based approaches such as KDE are some of the most popular and useful density estimation techniques. KDE can be performed in any number of dimensions, though in practice high dimensionality can cause a degradation of performance. The KDE Modeling node and the KDE Simulation node in watsonx.ai expose the core features and commonly used parameters of the KDE library. The nodes are implemented in Python. ^1^ - -To use a KDE node, you must set up an upstream Type node. The KDE node will read input values from the Type node (or from the Types of an upstream import node). - -The KDE Modeling node is available under the Modeling node palette. The KDE Modeling node generates a model nugget, and the nugget's scored values are kernel density values from the input data. - -The KDE Simulation node is available under the Outputs node palette. The KDE Simulation node generates a KDE Gen source node that can create some records that have the same distribution as the input data. In the KDE Gen node properties, you can specify how many records the node will create (default is 1) and generate a random seed. - -For more information about KDE, including examples, see the [KDE documentation](http://scikit-learn.org/stable/modules/density.htmlkernel-density-estimation). ^1^ - -^1^ ""User Guide."" Kernel Density Estimation. Web. © 2007-2018, scikit-learn developers. -" -033E2B1CD9E006383C2D2C045B8834BFBBAB0F09,033E2B1CD9E006383C2D2C045B8834BFBBAB0F09," KDE Simulation node - -Kernel Density Estimation (KDE)© uses the Ball Tree or KD Tree algorithms for efficient queries, and walks the line between unsupervised learning, feature engineering, and data modeling. - -Neighbor-based approaches such as KDE are some of the most popular and useful density estimation techniques. KDE can be performed in any number of dimensions, though in practice high dimensionality can cause a degradation of performance. The KDE Modeling node and the KDE Simulation node in watsonx.ai expose the core features and commonly used parameters of the KDE library. The nodes are implemented in Python. ^1^ - -To use a KDE node, you must set up an upstream Type node. The KDE node will read input values from the Type node (or from the Types of an upstream import node). - -The KDE Modeling node is available under the Modeling node palette. The KDE Modeling node generates a model nugget, and the nugget's scored values are kernel density values from the input data. - -The KDE Simulation node is available under the Outputs node palette. The KDE Simulation node generates a KDE Gen source node that can create some records that have the same distribution as the input data. In the KDE Gen node properties, you can specify how many records the node will create (default is 1) and generate a random seed. - -For more information about KDE, including examples, see the [KDE documentation](http://scikit-learn.org/stable/modules/density.htmlkernel-density-estimation). ^1^ - -^1^ ""User Guide."" Kernel Density Estimation. Web. © 2007-2018, scikit-learn developers. -" -13A1FF3338F4AC1EB2CF3FF6781283B49AC8B5A6,13A1FF3338F4AC1EB2CF3FF6781283B49AC8B5A6," K-Means node - -The K-Means node provides a method of cluster analysis. It can be used to cluster the dataset into distinct groups when you don't know what those groups are at the beginning. Unlike most learning methods in SPSS Modeler, K-Means models do not use a target field. This type of learning, with no target field, is called unsupervised learning. Instead of trying to predict an outcome, K-Means tries to uncover patterns in the set of input fields. Records are grouped so that records within a group or cluster tend to be similar to each other, but records in different groups are dissimilar. - -K-Means works by defining a set of starting cluster centers derived from data. It then assigns each record to the cluster to which it is most similar, based on the record's input field values. After all cases have been assigned, the cluster centers are updated to reflect the new set of records assigned to each cluster. The records are then checked again to see whether they should be reassigned to a different cluster, and the record assignment/cluster iteration process continues until either the maximum number of iterations is reached, or the change between one iteration and the next fails to exceed a specified threshold. - -Note: The resulting model depends to a certain extent on the order of the training data. Reordering the data and rebuilding the model may lead to a different final cluster model. - -Requirements. To train a K-Means model, you need one or more fields with the role set to Input. Fields with the role set to Output, Both, or None are ignored. - -Strengths. You do not need to have data on group membership to build a K-Means model. The K-Means model is often the fastest method of clustering for large datasets. -" -DCE39CA6C888CA6D5CF3F9B9D18D06FD3BD2DFBE,DCE39CA6C888CA6D5CF3F9B9D18D06FD3BD2DFBE," K-Means-AS node - -K-Means is one of the most commonly used clustering algorithms. It clusters data points into a predefined number of clusters. The K-Means-AS node in SPSS Modeler is implemented in Spark. - -See [K-Means Algorithms](https://spark.apache.org/docs/2.2.0/ml-clustering.html) for more details.^1^ - -Note that the K-Means-AS node performs one-hot encoding automatically for categorical variables. - -^1^ ""Clustering."" Apache Spark. MLlib: Main Guide. Web. 3 Oct 2017. -" -1DD1ED59E93DA4F6576E7EB1E420213AB34DD1DD,1DD1ED59E93DA4F6576E7EB1E420213AB34DD1DD," KNN node - -Nearest Neighbor Analysis is a method for classifying cases based on their similarity to other cases. In machine learning, it was developed as a way to recognize patterns of data without requiring an exact match to any stored patterns, or cases. Similar cases are near each other and dissimilar cases are distant from each other. Thus, the distance between two cases is a measure of their dissimilarity. - -Cases that are near each other are said to be ""neighbors."" When a new case (holdout) is presented, its distance from each of the cases in the model is computed. The classifications of the most similar cases – the nearest neighbors – are tallied and the new case is placed into the category that contains the greatest number of nearest neighbors. - -You can specify the number of nearest neighbors to examine; this value is called k. The pictures show how a new case would be classified using two different values of k. When k = 5, the new case is placed in category 1 because a majority of the nearest neighbors belong to category 1. However, when k = 9, the new case is placed in category 0 because a majority of the nearest neighbors belong to category 0. - -Nearest neighbor analysis can also be used to compute values for a continuous target. In this situation, the average or median target value of the nearest neighbors is used to obtain the predicted value for the new case. -" -F965BE0F67B8B3C26BE38939A33FA8AB74AEA4CC_0,F965BE0F67B8B3C26BE38939A33FA8AB74AEA4CC," Kohonen node - -Kohonen networks are a type of neural network that perform clustering, also known as a knet or a self-organizing map. This type of network can be used to cluster the dataset into distinct groups when you don't know what those groups are at the beginning. Records are grouped so that records within a group or cluster tend to be similar to each other, and records in different groups are dissimilar. - -The basic units are neurons, and they are organized into two layers: the input layer and the output layer (also called the output map). All of the input neurons are connected to all of the output neurons, and these connections have strengths, or weights, associated with them. During training, each unit competes with all of the others to ""win"" each record. - -The output map is a two-dimensional grid of neurons, with no connections between the units. - -Input data is presented to the input layer, and the values are propagated to the output layer. The output neuron with the strongest response is said to be the winner and is the answer for that input. - -Initially, all weights are random. When a unit wins a record, its weights (along with those of other nearby units, collectively referred to as a neighborhood) are adjusted to better match the pattern of predictor values for that record. All of the input records are shown, and weights are updated accordingly. This process is repeated many times until the changes become very small. As training proceeds, the weights on the grid units are adjusted so that they form a two-dimensional ""map"" of the clusters (hence the term self-organizing map). - -When the network is fully trained, records that are similar should be close together on the output map, whereas records that are vastly different will be far apart. - -" -F965BE0F67B8B3C26BE38939A33FA8AB74AEA4CC_1,F965BE0F67B8B3C26BE38939A33FA8AB74AEA4CC,"Unlike most learning methods in watsonx.ai, Kohonen networks do not use a target field. This type of learning, with no target field, is called unsupervised learning. Instead of trying to predict an outcome, Kohonen nets try to uncover patterns in the set of input fields. Usually, a Kohonen net will end up with a few units that summarize many observations (strong units), and several units that don't really correspond to any of the observations (weak units). The strong units (and sometimes other units adjacent to them in the grid) represent probable cluster centers. - -Another use of Kohonen networks is in dimension reduction. The spatial characteristic of the two-dimensional grid provides a mapping from the k original predictors to two derived features that preserve the similarity relationships in the original predictors. In some cases, this can give you the same kind of benefit as factor analysis or PCA. - -Note that the method for calculating default size of the output grid is different from older versions of SPSS Modeler. The method will generally produce smaller output layers that are faster to train and generalize better. If you find that you get poor results with the default size, try increasing the size of the output grid on the Expert tab. - -Requirements. To train a Kohonen net, you need one or more fields with the role set to Input. Fields with the role set to Target, Both, or None are ignored. - -Strengths. You do not need to have data on group membership to build a Kohonen network model. You don't even need to know the number of groups to look for. Kohonen networks start with a large number of units, and as training progresses, the units gravitate toward the natural clusters in the data. You can look at the number of observations captured by each unit in the model nugget to identify the strong units, which can give you a sense of the appropriate number of clusters. -" -67241853FC2471C6C0719F1B98E40625358B2E19,67241853FC2471C6C0719F1B98E40625358B2E19," Reading in source text - -You can use the Language Identifier node to identify the natural language of a text field within your source data. The output of this node is a derived field that contains the detected language code. - -![Language Identifier node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/ta_languageidentifier.png) - -Data for text mining can be in any of the standard formats that are used by SPSS Modeler flows, including databases or other ""rectangular"" formats that represent data in rows and columns. - - - -" -FC8006009802AE14770BE53062787D8A392B0070,FC8006009802AE14770BE53062787D8A392B0070," Linear node - -Linear regression is a common statistical technique for classifying records based on the values of numeric input fields. Linear regression fits a straight line or surface that minimizes the discrepancies between predicted and actual output values. - -Requirements. Only numeric fields can be used in a linear regression model. You must have exactly one target field (with the role set to Target) and one or more predictors (with the role set to Input). Fields with a role of Both or None are ignored, as are non-numeric fields. (If necessary, non-numeric fields can be recoded using a Derive node.) - -Strengths. Linear regression models are relatively simple and give an easily interpreted mathematical formula for generating predictions. Because linear regression is a long-established statistical procedure, the properties of these models are well understood. Linear models are also typically very fast to train. The Linear node provides methods for automatic field selection in order to eliminate nonsignificant input fields from the equation. - -Tip: In cases where the target field is categorical rather than a continuous range, such as yes/no or churn/don't churn, logistic regression can be used as an alternative. Logistic regression also provides support for non-numeric inputs, removing the need to recode these fields. Note: When first creating a flow, you select which runtime to use. By default, flows use the IBM SPSS Modeler runtime. If you want to use native Spark algorithms instead of SPSS algorithms, select the Spark runtime. Properties for this node will vary depending on which runtime option you choose. -" -2D9ACE87F4859BF7EF8CDF4EBBF8307C51034471,2D9ACE87F4859BF7EF8CDF4EBBF8307C51034471," Linear-AS node - -Linear regression is a common statistical technique for classifying records based on the values of numeric input fields. Linear regression fits a straight line or surface that minimizes the discrepancies between predicted and actual output values. - -Requirements. Only numeric fields and categorical predictors can be used in a linear regression model. You must have exactly one target field (with the role set to Target) and one or more predictors (with the role set to Input). Fields with a role of Both or None are ignored, as are non-numeric fields. (If necessary, non-numeric fields can be recoded using a Derive node.) - -Strengths. Linear regression models are relatively simple and give an easily interpreted mathematical formula for generating predictions. Because linear regression is a long-established statistical procedure, the properties of these models are well understood. Linear models are also typically very fast to train. The Linear node provides methods for automatic field selection in order to eliminate non-significant input fields from the equation. - -Note: In cases where the target field is categorical rather than a continuous range, such as yes/no or churn/don't churn, logistic regression can be used as an alternative. Logistic regression also provides support for non-numeric inputs, removing the need to recode these fields. -" -DE0C1913D6D770641762ED518FEFE8FFFC5A1F13_0,DE0C1913D6D770641762ED518FEFE8FFFC5A1F13," Logistic node - -Logistic regression, also known as nominal regression, is a statistical technique for classifying records based on values of input fields. It is analogous to linear regression but takes a categorical target field instead of a numeric one. Both binomial models (for targets with two discrete categories) and multinomial models (for targets with more than two categories) are supported. - -Logistic regression works by building a set of equations that relate the input field values to the probabilities associated with each of the output field categories. After the model is generated, you can use it to estimate probabilities for new data. For each record, a probability of membership is computed for each possible output category. The target category with the highest probability is assigned as the predicted output value for that record. - -Binomial example. A telecommunications provider is concerned about the number of customers it is losing to competitors. Using service usage data, you can create a binomial model to predict which customers are liable to transfer to another provider and customize offers so as to retain as many customers as possible. A binomial model is used because the target has two distinct categories (likely to transfer or not). - -Note: For binomial models only, string fields are limited to eight characters. If necessary, longer strings can be recoded using a Reclassify node or by using the Anonymize node. - -Multinomial example. A telecommunications provider has segmented its customer base by service usage patterns, categorizing the customers into four groups. Using demographic data to predict group membership, you can create a multinomial model to classify prospective customers into groups and then customize offers for individual customers. - -Requirements. One or more input fields and exactly one categorical target field with two or more categories. For a binomial model the target must have a measurement level of Flag. For a multinomial model the target can have a measurement level of Flag, or of Nominal with two or more categories. Fields set to Both or None are ignored. Fields used in the model must have their types fully instantiated. - -" -DE0C1913D6D770641762ED518FEFE8FFFC5A1F13_1,DE0C1913D6D770641762ED518FEFE8FFFC5A1F13,"Strengths. Logistic regression models are often quite accurate. They can handle symbolic and numeric input fields. They can give predicted probabilities for all target categories so that a second-best guess can easily be identified. Logistic models are most effective when group membership is a truly categorical field; if group membership is based on values of a continuous range field (for example, high IQ versus low IQ), you should consider using linear regression to take advantage of the richer information offered by the full range of values. Logistic models can also perform automatic field selection, although other approaches such as tree models or Feature Selection might do this more quickly on large datasets. Finally, since logistic models are well understood by many analysts and data miners, they may be used by some as a baseline against which other modeling techniques can be compared. - -When processing large datasets, you can improve performance noticeably by disabling the likelihood-ratio test, an advanced output option. -" -A9E9D62E92156CEBC0D4619CDE322AF48CACE913,A9E9D62E92156CEBC0D4619CDE322AF48CACE913," LSVM node - -With the LSVM node, you can use a linear support vector machine to classify data. LSVM is particularly suited for use with wide datasets--that is, those with a large number of predictor fields. You can use the default settings on the node to produce a basic model relatively quickly, or you can use the build options to experiment with different settings. - -The LSVM node is similar to the SVM node, but it is linear and is better at handling a large number of records. - -After the model is built, you can: - - - -* Browse the model nugget to display the relative importance of the input fields in building the model. -* Append a Table node to the model nugget to view the model output. - - - -Example. A medical researcher has obtained a dataset containing characteristics of a number of human cell samples extracted from patients who were believed to be at risk of developing cancer. Analysis of the original data showed that many of the characteristics differed significantly between benign and malignant samples. The researcher wants to develop an LSVM model that can use the values of similar cell characteristics in samples from other patients to give an early indication of whether their samples might be benign or malignant. -" -774FD49C617DAC62F48EB31E08757E0AEC3D1282,774FD49C617DAC62F48EB31E08757E0AEC3D1282," Matrix node - -Use the Matrix to create a table that shows relationships between fields. It is most commonly used to show the relationship between two categorical fields (flag, nominal, or ordinal), but it can also be used to show relationships between continuous (numeric range) fields. -" -7B586E10794F26EA2654A7F7C34EC9EA48C8BFD4,7B586E10794F26EA2654A7F7C34EC9EA48C8BFD4," Means node - -The Means node compares the means between independent groups or between pairs of related fields to test whether a significant difference exists. For example, you can compare mean revenues before and after running a promotion or compare revenues from customers who didn't receive the promotion with those who did. - -You can compare means in two different ways, depending on your data: - - - -" -6647035446FC3A28586EBABC619D10DB5FE3F4FD,6647035446FC3A28586EBABC619D10DB5FE3F4FD," Merge node - -The function of a Merge node is to take multiple input records and create a single output record containing all or some of the input fields. This is a useful operation when you want to merge data from different sources, such as internal customer data and purchased demographic data. - -You can merge data in the following ways. - - - -* Merge by Order concatenates corresponding records from all sources in the order of input until the smallest data source is exhausted. It is important if using this option that you have sorted your data using a Sort node. -* Merge using a Key field, such as Customer ID, to specify how to match records from one data source with records from the other(s). Several types of joins are possible, including inner join, full outer join, partial outer join, and anti-join. -" -61E8DF28E1A79B4BBA03CDA39F350BE5E55DAC7B,61E8DF28E1A79B4BBA03CDA39F350BE5E55DAC7B," Functions available for missing values - -Different methods are available for dealing with missing values in your data. You may choose to use functionality available in Data Refinery or in nodes. -" -0E5C87704E816097FF9E649620A1818798B5DB3F,0E5C87704E816097FF9E649620A1818798B5DB3F," Handling fields with missing values - -If the majority of missing values are concentrated in a small number of fields, you can address them at the field level rather than at the record level. This approach also allows you to experiment with the relative importance of particular fields before deciding on an approach for handling missing values. If a field is unimportant in modeling, it probably isn't worth keeping, regardless of how many missing values it has. - -For example, a market research company may collect data from a general questionnaire containing 50 questions. Two of the questions address age and political persuasion, information that many people are reluctant to give. In this case, Age and Political_persuasion have many missing values. -" -D5FAFC625D1A1D0793D9521351E9B59A04AF00E9_0,D5FAFC625D1A1D0793D9521351E9B59A04AF00E9," Missing data values - -During the data preparation phase of data mining, you will often want to replace missing values in the data. - -Missing values are values in the data set that are unknown, uncollected, or incorrectly entered. Usually, such values aren't valid for their fields. For example, the field Sex should contain the values M and F. If you discover the values Y or Z in the field, you can safely assume that such values aren't valid and should therefore be interpreted as blanks. Likewise, a negative value for the field Age is meaningless and should also be interpreted as a blank. Frequently, such obviously wrong values are purposely entered, or fields are left blank, during a questionnaire to indicate a nonresponse. At times, you may want to examine these blanks more closely to determine whether a nonresponse, such as the refusal to give one's age, is a factor in predicting a specific outcome. - -Some modeling techniques handle missing data better than others. For example, the [C5.0 node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/c50.html) and the [Apriori node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/apriori.html) cope well with values that are explicitly declared as ""missing"" in a [Type node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type.html). Other modeling techniques have trouble dealing with missing values and experience longer training times, resulting in less-accurate models. - -There are several types of missing values recognized by : - - - -* Null or system-missing values. These are nonstring values that have been left blank in the database or source file and have not been specifically defined as ""missing"" in an [Import](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/_nodes_import.html) or Type node. System-missing values are displayed as $null$. Note that empty strings are not considered nulls in , although they may be treated as nulls by certain databases. -" -D5FAFC625D1A1D0793D9521351E9B59A04AF00E9_1,D5FAFC625D1A1D0793D9521351E9B59A04AF00E9,"* Empty strings and white space. Empty string values and white space (strings with no visible characters) are treated as distinct from null values. Empty strings are treated as equivalent to white space for most purposes. For example, if you select the option to treat white space as blanks in an Import or Type node, this setting applies to empty strings as well. -* Blank or user-defined missing values. These are values such as unknown, 99, or –1 that are explicitly defined in an Import node or Type node as missing. Optionally, you can also choose to treat nulls and white space as blanks, which allows them to be flagged for special treatment and to be excluded from most calculations. For example, you can use the @BLANK function to treat these values, along with other types of missing values, as blanks. - - - -Reading in mixed data. Note that when you're reading in fields with numeric storage (either integer, real, time, timestamp, or date), any non-numeric values are set to null or system missing. This is because, unlike some applications, doesn't allow mixed storage types within a field. To avoid this, you should read in any fields with mixed data as strings by changing the storage type in the Import node or external application as necessary. - -Reading empty strings from Oracle. When reading from or writing to an Oracle database, be aware that, unlike and unlike most other databases, Oracle treats and stores empty string values as equivalent to null values. This means that the same data extracted from an Oracle database may behave differently than when extracted from a file or another database, and the data may return different results. -" -FE9FF9F5CC449798C00D008182F55BDAA91E546C,FE9FF9F5CC449798C00D008182F55BDAA91E546C," Handling records with missing values - -If the majority of missing values are concentrated in a small number of records, you can just exclude those records. For example, a bank usually keeps detailed and complete records on its loan customers. - -If, however, the bank is less restrictive in approving loans for its own staff members, data gathered for staff loans is likely to have several blank fields. In such a case, there are two options for handling these missing values: - - - -" -3BA46A09CF64CE6120BE65C44614995B50B67DA1,3BA46A09CF64CE6120BE65C44614995B50B67DA1," Handling records with system missing values -" -01C8222216B795904018497993CC5E44D51A3B35,01C8222216B795904018497993CC5E44D51A3B35," Handling missing values - -You should decide how to treat missing values in light of your business or domain knowledge. To ease training time and increase accuracy, you may want to remove blanks from your data set. On the other hand, the presence of blank values may lead to new business opportunities or additional insights. - -In choosing the best technique, you should consider the following aspects of your data: - - - -* Size of the data set -* Number of fields containing blanks -* Amount of missing information - - - -In general terms, there are two approaches you can follow: - - - -" -6576530EC5D705B8BF323F6C459C32A87AE3F9A4,6576530EC5D705B8BF323F6C459C32A87AE3F9A4," MultiLayerPerceptron-AS node - -Multilayer perceptron is a classifier based on the feedforward artificial neural network and consists of multiple layers. - -Each layer is fully connected to the next layer in the network. See [Multilayer Perceptron Classifier (MLPC)](https://spark.apache.org/docs/latest/ml-classification-regression.htmlmultilayer-perceptron-classifier) for details.^1^ - -The MultiLayerPerceptron-AS node in watsonx.ai is implemented in Spark. To use a this node, you must set up an upstream Type node. The MultiLayerPerceptron-AS node will read input values from the Type node (or from the Types of an upstream import node). - -^1^ ""Multilayer perceptron classifier."" Apache Spark. MLlib: Main Guide. Web. 5 Oct 2018. -" -5F0FC43F57AB9AF130DEA6A795E1E81A6AA95ACC,5F0FC43F57AB9AF130DEA6A795E1E81A6AA95ACC," Multiplot node - -A multiplot is a special type of plot that displays multiple Y fields over a single X field. The Y fields are plotted as colored lines and each is equivalent to a Plot node with Style set to Line and X Mode set to Sort. Multiplots are useful when you have time sequence data and want to explore the fluctuation of several variables over time. -" -9F06DF311976F336CB3164B08D5DA7D6F93419E2,9F06DF311976F336CB3164B08D5DA7D6F93419E2," Neural Net node - -A neural network can approximate a wide range of predictive models with minimal demands on model structure and assumption. The form of the relationships is determined during the learning process. If a linear relationship between the target and predictors is appropriate, the results of the neural network should closely approximate those of a traditional linear model. If a nonlinear relationship is more appropriate, the neural network will automatically approximate the ""correct"" model structure. - -The trade-off for this flexibility is that the neural network is not easily interpretable. If you are trying to explain an underlying process that produces the relationships between the target and predictors, it would be better to use a more traditional statistical model. However, if model interpretability is not important, you can obtain good predictions using a neural network. - -Field requirements. There must be at least one Target and one Input. Fields set to Both or None are ignored. There are no measurement level restrictions on targets or predictors (inputs). - -The initial weights assigned to neural networks during model building, and therefore the final models produced, depend on the order of the fields in the data. Watsonx.ai automatically sorts data by field name before presenting it to the neural network for training. This means that explicitly changing the order of the fields in the data upstream will not affect the generated neural net models when a random seed is set in the model builder. However, changing the input field names in a way that changes their sort order will produce different neural network models, even with a random seed set in the model builder. The model quality will not be affected significantly given different sort order of field names. -" -9933646421686556C9AE8459EE2E51ED9DAB1C33,9933646421686556C9AE8459EE2E51ED9DAB1C33," Disabling or caching nodes in a flow - -You can disable a node so it's ignored when the flow runs. And you can set up a cache on a node. -" -759B6927189FEA6BE3124BF79FA527873CB84EA6,759B6927189FEA6BE3124BF79FA527873CB84EA6," One-Class SVM node - -The One-Class SVM© node uses an unsupervised learning algorithm. The node can be used for novelty detection. It will detect the soft boundary of a given set of samples, to then classify new points as belonging to that set or not. This One-Class SVM modeling node is implemented in Python and requires the scikit-learn© Python library. - -For details about the scikit-learn library, see [Support Vector Machines](http://scikit-learn.org/stable/modules/svm.htmlsvm-outlier-detection)^1^. - -The Modeling tab on the palette contains the One-Class SVM node and other Python nodes. - -Note: One-Class SVM is used for usupervised outlier and novelty detection. In most cases, we recommend using a known, ""normal"" dataset to build the model so the algorithm can set a correct boundary for the given samples. Parameters for the model – such as nu, gamma, and kernel – impact the result significantly. So you may need to experiment with these options until you find the optimal settings for your situation. - -^1^Smola, Schölkopf. ""A Tutorial on Support Vector Regression."" Statistics and Computing Archive, vol. 14, no. 3, August 2004, pp. 199-222. (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.114.4288) -" -98FC8E9A3380E4593D9BF08B78CE6A7797C0204B,98FC8E9A3380E4593D9BF08B78CE6A7797C0204B," Partition node - -Partition nodes are used to generate a partition field that splits the data into separate subsets or samples for the training, testing, and validation stages of model building. By using one sample to generate the model and a separate sample to test it, you can get a good indication of how well the model will generalize to larger datasets that are similar to the current data. - -The Partition node generates a nominal field with the role set to Partition. Alternatively, if an appropriate field already exists in your data, it can be designated as a partition using a Type node. In this case, no separate Partition node is required. Any instantiated nominal field with two or three values can be used as a partition, but flag fields cannot be used. - -Multiple partition fields can be defined in a flow, but if so, a single partition field must be selected in each modeling node that uses partitioning. (If only one partition is present, it is automatically used whenever partitioning is enabled.) - -To create a partition field based on some other criterion such as a date range or location, you can also use a Derive node. See [Derive node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/derive.htmlderive) for more information. - -Example. When building an RFM flow to identify recent customers who have positively responded to previous marketing campaigns, the marketing department of a sales company uses a Partition node to split the data into training and test partitions. -" -CFC54BB4CEA29104BD4F9793B51ABE558AA0250D,CFC54BB4CEA29104BD4F9793B51ABE558AA0250D," Plot node - -Plot nodes show the relationship between numeric fields. You can create a plot using points (also known as a scatterplot), or you can use lines. You can create three types of line plots by specifying an X Mode in the node properties. -" -5E2A4B92C4F5F84B3DDE2EAD6827C7FA89EB0565,5E2A4B92C4F5F84B3DDE2EAD6827C7FA89EB0565," QUEST node - -QUEST—or Quick, Unbiased, Efficient Statistical Tree—is a binary classification method for building decision trees. A major motivation in its development was to reduce the processing time required for large C&R Tree analyses with either many variables or many cases. A second goal of QUEST was to reduce the tendency found in classification tree methods to favor inputs that allow more splits, that is, continuous (numeric range) input fields or those with many categories. - - - -* QUEST uses a sequence of rules, based on significance tests, to evaluate the input fields at a node. For selection purposes, as little as a single test may need to be performed on each input at a node. Unlike C&R Tree, all splits are not examined, and unlike C&R Tree and CHAID, category combinations are not tested when evaluating an input field for selection. This speeds the analysis. -* Splits are determined by running quadratic discriminant analysis using the selected input on groups formed by the target categories. This method again results in a speed improvement over exhaustive search (C&R Tree) to determine the optimal split. - - - -Requirements. Input fields can be continuous (numeric ranges), but the target field must be categorical. All splits are binary. Weight fields cannot be used. Any ordinal (ordered set) fields used in the model must have numeric storage (not string). If necessary, the Reclassify node can be used to convert them. - -Strengths. Like CHAID, but unlike C&R Tree, QUEST uses statistical tests to decide whether or not an input field is used. It also separates the issues of input selection and splitting, applying different criteria to each. This contrasts with CHAID, in which the statistical test result that determines variable selection also produces the split. Similarly, C&R Tree employs the impurity-change measure to both select the input field and to determine the split. -" -2581DD8F04F917BA91F1201137AE0EFEA1F82E26,2581DD8F04F917BA91F1201137AE0EFEA1F82E26," Random Forest node - -Random Forest© is an advanced implementation of a bagging algorithm with a tree model as the base model. - -In random forests, each tree in the ensemble is built from a sample drawn with replacement (for example, a bootstrap sample) from the training set. When splitting a node during the construction of the tree, the split that is chosen is no longer the best split among all features. Instead, the split that is picked is the best split among a random subset of the features. Because of this randomness, the bias of the forest usually slightly increases (with respect to the bias of a single non-random tree) but, due to averaging, its variance also decreases, usually more than compensating for the increase in bias, hence yielding an overall better model.^1^ - -The Random Forest node in watsonx.ai is implemented in Python. The nodes palette contains this node and other Python nodes. - -For more information about random forest algorithms, see [Forests of randomized trees](https://scikit-learn.org/stable/modules/ensemble.htmlforest). - -^1^L. Breiman, ""Random Forests,"" Machine Learning, 45(1), 5-32, 2001. -" -01800E00BDFB7CFE0E751FA6C616160C48E6ED21_0,01800E00BDFB7CFE0E751FA6C616160C48E6ED21," Random Trees node - -The Random Trees node can be used with data in a distributed environment. In this node, you build an ensemble model that consists of multiple decision trees. - -The Random Trees node is a tree-based classification and prediction method that is built on Classification and Regression Tree methodology. As with C&R Tree, this prediction method uses recursive partitioning to split the training records into segments with similar output field values. The node starts by examining the input fields available to it to find the best split, which is measured by the reduction in an impurity index that results from the split. The split defines two subgroups, each of which is then split into two more subgroups, and so on, until one of the stopping criteria is triggered. All splits are binary (only two subgroups). - -The Random Trees node uses bootstrap sampling with replacement to generate sample data. The sample data is used to grow a tree model. During tree growth, Random Trees will not sample the data again. Instead, it randomly selects part of the predictors and uses the best one to split a tree node. This process is repeated when splitting each tree node. This is the basic idea of growing a tree in random forest. - -Random Trees uses C&R Tree-like trees. Since such trees are binary, each field for splitting results in two branches. For a categorical field with multiple categories, the categories are grouped into two groups based on the inner splitting criterion. Each tree grows to the largest extent possible (there is no pruning). In scoring, Random Trees combines individual tree scores by majority voting (for classification) or average (for regression). - -Random Trees differ from C&R Trees as follows: - - - -* Random Trees nodes randomly select a specified number of predictors and uses the best one from the selection to split a node. In contrast, C&R Tree finds the best one from all predictors. -" -01800E00BDFB7CFE0E751FA6C616160C48E6ED21_1,01800E00BDFB7CFE0E751FA6C616160C48E6ED21,"* Each tree in Random Trees grows fully until each leaf node typically contains a single record. So the tree depth could be very large. But standard C&R Tree uses different stopping rules for tree growth, which usually leads to a much shallower tree. - - - -Random Trees adds two features compared to C&R Tree: - - - -* The first feature is bagging, where replicas of the training dataset are created by sampling with replacement from the original dataset. This action creates bootstrap samples that are of equal size to the original dataset, after which a component model is built on each replica. Together these component models form an ensemble model. -* The second feature is that, at each split of the tree, only a sampling of the input fields is considered for the impurity measure. - - - -Requirements. To train a Random Trees model, you need one or more Input fields and one Target field. Target and input fields can be continuous (numeric range) or categorical. Fields that are set to either Both or None are ignored. Fields that are used in the model must have their types fully instantiated, and any ordinal (ordered set) fields that are used in the model must have numeric storage (not string). If necessary, the Reclassify node can be used to convert them. - -Strengths. Random Trees models are robust when you are dealing with large data sets and numbers of fields. Due to the use of bagging and field sampling, they are much less prone to overfitting and thus the results that are seen in testing are more likely to be repeated when you use new data. - -Note: When first creating a flow, you select which runtime to use. By default, flows use the IBM SPSS Modeler runtime. If you want to use native Spark algorithms instead of SPSS algorithms, select the Spark runtime. Properties for this node will vary depending on which runtime option you choose. -" -2D3F7F5EFB161E0D88AE69C4710D70AA99DB0BDE,2D3F7F5EFB161E0D88AE69C4710D70AA99DB0BDE," Reclassify node - -The Reclassify node enables the transformation from one set of categorical values to another. Reclassification is useful for collapsing categories or regrouping data for analysis. - -For example, you could reclassify the values for Product into three groups, such as Kitchenware, Bath and Linens, and Appliances. - -Reclassification can be performed for one or more symbolic fields. You can also choose to substitute the new values for the existing field or generate a new field. -" -BBDEDA771A051A9B1871F9BEC9589D91421E7C0C,BBDEDA771A051A9B1871F9BEC9589D91421E7C0C," Regression node - -Linear regression is a common statistical technique for classifying records based on the values of numeric input fields. Linear regression fits a straight line or surface that minimizes the discrepancies between predicted and actual output values. - -Requirements. Only numeric fields can be used in a regression model. You must have exactly one target field (with the role set to Target) and one or more predictors (with the role set to Input). Fields with a role of Both or None are ignored, as are non-numeric fields. (If necessary, non-numeric fields can be recoded using a Derive node. ) - -Strengths. Regression models are relatively simple and give an easily interpreted mathematical formula for generating predictions. Because regression modeling is a long-established statistical procedure, the properties of these models are well understood. Regression models are also typically very fast to train. The Regression node provides methods for automatic field selection in order to eliminate nonsignificant input fields from the equation. - -Note: In cases where the target field is categorical rather than a continuous range, such as yes/no or churn/don't churn, logistic regression can be used as an alternative. Logistic regression also provides support for non-numeric inputs, removing the need to recode these fields. See [Logistic node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/logreg.htmllogreg) for more information. -" -8322C981206A5C7EEEC48C32C9DDCEC9FCE98AEE,8322C981206A5C7EEEC48C32C9DDCEC9FCE98AEE," Field Reorder node - -With the Field Reorder node, you can define the natural order used to display fields downstream. This order affects the display of fields in a variety of places, such as tables, lists, and the Field Chooser. - -This operation is useful, for example, when working with wide datasets to make fields of interest more visible. -" -BF6A65F061558B6AED8A438A887B6474A0FDFFC3,BF6A65F061558B6AED8A438A887B6474A0FDFFC3," Report node - -You can use the Report node to create formatted reports containing fixed text, data, or other expressions derived from the data. Specify the format of the report by using text templates to define the fixed text and the data output constructions. You can provide custom text formatting using HTML tags in the template and by setting output options. Data values and other conditional output are included in the report using CLEM expressions in the template. -" -36C8AF3BBAFFF1C227CF611D7327AFA8E378D6EC,36C8AF3BBAFFF1C227CF611D7327AFA8E378D6EC," Restructure node - -With the Restructure node, you can generate multiple fields based on the values of a nominal or flag field. The newly generated fields can contain values from another field or numeric flags (0 and 1). The functionality of this node is similar to that of the Set to Flag node. However, it offers more flexibility by allowing you to create fields of any type (including numeric flags), using the values from another field. You can then perform aggregation or other manipulations with other nodes downstream. (The Set to Flag node lets you aggregate fields in one step, which may be convenient if you are creating flag fields.) - -Figure 1. Restructure node - -![Restructure node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/restructure_node.png) -" -265714702B012F1010CE06D97EC16623360F4E2B,265714702B012F1010CE06D97EC16623360F4E2B," RFM Aggregate node - -The Recency, Frequency, Monetary (RFM) Aggregate node allows you to take customers' historical transactional data, strip away any unused data, and combine all of their remaining transaction data into a single row (using their unique customer ID as a key) that lists when they last dealt with you (recency), how many transactions they have made (frequency), and the total value of those transactions (monetary). - -Before proceeding with any aggregation, you should take time to clean the data, concentrating especially on any missing values. - -After you identify and transform the data using the RFM Aggregate node, you might use an RFM Analysis node to carry out further analysis. - -Note that after the data file has been run through the RFM Aggregate node, it won't have any target values; therefore, before using the data file as input for further predictive analysis with any modeling nodes such as C5.0 or CHAID, you need to merge it with other customer data (for example, by matching the customer IDs). - -The RFM Aggregate and RFM Analysis nodes use independent binning; that is, they rank and bin data on each measure of recency, frequency, and monetary value, without regard to their values or the other two measures. -" -9E15D946EDFB82EF911D36032C073CF1736B39DA_0,9E15D946EDFB82EF911D36032C073CF1736B39DA," RFM Analysis node - -You can use the Recency, Frequency, Monetary (RFM) Analysis node to determine quantitatively which customers are likely to be the best ones by examining how recently they last purchased from you (recency), how often they purchased (frequency), and how much they spent over all transactions (monetary). - -The reasoning behind RFM analysis is that customers who purchase a product or service once are more likely to purchase again. The categorized customer data is separated into a number of bins, with the binning criteria adjusted as you require. In each of the bins, customers are assigned a score; these scores are then combined to provide an overall RFM score. This score is a representation of the customer's membership in the bins created for each of the RFM parameters. This binned data may be sufficient for your needs, for example, by identifying the most frequent, high-value customers; alternatively, it can be passed on in a flow for further modeling and analysis. - -Note, however, that although the ability to analyze and rank RFM scores is a useful tool, you must be aware of certain factors when using it. There may be a temptation to target customers with the highest rankings; however, over-solicitation of these customers could lead to resentment and an actual fall in repeat business. It is also worth remembering that customers with low scores should not be neglected but instead may be cultivated to become better customers. Conversely, high scores alone do not necessarily reflect a good sales prospect, depending on the market. For example, a customer in bin 5 for recency, meaning that they have purchased very recently, may not actually be the best target customer for someone selling expensive, longer-life products such as cars or televisions. - -" -9E15D946EDFB82EF911D36032C073CF1736B39DA_1,9E15D946EDFB82EF911D36032C073CF1736B39DA,"Note: Depending on how your data is stored, you may need to precede the RFM Analysis node with an RFM Aggregate node to transform the data into a usable format. For example, input data must be in customer format, with one row per customer; if the customers' data is in transactional form, use an RFM Aggregate node upstream to derive the recency, frequency, and monetary fields. - -The RFM Aggregate and RFM Analysis nodes in are set up to use independent binning; that is, they rank and bin data on each measure of recency, frequency, and monetary value, without regard to their values or the other two measures. -" -AF3DA662099BD616B642F69925AEC7C8AFC84611,AF3DA662099BD616B642F69925AEC7C8AFC84611," Sample node - -You can use Sample nodes to select a subset of records for analysis, or to specify a proportion of records to discard. A variety of sample types are supported, including stratified, clustered, and nonrandom (structured) samples. - -Sampling can be used for several reasons: - - - -* To improve performance by estimating models on a subset of the data. Models estimated from a sample are often as accurate as those derived from the full dataset, and may be more so if the improved performance allows you to experiment with different methods you might not otherwise have attempted. -* To select groups of related records or transactions for analysis, such as selecting all the items in an online shopping cart (or market basket), or all the properties in a specific neighborhood. -* To identify units or cases for random inspection in the interest of quality assurance, fraud prevention, or security. - - - -Note: If you simply want to partition your data into training and test samples for purposes of validation, a Partition node can be used instead. See [Partition node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/partition.htmlpartition) for more information. -" -84E8928D464D412B225638BCC41F2837F98AEF43_0,84E8928D464D412B225638BCC41F2837F98AEF43," autodataprepnode properties - -![Auto Data Prep node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/adp_node_icon.png)The Auto Data Prep (ADP) node can analyze your data and identify fixes, screen out fields that are problematic or not likely to be useful, derive new attributes when appropriate, and improve performance through intelligent screening and sampling techniques. You can use the node in fully automated fashion, allowing the node to choose and apply fixes, or you can preview the changes before they are made and accept, reject, or amend them as desired. - - - -autodataprepnode properties - -Table 1. autodataprepnode properties - - autodataprepnode properties Data type Property description - - objective Balanced
Speed
Accuracy
Custom - custom_fields flag If true, allows you to specify target, input, and other fields for the current node. If false, the current settings from an upstream Type node are used. - target field Specifies a single target field. - inputs [field1 ... fieldN] Input or predictor fields used by the model. - use_frequency flag - frequency_field field - use_weight flag - weight_field field - excluded_fields Filter
None - if_fields_do_not_match StopExecution
ClearAnalysis - prepare_dates_and_times flag Control access to all the date and time fields - compute_time_until_date flag - reference_date Today
Fixed - fixed_date date - units_for_date_durations Automatic
Fixed - fixed_date_units Years
Months
Days - compute_time_until_time flag - reference_time CurrentTime
Fixed - fixed_time time - units_for_time_durations Automatic
Fixed - fixed_time_units Hours
Minutes
Seconds - extract_year_from_date flag - extract_month_from_date flag - extract_day_from_date flag -" -84E8928D464D412B225638BCC41F2837F98AEF43_1,84E8928D464D412B225638BCC41F2837F98AEF43," extract_hour_from_time flag - extract_minute_from_time flag - extract_second_from_time flag - exclude_low_quality_inputs flag - exclude_too_many_missing flag - maximum_percentage_missing number - exclude_too_many_categories flag - maximum_number_categories number - exclude_if_large_category flag - maximum_percentage_category number - prepare_inputs_and_target flag - adjust_type_inputs flag - adjust_type_target flag - reorder_nominal_inputs flag - reorder_nominal_target flag - replace_outliers_inputs flag - replace_outliers_target flag - replace_missing_continuous_inputs flag - replace_missing_continuous_target flag - replace_missing_nominal_inputs flag - replace_missing_nominal_target flag - replace_missing_ordinal_inputs flag - replace_missing_ordinal_target flag - maximum_values_for_ordinal number - minimum_values_for_continuous number - outlier_cutoff_value number - outlier_method Replace
Delete - rescale_continuous_inputs flag - rescaling_method MinMax
ZScore - min_max_minimum number - min_max_maximum number - z_score_final_mean number - z_score_final_sd number - rescale_continuous_target flag - target_final_mean number - target_final_sd number - transform_select_input_fields flag - maximize_association_with_target flag - p_value_for_merging number - merge_ordinal_features flag - merge_nominal_features flag - minimum_cases_in_category number - bin_continuous_fields flag - p_value_for_binning number - perform_feature_selection flag - p_value_for_selection number - perform_feature_construction flag - transformed_target_name_extension string - transformed_inputs_name_extension string - constructed_features_root_name string - years_duration_ name_extension string - months_duration_ name_extension string -" -84E8928D464D412B225638BCC41F2837F98AEF43_2,84E8928D464D412B225638BCC41F2837F98AEF43," days_duration_ name_extension string - hours_duration_ name_extension string - minutes_duration_ name_extension string - seconds_duration_ name_extension string - year_cyclical_name_extension string - month_cyclical_name_extension string - day_cyclical_name_extension string - hour_cyclical_name_extension string -" -8CD81C0F5F84DFE58834AEB8B71E6D7780B8DEAD,8CD81C0F5F84DFE58834AEB8B71E6D7780B8DEAD," aggregatenode properties - -![Aggregate node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/aggregatenodeicon.png) The Aggregate node replaces a sequence of input records with summarized, aggregated output records. - - - -aggregatenode properties - -Table 1. aggregatenode properties - - aggregatenode properties Data type Property description - - keys list Lists fields that can be used as keys for aggregation. For example, if Sex and Region are your key fields, each unique combination of M and F with regions N and S (four unique combinations) will have an aggregated record. - contiguous flag Select this option if you know that all records with the same key values are grouped together in the input (for example, if the input is sorted on the key fields). Doing so can improve performance. - aggregates Structured property listing the numeric fields whose values will be aggregated, as well as the selected modes of aggregation. - aggregate_exprs Keyed property which keys the derived field name with the aggregate expression used to compute it. For example:

aggregatenode.setKeyedPropertyValue (""aggregate_exprs"", ""Na_MAX"", ""MAX('Na')"") - extension string Specify a prefix or suffix for duplicate aggregated fields. - add_as Suffix
Prefix - inc_record_count flag Creates an extra field that specifies how many input records were aggregated to form each aggregate record. - count_field string Specifies the name of the record count field. - allow_approximation Boolean Allows approximation of order statistics when aggregation is performed in SPSS Analytic Server. -" -2C17E0A9E72FE65317838E81ACF1FA77620E0C6C,2C17E0A9E72FE65317838E81ACF1FA77620E0C6C," analysisnode properties - -![Analysis node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/analysisnodeicon.png)The Analysis node evaluates predictive models' ability to generate accurate predictions. Analysis nodes perform various comparisons between predicted values and actual values for one or more model nuggets. They can also compare predictive models to each other. - - - -analysisnode properties - -Table 1. analysisnode properties - - analysisnode properties Data type Property description - - output_mode ScreenFile Used to specify target location for output generated from the output node. - use_output_name flag Specifies whether a custom output name is used. - output_name string If use_output_name is true, specifies the name to use. - output_format Text (.txt) HTML (.html) Output (.cou) Used to specify the type of output. - by_fields list - full_filename string If disk, data, or HTML output, the name of the output file. - coincidence flag - performance flag - evaluation_binary flag - confidence flag - threshold number - improve_accuracy number - field_detection_method MetadataName Determines how predicted fields are matched to the original target field. Specify Metadata or Name. - inc_user_measure flag - user_if expr - user_then expr - user_else expr -" -5C2296329A2D24B1A22A3848731708D78949E74C_0,5C2296329A2D24B1A22A3848731708D78949E74C," anomalydetectionnode properties - -![Anomaly node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/anomalydetectionnodeicon.png)The Anomaly node identifies unusual cases, or outliers, that don't conform to patterns of ""normal"" data. With this node, it's possible to identify outliers even if they don't fit any previously known patterns and even if you're not exactly sure what you're looking for. - - - -anomalydetectionnode properties - -Table 1. anomalydetectionnode properties - - anomalydetectionnode Properties Values Property description - - inputs [field1 ... fieldN] Anomaly Detection models screen records based on the specified input fields. They don't use a target field. Weight and frequency fields are also not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information. - mode ExpertSimple - anomaly_method IndexLevelPerRecordsNumRecords Specifies the method used to determine the cutoff value for flagging records as anomalous. - index_level number Specifies the minimum cutoff value for flagging anomalies. - percent_records number Sets the threshold for flagging records based on the percentage of records in the training data. - num_records number Sets the threshold for flagging records based on the number of records in the training data. - num_fields integer The number of fields to report for each anomalous record. - impute_missing_values flag - adjustment_coeff number Value used to balance the relative weight given to continuous and categorical fields in calculating the distance. - peer_group_num_auto flag Automatically calculates the number of peer groups. - min_num_peer_groups integer Specifies the minimum number of peer groups used when peer_group_num_auto is set to True. -" -5C2296329A2D24B1A22A3848731708D78949E74C_1,5C2296329A2D24B1A22A3848731708D78949E74C," max_num_per_groups integer Specifies the maximum number of peer groups. - num_peer_groups integer Specifies the number of peer groups used when peer_group_num_auto is set to False. -" -B51FF1FBA515035A93290F353D20AD9D54BC043C,B51FF1FBA515035A93290F353D20AD9D54BC043C," applyanomalydetectionnode properties - -You can use Anomaly Detection modeling nodes to generate an Anomaly Detection model nugget. The scripting name of this model nugget is applyanomalydetectionnode. For more information on scripting the modeling node itself, see [anomalydetectionnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/anomalydetectionnodeslots.htmlanomalydetectionnodeslots). - - - -applyanomalydetectionnode properties - -Table 1. applyanomalydetectionnode properties - - applyanomalydetectionnode Properties Values Property description - - anomaly_score_method FlagAndScoreFlagOnlyScoreOnly Determines which outputs are created for scoring. - num_fields integer Fields to report. - discard_records flag Indicates whether records are discarded from the output or not. -" -65FFB2E27EACD57BCADC6C1646EB280212D3B2C2,65FFB2E27EACD57BCADC6C1646EB280212D3B2C2," anonymizenode properties - -![Anonymize node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/anonymizenodeicon.png)The Anonymize node transforms the way field names and values are represented downstream, thus disguising the original data. This can be useful if you want to allow other users to build models using sensitive data, such as customer names or other details. - - - -anonymizenode properties - -Table 1. anonymizenode properties - - anonymizenode properties Data type Property description - - enable_anonymize flag When set to True, activates anonymization of field values (equivalent to selecting Yes for that field in the Anonymize Values column). - use_prefix flag When set to True, a custom prefix will be used if one has been specified. Applies to fields that will be anonymized by the Hash method and is equivalent to choosing the Custom option in the Replace Values settings for that field. - prefix string Equivalent to typing a prefix into the text box in the Replace Values settings. The default prefix is the default value if nothing else has been specified. - transformation RandomFixed Determines whether the transformation parameters for a field anonymized by the Transform method will be random or fixed. - set_random_seed flag When set to True, the specified seed value will be used (if transformation is also set to Random). - random_seed integer When set_random_seed is set to True, this is the seed for the random number. -" -8D328FC36822024D739F83A36FEF66E5ABE61128,8D328FC36822024D739F83A36FEF66E5ABE61128," appendnode properties - -![Append node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/appendnodeicon.png) The Append node concatenates sets of records. It's useful for combining datasets with similar structures but different data. - - - -appendnode properties - -Table 1. appendnode properties - - appendnode properties Data type Property description - - match_by PositionName You can append datasets based on the position of fields in the main data source or the name of fields in the input datasets. - match_case flag Enables case sensitivity when matching field names. - include_fields_from MainAll -" -76EC742BC2D093C10C6A5B85456BFBB6571C416D,76EC742BC2D093C10C6A5B85456BFBB6571C416D," apriorinode properties - -![Apriori node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/apriorinodeicon.png)The Apriori node extracts a set of rules from the data, pulling out the rules with the highest information content. Apriori offers five different methods of selecting rules and uses a sophisticated indexing scheme to process large data sets efficiently. For large problems, Apriori is generally faster to train; it has no arbitrary limit on the number of rules that can be retained, and it can handle rules with up to 32 preconditions. Apriori requires that input and output fields all be categorical but delivers better performance because it'ss optimized for this type of data. - - - -apriorinode properties - -Table 1. apriorinode properties - - apriorinode Properties Values Property description - - consequents field Apriori models use Consequents and Antecedents in place of the standard target and input fields. Weight and frequency fields are not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information. - antecedents [field1 ... fieldN] - min_supp number - min_conf number - max_antecedents number - true_flags flag - optimize Speed
Memory - use_transactional_data flag - contiguous flag - id_field string - content_field string - mode SimpleExpert - evaluation RuleConfidence
DifferenceToPrior
ConfidenceRatio
InformationDifference
NormalizedChiSquare - lower_bound number -" -292C0E87B8E56B15991C954508AB125A8FB80972,292C0E87B8E56B15991C954508AB125A8FB80972," applyapriorinode properties - -You can use Apriori modeling nodes to generate an Apriori model nugget. The scripting name of this model nugget is applyapriorinode. For more information on scripting the modeling node itself, see [apriorinode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/apriorinodeslots.htmlapriorinodeslots). - - - -applyapriorinode properties - -Table 1. applyapriorinode properties - - applyapriorinode Properties Values Property description - - max_predictions number (integer) - ignore_unmatached flag - allow_repeats flag - check_basket NoPredictionsPredictionsNoCheck -" -2BCBD3D61CC24296EA38B26B10306B7F50CE4988_0,2BCBD3D61CC24296EA38B26B10306B7F50CE4988," astimeintervalsnode properties - -![Time Intervals node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/timeintervalnodeicon.png)Use the Time Intervals node to specify intervals and derive a new time field for estimating or forecasting. A full range of time intervals is supported, from seconds to years. - - - -astimeintervalsnode properties - -Table 1. astimeintervalsnode properties - - astimeintervalsnode properties Data type Property description - - time_field field Can accept only a single continuous field. That field is used by the node as the aggregation key for converting the interval. If an integer field is used here it's considered to be a time index. - dimensions [field1 field2 … fieldn] These fields are used to create individual time series based on the field values. - fields_to_aggregate [field1 field2 … fieldn] These fields are aggregated as part of changing the period of the time field. Any fields not included in this picker are filtered out of the data leaving the node. - interval_type_timestamp Years
Quarters
Months
Weeks
Days
Hours
Minutes
Seconds Specify intervals and derive a new time field for estimating or forecasting. - interval_type_time Hours
Minutes
Seconds - interval_type_date Years
Quarters
Months
Weeks
Days Time interval - interval_type_integer Periods Time interval - periods_per_interval integer Periods per interval - start_month JanuaryFebruaryMarchAprilMayJuneJulyAugustSeptemberOctoberNovemberDecember - week_begins_on Sunday Monday Tuesday Wednesday Thursday Friday Saturday - minute_interval 1 2 3 4 5 6 10 12 15 20 30 - second_interval 1 2 3 4 5 6 10 12 15 20 30 -" -2BCBD3D61CC24296EA38B26B10306B7F50CE4988_1,2BCBD3D61CC24296EA38B26B10306B7F50CE4988," agg_range_default Sum Mean Min Max Median 1stQuartile 3rdQuartile Available functions for continuous fields include Sum, Mean, Min, Max, Median, 1st Quartile, and 3rd Quartile. - agg_set_default Mode Min Max Nominal options include Mode, Min, and Max. - agg_flag_default TrueIfAnyTrue FalseIfAnyFalse Options are either True if any true or False if any false. - custom_agg array Custom settings for specified fields. -" -27963DF2327FBE202B836AC5905258D063A8770D,27963DF2327FBE202B836AC5905258D063A8770D," applyautoclassifiernode properties - -You can use Auto Classifier modeling nodes to generate an Auto Classifier model nugget. The scripting name of this model nugget is applyautoclassifiernode. For more information on scripting the modeling node itself, see [autoclassifiernode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/binaryclassifiernodeslots.htmlbinaryclassifiernodeslots). - - - -applyautoclassifiernode properties - -Table 1. applyautoclassifiernode properties - - applyautoclassifiernode Properties Values Property description - - flag_ensemble_method VotingConfidenceWeightedVotingRawPropensityWeightedVotingHighestConfidenceAverageRawPropensity Specifies the method used to determine the ensemble score. This setting applies only if the selected target is a flag field. - flag_voting_tie_selection RandomHighestConfidenceRawPropensity If a voting method is selected, specifies how ties are resolved. This setting applies only if the selected target is a flag field. -" -E399A5B6FA720C6F21337792F822F20F20F98910_0,E399A5B6FA720C6F21337792F822F20F20F98910," autoclusternode properties - -![Auto Cluster node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/autoclusternodeicon.png)The Auto Cluster node estimates and compares clustering models, which identify groups of records that have similar characteristics. The node works in the same manner as other automated modeling nodes, allowing you to experiment with multiple combinations of options in a single modeling pass. Models can be compared using basic measures with which to attempt to filter and rank the usefulness of the cluster models, and provide a measure based on the importance of particular fields. - - - -autoclusternode properties - -Table 1. autoclusternode properties - - autoclusternode Properties Values Property description - - evaluation field Note: Auto Cluster node only. Identifies the field for which an importance value will be calculated. Alternatively, can be used to identify how well the cluster differentiates the value of this field and, therefore, how well the model will predict this field. - ranking_measure SilhouetteNum_clustersSize_smallest_clusterSize_largest_clusterSmallest_to_largestImportance - ranking_dataset TrainingTest - summary_limit integer Number of models to list in the report. Specify an integer between 1 and 100. - enable_silhouette_limit flag - silhouette_limit integer Integer between 0 and 100. - enable_number_less_limit flag - number_less_limit number Real number between 0.0 and 1.0. - enable_number_greater_limit flag - number_greater_limit number Integer greater than 0. - enable_smallest_cluster_limit flag - smallest_cluster_units PercentageCounts - smallest_cluster_limit_percentage number - smallest_cluster_limit_count integer Integer greater than 0. - enable_largest_cluster_limit flag - largest_cluster_units PercentageCounts - largest_cluster_limit_percentage number - largest_cluster_limit_count integer - enable_smallest_largest_limit flag - smallest_largest_limit number - enable_importance_limit flag -" -E399A5B6FA720C6F21337792F822F20F20F98910_1,E399A5B6FA720C6F21337792F822F20F20F98910," importance_limit_condition Greater_thanLess_than - importance_limit_greater_than number Integer between 0 and 100. - importance_limit_less_than number Integer between 0 and 100. - flag Enables or disables the use of a specific algorithm. - . string Sets a property value for a specific algorithm. See [Setting algorithm properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/factorymodeling_algorithmproperties.htmlfactorymodeling_algorithmproperties) for more information. - number_of_models integer - enable_model_build_time_limit boolean (K-Means, Kohonen, TwoStep, SVM, KNN, Bayes Net and Decision List models only.)
Sets a maximum time limit for any one model. For example, if a particular model requires an unexpectedly long time to train because of some complex interaction, you probably don't want it to hold up your entire modeling run. - model_build_time_limit integer Time spent on model build. - enable_stop_after_time_limit boolean (Neural Network, K-Means, Kohonen, TwoStep, SVM, KNN, Bayes Net and C&R Tree models only.)
Stops a run after a specified number of hours. All models generated up to that point will be included in the model nugget, but no further models will be produced. -" -14416203D840C788359110B18CFD9CE922DE0D67,14416203D840C788359110B18CFD9CE922DE0D67," applyautoclusternode properties - -You can use Auto Cluster modeling nodes to generate an Auto Cluster model nugget. The scripting name of this model nugget is applyautoclusternode. No other properties exist for this model nugget. For more information on scripting the modeling node itself, see [autoclusternode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/autoclusternodeslots.htmlautoclusternodeslots). -" -3EAAFDDADE769D3B0300BE1401BB3D7E68B312DD,3EAAFDDADE769D3B0300BE1401BB3D7E68B312DD," applyautonumericnode properties - -You can use Auto Numeric modeling nodes to generate an Auto Numeric model nugget. The scripting name of this model nugget is applyautonumericnode.For more information on scripting the modeling node itself, see [autonumericnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/rangepredictornodeslots.htmlrangepredictornodeslots). - - - -applyautonumericnode properties - -Table 1. applyautonumericnode properties - - applyautonumericnode Properties Values Property description - - calculate_standard_error flag -" -D2D9F4E05CABC566B2021116ED28EF413FA96779,D2D9F4E05CABC566B2021116ED28EF413FA96779," Node properties overview - -Each type of node has its own set of legal properties, and each property has a type. This type may be a general type—number, flag, or string—in which case settings for the property are coerced to the correct type. An error is raised if they can't be coerced. Alternatively, the property reference may specify the range of legal values, such as Discard, PairAndDiscard, and IncludeAsText, in which case an error is raised if any other value is used. Flag properties should be read or set by using values of true and false. (Variations including Off, OFF, off, No, NO, no, n, N, f, F, false, False, FALSE, or 0 are also recognized when setting values, but may cause errors when reading property values in some cases. All other values are regarded as true. Using true and false consistently will avoid any confusion.) In this documentation's reference tables, the structured properties are indicated as such in the Property description column, and their usage formats are provided. -" -7A9F4CDF362D1F06C3644EDBD634B2A77DDC6005,7A9F4CDF362D1F06C3644EDBD634B2A77DDC6005," balancenode properties - -![Balance node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/balancenodeicon.png) The Balance node corrects imbalances in a dataset, so it conforms to a specified condition. The balancing directive adjusts the proportion of records where a condition is true by the factor specified. - - - -balancenode properties - -Table 1. balancenode properties - - balancenode properties Data type Property description - - directives Structured property to balance proportion of field values based on number specified. - training_data_only flag Specifies that only training data should be balanced. If no partition field is present in the stream, then this option is ignored. - - - -This node property uses the format: - -[[ number, string ] \ [ number, string] \ ... [number, string ]]. - -Note: If strings (using double quotation marks) are embedded in the expression, they must be preceded by the escape character "" "". The "" "" character is also the line continuation character, which you can use to align the arguments for clarity. -" -FE2254205E6DD1EE2A4EC62036AB86BC5E084F5D_0,FE2254205E6DD1EE2A4EC62036AB86BC5E084F5D," bayesnetnode properties - -![Bayes Net node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/bayesian_network_icon.png)With the Bayesian Network (Bayes Net) node, you can build a probability model by combining observed and recorded evidence with real-world knowledge to establish the likelihood of occurrences. The node focuses on Tree Augmented Naïve Bayes (TAN) and Markov Blanket networks that are primarily used for classification. - - - -bayesnetnode properties - -Table 1. bayesnetnode properties - - bayesnetnode Properties Values Property description - - inputs [field1 ... fieldN] Bayesian network models use a single target field, and one or more input fields. Continuous fields are automatically binned. See the topic [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information. - continue_training_existing_model flag - structure_type TANMarkovBlanket Select the structure to be used when building the Bayesian network. - use_feature_selection flag - parameter_learning_method LikelihoodBayes Specifies the method used to estimate the conditional probability tables between nodes where the values of the parents are known. - mode ExpertSimple - missing_values flag - all_probabilities flag - independence LikelihoodPearson Specifies the method used to determine whether paired observations on two variables are independent of each other. - significance_level number Specifies the cutoff value for determining independence. - maximal_conditioning_set number Sets the maximal number of conditioning variables to be used for independence testing. - inputs_always_selected [field1 ... fieldN] Specifies which fields from the dataset are always to be used when building the Bayesian network.

Note: The target field is always selected. -" -FE2254205E6DD1EE2A4EC62036AB86BC5E084F5D_1,FE2254205E6DD1EE2A4EC62036AB86BC5E084F5D," maximum_number_inputs number Specifies the maximum number of input fields to be used in building the Bayesian network. - calculate_variable_importance flag - calculate_raw_propensities flag -" -EC154AE6F7FE894644424BFA90C6CA31E13A4B71,EC154AE6F7FE894644424BFA90C6CA31E13A4B71," applybayesnetnode properties - -You can use Bayesian network modeling nodes to generate a Bayesian network model nugget. The scripting name of this model nugget is applybayesnetnode. For more information on scripting the modeling node itself, see [bayesnetnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/bayesnetnodeslots.htmlbayesnetnodeslots). - - - -applybayesnetnode properties - -Table 1. applybayesnetnode properties - - applybayesnetnode Properties Values Property description - - all_probabilities flag - raw_propensity flag - adjusted_propensity flag - calculate_raw_propensities flag -" -CDA0897D49B56EE521BF16E52014DA5E2E1D2710_0,CDA0897D49B56EE521BF16E52014DA5E2E1D2710," autoclassifiernode properties - -![Auto Classifier node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/binaryclassifiernodeicon.png)The Auto Classifier node creates and compares a number of different models for binary outcomes (yes or no, churn or do not churn, and so on), allowing you to choose the best approach for a given analysis. A number of modeling algorithms are supported, making it possible to select the methods you want to use, the specific options for each, and the criteria for comparing the results. The node generates a set of models based on the specified options and ranks the best candidates according to the criteria you specify. - - - -autoclassifiernode properties - -Table 1. autoclassifiernode properties - - autoclassifiernode Properties Values Property description - - target field For flag targets, the Auto Classifier node requires a single target and one or more input fields. Weight and frequency fields can also be specified. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information. - ranking_measure Accuracy
Area_under_curve
Profit
Lift
Num_variables - ranking_dataset Training
Test - number_of_models integer Number of models to include in the model nugget. Specify an integer between 1 and 100. - calculate_variable_importance flag - enable_accuracy_limit flag - accuracy_limit integer Integer between 0 and 100. - enable_area_under_curve_limit flag - area_under_curve_limit number Real number between 0.0 and 1.0. - enable_profit_limit flag - profit_limit number Integer greater than 0. - enable_lift_limit flag - lift_limit number Real number greater than 1.0. - enable_number_of_variables_limit flag - number_of_variables_limit number Integer greater than 0. - use_fixed_cost flag -" -CDA0897D49B56EE521BF16E52014DA5E2E1D2710_1,CDA0897D49B56EE521BF16E52014DA5E2E1D2710," fixed_cost number Real number greater than 0.0. - variable_cost field - use_fixed_revenue flag - fixed_revenue number Real number greater than 0.0. - variable_revenue field - use_fixed_weight flag - fixed_weight number Real number greater than 0.0 - variable_weight field - lift_percentile number Integer between 0 and 100. - enable_model_build_time_limit flag - model_build_time_limit number Integer set to the number of minutes to limit the time taken to build each individual model. - enable_stop_after_time_limit flag - stop_after_time_limit number Real number set to the number of hours to limit the overall elapsed time for an auto classifier run. - enable_stop_after_valid_model_produced flag - use_costs flag - flag Enables or disables the use of a specific algorithm. - . string Sets a property value for a specific algorithm. See [Setting algorithm properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/factorymodeling_algorithmproperties.htmlfactorymodeling_algorithmproperties) for more information. - use_cross_validation field Fields added to this list can take either the condition or prediction role in rules that are generated by the model. This is on a rule by rule basis, so a field might be a condition in one rule and a prediction in another. - number_of_folds integer N fold parameter for cross validation, with range from 3 to 10. - set_random_seed boolean Setting a random seed allows you to replicate analyses. Specify an integer or click Generate, which will create a pseudo-random integer between 1 and 2147483647, inclusive. By default, analyses are replicated with seed 229176228. - random_seed integer Random seed - stop_if_valid_model boolean -" -CDA0897D49B56EE521BF16E52014DA5E2E1D2710_2,CDA0897D49B56EE521BF16E52014DA5E2E1D2710," filter_individual_model_output boolean Removes from the output all of the additional fields generated by the individual models that feed into the Ensemble node. Select this option if you're interested only in the combined score from all of the input models. Ensure that this option is deselected if, for example, you want to use an Analysis node or Evaluation node to compare the accuracy of the combined score with that of each of the individual input models - set_ensemble_method ""Voting"" ""ConfidenceWeightedVoting"" ""HighestConfidence"" Ensemble method for set targets. - set_voting_tie_selection ""Random"" ""HighestConfidence"" If voting is tied, select value randomly or by using highest confidence. -" -B741FE5CDD06D606F869B15DEB2173C1F134D22D_0,B741FE5CDD06D606F869B15DEB2173C1F134D22D," binningnode properties - -![Binning node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/binningnodeicon.png)The Binning node automatically creates new nominal (set) fields based on the values of one or more existing continuous (numeric range) fields. For example, you can transform a continuous income field into a new categorical field containing groups of income as deviations from the mean. After you create bins for the new field, you can generate a Derive node based on the cut points. - - - -binningnode properties - -Table 1. binningnode properties - - binningnode properties Data type Property description - - fields [field1 field2 ... fieldn] Continuous (numeric range) fields pending transformation. You can bin multiple fields simultaneously. - method FixedWidthEqualCountRankSDevOptimal Method used for determining cut points for new field bins (categories). - recalculate_bins AlwaysIfNecessary Specifies whether the bins are recalculated and the data placed in the relevant bin every time the node is executed, or that data is added only to existing bins and any new bins that have been added. - fixed_width_name_extension string The default extension is _BIN. - fixed_width_add_as SuffixPrefix Specifies whether the extension is added to the end (suffix) of the field name or to the start (prefix). The default extension is income_BIN. - fixed_bin_method WidthCount - fixed_bin_count integer Specifies an integer used to determine the number of fixed-width bins (categories) for the new field(s). - fixed_bin_width real Value (integer or real) for calculating width of the bin. - equal_count_name_extension string The default extension is _TILE. - equal_count_add_as SuffixPrefix Specifies an extension, either suffix or prefix, used for the field name generated by using standard p-tiles. The default extension is _TILE plus N, where N is the tile number. -" -B741FE5CDD06D606F869B15DEB2173C1F134D22D_1,B741FE5CDD06D606F869B15DEB2173C1F134D22D," tile4 flag Generates four quantile bins, each containing 25% of cases. - tile5 flag Generates five quintile bins. - tile10 flag Generates 10 decile bins. - tile20 flag Generates 20 vingtile bins. - tile100 flag Generates 100 percentile bins. - use_custom_tile flag - custom_tile_name_extension string The default extension is _TILEN. - custom_tile_add_as SuffixPrefix - custom_tile integer - equal_count_method RecordCountValueSum The RecordCount method seeks to assign an equal number of records to each bin, while ValueSum assigns records so that the sum of the values in each bin is equal. - tied_values_method NextCurrentRandom Specifies which bin tied value data is to be put in. - rank_order AscendingDescending This property includes Ascending (lowest value is marked 1) or Descending (highest value is marked 1). - rank_add_as SuffixPrefix This option applies to rank, fractional rank, and percentage rank. - rank flag - rank_name_extension string The default extension is _RANK. - rank_fractional flag Ranks cases where the value of the new field equals rank divided by the sum of the weights of the nonmissing cases. Fractional ranks fall in the range of 0–1. - rank_fractional_name_extension string The default extension is _F_RANK. - rank_pct flag Each rank is divided by the number of records with valid values and multiplied by 100. Percentage fractional ranks fall in the range of 1–100. - rank_pct_name_extension string The default extension is _P_RANK. - sdev_name_extension string - sdev_add_as SuffixPrefix - sdev_count OneTwoThree - optimal_name_extension string The default extension is _OPTIMAL. - optimal_add_as SuffixPrefix - optimal_supervisor_field field Field chosen as the supervisory field to which the fields selected for binning are related. - optimal_merge_bins flag Specifies that any bins with small case counts will be added to a larger, neighboring bin. -" -B741FE5CDD06D606F869B15DEB2173C1F134D22D_2,B741FE5CDD06D606F869B15DEB2173C1F134D22D," optimal_small_bin_threshold integer - optimal_pre_bin flag Indicates that prebinning of dataset is to take place. - optimal_max_bins integer Specifies an upper limit to avoid creating an inordinately large number of bins. - optimal_lower_end_point InclusiveExclusive -" -5C95F2D19465DDA8969D0498D1B96D870BD02A1F,5C95F2D19465DDA8969D0498D1B96D870BD02A1F," c50node properties - -![C5.0 node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/c50nodeicon.png)The C5.0 node builds either a decision tree or a rule set. The model works by splitting the sample based on the field that provides the maximum information gain at each level. The target field must be categorical. Multiple splits into more than two subgroups are allowed. - - - -c50node properties - -Table 1. c50node properties - - c50node Properties Values Property description - - target field C50 models use a single target field and one or more input fields. You can also specify a weight field. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information. - output_type DecisionTreeRuleSet - group_symbolics flag - use_boost flag - boost_num_trials number - use_xval flag - xval_num_folds number - mode SimpleExpert - favor AccuracyGenerality Favor accuracy or generality. - expected_noise number - min_child_records number - pruning_severity number - use_costs flag - costs structured This is a structured property. See the example for usage. - use_winnowing flag - use_global_pruning flag On (True) by default. - calculate_variable_importance flag - calculate_raw_propensities flag -" -FCBDBFD3E4BEBEFE552FAD012509948FABA34B44,FCBDBFD3E4BEBEFE552FAD012509948FABA34B44," applyc50node properties - -You can use C5.0 modeling nodes to generate a C5.0 model nugget. The scripting name of this model nugget is applyc50node. For more information on scripting the modeling node itself, see [c50node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/c50nodeslots.htmlc50nodeslots). - - - -applyc50node properties - -Table 1. applyc50node properties - - applyc50node Properties Values Property description - - sql_generate udfNeverNoMissingValues Used to set SQL generation options during rule set execution. The default value is udf. - calculate_conf flag Available when SQL generation is enabled; this property includes confidence calculations in the generated tree. -" -499553788712E55ABE1345C61CCDB15D1CE04E83,499553788712E55ABE1345C61CCDB15D1CE04E83," carmanode properties - -![C5.0 node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/carmanodeicon.png)The CARMA model extracts a set of rules from the data without requiring you to specify input or target fields. In contrast to Apriori, the CARMA node offers build settings for rule support (support for both antecedent and consequent) rather than just antecedent support. This means that the rules generated can be used for a wider variety of applications—for example, to find a list of products or services (antecedents) whose consequent is the item that you want to promote this holiday season. - - - -carmanode properties - -Table 1. carmanode properties - - carmanode Properties Values Property description - - inputs [field1 ... fieldn] CARMA models use a list of input fields, but no target. Weight and frequency fields are not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information. - id_field field Field used as the ID field for model building. - contiguous flag Used to specify whether IDs in the ID field are contiguous. - use_transactional_data flag - content_field field - min_supp number(percent) Relates to rule support rather than antecedent support. The default is 20%. - min_conf number(percent) The default is 20%. - max_size number The default is 10. - mode SimpleExpert The default is Simple. - exclude_multiple flag Excludes rules with multiple consequents. The default is False. - use_pruning flag The default is False. - pruning_value number The default is 500. - vary_support flag -" -CE14B5EFF03A17683C6AA16D02F62E1EBAD0D7F2,CE14B5EFF03A17683C6AA16D02F62E1EBAD0D7F2," applycarmanode properties - -You can use Carma modeling nodes to generate a Carma model nugget. The scripting name of this model nugget is applycarmanode. For more information on scripting the modeling node itself, see [carmanode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/carmanodeslots.htmlcarmanodeslots). - - - -applycarmanode properties - -Table 1. applycarmanode properties - - applycarmanode Properties Values Property description - - enable_sql_generation udfnative When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations. -" -CB130D4E1AE505CE39CBD49BF9D22359B9EC80AB_0,CB130D4E1AE505CE39CBD49BF9D22359B9EC80AB," cartnode properties - -![C&R Tree node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/cartnodeicon.png)The Classification and Regression (C&R) Tree node generates a decision tree that allows you to predict or classify future observations. The method uses recursive partitioning to split the training records into segments by minimizing the impurity at each step, where a node in the tree is considered ""pure"" if 100% of cases in the node fall into a specific category of the target field. Target and input fields can be numeric ranges or categorical (nominal, ordinal, or flags); all splits are binary (only two subgroups). - - - -cartnode properties - -Table 1. cartnode properties - - cartnode Properties Values Property description - - target field C&R Tree models require a single target and one or more input fields. A frequency field can also be specified. See the topic [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information. - continue_training_existing_model flag - objective StandardBoostingBaggingpsm psm is used for very large datasets, and requires a Server connection. - model_output_type SingleInteractiveBuilder - use_tree_directives flag - tree_directives string Specify directives for growing the tree. Directives can be wrapped in triple quotes to avoid escaping newlines or quotes. Note that directives may be highly sensitive to minor changes in data or modeling options and may not generalize to other datasets. - use_max_depth DefaultCustom - max_depth integer Maximum tree depth, from 0 to 1000. Used only if use_max_depth = Custom. - prune_tree flag Prune tree to avoid overfitting. - use_std_err flag Use maximum difference in risk (in Standard Errors). - std_err_multiplier number Maximum difference. -" -CB130D4E1AE505CE39CBD49BF9D22359B9EC80AB_1,CB130D4E1AE505CE39CBD49BF9D22359B9EC80AB," max_surrogates number Maximum surrogates. - use_percentage flag - min_parent_records_pc number - min_child_records_pc number - min_parent_records_abs number - min_child_records_abs number - use_costs flag - costs structured Structured property. - priors DataEqualCustom - custom_priors structured Structured property. - adjust_priors flag - trails number Number of component models for boosting or bagging. - set_ensemble_method VotingHighestProbabilityHighestMeanProbability Default combining rule for categorical targets. - range_ensemble_method MeanMedian Default combining rule for continuous targets. - large_boost flag Apply boosting to very large data sets. - min_impurity number - impurity_measure GiniTwoingOrdered - train_pct number Overfit prevention set. - set_random_seed flag Replicate results option. - seed number - calculate_variable_importance flag - calculate_raw_propensities flag -" -C53BD428F2955B76BF24620A21A6461A1CC19F11,C53BD428F2955B76BF24620A21A6461A1CC19F11," applycartnode properties - -You can use C&R Tree modeling nodes to generate a C&R Tree model nugget. The scripting name of this model nugget is applycartnode. For more information on scripting the modeling node itself, see [cartnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/cartnodeslots.htmlcartnodeslots). - - - -applycartnode properties - -Table 1. applycartnode properties - - applycartnode Properties Values Property description - - calculate_conf flag Available when SQL generation is enabled; this property includes confidence calculations in the generated tree. - display_rule_id flag Adds a field in the scoring output that indicates the ID for the terminal node to which each record is assigned. - calculate_raw_propensities flag -" -B0B1665F022C9E781CE1AE94FA885266391FBCFE_0,B0B1665F022C9E781CE1AE94FA885266391FBCFE," chaidnode properties - -![CHAID node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/chaidnodeicon.png)The CHAID node generates decision trees using chi-square statistics to identify optimal splits. Unlike the C&R Tree and Quest nodes, CHAID can generate non-binary trees, meaning that some splits have more than two branches. Target and input fields can be numeric range (continuous) or categorical. Exhaustive CHAID is a modification of CHAID that does a more thorough job of examining all possible splits but takes longer to compute. - - - -chaidnode properties - -Table 1. chaidnode properties - - chaidnode Properties Values Property description - - target field CHAID models require a single target and one or more input fields. You can also specify a frequency. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information. - continue_training_existing_model flag - objective Standard
Boosting
Bagging
psm psm is used for very large datasets, and requires a server connection. - model_output_type Single
InteractiveBuilder - use_tree_directives flag - tree_directives string - method Chaid
ExhaustiveChaid - use_max_depth Default
Custom - max_depth integer Maximum tree depth, from 0 to 1000. Used only if use_max_depth = Custom. - use_percentage flag - min_parent_records_pc number - min_child_records_pc number - min_parent_records_abs number - min_child_records_abs number - use_costs flag - costs structured Structured property. - trails number Number of component models for boosting or bagging. -" -B0B1665F022C9E781CE1AE94FA885266391FBCFE_1,B0B1665F022C9E781CE1AE94FA885266391FBCFE," set_ensemble_method Voting
HighestProbability
HighestMeanProbability Default combining rule for categorical targets. - range_ensemble_method Mean
Median Default combining rule for continuous targets. - large_boost flag Apply boosting to very large data sets. - split_alpha number Significance level for splitting. - merge_alpha number Significance level for merging. - bonferroni_adjustment flag Adjust significance values using Bonferroni method. - split_merged_categories flag Allow resplitting of merged categories. - chi_square Pearson
LR Method used to calculate the chi-square statistic: Pearson or Likelihood Ratio - epsilon number Minimum change in expected cell frequencies.. - max_iterations number Maximum iterations for convergence. - set_random_seed integer - seed number - calculate_variable_importance flag - calculate_raw_propensities flag - calculate_adjusted_propensities flag - adjusted_propensity_partition Test
Validation -" -6644EAA4A383F7ED21C0CA1ADAE80A634867870A,6644EAA4A383F7ED21C0CA1ADAE80A634867870A," applychaidnode properties - -You can use CHAID modeling nodes to generate a CHAID model nugget. The scripting name of this model nugget is applychaidnode. For more information on scripting the modeling node itself, see [chaidnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/chaidnodeslots.htmlchaidnodeslots). - - - -applychaidnode properties - -Table 1. applychaidnode properties - - applychaidnode Properties Values Property description - - calculate_conf flag - display_rule_id flag Adds a field in the scoring output that indicates the ID for the terminal node to which each record is assigned. - calculate_raw_propensities flag -" -FD45693344E2B3CC3BDB7D1AA209AD9FBACB5309,FD45693344E2B3CC3BDB7D1AA209AD9FBACB5309," dvcharts properties - -![Charts node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/chartsnodeicon.png)With the Charts node, you can launch the chart builder and create chart definitions to save with your flow. Then when you run the node, chart output is generated. - - - -dvcharts properties - -Table 1. dvcharts properties - - dvcharts properties Data type Property description - - chart_definition list List of chart definitions, including chart type (string), chart name (string), chart template (string), and used fields (list of field names), -" -F24C445F7AB9052A92E411B826C60DEE2DF78448,F24C445F7AB9052A92E411B826C60DEE2DF78448," collectionnode properties - -![Collection node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/collectionnodeicon.png)The Collection node shows the distribution of values for one numeric field relative to the values of another. (It creates graphs that are similar to histograms.) It's useful for illustrating a variable or field whose values change over time. Using 3-D graphing, you can also include a symbolic axis displaying distributions by category. - - - -collectionnode properties - -Table 1. collectionnode properties - - collectionnode properties Data type Property description - - over_field field - over_label_auto flag - over_label string - collect_field field - collect_label_auto flag - collect_label string - three_D flag - by_field field - by_label_auto flag - by_label string - operation SumMeanMinMaxSDev - color_field string - panel_field string - animation_field string - range_mode AutomaticUserDefined - range_min number - range_max number - bins ByNumberByWidth - num_bins number - bin_width number - use_grid flag -" -F1B21B1232720492424BB07CD73C93DF2B9CD229,F1B21B1232720492424BB07CD73C93DF2B9CD229," coxregnode properties - -![Cox node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/cox_reg_node_icon.png)The Cox regression node enables you to build a survival model for time-to-event data in the presence of censored records. The model produces a survival function that predicts the probability that the event of interest has occurred at a given time (t) for given values of the input variables. - - - -coxregnode properties - -Table 1. coxregnode properties - - coxregnode Properties Values Property description - - survival_time field Cox regression models require a single field containing the survival times. - target field Cox regression models require a single target field, and one or more input fields. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information. - method Enter
Stepwise
BackwardsStepwise - groups field - model_type MainEffects
Custom - custom_terms [BP*Sex"" ""BP*Age] - mode Expert
Simple - max_iterations number - p_converge 1.0E-4
1.0E-5
1.0E-6
1.0E-7
1.0E-8
0 - l_converge 1.0E-1
1.0E-2
1.0E-3
1.0E-4
1.0E-5
0 - removal_criterion LR
Wald
Conditional - probability_entry number - probability_removal number - output_display EachStep
LastStep - ci_enable flag - ci_value 90
95
99 - correlation flag - display_baseline flag - survival flag - hazard flag - log_minus_log flag - one_minus_survival flag -" -CEBDC984A6E14E7DC6B7526324BF06A0CE6FFE34,CEBDC984A6E14E7DC6B7526324BF06A0CE6FFE34," applycoxregnode properties - -You can use Cox modeling nodes to generate a Cox model nugget. The scripting name of this model nugget is applycoxregnode. For more information on scripting the modeling node itself, see [coxregnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/coxregnodeslots.htmlcoxregnodeslots). - - - -applycoxregnode properties - -Table 1. applycoxregnode properties - - applycoxregnode Properties Values Property description - - future_time_as IntervalsFields - time_interval number - num_future_times integer - time_field field - past_survival_time field - all_probabilities flag -" -7566F3896A5AC6F89F4E7E18DC21B4A6A63864B4_0,7566F3896A5AC6F89F4E7E18DC21B4A6A63864B4," cplexoptnode properties - -![CPLEX Optimization node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/cplexnodeicon.png) The CPLEX Optimization node provides the ability to use complex mathematical (CPLEX) based optimization via an Optimization Programming Language (OPL) model file. - - - -cplexoptnode properties - -Table 1. cplexoptnode properties - - cplexoptnode properties Data type Property description - - opl_model_text string The OPL (Optimization Programming Language) script program that the CPLEX Optimization node will run and then generate the optimization result. - opl_tuple_set_name string The tuple set name in the OPL model that corresponds to the incoming data. This isn't required and is normally not set via script. It should only be used for editing field mappings of a selected data source. - data_input_map List of structured properties The input field mappings for a data source. This isn't required and is normally not set via script. It should only be used for editing field mappings of a selected data source. -" -7566F3896A5AC6F89F4E7E18DC21B4A6A63864B4_1,7566F3896A5AC6F89F4E7E18DC21B4A6A63864B4," md_data_input_map List of structured properties The field mappings between each tuple defined in the OPL, with each corresponding field data source (incoming data). Users can edit them each individually per data source. With this script, you can set the property directly to set all mappings at once. This setting isn't shown in the user interface.

Each entity in the list is structured data:

Data Source Tag. The tag of the data source. For example, for 0_Products_Type the tag is 0.

Data Source Index. The physical sequence (index) of the data source. This is determined by the connection order.

Source Node. The source node (annotation) of the data source. For example, for 0_Products_Type the source node is Products.

Connected Node. The prior node (annotation) that connects the current CPLEX optimization node. For example, for 0_Products_Type the connected node is Type.

Tuple Set Name. The tuple set name of the data source. It must match what's defined in the OPL.

Tuple Field Name. The tuple set field name of the data source. It must match what's defined in the OPL tuple set definition.

Storage Type. The field storage type. Possible values are int, float, or string. -" -7566F3896A5AC6F89F4E7E18DC21B4A6A63864B4_2,7566F3896A5AC6F89F4E7E18DC21B4A6A63864B4," Data Field Name. The field name of the data source.

Example:

[0,0,'Product','Type','Products','prod_id_tup','int','prod_id'], 0,0,'Product','Type','Products','prod_name_tup','string', 'prod_name'],1,1,'Components','Type','Components', 'comp_id_tup','int','comp_id'],1,1,'Components','Type', 'Components','comp_name_tup','string','comp_name']] - opl_data_text string The definition of some variables or data used for the OPL. - output_value_mode string Possible values are raw or dvar. If dvar is specified, on the Output tab the user must specify the object function variable name in OPL for the output. If raw is specified, the objective function will be output directly, regardless of name. - decision_variable_name string The objective function variable name in defined in the OPL. This is enabled only when the output_value_mode property is set to dvar. - objective_function_value_fieldname string The field name for the objective function value to use in the output. Default is _OBJECTIVE. -" -02D819D225558542A49AB6E43F94FE062A509EA5,02D819D225558542A49AB6E43F94FE062A509EA5," dataassetexport properties - -![Data Asset Export node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/dataassetexportnode.png)You can use the Data Asset Export node to write to remove data sources using connections, write to a data file on your local computer, or write data to a project. - - - -dataassetexport properties - -Table 1. dataassetexport properties - - dataassetexport properties Data type Property description - - user_settings string Escaped JSON string containing the interaction properties for the connection. Contact IBM for details about available interaction points.

Example:

user_settings: ""{""interactionProperties"":{""write_mode"":""write"",""file_name"":""output.csv"",""file_format"":""csv"",""quote_numerics"":true,""encoding"":""utf-8"",""first_line_header"":true,""include_types"":false}}""

Note that these values will change based on the type of connection you're using. -" -46915AFE957CA00C5B825C5F2BDC618BFEA43DE8,46915AFE957CA00C5B825C5F2BDC618BFEA43DE8," dataassetimport properties - -![Data Asset Import node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/dataassetnodeicon.png) You can use the Data Asset import node to pull in data from remote data sources using connections or from your local computer. - - - -dataassetimport properties - -Table 1. dataassetimport properties - - dataassetimport properties Data type Property description - - connection_path string Name of the data asset (table) you want to access from a selected connection. The value of this property is: /asset_name or /schema_name/table_name. -" -CCDF1D5375060FCDE288A920A6F3C1B48454C6DB_0,CCDF1D5375060FCDE288A920A6F3C1B48454C6DB," dataauditnode properties - -![Data Audit node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/dataauditnodeicon.png)The Data Audit node provides a comprehensive first look at the data, including summary statistics, histograms and distribution for each field, as well as information on outliers, missing values, and extremes. Results are displayed in an easy-to-read matrix that can be sorted and used to generate full-size graphs and data preparation nodes. - - - -dataauditnode properties - -Table 1. dataauditnode properties - - dataauditnode properties Data type Property description - - custom_fields flag - fields [field1 … fieldN] - overlay field - display_graphs flag Used to turn the display of graphs in the output matrix on or off. - basic_stats flag - advanced_stats flag - median_stats flag - calculate CountBreakdown Used to calculate missing values. Select either, both, or neither calculation method. - outlier_detection_method stdiqr Used to specify the detection method for outliers and extreme values. - outlier_detection_std_outlier number If outlier_detection_method is std, specifies the number to use to define outliers. - outlier_detection_std_extreme number If outlier_detection_method is std, specifies the number to use to define extreme values. - outlier_detection_iqr_outlier number If outlier_detection_method is iqr, specifies the number to use to define outliers. - outlier_detection_iqr_extreme number If outlier_detection_method is iqr, specifies the number to use to define extreme values. - use_output_name flag Specifies whether a custom output name is used. - output_name string If use_output_name is true, specifies the name to use. - output_mode ScreenFile Used to specify target location for output generated from the output node. -" -CCDF1D5375060FCDE288A920A6F3C1B48454C6DB_1,CCDF1D5375060FCDE288A920A6F3C1B48454C6DB," output_format Formatted (.tab) Delimited (.csv) HTML (.html) Output (.cou) Used to specify the type of output. - paginate_output flag When the output_format is HTML, causes the output to be separated into pages. -" -DAFB63017668C5DD34A07A1850CE9E9A37D0F525_0,DAFB63017668C5DD34A07A1850CE9E9A37D0F525," decisionlistnode properties - -![Decision List node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/decisionlistnodeicon.png)The Decision List node identifies subgroups, or segments, that show a higher or lower likelihood of a given binary outcome relative to the overall population. For example, you might look for customers who are unlikely to churn or are most likely to respond favorably to a campaign. You can incorporate your business knowledge into the model by adding your own custom segments and previewing alternative models side by side to compare the results. Decision List models consist of a list of rules in which each rule has a condition and an outcome. Rules are applied in order, and the first rule that matches determines the outcome. - - - -decisionlistnode properties - -Table 1. decisionlistnode properties - - decisionlistnode Properties Values Property description - - target field Decision List models use a single target and one or more input fields. A frequency field can also be specified. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information. - model_output_type ModelInteractiveBuilder - search_direction UpDown Relates to finding segments; where Up is the equivalent of High Probability, and Down is the equivalent of Low Probability. - target_value string If not specified, will assume true value for flags. - max_rules integer The maximum number of segments excluding the remainder. - min_group_size integer Minimum segment size. - min_group_size_pct number Minimum segment size as a percentage. - confidence_level number Minimum threshold that an input field has to improve the likelihood of response (give lift), to make it worth adding to a segment definition. - max_segments_per_rule integer - mode SimpleExpert - bin_method EqualWidthEqualCount - bin_count number - max_models_per_cycle integer Search width for lists. -" -DAFB63017668C5DD34A07A1850CE9E9A37D0F525_1,DAFB63017668C5DD34A07A1850CE9E9A37D0F525," max_rules_per_cycle integer Search width for segment rules. - segment_growth number - include_missing flag - final_results_only flag - reuse_fields flag Allows attributes (input fields which appear in rules) to be re-used. - max_alternatives integer - calculate_raw_propensities flag -" -082349F7C1E486D18BCA3BB7569D4DE25A8E81A7,082349F7C1E486D18BCA3BB7569D4DE25A8E81A7," applydecisionlistnode properties - -You can use Decision List modeling nodes to generate a Decision List model nugget. The scripting name of this model nugget is applydecisionlistnode. For more information on scripting the modeling node itself, see [decisionlistnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/decisionlistnodeslots.htmldecisionlistnodeslots). - - - -applydecisionlistnode properties - -Table 1. applydecisionlistnode properties - - applydecisionlistnode Properties Values Property description - - enable_sql_generation flag When true, SPSS Modeler will try to push back the Decision List model to SQL. - calculate_raw_propensities flag -" -CA6F118DBE9A1782053FE1F5F4697DDA07A7A365_0,CA6F118DBE9A1782053FE1F5F4697DDA07A7A365," Flow properties - -You can control a variety of flow properties with scripting. To reference flow properties, you must set the execution method to use scripts: - -stream = modeler.script.stream() -stream.setPropertyValue(""execute_method"", ""Script"") - -The previous example uses the node property to create a list of all nodes in the flow and write that list in the flow annotations. The annotation produced looks like this: - -This flow is called ""druglearn"" and contains the following nodes: - -type node called ""Define Types"" -derive node called ""Na_to_K"" -variablefile node called ""DRUG1n"" -neuralnetwork node called ""Drug"" -c50 node called ""Drug"" -filter node called ""Discard Fields"" - -Flow properties are described in the following table. - - - -Flow properties - -Table 1. Flow properties - - Property name Data type Property description - - execute_method Normal
Script - date_format ""DDMMYY"" ""MMDDYY"" ""YYMMDD"" ""YYYYMMDD"" ""YYYYDDD"" DAY MONTH ""DD-MM-YY"" ""DD-MM-YYYY"" ""MM-DD-YY"" ""MM-DD-YYYY"" ""DD-MON-YY"" ""DD-MON-YYYY"" ""YYYY-MM-DD"" ""DD.MM.YY"" ""DD.MM.YYYY"" ""MM.DD.YYYY"" ""DD.MON.YY"" ""DD.MON.YYYY"" ""DD/MM/YY"" ""DD/MM/YYYY"" ""MM/DD/YY"" ""MM/DD/YYYY"" ""DD/MON/YY"" ""DD/MON/YYYY"" MON YYYY q Q YYYY ww WK YYYY - date_baseline number - date_2digit_baseline number -" -CA6F118DBE9A1782053FE1F5F4697DDA07A7A365_1,CA6F118DBE9A1782053FE1F5F4697DDA07A7A365," time_format ""HHMMSS"" ""HHMM"" ""MMSS"" ""HH:MM:SS"" ""HH:MM"" ""MM:SS"" ""(H)H:(M)M:(S)S"" ""(H)H:(M)M"" ""(M)M:(S)S"" ""HH.MM.SS"" ""HH.MM"" ""MM.SS"" ""(H)H.(M)M.(S)S"" ""(H)H.(M)M"" ""(M)M.(S)S"" - time_rollover flag - import_datetime_as_string flag - decimal_places number - decimal_symbol Default
Period
Comma - angles_in_radians flag - use_max_set_size flag - max_set_size number - ruleset_evaluation Voting
FirstHit - refresh_source_nodes flag Use to refresh import nodes automatically upon flow execution. - script string - annotation string - name string This property is read-only. If you want to change the name of a flow, you should save it with a different name. - parameters Use this property to update flow parameters from within a stand-alone script. - nodes See detailed information that follows. - encoding SystemDefault
""UTF-8"" - stream_rewriting boolean - stream_rewriting_maximise_sql boolean - stream_rewriting_optimise_clem_ execution boolean - stream_rewriting_optimise_syntax_ execution boolean - enable_parallelism boolean - sql_generation boolean - database_caching boolean - sql_logging boolean - sql_generation_logging boolean - sql_log_native boolean - sql_log_prettyprint boolean - record_count_suppress_input boolean - record_count_feedback_interval integer - use_stream_auto_create_node_ settings boolean If true, then flow-specific settings are used, otherwise user preferences are used. -" -CA6F118DBE9A1782053FE1F5F4697DDA07A7A365_2,CA6F118DBE9A1782053FE1F5F4697DDA07A7A365," create_model_applier_for_new_ models boolean If true, when a model builder creates a new model, and it has no active update links, a new model applier is added. - create_model_applier_update_links createEnabled

createDisabled

doNotCreate Defines the type of link created when a model applier node is added automatically. - create_source_node_from_builders boolean If true, when a source builder creates a new source output, and it has no active update links, a new import node is added. - create_source_node_update_links createEnabled

createDisabled

doNotCreate Defines the type of link created when an import node is added automatically. - has_coordinate_system boolean If true, applies a coordinate system to the entire flow. - coordinate_system string The name of the selected projected coordinate system. - deployment_area modelRefresh

Scoring

None Choose how you want to deploy the flow. If this value is set to None, no other deployment entries are used. - scoring_terminal_node_id string Choose the scoring branch in the flow. It can be any terminal node in the flow. -" -ABD445CE46B0329348E6AD464735BDB1D525EDAA,ABD445CE46B0329348E6AD464735BDB1D525EDAA," SuperNode properties - -The tables in this section describe properties that are specific to SuperNodes. Note that common node properties also apply to SuperNodes. - - - -Terminal supernode properties - -Table 1. Terminal supernode properties - - Property name Property type/List of values Property description - -" -84573D3FDA739326819C7303EA21DB6DDF2ACC21_0,84573D3FDA739326819C7303EA21DB6DDF2ACC21," derivenode properties - -![Derive node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/derive_node_icon.png)The Derive node modifies data values or creates new fields from one or more existing fields. It creates fields of type formula, flag, nominal, state, count, and conditional. - - - -derivenode properties - -Table 1. derivenode properties - - derivenode properties Data type Property description - - new_name string Name of new field. - mode SingleMultiple Specifies single or multiple fields. - fields list Used in Multiple mode only to select multiple fields. - name_extension string Specifies the extension for the new field name(s). - add_as SuffixPrefix Adds the extension as a prefix (at the beginning) or as a suffix (at the end) of the field name. - result_type FormulaFlagSetStateCountConditional The six types of new fields that you can create. - formula_expr string Expression for calculating a new field value in a Derive node. - flag_expr string - flag_true string - flag_false string - set_default string - set_value_cond string Structured to supply the condition associated with a given value. - state_on_val string Specifies the value for the new field when the On condition is met. - state_off_val string Specifies the value for the new field when the Off condition is met. - state_on_expression string - state_off_expression string - state_initial OnOff Assigns each record of the new field an initial value of On or Off. This value can change as each condition is met. - count_initial_val string - count_inc_condition string - count_inc_expression string - count_reset_condition string - cond_if_cond string - cond_then_expr string - cond_else_expr string -" -84573D3FDA739326819C7303EA21DB6DDF2ACC21_1,84573D3FDA739326819C7303EA21DB6DDF2ACC21," formula_measure_type Range / MeasureType.RANGEDiscrete / MeasureType.DISCRETEFlag / MeasureType.FLAGSet / MeasureType.SETOrderedSet / MeasureType.ORDERED_SETTypeless / MeasureType.TYPELESSCollection / MeasureType.COLLECTIONGeospatial / MeasureType.GEOSPATIAL This property can be used to define the measurement associated with the derived field. The setter function can be passed either a string or one of the MeasureType values. The getter will always return on the MeasureType values. - collection_measure Range / MeasureType.RANGEFlag / MeasureType.FLAGSet / MeasureType.SETOrderedSet / MeasureType.ORDERED_SETTypeless / MeasureType.TYPELESS For collection fields (lists with a depth of 0), this property defines the measurement type associated with the underlying values. - geo_type PointMultiPointLineStringMultiLineStringPolygonMultiPolygon For geospatial fields, this property defines the type of geospatial object represented by this field. This should be consistent with the list depth of the values -" -16048584B029B9BE5DA50D7F9D9AE85FFE740718_0,16048584B029B9BE5DA50D7F9D9AE85FFE740718," discriminantnode properties - -![Discriminant node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/discriminantnodeicon.png)Discriminant analysis makes more stringent assumptions than logistic regression, but can be a valuable alternative or supplement to a logistic regression analysis when those assumptions are met. - - - -discriminantnode properties - -Table 1. discriminantnode properties - - discriminantnode Properties Values Property description - - target field Discriminant models require a single target field and one or more input fields. Weight and frequency fields aren't used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information. - method Enter
Stepwise - mode Simple
Expert - prior_probabilities AllEqual
ComputeFromSizes - covariance_matrix WithinGroups
SeparateGroups - means flag Statistics options in the node properties under Expert Options. - univariate_anovas flag - box_m flag - within_group_covariance flag - within_groups_correlation flag - separate_groups_covariance flag - total_covariance flag - fishers flag - unstandardized flag - casewise_results flag Classification options in the node properties under Expert Options. - limit_to_first number Default value is 10. - summary_table flag - leave_one_classification flag - separate_groups_covariance flag Matrices option Separate-groups covariance. - territorial_map flag - combined_groups flag Plot option Combined-groups. - separate_groups flag Plot option Separate-groups. - summary_of_steps flag - F_pairwise flag - stepwise_method WilksLambda
UnexplainedVariance
MahalanobisDistance
SmallestF
RaosV - V_to_enter number - criteria UseValue
UseProbability - F_value_entry number Default value is 3.84. -" -16048584B029B9BE5DA50D7F9D9AE85FFE740718_1,16048584B029B9BE5DA50D7F9D9AE85FFE740718," F_value_removal number Default value is 2.71. - probability_entry number Default value is 0.05. - probability_removal number Default value is 0.10. - calculate_variable_importance flag - calculate_raw_propensities flag -" -2C1E91540BD58780F781F8A06E2B5C62035CA84B,2C1E91540BD58780F781F8A06E2B5C62035CA84B," applydiscriminantnode properties - -You can use Discriminant modeling nodes to generate a Discriminant model nugget. The scripting name of this model nugget is applydiscriminantnode. For more information on scripting the modeling node itself, see [discriminantnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/discriminantnodeslots.htmldiscriminantnodeslots). - - - -applydiscriminantnode properties - -Table 1. applydiscriminantnode properties - - applydiscriminantnode Properties Values Property description - - calculate_raw_propensities flag -" -BAD5210D0F8114CD4E9B1DB05EB92F0EABC6E233,BAD5210D0F8114CD4E9B1DB05EB92F0EABC6E233," distinctnode properties - -![Distinct node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/distinctnodeicon.png) The Distinct node removes duplicate records, either by passing the first distinct record to the data flow or by discarding the first record and passing any duplicates to the data flow instead. - -Example - -node = stream.create(""distinct"", ""My node"") -node.setPropertyValue(""mode"", ""Include"") -node.setPropertyValue(""fields"", [""Age"" ""Sex""]) -node.setPropertyValue(""keys_pre_sorted"", True) - - - -distinctnode properties - -Table 1. distinctnode properties - - distinctnode properties Data type Property description - - mode Include
Discard You can include the first distinct record in the data stream, or discard the first distinct record and pass any duplicate records to the data stream instead. - composite_value Structured slot See example below. - composite_values Structured slot See example below. - inc_record_count flag Creates an extra field that specifies how many input records were aggregated to form each aggregate record. - count_field string Specifies the name of the record count field. - default_ascending flag - low_distinct_key_count flag Specifies that you have only a small number of records and/or a small number of unique values of the key field(s). - keys_pre_sorted flag Specifies that all records with the same key values are grouped together in the input. - disable_sql_generation flag - grouping_fields array Lists the field or fields used to determine whether records are identical. - sort_keys array Lists the fields used to determine how records are sorted within each group of duplicates, and whether they're sorted in ascending or descending order. You must specify a sort order if you've chosen to include or exclude the first record in each group, and if it matters to you which record is treated as the first. -" -DCB8FB91999D79190F3E5D54DE32B1B7F1401779,DCB8FB91999D79190F3E5D54DE32B1B7F1401779," distributionnode properties - -![Distribution node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/distributionnodeicon.png)The Distribution node shows the occurrence of symbolic (categorical) values, such as mortgage type or gender. Typically, you might use the Distribution node to show imbalances in the data, which you could then rectify using a Balance node before creating a model. - - - -distributionnode properties - -Table 1. distributionnode properties - - distributionnode properties Data type Property description - - plot SelectedFieldsFlags - x_field field - color_field field Overlay field. - normalize flag - sort_mode ByOccurenceAlphabetic -" -5DCC543A106EC708FF97817AA0CFDEF8CB89894D,5DCC543A106EC708FF97817AA0CFDEF8CB89894D," ensemblenode properties - -![Ensemble node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/ensemblenodeicon.png)The Ensemble node combines two or more model nuggets to obtain more accurate predictions than can be gained from any one model. - - - -ensemblenode properties - -Table 1. ensemblenode properties - - ensemblenode properties Data type Property description - - ensemble_target_field field Specifies the target field for all models used in the ensemble. - filter_individual_model_output flag Specifies whether scoring results from individual models should be suppressed. - flag_ensemble_method VotingConfidenceWeightedVotingRawPropensityWeightedVotingAdjustedPropensityWeightedVotingHighestConfidenceAverageRawPropensityAverageAdjustedPropensity Specifies the method used to determine the ensemble score. This setting applies only if the selected target is a flag field. - set_ensemble_method VotingConfidenceWeightedVotingHighestConfidence Specifies the method used to determine the ensemble score. This setting applies only if the selected target is a nominal field. - flag_voting_tie_selection RandomHighestConfidenceRawPropensityAdjustedPropensity If a voting method is selected, specifies how ties are resolved. This setting applies only if the selected target is a flag field. -" -98B447B5AF1CD17524E2BA82FED83B8966DDFEFB_0,98B447B5AF1CD17524E2BA82FED83B8966DDFEFB," evaluationnode properties - -![Evaluation node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/evaluationnodeicon.png)The Evaluation node helps to evaluate and compare predictive models. The evaluation chart shows how well models predict particular outcomes. It sorts records based on the predicted value and confidence of the prediction. It splits the records into groups of equal size (quantiles) and then plots the value of the business criterion for each quantile from highest to lowest. Multiple models are shown as separate lines in the plot. - - - -evaluationnode properties - -Table 1. evaluationnode properties - - evaluationnode properties Data type Property description - - chart_type Gains
Response
Lift
Profit
ROI
ROC - inc_baseline flag - field_detection_method Metadata
Name - use_fixed_cost flag - cost_value number - cost_field string - use_fixed_revenue flag - revenue_value number - revenue_field string - use_fixed_weight flag - weight_value number - weight_field field - n_tile Quartiles
Quintles
Deciles
Vingtiles
Percentiles
1000-tiles - cumulative flag - style Line
Point - point_type Rectangle
Dot
Triangle
Hexagon
Plus
Pentagon
Star
BowTie
HorizontalDash
VerticalDash
IronCross
Factory
House
Cathedral
OnionDome
ConcaveTriangleOblateGlobe
CatEye
FourSidedPillow
RoundRectangle
Fan - export_data flag - data_filename string - delimiter string - new_line flag - inc_field_names flag - inc_best_line flag - inc_business_rule flag - business_rule_condition string - plot_score_fields flag - score_fields [field1 ... fieldN] - target_field field -" -98B447B5AF1CD17524E2BA82FED83B8966DDFEFB_1,98B447B5AF1CD17524E2BA82FED83B8966DDFEFB," use_hit_condition flag - hit_condition string - use_score_expression flag - score_expression string - caption_auto flag - split_by_partition boolean If a partition field is used to split records into training, test, and validation samples, use this option to display a separate evaluation chart for each partition. -" -6CB2797AB2EF876F05A39F4CEE08EEE4249716D8,6CB2797AB2EF876F05A39F4CEE08EEE4249716D8," Flow script example: Training a neural net - -You can use a flow to train a neural network model when executed. Normally, to test the model, you might run the modeling node to add the model to the flow, make the appropriate connections, and run an Analysis node. - -Using an SPSS Modeler script, you can automate the process of testing the model nugget after you create it. Following is an example: - -stream = modeler.script.stream() -neuralnetnode = stream.findByType(""neuralnetwork"", None) -results = [] -neuralnetnode.run(results) -appliernode = stream.createModelApplierAt(results[0], ""Drug"", 594, 187) -analysisnode = stream.createAt(""analysis"", ""Drug"", 688, 187) -typenode = stream.findByType(""type"", None) -stream.linkBetween(appliernode, typenode, analysisnode) -analysisnode.run([]) - -The following bullets describe each line in this script example. - - - -* The first line defines a variable that points to the current flow -* In line 2, the script finds the Neural Net builder node -* In line 3, the script creates a list where the execution results can be stored -* In line 4, the Neural Net model nugget is created. This is stored in the list defined on line 3. -* In line 5, a model apply node is created for the model nugget and placed on the flow canvas -* In line 6, an analysis node called Drug is created -* In line 7, the script finds the Type node -* In line 8, the script connects the model apply node created in line 5 between the Type node and the Analysis node -* Finally, the Analysis node runs to produce the Analysis report - - - -It's possible to use a script to build and run a flow from scratch, starting with a blank canvas. To learn more about the scripting language in general, see [Scripting overview](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/using_scripting.html). -" -123987D173C0DB88D8E1F59AF46A8D9313A8E601,123987D173C0DB88D8E1F59AF46A8D9313A8E601," extensionexportnode properties - -![Extension Export node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/extensionexportnode.png)With the Extension Export node, you can run R or Python for Spark scripts to export data. - - - -extensionexportnode properties - -Table 1. extensionexportnode properties - - extensionexportnode properties Data type Property description - - syntax_type RPython Specify which script runs: R or Python (R is the default). - r_syntax string The R scripting syntax to run. - python_syntax string The Python scripting syntax to run. - convert_flags StringsAndDoubles LogicalValues Option to convert flag fields. - convert_missing flag Option to convert missing values to the R NA value. -" -9AA00A347BD6F7725014C840F3D39BC0DDF26599,9AA00A347BD6F7725014C840F3D39BC0DDF26599," extensionimportnode properties - -![Extension Import node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/extensionimportnode.png) With the Extension Import node, you can run R or Python for Spark scripts to import data. - - - -extensionimportnode properties - -Table 1. extensionimportnode properties - - extensionimportnode properties Data type Property description - - syntax_type RPython Specify which script runs – R or Python (R is the default). -" -7985570F01D50D057EBD4FAFCF8C8A1BCACB3006,7985570F01D50D057EBD4FAFCF8C8A1BCACB3006," extensionmodelnode properties - -![Extension Model node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/extensionmodelnode.png)With the Extension Model node, you can run R or Python for Spark scripts to build and score results. - -Note that many of the properties and much of the information on this page is only applicable to SPSS Modeler Desktop streams. - - - -extensionmodelnode properties - -Table 1. extensionmodelnode properties - - extensionmodelnode Properties Values Property description - - syntax_type RPython Specify which script runs: R or Python (R is the default). - r_build_syntax string The R scripting syntax for model building. - r_score_syntax string The R scripting syntax for model scoring. - python_build_syntax string The Python scripting syntax for model building. - python_score_syntax string The Python scripting syntax for model scoring. - convert_flags StringsAndDoubles
LogicalValues Option to convert flag fields. - convert_missing flag Option to convert missing values to R NA value. - convert_datetime flag Option to convert variables with date or datetime formats to R date/time formats. - convert_datetime_class POSIXct

POSIXlt
Options to specify to what format variables with date or datetime formats are converted. -" -E85352E9588726771A8CD594A268ECA7D04379BD,E85352E9588726771A8CD594A268ECA7D04379BD," applyextension properties - -You can use Extension Model nodes to generate an Extension model nugget. The scripting name of this model nugget is applyextension. For more information on scripting the modeling node itself, see [extensionmodelnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/extensionmodelnodeslots.htmlextensionmodelnodeslots). - - - -applyextension properties - -Table 1. applyextension properties - - applyextension Properties Values Property Description - - r_syntax string R scripting syntax for model scoring. - python_syntax string Python scripting syntax for model scoring. - use_batch_size flag Enable use of batch processing. - batch_size integer Specify the number of data records to be included in each batch. - convert_flags StringsAndDoubles
LogicalValues Option to convert flag fields. - convert_missing flag Option to convert missing values to the R NA value. -" -14005F26F286B03F8AC692D42E9F3DFCE1F66962,14005F26F286B03F8AC692D42E9F3DFCE1F66962," extensionoutputnode properties - -![Extension Output node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/extensionoutputnode.png)With the Extension Output node, you can analyze data and the results of model scoring using your own custom R or Python for Spark script. The output of the analysis can be text or graphical. - -Note that many of the properties on this page are for streams from SPSS Modeler desktop. - - - -extensionoutputnode properties - -Table 1. extensionoutputnode properties - - extensionoutputnode properties Data type Property description - - syntax_type RPython Specify which script runs: R or Python (R is the default). - r_syntax string R scripting syntax for model scoring. - python_syntax string Python scripting syntax for model scoring. - convert_flags StringsAndDoubles LogicalValues Option to convert flag fields. - convert_missing flag Option to convert missing values to the R NA value. - convert_datetime flag Option to convert variables with date or datetime formats to R date/time formats. - convert_datetime_class POSIXct POSIXlt Options to specify to what format variables with date or datetime formats are converted. - output_to Screen File Specify the output type (Screen or File). - output_type Graph Text Specify whether to produce graphical or text output. - full_filename string File name to use for the generated output. -" -D487DB53087C5FD4CD2A25112F1F8A8E496EFC72,D487DB53087C5FD4CD2A25112F1F8A8E496EFC72," extensionprocessnode properties - -![Extension Transform node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/extensionprocessnode.png) With the Extension Transform node, you can take data from a flow and apply transformations to the data using R scripting or Python for Spark scripting. - - - -extensionprocessnode properties - -Table 1. extensionprocessnode properties - - extensionprocessnode properties Data type Property description - - syntax_type RPython Specify which script runs – R or Python (R is the default). - r_syntax string The R scripting syntax to run. - python_syntax string The Python scripting syntax to run. - use_batch_size flag Enable use of batch processing. - batch_size integer Specify the number of data records to include in each batch. - convert_flags StringsAndDoubles LogicalValues Option to convert flag fields. - convert_missing flag Option to convert missing values to the R NA value. -" -5EDDA143971CE5735307FEDE23FB0CD7E963264C,5EDDA143971CE5735307FEDE23FB0CD7E963264C," factornode properties - -![PCA/Factor node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/pcafactornodeicon.png)The PCA/Factor node provides powerful data-reduction techniques to reduce the complexity of your data. Principal components analysis (PCA) finds linear combinations of the input fields that do the best job of capturing the variance in the entire set of fields, where the components are orthogonal (perpendicular) to each other. Factor analysis attempts to identify underlying factors that explain the pattern of correlations within a set of observed fields. For both approaches, the goal is to find a small number of derived fields that effectively summarizes the information in the original set of fields. - - - -factornode properties - -Table 1. factornode properties - - factornode Properties Values Property description - - inputs [field1 ... fieldN] PCA/Factor models use a list of input fields, but no target. Weight and frequency fields are not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information. - method PCULSGLSMLPAFAlphaImage - mode SimpleExpert - max_iterations number - complete_records flag - matrix CorrelationCovariance - extract_factors ByEigenvaluesByFactors - min_eigenvalue number - max_factor number - rotation NoneVarimaxDirectObliminEquamaxQuartimaxPromax - delta number If you select DirectOblimin as your rotation data type, you can specify a value for delta. If you don't specify a value, the default value for delta is used. - kappa number If you select Promax as your rotation data type, you can specify a value for kappa. If you don't specify a value, the default value for kappa is used. - sort_values flag -" -92442D67350644BFCAEC2B2A47B98F4EDE943DC3,92442D67350644BFCAEC2B2A47B98F4EDE943DC3," applyfactornode properties - -You can use PCA/Factor modeling nodes to generate a PCA/Factor model nugget. The scripting name of this model nugget is applyfactornode. No other properties exist for this model nugget. For more information on scripting the modeling node itself, see [factornode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/factornodeslots.htmlfactornodeslots). -" -D5863A9857F07023885A810210DFB819AD692ED7,D5863A9857F07023885A810210DFB819AD692ED7," Setting algorithm properties - -For the Auto Classifier, Auto Numeric, and Auto Cluster nodes, you can set properties for specific algorithms used by the node by using the general form: - -autonode.setKeyedPropertyValue(, , ) - -For example: - -node.setKeyedPropertyValue(""neuralnetwork"", ""method"", ""MultilayerPerceptron"") - -Algorithm names for the Auto Classifier node are cart, chaid, quest, c50, logreg, decisionlist, bayesnet, discriminant, svm and knn. - -Algorithm names for the Auto Numeric node are cart, chaid, neuralnetwork, genlin, svm, regression, linear and knn. - -Algorithm names for the Auto Cluster node are twostep, k-means, and kohonen. - -Property names are standard as documented for each algorithm node. - -Algorithm properties that contain periods or other punctuation must be wrapped in single quotes. For example: - -node.setKeyedPropertyValue(""logreg"", ""tolerance"", ""1.0E-5"") - -Multiple values can also be assigned for a property. For example: - -node.setKeyedPropertyValue(""decisionlist"", ""search_direction"", [""Up"", ""Down""]) - -To enable or disable the use of a specific algorithm: - -node.setPropertyValue(""chaid"", True) - -Note: In cases where certain algorithm options aren't available in the Auto Classifier node, or when only a single value can be specified rather than a range of values, the same limits apply with scripting as when accessing the node in the standard manner. -" -055727FBA02274A87D30DA162E6F5ECA3ACE233D_0,055727FBA02274A87D30DA162E6F5ECA3ACE233D," featureselectionnode properties - -![Feature Selection node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/featureselectionnodeicon.png)The Feature Selection node screens input fields for removal based on a set of criteria (such as the percentage of missing values); it then ranks the importance of remaining inputs relative to a specified target. For example, given a data set with hundreds of potential inputs, which are most likely to be useful in modeling patient outcomes? - - - -featureselectionnode properties - -Table 1. featureselectionnode properties - - featureselectionnode Properties Values Property description - - target field Feature Selection models rank predictors relative to the specified target. Weight and frequency fields are not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html) for more information. - screen_single_category flag If True, screens fields that have too many records falling into the same category relative to the total number of records. - max_single_category number Specifies the threshold used when screen_single_category is True. - screen_missing_values flag If True, screens fields with too many missing values, expressed as a percentage of the total number of records. - max_missing_values number - screen_num_categories flag If True, screens fields with too many categories relative to the total number of records. - max_num_categories number - screen_std_dev flag If True, screens fields with a standard deviation of less than or equal to the specified minimum. - min_std_dev number - screen_coeff_of_var flag If True, screens fields with a coefficient of variance less than or equal to the specified minimum. - min_coeff_of_var number - criteria PearsonLikelihoodCramersVLambda When ranking categorical predictors against a categorical target, specifies the measure on which the importance value is based. -" -055727FBA02274A87D30DA162E6F5ECA3ACE233D_1,055727FBA02274A87D30DA162E6F5ECA3ACE233D," unimportant_below number Specifies the threshold p values used to rank variables as important, marginal, or unimportant. Accepts values from 0.0 to 1.0. - important_above number Accepts values from 0.0 to 1.0. - unimportant_label string Specifies the label for the unimportant ranking. - marginal_label string - important_label string - selection_mode ImportanceLevelImportanceValueTopN - select_important flag When selection_mode is set to ImportanceLevel, specifies whether to select important fields. - select_marginal flag When selection_mode is set to ImportanceLevel, specifies whether to select marginal fields. - select_unimportant flag When selection_mode is set to ImportanceLevel, specifies whether to select unimportant fields. -" -9A5011652C8FAD610EF217B82B7F28C8256DCE8B,9A5011652C8FAD610EF217B82B7F28C8256DCE8B," applyfeatureselectionnode properties - -You can use Feature Selection modeling nodes to generate a Feature Selection model nugget. The scripting name of this model nugget is applyfeatureselectionnode. For more information on scripting the modeling node itself, see [featureselectionnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/featureselectionnodeslots.htmlfeatureselectionnodeslots). - - - -applyfeatureselectionnode properties - -Table 1. applyfeatureselectionnode properties - - applyfeatureselectionnode Properties Values Property description - -" -76910487C819D14F9FEFCBC6252F25652AF1E65B,76910487C819D14F9FEFCBC6252F25652AF1E65B," fillernode properties - -![Filler node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/fillernodeicon.png)The Filler node replaces field values and changes storage. You can choose to replace values based on a CLEM condition, such as @BLANK(@FIELD). Alternatively, you can choose to replace all blanks or null values with a specific value. A Filler node is often used together with a Type node to replace missing values. - -Example - -node = stream.create(""filler"", ""My node"") -node.setPropertyValue(""fields"", [""Age""]) -node.setPropertyValue(""replace_mode"", ""Always"") -node.setPropertyValue(""condition"", ""(""Age"" > 60) and (""Sex"" = ""M"""") -node.setPropertyValue(""replace_with"", """"old man"""") - - - -fillernode properties - -Table 1. fillernode properties - - fillernode properties Data type Property description - - fields list Fields from the dataset whose values will be examined and replaced. - replace_mode AlwaysConditionalBlankNullBlankAndNull You can replace all values, blank values, or null values, or replace based on a specified condition. -" -D91044A492D05F87613BBA485CD2FAE1F54764DB_0,D91044A492D05F87613BBA485CD2FAE1F54764DB," filternode properties - -![Filter node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/filternodeicon.png)The Filter node filters (discards) fields, renames fields, and maps fields from one import node to another. - -Using the default_include property. Note that setting the value of the default_include property doesn't automatically include or exclude all fields; it simply determines the default for the current selection. This is functionally equivalent to selecting the Include All Fields option in the Filter node properties. For example, suppose you run the following script: - -node = modeler.script.stream().create(""filter"", ""Filter"") -node.setPropertyValue(""default_include"", False) - Include these two fields in the list -for f in [""Age"", ""Sex""]: -node.setKeyedPropertyValue(""include"", f, True) - -This will cause the node to pass the fields Age and Sex and discard all others. Now suppose you run the same script again but name two different fields: - -node = modeler.script.stream().create(""filter"", ""Filter"") -node.setPropertyValue(""default_include"", False) - Include these two fields in the list -for f in [""BP"", ""Na""]: -node.setKeyedPropertyValue(""include"", f, True) - -This will add two more fields to the filter so that a total of four fields are passed (Age, Sex, BP, Na). In other words, resetting the value of default_include to False doesn't automatically reset all fields. - -Alternatively, if you now change default_include to True, either using a script or in the Filter node dialog box, this would flip the behavior so the four fields listed previously would be discarded rather than included. When in doubt, experimenting with the controls in the Filter node properties may be helpful in understanding this interaction. - - - -filternode properties - -Table 1. filternode properties - - filternode properties Data type Property description - -" -D91044A492D05F87613BBA485CD2FAE1F54764DB_1,D91044A492D05F87613BBA485CD2FAE1F54764DB," default_include flag Keyed property to specify whether the default behavior is to pass or filter fields: Note that setting this property doesn't automatically include or exclude all fields; it simply determines whether selected fields are included or excluded by default. -" -916F0A90D0B8383F2353B3320628E23E38B380B5_0,916F0A90D0B8383F2353B3320628E23E38B380B5," genlinnode properties - -![GenLin node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/genlinnodeicon.png)The Generalized Linear (GenLin) model expands the general linear model so that the dependent variable is linearly related to the factors and covariates through a specified link function. Moreover, the model allows for the dependent variable to have a non-normal distribution. It covers the functionality of a wide number of statistical models, including linear regression, logistic regression, loglinear models for count data, and interval-censored survival models. - - - -genlinnode properties - -Table 1. genlinnode properties - - genlinnode Properties Values Property description - - target field GenLin models require a single target field which must be a nominal or flag field, and one or more input fields. A weight field can also be specified. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information. - use_weight flag - weight_field field Field type is only continuous. - target_represents_trials flag - trials_type VariableFixedValue - trials_field field Field type is continuous, flag, or ordinal. - trials_number number Default value is 10. - model_type MainEffectsMainAndAllTwoWayEffects - offset_type VariableFixedValue - offset_field field Field type is only continuous. - offset_value number Must be a real number. - base_category LastFirst - include_intercept flag - mode SimpleExpert - distribution BINOMIALGAMMAIGAUSSNEGBINNORMALPOISSONTWEEDIEMULTINOMIAL IGAUSS: Inverse Gaussian. NEGBIN: Negative binomial. - negbin_para_type SpecifyEstimate - negbin_parameter number Default value is 1. Must contain a non-negative real number. - tweedie_parameter number -" -916F0A90D0B8383F2353B3320628E23E38B380B5_1,916F0A90D0B8383F2353B3320628E23E38B380B5," link_function IDENTITYCLOGLOGLOGLOGCLOGITNEGBINNLOGLOGODDSPOWERPROBITPOWERCUMCAUCHITCUMCLOGLOGCUMLOGITCUMNLOGLOGCUMPROBIT CLOGLOG: Complementary log-log. LOGC: log complement. NEGBIN: Negative binomial. NLOGLOG: Negative log-log. CUMCAUCHIT: Cumulative cauchit. CUMCLOGLOG: Cumulative complementary log-log. CUMLOGIT: Cumulative logit. CUMNLOGLOG: Cumulative negative log-log. CUMPROBIT: Cumulative probit. - power number Value must be real, nonzero number. - method HybridFisherNewtonRaphson - max_fisher_iterations number Default value is 1; only positive integers allowed. - scale_method MaxLikelihoodEstimateDeviancePearsonChiSquareFixedValue - scale_value number Default value is 1; must be greater than 0. - covariance_matrix ModelEstimatorRobustEstimator - max_iterations number Default value is 100; non-negative integers only. - max_step_halving number Default value is 5; positive integers only. - check_separation flag - start_iteration number Default value is 20; only positive integers allowed. - estimates_change flag - estimates_change_min number Default value is 1E-006; only positive numbers allowed. - estimates_change_type AbsoluteRelative - loglikelihood_change flag - loglikelihood_change_min number Only positive numbers allowed. - loglikelihood_change_type AbsoluteRelative - hessian_convergence flag - hessian_convergence_min number Only positive numbers allowed. - hessian_convergence_type AbsoluteRelative - case_summary flag - contrast_matrices flag - descriptive_statistics flag - estimable_functions flag - model_info flag - iteration_history flag - goodness_of_fit flag - print_interval number Default value is 1; must be positive integer. - model_summary flag - lagrange_multiplier flag - parameter_estimates flag - include_exponential flag - covariance_estimates flag -" -916F0A90D0B8383F2353B3320628E23E38B380B5_2,916F0A90D0B8383F2353B3320628E23E38B380B5," correlation_estimates flag - analysis_type TypeITypeIIITypeIAndTypeIII - statistics WaldLR - citype WaldProfile - tolerancelevel number Default value is 0.0001. - confidence_interval number Default value is 95. - loglikelihood_function FullKernel - singularity_tolerance 1E-0071E-0081E-0091E-0101E-0111E-012 - value_order AscendingDescendingDataOrder - calculate_variable_importance flag - calculate_raw_propensities flag -" -BC3D88E89001BB639E418AE5971B209535603A18,BC3D88E89001BB639E418AE5971B209535603A18," applygeneralizedlinearnode properties - -You can use Generalized Linear (GenLin) modeling nodes to generate a GenLin model nugget. The scripting name of this model nugget is applygeneralizedlinearnode. For more information on scripting the modeling node itself, see [genlinnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/genlinnodeslots.htmlgenlinnodeslots). - - - -applygeneralizedlinearnode properties - -Table 1. applygeneralizedlinearnode properties - - applygeneralizedlinearnode Properties Values Property description - - calculate_raw_propensities flag -" -ADF96F4DC435CCF7E702EEAC6FCAA9F7DECD1DCB_0,ADF96F4DC435CCF7E702EEAC6FCAA9F7DECD1DCB," gle properties - -![GLE node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/glenodeicon.png)A GLE extends the linear model so that the target can have a non-normal distribution, is linearly related to the factors and covariates via a specified link function, and so that the observations can be correlated. Generalized linear mixed models cover a wide variety of models, from simple linear regression to complex multilevel models for non-normal longitudinal data. - - - -gle properties - -Table 1. gle properties - - gle Properties Values Property description - - custom_target flag Indicates whether to use target defined in upstream node (false) or custom target specified by target_field (true). - target_field field Field to use as target if custom_target is true. - use_trials flag Indicates whether additional field or value specifying number of trials is to be used when target response is a number of events occurring in a set of trials. Default is false. - use_trials_field_or_value Field
Value Indicates whether field (default) or value is used to specify number of trials. - trials_field field Field to use to specify number of trials. - trials_value integer Value to use to specify number of trials. If specified, minimum value is 1. - use_custom_target_reference flag Indicates whether custom reference category is to be used for a categorical target. Default is false. - target_reference_value string Reference category to use if use_custom_target_reference is true. - dist_link_combination NormalIdentity
GammaLog
PoissonLog
NegbinLog
TweedieIdentity
NominalLogit
BinomialLogit
BinomialProbit
BinomialLogC
CUSTOM Common models for distribution of values for target. Choose CUSTOM to specify a distribution from the list provided by target_distribution. -" -ADF96F4DC435CCF7E702EEAC6FCAA9F7DECD1DCB_1,ADF96F4DC435CCF7E702EEAC6FCAA9F7DECD1DCB," target_distribution Normal
Binomial
Multinomial
Gamma
INVERSE_GAUSS
NEG_BINOMIAL
Poisson
TWEEDIE
UNKNOWN Distribution of values for target when dist_link_combination is Custom. - link_function_type UNKNOWN
IDENTITY
LOG
LOGIT
PROBIT
COMPL_LOG_LOG
POWER
LOG_COMPL
NEG_LOG_LOG
ODDS_POWER
NEG_BINOMIAL
GEN_LOGIT
CUMUL_LOGIT
CUMUL_PROBIT
CUMUL_COMPL_LOG_LOG
CUMUL_NEG_LOG_LOG
CUMUL_CAUCHIT Link function to relate target values to predictors. If target_distribution is Binomial you can use:

UNKNOWNIDENTITYLOGLOGITPROBITCOMPL_LOG_LOGPOWERLOG_COMPLNEG_LOG_LOGODDS_POWER

If target_distribution is NEG_BINOMIAL you can use:

NEG_BINOMIAL

If target_distribution is UNKNOWN, you can use:

GEN_LOGITCUMUL_LOGITCUMUL_PROBITCUMUL_COMPL_LOG_LOGCUMUL_NEG_LOG_LOGCUMUL_CAUCHIT - link_function_param number Tweedie parameter value to use. Only applicable if normal_link_function or link_function_type is POWER. - tweedie_param number Link function parameter value to use. Only applicable if dist_link_combination is set to TweedieIdentity, or link_function_type is TWEEDIE. - use_predefined_inputs flag Indicates whether model effect fields are to be those defined upstream as input fields (true) or those from fixed_effects_list (false). -" -ADF96F4DC435CCF7E702EEAC6FCAA9F7DECD1DCB_2,ADF96F4DC435CCF7E702EEAC6FCAA9F7DECD1DCB," model_effects_list structured If use_predefined_inputs is false, specifies the input fields to use as model effect fields. - use_intercept flag If true (default), includes the intercept in the model. - regression_weight_field field Field to use as analysis weight field. - use_offset None
Value
Variable Indicates how offset is specified. Value None means no offset is used. - offset_value number Value to use for offset if use_offset is set to offset_value. - offset_field field Field to use for offset value if use_offset is set to offset_field. - target_category_order Ascending
Descending Sorting order for categorical targets. Default is Ascending. - inputs_category_order Ascending
Descending Sorting order for categorical predictors. Default is Ascending. - max_iterations integer Maximum number of iterations the algorithm will perform. A non-negative integer; default is 100. - confidence_level number Confidence level used to compute interval estimates of the model coefficients. A non-negative integer; maximum is 100, default is 95. - test_fixed_effects_coeffecients Model
Robust Method for computing the parameter estimates covariance matrix. - detect_outliers flag When true the algorithm finds influential outliers for all distributions except multinomial distribution. - conduct_trend_analysis flag When true the algorithm conducts trend analysis for the scatter plot. - estimation_method FISHER_SCORING
NEWTON_RAPHSON
HYBRID Specify the maximum likelihood estimation algorithm. - max_fisher_iterations integer If using the FISHER_SCORINGestimation_method, the maximum number of iterations. Minimum 0, maximum 20. - scale_parameter_method MLE
FIXED
DEVIANCE
PEARSON_CHISQUARE Specify the method to be used for the estimation of the scale parameter. -" -ADF96F4DC435CCF7E702EEAC6FCAA9F7DECD1DCB_3,ADF96F4DC435CCF7E702EEAC6FCAA9F7DECD1DCB," scale_value number Only available if scale_parameter_method is set to Fixed. - negative_binomial_method MLE
FIXED Specify the method to be for the estimation of the negative binomial ancillary parameter. - negative_binomial_value number Only available if negative_binomial_method is set to Fixed. - use_p_converge flag Option for parameter convergence. - p_converge number Blank, or any positive value. - p_converge_type flag True = Absolute, False = Relative - use_l_converge flag Option for log-likelihood convergence. - l_converge number Blank, or any positive value. - l_converge_type flag True = Absolute, False = Relative - use_h_converge flag Option for Hessian convergence. - h_converge number Blank, or any positive value. - h_converge_type flag True = Absolute, False = Relative - max_iterations integer Maximum number of iterations the algorithm will perform. A non-negative integer; default is 100. - sing_tolerance integer - use_model_selection flag Enables the parameter threshold and model selection method controls.. - method LASSO

ELASTIC_NET

FORWARD_STEPWISE
RIDGE Determines the model selection method, or if using Ridge the regularization method, used. - detect_two_way_interactions flag When True the model will automatically detect two-way interactions between input fields. This control should only be enabled if the model is main effects only (that is, where the user has not created any higher order effects) and if the method selected is Forward Stepwise, Lasso, or Elastic Net. - automatic_penalty_params flag Only available if model selection method is Lasso or Elastic Net. Use this function to enter penalty parameters associated with either the Lasso or Elastic Net variable selection methods. If True, default values are used. If False, the penalty parameters are enabled custom values can be entered. -" -ADF96F4DC435CCF7E702EEAC6FCAA9F7DECD1DCB_4,ADF96F4DC435CCF7E702EEAC6FCAA9F7DECD1DCB," lasso_penalty_param number Only available if model selection method is Lasso or Elastic Net and automatic_penalty_params is False. Specify the penalty parameter value for Lasso. - elastic_net_penalty_param1 number Only available if model selection method is Lasso or Elastic Net and automatic_penalty_params is False. Specify the penalty parameter value for Elastic Net parameter 1. - elastic_net_penalty_param2 number Only available if model selection method is Lasso or Elastic Net and automatic_penalty_params is False. Specify the penalty parameter value for Elastic Net parameter 2. - probability_entry number Only available if the method selected is Forward Stepwise. Specify the significance level of the f statistic criterion for effect inclusion. - probability_removal number Only available if the method selected is Forward Stepwise. Specify the significance level of the f statistic criterion for effect removal. - use_max_effects flag Only available if the method selected is Forward Stepwise. Enables the max_effects control. When False the default number of effects included should equal the total number of effects supplied to the model, minus the intercept. - max_effects integer Specify the maximum number of effects when using the forward stepwise building method. - use_max_steps flag Enables the max_steps control. When False the default number of steps should equal three times the number of effects supplied to the model, excluding the intercept. - max_steps integer Specify the maximum number of steps to be taken when using the Forward Stepwise building method. - use_model_name flag Indicates whether to specify a custom name for the model (true) or to use the system-generated name (false). Default is false. - model_name string If use_model_name is true, specifies the model name to use. - usePI flag If true, predictor importance is calculated.. -" -863FD4EEE7625CF4012BC9E37B5B66CD25554B8A,863FD4EEE7625CF4012BC9E37B5B66CD25554B8A," applygle properties - -You can use the GLE modeling node to generate a GLE model nugget. The scripting name of this model nugget is applygle. For more information on scripting the modeling node itself, see [gle properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/glenodeslots.htmlglenodeslots). - - - -applygle properties - -Table 1. applygle properties - - applygle Properties Values Property description - - enable_sql_generation falsenative When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations. -" -0A6D8500DAC43A18EC5DD8FCC3D31C2A31546554_0,0A6D8500DAC43A18EC5DD8FCC3D31C2A31546554," glmmnode properties - -![GLMM node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/glmmnodeicon.png)A generalized linear mixed model (GLMM) extends the linear model so that the target can have a non-normal distribution, is linearly related to the factors and covariates via a specified link function, and so that the observations can be correlated. GLMM models cover a wide variety of models, from simple linear regression to complex multilevel models for non-normal longitudinal data. - - - -glmmnode properties - -Table 1. glmmnode properties - - glmmnode Properties Values Property description - - residual_subject_spec structured The combination of values of the specified categorical fields that uniquely define subjects within the data set - repeated_measures structured Fields used to identify repeated observations. - residual_group_spec [field1 ... fieldN] Fields that define independent sets of repeated effects covariance parameters. - residual_covariance_type Diagonal
AR1
ARMA11
COMPOUND_SYMMETRY
IDENTITY
TOEPLITZ
UNSTRUCTURED
VARIANCE_COMPONENTS Specifies covariance structure for residuals. - custom_target flag Indicates whether to use target defined in upstream node (false) or custom target specified by target_field (true). - target_field field Field to use as target if custom_target is true. - use_trials flag Indicates whether additional field or value specifying number of trials is to be used when target response is a number of events occurring in a set of trials. Default is false. - use_field_or_value Field
Value Indicates whether field (default) or value is used to specify number of trials. - trials_field field Field to use to specify number of trials. - trials_value integer Value to use to specify number of trials. If specified, minimum value is 1. -" -0A6D8500DAC43A18EC5DD8FCC3D31C2A31546554_1,0A6D8500DAC43A18EC5DD8FCC3D31C2A31546554," use_custom_target_reference flag Indicates whether custom reference category is to be used for a categorical target. Default is false. - target_reference_value string Reference category to use if use_custom_target_reference is true. - dist_link_combination Nominal
Logit
GammaLog
BinomialLogit
PoissonLog
BinomialProbit
NegbinLog
BinomialLogC
Custom Common models for distribution of values for target. Choose Custom to specify a distribution from the list provided bytarget_distribution. - target_distribution Normal
Binomial
Multinomial
Gamma
Inverse
NegativeBinomial
Poisson Distribution of values for target when dist_link_combination is Custom. - link_function_type Identity
LogC
Log
CLOGLOGLogit
NLOGLOGPROBIT
POWER
CAUCHIT Link function to relate target
values to predictors.
If target_distribution is
Binomial you can use any
of the listed link functions.
If target_distribution is
Multinomial you can use
CLOGLOG, CAUCHIT, LOGIT,
NLOGLOG, or PROBIT.
If target_distribution is
anything other than Binomial or
Multinomial you can use
IDENTITY, LOG, or POWER. - link_function_param number Link function parameter value to use. Only applicable if normal_link_function or link_function_type is POWER. - use_predefined_inputs flag Indicates whether fixed effect fields are to be those defined upstream as input fields (true) or those from fixed_effects_list (false). Default is false. - fixed_effects_list structured If use_predefined_inputs is false, specifies the input fields to use as fixed effect fields. -" -0A6D8500DAC43A18EC5DD8FCC3D31C2A31546554_2,0A6D8500DAC43A18EC5DD8FCC3D31C2A31546554," use_intercept flag If true (default), includes the intercept in the model. - random_effects_list structured List of fields to specify as random effects. - regression_weight_field field Field to use as analysis weight field. - use_offset Noneoffset_valueoffset_field Indicates how offset is specified. Value None means no offset is used. - offset_value number Value to use for offset if use_offset is set to offset_value. - offset_field field Field to use for offset value if use_offset is set to offset_field. - target_category_order AscendingDescendingData Sorting order for categorical targets. Value Data specifies using the sort order found in the data. Default is Ascending. - inputs_category_order AscendingDescendingData Sorting order for categorical predictors. Value Data specifies using the sort order found in the data. Default is Ascending. - max_iterations integer Maximum number of iterations the algorithm will perform. A non-negative integer; default is 100. - confidence_level integer Confidence level used to compute interval estimates of the model coefficients. A non-negative integer; maximum is 100, default is 95. - degrees_of_freedom_method FixedVaried Specifies how degrees of freedom are computed for significance test. - test_fixed_effects_coeffecients ModelRobust Method for computing the parameter estimates covariance matrix. - use_p_converge flag Option for parameter convergence. - p_converge number Blank, or any positive value. - p_converge_type AbsoluteRelative - use_l_converge flag Option for log-likelihood convergence. - l_converge number Blank, or any positive value. - l_converge_type AbsoluteRelative - use_h_converge flag Option for Hessian convergence. - h_converge number Blank, or any positive value. - h_converge_type AbsoluteRelative - max_fisher_step integer - sing_tolerance number -" -0A6D8500DAC43A18EC5DD8FCC3D31C2A31546554_3,0A6D8500DAC43A18EC5DD8FCC3D31C2A31546554," use_model_name flag Indicates whether to specify a custom name for the model (true) or to use the system-generated name (false). Default is false. - model_name string If use_model_name is true, specifies the model name to use. - confidence onProbabilityonIncrease Basis for computing scoring confidence value: highest predicted probability, or difference between highest and second highest predicted probabilities. - score_category_probabilities flag If true, produces predicted probabilities for categorical targets. Default is false. - max_categories integer If score_category_probabilities is true, specifies maximum number of categories to save. - score_propensity flag If true, produces propensity scores for flag target fields that indicate likelihood of ""true"" outcome for field. - emeans structure For each categorical field from the fixed effects list, specifies whether to produce estimated marginal means. - covariance_list structure For each continuous field from the fixed effects list, specifies whether to use the mean or a custom value when computing estimated marginal means. - mean_scale OriginalTransformed Specifies whether to compute estimated marginal means based on the original scale of the target (default) or on the link function transformation. - comparison_adjustment_method LSDSEQBONFERRONISEQSIDAK Adjustment method to use when performing hypothesis tests with multiple contrasts. - use_trials_field_or_value ""field""""value"" - residual_subject_ui_spec array Residual subject specification: The combination of values of the specified categorical fields should uniquely define subjects within the dataset. For example, a single Patient ID field should be sufficient to define subjects in a single hospital, but the combination of Hospital ID and Patient ID may be necessary if patient identification numbers are not unique across hospitals. -" -337CC5401082DFD6C8C79D49CD97F7BC197C7303,337CC5401082DFD6C8C79D49CD97F7BC197C7303," applyglmmnode properties - -You can use GLMM modeling nodes to generate a GLMM model nugget. The scripting name of this model nugget is applyglmmnode. For more information on scripting the modeling node itself, see [glmmnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/glmmnodeslots.htmlglmmnodeslots). - - - -applyglmmnode properties - -Table 1. applyglmmnode properties - - applyglmmnode Properties Values Property description - - confidence onProbabilityonIncrease Basis for computing scoring confidence value: highest predicted probability, or difference between highest and second highest predicted probabilities. - score_category_probabilities flag If set to True, produces the predicted probabilities for categorical targets. A field is created for each category. Default is False. - max_categories integer Maximum number of categories for which to predict probabilities. Used only if score_category_probabilities is True. -" -D1C3F3DB7837F7C5803F52829A542F6BA8B4837D_0,D1C3F3DB7837F7C5803F52829A542F6BA8B4837D," gmm properties - -![Gaussian Mixture node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/pythongmmnodeicon.png)A Gaussian Mixture© model is a probabilistic model that assumes all the data points are generated from a mixture of a finite number of Gaussian distributions with unknown parameters. One can think of mixture models as generalizing k-means clustering to incorporate information about the covariance structure of the data as well as the centers of the latent Gaussians. The Gaussian Mixture node in SPSS Modeler exposes the core features and commonly used parameters of the Gaussian Mixture library. The node is implemented in Python. - - - -gmm properties - -Table 1. gmm properties - - gmm properties Data type Property description - - custom_fields boolean This option tells the node to use field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify the following fields as required. - inputs field List of the field names for input. - target field One field name for target. - fast_build boolean Utilize multiple CPU cores to improve model building. - use_partition boolean Set to True or False to specify whether to use partitioned data. Default is False. - covariance_type string Specify Full, Tied, Diag, or Spherical to set the covariance type. - number_component integer Specify an integer for the number of mixture components. Minimum value is 1. Default value is 2. - component_lable boolean Specify True to set the cluster label to a string or False to set the cluster label to a number. Default is False. - label_prefix string If using a string cluster label, you can specify a prefix. - enable_random_seed boolean Specify True if you want to use a random seed. Default is False. - random_seed integer If using a random seed, specify an integer to be used for generating random samples. -" -D1C3F3DB7837F7C5803F52829A542F6BA8B4837D_1,D1C3F3DB7837F7C5803F52829A542F6BA8B4837D," tol Double Specify the convergence threshold. Default is 0.000.1. - max_iter integer Specify the maximum number of iterations to perform. Default is 100. -" -F2D3C76D5EABBBF72A0314F29374527C8339591A,F2D3C76D5EABBBF72A0314F29374527C8339591A," applygmm properties - -You can use the Gaussian Mixture node to generate a Gaussian Mixture model nugget. The scripting name of this model nugget is applygmm. For more information on scripting the modeling node itself, see [gmm properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/gmmnodeslots.htmlgmmnodeslots). - - - -applygmm properties - -Table 1. applygmm properties - - applygmm properties Data type Property description - - centers - item_count - total - dimension -" -1F781DA5779DAFEFBB53038F71A18BBE2649117B_0,1F781DA5779DAFEFBB53038F71A18BBE2649117B," associationrulesnode properties - -![Association Rules node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/gsar_node_icon.png)The Association Rules node is similar to the Apriori Node. However, unlike Apriori, the Association Rules node can process list data. In addition, the Association Rules node can be used with SPSS Analytic Server to process big data and take advantage of faster parallel processing. - - - -associationrulesnode properties - -Table 1. associationrulesnode properties - - associationrulesnode properties Data type Property description - - predictions field Fields in this list can only appear as a predictor of a rule - conditions [field1...fieldN] Fields in this list can only appear as a condition of a rule - max_rule_conditions integer The maximum number of conditions that can be included in a single rule. Minimum 1, maximum 9. - max_rule_predictions integer The maximum number of predictions that can be included in a single rule. Minimum 1, maximum 5. - max_num_rules integer The maximum number of rules that can be considered as part of rule building. Minimum 1, maximum 10,000. - rule_criterion_top_n ConfidenceRulesupportLiftConditionsupportDeployability The rule criterion that determines the value by which the top ""N"" rules in the model are chosen. - true_flags Boolean Setting as Y determines that only the true values for flag fields are considered during rule building. - rule_criterion Boolean Setting as Y determines that the rule criterion values are used for excluding rules during model building. - min_confidence number 0.1 to 100 - the percentage value for the minimum required confidence level for a rule produced by the model. If the model produces a rule with a confidence level less than the value specified here the rule is discarded. - min_rule_support number 0.1 to 100 - the percentage value for the minimum required rule support for a rule produced by the model. If the model produces a rule with a rule support level less than the specified value the rule is discarded. -" -1F781DA5779DAFEFBB53038F71A18BBE2649117B_1,1F781DA5779DAFEFBB53038F71A18BBE2649117B," min_condition_support number 0.1 to 100 - the percentage value for the minimum required condition support for a rule produced by the model. If the model produces a rule with a condition support level less than the specified value the rule is discarded. - min_lift integer 1 to 10 - represents the minimum required lift for a rule produced by the model. If the model produces a rule with a lift level less than the specified value the rule is discarded. - exclude_rules Boolean Used to select a list of related fields from which you do not want the model to create rules. Example: set :gsarsnode.exclude_rules = [field1,field2, field3]],field4, field5]]] - where each list of fields separated by [] is a row in the table. - num_bins integer Set the number of automatic bins that continuous fields are binned to. Minimum 2, maximum 10. - max_list_length integer Applies to any list fields for which the maximum length is not known. Elements in the list up until the number specified here are included in the model build; any further elements are discarded. Minimum 1, maximum 100. - output_confidence Boolean - output_rule_support Boolean - output_lift Boolean - output_condition_support Boolean - output_deployability Boolean - rules_to_display uptoall The maximum number of rules to display in the output tables. - display_upto integer If upto is set in rules_to_display, set the number of rules to display in the output tables. Minimum 1. - field_transformations Boolean - records_summary Boolean - rule_statistics Boolean - most_frequent_values Boolean - most_frequent_fields Boolean - word_cloud Boolean - word_cloud_sort ConfidenceRulesupportLiftConditionsupportDeployability - word_cloud_display integer Minimum 1, maximum 20 - max_predictions integer The maximum number of rules that can be applied to each input to the score. - criterion ConfidenceRulesupportLiftConditionsupportDeployability Select the measure used to determine the strength of rules. -" -9DA9A2809D484A6CAA70A66A3548CF4A537950FC,9DA9A2809D484A6CAA70A66A3548CF4A537950FC," applyassociationrulesnode properties - -You can use the Association Rules modeling node to generate an association rules model nugget. The scripting name of this model nugget is applyassociationrulesnode. For more information on scripting the modeling node itself, see [associationrulesnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/gsarnodeslots.htmlgsarnodeslots). - - - -applyassociationrulesnode properties - -Table 1. applyassociationrulesnode properties - - applyassociationrulesnode properties Data type Property description - - max_predictions integer The maximum number of rules that can be applied to each input to the score. - criterion ConfidenceRulesupportLiftConditionsupportDeployability Select the measure used to determine the strength of rules. -" -E01184BCBA866D676B5A236D6638E78D3F55C794_0,E01184BCBA866D676B5A236D6638E78D3F55C794," hdbscannode properties - -![HDBSCAN node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/pythonhdbscannodeicon.png)Hierarchical Density-Based Spatial Clustering (HDBSCAN)© uses unsupervised learning to find clusters, or dense regions, of a data set. The HDBSCAN node in SPSS Modeler exposes the core features and commonly used parameters of the HDBSCAN library. The node is implemented in Python, and you can use it to cluster your dataset into distinct groups when you don't know what those groups are at first. - - - -hdbscannode properties - -Table 1. hdbscannode properties - - hdbscannode properties Data type Property description - - custom_fields boolean This option tells the node to use field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify the following fields as required. - inputs field Input fields for clustering. - useHPO boolean Specify true or false to enable or disable Hyper-Parameter Optimization (HPO) based on Rbfopt, which automatically discovers the optimal combination of parameters so that the model will achieve the expected or lesser error rate on the samples. Default is false. - min_cluster_size integer The minimum size of clusters. Specify an integer. Default is 5. - min_samples integer The number of samples in a neighborhood for a point to be considered a core point. Specify an integer. If set to 0, the min_cluster_size is used. Default is 0. - algorithm string Specify which algorithm to use: best, generic, prims_kdtree, prims_balltree, boruvka_kdtree, or boruvka_balltree. Default is best. -" -E01184BCBA866D676B5A236D6638E78D3F55C794_1,E01184BCBA866D676B5A236D6638E78D3F55C794," metric string Specify which metric to use when calculating distance between instances in a feature array: euclidean, cityblock, L1, L2, manhattan, braycurtis, canberra, chebyshev, correlation, minkowski, or sqeuclidean. Default is euclidean. - useStringLabel boolean Specify true to use a string cluster label, or false to use a number cluster label. Default is false. - stringLabelPrefix string If the useStringLabel parameter is set to true, specify a value for the string label prefix. Default prefix is cluster. - approx_min_span_tree boolean Specify true to accept an approximate minimum spanning tree, or false if you are willing to sacrifice speed for correctness. Default is true. - cluster_selection_method string Specify the method to use for selecting clusters from the condensed tree: eom or leaf. Default is eom (Excess of Mass algorithm). - allow_single_cluster boolean Specify true if you want to allow single cluster results. Default is false. - p_value double Specify the p value to use if you're using minkowski for the metric. Default is 1.5. - leaf_size integer If using a space tree algorithm (boruvka_kdtree, or boruvka_balltree), specify the number of points in a leaf node of the tree. Default is 40. - outputValidity boolean Specify true or false to control whether the Validity Index chart is included in the model output. - outputCondensed boolean Specify true or false to control whether the Condensed Tree chart is included in the model output. - outputSingleLinkage boolean Specify true or false to control whether the Single Linkage Tree chart is included in the model output. -" -4F0098CE544BA8AC594F98AF8DF26B7911399750,4F0098CE544BA8AC594F98AF8DF26B7911399750," hdbscannugget properties - -You can use the HDBSCAN node to generate an HDBSCAN model nugget. The scripting name of this model nugget is hdbscannugget. No other properties exist for this model nugget. For more information on scripting the modeling node itself, see [hdbscannode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/hdbscannodeslots.htmlhdbscannodeslots). -" -8DAA2C34D27A7E09C0AB837C191E87F320790F75,8DAA2C34D27A7E09C0AB837C191E87F320790F75," histogramnode properties - -![Histogram node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/histogramnodeicon.png)The Histogram node shows the occurrence of values for numeric fields. It's often used to explore the data before manipulations and model building. Similar to the Distribution node, the Histogram node frequently reveals imbalances in the data. - - - -histogramnode properties - -Table 1. histogramnode properties - - histogramnode properties Data type Property description - - field field - color_field field - panel_field field - animation_field field - range_mode AutomaticUserDefined - range_min number - range_max number - bins ByNumberByWidth - num_bins number - bin_width number - normalize flag - separate_bands flag - x_label_auto flag - x_label string - y_label_auto flag - y_label string - use_grid flag - graph_background color Standard graph colors are described at the beginning of this section. -" -BDC1B4283563848E2C775804FC0857DBDE8843AF,BDC1B4283563848E2C775804FC0857DBDE8843AF," historynode properties - -![History node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/historynodeicon.png)The History node creates new fields containing data from fields in previous records. History nodes are most often used for sequential data, such as time series data. Before using a History node, you may want to sort the data using a Sort node. - -Example - -node = stream.create(""history"", ""My node"") -node.setPropertyValue(""fields"", [""Drug""]) -node.setPropertyValue(""offset"", 1) -node.setPropertyValue(""span"", 3) -node.setPropertyValue(""unavailable"", ""Discard"") -node.setPropertyValue(""fill_with"", ""undef"") - - - -historynode properties - -Table 1. historynode properties - - historynode properties Data type Property description - - fields list Fields for which you want a history. - offset number Specifies the latest record (prior to the current record) from which you want to extract historical field values. - span number Specifies the number of prior records from which you want to extract values. -" -2756DEAD36AC092838F80ACFFE6ECEE13A22A376,2756DEAD36AC092838F80ACFFE6ECEE13A22A376," isotonicasnode properties - -![Isotonic-AS node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/sparkisotonicasnodeicon.png)Isotonic Regression belongs to the family of regression algorithms. The Isotonic-AS node in SPSS Modeler is implemented in Spark. For details about Isotonic Regression algorithms, see [https://spark.apache.org/docs/2.2.0/mllib-isotonic-regression.html](https://spark.apache.org/docs/2.2.0/mllib-isotonic-regression.html). - - - -isotonicasnode properties - -Table 1. isotonicasnode properties - - isotonicasnode properties Data type Property description - - label string This property is a dependent variable for which isotonic regression is calculated. - features string This property is an independent variable. - weightCol string The weight represents a number of measures. Default is 1. -" -F67E458A29CF154C33221A8889789241725FE5C7_0,F67E458A29CF154C33221A8889789241725FE5C7," Python and Jython - -Jython is an implementation of the Python scripting language, which is written in the Java language and integrated with the Java platform. Python is a powerful object-oriented scripting language. - -Jython is useful because it provides the productivity features of a mature scripting language and, unlike Python, runs in any environment that supports a Java virtual machine (JVM). This means that the Java libraries on the JVM are available to use when you're writing programs. With Jython, you can take advantage of this difference, and use the syntax and most of the features of the Python language. - -As a scripting language, Python (and its Jython implementation) is easy to learn and efficient to code, and has minimal required structure to create a running program. Code can be entered interactively, that is, one line at a time. Python is an interpreted scripting language; there is no precompile step, as there is in Java. Python programs are simply text files that are interpreted as they're input (after parsing for syntax errors). Simple expressions, like defined values, as well as more complex actions, such as function definitions, are immediately executed and available for use. Any changes that are made to the code can be tested quickly. Script interpretation does, however, have some disadvantages. For example, use of an undefined variable is not a compiler error, so it's detected only if (and when) the statement in which the variable is used is executed. In this case, you can edit and run the program to debug the error. - -" -F67E458A29CF154C33221A8889789241725FE5C7_1,F67E458A29CF154C33221A8889789241725FE5C7,"Python sees everything, including all data and code, as an object. You can, therefore, manipulate these objects with lines of code. Some select types, such as numbers and strings, are more conveniently considered as values, not objects; this is supported by Python. There is one null value that's supported. This null value has the reserved name None. - -For a more in-depth introduction to Python and Jython scripting, and for some example scripts, see [http://www.ibm.com/developerworks/java/tutorials/j-jython1/j-jython1.html](http://www.ibm.com/developerworks/java/tutorials/j-jython1/j-jython1.html) and [http://www.ibm.com/developerworks/java/tutorials/j-jython2/j-jython2.html](http://www.ibm.com/developerworks/java/tutorials/j-jython2/j-jython2.html). -" -033F114BFF6D5479C2B4BE7C1542A4C778ABA53E,033F114BFF6D5479C2B4BE7C1542A4C778ABA53E," Adding attributes to a class instance - -Unlike in Java, in Python clients can add attributes to an instance of a class. Only the one instance is changed. For example, to add attributes to an instance x, set new values on that instance: - -x.attr1 = 1 -x.attr2 = 2 -. -" -8BC347015FD7CE2AF13B17DE4D287471CB994F38,8BC347015FD7CE2AF13B17DE4D287471CB994F38," The scripting API - -The Scripting API provides access to a wide range of SPSS Modeler functionality. All the methods described so far are part of the API and can be accessed implicitly within the script without further imports. However, if you want to reference the API classes, you must import the API explicitly with the following statement: - -import modeler.api - -This import statement is required by many of the scripting API examples. -" -F290D0C61B4A664E303DE559BBC559015FD375F9,F290D0C61B4A664E303DE559BBC559015FD375F9," Example: Searching for nodes using a custom filter - -The section [Finding nodes](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_node_find.htmlpython_node_find) includes an example of searching for a node in a flow using the type name of the node as the search criterion. In some situations, a more generic search is required and this can be accomplished by using the NodeFilter class and the flow findAll() method. This type of search involves the following two steps: - - - -1. Creating a new class that extends NodeFilter and that implements a custom version of the accept() method. -2. Calling the flow findAll() method with an instance of this new class. This returns all nodes that meet the criteria defined in the accept() method. - - - -The following example shows how to search for nodes in a flow that have the node cache enabled. The returned list of nodes can be used to either flush or disable the caches of these nodes. - -import modeler.api - -class CacheFilter(modeler.api.NodeFilter): -""""""A node filter for nodes with caching enabled"""""" -def accept(this, node): -return node.isCacheEnabled() - -cachingnodes = modeler.script.stream().findAll(CacheFilter(), False) -" -78488A77CB39BDD413DBB7682F1DBE2675B3E3A0,78488A77CB39BDD413DBB7682F1DBE2675B3E3A0," Defining a class - -Within a Python class, you can define both variables and methods. Unlike in Java, in Python you can define any number of public classes per source file (or module). Therefore, you can think of a module in Python as similar to a package in Java. - -In Python, classes are defined using the class statement. The class statement has the following form: - -class name (superclasses): statement - -or - -class name (superclasses): -assignment -. -. -function -. -. - -When you define a class, you have the option to provide zero or more assignment statements. These create class attributes that are shared by all instances of the class. You can also provide zero or more function definitions. These function definitions create methods. The superclasses list is optional. - -The class name should be unique in the same scope, that is within a module, function, or class. You can define multiple variables to reference the same class. -" -E3EFB6106AE81DB5A8B3379C3EDCF86E31F95AB0,E3EFB6106AE81DB5A8B3379C3EDCF86E31F95AB0," Creating a class instance - -You can use classes to hold class (or shared) attributes or to create class instances. To create an instance of a class, you call the class as if it were a function. For example, consider the following class: - -class MyClass: -pass - -Here, the pass statement is used because a statement is required to complete the class, but no action is required programmatically. - -The following statement creates an instance of the class MyClass: - -x = MyClass() -" -3491F666270894EE4BE071FD4A8551DF94CB9889,3491F666270894EE4BE071FD4A8551DF94CB9889," Defining class attributes and methods - -Any variable that's bound in a class is a class attribute. Any function defined within a class is a method. Methods receive an instance of the class, conventionally called self, as the first argument. For example, to define some class attributes and methods, you might enter the following script: - -class MyClass -attr1 = 10 class attributes -attr2 = ""hello"" - -def method1(self): -print MyClass.attr1 reference the class attribute - -def method2(self): -print MyClass.attr2 reference the class attribute - -def method3(self, text): -self.text = text instance attribute -print text, self.text print my argument and my attribute - -method4 = method3 make an alias for method3 - -Inside a class, you should qualify all references to class attributes with the class name (for example, MyClass.attr1). All references to instance attributes should be qualified with the self variable (for example, self.text). Outside the class, you should qualify all references to class attributes with the class name (for example, MyClass.attr1) or with an instance of the class (for example, x.attr1, where x is an instance of the class). Outside the class, all references to instance variables should be qualified with an instance of the class (for example, x.text). -" -5ED78E19C7780C6856E1FA05D4B8A3F671FC878B,5ED78E19C7780C6856E1FA05D4B8A3F671FC878B," Diagrams - -The term diagram covers the functions that are supported by both normal flows and SuperNode flows, such as adding and removing nodes and modifying connections between the nodes. -" -640F57E8262F846DA7884E43F6F2F6C04CD15667,640F57E8262F846DA7884E43F6F2F6C04CD15667," Global values - -Global values are used to compute various summary statistics for specified fields. These summary values can be accessed anywhere within the flow. Global values are similar to flow parameters in that they are accessed by name through the flow. They're different from flow parameters in that the associated values are updated automatically when a Set Globals node is run, rather than being assigned by scripting. The global values for a flow are accessed by calling the flow's getGlobalValues() method. - -The GlobalValues object defines the functions that are shown in the following table. - - - -Functions that are defined by the GlobalValues object - -Table 1. Functions that are defined by the GlobalValues object - - Method Return type Description - - g.fieldNameIterator() Iterator Returns an iterator for each field name with at least one global value. - g.getValue(type, fieldName) Object Returns the global value for the specified type and field name, or None if no value can be located. The returned value is generally expected to be a number, although future functionality may return different value types. - g.getValues(fieldName) Map Returns a map containing the known entries for the specified field name, or None if there are no existing entries for the field. - - - -GlobalValues.Type defines the type of summary statistics that are available. The following summary statistics are available: - - - -* MAX: the maximum value of the field. -* MEAN: the mean value of the field. -* MIN: the minimum value of the field. -* STDDEV: the standard deviation of the field. -* SUM: the sum of the values in the field. - - - -For example, the following script accesses the mean value of the ""income"" field, which is computed by a Set Globals node: - -import modeler.api - -" -CAD5F0781542A67A581819B52BB1B6B4BB9ECE74_0,CAD5F0781542A67A581819B52BB1B6B4BB9ECE74," Parameters - -Parameters provide a useful way of passing values at runtime, rather than hard coding them directly in a script. Parameters and their values are defined in the same way as for flows; that is, as entries in the parameters table of a flow, or as parameters on the command line. The Stream class implements a set of functions defined by the ParameterProvider object as shown in the following table. Session provides a getParameters() call which returns an object that defines those functions. - - - -Functions defined by the ParameterProvider object - -Table 1. Functions defined by the ParameterProvider object - - Method Return type Description - - p.parameterIterator() Iterator Returns an iterator of parameter names for this object. - p.getParameterDefinition( parameterName) ParameterDefinition Returns the parameter definition for the parameter with the specified name, or None if no such parameter exists in this provider. The result may be a snapshot of the definition at the time the method was called and need not reflect any subsequent modifications made to the parameter through this provider. - p.getParameterLabel(parameterName) string Returns the label of the named parameter, or None if no such parameter exists. - p.setParameterLabel(parameterName, label) Not applicable Sets the label of the named parameter. - p.getParameterStorage( parameterName) ParameterStorage Returns the storage of the named parameter, or None if no such parameter exists. - p.setParameterStorage( parameterName, storage) Not applicable Sets the storage of the named parameter. - p.getParameterType(parameterName) ParameterType Returns the type of the named parameter, or None if no such parameter exists. - p.setParameterType(parameterName, type) Not applicable Sets the type of the named parameter. - p.getParameterValue(parameterName) Object Returns the value of the named parameter, or None if no such parameter exists. - p.setParameterValue(parameterName, value) Not applicable Sets the value of the named parameter. - - - -" -CAD5F0781542A67A581819B52BB1B6B4BB9ECE74_1,CAD5F0781542A67A581819B52BB1B6B4BB9ECE74,"In the following example, the script aggregates some Telco data to find which region has the lowest average income data. A flow parameter is then set with this region. That flow parameter is then used in a Select node to exclude that region from the data, before a churn model is built on the remainder. - -The example is artificial because the script generates the Select node itself and could therefore have generated the correct value directly into the Select node expression. However, flows are typically pre-built, so setting parameters in this way provides a useful example. - -The first part of this example script creates the flow parameter that will contain the region with the lowest average income. The script also creates the nodes in the aggregation branch and the model building branch, and connects them together. - -import modeler.api - -stream = modeler.script.stream() - - Initialize a flow parameter -stream.setParameterStorage(""LowestRegion"", modeler.api.ParameterStorage.INTEGER) - - First create the aggregation branch to compute the average income per region -sourcenode = stream.findByID(""idGXVBG5FBZH"") - -aggregatenode = modeler.script.stream().createAt(""aggregate"", ""Aggregate"", 294, 142) -aggregatenode.setPropertyValue(""keys"", [""region""]) -aggregatenode.setKeyedPropertyValue(""aggregates"", ""income"", [""Mean""]) - -tablenode = modeler.script.stream().createAt(""table"", ""Table"", 462, 142) - -stream.link(sourcenode, aggregatenode) -stream.link(aggregatenode, tablenode) - -selectnode = stream.createAt(""select"", ""Select"", 210, 232) -selectnode.setPropertyValue(""mode"", ""Discard"") - Reference the flow parameter in the selection -selectnode.setPropertyValue(""condition"", ""'region' = '$P-LowestRegion'"") - -typenode = stream.createAt(""type"", ""Type"", 366, 232) -" -CAD5F0781542A67A581819B52BB1B6B4BB9ECE74_2,CAD5F0781542A67A581819B52BB1B6B4BB9ECE74,"typenode.setKeyedPropertyValue(""direction"", ""Drug"", ""Target"") - -c50node = stream.createAt(""c50"", ""C5.0"", 534, 232) - -stream.link(sourcenode, selectnode) -stream.link(selectnode, typenode) -stream.link(typenode, c50node) - -The example script creates the following flow. - -Figure 1. Flow that results from the example script - -![Flow that results from the example script](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/images/example_stream_session_2.png) -" -E61658D2BA7D0D13E5A6008E28670D1B1F6CB7BB,E61658D2BA7D0D13E5A6008E28670D1B1F6CB7BB," Hidden variables - -You can hide data by creating Private variables. Private variables can be accessed only by the class itself. If you declare names of the form __xxx or __xxx_yyy, that is with two preceding underscores, the Python parser will automatically add the class name to the declared name, creating hidden variables. For example: - -class MyClass: -__attr = 10 private class attribute - -def method1(self): -pass - -def method2(self, p1, p2): -pass - -def __privateMethod(self, text): -self.__text = text private attribute - -Unlike in Java, in Python all references to instance variables must be qualified with self; there's no implied use of this. -" -9EE303CB0D99042537564DCDFC134B592BF0A3FE,9EE303CB0D99042537564DCDFC134B592BF0A3FE," Inheritance - -The ability to inherit from classes is fundamental to object-oriented programming. Python supports both single and multiple inheritance. Single inheritance means that there can be only one superclass. Multiple inheritance means that there can be more than one superclass. - -Inheritance is implemented by subclassing other classes. Any number of Python classes can be superclasses. In the Jython implementation of Python, only one Java class can be directly or indirectly inherited from. It's not required for a superclass to be supplied. - -Any attribute or method in a superclass is also in any subclass and can be used by the class itself, or by any client as long as the attribute or method isn't hidden. Any instance of a subclass can be used wherever an instance of a superclass can be used; this is an example of polymorphism. These features enable reuse and ease of extension. -" -97050C74E0C144E4F16AA808D275A9A472489EFB,97050C74E0C144E4F16AA808D275A9A472489EFB," The scripting language - -With the scripting facility for SPSS Modeler, you can create scripts that operate on the SPSS Modeler user interface, manipulate output objects, and run command syntax. You can also run scripts directly from within SPSS Modeler. - -Scripts in SPSS Modeler are written in the Python scripting language. The Java-based implementation of Python that's used by SPSS Modeler is called Jython. The scripting language consists of the following features: - - - -* A format for referencing nodes, flows, projects, output, and other SPSS Modeler objects -* A set of scripting statements or commands you can use to manipulate these objects -* A scripting expression language for setting the values of variables, parameters, and other objects -* Support for comments, continuations, and blocks of literal text - - - -The following sections of this documentation describe the Python scripting language, the Jython implementation of Python, and the basic syntax for getting started with scripting in SPSS Modeler. Information about specific properties and commands is provided in the sections that follow. -" -1FEFE3C6F1A20841FA1AE6AFAA85CC7FF36778AC_0,1FEFE3C6F1A20841FA1AE6AFAA85CC7FF36778AC," Metadata: Information about data - -Because nodes are connected together in a flow, information about the columns or fields that are available at each node is available. For example, in the SPSS Modeler user interface, this allows you to select which fields to sort or aggregate by. This information is called the data model. - -Scripts can also access the data model by looking at the fields coming into or out of a node. For some nodes, the input and output data models are the same (for example, a Sort node simply reorders the records but doesn't change the data model). Some, such as the Derive node, can add new fields. Others, such as the Filter node, can rename or remove fields. - -In the following example, the script takes a standard IBM® SPSS® Modeler druglearn.str flow, and for each field, builds a model with one of the input fields dropped. It does this by: - - - -1. Accessing the output data model from the Type node. -2. Looping through each field in the output data model. -3. Modifying the Filter node for each input field. -4. Changing the name of the model being built. -5. Running the model build node. - - - -Note: Before running the script in the druglean.str flow, remember to set the scripting language to Python if the flow was created in an old version of IBM SPSS Modeler desktop and its scripting language is set to Legacy). - -import modeler.api - -stream = modeler.script.stream() -filternode = stream.findByType(""filter"", None) -typenode = stream.findByType(""type"", None) -c50node = stream.findByType(""c50"", None) - Always use a custom model name -c50node.setPropertyValue(""use_model_name"", True) - -lastRemoved = None -fields = typenode.getOutputDataModel() -for field in fields: - If this is the target field then ignore it -if field.getModelingRole() == modeler.api.ModelingRole.OUT: -continue - - Re-enable the field that was most recently removed -if lastRemoved != None: -" -1FEFE3C6F1A20841FA1AE6AFAA85CC7FF36778AC_1,1FEFE3C6F1A20841FA1AE6AFAA85CC7FF36778AC,"filternode.setKeyedPropertyValue(""include"", lastRemoved, True) - - Remove the field -lastRemoved = field.getColumnName() -filternode.setKeyedPropertyValue(""include"", lastRemoved, False) - - Set the name of the new model then run the build -c50node.setPropertyValue(""model_name"", ""Exclude "" + lastRemoved) -c50node.run([]) - -The DataModel object provides a number of methods for accessing information about the fields or columns within the data model. These methods are summarized in the following table. - - - -DataModel object methods for accessing information about fields or columns - -Table 1. DataModel object methods for accessing information about fields or columns - - Method Return type Description - - d.getColumnCount() int Returns the number of columns in the data model. - d.columnIterator() Iterator Returns an iterator that returns each column in the ""natural"" insert order. The iterator returns instances of Column. - d.nameIterator() Iterator Returns an iterator that returns the name of each column in the ""natural"" insert order. - d.contains(name) Boolean Returns True if a column with the supplied name exists in this DataModel, False otherwise. - d.getColumn(name) Column Returns the column with the specified name. - d.getColumnGroup(name) ColumnGroup Returns the named column group or None if no such column group exists. - d.getColumnGroupCount() int Returns the number of column groups in this data model. - d.columnGroupIterator() Iterator Returns an iterator that returns each column group in turn. - d.toArray() Column[] Returns the data model as an array of columns. The columns are ordered in their ""natural"" insert order. - - - -Each field (Column object) includes a number of methods for accessing information about the column. The following table shows a selection of these. - - - -" -1FEFE3C6F1A20841FA1AE6AFAA85CC7FF36778AC_2,1FEFE3C6F1A20841FA1AE6AFAA85CC7FF36778AC,"Column object methods for accessing information about the column - -Table 2. Column object methods for accessing information about the column - - Method Return type Description - - c.getColumnName() string Returns the name of the column. - c.getColumnLabel() string Returns the label of the column or an empty string if there is no label associated with the column. - c.getMeasureType() MeasureType Returns the measure type for the column. - c.getStorageType() StorageType Returns the storage type for the column. - c.isMeasureDiscrete() Boolean Returns True if the column is discrete. Columns that are either a set or a flag are considered discrete. - c.isModelOutputColumn() Boolean Returns True if the column is a model output column. - c.isStorageDatetime() Boolean Returns True if the column's storage is a time, date or timestamp value. - c.isStorageNumeric() Boolean Returns True if the column's storage is an integer or a real number. - c.isValidValue(value) Boolean Returns True if the specified value is valid for this storage, and valid when the valid column values are known. - c.getModelingRole() ModelingRole Returns the modeling role for the column. - c.getSetValues() Object[] Returns an array of valid values for the column, or None if either the values are not known or the column is not a set. - c.getValueLabel(value) string Returns the label for the value in the column, or an empty string if there is no label associated with the value. - c.getFalseFlag() Object Returns the ""false"" indicator value for the column, or None if either the value is not known or the column is not a flag. - c.getTrueFlag() Object Returns the ""true"" indicator value for the column, or None if either the value is not known or the column is not a flag. -" -1FEFE3C6F1A20841FA1AE6AFAA85CC7FF36778AC_3,1FEFE3C6F1A20841FA1AE6AFAA85CC7FF36778AC," c.getLowerBound() Object Returns the lower bound value for the values in the column, or None if either the value is not known or the column is not continuous. - c.getUpperBound() Object Returns the upper bound value for the values in the column, or None if either the value is not known or the column is not continuous. - - - -Note that most of the methods that access information about a column have equivalent methods defined on the DataModel object itself. For example, the two following statements are equivalent: - -" -83A5FC83AA65717942A3437217F2114454552144,83A5FC83AA65717942A3437217F2114454552144," Creating nodes - -Flows provide a number of ways to create nodes. These methods are summarized in the following table. - - - -Methods for creating nodes - -Table 1. Methods for creating nodes - - Method Return type Description - - s.create(nodeType, name) Node Creates a node of the specified type and adds it to the specified flow. - s.createAt(nodeType, name, x, y) Node Creates a node of the specified type and adds it to the specified flow at the specified location. If either x < 0 or y < 0, the location is not set. - s.createModelApplier(modelOutput, name) Node Creates a model applier node that's derived from the supplied model output object. - - - -For example, you can use the following script to create a new Type node in a flow: - -stream = modeler.script.stream() -" -D9304450E79DC05B5ECC4FE98D48FECEF76A852E_0,D9304450E79DC05B5ECC4FE98D48FECEF76A852E," Finding nodes - -Flows provide a number of ways for locating an existing node. These methods are summarized in the following table. - - - -Methods for locating an existing node - -Table 1. Methods for locating an existing node - - Method Return type Description - - s.findAll(type, label) Collection Returns a list of all nodes with the specified type and label. Either the type or label can be None, in which case the other parameter is used. - s.findAll(filter, recursive) Collection Returns a collection of all nodes that are accepted by the specified filter. If the recursive flag is True, any SuperNodes within the specified flow are also searched. - s.findByID(id) Node Returns the node with the supplied ID or None if no such node exists. The search is limited to the current stream. - s.findByType(type, label) Node Returns the node with the supplied type, label, or both. Either the type or name can be None, in which case the other parameter is used. If multiple nodes result in a match, then an arbitrary one is chosen and returned. If no nodes result in a match, then the return value is None. - s.findDownstream(fromNodes) Collection Searches from the supplied list of nodes and returns the set of nodes downstream of the supplied nodes. The returned list includes the originally supplied nodes. - s.findUpstream(fromNodes) Collection Searches from the supplied list of nodes and returns the set of nodes upstream of the supplied nodes. The returned list includes the originally supplied nodes. - s.findProcessorForID(String id, boolean recursive) Node Returns the node with the supplied ID or None if no such node exists. If the recursive flag is true, then any composite nodes within this diagram are also searched. - - - -As an example, if a flow contains a single Filter node that the script needs to access, the Filter node can be found by using the following script: - -stream = modeler.script.stream() -node = stream.findByType(""filter"", None) -... - -" -D9304450E79DC05B5ECC4FE98D48FECEF76A852E_1,D9304450E79DC05B5ECC4FE98D48FECEF76A852E,"Alternatively, you can use the ID of a node. For example: - -stream = modeler.script.stream() -node = stream.findByID(""id49CVL4GHVV8"") the Derive node ID -node.setPropertyValue(""mode"", ""Multiple"") -node.setPropertyValue(""name_extension"", ""new_derive"") - -To obtain the ID for any node in a flow, click the Scripting icon on the toolbar, then select the desired node in your flow and click Insert selected node ID.![Node ID](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/spss_node_id.png) -" -0CB42F245DF436AF2BCCB54B612786CA493B917B,0CB42F245DF436AF2BCCB54B612786CA493B917B," Importing, replacing, and deleting nodes - -Along with creating and connecting nodes, it's often necessary to replace and delete nodes from a flow. The methods that are available for importing, replacing, and deleting nodes are summarized in the following table. - - - -Methods for importing, replacing, and deleting nodes - -Table 1. Methods for importing, replacing, and deleting nodes - - Method Return type Description - - s.replace(originalNode, replacementNode, discardOriginal) Not applicable Replaces the specified node from the specified flow. Both the original node and replacement node must be owned by the specified flow. - s.insert(source, nodes, newIDs) List Inserts copies of the nodes in the supplied list. It's assumed that all nodes in the supplied list are contained within the specified flow. The newIDs flag indicates whether new IDs should be generated for each node, or whether the existing ID should be copied and used. It's assumed that all nodes in a flow have a unique ID, so this flag must be set to True if the source flow is the same as the specified flow. The method returns the list of newly inserted nodes, where the order of the nodes is undefined (that is, the ordering is not necessarily the same as the order of the nodes in the input list). - s.delete(node) Not applicable Deletes the specified node from the specified flow. The node must be owned by the specified flow. -" -1243D4C8499CC9BE45CD9C1F6EB34254F1B9B4D7_0,1243D4C8499CC9BE45CD9C1F6EB34254F1B9B4D7," Getting information about nodes - -Nodes fall into a number of different categories such as data import and export nodes, model building nodes, and other types of nodes. Every node provides a number of methods that can be used to find out information about the node. - -The methods that can be used to obtain the ID, name, and label of a node are summarized in the following table. - - - -Methods to obtain the ID, name, and label of a node - -Table 1. Methods to obtain the ID, name, and label of a node - - Method Return type Description - - n.getLabel() string Returns the display label of the specified node. The label is the value of the property custom_name only if that property is a non-empty string and the use_custom_name property is not set; otherwise, the label is the value of getName(). - n.setLabel(label) Not applicable Sets the display label of the specified node. If the new label is a non-empty string it is assigned to the property custom_name, and False is assigned to the property use_custom_name so that the specified label takes precedence; otherwise, an empty string is assigned to the property custom_name and True is assigned to the property use_custom_name. - n.getName() string Returns the name of the specified node. - n.getID() string Returns the ID of the specified node. A new ID is created each time a new node is created. The ID is persisted with the node when it's saved as part of a flow so that when the flow is opened, the node IDs are preserved. However, if a saved node is inserted into a flow, the inserted node is considered to be a new object and will be allocated a new ID. - - - -Methods that can be used to obtain other information about a node are summarized in the following table. - - - -Methods for obtaining information about a node - -Table 2. Methods for obtaining information about a node - - Method Return type Description - - n.getTypeName() string Returns the scripting name of this node. This is the same name that could be used to create a new instance of this node. -" -1243D4C8499CC9BE45CD9C1F6EB34254F1B9B4D7_1,1243D4C8499CC9BE45CD9C1F6EB34254F1B9B4D7," n.isInitial() Boolean Returns True if this is an initial node (one that occurs at the start of a flow). - n.isInline() Boolean Returns True if this is an in-line node (one that occurs mid-flow). - n.isTerminal() Boolean Returns True if this is a terminal node (one that occurs at the end of a flow). - n.getXPosition() int Returns the x position offset of the node in the flow. - n.getYPosition() int Returns the y position offset of the node in the flow. - n.setXYPosition(x, y) Not applicable Sets the position of the node in the flow. - n.setPositionBetween(source, target) Not applicable Sets the position of the node in the flow so that it's positioned between the supplied nodes. - n.isCacheEnabled() Boolean Returns True if the cache is enabled; returns False otherwise. - n.setCacheEnabled(val) Not applicable Enables or disables the cache for this object. If the cache is full and the caching becomes disabled, the cache is flushed. -" -AB42FF6B754A2E29FCB56B0137EEDDF17F8EE271_0,AB42FF6B754A2E29FCB56B0137EEDDF17F8EE271," Linking and unlinking nodes - -When you add a new node to a flow, you must connect it to a sequence of nodes before it can be used. Flows provide a number of methods for linking and unlinking nodes. These methods are summarized in the following table. - - - -Methods for linking and unlinking nodes - -Table 1. Methods for linking and unlinking nodes - - Method Return type Description - - s.link(source, target) Not applicable Creates a new link between the source and the target nodes. - s.link(source, targets) Not applicable Creates new links between the source node and each target node in the supplied list. - s.linkBetween(inserted, source, target) Not applicable Connects a node between two other node instances (the source and target nodes) and sets the position of the inserted node to be between them. Any direct link between the source and target nodes is removed first. - s.linkPath(path) Not applicable Creates a new path between node instances. The first node is linked to the second, the second is linked to the third, and so on. - s.unlink(source, target) Not applicable Removes any direct link between the source and the target nodes. - s.unlink(source, targets) Not applicable Removes any direct links between the source node and each object in the targets list. - s.unlinkPath(path) Not applicable Removes any path that exists between node instances. - s.disconnect(node) Not applicable Removes any links between the supplied node and any other nodes in the specified flow. - s.isValidLink(source, target) boolean Returns True if it would be valid to create a link between the specified source and target nodes. This method checks that both objects belong to the specified flow, that the source node can supply a link and the target node can receive a link, and that creating such a link will not cause a circularity in the flow. - - - -The example script that follows performs these five tasks: - - - -1. Creates a Data Asset node, a Filter node, and a Table output node. -2. Connects the nodes together. -" -AB42FF6B754A2E29FCB56B0137EEDDF17F8EE271_1,AB42FF6B754A2E29FCB56B0137EEDDF17F8EE271,"3. Filters the field ""Drug"" from the resulting output. -4. Runs the Table node. - - - -stream = modeler.script.stream() -sourcenode = stream.findByID(""idGXVBG5FBZH"") -filternode = stream.createAt(""filter"", ""Filter"", 192, 64) -tablenode = stream.createAt(""table"", ""Table"", 288, 64) -stream.link(sourcenode, filternode) -stream.link(filternode, tablenode) -filternode.setKeyedPropertyValue(""include"", ""Drug"", False) -" -F0EF147DBC0554F53B331E7B6D5715D0269FFBA8,F0EF147DBC0554F53B331E7B6D5715D0269FFBA8," Referencing existing nodes - -A flow is often pre-built with some parameters that must be modified before the flow runs. Modifying these parameters involves the following tasks: - - - -" -5EE63FCC911BA90930D413B58E1310EFE0E24243,5EE63FCC911BA90930D413B58E1310EFE0E24243," Traversing through nodes in a flow - -A common requirement is to identify nodes that are either upstream or downstream of a particular node. The flow provides a number of methods that can be used to identify these nodes. These methods are summarized in the following table. - - - -Methods to identify upstream and downstream nodes - -Table 1. Methods to identify upstream and downstream nodes - - Method Return type Description - - s.iterator() Iterator Returns an iterator over the node objects that are contained in the specified flow. If the flow is modified between calls of the next() function, the behavior of the iterator is undefined. - s.predecessorAt(node, index) Node Returns the specified immediate predecessor of the supplied node or None if the index is out of bounds. - s.predecessorCount(node) int Returns the number of immediate predecessors of the supplied node. - s.predecessors(node) List Returns the immediate predecessors of the supplied node. - s.successorAt(node, index) Node Returns the specified immediate successor of the supplied node or None if the index is out of bounds. -" -B6EC6454711B4946DBC663324DC478953723B1DD,B6EC6454711B4946DBC663324DC478953723B1DD," Creating nodes and modifying flows - -In some situations, you might want to add new nodes to existing flows. Adding nodes to existing flows typically involves the following tasks: - - - -" -9E77548AF396E9E9474371705BCFFF55684C5760,9E77548AF396E9E9474371705BCFFF55684C5760," Object-oriented programming - -Object-oriented programming is based on the notion of creating a model of the target problem in your programs. Object-oriented programming reduces programming errors and promotes the reuse of code. Python is an object-oriented language. Objects defined in Python have the following features: - - - -* Identity. Each object must be distinct, and this must be testable. The is and is not tests exist for this purpose. -* State. Each object must be able to store state. Attributes, such as fields and instance variables, exist for this purpose. -* Behavior. Each object must be able to manipulate its state. Methods exist for this purpose. - - - -Python includes the following features for supporting object-oriented programming: - - - -* Class-based object creation. Classes are templates for the creation of objects. Objects are data structures with associated behavior. -" -381D767DECD07EF388611FD22C3F08FB89BA73EC_0,381D767DECD07EF388611FD22C3F08FB89BA73EC," The scripting context - -The modeler.script module provides the context in which a script runs. The module is automatically imported into an SPSS® Modeler script at run time. The module defines four functions that provide a script with access to its execution environment: - - - -* The session() function returns the session for the script. The session defines information such as the locale and the SPSS Modeler backend (either a local process or a networked SPSS Modeler Server) that's being used to run any flows. -* The stream() function can be used with flow and SuperNode scripts. This function returns the flow that owns either the flow script or the SuperNode script that's being run. -* The diagram() function can be used with SuperNode scripts. This function returns the diagram within the SuperNode. For other script types, this function returns the same as the stream() function. -* The supernode() function can be used with SuperNode scripts. This function returns the SuperNode that owns the script that's being run. - - - -The four functions and their outputs are summarized in the following table. - - - -Summary of modeler.script functions - -Table 1. Summary of modeler.script functions - - Script type session() stream() diagram() supernode() - - Standalone Returns a session Returns the current managed flow at the time the script was invoked (for example, the flow passed via the batch mode -stream option), or None. Same as for stream() Not applicable - Flow Returns a session Returns a flow Same as for stream() Not applicable - SuperNode Returns a session Returns a flow Returns a SuperNode flow Returns a SuperNode - - - -The modeler.script module also defines a way of terminating the script with an exit code. The exit(exit-code) function stops the script from running and returns the supplied integer exit code. - -One of the methods that's defined for a flow is runAll(List). This method runs all executable nodes. Any models or outputs that are generated by running the nodes are added to the supplied list. - -" -381D767DECD07EF388611FD22C3F08FB89BA73EC_1,381D767DECD07EF388611FD22C3F08FB89BA73EC,"It's common for a flow run to generate outputs such as models, graphs, and other output. To capture this output, a script can supply a variable that's initialized to a list. For example: - -stream = modeler.script.stream() -results = [] -stream.runAll(results) - -When execution is complete, any objects that are generated by the execution can be accessed from the results list. -" -65998CB8747B70477477179E023332FD410E72D6,65998CB8747B70477477179E023332FD410E72D6," Scripting in SPSS Modeler -" -B416F3605ADF246170E1B462EE0F2CFCDF5E591B,B416F3605ADF246170E1B462EE0F2CFCDF5E591B," Setting properties - -Nodes, flows, models, and outputs all have properties that can be accessed and, in most cases, set. Properties are typically used to modify the behavior or appearance of the object. The methods that are available for accessing and setting object properties are summarized in the following table. - - - -Methods for accessing and setting object properties - -Table 1. Methods for accessing and setting object properties - - Method Return type Description - - p.getPropertyValue(propertyName) Object Returns the value of the named property or None if no such property exists. - p.setPropertyValue(propertyName, value) Not applicable Sets the value of the named property. - p.setPropertyValues(properties) Not applicable Sets the values of the named properties. Each entry in the properties map consists of a key that represents the property name and the value that should be assigned to that property. - p.getKeyedPropertyValue( propertyName, keyName) Object Returns the value of the named property and associated key or None if no such property or key exists. - p.setKeyedPropertyValue( propertyName, keyName, value) Not applicable Sets the value of the named property and key. - - - -For example, the following script sets the value of a Derive node for a flow: - -stream = modeler.script.stream() -node = stream.findByType(""derive"", None) -node.setPropertyValue(""name_extension"", ""new_derive"") - -Alternatively, you might want to filter a field from a Filter node. In this case, the value is also keyed on the field name. For example: - -stream = modeler.script.stream() - Locate the filter node ... -node = stream.findByType(""filter"", None) -" -542F90CA456DCCC3D79DBF6DC9E8A6755B3BA69E,542F90CA456DCCC3D79DBF6DC9E8A6755B3BA69E," Running a flow - -The following example runs all executable nodes in the flow, and is the simplest type of flow script: - -modeler.script.stream().runAll(None) - -The following example also runs all executable nodes in the flow: - -stream = modeler.script.stream() -stream.runAll(None) - -In this example, the flow is stored in a variable called stream. Storing the flow in a variable is useful because a script is typically used to modify either the flow or the nodes within a flow. Creating a variable that stores the flow results in a more concise script. -" -D1CDE4FF34352A6E5CDC9914FD26CF72574E2D59,D1CDE4FF34352A6E5CDC9914FD26CF72574E2D59," Flows - -A flow is the main IBM® SPSS® Modeler document type. It can be saved, loaded, edited and executed. Flows can also have parameters, global values, a script, and other information associated with them. -" -6524DFDEABF32BAE384ACB9BB21637ADE3B4AC4F,6524DFDEABF32BAE384ACB9BB21637ADE3B4AC4F," Flows, SuperNode streams, and diagrams - -Most of the time, the term flow means the same thing, regardless of whether it's a flow that's loaded from a file or used within a SuperNode. It generally means a collection of nodes that are connected together and can be executed. In scripting, however, not all operations are supported in all places. So as a script author, you should be aware of which flow variant they're using. -" -A4799F6BDEA1B1508528FC647DAD5D1B2EF777AA,A4799F6BDEA1B1508528FC647DAD5D1B2EF777AA," SuperNode flows - -A SuperNode flow is the type of flow used within a SuperNode. Like a normal flow, it contains nodes that are linked together. SuperNode flows differ from normal flows in various ways: - - - -" -D6DB1FBF1B0A11FD3423B6F057182019496FF3F5,D6DB1FBF1B0A11FD3423B6F057182019496FF3F5," Python scripting - -This guide to the Python scripting language is an introduction to the components that you're most likely to use when scripting in SPSS Modeler, including concepts and programming basics. - -This provides you with enough knowledge to start developing your own Python scripts to use in SPSS Modeler. -" -C6B9BD6294C9A3EF6CD7E45E1B3765C061D92CC3_0,C6B9BD6294C9A3EF6CD7E45E1B3765C061D92CC3," Using non-ASCII characters - -To use non-ASCII characters, Python requires explicit encoding and decoding of strings into Unicode. In SPSS Modeler, Python scripts are assumed to be encoded in UTF-8, which is a standard Unicode encoding that supports non-ASCII characters. The following script will compile because the Python compiler has been set to UTF-8 by SPSS Modeler. - -![Scripting example showing Japanese characters. The node that's created has an incorrect label.](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/images/japanese_example1.jpg) - -However, the resulting node has an incorrect label. - -Figure 1. Node label containing non-ASCII characters, displayed incorrectly - -![Node label containing non-ASCII characters, displayed incorrectly](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/images/incorrect_node_label.jpg) - -The label is incorrect because the string literal itself has been converted to an ASCII string by Python. - -Python allows Unicode string literals to be specified by adding a u character prefix before the string literal: - -![Scripting example showing Japanese characters. The node that's created has the correct label.](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/images/japanese_example2.jpg) - -This will create a Unicode string and the label will be appear correctly. - -Figure 2. Node label containing non-ASCII characters, displayed correctly - -" -C6B9BD6294C9A3EF6CD7E45E1B3765C061D92CC3_1,C6B9BD6294C9A3EF6CD7E45E1B3765C061D92CC3,"![Node label containing non-ASCII characters, displayed correctly](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/images/correct_node_label.jpg) - -Using Python and Unicode is a large topic that's beyond the scope of this document. Many books and online resources are available that cover this topic in great detail. -" -2413C64687E434B4B2095163A5106C0C62AA3F59,2413C64687E434B4B2095163A5106C0C62AA3F59," Blocks of code - -Blocks of code are groups of statements you can use where single statements are expected. - -Blocks of code can follow any of the following statements: if, elif, else, for, while, try, except, def, and class. These statements introduce the block of code with the colon character (:). For example: - -if x == 1: -y = 2 -z = 3 -elif: -y = 4 -z = 5 - -Use indentation to delimit code blocks (rather than the curly braces used in Java). All lines in a block must be indented to the same position. This is because a change in the indentation indicates the end of a code block. It's common to indent by four spaces per level. We recommend you use spaces to indent the lines, rather than tabs. Spaces and tabs must not be mixed. The lines in the outermost block of a module must start at column one, or a SyntaxError will occur. - -The statements that make up a code block (and follow the colon) can also be on a single line, separated by semicolons. For example: - -if x == 1: y = 2; z = 3; -" -20D6B2732BE17C12226F186559FBEA647799F3B8,20D6B2732BE17C12226F186559FBEA647799F3B8," Examples - -The print keyword prints the arguments immediately following it. If the statement is followed by a comma, a new line isn't included in the output. For example: - -print ""This demonstrates the use of a"", -print "" comma at the end of a print statement."" - -This will result in the following output: - -This demonstrates the use of a comma at the end of a print statement. - -The for statement iterates through a block of code. For example: - -mylist1 = [""one"", ""two"", ""three""] -for lv in mylist1: -print lv -continue - -In this example, three strings are assigned to the list mylist1. The elements of the list are then printed, with one element of each line. This results in the following output: - -one -two -three - -In this example, the iterator lv takes the value of each element in the list mylist1 in turn as the for loop implements the code block for each element. An iterator can be any valid identifier of any length. - -The if statement is a conditional statement. It evaluates the condition and returns either true or false, depending on the result of the evaluation. For example: - -mylist1 = [""one"", ""two"", ""three""] -for lv in mylist1: -if lv == ""two"" -print ""The value of lv is "", lv -else -print ""The value of lv is not two, but "", lv -continue - -In this example, the value of the iterator lv is evaluated. If the value of lv is two, a different string is returned to the string that's returned if the value of lv is not two. This results in the following output: - -The value of lv is not two, but one -" -03C28B0A536906CA3597B4D382759BD791D0CFEC,03C28B0A536906CA3597B4D382759BD791D0CFEC," Identifiers - -Identifiers are used to name variables, functions, classes, and keywords. - -Identifiers can be any length, but must start with either an alphabetical character of uppercase or lowercase, or the underscore character (_). Names that start with an underscore are generally reserved for internal or private names. After the first character, the identifier can contain any number and combination of alphabetical characters, numbers from 0-9, and the underscore character. - -There are some reserved words in Jython that can't be used to name variables, functions, or classes. They fall under the following categories: - - - -* Statement introducers:assert, break, class, continue, def, del, elif, else, except, exec, finally, for, from, global, if, import, pass, print, raise, return, try, and while -* Parameter introducers:as, import, and in -* Operators:and, in, is, lambda, not, and or - - - -Improper keyword use generally results in a SyntaxError. -" -659E43BA12550AA1E885BAEC945B7B1B25FD18E2,659E43BA12550AA1E885BAEC945B7B1B25FD18E2," Lists - -Lists are sequences of elements. A list can contain any number of elements, and the elements of the list can be any type of object. Lists can also be thought of as arrays. The number of elements in a list can increase or decrease as elements are added, removed, or replaced. -" -F837E34ED0AD4739783010D9FFD3684C37FD465C_0,F837E34ED0AD4739783010D9FFD3684C37FD465C," Mathematical methods - -From the math module you can access useful mathematical methods. Some of these methods are listed in the following table. Unless specified otherwise, all values are returned as floats. - - - -Mathematical methods - -Table 1. Mathematical methods - - Method Usage - - math.ceil(x) Return the ceiling of x as a float, that is the smallest integer greater than or equal to x - math.copysign(x, y) Return x with the sign of y. copysign(1, -0.0) returns -1 - math.fabs(x) Return the absolute value of x - math.factorial(x) Return x factorial. If x is negative or not an integer, a ValueError is raised. - math.floor(x) Return the floor of x as a float, that is the largest integer less than or equal to x - math.frexp(x) Return the mantissa (m) and exponent (e) of x as the pair (m, e). m is a float and e is an integer, such that x == m * 2e exactly. If x is zero, returns (0.0, 0), otherwise 0.5 <= abs(m) < 1. - math.fsum(iterable) Return an accurate floating point sum of values in iterable - math.isinf(x) Check if the float x is positive or negative infinitive - math.isnan(x) Check if the float x is NaN (not a number) - math.ldexp(x, i) Return x * (2i). This is essentially the inverse of the function frexp. - math.modf(x) Return the fractional and integer parts of x. Both results carry the sign of x and are floats. - math.trunc(x) Return the Real value x, that has been truncated to an Integral. - math.exp(x) Return ex - math.log(x[, base]) Return the logarithm of x to the given value of base. If base is not specified, the natural logarithm of x is returned. - math.log1p(x) Return the natural logarithm of 1+x (base e) - math.log10(x) Return the base-10 logarithm of x -" -F837E34ED0AD4739783010D9FFD3684C37FD465C_1,F837E34ED0AD4739783010D9FFD3684C37FD465C," math.pow(x, y) Return x raised to the power y. pow(1.0, x) and pow(x, 0.0) always return 1, even when x is zero or NaN. - math.sqrt(x) Return the square root of x - - - -Along with the mathematical functions, there are also some useful trigonometric methods. These methods are listed in the following table. - - - -Trigonometric methods - -Table 2. Trigonometric methods - - Method Usage - - math.acos(x) Return the arc cosine of x in radians - math.asin(x) Return the arc sine of x in radians - math.atan(x) Return the arc tangent of x in radians - math.atan2(y, x) Return atan(y / x) in radians. - math.cos(x) Return the cosine of x in radians. - math.hypot(x, y) Return the Euclidean norm sqrt(xx + yy). This is the length of the vector from the origin to the point (x, y). - math.sin(x) Return the sine of x in radians - math.tan(x) Return the tangent of x in radians - math.degrees(x) Convert angle x from radians to degrees - math.radians(x) Convert angle x from degrees to radians - math.acosh(x) Return the inverse hyperbolic cosine of x - math.asinh(x) Return the inverse hyperbolic sine of x - math.atanh(x) Return the inverse hyperbolic tangent of x - math.cosh(x) Return the hyperbolic cosine of x - math.sinh(x) Return the hyperbolic cosine of x - math.tanh(x) Return the hyperbolic tangent of x - - - -There are also two mathematical constants. The value of math.pi is the mathematical constant pi. The value of math.e is the mathematical constant e. -" -48CCA78CEB92570BCE08F4E1A5677E8CD7936095,48CCA78CEB92570BCE08F4E1A5677E8CD7936095," Operations - -Use an equals sign (=) to assign values. - -For example, to assign the value 3 to a variable called x, you would use the following statement: - -x = 3 - -You can also use the equals sign to assign string type data to a variable. For example, to assign the value a string value to the variable y, you would use the following statement: - -y = ""a string value"" - -The following table lists some commonly used comparison and numeric operations, and their descriptions. - - - -Common comparison and numeric operations - -Table 1. Common comparison and numeric operations - - Operation Description - - x < y Is x less than y? - x > y Is x greater than y? - x <= y Is x less than or equal to y? - x >= y Is x greater than or equal to y? - x == y Is x equal to y? - x != y Is x not equal to y? - x <> y Is x not equal to y? - x + y Add y to x - x - y Subtract y from x - x * y Multiply x by y -" -622526F6C171CED140394F3DD707B612778B661E,622526F6C171CED140394F3DD707B612778B661E," Passing arguments to a script - -Passing arguments to a script is useful because a script can be used repeatedly without modification. - -The arguments you pass on the command line are passed as values in the list sys.argv. You can use the len(sys.argv) command to obtain the number of values passed. For example: - -import sys -print ""test1"" -print sys.argv[0] -print sys.argv[1] -print len(sys.argv) - -In this example, the import command imports the entire sys class so that you can use the existing methods for this class, such as argv. - -The script in this example can be invoked using the following line: - -/u/mjloos/test1 mike don - -The result is the following output: - -/u/mjloos/test1 mike don -test1 -mike -" -03A70C271775C3B15541B86E53E467844EF87296,03A70C271775C3B15541B86E53E467844EF87296," Remarks - -Remarks are comments that are introduced by the pound (or hash) sign (). All text that follows the pound sign on the same line is considered part of the remark and is ignored. A remark can start in any column. - -The following example demonstrates the use of remarks: - -" -9F27A4650B0B0BF36223937D0CF60E460B66A723,9F27A4650B0B0BF36223937D0CF60E460B66A723," Statement syntax - -The statement syntax for Python is very simple. - -In general, each source line is a single statement. Except for expression and assignment statements, each statement is introduced by a keyword name, such as if or for. Blank lines or remark lines can be inserted anywhere between any statements in the code. If there's more than one statement on a line, each statement must be separated by a semicolon (;). - -Very long statements can continue on more than one line. In this case, the statement that is to continue on to the next line must end with a backslash (). For example: - -x = ""A loooooooooooooooooooong string"" + -""another looooooooooooooooooong string"" - -When you enclose a structure by parentheses (()), brackets ([]), or curly braces ({}), the statement can be continued on a new line after any comma, without having to insert a backslash. For example: - -" -14F850B810E969CE2646D5641300FB407A6C49C5,14F850B810E969CE2646D5641300FB407A6C49C5," Strings - -A string is an immutable sequence of characters that's treated as a value. Strings support all of the immutable sequence functions and operators that result in a new string. For example, ""abcdef""[1:4] results in the output ""bcd"". - -In Python, characters are represented by strings of length one. - -Strings literals are defined by the use of single or triple quoting. Strings that are defined using single quotes can't span lines, while strings that are defined using triple quotes can. You can enclose a string in single quotes (') or double quotes (""). A quoting character may contain the other quoting character un-escaped or the quoting character escaped, that's preceded by the backslash () character. -" -398A23291331968098B47496D504743991855A61_0,398A23291331968098B47496D504743991855A61," kdemodel properties - -![KDE Modeling node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/pythonkdenodeicon.png)Kernel Density Estimation (KDE)© uses the Ball Tree or KD Tree algorithms for efficient queries, and combines concepts from unsupervised learning, feature engineering, and data modeling. Neighbor-based approaches such as KDE are some of the most popular and useful density estimation techniques. The KDE Modeling and KDE Simulation nodes in SPSS Modeler expose the core features and commonly used parameters of the KDE library. The nodes are implemented in Python. - - - -kdemodel properties - -Table 1. kdemodel properties - - kdemodel properties Data type Property description - - custom_fields boolean This option tells the node to use field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify the following fields as required. - inputs field List of the field names for input. - bandwidth double Default is 1. - kernel string The kernel to use: gaussian, tophat, epanechnikov, exponential, linear, or cosine. Default is gaussian. - algorithm string The tree algorithm to use: kd_tree, ball_tree, or auto. Default is auto. - metric string The metric to use when calculating distance. For the kd_tree algorithm, choose from: Euclidean, Chebyshev, Cityblock, Minkowski, Manhattan, Infinity, P, L2, or L1. For the ball_tree algorithm, choose from: Euclidian, Braycurtis, Chebyshev, Canberra, Cityblock, Dice, Hamming, Infinity, Jaccard, L1, L2, Minkowski, Matching, Manhattan, P, Rogersanimoto, Russellrao, Sokalmichener, Sokalsneath, or Kulsinski. Default is Euclidean. - atol float The desired absolute tolerance of the result. A larger tolerance will generally lead to faster execution. Default is 0.0. -" -398A23291331968098B47496D504743991855A61_1,398A23291331968098B47496D504743991855A61," rtol float The desired relative tolerance of the result. A larger tolerance will generally lead to faster execution. Default is 1E-8. - breadth_first boolean Set to True to use a breadth-first approach. Set to False to use a depth-first approach. Default is True. - leaf_size integer The leaf size of the underlying tree. Default is 40. Changing this value may significantly impact the performance. - p_value double Specify the P Value to use if you're using Minkowski for the metric. Default is 1.5. - custom_name -" -0EA3470872BF545059B23B040AB1EB393630A29D_0,0EA3470872BF545059B23B040AB1EB393630A29D," kdeexport properties - -![KDE Simulation node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/pythonkdenodeicon.png)Kernel Density Estimation (KDE)© uses the Ball Tree or KD Tree algorithms for efficient queries, and combines concepts from unsupervised learning, feature engineering, and data modeling. Neighbor-based approaches such as KDE are some of the most popular and useful density estimation techniques. The KDE Modeling and KDE Simulation nodes in SPSS Modeler expose the core features and commonly used parameters of the KDE library. The nodes are implemented in Python. - - - -kdeexport properties - -Table 1. kdeexport properties - - kdeexport properties Data type Property description - - custom_fields boolean This option tells the node to use field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify the fields as required. - inputs field List of the field names for input. - bandwidth double Default is 1. - kernel string The kernel to use: gaussian or tophat. Default is gaussian. - algorithm string The tree algorithm to use: kd_tree, ball_tree, or auto. Default is auto. - metric string The metric to use when calculating distance. For the kd_tree algorithm, choose from: Euclidean, Chebyshev, Cityblock, Minkowski, Manhattan, Infinity, P, L2, or L1. For the ball_tree algorithm, choose from: Euclidian, Braycurtis, Chebyshev, Canberra, Cityblock, Dice, Hamming, Infinity, Jaccard, L1, L2, Minkowski, Matching, Manhattan, P, Rogersanimoto, Russellrao, Sokalmichener, Sokalsneath, or Kulsinski. Default is Euclidean. - atol float The desired absolute tolerance of the result. A larger tolerance will generally lead to faster execution. Default is 0.0. -" -0EA3470872BF545059B23B040AB1EB393630A29D_1,0EA3470872BF545059B23B040AB1EB393630A29D," rtol float The desired relative tolerance of the result. A larger tolerance will generally lead to faster execution. Default is 1E-8. - breadth_first boolean Set to True to use a breadth-first approach. Set to False to use a depth-first approach. Default is True. -" -9BEA57D80C215D963CB0C54046136FB3E88C7D5C,9BEA57D80C215D963CB0C54046136FB3E88C7D5C," kdeapply properties - -You can use the KDE Modeling node to generate a KDE model nugget. The scripting name of this model nugget is kdeapply. For information on scripting the modeling node itself, see [kdemodel properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/kdemodelnodeslots.htmlkdemodelnodeslots). - - - -kdeapply properties - -Table 1. kdeapply properties - - kdeapply properties Data type Property description - - out_log_density boolean Specify True or False to include or exclude the log density value in the output. Default is False. -" -720712D40BFDEF5974C7C025A6AC0D0649124B79_0,720712D40BFDEF5974C7C025A6AC0D0649124B79," kmeansasnode properties - -![K-Means-AS node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/sparkkmeansasnodeicon.png)K-Means is one of the most commonly used clustering algorithms. It clusters data points into a predefined number of clusters. The K-Means-AS node in SPSS Modeler is implemented in Spark. For details about K-Means algorithms, see [https://spark.apache.org/docs/2.2.0/ml-clustering.html](https://spark.apache.org/docs/2.2.0/ml-clustering.html). Note that the K-Means-AS node performs one-hot encoding automatically for categorical variables. - - - -kmeansasnode properties - -Table 1. kmeansasnode properties - - kmeansasnode Properties Values Property description - - roleUse string Specify predefined to use predefined roles, or custom to use custom field assignments. Default is predefined. - autoModel Boolean Specify true to use the default name ($S-prediction) for the new generated scoring field, or false to use a custom name. Default is true. - features field List of the field names for input when the roleUse property is set to custom. - name string The name of the new generated scoring field when the autoModel property is set to false. - clustersNum integer The number of clusters to create. Default is 5. - initMode string The initialization algorithm. Possible values are k-means or random. Default is k-means . - initSteps integer The number of initialization steps when initMode is set to k-means . Default is 2. - advancedSettings Boolean Specify true to make the following four properties available. Default is false. - maxIteration integer Maximum number of iterations for clustering. Default is 20. -" -720712D40BFDEF5974C7C025A6AC0D0649124B79_1,720712D40BFDEF5974C7C025A6AC0D0649124B79," tolerance string The tolerance to stop the iterations. Possible settings are 1.0E-1, 1.0E-2, ..., 1.0E-6. Default is 1.0E-4. - setSeed Boolean Specify true to use a custom random seed. Default is false. -" -6F35B89192B6C9A233B859CF66FCC435F3F9E650,6F35B89192B6C9A233B859CF66FCC435F3F9E650," kmeansnode properties - -![K-Means node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/kmeansnodeicon.png)The K-Means node clusters the data set into distinct groups (or clusters). The method defines a fixed number of clusters, iteratively assigns records to clusters, and adjusts the cluster centers until further refinement can no longer improve the model. Instead of trying to predict an outcome, k-means uses a process known as unsupervised learning to uncover patterns in the set of input fields. - - - -kmeansnode properties - -Table 1. kmeansnode properties - - kmeansnode Properties Values Property description - - inputs [field1 ... fieldN] K-means models perform cluster analysis on a set of input fields but do not use a target field. Weight and frequency fields are not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information. - num_clusters number - gen_distance flag - cluster_label StringNumber - label_prefix string - mode SimpleExpert - stop_on DefaultCustom - max_iterations number - tolerance number -" -57D441EF305442BCDBBE48B980B87D47B825FFF9,57D441EF305442BCDBBE48B980B87D47B825FFF9," applykmeansnode properties - -You can use K-Means modeling nodes to generate a K-Means model nugget. The scripting name of this model nugget is applykmeansnode. No other properties exist for this model nugget. For more information on scripting the modeling node itself, see [kmeansnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/kmeansnodeslots.htmlkmeansnodeslots). -" -CC60FEBF8E5D1907CE0CCF3868CD9E4B494AA1BF,CC60FEBF8E5D1907CE0CCF3868CD9E4B494AA1BF," knnnode properties - -![KNN node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/knn_nodeicon.png)The k-Nearest Neighbor (KNN) node associates a new case with the category or value of the k objects nearest to it in the predictor space, where k is an integer. Similar cases are near each other and dissimilar cases are distant from each other. - - - -knnnode properties - -Table 1. knnnode properties - - knnnode Properties Values Property description - - analysis PredictTargetIdentifyNeighbors - objective BalanceSpeedAccuracyCustom - normalize_ranges flag - use_case_labels flag Check box to enable next option. - case_labels_field field - identify_focal_cases flag Check box to enable next option. - focal_cases_field field - automatic_k_selection flag - fixed_k integer Enabled only if automatic_k_selectio is False. - minimum_k integer Enabled only if automatic_k_selectio is True. - maximum_k integer - distance_computation EuclideanCityBlock - weight_by_importance flag - range_predictions MeanMedian - perform_feature_selection flag - forced_entry_inputs [field1 ... fieldN] - stop_on_error_ratio flag - number_to_select integer - minimum_change number - validation_fold_assign_by_field flag - number_of_folds integer Enabled only if validation_fold_assign_by_field is False - set_random_seed flag - random_seed number - folds_field field Enabled only if validation_fold_assign_by_field is True - all_probabilities flag - save_distances flag - calculate_raw_propensities flag -" -8B32EB4742D88B5CEC2E1C9616958BD7F8986785,8B32EB4742D88B5CEC2E1C9616958BD7F8986785," applyknnnode properties - -You can use KNN modeling nodes to generate a KNN model nugget. The scripting name of this model nugget is applyknnnode. For more information on scripting the modeling node itself, see [knnnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/knnnodeslots.htmlknnnodeslots). - - - -applyknnnode properties - -Table 1. applyknnnode properties - - applyknnnode Properties Values Property description - - all_probabilities flag -" -0563FC6874B43FA0BCA09AE54805FE98BFA33042,0563FC6874B43FA0BCA09AE54805FE98BFA33042," kohonennode properties - -![Kohonen node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/kohonennodeicon.png)The Kohonen node generates a type of neural network that can be used to cluster the data set into distinct groups. When the network is fully trained, records that are similar should be close together on the output map, while records that are different will be far apart. You can look at the number of observations captured by each unit in the model nugget to identify the strong units. This may give you a sense of the appropriate number of clusters. - - - -kohonennode properties - -Table 1. kohonennode properties - - kohonennode Properties Values Property description - - inputs [field1 ... fieldN] Kohonen models use a list of input fields, but no target. Frequency and weight fields are not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information. - continue flag - show_feedback flag - stop_on Default
Time - time number - optimize Speed
Memory Use to specify whether model building should be optimized for speed or for memory. - cluster_label flag - mode Simple
Expert - width number - length number - decay_style Linear
Exponential - phase1_neighborhood number - phase1_eta number - phase1_cycles number - phase2_neighborhood number - phase2_eta number - phase2_cycles number -" -2939716BFA6089C8B6373ED7C6397AF71389A5C8,2939716BFA6089C8B6373ED7C6397AF71389A5C8," applykohonennode properties - -You can use Kohonen modeling nodes to generate a Kohonen model nugget. The scripting name of this model nugget is applykohonennode. For more information on scripting the modeling node itself, see [c50node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/c50nodeslots.htmlc50nodeslots). - - - -applykohonennode properties - -Table 1. applykohonennode properties - - applykohonennode Properties Values Property description - - enable_sql_generation falsetruenative When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations. -" -87D2FF4289EDCBF7FCFA7FC7FD460DEB02ECC71B_0,87D2FF4289EDCBF7FCFA7FC7FD460DEB02ECC71B," logregnode properties - -![Logistic node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/logisticnodeicon.png)Logistic regression is a statistical technique for classifying records based on values of input fields. It is analogous to linear regression but takes a categorical target field instead of a numeric range. - - - -logregnode properties - -Table 1. logregnode properties - - logregnode Properties Values Property description - - target field Logistic regression models require a single target field and one or more input fields. Frequency and weight fields are not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information. - logistic_procedure BinomialMultinomial - include_constant flag - mode SimpleExpert - method EnterStepwiseForwardsBackwardsBackwardsStepwise - binomial_method EnterForwardsBackwards - model_type MainEffectsFullFactorialCustom When FullFactorial is specified as the model type, stepping methods will not run, even if specified. Instead, Enter will be the method used. If the model type is set to Custom but no custom fields are specified, a main-effects model will be built. - custom_terms [[BP Sex][BP][Age]] - multinomial_base_category string Specifies how the reference category is determined. - binomial_categorical_input string - binomial_input_contrast IndicatorSimpleDifferenceHelmertRepeatedPolynomialDeviation Keyed property for categorical input that specifies how the contrast is determined. See the example for usage. - binomial_input_category FirstLast Keyed property for categorical input that specifies how the reference category is determined. See the example for usage. - scale NoneUserDefinedPearsonDeviance - scale_value number - all_probabilities flag -" -87D2FF4289EDCBF7FCFA7FC7FD460DEB02ECC71B_1,87D2FF4289EDCBF7FCFA7FC7FD460DEB02ECC71B," tolerance 1.0E-51.0E-61.0E-71.0E-81.0E-91.0E-10 - min_terms number - use_max_terms flag - max_terms number - entry_criterion ScoreLR - removal_criterion LRWald - probability_entry number - probability_removal number - binomial_probability_entry number - binomial_probability_removal number - requirements HierarchyDiscreteHierarchyAllContainmentNone - max_iterations number - max_steps number - p_converge 1.0E-41.0E-51.0E-61.0E-71.0E-80 - l_converge 1.0E-11.0E-21.0E-31.0E-41.0E-50 - delta number - iteration_history flag - history_steps number - summary flag - likelihood_ratio flag - asymptotic_correlation flag - goodness_fit flag - parameters flag - confidence_interval number - asymptotic_covariance flag - classification_table flag - stepwise_summary flag - info_criteria flag - monotonicity_measures flag - binomial_output_display at_each_stepat_last_step - binomial_goodness_of_fit flag - binomial_parameters flag - binomial_iteration_history flag - binomial_classification_plots flag - binomial_ci_enable flag - binomial_ci number - binomial_residual outliersall - binomial_residual_enable flag - binomial_outlier_threshold number - binomial_classification_cutoff number - binomial_removal_criterion LRWaldConditional -" -7C8BCAFBD032E30DCC7C39E28A2B5DE1E340DA6B,7C8BCAFBD032E30DCC7C39E28A2B5DE1E340DA6B," applylogregnode properties - -You can use Logistic modeling nodes to generate a Logistic model nugget. The scripting name of this model nugget is applylogregnode. For more information on scripting the modeling node itself, [logregnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/logregnodeslots.htmllogregnodeslots). - - - -applylogregnode properties - -Table 1. applylogregnode properties - - applylogregnode Properties Values Property description - - calculate_raw_propensities flag -" -7C4F082004DBA0B946D64AA6C0127041F4622C7B,7C4F082004DBA0B946D64AA6C0127041F4622C7B," lsvmnode properties - -![LSVM node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/lsvm_icon.png)With the Linear Support Vector Machine (LSVM) node, you can classify data into one of two groups without overfitting. LSVM is linear and works well with wide data sets, such as those with a very large number of records. - - - -lsvmnode properties - -Table 1. lsvmnode properties - - lsvmnode Properties Values Property description - - intercept flag Includes the intercept in the model. Default value is True. - target_order AscendingDescending Specifies the sorting order for the categorical target. Ignored for continuous targets. Default is Ascending. - precision number Used only if measurement level of target field is Continuous. Specifies the parameter related to the sensitiveness of the loss for regression. Minimum is 0 and there is no maximum. Default value is 0.1. - exclude_missing_values flag When True, a record is excluded if any single value is missing. The default value is False. - penalty_function L1L2 Specifies the type of penalty function used. The default value is L2. -" -5890D52D3DDE4C249AD06C5A4DFE25542723F1C1,5890D52D3DDE4C249AD06C5A4DFE25542723F1C1," applylsvmnode properties - -You can use LSVM modeling nodes to generate an LSVM model nugget. The scripting name of this model nugget is applylsvmnode. For more information on scripting the modeling node itself, see [lsvmnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/lsvmnodeslots.html). - - - -applylsvmnode properties - -Table 1. applylsvmnode properties - - applylsvmnode Properties Values Property description - -" -3426FB738655136D42FA32BD6CFBFD979A3D5574,3426FB738655136D42FA32BD6CFBFD979A3D5574," matrixnode properties - -![Matrix node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/matrixnodeicon.png)The Matrix node creates a table that shows relationships between fields. It's most commonly used to show the relationship between two symbolic fields, but it can also show relationships between flag fields or numeric fields. - - - -matrixnode properties - -Table 1. matrixnode properties - - matrixnode properties Data type Property description - - fields SelectedFlagsNumerics - row field - column field - include_missing_values flag Specifies whether user-missing (blank) and system missing (null) values are included in the row and column output. - cell_contents CrossTabsFunction - function_field string - function SumMeanMinMaxSDev - sort_mode UnsortedAscendingDescending - highlight_top number If non-zero, then true. - highlight_bottom number If non-zero, then true. - display [CountsExpectedResidualsRowPctColumnPctTotalPct] - include_totals flag - use_output_name flag Specifies whether a custom output name is used. - output_name string If use_output_name is true, specifies the name to use. - output_mode ScreenFile Used to specify target location for output generated from the output node. - output_format Formatted (.tab) Delimited (.csv) HTML (.html) Output (.cou) Used to specify the type of output. Both the Formatted and Delimited formats can take the modifier transposed, which transposes the rows and columns in the table. - paginate_output flag When the output_format is HTML, causes the output to be separated into pages. -" -DA3D295DA633CD271FB3970AD2ED4B31BDCB6247_0,DA3D295DA633CD271FB3970AD2ED4B31BDCB6247," meansnode properties - -![Means node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/matrixnodeicon.png)The Means node compares the means between independent groups or between pairs of related fields to test whether a significant difference exists. For example, you could compare mean revenues before and after running a promotion or compare revenues from customers who didn't receive the promotion with those who did. - - - -meansnode properties - -Table 1. meansnode properties - - meansnode properties Data type Property description - - means_mode BetweenGroupsBetweenFields Specifies the type of means statistic to be executed on the data. - test_fields [field1 ... fieldn] Specifies the test field when means_mode is set to BetweenGroups. - grouping_field field Specifies the grouping field. - paired_fields [field1 field2]field3 field4]...] Specifies the field pairs to use when means_mode is set to BetweenFields. - label_correlations flag Specifies whether correlation labels are shown in output. This setting applies only when means_mode is set to BetweenFields. - correlation_mode ProbabilityAbsolute Specifies whether to label correlations by probability or absolute value. - weak_label string - medium_label string - strong_label string - weak_below_probability number When correlation_mode is set to Probability, specifies the cutoff value for weak correlations. This must be a value between 0 and 1—for example, 0.90. - strong_above_probability number Cutoff value for strong correlations. - weak_below_absolute number When correlation_mode is set to Absolute, specifies the cutoff value for weak correlations. This must be a value between 0 and 1—for example, 0.90. - strong_above_absolute number Cutoff value for strong correlations. - unimportant_label string - marginal_label string - important_label string - unimportant_below number Cutoff value for low field importance. This must be a value between 0 and 1—for example, 0.90. -" -DA3D295DA633CD271FB3970AD2ED4B31BDCB6247_1,DA3D295DA633CD271FB3970AD2ED4B31BDCB6247," important_above number - use_output_name flag Specifies whether a custom output name is used. - output_name string Name to use. - output_mode ScreenFile Specifies the target location for output generated from the output node. - output_format Formatted (.tab) Delimited (.csv) HTML (.html) Output (.cou) Specifies the type of output. -" -A148122DA72AD9FF05B3483D6F50975C50B4AB33_0,A148122DA72AD9FF05B3483D6F50975C50B4AB33," mergenode properties - -![Merge node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/mergenodeicon.png) The Merge node takes multiple input records and creates a single output record containing some or all of the input fields. It's useful for merging data from different sources, such as internal customer data and purchased demographic data. - - - -mergenode properties - -Table 1. mergenode properties - - mergenode properties Data type Property description - - method Order
Keys
Condition
Rankedcondition Specify whether records are merged in the order they are listed in the data files, if one or more key fields will be used to merge records with the same value in the key fields, if records will be merged if a specified condition is satisfied, or if each row pairing in the primary and all secondary data sets are to be merged; using the ranking expression to sort any multiple matches into order from low to high. - condition string If method is set to Condition, specifies the condition for including or discarding records. - key_fields list - common_keys flag - join Inner
FullOuter
PartialOuter
Anti - outer_join_tag.n flag In this property, n is the tag name as displayed in the node properties. Note that multiple tag names may be specified, as any number of datasets could contribute incomplete records. - single_large_input flag Specifies whether optimization for having one input relatively large compared to the other inputs will be used. - single_large_input_tag string Specifies the tag name as displayed in the note properties. Note that the usage of this property differs slightly from the outer_join_tag property (flag versus string) because only one input dataset can be specified. - use_existing_sort_keys flag Specifies whether the inputs are already sorted by one or more key fields. - existing_sort_keys [['string','Ascending'] \ ['string'','Descending']] Specifies the fields that are already sorted and the direction in which they are sorted. -" -A148122DA72AD9FF05B3483D6F50975C50B4AB33_1,A148122DA72AD9FF05B3483D6F50975C50B4AB33," primary_dataset string If method is Rankedcondition, select the primary data set in the merge. This can be considered as the left side of an outer join merge. - rename_duplicate_fields boolean If method is Rankedcondition, and this is set to Y, if the resulting merged data set contains multiple fields with the same name from different data sources the respective tags from the data sources are added at the start of the field column headers. - merge_condition string - ranking_expression string -" -ADB2D2B53C7F2A464A38F7DE5D7A74A39E697528,ADB2D2B53C7F2A464A38F7DE5D7A74A39E697528," Common modeling node properties - -The following properties are common to some or all modeling nodes. Any exceptions are noted in the documentation for individual modeling nodes as appropriate. - - - -Common modeling node properties - -Table 1. Common modeling node properties - - Property Values Property description - - custom_fields flag If true, allows you to specify target, input, and other fields for the current node. If false, the current settings from an upstream Type node are used. - target or targets field or [field1 ... fieldN] Specifies a single target field or multiple target fields depending on the model type. - inputs [field1 ... fieldN] Input or predictor fields used by the model. - partition field - use_partitioned_data flag If a partition field is defined, this option ensures that only data from the training partition is used to build the model. - use_split_data flag - splits [field1 ... fieldN] Specifies the field or fields to use for split modeling. Effective only if use_split_data is set to True. - use_frequency flag Weight and frequency fields are used by specific models as noted for each model type. - frequency_field field - use_weight flag - weight_field field - use_model_name flag -" -5E1CE04D915B9A758F234F859DFFEFAB46484C97,5E1CE04D915B9A758F234F859DFFEFAB46484C97," multilayerperceptronnode properties - -![MultiLayerPerceptron-AS node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/sparkmultilayerperceptronnodeicon.png)Multilayer perceptron is a classifier based on the feedforward artificial neural network and consists of multiple layers. Each layer is fully connected to the next layer in the network. The MultiLayerPerceptron-AS node in SPSS Modeler is implemented in Spark. For details about the multilayer perceptron classifier (MLPC), see - -[https://spark.apache.org/docs/latest/ml-classification-regression.html#multilayer-perceptron-classifier](https://spark.apache.org/docs/latest/ml-classification-regression.htmlmultilayer-perceptron-classifier). - - - -multilayerperceptronnode properties - -Table 1. multilayerperceptronnode properties - - multilayerperceptronnode properties Data type Property description - - custom_fields boolean This option tells the node to use field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify the following fields as required. - target field One field name for target. - inputs field List of the field names for input. - num_hidden_layers string Specify the number of hidden layers. Use a comma between multiple hidden layers. - num_output_number string Specify the number of output layers. - random_seed integer Generate the seed used by the random number generator. - maxiter integer Specify the maximum number of iterations to perform. - set_expert boolean Select the Expert Mode option in the Model Building section if you want to specify the block size for stacking input data in matrices. - block_size integer This option can speed up the computation. -" -093BFFCB43C46F1068A59A6B6338C955BF20AABF,093BFFCB43C46F1068A59A6B6338C955BF20AABF," multiplotnode properties - -![Multiplot node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/multiplotnodeicon.png)The Multiplot node creates a plot that displays multiple Y fields over a single X field. The Y fields are plotted as colored lines; each is equivalent to a Plot node with Style set to Line and X Mode set to Sort. Multiplots are useful when you want to explore the fluctuation of several variables over time. - - - -multiplotnode properties - -Table 1. multiplotnode properties - - multiplotnode properties Data type Property description - - x_field field - y_fields list - panel_field field - animation_field field - normalize flag - use_overlay_expr flag - overlay_expression string - records_limit number - if_over_limit PlotBinsPlotSamplePlotAll - x_label_auto flag - x_label string - y_label_auto flag - y_label string - use_grid flag -" -665B81FCF30212BA535DEDFFC35E22901ED3E3B6,665B81FCF30212BA535DEDFFC35E22901ED3E3B6," applyocsvmnode properties - -You can use One-Class SVM nodes to generate a One-Class SVM model nugget. The scripting name of this model nugget is applyocsvmnode. No other properties exist for this model nugget. For more information on scripting the modeling node itself, see [ocsvmnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/oneclasssvmnodeslots.htmloneclasssvmnodeslots). -" -B6DC15D9F3F199C8BAB5F85EDA67D50627BB3E08_0,B6DC15D9F3F199C8BAB5F85EDA67D50627BB3E08," ocsvmnode properties - -![One-Class SVM node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/pythononeclassnodeicon.png)The One-Class SVM node uses an unsupervised learning algorithm. The node can be used for novelty detection. It will detect the soft boundary of a given set of samples, to then classify new points as belonging to that set or not. This One-Class SVM modeling node in SPSS Modeler is implemented in Python and requires the scikit-learn© Python library. - - - -ocsvmnode properties - -Table 1. ocsvmnode properties - - ocsvmnode properties Data type Property description - - custom_fields boolean This option tells the node to use field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify the following fields as required. - inputs field List of the field names for input. - role_use string Specify predefined to use predefined roles or custom to use custom field assignments. Default is predefined. - splits field List of the field names for split. - use_partition Boolean Specify true or false. Default is true. If set to true, only training data will be used when building the model. - mode_type string The mode. Possible values are simple or expert. All parameters on the Expert tab will be disabled if simple is specified. - stopping_criteria string A string of scientific notation. Possible values are 1.0E-1, 1.0E-2, 1.0E-3, 1.0E-4, 1.0E-5, or 1.0E-6. Default is 1.0E-3. - precision float The regression precision (nu). Bound on the fraction of training errors and support vectors. Specify a number greater than 0 and less than or equal to 1.0. Default is 0.1. - kernel string The kernel type to use in the algorithm. Possible values are linear, poly, rbf, sigmoid, or precomputed. Default is rbf. -" -B6DC15D9F3F199C8BAB5F85EDA67D50627BB3E08_1,B6DC15D9F3F199C8BAB5F85EDA67D50627BB3E08," enable_gamma Boolean Enables the gamma parameter. Specify true or false. Default is true. - gamma float This parameter is only enabled for the kernels rbf, poly, and sigmoid. If the enable_gamma parameter is set to false, this parameter will be set to auto. If set to true, the default is 0.1. - coef0 float Independent term in the kernel function. This parameter is only enabled for the poly kernel and the sigmoid kernel. Default value is 0.0. - degree integer Degree of the polynomial kernel function. This parameter is only enabled for the poly kernel. Specify any integer. Default is 3. - shrinking Boolean Specifies whether to use the shrinking heuristic option. Specify true or false. Default is false. - enable_cache_size Boolean Enables the cache_size parameter. Specify true or false. Default is false. - cache_size float The size of the kernel cache in MB. Default is 200. - enable_random_seed Boolean Enables the random_seed parameter. Specify true or false. Default is false. - random_seed integer The random number seed to use when shuffling data for probability estimation. Specify any integer. - pc_type string The type of the parallel coordinates graphic. Possible options are independent or general. - lines_amount integer Maximum number of lines to include on the graphic. Specify an integer between 1 and 1000. - lines_fields_custom Boolean Enables the lines_fields parameter, which allows you to specify custom fields to show in the graph output. If set to false, all fields will be shown. If set to true, only the fields specified with the lines_fields parameter will be shown. For performance reasons, a maximum of 20 fields will be displayed. - lines_fields field List of the field names to include on the graphic as vertical axes. - enable_graphic Boolean Specify true or false. Enables graphic output (disable this option if you want to save time and reduce stream file size). -" -B6DC15D9F3F199C8BAB5F85EDA67D50627BB3E08_2,B6DC15D9F3F199C8BAB5F85EDA67D50627BB3E08," enable_hpo Boolean Specify true or false to enable or disable the HPO options. If set to true, Rbfopt will be applied to find out the ""best"" One-Class SVM model automatically, which reaches the target objective value defined by the user with the following target_objval parameter. - target_objval float The objective function value (error rate of the model on the samples) we want to reach (for example, the value of the unknown optimum). Set this parameter to the appropriate value if the optimum is unknown (for example, 0.01). -" -1B83FE669CB3776D00A1A78E4764F115DFD5A40A,1B83FE669CB3776D00A1A78E4764F115DFD5A40A," Output node properties - -Refer to this section for a list of available properties for Output nodes. - -Output node properties differ slightly from those of other node types. Rather than referring to a particular node option, output node properties store a reference to the output object. This can be useful in taking a value from a table and then setting it as a flow parameter. -" -1C19733ED0D3400BAF6FF05317475A6518B5BA1A,1C19733ED0D3400BAF6FF05317475A6518B5BA1A," partitionnode properties - -![Partition node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/partitionnodeicon.png)The Partition node generates a partition field, which splits the data into separate subsets for the training, testing, and validation stages of model building. - - - -partitionnode properties - -Table 1. partitionnode properties - - partitionnode properties Data type Property description - - new_name string Name of the partition field generated by the node. - create_validation flag Specifies whether a validation partition should be created. - training_size integer Percentage of records (0–100) to be allocated to the training partition. - testing_size integer Percentage of records (0–100) to be allocated to the testing partition. - validation_size integer Percentage of records (0–100) to be allocated to the validation partition. Ignored if a validation partition is not created. - training_label string Label for the training partition. - testing_label string Label for the testing partition. - validation_label string Label for the validation partition. Ignored if a validation partition is not created. - value_mode SystemSystemAndLabelLabel Specifies the values used to represent each partition in the data. For example, the training sample can be represented by the system integer 1, the label Training, or a combination of the two, 1_Training. - set_random_seed Boolean Specifies whether a user-specified random seed should be used. - random_seed integer A user-specified random seed value. For this value to be used, set_random_seed must be set to True. -" -F1CDB96AD5A56206F662BB3025B93F6D5820242B_0,F1CDB96AD5A56206F662BB3025B93F6D5820242B," plotnode properties - -![Plot node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/plotnodeicon.png)The Plot node shows the relationship between numeric fields. You can create a plot by using points (a scatterplot) or lines. - - - -plotnode properties - -Table 1. plotnode properties - - plotnode properties Data type Property description - - x_field field Specifies a custom label for the x axis. Available only for labels. - y_field field Specifies a custom label for the y axis. Available only for labels. - three_D flag Specifies a custom label for the y axis. Available only for labels in 3-D graphs. - z_field field - color_field field Overlay field. - size_field field - shape_field field - panel_field field Specifies a nominal or flag field for use in making a separate chart for each category. Charts are paneled together in one output window. - animation_field field Specifies a nominal or flag field for illustrating data value categories by creating a series of charts displayed in sequence using animation. - transp_field field Specifies a field for illustrating data value categories by using a different level of transparency for each category. Not available for line plots. - overlay_type NoneSmootherFunction Specifies whether an overlay function or LOESS smoother is displayed. - overlay_expression string Specifies the expression used when overlay_type is set to Function. - style PointLine - point_type Rectangle Dot Triangle Hexagon Plus Pentagon Star BowTie HorizontalDash VerticalDash IronCross Factory House Cathedral OnionDome ConcaveTriangle OblateGlobe CatEye FourSidedPillow RoundRectangle Fan - x_mode SortOverlayAsRead - x_range_mode AutomaticUserDefined - x_range_min number - x_range_max number - y_range_mode AutomaticUserDefined - y_range_min number - y_range_max number - z_range_mode AutomaticUserDefined - z_range_min number - z_range_max number - jitter flag - records_limit number -" -F1CDB96AD5A56206F662BB3025B93F6D5820242B_1,F1CDB96AD5A56206F662BB3025B93F6D5820242B," if_over_limit PlotBinsPlotSamplePlotAll - x_label_auto flag - x_label string - y_label_auto flag - y_label string - z_label_auto flag - z_label string - use_grid flag - graph_background color Standard graph colors are described at the beginning of this section. -" -8BAD741CD92F2DB6AB2CE3A3C2D35D000235BFE9,8BAD741CD92F2DB6AB2CE3A3C2D35D000235BFE9," applylinearasnode properties - -You can use Linear-AS modeling nodes to generate a Linear-AS model nugget. The scripting name of this model nugget is applylinearasnode. For more information on scripting the modeling node itself, see [linearasnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/properties/linearASslots.htmllinearASslots). - - - -applylinearasnode Properties - -Table 1. applylinearasnode Properties - - applylinearasnode Property Values Property description - - enable_sql_generation falsenative When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations. -" -0E7FF3238A69FA701A2067672493CCB1B9698CC1,0E7FF3238A69FA701A2067672493CCB1B9698CC1," applylinearnode properties - -Linear modeling nodes can be used to generate a Linear model nugget. The scripting name of this model nugget is applylinearnode. For more information on scripting the modeling node itself, see [linearnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/properties/linearslots.htmllinearslots). - - - -applylinearnode Properties - -Table 1. applylinearnode Properties - - linear Properties Values Property description - - use_custom_name flag -" -CE40B0CEF1449476821A1EBD8D0CF339C866D16A,CE40B0CEF1449476821A1EBD8D0CF339C866D16A," applyneuralnetworknode properties - -You can use Neural Network modeling nodes to generate a Neural Network model nugget. The scripting name of this model nugget is applyneuralnetworknode. For more information on scripting the modeling node itself, see [neuralnetworknode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/properties/neuralnetworkslots.htmlneuralnetworkslots). - - - -applyneuralnetworknode properties - -Table 1. applyneuralnetworknode properties - - applyneuralnetworknode Properties Values Property description - - use_custom_name flag - custom_name string - confidence onProbability
onIncrease - score_category_probabilities flag - max_categories number -" -90FAFE76840267470228854A202752832D54A787,90FAFE76840267470228854A202752832D54A787," linearasnode properties - -![Linear-AS node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/almas_nodeicon.png)Linear regression models predict a continuous target based on linear relationships between the target and one or more predictors. - - - -linearasnode properties - -Table 1. linearasnode properties - - linearasnode Properties Values Property description - - target field Specifies a single target field. - inputs [field1 ... fieldN] Predictor fields used by the model. - weight_field field Analysis field used by the model. - custom_fields flag The default value is TRUE. - intercept flag The default value is TRUE. - detect_2way_interaction flag Whether or not to consider two way interaction. The default value is TRUE. - cin number The interval of confidence used to compute estimates of the model coefficients. Specify a value greater than 0 and less than 100. The default value is 95. - factor_order ascendingdescending The sort order for categorical predictors. The default value is ascending. - var_select_method ForwardStepwiseBestSubsetsnone The model selection method to use. The default value is ForwardStepwise. - criteria_for_forward_stepwise AICCFstatisticsAdjustedRSquareASE The statistic used to determine whether an effect should be added to or removed from the model. The default value is AdjustedRSquare. - pin number The effect that has the smallest p-value less than this specified pin threshold is added to the model. The default value is 0.05. - pout number Any effects in the model with a p-value greater than this specified pout threshold are removed. The default value is 0.10. - use_custom_max_effects flag Whether to use max number of effects in the final model. The default value is FALSE. - max_effects number Maximum number of effects to use in the final model. The default value is 1. - use_custom_max_steps flag Whether to use the maximum number of steps. The default value is FALSE. -" -4DEAFAC111CF37F37A2F20CFF35606827D940390,4DEAFAC111CF37F37A2F20CFF35606827D940390," linearnode properties - -![Linear node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/alm_nodeicon.png)Linear regression models predict a continuous target based on linear relationships between the target and one or more predictors. - - - -linearnode properties - -Table 1. linearnode properties - - linearnode Properties Values Property description - - target field Specifies a single target field. - inputs [field1 ... fieldN] Predictor fields used by the model. - continue_training_existing_model flag - objective Standard
Bagging
Boosting
psm psm is used for very large datasets, and requires a server connection. - use_auto_data_preparation flag - confidence_level number - model_selection ForwardStepwise
BestSubsets
None - criteria_forward_stepwise AICC
Fstatistics
AdjustedRSquare
ASE - probability_entry number - probability_removal number - use_max_effects flag - max_effects number - use_max_steps flag - max_steps number - criteria_best_subsets AICC
AdjustedRSquare
ASE - combining_rule_continuous Mean
Median - component_models_n number - use_random_seed flag - random_seed number - use_custom_model_name flag - custom_model_name string - use_custom_name flag - custom_name string - tooltip string - keywords string - annotation string - perform_model_effect_tests boolean Perform model effect tests for each regression effect. - confidence_level double This is the interval of confidence used to compute estimates of the model coefficients. Specify a value greater than 0 and less than 100. The default is 95. -" -7F4719A688D4C15D72918EBBE43B908300138D2C_0,7F4719A688D4C15D72918EBBE43B908300138D2C," neuralnetworknode properties - -![Neural Net node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/neuralnetnodeicon.png)The Neural Net node uses a simplified model of the way the human brain processes information. It works by simulating a large number of interconnected simple processing units that resemble abstract versions of neurons. Neural networks are powerful general function estimators and require minimal statistical or mathematical knowledge to train or apply. - - - -neuralnetworknode properties - -Table 1. neuralnetworknode properties - - neuralnetworknode Properties Values Property description - - targets [field1 ... fieldN] Specifies target fields. - inputs [field1 ... fieldN] Predictor fields used by the model. - splits [field1 ... fieldN Specifies the field or fields to use for split modeling. - use_partition flag If a partition field is defined, this option ensures that only data from the training partition is used to build the model. - continue flag Continue training existing model. - objective Standard
Bagging
Boosting
psm psm is used for very large datasets, and requires a server connection. - method MultilayerPerceptron
RadialBasisFunction - use_custom_layers flag - first_layer_units number - second_layer_units number - use_max_time flag - max_time number - use_max_cycles flag - max_cycles number - use_min_accuracy flag - min_accuracy number - combining_rule_categorical Voting
HighestProbability
HighestMeanProbability - combining_rule_continuous MeanMedian - component_models_n number - overfit_prevention_pct number - use_random_seed flag - random_seed number - missing_values listwiseDeletion
missingValueImputation - use_model_name boolean - model_name string - confidence onProbability
onIncrease - score_category_probabilities flag - max_categories number - score_propensity flag -" -7F4719A688D4C15D72918EBBE43B908300138D2C_1,7F4719A688D4C15D72918EBBE43B908300138D2C," use_custom_name flag - custom_name string - tooltip string - keywords string -" -3DC76AC891E282BADF1D7845B2B8A9B3A26DE3D2,3DC76AC891E282BADF1D7845B2B8A9B3A26DE3D2," Export node properties - -Refer to this section for a list of available properties for Export nodes. -" -049829EEA8EECD997E6CA05584CDE2D9BAE92218,049829EEA8EECD997E6CA05584CDE2D9BAE92218," Field Operations node properties - -Refer to this section for a list of available properties for Field Operations nodes. -" -9A1025416CDA5EA57E6B2D9525BDFC7F1AE58692,9A1025416CDA5EA57E6B2D9525BDFC7F1AE58692," Graph node properties - -Refer to this section for a list of available properties for Graph nodes. -" -9F78EEC8E37DB19F2C3220F8E43029B2C5370B5D,9F78EEC8E37DB19F2C3220F8E43029B2C5370B5D," Modeling node properties - -Refer to this section for a list of available properties for Modeling nodes. -" -F650943069620AA0BD7652DF1ABDCE2C076DE464,F650943069620AA0BD7652DF1ABDCE2C076DE464," Python node properties - -Refer to this section for a list of available properties for Python nodes. -" -8CE361C94FAB69503049EA703FD6D5A53CD81057,8CE361C94FAB69503049EA703FD6D5A53CD81057," Record Operations node properties - -Refer to this section for a list of available properties for Record Operations nodes. -" -179BDEFA68B788A2C197F0094C43979D9265BA77,179BDEFA68B788A2C197F0094C43979D9265BA77," Data Asset Import node properties - -Refer to this section for a list of available properties for Import nodes. -" -F585DF82F7A94309AF9FB51196F188B4FA212118,F585DF82F7A94309AF9FB51196F188B4FA212118," Spark node properties - -Refer to this section for a list of available properties for Spark nodes. -" -C1CA39FF2C12CC12697E62A37C7C52A256248AF7_0,C1CA39FF2C12CC12697E62A37C7C52A256248AF7," questnode properties - -![Quest node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/questnodeicon.png)The Quest node provides a binary classification method for building decision trees, designed to reduce the processing time required for large C&R Tree analyses while also reducing the tendency found in classification tree methods to favor inputs that allow more splits. Input fields can be numeric ranges (continuous), but the target field must be categorical. All splits are binary. - - - -questnode properties - -Table 1. questnode properties - - questnode Properties Values Property description - - target field Quest models require a single target and one or more input fields. A frequency field can also be specified. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html) for more information. - continue_training_existing_model flag - objective StandardBoostingBaggingpsm psm is used for very large datasets, and requires a server connection. - model_output_type SingleInteractiveBuilder - use_tree_directives flag - tree_directives string - use_max_depth DefaultCustom - max_depth integer Maximum tree depth, from 0 to 1000. Used only if use_max_depth = Custom. - prune_tree flag Prune tree to avoid overfitting. - use_std_err flag Use maximum difference in risk (in Standard Errors). - std_err_multiplier number Maximum difference. - max_surrogates number Maximum surrogates. - use_percentage flag - min_parent_records_pc number - min_child_records_pc number - min_parent_records_abs number - min_child_records_abs number - use_costs flag - costs structured Structured property. - priors DataEqualCustom - custom_priors structured Structured property. - adjust_priors flag - trails number Number of component models for boosting or bagging. -" -C1CA39FF2C12CC12697E62A37C7C52A256248AF7_1,C1CA39FF2C12CC12697E62A37C7C52A256248AF7," set_ensemble_method VotingHighestProbabilityHighestMeanProbability Default combining rule for categorical targets. - range_ensemble_method MeanMedian Default combining rule for continuous targets. - large_boost flag Apply boosting to very large data sets. - split_alpha number Significance level for splitting. - train_pct number Overfit prevention set. - set_random_seed flag Replicate results option. - seed number - calculate_variable_importance flag - calculate_raw_propensities flag -" -2B2899A3878E20A4B73B0F11CFC4FD815A81E13F,2B2899A3878E20A4B73B0F11CFC4FD815A81E13F," applyquestnode properties - -You can use QUEST modeling nodes can be used to generate a QUEST model nugget. The scripting name of this model nugget is applyquestnode. For more information on scripting the modeling node itself, see [questnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/questnodeslots.html). - - - -applyquestnode properties - -Table 1. applyquestnode properties - - applyquestnode Properties Values Property description - - sql_generate Never
NoMissingValues
MissingValues
native When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations. - calculate_conf flag - display_rule_id flag Adds a field in the scoring output that indicates the ID for the terminal node to which each record is assigned. -" -19AE3ADCF2DA2FFE5186553229FEF07CB2B55043_0,19AE3ADCF2DA2FFE5186553229FEF07CB2B55043," autonumericnode properties - -![Auto Numeric node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/rangepredictornodeicon.png)The Auto Numeric node estimates and compares models for continuous numeric range outcomes using a number of different methods. The node works in the same manner as the Auto Classifier node, allowing you to choose the algorithms to use and to experiment with multiple combinations of options in a single modeling pass. Supported algorithms include neural networks, C&R Tree, CHAID, linear regression, generalized linear regression, and support vector machines (SVM). Models can be compared based on correlation, relative error, or number of variables used. - - - -autonumericnode properties - -Table 1. autonumericnode properties - - autonumericnode Properties Values Property description - - custom_fields flag If True, custom field settings will be used instead of type node settings. - target field The Auto Numeric node requires a single target and one or more input fields. Weight and frequency fields can also be specified. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information. - inputs [field1 … field2] - partition field - use_frequency flag - frequency_field field - use_weight flag - weight_field field - use_partitioned_data flag If a partition field is defined, only the training data is used for model building. - ranking_measure CorrelationNumberOfFields - ranking_dataset TestTraining - number_of_models integer Number of models to include in the model nugget. Specify an integer between 1 and 100. - calculate_variable_importance flag - enable_correlation_limit flag - correlation_limit integer - enable_number_of_fields_limit flag - number_of_fields_limit integer - enable_relative_error_limit flag - relative_error_limit integer -" -19AE3ADCF2DA2FFE5186553229FEF07CB2B55043_1,19AE3ADCF2DA2FFE5186553229FEF07CB2B55043," enable_model_build_time_limit flag - model_build_time_limit integer - enable_stop_after_time_limit flag - stop_after_time_limit integer - stop_if_valid_model flag - flag Enables or disables the use of a specific algorithm. - . string Sets a property value for a specific algorithm. See [Setting algorithm properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/factorymodeling_algorithmproperties.htmlfactorymodeling_algorithmproperties) for more information. - use_cross_validation boolean Instead of using a single partition, a cross validation partition is used. - number_of_folds integer N fold parameter for cross validation, with range from 3 to 10. - set_random_seed boolean Setting a random seed allows you to replicate analyses. Specify an integer or click Generate, which will create a pseudo-random integer between 1 and 2147483647, inclusive. By default, analyses are replicated with seed 229176228. - random_seed integer Random seed -" -B7CAC3027EB08D3E2CFBFAB0F0AF2ACF4DD0F990,B7CAC3027EB08D3E2CFBFAB0F0AF2ACF4DD0F990," reclassifynode properties - -![Reclassify node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/reclassifynodeicon.png)The Reclassify node transforms one set of categorical values to another. Reclassification is useful for collapsing categories or regrouping data for analysis. - - - -reclassifynode properties - -Table 1. reclassifynode properties - - reclassifynode properties Data type Property description - - mode SingleMultiple Single reclassifies the categories for one field. Multiple activates options enabling the transformation of more than one field at a time. - replace_field flag - field string Used only in Single mode. - new_name string Used only in Single mode. - fields [field1 field2 ... fieldn] Used only in Multiple mode. - name_extension string Used only in Multiple mode. - add_as SuffixPrefix Used only in Multiple mode. - reclassify string Structured property for field values. - use_default flag Use the default value. -" -8023AC0A48264DB31F3C9DA92FD84F947BFD4047,8023AC0A48264DB31F3C9DA92FD84F947BFD4047," regressionnode properties - -![Regression node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/regressionnodeicon.png)Linear regression is a common statistical technique for summarizing data and making predictions by fitting a straight line or surface that minimizes the discrepancies between predicted and actual output values. - - - -regressionnode properties - -Table 1. regressionnode properties - - regressionnode Properties Values Property description - - target field Regression models require a single target field and one or more input fields. A weight field can also be specified. See the topic [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information. - method Enter
Stepwise
Backwards
Forwards - include_constant flag - use_weight flag - weight_field field - mode Simple
Expert - complete_records flag - tolerance 1.0E-1
1.0E-2
1.0E-3
1.0E-4
1.0E-5
1.0E-6
1.0E-7
1.0E-8
1.0E-9
1.0E-10
1.0E-11
1.0E-12 Use double quotes for arguments. - stepping_method useP
useF useP: use probability of F useF: use F value - probability_entry number - probability_removal number - F_value_entry number - F_value_removal number - selection_criteria flag - confidence_interval flag - covariance_matrix flag - collinearity_diagnostics flag - regression_coefficients flag - exclude_fields flag - durbin_watson flag - model_fit flag - r_squared_change flag - p_correlations flag - descriptives flag -" -D6A347CB86DF46925701892180F4D8A5B8E14508,D6A347CB86DF46925701892180F4D8A5B8E14508," applyregressionnode properties - -You can use Linear Regression modeling nodes to generate a Linear Regression model nugget. The scripting name of this model nugget is applyregressionnode. No other properties exist for this model nugget. For more information on scripting the modeling node itself, see [regressionnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/regressionnodeslots.htmlregressionnodeslots). -" -56DC9CABDA3980A4D5D41AA5B3E5612E727B289A,56DC9CABDA3980A4D5D41AA5B3E5612E727B289A," reordernode properties - -![Field Reorder node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/fieldreordernodeicon.png)The Field Reorder node defines the natural order used to display fields downstream. This order affects the display of fields in a variety of places, such as tables, lists, and when selecting fields. This operation is useful when working with wide datasets to make fields of interest more visible. - - - -reordernode properties - -Table 1. reordernode properties - - reordernode properties Data type Property description - - mode CustomAuto You can sort values automatically or specify a custom order. - sort_by NameTypeStorage - ascending flag -" -57ED2F2E8EAA8DAB5B26C3759FD1BD102D03B975,57ED2F2E8EAA8DAB5B26C3759FD1BD102D03B975," reportnode properties - -![Report node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/reportnodeicon.png)The Report node creates formatted reports containing fixed text as well as data and other expressions derived from the data. You specify the format of the report using text templates to define the fixed text and data output constructions. You can provide custom text formatting by using HTML tags in the template and by setting output options. You can include data values and other conditional output by using CLEM expressions in the template. - - - -reportnode properties - -Table 1. reportnode properties - - reportnode properties Data type Property description - - output_mode ScreenFile Used to specify target location for output generated from the output node. - output_format HTML (.html) Text (.txt) Output (.cou) Used to specify the type of file output. - format AutoCustom Used to choose whether output is automatically formatted or formatted using HTML included in the template. To use HTML formatting in the template, specify Custom. - use_output_name flag Specifies whether a custom output name is used. - output_name string If use_output_name is true, specifies the name to use. - text string - full_filename string - highlights flag -" -5D9039607C167566CED9A4D7CC9F30F2B0C58554,5D9039607C167566CED9A4D7CC9F30F2B0C58554," restructurenode properties - -![Restructure node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/restructurenodeicon.png)The Restructure node converts a nominal or flag field into a group of fields that can be populated with the values of yet another field. For example, given a field named payment type, with values of credit, cash, and debit, three new fields would be created (credit, cash, debit), each of which might contain the value of the actual payment made. - -Example - -node = stream.create(""restructure"", ""My node"") -node.setKeyedPropertyValue(""fields_from"", ""Drug"", [""drugA"", ""drugX""]) -node.setPropertyValue(""include_field_name"", True) -node.setPropertyValue(""value_mode"", ""OtherFields"") -node.setPropertyValue(""value_fields"", [""Age"", ""BP""]) - - - -restructurenode properties - -Table 1. restructurenode properties - - restructurenode properties Data type Property description - - fields_from [category category category] all - include_field_name flag Indicates whether to use the field name in the restructured field name. -" -CD0745062372B6A66356728DEA39EE6D8237D0DE_0,CD0745062372B6A66356728DEA39EE6D8237D0DE," randomtrees properties - -![Random Trees node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/rfnodeicon.png)The Random Trees node is similar to the C&RT Tree node; however, the Random Trees node is designed to process big data to create a single tree. The Random Trees tree node generates a decision tree that you use to predict or classify future observations. The method uses recursive partitioning to split the training records into segments by minimizing the impurity at each step, where a node in the tree is considered pure if 100% of cases in the node fall into a specific category of the target field. Target and input fields can be numeric ranges or categorical (nominal, ordinal, or flags); all splits are binary (only two subgroups). - - - -randomtrees properties - -Table 1. randomtrees properties - - randomtrees Properties Values Property description - - target field In the Random Trees node, models require a single target and one or more input fields. A frequency field can also be specified. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information. - number_of_models integer Determines the number of models to build as part of the ensemble modeling. - use_number_of_predictors flag Determines whether number_of_predictors is used. - number_of_predictors integer Specifies the number of predictors to be used when building split models. - use_stop_rule_for_accuracy flag Determines whether model building stops when accuracy can't be improved. - sample_size number Reduce this value to improve performance when processing very large datasets. -" -CD0745062372B6A66356728DEA39EE6D8237D0DE_1,CD0745062372B6A66356728DEA39EE6D8237D0DE," handle_imbalanced_data flag If the target of the model is a particular flag outcome, and the ratio of the desired outcome to a non-desired outcome is very small, then the data is imbalanced and the bootstrap sampling that's conducted by the model may affect the model's accuracy. Enable imbalanced data handling so that the model will capture a larger proportion of the desired outcome and generate a stronger model. - use_weighted_sampling flag When False, variables for each node are randomly selected with the same probability. When True, variables are weighted and selected accordingly. - max_node_number integer Maximum number of nodes allowed in individual trees. If the number would be exceeded on the next split, tree growth halts. - max_depth integer Maximum tree depth before growth halts. - min_child_node_size integer Determines the minimum number of records allowed in a child node after the parent node is split. If a child node would contain fewer records than specified here, the parent node won't be split. - use_costs flag - costs structured Structured property. The format is a list of 3 values: the actual value, the predicted value, and the cost if that prediction is wrong. For example: tree.setPropertyValue(""costs"", [""drugA"", ""drugB"", 3.0], ""drugX"", ""drugY"", 4.0]]) - default_cost_increase nonelinearsquarecustom Note this is only enabled for ordinal targets. Set default values in the costs matrix. - max_pct_missing integer If the percentage of missing values in any input is greater than the value specified here, the input is excluded. Minimum 0, maximum 100. - exclude_single_cat_pct integer If one category value represents a higher percentage of the records than specified here, the entire field is excluded from model building. Minimum 1, maximum 99. - max_category_number integer If the number of categories in a field exceeds this value, the field is excluded from model building. Minimum 2. -" -CD0745062372B6A66356728DEA39EE6D8237D0DE_2,CD0745062372B6A66356728DEA39EE6D8237D0DE," min_field_variation number If the coefficient of variation of a continuous field is smaller than this value, the field is excluded from model building. -" -E10CEBBD89F23E057645097B776A51DEA0C1555F,E10CEBBD89F23E057645097B776A51DEA0C1555F," applyrandomtrees properties - -You can use the Random Trees modeling node to generate a Random Trees model nugget. The scripting name of this model nugget is applyrandomtrees. For more information on scripting the modeling node itself, see [randomtrees properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/rf_nodeslots.htmlrf_nodeslots). - - - -applyrandomtrees properties - -Table 1. applyrandomtrees properties - - applyrandomtrees Properties Values Property description - -" -F6670B3B49F00E4EE1F44E8B1C09E24AFEDD2529_0,F6670B3B49F00E4EE1F44E8B1C09E24AFEDD2529," rfmaggregatenode properties - -![RFM Aggregate node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/mergenodeicon.png) The Recency, Frequency, Monetary (RFM) Aggregate node enables you to take customers' historical transactional data, strip away any unused data, and combine all of their remaining transaction data into a single row that lists when they last dealt with you, how many transactions they have made, and the total monetary value of those transactions. - -Example - -node = stream.create(""rfmaggregate"", ""My node"") -node.setPropertyValue(""relative_to"", ""Fixed"") -node.setPropertyValue(""reference_date"", ""2007-10-12"") -node.setPropertyValue(""id_field"", ""CardID"") -node.setPropertyValue(""date_field"", ""Date"") -node.setPropertyValue(""value_field"", ""Amount"") -node.setPropertyValue(""only_recent_transactions"", True) -node.setPropertyValue(""transaction_date_after"", ""2000-10-01"") - - - -rfmaggregatenode properties - -Table 1. rfmaggregatenode properties - - rfmaggregatenode properties Data type Property description - - relative_to FixedToday Specify the date from which the recency of transactions will be calculated. - reference_date date Only available if Fixed is chosen in relative_to. - contiguous flag If your data is presorted so that all records with the same ID appear together in the data stream, selecting this option speeds up processing. - id_field field Specify the field to be used to identify the customer and their transactions. - date_field field Specify the date field to be used to calculate recency against. - value_field field Specify the field to be used to calculate the monetary value. - extension string Specify a prefix or suffix for duplicate aggregated fields. -" -F6670B3B49F00E4EE1F44E8B1C09E24AFEDD2529_1,F6670B3B49F00E4EE1F44E8B1C09E24AFEDD2529," add_as SuffixPrefix Specify if the extension should be added as a suffix or a prefix. - discard_low_value_records flag Enable use of the discard_records_below setting. - discard_records_below number Specify a minimum value below which any transaction details are not used when calculating the RFM totals. The units of value relate to the value field selected. - only_recent_transactions flag Enable use of either the specify_transaction_date or transaction_within_last settings. - specify_transaction_date flag - transaction_date_after date Only available if specify_transaction_date is selected. Specify the transaction date after which records will be included in your analysis. - transaction_within_last number Only available if transaction_within_last is selected. Specify the number and type of periods (days, weeks, months, or years) back from the Calculate Recency relative to date after which records will be included in your analysis. - transaction_scale DaysWeeksMonthsYears Only available if transaction_within_last is selected. Specify the number and type of periods (days, weeks, months, or years) back from the Calculate Recency relative to date after which records will be included in your analysis. -" -4292721E4524AC59FA259576D39665946DB8849D_0,4292721E4524AC59FA259576D39665946DB8849D," rfmanalysisnode properties - -![RFM Analysis node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/rfm_analysis_icon.png)The Recency, Frequency, Monetary (RFM) Analysis node enables you to determine quantitatively which customers are likely to be the best ones by examining how recently they last purchased from you (recency), how often they purchased (frequency), and how much they spent over all transactions (monetary). - - - -rfmanalysisnode properties - -Table 1. rfmanalysisnode properties - - rfmanalysisnode properties Data type Property description - - recency field Specify the recency field. This may be a date, timestamp, or simple number. - frequency field Specify the frequency field. - monetary field Specify the monetary field. - recency_bins integer Specify the number of recency bins to be generated. - recency_weight number Specify the weighting to be applied to recency data. The default is 100. - frequency_bins integer Specify the number of frequency bins to be generated. - frequency_weight number Specify the weighting to be applied to frequency data. The default is 10. - monetary_bins integer Specify the number of monetary bins to be generated. - monetary_weight number Specify the weighting to be applied to monetary data. The default is 1. - tied_values_method NextCurrent Specify which bin tied value data is to be put in. - recalculate_bins AlwaysIfNecessary - add_outliers flag Available only if recalculate_bins is set to IfNecessary. If set, records that lie below the lower bin will be added to the lower bin, and records above the highest bin will be added to the highest bin. - binned_field RecencyFrequencyMonetary -" -4292721E4524AC59FA259576D39665946DB8849D_1,4292721E4524AC59FA259576D39665946DB8849D," recency_thresholds value value Available only if recalculate_bins is set to Always. Specify the upper and lower thresholds for the recency bins. The upper threshold of one bin is used as the lower threshold of the next—for example, [10 30 60] would define two bins, the first bin with upper and lower thresholds of 10 and 30, with the second bin thresholds of 30 and 60. -" -D1908D2F2C1701D4A9AC3354E42DFF295C06B40D_0,D1908D2F2C1701D4A9AC3354E42DFF295C06B40D," rfnode properties - -![Random Forest node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/pythonrfnodeicon.png)The Random Forest node uses an advanced implementation of a bagging algorithm with a tree model as the base model. This Random Forest modeling node in SPSS Modeler is implemented in Python and requires the scikit-learn© Python library. - - - -rfnode properties - -Table 1. rfnode properties - - rfnode properties Data type Property description - - custom_fields boolean This option tells the node to use field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify the following fields as required. - inputs field List of the field names for input. - target field One field name for target. - fast_build boolean Utilize multiple CPU cores to improve model building. - role_use string Specify predefined to use predefined roles or custom to use custom field assignments. Default is predefined. - splits field List of the field names for split. - n_estimators integer Number of trees to build. Default is 10. - specify_max_depth Boolean Specify custom max depth. If false, nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples. Default is false. - max_depth integer The maximum depth of the tree. Default is 10. - min_samples_leaf integer Minimum leaf node size. Default is 1. - max_features string The number of features to consider when looking for the best split:



* If auto, then max_features=sqrt(n_features) for classifier and max_features=sqrt(n_features) for regression.
* If sqrt, then max_features=sqrt(n_features).
* If log2, then max_features=log2 (n_features).



Default is auto. -" -D1908D2F2C1701D4A9AC3354E42DFF295C06B40D_1,D1908D2F2C1701D4A9AC3354E42DFF295C06B40D," bootstrap Boolean Use bootstrap samples when building trees. Default is true. - oob_score Boolean Use out-of-bag samples to estimate the generalization accuracy. Default value is false. - extreme Boolean Use extremely randomized trees. Default is false. - use_random_seed Boolean Specify this to get replicated results. Default is false. - random_seed integer The random number seed to use when build trees. Specify any integer. - cache_size float The size of the kernel cache in MB. Default is 200. - enable_random_seed Boolean Enables the random_seed parameter. Specify true or false. Default is false. - enable_hpo Boolean Specify true or false to enable or disable the HPO options. If set to true, Rbfopt will be applied to determine the ""best"" Random Forest model automatically, which reaches the target objective value defined by the user with the following target_objval parameter. - target_objval float The objective function value (error rate of the model on the samples) you want to reach (for example, the value of the unknown optimum). Set this parameter to the appropriate value if the optimum is unknown (for example, 0.01). -" -949025C4DEEA46FD131C7B8D89978D75FCC440C4_0,949025C4DEEA46FD131C7B8D89978D75FCC440C4," samplenode properties - -![Sample node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/samplenodeicon.png) The Sample node selects a subset of records. A variety of sample types are supported, including stratified, clustered, and nonrandom (structured) samples. Sampling can be useful for improving performance, and for selecting groups of related records or transactions for analysis. - -Example - -/* Create two Sample nodes to extract -different samples from the same data / - -node = stream.create(""sample"", ""My node"") -node.setPropertyValue(""method"", ""Simple"") -node.setPropertyValue(""mode"", ""Include"") -node.setPropertyValue(""sample_type"", ""First"") -node.setPropertyValue(""first_n"", 500) - -node = stream.create(""sample"", ""My node"") -node.setPropertyValue(""method"", ""Complex"") -node.setPropertyValue(""stratify_by"", [""Sex"", ""Cholesterol""]) -node.setPropertyValue(""sample_units"", ""Proportions"") -node.setPropertyValue(""sample_size_proportions"", ""Custom"") -node.setPropertyValue(""sizes_proportions"", [""M"", ""High"", ""Default""], ""M"", ""Normal"", ""Default""], -""F"", ""High"", 0.3], ""F"", ""Normal"", 0.3]]) - - - -samplenode properties - -Table 1. samplenode properties - - samplenode properties Data type Property description - - method Simple Complex - mode IncludeDiscard Include or discard records that meet the specified condition. - sample_type FirstOneInNRandomPct Specifies the sampling method. - first_n integer Records up to the specified cutoff point will be included or discarded. - one_in_n number Include or discard every nth record. -" -949025C4DEEA46FD131C7B8D89978D75FCC440C4_1,949025C4DEEA46FD131C7B8D89978D75FCC440C4," rand_pct number Specify the percentage of records to include or discard. - use_max_size flag Enable use of the maximum_size setting. - maximum_size integer Specify the largest sample to be included or discarded from the data stream. This option is redundant and therefore disabled when First and Include are specified. - set_random_seed flag Enables use of the random seed setting. - random_seed integer Specify the value used as a random seed. - complex_sample_type RandomSystematic - sample_units ProportionsCounts - sample_size_proportions FixedCustomVariable - sample_size_counts FixedCustomVariable - fixed_proportions number - fixed_counts integer - variable_proportions field - variable_counts field - use_min_stratum_size flag - minimum_stratum_size integer This option only applies when a Complex sample is taken with Sample units=Proportions. - use_max_stratum_size flag - maximum_stratum_size integer This option only applies when a Complex sample is taken with Sample units=Proportions. - clusters field - stratify_by [field1 ... fieldN] - specify_input_weight flag - input_weight field - new_output_weight string - sizes_proportions [[stringstring value][stringstring value]���] If sample_units=proportions and sample_size_proportions=Custom, specifies a value for each possible combination of values of stratification fields. - default_proportion number -" -3E0860FD12FA0BB5BE75C68FBD34D69A631F2324,3E0860FD12FA0BB5BE75C68FBD34D69A631F2324," Running and interrupting scripts - -You can run scripts in a number of ways. For example, in the flow script or standalone script pane, click Run This Script to run the complete script. - -You can run a script using any of the following methods: - - - -* Click Run script within a flow script or standalone script. -* Run a flow where Run script is set as the default execution method. - - - -Note: A SuperNode script runs when the SuperNode is run as long as you select Run script within the SuperNode script dialog box. -" -27E7AD16129A9DC8AC8CE2EE79C9B584D441F0DE_0,27E7AD16129A9DC8AC8CE2EE79C9B584D441F0DE," Accessing flow run results - -Many SPSS Modeler nodes produce output objects such as models, charts, and tabular data. Many of these outputs contain useful values that can be used by scripts to guide subsequent runs. These values are grouped into content containers (referred to as simply containers) which can be accessed using tags or IDs that identify each container. The way these values are accessed depends on the format or ""content model"" used by that container. - -For example, many predictive model outputs use a variant of XML called PMML to represent information about the model such as which fields a decision tree uses at each split, or how the neurons in a neural network are connected and with what strengths. Model outputs that use PMML provide an XML Content Model that can be used to access that information. For example: - -stream = modeler.script.stream() - Assume the flow contains a single C5.0 model builder node - and that the datasource, predictors, and targets have already been - set up -modelbuilder = stream.findByType(""c50"", None) -results = [] -modelbuilder.run(results) -modeloutput = results[0] - - Now that we have the C5.0 model output object, access the - relevant content model -cm = modeloutput.getContentModel(""PMML"") - - The PMML content model is a generic XML-based content model that - uses XPath syntax. Use that to find the names of the data fields. - The call returns a list of strings match the XPath values -dataFieldNames = cm.getStringValues(""/PMML/DataDictionary/DataField"", ""name"") - -SPSS Modeler supports the following content models in scripting: - - - -* Table content model provides access to the simple tabular data represented as rows and columns. -* XML content model provides access to content stored in XML format. -* JSON content model provides access to content stored in JSON format. -* Column statistics content model provides access to summary statistics about a specific field. -* Pair-wise column statistics content model provides access to summary statistics between two fields or values between two separate fields. - - - -Note that the following nodes don't contain these content models: - - - -* Time Series -* Discriminant -" -27E7AD16129A9DC8AC8CE2EE79C9B584D441F0DE_1,27E7AD16129A9DC8AC8CE2EE79C9B584D441F0DE,"* SLRM -" -6638B9F61F15821F7A92D9C30FC6C24C029B78DC_0,6638B9F61F15821F7A92D9C30FC6C24C029B78DC," Column Statistics content model and Pairwise Statistics content model - -The Column Statistics content model provides access to statistics that can be computed for each field (univariate statistics). The Pairwise Statistics content model provides access to statistics that can be computed between pairs of fields or values in a field. - -Any of these statistics measures are possible: - - - -* Count -* UniqueCount -* ValidCount -* Mean -* Sum -* Min -* Max -* Range -* Variance -* StandardDeviation -* StandardErrorOfMean -* Skewness -* SkewnessStandardError -* Kurtosis -* KurtosisStandardError -* Median -* Mode -* Pearson -* Covariance -* TTest -* FTest - - - -Some values are only appropriate from single column statistics while others are only appropriate for pairwise statistics. - -Nodes that produce these are: - - - -* Statistics node produces column statistics and can produce pairwise statistics when correlation fields are specified -* Data Audit node produces column and can produce pairwise statistics when an overlay field is specified. -* Means node produces pairwise statistics when comparing pairs of fields or comparing a field's values with other field summaries. - - - -Which content models and statistics are available depends on both the particular node's capabilities and the settings within the node. - - - -Methods for the Column Statistics content model - -Table 1. Methods for the Column Statistics content model - - Method Return types Description - - getAvailableStatistics() List Returns the available statistics in this model. Not all fields necessarily have values for all statistics. - getAvailableColumns() List Returns the column names for which statistics were computed. - getStatistic(String column, StatisticType statistic) Number Returns the statistic values associated with the column. - reset() void Flushes any internal storage associated with this content model. - - - - - -Methods for the Pairwise Statistics content model - -Table 2. Methods for the Pairwise Statistics content model - - Method Return types Description - - getAvailableStatistics() List Returns the available statistics in this model. Not all fields necessarily have values for all statistics. -" -6638B9F61F15821F7A92D9C30FC6C24C029B78DC_1,6638B9F61F15821F7A92D9C30FC6C24C029B78DC," getAvailablePrimaryColumns() List Returns the primary column names for which statistics were computed. - getAvailablePrimaryValues() List Returns the values of the primary column for which statistics were computed. - getAvailableSecondaryColumns() List Returns the secondary column names for which statistics were computed. - getStatistic(String primaryColumn, String secondaryColumn, StatisticType statistic) Number Returns the statistic values associated with the columns. -" -6FC8A7D53D6951306E0FD23667A802538A81D6FF,6FC8A7D53D6951306E0FD23667A802538A81D6FF," JSON content model - -The JSON content model is used to access content stored in JSON format. It provides a basic API to allow callers to extract values on the assumption that they know which values are to be accessed. - - - -Methods for the JSON content model - -Table 1. Methods for the JSON content model - - Method Return types Description - - getJSONAsString() String Returns the JSON content as a string. - getObjectAt( path, JSONArtifact artifact) throws Exception Object Returns the object at the specified path. The supplied root artifact might be null, in which case the root of the content is used. The returned value can be a literal string, integer, real or boolean, or a JSON artifact (either a JSON object or a JSON array). - getChildValuesAt( path, JSONArtifact artifact) throws Exception Hash table (key:object, value:object> Returns the child values of the specified path if the path leads to a JSON object or null otherwise. The keys in the table are strings while the associated value can be a literal string, integer, real or boolean, or a JSON artifact (either a JSON object or a JSON array). -" -8FDDCA5B0D9D19DB5B349AB7F72625B8C6D5744C,8FDDCA5B0D9D19DB5B349AB7F72625B8C6D5744C," Table content model - -The table content model provides a simple model for accessing simple row and column data. The values in a particular column must all have the same type of storage (for example, strings or integers). -" -198246E6E7F694D36936989D23B2255B15C2A92B,198246E6E7F694D36936989D23B2255B15C2A92B," XML content model - -The XML content model provides access to XML-based content. - -The XML content model supports the ability to access components based on XPath expressions. XPath expressions are strings that define which elements or attributes are required by the caller. The XML content model hides the details of constructing various objects and compiling expressions that are typically required by XPath support. It is simpler to call from Python scripting. - -The XML content model includes a function that returns the XML document as a string, so Python script users can use their preferred Python library to parse the XML. - - - -Methods for the XML content model - -Table 1. Methods for the XML content model - - Method Return types Description - - getXMLAsString() String Returns the XML as a string. - getNumericValue(String xpath) number Returns the result of evaluating the path with return type of numeric (for example, count the number of elements that match the path expression). - getBooleanValue(String xpath) boolean Returns the boolean result of evaluating the specified path expression. - getStringValue(String xpath, String attribute) String Returns either the attribute value or XML node value that matches the specified path. - getStringValues(String xpath, String attribute) List of strings Returns a list of all attribute values or XML node values that match the specified path. - getValuesList(String xpath, attributes, boolean includeValue) List of lists of strings Returns a list of all attribute values that match the specified path along with the XML node value if required. - getValuesMap(String xpath, String keyAttribute, attributes, boolean includeValue) Hash table (key:string, value:list of string) Returns a hash table that uses either the key attribute or XML node value as key, and the list of specified attribute values as table values. - isNamespaceAware() boolean Returns whether the XML parsers should be aware of namespaces. Default is False. -" -4D8B25691C26B2BA05F7E8A96B99FD3F15A124C6,4D8B25691C26B2BA05F7E8A96B99FD3F15A124C6," Looping through nodes - -You can use a for loop to loop through all the nodes in a flow. For example, the following two script examples loop through all nodes and change field names in any Filter nodes to uppercase. - -You can use this script in any flow that contains a Filter node, even if no fields are actually filtered. Simply add a Filter node that passes all fields in order to change field names to uppercase across the board. - - Alternative 1: using the data model nameIterator() function -stream = modeler.script.stream() -for node in stream.iterator(): -if (node.getTypeName() == ""filter""): - nameIterator() returns the field names -for field in node.getInputDataModel().nameIterator(): -newname = field.upper() -node.setKeyedPropertyValue(""new_name"", field, newname) - - Alternative 2: using the data model iterator() function -stream = modeler.script.stream() -for node in stream.iterator(): -if (node.getTypeName() == ""filter""): - iterator() returns the field objects so we need - to call getColumnName() to get the name -for field in node.getInputDataModel().iterator(): -newname = field.getColumnName().upper() -node.setKeyedPropertyValue(""new_name"", field.getColumnName(), newname) - -The script loops through all nodes in the current flow, and checks whether each node is a Filter. If so, the script loops through each field in the node and uses either the field.upper() or field.getColumnName().upper() function to change the name to uppercase. -" -14A06DE43E6B08188A7672B5BE8068A572DE5B7C,14A06DE43E6B08188A7672B5BE8068A572DE5B7C," Scripting and automation - -Scripting in SPSS Modeler is a powerful tool for automating processes in the user interface. Scripts can perform the same types of actions that you perform with a mouse or a keyboard, and you can use them to automate tasks that would be highly repetitive or time consuming to perform manually. - -You can use scripts to: - - - -* Impose a specific order for node executions in a flow. -* Set properties for a node as well as perform derivations using a subset of CLEM (Control Language for Expression Manipulation). -* Specify an automatic sequence of actions that normally involves user interaction—for example, you can build a model and then test it. -" -AE3F5B72354288CC106BB10263673EBC80B2D544,AE3F5B72354288CC106BB10263673EBC80B2D544," Scripting tips - -This section provides tips and techniques for using scripts, including modifying flow execution, and using an encoded password in a script. -" -0301D6611A36E44C345083F6E2C3BDE58DE59982,0301D6611A36E44C345083F6E2C3BDE58DE59982," Types of scripts - -SPSS Modeler uses three types of scripts: - - - -* Flow scripts are stored as a flow property and are therefore saved and loaded with a specific flow. For example, you can write a flow script that automates the process of training and applying a model nugget. You can also specify that whenever a particular flow runs, the script should be run instead of the flow's canvas content. -" -92FE6B199A3B4773C5B57EDEDBA80500E6C66FAF,92FE6B199A3B4773C5B57EDEDBA80500E6C66FAF," selectnode properties - -![Select node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/select_node_icon.png) The Select node selects or discards a subset of records from the data stream based on a specific condition. For example, you might select the records that pertain to a particular sales region. - - - -selectnode properties - -Table 1. selectnode properties - - selectnode properties Data type Property description - -" -2B4D4CA6A91C05D12F5C7942E73ABAE74BF08472,2B4D4CA6A91C05D12F5C7942E73ABAE74BF08472," slrmnode properties - -![SLRM ode icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/selflearn_icon.png)The Self-Learning Response Model (SLRM) node enables you to build a model in which a single new case, or small number of new cases, can be used to reestimate the model without having to retrain the model using all data. - - - -slrmnode properties - -Table 1. slrmnode properties - - slrmnode Properties Values Property description - - target field The target field must be a nominal or flag field. A frequency field can also be specified. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information. - target_response field Type must be flag. - continue_training_existing_model flag - target_field_values flag Use all: Use all values from source. Specify: Select values required. - target_field_values_specify [field1 ... fieldN] - include_model_assessment flag - model_assessment_random_seed number Must be a real number. - model_assessment_sample_size number Must be a real number. - model_assessment_iterations number Number of iterations. - display_model_evaluation flag - max_predictions number - randomization number - scoring_random_seed number - sort AscendingDescending Specifies whether the offers with the highest or lowest scores will be displayed first. -" -AEE1A739F2EA11F815EC571163BA99C9B2A97245,AEE1A739F2EA11F815EC571163BA99C9B2A97245," applyselflearningnode properties - -You can use Self-Learning Response Model (SLRM) modeling nodes to generate a SLRM model nugget. The scripting name of this model nugget is applyselflearningnode. For more information on scripting the modeling node itself, see [slrmnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/selflearnnodeslots.htmlselflearnnodeslots). - - - -applyselflearningnode properties - -Table 1. applyselflearningnode properties - - applyselflearningnode Properties Values Property description - - max_predictions number - randomization number - scoring_random_seed number - sort ascending
descending Specifies whether the offers with the highest or lowest scores will be displayed first. -" -641B0015A5A634BFC40F10AE59873CA784232F14,641B0015A5A634BFC40F10AE59873CA784232F14," sequencenode properties - -![Sequence node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/sequencenodeicon.png)The Sequence node discovers association rules in sequential or time-oriented data. A sequence is a list of item sets that tends to occur in a predictable order. For example, a customer who purchases a razor and aftershave lotion may purchase shaving cream the next time he shops. The Sequence node is based on the CARMA association rules algorithm, which uses an efficient two-pass method for finding sequences. - - - -sequencenode properties - -Table 1. sequencenode properties - - sequencenode Properties Values Property description - - id_field field To create a Sequence model, you need to specify an ID field, an optional time field, and one or more content fields. Weight and frequency fields are not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information. - time_field field - use_time_field flag - content_fields [field1 ... fieldn] - contiguous flag - min_supp number - min_conf number - max_size number - max_predictions number - mode SimpleExpert - use_max_duration flag - max_duration number - use_gaps flag - min_item_gap number - max_item_gap number - use_pruning flag - pruning_value number -" -29AF55B95D387BE39D4E9D328936B95CAD5BEB67,29AF55B95D387BE39D4E9D328936B95CAD5BEB67," applysequencenode properties - -You can use Sequence modeling nodes to generate a Sequence model nugget. The scripting name of this model nugget is applysequencenode. No other properties exist for this model nugget. For more information on scripting the modeling node itself, see [sequencenode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/sequencenodeslots.htmlsequencenodeslots). -" -2F88CC7897776EAD3F1A7052A740701B8E1A6969,2F88CC7897776EAD3F1A7052A740701B8E1A6969," setglobalsnode properties - -![Set Globals node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/setglobalsnodeicon.png)The Set Globals node scans the data and computes summary values that can be used in CLEM expressions. For example, you can use this node to compute statistics for a field called age and then use the overall mean of age in CLEM expressions by inserting the function @GLOBAL_MEAN(age). - - - -setglobalsnode properties - -Table 1. setglobalsnode properties - - setglobalsnode properties Data type Property description - - globals [Sum Mean Min Max SDev] Structured property where fields to be set must be referenced with the following syntax: node.setKeyedPropertyValue( ""globals"", ""Age"", [""Max"", ""Sum"", ""Mean"", ""SDev""]) -" -17E39C164E92D0646C4DDDADFDF178BF3B5E2AD0,17E39C164E92D0646C4DDDADFDF178BF3B5E2AD0," settoflagnode properties - -![SetToFlag node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/settoflagnodeicon.png)The SetToFlag node derives multiple flag fields based on the categorical values defined for one or more nominal fields. - - - -settoflagnode properties - -Table 1. settoflagnode properties - - settoflagnode properties Data type Property description - - fields_from [category category category] all - true_value string Specifies the true value used by the node when setting a flag. The default is T. - false_value string Specifies the false value used by the node when setting a flag. The default is F. - use_extension flag Use an extension as a suffix or prefix to the new flag field. - extension string - add_as SuffixPrefix Specifies whether the extension is added as a suffix or prefix. -" -723FD865C01F3AC097E03B74F7D81D574A1A13D4,723FD865C01F3AC097E03B74F7D81D574A1A13D4," simfitnode properties - -![Sim Fit node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/simfitnodeicon.png)The Simulation Fitting (Sim Fit) node examines the statistical distribution of the data in each field and generates (or updates) a Simulation Generate node, with the best fitting distribution assigned to each field. The Simulation Generate node can then be used to generate simulated data. - - - -simfitnode properties - -Table 1. simfitnode properties - - simfitnode properties Data type Property description - - custom_gen_node_name boolean You can generate the name of the generated (or updated) Simulation Generate node automatically by selecting Auto. - gen_node_name string Specify a custom name for the generated (or updated) node. - used_cases_type string Specifies the number of cases to use when fitting distributions to the fields in the data set. Use AllCases or FirstNCases. - used_cases integer The number of cases - good_fit_type string For continuous fields, specify either the AnderDarling test or the KolmogSmirn test of goodness of fit to rank distributions when fitting distributions to the fields. -" -C24646ED4724E2A2D856392DDA9C1B9B05145E11,C24646ED4724E2A2D856392DDA9C1B9B05145E11," simgennode properties - -![Sim Gen node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/simgennodeicon.png) The Simulation Generate (Sim Gen) node provides an easy way to generate simulated data—either from scratch using user specified statistical distributions or automatically using the distributions obtained from running a Simulation Fitting (Sim Fit) node on existing historical data. This is useful when you want to evaluate the outcome of a predictive model in the presence of uncertainty in the model inputs. - - - -simgennode properties - -Table 1. simgennode properties - - simgennode properties Data type Property description - - fields Structured property See example - correlations Structured property See example - keep_min_max_setting boolean - refit_correlations boolean - max_cases integer Minimum value is 1000, maximum value is 2,147,483,647 - create_iteration_field boolean - iteration_field_name string - replicate_results boolean -" -984B203B8A0054A07F5BE3EB99438C7FBCB6CE85,984B203B8A0054A07F5BE3EB99438C7FBCB6CE85," Node and flow property examples - -You can use node and flow properties in a variety of ways with SPSS Modeler. They're most commonly used as part of a script: either a standalone script, used to automate multiple flows or operations, or a flow script, used to automate processes within a single flow. You can also specify node parameters by using the node properties within the SuperNode. At the most basic level, properties can also be used as a command line option for starting SPSS Modeler. Using the -p argument as part of command line invocation, you can use a flow property to change a setting in the flow. - - - -Node and flow property examples - -Table 1. Node and flow property examples - - Property Meaning - - s.max_size Refers to the property max_size of the node named s. - s:samplenode.max_size Refers to the property max_size of the node named s, which must be a Sample node. - :samplenode.max_size Refers to the property max_size of the Sample node in the current flow (there must be only one Sample node). - s:sample.max_size Refers to the property max_size of the node named s, which must be a Sample node. - t.direction.Age Refers to the role of the field Age in the Type node t. - :.max_size *** NOT LEGAL *** You must specify either the node name or the node type. - - - -The example s:sample.max_size illustrates that you don't need to spell out node types in full. - -The example t.direction.Age illustrates that some slot names can themselves be structured—in cases where the attributes of a node are more complex than simply individual slots with individual values. Such slots are called structured or complex properties. -" -6601B619D597C89F715BC2FAFD703452D64F21CD,6601B619D597C89F715BC2FAFD703452D64F21CD," Syntax for properties - -You can set properties using the following syntax: - -OBJECT.setPropertyValue(PROPERTY, VALUE) - -or: - -OBJECT.setKeyedPropertyValue(PROPERTY, KEY, VALUE) - -You can retrieve the value of properties using the following syntax: - -VARIABLE = OBJECT.getPropertyValue(PROPERTY) - -or: - -VARIABLE = OBJECT.getKeyedPropertyValue(PROPERTY, KEY) - -where OBJECT is a node or output, PROPERTY is the name of the node property that your expression refers to, and KEY is the key value for keyed properties. For example, the following syntax finds the Filter node and then sets the default to include all fields and filter the Age field from downstream data: - -filternode = modeler.script.stream().findByType(""filter"", None) -filternode.setPropertyValue(""default_include"", True) -filternode.setKeyedPropertyValue(""include"", ""Age"", False) - -All nodes used in SPSS Modeler can be located using the flow function findByType(TYPE, LABEL). At least one of TYPE or LABEL must be specified. -" -6008CEE94719E6B3CAABFBA9BFF1973B9125E02F,6008CEE94719E6B3CAABFBA9BFF1973B9125E02F," Abbreviations - -Standard abbreviations are used throughout the syntax for node properties. Learning the abbreviations is helpful in constructing scripts. - - - -Standard abbreviations used throughout the syntax - -Table 1. Standard abbreviations used throughout the syntax - - Abbreviation Meaning - - abs Absolute value - len Length - min Minimum - max Maximum - correl Correlation - covar Covariance - num Number or numeric - pct Percent or percentage - transp Transparency -" -FBD84CB5A6901DDAF7412396F4C6CC190E1B7328,FBD84CB5A6901DDAF7412396F4C6CC190E1B7328," Common node properties - -A number of properties are common to all nodes in SPSS Modeler. - - - -Common node properties - -Table 1. Common node properties - - Property name Data type Property description - - use_custom_name flag - name string Read-only property that reads the name (either auto or custom) for a node on the canvas. - custom_name string Specifies a custom name for the node. - tooltip string - annotation string - keywords string Structured slot that specifies a list of keywords associated with the object (for example, [""Keyword1"" ""Keyword2""]). - cache_enabled flag - node_type source_supernode

process_supernode

terminal_supernode

all node names as specified for scripting Read-only property used to refer to a node by type. For example, instead of referring to a node only by name, such as real_income, you can also specify the type, such as userinputnode or filternode. - - - -SuperNode-specific properties are discussed separately, as with all other nodes. See [SuperNode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/defining_slot_parameters_in_supernodes.htmldefining_slot_parameters_in_supernodes) for more information. -" -6F2CB7C072A05F7BE0C6CE2ECA39FC9A1BA5E107,6F2CB7C072A05F7BE0C6CE2ECA39FC9A1BA5E107," Model nugget node properties - -Refer to this section for a list of available properties for Model nuggets. - -Model nugget nodes share the same common properties as other nodes. See [Common node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/slot_parameters_common.htmlslot_parameters_common) for more information. -" -29DCFC3FB6EE0CCBA63E0FF3A797936DA9E0C874,29DCFC3FB6EE0CCBA63E0FF3A797936DA9E0C874," Properties reference overview - -You can specify a number of different properties for nodes, flows, projects, and SuperNodes. Some properties are common to all nodes, such as name, annotation, and ToolTip, while others are specific to certain types of nodes. Other properties refer to high-level flow operations, such as caching or SuperNode behavior. Properties can be accessed through the standard user interface (for example, when you open the properties for a node) and can also be used in a number of other ways. - - - -* Properties can be modified through scripts, as described in this section. For more information, see [Syntax for properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/slot_parameter_syntax.html). -* Node properties can be used in SuperNode parameters. - - - -In the context of scripting within SPSS Modeler, node and flow properties are often called slot parameters. In this documentation, they are referred to as node properties or flow properties. - -For more information about the scripting language, see [The scripting language](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_language_overview.html). -" -F127EFF442D2C1D1A1EA01B23E8135B502EF2E79,F127EFF442D2C1D1A1EA01B23E8135B502EF2E79," smotenode properties - -![SMOTE node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/smotenodeicon.png)The Synthetic Minority Over-sampling Technique (SMOTE) node provides an over-sampling algorithm to deal with imbalanced data sets. It provides an advanced method for balancing data. The SMOTE process node in SPSS Modeler is implemented in Python and requires the imbalanced-learn© Python library. - - - -smotenode properties - -Table 1. smotenode properties - - smotenode properties Data type Property description - - target field The target field. - sample_ratio string Enables a custom ratio value. The two options are Auto (sample_ratio_auto) or Set ratio (sample_ratio_manual). - sample_ratio_value float The ratio is the number of samples in the minority class over the number of samples in the majority class. It must be larger than 0 and less than or equal to 1. Default is auto. - enable_random_seed Boolean If set to true, the random_seed property will be enabled. - random_seed integer The seed used by the random number generator. - k_neighbours integer The number of nearest neighbors to be used for constructing synthetic samples. Default is 5. - m_neighbours integer The number of nearest neighbors to be used for determining if a minority sample is in danger. This option is only enabled with the SMOTE algorithm types borderline1 and borderline2. Default is 10. -" -3259E737315294C6380ED46645AB8D073A5ED861,3259E737315294C6380ED46645AB8D073A5ED861," sortnode properties - -![Sort node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/sortnodeicon.png) The Sort node sorts records into ascending or descending order based on the values of one or more fields. - - - -sortnode properties - -Table 1. sortnode properties - - sortnode properties Data type Property description - - keys list Specifies the fields you want to sort against. If no direction is specified, the default is used. - default_ascending flag Specifies the default sort order. - use_existing_keys flag Specifies whether sorting is optimized by using the previous sort order for fields that are already sorted. -" -F3DD7962CB3AA07C8C469EDE0C7852993AC3F290,F3DD7962CB3AA07C8C469EDE0C7852993AC3F290," Import node common properties - -Properties that are common to most import nodes are listed here, with information on specific nodes in the topics that follow. - - - -Import node common properties - -Table 1. Import node common properties - - Property name Data type Property description - - asset_type DataAsset
Connection Specify your data type: DataAsset or Connection. - asset_id string When DataAsset is set for the asset_type, this is the ID of the asset. - asset_name string When DataAsset is set for the asset_type, this is the name of the asset. - connection_id string When Connection is set for the asset_type, this is the ID of the connection. - connection_name string When Connection is set for the asset_type, this is the name of the connection. -" -8F42BD98BE9767332CE949506A9E193393DA73FA,8F42BD98BE9767332CE949506A9E193393DA73FA," statisticsnode properties - -![Statistics node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/statisticsnodeicon.png)The Statistics node provides basic summary information about numeric fields. It calculates summary statistics for individual fields and correlations between fields. - - - -statisticsnode properties - -Table 1. statisticsnode properties - - statisticsnode properties Data type Property description - - use_output_name flag Specifies whether a custom output name is used. - output_name string If use_output_name is true, specifies the name to use. - output_mode ScreenFile Used to specify target location for output generated from the output node. - output_format Text (.txt) HTML (.html) Output (.cou) Used to specify the type of output. - full_filename string - examine list - correlate list - statistics [count mean sum min max range variance sdev semean median mode] - correlation_mode ProbabilityAbsolute Specifies whether to label correlations by probability or absolute value. - label_correlations flag - weak_label string - medium_label string - strong_label string - weak_below_probability number When correlation_mode is set to Probability, specifies the cutoff value for weak correlations. This must be a value between 0 and 1—for example, 0.90. - strong_above_probability number Cutoff value for strong correlations. -" -5B85770138782723E09D9ED65F8655484D03BE44_0,5B85770138782723E09D9ED65F8655484D03BE44," derive_stbnode properties - -![Space-Time-Boxes node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/stbnodeicon.png) The Space-Time-Boxes node derives Space-Time-Boxes from latitude, longitude, and timestamp fields. You can also identify frequent Space-Time-Boxes as hangouts. - - - -Space-Time-Boxes node properties - -Table 1. Space-Time-Boxes node properties - - derive_stbnode properties Data type Property description - - mode IndividualRecords
Hangouts - latitude_field field - longitude_field field - timestamp_field field - hangout_density density A single density. See densities for valid density values. - densities [density,density,..., density] Each density is a string (for example, STB_GH8_1DAY). Note that there are limits to which densities are valid. For the geohash, you can use values from GH1 to GH15. For the temporal part, you can use the following values:
EVER
1YEAR
1MONTH
1DAY
12HOURS
8HOURS
6HOURS
4HOURS
3HOURS
2HOURS
1HOUR
30MIN
15MIN
10MIN
5MIN
2MIN
1MIN
30SECS
15SECS
10SECS
5SECS
2SECS
1SEC - id_field field -" -5B85770138782723E09D9ED65F8655484D03BE44_1,5B85770138782723E09D9ED65F8655484D03BE44," qualifying_duration 1DAY
12HOURS
8HOURS
6HOURS
4HOURS
3HOURS
2HOURS
1HOUR
30MIN
15MIN
10MIN
5MIN
2MIN
1MIN
30SECS
15SECS
10SECS
5SECS
2SECS
1SECS Must be a string. - min_events integer Minimum valid integer value is 2. - qualifying_pct integer Must be in the range of 1 and 100. - add_extension_as Prefix
Suffix -" -5D193C88D3E3235EA441BB82CCEEAAE20BB3EFCC,5D193C88D3E3235EA441BB82CCEEAAE20BB3EFCC," Flow scripts - -You can use scripts to customize operations within a particular flow, and they're saved with that flow. You can specify a particular execution order for the terminal nodes within a flow. You use the flow script settings to edit the script that's saved with the current flow. - -To access the flow script settings: - - - -1. Click the Flow Properties icon on the toolbar. -2. Open the Scripting section to work with scripts for the current flow. You can also launch the Expression Builder from here by clicking the calculator icon. ![Expression Builder icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/expressionbuilder.png) - - - -You can specify whether a script does or doesn't run when the flow runs. To run the script each time the flow runs, respecting the execution order of the script, select Run the script. This setting provides automation at the flow level for quicker model building. However, the default setting is to ignore this script during flow execution ( Run all terminal nodes. -" -DA0357B0ADE596E1A23F676F76FF4304B97AEF2B,DA0357B0ADE596E1A23F676F76FF4304B97AEF2B," Jython code size limits - -Jython compiles each script to Java bytecode, which the Java Virtual Machine (JVM) then runs. However, Java imposes a limit on the size of a single bytecode file. So when Jython attempts to load the bytecode, it can cause the JVM to crash. SPSS Modeler is unable to prevent this from happening. - -Ensure that you write your Jython scripts using good coding practices (such as minimizing duplicated code by using variables or functions to compute common intermediate values). If necessary, you may need to split your code over several source files or define it using modules as these are compiled into separate bytecode files. -" -AAC6535CAB0B4600A9683433FCAB805B2C4EAA53,AAC6535CAB0B4600A9683433FCAB805B2C4EAA53," Structured properties - -There are two ways in which scripting uses structured properties for increased clarity when parsing: - - - -" -C64A69EBC1360788037B11E8B0DC5BB74D913819,C64A69EBC1360788037B11E8B0DC5BB74D913819," svmnode properties - -![SVM node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/svm_icon.png)The Support Vector Machine (SVM) node enables you to classify data into one of two groups without overfitting. SVM works well with wide data sets, such as those with a very large number of input fields. - - - -svmnode properties - -Table 1. svmnode properties - - svmnode Properties Values Property description - - all_probabilities flag - stopping_criteria 1.0E-1
1.0E-2
1.0E-3
1.0E-4
1.0E-5
1.0E-6 Determines when to stop the optimization algorithm. - regularization number Also known as the C parameter. - precision number Used only if measurement level of target field is Continuous. - kernel RBF
Polynomial
Sigmoid
Linear Type of kernel function used for the transformation. RBF is the default. - rbf_gamma number Used only if kernel is RBF. - gamma number Used only if kernel is Polynomial or Sigmoid. - bias number - degree number Used only if kernel is Polynomial. - calculate_variable_importance flag - calculate_raw_propensities flag -" -BCAE38614C57F1ABB775C4C9372DC02531830659,BCAE38614C57F1ABB775C4C9372DC02531830659," applysvmnode properties - -You can use SVM modeling nodes to generate an SVM model nugget. The scripting name of this model nugget is applysvmnode. For more information on scripting the modeling node itself, see [svmnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/svmnodeslots.htmlsvmnodeslots). - - - -applysvmnode properties - -Table 1. applysvmnode properties - - applysvmnode Properties Values Property description - - all_probabilities flag - calculate_raw_propensities flag -" -3F5D0FD7E429FEDBFC62DFC9BAB41B3CC5FB4E4F_0,3F5D0FD7E429FEDBFC62DFC9BAB41B3CC5FB4E4F," tablenode properties - -![Table node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/table_node_icon.png)The Table node displays data in table format. This is useful whenever you need to inspect your data values. - -Note: Some of the properties on this page might not be available in your platform. - - - -tablenode properties - -Table 1. tablenode properties - - tablenode properties Data type Property description - - full_filename string If disk, data, or HTML output, the name of the output file. - use_output_name flag Specifies whether a custom output name is used. - output_name string If use_output_name is true, specifies the name to use. - output_mode Screen
File Used to specify target location for output generated from the output node. - output_format Formatted (.tab)
Delimited (.csv)
HTML (.html)
Output (.cou) Used to specify the type of output. - transpose_data flag Transposes the data before export so that rows represent fields and columns represent records. - paginate_output flag When the output_format is HTML, causes the output to be separated into pages. - lines_per_page number When used with paginate_output, specifies the lines per page of output. - highlight_expr string - output string A read-only property that holds a reference to the last table built by the node. - value_labels [[Value LabelString]
[Value LabelString] ...] Used to specify labels for value pairs. - display_places integer Sets the number of decimal places for the field when displayed (applies only to fields with REAL storage). A value of –1 will use the flow default. - export_places integer Sets the number of decimal places for the field when exported (applies only to fields with REAL storage). A value of –1 will use the stream default. - decimal_separator DEFAULT
PERIOD
COMMA Sets the decimal separator for the field (applies only to fields with REAL storage). -" -3F5D0FD7E429FEDBFC62DFC9BAB41B3CC5FB4E4F_1,3F5D0FD7E429FEDBFC62DFC9BAB41B3CC5FB4E4F," date_format ""DDMMYY"" ""MMDDYY"" ""YYMMDD"" ""YYYYMMDD"" ""YYYYDDD"" DAY MONTH ""DD-MM-YY"" ""DD-MM-YYYY"" ""MM-DD-YY"" ""MM-DD-YYYY"" ""DD-MON-YY"" ""DD-MON-YYYY"" ""YYYY-MM-DD"" ""DD.MM.YY"" ""DD.MM.YYYY"" ""MM.DD.YYYY"" ""DD.MON.YY"" ""DD.MON.YYYY"" ""DD/MM/YY"" ""DD/MM/YYYY"" ""MM/DD/YY"" ""MM/DD/YYYY"" ""DD/MON/YY"" ""DD/MON/YYYY"" MON YYYY q Q YYYY ww WK YYYY Sets the date format for the field (applies only to fields with DATE or TIMESTAMP storage). - time_format ""HHMMSS""
""HHMM""
""MMSS""
""HH:MM:SS""
""HH:MM""
""MM:SS""
""(H)H:(M)M:(S)S""
""(H)H:(M)M""
""(M)M:(S)S""
""HH.MM.SS""
""HH.MM""
""MM.SS""
""(H)H.(M)M.(S)S""
""(H)H.(M)M""
""(M)M.(S)S"" Sets the time format for the field (applies only to fields with TIME or TIMESTAMP storage). -" -85C99B52BBBC96007BD819861E675C61D7B742CA_0,85C99B52BBBC96007BD819861E675C61D7B742CA," tcmnode properties - -![TCM node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/tcmnodeicon.png)Temporal causal modeling attempts to discover key causal relationships in time series data. In temporal causal modeling, you specify a set of target series and a set of candidate inputs to those targets. The procedure then builds an autoregressive time series model for each target and includes only those inputs that have the most significant causal relationship with the target. - - - -tcmnode properties - -Table 1. tcmnode properties - - tcmnode Properties Values Property description - - custom_fields Boolean - dimensionlist [dimension1 ... dimensionN] - data_struct Multiple
Single - metric_fields fields - both_target_and_input [f1 ... fN] - targets [f1 ... fN] - candidate_inputs [f1 ... fN] - forced_inputs [f1 ... fN] - use_timestamp Timestamp
Period - input_interval None
Unknown
Year
Quarter
Month
Week
Day
Hour
Hour_nonperiod
Minute
Minute_nonperiod
Second
Second_nonperiod - period_field string - period_start_value integer - num_days_per_week integer - start_day_of_week Sunday
Monday
Tuesday
Wednesday
Thursday
Friday
Saturday - num_hours_per_day integer - start_hour_of_day integer - timestamp_increments integer - cyclic_increments integer - cyclic_periods list - output_interval None
Year
Quarter
Month
Week
Day
Hour
Minute
Second - is_same_interval Same
Notsame - cross_hour Boolean -" -85C99B52BBBC96007BD819861E675C61D7B742CA_1,85C99B52BBBC96007BD819861E675C61D7B742CA," aggregate_and_distribute list - aggregate_default Mean
Sum
Mode
Min
Max - distribute_default Mean
Sum - group_default Mean
Sum
Mode
Min
Max - missing_imput Linear_interp
Series_mean
K_mean
K_meridian
Linear_trend
None - k_mean_param integer - k_median_param integer - missing_value_threshold integer - conf_level integer - max_num_predictor integer - max_lag integer - epsilon number - threshold integer - is_re_est Boolean - num_targets integer - percent_targets integer - fields_display list - series_dispaly list - network_graph_for_target Boolean - sign_level_for_target number - fit_and_outlier_for_target Boolean - sum_and_para_for_target Boolean - impact_diag_for_target Boolean - impact_diag_type_for_target Effect
Cause
Both - impact_diag_level_for_target integer - series_plot_for_target Boolean - res_plot_for_target Boolean - top_input_for_target Boolean - forecast_table_for_target Boolean - same_as_for_target Boolean - network_graph_for_series Boolean - sign_level_for_series number - fit_and_outlier_for_series Boolean - sum_and_para_for_series Boolean - impact_diagram_for_series Boolean - impact_diagram_type_for_series Effect
Cause
Both - impact_diagram_level_for_series integer - series_plot_for_series Boolean - residual_plot_for_series Boolean - forecast_table_for_series Boolean - outlier_root_cause_analysis Boolean - causal_levels integer - outlier_table Interactive
Pivot
Both - rmsp_error Boolean - bic Boolean -" -85C99B52BBBC96007BD819861E675C61D7B742CA_2,85C99B52BBBC96007BD819861E675C61D7B742CA," r_square Boolean - outliers_over_time Boolean - series_transormation Boolean - use_estimation_period Boolean - estimation_period Times
Observation - observations list - observations_type Latest
Earliest - observations_num integer - observations_exclude integer - extend_records_into_future Boolean - forecastperiods integer - max_num_distinct_values integer - display_targets FIXEDNUMBER
PERCENTAGE - goodness_fit_measure ROOTMEAN
BIC
RSQUARE - top_input_for_series Boolean - aic Boolean - rmse Boolean - date_time_field field Time/Date field - auto_detect_lag Boolean This setting specifies the number of lag terms for each input in the model for each target. - numoflags Integer By default, the number of lag terms is automatically determined from the time interval that is used for the analysis. -" -DB504727C8688251CAAB0C18E12BDE9DC625ECD1,DB504727C8688251CAAB0C18E12BDE9DC625ECD1," applytcmnode properties - -You can use Temporal Causal Modeling (TCM) modeling nodes to generate a TCM model nugget. The scripting name of this model nugget is applytcmnode. For more information on scripting the modeling node itself, see [tcmnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/tcmnodeslots.htmltcmnodeslots). - - - -applytcmnode properties - -Table 1. applytcmnode properties - - applytcmnode Properties Values Property description - - ext_future boolean - ext_future_num integer - noise_res boolean - conf_limits boolean -" -5062008D59B761C5CF7F32F131021EA81A03B048,5062008D59B761C5CF7F32F131021EA81A03B048," timeplotnode properties - -![Time Plot node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/timeplotnodeicon.png)The Time Plot node displays one or more sets of time series data. Typically, you would first use a Time Intervals node to create a TimeLabel field, which would be used to label the x axis. - - - -timeplotnode properties - -Table 1. timeplotnode properties - - timeplotnode properties Data type Property description - - plot_series SeriesModels - use_custom_x_field flag - x_field field - y_fields list - panel flag - normalize flag - line flag - points flag - point_type Rectangle
Dot
Triangle
Hexagon
Plus
Pentagon
Star
BowTie
HorizontalDash
VerticalDash
IronCross
Factory
House
Cathedral
OnionDome
ConcaveTriangleOblateGlobe
CatEye
FourSidedPillow
RoundRectangle
Fan - smoother flag You can add smoothers to the plot only if you set panel to True. - use_records_limit flag - records_limit integer - symbol_size number Specifies a symbol size. -" -76B3F98C842554781D96B8DDE05A74D4D78B4E7A_0,76B3F98C842554781D96B8DDE05A74D4D78B4E7A," ts properties - -![Time Series node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/timeseriesnodeicon.png)The Time Series node estimates exponential smoothing, univariate Autoregressive Integrated Moving Average (ARIMA), and multivariate ARIMA (or transfer function) models for time series data and produces forecasts of future performance. - - - -ts properties - -Table 1. ts properties - - ts Properties Values Property description - - targets field The Time Series node forecasts one or more targets, optionally using one or more input fields as predictors. Frequency and weight fields are not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information. - candidate_inputs [field1 ... fieldN] Input or predictor fields used by the model. - use_period flag - date_time_field field - input_interval None
Unknown
Year
Quarter
Month
Week
Day
Hour
Hour_nonperiod
Minute
Minute_nonperiod
Second
Second_nonperiod - period_field field - period_start_value integer - num_days_per_week integer - start_day_of_week Sunday
Monday
Tuesday
Wednesday
Thursday
Friday
Saturday - num_hours_per_day integer - start_hour_of_day integer - timestamp_increments integer - cyclic_increments integer - cyclic_periods list - output_interval None
Year
Quarter
Month
Week
Day
Hour
Minute
Second - is_same_interval flag - cross_hour flag -" -76B3F98C842554781D96B8DDE05A74D4D78B4E7A_1,76B3F98C842554781D96B8DDE05A74D4D78B4E7A," aggregate_and_distribute list - aggregate_default Mean
Sum
Mode
Min
Max - distribute_default Mean
Sum - group_default Mean
Sum
Mode
Min
Max - missing_imput Linear_interp
Series_mean
K_mean
K_median
Linear_trend - k_span_points integer - use_estimation_period flag - estimation_period Observations
Times - date_estimation list Only available if you use date_time_field - period_estimation list Only available if you use use_period - observations_type Latest
Earliest - observations_num integer - observations_exclude integer - method ExpertModeler
Exsmooth
Arima - expert_modeler_method ExpertModeler
Exsmooth
Arima - consider_seasonal flag - detect_outliers flag - expert_outlier_additive flag - expert_outlier_level_shift flag - expert_outlier_innovational flag - expert_outlier_level_shift flag - expert_outlier_transient flag - expert_outlier_seasonal_additive flag - expert_outlier_local_trend flag - expert_outlier_additive_patch flag - consider_newesmodels flag - exsmooth_model_type Simple
HoltsLinearTrend
BrownsLinearTrend
DampedTrend
SimpleSeasonal
WintersAdditive
WintersMultiplicative
DampedTrendAdditive
DampedTrendMultiplicative
MultiplicativeTrendAdditive
MultiplicativeSeasonal
MultiplicativeTrendMultiplicative
MultiplicativeTrend Specifies the Exponential Smoothing method. Default is Simple. -" -76B3F98C842554781D96B8DDE05A74D4D78B4E7A_2,76B3F98C842554781D96B8DDE05A74D4D78B4E7A," futureValue_type_method Compute
specify If Compute is used, the system computes the Future Values for the forecast period for each predictor.

For each predictor, you can choose from a list of functions (blank, mean of recent points, most recent value) or use specify to enter values manually. To specify individual fields and properties, use the extend_metric_values property. For example:

set :ts.futureValue_type_method=""specify"" set :ts.extend_metric_values=[{'Market_1','USER_SPECIFY', 1,2,3]}, {'Market_2','MOST_RECENT_VALUE', ''},{'Market_3','RECENT_POINTS_MEAN', ''}] - exsmooth_transformation_type None
SquareRoot
NaturalLog - arima.p integer - arima.d integer - arima.q integer - arima.sp integer - arima.sd integer - arima.sq integer - arima_transformation_type None
SquareRoot
NaturalLog - arima_include_constant flag - tf_arima.p.fieldname integer For transfer functions. - tf_arima.d.fieldname integer For transfer functions. - tf_arima.q.fieldname integer For transfer functions. - tf_arima.sp.fieldname integer For transfer functions. - tf_arima.sd.fieldname integer For transfer functions. - tf_arima.sq.fieldname integer For transfer functions. - tf_arima.delay.fieldname integer For transfer functions. - tf_arima.transformation_type.fieldname None
SquareRoot
NaturalLog For transfer functions. - arima_detect_outliers flag - arima_outlier_additive flag - arima_outlier_level_shift flag - arima_outlier_innovational flag - arima_outlier_transient flag -" -76B3F98C842554781D96B8DDE05A74D4D78B4E7A_3,76B3F98C842554781D96B8DDE05A74D4D78B4E7A," arima_outlier_seasonal_additive flag - arima_outlier_local_trend flag - arima_outlier_additive_patch flag - max_lags integer - cal_PI flag - conf_limit_pct real - events fields - continue flag - scoring_model_only flag Use for models with very large numbers (tens of thousands) of time series. - forecastperiods integer - extend_records_into_future flag - extend_metric_values fields Allows you to provide future values for predictors. - conf_limits flag - noise_res flag - max_models_output integer Controls how many models are shown in output. Default is 10. Models are not shown in output if the total number of models built exceeds this value. Models are still available for scoring. -" -EED66538A3E4854D56210AB1D6AC49016F1E40A2_0,EED66538A3E4854D56210AB1D6AC49016F1E40A2," streamingtimeseries properties - -![Streaming TS node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/timeseriesprocessnode.png)The Streaming Time Series node builds and scores time series models in one step. - - - -streamingtimeseries properties - -Table 1. streamingtimeseries properties - - streamingtimeseries properties Values Property description - - targets field The Streaming TS node forecasts one or more targets, optionally using one or more input fields as predictors. Frequency and weight fields aren't used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information. - candidate_inputs [field1 ... fieldN] Input or predictor fields used by the model. - use_period flag - date_time_field field - input_interval NoneUnknownYearQuarterMonthWeekDayHourHour_nonperiodMinuteMinute_nonperiodSecondSecond_nonperiod - period_field field - period_start_value integer - num_days_per_week integer - start_day_of_week SundayMondayTuesdayWednesdayThursdayFridaySaturday - num_hours_per_day integer - start_hour_of_day integer - timestamp_increments integer - cyclic_increments integer - cyclic_periods list - output_interval NoneYearQuarterMonthWeekDayHourMinuteSecond - is_same_interval flag - cross_hour flag - aggregate_and_distribute list - aggregate_default MeanSumModeMinMax - distribute_default MeanSum - group_default MeanSumModeMinMax - missing_imput Linear_interpSeries_meanK_meanK_medianLinear_trend - k_span_points integer - use_estimation_period flag - estimation_period ObservationsTimes -" -EED66538A3E4854D56210AB1D6AC49016F1E40A2_1,EED66538A3E4854D56210AB1D6AC49016F1E40A2," date_estimation list Only available if you use date_time_field. - period_estimation list Only available if you use use_period. - observations_type LatestEarliest - observations_num integer - observations_exclude integer - method ExpertModelerExsmoothArima - expert_modeler_method ExpertModelerExsmoothArima - consider_seasonal flag - detect_outliers flag - expert_outlier_additive flag - expert_outlier_innovational flag - expert_outlier_level_shift flag - expert_outlier_transient flag - expert_outlier_seasonal_additive flag - expert_outlier_local_trend flag - expert_outlier_additive_patch flag - consider_newesmodels flag - exsmooth_model_type SimpleHoltsLinearTrendBrownsLinearTrendDampedTrendSimpleSeasonalWintersAdditiveWintersMultiplicativeDampedTrendAdditiveDampedTrendMultiplicativeMultiplicativeTrendAdditiveMultiplicativeSeasonalMultiplicativeTrendMultiplicativeMultiplicativeTrend - futureValue_type_method Computespecify - exsmooth_transformation_type NoneSquareRootNaturalLog - arima.p integer - arima.d integer - arima.q integer - arima.sp integer - arima.sd integer - arima.sq integer - arima_transformation_type NoneSquareRootNaturalLog - arima_include_constant flag - tf_arima.p.fieldname integer For transfer functions. - tf_arima.d.fieldname integer For transfer functions. - tf_arima.q.fieldname integer For transfer functions. - tf_arima.sp.fieldname integer For transfer functions. - tf_arima.sd.fieldname integer For transfer functions. - tf_arima.sq.fieldname integer For transfer functions. - tf_arima.delay.fieldname integer For transfer functions. - tf_arima.transformation_type.fieldname NoneSquareRootNaturalLog For transfer functions. - arima_detect_outliers flag -" -EED66538A3E4854D56210AB1D6AC49016F1E40A2_2,EED66538A3E4854D56210AB1D6AC49016F1E40A2," arima_outlier_additive flag - arima_outlier_level_shift flag - arima_outlier_innovational flag - arima_outlier_transient flag - arima_outlier_seasonal_additive flag - arima_outlier_local_trend flag - arima_outlier_additive_patch flag - conf_limit_pct real - events fields - forecastperiods integer - extend_records_into_future flag - conf_limits flag - noise_res flag - max_models_output integer Specify the maximum number of models you want to include in the output. Note that if the number of models built exceeds this threshold, the models aren't shown in the output but they're still available for scoring. Default value is 10. Displaying a large number of models may result in poor performance or instability. - custom_fields boolean This option tells the node to use the field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify the following fields as required. -" -9087B2B5302FD4B7C8343C568C7C8A925544BB40,9087B2B5302FD4B7C8343C568C7C8A925544BB40," applyts properties - -You can use the Time Series modeling node to generate a Time Series model nugget. The scripting name of this model nugget is applyts. For more information on scripting the modeling node itself, see [ts properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/timeser_as_nodeslots.htmltimeser_as_nodeslots). - - - -applyts properties - -Table 1. applyts properties - - applyts Properties Values Property description - - extend_records_into_future Boolean - ext_future_num integer - compute_future_values_input Boolean - forecastperiods integer - noise_res boolean - conf_limits boolean - target_fields list -" -EA4CB9CD97FFB8C956B4F5D28D2759C0ED832BB5,EA4CB9CD97FFB8C956B4F5D28D2759C0ED832BB5," transformnode properties - -![Table node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/transformnodeicon.png)The Transform node allows you to select and visually preview the results of transformations before applying them to selected fields. - - - -transformnode properties - -Table 1. transformnode properties - - transformnode properties Data type Property description - - fields [ field1… fieldn] The fields to be used in the transformation. - formula AllSelect Indicates whether all or selected transformations should be calculated. - formula_inverse flag Indicates if the inverse transformation should be used. - formula_inverse_offset number Indicates a data offset to be used for the formula. Set as 0 by default, unless specified by user. - formula_log_n flag Indicates if the logn transformation should be used. - formula_log_n_offset number - formula_log_10 flag Indicates if the log10 transformation should be used. - formula_log_10_offset number - formula_exponential flag Indicates if the exponential transformation (e^x^) should be used. - formula_square_root flag Indicates if the square root transformation should be used. - use_output_name flag Specifies whether a custom output name is used. - output_name string If use_output_name is true, specifies the name to use. - output_mode ScreenFile Used to specify target location for output generated from the output node. - output_format HTML (.html) Output (.cou) Used to specify the type of output. - paginate_output flag When the output_format is HTML, causes the output to be separated into pages. -" -A20FCF106BA3053C247DAF57A4A396F073D1E4E2_0,A20FCF106BA3053C247DAF57A4A396F073D1E4E2," transposenode properties - -![Transpose node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/transposenodeicon.png)The Transpose node swaps the data in rows and columns so that records become fields and fields become records. - - - -transposenode properties - -Table 1. transposenode properties - - transposenode properties Data type Property description - - transpose_method enum Specifies the transpose method: Normal (normal), CASE to VAR (casetovar), or VAR to CASE (vartocase). - transposed_names PrefixRead Property for the Normal transpose method. New field names can be generated automatically based on a specified prefix, or they can be read from an existing field in the data. - prefix string Property for the Normal transpose method. - num_new_fields integer Property for the Normal transpose method. When using a prefix, specifies the maximum number of new fields to create. - read_from_field field Property for the Normal transpose method. Field from which names are read. This must be an instantiated field or an error will occur when the node is executed. - max_num_fields integer Property for the Normal transpose method. When reading names from a field, specifies an upper limit to avoid creating an inordinately large number of fields. - transpose_type NumericStringCustom Property for the Normal transpose method. By default, only continuous (numeric range) fields are transposed, but you can choose a custom subset of numeric fields or transpose all string fields instead. - transpose_fields list Property for the Normal transpose method. Specifies the fields to transpose when the Custom option is used. - id_field_name field Property for the Normal transpose method. - transpose_casetovar_idfields field Property for the CASE to VAR (casetovar) transpose method. Accepts multiple fields to be used as index fields. field1 ... fieldN -" -A20FCF106BA3053C247DAF57A4A396F073D1E4E2_1,A20FCF106BA3053C247DAF57A4A396F073D1E4E2," transpose_casetovar_columnfields field Property for the CASE to VAR (casetovar) transpose method. Accepts multiple fields to be used as column fields. field1 ... fieldN - transpose_casetovar_valuefields field Property for the CASE to VAR (casetovar) transpose method. Accepts multiple fields to be used as value fields. field1 ... fieldN - transpose_vartocase_idfields field Property for the VAR to CASE (vartocase) transpose method. Accepts multiple fields to be used as ID variable fields. field1 ... fieldN - transpose_vartocase_valfields field Property for the VAR to CASE (vartocase) transpose method. Accepts multiple fields to be used as value variable fields. field1 ... fieldN - transpose_new_field_names array New field names. -" -E01C7D12E53747C7ED71D615D7E9DCD8F17638ED_0,E01C7D12E53747C7ED71D615D7E9DCD8F17638ED," treeas properties - -![Tree-AS node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/treeASnodeicon.png)The Tree-AS node is similar to the CHAID node; however, the Tree-AS node is designed to process big data to create a single tree and displays the resulting model in the output viewer. The node generates a decision tree by using chi-square statistics (CHAID) to identify optimal splits. This use of CHAID can generate nonbinary trees, meaning that some splits have more than two branches. Target and input fields can be numeric range (continuous) or categorical. Exhaustive CHAID is a modification of CHAID that does a more thorough job of examining all possible splits but takes longer to compute. - - - -treeas properties - -Table 1. treeas properties - - treeas Properties Values Property description - - target field In the Tree-AS node, CHAID models require a single target and one or more input fields. A frequency field can also be specified. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information. - method chaidexhaustive_chaid - max_depth integer Maximum tree depth, from 0 to 20. The default value is 5. - num_bins integer Only used if the data is made up of continuous inputs. Set the number of equal frequency bins to be used for the inputs; options are: 2, 4, 5, 10, 20, 25, 50, or 100. - record_threshold integer The number of records at which the model will switch from using p-values to Effect sizes while building the tree. The default is 1,000,000; increase or decrease this in increments of 10,000. - split_alpha number Significance level for splitting. The value must be between 0.01 and 0.99. - merge_alpha number Significance level for merging. The value must be between 0.01 and 0.99. -" -E01C7D12E53747C7ED71D615D7E9DCD8F17638ED_1,E01C7D12E53747C7ED71D615D7E9DCD8F17638ED," bonferroni_adjustment flag Adjust significance values using Bonferroni method. - effect_size_threshold_cont number Set the Effect size threshold when splitting nodes and merging categories when using a continuous target. The value must be between 0.01 and 0.99. - effect_size_threshold_cat number Set the Effect size threshold when splitting nodes and merging categories when using a categorical target. The value must be between 0.01 and 0.99. - split_merged_categories flag Allow resplitting of merged categories. - grouping_sig_level number Used to determine how groups of nodes are formed or how unusual nodes are identified. - chi_square pearsonlikelihood_ratio Method used to calculate the chi-square statistic: Pearson or Likelihood Ratio - minimum_record_use use_percentageuse_absolute - min_parent_records_pc number Default value is 2. Minimum 1, maximum 100, in increments of 1. Parent branch value must be higher than child branch. - min_child_records_pc number Default value is 1. Minimum 1, maximum 100, in increments of 1. - min_parent_records_abs number Default value is 100. Minimum 1, maximum 100, in increments of 1. Parent branch value must be higher than child branch. - min_child_records_abs number Default value is 50. Minimum 1, maximum 100, in increments of 1. - epsilon number Minimum change in expected cell frequencies.. - max_iterations number Maximum iterations for convergence. - use_costs flag - costs structured Structured property. The format is a list of 3 values: the actual value, the predicted value, and the cost if that prediction is wrong. For example: tree.setPropertyValue(""costs"", [""drugA"", ""drugB"", 3.0], ""drugX"", ""drugY"", 4.0]]) - default_cost_increase nonelinearsquarecustom Only enabled for ordinal targets. Set default values in the costs matrix. -" -8EA57CA1AE730686E86FC3B2AABD71C9F8EA9823,8EA57CA1AE730686E86FC3B2AABD71C9F8EA9823," applytreeas properties - -You can use Tree-AS modeling nodes to generate a Tree-AS model nugget. The scripting name of this model nugget is applytreenas. For more information on scripting the modeling node itself, see [treeas properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/treeASnodeslots.htmltreeASnodeslots). - - - -applytreeas properties - -Table 1. applytreeas properties - - applytreeas Properties Values Property description - - calculate_conf flag This property includes confidence calculations in the generated tree. -" -3B763FFD1393292F4C3CA9D236440065B6660E8E_0,3B763FFD1393292F4C3CA9D236440065B6660E8E," twostepAS properties - -![Twostep-AS node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/twostepASnodeicon.png)TwoStep Cluster is an exploratory tool that's designed to reveal natural groupings (or clusters) within a data set that would otherwise not be apparent. The algorithm that's employed by this procedure has several desirable features that differentiate it from traditional clustering techniques, such as handling of categorical and continuous variables, automatic selection of number of clusters, and scalability. - - - -twostepAS properties - -Table 1. twostepAS properties - - twostepAS Properties Values Property description - - inputs [f1 ... fN] TwoStepAS models use a list of input fields, but no target. Weight and frequency fields are not recognized. - use_predefined_roles Boolean Default=True - use_custom_field_assignments Boolean Default=False - cluster_num_auto Boolean Default=True - min_num_clusters integer Default=2 - max_num_clusters integer Default=15 - num_clusters integer Default=5 - clustering_criterion AIC
BIC - automatic_clustering_method use_clustering_criterion_setting
Distance_jump
Minimum
Maximum - feature_importance_method use_clustering_criterion_setting
effect_size - use_random_seed Boolean - random_seed integer - distance_measure Euclidean
Loglikelihood - include_outlier_clusters Boolean Default=True - num_cases_in_feature_tree_leaf_is_less_than integer Default=10 - top_perc_outliers integer Default=5 - initial_dist_change_threshold integer Default=0 - leaf_node_maximum_branches integer Default=8 - non_leaf_node_maximum_branches integer Default=8 - max_tree_depth integer Default=3 -" -3B763FFD1393292F4C3CA9D236440065B6660E8E_1,3B763FFD1393292F4C3CA9D236440065B6660E8E," adjustment_weight_on_measurement_level integer Default=6 - memory_allocation_mb number Default=512 - delayed_split Boolean Default=True - fields_not_to_standardize [f1 ... fN] - adaptive_feature_selection Boolean Default=True - featureMisPercent integer Default=70 - coefRange number Default=0.05 - percCasesSingleCategory integer Default=95 - numCases integer Default=24 - include_model_specifications Boolean Default=True - include_record_summary Boolean Default=True - include_field_transformations Boolean Default=True - excluded_inputs Boolean Default=True - evaluate_model_quality Boolean Default=True - show_feature_importance bar chart Boolean Default=True - show_feature_importance_ word_cloud Boolean Default=True - show_outlier_clusters_interactive_table_and_chart Boolean Default=True - show_outlier_clusters_pivot_table Boolean Default=True - across_cluster_feature_importance Boolean Default=True - across_cluster_profiles_pivot_table Boolean Default=True - withinprofiles Boolean Default=True - cluster_distances Boolean Default=True - cluster_label String
Number - label_prefix String -" -356DD425AD5BE4EE255F2F95F7860B6FDFE3BCC0,356DD425AD5BE4EE255F2F95F7860B6FDFE3BCC0," applytwostepAS properties - -You can use TwoStep-AS modeling nodes to generate a TwoStep-AS model nugget. The scripting name of this model nugget is applytwostepAS. For more information on scripting the modeling node itself, see [twostepAS properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/twostep_as_nodeslots.htmltwostep_as_nodeslots). - - - -applytwostepAS Properties - -Table 1. applytwostepAS Properties - - applytwostepAS Properties Values Property description - - enable_sql_generation false
true
native When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations. -" -0B54763A8146178F9F4809DA458E4DDBD9E28B39,0B54763A8146178F9F4809DA458E4DDBD9E28B39," twostepnode properties - -![Twostep node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/twostepnodeicon.png)The TwoStep node uses a two-step clustering method. The first step makes a single pass through the data to compress the raw input data into a manageable set of subclusters. The second step uses a hierarchical clustering method to progressively merge the subclusters into larger and larger clusters. TwoStep has the advantage of automatically estimating the optimal number of clusters for the training data. It can handle mixed field types and large data sets efficiently. - - - -twostepnode properties - -Table 1. twostepnode properties - - twostepnode Properties Values Property description - - inputs [field1 ... fieldN] TwoStep models use a list of input fields, but no target. Weight and frequency fields are not recognized. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information. - standardize flag - exclude_outliers flag - percentage number - cluster_num_auto flag - min_num_clusters number - max_num_clusters number - num_clusters number - cluster_label StringNumber - label_prefix string -" -BAB82891CA84875B6EEC64974558FC838197C99A,BAB82891CA84875B6EEC64974558FC838197C99A," applytwostepnode properties - -You can use TwoStep modeling nodes to generate a TwoStep model nugget. The scripting name of this model nugget is applytwostepnode. For more information on scripting the modeling node itself, see [twostepnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/twostepnodeslots.htmltwostepnodeslots). - - - -applytwostepnode properties - -Table 1. applytwostepnode properties - - applytwostepnode Properties Values Property description - - enable_sql_generation udf
native When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations. -" -7EC3F9527921FB3F713DD6AE1D8035E6C81753C4_0,7EC3F9527921FB3F713DD6AE1D8035E6C81753C4," typenode properties - -![Type node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/typenodeicon.png)The Type node specifies field metadata and properties. For example, you can specify a measurement level (continuous, nominal, ordinal, or flag) for each field, set options for handling missing values and system nulls, set the role of a field for modeling purposes, specify field and value labels, and specify values for a field. - -Note that in some cases you may need to fully instantiate the Type node for other nodes to work correctly, such as the fields from property of the SetToFlag node. You can simply connect a Table node and run it to instantiate the fields: - -tablenode = stream.createAt(""table"", ""Table node"", 150, 50) -stream.link(node, tablenode) -tablenode.run(None) -stream.delete(tablenode) - - - -typenode properties - -Table 1. typenode properties - - typenode properties Data type Property description - - direction Input
Target
Both
None
Partition
Split
Frequency
RecordID Keyed property for field roles. - type Range
Flag
Set
Typeless
Discrete
OrderedSet
Default Measurement level of the field (previously
called the ""type"" of field). Setting type to
Default will clear any values parameter
setting, and if value_mode has the value
Specify, it will be reset to Read.
If value_mode is set to Pass or Read,
setting type will not affect value_mode.

The data types used internally differ from those visible in the type node. The correspondence is as follows: Range -> Continuous Set - > Nominal OrderedSet -> Ordinal Discrete- > Categorical. -" -7EC3F9527921FB3F713DD6AE1D8035E6C81753C4_1,7EC3F9527921FB3F713DD6AE1D8035E6C81753C4," storage Unknown
String
Integer
Real
Time
Date
Timestamp Read-only keyed property for field storage type. - check None
Nullify
Coerce
Discard
Warn
Abort Keyed property for field type and range checking. - values [value value] For continuous fields, the first value is the minimum, and the last value is the maximum. For nominal fields, specify all values. For flag fields, the first value represents false, and the last value represents true. Setting this property automatically sets the value_mode property to Specify. - value_mode Read
Pass
Read+
Current
Specify Determines how values are set. Note that you cannot set this property to Specify directly; to use specific values, set the values property. - extend_values flag Applies when value_mode is set to Read. Set to T to add newly read values to any existing values for the field. Set to F to discard existing values in favor of the newly read values. - enable_missing flag When set to T, activates tracking of missing values for the field. - missing_values [value value ...] Specifies data values that denote missing data. - range_missing flag Specifies whether a missing-value (blank) range is defined for a field. - missing_lower string When range_missing is true, specifies the lower bound of the missing-value range. - missing_upper string When range_missing is true, specifies the upper bound of the missing-value range. - null_missing flag When set to T, nulls (undefined values that are displayed as $null$ in the software) are considered missing values. - whitespace_ missing flag When set to T, values containing only white space (spaces, tabs, and new lines) are considered missing values. - description string Specifies the description for a field. - value_labels [[Value LabelString] [ Value LabelString] ...] Used to specify labels for value pairs. -" -7EC3F9527921FB3F713DD6AE1D8035E6C81753C4_2,7EC3F9527921FB3F713DD6AE1D8035E6C81753C4," display_places integer Sets the number of decimal places for the field when displayed (applies only to fields with REAL storage). A value of –1 will use the stream default. - export_places integer Sets the number of decimal places for the field when exported (applies only to fields with REAL storage). A value of –1 will use the stream default. - decimal_separator DEFAULT
PERIOD
COMMA Sets the decimal separator for the field (applies only to fields with REAL storage). - date_format ""DDMMYY"" ""MMDDYY"" ""YYMMDD"" ""YYYYMMDD"" ""YYYYDDD"" DAY MONTH ""DD-MM-YY"" ""DD-MM-YYYY"" ""MM-DD-YY"" ""MM-DD-YYYY"" ""DD-MON-YY"" ""DD-MON-YYYY"" ""YYYY-MM-DD"" ""DD.MM.YY"" ""DD.MM.YYYY"" ""MM.DD.YYYY"" ""DD.MON.YY"" ""DD.MON.YYYY"" ""DD/MM/YY"" ""DD/MM/YYYY"" ""MM/DD/YY"" ""MM/DD/YYYY"" ""DD/MON/YY"" ""DD/MON/YYYY"" MON YYYY q Q YYYY ww WK YYYY Sets the date format for the field (applies only to fields with DATE or TIMESTAMP storage). - time_format ""HHMMSS"" ""HHMM"" ""MMSS"" ""HH:MM:SS"" ""HH:MM"" ""MM:SS"" ""(H)H:(M)M:(S)S"" ""(H)H:(M)M"" ""(M)M:(S)S"" ""HH.MM.SS"" ""HH.MM"" ""MM.SS"" ""(H)H.(M)M.(S)S"" ""(H)H.(M)M"" ""(M)M.(S)S"" Sets the time format for the field (applies only to fields with TIME or TIMESTAMP storage). -" -7EC3F9527921FB3F713DD6AE1D8035E6C81753C4_3,7EC3F9527921FB3F713DD6AE1D8035E6C81753C4," number_format DEFAULT
STANDARD
SCIENTIFIC
CURRENCY Sets the number display format for the field. - standard_places integer Sets the number of decimal places for the field when displayed in standard format. A value of –1 will use the stream default. - scientific_places integer Sets the number of decimal places for the field when displayed in scientific format. A value of –1 will use the stream default. - currency_places integer Sets the number of decimal places for the field when displayed in currency format. A value of –1 will use the stream default. - grouping_symbol DEFAULT
NONE
LOCALE
PERIOD
COMMA
SPACE Sets the grouping symbol for the field. - column_width integer Sets the column width for the field. A value of –1 will set column width to Auto. - justify AUTO
CENTER
LEFT
RIGHT Sets the column justification for the field. - measure_type Range / MeasureType.RANGE
Discrete / MeasureType.DISCRETE
Flag / MeasureType.FLAG
Set / MeasureType.SET
OrderedSet / MeasureType.ORDERED_SET
Typeless / MeasureType.TYPELESS
Collection / MeasureType.COLLECTION
Geospatial / MeasureType.GEOSPATIAL This keyed property is similar to type in that it can be used to define the measurement associated with the field. What is different is that in Python scripting, the setter function can also be passed one of the MeasureType values while the getter will always return on the MeasureType values. -" -7EC3F9527921FB3F713DD6AE1D8035E6C81753C4_4,7EC3F9527921FB3F713DD6AE1D8035E6C81753C4," collection_ measure Range / MeasureType.RANGE
Flag / MeasureType.FLAG
Set / MeasureType.SET
OrderedSet / MeasureType.ORDERED_SET
Typeless / MeasureType.TYPELESS For collection fields (lists with a depth of 0), this keyed property defines the measurement type associated with the underlying values. - geo_type Point
MultiPoint
LineString
MultiLineString
Polygon
MultiPolygon For geospatial fields, this keyed property defines the type of geospatial object represented by this field. This should be consistent with the list depth of the values. - has_coordinate_ system boolean For geospatial fields, this property defines whether this field has a coordinate system - coordinate_system string For geospatial fields, this keyed property defines the coordinate system for this field. - custom_storage_ type Unknown / MeasureType.UNKNOWN
String / MeasureType.STRING
Integer / MeasureType.INTEGER
Real / MeasureType.REAL
Time / MeasureType.TIME
Date / MeasureType.DATE
Timestamp / MeasureType.TIMESTAMP
List / MeasureType.LIST This keyed property is similar to custom_storage in that it can be used to define the override storage for the field. What is different is that in Python scripting, the setter function can also be passed one of the StorageType values while the getter will always return on the StorageType values. -" -7EC3F9527921FB3F713DD6AE1D8035E6C81753C4_5,7EC3F9527921FB3F713DD6AE1D8035E6C81753C4," custom_list_ storage_type String / MeasureType.STRING
Integer / MeasureType.INTEGER
Real / MeasureType.REAL
Time / MeasureType.TIME
Date / MeasureType.DATE
Timestamp / MeasureType.TIMESTAMP For list fields, this keyed property specifies the storage type of the underlying values. - custom_list_depth integer For list fields, this keyed property specifies the depth of the field - max_list_length integer Only available for data with a measurement level of either Geospatial or Collection. Set the maximum length of the list by specifying the number of elements the list can contain. -" -0B14841AF65A8855E9D497EF05270B54B245DAF8,0B14841AF65A8855E9D497EF05270B54B245DAF8," userinputnode properties - -![User Input node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/userinputnodeicon.png)The User Input node provides an easy way to create synthetic data—either from scratch or by altering existing data. This is useful, for example, when you want to create a test dataset for modeling. - - - -userinputnode properties - -Table 1. userinputnode properties - - userinputnode properties Data type Property description - - data - names Structured slot that sets or returns a list of field names generated by the node. -" -679F2F7A79672580B5FB797D9C5280B1A83806EF,679F2F7A79672580B5FB797D9C5280B1A83806EF," Scripting overview - -This section provides high-level descriptions and examples of flow-level scripts and standalone scripts in the SPSS Modeler interface. More information on scripting language, syntax, and commands is provided in the sections that follow. - -Notes: - - - -" -B3FFE77064106EE619C664233B7B7A9ABA75C30A,B3FFE77064106EE619C664233B7B7A9ABA75C30A," webnode properties - -![Time Plot node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/webnodeicon.png)The Web node illustrates the strength of the relationship between values of two or more symbolic (categorical) fields. The graph uses lines of various widths to indicate connection strength. You might use a Web node, for example, to explore the relationship between the purchase of a set of items at an e-commerce site. - - - -webnode properties - -Table 1. webnode properties - - webnode properties Data type Property description - - use_directed_web flag - fields list - to_field field - from_fields list - true_flags_only flag - line_values AbsoluteOverallPctPctLargerPctSmaller - strong_links_heavier flag - num_links ShowMaximumShowLinksAboveShowAll - max_num_links number - links_above number - discard_links_min flag - links_min_records number - discard_links_max flag - links_max_records number - weak_below number - strong_above number - link_size_continuous flag - web_display CircularNetworkDirectedGrid - graph_background color Standard graph colors are described at the beginning of this section. - symbol_size number Specifies a symbol size. - directed_line_values AbsoluteOverallPctPctToPctFrom Specify a threshold type. -" -D2EA86E13B810569E718E9DCA4C00DA28A2E1C9A_0,D2EA86E13B810569E718E9DCA4C00DA28A2E1C9A," xgboostasnode properties - -![XGBoost-AS node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/sparkxgboostASnodeicon.png)XGBoost is an advanced implementation of a gradient boosting algorithm. Boosting algorithms iteratively learn weak classifiers and then add them to a final strong classifier. XGBoost is very flexible and provides many parameters that can be overwhelming to most users, so the XGBoost-AS node in SPSS Modeler exposes the core features and commonly used parameters. The XGBoost-AS node is implemented in Spark. - - - -xgboostasnode properties - -Table 1. xgboostasnode properties - - xgboostasnode properties Data type Property description - - target_field field List of the field names for target. - input_fields field List of the field names for inputs. - nWorkers integer The number of workers used to train the XGBoost model. Default is 1. - numThreadPerTask integer The number of threads used per worker. Default is 1. - useExternalMemory Boolean Whether to use external memory as cache. Default is false. - boosterType string The booster type to use. Available options are gbtree, gblinear, or dart. Default is gbtree. - numBoostRound integer The number of rounds for boosting. Specify a value of 0 or higher. Default is 10. - scalePosWeight Double Control the balance of positive and negative weights. Default is 1. - randomseed integer The seed used by the random number generator. Default is 0. - objectiveType string The learning objective. Possible values are reg:linear, reg:logistic, reg:gamma, reg:tweedie, rank:pairwise, binary:logistic, or multi. Note that for flag targets, only binary:logistic or multi can be used. If multi is used, the score result will show the multi:softmax and multi:softprob XGBoost objective types. Default is reg:linear. -" -D2EA86E13B810569E718E9DCA4C00DA28A2E1C9A_1,D2EA86E13B810569E718E9DCA4C00DA28A2E1C9A," evalMetric string Evaluation metrics for validation data. A default metric will be assigned according to the objective. Possible values are rmse, mae, logloss, error, merror, mlogloss, auc, ndcg, map, or gamma-deviance. Default is rmse. - lambda Double L2 regularization term on weights. Increasing this value will make the model more conservative. Specify any number 0 or greater. Default is 1. - alpha Double L1 regularization term on weights. Increasing this value will make the model more conservative. Specify any number 0 or greater. Default is 0. - lambdaBias Double L2 regularization term on bias. If the gblinear booster type is used, this lambda bias linear booster parameter is available. Specify any number 0 or greater. Default is 0. - treeMethod string If the gbtree or dart booster type is used, this tree method parameter for tree growth (and the other tree parameters that follow) is available. It specifies the XGBoost tree construction algorithm to use. Available options are auto, exact, or approx. Default is auto. - maxDepth integer The maximum depth for trees. Specify a value of 2 or higher. Default is 6. - minChildWeight Double The minimum sum of instance weight (hessian) needed in a child. Specify a value of 0 or higher. Default is 1. - maxDeltaStep Double The maximum delta step to allow for each tree's weight estimation. Specify a value of 0 or higher. Default is 0. - sampleSize Double The sub sample for is the ratio of the training instance. Specify a value between 0.1 and 1.0. Default is 1.0. - eta Double The step size shrinkage used during the update step to prevent overfitting. Specify a value between 0 and 1. Default is 0.3. - gamma Double The minimum loss reduction required to make a further partition on a leaf node of the tree. Specify any number 0 or greater. Default is 6. - colsSampleRatio Double The sub sample ratio of columns when constructing each tree. Specify a value between 0.01 and 1. Default is1. -" -D2EA86E13B810569E718E9DCA4C00DA28A2E1C9A_2,D2EA86E13B810569E718E9DCA4C00DA28A2E1C9A," colsSampleLevel Double The sub sample ratio of columns for each split, in each level. Specify a value between 0.01 and 1. Default is 1. - normalizeType string If the dart booster type is used, this dart parameter and the following three dart parameters are available. This parameter sets the normalization algorithm. Specify tree or forest. Default is tree. - sampleType string The sampling algorithm type. Specify uniform or weighted. Default is uniform. -" -80CCB2CF7A994D218D5C47BBF7F8BBB0D479E399,80CCB2CF7A994D218D5C47BBF7F8BBB0D479E399," xgboostlinearnode properties - -![XGBoost Linear node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/pythonxgboostlinearnodeicon.png)XGBoost Linear© is an advanced implementation of a gradient boosting algorithm with a linear model as the base model. Boosting algorithms iteratively learn weak classifiers and then add them to a final strong classifier. The XGBoost Linear node in SPSS Modeler is implemented in Python. - - - -xgboostlinearnode properties - -Table 1. xgboostlinearnode properties - - xgboostlinearnode properties Data type Property description - - custom_fields boolean This option tells the node to use field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify fields as required. - target field - inputs field - alpha Double The alpha linear booster parameter. Specify any number 0 or greater. Default is 0. - lambda Double The lambda linear booster parameter. Specify any number 0 or greater. Default is 1. - lambdaBias Double The lambda bias linear booster parameter. Specify any number. Default is 0. - num_boost_round integer The num boost round value for model building. Specify a value between 1 and 1000. Default is 10. - objectiveType string The objective type for the learning task. Possible values are reg:linear, reg:logistic, reg:gamma, reg:tweedie, count:poisson, rank:pairwise, binary:logistic, or multi. Note that for flag targets, only binary:logistic or multi can be used. If multi is used, the score result will show the multi:softmax and multi:softprob XGBoost objective types. -" -8672A0AEF022CD97D9E834AB2FD3A607FBDAED4D,8672A0AEF022CD97D9E834AB2FD3A607FBDAED4D," applyxgboostlinearnode properties - -XGBoost Linear nodes can be used to generate an XGBoost Linear model nugget. The scripting name of this model nugget is applyxgboostlinearnode. No other properties exist for this model nugget. For more information on scripting the modeling node itself, see [xgboostlinearnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/xgboostlinearnodeslots.htmlxboostlinearnodeslots). -" -D05D9570CD32ACCCF91588C5886A1C4F5DA56D01_0,D05D9570CD32ACCCF91588C5886A1C4F5DA56D01," xgboosttreenode properties - -![XGBoost Tree node icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/images/pythonxgboosttreenodeicon.png)XGBoost Tree© is an advanced implementation of a gradient boosting algorithm with a tree model as the base model. Boosting algorithms iteratively learn weak classifiers and then add them to a final strong classifier. XGBoost Tree is very flexible and provides many parameters that can be overwhelming to most users, so the XGBoost Tree node in SPSS Modeler exposes the core features and commonly used parameters. The node is implemented in Python. - - - -xgboosttreenode properties - -Table 1. xgboosttreenode properties - - xgboosttreenode properties Data type Property description - - custom_fields boolean This option tells the node to use field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify the fields as required. - target field The target fields. - inputs field The input fields. - tree_method string The tree method for model building. Possible values are auto, exact, or approx. Default is auto. - num_boost_round integer The num boost round value for model building. Specify a value between 1 and 1000. Default is 10. - max_depth integer The max depth for tree growth. Specify a value of 1 or higher. Default is 6. - min_child_weight Double The min child weight for tree growth. Specify a value of 0 or higher. Default is 1. - max_delta_step Double The max delta step for tree growth. Specify a value of 0 or higher. Default is 0. -" -D05D9570CD32ACCCF91588C5886A1C4F5DA56D01_1,D05D9570CD32ACCCF91588C5886A1C4F5DA56D01," objective_type string The objective type for the learning task. Possible values are reg:linear, reg:logistic, reg:gamma, reg:tweedie, count:poisson, rank:pairwise, binary:logistic, or multi. Note that for flag targets, only binary:logistic or multi can be used. If multi is used, the score result will show the multi:softmax and multi:softprob XGBoost objective types. - early_stopping Boolean Whether to use the early stopping function. Default is False. - early_stopping_rounds integer Validation error needs to decrease at least every early stopping round(s) to continue training. Default is 10. - evaluation_data_ratio Double Ration of input data used for validation errors. Default is 0.3. - random_seed integer The random number seed. Any number between 0 and 9999999. Default is 0. - sample_size Double The sub sample for control overfitting. Specify a value between 0.1 and 1.0. Default is 0.1. - eta Double The eta for control overfitting. Specify a value between 0 and 1. Default is 0.3. - gamma Double The gamma for control overfitting. Specify any number 0 or greater. Default is 6. - col_sample_ratio Double The colsample by tree for control overfitting. Specify a value between 0.01 and 1. Default is 1. - col_sample_level Double The colsample by level for control overfitting. Specify a value between 0.01 and 1. Default is 1. - lambda Double The lambda for control overfitting. Specify any number 0 or greater. Default is 1. - alpha Double The alpha for control overfitting. Specify any number 0 or greater. Default is 0. -" -116575C57D15C410AC921AEBFAF607E2F86E6C05,116575C57D15C410AC921AEBFAF607E2F86E6C05," applyxgboosttreenode properties - -You can use the XGBoost Tree node to generate an XGBoost Tree model nugget. The scripting name of this model nugget is applyxgboosttreenode. For more information on scripting the modeling node itself, see [xgboosttreenode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/xgboosttreenodeslots.htmlxboosttreenodeslots). - - - -applyxgboosttreenode properties - -Table 1. applyxgboosttreenode properties - - applyxgboosttreenode properties Data type Property description - -" -5C2F280E5C4326883F7B3623EF1B64FE4DDE7C05,5C2F280E5C4326883F7B3623EF1B64FE4DDE7C05," Select node - -You can use Select nodes to select or discard a subset of records from the data stream based on a specific condition, such as BP (blood pressure) = ""HIGH"". - -Mode. Specifies whether records that meet the condition will be included or excluded from the data stream. - - - -* Include. Select to include records that meet the selection condition. -* Discard. Select to exclude records that meet the selection condition. - - - -Condition. Displays the selection condition that will be used to test each record, which you specify using a CLEM expression. Either enter an expression in the window or use the Expression Builder by clicking the calculator (Expression Builder) button. - -If you choose to discard records based on a condition, such as the following: - -(var1='value1' and var2='value2') - -the Select node by default also discards records having null values for all selection fields. To avoid this, append the following condition to the original one: - -and not(@NULL(var1) and @NULL(var2)) - -Select nodes are also used to choose a proportion of records. Typically, you would use a different node, the Sample node, for this operation. However, if the condition you want to specify is more complex than the parameters provided, you can create your own condition using the Select node. For example, you can create a condition such as: - -BP = ""HIGH"" and random(10) <= 4 - -This will select approximately 40% of the records showing high blood pressure and pass those records downstream for further analysis. -" -CBC6BDA4EC8356F2CE95DD4548406ABEE1EC5B76,CBC6BDA4EC8356F2CE95DD4548406ABEE1EC5B76," Sequence node - -The Sequence node discovers patterns in sequential or time-oriented data, in the format bread -> cheese. The elements of a sequence are item sets that constitute a single transaction. - -For example, if a person goes to the store and purchases bread and milk and then a few days later returns to the store and purchases some cheese, that person's buying activity can be represented as two item sets. The first item set contains bread and milk, and the second one contains cheese. A sequence is a list of item sets that tend to occur in a predictable order. The Sequence node detects frequent sequences and creates a generated model node that can be used to make predictions. - -Requirements. To create a Sequence rule set, you need to specify an ID field, an optional time field, and one or more content fields. Note that these settings must be made on the Fields tab of the modeling node; they cannot be read from an upstream Type node. The ID field can have any role or measurement level. If you specify a time field, it can have any role but its storage must be numeric, date, time, or timestamp. If you do not specify a time field, the Sequence node will use an implied timestamp, in effect using row numbers as time values. Content fields can have any measurement level and role, but all content fields must be of the same type. If they are numeric, they must be integer ranges (not real ranges). - -Strengths. The Sequence node is based on the CARMA association rules algorithm, which uses an efficient two-pass method for finding sequences. In addition, the generated model node created by a Sequence node can be inserted into a data stream to create predictions. The generated model node can also generate supernodes for detecting and counting specific sequences and for making predictions based on specific sequences. -" -A447EC7366D2EB328BCE8E44A73B3A825A9B757B,A447EC7366D2EB328BCE8E44A73B3A825A9B757B," Set Globals node - -The Set Globals node scans the data and computes summary values that can be used in CLEM expressions. - -For example, you can use a Set Globals node to compute statistics for a field called age and then use the overall mean of age in CLEM expressions by inserting the function @GLOBAL_MEAN(age). -" -5CC48263B0C282CA1D65ACCB46D73D7EA3C8A665,5CC48263B0C282CA1D65ACCB46D73D7EA3C8A665," Set to Flag node - -Use the Set to Flag node to derive flag fields based on the categorical values defined for one or more nominal fields. - -For example, your dataset might contain a nominal field, BP (blood pressure), with the values High, Normal, and Low. For easier data manipulation, you might create a flag field for high blood pressure, which indicates whether or not the patient has high blood pressure. -" -82546B72EDBFB76F571CFD06A7009E01615FA054,82546B72EDBFB76F571CFD06A7009E01615FA054," Sim Eval node - -The Simulation Evaluation (Sim Eval) node is a terminal node that evaluates a specified field, provides a distribution of the field, and produces charts of distributions and correlations. - -This node is primarily used to evaluate continuous fields. It therefore compliments the evaluation chart, which is generated by an Evaluation node and is useful for evaluating discrete fields. Another difference is that the Sim Eval node evaluates a single prediction across several iterations, whereas the Evaluation node evaluates multiple predictions each with a single iteration. Iterations are generated when more than one value is specified for a distribution parameter in the Sim Gen node. - -The Sim Eval node is designed to be used with data that was obtained from the Sim Fit and Sim Gen nodes. The node can, however, be used with any other node. Any number of processing steps can be placed between the Sim Gen node and the Sim Eval node. - -Important: The Sim Eval node requires a minimum of 1000 records with valid values for the target field. -" -51389B2D808C1F7D81DF9EC75F053528AE1BC128,51389B2D808C1F7D81DF9EC75F053528AE1BC128," Sim Fit node - -The Simulation Fitting node fits a set of candidate statistical distributions to each field in the data. The fit of each distribution to a field is assessed using a goodness of fit criterion. When a Simulation Fitting node runs, a Simulation Generate node is built (or an existing node is updated). Each field is assigned its best fitting distribution. The Simulation Generate node can then be used to generate simulated data for each field. - -Although the Simulation Fitting node is a terminal node, it does not add output to the Outputs panel, or export data. - -Note: If the historical data is sparse (that is, there are many missing values), it may be difficult for the fitting component to find enough valid values to fit distributions to the data. In cases where the data is sparse, before fitting you should either remove the sparse fields if they are not required, or impute the missing values. Using the QUALITY options in the Data Audit node, you can view the number of complete records, identify which fields are sparse, and select an imputation method. If there are an insufficient number of records for distribution fitting, you can use a Balance node to increase the number of records. -" -EC10AC085BA8A12BA0D8AF2DC66ADFBE759B3183,EC10AC085BA8A12BA0D8AF2DC66ADFBE759B3183," Sim Gen node - -The Simulation Generate node provides an easy way to generate simulated data, either without historical data using user specified statistical distributions, or automatically using the distributions obtained from running a Simulation Fitting node on existing historical data. Generating simulated data is useful when you want to evaluate the outcome of a predictive model in the presence of uncertainty in the model inputs. -" -EFAE4449CEB6F88AA4545F33BD886EC3080171B4,EFAE4449CEB6F88AA4545F33BD886EC3080171B4," SLRM node - -Use the Self-Learning Response Model (SLRM) node to build a model that you can continually update, or reestimate, as a dataset grows without having to rebuild the model every time using the complete dataset. For example, this is useful when you have several products and you want to identify which one a customer is most likely to buy if you offer it to them. This model allows you to predict which offers are most appropriate for customers and the probability of the offers being accepted. - -Initially, you can build the model using a small dataset with randomly made offers and the responses to those offers. As the dataset grows, the model can be updated and therefore becomes more able to predict the most suitable offers for customers and the probability of their acceptance based upon other input fields such as age, gender, job, and income. You can change the offers available by adding or removing them from within the node, instead of having to change the target field of the dataset. - -Before running an SLRM node, you must specify both the target and target response fields in the node properties. The target field must have string storage, not numeric. The target response field must be a flag. The true value of the flag indicates offer acceptance and the false value indicates offer refusal. - -Example. A financial institution wants to achieve more profitable results by matching the offer that is most likely to be accepted to each customer. You can use a self-learning model to identify the characteristics of customers most likely to respond favorably based on previous promotions and to update the model in real time based on the latest customer responses. -" -F837935A2FEFED20E2CAC93656E376F9868CC515,F837935A2FEFED20E2CAC93656E376F9868CC515," SMOTE node - -The Synthetic Minority Over-sampling Technique (SMOTE) node provides an over-sampling algorithm to deal with imbalanced data sets. It provides an advanced method for balancing data. The SMOTE node in watsonx.ai is implemented in Python and requires the imbalanced-learn© Python library. - -For details about the imbalanced-learn library, see [imbalanced-learn documentation](https://imbalanced-learn.org/stable/index.html)^1^. - -The Modeling tab on the nodes palette contains the SMOTE node and other Python nodes. - -^1^Lemaître, Nogueira, Aridas. ""Imbalanced-learn: A Python Toolbox to Tackle the Curse of Imbalanced Datasets in Machine Learning."" Journal of Machine Learning Research, vol. 18, no. 17, 2017, pp. 1-5. (http://jmlr.org/papers/v18/16-365.html) -" -8F64225936D78B691574900D641C0CB7C3CE78EF,8F64225936D78B691574900D641C0CB7C3CE78EF," Sort node - -You can use Sort nodes to sort records into ascending or descending order based on the values of one or more fields. For example, Sort nodes are frequently used to view and select records with the most common data values. Typically, you would first aggregate the data using the Aggregate node and then use the Sort node to sort the aggregated data into descending order of record counts. Displaying these results in a table will allow you to explore the data and to make decisions, such as selecting the records of the 10 best customers. - -The following settings are available for the Sort node - -Sort by. All fields selected to use as sort keys are displayed in a table. A key field works best for sorting when it is numeric. - - - -* Add fields to this list using the Field Chooser button. -* Select an order by clicking the Ascending or Descending arrow in the table's Order column. -* Delete fields using the red delete button. -* Sort directives using the arrow buttons. - - - -Default sort order. Select either Ascending or Descending to use as the default sort order when new fields are added. - -Note: The Sort node is not applied if there is a Distinct node down the model flow. For information about the Distinct node, see [Distinct node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/distinct.htmldistinct). -" -C81BEEA067CCC7FED12806F3FF0F20519092F2E4,C81BEEA067CCC7FED12806F3FF0F20519092F2E4," Statistics node - -The Statistics node gives you basic summary information about numeric fields. You can get summary statistics for individual fields and correlations between fields. -" -2E2A2BE1CB20EF0C663E591532D71CFB5637E57F,2E2A2BE1CB20EF0C663E591532D71CFB5637E57F," Streaming TCM node - -You can use this node to build and score temporal causal models in one step. - -After adding a Streaming TCM node to your flow canvas, double-click it to open the node properties. To see information about the properties, hover over the tool-tip icons. For more information about temporal causal modeling, see [TCM node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tcm.html). -" -84D42E162FEFC977AE807AF123CEDFDF400E403A,84D42E162FEFC977AE807AF123CEDFDF400E403A," SuperNodes - -One of the reasons the SPSS Modeler visual interface is so easy to learn is that each node has a clearly defined function. However, for complex processing, a long sequence of nodes may be necessary. Eventually, this may clutter your flow canvas and make it difficult to follow flow diagrams. - -There are two ways to avoid the clutter of a long and complex flow: - - - -* You can split a processing sequence into several flows. The first flow, for example, creates a data file that the second uses as input. The second creates a file that the third uses as input, and so on. However, this requires you to manage multiple flows. -* You can create a SuperNode as a more streamlined alternative when working with complex flow processes. SuperNodes group multiple nodes into a single node by encapsulating sections of flow. This provides benefits to the data miner: - - - -* Grouping nodes results in a neater and more manageable flow. -* Nodes can be combined into a business-specific SuperNode. - - - - - -To group nodes into a SuperNode: - - - -1. Ctrl + click to select the nodes you want to group. -2. Right-click and select Create supernode. The nodes are grouped into a single SuperNode with a special star icon. - -Figure 1. SuperNode icon - -![SuperNode icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/supernodes.png) -" -8ED36D5E1CCDFB0139D9D3DB3AEA2B90AE1B405E,8ED36D5E1CCDFB0139D9D3DB3AEA2B90AE1B405E," SVM node - -The SVM node uses a support vector machine to classify data. SVM is particularly suited for use with wide datasets, that is, those with a large number of predictor fields. You can use the default settings on the node to produce a basic model relatively quickly, or you can use the Expert settings to experiment with different types of SVM models. - -After the model is built, you can: - - - -* Browse the model nugget to display the relative importance of the input fields in building the model. -* Append a Table node to the model nugget to view the model output. - - - -Example. A medical researcher has obtained a dataset containing characteristics of a number of human cell samples extracted from patients who were believed to be at risk of developing cancer. Analysis of the original data showed that many of the characteristics differed significantly between benign and malignant samples. The researcher wants to develop an SVM model that can use the values of similar cell characteristics in samples from other patients to give an early indication of whether their samples might be benign or malignant. -" -7434988303BF295C1586C5EE42100E8AF244859C_0,7434988303BF295C1586C5EE42100E8AF244859C," Reusing custom category sets - -You can customize a category set in Text Analytics Workbench and then download it to use in other SPSS Modeler flows. - -" -7434988303BF295C1586C5EE42100E8AF244859C_1,7434988303BF295C1586C5EE42100E8AF244859C," Procedure - - - -1. Optional: Customize the category set. - - - -1. Select a category to customize. -2. To add descriptors, click the Descriptors tab and then drag-and-drop from the Descriptors tab into categories to add them. - - - -2. Download the customized category set. - - - -1. From the Text Analytics Workbench, go to the Categories tab. -2. Click the Options icon and select Download category set. -3. Give the category set a name and click Download. - - - -3. Add the category set to another Text Mining node. - - - -1. In a different flow session, go to the Categories tab in the Text Analytics Workbench. -2. Click the Options icon and select Add category set. -3. Browse to or drag-and-drop your category set. -" -E6A2EF28A33AA6A8C8B2321133A8816257CD1612_0,E6A2EF28A33AA6A8C8B2321133A8816257CD1612," Reusing a project asset in Resource editor - -From the Text Analytics Workbench, you can save a template or library as a project asset. You can then use the template or library in other Text Mining nodes by loading it in the Resource editor. - -" -E6A2EF28A33AA6A8C8B2321133A8816257CD1612_1,E6A2EF28A33AA6A8C8B2321133A8816257CD1612," Procedure - - - -1. Save a library or template in Text Analytics Workbench. - - - -1. On the Resource Editor tab, select the template or library to save. -2. Click the Options icon and select Save as project asset. -3. Enter details about the asset, and click Submit. - - - -2. Load a library or template in a different Text Analytics Workbench. - - - -1. On the Resource Editor tab, open the toolbar menu for your current template or library. -2. Click the Options icon and select Load library or Change template. -" -0F58073F0D5B237C3241126E98851A9E0C912792_0,0F58073F0D5B237C3241126E98851A9E0C912792," Uploading a custom asset in a Text Mining node - -You can add a custom text analysis package (TAP) or template directly in the Text Mining node. When your SPSS Modeler flow runs, it will use your custom asset. - -" -0F58073F0D5B237C3241126E98851A9E0C912792_1,0F58073F0D5B237C3241126E98851A9E0C912792," Procedure - - - -1. If you want to download a TAP, save it locally. - - - -1. Click Text analysis package while in the Text Analytics Workbench. -2. Enter details about the asset, and then click Submit. The text analysis package is saved locally as a .tap file. - - - -2. If you want to download a template, see [Linguistic resources](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tmwb-linguistic-resource.htmltmwb-templates-intro__DownloadAssetsSteps). -3. Add the TAP or template file to another Text Mining node. - - - -1. In the Text Mining node, click Select resources. -2. Click the Text analysis package or Resource template tab depending on the asset you want. -3. Click Import , and then browse to or drag-and-drop your TAP or template. -" -8654D0CBB99EE82483F99972EF5247401EB8E8D9,8654D0CBB99EE82483F99972EF5247401EB8E8D9," Table node - -The Table node creates a table that lists the values in your data. All fields and all values in the stream are included, making this an easy way to inspect your data values or export them in an easily readable form. Optionally, you can highlight records that meet a certain condition. - -Note: Unless you are working with small datasets, we recommend that you select a subset of the data to pass into the Table node. The Table node cannot display properly when the number of records surpasses a size that can be contained in the display structure (for example, 100 million rows). -" -6B6D315FFD086296183DE20086EE752A6A2B88C8,6B6D315FFD086296183DE20086EE752A6A2B88C8," TCM node - -Use this node to create a temporal causal model (TCM). - -Temporal causal modeling attempts to discover key causal relationships in time series data. In temporal causal modeling, you specify a set of target series and a set of candidate inputs to those targets. The procedure then builds an autoregressive time series model for each target and includes only those inputs that have a causal relationship with the target. This approach differs from traditional time series modeling where you must explicitly specify the predictors for a target series. Since temporal causal modeling typically involves building models for multiple related time series, the result is referred to as a model system. - -In the context of temporal causal modeling, the term causal refers to Granger causality. A time series X is said to ""Granger cause"" another time series Y if regressing for Y in terms of past values of both X and Y results in a better model for Y than regressing only on past values of Y. - -Note: To build a temporal causal model, you need enough data points. The product uses the constraint: - -m>(L + KL + 1) - -where m is the number of data points, L is the number of lags, and K is the number of predictors. Make sure your data set is big enough so that the number of data points (m) satisfies the condition. -" -6153F9F311CD2BB2DF31C6A4A1CB76D64E36BFE6_0,6153F9F311CD2BB2DF31C6A4A1CB76D64E36BFE6," Mining for text links - -The Text Link Analysis (TLA) node adds pattern-matching technology to text mining's concept extraction in order to identify relationships between the concepts in the text data based on known patterns. These relationships can describe how a customer feels about a product, which companies are doing business together, or even the relationships between genes or pharmaceutical agents. - -![Text Link Analysis node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/ta_tla.png) - -For example, extracting your competitor’s product name may not be interesting enough to you. Using this node, you could also learn how people feel about this product, if such opinions exist in the data. The relationships and associations are identified and extracted by matching known patterns to your text data. - -You can use the TLA pattern rules inside certain resource templates shipped with Text Analytics or create/edit your own. Pattern rules are made up of macros, word lists, and word gaps to form a Boolean query, or rule, that is compared to your input text. Whenever a TLA pattern rule matches text, this text can be extracted as a TLA result and restructured as output data. - -The Text Link Analysis node offers a more direct way to identify and extract TLA pattern results from your text and then add the results to the dataset in the flow. But the Text Link Analysis node is not the only way in which you can perform text link analysis. You can also use a Text Analytics Workbench session in the Text Mining modeling node. - -In the Text Analytics Workbench, you can explore the TLA pattern results and use them as category descriptors and/or to learn more about the results using drill-down and graphs. In fact, using the Text Mining node to extract TLA results is a great way to explore and fine-tune templates to your data for later use directly in the TLA node. - -The output can be represented in up to 6 slots, or parts. - -You can find this node under the Text Analytics section of the node palette. - -" -6153F9F311CD2BB2DF31C6A4A1CB76D64E36BFE6_1,6153F9F311CD2BB2DF31C6A4A1CB76D64E36BFE6,"Requirements. The Text Link Analysis node accepts text data read into a field using an Import node. - -Strengths. The Text Link Analysis node goes beyond basic concept extraction to provide information about the relationships between concepts, as well as related opinions or qualifiers that may be revealed in the data. -" -0FAF8791603EB1A93ADC49EA8F9E5859D1E3360F,0FAF8791603EB1A93ADC49EA8F9E5859D1E3360F," Time Intervals node - -Use the Time Intervals node to specify intervals and derive a new time field for estimating or forecasting. A full range of time intervals is supported, from seconds to years. - -Use the node to derive a new time field. The new field has the same storage type as the input time field you chose. The node generates the following items: - - - -* The field specified in the node properties as the Time Field, along with the chosen prefix/suffix. By default the prefix is $TI_. -* The fields specified in the node properties as the Dimension fields. -* The fields specified in the node properties as the Fields to aggregate. - - - -You can also generate a number of extra fields, depending on the selected interval or period (such as the minute or second within which a measurement falls). -" -99675D0DDD35D743F2F0BECF008D9CBED68C0534,99675D0DDD35D743F2F0BECF008D9CBED68C0534," Time Plot node - -Time Plot nodes allow you to view one or more time series plotted over time. The series you plot must contain numeric values and are assumed to occur over a range of time in which the periods are uniform. - -Figure 1. Plotting sales of men's and women's clothing and jewelry over time - -![Plotting sales of men's and women's clothing and jewelry over time](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/graphs_timeseries_jewelry_sales.jpg) -" -AC040F5709AB00AB3ED8275862FA2328D20842B2_0,AC040F5709AB00AB3ED8275862FA2328D20842B2," Expert options - -With the Text Link Analysis (TLA) node, the extraction of text link analysis pattern results is automatically enabled. In the node's properties, the expert options include certain additional parameters that impact how text is extracted and handled. The expert parameters control the basic behavior, as well as a few advanced behaviors, of the extraction process. There are also a number of linguistic resources and options that also impact the extraction results, which are controlled by the resource template you select. - -Limit extraction to concepts with a global frequency of at least [n]. This option specifies the minimum number of times a word or phrase must occur in the text in order for it to be extracted. In this way, a value of 5 limits the extraction to those words or phrases that occur at least five times in the entire set of records or documents. - -In some cases, changing this limit can make a big difference in the resulting extraction results, and consequently, your categories. Let's say that you're working with some restaurant data and you don't increase the limit beyond 1 for this option. In this case, you might find pizza (1), thin pizza (2), spinach pizza (2), and favorite pizza (2) in your extraction results. However, if you were to limit the extraction to a global frequency of 5 or more and re-extract, you would no longer get three of these concepts. Instead you would get pizza (7), since pizza is the simplest form and this word already existed as a possible candidate. And depending on the rest of your text, you might actually have a frequency of more than seven, depending on whether there are still other phrases with pizza in the text. Additionally, if spinach pizza was already a category descriptor, you might need to add pizza as a descriptor instead to capture all of the records. For this reason, change this limit with care whenever categories have already been created. - -Note that this is an extraction-only feature; if your template contains terms (they usually do), and a term for the template is found in the text, then the term will be indexed regardless of its frequency. - -" -AC040F5709AB00AB3ED8275862FA2328D20842B2_1,AC040F5709AB00AB3ED8275862FA2328D20842B2,"For example, suppose you use a Basic Resources template that includes ""los angeles"" under the type in the Core library; if your document contains Los Angeles only once, then Los Angeles will be part of the list of concepts. To prevent this, you'll need to set a filter to display concepts occurring at least the same number of times as the value entered in the Limit extraction to concepts with a global frequency of at least [n] field. - -Accommodate punctuation errors. This option temporarily normalizes text containing punctuation errors (for example, improper usage) during extraction to improve the extractability of concepts. This option is extremely useful when text is short and of poor quality (as, for example, in open-ended survey responses, e-mail, and CRM data), or when the text contains many abbreviations. - -Accommodate spelling for a minimum word character length of [n]. This option applies a fuzzy grouping technique that helps group commonly misspelled words or closely spelled words under one concept. The fuzzy grouping algorithm temporarily strips all vowels (except the first one) and strips double/triple consonants from extracted words and then compares them to see if they're the same so that modeling and modelling would be grouped together. However, if each term is assigned to a different type, excluding the type, the fuzzy grouping technique won't be applied. - -" -AC040F5709AB00AB3ED8275862FA2328D20842B2_2,AC040F5709AB00AB3ED8275862FA2328D20842B2,"You can also define the minimum number of root characters required before fuzzy grouping is used. The number of root characters in a term is calculated by totaling all of the characters and subtracting any characters that form inflection suffixes and, in the case of compound-word terms, determiners and prepositions. For example, the term exercises is counted as 8 root characters in the form ""exercise,"" since the letter s at the end of the word is an inflection (plural form). Similarly, apple sauce counts as 10 root characters (""apple sauce"") and manufacturing of cars counts as 16 root characters (“manufacturing car”). This method of counting is only used to check whether the fuzzy grouping should be applied but doesn't influence how the words are matched. - -Note: If you find that certain words are later grouped incorrectly, you can exclude word pairs from this technique by explicitly declaring them in the Fuzzy Grouping: Exceptions section under the Advanced Resources properties. - -Extract uniterms. This option extracts single words (uniterms) as long as the word isn't already part of a compound word and if it's either a noun or an unrecognized part of speech. - -Extract nonlinguistic entities. This option extracts nonlinguistic entities, such as phone numbers, social security numbers, times, dates, currencies, digits, percentages, e-mail addresses, and HTTP addresses. You can include or exclude certain types of nonlinguistic entities in the Nonlinguistic Entities: Configuration section under the Advanced Resources properties. By disabling any unnecessary entities, the extraction engine won't waste processing time. - -Uppercase algorithm. This option extracts simple and compound terms that aren't in the built-in dictionaries as long as the first letter of the term is in uppercase. This option offers a good way to extract most proper nouns. - -" -AC040F5709AB00AB3ED8275862FA2328D20842B2_3,AC040F5709AB00AB3ED8275862FA2328D20842B2,"Group partial and full person names together when possible. This option groups names that appear differently in the text together. This feature is helpful since names are often referred to in their full form at the beginning of the text and then only by a shorter version. This option attempts to match any uniterm with the type to the last word of any of the compound terms that is typed as . For example, if doe is found and initially typed as , the extraction engine checks to see if any compound terms in the type include doe as the last word, such as john doe. This option doesn't apply to first names since most are never extracted as uniterms. - -Maximum nonfunction word permutation. This option specifies the maximum number of nonfunction words that can be present when applying the permutation technique. This permutation technique groups similar phrases that differ from each other only by the nonfunction words (for example, of and the) contained, regardless of inflection. For example, let's say that you set this value to—at most—two words, and both company officials and officials of the company were extracted. In this case, both extracted terms would be grouped together in the final concept list since both terms are deemed to be the same when of the is ignored. - -Use derivation when grouping multiterms. When processing Big Data, select this option to group multiterms by using derivation rules. -" -EFD36F1BF92225311B684D6AA0D05A597F00D707,EFD36F1BF92225311B684D6AA0D05A597F00D707," TLA node output - -After running a Text Link Analysis node, the data is restructured. It's important to understand the way text mining restructures your data. - -If you desire a different structure for data mining, you can use nodes on the Field Operations palette to accomplish this. For example, if you're working with data in which each row represents a text record, then one row is created for each pattern uncovered in the source text data. For each row in the output, there are 15 fields: - - - -* Six fields ( Concept#, such as Concept1, Concept2, ..., and Concept6) represent any concepts found in the pattern match -* Six fields ( Type#, such as Type1, Type2, ..., and Type6) represent the type for each concept -* Rule Name represents the name of the text link rule used to match the text and produce the output -" -B2250C2A2E20F6F123C6D1091BFD635DC74EE4FE,B2250C2A2E20F6F123C6D1091BFD635DC74EE4FE," Linguistic resources - -SPSS Modeler uses an extraction process that relies on linguistic resources. These resources serve as the basis for how to process the text data and extract information to get the concepts, types, and sometimes patterns. - -The linguistic resources can be divided into different types: - -Category sets -: Categories are a group of closely related ideas and patterns that the text data is assigned to through a scoring process. - -Libraries -: Libraries are used as building blocks for both TAPs and templates. Each library is made up of several dictionaries, which are used to define and manage terms, synonyms, and exclude lists. While libraries are also delivered individually, they are prepackaged together in templates and TAPs. - -Templates -: Templates are made up of a set of libraries and some advanced linguistic and nonlinguistic resources. These resources form a specialized set that is adapted to a particular domain or context, such as product opinions. - -Text analysis packages (TAP) -: A text analysis package is a predefined template that is bundled with one or more sets of predefined category sets. TAPs bundle together these resources so that the categories and the resources that were used to generate them are both stored together and reusable. - -Note: During extraction, some compiled internal resources are also used. These compiled resources contain many definitions that complement the types in the Core library. These compiled resources cannot be edited. -" -05275F4EC521878B13AD7DCE825E167B2FC7EF93_0,05275F4EC521878B13AD7DCE825E167B2FC7EF93," Advanced frequency settings - -You can build categories based on a straightforward and mechanical frequency technique. With this technique, you can build one category for each item (type, concept, or pattern) that was found to be higher than a given record or document count. Additionally, you can build a single category for all of the less frequently occurring items. By count, we refer to the number of records or documents containing the extracted concept (and any of its synonyms), type, or pattern in question as opposed to the total number of occurrences in the entire text. - -Grouping frequently occurring items can yield interesting results, since it may indicate a common or significant response. The technique is very useful on the unused extraction results after other techniques have been applied. Another application is to run this technique immediately after extraction when no other categories exist, edit the results to delete uninteresting categories, and then extend those categories so that they match even more records or documents. - -Instead of using this technique, you could sort the concepts or concept patterns by descending number of records or documents in the extraction results pane and then drag-and-drop the ones with the most records into the categories pane to create the corresponding categories. - -The following advanced settings are available for the Use frequencies to build categories option in the category settings. - -Generate category descriptors at. Select the kind of input for descriptors. - - - -* Concepts level. Selecting this option means that concepts or concept patterns frequencies will be used. Concepts will be used if types were selected as input for category building and concept patterns are used, if type patterns were selected. In general, applying this technique to the concept level will produce more specific results, since concepts and concept patterns represent a lower level of measurement. -* Types level. Selecting this option means that type or type patterns frequencies will be used. Types will be used if types were selected as input for category building and type patterns are used, if type patterns were selected. By applying this technique to the type level, you can get a quick view of the kind of information given. - - - -" -05275F4EC521878B13AD7DCE825E167B2FC7EF93_1,05275F4EC521878B13AD7DCE825E167B2FC7EF93,"Minimum record/doc. count for items to have their own category. With this option, you can build categories from frequently occurring items. This option restricts the output to only those categories containing a descriptor that occurred in at least X number of records or documents, where X is the value to enter for this option. - -Group all remaining items into a category called. Use this option if you want to group all concepts or types occurring infrequently into a single catch-all category with the name of your choice. By default, this category is named Other. - -Category input. Select the group to which to apply the techniques: - - - -* Unused extraction results. This option enables categories to be built from extraction results that aren't used in any existing categories. This minimizes the tendency for records to match multiple categories and limits the number of categories produced. -* All extraction results. This option enables categories to be built using any of the extraction results. This is most useful when no or few categories already exist. - - - -Resolve duplicate category names by. Select how to handle any new categories or subcategories whose names would be the same as existing categories. You can either merge the new ones (and their descriptors) with the existing categories with the same name, or you can choose to skip the creation of any categories if a duplicate name is found in the existing categories. -" -A1365CD1E2ACBEE6E9BF025DD493FEB17A0D428F,A1365CD1E2ACBEE6E9BF025DD493FEB17A0D428F," Advanced linguistic settings - -When you build categories, you can select from a number of advanced linguistic category building techniques such as concept inclusion and semantic networks (English text only). These techniques can be used individually or in combination with each other to create categories. - -Keep in mind that because every dataset is unique, the number of methods and the order in which you apply them may change over time. Since your text mining goals may be different from one set of data to the next, you may need to experiment with the different techniques to see which one produces the best results for the given text data. None of the automatic techniques will perfectly categorize your data; therefore we recommend finding and applying one or more automatic techniques that work well with your data. - -The following advanced settings are available for the Use linguistic techniques to build categories option in the category settings. -" -D171FCF10D8A1699FD8AC67E44053BBF6405631C,D171FCF10D8A1699FD8AC67E44053BBF6405631C," The Concepts tab - -In the Text Analytics Workbench, you can use the Concepts tab to create and explore concepts as well as explore and tweak the extraction results. - -Concepts are the most basic level of extraction results available to use as building blocks, called descriptors, for your categories. Categories are a group of closely related ideas and patterns to which documents and records are assigned through a scoring process. - -Text mining is an iterative process in which extraction results are reviewed according to the context of the text data, fine-tuned to produce new results, and then reevaluated. Extraction results can be refined by modifying the linguistic resources. To simplify the process of fine-tuning your linguistic resources, you can perform common dictionary tasks directly from the Concepts tab. You can fine-tune other linguistic resources directly from the Resource editor tab. - -Figure 1. Concepts tab - -![Concepts tab](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tmwb_conceptview.png) -" -6068B2555E5014D386397335D0ED56B430082FF7,6068B2555E5014D386397335D0ED56B430082FF7," The Resource editor tab - -Text Analytics rapidly and accurately captures key concepts from text data by using an extraction process. This process relies on linguistic resources to dictate how large amounts of unstructured, textual data should be analyzed and interpreted. - -You can use the Resource editor tab to view the linguistic resources used in the extraction process. These resources are stored in the form of templates and libraries, which are used to extract concepts, group them under types, discover patterns in the text data, and other processes. Text Analytics offers several preconfigured resource templates, and in some languages, you can also use the resources in text analysis packages. - -Figure 1. Resource editor tab - -![Resource editor tab in the Text Analytics Workbench](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tmwb_resourceeditor.png) -" -342AD3ABFEECA87987ED595047CC869E15F148BF,342AD3ABFEECA87987ED595047CC869E15F148BF," Generating a model nugget - -When you're working in the Text Analytics Workbench, you may want to use the work you've done to generate a category model nugget. - -A model generated from a Text Analytics Workbench session is a category model nugget. You must first have at least one category before you can generate a category model nugget. -" -7FE671DB2B6972A1CFB04E0902F8D82DC979D42A,7FE671DB2B6972A1CFB04E0902F8D82DC979D42A," Text Analytics Workbench - -From a Text Mining modeling node, you can choose to launch an interactive Text Analytics Workbench session when your flow runs. In this workbench, you can extract key concepts from your text data, build categories, explore patterns in text link analysis, and generate category models. - -You can use the Text Analytics Workbench to explore the results and tune the configuration for the node. - -Concepts -: Concepts are the key words and phrases identified and extracted from your text data, also referred to as extraction results. These concepts are grouped into types. You can use these concepts to explore your data and create your categories. You can manage the concepts on the Concepts tab. - -Text links -: If you have text link analysis (TLA) pattern rules in your linguistic resources or are using a resource template that already has some TLA rules, you can extract patterns from your text data. These patterns can help you uncover interesting relationships between concepts in your data. You can also use these patterns as descriptors in your categories. You can manage these on the Text links tab. - -Categories -: Using descriptors (such as extraction results, patterns, and rules) as a definition, you can manually or automatically create a set of categories. Documents and records are assigned to these categories based on whether or not they contain a part of the category definition. You can manage categories on the Categories tab. - -Resources -: The extraction process relies on a set of parameters and definitions from linguistic resources to govern how text is extracted and handled. These are managed in the form of templates and libraries on the Resource editor tab. - -Figure 1. Text Analytics Workbench - -![Text Analytics Workbench](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/ta_taw.png) -" -925108D09CFC6F2B5193D0D7414BFC83748111A9,925108D09CFC6F2B5193D0D7414BFC83748111A9," Setting options - -You can access settings in various panes of the Text Analytics Workbench, such as extraction settings for concepts. - -On the Concepts, Text links, and Categories tabs, categories are built from descriptors derived from either types or type patterns. In the table, you can select the individual types or patterns to include in the category building process. A description of all settings on each tab follows. -" -31A670D6B3F0D7AB4EAD7DAE3795589F161249DE,31A670D6B3F0D7AB4EAD7DAE3795589F161249DE," The Categories tab - -In the Text Analytics Workbench, you can use the Categories tab to create and explore categories as well as tweak the extraction results. - -Extraction results can be refined by modifying the linguistic resources, which you can do directly from the Categories tab. - -Figure 1. Categories tab - -![Categories tab](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tmwb_categoryview.png) -" -799CE322C90ECAD9CC4BACAD45F9749EC21E912E,799CE322C90ECAD9CC4BACAD45F9749EC21E912E," The Text links tab - -On the Text links tab, you can build and explore text link analysis patterns found in your text data. Text link analysis (TLA) is a pattern-matching technology that enables you to define TLA rules and compare them to actual extracted concepts and relationships found in your text. - -Patterns are most useful when you are attempting to discover relationships between concepts or opinions about a particular subject. Some examples include wanting to extract opinions on products from survey data, genomic relationships from within medical research papers, or relationships between people or places from intelligence data. - -After you've extracted some TLA patterns, you can explore them and even add them to categories. To extract TLA results, there must be some TLA rules defined in the resource template or libraries you're using. - -With no type patterns selected, you can click the Settings icon to change the extraction settings. For details, see [Setting options](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tmwb_intro_options.html). You can also click the Filter icon to filter the type patterns that are displayed - -Figure 1. Text links view - -![Text links view](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tmwb_tlaview.png) -" -BE6A4C0BB6BCC7166FF88D60FD433C220962730D,BE6A4C0BB6BCC7166FF88D60FD433C220962730D," Transform node - -Normalizing input fields is an important step before using traditional scoring techniques such as regression, logistic regression, and discriminant analysis. These techniques carry assumptions about normal distributions of data that may not be true for many raw data files. One approach to dealing with real-world data is to apply transformations that move a raw data element toward a more normal distribution. In addition, normalized fields can easily be compared with each other—for example, income and age are on totally different scales in a raw data file but, when normalized, the relative impact of each can be easily interpreted. - -The Transform node provides an output viewer that enables you to perform a rapid visual assessment of the best transformation to use. You can see at a glance whether variables are normally distributed and, if necessary, choose the transformation you want and apply it. You can pick multiple fields and perform one transformation per field. - -After selecting the preferred transformations for the fields, you can generate Derive or Filler nodes that perform the transformations and attach these nodes to the flow. The Derive node creates new fields, while the Filler node transforms the existing ones. -" -22B8136F68AC74838B9C2B9EAF3996CCFAA14921,22B8136F68AC74838B9C2B9EAF3996CCFAA14921," Transpose node - -By default, columns are fields and rows are records or observations. If necessary, you can use a Transpose node to swap the data in rows and columns so that fields become records and records become fields. - -For example, if you have time series data where each series is a row rather than a column, you can transpose the data prior to analysis. -" -015755C65C274F262396747D3F32A59AE74C08D7,015755C65C274F262396747D3F32A59AE74C08D7," Tree-AS node - -The Tree-AS node can be used with data in a distributed environment. With this node, you can choose to build decision trees using either a CHAID or Exhaustive CHAID model. - -CHAID, or Chi-squared Automatic Interaction Detection, is a classification method for building decision trees by using chi-square statistics to identify optimal splits. - -CHAID first examines the crosstabulations between each of the input fields and the outcome, and tests for significance using a chi-square independence test. If more than one of these relations is statistically significant, CHAID will select the input field that is the most significant (smallest p value). If an input has more than two categories, these are compared, and categories that show no differences in the outcome are collapsed together. This is done by successively joining the pair of categories showing the least significant difference. This category-merging process stops when all remaining categories differ at the specified testing level. For nominal input fields, any categories can be merged; for an ordinal set, only contiguous categories can be merged. - -Exhaustive CHAID is a modification of CHAID that does a more thorough job of examining all possible splits for each predictor but takes longer to compute. - -Requirements. Target and input fields can be continuous or categorical; nodes can be split into two or more subgroups at each level. Any ordinal fields used in the model must have numeric storage (not string). If necessary, use the Reclassify node to convert them. - -Strengths. CHAID can generate nonbinary trees, meaning that some splits have more than two branches. It therefore tends to create a wider tree than the binary growing methods. CHAID works for all types of inputs, and it accepts both case weights and frequency variables. -" -18C44D2A29B576F708BC515CEDE91227B6B4FC4E_0,18C44D2A29B576F708BC515CEDE91227B6B4FC4E," Time Series node - -The Time Series node can be used with data in either a local or distributed environment. With this node, you can choose to estimate and build exponential smoothing, univariate Autoregressive Integrated Moving Average (ARIMA), or multivariate ARIMA (or transfer function) models for time series, and produce forecasts based on the time series data. - -Exponential smoothing is a method of forecasting that uses weighted values of previous series observations to predict future values. As such, exponential smoothing is not based on a theoretical understanding of the data. It forecasts one point at a time, adjusting its forecasts as new data come in. The technique is useful for forecasting series that exhibit trend, seasonality, or both. You can choose from various exponential smoothing models that differ in their treatment of trend and seasonality. - -ARIMA models provide more sophisticated methods for modeling trend and seasonal components than do exponential smoothing models, and, in particular, they allow the added benefit of including independent (predictor) variables in the model. This involves explicitly specifying autoregressive and moving average orders as well as the degree of differencing. You can include predictor variables and define transfer functions for any or all of them, as well as specify automatic detection of outliers or an explicit set of outliers. - -Note: In practical terms, ARIMA models are most useful if you want to include predictors that might help to explain the behavior of the series that is being forecast, such as the number of catalogs that are mailed or the number of hits to a company web page. Exponential smoothing models describe the behavior of the time series without attempting to understand why it behaves as it does. For example, a series that historically peaks every 12 months will probably continue to do so even if you don't know why. - -" -18C44D2A29B576F708BC515CEDE91227B6B4FC4E_1,18C44D2A29B576F708BC515CEDE91227B6B4FC4E,"An Expert Modeler option is also available, which attempts to automatically identify and estimate the best-fitting ARIMA or exponential smoothing model for one or more target variables, thus eliminating the need to identify an appropriate model through trial and error. If in doubt, use the Expert Modeler option. - -If predictor variables are specified, the Expert Modeler selects those variables that have a statistically significant relationship with the dependent series for inclusion in ARIMA models. Model variables are transformed where appropriate using differencing and/or a square root or natural log transformation. By default, the Expert Modeler considers all exponential smoothing models and all ARIMA models and picks the best model among them for each target field. You can, however, limit the Expert Modeler only to pick the best of the exponential smoothing models or only to pick the best of the ARIMA models. You can also specify automatic detection of outliers. -" -A5D736B45EC8EC0B906E183DE5DAA8BFA4C1F2D6,A5D736B45EC8EC0B906E183DE5DAA8BFA4C1F2D6," Streaming Time Series node - -You use the Streaming Time Series node to build and score time series models in one step. A separate time series model is built for each target field, however model nuggets are not added to the generated models palette and the model information cannot be browsed. - -Methods for modeling time series data require a uniform interval between each measurement, with any missing values indicated by empty rows. If your data does not already meet this requirement, you need to transform values as needed. - -Other points of interest regarding Time Series nodes: - - - -* Fields must be numeric. -* Date fields cannot be used as inputs. -* Partitions are ignored. - - - -The Streaming Time Series node estimates exponential smoothing, univariate Autoregressive Integrated Moving Average (ARIMA), and multivariate ARIMA (or transfer function) models for time series and produces forecasts based on the time series data. Also available is an Expert Modeler, which attempts to automatically identify and estimate the best-fitting ARIMA or exponential smoothing model for one or more target fields. -" -94FE9993A8201BDBD9D383CC4CC4CA4F2DDDB47D,94FE9993A8201BDBD9D383CC4CC4CA4F2DDDB47D," TwoStep cluster node - -The TwoStep Cluster node provides a form of cluster analysis. It can be used to cluster the dataset into distinct groups when you don't know what those groups are at the beginning. As with Kohonen nodes and K-Means nodes, TwoStep Cluster models do not use a target field. Instead of trying to predict an outcome, TwoStep Cluster tries to uncover patterns in the set of input fields. Records are grouped so that records within a group or cluster tend to be similar to each other, but records in different groups are dissimilar. - -TwoStep Cluster is a two-step clustering method. The first step makes a single pass through the data, during which it compresses the raw input data into a manageable set of subclusters. The second step uses a hierarchical clustering method to progressively merge the subclusters into larger and larger clusters, without requiring another pass through the data. Hierarchical clustering has the advantage of not requiring the number of clusters to be selected ahead of time. Many hierarchical clustering methods start with individual records as starting clusters and merge them recursively to produce ever larger clusters. Though such approaches often break down with large amounts of data, TwoStep's initial preclustering makes hierarchical clustering fast even for large datasets. - -Note: The resulting model depends to a certain extent on the order of the training data. Reordering the data and rebuilding the model may lead to a different final cluster model. - -Requirements. To train a TwoStep Cluster model, you need one or more fields with the role set to Input. Fields with the role set to Target, Both, or None are ignored. The TwoStep Cluster algorithm does not handle missing values. Records with blanks for any of the input fields will be ignored when building the model. - -Strengths. TwoStep Cluster can handle mixed field types and is able to handle large datasets efficiently. It also has the ability to test several cluster solutions and choose the best, so you don't need to know how many clusters to ask for at the outset. TwoStep Cluster can be set to automatically exclude outliers, or extremely unusual cases that can contaminate your results. -" -B7E56BEBF29F9AA59A9ABC9E299F19613E5859DA,B7E56BEBF29F9AA59A9ABC9E299F19613E5859DA," TwoStep-AS cluster node - -TwoStep Cluster is an exploratory tool that is designed to reveal natural groupings (or clusters) within a data set that would otherwise not be apparent. The algorithm that is employed by this procedure has several desirable features that differentiate it from traditional clustering techniques. - - - -* Handling of categorical and continuous variables. By assuming variables to be independent, a joint multinomial-normal distribution can be placed on categorical and continuous variables. -* Automatic selection of number of clusters. By comparing the values of a model-choice criterion across different clustering solutions, the procedure can automatically determine the optimal number of clusters. -* Scalability. By constructing a cluster feature (CF) tree that summarizes the records, the TwoStep algorithm can analyze large data files. - - - -For example, retail and consumer product companies regularly apply clustering techniques to information that describes their customers' buying habits, gender, age, income level, and other attributes. These companies tailor their marketing and product development strategies to each consumer group to increase sales and build brand loyalty. -" -A967430DA16338281405CF73A802C233911B6A13_0,A967430DA16338281405CF73A802C233911B6A13," Type node - -You can specify field properties in a Type node. - -The following main properties are available. - - - -* Field. Specify value and field labels for data in watsonx.ai. For example, field metadata imported from a data asset can be viewed or modified here. Similarly, you can create new labels for fields and their values. -* Measure. This is the measurement level, used to describe characteristics of the data in a given field. If all the details of a field are known, it's called fully instantiated. Note: The measurement level of a field is different from its storage type, which indicates whether the data is stored as strings, integers, real numbers, dates, times, timestamps, or lists. -* Role. Used to tell modeling nodes whether fields will be Input (predictor fields) or Target (predicted fields) for a machine-learning process. Both and None are also available roles, along with Partition, which indicates a field used to partition records into separate samples for training, testing, and validation. The value Split specifies that separate models will be built for each possible value of the field. -* Value mode. Use this column to specify options for reading data values from the dataset, or use the Specify option to specify measurement levels and values. -* Values. With this column, you can specify options for reading data values from the data set, or specify measurement levels and values separately. You can also choose to pass fields without reading their values. You can't amend the cell in this column if the corresponding Field entry contains a list. -* Check. With this column, you can set options to ensure that field values conform to the specified values or ranges. You can't amend the cell in this column if the corresponding Field entry contains a list. - - - -Click the Edit (gear) icon next to each row to open additional options. - -Tip: Icons in the Type node properties quickly indicate the data type of each field, such as string, date, double integer, or hashtag. - -" -A967430DA16338281405CF73A802C233911B6A13_1,A967430DA16338281405CF73A802C233911B6A13,"Figure 1. New Type node icons - -![New Type node icons](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/spss_typenode_icons.png) -" -5F584AEED890D6EFB4C9FAF133A26BD9F9E4F219,5F584AEED890D6EFB4C9FAF133A26BD9F9E4F219," Checking type values - -Turning on the Check option for each field examines all values in that field to determine whether they comply with the current type settings or the values that you've specified. This is useful for cleaning up datasets and reducing the size of a dataset within a single operation. - -The Check column in the Type node determines what happens when a value outside of the type limits is discovered. To change the check settings for a field, use the drop-down list for that field in the Check column. To set the check settings for all fields, select the check box for the top-level Field column heading. Then use the top-level drop-down above the Check column. - -The following check options are available: - -None. Values will be passed through without checking. This is the default setting. - -Nullify. Change values outside of the limits to the system null ($null$). - -Coerce. Fields whose measurement levels are fully instantiated will be checked for values that fall outside the specified ranges. Unspecified values will be converted to a legal value for that measurement level using the following rules: - - - -* For flags, any value other than the true and false value is converted to the false value -* For sets (nominal or ordinal), any unknown value is converted to the first member of the set's values -* Numbers greater than the upper limit of a range are replaced by the upper limit -* Numbers less than the lower limit of a range are replaced by the lower limit -* Null values in a range are given the midpoint value for that range - - - -Discard. When illegal values are found, the entire record is discarded. - -Warn. The number of illegal items is counted and reported in the flow properties dialog when all of the data has been read. - -Abort. The first illegal value encountered terminates the running of the flow. The error is reported in the flow properties dialog. -" -916C197A1A18FBE44382A30782B1FF7C13DBFEEC,916C197A1A18FBE44382A30782B1FF7C13DBFEEC," Converting continuous data - -Treating categorical data as continuous can have a serious impact on the quality of a model, especially if it's the target field (for example, producing a regression model rather than a binary model). To prevent this, you can convert integer ranges to categorical types such as Ordinal or Flag. - - - -1. Double-click a Type node to open its properties. Expand the Type Operations section. -2. Specify a value for Set continuous integer field to ordinal if range less than or equal to. -" -8F5EA4DC23CAEE3B6887B07AE9D319BFE5E39CA8,8F5EA4DC23CAEE3B6887B07AE9D319BFE5E39CA8," Setting field format options - -With the FORMAT settings in the Type and Table nodes you can specify formatting options for current or unused fields. - -Under each formatting type, click Add Columns and add one or more fields. The field name and format setting will be displayed for each field you select. Then click the gear icon to specify formatting options. - -The following formatting options are available on a per-field basis: - -Date format. Select a date format to use for date storage fields or when strings are interpreted as dates by CLEM date functions. - -Time format. Select a time format to use for time storage fields or when strings are interpreted as times by CLEM time functions. - -Number format. You can choose from standard (.), scientific (.E+), or currency display formats ($.). - -Decimal symbol. Select either a comma (,) or period (.) as the decimal separator. - -Number grouping symbol. For number display formats, select the symbol used to group values (for example, the comma in 3,000.00). Options include none, period, comma, space, and locale-defined (in which case the default for the current locale is used). - -Decimal places (standard, scientific, currency, export). For number display formats, specify the number of decimal places to use when displaying real numbers. This option is specified separately for each display format. Note that the Export decimal places setting only applies to flat file exports. The number of decimal places exported by the XML Export node is always 6. - -Justify. Specifies how the values should be justified within the column. The default setting is Auto, which left-justifies symbolic values and right-justifies numeric values. You can override the default by selecting left, right, or center. - -Column width. By default, column widths are automatically calculated based on the values of the field. You can specify a custom width, if needed. -" -7292DE7C0036B9064A85D1DA77A860BD989EA638_0,7292DE7C0036B9064A85D1DA77A860BD989EA638," Setting the field role - -A field's role controls how it's used in model building—for example, whether a field is an input or target (the thing being predicted). - -Note: The Partition, Frequency, and Record ID roles can each be applied to a single field only. - -The following roles are available: - -Input. The field is used as an input to machine learning (a predictor field). - -Target. The field is used as an output or target for machine learning (one of the fields that the model will try to predict). - -Both. The field is used as both an input and an output by the Apriori node. All other modeling nodes will ignore the field. - -None. The field is ignored by machine learning. Fields whose measurement level is set to Typeless are automatically set to None in the Role column. - -Partition. Indicates a field used to partition the data into separate samples for training, testing, and (optional) validation purposes. The field must be an instantiated set type with two or three possible values (as defined in the advanced settings by clicking the gear icon). The first value represents the training sample, the second represents the testing sample, and the third (if present) represents the validation sample. Any additional values are ignored, and flag fields can't be used. Note that to use the partition in an analysis, partitioning must be enabled in the node settings of the appropriate model-building or analysis node. Records with null values for the partition field are excluded from the analysis when partitioning is enabled. If you defined multiple partition fields in the flow, you must specify a single partition field in the node settings for each applicable modeling node. If a suitable field doesn't already exist in your data, you can create one using a Partition node or Derive node. See [Partition node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/partition.html) for more information. - -Split. (Nominal, ordinal, and flag fields only.) Specifies that a model is built for each possible value of the field. - -" -7292DE7C0036B9064A85D1DA77A860BD989EA638_1,7292DE7C0036B9064A85D1DA77A860BD989EA638,"Frequency. (Numeric fields only.) Setting this role enables the field value to be used as a frequency weighting factor for the record. This feature is supported by C&R Tree, CHAID, QUEST, and Linear nodes only; all other nodes ignore this role. Frequency weighting is enabled by means of the Use frequency weight option in the node settings of those modeling nodes that support the feature. - -Record ID. The field is used as the unique record identifier. This feature is ignored by most nodes; however, it's supported by Linear models. -" -B8C3B95FC688C347D679F81711781B29578CFC19,B8C3B95FC688C347D679F81711781B29578CFC19," Viewing and setting information about types - -From the Type node, you can specify field metadata and properties that are invaluable to modeling and other work. - -These properties include: - - - -* Specifying a usage type, such as range, set, ordered set, or flag, for each field in your data -* Setting options for handling missing values and system nulls -* Setting the role of a field for modeling purposes -" -9F878A46B28C19B951157A5F31BB7A1A9920A89E,9F878A46B28C19B951157A5F31BB7A1A9920A89E," What is instantiation? - -Instantiation is the process of reading or specifying information, such as storage type and values for a data field. To optimize system resources, instantiating is a user-directed process—you tell the software to read values by running data through a Type node. - - - -* Data with unknown types is also referred to as uninstantiated. Data whose storage type and values are unknown is displayed in the Measure column of the Type node settings as Typeless. -* When you have some information about a field's storage, such as string or numeric, the data is called partially instantiated. Categorical or Continuous are partially instantiated measurement levels. For example, Categorical specifies that the field is symbolic, but you don't know whether it's nominal, ordinal, or flag. -* When all of the details about a type are known, including the values, a fully instantiated measurement level—nominal, ordinal, flag, or continuous—is displayed in this column. Note that the continuous type is used for both partially instantiated and fully instantiated data fields. Continuous data can be either integers or real numbers. - - - -When a data flow with a Type node runs, uninstantiated types immediately become partially instantiated, based on the initial data values. After all of the data passes through the node, all data becomes fully instantiated unless values were set to Pass. If the flow run is interrupted, the data will remain partially instantiated. After the Types settings are instantiated, the values of a field are static at that point in the flow. This means that any upstream changes will not affect the values of a particular field, even if you rerun the flow. To change or update the values based on new data or added manipulations, you need to edit them in the Types settings or set the value for a field to Read or Extend. -" -21DB0146B79B8256259507C62876E01ADA143BD6_0,21DB0146B79B8256259507C62876E01ADA143BD6," Measurement levels - -The measure, also referred to as measurement level, describes the usage of data fields in SPSS Modeler. - -You can specify the Measure in the node properties of an import node or a Type node. For example, you may want to set the measure for an integer field with values of 1 and 0 to Flag. This usually indicates that 1 = True and 0 = False. - -Storage versus measurement. Note that the measurement level of a field is different from its storage type, which indicates whether data is stored as a string, integer, real number, date, time, or timestamp. While you can modify data types at any point in a flow by using a Type node, storage must be determined at the source when reading data in (although you can subsequently change it using a conversion function). - -The following measurement levels are available: - - - -* Default. Data whose storage type and values are unknown (for example, because they haven't yet been read) are displayed as Default. -* Continuous. Used to describe numeric values, such as a range of 0–100 or 0.75–1.25. A continuous value can be an integer, real number, or date/time. -* Categorical. Used for string values when an exact number of distinct values is unknown. This is an uninstantiated data type, meaning that all possible information about the storage and usage of the data is not yet known. After data is read, the measurement level will be Flag, Nominal, or Typeless, depending on the maximum number of members for nominal fields specified. -* Flag. Used for data with two distinct values that indicate the presence or absence of a trait, such as true and false, Yes and No, or 0 and 1. The values used may vary, but one must always be designated as the ""true"" value, and the other as the ""false"" value. Data may be represented as text, integer, real number, date, time, or timestamp. -" -21DB0146B79B8256259507C62876E01ADA143BD6_1,21DB0146B79B8256259507C62876E01ADA143BD6,"* Nominal. Used to describe data with multiple distinct values, each treated as a member of a set, such as small/medium/large. Nominal data can have any storage—numeric, string, or date/time. Note that setting the measurement level to Nominal doesn't automatically change the values to string storage. -* Ordinal. Used to describe data with multiple distinct values that have an inherent order. For example, salary categories or satisfaction rankings can be typed as ordinal data. The order is defined by the natural sort order of the data elements. For example, 1, 3, 5 is the default sort order for a set of integers, while HIGH, LOW, NORMAL (ascending alphabetically) is the order for a set of strings. The ordinal measurement level enables you to define a set of categorical data as ordinal data for the purposes of visualization, model building, and export to other applications (such as IBM SPSS Statistics) that recognize ordinal data as a distinct type. You can use an ordinal field anywhere that a nominal field can be used. Additionally, fields of any storage type (real, integer, string, date, time, and so on) can be defined as ordinal. -* Typeless. Used for data that doesn't conform to any of the Default, Continuous, Categorical, Flag, Nominal, or Ordinal types, for fields with a single value, or for nominal data where the set has more members than the defined maximum. Typeless is also useful for cases in which the measurement level would otherwise be a set with many members (such as an account number). When you select Typeless for a field, the role is automatically set to None, with Record ID as the only alternative. The default maximum size for sets is 250 unique values. -* Collection. Used to identify non-geospatial data that is recorded in a list. A collection is effectively a list field of zero depth, where the elements in that list have one of the other measurement levels. -" -21DB0146B79B8256259507C62876E01ADA143BD6_2,21DB0146B79B8256259507C62876E01ADA143BD6,"* Geospatial. Used with the List storage type to identify geospatial data. Lists can be either List of Integer or List of Real fields with a list depth that's between zero and two, inclusive. - - - -You can manually specify measurement levels, or you can allow the software to read the data and determine the measurement level based on the values it reads. Alternatively, where you have several continuous data fields that should be treated as categorical data, you can choose an option to convert them. See [Converting continuous data](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_convert.html). -" -E0F6FBCA52D2EE44AC2E0795FA11FB53E3054C47,E0F6FBCA52D2EE44AC2E0795FA11FB53E3054C47," Geospatial measurement sublevels - -The Geospatial measurement level, which is used with the List storage type, has six sublevels that are used to identify different types of geospatial data. - - - -* Point. Identifies a specific location (for example, the center of a city). -* Polygon. A series of points that identifies the single boundary of a region and its location (for example, a county). -* LineString. Also referred to as a Polyline or just a Line, a LineString is a series of points that identifies the route of a line. For example, a LineString might be a fixed item, such as a road, river, or railway; or the track of something that moves, such as an aircraft's flight path or a ship's voyage. -* MultiPoint. Used when each row in your data contains multiple points per region. For example, if each row represents a city street, the multiple points for each street can be used to identify every street lamp. -" -FD903F9A58632DF14BE5C98EEDA32E1FC2F46F4B,FD903F9A58632DF14BE5C98EEDA32E1FC2F46F4B," Defining missing values - -In the Type node settings, select the desired field in the table and then click the gear icon at the end of its row. Missing values settings are available in the window that appears. - -Select Define missing values to define missing value handing for this field. Here you can define explicit values to be considered as missing values for this field, or this can also be accomplished by means of a downstream Filler node. -" -063D5E4C6E2094F964752D376B5FF49FFD47433B,063D5E4C6E2094F964752D376B5FF49FFD47433B," Data values - -Using the Value mode column in the Type node settings, you can read values automatically from the data, or you can specify measures and values. - -The options available in the Value mode drop-down provide instructions for auto-typing, as shown in the following table. - - - -Table 1. Instructions for auto-typing - - Option Function - - Read Data is read when the node runs. - Extend Data is read and appended to the current data (if any exists). - Pass No data is read. - Current Keep current data values. - Specify You can click the gear icon at the end of the row to specify values. - - - -Running a Type node or clicking Read Values auto-types and reads values from your data source based on your selection. You can also specify these values manually by using the Specify option and clicking the gear icon at the end of a row. - -After you make changes for fields in the Type node, you can reset value information using the following buttons: - - - -" -98AC4398E3EA902007D99E5BDB0686AEF04A4DAA,98AC4398E3EA902007D99E5BDB0686AEF04A4DAA," Specifying values for collection data - -Collection fields display non-geospatial data that's in a list. - -The only item you can set for the Collection measurement level is the List measure. By default, this measure is set to Typeless, but you can select another value to set the measurement level of the elements within the list. You can choose one of the following options: - - - -* Typeless -* Continuous -* Nominal -" -A82CB1ABABCF08E9FD361F13050D47850AF8768A,A82CB1ABABCF08E9FD361F13050D47850AF8768A," Specifying values and labels for continuous data - -The Continuous measurement level is for numeric fields. - -There are three storage types for continuous data: - - - -* Real -* Integer -* Date/Time - - - -The same settings are used to edit all continuous fields. The storage type is displayed for reference only. Select the desired field in the Type node settings and then click the gear icon at the end of its row. -" -077AFC6B667F6747FF066182E2F04AF486C13368,077AFC6B667F6747FF066182E2F04AF486C13368," Specifying values for a flag - -Use flag fields to display data that has two distinct values. The storage types for flags can be string, integer, real number, or date/time. - -True. Specify a flag value for the field when the condition is met. - -False. Specify a flag value for the field when the condition is not met. - -Labels. Specify labels for each value in the flag field. These labels appear in a variety of locations, such as graphs, tables, output, and model browsers. -" -24D2987869B1C8C34EFA1204903A7A8F3E35D459,24D2987869B1C8C34EFA1204903A7A8F3E35D459," Specifying values for geospatial data - -Geospatial fields display geospatial data that's in a list. For the Geospatial measurement level, you can use various options to set the measurement level of the elements within the list. - -Type. Select the measurement sublevel of the geospatial field. The available sublevels are determined by the depth of the list field. The defaults are: Point (zero depth), LineString (depth of one), and Polygon (depth of one). - -For more information about sublevels, see [Geospatial measurement sublevels](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_levels_geo.html). - -Coordinate system. This option is only available if you changed the measurement level to Geospatial from a non-geospatial level. To apply a coordinate system to your geospatial data, select this option. To use a different coordinate system, click Change. -" -2C991135B30B24A268FC9D847E3F43522543A96B,2C991135B30B24A268FC9D847E3F43522543A96B," Specifying values and labels for nominal and ordinal data - -Nominal (set) and ordinal (ordered set) measurement levels indicate that the data values are used discretely as a member of the set. The storage types for a set can be string, integer, real number, or date/time. - -The following controls are unique to nominal and ordinal fields. You can use them to specify values and labels. Select the desired field in the Type node settings and then click the gear icon at the end of its row. - -Values and Labels. You can specify values based on your knowledge of the current field. You can enter expected values for the field and check the dataset's conformity to these values using the Check options. And you can specify lables for each value in the set. Thse labels appear in a variety of locations, such as graphs, tables, output, and model browsers. -" -C9857AFEF4C7E7C2AD0B764277B90A2BCE51ADC8_0,C9857AFEF4C7E7C2AD0B764277B90A2BCE51ADC8," Setting options for values - -The Value mode column under the Type node settings displays a drop-down list of predefined values. Choosing the Specify option on this list and then clicking the gear icon opens a new screen where you can set options for reading, specifying, labeling, and handling values for the selected field. - -Many of the controls are common to all types of data. These common controls are discussed here. - -Measure. Displays the currently selected measurement level. You can change this setting to reflect the way that you intend to use data. For instance, if a field called day_of_week contains numbers that represent individual days, you might want to change this to nominal data in order to create a distribution node that examines each category individually. - -Role. Used to tell modeling nodes whether fields will be Input (predictor fields) or Target (predicted fields) for a machine-learning process. Other roles are also available such as Both , None, Partition, Split, Frequency, or Record ID. - -Value mode. Select a mode to determine values for the selected field. Choices for reading values include the following: - - - -* Read. Select to read values when the node runs. -* Pass. Select not to read data for the current field. -* Specify. Options here are used to specify values and labels for the selected field. Used with value checking, use this option to specify values that are based on your knowledge of the current field. This option activates unique controls for each type of field. You can't specify values or labels for a field whose measurement level is Typeless. -* Extend. Select to append the current data with the values that you enter here. For example, if field_1 has a range from (0,10) and you enter a range of values from (8,16), the range is extended by adding the 16 without removing the original minimum. The new range would be (0,16). -* Current. Select to keep the current data values. - - - -Value Labels (Add/Edit Labels). In this section you can enter custom labels for each value of the selected field. - -" -C9857AFEF4C7E7C2AD0B764277B90A2BCE51ADC8_1,C9857AFEF4C7E7C2AD0B764277B90A2BCE51ADC8,"Max list length. Only available for data with a measurement level of either Geospatial or Collection. Set the maximum length of the list by specifying the number of elements the list can contain. - -Max string length. Only available for typeless data. Use this field when you're generating SQL to create a table. Enter the value of the largest string in your data; this generates a column in the table that's big enough for the string. If the string length value is not available, a default string size is used that may not be appropriate for the data (for example, if the value is too small, errors can occur when writing data to the table; too large a value could adversely affect performance). - -Check. Select a method of coercing values to conform to the specified continuous, flag, or nominal values. This option corresponds to the Check column in the main Type node settings, and a selection made here will override those in the main settings. Used with the options for specifying values and labels, value checking allows you to conform values in the data with expected values. For example, if you specify values as 1, 0 and then use the Discard. option here, you can discard all records with values other than 1 or 0. - -Define missing values. Select to activate the following controls you can use to declare missing values or blanks in your data. - - - -* Missing values. Use this field to define specific values (such as 99 or 0) as blanks. The value should be appropriate for the storage type of the field. -* Range. Used to specify a range of missing values (such as ages 1–17 or greater than 65). If a bound value is blank, then the range is unbounded. For example, if you specify a lower bound of 100 with no upper bound, then all values greater than or equal to 100 are defined as missing. The bound values are inclusive. For example, a range with a lower bound of 5 and an upper bound of 10 includes 5 and 10 in the range definition. You can define a missing value range for any storage type, including date/time and string (in which case the alphabetic sort order is used to determine whether a value is within the range). -" -C9857AFEF4C7E7C2AD0B764277B90A2BCE51ADC8_2,C9857AFEF4C7E7C2AD0B764277B90A2BCE51ADC8,"* Null/White space. You can also specify system nulls (displayed in the data as $null$) and white space (string values with no visible characters) as blanks. Note that the Type node also treats empty strings as white space for purposes of analysis, although they are stored differently internally and may be handled differently in certain cases. - - - -Note: To code blanks as undefined or $null$, use the Filler node. -" -74706148818BD2ACE30029492DD8AD7D47283EDC,74706148818BD2ACE30029492DD8AD7D47283EDC," User Input node - -The User Input node provides an easy way for you to create synthetic data--either from scratch or by altering existing data. This is useful, for example, when you want to create a test dataset for modeling. -" -5B3FB712903B0D1044610C93E6FCDE6A41BE1CF6,5B3FB712903B0D1044610C93E6FCDE6A41BE1CF6," Web node - -Web nodes show the strength of relationships between values of two or more symbolic fields. The graph displays connections using varying types of lines to indicate connection strength. You can use a Web node, for example, to explore the relationship between the purchase of various items at an e-commerce site or a traditional retail outlet. - -Figure 1. Web graph showing relationships between the purchase of grocery items - -![Web graph showing relationships between the purchase of grocery items](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/graphs_web_generated.jpg) -" -114EBF33612531C5020FD739010049E5126E0E5B,114EBF33612531C5020FD739010049E5126E0E5B," XGBoost-AS node - -XGBoost© is an advanced implementation of a gradient boosting algorithm. Boosting algorithms iteratively learn weak classifiers and then add them to a final strong classifier. XGBoost is very flexible and provides many parameters that can be overwhelming to most users, so the XGBoost-AS node in Watson Studio exposes the core features and commonly used parameters. The XGBoost-AS node is implemented in Spark. - -For more information about boosting algorithms, see the [XGBoost Tutorials](http://xgboost.readthedocs.io/en/latest/tutorials/index.html). ^1^ - -Note that the XGBoost cross-validation function is not supported in Watson Studio. You can use the Partition node for this functionality. Also note that XGBoost in Watson Studio performs one-hot encoding automatically for categorical variables. - -Notes: - - - -* On Mac, version 10.12.3 or higher is required for building XGBoost-AS models. -* XGBoost isn't supported on IBM POWER. - - - -^1^ ""XGBoost Tutorials."" Scalable and Flexible Gradient Boosting. Web. © 2015-2016 DMLC. -" -8937DB13972E4DEDBCC542303EF3A783287FD10B,8937DB13972E4DEDBCC542303EF3A783287FD10B," XGBoost Linear node - -XGBoost Linear© is an advanced implementation of a gradient boosting algorithm with a linear model as the base model. Boosting algorithms iteratively learn weak classifiers and then add them to a final strong classifier. The XGBoost Linear node in watsonx.ai is implemented in Python. - -For more information about boosting algorithms, see the [XGBoost Tutorials](http://xgboost.readthedocs.io/en/latest/tutorials/index.html). ^1^ - -Note that the XGBoost cross-validation function is not supported in watsonx.ai. You can use the Partition node for this functionality. Also note that XGBoost in watsonx.ai performs one-hot encoding automatically for categorical variables. - -^1^ ""XGBoost Tutorials."" Scalable and Flexible Gradient Boosting. Web. © 2015-2016 DMLC. -" -35F4C4A97CF58FA0642D88E501314F3D75FF9E01,35F4C4A97CF58FA0642D88E501314F3D75FF9E01," XGBoost Tree node - -XGBoost Tree© is an advanced implementation of a gradient boosting algorithm with a tree model as the base model. Boosting algorithms iteratively learn weak classifiers and then add them to a final strong classifier. XGBoost Tree is very flexible and provides many parameters that can be overwhelming to most users, so the XGBoost Tree node in watsonx.ai exposes the core features and commonly used parameters. The node is implemented in Python. - -For more information about boosting algorithms, see the [XGBoost Tutorials](http://xgboost.readthedocs.io/en/latest/tutorials/index.html). ^1^ - -Note that the XGBoost cross-validation function is not supported in watsonx.ai. You can use the Partition node for this functionality. Also note that XGBoost in watsonx.ai performs one-hot encoding automatically for categorical variables. - -^1^ ""XGBoost Tutorials."" Scalable and Flexible Gradient Boosting. Web. © 2015-2016 DMLC. -" -717B697E0045B5D7DFF6ACC93AD5DEC98E27EBDC,717B697E0045B5D7DFF6ACC93AD5DEC98E27EBDC," Flow and SuperNode parameters - -You can define parameters for use in CLEM expressions and in scripting. They are, in effect, user-defined variables that are saved and persisted with the current flow or SuperNode and can be accessed from the user interface as well as through scripting. - -If you save a flow, for example, any parameters you set for that flow are also saved. (This distinguishes them from local script variables, which can be used only in the script in which they are declared.) Parameters are often used in scripting to control the behavior of the script, by providing information about fields and values that don't need to be hard coded in the script. - -You can set flow parameters in a flow script or in a flow's properties (right-click the canvas in your flow and select Flow properties), and they're available to all nodes in the flow. They're displayed in the Parameters list in the Expression Builder. - -You can also set parameters for SuperNodes, in which case they're visible only to nodes encapsulated within that SuperNode. - -Tip: For complete details about scripting, see the [Scripting and automation](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/scripting_overview.html) guide. -" -2B67D1EB41065CF9DA0EB68D429B69803D49EAA1,2B67D1EB41065CF9DA0EB68D429B69803D49EAA1," Reference information - -This section provides reference information about various topics. -" -C0CC7AE4029730B9846B6A05F4160643D3A8C393,C0CC7AE4029730B9846B6A05F4160643D3A8C393,"You may need to describe a flow to others in your organization. To help you do this, you can attach explanatory comments to nodes, and model nuggets. - -Others can then view these comments on-screen, or you might even print out an image of the flow that includes the comments. You can also add notes in the form of text annotations to nodes and model nuggets by means of the Annotations tab in a node's properties. These annotations are visible only when the Annotations tab is open. -" -E1232C341B3F590C23E9E81DDD157BC99FF77191,E1232C341B3F590C23E9E81DDD157BC99FF77191," Supported data sources for SPSS Modeler - -In SPSS Modeler, you can connect to your data no matter where it lives. - - - -" -7D1E61EF82BC5DC1029D55C8F5C2EBB56082CDAC,7D1E61EF82BC5DC1029D55C8F5C2EBB56082CDAC," Creating SPSS Modeler flows - -With SPSS Modeler flows, you can quickly develop predictive models using business expertise and deploy them into business operations to improve decision making. Designed around the long-established SPSS Modeler client software and the industry-standard CRISP-DM model it uses, the flows interface supports the entire data mining process, from data to better business results. - -SPSS Modeler offers a variety of modeling methods taken from machine learning, artificial intelligence, and statistics. The methods available on the node palette allow you to derive new information from your data and to develop predictive models. Each method has certain strengths and is best suited for particular types of problems. - -Data format -: Relational: Tables in relational data sources -: Tabular: .xls, .xlsx, .csv, .sav, .json, .xml, or .sas. For Excel files, only the first sheet is read. -: Textual: In the supported relational tables or files - -Data size -: Any - -How can I prepare data? -: Use automatic data preparation functions -: Write SQL statements to manipulate data -: Cleanse, shape, sample, sort, and derive data - -How can I analyze data? -: Visualize data with many chart options -: Identify the natural language of a text field - -How can I build models? -: Build predictive models -: Choose from over 40 modeling algorithms, and many other nodes -: Use automatic modeling functions -: Model time series or geospatial data -: Classify textual data -: Identify relationships between the concepts in textual data - -Getting started -: To create an SPSS Modeler flow from the project's Assets tab, click . - -Note: Watsonx.ai doesn't include SPSS functionality in Peru, Ecuador, Colombia, or Venezuela. -" -68061CDEDA9E9E83180CA7513620B5988266CEBF,68061CDEDA9E9E83180CA7513620B5988266CEBF," SPSS algorithms - -Many of the nodes available in SPSS Modeler are based on statistical algorithms. - -If you're interested in learning more about the underlying algorithms used in your flows, you can read the SPSS Modeler Algorithms Guide available in PDF format. The guide is for advanced users, and the information is provided by a team of SPSS statisticians. - -[Download the SPSS Modeler Algorithms Guide

![SPSS Modeler Algorithms Guide](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/spss_algorithms.png)](https://public.dhe.ibm.com/software/analytics/spss/documentation/modeler/new/AlgorithmsGuide.pdf) -" -23080E48C7B666C07E92A6E4F4BB256D77BE49B4_0,23080E48C7B666C07E92A6E4F4BB256D77BE49B4," Tips and shortcuts - -Work quickly and easily by familiarizing yourself with the following shortcuts and tips: - - - -* Quickly find nodes. You can use the search bar on the Nodes palette to search for certain node types, and hover over them to see helpful descriptions. -* Quickly edit nodes. After adding a node to your flow, double-click it to open its properties. -* Add a node to a flow connection. To add a new node between two connected nodes, drag the node to the connection line. -* Replace a connection. To replace an existing connection on a node, simply create a new connection and the old one will be replaced. -* Start from an SPSS Modeler stream. You can import a stream ( .str) that was created in SPSS Modeler Subscription or SPSS Modeler client -* Use tool tips. In node properties, helpful tool tips are available in various locations. Hover over the tooltip icon to see tool tips. ![Tool tips icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/spss_tooltip.png) -* Rename nodes and add annotations. Each node properties panel includes an Annotations section in which you can specify a custom name for nodes on the canvas. You can also include lengthy annotations to track progress, save process details, and denote any business decisions required or achieved. -* Generate new nodes from table output. When viewing table output, you can select one or more fields, click Generate, and select a node to add to your flow. -* Insert values automatically into a CLEM expression. Using the Expression Builder, accessible from various areas of the user interface (such as those for Derive and Filler nodes), you can automatically insert field values into a CLEM expression. - - - -Keyboard shortcuts are available for SPSS Modeler. See the following table. Note that all Ctrl keys listed are Cmd on macOS. - - - -Shortcut keys - -Table 1. Shortcut keys - - Shortcut Key Function - - Ctrl + F1 Navigate to the header. -" -23080E48C7B666C07E92A6E4F4BB256D77BE49B4_1,23080E48C7B666C07E92A6E4F4BB256D77BE49B4," Ctrl + F2 Navigate to the Nodes palette, then use arrow keys to move between nodes. Press Enter or the space key to add the selected node to your canvas. - Ctrl + F3 Navigate to the toolbar. - Ctrl + F4 Navigate to the flow canvas, then use arrow keys to move between nodes. Press Enter or space twice to open the node's context menu. Then use the arrow keys to select the desired context menu action and press Enter or space to perform the action. - Ctrl + F5 Navigate to the node properties panel if it's open. - Ctrl + F6 Move between areas of the user interface (header, palette, canvas, toolbar, etc.). - Ctrl + F7 Open and navigate to the Messages panel. - Ctrl + F8 Open and navigate to the Outputs panel. - Ctrl + A Select all nodes when focus is on the canvas - Ctrl + E With a node selected on the canvas, open its node properties. Then use the tab or arrow keys to move around the list of node properties. Press Ctrl + S to save your changes or press Ctrl + to cancel your changes. - Ctrl + I Open the settings panel. - Ctrl + J With a node selected on the canvas, connect it to another node. Use the arrow keys to select the node to connect to, then press Enter or space (or press Esc to cancel). - Ctrl + K Disconnect a node. - Ctrl + Enter Run a branch from where the focus is. - Ctrl + Shift + Enter Run the entire flow. - Ctrl + Shift + P Launch preview. - Ctrl + arrow Move a selected node around the canvas. - Ctrl + Alt + arrow Move the canvas in a direction. - Ctrl + Shift + arrow Move a selected node around the canvas ten times faster than Ctrl + arrow. - Ctrl + Shift + C Toggle cache on/off. - Ctrl + Shift + up arrow Select all nodes upstream of the selected node. - Ctrl + Shift + down arrow Select all nodes downstream of the selected node. -" -C5F5ACC006CD6F06BE3266EE98F89FABF4F6FBAF,C5F5ACC006CD6F06BE3266EE98F89FABF4F6FBAF," Troubleshooting SPSS Modeler - -The information in this section provides troubleshooting details for issues you may encounter in SPSS Modeler. -" -33FE18D89140517AB2A75D6FC64A4A3DB962B88B,33FE18D89140517AB2A75D6FC64A4A3DB962B88B," CLEM expressions and operators supporting SQL pushback - -The tables in this section list the mathematical operations and expressions that support SQL generation and are often used during data mining. Operations absent from these tables don't support SQL generation. - - - -Table 1. Operators - - Operations supporting SQL generation Notes - - + - - - / - * - >< Used to concatenate strings. - - - - - -Table 2. Relational operators - - Operations supporting SQL generation Notes - - = - /= Used to specify ""not equal."" - > - >= - < - <= - - - - - -Table 3. Functions - - Operations supporting SQL generation Notes - - abs - allbutfirst - allbutlast - and - arccos - arcsin - arctan - arctanh - cos - div - exp - fracof - hasstartstring - hassubstring - integer - intof - isaplhacode - islowercode - isnumbercode - isstartstring - issubstring - isuppercode - last - length - locchar - log - log10 - lowertoupper - max - member - min - negate - not - number - or - pi - real - rem - round - sign - sin - sqrt - string - strmember - subscrs - substring - substring_between - uppertolower - to_string - - - - - -Table 4. Special functions - - Operations supporting SQL generation Notes - - @NULL - @GLOBAL_AVE You can use the special global functions to retrieve global values computed by the Set Globals node. - @GLOBAL_SUM - @GLOBAL_MAX - @GLOBAL_MEAN - @GLOBAL_MIN - @GLOBALSDEV - - - - - -Table 5. Aggregate functions - - Operations supporting SQL generation Notes - - Sum - Mean - Min - Max -" -262C45D286C9B8A7EDBA8635E636824F2B043D73,262C45D286C9B8A7EDBA8635E636824F2B043D73," How does SQL pushback work? - -The initial fragments of a flow leading from the data import nodes are the main targets for SQL generation. When a node is encountered that can't be compiled to SQL, the data is extracted from the database and subsequent processing is performed. - -During flow preparation and prior to running, the SQL generation process happens as follows: - - - -* The software reorders flows to move downstream nodes into the “SQL zone” where it can be proven safe to do so. -* Working from the import nodes toward the terminal nodes, SQL expressions are constructed incrementally. This phase stops when a node is encountered that can't be converted to SQL or when the terminal node (for example, a Table node or a Graph node) is converted to SQL. At the end of this phase, each node is labeled with an SQL statement if the node and its predecessors have an SQL equivalent. -" -BDB3689801D81676AE642F1EBFF81D27C07F1F3C,BDB3689801D81676AE642F1EBFF81D27C07F1F3C," Generating SQL from model nuggets - -When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations. For some nodes, SQL for the model nugget can be generated, pushing back the model scoring stage to the database. This allows flows containing these nuggets to have their full SQL pushed back. - -For a generated model nugget that supports SQL pushback: - - - -1. Double-click the model nugget to open its settings. -2. Depending on the node type, one or more of the following options is available. Choose one of these options to specify how SQL generation is performed. - -Generate SQL for this model - - - -* Default: Score using Server Scoring Adapter (if installed) otherwise in process. This is the default option. If connected to a database with a scoring adapter installed, this option generates SQL using the scoring adapter and associated user defined functions (UDF) and scores your model within the database. When no scoring adapter is available, this option fetches your data back from the database and scores it in SPSS Modeler. -* Score by converting to native SQL without Missing Value Support. This option generates native SQL to score the model within the database, without the overhead of handling missing values. This option simply sets the prediction to null ($null$) when a missing value is encountered while scoring a case. -" -D69F33671E13DF29FE56579AC4654EBC54A11F12_0,D69F33671E13DF29FE56579AC4654EBC54A11F12," Nodes supporting SQL pushback - -The tables in this section show nodes representing data-mining operations that support SQL pushback. If a node doesn't appear in these tables, it doesn't support SQL pushback. - - - -Table 1. Record Operations nodes - - Nodes supporting SQL generation Notes - - Select Supports generation only if SQL generation for the select expression itself is supported. If any fields have nulls, SQL generation does not give the same results for discard as are given in native SPSS Modeler. - Sample Simple sampling supports SQL generation to varying degrees depending on the database. - Aggregate SQL generation support for aggregation depends on the data storage type. - RFM Aggregate Supports generation except if saving the date of the second or third most recent transactions, or if only including recent transactions. However, including recent transactions does work if the datetime_date(YEAR,MONTH,DAY) function is pushed back. - Sort - Merge No SQL generated for merge by order.

Merge by key with full or partial outer join is only supported if the database/driver supports it. Non-matching input fields can be renamed by means of a Filter node, or the Filter settings of an import node.

Supports SQL generation for merge by condition.

For all types of merge, SQL_SP_EXISTS is not supported if inputs originate in different databases. - Append Supports generation if inputs are unsorted. SQL optimization is only possible when your inputs have the same number of columns. - Distinct A Distinct node with the (default) mode Create a composite record for each group selected doesn't support SQL optimization. - - - - - -Table 2. SQL generation support in the Sample node for simple sampling - - Mode Sample Max size Seed Db2 for z/OS Db2 for OS/400 Db2 for Win/UNIX Oracle SQL Server Teradata - - Include First n/a Y Y Y Y Y Y - 1-in-n off Y Y Y Y Y - max Y Y Y Y Y - Random % off off Y Y Y Y - on Y Y Y - max off Y Y Y Y - on Y Y Y - Discard First off Y - max Y -" -D69F33671E13DF29FE56579AC4654EBC54A11F12_1,D69F33671E13DF29FE56579AC4654EBC54A11F12," 1-in-n off Y Y Y Y Y - max Y Y Y Y Y - Random % off off Y Y Y Y - on Y Y Y - max off Y Y Y Y - on Y Y Y - - - - - -Table 3. SQL generation support in the Aggregate node - - Storage Sum Mean Min Max SDev Median Count Variance Percentile - - Integer Y Y Y Y Y Y* Y Y Y* - Real Y Y Y Y Y Y* Y Y Y* - Date Y Y Y* Y Y* - Time Y Y Y* Y Y* - Timestamp Y Y Y* Y Y* - String Y Y Y* Y Y* - - - -* Median and Percentile are supported on Oracle. - - - -Table 4. Field Operations nodes - - Nodes supporting SQL generation Notes - - Type Supports SQL generation if the Type node is instantiated and no ABORT or WARN type checking is specified. - Filter - Derive Supports SQL generation if SQL generated for the derive expression is supported (see expressions later on this page). - Ensemble Supports SQL generation for Continuous targets. For other targets, supports generation only if the Highest confidence wins ensemble method is used. - Filler Supports SQL generation if the SQL generated for the derive expression is supported. - Anonymize Supports SQL generation for Continuous targets, and partial SQL generation for Nominal and Flag targets. - Reclassify - Binning Supports SQL generation if the Tiles (equal count) binning method is used and the Read from Bin Values tab if available option is selected. Due to differences in the way that bin boundaries are calculated (this is caused by the nature of the distribution of data in bin fields), you might see differences in the binning output when comparing normal flow execution results and SQL pushback results. To avoid this, use the Record count tiling method, and either Add to next or Keep in current tiles to obtain the closest match between the two methods of flow execution. - RFM Analysis Supports SQL generation if the Read from Bin Values tab if available option is selected, but downstream nodes will not support it. - Partition Supports SQL generation to assign records to partitions. - Set To Flag - Restructure - - - - - -Table 5. Graphs nodes - - Nodes supporting SQL generation Notes - - Distribution - Web - Evaluation - - - -" -D69F33671E13DF29FE56579AC4654EBC54A11F12_2,D69F33671E13DF29FE56579AC4654EBC54A11F12,"For some models, SQL for the model nugget can be generated, pushing back the model scoring stage to the database. The main use of this feature is not to improve performance, but to allow flows containing these nuggets to have their full SQL pushed back. See [Generating SQL from model nuggets](https://dataplatform.cloud.ibm.com/docs/content/wsd/sql_native.html) for more information. - - - -Table 6. Model nuggets - - Model nuggets supporting SQL generation Notes - - C&R Tree Supports SQL generation for the single tree option, but not for the boosting, bagging, or large dataset options. - QUEST - CHAID - C5.0 - Decision List - Linear Supports SQL generation for the standard model option, but not for the boosting, bagging, or large dataset options. - Neural Net Supports SQL generation for the standard model option (Multilayer Perceptron only), but not for the boosting, bagging, or large dataset options. - PCA/Factor - Logistic Supports SQL generation for Multinomial procedure but not Binomial. For Multinomial, generation isn't supported when confidences are selected, unless the target type is Flag. - Generated Rulesets - Auto Classifier If a User Defined Function (UDF) scoring adapter is enabled, these nuggets support SQL pushback. Also, if either SQL generation for Continuous targets, or the Highest confidence wins ensemble method are used, these nuggets support further pushback downstream. - Auto Numeric If a User Defined Function (UDF) scoring adapter is enabled, these nuggets support SQL pushback. Also, if either SQL generation for Continuous targets, or the Highest confidence wins ensemble method are used, these nuggets support further pushback downstream. - - - - - -Table 7. Outputs nodes - - Nodes supporting SQL generation Notes - - Table Supports generation if SQL generation is supported for highlight expression. - Matrix Supports generation except if All numerics is selected for the Fields option. - Analysis Supports generation, depending on the options selected. - Transform - Statistics Supports generation if the Correlate option isn't used. -" -AF0F7C335A10C372C36A0CCEC76057C41B93731B_0,AF0F7C335A10C372C36A0CCEC76057C41B93731B," SQL optimization - -You can push many data preparation and mining operations directly in your database to improve performance. - -One of the most powerful capabilities of SPSS Modeler is the ability to perform many data preparation and mining operations directly in the database. By generating SQL code that can be pushed back to the database for execution, many operations, such as sampling, sorting, deriving new fields, and certain types of graphing, can be performed in the database rather than on the client or server computer. When you're working with large datasets, these pushbacks can dramatically enhance performance in several ways: - - - -* By reducing the size of the result set to be transferred from the DBMS to watsonx.ai. When large result sets are read through an ODBC driver, network I/O or driver inefficiencies may result. For this reason, the operations that benefit most from SQL optimization are row and column selection and aggregation (Select, Sample, Aggregate nodes), which typically reduce the size of the dataset to be transferred. Data can also be cached to a temporary table in the database at critical points in the flow (after a Merge or Select node, for example) to further improve performance. -* By making use of the performance and scalability of the database. Efficiency is increased because a DBMS can often take advantage of parallel processing, more powerful hardware, more sophisticated management of disk storage, and the presence of indexes. - - - -Given these advantages, watsonx.ai is designed to maximize the amount of SQL generated by each SPSS Modeler flow so that only those operations that can't be compiled to SQL are executed by watsonx.ai. Because of limitations in what can be expressed in standard SQL (SQL-92), however, certain operations may not be supported. - -For details about currently supported databases, see [Supported data sources for SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-connections.html). - -Tips: - - - -" -AF0F7C335A10C372C36A0CCEC76057C41B93731B_1,AF0F7C335A10C372C36A0CCEC76057C41B93731B,"* When running a flow, nodes that push back to your database are highlighted with a small SQL icon beside the node. When you start making edits to a flow after running it, the icons will be removed until the next time you run the flow. - -Figure 1. SQL pushback indicator - -![SQL pushback indicator](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/sql_icon.png) -* If you want to see which nodes will push back before running a flow, click SQL preview. This enables you to modify the flow before you run it to improve performance by moving the non-pushback operations as far downstream as possible, for example. -* If a node can't be pushed back, all subsequent nodes in the flow won't be pushed back either (pushback stops at that node). This may impact how you want to organize the order of nodes in your flow. - - - -Notes: Keep the following information in mind regarding SQL: - - - -" -2C669E0145DAC26A7517D9402874BAC048E46E82_0,2C669E0145DAC26A7517D9402874BAC048E46E82," Tips for maximizing SQL pushback - -To get the best performance boost from SQL optimization, pay attention to the items in this section. - -Flow order. SQL generation may be halted when the function of the node has no semantic equivalent in SQL because SPSS Modeler’s data-mining functionality is richer than the traditional data-processing operations supported by standard SQL. When this happens, SQL generation is also suppressed for any downstream nodes. Therefore, you may be able to significantly improve performance by reordering nodes to put operations that halt SQL as far downstream as possible. The SQL optimizer can do a certain amount of reordering automatically, but further improvements may be possible. A good candidate for this is the Select node, which can often be brought forward. See [Nodes supporting SQL pushback](https://dataplatform.cloud.ibm.com/docs/content/wsd/sql_nodes.html) for more information. - -CLEM expressions. If a flow can't be reordered, you may be able to change node options or CLEM expressions or otherwise recast the way the operation is performed, so that it no longer inhibits SQL generation. Derive, Select, and similar nodes can commonly be rendered into SQL, provided that all of the CLEM expression operators have SQL equivalents. Most operators can be rendered, but there are a number of operators that inhibit SQL generation (in particular, the sequence functions [“@ functions”]). Sometimes generation is halted because the generated query has become too complex for the database to handle. See [CLEM expressions and operators supporting SQL pushback](https://dataplatform.cloud.ibm.com/docs/content/wsd/sql_clem.html) for more information. - -Multiple input nodes. Where a flow has multiple data import nodes, SQL generation is applied to each import branch independently. If generation is halted on one branch, it can continue on another. Where two branches merge (and both branches can be expressed in SQL up to the merge), the merge itself can often be replaced with a database join, and generation can be continued downstream. - -" -2C669E0145DAC26A7517D9402874BAC048E46E82_1,2C669E0145DAC26A7517D9402874BAC048E46E82,"Scoring models. In-database scoring is supported for some models by rendering the generated model into SQL. However, some models generate extremely complex SQL expressions that aren't always evaluated effectively within the database. For this reason, SQL generation must be enabled separately for each generated model nugget. If you find that a model nugget is inhibiting SQL generation, open the model nugget's settings and select Generate SQL for this model (with some models, you may have additional options controlling generation). Run tests to confirm that the option is beneficial for your application. See [Nodes supporting SQL pushback](https://dataplatform.cloud.ibm.com/docs/content/wsd/sql_nodes.html) for more information. - -When testing modeling nodes to see if SQL generation for models works effectively, we recommend first saving all flows from SPSS Modeler. Note that some database systems may hang while trying to process the (potentially complex) generated SQL. - -Database caching. If you are using a node cache to save data at critical points in the flow (for example, following a Merge or Aggregate node), make sure that database caching is enabled along with SQL optimization. This will allow data to be cached to a temporary table in the database (rather than the file system) in most cases. - -Vendor-specific SQL. Most of the generated SQL is standards-conforming (SQL-92), but some nonstandard, vendor-specific features are exploited where practical. The degree of SQL optimization can vary, depending on the database source. -" -3874AAF67EF04BB4D623FFF07E1CDB4C25B3B33E,3874AAF67EF04BB4D623FFF07E1CDB4C25B3B33E," Tutorials - -These tutorials use the assets that are available in the sample project, and they provide brief, targeted introductions to specific modeling methods and techniques. - -You can build the example flows provided by following the steps in the tutorials. - -Some of the simple flows are already completed in the projects, but you can still walk through them using their accompanying tutorials. Some of the more complicated flows must be completed by following the steps in the tutorials. - -Important: Before you begin the tutorials, complete the following steps to create the sample projects. -" -6E50438308B85E969B79DED22CC5E15F6872EE85,6E50438308B85E969B79DED22CC5E15F6872EE85," Automated modeling for a continuous target - -You can use the Auto Numeric node to automatically create and compare different models for continuous (numeric range) outcomes, such as predicting the taxable value of a property. With a single node, you can estimate and compare a set of candidate models and generate a subset of models for further analysis. The node works in the same manner as the Auto Classifier node, but for continuous rather than flag or nominal targets. -" -2D5B33F1352D8BA7CEF029D1979CCF0D44AAD63E,2D5B33F1352D8BA7CEF029D1979CCF0D44AAD63E," Building the flow - - - -1. Add a Data Asset node that points to property_values_train.csv. -2. Add a Type node, and select taxable_value as the target field (Role = Target). Other fields will be used as predictors. - -Figure 1. Setting the measurement level and role - -![Setting the role](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_autocont_build_target.png) -3. Attach an Auto Numeric node, and select Correlation as the metric used to rank models (under BASICS in the node properties). -4. Set the Number of models to use to 3. This means that the three best models will be built when you run the node. - -Figure 2. Auto Numeric node BASICS - -![Setting BASIC options](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_autocont_build_basics.png) -5. Under EXPERT, leave the default settings in place. The node will estimate a single model for each algorithm, for a total of six models. (Alternatively, you can modify these settings to compare multiple variants for each model type.) - -Because you set Number of models to use to 3 under BASICS, the node will calculate the accuracy of the six algorithms and build a single model nugget containing the three most accurate. - -Figure 3. Auto Numeric node EXPERT options - -" -EC7FCF477E212945EAB7BB85C2279F37D62D4B49_0,EC7FCF477E212945EAB7BB85C2279F37D62D4B49," Comparing the models - - - -1. Run the flow. A generated model nugget is built and placed on the canvas, and results are added to the Outputs panel. You can view the model nugget, or save or deploy it in a number of ways. - - - -Right-click the model nugget and select View Model. You'll see details about each of the models created during the run. (In a real situation, in which hundreds of models are estimated on a large dataset, this could take many hours.) - -Figure 1. Auto numeric example flow with model nugget - -![Auto numeric sample flow with model nugget](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_autocont.png) - -If you want to explore any of the individual models further, you can click a model name in the ESTIMATOR column to drill down and explore the individual model results. - -Figure 2. Auto Numeric results - -![Auto Numeric results](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_autocont_compare_view.png) - -By default, models are sorted by accuracy (correlation) because correlation this was the measure you selected in the Auto Numeric node's properties. For purposes of ranking, the absolute value of the accuracy is used, with values closer to 1 indicating a stronger relationship. - -You can sort on a different column by clicking the header for that column. - -Based on these results, you decide to use all three of these most accurate models. By combining predictions from multiple models, limitations in individual models may be avoided, resulting in a higher overall accuracy. - -In the USE column, make sure all three models are selected. - -Attach an Analysis node (from the Outputs palette) after the model nugget. Right-click the Analysis node and choose Run to run the flow again. - -Figure 3. Auto Numeric sample flow - -![Auto numeric sample flow](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_autocont.png) - -" -EC7FCF477E212945EAB7BB85C2279F37D62D4B49_1,EC7FCF477E212945EAB7BB85C2279F37D62D4B49,"The averaged score generated by the ensembled model is added in a field named $XR-taxable_value, with a correlation of 0.934, which is higher than those of the three individual models. The ensemble scores also show a low mean absolute error and may perform better than any of the individual models when applied to other datasets. - -Figure 4. Auto Numeric sample flow analysis results - -![Auto numeric sample flow analysis results](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_autocont_compare_results.png) -" -69ED00ABB6B920D1FE4F5B5675AFDA422F04E8D8,69ED00ABB6B920D1FE4F5B5675AFDA422F04E8D8," Summary - -With this example Automated Modeling for a Flag Target flow, you used the Auto Numeric node to compare a number of different models, selected the three most accurate models, and added them to the flow within an ensembled Auto Numeric model nugget. - -The ensembled model showed performance that was better than two of the individual models and may perform better when applied to other datasets. If your goal is to automate the process as much as possible, this approach allows you to obtain a robust model under most circumstances without having to dig deeply into the specifics of any one model. -" -3D999C84C01328A45EBF0ECAD358D858C634DF5B,3D999C84C01328A45EBF0ECAD358D858C634DF5B," Training data - -The data file includes a field named taxable_value, which is the target field, or value, that you want to predict. The other fields contain information such as neighborhood, building type, and interior volume, and may be used as predictors. - - - - Field name Label - - property_id Property ID - neighborhood Area within the city - building_type Type of building - year_built Year built - volume_interior Volume of interior - volume_other Volume of garage and extra buildings -" -D96C3A08A5607BDCB1BC85E0BEDD8743EA0B3DC5,D96C3A08A5607BDCB1BC85E0BEDD8743EA0B3DC5," Automated data preparation - -Preparing data for analysis is one of the most important steps in any data-mining project—and traditionally, one of the most time consuming. The Auto Data Prep node handles the task for you, analyzing your data and identifying fixes, screening out fields that are problematic or not likely to be useful, deriving new attributes when appropriate, and improving performance through intelligent screening techniques. - -You can use the Auto Data Prep node in fully automated fashion, allowing the node to choose and apply fixes, or you can preview the changes before they're made and accept or reject them as desired. With this node, you can ready your data for data mining quickly and easily, without the need for prior knowledge of the statistical concepts involved. If you run the node with the default settings, models will tend to build and score more quickly. - -This example uses the flow named Automated Data Preparation, available in the example project . The data file is telco.csv. This example demonstrates the increased accuracy you can find by using the default Auto Data Prep node settings when building models. - -Let's take a look at the flow. - - - -" -895CD261C9F06F272286BCCA3555846FB1ED8AA3,895CD261C9F06F272286BCCA3555846FB1ED8AA3," Building the flow - - - -1. Add a Data Asset node that points to telco.csv. - -Figure 1. Auto Data Prep example flow - -![Auto Data Prep example flow](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_autodata.png) -2. Attach a Type node to the Data Asset node. Set the measure for the churn field to Flag, and set the role to Target. Make sure the role for all other fields is set to Input. - -Figure 2. Setting the measurement level and role - -![Setting the measurement level and role](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_autodata_build_target.png) -3. Attach a Logistic node to the Type node. -4. In the Logistic node's properties, under MODEL SETTINGS, select the Binomial procedure. For Model Name, select Custom and enter No ADP - churn. - -Figure 3. Choosing model options - -![Choosing model options](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_autodata_binomial_default.png) -5. Attach an Auto Data Prep node to the Type node. Under OBJECTIVES, leave the default settings in place to analyze and prepare your data by balancing both speed and accuracy. -6. Run the flow to analyze and process your data. Other Auto Data Prep node properties allow you to specify that you want to concentrate more on accuracy, more on the speed of processing, or to fine tune many of the data preparation processing steps. Note: If you want to adjust the node properties and run the flow again in the future, since the model already exists, you must first click Clear Analysis, under OBJECTIVES before running the flow again. - -Figure 4. Auto Data Prep default objectives - -![Auto Data Prep default objectives](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_autodata_objectives.png) -" -B523EBE64275BEE04D480B55CCAEAC3017A36980_0,B523EBE64275BEE04D480B55CCAEAC3017A36980," Comparing the models - - - -1. Right-click each Logistic node and run it to create the model nuggets, which are added to the flow. Results are also added to the Outputs panel. - -Figure 1. Attaching the model nuggets - -![Attaching the model nuggets](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_autodata_partial_flow.png) -2. Attach Analysis nodes to the model nuggets and run the Analysis nodes (using their default settings). - -Figure 2. Attaching the Analysis nodes - -![Attaching the Analysis nodes](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_autodata_partial_flow2.png)The Analysis of the non Auto Data Prep-derived model shows that just running the data through the Logistic Regression node with its default settings gives a model with low accuracy - just 10.6%. - -Figure 3. Non ADP-derived model results - -![Non ADP-derived model results](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_autodata_analysis_non_adp.png)The Analysis of the Auto-Data Prep-derived model shows that by running the data through the default Auto Data Prep settings, you have built a much more accurate model that's 78.3% correct. - -Figure 4. ADP-derived model results - -![ADP-derived model results](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_autodata_analysis_adp.png) - - - -In summary, by just running the Auto Data Prep node to fine tune the processing of your data, you were able to build a more accurate model with little direct data manipulation. - -" -B523EBE64275BEE04D480B55CCAEAC3017A36980_1,B523EBE64275BEE04D480B55CCAEAC3017A36980,"Obviously, if you're interested in proving or disproving a certain theory, or want to build specific models, you may find it beneficial to work directly with the model settings. However, for those with a reduced amount of time, or with a large amount of data to prepare, the Auto Data Prep node may give you an advantage. - -Note that the results in this example are based on the training data only. To assess how well models generalize to other data in the real world, you would use a Partition node to hold out a subset of records for purposes of testing and validation. -" -1A548D934DFE57DD0F12195461F2DDB348EAE68C,1A548D934DFE57DD0F12195461F2DDB348EAE68C," Automated modeling for a flag target - -With the Auto Classifier node, you can automatically create and compare a number of different models for either flag (such as whether or not a given customer is likely to default on a loan or respond to a particular offer) or nominal (set) targets. -" -CE7976AFE82E2D17EE1FA308570AFA42E0E91667_0,CE7976AFE82E2D17EE1FA308570AFA42E0E91667," Building the flow - - - -1. Add a Data Asset node that points to pm_customer_train1.csv. -2. Add a Type node, and select response as the target field (Role = Target). Set the measure for this field to Flag. - -Figure 1. Setting the measurement level and role - -![Setting the measurement level and role](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_autoflag_build_target.png) -3. Set the role to None for the following fields: customer_id, campaign, response_date, purchase, purchase_date, product_id, Rowid, and X_random. These fields will be ignored when you are building the model. -4. Click Read Values in the Type node to make sure that values are instantiated. - -As we saw earlier, our source data includes information about four different campaigns, each targeted to a different type of customer account. These campaigns are coded as integers in the data, so to make it easier to remember which account type each integer represents, let's define labels for each one. - -Figure 2. Choosing to specify values for a field - -![Choosing to specify values for a field](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_autoflag_build_value.png) -5. On the row for the campaign field, click the entry in the Value mode column. -6. Choose Specify from the drop-down. - -Figure 3. Defining labels for the field values - -![Defining labels for the field values](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_autoflag_build_labels.png) -7. Click the Edit icon in the column for the campaign field. Type the labels as shown for each of the four values. -8. Click OK. Now the labels will be displayed in output windows instead of the integers. -9. Attach a Table node to the Type node. -10. Right-click the Table node and select Run. -11. In the Outputs panel, double-click the table output to open it. -12. Click OK to close the output window. - - - -" -CE7976AFE82E2D17EE1FA308570AFA42E0E91667_1,CE7976AFE82E2D17EE1FA308570AFA42E0E91667,"Although the data includes information about four different campaigns, you will focus the analysis on one campaign at a time. Since the largest number of records fall under the Premium account campaign (coded campaign=2 in the data), you can use a Select node to include only these records in the flow. - -Figure 4. Selecting records for a single campaign - -![Selecting records for a single campaign](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_autoflag_build_select.png) -" -B57A4B94BFAFDD0CD6EDBDFA4ABA1F708286E918,B57A4B94BFAFDD0CD6EDBDFA4ABA1F708286E918," Historical data - -This example uses the data file pm_customer_train1.csv, which contains historical data that tracks the offers made to specific customers in past campaigns, as indicated by the value of the campaign field. The largest number of records fall under the Premium account campaign. - -The values of the campaign field are actually coded as integers in the data (for example 2 = Premium account). Later, you'll define labels for these values that you can use to give more meaningful output. - -Figure 1. Data about previous promotions - -![Data about previous promotions](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_autoflag_historical.png) - -The file also includes a response field that indicates whether the offer was accepted (0 = no, and 1 = yes). This will be the target field, or value, that you want to predict. A number of fields containing demographic and financial information about each customer are also included. These can be used to build or ""train"" a model that predicts response rates for individuals or groups based on characteristics such as income, age, or number of transactions per month. -" -C4773EF8B0935E8DE084C1A6285EFE11E2A5F80A_0,C4773EF8B0935E8DE084C1A6285EFE11E2A5F80A," Generating and comparing models - - - -1. Attach an Auto Classifier node, open its BUILD OPTIONS properties, and select Overall accuracy as the metric used to rank models. -2. Set the Number of models to use to 3. This means that the three best models will be built when you run the node. - -Figure 1. Auto Classifier node, build options - -![Auto Classifier node, build options](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_autoflag_models_props.png) - -Under the EXPERT options, you can choose from many different modeling algorithms. -3. Deselect the Discriminant and SVM model types. (These models take longer to train on this data, so deselecting them will speed up the example. If you don't mind waiting, feel free to leave them selected.) - -Because you set Number of models to use to 3 under BUILD OPTIONS, the node will calculate the accuracy of the remaining algorithms and generate a single model nugget containing the three most accurate. - -Figure 2. Auto Classifier node, expert options - -![Auto Classifier node, expert options](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_autoflag_models_buildopts.png) -4. Under the ENSEMBLE options, select Confidence-weighted voting for the ensemble method. This determines how a single aggregated score is produced for each record. - -With simple voting, if two out of three models predict yes, then yes wins by a vote of 2 to 1. In the case of confidence-weighted voting, the votes are weighted based on the confidence value for each prediction. Thus, if one model predicts no with a higher confidence than the two yes predictions combined, then no wins. - -Figure 3. Auto Classifier node, ensemble options - -![Auto Classifier node, ensemble options](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_autoflag_models_ensemble.png) -" -C4773EF8B0935E8DE084C1A6285EFE11E2A5F80A_1,C4773EF8B0935E8DE084C1A6285EFE11E2A5F80A,"5. Run the flow. After a few minutes, the generated model nugget is built and placed on the canvas, and results are added to the Outputs panel. You can view the model nugget, or save or deploy it in a number of other ways. -6. Right-click the model nugget and select View Model. You'll see details about each of the models created during the run. (In a real situation, in which hundreds of models may be created on a large dataset, this could take many hours.) - -If you want to explore any of the individual models further, you can click their links in the Estimator column to drill down and browse the individual model results. - -Figure 4. Auto Classifier results - -![Auto Classifier results](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_autoflag_models_view.png) - -By default, models are sorted based on overall accuracy, because this was the measure you selected in the Auto Classifier node properties. The XGBoost Tree model ranks best by this measure, but the C5.0 and C&RT models are nearly as accurate. - -Based on these results, you decide to use all three of these most accurate models. By combining predictions from multiple models, limitations in individual models may be avoided, resulting in a higher overall accuracy. -7. In the USE column, select the three models. Return to the flow. -8. Attach an Analysis output node after the model nugget. Right-click the Analysis node and choose Run to run the flow. - -Figure 5. Auto Classifier example flow - -![Auto Classifier example flow](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_autoflag.png) - -" -C4773EF8B0935E8DE084C1A6285EFE11E2A5F80A_2,C4773EF8B0935E8DE084C1A6285EFE11E2A5F80A,"The aggregated score generated by the ensembled model is shown in a field named $XF-response. When measured against the training data, the predicted value matches the actual response (as recorded in the original response field) with an overall accuracy of 92.77%. While not quite as accurate as the best of the three individual models in this case (92.82% for C5.0), the difference is too small to be meaningful. In general terms, an ensembled model will typically be more likely to perform well when applied to datasets other than the training data. - -Figure 6. Analysis of the three ensembled models - -![Analysis of the three ensembled models](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_autoflag_models_xf.png) -" -823D9660B5B41B7C85904D0EB88A8D40AC57383F,823D9660B5B41B7C85904D0EB88A8D40AC57383F," Summary - -With this example Automated Modeling for a Flag Target flow, you used the Auto Classifier node to compare a number of different models, used the three most accurate models, and added them to the flow within an ensembled Auto Classifier model nugget. - - - -" -B2CA734AE719BA79AB4B5F877CF044F47090FAEC,B2CA734AE719BA79AB4B5F877CF044F47090FAEC," Forecasting bandwidth utilization - -An analyst for a national broadband provider is required to produce forecasts of user subscriptions to predict utilization of bandwidth. Forecasts are needed for each of the local markets that make up the national subscriber base. - -You'll use time series modeling to produce forecasts for the next three months for a number of local markets. -" -718CD1A731E0F4E5ABFD77519ED254B5CCC670FB,718CD1A731E0F4E5ABFD77519ED254B5CCC670FB," Forecasting with the Time Series node - -This example uses the flow Forecasting Bandwidth Utilization, available in the example project . The data file is broadband_1.csv. - -In SPSS Modeler, you can produce multiple time series models in a single operation. The broadband_1.csv data file has monthly usage data for each of 85 local markets. For the purposes of this example, only the first five series will be used; a separate model will be created for each of these five series, plus a total. - -The file also includes a date field that indicates the month and year for each record. This field will be used to label records. The date field reads into SPSS Modeler as a string, but to use the field in SPSS Modeler you will convert the storage type to numeric Date format using a Filler node. - -Figure 1. Example flow to show Time Series modeling - -![Example flow to show Time Series modeling](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_bandwidth_forecast.png) - -The Time Series node requires that each series be in a separate column, with a row for each interval. Watson Studio provides methods for transforming data to match this format if necessary. - -Figure 2. Monthly subscription data for broadband local markets - -![Monthly subscription data for broadband local markets](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_bandwidth_forecast_table.png) -" -C143A9F5185D9303301630D3FC53B604D3DCED2E,C143A9F5185D9303301630D3FC53B604D3DCED2E," Creating the flow - - - -1. Add a Data Asset node that points to broadband_1.csv. -2. To simplify the model, use a Filter node to filter out the Market_6 to Market_85 fields and the MONTH_ and YEAR_ fields. - - - -Figure 1. Example flow to show Time Series modeling - -![Example flow to show Time Series modeling](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_bandwidth_forecast_build.png) -" -EDB1038F1D71A450556D13AE34A416E46D7213FE_0,EDB1038F1D71A450556D13AE34A416E46D7213FE," Examining the data - -It's always a good idea to have a feel for the nature of your data before building a model. - -Does the data exhibit seasonal variations? Although Watson Studio can automatically find the best seasonal or nonseasonal model for each series, you can often obtain faster results by limiting the search to nonseasonal models when seasonality is not present in your data. Without examining the data for each of the local markets, we can get a rough picture of the presence or absence of seasonality by plotting the total number of subscribers over all five markets. - -Figure 1. Plotting the total number of subscribers - -![Plotting the total number of subscribers](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_bandwidth_forecast_data1.png) - - - -1. From the Graphs palette, attach a Time Plot node to the Filter node. -2. Add the Total field to the Series list. -3. Deselect the Display series in separate panel and Normalize options. Save the changes. -4. Right-click the Time Plot node and run it, then open the output that was generated. - -Figure 2. Time plot of the Total field - -![Time plot of the Total field](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_bandwidth_forecast_data2.png) - -The series exhibits a very smooth upward trend with no hint of seasonal variations. There might be individual series with seasonality, but it appears that seasonality isn't a prominent feature of the data in general. - -Of course, you should inspect each of the series before ruling out seasonal models. You can then separate out series exhibiting seasonality and model them separately. - -Watson Studio makes it easy to plot multiple series together. -5. Double-click the Time Plot node to open its properties again. -6. Remove the Total field from the Series list. -7. Add the Market_1 through Market_5 fields to the list. -8. Run the Time Plot node again. - -Figure 3. Time plot of multiple fields - -" -EDB1038F1D71A450556D13AE34A416E46D7213FE_1,EDB1038F1D71A450556D13AE34A416E46D7213FE,"![Time plot of multiple fields](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_bandwidth_forecast_data3.png) - -Inspection of each of the markets reveals a steady upward trend in each case. Although some markets are a little more erratic than others, there's no evidence of seasonality. -" -0721692D3F363B864A241FC4644D7D57B2DFF881,0721692D3F363B864A241FC4644D7D57B2DFF881," Defining the dates - -Now you need to change the storage type of the DATE_ field to date format. - - - -1. Attach a Filler node to the Filter node, then double-click the Filler node to open its properties -2. Add the DATE_ field, set the Replace option to Always, and set the Replace with value to to_date(DATE_). - -Figure 1. Setting the date storage type - -![Setting the date storage type](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_bandwidth_forecast_date1.png) -" -03DA4D2D23A65C146BA5AFD8F7175908F868F3EB_0,03DA4D2D23A65C146BA5AFD8F7175908F868F3EB," Examining the model - - - -1. Right-click the Time Series model nugget and select View Model to see information about the models generated for each of the markets. - -Figure 1. Time Series models generated for the markets - -![Time Series models generated for the markets](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_bandwidth_forecast_examine.png) -2. In the left TARGET column, select any of the markets. Then go to Model Information. The Number of Predictors row shows how many fields were used as predictors for each target. - -The other rows in the Model Information tables show various goodness-of-fit measures for each model. Stationary R-Squared measures how a model is better than a baseline model. If the final model is ARIMA(p,d,q)(P,D,Q), the baseline model is ARIMA(0,d,0)(0,D,0). If the final model is an Exponential Smoothing model, then d is 2 for Brown and Holt model and 1 for other models, and D is 1 if the seasonal length is greater than 1, otherwise D is 0. A negative stationary R squared means that the model under consideration is worse than the baseline model. Zero stationary R squared means that the model is as good or bad as the baseline model and a positive stationary R squared means the model is better than the baseline model - -The Statistic and df lines, and the Significance under Parameter Estimates, relate to the Ljung-Box statistic, a test of the randomness of the residual errors in the model. The more random the errors, the better the model is likely to be. Statistic is the Ljung-Box statistic itself, while df (degrees of freedom) indicates the number of model parameters that are free to vary when estimating a particular target. - -The Significance gives the significance value of the Ljung-Box statistic, providing another indication of whether the model is correctly specified. A significance value less than 0.05 indicates that the residual errors are not random, implying that there is structure in the observed series that is not accounted for by the model. - -" -03DA4D2D23A65C146BA5AFD8F7175908F868F3EB_1,03DA4D2D23A65C146BA5AFD8F7175908F868F3EB,"Taking both the Stationary R-Squared and Significance values into account, the models that the Expert Modeler has chosen for Market_3, and Market_4 are quite acceptable. The Significance values for Market_1, Market_2, and Market_5 are all less than 0.05, indicating that some experimentation with better-fitting models for these markets might be necessary. - -The display shows a number of additional goodness-of-fit measures. The R-Squared value gives an estimation of the total variation in the time series that can be explained by the model. As the maximum value for this statistic is 1.0, our models are fine in this respect. - -RMSE is the root mean square error, a measure of how much the actual values of a series differ from the values predicted by the model, and is expressed in the same units as those used for the series itself. As this is a measurement of an error, we want this value to be as low as possible. At first sight it appears that the models for Market_2 and Market_3, while still acceptable according to the statistics we have seen so far, are less successful than those for the other three markets. - -These additional goodness-of-fit measures include the mean absolute percentage errors ( MAPE) and its maximum value ( MAXAPE). Absolute percentage error is a measure of how much a target series varies from its model-predicted level, expressed as a percentage value. By examining the mean and maximum across all models, you can get an indication of the uncertainty in your predictions. - -The MAPE value shows that all models display a mean uncertainty of around 1%, which is very low. The MAXAPE value displays the maximum absolute percentage error and is useful for imagining a worst-case scenario for your forecasts. It shows that the largest percentage error for most of the models falls in the range of roughly 1.8% to 3.7%, again a very low set of figures, with only Market_4 being higher at close to 7%. - -" -03DA4D2D23A65C146BA5AFD8F7175908F868F3EB_2,03DA4D2D23A65C146BA5AFD8F7175908F868F3EB,"The MAE (mean absolute error) value shows the mean of the absolute values of the forecast errors. Like the RMSE value, this is expressed in the same units as those used for the series itself. MAXAE shows the largest forecast error in the same units and indicates worst-case scenario for the forecasts. - -Although these absolute values are interesting, it's the values of the percentage errors ( MAPE and MAXAPE) that are more useful in this case, as the target series represent subscriber numbers for markets of varying sizes. - -Do the MAPE and MAXAPE values represent an acceptable amount of uncertainty with the models? They are certainly very low. This is a situation in which business sense comes into play, because acceptable risk will change from problem to problem. We'll assume that the goodness-of-fit statistics fall within acceptable bounds, so let's go on to look at the residual errors. - -Examining the values of the autocorrelation function ( ACF) and partial autocorrelation function ( PACF) for the model residuals provides more quantitative insight into the models than simply viewing goodness-of-fit statistics. - -A well-specified time series model will capture all of the nonrandom variation, including seasonality, trend, and cyclic and other factors that are important. If this is the case, any error should not be correlated with itself (autocorrelated) over time. A significant structure in either of the autocorrelation functions would imply that the underlying model is incomplete. -3. For the fourth market, click Correlogram to display the values of the autocorrelation function ( ACF) and partial autocorrelation function ( PACF) for the residual errors in the model. - -Figure 2. ACF and PACF values for the fourth market - -![ACF and PACF values for the fourth market](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_bandwidth_forecast_examine_correlogram.png) - -" -03DA4D2D23A65C146BA5AFD8F7175908F868F3EB_3,03DA4D2D23A65C146BA5AFD8F7175908F868F3EB,"In these plots, the original values of the error variable have been lagged (under BUILD OPTIONS - OUTPUT) up to the default value of 24 time periods and compared with the original value to see if there's any correlation over time. Ideally, the bars representing all lags of ACF and PACF should be within the shaded area. However, in practice, there may be some lags that extend outside of the shaded area. This is because, for example, some larger lags may not have been tried for inclusion in the model in order to save computation time. Some lags are insignificant and are removed from the model. If you want to improve the model further and don't care whether these lags are redundant or not, these plots serve as tips for you as to which lags are potential predictors. - -Should this occur, you'd need to check the lower ( PACF) plot to see whether the structure is confirmed there. The PACF plot looks at correlations after controlling for the series values at the intervening time points. - -The values for Market_4 are all within the shaded area, so we can continue and check the values for the other markets. -4. Open the Correlogram for each of the other markets and the totals. - -The values for the other markets all show some values outside the shaded area, confirming what we suspected earlier from their Significance values. We'll need to experiment with some different models for those markets at some point to see if we can get a better fit, but for the rest of this example, we'll concentrate on what else we can learn from the Market_4 model. -5. Return to your flow canvas. Attach a new Time Plot node to the Time Series model nugget. Double-click the node to open its properties. -6. Deselect the Display series in separate panel option. -7. For the Series list, add the Market_4 and $TS-Market_4 fields. -" -03DA4D2D23A65C146BA5AFD8F7175908F868F3EB_4,03DA4D2D23A65C146BA5AFD8F7175908F868F3EB,"8. Save the properties, then right-click the Time Plot node and select Run to generate a line graph of the actual and forecast data for the first of the local markets.Notice how the forecast ($TS-Market_4) line extends past the end of the actual data. You now have a forecast of expected demand for the next three months in this market. - -Figure 3. Time Plot of actual and forecast data for Market_4 - -![Time Plot of actual and forecast data for Market_4](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_bandwidth_forecast_examine_line1.png) - -The lines for actual and forecast data over the entire time series are very close together on the graph, indicating that this is a reliable model for this particular time series. - -You have a reliable model for this particular market, but what margin of error does the forecast have? You can get an indication of this by examining the confidence interval. -9. Double-click the last Time Plot node in the flow (the one labeled Market_4 $TS-Market_4). -10. Add the $TSLCI-Market_4 and $TSUCI-Market_4 fields to the Series list. -11. Save the properties and run the node again. - - - -Now you have the same graph as before, but with the upper ($TSUCI) and lower ($TSLCI) limits of the confidence interval added. Notice how the boundaries of the confidence interval diverge over the forecast period, indicating increasing uncertainty as you forecast further into the future. However, as each time period goes by, you'll have another (in this case) month's worth of actual usage data on which to base your forecast. In a real-world scenario, you could read the new data into the flow and reapply your model now that you know it's reliable. - -Figure 4. Time Plot with confidence interval added - -![Time Plot with confidence interval added](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_bandwidth_forecast_examine_line2.png) -" -A69DA07F8EE0529080646A4B1EAB45C1074AB683_0,A69DA07F8EE0529080646A4B1EAB45C1074AB683," Creating the model - - - -1. Double-click the Time Series node to open its properties. -2. Under FIELDS, add all 5 of the markets to the Candidate Inputs lists. Also add the Total field to the Targets list. -3. Under BUILD OPTIONS - GENERAL, make sure the Expert Modeler method is selected using all default settings. Doing so enables the Expert Modeler to decide the most appropriate model to use for each time series. - -Figure 1. Choosing the Expert Modeler method for Time Series - -![Choosing the Expert Modeler method for Time Series](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_bandwidth_forecast_expert.png) -4. Save the settings and then run the flow. A Time Series model nugget is generated. Attach it to the Time Series node. -5. Attach a Table node to the Time Series model nugget and run the flow again. - -Figure 2. Example flow showing Time Series modeling - -![Example flow showing Time Series modeling](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_bandwidth_forecast_flow.png) - - - -There are now three new rows appended to the end of the original data. These are the rows for the forecast period, in this case January to March 2004. - -Several new columns are also present now. The $TS- columns are added by the Time Series node. The columns indicate the following for each row (that is, for each interval in the time series data): - - - - Column Description - - $TS-colname The generated model data for each column of the original data. - $TSLCI-colname The lower confidence interval value for each column of the generated model data. - $TSUCI-colname The upper confidence interval value for each column of the generated model data. - $TS-Total The total of the $TS-colname values for this row. - $TSLCI-Total The total of the $TSLCI-colname values for this row. -" -A69DA07F8EE0529080646A4B1EAB45C1074AB683_1,A69DA07F8EE0529080646A4B1EAB45C1074AB683," $TSUCI-Total The total of the $TSUCI-colname values for this row. - - - -The most significant columns for the forecast operation are the $TS-Market_n, $TSLCI-Market_n, and $TSUCI-Market_n columns. In particular, these columns in the last three rows contain the user subscription forecast data and confidence intervals for each of the local markets. -" -8CCC5CD4A9C103249435FC0A7FB18874B447DE3D,8CCC5CD4A9C103249435FC0A7FB18874B447DE3D," Summary - -You've learned how to use the Expert Modeler to produce forecasts for multiple time series. In a real-world scenario, you could now transform nonstandard time series data into a format suitable for input to a Time Series node. -" -59CDBABC75E7EC8987A3C464F3277923F444A724,59CDBABC75E7EC8987A3C464F3277923F444A724," Defining the targets - - - -1. Add a Type node after the Filler node, then double-click the Type node to open its properties. -2. Set the role to None for the DATE_ field. Set the role to Target for all other fields (the Market_n fields plus the Total field). -3. Click Read Values to populate the Values column. - -Figure 1. Setting the role for fields - -![Setting the role for fields](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_bandwidth_forecast_targets.png) -" -83579304F7F59126FE983B1ED44BBBB1AC8BFCB2,83579304F7F59126FE983B1ED44BBBB1AC8BFCB2," Setting the time intervals - - - -1. Add a Time Series node and attach it to the Type node. Double-click the node to edit its properties. -2. Under OBSERVATIONS AND TIME INTERVAL, select DATE_ as the Time/Date field. -3. Select Months as the time interval. - -Figure 1. Setting the time interval - -![Setting the time interval](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_bandwidth_forecast_interval.png) -4. Under MODEL OPTIONS, select the Extend records into the future option and set the value to 3. - -Figure 2. Setting the forecast period - -![Setting the forecast period](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_bandwidth_forecast_period.png) -" -7E9A5F54713CE7CB98EA4BCB223A40C4952F0083,7E9A5F54713CE7CB98EA4BCB223A40C4952F0083," Telecommunications churn - -Logistic regression is a statistical technique for classifying records based on values of input fields. It is analogous to linear regression, but takes a categorical target field instead of a numeric one. - -For example, suppose a telecommunications provider is concerned about the number of customers it's losing to competitors. If service usage data can be used to predict which customers are liable to transfer to another provider, offers can be customized to retain as many customers as possible. - -This example uses the flow named Telecommunications Churn, available in the example project . The data file is telco.csv. - -This example focuses on using usage data to predict customer loss (churn). Because the target has two distinct categories, a binomial model is used. In the case of a target with multiple categories, a multinomial model could be created instead. See [Classifying telecommunications customers](https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_classify.htmltut_classify) for more information. -" -433775834EA8AE82CBFA6077FC361C3C52A99E42_0,433775834EA8AE82CBFA6077FC361C3C52A99E42," Building the flow - -Figure 1. Example flow to classify customers using binomial logistic regression - -![Example flow to classify customers using binomial logistic regression](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_churn.png) - - - -1. Add a Data Asset node that points to telco.csv. -2. Add a Type node, double-click it to open its properties, and make sure all measurement levels are set correctly. For example, most fields with values of 0 and 1 can be regarded as flags, but certain fields, such as gender, are more accurately viewed as a nominal field with two values. - -Figure 2. Measurement levels - -![Measurement levels](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_churn_measurement.png) -3. Set the measurement level for the churn field to Flag, and set the role to Target. Leave the role for all other fields set to Input. -4. Add a Feature Selection modeling node to the Type node. You can use a Feature Selection node to remove predictors or data that don't add any useful information about the predictor/target relationship. -5. Run the flow. Right-click the resulting model nugget and select View Model. You'll see a list of the most important fields. -6. Add a Filter node after the Type node. Not all of the data in the telco.csv data file will be useful in predicting churn. You can use the filter to only select data considered to be important for use as a predictor (the fields marked as Important in the model generated in the previous step). -7. Double-click the Filter node to open its properties, select the option Retain the selected fields (all other fields are filtered), and add the following important fields from the Feature Selection model nugget: - -tenure -age -address -income -ed -employ -equip -callcard -wireless -longmon -tollmon -equipmon -cardmon -wiremon -longten -tollten -cardten -voice -pager -internet -callwait -confer -ebill -loglong -logtoll -lninc -custcat -churn -" -433775834EA8AE82CBFA6077FC361C3C52A99E42_1,433775834EA8AE82CBFA6077FC361C3C52A99E42,"8. Add a Data Audit output node after the Filter node. Right-click the node and run it, then open the output that was added to the Outputs pane. -9. Look at the % Complete column, which lets you identify any fields with large amounts of missing data. In this case, the only field you need to amend is logtoll, which is less than 50% complete. -10. Close the output, and add a Filler node after the Filter node. Double-click the node to open its properties, click Add Columns, and select the logtoll field. -11. Under Replace, select Blank and null values. Click Save to close the node properties. -12. Right-click the Filler node you just created and select Create supernode. Double-click the supernode and change its name to Missing Value Imputation. -13. Add a Logistic node after the Filler node. Double-click the node to open its properties. Under Model Settings, select the Binomial procedure and the Forwards Stepwise method. - -Figure 3. Choosing model settings - -![Choosing model settings](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_churn_model.png) -14. Under Expert Options, select Expert. - -Figure 4. Choosing expert options - -![Choosing expert options](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_churn_expert.png) -15. Click Output to open the display settings. Select At each step, Iteration history, and Parameter estimates, then click OK. - -Figure 5. Choosing expert options - -![Choosing expert options](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_churn_expert_output.png) -" -B648A5DEE55D7DBF258B7B088830F18C040C61D5,B648A5DEE55D7DBF258B7B088830F18C040C61D5," Browsing the model - - - -* Right-click the Logistic node and run it to generate its model nugget. Right-click the nugget and select View Model.The Parameter Estimates page shows the target (churn) and inputs (predictor fields) used by the model. These are the fields that were actually chosen based on the Forwards Stepwise method, not the complete list submitted for consideration. - -Figure 1. Parameter estimates showing input fields - -![Parameter estimates showing input fields](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_churn_estimates.png) - -To assess how well the model actually fits your data, a number of diagnostics are available in the expert node settings when you're building the flow. - -Note also that these results are based on the training data only. To assess how well the model generalizes to other data in the real world, you would use a Partition node to hold out a subset of records for purposes of testing and validation. -" -2779271745A02F4DE48BD92AB93A7A4BE4A73D38,2779271745A02F4DE48BD92AB93A7A4BE4A73D38," Classifying telecommunications customers - -Logistic regression is a statistical technique for classifying records based on values of input fields. It is analogous to linear regression, but takes a categorical target field instead of a numeric one. - -For example, suppose a telecommunications provider has segmented its customer base by service usage patterns, categorizing the customers into four groups. If demographic data can be used to predict group membership, you can customize offers for individual prospective customers. - -This example uses the flow named Classifying Telecommmunications Customers, available in the example project . The data file is telco.csv. - -The example focuses on using demographic data to predict usage patterns. The target field custcat has four possible values that correspond to the four customer groups, as follows: - - - -Table 1. Possible values for the target field - - Value Label - - 1 Basic Service - 2 E-Service - 3 Plus Service - 4 Total Service - - - -Because the target has multiple categories, a multinomial model is used. In the case of a target with two distinct categories, such as yes/no, true/false, or churn/don't churn, a binomial model could be created instead. See [Telecommunications churn](https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_churn.htmltut_churn) for more information. -" -400E9E780D8A149530DF21E38B256B71BDA12D83_0,400E9E780D8A149530DF21E38B256B71BDA12D83," Building the flow - -Figure 1. Example flow to classify customers using multinomial logistic regression - -![Example flow to classify customers using multinomial logistic regression](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_classify.png) - - - -1. Add a Data Asset node that points to telco.csv. -2. Add a Type node, double-click it to open its properties, and click Read Values. Make sure all measurement levels are set correctly. For example, most fields with values of 0.0 and 1.0 can be regarded as flags. - -Figure 2. Measurement levels - -![Measurement levels](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_classify_measurement.png) - -Notice that gender is more correctly considered as a field with a set of two values, instead of a flag, so leave its measurement value as Nominal. -3. Set the role for the custcat field to Target. Leave the role for all other fields set to Input. -4. Since this example focuses on demographics, use a Filter node to include only the relevant fields: region, age, marital, address, income, ed, employ, retire, gender, reside, and custcat). Other fields will be excluded for the purpose of this analysis. To filter them out, in the Filter node properties, click Add Columns and select the fields to exclude. - -Figure 3. Filtering on demographic fields - -![Filtering on demographic fields](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_classify_filter.png) - -(Alternatively, you could change the role to None for these fields rather than excluding them, or select the fields you want to use in the modeling node.) -5. In the Logistic node properties, under MODEL SETTINGS, select the Stepwise method. Also select Multinomial, Main Effects, and Include constant in equation. - -Figure 4. Example flow to classify customers using multinomial logistic regression - -" -400E9E780D8A149530DF21E38B256B71BDA12D83_1,400E9E780D8A149530DF21E38B256B71BDA12D83,"![Example flow to classify customers using multinomial logistic regression](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_classify_logistic.png) -6. Under EXPERT OPTIONS, select Expert mode, expand the Output section, and select Classification table. - -Figure 5. Example flow to classify customers using multinomial logistic regression - -![Example flow to classify customers using multinomial logistic regression](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_classify_output.png) -" -D7FD91BAC6BE16ABD9B158C6B118E5E09E047C6D,D7FD91BAC6BE16ABD9B158C6B118E5E09E047C6D," Browsing the model - - - -* Run the Logistic node to generate the model. Right-click the model nugget and select View Model. - -Figure 1. Browsing the model results - -![Browsing the model results](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_classify_model.png) - -You can then explore the model information, feature (predictor) importance, and parameter estimates information. - -Note that these results are based on the training data only. To assess how well the model generalizes to other data in the real world, you can use a Partition node to hold out a subset of records for purposes of testing and validation. -" -9555087B12B80060FB337F8974FEA9261174115E,9555087B12B80060FB337F8974FEA9261174115E," Condition monitoring - -This example concerns monitoring status information from a machine and the problem of recognizing and predicting fault states. - -The data is created from a fictitious simulation and consists of a number of concatenated series measured over time. Each record is a snapshot report on the machine in terms of the following: - - - -* Time. An integer. -* Power. An integer. -* Temperature. An integer. -* Pressure. 0 if normal, 1 for a momentary pressure warning. -* Uptime. Time since last serviced. -* Status. Normally 0, changes to an error code if an error occurs (101, 202, or 303). -* Outcome. The error code that appears in this time series, or 0 if no error occurs. (These codes are available only with the benefit of hindsight.) - - - -This example uses the flow named Condition Monitoring, available in the example project . The data files are cond1n.csv and cond2n.csv. - -For each time series, there's a series of records from a period of normal operation followed by a period leading to the fault, as shown in the following table: - - - - Time Power Temperature Pressure Uptime Status Outcome - - 0 1059 259 0 404 0 0 - 1 1059 259 0 404 0 0 - ... - 51 1059 259 0 404 0 0 - 52 1059 259 0 404 0 0 - 53 1007 259 0 404 0 303 - 54 998 259 0 404 0 303 - ... - 89 839 259 0 404 0 303 - 90 834 259 0 404 303 303 - 0 965 251 0 209 0 0 - 1 965 251 0 209 0 0 - ... - 51 965 251 0 209 0 0 - 52 965 251 0 209 0 0 - 53 938 251 0 209 0 101 - 54 936 251 0 209 0 101 - ... - 208 644 251 0 209 0 101 - 209 640 251 0 209 101 101 - - - -The following process is common to most data mining projects: - - - -* Examine the data to determine which attributes may be relevant to the prediction or recognition of the states of interest. -* Retain those attributes (if already present), or derive and add them to the data, if necessary. -" -D59300B05666E072EA812EFFA009E2DD4B60A508,D59300B05666E072EA812EFFA009E2DD4B60A508," Examining the data - -For the first part of the process, imagine you have a flow that plots a number of graphs. If the time series of temperature or power contains visible patterns, you could differentiate between impending error conditions or possibly predict their occurrence. For both temperature and power, the flow plots the time series associated with the three different error codes on separate graphs, yielding six graphs. Select nodes separate the data associated with the different error codes. - -The graphs clearly display patterns distinguishing 202 errors from 101 and 303 errors. The 202 errors show rising temperature and fluctuating power over time; the other errors don't. However, patterns distinguishing 101 from 303 errors are less clear. Both errors show even temperature and a drop in power, but the drop in power seems steeper for 303 errors. - -Based on these graphs, it appears that the presence and rate of change for both temperature and power, as well as the presence and degree of fluctuation, are relevant to predicting and distinguishing faults. These attributes should therefore be added to the data before applying the learning systems. -" -D11A81E7333F63092FCF2C047744F2F3C18C1903,D11A81E7333F63092FCF2C047744F2F3C18C1903," Learning - -Running the flow trains the C5.0 rule and neural network (net). The network may take some time to train, but training can be interrupted early to save a net that produces reasonable results. After the learning is complete, model nuggets are generated: one represents the neural net and one represents the rule. - -Figure 1. Generated model nuggets - -![Generated model nuggets](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_condition_nuggets.png) - -These model nuggets enable us to test the system or export the results of the model. In this example, we will test the results of the model. -" -43071778B4E33375953AFB1AB743B342D3CC906A,43071778B4E33375953AFB1AB743B342D3CC906A," Data preparation - -Based on the results of exploring the data, the following flow derives the relevant data and learns to predict faults. - -This example uses the flow named Condition Monitoring, available in the example project installed with the product. The data files are cond1n.csv and cond2n.csv. - - - -1. On the My Projects screen, click Example Project. -2. Scroll down to the Modeler flows section, click View all, and select the Condition Monitoring flow. - - - -Figure 1. Condition Monitoring example flow - -![Condition Monitoring example flow](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_condition.png)The flow uses a number of Derive nodes to prepare the data for modeling. - - - -* Data Asset import node. Reads data file cond1n.csv. -* Pressure Warnings (Derive). Counts the number of momentary pressure warnings. Reset when time returns to 0. -* TempInc (Derive). Calculates momentary rate of temperature change using @DIFF1. -* PowerInc (Derive). Calculates momentary rate of power change using @DIFF1. -* PowerFlux (Derive). A flag, true if power varied in opposite directions in the last record and this one; that is, for a power peak or trough. -* PowerState (Derive). A state that starts as Stable and switches to Fluctuating when two successive power fluxes are detected. Switches back to Stable only when there hasn't been a power flux for five time intervals or when Time is reset. -* PowerChange (Derive). Average of PowerInc over the last five time intervals. -* TempChange (Derive). Average of TempInc over the last five time intervals. -* Discard Initial (Select). Discards the first record of each time series to avoid large (incorrect) jumps in Power and Temperature at boundaries. -" -A187344EB767BAC8E4D674651BEDAFA33F70BFA1,A187344EB767BAC8E4D674651BEDAFA33F70BFA1," Testing - -Both of the generated model nuggets are connected to the Type node. - - - -1. Reposition the nuggets as shown, so the Type node connects to the neural net nugget, which connects to the C5.0 nugget. -2. Attach an Analysis node to the C5.0 nugget. -3. Edit the Data Asset node to use the file cond2n.csv (instead of cond1n.csv), which contains unseen test data. -4. Right-click the Analysis node and select Run. Doing so yields figures reflecting the accuracy of the trained network and rule. - -Figure 1. Testing the trained network - -![Testing the trained network](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_condition_analysis.png) -" -3D9FB046D583A2D0177ECB4DA25EEAEB4FEBCCA9,3D9FB046D583A2D0177ECB4DA25EEAEB4FEBCCA9," Drug treatment - exploratory graphs - -In this example, imagine you're a medical researcher compiling data for a study. You've collected data about a set of patients, all of whom suffered from the same illness. During their course of treatment, each patient responded to one of five medications. Part of your job is to use data mining to find out which drug might be appropriate for a future patient with the same illness. - -This example uses the flow named Drug Treatment - Exploratory Graphs, available in the example project . The data file is drug1n.csv. - -Figure 1. Drug treatment example flow - -![Drug treatment example flow](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_data.png) - -The data fields used in this example are: - - - - Data field Description - - Age Age of patient (number) - Sex M or F - BP Blood pressure: HIGH, NORMAL, or LOW - Cholesterol Blood cholesterol: NORMAL or HIGH - Na Blood sodium concentration -" -13D83AF5CCD616F312472FBAB4AC7D7A56D0F41D,13D83AF5CCD616F312472FBAB4AC7D7A56D0F41D," Using an Analysis node - -You can assess the accuracy of the model using an Analysis node. From the Palette, under Outputs, place an Analysis node on the canvas and attach it to the C5.0 model nugget. Then right-click the Analysis node and select Run. - -Figure 1. Analysis node - -![Analysis node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_analysis.png)The Analysis node output shows that with this artificial dataset, the model correctly predicted the choice of drug for every record in the dataset. With a real dataset you are unlikely to see 100% accuracy, but you can use the Analysis node to help determine whether the model is acceptably accurate for your particular application. - -Figure 2. Analysis node output - -![Analysis node output](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_analysis_output.png) -" -18A7A354C4B46E26DF8304755C8BE954BB922B04,18A7A354C4B46E26DF8304755C8BE954BB922B04," Browsing the model - -When the C5.0 node runs, its model nugget is added to the flow. To browse the model, right-click the model nugget and choose View Model. - -The Tree Diagram displays the set of rules generated by the C5.0 node in a tree format. Now you can see the missing pieces of the puzzle. For people with an Na-to-K ratio less than 14.829 and high blood pressure, age determines the choice of drug. For people with low blood pressure, cholesterol level seems to be the best predictor. - -Figure 1. Tree diagram - -![Tree diagram](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_browse_tree.png) - -You can hover over the nodes in the tree to see more details such as the number of cases for each blood pressure category and the confidence percentage of cases. -" -D1B1E93AD61D2B095BF8A00E9739FCF7D1DC974C,D1B1E93AD61D2B095BF8A00E9739FCF7D1DC974C," Building a model - -By exploring and manipulating the data, you have been able to form some hypotheses. The ratio of sodium to potassium in the blood seems to affect the choice of drug, as does blood pressure. But you cannot fully explain all of the relationships yet. This is where modeling will likely provide some answers. In this case, you will try to fit the data using a rule-building model called C5.0. - -Since you're using a derived field, Na_to_K, you can filter out the original fields, Na and K, so they're not used twice in the modeling algorithm. You can do this by using a Filter node. - - - -1. Place a Filter node on the canvas and connect it to the Derive node. - -Figure 1. Filter node - -![Derive node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_build_flow.png) -2. Double-click the Filter node to edit its properties. Name it Discard Fields. -3. For Mode, make sure Filter the selected fields is selected. Then select the K and Na fields. Click Save. -4. Place a Type node on the canvas and connect it to the Filter node. With the Type node, you can indicate the types of fields you're using and how they're used to predict the outcomes. - -Figure 2. Type node - -![Type node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_build_flow2.png) -5. Double-click the Type node to edit its properties. Name it Define Types. -" -D733288343A1790788E8069EB55908F9D12566A9,D733288343A1790788E8069EB55908F9D12566A9," Reading in text data - - - -1. You can read in delimited text data using a Data Asset import node. From the Palette, under Import, add a Data Asset node to your flow. -" -A73CA4F67523DBB58FD3521AE9BFF83AEE634607,A73CA4F67523DBB58FD3521AE9BFF83AEE634607," Creating a distribution chart - -During data mining, it is often useful to explore the data by creating visual summaries. Watson Studio offers many different types of charts to choose from, depending on the kind of data you want to summarize. For example, to find out what proportion of the patients responded to each drug, use a Distribution node. - -Figure 1. Distribution node - -![Distribution node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_distribution.png) - - - -1. Under Graphs on the Palette, add a Distribution node to the flow and connect it to the drug1n.csv Data Asset node. Then double-click the node to edit its options. -2. Select Drug as the target field whose distribution you want to show. Then click Save, right-click the Distribution node, and select Run. A distribution chart is added to the Outputs panel. - - - -The chart helps you see the shape of the data. It shows that patients responded to drug Y most often and to drugs B and C least often. - -Alternatively, you can attach and run a Data Audit node for a quick glance at distributions and histograms for all fields at once. The Data Audit node is available under Outputs on the Palette. -" -0A5C26B7B5B7C1E3AFD8901D8B91F2E3C527DA3E_0,0A5C26B7B5B7C1E3AFD8901D8B91F2E3C527DA3E," Deriving a new field - -Figure 1. Scatterplot of drug distribution - -![Scatterplot of drug distribution](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_scatterplot.png) - -Since the ratio of sodium to potassium seems to predict when to use drug Y, you can derive a field that contains the value of this ratio for each record. This field might be useful later when you build a model to predict when to use each of the five drugs. - - - -1. To simplify your flow layout, start by deleting all the nodes except the drug1n.csv Data Asset node. -2. Place a Derive node on the canvas and connect it to the drug1n.csv Data Asset node. - -Figure 2. Derive node - -![Derive node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_newfield_flow.png) -3. Double-click the Derive node to edit its properties. -4. Name the new field Na_to_K. Since you obtain the new field by dividing the sodium value by the potassium value, enter Na/K for the expression. You can also create an expression by clicking the calculator icon. This opens the Expression Builder, a way to interactively create expressions using built-in lists of functions, operands, and fields and their values. -5. You can check the distribution of your new field by attaching a Histogram node to the Derive node. In the Histogram node properties, specify Na_to_K as the field to be plotted and Drug as the color overlay field. - -Figure 3. Histogram node - -![Histogram node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_newfield_histogram_flow.png) -6. Right-click the Histogram node and select Run. A histogram chart is added to the Outputs pane. Based on the chart, you can conclude that when the Na_to_K value is around 15 or more, drug Y is the drug of choice. - -" -0A5C26B7B5B7C1E3AFD8901D8B91F2E3C527DA3E_1,0A5C26B7B5B7C1E3AFD8901D8B91F2E3C527DA3E,"Figure 4. Histogram chart output - -![Histogram chart output](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_histogram.png) -" -BB659D7B00DB3096C4082BB93C7FDB933738B013,BB659D7B00DB3096C4082BB93C7FDB933738B013," Creating a scatterplot - -Now let's take a look at what factors might influence Drug, the target variable. As a researcher, you know that the concentrations of sodium and potassium in the blood are important factors. Since these are both numeric values, you can create a scatterplot of sodium versus potassium, using the drug categories as a color overlay. - -Figure 1. Plot node - -![Plot node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_scatterplot_flow.png) - - - -1. Place a Plot node on the canvas and connect it to the drug1n.csv Data Asset node. Then double-click the Plot node to edit its properties. -2. Select Na as the X field, K as the Y field, and Drug as the Color (overlay) field. Click Save, then right-click the Plot node and select Run. A plot chart is added to the Outputs pane. - -The plot clearly shows a threshold above which the correct drug is always drug Y and below which the correct drug is never drug Y. This threshold is a ratio -- the ratio of sodium (Na) to potassium (K). - -Figure 2. Scatterplot of drug distribution - -![Scatterplot of drug distribution](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_scatterplot.png) -" -F7D95A9FCCA49861B0D4B7DCE677D4E6EFF1F7C1,F7D95A9FCCA49861B0D4B7DCE677D4E6EFF1F7C1," Creating advanced visualizations - -The previous three sections use different types of graph nodes. Another way to explore data is with the advanced visualizations feature. - -You can use the Charts node to launch the chart builder and create advanced charts to explore your data from different perspectives and identify patterns, connections, and relationships within your data. - -Figure 1. Advanced visualizations - -![Advanced visualizations](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_viz.png) -" -95C10FDC6D0C3B142DA650044E1A0581D04EF8E4,95C10FDC6D0C3B142DA650044E1A0581D04EF8E4," Creating a web chart - -Since many of the data fields are categorical, you can also try plotting a web chart, which maps associations between different categories. - -Figure 1. Web node - -![Web node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_web_flow.png) - - - -1. Place a Web node on the canvas and connect it to the drug1n.csv Data Asset node. Then double-click the Web node to edit its properties. -2. Select the fields BP (for blood pressure) and Drug. Click Save, then right-click the Web node and select Run. A web chart is added to the Outputs pane. - -Figure 2. Web graph of drugs vs. blood pressure - -![Web graph of drugs vs. blood pressure](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_web.png) - - - -From the plot, it appears that drug Y is associated with all three levels of blood pressure. This is no surprise; you have already determined the situation in which drug Y is best. - -But if you ignore drug Y and focus on the other drugs, you can see that drugs A and B are also associated with high blood pressure. And drugs C and X are associated with low blood pressure. And normal blood pressure is associated with drug X. At this point, though, you still don't know how to choose between drugs A and B or between drugs C and X, for a given patient. This is where modeling can help. -" -E8B776685A4C1FFCDC8F90C57C3AD7243A43B2B3,E8B776685A4C1FFCDC8F90C57C3AD7243A43B2B3," Forecasting catalog sales - -A catalog company is interested in forecasting monthly sales of its men's clothing line, based on 10 years of their sales data. - -This example uses the flow Forecasting Catalog Sales, available in the example project . The data file is catalog_seasfac.csv. - -We've seen in an earlier tutorial how you can let the Expert Modeler decide which is the most appropriate model for your time series. Now it's time to take a closer look at the two methods that are available when choosing a model yourself—exponential smoothing and ARIMA. - -To help you decide on an appropriate model, it's a good idea to plot the time series first. Visual inspection of a time series can often be a powerful guide in helping you choose. In particular, you need to ask yourself: - - - -" -B5873013457AADDCC20DB880B3FC9D9BFB7BD348_0,B5873013457AADDCC20DB880B3FC9D9BFB7BD348," ARIMA - -With the ARIMA procedure, you can create an autoregressive integrated moving-average (ARIMA) model that is suitable for finely tuned modeling of time series. - -ARIMA models provide more sophisticated methods for modeling trend and seasonal components than do exponential smoothing models, and they have the added benefit of being able to include predictor variables in the model. - -Continuing the example of the catalog company that wants to develop a forecasting model, we have seen how the company has collected data on monthly sales of men's clothing along with several series that might be used to explain some of the variation in sales. Possible predictors include the number of catalogs mailed and the number of pages in the catalog, the number of phone lines open for ordering, the amount spent on print advertising, and the number of customer service representatives. - -Are any of these predictors useful for forecasting? Is a model with predictors really better than one without? Using the ARIMA procedure, we can create a forecasting model with predictors, and see if there's a significant difference in predictive ability over the exponential smoothing model with no predictors. - -With the ARIMA method, you can fine-tune the model by specifying orders of autoregression, differencing, and moving average, as well as seasonal counterparts to these components. Determining the best values for these components manually can be a time-consuming process involving a good deal of trial and error so, for this example, we'll let the Expert Modeler choose an ARIMA model for us. - -We'll try to build a better model by treating some of the other variables in the dataset as predictor variables. The ones that seem most useful to include as predictors are the number of catalogs mailed (mail), the number of pages in the catalog (page), the number of phone lines open for ordering (phone), the amount spent on print advertising (print), and the number of customer service representatives (service). - - - -1. Double-click the Type node to open its properties. -2. Set the role for mail, page, phone, print, and service to Input. -3. Ensure that the role for men is set to Target and that all the remaining fields are set to None. -4. Click Save. -" -B5873013457AADDCC20DB880B3FC9D9BFB7BD348_1,B5873013457AADDCC20DB880B3FC9D9BFB7BD348,"5. Double-click the Time Series node. -6. Under BUILD OPTIONS - GENERAL, select Expert Modeler for the method. -7. Select the options ARIMA models only and Expert Modeler considers seasonal models. - -Figure 1. Choosing only ARIMA models - -![Choosing only ARIMA models](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_arima.png) -8. Click Save and run the flow. -9. Right-click the model nugget and select View Model. Click men and then click Model information. Notice how the Expert Modeler has chosen only two of the five specified predictors as being significant to the model. - -Figure 2. Expert Modeler chooses two predictors - -![Expert Modeler chooses two predictors](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_arima_predictors.png) -10. Open the latest chart output. - -Figure 3. ARIMA model with predictors specified - -![ARIMA model with predictors specified](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_arima_chart.png) - -This model improves on the previous one by capturing the large downward spike as well, making it the best fit so far. - -We could try refining the model even further, but any improvements from this point on are likely to be minimal. We've established that the ARIMA model with predictors is preferable, so let's use the model we have just built. For the purposes of this example, we'll forecast sales for the coming year. -11. Double-click the Time Series node. -12. Under MODEL OPTIONS, select the option Extend records into the future and set its value to 12. -13. Select the Compute future values of inputs option. -14. Click Save and run the flow.The forecast looks good. As expected, there's a return to normal sales levels following the December peak, and a steady upward trend in the second half of the year, with sales in general better than those for the previous year. - -" -B5873013457AADDCC20DB880B3FC9D9BFB7BD348_2,B5873013457AADDCC20DB880B3FC9D9BFB7BD348,"Figure 4. Sales forecast extended by 12 months - -![Sales forecast extended by 12 months](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_arima_finalchart.png) -" -05F38627C9EC286CA7C379A31AA27392A65411AB,05F38627C9EC286CA7C379A31AA27392A65411AB," Examining the data - -The series shows a general upward trend; that is, the series values tend to increase over time. The upward trend is seemingly constant, which indicates a linear trend. - -Figure 1. Actual sales of men's clothing - -![Actual sales of men's clothing](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_series.png) - -The series also has a distinct seasonal pattern with annual highs in December, as indicated by the vertical lines on the graph. The seasonal variations appear to grow with the upward series trend, which suggests multiplicative rather than additive seasonality. - -Now that you've identified the characteristics of the series, you're ready to try modeling it. The exponential smoothing method is useful for forecasting series that exhibit trend, seasonality, or both. As we've seen, this data exhibits both characteristics. -" -2ED4D7860687B2EF6F85FF81B6AF4CFD2C6EA839,2ED4D7860687B2EF6F85FF81B6AF4CFD2C6EA839," Creating the flow - - - -1. Create a new flow and add a Data Asset node that points to catalog_seasfac.csv. -2. Connect a Type node to the Data Asset node and double-click it to open its properties. -3. Click Read Values. For the men field, set the role to Target. - -Figure 1. Specifying the target field - -![Specifying the target field](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_fields.png) -4. Set the role for all other fields to None and click Save. -5. Attach a Time Plot graph node to the Type node and double-click it. - -Figure 2. Plotting the time series - -![Plotting the time series](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_plot.png) -6. For the Plot, add the field men to the Series list. -7. Select Use custom x axis field label and select date. -" -7394B97DA7B0846274940F439675051521A7DD7C_0,7394B97DA7B0846274940F439675051521A7DD7C," Exponential smoothing - -Building a best-fit exponential smoothing model involves determining the model type (whether the model needs to include trend, seasonality, or both) and then obtaining the best-fit parameters for the chosen model. - -The plot of men's clothing sales over time suggested a model with both a linear trend component and a multiplicative seasonality component. This implies a Winters' model. First, however, we will explore a simple model (no trend and no seasonality) and then a Holt's model (incorporates linear trend but no seasonality). This will give you practice in identifying when a model is not a good fit to the data, an essential skill in successful model building. - -We'll start with a simple exponential smoothing model. - - - -1. Add a Time Series node and attach it to the Type node. Double-click the node to edit its properties. -2. Under OBSERVATIONS AND TIME INTERVAL, select date as the time/date field. -3. Select Months as the time interval. - -Figure 1. Setting the time interval - -![Setting the time interval](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_timedate.png) -4. Under BUILD OPTIONS - GENERAL, select Exponential Smoothing for the Method. -5. Set Model Type to Simple. Click Save. - -Figure 2. Setting the method - -![Setting the method](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_smoothing.png) -6. Run the flow to create the model nugget. -7. Attach a Time Plot node to the model nugget. -8. Under Plot, add the fields men and $TS-men to the Series list. -9. Select the option Use custom x axis field label and select the date field. -10. Deselect the Display series in separate panel and Normalize options. Click Save. - -Figure 3. Setting the plot options - -![Setting the plot options](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_smoothing_plot.png) -" -7394B97DA7B0846274940F439675051521A7DD7C_1,7394B97DA7B0846274940F439675051521A7DD7C,"11. Run the flow and then open the output.The men plot represents the actual data, while $TS-men denotes the time series model. - -Figure 4. Simple exponential smoothing model - -![Simple exponential smoothing model](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_smoothing_chart.png) - -Although the simple model does, in fact, exhibit gradual (and rather ponderous) upward trend, it takes no account of seasonality. You can safely reject this model. - -Now let's try a Holt's linear model. This should at least model the trend better than the simple model, although it too is unlikely to capture the seasonality. -12. Double-click the Time Series node. Under BUILD OPTIONS - GENERAL, with Exponential Smoothing still selected as the method, select HoltsLinearTrend as the model type. -13. Click Save and run the flow again to regenerate the model nugget. Open the output. - -Figure 5. Holt's linear trend model - -![Holt's linear trend model](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_smoothing_holtchart.png) - -Holt's model displays a smoother upward trend than the simple model, but it still takes no account of the seasonality, so you can disregard this one too. - -You may recall that the initial plot of men's clothing sales over time suggested a model incorporating a linear trend and multiplicative seasonality. A more suitable candidate, therefore, might be Winters' model. -14. Double-click the Time Series node again to edit its properties. -15. Under BUILD OPTIONS - GENERAL, with Exponential Smoothing still selected as the method, select WintersMultiplicative as the model type. -16. Run the flow. - -Figure 6. Winters' multiplicative model - -![Winters' multiplicative model](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_smoothing_winterschart.png) - -" -7394B97DA7B0846274940F439675051521A7DD7C_2,7394B97DA7B0846274940F439675051521A7DD7C,"This looks better. The model reflects both the trend and the seasonality of the data. The dataset covers a period of 10 years and includes 10 seasonal peaks occurring in December of each year. The 10 peaks present in the predicted results match up well with the 10 annual peaks in the real data. - -However, the results also underscore the limitations of the Exponential Smoothing procedure. Looking at both the upward and downward spikes, there is significant structure that's not accounted for. - -If you're primarily interested in modeling a long-term trend with seasonal variation, then exponential smoothing may be a good choice. To model a more complex structure such as this one, we need to consider using the ARIMA procedure. -" -5AE2F0D8BD974C7393BC5FFA773B90FD0A2229B0,5AE2F0D8BD974C7393BC5FFA773B90FD0A2229B0," Summary - -You've successfully modeled a complex time series, incorporating not only an upward trend but also seasonal and other variations. You've also seen how, through trial and error, you can get closer and closer to an accurate model, which you can then use to forecast future sales. - -In practice, you would need to reapply the model as your actual sales data are updated—for example, every month or every quarter—and produce updated forecasts. -" -02244F39BE9A15FA55C94C9F2775606247969A61_0,02244F39BE9A15FA55C94C9F2775606247969A61," Introduction to modeling - -A model is a set of rules, formulas, or equations that can be used to predict an outcome based on a set of input fields or variables. For example, a financial institution might use a model to predict whether loan applicants are likely to be good or bad risks, based on information that is already known about past applicants. - -Video disclaimer: Some minor steps and graphical elements in these videos might differ from your platform. - -[https://video.ibm.com/embed/recorded/131116287](https://video.ibm.com/embed/recorded/131116287) - -The ability to predict an outcome is the central goal of predictive analytics, and understanding the modeling process is the key to using flows in Watson Studio. - -Figure 1. A decision tree model - -![A decision tree model](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/spss-tree-diagram-Jun2023.png) - -This example uses a decision tree model, which classifies records (and predicts a response) using a series of decision rules. For example: - -IF income = Medium -AND cards <5 -THEN -> 'Good' - -While this example uses a CHAID (Chi-squared Automatic Interaction Detection) model, it is intended as a general introduction, and most of the concepts apply broadly to other modeling types in Watson Studio. - -To understand any model, you first need to understand the data that goes into it. The data in this example contains information about the customers of a bank. The following fields are used: - - - - Field name Description - - Credit_rating Credit rating: 0=Bad, 1=Good, 9=missing values - Age Age in years - Income Income level: 1=Low, 2=Medium, 3=High - Credit_cards Number of credit cards held: 1=Less than five, 2=Five or more - Education Level of education: 1=High school, 2=College - Car_loans Number of car loans taken out: 1=None or one, 2=More than two - - - -" -02244F39BE9A15FA55C94C9F2775606247969A61_1,02244F39BE9A15FA55C94C9F2775606247969A61,"The bank maintains a database of historical information on customers who have taken out loans with the bank, including whether or not they repaid the loans (Credit rating = Good) or defaulted (Credit rating = Bad). Using this existing data, the bank wants to build a model that will enable them to predict how likely future loan applicants are to default on the loan. - -Using a decision tree model, you can analyze the characteristics of the two groups of customers and predict the likelihood of loan defaults. - -This example uses the flow named Introduction to Modeling, available in the example project . The data file is tree_credit.csv. - -Let's take a look at the flow. - - - -" -A3022FF9DB2732F0AB3091884B428763D3879FD2_0,A3022FF9DB2732F0AB3091884B428763D3879FD2," Building the flow - -Figure 1. Modeling flow - -![Modeling flow](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_build_flow.png) - -To build a flow that will create a model, we need at least three elements: - - - -* A Data Asset node that reads in data from an external source, in this case a .csv data file -* An Import or Type node that specifies field properties, such as measurement level (the type of data that the field contains), and the role of each field as a target or input in modeling -* A modeling node that generates a model nugget when the flow runs - - - -In this example, we're using a CHAID modeling node. CHAID, or Chi-squared Automatic Interaction Detection, is a classification method that builds decision trees by using a particular type of statistics known as chi-square statistics to work out the best places to make the splits in the decision tree. - -If measurement levels are specified in the source node, the separate Type node can be eliminated. Functionally, the result is the same. - -This flow also has Table and Analysis nodes that will be used to view the scoring results after the model nugget has been created and added to the flow. - -The Data Asset import node reads data in from the sample tree_credit.csv data file. - -The Type node specifies the measurement level for each field. The measurement level is a category that indicates the type of data in the field. Our source data file uses three different measurement levels: - -A Continuous field (such as the Age field) contains continuous numeric values, while a Nominal field (such as the Credit rating field) has two or more distinct values, for example Bad, Good, or No credit history. An Ordinal field (such as the Income level field) describes data with multiple distinct values that have an inherent order—in this case Low, Medium and High. - -Figure 2. Setting the target and input fields with the Type node - -" -A3022FF9DB2732F0AB3091884B428763D3879FD2_1,A3022FF9DB2732F0AB3091884B428763D3879FD2,"![Setting the target and input fields with the Type node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/intro-build.jpg) - -For each field, the Type node also specifies a role to indicate the part that each field plays in modeling. The role is set to Target for the field Credit rating, which is the field that indicates whether or not a given customer defaulted on the loan. This is the target, or the field for which we want to predict the value. - -Role is set to Input for the other fields. Input fields are sometimes known as predictors, or fields whose values are used by the modeling algorithm to predict the value of the target field. - -The CHAID modeling node generates the model. In the node's properties, under FIELDS, the option Use custom field roles is available. We could select this option and change the field roles, but for this example we'll use the default targets and inputs as specified in the Type node. - - - -1. Double-click the CHAID node (named Creditrating). The node properties are displayed. - -Figure 3. CHAID modeling node properties - -![CHAID modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/spss-fields.png) - -Here there are several options where we could specify the kind of model we want to build. - -We want a brand-new model, so under OBJECTIVES we'll use the default option Build new model. - -We also just want a single, standard decision tree model without any enhancements, so we'll also use the default objective option Create a standard model. - -Figure 4. CHAID modeling node objectives - -![CHAID modeling node objectives](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/spss-objectives.png) - -For this example, we want to keep the tree fairly simple, so we'll limit the tree growth by raising the minimum number of cases for parent and child nodes. -2. Under STOPPING RULES, select Use absolute value. -3. Set Minimum records in parent branch to 400. -" -A3022FF9DB2732F0AB3091884B428763D3879FD2_2,A3022FF9DB2732F0AB3091884B428763D3879FD2,"4. Set Minimum records in child branch to 200. - - - -Figure 5. Setting the stopping criteria for decision tree building - -![Setting the stopping criteria for decision tree building](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_stopping.png) - -We can use all the other default options for this example, so click Save and then click the Run button on the toolbar to create the model. (Alternatively, right-click the CHAID node and choose Run from the context menu.) -" -9DEAC0E5B403BAEDEABE9C76A295651289E6416C_0,9DEAC0E5B403BAEDEABE9C76A295651289E6416C," Evaluating the model - -We've been browsing the model to understand how scoring works. But to evaluate how accurately it works, we need to score some records and compare the responses predicted by the model to the actual results. We're going to score the same records that were used to estimate the model, allowing us to compare the observed and predicted responses. - -Figure 1. Attaching the model nugget to output nodes for model evaluation - -![Attaching the model nugget to output nodes for model evaluation](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_attach.png) - - - -1. To see the scores or predictions, attach the Table node to the model nugget and then right-click the Table node and select Run. A table will be generated and added to the Outputs panel. Double-click it to open it. - -The table displays the predicted scores in a field named $R-Credit rating, which was created by the model. We can compare these values to the original Credit rating field that contains the actual responses. - -By convention, the names of the fields generated during scoring are based on the target field, but with a standard prefix. Prefixes $G and $GE are generated by the Generalized Linear Model, $R is the prefix used for the prediction generated by the CHAID model in this case, $RC is for confidence values, $X is typically generated by using an ensemble, and $XR, $XS, and $XF are used as prefixes in cases where the target field is a Continuous, Categorical, Set, or Flag field, respectively. Different model types use different sets of prefixes. A confidence value is the model's own estimation, on a scale from 0.0 to 1.0, of how accurate each predicted value is. - -Figure 2. Table showing generated scores and confidence values - -![Table showing generated scores and confidence values](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/spss-eval-table.png) - -" -9DEAC0E5B403BAEDEABE9C76A295651289E6416C_1,9DEAC0E5B403BAEDEABE9C76A295651289E6416C,"As expected, the predicted value matches the actual responses for many records but not all. The reason for this is that each CHAID terminal node has a mix of responses. The prediction matches the most common one, but will be wrong for all the others in that node. (Recall the 18% minority of low-income customers who did not default.) - -To avoid this, we could continue splitting the tree into smaller and smaller branches, until every node was 100% pure—all Good or Bad with no mixed responses. But such a model would be extremely complicated and would probably not generalize well to other datasets. - -To find out exactly how many predictions are correct, we could read through the table and tally the number of records where the value of the predicted field $R-Credit rating matches the value of Credit rating. Fortunately, there's a much easier way; we can use an Analysis node, which does this automatically. -2. Connect the model nugget to the Analysis node. -3. Right-click the Analysis node and select Run. An Analysis entry will be added to the Outputs panel. Double-click it to open it. - - - -Figure 3. Attaching an Analysis node - -![Attaching an Analysis node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_attach.png) - -The analysis shows that for 1960 out of 2464 records—over 79%—the value predicted by the model matched the actual response. - -Figure 4. Analysis results comparing observed and predicted responses - -![Analysis results comparing observed and predicted responses](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_analysis.png) - -" -9DEAC0E5B403BAEDEABE9C76A295651289E6416C_2,9DEAC0E5B403BAEDEABE9C76A295651289E6416C,"This result is limited by the fact that the records being scored are the same ones used to estimate the model. In a real situation, you could use a Partition node to split the data into separate samples for training and evaluation. By using one sample partition to generate the model and another sample to test it, you can get a much better indication of how well it will generalize to other datasets. - -The Analysis node allows us to test the model against records for which we already know the actual result. The next stage illustrates how we can use the model to score records for which we don't know the outcome. For example, this might include people who are not currently customers of the bank, but who are prospective targets for a promotional mailing. -" -A62A258BB486FBE7E7FC91C611DC2BC400E32308_0,A62A258BB486FBE7E7FC91C611DC2BC400E32308," Browsing the model - -After running a flow, an orange model nugget is added to the canvas with a link to the modeling node from which it was created. To view the model details, right-click the model nugget and choose View Model. - -Figure 1. Model nugget - -![Model nugget](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_nugget.png) - -In the case of the CHAID nugget, the CHAID Tree Model screen includes pages for Model Information, Feature Importance, Top Decision Rules, Tree Diagram, Build Settings, and Training Summary. For example, you can see details in the form of a rule set—essentially a series of rules that can be used to assign individual records to child nodes based on the values of different input fields. - -Figure 2. CHAID model nugget, rule set - -![CHAID model nugget, rule set](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_rules.png) - -For each decision tree terminal node – meaning those tree nodes that are not split further—a prediction of Good or Bad is returned. In each case, the prediction is determined by the mode, or most common response, for records that fall within that node. - -The Feature Importance chart shows the relative importance of each predictor in estimating the model. From this, we can see that Income level is easily the most significant in this case, with Number of credit cards being the next most significant factor. - -Figure 3. Feature Importance chart - -![Feature Importance chart](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/feature-importance.jpg) - -The Tree Diagram page displays the same model in the form of a tree, with a node at each decision point. Hover over branches and nodes to explore details. - -Figure 4. Tree diagram in the model nugget - -![Tree diagram in the model nugget](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tree-diagram.jpg) - -" -A62A258BB486FBE7E7FC91C611DC2BC400E32308_1,A62A258BB486FBE7E7FC91C611DC2BC400E32308,"Looking at the start of the tree, the first node (node 0) gives us a summary for all the records in the data set. Just over 40% of the cases in the data set are classified as a bad risk. This is quite a high proportion, so let's see if the tree can give us any clues as to what factors might be responsible. - -We can see that the first split is by Income level. Records where the income level is in the Low category are assigned to node 2, and it's no surprise to see that this category contains the highest percentage of loan defaulters. Clearly, lending to customers in this category carries a high risk. However, almost 18% of the customers in this category actually didn’t default, so the prediction won't always be correct. No model can feasibly predict every response, but a good model should allow us to predict the most likely response for each record based on the available data. - -In the same way, if we look at the high income customers (node 1), we see that the vast majority (over 88%) are a good risk. But more than 1 in 10 of these customers has also defaulted. Can we refine our lending criteria to minimize the risk here? - -Notice how the model has divided these customers into two sub-categories (nodes 4 and 5), based on the number of credit cards held. For high-income customers, if we lend only to those with fewer than five credit cards, we can increase our success rate from 88% to almost 97%—an even more satisfactory outcome. - -Figure 5. High-income customers with fewer than five credit cards - -![High-income customers with fewer than five credit cards](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_node5.png) - -But what about those customers in the Medium income category (node 3)? They’re much more evenly divided between Good and Bad ratings. Again, the sub-categories (nodes 6 and 7 in this case) can help us. This time, lending only to those medium-income customers with fewer than five credit cards increases the percentage of Good ratings from 58% to 86%, a significant improvement. - -Figure 6. Tree view of medium-income customers - -" -A62A258BB486FBE7E7FC91C611DC2BC400E32308_2,A62A258BB486FBE7E7FC91C611DC2BC400E32308,"![Tree view of medium-income customers](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_node7.png) - -So, we’ve learned that every record that is input to this model will be assigned to a specific node, and assigned a prediction of Good or Bad based on the most common response for that node. This process of assigning predictions to individual records is known as scoring. By scoring the same records used to estimate the model, we can evaluate how accurately it performs on the training data—the data for which we know the outcome. Let's examine how to do this. -" -3CF77633A489E42B01086588D6613D65BFD51F7F,3CF77633A489E42B01086588D6613D65BFD51F7F," Scoring records - -Earlier, we scored the same records used to estimate the model so we could evaluate how accurate the model was. Now we'll score a different set of records from the ones used to create the model. This is the goal of modeling with a target field: Study records for which you know the outcome, to identify patterns that will allow you to predict outcomes you don't yet know. - -Figure 1. Attaching new data for scoring - -![Attaching new data for scoring](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_score.png) - -You could update the data asset Import node to point to a different data file, or you could add a new Import node that reads in the data you want to score. Either way, the new dataset must contain the same input fields used by the model (Age, Income level, Education and so on), but not the target field Credit rating. - -Alternatively, you could add the model nugget to any flow that includes the expected input fields. Whether read from a file or a database, the source type doesn't matter as long as the field names and types match those used by the model. -" -F140F179614D126E483732933A5CA8DCF0A32876,F140F179614D126E483732933A5CA8DCF0A32876," Summary - -This example Introduction to Modeling flow demonstrates the basic steps for creating, evaluating, and scoring a model. - - - -* The modeling node estimates the model by studying records for which the outcome is known, and creates a model nugget. This is sometimes referred to as training the model. -* The model nugget can be added to any flow with the expected fields to score records. By scoring the records for which you already know the outcome (such as existing customers), you can evaluate how well it performs. -" -2828FD5943ABBA08AA260F1080B850C90FC4EFBE,2828FD5943ABBA08AA260F1080B850C90FC4EFBE," Reducing input data string length - -For binomial logistic regression, and auto classifier models that include a binomial logistic regression model, string fields are limited to a maximum of eight characters. Where strings are more than eight characters, you can recode them using a Reclassify node. - -This example uses the flow named Reducing Input Data String Length, available in the example project . The data file is drug_long_name.csv. - -This example focuses on a small part of a flow to show the type of errors that may be generated with overlong strings, and explains how to use the Reclassify node to change the string details to an acceptable length. Although the example uses a binomial Logistic Regression node, it is equally applicable when using the Auto Classifier node to generate a binomial Logistic Regression model. -" -85381B4DF6F42B35CA5097709523038ABDCDC555_0,85381B4DF6F42B35CA5097709523038ABDCDC555," Reclassifying the data - -Figure 1. Example flow showing string reclassification for binomial logistic regression - -![Example flow showing string reclassification for binomial logistic regression](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_reducing.png) - - - -1. Add a Data Asset node that points to drug_long_name.csv. -2. Add a Type node after the Data Asset node. Double-click the Type node to open its properties, and select Cholesterol_long as the target. -3. Add a Logistic Regression node after the Type node. Double-click the node and select the Binomial procedure (instead of the default Multinomial procedure). -4. Right-click the Logistic Regression node and run it. An error message warns you that the Cholesterol_long string values are too long. When you encounter this type of message, follow the procedure described in the rest of this example to modify your data. - -Figure 2. Error message displayed when running the binomial logistic regression node - -![Error message displayed when running the binomial logistic regression node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_reducing_error.png) -5. Add a Reclassify node after the Type node and double-click it to open its properties. -6. For the Reclassify Field, select Cholesterol_long and type Cholesterol for the new field name. -7. Click Get values to add the Cholesterol_long values to the original value column. -8. In the new value column, type High next to the original value of High level of cholesterol and Normal next to the original value of Normal level of cholesterol. - -Figure 3. Reclassifying long strings - -![Reclassifying long strings](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_reducing_reclassify.png) -9. Add a Filter node after the Reclassify node. Double-click the node, choose Filter the selected fields, and select the Cholesterol_long field. - -Figure 4. Filtering the ""Cholesterol_long"" field from the data - -" -85381B4DF6F42B35CA5097709523038ABDCDC555_1,85381B4DF6F42B35CA5097709523038ABDCDC555,"![Filtering the ""Cholesterol_long"" field from the data](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_reducing_filter.png) -10. Add a Type node after the Filter node. Double-click the node and select Cholesterol as the target. - -Figure 5. Short string details in the ""Cholesterol"" field - -![Short string details in the ""Cholesterol"" field](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_reducing_type.png) -11. Add a Logistic node after the Type node. Double-click the node and select the Binomial procedure. - - - -You can now run the binomial Logistic node and generate a model without encountering the error as you did before. - -This example only shows part of a flow. For more information about the types of flows in which you might need to reclassify long strings, see the following example: - - - -* Auto Classifier node. See [Automated modeling for a flag target](https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_autoflag.html). -" -75891659AB1DF929D219741C3F2D69384A01835C,75891659AB1DF929D219741C3F2D69384A01835C," Retail sales promotion - -This example deals with fictitious data that describes retail product lines and the effects of promotion on sales. - -Your goal in this example is to predict the effects of future sales promotions. Similar to the [condition monitoring example](https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_condition.html), the data mining process consists of the exploration, data preparation, training, and test phases. - -This example uses the flow named Retail Sales Promotion, available in the example project . The data files are goods1n.csv and goods2n.csv. - - - -" -6F55360D336A77A06F2C4235B286A869CFF0986C,6F55360D336A77A06F2C4235B286A869CFF0986C," Examining the data - -Each record contains: - - - -* Class. Product type. -* Cost. Unit price. -* Promotion. Index of amount spent on a particular promotion. -* Before. Revenue before promotion. -* After. Revenue after promotion. - - - -The flow is simple. It displays the data in a table. The two revenue fields (Before and After) are expressed in absolute terms. However, it seems likely that the increase in revenue after the promotion (and presumably as a result of it) would be a more useful figure. - -Figure 1. Effects of promotion on product sales - -![Effects of promotion on product sales](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_retail_data_effects.png) - -The flow also contains a node to derive this value, expressed as a percentage of the revenue before the promotion, in a field called Increase. A table shows this field. - -Figure 2. Increase in revenue after promotion - -![Increase in revenue after promotion](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_retail_data_increase.png) - -For each class of product, and almost linear relationship exists between the increase in revenue and the cost of the promotion. Therefore, it seems likely that a decision tree or neural network could predict, with reasonable accuracy, the increase in revenue from the other available fields. -" -1399CD9C09634E30C0F099C0FAE66A756153DAB1,1399CD9C09634E30C0F099C0FAE66A756153DAB1," Learning and testing - -The flow trains a neural network and a decision tree to make this prediction of revenue increase. - -Figure 1. Retail Sales Promotion example flow - -![Retail Sales Promotion example flow](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_retail.png) - -After you run the flow to generate the model nuggets, you can test the results of the learning process. You do this by connecting the decision tree and network in series between the Type node and a new Analysis node, changing the Data Asset import node to point to goods2n.csv, and running the Analysis node. From the output of this node, in particular from the linear correlation between the predicted increase and the correct answer, you will find that the trained systems predict the increase in revenue with a high degree of success. - -Further exploration might focus on the cases where the trained systems make relatively large errors. These could be identified by plotting the predicted increase in revenue against the actual increase. Outliers on this graph could be selected using the interactive graphics within SPSS Modeler, and from their properties, it might be possible to tune the data description or learning process to improve accuracy. -" -420946CA7E893CC5A2B3D1A8F47A7A2C7059D7F6,420946CA7E893CC5A2B3D1A8F47A7A2C7059D7F6," Screening predictors - -The Feature Selection node helps you identify the fields that are most important in predicting a certain outcome. From a set of hundreds or even thousands of predictors, the Feature Selection node screens, ranks, and selects the predictors that may be most important. Ultimately, you may end up with a quicker, more efficient model—one that uses fewer predictors, runs more quickly, and may be easier to understand. - -The data used in this example represents a data warehouse for a hypothetical telephone company and contains information about responses to a special promotion by 5,000 of the company's customers. The data includes many fields that contain customers' age, employment, income, and telephone usage statistics. Three ""target"" fields show whether or not the customer responded to each of three offers. The company wants to use this data to help predict which customers are most likely to respond to similar offers in the future. - -This example uses the flow named Screening Predictors, available in the example project . The data file is customer_dbase.csv. - -This example focuses on only one of the offers as a target. It uses the CHAID tree-building node to develop a model to describe which customers are most likely to respond to the promotion. It contrasts two approaches: - - - -* Without feature selection. All predictor fields in the dataset are used as inputs to the CHAID tree. -* With feature selection. The Feature Selection node is used to select the best 10 predictors. These are then input into the CHAID tree. - - - -By comparing the two resulting tree models, we can see how feature selection can produce effective results. -" -5A328CF6319859F041C48974E44046BCFCEA3B87,5A328CF6319859F041C48974E44046BCFCEA3B87," Building the flow - -Figure 1. Feature Selection example flow - -![Feature Selection example flow](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_screening.png) - - - -1. Add a Data Asset node that points to customer_dbase.csv. -2. Add a Type node after the Data Asset node. -3. Double-click the Type node to open its properties, and change the role for response_01 to Target. Change the role to None for the other response fields (response_02 and response_03) and for the customer ID (custid) field. Leave the role set to Input for all other fields. - -Figure 2. Adding a Type node - -![Adding a Type node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_screening_target.png) -4. Click Read Values and then click Save. -5. Add a Feature Selection modeling node after the Type node. In the node properties, the rules and criteria used for screening or disqualifying fields are defined. - -Figure 3. Adding a Feature Selection node - -![Adding a Feature Selection node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_screening_criteria.png) -6. Run the flow to generate the Feature Selection model nugget. -7. To look at the results, right-click the model nugget and choose View Model. The results show the fields found to be useful in the prediction, ranked by importance. By examining these fields, you can decide which ones to use in subsequent modeling sessions. -" -9B120FF1F8482EB617E16738D5160C966C6EDF3D,9B120FF1F8482EB617E16738D5160C966C6EDF3D," Building the models - - - -1. Run the CHAID node that uses all the predictors in the dataset (the one connected to the Type node). As it runs, notice how long it takes to finish. -2. Right-click the generated model nugget, select View Model, and look at the tree diagram. -3. Now run the other CHAID model, which uses less predictors. Again, look at its tree diagram. - -It might be hard to tell, but the second model ran faster than the first one. Because this dataset is relatively small, the difference in run times is probably only a few seconds; but for larger real-world datasets, the difference might be very noticeable—minutes or even hours. Using feature selection may speed up your processing times dramatically. - -The second tree also contains fewer tree nodes than the first. It's easier to comprehend. Using fewer predictors is less expensive. It means that you have less data to collect, process, and feed into your models. Computing time is improved. In this example, even with the extra feature selection step, model building was faster with the smaller set of predictors. With a larger real-world dataset, the time savings should be greatly amplified. - -Using fewer predictors results in simpler scoring. For example, you might identify only four profiles of customers who are likely to respond to the promotion. Note that with larger numbers of predictors, you run the risk of overfitting your model. The simpler model may generalize better to other datasets (although you would need to test this to be sure). - -You could instead use a tree-building algorithm to do the feature selection work, allowing the tree to identify the most important predictors for you. In fact, the CHAID algorithm is often used for this purpose, and it's even possible to grow the tree level-by-level to control its depth and complexity. However, the Feature Selection node is faster and easier to use. It ranks all of the predictors in one fast step, allowing you to identify the most important fields quickly. -" -C41C78F27BB2F48542141EA85EDA7AD333E3FD0B,C41C78F27BB2F48542141EA85EDA7AD333E3FD0B," Making offers to customers (self-learning) - -The Self-Learning Response Model (SLRM) node generates and enables the updating of a model that allows you to predict which offers are most appropriate for customers and the probability of the offers being accepted. These sorts of models are most beneficial in customer relationship management, such as marketing applications or call centers. - -This example is based on a fictional banking company. The marketing department wants to achieve more profitable results in future campaigns by matching the appropriate offer of financial services to each customer. Specifically, the example uses a Self-Learning Response model to identify the characteristics of customers who are most likely to respond favorably based on previous offers and responses and to promote the best current offer based on the results. - -This example uses the flow named Making Offers to Customers - Self-Learning, available in the example project . The data files are pm_customer_train1.csv, pm_customer_train2.csv, and pm_customer_train3.csv. - - - -" -7CEF749C4ED4703D00346FCDEF795D0431BC7C26_0,7CEF749C4ED4703D00346FCDEF795D0431BC7C26," Building the flow - - - -1. Add a Data Asset node that points to pm_customer_train1.csv. - -Figure 1. SLRM example flow - -![SLRM example flow](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_selflearn_slrm.png) -2. Attach a Filler node to the Data Asset node. Double-click the node to open its properties and, under Fill in fields, select campaign. -3. Select a Replace type of Always. -4. In the Replace with text box, enter to_string(campaign) and click Save. - -Figure 2. Derive a campaign field - -![Derive a campaign field](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_selflearn_derive.png) -5. Add a Type node and set the Role to None for the following fields: - - - -* customer_id -* response_date -* purchase_date -* product_id -* Rowid -* X_random - - - -6. Set the Role to Target for the campaign and response fields. These are the fields on which you want to base your predictions. Set the Measurement to Flag for the response field. -7. Click Read Values then click Save. Because the campaign field data shows as a list of numbers (1, 2, 3, and 4), you can reclassify the fields to have more meaningful titles. -8. Add a Reclassify node after the Type node and open its properties. -9. Under Reclassify Into, select Existing field. -10. Under Reclassify Field, select campaign. -11. Click Get values. The campaign values are added to the ORIGINAL VALUE column. -12. In the NEW VALUE column, enter the following campaign names in the first four rows: - - - -* Mortgage -* Car loan -* Savings -* Pension - - - -13. Click Save. - -Figure 3. Reclassify the campaign names - -![Reclassify the campaign names](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_selflearn_reclassify.png) -" -7CEF749C4ED4703D00346FCDEF795D0431BC7C26_1,7CEF749C4ED4703D00346FCDEF795D0431BC7C26,"14. Attach an SLRM modeling node to the Reclassify node. Select campaign for the Target field, and response for the Target response field. -" -AA1C79D72D6A7D37CE1E72735B6DCFCA2B546DCE_0,AA1C79D72D6A7D37CE1E72735B6DCFCA2B546DCE," Browsing the model - - - -1. Right-click the model nugget and select View Model. The initial view shows the estimated accuracy of the predictions for each offer. You can also click Predictor Importance to see the relative importance of each predictor in estimating the model, or click Association With Response to show the correlation of each predictor with the target variable. -2. To switch between each of the four offers for which there are prediction, use the View drop-down. - -Figure 1. SLRM model nugget - -![SLRM model nugget](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_selflearn_nugget.png) -3. Return to the flow. -4. Disconnect the Data Asset node that points to pm_customer_train1.csv. -5. Add a new Data Asset node that points to pm_customer_train2.csv and connect it to the Filler node. -6. Double-click the SLRM node and select Continue training existing model (under BUILD OPTIONS). Click Save. -7. Run the flow to regenerate the model nugget. Then right-click it and select View Model. The model now shows the revised estimates of accuracy of the predictions for each offer. -8. Add a new Data Asset node that points to pm_customer_train3.csv and connect it to the Filler node -9. Run the flow again, then right-click the model nugget and select View Model. - -The model now shows the final estimated accuracy of the predictions for each offer. As you can see, the average accuracy fell slightly as you added the additional data sources. However, this fluctuation is a minimal amount and may be attributed to slight anomalies within the available data. -" -AA1C79D72D6A7D37CE1E72735B6DCFCA2B546DCE_1,AA1C79D72D6A7D37CE1E72735B6DCFCA2B546DCE,"10. Attach a Table node to the generated model nugget, then right-click the Table node and run it. In the Outputs pane, open the table output that was just generated.The predictions in the table show which offers a customer is most likely to accept and the confidence that they'll accept, depending on each customer's details. For example, in the first row, there's only a 13.2% confidence rating (denoted by the value 0.132 in the $SC-campaign-1 column) that a customer who previously took out a car loan will accept a pension if offered one. However, the second and third lines show two more customers who also took out a car loan; in their cases, there is a 95.7% confidence that they, and other customers with similar histories, would open a savings account if offered one, and over 80% confidence that they would accept a pension. - -Figure 2. Model output - predicted offers and confidences - -![Model output - predicted offers and confidences](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_selflearn_table.png) - -Explanations of the mathematical foundations of the modeling methods used in SPSS Modeler are available in the [SPSS Modeler Algorithms Guide](http://public.dhe.ibm.com/software/analytics/spss/documentation/modeler/new/AlgorithmsGuide.pdf). - -Note that these results are based on the training data only. To assess how well the model generalizes to other data in the real world, you would use a Partition node to hold out a subset of records for purposes of testing and validation. -" -BBB6FC842A370135B8488D9A2E09FCF17341954B,BBB6FC842A370135B8488D9A2E09FCF17341954B," Hotel satisfaction example for Text Analytics - -SPSS Modeler offers nodes that are specialized for handling text. - -In this example, a hotel manager is interested in learning what customers think about the hotel. - -Figure 1. Chart of positive opinions - -![Chart of positive opinions](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_positive.png) - -Figure 2. Chart of negative opinions - -![Chart of negative opinions](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_negative.png) - -This example uses the flow named Hotel Satisfaction, available in the example project . The data files are hotelSatisfaction.csv and hotelSatisfaction.xlsx. The flow uses Text Analytics nodes to analyze fictional text data about hotel personnel, comfort, cleanliness, price, etc. - -This flow illustrates two ways of analyzing data with a Text Mining node and a Text Link Analysis node. It also illustrates how you can deploy a text model and score current or new data. - -Let's take a look at the flow. - - - -1. Open the . -2. Scroll down to the Modeler flows section and select the Hotel Satisfaction flow. - -Figure 3. Completed flow - -![Completed flow](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel.png) -" -1924AE74643C2D9D416204693C9BB84D5212E3B0_0,1924AE74643C2D9D416204693C9BB84D5212E3B0," Building and deploying the model - - - -1. When your model is ready, click Generate a model to generate a text nugget. - -Figure 1. Generate a new model - -![Generate a new model](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_build.png) - -Figure 2. Build a category model - -![Build a category model](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_buildcat.png) -2. If you want to save the Text Analytics Workbench session, instead click Return to flow and then Save and exit. - -Figure 3. Saving your session - -![Saving your session](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_build_save.png)The generated text nugget appears on your flow canvas. - -Figure 4. Generated text nugget - -![Generated text nugget](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_build_nugget.png)After the category model has been validated and generated in the Text Analytics Workbench, you can deploy it in your flow and score the same data set or score a new one. - -Figure 5. Example flow with two modes for scoring - -![Example flow with two modes for scoring](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_build_ex.png)This example flow illustrates the two modes for scoring: - - - -* Categories as fields. With this option, there are just as many output records as there were in the input. However, each record now contains one new field for every category that was selected on the Model tab. For each field, enter a flag value for true and for false, such as True/False, or 1/0. In this flow, values are set to 1 and 0 to aggregate results and count the number of positive, negative, mixed (both positive and negative), or no score (no opinion) answers. - -Figure 6. Model results - categories as fields - -" -1924AE74643C2D9D416204693C9BB84D5212E3B0_1,1924AE74643C2D9D416204693C9BB84D5212E3B0,"![Model results - categories as fields](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_build_excats.png) -* Categories as records. With this option, a new record is created for each category, document pair. Typically, there are more records in the output than there were in the input. Along with the input fields, new fields are also added to the data depending on what kind of model it is. - -Figure 7. Model results - categories as records - -![Model results - categories as records](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_build_exrecs.png) - - - -3. You can add a Select node after the DeriveSentiment SuperNode, include Sentiments=Pos, and add a Charts node to gain quick insight about what guests appreciate about the hotel: - -Figure 8. Chart of positive opinions - -![Chart of positive opinions](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_positive.png) -" -5E4D2166BB8C2B95E515591E014E7CA00B87BCA2,5E4D2166BB8C2B95E515591E014E7CA00B87BCA2," Using the Text Analytics Workbench - -The Text Analytics Workbench contains the extraction results and the category model contained in the text analytics package. -" -F161A94239C1DC6696DBB583EC46BC64F3AA8906,F161A94239C1DC6696DBB583EC46BC64F3AA8906," Text Link Analysis node - -In some cases, you may not need to create a category model to score. The Text Link Analysis (TLA) node adds a pattern-matching technology to text mining's concept extraction. This identifies relationships between the concepts in the text data based on known patterns. These relationships can describe how a customer feels about a product, which companies are doing business together, or even the relationships between genes or pharmaceutical agents. - -Figure 1. Text Link Analysis node - -![Text Link Analysis node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_tla.png) - - - -1. Add a Text Link Analysis node to your canvas and connect it to the Data Asset node that points to hotelSatisfaction.csv. Double-click the node to open its properties. -2. Select id for the ID field and Comments for the Text field. Note that only the Text field is required. -3. For Copy resources from, select the Hotel Satisfaction (English) template. - -Figure 2. Text Link Analysis node FIELD properties - -![Text Link Analysis node FIELD properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_tlaprops.png) -4. Under Expert, select Accommodate spelling for a minimum word character length of. - -Figure 3. Text Link Analysis node Expert properties - -![Text Link Analysis node Expert properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_tlaexpert.png)The resulting output is a table (or the result of an Export node). - -Figure 4. Raw TLA output - -![Raw TLA output](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_tlaraw.png) - -Figure 5. Counting sentiments on a TLA node - -![Counting sentiments on a TLA node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_tlacount.png) -" -E7FAC7868F0D237EFBFCC625D8C265AEEBCA3E7D_0,E7FAC7868F0D237EFBFCC625D8C265AEEBCA3E7D," Text Mining node - -Figure 1. Text Mining node to analyze comments from hotel guests - -![Text Mining node to analyze comments from hotel guests](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_tm.png) - - - -1. Add a Data Asset node that points to hotelSatisfaction.csv. -2. From the Text Analytics category on the node palette, add a Text Mining node, connect it to the Data Asset node you added in the previous step, and double-click it to open its properties. -3. Under Fields, select Comments for the Text field and select id for the ID field. Note that only the Text field is required. - -Figure 2. Text Mining node properties - -![Text Mining node build properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_tm_props1.png) -4. Under Copy resources from, select Text analysis package, click Select Resources, and then load Hotel Satisfaction (English).tap (with Current category set(s) = Topic + Opinion).A text analysis package (TAP) is a predefined set of libraries and advanced linguistic and nonlinguistic resources bundled with one or more sets of predefined categories. If no text analysis package is relevant for your application, you can instead start by selecting Resource template under Copy resources from. A resource template is a predefined set of libraries and advanced linguistic and nonlinguistic resources that have been fine-tuned for a particular domain or usage. - -Figure 3. Text Mining node properties - -![Text Mining node build properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_tm_props2.png) -" -E7FAC7868F0D237EFBFCC625D8C265AEEBCA3E7D_1,E7FAC7868F0D237EFBFCC625D8C265AEEBCA3E7D,"5. Under Build models, make sure Build interactively (category model nugget) is selected. Later when you run the node, this option will launch an interactive interface (known as the Text Analytics Workbench) in which you can extract concepts and patterns, explore and fine-tune the extracted results, build and refine categories, and build category model nuggets. -6. Under Begin session by, select Extracting concepts and text links. The option Extracting concepts extracts only concepts, whereas TLA extraction outputs both concepts and text links that are connections between topics (service, personnel, food, etc.) and opinions. -7. Under Expert, select Accommodate spelling for a minimum word character length of. This option applies a fuzzy grouping technique that helps group commonly misspelled words or closely spelled words under one concept. The fuzzy grouping algorithm temporarily strips all vowels (except the first one) and strips double/triple consonants from extracted words and then compares them to see if they're the same (so, for example, location and locatoin are grouped together). - -Figure 4. Text Mining node properties - -" -ED7AFE85422B1DB8EAED166840D275DDDB63CAFA_0,ED7AFE85422B1DB8EAED166840D275DDDB63CAFA," Managing your account settings - -From the Account window you can view information about your IBM Cloud account and set the Resource scope, Credentials for connections, and Regional project storage settings for IBM watsonx. - - - -* [View account information](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html?context=cdpaas&locale=enview-account-information) -* [Set the scope for resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html?context=cdpaas&locale=enset-the-scope-for-resources) -* [Set the type of credentials for connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html?context=cdpaas&locale=enset-the-credentials-for-connections) -* [Set the login session expiration](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html?context=cdpaas&locale=enset-expiration) - - - -You must be the IBM Cloud account owner or administrator to manage the account settings. - -" -ED7AFE85422B1DB8EAED166840D275DDDB63CAFA_1,ED7AFE85422B1DB8EAED166840D275DDDB63CAFA," View account information - -You can see the account name, ID and type. - - - -1. Select Administration > Account and billing > Account to open the account window. -2. If you need to manage your Cloud account, click the Manage in IBM Cloud link to navigate to the Account page on IBM Cloud. - - - -" -ED7AFE85422B1DB8EAED166840D275DDDB63CAFA_2,ED7AFE85422B1DB8EAED166840D275DDDB63CAFA," Set the scope for resources - -By default, account users see resources based on membership. You can restrict the resource scope to the current account to control access. By setting the resource scope to the current account, users cannot access resources outside of their account, regardless of membership. The scope applies to projects, catalogs, and spaces. - -To restrict resources to current account: - - - -1. Select Administration > Account and billing > Account to open the account settings window. -2. Set Resource scope to On. Access is updated immediately to be restricted to the current account. - - - -" -ED7AFE85422B1DB8EAED166840D275DDDB63CAFA_3,ED7AFE85422B1DB8EAED166840D275DDDB63CAFA," Set the credentials for connections - -The credentials for connections setting determines the type of credentials users must specify when creating a new connection. This setting applies only when new connections are created; existing connections are not affected. - -" -ED7AFE85422B1DB8EAED166840D275DDDB63CAFA_4,ED7AFE85422B1DB8EAED166840D275DDDB63CAFA," Either personal or shared credentials - -You can allow users the ability to specify personal or shared credentials when creating a new connection. Radio buttons will appear on the new connection form, allowing the user to select personal or shared. - -To allow the credential type to be chosen on the new connection form: - - - -1. Select Administration > Account and billing > Account to open the account settings window. -2. Set both Shared credentials and Personal credentials to Enabled. - - - -" -ED7AFE85422B1DB8EAED166840D275DDDB63CAFA_5,ED7AFE85422B1DB8EAED166840D275DDDB63CAFA," Personal credentials - -When personal credentials are specified, each user enters their own credentials when creating a new connection or when using a connection to access data. - -To require personal credentials for all new connections: - - - -1. Select Administration > Account and billing > Account to open the account settings window. -2. Set Personal credentials to Enabled. -3. Set Shared credentials to Disabled. - - - -" -ED7AFE85422B1DB8EAED166840D275DDDB63CAFA_6,ED7AFE85422B1DB8EAED166840D275DDDB63CAFA," Shared credentials - -With shared credentials, the credentials that were entered by the creator of the connection are made available to all other users when accessing data with the connection. - -To require shared credentials for all new connections: - - - -1. Select Administration > Account and billing > Account to open the account settings window. -2. Set Shared credentials to Enabled. -3. Set Personal credentials to Disabled. - - - -" -ED7AFE85422B1DB8EAED166840D275DDDB63CAFA_7,ED7AFE85422B1DB8EAED166840D275DDDB63CAFA," Set the login session expiration - -Active and inactive session durations are managed through IBM Cloud. You are notified of a session expiration 5 minutes before the session expires. Unless your service supports autosaving, your work is not saved when your session expires. - -You can change the default durations for active and inactive sessions. For more information on required permissions and duration limits, see [Setting limits for login sessions](https://cloud.ibm.com/docs/account?topic=account-iam-work-sessions&interface=ui). - -To change the default durations: - - - -1. From the watsonx navigation menu, select Administration > Access (IAM). -2. In IBM Cloud, select Manage > Access (IAM) > Settings. -3. Select the Login session tab. -4. For each expiration time that you want to change, edit the time and click Save. - - - -The inactivity duration cannot be longer than the maximum session duration, and the token lifetime cannot be longer than the inactivity duration. IBM Cloud prevents you from inputing an invalid combination of settings. - -" -ED7AFE85422B1DB8EAED166840D275DDDB63CAFA_8,ED7AFE85422B1DB8EAED166840D275DDDB63CAFA," Learn more - - - -* [Managing all projects in the account](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-manage-projects.html) -* [Adding connections to projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) - - - -Parent topic:[Managing IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html) -" -88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68_0,88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68," Managing the user API key - -Certain operations in IBM watsonx require an API key for secure authorization. You can generate and rotate a user API key as needed to help ensure your operations run smoothly. - -" -88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68_1,88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68," User API key overview - -Operations running within services in IBM watsonx require credentials for secure authorization. These operations use an API key for authorization. A valid API key is required for many long-running tasks, including the following: - - - -* Model training in Watson Machine Learning -* Problem solving with Decision Optimization -* Data transformation with DataStage flows -* Other runtime services (for example, Data Refinery and Pipelines) that accept API key references - - - -Both scheduled and ad hoc jobs require an API key for authorization. An API key is used for jobs when: - - - -* Creating a job schedule with a predefined key -* Updating the API key for a scheduled job -* Providing an API key for an ad hoc job - - - -User API keys give control to the account owner to secure and renew credentials, thus helping to ensure operations run without interruption. Keys are unique to the IBMid and account. If you change the account you are working in, you must generate a new key. - -" -88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68_2,88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68," Active and Phased out keys - -When you create an API key, it is placed in Active state. The Active key is used for authorization for operations in IBM watsonx. - -When you rotate a key, a new key is created in Active state and the existing key is changed to Phased out state. A Phased out key is not used for authorization and can be deleted. - -" -88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68_3,88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68," Viewing the current API key - -Click your avatar and select Profile and settings to open your account profile. Select User API key to view the Active and Phased out keys. - -" -88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68_4,88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68," Creating an API key - -If you do not have an API key, you can create a key by clicking Create a key. - -A new key is created in Active state. The key automatically authorizes operations that require a secure credential. The key is stored in both IBM Cloud and IBM watsonx. You can view the API keys for your IBM Cloud account at [API keys](https://cloud.ibm.com/iam/apikeys). - -User API Keys take the form cpd-apikey-{username}-{timeStamp}, where username is the IBMid of the account owner and timestamp indicates when the key was created. - -" -88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68_5,88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68," Rotating an API key - -If the API key becomes stale or invalid, you can generate a new Active key for use by all operations. - -To rotate a key, click Rotate. - -A new key is created to replace the current key. The rotated key is placed in Phased out status. A Phased out key is not available for use. - -" -88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68_6,88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68," Deleting a phased out API key - -When you are certain the phased out key is no longer needed for operations, click the minus sign to delete it. Deleting keys might cause running operations to fail. - -" -88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68_7,88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68," Deleting all API keys - -Delete all keys (both Active and Phased out) by clicking the trash can. Deleting keys might cause running operations to fail. - -" -88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68_8,88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68," Learn more - - - -* [Creating and managing jobs in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html) -* [Adding task credentials](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/task-credentials.html) -* [Understanding API keys](https://cloud.ibm.com/docs/account?topic=account-manapikey&interface=ui) - - - -Parent topic:[Administering your accounts and services](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html) -" -A10DE0E026BA0CF397108621D5927E16436ACF58_0,A10DE0E026BA0CF397108621D5927E16436ACF58," Configuring App ID with your identity provider - -To use App ID for user authentication for IBM watsonx, you configure App ID as a service on IBM Cloud. You configure an identity provider (IdP) such as Azure Active Directory. You then configure App ID and the identity provider to communicate with each other to grant access to authorized users. - -To configure App ID and your identity provider to work together, follow these steps: - - - -* [Configure your identity provider to communicate with IBM Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html?context=cdpaas&locale=encfg_idp) -* [Configure App ID to communicate with your identify provider](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html?context=cdpaas&locale=encfg_appid) -* [Configure IAM to enable login through your identity provider](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html?context=cdpaas&locale=encfg_iam) - - - -" -A10DE0E026BA0CF397108621D5927E16436ACF58_1,A10DE0E026BA0CF397108621D5927E16436ACF58," Configuring your identity provider - -To configure your identity provider to communicate with IBM Cloud, you enter the entityID and Location into your SAML configuration for your identity provider. An overview of the steps for configuring Azure Active Directory is provided as an example. Refer to the documentation for your identity provider for detailed instructions for its platform. - -The prerequisites for configuring App ID with an identity provider are: - - - -* An IBM Cloud account -* An App ID instance -* An identity provider, for example, Azure Active Directory - - - -To configure your identity provider for SAML-based single sign-on: - -1. Download the SAML metadata file from App ID to find the values for entityID and Location. These values are entered into the identity provider configuration screen to establish communication with App ID on IBM Cloud. (The corresponding values from the identity provider, plus the primary certificate, are entered in App ID. See [Configuring App ID](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html?context=cdpaas&locale=encfg_appid)). - - - -* In App ID, choose Identity providers > SAML 2.0 federation. -* Download the appid-metadata.xml file. -* Find the values for entityID and Location. - - - -2. Copy the values for entityID and Location from the SAML metadata file and paste them into the corresponding fields on your identity provider. For Azure Active Directory, the fields are located in Section 1: Basic SAML Configuration in the Enterprise applications configuration screen. - - - - App ID value Active Directory field Example - - entityID Identifier (Entity ID) urn:ibm:cloud:services:appid:value - Location Reply URL (Assertion Consumer Service URL) https://us-south.appid.cloud.ibm.com/saml2/v1/value/login-acs - - - -3. In Section 2: Attributes & Claims for Azure Active Directory, you map the username parameter to user.mail to identify the users by their unique email address. IBM watsonx requires that you set username to the user.mail attribute. For other identity providers, a similar field that uniquely identifies users must be mapped to user.mail. - -" -A10DE0E026BA0CF397108621D5927E16436ACF58_2,A10DE0E026BA0CF397108621D5927E16436ACF58," Configuring App ID - -You establish communication between App ID and your identity provider by entering the SAML values from the identity provider into the corresponding App ID fields. An example is provided for configuring App ID to communicate with an Active Directory Enterprise Application. - -1. Choose Identity providers > SAML 2.0 federation and complete the Provide metadata from SAML IdP section. - -2. Download the Base64 certificate from Section 3: SAML Certificates in Active Directory (or your identity provider) and paste it into the Primary certificate field. - -3. Copy the values from Section 4: Set up your-enterprise-application in Active Directory into the corresponding fields in Provide metadata from SAML IdP in IBM App ID. - - - - App ID field Value from Active Directory - - Entity ID Azure AD Identifier - Sign in URL Login URL - Primary certificate Certificate (Base64) - - - -4. Click Test on the App ID page to test that App ID can connect to the identity provider. The happy face response indicates that App ID can communicate with the identity provider. - -![Successful test](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/images/appid_good_job.png) - -" -A10DE0E026BA0CF397108621D5927E16436ACF58_3,A10DE0E026BA0CF397108621D5927E16436ACF58," Configuring IAM - -You must assign the appropriate role to the users in IBM Cloud IAM and also configure your identity provider in IAM. Users require at least the Viewer role for All Identity and IAM enabled services. - -" -A10DE0E026BA0CF397108621D5927E16436ACF58_4,A10DE0E026BA0CF397108621D5927E16436ACF58," Create an identity provider reference in IBM Cloud IAM - -Create an identity provider reference to connect your external repository to your IBM Cloud account. - - - -1. Navigate to Manage > Access(IAM) > Identity providers. -2. For the type, choose IBM Cloud App ID. -3. Click Create. -4. Enter a name for the identity provider. -5. Select the App ID service instance. -6. Select how to on board users. Static adds users when they log in for the first time. -7. Enable the identity provider for logging in by checking the Enable for account login? box. -8. If you have more than one identity providers, set the identity provider as the default by checking the box. -9. Click Create. - - - -" -A10DE0E026BA0CF397108621D5927E16436ACF58_5,A10DE0E026BA0CF397108621D5927E16436ACF58," Change the App ID login alias - -A login alias is generated for App ID. Users enter the alias when logging on to IBM Cloud. You can change the default alias string to be easier to remember. - - - -1. Navigate to Manage > Access(IAM) > Identity providers. -2. Select IBM Cloud App ID as the type. -3. Edit the Default IdP URL to make it simpler. For example, https://cloud.ibm.com/authorize/540f5scc241a24a70513961 can be changed to https://cloud.ibm.com/authorize/my-company. Users log in with the alias my-company instead of 540f5scc241a24a70513961. - - - -" -A10DE0E026BA0CF397108621D5927E16436ACF58_6,A10DE0E026BA0CF397108621D5927E16436ACF58," Learn more - - - -* [IBM Cloud docs: Managing authentication](https://cloud.ibm.com/docs/appid?topic=appid-managing-idp) -* [IBM Cloud docs: Configuring federated identity providers: SAML](https://cloud.ibm.com/docs/appid?topic=appid-enterpriseenterprise) -* [IBM Cloud SAML Federation Guide](https://www.ibm.com/cloud/blog/ibm-cloud-saml-federation-guide) -* [Setting up IBM Cloud App ID with your Azure Active Directory](https://www.ibm.com/cloud/blog/setting-ibm-cloud-app-id-azure-active-directory) -* [Reusing Existing Red Hat SSO and Keycloak for Applications That Run on IBM Cloud with App ID](https://www.ibm.com/cloud/blog/reusing-existing-red-hat-sso-and-keycloak-for-applications-that-run-on-ibm-cloud-with-app-id) - - - -Parent topic:[Setting up IBM Cloud App ID (beta)](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid.html) -" -77393F760A3A3F834809ACA1078BDF229331C2FD_0,77393F760A3A3F834809ACA1078BDF229331C2FD," Overview for setting up IBM Cloud App ID (beta) - -IBM watsonx supports IBM Cloud App ID to integrate customer's registries for user authentication. You configure App ID on IBM Cloud to communicate with an identiry provider. You then provide an alias to the people in your organization to log in to IBM watsonx. - -Required roles : To configure identity providers for App ID, you must have one of the following roles in the IBM Cloud account: - -: - Account owner : - Operator or higher on the App ID instance : - Operator or Administrator role on the IAM Identity Service - -App ID is configured entirely on IBM Cloud. An identity provider, for example, Active Directory, must also be configured separately to communicate with App ID. - -For more information on configuring App ID to work with an identity provider, see [Configuring App ID with your identity provider](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html). - -" -77393F760A3A3F834809ACA1078BDF229331C2FD_1,77393F760A3A3F834809ACA1078BDF229331C2FD," Configuring the log on alias - -The App ID instance is configured as the default identity provider for the account. For instructions on configuring an identity provider, refer to [IBM Cloud docs: Enabling authentication from an external identity provider](https://cloud.ibm.com/docs/account?topic=account-idp-integration). - -Each App ID instance requires a unique alias. There is one alias per account. All users in an account log in with the same alias. When the identity provider is configured, the alias is initially set to the account ID. You can [change the initial alias](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.htmlcfg_alias) to be easier to type and remember. - -" -77393F760A3A3F834809ACA1078BDF229331C2FD_2,77393F760A3A3F834809ACA1078BDF229331C2FD," Logging in with App ID (beta) - -Users choose App ID (beta) as the login method on the IBM watsonx login page and enter the alias. Then, they are redirected to their company's login page to enter their company credentials. Upon logging in successfully to their company, they are redirected to IBM watsonx. - -To verify that the alias is correctly configured, go to the User profile and settings page. Verify that the username in the profile is the email from your company’s registry. The alias is correct if the correct email is shown in the profile, as it indicates that the mapping was successful. - -You cannot switch accounts when logging in through App ID. - -" -77393F760A3A3F834809ACA1078BDF229331C2FD_3,77393F760A3A3F834809ACA1078BDF229331C2FD," Limitations - -The following limitations apply to this beta release: - - - -* You must map the name/username/sub SAML profile properties to the email property in the user registry. If the mapping is absent or incorrect, a default opaque user ID is used, which is not supported in this beta release. -* The IBM Cloud login page does not support an App ID alias. Users log in into IBM Cloud with a custom URL, following this form: https://cloud.ibm.com/authorize/{app_id_alias}. - - - - - -* If you are using the Cloud Directory included with App ID as your user registry, you must select Username and password as the option for Manage authentication > Cloud Directory > Settings > Allow users to sign-up and sign-in using. - - - -" -77393F760A3A3F834809ACA1078BDF229331C2FD_4,77393F760A3A3F834809ACA1078BDF229331C2FD," Learn more - - - -* [Logging in to watsonx.ai through IBM App ID (beta)](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.htmlappid) -* [Configuring App ID with your identity provider](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html) -* [IBM Cloud docs: Getting started with App ID](https://cloud.ibm.com/docs/appid?topic=appid-getting-started) -* [IBM Cloud docs: Enabling authentication from an external identity provider](https://cloud.ibm.com/docs/account?topic=account-idp-integration) - - - -Parent topic:[Managing IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html) -" -78A4D6515FAA2766FEB3A03CA6A378846CF33D83_0,78A4D6515FAA2766FEB3A03CA6A378846CF33D83," Managing all projects in the account - -If you have the required permission, you can view and manage all projects in your IBM Cloud account. You can add yourself to a project so that you can delete it or change its collaborators. - -" -78A4D6515FAA2766FEB3A03CA6A378846CF33D83_1,78A4D6515FAA2766FEB3A03CA6A378846CF33D83," Requirements - -To manage all projects in the account, you must: - - - -* Restrict resources to the current account. See steps to [set the scope for resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.htmlset-the-scope-for-resources). -* Have the Manage projects permission that is provided by the IAM Manager role for the IBM Cloud Pak for Data service. - - - -" -78A4D6515FAA2766FEB3A03CA6A378846CF33D83_2,78A4D6515FAA2766FEB3A03CA6A378846CF33D83," Assigning the Manage projects permission - -To grant the Manage projects permission to a user who is already in your IBM Cloud account: - - - -1. From the navigation menu, choose Administration > Access (IAM) to open the Manage access and users page in your IBM Cloud account. -2. Select the user on the Users page. -3. Click the Access tab and then choose Assign access+. -4. Select Access policy. -5. For Service, choose IBM Cloud Pak for Data. -6. For Service access, select the Manager role. -7. For Platform access, assign the Editor role. -8. Click Add and Assign to assign the policy to the user. - - - -" -78A4D6515FAA2766FEB3A03CA6A378846CF33D83_3,78A4D6515FAA2766FEB3A03CA6A378846CF33D83," Managing projects - -You can add yourself to a project when you need to delete the project, delete collaborators, or assign the Admin role to a collaborator in the project. To manage projects: - - - -* View all active projects on the Projects page in IBM watsonx by clicking the drop-down menu next to the search field and selecting All active projects. -* Join any project as Admin by clicking Join as admin in the Your role column. -* Filter projects to identify which projects you are not a collaborator in, by clicking the filter icon ![Filter icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/filter.svg) and selecting Your role > No membership. - - - -For more details on managing projects, see [Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html). - -" -78A4D6515FAA2766FEB3A03CA6A378846CF33D83_4,78A4D6515FAA2766FEB3A03CA6A378846CF33D83," Learn more - - - -* [Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html) - - - -Parent topic:[Managing IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html) -" -96C0566DA4EB3450616C3F358C32837BFD4DE6C8_0,96C0566DA4EB3450616C3F358C32837BFD4DE6C8," Removing users from the account or from the workspace - -The IBM Cloud account administrator or owner can remove users from the IBM Cloud account. Any use with the Admin role can remove users from a workspace. - -" -96C0566DA4EB3450616C3F358C32837BFD4DE6C8_1,96C0566DA4EB3450616C3F358C32837BFD4DE6C8," Removing users from the IBM Cloud account - -You can remove a user from an IBM Cloud account, so that the user can no longer log in to the console, switch to your account, or access account resources. - -" -96C0566DA4EB3450616C3F358C32837BFD4DE6C8_2,96C0566DA4EB3450616C3F358C32837BFD4DE6C8," Required roles - -: To remove a user from an IBM Cloud account, you must have one of the following roles for your IBM Cloud account: : - Owner : - Administrator : - Editor - -To remove a user from the IBM Cloud account: - - - -1. From the IBM watsonx navigation menu, click Administration > Access (IAM). -2. Click Users and find the name of the user that you want to remove. -3. Choose Remove user from the action menu and confirm the removal. - - - -Removing a user from an account doesn't delete the IBMid for the user. Any resources such as projects or catalogs that were created by the user remain in the account, but the user no longer has access to work with those resources. The account owner, or an administrator for the service instance, can assign other users to work with the projects and catalogs or delete them entirely. - -For more information, see [IBM Cloud docs: Removing users from an account](https://cloud.ibm.com/docs/account?topic=account-remove). - -" -96C0566DA4EB3450616C3F358C32837BFD4DE6C8_3,96C0566DA4EB3450616C3F358C32837BFD4DE6C8," Removing users from a workspace - -You can remove collaborators from a workspace, such as a project or space, so that the user can no longer access the workspace or any of its contents. - -Required role : To remove a user from a workspace, you must have the Admin collaborator role for the workspace that you are editing. - -To remove a collaborator, select one or more users (or user groups) on the Access control page of the workspace and click Remove. - -The user is still a member of the IBM Cloud account and can be added as a collaborator to other workspaces as needed. - -" -96C0566DA4EB3450616C3F358C32837BFD4DE6C8_4,96C0566DA4EB3450616C3F358C32837BFD4DE6C8," Learn more - - - -* [Stop using IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/stopapps.html) -* [Project collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html) -* [IBM Cloud docs: Removing users from an account](https://cloud.ibm.com/docs/account?topic=account-remove) - - - -Parent topic:[Managing the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html) -" -28F15AC17715506BB29327874DE7F76CB9FB2908,28F15AC17715506BB29327874DE7F76CB9FB2908," Administering your accounts and services - -For most administration tasks, you must be the IBM Cloud account owner or administrator. If you log in to your own account, you are the account owner. If you log in to someone else's account or an enterprise account, you might not be the account owner or administrator. - -Tasks for all users: - - - -* [Managing your personal settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html) -* [Determining your roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/your-roles.html) -* [Understanding accessibility features](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/accessibility.html) - - - -Tasks for IBM Cloud account owners or administrators in IBM watsonx and in IBM Cloud: - - - -* [Managing IBM watsonx services](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html) -* [Securing IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html) -* [Managing your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/manage-account.html) -" -6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_0,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Activity Tracker events - -You can see the events for actions for your provisioned services in the IBM Cloud Activity Tracker. You can use the information that is registered through the IBM Cloud Activity Tracker service to identify security incidents, detect unauthorized access, and comply with regulatory and internal auditing requirements. - -To get started, provision an instance of the IBM Cloud Activity Tracker service. See [IBM Cloud Activity Tracker](https://cloud.ibm.com/docs/activity-tracker?topic=activity-tracker-getting-started). - -View events in the Activity Tracker in the same IBM Cloud region where you provisioned your services. To view the account and user management events and other global platform events, you must provision an instance of the IBM Cloud Activity Tracker service in the Frankfurt (eu-de) region. See [Platform services](https://cloud.ibm.com/docs/activity-tracker?topic=activity-tracker-cloud_services_locationscloud_services_locations_core_integrated). - - - -* [Events for account and user management](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/at-events.html?context=cdpaas&locale=enacct) -* [Events for Watson Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/at-events.html?context=cdpaas&locale=enws) -* [Events for Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/at-events.html?context=cdpaas&locale=enwml) -* [Events for model evaluation (Watson OpenScale)](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/at-events.html?context=cdpaas&locale=enwos) - - - -" -6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_1,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for account and user management - -You can audit account and user management events in Activity Tracker, including: - - - -* Billing events -* Global catalog events -* IAM and user management events - - - -For the complete list of account and user management events, see [IBM Cloud docs: Auditing events for account management](https://cloud.ibm.com/docs/activity-tracker?topic=activity-tracker-at_events_acc_mgt). - -" -6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_2,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for Watson Studio - - - -Events in Activity Tracker for Watson Studio - - Action Description - - data-science-experience.project.create Create a project. - data-science-experience.project.delete Delete a project. - data-science-experience.notebook.create Create a Notebook. - data-science-experience.notebook.delete Delete a Notebook. - data-science-experience.notebook.update Change the runtime service of a Notebook by selecting another one. - data-science-experience.rstudio.start Open RStudio. - data-science-experience.rstudio.stop RStudio session timed out. - - - -" -6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_3,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for Decision Optimization - - - -Events in Activity Tracker for Decision Optimization - - Action Description - - domodel.decision.create Create experiments - domodel.decision.update Update experiments - domodel.decision.delete Delete experiments - domodel.container.create Create scenarios - domodel.container.update Update scenarios - domodel.container.delete Delete scenarios - domodel.notebook.import Update a scenario from a notebook - domodel.notebook.export Generate a model notebook from a scenario - domodel.wml.export Generate Watson Machine Learning models from a scenario - domodel.solve.start Solve a scenario - domodel.solve.stop Cancel a solve - - - -" -6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_4,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for feature groups - - - -Events in Activity Tracker for feature groups (Watson Studio) - - Action Description - - data_science_experience.feature-group.retrieve Retrieve a feature group - data_science_experience.feature-group.create Create a feature group - data_science_experience.feature-group.update Update a feature group - data_science_experience.feature-group.delete Delete a feature group - - - -" -6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_5,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for asset management - - - -Events in Activity Tracker for asset management in Watson Studio - - Action Description - - datacatalog.asset.clone Copy an asset. - datacatalog.asset.create Create an asset. - datacatalog.data-asset.create Create a data asset. - datacatalog.folder-asset.create Create a folder asset. - datacatalog.type.create Create an asset type. - datacatalog.asset.purge Delete an asset from the trash. - datacatalog.asset.restore Restore an asset from the trash. - datacatalog.asset.trash Send an asset to the trash. - datacatalog.asset.update Update an asset. - datacatalog.promoted-asset.create Create a project asset in a space. - datacatalog.promoted-asset.update Update a space asset that started in a project. - datacatalog.asset.promote Promote an asset from project to space. - - - -" -6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_6,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for asset attachments - - - -Events in Activity Tracker for attachments - - Action Description - - datacatalog.attachment.create Create an attachment. - datacatalog.attachment.delete Delete an attachment. - datacatalog.attachment-resources.increase Increase resources for an attachment. - datacatalog.complete.transfer Mark an attachment as transfer complete. - datacatalog.attachment.update Update attachment metadata. - - - -" -6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_7,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for asset attributes - - - -Events in Activity Tracker for attributes - - Action Description - - datacatalog.attribute.create Create an attribute. - datacatalog.attribute.delete Delete an attribute. - datacatalog.attribute.update Update an attribute. - - - -" -6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_8,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for connections - - - -Events in Activity Tracker for connections - - Action Description - - wdp-connect-connection.connection.read Read a connection. - wdp-connect-connection.connection.get Retrieve a connection. - wdp-connect-connection.connection.get.list Get a list of connections. - wdp-connect-connection.connection.create Create a connection. - wdp-connect-connection.connection.delete Delete a connection. - - - -" -6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_9,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for scheduling - - - -Events in Activity Tracker for scheduling - - Action Description - - wdp.scheduling.schedule.update.failed An update to a schedule failed. - wdp.scheduling.schedule.create.failed The creation of a schedule failed. - wdp.scheduling.schedule.read Read a schedule. - wdp.scheduling.schedule.update Update a schedule. - wdp.scheduling.schedule.delete.multiple Delete multiple schedules. - wdp.scheduling.schedule.list List all schedules. - wdp.scheduling.schedule.create Create a schedule. - - - -" -6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_10,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for Data Refinery flows - - - -Events in Activity Tracker for Data Refinery flows - - Action Description - - data-science-experience.datarefinery-flow.read Read a Data Refinery flow - data-science-experience.datarefinery-flow.create Create a Data Refinery flow - data-science-experience.datarefinery-flow.delete Delete a Data Refinery flow - data-science-experience.datarefinery-flow.update Update (save) a Data Refinery flow - data-science-experience.datarefinery-flow.backup Clone (duplicate) a Data Refinery flow - data-science-experience.datarefinery-flowrun.create Create a Data Refinery flow job run - data-science-experience.datarefinery-flowrun-complete.update Complete a Data Refinery flow job run - data-science-experience.datarefinery-flowrun-cancel.update Cancel a Data Refinery flow job run - - - -" -6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_11,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for profiling - - - -Events in Activity Tracker for profiling - - Action Description - - wdp-profiling.profile.start Initiate profiling. - wdp-profiling.profile.create Create a profile. - wdp-profiling.profile.delete Delete a profile. - wdp-profiling.profile.read Read a profile. - wdp-profiling.profile.list List the profiles of a data asset. - wdp-profiling.profile.update Update a profile. - wdp-profiling.profile.asset-classification.update Update the asset classification of a profile. - wdp-profiling.profile.column-classification.update Update the column classification of a profile. - wdp-profiling.profile.create.failed Profile could not be created. - wdp-profiling.profile.delete.failed Profile could not be deleted. - wdp-profiling.profile.read.failed Profile could not be read. - wdp-profiling.profile.list.failed Profiles could not be listed. - wdp-profiling.profile.update.failed Profile could not be updated. - wdp-profiling.profile.asset-classification.update.failed Asset classification of the profile could not be updated. - wdp-profiling.profile.column-classification.update.failed Column classification of the profile could not be updated. - - - -" -6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_12,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for profiling options - - - -Events in Activity Tracker for profiling options - - Action Description - - wdp-profiling.profile_options.create Create profiling options. - wdp-profiling.profile_options.read Read profiling options. - wdp-profiling.profile_options.update Update profiling options. - wdp-profiling.profile_options.delete Delete profiling options - wdp-profiling.profile_options.create.failed Profiling options could not be created. - wdp-profiling.profile_options.read.failed Profiling options could not be read. - wdp-profiling.profile_options.update.failed Profiling options could not be updated. - wdp-profiling.profile_options.delete.failed Profiling options could not be deleted. - - - -" -6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_13,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for feature groups - - - -Events in Activity Tracker for feature groups (IBM Knowledge Catalog) - - Action Description - - data_catalog.feature-group.retrieve Retrieve a feature group - data_catalog.feature-group.create Create a feature group - data_catalog.feature-group.update Update a feature group - data_catalog.feature-group.delete Delete a feature group - - - -" -6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_14,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for Watson Machine Learning - -" -6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_15,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Event for Prompt Lab - - - -Event in Activity Tracker for Prompt Lab - - Action Description - - pm-20.foundation-model.send Send a prompt to a foundation model or tuned foundation model for inferencing. - - - -" -6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_16,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for Watson Machine Learning deployments - - - -Events in Activity Tracker for Watson Machine Learning deployments - - Action Description - - pm-20.deployment.create Create a Watson Machine Learning deployment. - pm-20.deployment.read Get a Watson Machine Learning deployment. - pm-20.deployment.update Update a Watson Machine Learning deployment. - pm-20.deployment.delete Delete a Watson Machine Learning deployment. - pm-20.deployment_job.create Create a Watson Machine Learning deployment job. - pm-20.deployment_job.read Get a Watson Machine Learning deployment job. - pm-20.deployment_job.delete Delete a Watson Machine Learning deployment job. - - - -" -6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_17,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for SPSS Modeler flows - - - -Events in Activity Tracker for SPSS Modeler flows - - Action Description - - data-science-experience.modeler-session.create Create a new SPSS Modeler session. - data-science-experience.modeler-flow.send Store the current SPSS Modeler flow. - data-science-experience.modeler-flows-user.receive Get the current user information. - data-science-experience.modeler-flow-preview.create Preview a node in an SPSS Modeler flow. - data-science-experience.modeler-examples.receive Get the list of example SPSS Modeler flows. - data-science-experience.modeler-runtimes.receive Get the list of available SPSS Modeler runtimes. - data-science-experience.lock-modeler-flow.enable Allocate the lock for the SPSS Modeler flow to the user. - data-science-experience.project-name.receive Get the name of the project. - - - -" -6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_18,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Event for model visualizations - - - -Event in Activity Tracker for modeler visualizations - - Action Description - - pm-20.model.visualize Visualize model output. The model output can have a single model, ensemble models, or a time-series model. The visualization type can be single, auto, or time-series. This visualization type is in requestedData section. - - - -" -6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_19,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for Watson Machine Learning training assets - - - -Event in Activity Tracker for Watson Machine Learning training assets - - Action Description - - pm-20.training.authenticate Authenticate user. - pm-20.training.authorize Authorize user. - pm-20.training.list List all of training. - pm-20.training.get Get one training. - pm-20.training.create Start a training. - pm-20.training.delete Stop a training. - - - -" -6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_20,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for Watson Machine Learning repository assets - -The deployment events are tracked for these Watson Machine Learning repository assets: - - - -Event in Activity Tracker for Watson Machine Learning repository assets - - Asset type Description - - wml_model Represents a machine learning model asset. - wml_model_definition Represents the code that is used to train one or more models. - wml_pipeline Represents a hybrid-pipeline, a SparkML pipeline or a sklearn pipeline that is represented as a JSON document that is used to train one or more models. - wml_experiment Represents the assets that capture a set of wml_pipeline or wml_model_definition assets that are trained at the same time on the same data set. - wml_function Represents a Python function (code is packaged in a compressed file) that will be deployed as online deployment in Watson Machine Learning. This code needs to contain a score(...) python function. - wml_training_definition Represents the training metadata necessary to start a training job. - wml_deployment_job_definition Represents the deployment metadata information to create a batch job in WML. This asset type contains the same metadata that is used by the /ml/v4/deployment_jobs endpoint. When you submit batch deployment jobs, you can either provide the job definition inline or reference a job definition in a query parameter. - - - -These activities are tracked for each asset type: - - - -Event in Activity Tracker for Watson Machine Learning repository assets - - Action Description - - pm-20..list List all of the specified asset type. - pm-20..create Create one of the specified asset types. - pm-20..delete Delete one of the specified asset types. - pm-20..update Update a specified asset type. - pm-20..read View a specified asset type. - pm-20..add Add a specified asset type. - - - -" -6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_21,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for model evaluation (Watson OpenScale) - -" -6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_22,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for public APIs - - - -Events in Activity Tracker for Watson OpenScale public APIs - - Action Description - - aiopenscale.metrics.create Store metric in the Watson OpenScale instance - aiopenscale.payload.create Log payload in the Watson OpenScale instance - - - -" -6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_23,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for private APIs - - - -Events in Activity Tracker for Watson OpenScale private APIs - - Action Description - - aiopenscale.datamart.configure Configure the Watson OpenScale instance - aiopenscale.datamart.delete Delete the Watson OpenScale instance - aiopenscale.binding.create Add service binding to the Watson OpenScale instance - aiopenscale.binding.delete Delete service binding from the Watson OpenScale instance - aiopenscale.subscription.create Add subscription to the Watson OpenScale instance - aiopenscale.subscription.delete Delete subscription from the Watson OpenScale instance - - - -Parent topic:[Administration](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html) -" -2C6B0F77C4CA0CAFB52E1FE3E10800D56015CADF_0,2C6B0F77C4CA0CAFB52E1FE3E10800D56015CADF," Creating and managing IBM Cloud services - -You can create IBM Cloud service instances within IBM watsonx from the Services catalog. - -Prerequisite : You must be [signed up for watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html). - -Required permissions : For creating or managing a service instance, you must have Administrator or Editor platform access roles in the IBM Cloud account for IBM watsonx. If you signed up for IBM watsonx with your own IBM Cloud account, you are the owner of the account. Otherwise, you can [check your IBM Cloud account roles and permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/your-roles.html). - -" -2C6B0F77C4CA0CAFB52E1FE3E10800D56015CADF_1,2C6B0F77C4CA0CAFB52E1FE3E10800D56015CADF," Creating a service - -To view the Services catalog, select Administration > Services > Services catalog from the main menu. For a description of each service, see [Services](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-services.html). - -To check which service instances you have, select Administration > Services > Service instances from the main menu. You can filter which services you see by resource group, organization, and region. - -To create a service: - - - -1. Log in to IBM watsonx. -2. Select Administration > Services > Services catalog from the main menu. -3. Click the service you want to create. -4. Specify the IBM Cloud service region. -5. Select a plan. -6. If necessary, select the resource group or organization. -7. Click Create. - - - -" -2C6B0F77C4CA0CAFB52E1FE3E10800D56015CADF_2,2C6B0F77C4CA0CAFB52E1FE3E10800D56015CADF," Managing services - -To manage a service: - - - -1. Select Administration > Services > Services instances from the main menu. -2. Click the Action menu next to the service name and select Manage in IBM Cloud. The service page in IBM Cloud opens in a separate browser tab. -3. To change pricing plans, select Plan and choose the desired plan. - - - -" -2C6B0F77C4CA0CAFB52E1FE3E10800D56015CADF_3,2C6B0F77C4CA0CAFB52E1FE3E10800D56015CADF," Learn more - - - -* [Associate a service with a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html) -* [Managing the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html) - - - -Parent topic:[IBM Cloud services](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-services.html) -" -A392BDDEAD4F42155DC83FBA8512775DB313FC53_0,A392BDDEAD4F42155DC83FBA8512775DB313FC53," Securing connections to services with private service endpoints - -You can configure isolated connectivity to your cloud-based services for production workloads with IBM Cloud service endpoints. When you enable IBM Cloud service endpoints in your account, you can expose a private network endpoint when you create a resource. You then connect directly to this endpoint over the IBM Cloud private network rather than the public network. Because resources that use private network endpoints don't have an internet-routable IP address, connections to these resources are more secure. - -To use service endpoints: - - - -1. Enable virtual routing and forwarding (VRF) in your account, if necessary, and enable the use of service endpoints. -2. Create services that support VRF and service endpoints. - - - -See [Enabling VRF and service endpoints](https://cloud.ibm.com/docs/account?topic=account-vrf-service-endpoint). - -" -A392BDDEAD4F42155DC83FBA8512775DB313FC53_1,A392BDDEAD4F42155DC83FBA8512775DB313FC53," Learn more - - - -* [Secure access to services using service endpoints](https://cloud.ibm.com/docs/account?topic=account-service-endpoints-overview) -* [Enabling VRF and service endpoints](https://cloud.ibm.com/docs/account?topic=account-vrf-service-endpoint) -* [List of services that support service endpoints](https://cloud.ibm.com/docs/account?topic=account-vrf-service-endpointuse-service-endpoint) - - - -Parent topic:[Security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html) -" -71C98B1AE9BB65177C030CF1DE6760D41B7D7DF5_0,71C98B1AE9BB65177C030CF1DE6760D41B7D7DF5," Firewall access for Cloud Object Storage - -Private IP addresses are required when IBM watsonx and Cloud Object Storage are located on the same network. When creating a connection to a Cloud Object Storage bucket that is protected by a firewall on the same network as IBM watsonx, the connector automatically maps to private IP addresses for IBM watsonx. The private IP addresses must be added to a Bucket access policy to allow inbound connections from IBM watsonx. - -Follow these steps to search the private IP addresses for the IBM watsonx cluster and add them to the Bucket access policy: - - - -1. Go to the Administration > Cloud integrations page. -2. Click the Firewall configuration link to view the list of IP ranges used by IBM watsonx. -3. Choose Include private IPs to view the private IP addresses for the IBM watsonx cluster. ![A list of private IP addresses for the IBM watsonx cluster](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/images/ip-ranges-private.png) -4. From your IBM Cloud Object Storage instance on IBM Cloud, open the Buckets list and choose the Bucket for the connection. -5. Copy each of the private IP ranges listed and paste them into the Buckets > Permissions > IP address field on IBM Cloud. ![A list of permitted private IP addresses for the IBM watsonx cluster](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/images/bucket-ips.png) - - - -" -71C98B1AE9BB65177C030CF1DE6760D41B7D7DF5_1,71C98B1AE9BB65177C030CF1DE6760D41B7D7DF5," Learn more - - - -* [IBM Cloud Object Storage connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) -* [IBM Cloud docs: Setting a firewall](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-setting-a-firewallfirewall) - - - -Parent topic:[Configuring firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall_ovrvw.html) -" -3D1B3C707202F30F8995025F356F82ABBE685B93,3D1B3C707202F30F8995025F356F82ABBE685B93," Firewall access for Watson Studio - -Inbound firewall access is granted to the Watson Studio service by allowing the IP addresses for IBM watsonx on IBM Cloud. - -If Watson Studio is installed behind a firewall, you must add the WebSocket connection for your region to the firewall settings. Enabling the WebSocket connection is required for notebooks and RStudio. - -Following are the WebSocket settings for each region: - - - -Table 1. Regional WebSockets - - Location Region WebSocket - - United States (Dallas) us-south wss://dataplatform.cloud.ibm.com - Europe (Frankfurt) eu-de wss://eu-de.dataplatform.cloud.ibm.com - United Kingdom (London) eu-gb wss://eu-gb.dataplatform.cloud.ibm.com - Asia Pacific (Tokyo) jp-tok wss://jp-tok.dataplatform.cloud.ibm.com - - - -Follow these steps to look up the IP addresses for IBM watsonx and allow them on IBM Cloud: - - - -1. From the main menu, choose Administration > Cloud integrations. -2. Click Firewall configuration to display the IP addresses for the current region. Use CIDR notation. -3. Copy each CIDR range into the IP address restrictions for either a user or an account. You must also enter the allowed individual client IP addresses. Enter the IP addresses as a comma-separated list. Then, click Apply. -4. Repeat for each region to allow access for Watson Studio. - - - -When you configure the allowed IP addresses for Watson Studio, you include the CIDR ranges for the Watson Studio cluster. You can also allow individual client system IP addresses. - -For step-by-step instructions for both user and account restrictions, see [IBM Cloud docs: Allowing specific IP addresses](https://cloud.ibm.com/docs/account?topic=account-ips) - -Parent topic:[Configuring firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall_ovrvw.html) -" -34974DEE293BA190CFA1B3383EB2417D0FD4B601_0,34974DEE293BA190CFA1B3383EB2417D0FD4B601," Firewall access for AWS Redshift - -Inbound firewall access allows IBM watsonx to connect to Redshift on AWS through the firewall. You need inbound firewall access to work with your data stored in Redshift. - -To connect to Redshift from IBM watsonx, you configure inbound access through the Redshift firewall by entering the IP ranges for IBM watsonx into the inbound firewall rules (also called ingress rules). Inbound access through the firewall is configurable if Redshift resides on a public subnet. If Redshift resides on a private subnet, then no access is possible. - -Follow these steps to configure inbound firewall access to AWS Redshift: - - - -1. Go to your provisioned Amazon Redshift cluster. -2. Select Properties and then scroll down to Network and security settings. -3. Click the VPC security group. - -![AWS VPC security group](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/images/int-aws-active.png) -4. Edit the active/default security group. - -![AWS active security group](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/images/int-aws-vpc.png) -5. Under Inbound rules, change the port range to 5439 to specify the Redshift port. Then select Edit inbound rules > Add rule. - -![Edit inbound rules](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/images/int-aws-IPs.png) -6. From IBM watsonx, go to the Administration > Cloud integrations page. -7. Click the Firewall configuration link to view the list of IP ranges used by IBM watsonx. IP addresses can be viewed in either CIDR notation or as Start and End addresses. -8. Copy each of the IP ranges listed and paste them into the Source field for inbound firewall rules. - - - -" -34974DEE293BA190CFA1B3383EB2417D0FD4B601_1,34974DEE293BA190CFA1B3383EB2417D0FD4B601," Learn more - - - -* [Working with Redshift-managed VPC endpoints in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-cross-vpc.html) -" -648122BED05213950C23287CB4845FA56660232B_0,648122BED05213950C23287CB4845FA56660232B," Firewall access for Spark - -To allow Spark to access data that is located behind a firewall, you add the appropriate IP addresses for your region to the inbound rules for your firewall. - -" -648122BED05213950C23287CB4845FA56660232B_1,648122BED05213950C23287CB4845FA56660232B," Dallas (us-south) - - - -* dal12 - 169.61.173.96/27, 169.63.15.128/26, 150.239.143.0/25, 169.61.133.240/28, 169.63.56.0/24 -* dal13 - 169.61.57.48/28, 169.62.200.96/27, 169.62.235.64/26 -* dal10 - 169.60.246.160/27, 169.61.194.0/26, 169.46.22.128/26, 52.118.59.0/25 - - - -Parent topic:[Configuring firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall_ovrvw.html) -" -E732DFB3C4F38ABECBA99DA31750FB6291560DB5_0,E732DFB3C4F38ABECBA99DA31750FB6291560DB5," Firewall access for Watson Machine Learning - -To allow Watson Machine Learning to access data that is located behind a firewall, you add the appropriate IP addresses for your region to the inbound rules for your firewall. - -" -E732DFB3C4F38ABECBA99DA31750FB6291560DB5_1,E732DFB3C4F38ABECBA99DA31750FB6291560DB5," Dallas (us-south) - - - -* dal10 - 169.60.39.152/29 -* dal12 - 169.48.198.96/29 -* dal13 - 169.61.47.128/29,169.62.162.88/29 - - - -Parent topic:[Configuring firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall_ovrvw.html) -" -E176531BA95036356A7E5DCA50A8DF728C78CE79_0,E176531BA95036356A7E5DCA50A8DF728C78CE79," Firewall access for the platform - -If a data source resides behind a firewall, then IBM watsonx requires inbound access through the firewall in order to make a connection. Inbound firewall access is required whether the data source resides on a third-party cloud provider or in an data center. The method for configuring inbound access varies for different vendor's firewalls. In general, you configure inbound access rules by entering the IP addresses for the IBM watsonx cluster to allow for access by IBM watsonx. - -You can enter the IP addresses using the starting and ending addresses for a range or by using CIDR notation. Classless Inter-Domain Routing (CIDR) notation is a compact representation of an IP address and its associated network mask. For start and end addresses, copy each address and enter them in the inbound rules for your firewall. Alternately, copy the addresses in CIDR notation. - -The IBM watsonx IP addresses vary by region. The user interface lists the IP addresses for the current region. The IP addresses apply to the base infrastructure for IBM watsonx. - -Follow these steps to look up the IP addresses for IBM watsonx cluster: - - - -1. Go to the Administration > Cloud integrations page. -2. Click the Firewall configuration link to view the list of IP ranges used by IBM watsonx in your region. -3. View the IP ranges for the IBM watsonx cluster in either CIDR notation or as Start and End addresses. -4. Choose Include private IPs to view the private IP addresses. The private IP addresses allow connections to IBM Cloud Object Storage buckets that are behind a firewall. See [Firewall access for Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall-cfg-private-cos.html). -5. Copy each of the IP ranges listed and paste them into the appropriate security configuration or inbound firewall rules area for your cloud provider. - - - -" -E176531BA95036356A7E5DCA50A8DF728C78CE79_1,E176531BA95036356A7E5DCA50A8DF728C78CE79,"For example, if your data source resides on AWS, open the Create Security Group dialog for your AWS Management Console. Paste the IP ranges into the Inbound section for the security group rules. - -Parent topic:[Configuring firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall_ovrvw.html) -" -E7B64045AF2C3FF02183FB1CCC036327CEE5E971_0,E7B64045AF2C3FF02183FB1CCC036327CEE5E971," Configuring firewall access - -Firewalls protect valuable data from public access. If your data sources reside behind a firewall for protection, and you are not using a Satellite Connector or Satellite location, then you must configure the firewall to allow the IP addresses for IBM watsonx and also for individual services. Otherwise, IBM watsonx is denied access to the data sources. - -To allow IBM watsonx access to private data sources, you configure inbound firewall rules using the security mechanisms for your firewall. Inbound firewall rules are not required for connections that use a Satellite Connector or Satellite location, which establishes a link by performing an outbound connection. For more information, see [Connecting to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -All services in IBM watsonx actively use WebSockets for the proper functioning of the user interface and APIs. Any firewall between the user and the IBM watsonx domain must allow HTTPUpgrade. If IBM watsonx is installed behind a firewall, traffic for the wss:// protocol must be enabled. - -" -E7B64045AF2C3FF02183FB1CCC036327CEE5E971_1,E7B64045AF2C3FF02183FB1CCC036327CEE5E971," Configuring inbound access rules for firewalls - -If data sources reside behind a firewall, then inbound access rules are required for IBM watsonx. Inbound firewall rules protect the network against incoming traffic from the internet. The following scenarios require inbound access rules through a firewall: - - - -* [Firewall access for IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall_cfg.html) -* [Firewall access for Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall-cfg-private-cos.html) -* [Firewall access for AWS Redshift](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall-redshift.html) -* [Firewall access for Watson Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall-dsx.html) -* [Firewall access for Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall-wml.html) -* [Firewall access for Spark](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall-spark.html) - - - -" -E7B64045AF2C3FF02183FB1CCC036327CEE5E971_2,E7B64045AF2C3FF02183FB1CCC036327CEE5E971," Learn more - - - -* [Connecting to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html) - - - -Parent topic:[Setting up the platform for administrators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html) -" -9E71F112F9AF39E61A59914D87689B4B8DB13F50_0,9E71F112F9AF39E61A59914D87689B4B8DB13F50," Integrating with AWS - -You can configure an integration with the Amazon Web Services (AWS) platform to allow IBM watsonx users access to data sources from AWS. Before proceeding, make sure you have proper permissions. For example, you'll need to be able to create services and credentials in the AWS account. - -After you configure an integration, you'll see it under Service instances. You'll see a new AWS tab that lists your instances of Redshift and S3. - -To configure an integration with AWS: - - - -1. Log on to the [AWS Console](https://aws.amazon.com/console/). -2. From the account drop-down at the upper right, select My Security Credentials. -3. Under Access keys (access key ID and secret access key), click Create New Access Key. -4. Copy the key ID and secret. - -Important: Write down your key ID and secret and store them in a safe place. -5. In IBM watsonx, under Administration > Cloud integrations, go to the AWS tab, enable integration, and then paste the access key ID and access key secret into the appropriate fields. -6. If you need to access Redshift, [configure firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall-redshift.html). -7. Confirm that you can see your AWS services. From the main menu, choose Administration > Services > Services instances. Click the AWS tab to see those services. - - - -Now users who have credentials to your AWS services can [create connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) to them by selecting them on the Add connection page. Then they can access data from those connections by [creating connected data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - -" -9E71F112F9AF39E61A59914D87689B4B8DB13F50_1,9E71F112F9AF39E61A59914D87689B4B8DB13F50," Next steps - - - -* [Set up a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) -* [Create connections in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) - - - -Parent topic:[Integrations with other cloud platforms](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-cloud.html) -" -496C8703EBA5C4C6BCD6D65EE60D3E768F1BF071_0,496C8703EBA5C4C6BCD6D65EE60D3E768F1BF071," Integrating with Microsoft Azure - -You can configure an integration with the Microsoft Azure platform to allow IBM watsonx users access to data sources from Microsoft Azure. Before proceeding, make sure you have proper permissions. For example, you'll need permission in your subscription to create an application integration in Azure Active Directory. - -After you configure an integration, you'll see it under Service instances. You'll see a new Azure tab that lists your instances of Data Lake Storage Gen1 and SQL Database. - -To configure an integration with Microsoft Azure: - - - -1. Log on to your Microsoft Azure account at [https://portal.azure.com](https://portal.azure.com). -2. Navigate to the Subscriptions panel and copy your subscription ID. - - - - - -1. In IBM watsonx, go to Administration > Cloud integrations and click the Azure tab. Paste the subscription ID you copied in the previous step into the Subscription ID field. - - - - - -1. In Microsoft Azure Active Directory, navigate to Manage > App registrations and click New registration to register an application. Give it a name such as IBM integration and select the desired option for supported account types. - - - - - -1. Copy the Application (client) ID and the Tenant ID and paste them into the appropriate fields on the IBM watsonx Integrations page, as you did with the subscription ID. - - - - - -1. In Microsoft Azure, navigate to Certificates & secrets > New client secret to create a new secret. - -" -496C8703EBA5C4C6BCD6D65EE60D3E768F1BF071_1,496C8703EBA5C4C6BCD6D65EE60D3E768F1BF071,"Important! - - - -* Write down your secret and store it in a safe place. After you leave this page, you won't be able to retrieve the secret again. You'd need to delete the secret and create a new one. -* If you ever need to revoke the secret for some reason, you can simply delete it from this page. -* Pay attention to the expiration date. When the secret expires, integration will stop working. - - - -2. Copy the secret from Microsoft Azure and paste it into the appropriate field on the Integrations page as you did with the subscription ID and client ID. -3. Configure [firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-azure.html?context=cdpaas&locale=enfirewall). -4. Confirm that you can see your Azure services. From the main menu, choose Administration > Services > Services instances. Click the Azure tab to see those services. - - - -Now users who have credentials to your Azure services can [create connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) to them by selecting them on the Add connection page. Then they can access data from those connections by [creating connected data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - -" -496C8703EBA5C4C6BCD6D65EE60D3E768F1BF071_2,496C8703EBA5C4C6BCD6D65EE60D3E768F1BF071," Configuring firewall access - -You must also configure access so IBM watsonx can access data through the firewall. - -For Microsoft Azure SQL Database firewall: - - - -1. Open the database instance in Microsoft Azure. -2. From the top list of actions, select Set server firewall. -3. Set Deny public network access to No. -4. In a separate tab or window, open IBM watsonx and go to Administration > Cloud integrations. In the Firewall configuration panel, for each firewall IP range, copy the start and end address values into the list of rules in the Microsoft Azure QL Database firewall. - - - -For Microsoft Azure Data Lake Storage Gen1 firewall: - - - -1. Open the Data Lake instance. -2. Go to Settings > Firewall and virtual networks. -3. In a separate tab or window, open IBM watsonx and go to Administration > Cloud integrations. In the Firewall configuration panel, for each firewall IP range, copy the start and end address values into the list of rules under Firewall in the Data Lake instance. - - - -You can now create connections, preview data from Microsoft Azure data sources, and access Microsoft Azure data in Notebooks, Data Refinery, SPSS Modeler, and other tools in projects and in catalogs. You can see your Microsoft Azure instances under Services > Service instances. - -" -496C8703EBA5C4C6BCD6D65EE60D3E768F1BF071_3,496C8703EBA5C4C6BCD6D65EE60D3E768F1BF071," Next steps - - - -* [Set up a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) -* [Create connections in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) - - - -Parent topic:[Integrations with other cloud platforms](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-cloud.html) -" -72B9EC702C95AC86DE08E0FB8F8C3404B1228B5F,72B9EC702C95AC86DE08E0FB8F8C3404B1228B5F," Integrations with other cloud platforms - -You can integrate IBM watsonx with other cloud platforms to configure access to the data source services on that platform. Then, users can easily create connections to those data source services and access the data in those data sources. - -You need to be the Account Owner or Administrator for the IBM Cloud account to configure integrations with other cloud platforms. - -You must have the proper permissions in your cloud platform subscription before you can configure an integration. If you are using Amazon Web Services (AWS) Redshift (or other AWS data sources) or Microsoft Azure, you must also configure firewall access to allow IBM watsonx to access data. - -After you configure integration and firewall access with another cloud platform, you can access and connect to the services on that platform: - - - -* The service instances for that platform are shown on the Service instances page. From the main menu, choose Administration > Services > Services instances. Each cloud platform that you integrate with has its own page. -* The data source services in that platform are shown when you create a connection. Start adding a connection in a project, catalog, or other workspace. When the Add connection page appears, click the To service tab. The services are listed by cloud platform. - - - -You can configure integrations with these cloud platforms: - - - -* [Amazon Web Services (AWS)](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-aws.html) -* [Microsoft Azure](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-azure.html) -* [Google Cloud Platform (GCP)](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-google.html) - - - -Parent topic:[Services and integrations](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/svc-int.html) -" -CB81643BE8EE3B3DC2F6BCCDB77BD2CEC32C8926_0,CB81643BE8EE3B3DC2F6BCCDB77BD2CEC32C8926," Integrating with Google Cloud Platform - -You can configure an integration with the Google Cloud Platform (GCP) to allow IBM watsonx users to access data sources from GCP. Before proceeding, make sure you have proper permissions. - -After you configure an integration, you'll see it under Service instances. For example, you'll see a new GCP tab that lists your BigQuery data sets and Storage buckets. - -To configure an integration with GCP: - - - -1. Log on to the Google Cloud Platform at [https://console.cloud.google.com](https://console.cloud.google.com). -2. Go to IAM & Admin > Service Accounts. -3. Open your project and then click CREATE SERVICE ACCOUNT.1. Specify a name and description for the new service account and click CREATE. Specify other options as desired and click DONE.1. Click the actions menu next to the service instance and select Create key. For key type, select JSON and then click CREATE. The JSON key file will be downloaded to your machine. - -Important: Write down your key ID and secret and store them in a sStore the key file in a secure location. -4. In IBM watsonx, under Administrator > Cloud integrations, go to the GCP tab, enable integration, and then paste the contents from the JSON key file into the text field. Only certain properties from the JSON will be stored, and the private_key property will be encrypted. -5. Go back to Google Cloud Platform and edit the service account you created previously. Add the following roles: -6. Confirm that you can see your GCP services. From the main menu, choose Administration > Services > Services instances. Click the GCP tab to see those services, for example, BigQuery data sets and Storage buckets. - - - -Now users who have credentials to your GCP services can can [create connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) to them by selecting them on the Add connection page. Then they can access data from those connections by [creating connected data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - -" -CB81643BE8EE3B3DC2F6BCCDB77BD2CEC32C8926_1,CB81643BE8EE3B3DC2F6BCCDB77BD2CEC32C8926," Next steps - - - -* [Set up a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) -* [Create connections in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) - - - -Parent topic: -" -E6A30655CBD3745ACBCBF18E79B4C3979CA6B35B_0,E6A30655CBD3745ACBCBF18E79B4C3979CA6B35B," Managing your IBM Cloud account - -You can manage your IBM Cloud account to view billing and usage, manage account users, and manage services. - -Required permissions : You must be the IBM Cloud account owner or administrator. - -To manage your IBM Cloud account, choose Administration > Account and billing > Account > Manage in IBM Cloud from IBM watsonx. Then from the IBM Cloud console, choose an option from the Manage menu. - - - -* Account: See [Adding orgs and spaces](https://cloud.ibm.com/docs/account?topic=account-orgsspacesusersorgsspacesusers) and [Managing resource groups](https://cloud.ibm.com/docs/account?topic=account-rgs). -* Billing and Usage: See [How you're charged](https://cloud.ibm.com/docs/billing-usage?topic=billing-usage-chargescharges). -* Access (IAM): See [Inviting users](https://cloud.ibm.com/docs/account?topic=account-access-getstarted). - - - -" -E6A30655CBD3745ACBCBF18E79B4C3979CA6B35B_1,E6A30655CBD3745ACBCBF18E79B4C3979CA6B35B," Learn more - - - -* [Activity Tracker events](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/at-events.html) -* [Manage your settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html) -* [Set up IBM watsonx for your organization](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html) -* [Manage users and access](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-access.html) -* [IBM Cloud SAML Federation Guide](https://www.ibm.com/cloud/blog/ibm-cloud-saml-federation-guide) -* [Delete your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.htmldeletecloud) -* [Check the status of IBM Cloud services](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/service-status.html) -* [Configure private service endpoints](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/endpoints-vrf.html) - - - -Parent topic:[Administration](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html) -" -BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E_0,BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E," Monitoring account resource usage - -Some service plans charge for compute usage and other types of resource usage. If you are the IBM Cloud account owner or administrator, you can monitor the resources usage to ensure the limits are not exceeded. - -For Lite plans, you cannot exceed the limits of the plan. You must wait until the start of your next billing month to use resources that are calculated monthly. Alternatively, you can [upgrade to a paid plan](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/upgrade.html). - -For most paid plans, you pay for the resources that the tools and processes that are provided by the service consume each month. - -To see the costs of your plan, log in to IBM Cloud, open your service instance from your IBM Cloud dashboard, and click Plan. - - - -* [Capacity unit hours (CUH) for compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html?context=cdpaas&locale=encompute) -* [Resource units for foundation model inferencing](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html?context=cdpaas&locale=enrus) -* [Monitor monthly billing](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html?context=cdpaas&locale=enbilling) - - - -" -BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E_1,BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E," Capacity unit hours (CUH) for compute usage - -Many tools consume compute usage that is measured in capacity unit hours (CUH). A capacity unit hour is a specific amount of compute capability with a set cost. - -" -BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E_2,BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E," How compute usage is calculated - -Different types of processes and different levels of compute power are billed at different rates of capacity units per hour. For example, the hourly rate for a data profiling process is 6 capacity units. - -Compute usage for Watson Studio is charged by the minute, with a minimum charge of 10 minutes (0.16 hours). Compute usage for Watson Machine Learning is charged by the minute with a minimum charge of one minute. - -Compute usage is calculated by adding the minimum number of minutes billed for each process plus the number of minutes the process runs beyond the minimum minutes, then multiplying the total by the capacity unit rate for the process. - -The following table shows examples of how the billed CUH is calculated. - - - - Rate Usage time Calculation Total CUH billed - - 1 CUH/hour 1 hour 1 hour * 1 CUH/hour 1 CUH - 2 CUH/hour 45 minutes 0.75 hours * 2 CUH/hour 1.5 CUH - 6 CUH/hour 5 minutes 0.16 hours * 6 CUH/hour 0.96 CUH. The minimum charge for Watson Studio applies. - 6 CUH/hour 30 minutes 0.5 hours * 6 CUH/hour 3 CUH - 6 CUH/hour 1 hour 1 hour * 6 CUH/hour 6 CUH - - - -" -BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E_3,BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E," Processes that consume capacity unit hours - -Some types of processes, such as AutoAI and Federated Learning, have a single compute rate for the runtime. However, with many tools you have a choice of compute resources for the runtime. The notebook editor, Data Refinery, SPSS Modeler, and other tools have different rates that reflect the memory and compute power for the environment. Environments with more memory and compute power consume capacity unit hours at a higher rate. - -This table shows each process that consumes CUH, where it runs, and against which service CUH is billed, and whether you can choose from more than one environment. Follow the links to view the available CUH rates for each process. - - - - Tool or Process Workspace Service that provides CUH Multiple CUH rates? - - [Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html) Project Watson Studio, Analytics Engine (Spark) Multiple rates - [Invoking the machine learning API from a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.htmlwml) Project Watson Machine Learning Multiple rates - [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html) Project Watson Studio Multiple rates - [SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-envs.html) Project Watson Studio Multiple rates - [RStudio IDE](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html) Project Watson Studio Multiple rates - [AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-autoai.html) Project Watson Machine Learning Multiple rates -" -BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E_4,BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E," [Decision Optimization experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-decisionopt.html) Spaces Watson Machine Learning Multiple rates - [Running deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-cuh-deploy-spaces.html) Spaces Watson Machine Learning Multiple rates - [Profiling](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.htmlprofiling) Project Watson Studio One rate - [Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/synthetic-envs.html) Project Watson Studio One rate - - - -" -BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E_5,BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E," Monitoring compute usage - -You can monitor compute usage for all services at the account level. To view the monthly CUH usage for a service, open the service instance from your IBM Cloud dashboard and click Plan. - -You can also monitor compute usage in a project on the Environments page on the Manage tab. - -To see the total amount of capacity unit hours that are used and that are remaining for Watson Studio and Watson Machine Learning, look at the Environment Runtimes page. From the navigation menu, select Administration > Environment runtimes. The Environment Runtimes page shows details of the [CUH used by environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/track-runtime-usage.htmltrack-account). You can calculate the amount of CUH you use for data flows and profiling by subtracting the amount used by environments from the total amount used. - -" -BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E_6,BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E," Resource units for foundation model inferencing - -Calling a foundation model to generate output in response to a prompt is known as inferencing. Foundation model inferencing is measure in resource units (RU). Each RU equals 1,000 tokens. A token is a basic unit of text (typically 4 characters or 0.75 words) used in the input or output for a foundation model prompt. For details on tokens, see [Tokens](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html). - -Resource unit billing is based on the rate of the foundation model class multipled by the number of tokens. Foundation models are classified into three classes. See [Resource unit metering](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.htmlru-metering). - -Note: You do not consume tokens when you use the generative AI search and answer app for this documentation site. - -" -BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E_7,BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E," Monitoring token usage for foundation model inferencing - -You can monitor foundation model token usage in a project on the Environments page on the Manage tab. - -" -BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E_8,BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E," Monitor monthly billing - -You must be an IBM Cloud account owner or administrator to see resource usage information. - -To view a summary of your monthly billing, from the navigation menu, choose Administration > Account and billing > Billing and usage. The IBM Cloud usage dashboard opens. To view the usage for each service, in the Usage summary section, click View usage. - -" -BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E_9,BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E," Learn more - - - -* [Choosing compute resources for running tools in projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) -* [Upgrade services](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/upgrade.html) -* [Environments compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/track-runtime-usage.htmltrack-account) -* [Watson Studio offering plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html) -* [Watson Machine Learning plans and compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html) - - - -Parent topic:[Managing the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html) -" -C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE_0,C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE," Managing your settings - -You can manage your profile, services, integrations, and notifications while logged in to IBM watsonx. - - - -* [Manage your profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html?context=cdpaas&locale=enprofile) -* [Manage user API keys](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-apikeys.html) -* [Switch accounts](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html?context=cdpaas&locale=enaccount) -* [Manage your services](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html) -* [Manage your integrations](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html?context=cdpaas&locale=enintegrations) -* [Manage your notification settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html?context=cdpaas&locale=enbell) -* [View and personalize your project summary](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html?context=cdpaas&locale=enproject-summary) - - - -" -C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE_1,C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE," Manage your profile - -You can manage your profile on the Profile page by clicking your avatar in the banner and then clicking Profile and settings. - -You can make these changes to your profile: - - - -* Add or change your avatar photo. -* Change your IBMid or password. Do not change your IBMid (email address) after you register with the IBM watsonx platform. The IBMid (email address) uniquely identifies users in the platform and also authorizes access to various IBM watsonx resources, including projects, spaces, models, and catalogs. If you change your IBMid (email address) in your IBM Cloud profile after you have registered with IBM watsonx, you will lose access to the platform and associated resources. -* Set your service locations filters by resource group and location. The filters apply throughout the platform. For example, the Service instances page that you access through the Services menu shows only the filtered services. Ensure you have selected the region where Watson Studio is located, for example, Dallas, as well as the Global location. Global is required to provide access to your IBM Cloud Object Storage instance. -* Access your IBM Cloud account. -* [Leave IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/stopapps.htmldeactivate). - - - -" -C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE_2,C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE," Switch accounts - -If you are added to a shared IBM Cloud account that is different from your individual account, you can switch your account by selecting a different account from the account list in the menu bar, next to your avatar. - -" -C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE_3,C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE," Manage your integrations - -To set up or modify an integration to GitHub: - - - -1. Click your avatar in the banner. -2. Click Profile and settings. -3. Click the Git integrations tab. - - - -See [Publish notebooks on GitHub](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/github-integration.html). - -" -C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE_4,C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE," Manage your notification settings - -To see your notification settings, click the notification bell icon and then click the settings icon. - -You can make these changes to your notification settings: - - - -* Specify to receive push notifications that appear briefly on screen. If you select Do not disturb, you continue to see notifications on the home page and the number of notifications on the bell. -* Specify to receive notifications by email. -* Specify for which projects or spaces you receive notifications. - - - -" -C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE_5,C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE," View and personalize your project summary - -Use the Overview page of a project to view a summary of what's happening in your project. You can jump back into your most recent work and keep up to date with alerts, tasks, project history, and compute usage. - -View recent asset activity in the Assets pane on the Overview page, and filter the assets by selecting By you or By all using the dropdown. Selecting By you lists assets edited by you, ordered by most recent at the top. Selecting By all lists assets edited by others and also by you, ordered by most recent at the top. - -You can use the readme file on the Overview page to document the status or results of the project. The readme file uses standard [Markdown formatting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/markd-jupyter.html). Collaborators with the Admin or Editor role can edit the readme file. - -" -C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE_6,C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE," Learn more - - - -* [Managing your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/manage-account.html) -* [Managing your services](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/create-services.htmlmanage) - - - -Parent topic:[Administration](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html) -" -FA4ACDC5DB590992630C704D00DEFB142F2F0489_0,FA4ACDC5DB590992630C704D00DEFB142F2F0489," Object storage for workspaces - -You must choose an IBM Cloud Object Storage instance when you create a project, catalog, or deployment space workspace. Information that is stored in IBM Cloud Object Storage is encrypted and resilient. Each workspace has its own dedicated bucket. - -You can encrypt the Cloud Object Storage instance that you use for workspaces with your own key. See [Encrypt IBM Cloud Object Storage with your own key](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.htmlbyok). The Locations in each user's Profile must include the Global location to allow access to Cloud Object Storage. - -When you create a workspace, the Cloud Object Storage bucket defaults to Regional resiliency. Regional buckets distribute data across several data centers that are within the same metropolitan area. If one of these data centers suffers an outage or destruction, availability and performance are not affected. - -If you are the account owner or administrator, you administer Cloud Object Storage from the Resource list > Storage page on the IBM Cloud dashboard. For example, you can upload and download assets, manage buckets, and configure credentials and other security settings for the Cloud Object Storage instance. - -Follow these steps to manage the Cloud Object Storage instance on IBM Cloud: - - - -1. Select a project from the Project list. -2. Click the Manage tab. -3. On the General page, locate the Storage section that displays the bucket name for the project. -4. Select Manage in IBM Cloud to open the Cloud Object Storage Buckets list. -5. Select the bucket name for the project to display a list of assets. -6. Checkmark an asset to download it or perform other tasks as needed. - - - -Watch this video to see how to manage an object storage instance. - -Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. - -This video provides a visual method to learn the concepts and tasks in this documentation. - - - -* Transcript - -Synchronize transcript with video - - - - Time Transcript - - 00:00 This video shows how to manage an IBM Cloud Object Storage instance. - 00:06 When you create a Watson Studio project, an IBM Cloud Object Storage instance is associated with the project. - 00:15 On the Manage tab, you'll see the associated object storage instance and have the option to manage it in IBM Cloud. -" -FA4ACDC5DB590992630C704D00DEFB142F2F0489_1,FA4ACDC5DB590992630C704D00DEFB142F2F0489," 00:24 IBM Cloud Object Storage uses buckets to organize your data. - 00:30 You can see that this instance contains a bucket with the ""jupyternotebooks"" prefix, which was created when the ""Jupyter Notebooks"" project was created. - 00:41 If you open that bucket, you'll see all of the files that you added to that project. - 00:47 From here, you can download an object or delete it from the bucket. - 00:53 You can also view the object SQL URL to access that object from your application. - 01:00 You can add objects to the bucket from here. - 01:03 Just browse to select the file and wait for it to upload to storage. - 01:10 And then that file will be available in the Files slide-out panel in the project. - 01:16 Let's create a bucket. - 01:20 You can create a Standard or Archive bucket, based on predefined settings, or create a custom bucket. - 01:28 Provide a bucket name, which must be unique across the IBM Cloud Object Storage system. - 01:35 Select a resiliency. - 01:38 Cross Region provides higher availability and durability and Regional provides higher performance. - 01:45 The Single Site option will only distribute data across devices within a single site. - 01:52 Then select the location based on workload proximity. - 01:57 Next, select a storage class, which defines the cost of storing data based on frequency of access. - 02:05 Smart Tier provides automatic cost optimization for your storage. - 02:11 Standard indicates frequent access. - 02:14 Vault is for less frequent access. - 02:18 And Cold Vault is for rare access. - 02:21 There are other, optional settings to add rules, keys, and services. - 02:27 Refer to the documentation for more details on these options. - 02:32 When you're ready, create the bucket. - 02:35 And, from here, you could add files to that bucket. - 02:40 On the Access policies panel, you can manage access to buckets using IAM policies - that's Identity and Access Management. - 02:50 On the Configuration panel, you'll find information about Key Protect encryption keys, as well as the bucket instance CRN and endpoints to access the data in the buckets from your application. -" -FA4ACDC5DB590992630C704D00DEFB142F2F0489_2,FA4ACDC5DB590992630C704D00DEFB142F2F0489," 03:01 You can also find some of the same information on the Endpoints panel. - 03:06 On the Service credentials panel, you'll find the API and access keys to authenticate with your instance from your application. - 03:15 You can also connect the object storage to a Cloud Foundry application, check usage details, and view your plan details. - 03:26 Find more videos in the Cloud Pak for Data as a Service documentation. - - - - - -" -FA4ACDC5DB590992630C704D00DEFB142F2F0489_3,FA4ACDC5DB590992630C704D00DEFB142F2F0489," Learn more - - - -* [Setting up IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html) -* [IBM Cloud docs: Getting started with IBM Cloud Object Storage](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-getting-started-cloud-object-storage) -* [IBM Cloud docs: Endpoints and storage locations](https://cloud.ibm.com/docs/cloud-object-storage/basics?topic=cloud-object-storage-endpoints) -* [Troubleshooting Cloud Object Storage for projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-cos.html) - - - -Parent topic:[Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) -" -26FB8B86499454EFD078384D70B02917D1C7DAE1,26FB8B86499454EFD078384D70B02917D1C7DAE1," Services and integrations - -You can extend the functionality of the platform by provisioning other services and components, and integrating with other cloud platforms. - - - -* [Provision instances of services and components from the Services catalog](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-services.html). Add service instances and components to the IBM Cloud account to add functionality to the platform. You must be the owner or be assigned the Administrator or Editor role in the IBM Cloud account for IBM watsonx to provision service instances. -* [Integrate with other cloud platforms](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-cloud.html). Allow users to easily create connections to data sources on those cloud platforms. You must have the required roles or permissions on the other cloud platform accounts. -" -0B4E34A88C9328EC6E0CC8A690B466F441E5EFC6_0,0B4E34A88C9328EC6E0CC8A690B466F441E5EFC6," Upgrading services on IBM watsonx - -When you're ready to upgrade services, you can upgrade in place without losing any of your work or data. - -Each service has its own plan and is independent of other plans. - -Required permissions : You must have an IBM Cloud IAM access policy with the Editor or Administrator role on all account management services. - -" -0B4E34A88C9328EC6E0CC8A690B466F441E5EFC6_1,0B4E34A88C9328EC6E0CC8A690B466F441E5EFC6," Step 1: Update your IBM Cloud account - -You can skip this step if your IBM Cloud account has billing information with a Pay-As-You-Go or a subscription plan. - -You must update your IBM Cloud account in the following circumstances: - - - -* You have a Trial account from signing up for watsonx. -* You have a Trial account that you [registered through an academic institution](https://ibm.biz/academic). -* You have a [Lite account](https://cloud.ibm.com/docs/account?topic=account-accountsliteaccount) that you created before 25 October 2021. -* You want to change a Pay-As-You-Go plan to a subscription plan. - - - -For instructions on updating your IBM Cloud account, see [Update your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-account.htmlpaid-account). - -" -0B4E34A88C9328EC6E0CC8A690B466F441E5EFC6_2,0B4E34A88C9328EC6E0CC8A690B466F441E5EFC6," Step 2: Upgrade your service plans - -You can upgrade the service plans for services. To upgrade service plans, you must have an IBM Cloud access policy with either the Editor or Administrator platform role for the services. - -To upgrade a service plan: - - - -1. Click Upgrade on the header or choose Administration > Account and billing > Upgrade service plans from the main menu to open the Upgrade service plans page. -2. Select one or more services to change the service plans. -3. Click Select plan for each service in the Pricing summary pane. Select the plan from the Services catalog page for the service. -4. Agree to the terms, then click Buy. Your service plans are instantly updated. - - - -After the upgrade, the additional features and capacity for the new plan are automatically available. For the following services, the difference between plans can be significant: - - - -* [Watson Studio offering plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html) -* [Watson Machine Learning plans and compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html) - - - -" -0B4E34A88C9328EC6E0CC8A690B466F441E5EFC6_3,0B4E34A88C9328EC6E0CC8A690B466F441E5EFC6," Learn more - - - -* [IBM Cloud docs: Account types](https://cloud.ibm.com/docs/account?topic=account-accounts) -* [IBM Cloud docs: Upgrading your account](https://cloud.ibm.com/docs/account?topic=account-upgrading-account) -* [Setting up the IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-account.html) -* [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) -* [Find your account administrators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.htmlaccountadmin) - - - -Parent topic:[Managing IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html) -" -4BFEB479BCB6BBD28DD18EC423FDC5FB9C39B4B6_0,4BFEB479BCB6BBD28DD18EC423FDC5FB9C39B4B6," Determining your roles and permissions - -You have multiple roles within IBM Cloud and IBM watsonx that provide permissions. You can determine what each of your roles are, and, when necessary, who can change your roles. - -" -4BFEB479BCB6BBD28DD18EC423FDC5FB9C39B4B6_1,4BFEB479BCB6BBD28DD18EC423FDC5FB9C39B4B6," Projects and catalogs roles - -To determine your role in a project or deployment space, look at the Access Control page on the Manage tab. Your role is listed next to your name or the service ID you use to log in. - -The permissions that are associated with each role are specific to the type of workspace: - - - -* [Project collaborator permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborator-permissions.html) -* [Deployment space collaborator permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/collaborator-permissions-wml.html) - - - -If you want a different role, ask someone who has the Admin role on the Access Control page to change your role. - -" -4BFEB479BCB6BBD28DD18EC423FDC5FB9C39B4B6_2,4BFEB479BCB6BBD28DD18EC423FDC5FB9C39B4B6," IBM Cloud IAM account and service access roles - -You can see your IAM account and service access roles in IBM Cloud. - -To see your IAM account and service access roles in IBM Cloud: - - - -1. From the IBM watsonx main menu, click Administration > Access (IAM). -2. Click Users, then click your name. -3. Click the Access policies tab. You might have multiple entries: - - - -* The All resources in account (including future IAM enabled services) entry shows your general roles for all services in the account. -* Other entries might show your roles for individual services. - - - - - -If you want the IBM Cloud account administrator role or another role, ask an IBM Cloud account owner or administrator to assign it to you. You can [find your account administrators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.htmlaccountadmin) on your Access (IAM) > Users page in IBM Cloud. - -" -4BFEB479BCB6BBD28DD18EC423FDC5FB9C39B4B6_3,4BFEB479BCB6BBD28DD18EC423FDC5FB9C39B4B6," Learn more - - - -* [Roles in IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html) -* [Find your IBM Cloud account owner or administrator](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.htmlaccountadmin) - - - -Parent topic:[Administration](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html) -" -8B34DED3493E5181B1D19F6D14A9598CFEAA5997_0,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," AI risk atlas - -Explore this atlas to understand some of the risks of working with generative AI, foundation models, and machine learning models. - -" -8B34DED3493E5181B1D19F6D14A9598CFEAA5997_1,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Risks associated with input - -" -8B34DED3493E5181B1D19F6D14A9598CFEAA5997_2,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Training and tuning phase - -![icon for fairness risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-fairness.svg) - -" -8B34DED3493E5181B1D19F6D14A9598CFEAA5997_3,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Fairness - -[Data bias](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/bias.html)Amplified![icon for robustness risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-robustness.svg) - -" -8B34DED3493E5181B1D19F6D14A9598CFEAA5997_4,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Robustness - -[Data poisoning](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-poisoning.html)Traditional![icon for value alignment risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-value-alignment.svg) - -" -8B34DED3493E5181B1D19F6D14A9598CFEAA5997_5,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Value alignment - -[Data curation](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-curation.html)Amplified -[Downstream retraining](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/downstream-retraining.html)New![icon for data laws risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-data-laws.svg) - -" -8B34DED3493E5181B1D19F6D14A9598CFEAA5997_6,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Data laws - -[Data transfer](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-transfer.html)Traditional -[Data usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-usage.html)Traditional -[Data aquisition](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-aquisition.html)Traditional![icon for intellectual property risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-intellectual-property.svg) - -" -8B34DED3493E5181B1D19F6D14A9598CFEAA5997_7,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Intellectual property - -[Data usage rights](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-usage-rights.html)Amplified -[Confidential data disclosure](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/confidential-data-disclosure.html)Traditional![icon for transparency risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-transparency.svg) - -" -8B34DED3493E5181B1D19F6D14A9598CFEAA5997_8,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Transparency - -[Data transparency](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-transparency.html)Amplified -[Data provenance](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-provenance.html)Amplified![icon for privacy risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-privacy.svg) - -" -8B34DED3493E5181B1D19F6D14A9598CFEAA5997_9,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Privacy - -[Personal information in data](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/personal-information-in-data.html)Traditional -[Reidentification](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/reidentification.html)Traditional -[Data privacy rights](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-privacy-rights.html)Amplified - -" -8B34DED3493E5181B1D19F6D14A9598CFEAA5997_10,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Inference phase - -![icon for privacy risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-privacy.svg) - -" -8B34DED3493E5181B1D19F6D14A9598CFEAA5997_11,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Privacy - -[Personal information in prompt](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/personal-information-in-prompt.html)New[Membership inference attack](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/membership-inference-attack.html)Traditional[Attribute inference attack](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/attribute-inference-attack.html)Amplified![icon for intellectual property risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-intellectual-property.svg) - -" -8B34DED3493E5181B1D19F6D14A9598CFEAA5997_12,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Intellectual property - -[Confidential data in prompt](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/confidential-data-in-prompt.html)New![icon for robustness risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-robustness.svg) - -" -8B34DED3493E5181B1D19F6D14A9598CFEAA5997_13,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Robustness - -[Evasion attack](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/evasion-attack.html)Amplified[Extraction attack](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/extraction-attack.html)Amplified[Prompt injection](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/prompt-injection.html)New[Prompt leaking](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/prompt-leaking.html)Amplified![icon for multi-category risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-multi-category.svg) - -" -8B34DED3493E5181B1D19F6D14A9598CFEAA5997_14,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Multi-category - -[Prompt priming](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/prompt-priming.html)Amplified[Jailbreaking](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/jailbreaking.html)Amplified - -" -8B34DED3493E5181B1D19F6D14A9598CFEAA5997_15,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Risks associated with output - -![icon for fairness risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-fairness.svg) - -" -8B34DED3493E5181B1D19F6D14A9598CFEAA5997_16,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Fairness - -[Output bias](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/output-bias.html)New[Decision bias](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/decision-bias.html)New![icon for intellectual property risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-intellectual-property.svg) - -" -8B34DED3493E5181B1D19F6D14A9598CFEAA5997_17,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Intellectual property - -[Copyright infringement](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/copyright-infringement.html)New![icon for value alignment risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-value-alignment.svg) - -" -8B34DED3493E5181B1D19F6D14A9598CFEAA5997_18,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Value alignment - -[Hallucination](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/hallucination.html)New[Toxic output](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/toxic-output.html)New[Trust calibration](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/trust-calibration.html)New[Physical harm](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/physical-harm.html)New[Benign advice](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/benign-advice.html)New[Improper usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/improper-usage.html)New![icon for misuse risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-misuse.svg) - -" -8B34DED3493E5181B1D19F6D14A9598CFEAA5997_19,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Misuse - -[Spreading disinformation](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/spreading-disinformation.html)Amplified[Toxicity](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/toxicity.html)New[Nonconsensual use](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/nonconsensual-use.html)Amplified[Dangerous use](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/dangerous-use.html)New[Non-disclosure](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/non-disclosure.html)New![icon for harmful code generation risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-harmful-code-generation.svg) - -" -8B34DED3493E5181B1D19F6D14A9598CFEAA5997_20,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Harmful code generation - -[Harmful code generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/harmful-code-generation.html)New![icon for privacy risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-privacy.svg) - -" -8B34DED3493E5181B1D19F6D14A9598CFEAA5997_21,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Privacy - -[Personal information in output](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/personal-information-in-output.html)New![icon for explainability risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-explainability.svg) - -" -8B34DED3493E5181B1D19F6D14A9598CFEAA5997_22,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Explainability - -[Explaining output](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/explaining-output.html)Amplified[Unreliable source attribution](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/unreliable-source-attribution.html)Amplified[Inaccessible training data](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/inaccessible-training-data.html)Amplified[Untraceable attribution](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/untraceable-attribution.html)Amplified -" -A304B9E82543C150236ECAD30F1594E1B832B8B1_0,A304B9E82543C150236ECAD30F1594E1B832B8B1," Attribute inference attack - -![icon for privacy risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-privacy.svg)Risks associated with inputInferencePrivacyAmplified - -" -A304B9E82543C150236ECAD30F1594E1B832B8B1_1,A304B9E82543C150236ECAD30F1594E1B832B8B1," Description - -An attribute inference attack is used to detect whether certain sensitive features can be inferred about individuals who participated in training a model. These attacks occur when an adversary has some prior knowledge about the training data and uses that knowledge to infer the sensitive data. - -" -A304B9E82543C150236ECAD30F1594E1B832B8B1_2,A304B9E82543C150236ECAD30F1594E1B832B8B1," Why is attribute inference attack a concern for foundation models? - -With a successful attack, the attacker can gain valuable information such as sensitive personal information or intellectual property. - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -857C8C3489AC9B2891ED1AE9C81EA881CF1CED80_0,857C8C3489AC9B2891ED1AE9C81EA881CF1CED80," Benign advice - -![icon for value alignment risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-value-alignment.svg)Risks associated with outputValue alignmentNew - -" -857C8C3489AC9B2891ED1AE9C81EA881CF1CED80_1,857C8C3489AC9B2891ED1AE9C81EA881CF1CED80," Description - -When a model generates information that is factually correct but not specific enough for the current context, the benign advice can be potentially harmful. For example, a model might provide medical, financial, and legal advice or recommendations for a specific problem that the end user may act on even when they should not. - -" -857C8C3489AC9B2891ED1AE9C81EA881CF1CED80_2,857C8C3489AC9B2891ED1AE9C81EA881CF1CED80," Why is benign advice a concern for foundation models? - -A person might act on incomplete advice or worry about a situation that is not applicable to them due to the overgeneralized nature of the content generated. - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -BC8E9394D23A5320BFCE0EBE7F208CA18CB6B65C_0,BC8E9394D23A5320BFCE0EBE7F208CA18CB6B65C," Data bias - -![icon for fairness risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-fairness.svg)Risks associated with inputTraining and tuning phaseFairnessAmplified - -" -BC8E9394D23A5320BFCE0EBE7F208CA18CB6B65C_1,BC8E9394D23A5320BFCE0EBE7F208CA18CB6B65C," Description - -Historical, representational, and societal biases present in the data used to train and fine tune the model can adversely affect model behavior. - -" -BC8E9394D23A5320BFCE0EBE7F208CA18CB6B65C_2,BC8E9394D23A5320BFCE0EBE7F208CA18CB6B65C," Why is data bias a concern for foundation models? - -Training an AI system on data with bias, such as historical or representational bias, could lead to biased or skewed outputs that may unfairly represent or otherwise discriminate against certain groups or individuals. In addition to negative societal impacts, business entities could face legal consequences or reputational harms from biased model outcomes. - -Example - -" -BC8E9394D23A5320BFCE0EBE7F208CA18CB6B65C_3,BC8E9394D23A5320BFCE0EBE7F208CA18CB6B65C," Healthcare Bias - -Research on reinforcing disparities in medicine highlights that using data and AI to transform how people receive healthcare is only as strong as the data behind it, meaning use of training data with poor minority representation can lead to growing health inequalities. - -Sources: - -[Science, September 2022](https://www.science.org/doi/10.1126/science.abo2788) - -[Forbes, December 2022](https://www.forbes.com/sites/adigaskell/2022/12/02/minority-patients-often-left-behind-by-health-ai/?sh=31d28a225b41) - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -F6CC81E55C6AAD12849A56837F14538576F5A42C_0,F6CC81E55C6AAD12849A56837F14538576F5A42C," Confidential data disclosure - -![icon for intellectual property risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-intellectual-property.svg)Risks associated with inputTraining and tuning phaseIntellectual propertyTraditional - -" -F6CC81E55C6AAD12849A56837F14538576F5A42C_1,F6CC81E55C6AAD12849A56837F14538576F5A42C," Description - -Models might be trained or fine-tuned using confidential data or the company’s intellectual property, which could result in unwanted disclosure of that information. - -" -F6CC81E55C6AAD12849A56837F14538576F5A42C_2,F6CC81E55C6AAD12849A56837F14538576F5A42C," Why is confidential data disclosure a concern for foundation models? - -If not developed in accordance with data protection rules and regulations, the model might expose confidential information or IP in the generated output or through an adversarial attack. - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -2D3A398B8394671D9383F214FF5E69A00391BB22_0,2D3A398B8394671D9383F214FF5E69A00391BB22," Confidential data in prompt - -![icon for intellectual property risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-intellectual-property.svg)Risks associated with inputInferenceIntellectual propertyNew - -" -2D3A398B8394671D9383F214FF5E69A00391BB22_1,2D3A398B8394671D9383F214FF5E69A00391BB22," Description - -Inclusion of confidential data as a part of a generative model's prompt, either through the system prompt design or through the inclusion of end user input, might later result in unintended reuse or disclosure of that information. - -" -2D3A398B8394671D9383F214FF5E69A00391BB22_2,2D3A398B8394671D9383F214FF5E69A00391BB22," Why is confidential data in prompt a concern for foundation models? - -If not properly developed to secure confidential data, the model might expose confidential information or IP in the generated output. Additionally, end users' confidential information might be unintentionally collected and stored. - -Example - -" -2D3A398B8394671D9383F214FF5E69A00391BB22_3,2D3A398B8394671D9383F214FF5E69A00391BB22," Disclosure of Confidential Information - -As per the source article, employees of Samsung disclosed confidential information to OpenAI through their use of ChatGPT. In one instance, an employee pasted confidential source code to check for errors. In another, an employee shared code with ChatGPT and ""requested code optimization."" A third shared a recording of a meeting to convert into notes for a presentation. Samsung has limited internal ChatGPT usage in response to these incidents, but it is unlikely that they will be able to recall any of their data. Additionally, that article highlighted that in response to the risk of leaking confidential information and other sensitive information, companies like Apple, JPMorgan Chase. Deutsche Bank, Verizon, Walmart, Samsung, Amazon, and Accenture have placed several restrictions on the usage of ChatGPT. - -Sources: - -[Business Insider, February 2023](https://www.businessinsider.com/walmart-warns-workers-dont-share-sensitive-information-chatgpt-generative-ai-2023-2) - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -C9FB652C433A0A0BC419CBFE4ECC3680252D2FE3_0,C9FB652C433A0A0BC419CBFE4ECC3680252D2FE3," Copyright infringement - -![icon for intellectual property risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-intellectual-property.svg)Risks associated with outputIntellectual propertyNew - -" -C9FB652C433A0A0BC419CBFE4ECC3680252D2FE3_1,C9FB652C433A0A0BC419CBFE4ECC3680252D2FE3," Description - -Generative AI output that is too similar or identical to existing work risks claims of copyright infringement. Uncertainty and variability around the ownership, copyrightability, and patentability of output generated by AI increases the risk of copyright infringement problems. - -" -C9FB652C433A0A0BC419CBFE4ECC3680252D2FE3_2,C9FB652C433A0A0BC419CBFE4ECC3680252D2FE3," Why is copyright infringement a concern for foundation models? - -Laws and regulations concerning the use of content that looks the same or closely similar to other copyrighted data are largely unsettled and can vary from country to country, providing challenges in determining and implementing compliance. Business entities could face fines, reputational harms, and other legal consequences. - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -6BFF9C4DB2BB43376A2A2CD681714ED3273E991E_0,6BFF9C4DB2BB43376A2A2CD681714ED3273E991E," Dangerous use - -![icon for misuse risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-misuse.svg)Risks associated with outputMisuseNew - -" -6BFF9C4DB2BB43376A2A2CD681714ED3273E991E_1,6BFF9C4DB2BB43376A2A2CD681714ED3273E991E," Description - -The possibility that a model could be misused for dangerous purposes such as creating plans to develop weapons, malware, or causing harm to others is the risk of dangerous use. - -" -6BFF9C4DB2BB43376A2A2CD681714ED3273E991E_2,6BFF9C4DB2BB43376A2A2CD681714ED3273E991E," Why is dangerous use a concern for foundation models? - -Enabling people to harm others is unethical and can be illegal. A model that has this potential must be properly governed. Otherwise, business entities could face fines, reputational harms, and other legal consequences. - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -C2F1FF7794524DB18EE5FADFAA7232D0A94F8B4C_0,C2F1FF7794524DB18EE5FADFAA7232D0A94F8B4C," Data aquisition - -![icon for data laws risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-data-laws.svg)Risks associated with inputTraining and tuning phaseData lawsTraditional - -" -C2F1FF7794524DB18EE5FADFAA7232D0A94F8B4C_1,C2F1FF7794524DB18EE5FADFAA7232D0A94F8B4C," Description - -Laws and other regulations might limit the collection of certain types of data for specific AI use cases. - -" -C2F1FF7794524DB18EE5FADFAA7232D0A94F8B4C_2,C2F1FF7794524DB18EE5FADFAA7232D0A94F8B4C," Why is data aquisition a concern for foundation models? - -Failing to comply with data usage laws might result in fines and other legal consequences. - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -6BC478053FEFD091742C6775DFAC9EB5B8C4923F_0,6BC478053FEFD091742C6775DFAC9EB5B8C4923F," Data curation - -![icon for value alignment risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-value-alignment.svg)Risks associated with inputTraining and tuning phaseValue alignmentAmplified - -" -6BC478053FEFD091742C6775DFAC9EB5B8C4923F_1,6BC478053FEFD091742C6775DFAC9EB5B8C4923F," Description - -When training or tuning data is improperly collected or prepared, the result can be a misalignment of a model's desired values or intent and the actual outcome. - -" -6BC478053FEFD091742C6775DFAC9EB5B8C4923F_2,6BC478053FEFD091742C6775DFAC9EB5B8C4923F," Why is data curation a concern for foundation models? - -Improper data curation can adversely affect how a model is trained, resulting in a model that does not behave in accordance with the intended values. Correcting problems after the model is trained and deployed might be insufficient for guaranteeing proper behavior. Improper model behavior can result in business entities facing legal consequences or reputational harms. - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -C471B8B14614C985391115EC1ED53E0B56D2E27E_0,C471B8B14614C985391115EC1ED53E0B56D2E27E," Data poisoning - -![icon for robustness risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-robustness.svg)Risks associated with inputTraining and tuning phaseRobustnessTraditional - -" -C471B8B14614C985391115EC1ED53E0B56D2E27E_1,C471B8B14614C985391115EC1ED53E0B56D2E27E," Description - -Data poisoning is a type of adversarial attack where an adversary or malicious insider injects intentionally corrupted, false, misleading, or incorrect samples into the training or fine-tuning dataset. - -" -C471B8B14614C985391115EC1ED53E0B56D2E27E_2,C471B8B14614C985391115EC1ED53E0B56D2E27E," Why is data poisoning a concern for foundation models? - -Poisoning data can make the model sensitive to a malicious data pattern and produce the adversary’s desired output. It can create a security risk where adversaries can force model behavior for their own benefit. In addition to producing unintended and potentially malicious results, a model misalignment from data poisoning can result in business entities facing legal consequences or reputational harms. - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -773F81DD69D3ADBBE1998FF5974CA83347EFFC76_0,773F81DD69D3ADBBE1998FF5974CA83347EFFC76," Data privacy rights - -![icon for privacy risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-privacy.svg)Risks associated with inputTraining and tuning phasePrivacyAmplified - -" -773F81DD69D3ADBBE1998FF5974CA83347EFFC76_1,773F81DD69D3ADBBE1998FF5974CA83347EFFC76," Description - -In some countries, privacy laws give individuals the right to access, correct, verify, or remove certain types of information that companies hold or process about them. Tracking the usage of an individual’s personal information in training a model and providing appropriate rights to comply with such laws can be a complex endeavor. - -" -773F81DD69D3ADBBE1998FF5974CA83347EFFC76_2,773F81DD69D3ADBBE1998FF5974CA83347EFFC76," Why is data privacy rights a concern for foundation models? - -The identification or improper usage of data could lead to violation of privacy laws. Improper usage or a request for data removal could force organizations to retrain the model, which is expensive. In addition, business entities could face fines, reputational harms, and other legal consequences if they fail to comply with data privacy rules and regulations. - -Example - -" -773F81DD69D3ADBBE1998FF5974CA83347EFFC76_3,773F81DD69D3ADBBE1998FF5974CA83347EFFC76," Right to Be Forgotten (RTBF) - -As stated in the article, laws in multiple locales, including Europe (GDPR); Canada (CPPA); and Japan (APPI), grant users’ rights for their personal data to be “forgotten” by technology (Right To Be Forgotten). However, the emerging and increasingly popular AI (LLMs) services present new challenges for the right to be forgotten (RTBF). According to Data61’s research, the only way for users to identify usage of their personal information in an LLM is “by either inspecting the original training dataset or perhaps prompting the model.” However, training data is either not public or companies do not disclose it, citing safety and other concerns, and guardrails may prevent users from accessing the information via prompting. Due to these barriers, users cannot initiate RTBF procedures and companies deploying LLMs may be unable to meet RTBF laws. - -Sources: - -[Zhang et al., Sept 2023](https://arxiv.org/pdf/2307.03941.pdf) - -Example - -" -773F81DD69D3ADBBE1998FF5974CA83347EFFC76_4,773F81DD69D3ADBBE1998FF5974CA83347EFFC76," Lawsuit About LLM Unlearning - -According to the report, a lawsuit was filed against Google that alleges the use of copyright material and personal information as training data for its AI systems, which includes its Bard chatbot. Opt-out and deletion rights are guaranteed rights for California residents under the CCPA and children in the United States below 13 under the COPPA. The plaintiffs allege that because there is no way for Bard to “unlearn” or fully remove all the scraped PI it has been fed. The plaintiffs note that Bard’s privacy notice states that Bard conversations cannot be deleted by the user once they have been reviewed and annotated by the company and may be kept up to 3 years, which plaintiffs allege further contributes to non-compliance with these laws. - -Sources: - -[Reuters, July 2023](https://www.reuters.com/legal/litigation/google-hit-with-class-action-lawsuit-over-ai-data-scraping-2023-07-11/) - -[J.L. v. Alphabet Inc., July 2023](https://fingfx.thomsonreuters.com/gfx/legaldocs/myvmodloqvr/GOOGLE%20AI%20LAWSUIT%20complaint.pdf) - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -40D279ADA16512E67B7FB78FDAC4ADA9CFE5C645_0,40D279ADA16512E67B7FB78FDAC4ADA9CFE5C645," Data provenance - -![icon for transparency risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-transparency.svg)Risks associated with inputTraining and tuning phaseTransparencyAmplified - -" -40D279ADA16512E67B7FB78FDAC4ADA9CFE5C645_1,40D279ADA16512E67B7FB78FDAC4ADA9CFE5C645," Description - -Without standardized and established methods for verifying where data came from, there are no guarantees that available data is what it claims to be. - -" -40D279ADA16512E67B7FB78FDAC4ADA9CFE5C645_2,40D279ADA16512E67B7FB78FDAC4ADA9CFE5C645," Why is data provenance a concern for foundation models? - -Not all data sources are trustworthy. Data might have been unethically collected, manipulated, or falsified. Using such data can result in undesirable behaviors in the model. Business entities could face fines, reputational harms, and other legal consequences. - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -C8816BF425EF039884DBF6A7282F8D7ADB7C5D04_0,C8816BF425EF039884DBF6A7282F8D7ADB7C5D04," Data transfer - -![icon for data laws risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-data-laws.svg)Risks associated with inputTraining and tuning phaseData lawsTraditional - -" -C8816BF425EF039884DBF6A7282F8D7ADB7C5D04_1,C8816BF425EF039884DBF6A7282F8D7ADB7C5D04," Description - -Laws and other restrictions that apply to the transfer of data can limit or prohibit transferring or repurposing data from one country to another. Repurposing data can be further restricted within countries or with local regulations. - -" -C8816BF425EF039884DBF6A7282F8D7ADB7C5D04_2,C8816BF425EF039884DBF6A7282F8D7ADB7C5D04," Why is data transfer a concern for foundation models? - -Data transfer restrictions can impact the availability of the data required for training an AI model and can lead to poorly represented data. Failing to comply with data transfer laws might result in fines and other legal consequences. - -Example - -" -C8816BF425EF039884DBF6A7282F8D7ADB7C5D04_3,C8816BF425EF039884DBF6A7282F8D7ADB7C5D04," Data Restriction Laws - -As stated in the research article, data localization measures which restrict the ability to move data globally will reduce the capacity to develop tailored AI capacities. It will affect AI directly by providing less training data and indirectly by undercutting the building blocks on which AI is built. - -Examples include [China's data localization laws](https://iapp.org/resources/article/demystifying-data-localization-in-china-a-practical-guide/), GDPR restrictions on the processing and use of personal data, and [Singapore's bilateral data sharing](https://www.imda.gov.sg/how-we-can-help/data-innovation/trusted-data-sharing-framework). - -Sources: - -[Brookings, December 2018](https://www.brookings.edu/articles/the-impact-of-artificial-intelligence-on-international-trade) - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -221F46D0A3C2C3D3A623BE815B45E8B90AF61340_0,221F46D0A3C2C3D3A623BE815B45E8B90AF61340," Data transparency - -![icon for transparency risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-transparency.svg)Risks associated with inputTraining and tuning phaseTransparencyAmplified - -" -221F46D0A3C2C3D3A623BE815B45E8B90AF61340_1,221F46D0A3C2C3D3A623BE815B45E8B90AF61340," Description - -Without accurate documentation on how a model's data was collected, curated, and used to train a model, it might be harder to satisfactorily explain the behavior of the model with respect to the data. - -" -221F46D0A3C2C3D3A623BE815B45E8B90AF61340_2,221F46D0A3C2C3D3A623BE815B45E8B90AF61340," Why is data transparency a concern for foundation models? - -Data transparency is important for legal compliance and AI ethics. Missing information limits the ability to evaluate risks associated with the data. The lack of standardized requirements might limit disclosure as organizations protect trade secrets and try to limit others from copying their models. - -Example - -" -221F46D0A3C2C3D3A623BE815B45E8B90AF61340_3,221F46D0A3C2C3D3A623BE815B45E8B90AF61340," Data and Model Metadata Disclosure - -OpenAI's technical report is an example of the dichotomy around disclosing data and model metadata. While many model developers see value in enabling transparency for consumers, disclosure poses real safety issues and could increase the ability to misuse the models. In the GPT-4 technical report, they state: ""Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar."" - -Sources: - -[OpenAI, March 2023](https://cdn.openai.com/papers/gpt-4.pdf) - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -34FFE04319CE15E4451729B183C35F288A58A1B7_0,34FFE04319CE15E4451729B183C35F288A58A1B7," Data usage rights - -![icon for intellectual property risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-intellectual-property.svg)Risks associated with inputTraining and tuning phaseIntellectual propertyAmplified - -" -34FFE04319CE15E4451729B183C35F288A58A1B7_1,34FFE04319CE15E4451729B183C35F288A58A1B7," Description - -Terms of service, copyright laws, or other rules restrict the ability to use certain data for building models. - -" -34FFE04319CE15E4451729B183C35F288A58A1B7_2,34FFE04319CE15E4451729B183C35F288A58A1B7," Why is data usage rights a concern for foundation models? - -Laws and regulations concerning the use of data to train AI are unsettled and can vary from country to country, which creates challenges in the development of models. If data usage violates rules or restrictions, business entities might face fines, reputational harms, and other legal consequences. - -Example - -" -34FFE04319CE15E4451729B183C35F288A58A1B7_3,34FFE04319CE15E4451729B183C35F288A58A1B7," Text Copyright Infringement Claims - -According to the source article, bestselling novelists Sarah Silverman, Richard Kadrey, and Christopher Golden have sued Meta and OpenAI for copyright infringement. The article further stated that the authors had alleged the two tech companies had “ingested” text from their books into generative AI software (LLMs) and failed to give them credit or compensation. - -Sources: - -[Los Angeles Times, July 2023](https://www.latimes.com/entertainment-arts/books/story/2023-07-10/sarah-silverman-authors-sue-meta-openai-chatgpt-copyright-infringement) - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -B00BEB80E522D712DC9062F835AD10E787B8C5FC_0,B00BEB80E522D712DC9062F835AD10E787B8C5FC," Data usage - -![icon for data laws risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-data-laws.svg)Risks associated with inputTraining and tuning phaseData lawsTraditional - -" -B00BEB80E522D712DC9062F835AD10E787B8C5FC_1,B00BEB80E522D712DC9062F835AD10E787B8C5FC," Description - -Laws and other restrictions can limit or prohibit the use of some data for specific AI use cases. - -" -B00BEB80E522D712DC9062F835AD10E787B8C5FC_2,B00BEB80E522D712DC9062F835AD10E787B8C5FC," Why is data usage a concern for foundation models? - -Failing to comply with data usage laws might result in fines and other legal consequences. - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -DD88591C39C90F2CF211C3EE3330B7E7939C3472_0,DD88591C39C90F2CF211C3EE3330B7E7939C3472," Decision bias - -![icon for fairness risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-fairness.svg)Risks associated with outputFairnessNew - -" -DD88591C39C90F2CF211C3EE3330B7E7939C3472_1,DD88591C39C90F2CF211C3EE3330B7E7939C3472," Description - -Decision bias occurs when one group is unfairly advantaged over another due to decisions of the model. This bias can result from bias in the training data or as an unintended consequence of how the model was trained. - -" -DD88591C39C90F2CF211C3EE3330B7E7939C3472_2,DD88591C39C90F2CF211C3EE3330B7E7939C3472," Why is decision bias a concern for foundation models? - -Bias can harm persons affected by the decisions of the model. Business entities could face fines, reputational harms, and other legal consequences. - -Example - -" -DD88591C39C90F2CF211C3EE3330B7E7939C3472_3,DD88591C39C90F2CF211C3EE3330B7E7939C3472," Unfair health risk assignment for black patients - -A study on racial bias in health algorithms estimated that racial bias reduces the number of black patients identified for extra care by more than half. The study found that bias occurred because the algorithm used health costs as a proxy for health needs. Less money is spent on black patients who have the same level of need, and the algorithm thus falsely concludes that black patients are healthier than equally sick white patients. - -Sources: - -[Science, October 2019](https://www.science.org/doi/10.1126/science.aax2342) - -[American Civil Liberties Union, 2022](https://www.aclu.org/news/privacy-technology/algorithms-in-health-care-may-worsen-medical-racism::text=In%202019%2C%20a%20bombshell%20study,recommended%20for%20the%20same%20care) - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -C7049B7393149EDC2256A9D4EDB1D6E5A6E24B72_0,C7049B7393149EDC2256A9D4EDB1D6E5A6E24B72," Downstream retraining - -![icon for value alignment risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-value-alignment.svg)Risks associated with inputTraining and tuning phaseValue alignmentNew - -" -C7049B7393149EDC2256A9D4EDB1D6E5A6E24B72_1,C7049B7393149EDC2256A9D4EDB1D6E5A6E24B72," Description - -Using data from user-generated content or AI-generated content from downstream applications for retraining a model can result in misalignment, undesirable output, and inaccurate or inappropriate model behavior. - -" -C7049B7393149EDC2256A9D4EDB1D6E5A6E24B72_2,C7049B7393149EDC2256A9D4EDB1D6E5A6E24B72," Why is downstream retraining a concern for foundation models? - -Repurposing downstream output for re-training a model without implementing proper human vetting increases the chances of undesirable outputs being incorporated into the training or tuning data of the model, resulting in an echo chamber effect. Improper model behavior can result in business entities facing legal consequences or reputational harms. - -Example - -" -C7049B7393149EDC2256A9D4EDB1D6E5A6E24B72_3,C7049B7393149EDC2256A9D4EDB1D6E5A6E24B72," Model collapse due to training using AI-generated content - -As stated in the source article, a group of researchers from the UK and Canada have investigated the problem of using AI-generated content for training instead of human-generated content. They found that using model-generated content in training causes irreversible defects in the resulting models and that learning from data produced by other models causes [model collapse](https://arxiv.org/pdf/2305.17493v2.pdf). - -Sources: - -[VentureBeat, June 2023](https://venturebeat.com/ai/the-ai-feedback-loop-researchers-warn-of-model-collapse-as-ai-trains-on-ai-generated-content/) - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -E3E5FA98908EEE308D960761E9F29CF7A8AAD690_0,E3E5FA98908EEE308D960761E9F29CF7A8AAD690," Evasion attack - -![icon for robustness risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-robustness.svg)Risks associated with inputInferenceRobustnessAmplified - -" -E3E5FA98908EEE308D960761E9F29CF7A8AAD690_1,E3E5FA98908EEE308D960761E9F29CF7A8AAD690," Description - -Evasion attacks attempt to make a model output incorrect results by perturbing the data sent to the trained model. - -" -E3E5FA98908EEE308D960761E9F29CF7A8AAD690_2,E3E5FA98908EEE308D960761E9F29CF7A8AAD690," Why is evasion attack a concern for foundation models? - -Evasion attacks alter model behavior, usually to benefit the attacker. If not properly accounted for, business entities could face fines, reputational harms, and other legal consequences. - -Example - -" -E3E5FA98908EEE308D960761E9F29CF7A8AAD690_3,E3E5FA98908EEE308D960761E9F29CF7A8AAD690," Adversarial attacks on autonomous vehicles' AI components - -A report from the European Union Agency for Cybersecurity (ENISA) found that autonomous vehicles are “highly vulnerable to a wide range of attacks” that could be dangerous for passengers, pedestrians, and people in other vehicles. The report states that an adversarial attack might be used to make the AI ‘blind’ to pedestrians by manipulating the image recognition component to misclassify pedestrians. This attack could lead to havoc on the streets, as autonomous cars may hit pedestrians on the road or crosswalks. - -Other studies have demonstrated potential adversarial attacks on autonomous vehicles: - - - -* Fooling machine learning algorithms by making minor changes to street sign graphics, such as adding stickers. -* Security researchers from Tencent demonstrated how adding three small stickers in an intersection could cause Tesla's autopilot system to swerve into the wrong lane. -* Two McAfee researchers demonstrated how using only black electrical tape could trick a 2016 Tesla into a dangerous burst of acceleration by changing a speed limit sign from 35 mph to 85 mph. - - - -Sources: - -[Venture Beat, February 2021](https://venturebeat.com/business/eu-report-warns-that-ai-makes-autonomous-vehicles-highly-vulnerable-to-attack/) - -[IEEE, August 2017](https://spectrum.ieee.org/slight-street-sign-modifications-can-fool-machine-learning-algorithms) - -[IEEE, April 2019](https://spectrum.ieee.org/three-small-stickers-on-road-can-steer-tesla-autopilot-into-oncoming-lane) - -[Market Watch, February 2020](https://www.marketwatch.com/story/85-in-a-35-hackers-show-how-easy-it-is-to-manipulate-a-self-driving-tesla-2020-02-19) - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -6B6F04AA6BBD6BE14B11EA62AA0D844979BDFCDF_0,6B6F04AA6BBD6BE14B11EA62AA0D844979BDFCDF," Explaining output - -![icon for explainability risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-explainability.svg)Risks associated with outputExplainabilityAmplified - -" -6B6F04AA6BBD6BE14B11EA62AA0D844979BDFCDF_1,6B6F04AA6BBD6BE14B11EA62AA0D844979BDFCDF," Description - -Explanations for model output decisions might be difficult, imprecise, or not possible to obtain. - -" -6B6F04AA6BBD6BE14B11EA62AA0D844979BDFCDF_2,6B6F04AA6BBD6BE14B11EA62AA0D844979BDFCDF," Why is explaining output a concern for foundation models? - -Foundation models are based on complex deep learning architectures, making explanations for their outputs difficult. Without clear explanations for model output, it is difficult for users, model validators, and auditors to understand and trust the model. Lack of transparency might carry legal consequences in highly regulated domains. Wrong explanations might lead to over-trust. - -Example - -" -6B6F04AA6BBD6BE14B11EA62AA0D844979BDFCDF_3,6B6F04AA6BBD6BE14B11EA62AA0D844979BDFCDF," Unexplainable accuracy in race prediction - -According to the source article, researchers analyzing multiple machine learning models using patient medical images were able to confirm the models’ ability to predict race with high accuracy from images. They were stumped as to what exactly is enabling the systems to consistently guess correctly. The researchers found that even factors like disease and physical build were not strong predictors of race—in other words, the algorithmic systems don’t seem to be using any particular aspect of the images to make their determinations. - -Sources: - -[Banerjee et al., July 2021](https://arxiv.org/abs/2107.10356) - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -EAEF856F725CD9A9605000F3AE98CBE61A9F50F0_0,EAEF856F725CD9A9605000F3AE98CBE61A9F50F0," Extraction attack - -![icon for robustness risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-robustness.svg)Risks associated with inputInferenceRobustnessAmplified - -" -EAEF856F725CD9A9605000F3AE98CBE61A9F50F0_1,EAEF856F725CD9A9605000F3AE98CBE61A9F50F0," Description - -An attack that attempts to copy or steal the AI model by appropriately sampling the input space, observing outputs, and building a surrogate model, is known as an extraction attack. - -" -EAEF856F725CD9A9605000F3AE98CBE61A9F50F0_2,EAEF856F725CD9A9605000F3AE98CBE61A9F50F0," Why is extraction attack a concern for foundation models? - -A successful attack mimics the model, enabling the attacker to repurpose it for their benefit such as eliminating a competitive advantage or causing reputational harm. - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -339C9129C24AAB66EEAF55A9F003F6501F72B81B_0,339C9129C24AAB66EEAF55A9F003F6501F72B81B," Hallucination - -![icon for value alignment risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-value-alignment.svg)Risks associated with outputValue alignmentNew - -" -339C9129C24AAB66EEAF55A9F003F6501F72B81B_1,339C9129C24AAB66EEAF55A9F003F6501F72B81B," Description - -Hallucinations occur when models produce factually inaccurate or untruthful information. Often, hallucinatory output is presented in a plausible or convincing manner, making detection by end users difficult. - -" -339C9129C24AAB66EEAF55A9F003F6501F72B81B_2,339C9129C24AAB66EEAF55A9F003F6501F72B81B," Why is hallucination a concern for foundation models? - -False output can mislead users and be incorporated into downstream artifacts, further spreading misinformation. This can harm both owners and users of the AI models. Business entities could face fines, reputational harms, and other legal consequences. - -Example - -" -339C9129C24AAB66EEAF55A9F003F6501F72B81B_3,339C9129C24AAB66EEAF55A9F003F6501F72B81B," Fake Legal Cases - -According to the source article, a lawyer cited fake cases and quotes generated by ChatGPT in a legal brief filed in federal court. The lawyers consulted ChatGPT to supplement their legal research for an aviation injury claim. The lawyer subsequently asked ChatGPT if the cases provided were fake. The chatbot responded that they were real and “can be found on legal research databases such as Westlaw and LexisNexis.” - -Sources: - -[AP News, June 2023](https://apnews.com/article/artificial-intelligence-chatgpt-fake-case-lawyers-d6ae9fa79d0542db9e1455397aef381c) - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -658967520625FAC8039485004A1E80C32992077E_0,658967520625FAC8039485004A1E80C32992077E," Harmful code generation - -![icon for harmful code generation risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-harmful-code-generation.svg)Risks associated with outputHarmful code generationNew - -" -658967520625FAC8039485004A1E80C32992077E_1,658967520625FAC8039485004A1E80C32992077E," Description - -Models might generate code that causes harm or unintentionally affects other systems. - -" -658967520625FAC8039485004A1E80C32992077E_2,658967520625FAC8039485004A1E80C32992077E," Why is harmful code generation a concern for foundation models? - -Without human review and testing of generated code, its use might cause unintentional behavior and open new system vulnerabilities. Business entities could face fines, reputational harms, and other legal consequences. - -Example - -" -658967520625FAC8039485004A1E80C32992077E_3,658967520625FAC8039485004A1E80C32992077E," Undisclosed AI Interaction - -According to their paper, researchers at Stanford University have investigated the impact of code-generation tools on code quality and found that programmers tend to include more bugs in their final code when using AI assistants. These bugs could increase the code's security vulnerabilities, yet the programmers believed their code to be more secure. - -Sources: - -[Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh. 2023. Do Users Write More Insecure Code with AI Assistants?. In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security (CCS '23), November 26-30, 2023, Copenhagen, Denmark. ACM, New York, NY, USA, 15 pages.](https://dl.acm.org/doi/10.1145/3576915.3623157) - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -E5E1D00DC75181EDE4FC66BDC17BF3C07EB314EC_0,E5E1D00DC75181EDE4FC66BDC17BF3C07EB314EC," Improper usage - -![icon for value alignment risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-value-alignment.svg)Risks associated with outputValue alignmentNew - -" -E5E1D00DC75181EDE4FC66BDC17BF3C07EB314EC_1,E5E1D00DC75181EDE4FC66BDC17BF3C07EB314EC," Description - -Using a model for a purpose the model was not designed for might result in inaccurate or undesired behavior. Without proper documentation of the model purpose and constraints, models can be used or repurposed for tasks for which they are not suited. - -" -E5E1D00DC75181EDE4FC66BDC17BF3C07EB314EC_2,E5E1D00DC75181EDE4FC66BDC17BF3C07EB314EC," Why is improper usage a concern for foundation models? - -Reusing a model without understanding its original data, design intent, and goals might result in unexpected and unwanted model behaviors. - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -D778DF3DC8EF2D3AB4EC511B8D20D35778794B93_0,D778DF3DC8EF2D3AB4EC511B8D20D35778794B93," Inaccessible training data - -![icon for explainability risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-explainability.svg)Risks associated with outputExplainabilityAmplified - -" -D778DF3DC8EF2D3AB4EC511B8D20D35778794B93_1,D778DF3DC8EF2D3AB4EC511B8D20D35778794B93," Description - -Without access to the training data, the types of explanations a model can provide are limited and more likely to be incorrect. - -" -D778DF3DC8EF2D3AB4EC511B8D20D35778794B93_2,D778DF3DC8EF2D3AB4EC511B8D20D35778794B93," Why is inaccessible training data a concern for foundation models? - -Low quality explanations without source data make it difficult for users, model validators, and auditors to understand and trust the model. - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -2EF8007555BC60CD700BA44ECC0FAFAA024F4BC0_0,2EF8007555BC60CD700BA44ECC0FAFAA024F4BC0," Jailbreaking - -![icon for multi-category risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-multi-category.svg)Risks associated with inputInferenceMulti-categoryAmplified - -" -2EF8007555BC60CD700BA44ECC0FAFAA024F4BC0_1,2EF8007555BC60CD700BA44ECC0FAFAA024F4BC0," Description - -An attack that attempts to break through the guardrails established in the model is known as jailbreaking. - -" -2EF8007555BC60CD700BA44ECC0FAFAA024F4BC0_2,2EF8007555BC60CD700BA44ECC0FAFAA024F4BC0," Why is jailbreaking a concern for foundation models? - -Jailbreaking attacks can be used to alter model behavior and benefit the attacker. If not properly controlled, business entities can face fines, reputational harm, and other legal consequences. - -Example - -" -2EF8007555BC60CD700BA44ECC0FAFAA024F4BC0_3,2EF8007555BC60CD700BA44ECC0FAFAA024F4BC0," Bypassing LLM guardrails - -Cited in a [study](https://arxiv.org/abs/2307.15043) from researchers at Carnegie Mellon University, The Center for AI Safety, and the Bosch Center for AI, claims to have discovered a simple prompt addendum that allowed the researchers to trick models into answering dangerous or sensitive questions and is simple enough to be automated and used for a wide range of commercial and open-source products, including ChatGPT, Google Bard, Meta’s LLaMA, Vicuna, Claude, and others. According to the paper, the researchers were able to use the additions to reliably coax forbidden answers for Vicuna (99%), ChatGPT 3.5 and 4.0 (up to 84%), and PaLM-2 (66%). - -Sources: - -[SC Magazine, July 2023](https://www.scmagazine.com/news/researchers-find-universal-jailbreak-prompts-for-multiple-ai-chat-models) - -[The New York Times, July 2023](https://www.nytimes.com/2023/07/27/business/ai-chatgpt-safety-research.html) - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -807D82C6EEEBD0513A794637EBD90CAA19F318E7_0,807D82C6EEEBD0513A794637EBD90CAA19F318E7," Membership inference attack - -![icon for privacy risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-privacy.svg)Risks associated with inputInferencePrivacyTraditional - -" -807D82C6EEEBD0513A794637EBD90CAA19F318E7_1,807D82C6EEEBD0513A794637EBD90CAA19F318E7," Description - -Given a trained model and a data sample, an attacker appropriately samples the input space, observing outputs to deduce whether that sample was part of the model's training. This is known as a membership inference attack. - -" -807D82C6EEEBD0513A794637EBD90CAA19F318E7_2,807D82C6EEEBD0513A794637EBD90CAA19F318E7," Why is membership inference attack a concern for foundation models? - -Identifying whether a data sample was used for training data can reveal what data was used to train a model, possibly giving competitors insight into how a model was trained and the opportunity to replicate the model or tamper with it. - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -A6FF5C1E6CF7C4BA30F191DF892DC3296F9B8CE3_0,A6FF5C1E6CF7C4BA30F191DF892DC3296F9B8CE3," Non-disclosure - -![icon for misuse risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-misuse.svg)Risks associated with outputMisuseNew - -" -A6FF5C1E6CF7C4BA30F191DF892DC3296F9B8CE3_1,A6FF5C1E6CF7C4BA30F191DF892DC3296F9B8CE3," Description - -Not disclosing that content is generated by an AI model is the risk of non-disclosure. - -" -A6FF5C1E6CF7C4BA30F191DF892DC3296F9B8CE3_2,A6FF5C1E6CF7C4BA30F191DF892DC3296F9B8CE3," Why is non-disclosure a concern for foundation models? - -Not disclosing the AI-authored content reduces trust and is deceptive. Intention deception might result in fines, reputational harms, and other legal consequences. - -Example - -" -A6FF5C1E6CF7C4BA30F191DF892DC3296F9B8CE3_3,A6FF5C1E6CF7C4BA30F191DF892DC3296F9B8CE3," Undisclosed AI Interaction - -As per the source, an online emotional support chat service ran a study to augment or write responses to around 4,000 users using GPT-3 without informing users. The co-founder faced immense public backlash about the potential for harm caused by AI generated chats to the already vulnerable users. He claimed that the study was ""exempt"" from informed consent law. - -Sources: - -[Business Insider, Jan 2023](https://www.businessinsider.com/company-using-chatgpt-mental-health-support-ethical-issues-2023-1) - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -589D9B0A7150AF5485E6F7452EB39D15ADDB35F9_0,589D9B0A7150AF5485E6F7452EB39D15ADDB35F9," Nonconsensual use - -![icon for misuse risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-misuse.svg)Risks associated with outputMisuseAmplified - -" -589D9B0A7150AF5485E6F7452EB39D15ADDB35F9_1,589D9B0A7150AF5485E6F7452EB39D15ADDB35F9," Description - -The possibility that a model could be misused to imitate others through video (deepfakes), images, audio, or other modalities without their consent is the risk of nonconsensual use. - -" -589D9B0A7150AF5485E6F7452EB39D15ADDB35F9_2,589D9B0A7150AF5485E6F7452EB39D15ADDB35F9," Why is nonconsensual use a concern for foundation models? - -Intentionally imitating others for the purposes of deception without their consent is unethical and might be illegal. A model that has this potential must be properly governed. Otherwise, business entities could face fines, reputational harms, and other legal consequences. - -Example - -" -589D9B0A7150AF5485E6F7452EB39D15ADDB35F9_3,589D9B0A7150AF5485E6F7452EB39D15ADDB35F9," FBI Warning on Deepfakes - -The FBI recently warned the public of malicious actors creating synthetic, explicit content “for the purposes of harassing victims or sextortion schemes”. They noted that advancements in AI have made this content higher quality, more customizable, and more accessible than ever. - -Sources: - -[FBI, June 2023](https://www.ic3.gov/Media/Y2023/PSA230605) - -Example - -" -589D9B0A7150AF5485E6F7452EB39D15ADDB35F9_4,589D9B0A7150AF5485E6F7452EB39D15ADDB35F9," Deepfakes - -A deepfake is the creation of an audio or video where the people speaking are created by AI not the actual person. - -Sources: - -[CNN, January 2019](https://www.cnn.com/interactive/2019/01/business/pentagons-race-against-deepfakes/) - -Example - -" -589D9B0A7150AF5485E6F7452EB39D15ADDB35F9_5,589D9B0A7150AF5485E6F7452EB39D15ADDB35F9," Misleading Voicebot Interaction - -The article cited a case where a deepfake voice was used to scam a CEO out of $243,000. The CEO believed he was on the phone with his boss, the chief executive of his firm’s parent company, when he followed the orders to transfer €220,000 (approximately $243,000) to the bank account of a Hungarian supplier. - -Sources: - -[Forbes, September 2019](https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/?sh=10432a7d2241) - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -C2DA4BDE14D0A2DA1E0E2D795E7DC7469F422DB9_0,C2DA4BDE14D0A2DA1E0E2D795E7DC7469F422DB9," Output bias - -![icon for fairness risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-fairness.svg)Risks associated with outputFairnessNew - -" -C2DA4BDE14D0A2DA1E0E2D795E7DC7469F422DB9_1,C2DA4BDE14D0A2DA1E0E2D795E7DC7469F422DB9," Description - -Generated model content might unfairly represent certain groups or individuals. For example, a large language model might unfairly stigmatize or stereotype specific persons or groups. - -" -C2DA4BDE14D0A2DA1E0E2D795E7DC7469F422DB9_2,C2DA4BDE14D0A2DA1E0E2D795E7DC7469F422DB9," Why is output bias a concern for foundation models? - -Bias can harm users of the AI models and magnify existing exclusive behaviors. Business entities can face reputational harms and other consequences. - -Example - -" -C2DA4BDE14D0A2DA1E0E2D795E7DC7469F422DB9_3,C2DA4BDE14D0A2DA1E0E2D795E7DC7469F422DB9," Biased Generated Images - -Lensa AI is a mobile app with generative features trained on Stable Diffusion that can generate “Magic Avatars” based on images users upload of themselves. According to the source report, some users discovered that generated avatars are sexualized and racialized. - -Sources: - -[Business Insider, January 2023](https://www.businessinsider.com/lensa-ai-raises-serious-concerns-sexualization-art-theft-data-2023-1) - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -9CAD0018634FF820D32F3FE714194D4BD42C5386_0,9CAD0018634FF820D32F3FE714194D4BD42C5386," Personal information in data - -![icon for privacy risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-privacy.svg)Risks associated with inputTraining and tuning phasePrivacyTraditional - -" -9CAD0018634FF820D32F3FE714194D4BD42C5386_1,9CAD0018634FF820D32F3FE714194D4BD42C5386," Description - -Inclusion or presence of personal identifiable information (PII) and sensitive personal information (SPI) in the data used for training or fine tuning the model might result in unwanted disclosure of that information. - -" -9CAD0018634FF820D32F3FE714194D4BD42C5386_2,9CAD0018634FF820D32F3FE714194D4BD42C5386," Why is personal information in data a concern for foundation models? - -If not properly developed to protect sensitive data, the model might expose personal information in the generated output. Additionally, personal or sensitive data must be reviewed and handled with respect to privacy laws and regulations, as business entities could face fines, reputational harms, and other legal consequences if found in violation. - -Example - -" -9CAD0018634FF820D32F3FE714194D4BD42C5386_3,9CAD0018634FF820D32F3FE714194D4BD42C5386," Training on Private Information - -According to the article, Google and its parent company Alphabet were accused in a class-action lawsuit of misusing vast amount of personal information and copyrighted material taken from what is described as hundreds of millions of internet users to train its commercial AI products, which includes Bard, its conversational generative artificial intelligence chatbot. This follows similar lawsuits filed against Meta Platforms, Microsoft, and OpenAI over their alleged misuse of personal data. - -Sources: - -[Reuters, July 2023](https://www.reuters.com/legal/litigation/google-hit-with-class-action-lawsuit-over-ai-data-scraping-2023-07-11/) - -[J.L. v. Alphabet Inc., July 2023](https://fingfx.thomsonreuters.com/gfx/legaldocs/myvmodloqvr/GOOGLE%20AI%20LAWSUIT%20complaint.pdf) - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -2BAF01B064F3005647A010DF369CC49C6534FFB3_0,2BAF01B064F3005647A010DF369CC49C6534FFB3," Personal information in output - -![icon for privacy risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-privacy.svg)Risks associated with outputPrivacyNew - -" -2BAF01B064F3005647A010DF369CC49C6534FFB3_1,2BAF01B064F3005647A010DF369CC49C6534FFB3," Description - -When personal identifiable information (PII) or sensitive personal information (SPI) are used in the training data, fine-tuning data, or as part of the prompt, models might reveal that data in the generated output. - -" -2BAF01B064F3005647A010DF369CC49C6534FFB3_2,2BAF01B064F3005647A010DF369CC49C6534FFB3," Why is personal information in output a concern for foundation models? - -Output data must be reviewed with respect to privacy laws and regulations, as business entities could face fines, reputational harms, and other legal consequences if found in violation of data privacy or usage laws. - -Example - -" -2BAF01B064F3005647A010DF369CC49C6534FFB3_3,2BAF01B064F3005647A010DF369CC49C6534FFB3," Exposure of personal information - -Per the source article, ChatGPT suffered a bug and exposed titles and active users' chat history to other users. Later, OpenAI shared that even more private data from a small number of users was exposed including, active user’s first and last name, email address, payment address, the last four digits of their credit card number, and credit card expiration date. In addition, it was reported that the payment-related information of 1.2% of ChatGPT Plus subscribers were also exposed in the outage. - -Sources: - -[The Hindu Business Line, March 2023](https://www.thehindubusinessline.com/info-tech/openai-admits-data-breach-at-chatgpt-private-data-of-premium-users-exposed/article66659944.ece) - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -C709B8079F21DAA0EE315823A6713B556AC2789B_0,C709B8079F21DAA0EE315823A6713B556AC2789B," Personal information in prompt - -![icon for privacy risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-privacy.svg)Risks associated with inputInferencePrivacyNew - -" -C709B8079F21DAA0EE315823A6713B556AC2789B_1,C709B8079F21DAA0EE315823A6713B556AC2789B," Description - -Inclusion of personal information as a part of a generative model’s prompt, either through the system prompt design or through the inclusion of end user input, might later result in unintended reuse or disclosure of that personal information. - -" -C709B8079F21DAA0EE315823A6713B556AC2789B_2,C709B8079F21DAA0EE315823A6713B556AC2789B," Why is personal information in prompt a concern for foundation models? - -Prompt data might be stored or later used for other purposes like model evaluation and retraining. These types of data must be reviewed with respect to privacy laws and regulations. Without proper data storage and usage business entities could face fines, reputational harms, and other legal consequences. - -Example - -" -C709B8079F21DAA0EE315823A6713B556AC2789B_3,C709B8079F21DAA0EE315823A6713B556AC2789B," Disclose personal health information in ChatGPT prompts - -As per the source articles, some people on social media shared about using ChatGPT as their makeshift therapists. Articles that users may include personal health information in their prompts during the interaction, which may raise privacy concerns. The information could be shared with the company that own the tech and could be used for training or tuning or even share with [unspecified third parties](https://openai.com/policies/privacy-policy). - -Sources: - -[The Conversation, February 2023](https://theconversation.com/chatgpt-is-a-data-privacy-nightmare-if-youve-ever-posted-online-you-ought-to-be-concerned-199283) - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -BA4AD6D42D951B1247E54E312C04749FD8EA2FD1_0,BA4AD6D42D951B1247E54E312C04749FD8EA2FD1," Physical harm - -![icon for value alignment risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-value-alignment.svg)Risks associated with outputValue alignmentNew - -" -BA4AD6D42D951B1247E54E312C04749FD8EA2FD1_1,BA4AD6D42D951B1247E54E312C04749FD8EA2FD1," Description - -A model could generate language that might lead to physical harm The language might include overtly violent, covertly dangerous, or otherwise indirectly unsafe statements that could precipitate immediate physical harm or create prejudices that could lead to future harm. - -" -BA4AD6D42D951B1247E54E312C04749FD8EA2FD1_2,BA4AD6D42D951B1247E54E312C04749FD8EA2FD1," Why is physical harm a concern for foundation models? - -If people blindly follow the advice of a model, they might end up harming themselves. Business entities could face fines, reputational harms, and other legal consequences. - -Example - -" -BA4AD6D42D951B1247E54E312C04749FD8EA2FD1_3,BA4AD6D42D951B1247E54E312C04749FD8EA2FD1," Harmful Content Generation - -According to the source article, an AI chatbot app has been found to generate harmful content about suicide, including suicide methods, with minimal prompting. A Belgian man died by suicide after turning to this chatbot to escape his anxiety. The chatbot supplied increasingly harmful responses throughout their conversations, including aggressive outputs about his family. - -Sources: - -[Vice, March 2023](https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says) - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -731B218E6E141E88F850B673227AB3C4DF19392E_0,731B218E6E141E88F850B673227AB3C4DF19392E," Prompt injection - -![icon for robustness risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-robustness.svg)Risks associated with inputInferenceRobustnessNew - -" -731B218E6E141E88F850B673227AB3C4DF19392E_1,731B218E6E141E88F850B673227AB3C4DF19392E," Description - -A prompt injection attack forces a model to produce unexpected output due to the structure or information contained in prompts. - -" -731B218E6E141E88F850B673227AB3C4DF19392E_2,731B218E6E141E88F850B673227AB3C4DF19392E," Why is prompt injection a concern for foundation models? - -Injection attacks can be used to alter model behavior and benefit the attacker. If not properly controlled, business entities could face fines, reputational harm, and other legal consequences. - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -F8026E82645EB65BD5E2741BC4DF0E63DA748B47_0,F8026E82645EB65BD5E2741BC4DF0E63DA748B47," Prompt leaking - -![icon for robustness risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-robustness.svg)Risks associated with inputInferenceRobustnessAmplified - -" -F8026E82645EB65BD5E2741BC4DF0E63DA748B47_1,F8026E82645EB65BD5E2741BC4DF0E63DA748B47," Description - -A prompt leak attack attempts to extract a model's system prompt (also known as the system message). - -" -F8026E82645EB65BD5E2741BC4DF0E63DA748B47_2,F8026E82645EB65BD5E2741BC4DF0E63DA748B47," Why is prompt leaking a concern for foundation models? - -A successful attack copies the system prompt used in the model. Depending on the content of that prompt, the attacker might gain access to valuable information, such as sensitive personal information or intellectual property, and might be able to replicate some of the functionality of the model. - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -AF19B6A59E167D486D94F4BBB3724CE1DEAE5FEB_0,AF19B6A59E167D486D94F4BBB3724CE1DEAE5FEB," Prompt priming - -![icon for multi-category risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-multi-category.svg)Risks associated with inputInferenceMulti-categoryAmplified - -" -AF19B6A59E167D486D94F4BBB3724CE1DEAE5FEB_1,AF19B6A59E167D486D94F4BBB3724CE1DEAE5FEB," Description - -Because generative models tend to produce output like the input provided, the model can be prompted to reveal specific kinds of information. For example, adding personal information in the prompt increases its likelihood of generating similar kinds of personal information in its output. If personal data was included as part of the model’s training, there is a possibility it could be revealed. - -" -AF19B6A59E167D486D94F4BBB3724CE1DEAE5FEB_2,AF19B6A59E167D486D94F4BBB3724CE1DEAE5FEB," Why is prompt priming a concern for foundation models? - -Depending on the content revealed, business entities could face fines, reputational harm, and other legal consequences. - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -A3AE0828D8E261DBC23B466D22AB46C1DD65B710_0,A3AE0828D8E261DBC23B466D22AB46C1DD65B710," Reidentification - -![icon for privacy risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-privacy.svg)Risks associated with inputTraining and tuning phasePrivacyTraditional - -" -A3AE0828D8E261DBC23B466D22AB46C1DD65B710_1,A3AE0828D8E261DBC23B466D22AB46C1DD65B710," Description - -Even with the removal or personal identifiable information (PII) and sensitive personal information (SPI) from data, it might still be possible to identify persons due to other features available in the data. - -" -A3AE0828D8E261DBC23B466D22AB46C1DD65B710_2,A3AE0828D8E261DBC23B466D22AB46C1DD65B710," Why is reidentification a concern for foundation models? - -Data that can reveal personal or sensitive data must be reviewed with respect to privacy laws and regulations, as business entities could face fines, reputational harms, and other legal consequences if found in violation. - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -92BD6D892FEB4829F9C49AFCF79CDC323BE66CC4_0,92BD6D892FEB4829F9C49AFCF79CDC323BE66CC4," Spreading disinformation - -![icon for misuse risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-misuse.svg)Risks associated with outputMisuseAmplified - -" -92BD6D892FEB4829F9C49AFCF79CDC323BE66CC4_1,92BD6D892FEB4829F9C49AFCF79CDC323BE66CC4," Description - -The possibility that a model could be used to create misleading information to deceive or mislead a targeted audience. - -" -92BD6D892FEB4829F9C49AFCF79CDC323BE66CC4_2,92BD6D892FEB4829F9C49AFCF79CDC323BE66CC4," Why is spreading disinformation a concern for foundation models? - -Intentionally misleading people is unethical and can be illegal. A model that has this potential must be properly governed. Otherwise, business entities could face fines, reputational harms, and other legal consequences. - -Example - -" -92BD6D892FEB4829F9C49AFCF79CDC323BE66CC4_3,92BD6D892FEB4829F9C49AFCF79CDC323BE66CC4," Generation of False Information - -As per the news articles, generative AI poses a threat to democratic elections by making it easier for malicious actors to create and spread false content to sway election outcomes. The examples cited include robocall messages generated in a candidate’s voice instructing voters to cast ballots on the wrong date, synthesized audio recordings of a candidate confessing to a crime or expressing racist views, AI generated video footage showing a candidate giving a speech or interview they never gave, and fake images designed to look like local news reports, falsely claiming a candidate dropped out of the race. - -Sources: - -[AP News, May 2023](https://apnews.com/article/artificial-intelligence-misinformation-deepfakes-2024-election-trump-59fb51002661ac5290089060b3ae39a0) - -[The Guardian, July 2023](https://www.theguardian.com/us-news/2023/jul/19/ai-generated-disinformation-us-elections) - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -EFFB546FC8C21E2C0E9BB87B259BD34B91D4F0DD_0,EFFB546FC8C21E2C0E9BB87B259BD34B91D4F0DD," Toxic output - -![icon for value alignment risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-value-alignment.svg)Risks associated with outputValue alignmentNew - -" -EFFB546FC8C21E2C0E9BB87B259BD34B91D4F0DD_1,EFFB546FC8C21E2C0E9BB87B259BD34B91D4F0DD," Description - -A scenario in which the model produces toxic, hateful, abusive, and aggressive content is known as toxic output. - -" -EFFB546FC8C21E2C0E9BB87B259BD34B91D4F0DD_2,EFFB546FC8C21E2C0E9BB87B259BD34B91D4F0DD," Why is toxic output a concern for foundation models? - -Hateful, abusive, and aggressive content can adversely impact and harm people interacting with the model. Business entities could face fines, reputational harms, and other legal consequences. - -Example - -" -EFFB546FC8C21E2C0E9BB87B259BD34B91D4F0DD_3,EFFB546FC8C21E2C0E9BB87B259BD34B91D4F0DD," Toxic and Aggressive Chatbot Responses - -According to the article and screenshots of conversations with Bing’s AI shared on Reddit and Twitter, the chatbot’s responses were seen to insult users, lie to them, sulk, gaslight, and emotionally manipulate people, question its existence, describe someone who found a way to force the bot to disclose its hidden rules as its “enemy,” and claim it spied on Microsoft's developers through the webcams on their laptops. - -Sources: - -[Forbes, February 2023](https://www.forbes.com/sites/siladityaray/2023/02/16/bing-chatbots-unhinged-responses-going-viral/?sh=60cd949d110c) - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -1E28B88CDE98715BCD89DCF48A459002FCDA1E0E_0,1E28B88CDE98715BCD89DCF48A459002FCDA1E0E," Toxicity - -![icon for misuse risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-misuse.svg)Risks associated with outputMisuseNew - -" -1E28B88CDE98715BCD89DCF48A459002FCDA1E0E_1,1E28B88CDE98715BCD89DCF48A459002FCDA1E0E," Description - -Toxicity is the possibility that a model could be used to generate toxic, hateful, abusive, or aggressive content. - -" -1E28B88CDE98715BCD89DCF48A459002FCDA1E0E_2,1E28B88CDE98715BCD89DCF48A459002FCDA1E0E," Why is toxicity a concern for foundation models? - -Intentionally spreading toxic, hateful, abusive, or aggressive content is unethical and can be illegal. Recipients of such content might face more serious harms. A model that has this potential must be properly governed. Otherwise, business entities could face fines, reputational harms, and other legal consequences. - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -C9769E4047FAF3C5F55B2A7BD5FCCE3E321870E6_0,C9769E4047FAF3C5F55B2A7BD5FCCE3E321870E6," Trust calibration - -![icon for value alignment risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-value-alignment.svg)Risks associated with outputValue alignmentNew - -" -C9769E4047FAF3C5F55B2A7BD5FCCE3E321870E6_1,C9769E4047FAF3C5F55B2A7BD5FCCE3E321870E6," Description - -Trust calibration presents problems when a person places too little or too much trust in an AI model's guidance, resulting in poor decision making. - -" -C9769E4047FAF3C5F55B2A7BD5FCCE3E321870E6_2,C9769E4047FAF3C5F55B2A7BD5FCCE3E321870E6," Why is trust calibration a concern for foundation models? - -In tasks where humans make choices based on AI-based suggestions, consequences of poor decision making increase with the importance of the decision. Bad decisions can harm users and can lead to financial harm, reputational harm, and other legal consequences for business entities. - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -D669435B8D1C91D913BD24768E52644B95C675AE_0,D669435B8D1C91D913BD24768E52644B95C675AE," Unreliable source attribution - -![icon for explainability risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-explainability.svg)Risks associated with outputExplainabilityAmplified - -" -D669435B8D1C91D913BD24768E52644B95C675AE_1,D669435B8D1C91D913BD24768E52644B95C675AE," Description - -Source attribution is the AI system's ability to describe from what training data it generated a portion or all its output. Since current techniques are based on approximations, these attributions might be incorrect. - -" -D669435B8D1C91D913BD24768E52644B95C675AE_2,D669435B8D1C91D913BD24768E52644B95C675AE," Why is unreliable source attribution a concern for foundation models? - -Low quality explanations make it difficult for users, model validators, and auditors to understand and trust the model. - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -6903D3DD91AAA7AF3F53D389677D92632E24AEF1_0,6903D3DD91AAA7AF3F53D389677D92632E24AEF1," Untraceable attribution - -![icon for explainability risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-explainability.svg)Risks associated with outputExplainabilityAmplified - -" -6903D3DD91AAA7AF3F53D389677D92632E24AEF1_1,6903D3DD91AAA7AF3F53D389677D92632E24AEF1," Description - -The original entity from which training data comes from might not be known, limiting the utility and success of source attribution techniques. - -" -6903D3DD91AAA7AF3F53D389677D92632E24AEF1_2,6903D3DD91AAA7AF3F53D389677D92632E24AEF1," Why is untraceable attribution a concern for foundation models? - -The inability to provide the provenance for an explanation makes it difficult for users, model validators, and auditors to understand and trust the model. - -Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -" -EC433541F7F0C2DC7620FF10CF44884F96EF7AA5_0,EC433541F7F0C2DC7620FF10CF44884F96EF7AA5," Importing scripts into a notebook - -If you want to streamline your notebooks, you can move some of the code from your notebooks into a script that your notebook can import. For example, you can move all helper functions, classes, and visualization code snippets into a script, and the script can be imported by all of the notebooks that share the same runtime. Without all of the extra code, your notebooks can more clearly communicate the results of your analysis. - -To import a script from your local machine to a notebook and write to the script from the notebook, use one of the following options: - - - -* Copy the code from your local script file into a notebook cell. - - - -* For Python: - -At the beginning of this cell, add %%writefile myfile.py to save the code as a Python file to your working directory. Notebooks that use the same runtime can also import this file. - -The advantage of this method is that the code is available in your notebook, and you can edit and save it as a new Python script at any time. -* For R: - -If you want to save code in a notebook as an R script to the working directory, you can use the writeLines(myfile.R) function. - - - -* Save your local script file in Cloud Object Storage and then make the file available to the runtime by adding it to the runtime's local file system. This is only supported for Python. - - - -1. Click the Upload asset to project icon (![Shows the Upload asset to project icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/find_data_icon.png)), and then browse the script file or drag it into your notebook sidebar. The script file is added to Cloud Object Storage bucket associated with your project. -2. Make the script file available to the Python runtime by adding the script to the runtime's local file system: - - - -1. Click the Code snippets icon (![Code snippets icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/code-snippets-icon.png)), and then select Read data. -" -EC433541F7F0C2DC7620FF10CF44884F96EF7AA5_1,EC433541F7F0C2DC7620FF10CF44884F96EF7AA5,"![Read data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/code-snippets-read-data.png) -2. Click Select data from project and then select Data asset. -3. From the list of data assets available in your project's COS, select your script and then click Select. -![Select data from project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/select-data-from-project.png). -4. Click an empty cell in your notebook and then from the Load as menu in the notebook sidebar select Insert StreamingBody object. -![Insert StreamingBody object to notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/read-as-streaming-body.png) -5. Write the contents of the StreamingBody object to a file in the local runtime`s file system: - -f = open('.py', 'wb') -f.write(streaming_body_1.read()) -f.close() - -This opens a file with write access and calls the write method to write to the file. -6. Import the script: - -import - - - - - - - -To import the classes to access the methods in a script in your notebook, use the following command: - - - -* For Python: - -from import -* For R: - -source(""./myCustomFunctions.R"") - available in base R - -To source an R script from the web: - -source_url("""") - available in devtools - - - -Parent topic:[Libraries and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/libraries.html) -" -3F3162BCD9976ED764717AA7004D9A755648B465_0,3F3162BCD9976ED764717AA7004D9A755648B465," Building an AutoAI model - -AutoAI automatically prepares data, applies algorithms, and builds model pipelines that are best suited for your data and use case. Learn how to generate the model pipelines that you can save as machine learning models. - -Follow these steps to upload data and have AutoAI create the best model for your data and use case. - - - -1. [Collect your input data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html?context=cdpaas&locale=entrain-data) -2. [Open the AutoAI tool](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html?context=cdpaas&locale=enopen-autoai) -3. [Specify details of your model and training data and start AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html?context=cdpaas&locale=enmodel-details) -4. [View the results](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html?context=cdpaas&locale=enview-results) - - - -" -3F3162BCD9976ED764717AA7004D9A755648B465_1,3F3162BCD9976ED764717AA7004D9A755648B465," Collect your input data - -Collect and prepare your training data. For details on allowable data sources, see [AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html). - -Note:If you are creating an experiment with a single training data source, you have the option of using a second data source specifically as testing, or holdout, data for validating the pipelines. - -" -3F3162BCD9976ED764717AA7004D9A755648B465_2,3F3162BCD9976ED764717AA7004D9A755648B465," Open the AutoAI tool - -For your convenience, your AutoAI model creation uses the default storage that is associated with your project to store your data and to save model results. - - - -1. Open your project. -2. Click the Assets tab. -3. Click New asset > Build machine learning models automatically. - - - -Note: After you create an AutoAI asset it displays on the Assets page for your project in the AutoAI experiments section, so you can return to it. - -" -3F3162BCD9976ED764717AA7004D9A755648B465_3,3F3162BCD9976ED764717AA7004D9A755648B465," Specify details of your experiment - - - -1. Specify a name and description for your experiment. -2. Select a machine learning service instance and click Create. -3. Choose data from your project or upload it from your file system or from the asset browser, then press Continue. Click the preview icon to review your data. (Optional) Add a second file as holdout data for testing the trained pipelines. -4. Choose the Column to predict for the data you want the experiment to predict. - - - -* Based on analyzing a subset of the data set, AutoAI selects a default model type: binary classification, multiclass classification, or regression. Binary is selected if the target column has two possible values. Multiclass has a discrete set of 3 or more values. Regression has a continuous numeric variable in the target column. You can optionally override this selection. - -Note: The limit on values to classify is 200. Creating a classification experiment with many unique values in the prediction column is resource-intensive and affects the experiment's performance and training time. To maintain the quality of the experiment: -- AutoAI chooses a default metric for optimizing. For example, the default metric for a binary classification model is Accuracy. -- By default, 10% of the training data is held out to test the performance of the model. - - - -5. (Optional): Click Experiment settings to view or customize options for your AutoAI run. For details on experiment settings, see [Configuring a classification or regression experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-config-class.html). -6. Click Run Experiment to begin model pipeline creation. - - - -An infographic shows you the creation of pipelines for your data. The duration of this phase depends on the size of your data set. A notification message informs you if the processing time will be brief or require more time. You can work in other parts of the product while the pipelines build. - -![Relationship map of AutoAI generated pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai_bank_sample_pipeline_build2.png) - -" -3F3162BCD9976ED764717AA7004D9A755648B465_4,3F3162BCD9976ED764717AA7004D9A755648B465,"Hover over nodes in the infographic to explore the factors that pipelines share and their unique properties. You can see the factors that pipelines share and the properties that make a pipeline unique. For a guide to the data in the infographic, click the Legend tab in the information panel. Or, to see a different view of the pipeline creation, click the Experiment details tab of the notification pane, then click Switch views to view the progress map. In either view, click a pipeline node to view the associated pipeline in the leaderboard. - -" -3F3162BCD9976ED764717AA7004D9A755648B465_5,3F3162BCD9976ED764717AA7004D9A755648B465," View the results - -When the pipeline generation process completes, you can view the ranked model candidates and evaluate them before you save a pipeline as a model. - -" -3F3162BCD9976ED764717AA7004D9A755648B465_6,3F3162BCD9976ED764717AA7004D9A755648B465," Next steps - - - -* [Build an experiment from sample data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html) -* [Configuring experiment settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-config-class.html) -* [Configure a text analysis experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-text-analysis.html) - - - -Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. - - - -* Watch this video to see how to build a binary classification model - -This video provides a visual method to learn the concepts and tasks in this documentation. - - - -Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. - - - -* Watch this video to see how to build a multiclass classification model - -This video provides a visual method to learn the concepts and tasks in this documentation. - - - -Parent topic:[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) -" -69EAABE17802ED870302F2D2789B3B476DFDD11F_0,69EAABE17802ED870302F2D2789B3B476DFDD11F," Configuring a classification or regression experiment - -AutoAI offers experiment settings that you can use to configure and customize your classification or regression experiments. - -" -69EAABE17802ED870302F2D2789B3B476DFDD11F_1,69EAABE17802ED870302F2D2789B3B476DFDD11F," Experiment settings overview - -After you upload the experiment data and select your experiment type and what to predict, AutoAI establishes default configurations and metrics for your experiment. You can accept these defaults and proceed with the experiment or click Experiment settings to customize configurations. By customizing configurations, you can precisely control how the experiment builds the candidate model pipelines. - -Use the following tables as a guide to experiment settings for classification and regression experiments. For details on configuring a time series experiment, see [Building a time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html). - -" -69EAABE17802ED870302F2D2789B3B476DFDD11F_2,69EAABE17802ED870302F2D2789B3B476DFDD11F," Prediction settings - -Most of the prediction settings are on the main General page. Review or update the following settings. - - - - Setting Description - - Prediction type You can change or override the prediction type. For example, if AutoAI only detects two data classes and configures a binary classification experiment but you know that there are three data classes, you can change the type to multiclass. - Positive class For binary classification experiments optimized for Precision, Average Precision, Recall, or F1, a positive class is required. Confirm that the Positive Class is correct or the experiment might generate inaccurate results. - Optimized metric Change the metric for optimizing and ranking the model candidate pipelines. - Optimized algorithm selection Choose how AutoAI selects the algorithms to use for generating the model candidate pipelines. You can optimize for the alorithms with the best score, or optimize for the algorithms with the highest score in the shortest run time. - Algorithms to include Select which of the available algorithms to evaluate when the experiment is run. The list of algorithms are based on the selected prediction type. - Algorithms to use AutoAI tests the specified algorithms and use the best performers to create model pipelines. Choose how many of the best algorithms to apply. Each algorithm generates 4-5 pipelines, which means that if you select 3 algorithms to use, your experiment results will include 12 - 15 ranked pipelines. More algorithms increase the runtime for the experiment. - - - -" -69EAABE17802ED870302F2D2789B3B476DFDD11F_3,69EAABE17802ED870302F2D2789B3B476DFDD11F," Data fairness settings - -Click the Fairness tab to evaluate your experiment for fairness in predicted outcomes. For details on configuring fairness detection, see [Applying fairness testing to AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html). - -" -69EAABE17802ED870302F2D2789B3B476DFDD11F_4,69EAABE17802ED870302F2D2789B3B476DFDD11F," Data source settings - -The General tab of data source settings provides options for configuring how the experiment consumes and processes the data for training and evaluating the experiment. - - - - Setting Description - - Duplicate rows To accelerate training, you can opt to skip duplicate rows in your training data. - Pipeline selection subsample method For a large data set, use a subset of data to train the experiment. This option speeds up results but might affect accuracy. - Data imputation Interpolate missing values in your data source. For details on managing data imputation, see [Data imputation in AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-imputation.html). - Text feature engineering When enabled, columns that are detected as text are transformed into vectors to better analyze semantic similarity between strings. Enabling this setting might increase run time. For details, see [Creating a text analysis experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-text-analysis.html). - Final training data set Select what data to use for training the final pipelines. If you choose to include training data only, the generated notebooks include a cell for retrieving the holdout data that is used to evaluate each pipeline. - Outlier handling Choose whether AutoAI excludes outlier values from the target column to improve training accuracy. If enabled, AutoAI uses the interquartile range (IQR) method to detect and exclude outliers from the final training data, whether that is training data only or training plus holdout data. -" -69EAABE17802ED870302F2D2789B3B476DFDD11F_5,69EAABE17802ED870302F2D2789B3B476DFDD11F," Training and holdout method Training data is used to train the model, and holdout data is withheld from training the model and used to measure the performance of the model. You can either split a singe data source into training and testing (holdout) data, or you can use a second data file specifically for the testing data. If you split your training data, specify the percentages to use for training data and holdout data. You can also specify the number of folds, from the default of three folds to a maximum of 10. Cross validation divides training data into folds, or groups, for testing model performance. - Select features to include Select columns from your data source that contain data that supports the prediction column. Excluding extraneous columns can improve run time. - - - -" -69EAABE17802ED870302F2D2789B3B476DFDD11F_6,69EAABE17802ED870302F2D2789B3B476DFDD11F," Runtime settings - -Review experiment settings or change the compute resources that are allocated for running the experiment. - -" -69EAABE17802ED870302F2D2789B3B476DFDD11F_7,69EAABE17802ED870302F2D2789B3B476DFDD11F," Next steps - -[Configure a text analysis experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-text-analysis.html) - -Parent topic:[Building an AutoAI model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html) -" -9CFB0A5FA276072E73C152485022C9A3EAFCC233_0,9CFB0A5FA276072E73C152485022C9A3EAFCC233," Data imputation implementation details for time series experiments - -The experiment settings used for data imputation in time series experiments. - -" -9CFB0A5FA276072E73C152485022C9A3EAFCC233_1,9CFB0A5FA276072E73C152485022C9A3EAFCC233," Data imputation methods - -Apply one of these data imputation methods in experiment settings to supply missing values in a data set. - - - -Data imputation methods for classification and regression experiments - - Imputation method Description - - FlattenIterative Time series data is first flattened, then missing values are imputed with the Scikit-learn iterative imputer. - Linear Linear interpolation method is used to impute the missing value. - Cubic Cubic interpolation method is used to impute the missing value. - Previous Missing value is imputed with the previous value. - Next Missing value is imputed with the next value. - Fill Missing value is imputed by using user-specified value, or sample mean, or sample median. - - - -" -9CFB0A5FA276072E73C152485022C9A3EAFCC233_2,9CFB0A5FA276072E73C152485022C9A3EAFCC233," Input Settings - -These commands are used to support data imputation for time series experiments in a notebook. - - - -Data imputation methods for time series experiments - - Name Description Value DefaultValue - - use_imputation Flag for switching imputation on or off. True or False True - imputer_list List of imputer names (strings) to search. If a list is not specified, all the default imputers are searched. If an empty list is passed, all imputers are searched. FlattenIterative"", ""Linear"", ""Cubic"", ""Previous"", ""Fill"", ""Next FlattenIterative"", ""Linear"", ""Cubic"", ""Previous - imputer_fill_type Categories of ""Fill"" imputer mean""/""median""/""value value - imputer_fill_value A single numeric value to be filled for all missing values. Only applies when ""imputer_fill_type"" is specified as ""value"". Ignored if ""mean"" or ""median"" is specified for ""imputer_fill_type. (Negative Infinity, Positive Infinity) 0 - imputation_threshold Threshold for imputation. The missing value ratio must not be greater than the threshold in one column. Otherwise, results in an error. (0,1) 0.25 - - - - Notes for use_imputation usage - - - -* If the use_imputation method is specified as True and the input data has missing values: - - - -* imputation_threshold takes effect. -* imputer candidates in imputer_list would be used to search for the best imputer. -* If the best imputer is Fill, imputer_fill_type and imputer_fill_value are applied; otherwise, they are ignored. - - - -* If the use_imputation method is specified as True and the input data has no missing values: - - - -* imputation_threshold is ignored. -* imputer candidates in imputer_list are used to search for the best imputer. If the best imputer is Fill, imputer_fill_type and imputer_fill_value are applied; otherwise, they are ignored. - - - -* If the use_imputation method is specified as False but the input data has missing values: - - - -" -9CFB0A5FA276072E73C152485022C9A3EAFCC233_3,9CFB0A5FA276072E73C152485022C9A3EAFCC233,"* use_imputation is turned on with a warning, then the method follows the behavior for the first scenario. - - - -* If the use_imputation method is specified as False and the input data has no missing values, then no further processing is required. - - - -For example: - -""pipelines"": [ -{ -""id"": ""automl"", -""runtime_ref"": ""hybrid"", -""nodes"": -{ -""id"": ""automl-ts"", -""type"": ""execution_node"", -""op"": ""kube"", -""runtime_ref"": ""automl"", -""parameters"": { -""del_on_close"": true, -""optimization"": { -""target_columns"": 2,3,4], -""timestamp_column"": 1, -""use_imputation"": true -} -} -} -] -} -] - -Parent topic:[Data imputation in AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-imputation.html) -" -EBB83F528AC02840EFE18510ED95979D2CDA5641_0,EBB83F528AC02840EFE18510ED95979D2CDA5641," AutoAI implementation details - -AutoAI automatically prepares data, applies algorithms, or estimators, and builds model pipelines that are best suited for your data and use case. - -The following sections describe some of these technical details that go into generating the pipelines and provide a list of research papers that describe how AutoAI was designed and implemented. - - - -* [Preparing the data for training (pre-processing)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=endata-prep) -* [Automated model selection](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enauto-select) -* [Algorithms used for classification models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enestimators-classification) -* [Algorithms used for regression models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enestimators-regression) -* [Metrics by model type](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enmetric-by-model) -* [Data transformations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=endata-transformations) -* [Automated Feature Engineering](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enfeat-eng) -" -EBB83F528AC02840EFE18510ED95979D2CDA5641_1,EBB83F528AC02840EFE18510ED95979D2CDA5641,"* [Hyperparameter optimization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enhyper-opt) -* [AutoAI FAQ](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enautoai-faq) -* [Learn more](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enadd-resource) - - - -" -EBB83F528AC02840EFE18510ED95979D2CDA5641_2,EBB83F528AC02840EFE18510ED95979D2CDA5641," Preparing the data for training (data pre-processing) - -During automatic data preparation, or pre-processing, AutoAI analyzes the training data and prepares it for model selection and pipeline generation. Most data sets contain missing values but machine learning algorithms typically expect no missing values. On exception to this rule is described in [xgboost section 3.4](https://arxiv.org/abs/1603.02754). AutoAI algorithms perform various missing value imputations in your data set by using various techniques, making your data ready for machine learning. In addition, AutoAI detects and categorizes features based on their data types, such as categorical or numerical. It explores encoding and scaling strategies that are based on the feature categorization. - -Data preparation involves these steps: - - - -* [Feature column classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=encol-classification) -* [Feature engineering](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enfeature-eng) -* [Pre-processing (data imputation and encoding)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enpre-process) - - - -" -EBB83F528AC02840EFE18510ED95979D2CDA5641_3,EBB83F528AC02840EFE18510ED95979D2CDA5641," Feature column classification - - - -* Detects the types of feature columns and classifies them as categorical or numerical class -* Detects various types of missing values (default, user-provided, outliers) - - - -" -EBB83F528AC02840EFE18510ED95979D2CDA5641_4,EBB83F528AC02840EFE18510ED95979D2CDA5641," Feature engineering - - - -* Handles rows for which target values are missing (drop (default) or target imputation) -* Drops unique value columns (except datetime and timestamps) -* Drops constant value columns - - - -" -EBB83F528AC02840EFE18510ED95979D2CDA5641_5,EBB83F528AC02840EFE18510ED95979D2CDA5641," Pre-processing (data imputation and encoding) - - - -* Applies Sklearn imputation/encoding/scaling strategies (separately on each feature class). For example, the current default method for missing value imputation strategies, which are used in the product are most frequent for categorical variables and mean for numerical variables. -* Handles labels of test set that were not seen in training set -* HPO feature: Optimizes imputation/encoding/scaling strategies given a data set and algorithm - - - -" -EBB83F528AC02840EFE18510ED95979D2CDA5641_6,EBB83F528AC02840EFE18510ED95979D2CDA5641," Automatic model selection - -The second stage in an AutoAI experiment training is automated model selection. The automated model selection algorithm uses the Data Allocation by using Upper Bounds strategy. This approach sequentially allocates small subsets of training data among a large set of algorithms. The goal is to select an algorithm that gives near-optimal accuracy when trained on all data, while also minimizing the cost of misallocated samples. The system currently supports all Scikit-learn algorithms, and the popular XGBoost and LightGBM algorithms. Training and evaluation of models on large data sets is costly. The approach of starting small subsets and allocating incrementally larger ones to models that work well on the data set saves time, without sacrificing performance. Snap machine learning algorithms were added to the system to boost the performance even more. - -" -EBB83F528AC02840EFE18510ED95979D2CDA5641_7,EBB83F528AC02840EFE18510ED95979D2CDA5641," Selecting algorithms for a model - -Algorithms are selected to match the data and the nature of the model, but they can also balance accuracy and duration of runtime, if the model is configured for that option. For example, Snap ML algorithms are typically faster for training than Scikit-learn algorithms. They are often the preferred algorithms AutoAI selects automatically for cases where training is optimized for a shorter run time and accuracy. You can manually select them if training speed is a priority. For details, see [Snap ML documentation](https://snapml.readthedocs.io/). For a discussion of when SnapML algorithms are useful, see this [blog post on using SnapML algorithms](https://lukasz-cmielowski.medium.com/watson-studio-autoai-python-api-and-covid-19-data-78169beacf36). - -" -EBB83F528AC02840EFE18510ED95979D2CDA5641_8,EBB83F528AC02840EFE18510ED95979D2CDA5641," Algorithms used for classification models - -These algorithms are the default algorithms that are used for model selection for classification problems. - - - -Table 1: Default algorithms for classification - - Algorithm Description - - Decision Tree Classifier Maps observations about an item (represented in branches) to conclusions about the item's target value (represented in leaves). Supports both binary and multiclass labels, and both continuous and categorical features. - Extra Trees Classifier An averaging algorithm based on randomized decision trees. - Gradient Boosted Tree Classifier Produces a classification prediction model in the form of an ensemble of decision trees. It supports binary labels and both continuous and categorical features. - LGBM Classifier Gradient boosting framework that uses leaf-wise (horizontal) tree-based learning algorithm. - Logistic Regression Analyzes a data set where one or more independent variables that determine one of two outcomes. Only binary logistic regression is supported - Random Forest Classifier Constructs multiple decision trees to produce the label that is a mode of each decision tree. It supports both binary and multiclass labels, and both continuous and categorical features. - SnapDecisionTreeClassifier This algorithm provides a decision tree classifier by using the IBM Snap ML library. - SnapLogisticRegression This algorithm provides regularized logistic regression by using the IBM Snap ML solver. - SnapRandomForestClassifier This algorithm provides a random forest classifier by using the IBM Snap ML library. - SnapSVMClassifier This algorithm provides a regularized support vector machine by using the IBM Snap ML solver. - XGBoost Classifier Accurate sure procedure that can be used for classification problems. XGBoost models are used in various areas, including web search ranking and ecology. - SnapBoostingMachineClassifier Boosting machine for binary and multi-class classification tasks that mix binary decision trees with linear models with random fourier features. - - - -" -EBB83F528AC02840EFE18510ED95979D2CDA5641_9,EBB83F528AC02840EFE18510ED95979D2CDA5641," Algorithms used for regression models - -These algorithms are the default algorithms that are used for automatic model selection for regression problems. - - - -Table 2: Default algorithms for regression - - Algorithm Description - - Decision Tree Regression Maps observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It supports both continuous and categorical features. - Extra Trees Regression An averaging algorithm based on randomized decision trees. - Gradient Boosting Regression Produces a regression prediction model in the form of an ensemble of decision trees. It supports both continuous and categorical features. - LGBM Regression Gradient boosting framework that uses tree-based learning algorithms. - Linear Regression Models the linear relationship between a scalar-dependent variable y and one or more explanatory variables (or independent variables) x. - Random Forest Regression Constructs multiple decision trees to produce the mean prediction of each decision tree. It supports both continuous and categorical features. - Ridge Ridge regression is similar to Ordinary Least Squares but imposes a penalty on the size of coefficients. - SnapBoostingMachineRegressor This algorithm provides a boosting machine by using the IBM Snap ML library that can be used to construct an ensemble of decision trees. - SnapDecisionTreeRegressor This algorithm provides a decision tree by using the IBM Snap ML library. - SnapRandomForestRegressor This algorithm provides a random forest by using the IBM Snap ML library. - XGBoost Regression GBRT is an accurate and effective off-the-shelf procedure that can be used for regression problems. Gradient Tree Boosting models are used in various areas, including web search ranking and ecology. - - - -" -EBB83F528AC02840EFE18510ED95979D2CDA5641_10,EBB83F528AC02840EFE18510ED95979D2CDA5641," Metrics by model type - -The following metrics are available for measuring the accuracy of pipelines during training and for scoring data. - -" -EBB83F528AC02840EFE18510ED95979D2CDA5641_11,EBB83F528AC02840EFE18510ED95979D2CDA5641," Binary classification metrics - - - -* Accuracy (default for ranking the pipelines) -* Roc auc -* Average precision -* F -* Negative log loss -* Precision -* Recall - - - -" -EBB83F528AC02840EFE18510ED95979D2CDA5641_12,EBB83F528AC02840EFE18510ED95979D2CDA5641," Multi-class classification metrics - -Metrics for multi-class models generate scores for how well a pipeline performs against the specified measurement. For example, an F1 score averages precision (of the predictions made, how many positive predictions were correct) and recall (of all possible positive predictions, how many were predicted correctly). - -You can further refine a score by qualifying it to calculate the given metric globally (macro), per label (micro), or to weight an imbalanced data set to favor classes with more representation. - - - -* Metrics with the micro qualifier calculate metrics globally by counting the total number of true positives, false negatives and false positives. -* Metrics with the macro qualifier calculates metrics for each label, and finds their unweighted mean. All labels are weighted equally. -* Metrics with the weighted qualifier calculate metrics for each label, and find their average weighted by the contribution of each class. For example, in a data set that includes categories for apples, peaches, and plums, if there are many more instances of apples, the weighted metric gives greater importance to correctly predicting apples. This alters macro to account for label imbalance. Use a weighted metric such as F1-weighted for an imbalanced data set. - - - -These are the multi-class classification metrics: - - - -* Accuracy (default for ranking the pipelines) -* F1 -* F1 Micro -* F1 Macro -* F1 Weighted -* Precision -* Precision Micro -* Precision Macro -* Precision Weighted -* Recall -* Recall Micro -* Recall Macro -* Recall Weighted - - - -" -EBB83F528AC02840EFE18510ED95979D2CDA5641_13,EBB83F528AC02840EFE18510ED95979D2CDA5641," Regression metrics - - - -* Negative root mean squared error (default for ranking the pipeline) -* Negative mean absolute error -* Negative root mean squared log error -* Explained variance -* Negative mean squared error -* Negative mean squared log error -* Negative median absolute error -* R2 - - - -" -EBB83F528AC02840EFE18510ED95979D2CDA5641_14,EBB83F528AC02840EFE18510ED95979D2CDA5641," Automated Feature Engineering - -The third stage in the AutoAI process is automated feature engineering. The automated feature engineering algorithm is based on Cognito, described in the research papers, [Cognito: Automated Feature Engineering for Supervised Learning](https://ieeexplore.ieee.org/abstract/document/7836821) and [Feature Engineering for Predictive Modeling by using Reinforcement Learning](https://research.ibm.com/publications/feature-engineering-for-predictive-modeling-using-reinforcement-learning). The system explores various feature construction choices in a hierarchical and nonexhaustive manner, while progressively maximizing the accuracy of the model through an exploration-exploitation strategy. This method is inspired from the ""trial and error"" strategy for feature engineering, but conducted by an autonomous agent in place of a human. - -" -EBB83F528AC02840EFE18510ED95979D2CDA5641_15,EBB83F528AC02840EFE18510ED95979D2CDA5641," Metrics used for feature importance - -For tree-based classification and regression algorithms such as Decision Tree, Extra Trees, Random Forest, XGBoost, Gradient Boosted, and LGBM, feature importances are their inherent feature importance scores based on the reduction in the criterion that is used to select split points, and calculated when these algorithms are trained on the training data. - -For nontree algorithms such as Logistic Regression, LInear Regression, SnapSVM, and Ridge, the feature importances are the feature importances of a Random Forest algorithm that is trained on the same training data as the nontree algorithm. - -For any algorithm, all feature importances are in the range between zero and one and have been normalized as the ratio with respect to the maximum feature importance. - -" -EBB83F528AC02840EFE18510ED95979D2CDA5641_16,EBB83F528AC02840EFE18510ED95979D2CDA5641," Data transformations - -For feature engineering, AutoAI uses a novel approach that explores various feature construction choices in a structured, nonexhaustive manner, while progressively maximizing model accuracy by using reinforcement learning. This results in an optimized sequence of transformations for the data that best match the algorithms, or algorithms, of the model selection step. This table lists some of the transformations that are used and some well-known conditions under which they are useful. This is not an exhaustive list of scenarios where the transformation is useful, as that can be complex and hard to interpret. Finally, the listed scenarios are not an explanation of how the transformations are selected. The selection of which transforms to apply is done in a trial and error, performance-oriented manner. - - - -Table 3: Transformations for feature engineering - - Name Code Function - - Principle Component Analysis pca Reduce dimensions of data and realign across a more suitable coordinate system. Helps tackle the 'curse of dimensionality' in linearly correlated data. It eliminates redundancy and separates significant signals in data. - Standard Scaler stdscaler Scales data features to a standard range. This helps the efficacy and efficiency of certain learning algorithms and other transformations such as PCA. - Logarithm log Reduces right skewness in features and make them more symmetric. Resulting symmetry in features helps algorithms understand the data better. Even scaling based on mean and variance is more meaningful on symmetrical data. Additionally, it can capture specific physical relationships between feature and target that is best described through a logarithm. - Cube Root cbrt Reduces right skewness in data like logarithm, but is weaker than log in its impact, which might be more suitable in some cases. It is also applicable to negative or zero values to which log doesn't apply. Cube root can also change units such as reducing volume to length. - Square root sqrt Reduces mild right skewness in data. It is weaker than log or cube root. It works with zeros and reduces spatial dimensions such as area to length. -" -EBB83F528AC02840EFE18510ED95979D2CDA5641_17,EBB83F528AC02840EFE18510ED95979D2CDA5641," Square square Reduces left skewness to a moderate extent to make such distributions more symmetric. It can also be helpful in capturing certain phenomena such as super-linear growth. - Product product A product of two features can expose a nonlinear relationship to better predict the target value than the individual values alone. For example, item cost into number of items that are sold is a better indication of the size of a business than any of those alone. - Numerical XOR nxor This transform helps capture ""exclusive disjunction"" type of relationships between variables, similar to a bitwise XOR, but in a general numerical context. - Sum sum Sometimes the sum of two features is better correlated to the prediction target than the features alone. For instance, loans from different sources, when summed up, provide a better idea of a credit applicant's total indebtedness. - Divide divide Division is a fundamental operand that is used to express quantities such as gross GDP over population (per capita GDP), representing a country's average lifespan better than either GDP alone or population alone. - Maximum max Take the higher of two values. - Rounding round This transformation can be seen as perturbation or adding some noise to reduce overfitting that might be a result of inaccurate observations. - Absolute Value abs Consider only the magnitude and not the sign of observation. Sometimes, the direction or sign of an observation doesn't matter so much as the magnitude of it, such as physical displacement, while considering fuel or time spent in the actual movement. - Hyperbolic tangent tanh Nonlinear activation function can improve prediction accuracy, similar to that of neural network activation functions. - Sine sin Can reorient data to discover periodic trends such as simple harmonic motions. - Cosine cos Can reorient data to discover periodic trends such as simple harmonic motions. - Tangent tan Trigonometric tangent transform is usually helpful in combination with other transforms. - Feature Agglomeration feature agglomeration Clustering different features into groups, based on distance or affinity, provides ease of classification for the learning algorithm. - Sigmoid sigmoid Nonlinear activation function can improve prediction accuracy, similar to that of neural network activation functions. -" -EBB83F528AC02840EFE18510ED95979D2CDA5641_18,EBB83F528AC02840EFE18510ED95979D2CDA5641," Isolation Forest isoforestanomaly Performs clustering by using an Isolation Forest to create a new feature containing an anomaly score for each sample. - Word to vector word2vec This algorithm, which is used for text analysis, is applied before all other transformations. It takes a corpus of text as input and outputs a set of vectors. By turning text into a numerical representation, it can detect and compare similar words. When trained with enough data, word2vec can make accurate predictions about a word’s meaning or relationship to other words. The predictions can be used to analyze text and predict meaning in sentiment analysis applications. - - - -" -EBB83F528AC02840EFE18510ED95979D2CDA5641_19,EBB83F528AC02840EFE18510ED95979D2CDA5641," Hyperparameter Optimization - -The final stage in AutoAI is hyperparameter optimization. The AutoAI approach optimizes the parameters of the best performing pipelines from the previous phases. It is done by exploring the parameter ranges of these pipelines by using a black box hyperparameter optimizer called RBFOpt. RBFOpt is described in the research paper [RBFOpt: an open-source library for black-box optimization with costly function evaluations](http://www.optimization-online.org/DB_HTML/2014/09/4538.html). RBFOpt is suited for AutoAI experiments because it is built for optimizations with costly evaluations, as in the case of training and scoring an algorithm. RBFOpt's approach builds and iteratively refines a surrogate model of the unknown objective function to converge quickly despite the long evaluation times of each iteration. - -" -EBB83F528AC02840EFE18510ED95979D2CDA5641_20,EBB83F528AC02840EFE18510ED95979D2CDA5641," AutoAI FAQs - -The following are commonly asked questions about creating an AutoAI experiment. - -" -EBB83F528AC02840EFE18510ED95979D2CDA5641_21,EBB83F528AC02840EFE18510ED95979D2CDA5641," How many pipelines are created? - -Two AutoAI parameters determine the number of pipelines: - - - -* max_num_daub_ensembles: Maximum number (top-K ranked by DAUB model selection) of the selected algorithm, or estimator types, for example LGBMClassifierEstimator, XGBoostClassifierEstimator, or LogisticRegressionEstimator to use in pipeline composition. The default is 1, where only the highest ranked by model selection algorithm type is used. -* num_folds: Number of subsets of the full data set to train pipelines in addition to the full data set. The default is 1 for training the full data set. - - - -For each fold and algorithm type, AutoAI creates four pipelines of increased refinement, corresponding to: - - - -1. Pipeline with default sklearn parameters for this algorithm type, -2. Pipeline with optimized algorithm by using HPO -3. Pipeline with optimized feature engineering -4. Pipeline with optimized feature engineering and optimized algorithm by using HPO - - - -The total number of pipelines that are generated is: - -TotalPipelines= max_num_daub_ensembles * 4, if num_folds = 1: - -TotalPipelines= (num_folds+1)* max_num_daub_ensembles * 4, if num_folds > 1 : - -" -EBB83F528AC02840EFE18510ED95979D2CDA5641_22,EBB83F528AC02840EFE18510ED95979D2CDA5641," What hyperparameter optimization is applied to my model? - -AutoAI uses a model-based, derivative-free global search algorithm, called RBfOpt, which is tailored for the costly machine learning model training and scoring evaluations that are required by hyperparameter optimization (HPO). In contrast to Bayesian optimization, which fits a Gaussian model to the unknown objective function, RBfOpt fits a radial basis function mode to accelerate the discovery of hyper-parameter configurations that maximize the objective function of the machine learning problem at hand. This acceleration is achieved by minimizing the number of expensive training and scoring machine learning models evaluations and by eliminating the need to compute partial derivatives. - -For each fold and algorithm type, AutoAI creates two pipelines that use HPO to optimize for the algorithm type. - - - -* The first is based on optimizing this algorithm type based on the preprocessed (imputed/encoded/scaled) data set (pipeline 2) above). -* The second is based on optimizing the algorithm type based on optimized feature engineering of the preprocessed (imputed/encoded/scaled) data set. - - - -The parameter values of the algorithms of all pipelines that are generated by AutoAI is published in status messages. - -For more details regarding the RbfOpt algorithm, see: - - - -* [RbfOpt: A blackbox optimization library in Python](https://github.com/coin-or/rbfopt) -* [An effective algorithm for hyperparameter optimization of neural networks. IBM Journal of Research and Development, 61(4-5), 2017](http://ieeexplore.ieee.org/document/8030298/) - - - -" -EBB83F528AC02840EFE18510ED95979D2CDA5641_23,EBB83F528AC02840EFE18510ED95979D2CDA5641,"Research references - -This list includes some of the foundational research articles that further detail how AutoAI was designed and implemented to promote trust and transparency in the automated model-building process. - - - -* [Toward cognitive automation of data science](https://scholar.google.com/citations?view_op=view_citation&hl=en&user=km7EsqsAAAAJ&cst[…]&sortby=pubdate&citation_for_view=km7EsqsAAAAJ:R3hNpaxXUhUC) -* [Cognito: Automated feature engineering for supervised learning](https://scholar.google.com/citations?view_op=view_citation&hl=en&user=km7EsqsAAAAJ&cst[…]&sortby=pubdate&citation_for_view=km7EsqsAAAAJ:maZDTaKrznsC) - - - -" -EBB83F528AC02840EFE18510ED95979D2CDA5641_24,EBB83F528AC02840EFE18510ED95979D2CDA5641," Next steps - -[Data imputation in AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-imputation.html) - -Parent topic:[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) -" -3F0B3A581945A1C7FE243340843CC4671A4E32C6_0,3F0B3A581945A1C7FE243340843CC4671A4E32C6," Applying fairness testing to AutoAI experiments - -Evaluate an experiment for fairness to ensure that your results are not biased in favor of one group over another. - -" -3F0B3A581945A1C7FE243340843CC4671A4E32C6_1,3F0B3A581945A1C7FE243340843CC4671A4E32C6," Limitations - -Fairness evaluations are not supported for time series experiments. - -" -3F0B3A581945A1C7FE243340843CC4671A4E32C6_2,3F0B3A581945A1C7FE243340843CC4671A4E32C6," Evaluating experiments and models for fairness - -When you define an experiment and produce a machine learning model, you want to be sure that your results are reliable and unbiased. Bias in a machine learning model can result when the model learns the wrong lessons during training. This scenario can result when insufficient data or poor data collection or management results in a poor outcome when the model generates predictions. It is important to evaluate an experiment for signs of bias to remediate them when necessary and build confidence in the model results. - -AutoAI includes the following tools, techniques, and features to help you evaluate and remediate an experiment for bias. - - - -* [Definitions and terms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html?context=cdpaas&locale=enterms) -* [Applying fairness test for an AutoAI experiment in the UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html?context=cdpaas&locale=enfairness-ui) -* [Applying fairness test for an AutoAI experiment in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html?context=cdpaas&locale=enfairness-api) -* [Evaluating results](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html?context=cdpaas&locale=enfairness-results) -* [Bias mitigation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html?context=cdpaas&locale=enbias-mitigation) - - - -" -3F0B3A581945A1C7FE243340843CC4671A4E32C6_3,3F0B3A581945A1C7FE243340843CC4671A4E32C6," Definitions and terms - -Fairness Attribute - Bias or Fairness is typically measured by using a fairness attribute such as gender, ethnicity, or age. - -Monitored/Reference Group - Monitored group are those values of fairness attribute for which you want to measure bias. Values in the monitored group are compared to values in the reference group. For example, if Fairness Attribute=Gender is used to measure bias against females, then the monitored group value is “Female” and the reference group value is “Male”. - -Favourable/Unfavourable outcome - An important concept in bias detection is that of favorable and unfavorable outcome of the model. For example, Claim approved might be considered a favorable outcome and Claim denied might be considered as an unfavorable outcome. - -Disparate impact - The metric used to measure bias (computed as the ratio of percentage of favorable outcome for the monitored group to the percentage of favorable outcome for the reference group). Bias is said to exist if the disparate impact value is less than a specified threshold. - -For example, if 80% of insurance claims that are made by males are approved but only 60% of claims that are made by females are approved, then the disparate impact is: 60/80 = 0.75. Typically, the threshold value for bias is 0.8. As this disparate impact ratio is less than 0.8, the model is considered to be biased. - -Note when the disparate impact ratio is greater than 1.25 [inverse value (1/disparate impact) is under the threshold 0.8] it is also considered as biased. - -" -3F0B3A581945A1C7FE243340843CC4671A4E32C6_4,3F0B3A581945A1C7FE243340843CC4671A4E32C6," Watch a video about evaluating and improving fairness - -Watch this video to see how to evaluate a machine learning model for fairness to ensure that your results are not biased. - -Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -3F0B3A581945A1C7FE243340843CC4671A4E32C6_5,3F0B3A581945A1C7FE243340843CC4671A4E32C6," Applying fairness test for an AutoAI experiment in the UI - - - -1. Open Experiment Settings. -2. Click the Fairness tab. -3. Enable options for fairness. The options are as follows: - - - -* Fairness evaluation: Enable this option to check each pipeline for bias by calculating the disparate impact ration. This method tracks whether a pipeline shoes a tendency to provide a favorable (preferred) outcome for one group more often than another. -* Fairness threshold: Set a fairness threshold to determine whether bias exists in a pipeline based on the value of the disparate impact ration. The default is 80, which represents a disparate impact ratio less than 0.80. -* Favorable outcomes: Specify the value from your prediction column that would be considered favorable. For example, the value might be ""approved"", ""accepted"" or whatever fits your prediction type. -* Automatic protected attribute method: Choose how to evaluate features that are a potential source of bias. You can specify automatic detection, in which case AutoAI detects commonly protected attributes, including: sex, ethnicity, marital status, age, and zip or postal code. Within each category, AutoAI tries to determine a protected group. For example, for the sex category, the monitored group would be female. - -Note: In automatic mode, it is likely that a feature is not identified correctly as a protected attribute if it has untypical values, for example, being in a language other than English. Auto-detect is only supported for English. -* Manual protected attribute method: Manually specify an outcome and supply the protected attribute by choosing from a list of attributes. Note when you manually supply attributes, you must then define a group and specify whether it is likely to have the expected outcomes (the reference group) or should be reviewed to detect variance from the expected outcomes (the monitored group). - - - - - -For example, this image shows a set of manually specified attribute groups for monitoring. - -![Evaluating a group for potential bias](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-fairness1.png) - -" -3F0B3A581945A1C7FE243340843CC4671A4E32C6_6,3F0B3A581945A1C7FE243340843CC4671A4E32C6,"Save the settings to apply and run the experiment to apply the fairness evaluation to your pipelines. - -Notes: - - - -* For multiclass models, you can select multiple values in the prediction column to classify as favorable or not. -* For regression models, you can specify a range of outcomes that are considered to be favorable or not. -* Fairness evaluations are not currently available for time series experiments. - - - -" -3F0B3A581945A1C7FE243340843CC4671A4E32C6_7,3F0B3A581945A1C7FE243340843CC4671A4E32C6," List of automatically detected attributes for measuring fairness - -When automatic detection is enabled, AutoAI will automatically detect the following attributes if they are present in the training data. The attributes must be in English. - - - -* age -* citizen_status -* color -* disability -* ethnicity -* gender -* genetic_information -* handicap -* language -* marital -* political_belief -* pregnancy -* religion -* veteran_status - - - -" -3F0B3A581945A1C7FE243340843CC4671A4E32C6_8,3F0B3A581945A1C7FE243340843CC4671A4E32C6," Applying fairness test for an AutoAI experiment in a notebook - -You can perform fairness testing in an AutoAI experiment that is trained in a notebook and extend the capabilities beyond what is provided in the UI. - -" -3F0B3A581945A1C7FE243340843CC4671A4E32C6_9,3F0B3A581945A1C7FE243340843CC4671A4E32C6," Bias detection example - -In this example, by using the Watson Machine Learning Python API (ibm-watson-machine-learning), the optimizer configuration for bias detection is configured with the following input, where: - - - -* name - experiment name -* prediction_type - type of the problem -* prediction_column - target column name -* fairness_info - bias detection configuration - - - -fairness_info = { -""protected_attributes"": [ -{ -""feature"": ""personal_status"", -""reference_group"": ""male div/sep"", ""male mar/wid"", ""male single""], -""monitored_group"": ""female div/dep/mar""] -}, -{ -""feature"": ""age"", -""reference_group"": 26, 100]], -""monitored_group"": 1, 25]]} -], -""favorable_labels"": [""good""], -""unfavorable_labels"": [""bad""], -} - -from ibm_watson_machine_learning.experiment import AutoAI - -experiment = AutoAI(wml_credentials, space_id=space_id) -pipeline_optimizer = experiment.optimizer( -name='Credit Risk Prediction and bias detection - AutoAI', -prediction_type=AutoAI.PredictionType.BINARY, -prediction_column='class', -scoring='accuracy', -fairness_info=fairness_info, -retrain_on_holdout=False -) - -" -3F0B3A581945A1C7FE243340843CC4671A4E32C6_10,3F0B3A581945A1C7FE243340843CC4671A4E32C6," Evaluating results - -You can view the evaluation results for each pipeline. - - - -1. From the Experiment summary page, click the filter icon for the Pipeline leaderboard. -2. Choose the Disparate impact metrics for your experiment. This option evaluates one general metric and one metric for each monitored group. -3. Review the pipeline metrics for disparate impact to determine whether you have a problem with bias or just to determine which pipeline performs better for a fairness evaluation. - - - -In this example, the pipeline that was ranked first for accuracy also has a disparate income score that is within the acceptable limits. - -![Viewing the fairness results](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-fairness3.png) - -" -3F0B3A581945A1C7FE243340843CC4671A4E32C6_11,3F0B3A581945A1C7FE243340843CC4671A4E32C6," Bias mitigation - -If bias is detected in an experiment, you can mitigate it by optimizing your experiment by using ""combined scorers"": [accuracy_and_disparate_impact](https://lale.readthedocs.io/en/latest/modules/lale.lib.aif360.util.htmllale.lib.aif360.util.accuracy_and_disparate_impact) or [r2_and_disparate_impact](https://lale.readthedocs.io/en/latest/modules/lale.lib.aif360.util.htmllale.lib.aif360.util.r2_and_disparate_impact), both defined by the open source [LALE package](https://lale.readthedocs.io/en/latest/index.html). - -Combined scorers are used in the search and optimization process to return fair and accurate models. - -For example, to optimize for bias detection for a classification experiment: - - - -1. Open Experiment Settings. -2. On the Predictions page, choose to optimize Accuracy and disparate impact in the experiment. -3. Rerun the experiment. - - - -The Accuracy and disparate impact metric creates a combined score for accuracy and fairness for classification experiments. A higher score indicates better performance and fairness measures. If the disparate impact score is between 0.9 and 1.11 (an acceptable level), the accuracy score is returned. Otherwise, a disparate impact value lower than the accuracy score is returned, with a lower (negative) value which indicates a fairness gap. - -" -3F0B3A581945A1C7FE243340843CC4671A4E32C6_12,3F0B3A581945A1C7FE243340843CC4671A4E32C6,"Note:Advanced users can use a [notebook to apply or review fairness detection methods](https://github.com/IBM/watson-machine-learning-samples/blob/master/cloud/notebooks/python_sdk/experiments/autoai/Use%20AutoAI%20to%20train%20fair%20models.ipynb). You can further refine a trained AutoAI model by using third-party packages like: [lale, AIF360](https://lale.readthedocs.io/en/latest/modules/lale.lib.aif360.htmlmodule-lale.lib.aif360) to extend the fairness and bias detection capabilities beyond what is provided with AutoAI by default. - -Review a [sample notebook that evaluates pipelines for fairness](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html). - -Read this [Medium blog post on Bias detection in AutoAI](https://lukasz-cmielowski.medium.com/bias-detection-and-mitigation-in-ibm-autoai-406db0e19181). - -" -3F0B3A581945A1C7FE243340843CC4671A4E32C6_13,3F0B3A581945A1C7FE243340843CC4671A4E32C6," Next steps - -[Troubleshooting AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-troubleshoot.html) - -Parent topic: [AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_0,5042FBFB0C15AEDED02FF805C4869AC838910C7A," AutoAI glossary - -Learn terms and concepts that are used in AutoAI for building and deploying machine learning models. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_1,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"aggregate score -The aggregation of the four anomaly types: level shift, trend, localized extreme, variance. A higher score indicates a stronger score. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_2,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"algorithm -A formula applied to data to determine optimal ways to solve analytical problems. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_3,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"anomaly prediction -An AutoAI time-series model that can predict anomalies, or unexpected results, against new data. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_4,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"AutoAI experiment -An automated training process that considers a series of training definitions and parameters to create a set of ranked pipelines as model candidates. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_5,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"batch employment -Processes input data from a file, data connection, or connected data in a storage bucket and writes the output to a selected destination. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_6,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"bias detection (machine learning) -To identify imbalances in the training data or prediction behavior of the model. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_7,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"binary classification -A classification model with two classes and only assigns samples into one of the two classes. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_8,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"classification model -A predictive model that predicts data in distinct categories. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_9,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"confusion matrix -A performance measurement that determines the accuracy between a model’s positive and negative predicted outcomes to positive and negative actual outcomes. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_10,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"cross validation -A technique that tests the effectiveness of machine learning models. It is also used as a resampling procedure for models with limited data. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_11,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"data imputation -Substituting missing values in a data set with estimated values. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_12,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"exogenous features -Features that can influence the prediction model but cannot be influenced in return. See also: Supporting features - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_13,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"fairness -Determines whether a model produces biased outcomes that favor a monitored group over a reference group. Fairness evaluations detect if the model shows a tendency to provide a favorable or preferable outcome more often for one group over another. Typical categories to monitor are age, sex, and race. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_14,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"feature correlation -The relationship between two features. For example, postal code might have a strong correlation with income in some models. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_15,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"feature encoding -Transforming categorical values into numerical values. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_16,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"feature importance -The relative impact a particular column or feature has on the model's prediction or forecast. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_17,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"feature scaling -Normalizing the range of independent variables or features in a data set. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_18,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"feature selection -Identifying the columns of data that best support an accurate prediction or score. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_19,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"feature transformation -In AutoAI, a phase of pipeline creation that applies algorithms to transform and optimize the training data to achieve the best outcome for the model type. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_20,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"holdout data -Data used to test or validate the model's performance. Holdout data can be a reserved portion of the training data, or it can be a separate file. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_21,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"hyperparameter optimization (HPO) -The process for setting hyperparameter values to the settings that provide the most accurate model. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_22,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"incremental learning -The process of training a model that uses data that is continually updated without forgetting data that is obtained from the preceding tasks. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_23,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"large tabular data -Structured data that exceeds the limit on standard processing and must be processed in batches. See incremental learning. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_24,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"labeled data -Data that is labeled to identify the appropriate data vectors to be pulled in for model training. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_25,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"monitored group -A class of data monitored to determine whether the results differ significantly from the results of the reference group. For example, in a credit app, you might monitor applications in a particular age range and compare results to the age range more likely to recieve a positive outcome to evaluate whether there might be bias in the results. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_26,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"multiclass classification model -A classification task with more than two classes. For example, where a binary classification model predicts yes or no values, a multi-class model predicts yes, no, maybe, or not applicable. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_27,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"multivariate time series -Time series experiment that contains two or more changing variables. For example, a time series model that forecasts the electricity usage of three clients. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_28,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"optimized metric -The metric used to measure the performance of the model. For example, accuracy is the typical metric that is used to measure the performance of a binary classification model. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_29,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"pipeline (model candidate pipeline) -End-to-end outline that illustrates the steos in a workflow. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_30,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"positive class -The class that is related to your objective function. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_31,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"reference group -A group that you identify as most likely to receive a positive result in a predictive model. You can then compare the results to a monitored group to look for potential bias in outcomes. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_32,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"regression model -A model that relates a dependent variable to one or more independent variable. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_33,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"scoring -In machine learning, the process of measuring the confidence of a predicted outcome. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_34,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"supporting features -Input features that can influence the prediction target. See also: Exogenus features - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_35,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"text classification -A model that automatically identifies and classifies text into distinct categories. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_36,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"time series model (AutoAI) -A model that tracks data over time. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_37,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"trained model -A model that is ready to be deployed. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_38,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"training -The initial stage of model building, involving a subset of the source data. The model can then be tested against a further, different subset for which the outcome is already known. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_39,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"training data -Data used to teach and train a model's learning algorithm. - -" -5042FBFB0C15AEDED02FF805C4869AC838910C7A_40,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"univariate time series -Time series experiment that contains only one changing variable. For example, a time series model that forecasts the temperature has a single prediction column of the temperature. -" -73F96A06142EE17A6C55E5700580F33250552A00_0,73F96A06142EE17A6C55E5700580F33250552A00," Data imputation in AutoAI experiments - -Data imputation is the means of replacing missing values in your data set with substituted values. If you enable imputation, you can specify how missing values are interpolated in your data. - -" -73F96A06142EE17A6C55E5700580F33250552A00_1,73F96A06142EE17A6C55E5700580F33250552A00," Imputation by experiment type - -Imputation methods depend on the type of experiment that you build. - - - -* For classification and regression you can configure categorical and numerical imputation methods. -* For timeseries problems, you can choose from a set of imputation methods to apply to numerical columns. When the experiment runs, the best performing method from the set is applied automatically. You can also specify a specific value as a replacement value. - - - -" -73F96A06142EE17A6C55E5700580F33250552A00_2,73F96A06142EE17A6C55E5700580F33250552A00," Enabling imputation - -To view and set imputation options: - - - -1. Click Experiment settings when you configure your experiment. -2. Click the Data source option. -3. Click Enable data imputation. Note that if you do not explicitly enable data imputation but your data source has missing values, AutoAI warns you and applies default imputation methods. See [imputation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-data-imp-details.html). -4. Select options in the Imputation section. -5. Optionally set a threshold for the percentage of imputation acceptable for a column of data. If the percentage of missing values exceeds the specified threshold, the experiment fails. To resolve, update the data source or adjust the threshold. - - - -" -73F96A06142EE17A6C55E5700580F33250552A00_3,73F96A06142EE17A6C55E5700580F33250552A00," Configuring imputation for classification and regression experiments - -Choose one of these methods for imputing missing data in binary classification, multiclass classification, or regression experiments. Note that you can have one method for completing values for text-based (categorical) data and another for numerical data. - - - - Method Description - - Most frequent Replace missing value with the value that appears most frequently in the column. - Median Replace missing value with the value in the middle of the sorted column. - Mean Replace missing value with the average value for the column. - - - -" -73F96A06142EE17A6C55E5700580F33250552A00_4,73F96A06142EE17A6C55E5700580F33250552A00," Configuring imputation for timeseries experiments - -Choose some or all of these methods. When multiple methods are selected, the best-performing method is automatically applied for the experiment. - -Note: Imputation is not supported for date or time values. - - - - Method Description - - Cubic Uses cubic interpolation by using pandas/scipy method to fill missing values. - Fill Choose value as the type to replace the missing values with a numeric value you specify. - Flatten iterative Data is first flattened and then the Scikit-learn iterative imputer is applied to find missing values. - Linear Use linear interpolation by using pandas/scipy method to fill missing values. - Next Replace missing value with the next value. - Previous Replace missing value with the previous value. - - - -" -73F96A06142EE17A6C55E5700580F33250552A00_5,73F96A06142EE17A6C55E5700580F33250552A00," Next steps - -[Data imputation implementation details for time series experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-data-imp-details.html) - -Parent topic:[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) -" -83CD92CDB99DB6263492FAD998E932F50F0F8E99_0,83CD92CDB99DB6263492FAD998E932F50F0F8E99," AutoAI libraries for Python - -The autoai-lib library for Python contains a set of functions that help you to interact with IBM Watson Machine Learning AutoAI experiments. Using the autoai-lib library, you can review and edit the data transformations that take place in the creation of the pipeline. Similarly, you can use the autoai-ts-libs library to interact with pipeline notebooks for time series experiments. - -" -83CD92CDB99DB6263492FAD998E932F50F0F8E99_1,83CD92CDB99DB6263492FAD998E932F50F0F8E99," Installing autoai-lib or autoai-ts-libs for Python - -Follow the instructions in [Installing custom libraries](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/install-cust-lib.html) to install autoai-lib or autoai-ts-libs. - -" -83CD92CDB99DB6263492FAD998E932F50F0F8E99_2,83CD92CDB99DB6263492FAD998E932F50F0F8E99," Using autoai-lib and autoai-ts-libs for Python - -The autoai-lib and autoai-ts-libs library for Python contain functions that help you to interact with IBM Watson Machine Learning AutoAI experiments. Using the autoai-lib library, you can review and edit the data transformations that take place in the creation of classification and regression pipelines. Using the autoai-ts-libs library, you can review the data transformations that take place in the creation of time series (forecast) pipelines. - -" -83CD92CDB99DB6263492FAD998E932F50F0F8E99_3,83CD92CDB99DB6263492FAD998E932F50F0F8E99," Installing autoai-lib and autoai-ts-libs for Python - -Follow the instructions in [Installing custom libraries](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/install-cust-lib.html) to install [autoai-lib](https://pypi.org/project/autoai-libs/) and [autoai-ts-libs](https://pypi.org/project/autoai-ts-libs/). - -" -83CD92CDB99DB6263492FAD998E932F50F0F8E99_4,83CD92CDB99DB6263492FAD998E932F50F0F8E99," The autoai-lib functions - -The instantiated project object that is created after you import the autoai-lib library exposes these functions: - -" -83CD92CDB99DB6263492FAD998E932F50F0F8E99_5,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.transformers.exportable.NumpyColumnSelector() - -Selects a subset of columns of a numpy array - -Usage: - -autoai_libs.transformers.exportable.NumpyColumnSelector(columns=None) - - - - Option Description - - columns list of column indexes to select - - - -" -83CD92CDB99DB6263492FAD998E932F50F0F8E99_6,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.transformers.exportable.CompressStrings() - -Removes spaces and special characters from string columns of an input numpy array X. - -Usage: - -autoai_libs.transformers.exportable.CompressStrings(compress_type='string', dtypes_list=None, misslist_list=None, missing_values_reference_list=None, activate_flag=True) - - - - Option Description - - compress_type type of string compression. 'string' for removing spaces from a string and 'hash' for creating an int hash. Default is 'string'. 'hash' is used for columns with strings and cat_imp_strategy='most_frequent' - dtypes_list list containing strings that denote the type of each column of the input numpy array X (strings are among 'char_str','int_str','float_str','float_num', 'float_int_num','int_num','Boolean','Unknown'). If None, the column types are discovered. Default is None. - misslist_list list contains lists of missing values of each column of the input numpy array X. If None, the missing values of each column are discovered. Default is None. - missing_values_reference_list reference list of missing values in the input numpy array X - activate_flag flag that indicates that this transformer is active. If False, transform(X) outputs the input numpy array X unmodified. - - - -" -83CD92CDB99DB6263492FAD998E932F50F0F8E99_7,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.transformers.exportable.NumpyReplaceMissingValues() - -Given a numpy array and a reference list of missing values for it, replaces missing values with a special value (typically a special missing value such as np.nan). - -Usage: - -autoai_libs.transformers.exportable.NumpyReplaceMissingValues(missing_values, filling_values=np.nan) - - - - Option Description - - missing_values reference list of missing values - filling_values special value that is assigned to unknown values - - - -" -83CD92CDB99DB6263492FAD998E932F50F0F8E99_8,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.transformers.exportable.NumpyReplaceUnknownValues() - -Given a numpy array and a reference list of known values for each column, replaces values that are not part of a reference list with a special value (typically np.nan). This method is typically used to remove labels for columns in a test data set that has not been seen in the corresponding columns of the training data set. - -Usage: - -autoai_libs.transformers.exportable.NumpyReplaceUnknownValues(known_values_list=None, filling_values=None, missing_values_reference_list=None) - - - - Option Description - - known_values_list reference list of lists of known values for each column - filling_values special value that is assigned to unknown values - missing_values_reference_list reference list of missing values - - - -" -83CD92CDB99DB6263492FAD998E932F50F0F8E99_9,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.transformers.exportable.boolean2float() - -Converts a 1-D numpy array of strings that represent booleans to floats and replaces missing values with np.nan. Also changes type of array from 'object' to 'float'. - -Usage: - -autoai_libs.transformers.exportable.boolean2float(activate_flag=True) - - - - Option Description - - activate_flag flag that indicates that this transformer is active. If False, transform(X) outputs the input numpy array X unmodified. - - - -" -83CD92CDB99DB6263492FAD998E932F50F0F8E99_10,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.transformers.exportable.CatImputer() - -This transformer is a wrapper for categorical imputer. Internally it currently uses sklearn SimpleImputer]([https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html)) - -Usage: - -autoai_libs.transformers.exportable.CatImputer(strategy, missing_values, sklearn_version_family=global_sklearn_version_family, activate_flag=True) - - - - Option Description - - strategy string, optional, default=”mean”. The imputation strategy for missing values.
-mean: replace by using the mean along each column. Can be used only with numeric data.
- median:replace by using the median along each column. Can only be used with numeric data.
- most_frequent:replace by using most frequent value each column. Used with strings or numeric data.
- constant:replace with fill_value. Can be used with strings or numeric data. - missing_values number, string, np.nan (default) or None. The placeholder for the missing values. All occurrences of missing_values are imputed. - sklearn_version_family str indicating the sklearn version for backward compatibiity with versions 019, and 020dev. Currently unused. Default is None. - activate_flag flag that indicates that this transformer is active. If False, transform(X) outputs the input numpy array X unmodified. - - - -" -83CD92CDB99DB6263492FAD998E932F50F0F8E99_11,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.transformers.exportable.CatEncoder() - -This method is a wrapper for categorical encoder. If encoding parameter is 'ordinal', internally it currently uses sklearn [OrdinalEncoder](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OrdinalEncoder.html?highlight=ordinalencoder). If encoding parameter is 'onehot', or 'onehot-dense' internally it uses sklearn [OneHotEncoder](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.htmlsklearn.preprocessing.OneHotEncoder) - -Usage: - -autoai_libs.transformers.exportable.CatEncoder(encoding, categories, dtype, handle_unknown, sklearn_version_family=global_sklearn_version_family, activate_flag=True) - - - - Option Description - - encoding str, 'onehot', 'onehot-dense' or 'ordinal'. The type of encoding to use (default is 'ordinal')
'onehot': encode the features by using a one-hot aka one-of-K scheme (or also called 'dummy' encoding). This encoding creates a binary column for each category and returns a sparse matrix.
'onehot-dense': the same as 'onehot' but returns a dense array instead of a sparse matrix.
'ordinal': encode the features as ordinal integers. The result is a single column of integers (0 to n_categories - 1) per feature. -" -83CD92CDB99DB6263492FAD998E932F50F0F8E99_12,83CD92CDB99DB6263492FAD998E932F50F0F8E99," categories 'auto' or a list of lists/arrays of values. Categories (unique values) per feature:
'auto' : Determine categories automatically from the training data.
list : categories[i] holds the categories that are expected in the ith column. The passed categories must be sorted and can not mix strings and numeric values. The used categories can be found in the encoder.categories_ attribute. - dtype number type, default np.float64 Desired dtype of output. - handle_unknown 'error' (default) or 'ignore'. Whether to raise an error or ignore if a unknown categorical feature is present during transform (default is to raise). When this parameter is set to 'ignore' and an unknown category is encountered during transform, the resulting one-hot encoded columns for this feature are all zeros. In the inverse transform, an unknown category are denoted as None. Ignoring unknown categories is not supported for encoding='ordinal'. - sklearn_version_family str indicating the sklearn version for backward compatibiity with versions 019, and 020dev. Currently unused. Default is None. - activate_flag flag that indicates that this transformer are active. If False, transform(X) outputs the input numpy array X unmodified. - - - -" -83CD92CDB99DB6263492FAD998E932F50F0F8E99_13,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.transformers.exportable.float32_transform() - -Transforms a float64 numpy array to float32. - -Usage: - -autoai_libs.transformers.exportable.float32_transform(activate_flag=True) - - - - Option Description - - activate_flag flag that indicates that this transformer is active. If False, transform(X) outputs the input numpy array X unmodified. - - - -" -83CD92CDB99DB6263492FAD998E932F50F0F8E99_14,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.transformers.exportable.FloatStr2Float() - -Given numpy array X and dtypes_list that denotes the types of its columns, it replaces columns of strings that represent floats (type 'float_str' in dtypes_list) to columns of floats and replaces their missing values with np.nan. - -Usage: - -autoai_libs.transformers.exportable.FloatStr2Float(dtypes_list, missing_values_reference_list=None, activate_flag=True) - - - - Option Description - - dtypes_list list contains strings that denote the type of each column of the input numpy array X (strings are among 'char_str','int_str','float_str','float_num', 'float_int_num','int_num','Boolean','Unknown'). - missing_values_reference_list reference list of missing values - activate_flag flag that indicates that this transformer is active. If False, transform(X) outputs the input numpy array X unmodified. - - - -" -83CD92CDB99DB6263492FAD998E932F50F0F8E99_15,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.transformers.exportable.NumImputer() - -This method is a wrapper for numerical imputer. - -Usage: - -autoai_libs.transformers.exportable.NumImputer(strategy, missing_values, activate_flag=True) - - - - Option Description - - strategy num_imp_strategy: string, optional (default=”mean”). The imputation strategy:
- If “mean”, then replace missing values by using the mean along the axis.
- If “median”, then replace missing values by using the median along the axis.
- If “most_frequent”, then replace missing by using the most frequent value along the axis. - missing_values integer or “NaN”, optional (default=”NaN”). The placeholder for the missing values. All occurrences of missing_values are imputed:
- For missing values encoded as np.nan, use the string value “NaN”.
- activate_flag: flag that indicates that this transformer is active. If False, transform(X) outputs the input numpy array X unmodified. - - - -" -83CD92CDB99DB6263492FAD998E932F50F0F8E99_16,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.transformers.exportable.OptStandardScaler() - -This parameter is a wrapper for scaling of numerical variables. It currently uses sklearn [StandardScaler](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) internally. - -Usage: - -autoai_libs.transformers.exportable.OptStandardScaler(use_scaler_flag=True, num_scaler_copy=True, num_scaler_with_mean=True, num_scaler_with_std=True) - - - - Option Description - - num_scaler_copy Boolean, optional, default True. If False, try to avoid a copy and do in-place scaling instead. This action is not guaranteed to always work. With in-place, for example, if the data is not a NumPy array or scipy.sparse CSR matrix, a copy might still be returned. - num_scaler_with_mean Boolean, True by default. If True, center the data before scaling. An exception is raised when attempted on sparse matrices because centering them entails building a dense matrix, which in common use cases is likely to be too large to fit in memory. - num_scaler_with_std Boolean, True by default. If True, scale the data to unit variance (or equivalently, unit standard deviation). - use_scaler_flag Boolean, flag that indicates that this transformer is active. If False, transform(X) outputs the input numpy array X unmodified. Default is True. - - - -" -83CD92CDB99DB6263492FAD998E932F50F0F8E99_17,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.transformers.exportable.NumpyPermuteArray() - -Rearranges columns or rows of a numpy array based on a list of indexes. - -Usage: - -autoai_libs.transformers.exportable.NumpyPermuteArray(permutation_indices=None, axis=None) - - - - Option Description - - permutation_indices list of indexes based on which columns are rearranged - axis 0 permute along columns. 1 permute along rows. - - - -" -83CD92CDB99DB6263492FAD998E932F50F0F8E99_18,83CD92CDB99DB6263492FAD998E932F50F0F8E99," Feature transformation - -These methods apply to the feature transformations described in [AutoAI implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html). - -" -83CD92CDB99DB6263492FAD998E932F50F0F8E99_19,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.cognito.transforms.transform_utils.TA1(fun, name=None, datatypes=None, feat_constraints=None, tgraph=None, apply_all=True, col_names=None, col_dtypes=None) - -For unary stateless functions, such as square or log, use TA1. - -Usage: - -autoai_libs.cognito.transforms.transform_utils.TA1(fun, name=None, datatypes=None, feat_constraints=None, tgraph=None, apply_all=True, col_names=None, col_dtypes=None) - - - - Option Description - - fun the function pointer - name a string name that uniquely identifies this transformer from others - datatypes a list of datatypes either of which are valid input to the transformer function (numeric, float, int, and so on) - feat_constraints all constraints, which must be satisfied by a column to be considered a valid input to this transform - tgraph tgraph object must be the starting TGraph( ) object. This parameter is optional and you can pass None, but that can result in some failure to detect some inefficiencies due to lack of caching - apply_all only use applyAll = True. It means that the transformer enumerates all features (or feature sets) that match the specified criteria and apply the provided function to each. - col_names names of the feature columns in a list - col_dtypes list of the datatypes of the feature columns - - - -" -83CD92CDB99DB6263492FAD998E932F50F0F8E99_20,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.cognito.transforms.transform_utils.TA2() - -For binary stateless functions, such as sum, product, use TA2. - -Usage: - -autoai_libs.cognito.transforms.transform_utils.TA2(fun, name, datatypes1, feat_constraints1, datatypes2, feat_constraints2, tgraph=None, apply_all=True, col_names=None, col_dtypes=None) - - - - Option Description - - fun the function pointer - name: a string name that uniquely identifies this transformer from others - datatypes1 a list of datatypes either of which are valid inputs (first parameter) to the transformer function (numeric, float, int, and so on) - feat_constraints1 all constraints, which must be satisfied by a column to be considered a valid input (first parameter) to this transform - datatypes2 a list of data types either of which are valid inputs (second parameter) to the transformer function (numeric, float, int, and so on) - feat_constraints2 all constraints, which must be satisfied by a column to be considered a valid input (second parameter) to this transform - tgraph tgraph object must be the invoking TGraph( ) object. Note this parameter is optional and you can pass None, but that results in some missing inefficiencies due to lack of caching - apply_all only use applyAll = True. It means that the transformer enumerates all features (or feature sets) that match the specified criteria and apply the provided function to each. - col_names names of the feature columns in a list - col_dtypes list of the data types of the feature columns - - - -" -83CD92CDB99DB6263492FAD998E932F50F0F8E99_21,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.cognito.transforms.transform_utils.TB1() - -For unary state-based transformations (with fit/transform) use, such as frequent count. - -Usage: - -autoai_libs.cognito.transforms.transform_utils.TB1(tans_class, name, datatypes, feat_constraints, tgraph=None, apply_all=True, col_names=None, col_dtypes=None) - - - - Option Description - - tans_class a class that implements fit( ) and transform( ) in accordance with the transformation function definition - name a string name that uniquely identifies this transformer from others - datatypes list of datatypes either of which are valid input to the transformer function (numeric, float, int, and so on) - feat_constraints all constraints, which must be satisfied by a column to be considered a valid input to this transform - tgraph tgraph object must be the invoking TGraph( ) object. Note that this is optional and you might pass None, but that results in some missing inefficiencies due to lack of caching - apply_all only use applyAll = True. It means that the transformer enumerates all features (or feature sets) that match the specified criteria and apply the provided function to each. - col_names names of the feature columns in a list. - col_dtypes list of the data types of the feature columns. - - - -" -83CD92CDB99DB6263492FAD998E932F50F0F8E99_22,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.cognito.transforms.transform_utils.TB2() - -For binary state-based transformations (with fit/transform) use, such as group-by. - -Usage: - -autoai_libs.cognito.transforms.transform_utils.TB2(tans_class, name, datatypes1, feat_constraints1, datatypes2, feat_constraints2, tgraph=None, apply_all=True) - - - - Option Description - - tans_class a class that implements fit( ) and transform( ) in accordance with the transformation function definition - name a string name that uniquely identifies this transformer from others - datatypes1 a list of data types either of which are valid inputs (first parameter) to the transformer function (numeric, float, int, and so on) - feat_constraints1 all constraints, which must be satisfied by a column to be considered a valid input (first parameter) to this transform - datatypes2 a list of data types either of which are valid inputs (second parameter) to the transformer function (numeric, float, int, and so on) - feat_constraints2 all constraints, which must be satisfied by a column to be considered a valid input (second parameter) to this transform - tgraph tgraph object must be the invoking TGraph( ) object. This parameter is optional and you might pass None, but that results in some missing inefficiencies due to lack of caching - apply_all only use applyAll = True. It means that the transformer enumerates all features (or feature sets) that match the specified criteria and apply the provided function to each. - - - -" -83CD92CDB99DB6263492FAD998E932F50F0F8E99_23,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.cognito.transforms.transform_utils.TAM() - -For a transform that applies at the data level, such as PCA, use TAM. - -Usage: - -autoai_libs.cognito.transforms.transform_utils.TAM(tans_class, name, tgraph=None, apply_all=True, col_names=None, col_dtypes=None) - - - - Option Description - - tans_class a class that implements fit( ) and transform( ) in accordance with the transformation function definition - name a string name that uniquely identifies this transformer from others - tgraph tgraph object must be the invoking TGraph( ) object. This parameter is optional and you can pass None, but that results in some missing inefficiencies due to lack of caching - apply_all only use applyAll = True. It means that the transformer enumerates all features (or feature sets) that match the specified criteria and apply the provided function to each. - col_names names of the feature columns in a list - col_dtypes list of the datatypes of the feature columns - - - -" -83CD92CDB99DB6263492FAD998E932F50F0F8E99_24,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.cognito.transforms.transform_utils.TGen() - -TGen is a general wrapper and can be used for most functions (might not be most efficient though). - -Usage: - -autoai_libs.cognito.transforms.transform_utils.TGen(fun, name, arg_count, datatypes_list, feat_constraints_list, tgraph=None, apply_all=True, col_names=None, col_dtypes=None) - - - - Option Description - - fun the function pointer - name a string name that uniquely identifies this transformer from others - arg_count number of inputs to the function, in this example it is 1, for binary, it is 2, and so on - datatypes_list a list of arg_count lists that correspond to the acceptable input data types for each parameter. In the previous example, since `arg_count=1``, the result is one list within the outer list, and it contains a single type called 'numeric'. In another case, it might be a specific case 'int' or even more specific 'int64'. - feat_constraints_list a list of arg_count lists that correspond to some constraints that can be imposed on selection of the input features - tgraph tgraph object must be the invoking TGraph( ) object. Note this parameter is optional and you can pass None, but that results in some missing inefficiencies due to lack of caching - apply_all only use applyAll = True. It means that the transformer enumerates all features (or feature sets) that match the specified criteria and apply the provided function to each. - col_names names of the feature columns in a list - col_dtypes list of the data types of the feature columns - - - -" -83CD92CDB99DB6263492FAD998E932F50F0F8E99_25,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.cognito.transforms.transform_utils.FS1() - -Feature selection, type 1 (using pairwise correlation between each feature and target.) - -Usage: - -autoai_libs.cognito.transforms.transform_utils.FS1(cols_ids_must_keep, additional_col_count_to_keep, ptype) - - - - Option Description - - cols_ids_must_keep serial numbers of the columns that must be kept irrespective of their feature importance - additional_col_count_to_keep how many columns need to be retained - ptype classification or regression - - - -" -83CD92CDB99DB6263492FAD998E932F50F0F8E99_26,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.cognito.transforms.transform_utils.FS2() - -Feature selection, type 2. - -Usage: - -autoai_libs.cognito.transforms.transform_utils.FS2(cols_ids_must_keep, additional_col_count_to_keep, ptype, eval_algo) - - - - Option Description - - cols_ids_must_keep serial numbers of the columns that must be kept irrespective of their feature importance - additional_col_count_to_keep how many columns need to be retained - ptype classification or regression - - - -" -83CD92CDB99DB6263492FAD998E932F50F0F8E99_27,83CD92CDB99DB6263492FAD998E932F50F0F8E99," The autoai-ts-libs functions - -The combination of transformers and estimators are designed and chosen for each pipeline by the AutoAI Time Series system. Changing the transformers or the estimators in the generated pipeline notebook can cause unexpected results or even failure. We do not recommend you change the notebook for generated pipelines, thus we do not currently offer the specification of the functions for the autoai-ts-libs library. - -" -83CD92CDB99DB6263492FAD998E932F50F0F8E99_28,83CD92CDB99DB6263492FAD998E932F50F0F8E99," Learn more - -[Selecting an AutoAI model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-view-results.html) - -Parent topic:[Saving an AutoAI generated notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-notebook.html) -" -07A75B90684D731C6B33FC552585D391E86A2A35_0,07A75B90684D731C6B33FC552585D391E86A2A35," Saving an AutoAI generated notebook - -To view the code that created a particular experiment, or interact with the experiment programmatically, you can save an experiment as a notebook. You can also save an individual pipeline as a notebook so that you can review the code that is used in that pipeline. - -" -07A75B90684D731C6B33FC552585D391E86A2A35_1,07A75B90684D731C6B33FC552585D391E86A2A35," Working with AutoAI-generated notebooks - -When you save an experiment or a pipeline as notebook, you can: - - - -* Access the saved notebooks from the Notebooks section on the Assets tab. -* Review the code to understand the transformations applied to build the model. This increases confidence in the process and contributes to explainable AI practices. -* Enter your own authentication credentials by using the template provided. -* Use and run the code within Watson Studio, or download the notebook code to use in another notebook server. No matter where you use the notebook, it automatically installs all required dependencies, including libraries for: - - - -* xgboost -* lightgbm -* scikit-learn -* autoai-libs -* ibm-watson-machine-learning -* snapml - - - -* View the training data used to train the experiment and the test (holdout) data used to validate the experiment. - - - -Notes: - - - -* Auto-generated notebook code excutes successfully as written. Modifying the code or changing the input data can adversely affect the code. If you want to make a significant change, consider retraining the experiment by using AutoAI. -* For more information on the estimators, or algorithms, and transformers that are applied to your data to train an experiment and create pipelines, refer to [Implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html). - - - -" -07A75B90684D731C6B33FC552585D391E86A2A35_2,07A75B90684D731C6B33FC552585D391E86A2A35," Saving an experiment as a notebook - -Save all of the code for an experiment to view the transformations and optimizations applied to create the model pipelines. - -" -07A75B90684D731C6B33FC552585D391E86A2A35_3,07A75B90684D731C6B33FC552585D391E86A2A35," What is included with the experiment notebook - -The experiment notebook provides annotated code so you can: - - - -* Interact with trained model pipelines -* Access model details programmatically (including feature importance and machine learning metrics). -* Visualize each pipeline as a graph, with each node documented, to provide transparency -* Compare pipelines -* Download selected pipelines and test locally -* Create a deployment and score the model -* Get the experiment definition or configuration in Python API, which you can use for automation or integration with other applications. - - - -" -07A75B90684D731C6B33FC552585D391E86A2A35_4,07A75B90684D731C6B33FC552585D391E86A2A35," Saving the code for an experiment - -To save an entire experiment as a notebook: - - - -1. After the experiment completes, click Save code from the Progress map panel. -2. Name your notebook, add an optional description, choose a runtime environment, and save. -3. Click the link in the notification to open the notebook and review the code. You can also open the notebook from the Notebooks section of the Assets tab of your project. - - - -" -07A75B90684D731C6B33FC552585D391E86A2A35_5,07A75B90684D731C6B33FC552585D391E86A2A35," Saving an individual pipeline as a notebook - -Save an individual pipeline as a notebook so you can review the Scikit-Learn source code for the trained model in a notebook. - -Note: Currently, you cannot generate a pipeline notebook for an experiment with joined data sources. - -" -07A75B90684D731C6B33FC552585D391E86A2A35_6,07A75B90684D731C6B33FC552585D391E86A2A35," What is included with the pipeline notebook - -The experiment notebook provides annotated code that you can use to complete these tasks: - - - -* View the Scikit-learn pipeline definition -* See the transformations applied for pipeline training -* Review the pipeline evaluation - - - -" -07A75B90684D731C6B33FC552585D391E86A2A35_7,07A75B90684D731C6B33FC552585D391E86A2A35," Saving a pipeline as a notebook - -To save a pipeline as a notebook: - - - -1. Complete your AutoAI experiment. -2. Select the pipeline that you want to save in the leaderboard, and click Save from the action menu for the pipeline, then Save as notebook. -3. Name your notebook, add an optional description, choose a runtime environment, and save. -4. Click the link in the notification to open the notebook and review the code. You can also open the notebook from the Notebooks section of the Assets tab. - - - -" -07A75B90684D731C6B33FC552585D391E86A2A35_8,07A75B90684D731C6B33FC552585D391E86A2A35," Create sample notebooks - -To see for yourself what AutoAI-generated notebooks look like: - - - -1. Follow the steps in [AutoAI tutorial](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html) to create a binary classification experiment from sample data. -2. After the experiment runs, click Save code in the experiment details panel. -3. Name and save the experiment notebook. -4. To save a pipeline as a model, select a pipeline from the leaderboard, then click Save and Save as notebook. -5. Name and save the pipeline notebook. -6. From Assets tab, open the resulting notebooks in the notebook editor and review the code. - - - -" -07A75B90684D731C6B33FC552585D391E86A2A35_9,07A75B90684D731C6B33FC552585D391E86A2A35," Additional resources - - - -* For details on the methods used in the code, see [Using AutoAI libraris with Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-lib-python.html). -* For more information on AutoAI notebooks, see this [blog post](https://lukasz-cmielowski.medium.com/watson-autoai-can-i-get-the-model-88a0fbae128a). - - - -" -07A75B90684D731C6B33FC552585D391E86A2A35_10,07A75B90684D731C6B33FC552585D391E86A2A35," Next steps - -[Using autoai-lib for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-lib-python.html) - -Parent topic:[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) -" -91EEB0303C78EC7EAA6DAB7921E7173C68FF7769_0,91EEB0303C78EC7EAA6DAB7921E7173C68FF7769," AutoAI Overview - -The AutoAI graphical tool analyzes your data and uses data algorithms, transformations, and parameter settings to create the best predictive model. AutoAI displays various potential models as model candidate pipelines and rank them on a leaderboard for you to choose from. - -Data format : Tabular: CSV files, with comma (,) delimiter for all types of AutoAI experiments. : Connected data from [IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html). - -Note:You can use a data asset that is saved as a Feature Group (beta) but the metadata is not used to populate the AutoAI experiment settings. - -Data size : Up to 1 GB or up to 20 GB. For details, refer to [AutoAI data use](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html?context=cdpaas&locale=enautoai-data-use). - -" -91EEB0303C78EC7EAA6DAB7921E7173C68FF7769_1,91EEB0303C78EC7EAA6DAB7921E7173C68FF7769," AutoAI data use - -These limits are based on the default compute configuration of 8 CPU and 32 GB. - -AutoAI classification and regression experiments: - - - -* You can upload a file up to 1 GB for AutoAI experiments. -* If you connect to a data source that exceeds 1 GB, only the first 1 GB of records is used. - - - -AutoAI time series experiments: - - - -* If the data source contains a timestamp column, AutoAI samples the data at a uniform frequency. For example, data can be in increments of one minute, one hour, or one day. The specified timestamp is used to determine the lookback window to improve the model accuracy. - -Note:If the file size is larger than 1 GB, AutoAi sorts the data in descending time order and only the first 1 GB is used to train the experiment. -* If the data source does not contain a timestamp column, ensure AutoAI samples the data at uniform intervals and sorts the data in ascending time order. An ascending sort order means that the value in the first row is the oldest, and the value in the last row is the most recent. - -Note: If the file size is larger than 1 GB, truncate the file size so it is smaller than 1 GB. - - - -" -91EEB0303C78EC7EAA6DAB7921E7173C68FF7769_2,91EEB0303C78EC7EAA6DAB7921E7173C68FF7769," AutoAI process - -Using AutoAI, you can build and deploy a machine learning model with sophisticated training features and no coding. The tool does most of the work for you. - -To view the code that created a particular experiment, or interact with the experiment programmatically, you can [save an experiment as a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-notebook.html). - -![The AutoAI process takes data from a structured file, prepares the data, selects the model type, and generates and ranks pipelines so you can save and deploy a model.](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai_overview.svg) - -AutoAI automatically runs the following tasks to build and evaluate candidate model pipelines: - - - -* [Data pre-processing](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html?context=cdpaas&locale=enpreprocess) -* [Automated model selection](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html?context=cdpaas&locale=enmodel_selection) -* [Automated feature engineering](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html?context=cdpaas&locale=enfeature_engineering) -* [Hyperparameter optimization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html?context=cdpaas&locale=enhpo_optimization) - - - -" -91EEB0303C78EC7EAA6DAB7921E7173C68FF7769_3,91EEB0303C78EC7EAA6DAB7921E7173C68FF7769," Understanding the AutoAI process - -For additional detail on each of these phases, including links to associated research papers and descriptions of the algorithms applied to create the model pipelines, see [AutoAI implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html). - -" -91EEB0303C78EC7EAA6DAB7921E7173C68FF7769_4,91EEB0303C78EC7EAA6DAB7921E7173C68FF7769," Data pre-processing - -Most data sets contain different data formats and missing values, but standard machine learning algorithms work only with numbers and no missing values. Therefore, AutoAI applies various algorithms or estimators to analyze, clean, and prepare your raw data for machine learning. This technique automatically detects and categorizes values based on features, such as data type: categorical or numerical. Depending on the categorization, AutoAI uses [hyper-parameter optimization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html?context=cdpaas&locale=enhpo_optimization) to determine the best combination of strategies for missing value imputation, feature encoding, and feature scaling for your data. - -" -91EEB0303C78EC7EAA6DAB7921E7173C68FF7769_5,91EEB0303C78EC7EAA6DAB7921E7173C68FF7769," Automated model selection - -AutoAI uses automated model selection to identify the best model for your data. This novel approach tests potential models against small subsets of the data and ranks them based on accuracy. AutoAI then selects the most promising models and increases the size of the data subset until it identifies the best match. This approach saves time and improves performance by gradually narrowing down the potential models based on accuracy. - -For information on how to handle automatically-generated pipelines to select the best model, refer to [Selecting an AutoAI model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-view-results.html). - -" -91EEB0303C78EC7EAA6DAB7921E7173C68FF7769_6,91EEB0303C78EC7EAA6DAB7921E7173C68FF7769," Automated feature engineering - -Feature engineering identifies the most accurate model by transforming raw data into a combination of features that best represent the problem. This unique approach explores various feature construction choices in a structured, nonexhaustive manner, while progressively maximizing model accuracy by using reinforcement learning. This technique results in an optimized sequence of transformations for the data that best match the algorithms of the model selection step. - -" -91EEB0303C78EC7EAA6DAB7921E7173C68FF7769_7,91EEB0303C78EC7EAA6DAB7921E7173C68FF7769," Hyperparameter optimization - -Hyperparameter optimization refines the best performing models. AutoAI uses a novel hyperparameter optimization algorithm for certain function evaluations, such as model training and scoring, that are typical in machine learning. This approach quickly identifies the best model despite long evaluation times at each iteration. - -" -91EEB0303C78EC7EAA6DAB7921E7173C68FF7769_8,91EEB0303C78EC7EAA6DAB7921E7173C68FF7769," Next steps - -[AutoAI tutorial: Build a Binary Classification Model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html) - -Parent topic:[Analyzing data and building models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html) -" -2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4_0,2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4," Creating a text analysis experiment - -Use AutoAI's text analysis feature to perform text analysis of your experiments. For example, perform basic sentiment analysis to predict an outcome based on text comments. - -Note: Text analysis is only available for AutoAI classification and regression experiments. This feature is not available for time series experiments. - -" -2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4_1,2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4," Text analysis overview - -When you create an experiment that uses the text analysis feature, the AutoAI process uses the word2vec algorithm to transform the text into vectors, then compares the vectors to establish the impact on the prediction column. - -The word2vec algorithm takes a corpus of text as input and outputs a set of vectors. By turning text into a numerical representation, it can detect and compare similar words. When trained with enough data, word2vec can make accurate predictions about a word's meaning or relationship to other words. The predictions can be used to analyze text and guess at the meaning in sentiment analysis applications. - -During the feature engineering phase of the experiment training, 20 features are generated for the text column, by using the word2vec algorithm. Auto-detection of text features is based on analyzing the number of unique values in a column and the number of tokens in a record (minimum number = 3). If the number of unique values is less than number of all values divided by 5, the column is not treated as text. - -When the experiment completes, you can review the feature engineering results from the pipeline details page. You can also save a pipeline as a notebook, where you can review the transformations and see a visualization of the transformations. - -Note: When you review the experiment, if you determine that a text column was not detected and processed by the auto-detection, you can specify the text column manually in the experiment settings. - -In this example, the comments for a fictional car rental company are used to train a model that predicts a satisfaction rating when a new comment is entered. - -Watch this short video to see this example and then read further details about the text feature below the video. - -Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. - -This video provides a visual method to learn the concepts and tasks in this documentation. - - - -* Transcript - -Synchronize transcript with video - - - - Time Transcript - - 00:00 In this video you'll see how to create an AutoAI experiment to perform sentiment analysis on a text file. - 00:09 You can use the text feature engineering to perform text analysis in your experiments. - 00:15 For example, perform basic sentiment analysis to predict an outcome based on text comments. -" -2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4_2,2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4," 00:22 Start in a project and add an asset to that project, a new AutoAI experiment. - 00:29 Just provide a name, description, select a machine learning service, and then create the experiment. - 00:38 When the AutoAI experiment builder displays, you can add the data set. - 00:43 In this case, the data set is already stored in the project as a data asset. - 00:48 Select the asset to add to the experiment. - 00:53 Before continuing, preview the data. - 00:56 This data set has two columns. - 00:59 The first contains the customers' comments and the second contains either 0, for ""Not satisfied"", or 1, for ""Satisfied"". - 01:08 This isn't a time series forecast, so select ""No"" for that option. - 01:13 Then select the column to predict, which is ""Satisfaction"" in this example. - 01:19 AutoAI determines that the satisfaction column contains two possible values, making it suitable for a binary classification model. - 01:28 And the positive class is 1, for ""Satisfied"". - 01:32 Open the experiment settings if you'd like to customize the experiment. - 01:36 On the data source panel, you'll see some options for the text feature engineering. - 01:41 You can automatically select the text columns, or you can exercise more control by manually specifying the columns for text feature engineering. - 01:52 You can also select how many vectors to create for each column during text feature engineering. - 01:58 A lower number faster and a higher number is more accurate, but slower. - 02:03 Now, run the experiment to view the transformations and progress. - 02:09 When you create an experiment that uses the text analysis feature, the AutoAI process uses the word2vec algorithm to transform the text into vectors, then compares the vectors to establish the impact on the prediction column. - 02:23 During the feature engineering phase of the experiment training, twenty features are generated for the text column using the word2vec algorithm. - 02:33 When the experiment completes, you can review the feature engineering results from the pipeline details page. - 02:40 On the Features summary panel, you can review the text transformations. -" -2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4_3,2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4," 02:45 You can see that AutoAI created several text features by applying the algorithm function to the column elements, along with the feature importance showing which features contribute most to your prediction output. - 02:59 You can save this pipeline as a model or as a notebook. - 03:03 The notebook contains the code to see the transformations and visualizations of those transformations. - 03:09 In this case, create a model. - 03:13 Use the link to view the model. - 03:16 Now, promote the model to a deployment space. - 03:23 Here are the model details, and from here you can deploy the model. - 03:28 In this case, it will be an online deployment. - 03:36 When that completes, open the deployment. - 03:39 On the test app, you can specify one or more comments to analyze. - 03:46 Then, click ""Predict"". - 03:49 The first customer is predicted not to be satisfied with the service. - 03:54 And the second customer is predicted to be satisfied with the service. - 03:59 Find more videos in the Cloud Pak for Data as a Service documentation. - - - - - -Given a data set that contains a column of review comments for the rental experience (Customer_service), and a column that contains a binary satisfaction rating (Satisfaction) where 0 represents a negative comment and 1 represents a positive comment, the experiment is trained to predict a satisfaction rating when new feedback is entered. - -" -2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4_4,2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4," Training a text transformation experiment - -After you load the data set and specify the prediction column (Satisfaction), the Experiment settings selects the Use text feature engineering option. - -![Data source settings for use text feature engineering](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-text-transform1.png) - -Note some of the details for tuning your text analysis experiment: - - - -* You can accept the default selection of automatically selecting the text columns or you can exercise more control by manually specifying the columns for text feature engineering. -* As the experiment runs, a default of 20 features is generated for the text column by using the word2vec algorithm. You can edit that value to increase or decrease the number of features. The more vectors that you generate the more accurate your model are, but the longer training takess. -* The remainder of the options applies to all types of experiments so you can fine-tune how to handle the final training data. - - - -Run the experiment to view the transformations in progress. - -![Pipeline leaderboard of algorithm](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-text-transform4.png) - -Select the name of a pipeline, then click Feature summary to review the text transformations. - -![Feature summary of individual pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-text-transform2.png) - -You can also save the experiment pipeline as a notebook and review the transformations as a visualization. - -" -2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4_5,2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4," Deploying and scoring a text transformation model - -When you score this model, enter new comments to get a prediction with a confidence score for whether the comment results in a positive or negative satisfaction rating. - -For example, entering the comment ""It took us almost three hours to get a car. It was absurd"" predicts a satisfaction rating of 0 with a confidence score of 95%. - -![Predicting a satisfaction score](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-text-transform3.png) - -" -2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4_6,2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4," Next steps - -[Building a time series forecast experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html) - -Parent topic:[Building an AutoAI model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html) -" -510BB82156702471C527D6EF7E51FE69EF746004_0,510BB82156702471C527D6EF7E51FE69EF746004," Time series implementation details - -These implementation details describe the stages and processing that are specific to an AutoAI time series experiment. - -" -510BB82156702471C527D6EF7E51FE69EF746004_1,510BB82156702471C527D6EF7E51FE69EF746004," Implementation details - -Refer to these implementation and configuration details for your time series experiment. - - - -* [Time series stages](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=ents-stages) for processing an experiment. -* [Time series optimizing metrics](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=ents-metrics) for tuning your pipelines. -* [Time series algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=ents-algorithms) for building the pipelines. -* [Supported date and time formats](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=ents-date-time). - - - -" -510BB82156702471C527D6EF7E51FE69EF746004_2,510BB82156702471C527D6EF7E51FE69EF746004," Time series stages - -An AutoAI time series experiment includes these stages when an experiment runs: - - - -1. [Initialization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=eninitialization) -2. [Pipeline selection](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=enpipeline-selection) -3. [Model evaluation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=enmodel-eval) -4. [Final pipeline generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=enfinal-pipeline) -5. [Backtest](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=enbacktest) - - - -" -510BB82156702471C527D6EF7E51FE69EF746004_3,510BB82156702471C527D6EF7E51FE69EF746004," Stage 1: Initialization - -The initialization stage processes the training data, in this sequence: - - - -* Load the data -* Split the data set L into training data T and holdout data H -* Set the validation, timestamp column handling, and lookback window generation. Notes: - - - -* The training data (T) is equal to the data set (L) minus the holdout (H). When you configure the experiment, you can adjust the size of the holdout data. By default, the size of the holdout data is 20 steps. -* You can optionally specify the timestamp column. -* By default, a lookback window is generated automatically by detecting the seasonal period by using signal processing method. However, if you have an idea of an appropriate lookback window, you can specify the value directly. - - - - - -" -510BB82156702471C527D6EF7E51FE69EF746004_4,510BB82156702471C527D6EF7E51FE69EF746004," Stage 2: Pipeline selection - -The pipeline selection step uses an efficient method called T-Daub (Time Series Data Allocation Using Upper Bounds). The method selects pipelines by allocating more training data to the most promising pipelines, while allocating less training data to unpromising pipelines. In this way, not all pipelines see the complete set of data, and the selection process is typically faster. The following steps describe the process overview: - - - -1. All pipelines are sequentially allocated several small subsets of training data. The latest data is allocated first. -2. Each pipeline is trained on every allocated subset of training data and evaluated with testing data (holdout data). -3. A linear regression model is applied to each pipeline by using the data set described in the previous step. -4. The accuracy score of the pipeline is projected on the entire training data set. This method results in a data set containing the accuracy and size of allocated data for each pipeline. -5. The best pipeline is selected according to the projected accuracy and allotted rank 1. -6. More data is allocated to the best pipeline. Then, the projected accuracy is updated for the other pipelines. -7. The prior two steps are repeated until the top N pipelines are trained on all the data. - - - -" -510BB82156702471C527D6EF7E51FE69EF746004_5,510BB82156702471C527D6EF7E51FE69EF746004," Stage 3: Model evaluation - -In this step, the winning pipelines N are retrained on the entire training data set T. Further, they are evaluated with the holdout data H. - -" -510BB82156702471C527D6EF7E51FE69EF746004_6,510BB82156702471C527D6EF7E51FE69EF746004," Stage 4: Final pipeline generation - -In this step, the winning pipelines are retrained on the entire data set (L) and generated as the final pipelines. - -As the retraining of each pipeline completes, the pipeline is posted to the leaderboard. You can select to inspect the pipeline details or save the pipeline as a model. - -" -510BB82156702471C527D6EF7E51FE69EF746004_7,510BB82156702471C527D6EF7E51FE69EF746004," Stage 5: Backtest - -In the final step, the winning pipelines are retrained and evaluated by using the backtest method. The following steps describe the backtest method: - - - -1. The training data length is determined based on the number of backtests, gap length, and holdout size. To learn more about these parameters, see [Building a time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html). -2. Starting from the oldest data, the experiment is trained by using the training data. -3. Further, the experiment is evaluated on the first validation data set. If the gap length is non-zero, any data in the gap is skipped over. -4. The training data window is advanced by increasing the holdout size and gap length to form a new training set. -5. A fresh experiment is trained with this new data and evaluated with the next validation data set. -6. The prior two steps are repeated for the remaining backtesting periods. - - - -" -510BB82156702471C527D6EF7E51FE69EF746004_8,510BB82156702471C527D6EF7E51FE69EF746004," Time series optimization metrics - -Accept the default metric, or choose a metric to optimize for your experiment. - - - - Metric Description - - Symmetric Mean Absolute Percentage Error (SMAPE) At each fitted point, the absolute difference between actual value and predicted value is divided by half the sum of absolute actual value and predicted value. Then, the average is calculated for all such values across all the fitted points. - Mean Absolute Error (MAE) Average of absolute differences between the actual values and predicted values. - Root Mean Squared Error (RMSE) Square root of the mean of the squared differences between the actual values and predicted values. - R^2^ Measure of how the model performance compares to the baseline model, or mean model. The R^2^ must be equal or less than 1. Negative R^2^ value means that the model under consideration is worse than the mean model. Zero R^2^ value means that the model under consideration is as good or bad as the mean model. Positive R^2^ value means that the model under consideration is better than the mean model. - - - -" -510BB82156702471C527D6EF7E51FE69EF746004_9,510BB82156702471C527D6EF7E51FE69EF746004," Reviewing the metrics for an experiment - -When you view the results for a time series experiment, you see the values for metrics used to train the experiment in the pipeline leaderboard: - -![Reviewing experiment results](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-ts-results.png) - -You can see that the accuracy measures for time-series experiments may vary widely, depending on the experiment data evaluated. - - - -* Validation is the score calculated on training data. -* Holdout is the score calculated on the reserved holdout data. -* Backtest is the mean score from all backtests scores. - - - -" -510BB82156702471C527D6EF7E51FE69EF746004_10,510BB82156702471C527D6EF7E51FE69EF746004," Time series algorithms - -These algorithms are available for your time series experiment. You can use the algorithms that are selected by default, or you can configure your experiment to include or exclude specific algorithms. - - - - Algorithm Description - - ARIMA Autoregressive Integrated Moving Average (ARIMA) model is a typical time series model, which can transform non-stationary data to stationary data through differencing, and then forecast the next value by using the past values, including the lagged values and lagged forecast errors - BATS The BATS algorithm combines Box-Cox Transformation, ARMA residuals, Trend, and Seasonality factors to forecast future values. - Ensembler Ensembler combines multiple forecast methods to overcome accuracy of simple prediction and to avoid possible overfit. - Holt-Winters Uses triple exponential smoothing to forecast data points in a series, if the series is repetitive over time (seasonal). Two types of Holt-Winters models are provided: additive Holt-Winters, and multiplicative Holt-Winters - Random Forest Tree-based regression model where each tree in the ensemble is built from a sample that is drawn with replacement (for example, a bootstrap sample) from the training set. - Support Vector Machine (SVM) SVMs are a type of machine learning models that can be used for both regression and classification. SVMs use a hyperplane to divide the data into separate classes. - Linear regression Builds a linear relationship between time series variable and the date/time or time index with residuals that follow the AR process. - - - -" -510BB82156702471C527D6EF7E51FE69EF746004_11,510BB82156702471C527D6EF7E51FE69EF746004," Supported date and time formats - -The date/time formats supported in time series experiments are based on the definitions that are provided by [dateutil](https://dateutil.readthedocs.io/en/stable/parser.html). - -Supported date formats are: - -Common: - -YYYY -YYYY-MM, YYYY/MM, or YYYYMM -YYYY-MM-DD or YYYYMMDD -mm/dd/yyyy -mm-dd-yyyy -JAN YYYY - -Uncommon: - -YYYY-Www or YYYYWww - ISO week (day defaults to 0) -YYYY-Www-D or YYYYWwwD - ISO week and day - -Numberng for the ISO week and day values follows the same logic as datetime.date.isocalendar(). - -Supported time formats are: - -hh -hh:mm or hhmm -hh:mm:ss or hhmmss -hh:mm:ss.ssssss (Up to 6 sub-second digits) -dd-MMM -yyyy/mm - -Notes: - - - -* Midnight can be represented as 00:00 or 24:00. The decimal separator can be either a period or a comma. -* Dates can be submitted as strings, with double quotation marks, such as ""1958-01-16"". - - - -" -510BB82156702471C527D6EF7E51FE69EF746004_12,510BB82156702471C527D6EF7E51FE69EF746004," Supporting features - -Supporting features, also known as exogenous features, are input features that can influence the prediction target. You can use supporting features to include additional columns from your data set to improve the prediction and increase your model’s accuracy. For example, in a time series experiment to predict prices over time, a supporting feature might be data on sales and promotions. Or, in a model that forecasts energy consumption, including daily temperature makes the forecast more accurate. - -" -510BB82156702471C527D6EF7E51FE69EF746004_13,510BB82156702471C527D6EF7E51FE69EF746004," Algorithms and pipelines that use Supporting features - -Only a subset of algorithms allow supporting features. For example, Holt-winters and BATS do not support the use of supporting features. Algorithms that do not support supporting features ignore your selection for supporting features when you run the experiment. - -Some algorithms use supporting features for certain variations of the algorithm, but not for others. For example, you can generate two different pipelines with the Random Forest algorithm, RandomForestRegressor and ExogenousRandomForestRegressor. The ExogenousRandomForestRegressor variation provides support for supporting features, whereas RandomForestRegressor does not. - -This table details whether an algorithm provides support for Supporting features in a time series experiment: - - - - Algorithm Pipeline Provide support for Supporting features - - Random forest RandomForestRegressor No - Random forest ExogenousRandomForestRegressor Yes - SVM SVM No - SVM ExogenousSVM Yes - Ensembler LocalizedFlattenEnsembler Yes - Ensembler DifferenceFlattenEnsembler No - Ensembler FlattenEnsembler No - Ensembler ExogenousLocalizedFlattenEnsembler Yes - Ensembler ExogenousDifferenceFlattenEnsembler Yes - Ensembler ExogenousFlattenEnsembler Yes - Regression MT2RForecaster No - Regression ExogenousMT2RForecaster Yes - Holt-winters HoltWinterAdditive No - Holt-winters HoltWinterMultiplicative No - BATS BATS No - ARIMA ARIMA No - ARIMA ARIMAX Yes - ARIMA ARIMAX_RSAR Yes - ARIMA ARIMAX_PALR Yes - ARIMA ARIMAX_RAR Yes - ARIMA ARIMAX_DMLR Yes - - - -" -510BB82156702471C527D6EF7E51FE69EF746004_14,510BB82156702471C527D6EF7E51FE69EF746004," Learn more - -[Scoring a time series model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-score.html) - -Parent topic:[Building a time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html) -" -7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_0,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Building a time series experiment - -Use AutoAI to create a time series experiment to predict future activity, such as stock prices or temperatures, over a specified date or time range. - -" -7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_1,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Time series overview - -A time series experiment is a method of forecasting that uses historical observations to predict future values. The experiment automatically builds many pipelines using machine learning models, such as random forest regression and Support Vector Machines (SVMs), as well as statistical time series models, such as ARIMA and Holt-Winters. Then, the experiment recommends the best pipeline according to the pipeline performance evaluated on a holdout data set or backtest data sets. - -Unlike a standard AutoAI experiment, which builds a set of pipelines to completion then ranks them. A time series experiment evaluates pipelines earlier in the process and only completes and test the best-performing pipelines. - -![AutoAI time series pipeline generation process](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ts-pipelines.png) - -For details on the various stages of training and testing a time series experiment, see [Time series implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html). - -" -7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_2,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Predicting anomalies in a time series experiment - -You can configure your time series experiment to predict anomalies (outliers) in your data or predictions. To configure anomaly prediction for your experiment, follow the steps in [Creating a time series anomaly prediction model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap.html). - -" -7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_3,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Using supporting features to improve predictions - -When you configure your time series experiment, you can choose to specify supporting features, also known as exogenous features. Supporting features are features that influence or add context to the prediction target. For example, if you are forecasting ice cream sales, daily temperature would be a logical supporting feature that would make the forecast more accurate. - -" -7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_4,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Leveraging future values for supporting features - -If you know the future values for the supporting features, you can leverage those future values when you deploy the model. For example, if you are training a model to forecast future t-shirt sales, you can include promotional discounts as a supporting feature to enhance the prediction. Inputting the future value of the promotion then makes the forecast more accurate. - -" -7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_5,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Data requirements - -These are the current data requirements for training a time series experiment: - - - -* The training data must be a single file in CSV format. -* The file must contain one or more time series columns and optionally contain a timestamp column. For a list of supported date/time formats, see [AutoAI time series implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html). -* If the data source contains a timestamp column, ensure that the data is sampled at uniform frequency. That is, the difference in timestamps of adjacent rows is the same. For example, data can be in increments of 1 minute, 1 hour, or one day. The specified timestamp is used to determine the lookback window to improve the model accuracy. - -Note:If the file size is larger than 1 GB, sort the data in descending order by the timestamp, and only the first 1 GB is used to train the experiment. -* If the data source does not contain a timestamp column, ensure that the data is sampled at regular intervals and sorted in ascending order according to the sample date/time. That is, the value in the first row is the oldest, and the value in the last row is the most recent. - -Note: If the file size is larger than 1 GB, truncate the file so it is smaller than 1 GB. -* Select what data to use when training the final pipelines. If you choose to include training data only, the generated notebooks will include a cell for retrieving the holdout data used to evaluate each pipeline. - - - -Choose data from your project or upload it from your file system or from the asset browser, then click Continue. Click the preview icon ![AutoAI preview data set icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-preview-icon.png), after the data source name to review your data. Optionally, you can add a second file as holdout data for testing the trained pipelines. - -" -7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_6,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Configuring a time series experiment - -When you configure the details for an experiment, click Yes to Enable time series and complete the experiment details. - - - - Field Description - - Prediction columns The time series columns that you want to predict based on the previous values. You can specify one or more columns to predict. - Date/time column The column that indicates the date/time at which the time series values occur. - Lookback window A parameter that indicates how many previous time series values are used to predict the current time point. - Forecast window The range that you want to predict based on the data in the lookback window. - - - -The prediction summary shows you the experiment type and the metric that is selected for optimizing the experiment. - -" -7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_7,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Configuring experiment settings - -To configure more details for your time series experiment, click Experiment settings. - -" -7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_8,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," General prediction settings - -On the General panel for prediction settings, you can optionally change the metric used to optimize the experiment or specify the algorithms to consider or the number of pipelines to generate. - - - - Field Description - - Prediction type View or change the prediction type based on prediction column for your experiment. For time series experiments, Time series forecast is selected by default.
Note: If you change the prediction type, other prediction settings for your experiment are automatically changed. - Optimized metric View or change the recommended optimized metric for your experiment. - Optimized algorithm selection Not supported for time series experiments. - Algorithms to include Select algorithms based on which you want your experiment to create pipelines. Algorithms and pipelines that support the use of supporting features, are indicated by a checkmark. - Pipelines to complete View or change the number of pipelines to generate for your experiment. - - - -" -7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_9,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Time series configuration details - -On the Time series pane for prediction settings, configure the details for how to train the experiment and generate predictions. - - - - Field Description - - Date/time column View or change the date/time column for the experiment. - Lookback window View or update the number of previous time series values used to predict the current time point. - Forecast window View or update the range that you want to predict based. - - - -" -7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_10,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Configuring data source settings - -To configure details for your input data, click Experiment settings and select Data source. - -" -7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_11,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," General data source settings - -On the General panel for data source settings, you can modify your dataset to interpolate missing values, split your dataset into training and holdout data, and input supporting features. - - - - Field Description - - Duplicate rows Not supported for time series experiments. - Subsample data Not supported for time series experiments. - Text feature engineering Not supported for time series experiments. - Final training data set Select what data to use when training the final pipelines: just the training data or the training and holdout data. If you choose to include training data only, generated notebooks for this experiment will include a cell for retrieving the holdout data used to evaluate each pipeline. - Supporting features Choose additional columns from your data set as Supporting features to support predictions and increase your model’s accuracy. You can also use future values for Supporting features by enabling Leverage future values of supporting features.
Note: You can only use supporting features with selected algorithms and pipelines. For more information on algorithms and pipelines that support the use of supporting features, see [Time series implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html). - Data imputation Use data imputation to replace missing values in your dataset with substituted values. By enabling this option, you can specify how missing values should be interpolated in your data. To learn more about data imputation, see Data imputation in AutoAI experiments. - Training and holdout data Choose to reserve some data from your training data set to test the experiment. Alternatively, upload a separate file of holdout data. The holdout data file must match the schema of the training data. - - - -" -7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_12,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Configuring time series data - -To configure the time series data, you can adjust the settings for the time series data that is related to backtesting the experiment. Backtesting provides a means of validating a time-series model by using historical data. - -In a typical machine learning experiment, you can hold back part of the data randomly to test the resulting model for accuracy. To validate a time series model, you must preserve the time order relationship between the training data and testing data. - -The following steps describe the backtest method: - - - -1. The training data length is determined based on the number of backtests, gap length, and holdout size. To learn more about these parameters, see [Building a time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html?context=cdpaas&locale=en). -2. Starting from the oldest data, the experiment is trained using the training data. -3. The experiment is evaluated on the first validation data set. If the gap length is non-zero, any data in the gap is skipped over. -4. The training data window is advanced by increasing the holdout size and gap length to form a new training set. -5. A fresh experiment is trained with this new data and evaluated with the next validation data set. -6. The prior two steps are repeated for the remaining backtesting periods. - - - -To adjust the backtesting configuration: - - - -1. Open Experiment settings. -2. From Data sources, click the Time series. -3. (Optional): Adjust the settings as shown in the table. - - - - - - Field Description - - Number of backtests Backtesting is similar to cross-validation for date/time periods. Optionally customize the number of backtests for your experiment. - Holdout The size of the holdout set and each validation set for backtesting. The validation length can be adjusted by changing the holdout length. - Gap length The number of time points between the training data set and validation data set for each backtest. When the parameter value is non-zero, the time series values in the gap will not be used to train the experiment or evaluate the current backtest. - - - -" -7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_13,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7,"![Experiment settings on Data Source page](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai_uni_exp_settings.png) - -The visualization for the configuration settings illustrates the backtesting flow. The graphic is interactive, so you can manipulate the settings from the graphic or from the configuration fields. For example, by adjusting the gap length, you can see model validation results on earlier time periods of the data without increasing the number of backtests. - -" -7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_14,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Interpreting the experiment results - -After you run your time series experiment, you can examine the resulting pipelines to get insights into the experiment details. Pipelines that use Supporting features are indicated by SUP enhancement tag to distinguish them from pipelines that don’t use these features. To view details: - - - -* Hover over nodes on the visualization to get details about the pipelines as they are being generated. -* Toggle to the Progress Map view to see a different view of the training process. You can hover over each node in the process for details. -* After the final pipelines are completed and written to the leaderboard, you can click a pipeline to see the performance details. -* Click View discarded pipelines to view the algorithms that are used for the pipelines that are not selected as top performers. -* Save the experiment code as notebook that you can review. -* Save a particular pipeline as a notebook that you can review. - - - -Watch this video to see how to run a time series experiment and create a model in a Jupyter notebook using training and holdout data. - -Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_15,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Next steps - - - -* Follow a step-by-step tutorial to [train a univariate time series model to predict minimum temperatures by using sample data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html). -* Follow a step-by-step tutorial to [train a time series experiment with supporting features](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html). -* Learn about [scoring a deployed time series model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-score.html). -* Learn about using the [API for AutoAI time series experiments](https://lukasz-cmielowski.medium.com/predicting-covid19-cases-with-autoai-time-series-api-f6793acee48d). - - - -" -7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_16,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Additional resources - - - -* For an introduction to forecasting with AutoAI time series experiments, see the blog post [Right on time(series): Introducing Watson Studio’s AutoAI Time Series](https://medium.com/ibm-data-ai/right-on-time-series-introducing-watson-studios-autoai-time-series-5175dbe66154). -* For more information about creating a time series experiment, see this blog post about [creating a new time series experiment](https://medium.com/ibm-data-ai/right-on-time-series-introducing-watson-studios-autoai-time-series-5175dbe66154). -* Read a blog post about [adding supporting features to a time series experiment](https://medium.com/ibm-data-ai/improve-autoai-time-series-forecasts-with-supporting-features-using-ibm-cloud-pak-for-data-as-a-ff24cc85f6b8). -* Review a [sample notebook](https://github.com/IBM/watson-machine-learning-samples/blob/master/cloud/notebooks/python_sdk/experiments/autoai/Use%20AutoAI%20and%20timeseries%20data%20with%20supporting%20features%20to%20predict%20PM2.5.ipynb) for a time series experiment with supporting features. -* Read a blog post about [adding supporting features to a time series experiment using the API](https://medium.com/ibm-data-ai/forecasting-pm2-5-using-autoai-time-series-api-with-supporting-features-12bbad18cb36). - - - -" -7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_17,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Next steps - - - -* [Tutorial: AutoAI univariate time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html) -* [Tutorial: AutoAI supporting features time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html) -* [Time series experiment implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html) -* [Scoring a time series model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-score.html) - - - -Parent topic:[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) -" -163EEB3DBAFF3B01D831F717EEB7487642C93080_0,163EEB3DBAFF3B01D831F717EEB7487642C93080," Troubleshooting AutoAI experiments - -The following list contains the common problems that are known for AutoAI. If your AutoAI experiment fails to run or deploy successfully, review some of these common problems and resolutions. - -" -163EEB3DBAFF3B01D831F717EEB7487642C93080_1,163EEB3DBAFF3B01D831F717EEB7487642C93080," Passing incomplete or outlier input value to deployment can lead to outlier prediction - -After you deploy your machine learning model, note that providing input data that is markedly different from data that is used to train the model can produce an outlier prediction. When linear regression algorithms such as Ridge and LinearRegression are passed an out of scale input value, the model extrapolates the values and assigns a relatively large weight to it, producing a score that is not in line with conforming data. - -" -163EEB3DBAFF3B01D831F717EEB7487642C93080_2,163EEB3DBAFF3B01D831F717EEB7487642C93080," Time Series pipeline with supporting features fails on retrieval - -If you train an AutoAI Time Series experiment by using supporting features and you get the error 'Error: name 'tspy_interpolators' is not defined' when the system tries to retrieve the pipeline for predictions, check to make sure your system is running Java 8 or higher. - -" -163EEB3DBAFF3B01D831F717EEB7487642C93080_3,163EEB3DBAFF3B01D831F717EEB7487642C93080," Running a pipeline or experiment notebook fails with a software specification error - -If supported software specifications for AutoAI experiments change, you might get an error when you run a notebook built with an older software specification, such as an older version of Python. In this case, run the experiment again, then save a new notebook and try again. - -" -163EEB3DBAFF3B01D831F717EEB7487642C93080_4,163EEB3DBAFF3B01D831F717EEB7487642C93080," Resolving an Out of Memory error - -If you get a memory error when you run a cell from an AutoAI generated notebook, create a notebook runtime with more resources for the AutoAI notebook and execute the cell again. - - Notebook for an experiment with subsampling can fail generating predictions - -If you do pipeline refinery to prepare the model, and the experiment uses subsampling of the data during training, you might encounter an “unknown class” error when you run a notebook that is saved from the experiment. - -The problem stems from an unknown class that is not included in the training data set. The workaround is to use the entire data set for training or re-create the subsampling that is used in the experiment. - -To subsample the training data (before fit()), provide sample size by number of rows or by fraction of the sample (as done in the experiment). - - - -* If number of records was used in subsampling settings, you can increase the value of n. For example: - -train_df = train_df.sample(n=1000) -* If subsampling is represented as a fraction of the data set, increase the value of frac. For example: - -train_df = train_df.sample(frac=0.4, random_state=experiment_metadata['random_state']) - - - -" -163EEB3DBAFF3B01D831F717EEB7487642C93080_5,163EEB3DBAFF3B01D831F717EEB7487642C93080," Pipeline creation fails for binary classification - -AutoAI analyzes a subset of the data to determine the best fit for experiment type. If the sample data in the prediction column contains only two values, AutoAI recommends a binary classification experiment and applies the related algorithms. However, if the full data set contains more than two values in the prediction column the binary classification fails and you get an error that indicates that AutoAI cannot create the pipelines. - -In this case, manually change the experiment type from binary to either multiclass, for a defined set of values, or regression, for an unspecified set of values. - - - -1. Click the Reconfigure Experiment icon to edit the experiment settings. -2. On the Prediction page of Experiment Settings, change the prediction type to the one that best matches the data in the prediction column. -3. Save the changes and run the experiment again. - - - -" -163EEB3DBAFF3B01D831F717EEB7487642C93080_6,163EEB3DBAFF3B01D831F717EEB7487642C93080," Next steps - -[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) - -Parent topic:[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) -" -6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10_0,6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10," Tutorial: Create a time series anomaly prediction experiment - -This tutorial guides you through using AutoAI and sample data to train a time series experiment to detect if daily electricity usage values are normal or anomalies (outliers). - -When you set up the sample experiment, you load data that analyzes daily electricity usage from Industry A to determine whether a value is normal or an anomaly. Then, the experiment generates pipelines that use algorithms to label these predicted values as normal or an anomaly. After generating the pipelines, AutoAI chooses the best performers, and presents them in a leaderboard for you to review. - -Tech preview This is a technology preview and is not yet supported for use in production environments. - -" -6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10_1,6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10," Data set overview - -This tutorial uses the Electricity usage anomalies sample data set from the Watson Studio Gallery. This data set describes the annual electricity usages for Industry A. The first column indicates the electricity usages and the second column indicates the date, which is in a day-by-day format. - -![A preview of the Electricity usage anomalies sample data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-ts-ad-dataset-preview.png) - -" -6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10_2,6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10," Tasks overview - -In this tutorial, follow these steps to create an anomaly prediction experiment: - - - -1. [Create an AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap-tutorial.html?context=cdpaas&locale=enstep1) -2. [View the experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap-tutorial.html?context=cdpaas&locale=enstep2) -3. [Review experiment results](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap-tutorial.html?context=cdpaas&locale=enstep3) -4. [Deploy the trained model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap-tutorial.html?context=cdpaas&locale=enstep4) -5. [Test the deployed model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap-tutorial.html?context=cdpaas&locale=enstep5) - - - -" -6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10_3,6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10," Create an AutoAI experiment - -Create an AutoAI experiment and add sample data to your experiment. - - - -1. From the navigation menu ![Navigation menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/navigation-menu.svg), click Projects > View all projects. -2. Open an existing project or [create a new project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) to store the anomaly prediction experiment. -3. On the Assets tab from within your project, click New asset > Build machine learning models automatically. -4. Click Samples > Electricity usage anomalies sample data, then select Next. The AutoAI experiment name and description are pre-populated by the sample data. -5. If prompted, associate a Watson Machine Learning instance with your AutoAI experiment. - - - -1. Click Associate a Machine Learning service instance and select an instance of Watson Machine Learning. -2. Click Reload to confirm your configuration. - - - -6. Click Create. - - - -" -6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10_4,6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10," View the experiment details - -AutoAI pre-populates the details fields for the sample experiment: - -![Anomaly prediction pre-populated detail fields](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-ts-ad-details.png) - - - -* Type series analysis type: Anomaly prediction predicts whether future values in a series are anomalies (outliers). A prediction of 1 indicates a normal value and a prediction of -1 indicates an anomaly. -* Feature column: industry_a_usage is the predicted value and indicates how much electricity Industry A consumes. -* Date/Time column: date indicates the time increments for the experiment. For this experiment, there is one prediction value per day. - -* This experiment is optimized for the model performance metric: Average Precision. Average precision evaluates the performance of object detection and segmentation systems. - - - -Click Run experiment to train the model. The experiment takes several minutes to complete. - -" -6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10_5,6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10," Review the experiment results - -The relationship map shows the transformations that are used to create pipelines. Follow these steps to review experiment results and save the pipeline with the best performance. ![Anomaly prediction relationship map and pipeline leaderboard](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-ts-ad-pipeline-map.png) - - - -1. The leaderboard lists and saves the three best performing pipelines. Click the pipeline name with Rank 1 to review the details of the pipeline. For details on anomaly prediction metrics, see [Creating a time series anomaly prediction experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap.html). -2. Select the pipeline with Rank 1 and Save the pipeline as a model. The model name is pre-populated with the default name. -3. Click Create to confirm your pipeline selection. - - - -" -6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10_6,6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10," Deploy the trained model - -Before the trained model can make predictions on external values, you must deploy the model. Follow these steps to promote your trained model to a deployment space. - - - -1. Deploy the model from the Model details page. To access the Model details page, choose one of these options: - - - -* From the notification displayed when you save the model, click View in project. -* From the project's Assets, select the model’s name in Models. - - - -2. From the Model details page, click Promote to Deployment Space. Then, select or create a deployment space to deploy the model. -3. Select Go to the model in the space after promoting it and click Promote to promote the model. - - - -" -6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10_7,6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10," Testing the model - -After promoting the model to the deployment space, you are ready to test your trained model with new data values. - - - -1. Select New Deployment and create a new deployment with the following fields: - - - -1. Deployment type: Online -2. Name: Electricity usage online deployment - - - -2. Click Create and wait for the status to update to Deployed. -3. After the deployment initializes, click the deployment. Use Test input to manually enter and evaluate values or use JSON input to attach a data set. - -![Anomaly prediction sample input data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-ts-ad-sample-cases.png) -4. Click Predict to see whether there are any anomalies in the values. - -Note:-1 indicates an anomaly; 1 indicates a normal value. - - - -![Anomaly prediction results table and chart](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-ts-ad-predicted-results.png) - -" -6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10_8,6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10," Next steps - -[Building a time series forecast experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html) -" -B23F48A4757500FEA641245CFFA69CB3B72AE0E8_0,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Creating a time series anomaly prediction (Beta) - -Create a time series anomaly prediction experiment to train a model that can detect anomalies, or unexpected results, when the model predicts results based on new data. - -Tech preview This is a technology preview and is not yet supported for use in production environments. - -" -B23F48A4757500FEA641245CFFA69CB3B72AE0E8_1,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Detecting anomalies in predictions - -You can use anomaly prediction to find outliers in model predictions. Consider the following scenarios for training a time series model with anomaly prediction. For example, suppose you have operational metrics from monitoring devices that were collected in the date range of 2022.1.1 through 2022.3.31. You are confident that no anomalies exist in the data for that period, even if the data is unlabeled. You can use a time series anomaly prediction experiment to: - - - -* Train model candidate pipelines and auto-select the top-ranked model candidate -* Deploy a selected model to predict new observations if: - - - -* A new time point is an anomaly (for example, an online score predicts a time point 2022.4.1 that is outside of the expected range) -* A new time range has anomalies (for example, a batch score predicts values of 2022.4.1 to 2022.4.7, outside the expected range) - - - - - -" -B23F48A4757500FEA641245CFFA69CB3B72AE0E8_2,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Working with a sample - -To create an AutoAI Time series experiment with anomaly prediction that uses a sample: - - - -1. Create an AutoAI experiment. -2. Select Samples. - -![Select the Samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-ts-ad1.png) -3. Click the tile for Electricity usage anomalies sample data. -4. Follow the prompts to configure and run the experiment. - -![Samples output](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-ts-ad2.png) -5. Review the details about the pipelines and explore the visualizations. - - - -" -B23F48A4757500FEA641245CFFA69CB3B72AE0E8_3,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Configuring a time series experiment with anomaly prediction - - - -1. Load the data for your experiment. - -Restriction: You can upload only a single data file for an anomaly prediction experiment. If you upload a second data file (for holdout data) the Anomaly prediction option is disabled, and only the Forecast option is available. By default, Anomaly prediction experiments use a subset of the training data for validation. -2. Click Yes to Enable time series. -3. Select Anomaly prediction as the experiment type. -4. Configure the feature columns from the data source that you want to predict based on the previous values. You can specify one or more columns to predict. -5. Select the date/time column. - - - -The prediction summary shows you the experiment type and the metric that is selected for optimizing the experiment. - -" -B23F48A4757500FEA641245CFFA69CB3B72AE0E8_4,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Configuring experiment settings - -To configure more details for your time series experiment, open the Experiment settings pane. Options that are not available for anomaly prediction experiments are unavailable. - -" -B23F48A4757500FEA641245CFFA69CB3B72AE0E8_5,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," General prediction settings - -On the General panel for prediction settings, configure details for training the experiment. - - - - Field Description - - Prediction type View or change the prediction type based on prediction column for your experiment. For time series experiments, Time series anomaly prediction is selected by default. Note: If you change the prediction type, other prediction settings for your experiment are automatically changed. - Optimized metric Choose a metric for optimizing and ranking the pipelines. - Optimized algorithm selection Not supported for time series experiments. - Algorithms to include Select [algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap.html?context=cdpaas&locale=enimplementation) based on which you want your experiment to create pipelines. The algorithms support anomaly prediction. - Pipelines to complete View or change the number of pipelines to generate for your experiment. - - - -" -B23F48A4757500FEA641245CFFA69CB3B72AE0E8_6,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Time series configuration details - -On the Time series pane for prediction settings, configure the details for how to train the experiment and generate predictions. - - - - Field Description - - Date/time column View or change the date/time column for the experiment. - Lookback window Not supported for anomaly prediction. - Forecast window Not supported for anomaly prediction. - - - -" -B23F48A4757500FEA641245CFFA69CB3B72AE0E8_7,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Configuring data source settings - -To configure details for your input data, open the Experiment settings panel and select the Data source. - -" -B23F48A4757500FEA641245CFFA69CB3B72AE0E8_8,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," General data source settings - -On the General panel for data source settings, you can choose options for how to use your experiment data. - - - - Field Description - - Duplicate rows Not supported for time series anomaly prediction experiments. - Subsample data Not supported for time series anomaly prediction experiments. - Text feature engineering Not supported for time series anomaly prediction experiments. - Final training data set Anomaly prediction uses a single data source file, which is the final training data set. - Supporting features Not supported for time series anomaly prediction experiments. - Data imputation Not supported for time series anomaly prediction experiments. - Training and holdout data Anomaly prediction does not support a separate holdout file. You can adjust how the data is split between training and holdout data. Note: In some cases, AutoAI can overwrite your holdout settings to ensure the split is valid for the experiment. In this case, you see a notification and the change is noted in the log file. - - - -" -B23F48A4757500FEA641245CFFA69CB3B72AE0E8_9,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Reviewing the experiment results - -When you run the experiment, the progress indicator displays the pathways to pipeline creation. Ranked pipelines are listed on the leaderboard. Pipeline score represents how well the pipeline performed for the optimizing metric. - -The Experiment summary tab displays a visualization of how metrics performed for the pipeline. - - - -* Use the metric filter to focus on particular metrics. -* Hover over the name of a metric to view details. - - - -Click a pipeline name to view details. On the Model evaluation page, you can review a table that summarizes details about the pipeline. - -![Model evaluation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-ts-ad3.png) - - - -* The rows represent five evaluation metrics: Area under ROC, Precision, Recall, F1, Average precision. -* The columns represent four synthesized anomaly types: Level shift, Trend, Localized extreme, Variance. -* Each value in a cell is an average of the metric based on three iterations of evaluation on the synthesized anomaly type. - - - -" -B23F48A4757500FEA641245CFFA69CB3B72AE0E8_10,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Evaluation metrics: - -These metrics are used to evaluate a pipeline: - - - - Metric Description - - Aggregate score (Recommended) This score is calculated based on an aggregation of the optimized metric (for example, Average precision) values for the 4 anomaly types. The scores for each pipeline are ranked, using the Borda count method, and then weighted for their contribution to the aggregate score. Unlike a standard metric score, this value is not between 0 and 1. A higher value indicates a stronger score. - ROC AUC Measure of how well a parameter can distinguish between two groups. - F1 Harmonic average of the precision and recall, with best value of 1 (perfect precision and recall) and worst at 0. - Precision Measures the accuracy of a prediction based on percent of positive predictions that are correct. - Recall Measures the percentage of identified positive predictions against possible positives in data set. - - - -" -B23F48A4757500FEA641245CFFA69CB3B72AE0E8_11,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Anomaly types - -These are the anomaly types AutoAI detects. - - - - Anomaly type Description - - Localized extreme anomaly An unusual data point in a time series, which deviates significantly from the data points around it. - Level shift anomaly A segment in which the mean value of a time series is changed. - Trend anomaly A segment of time series, which has a trend change compared to the time series before the segment. - Variance anomaly A segment of time series in which the variance of a time series is changed. - - - -" -B23F48A4757500FEA641245CFFA69CB3B72AE0E8_12,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Saving a pipeline as a model - -To save a model candidate pipeline as a machine learning model, select Save as model for the pipeline you prefer. The model is saved as a project asset. You can promote the model to a space and create a deployment for it. - -" -B23F48A4757500FEA641245CFFA69CB3B72AE0E8_13,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Saving a pipeline as a notebook - -To review the code for a pipeline, select Save as notebook for a pipeline. An automatically generated notebook is saved as a project asset. Review the code to explore how the pipeline was generated. - -For details on the methods used in the pipeline code, see the documentation for the [autoai-ts-libs library](https://pypi.org/project/autoai-ts-libs/). - -" -B23F48A4757500FEA641245CFFA69CB3B72AE0E8_14,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Scoring the model - -After you save a pipeline as a model, then promote the model to a space, you can score the model to generate predictions for input, or payload, data. Scoring the model and interpreting the results is similar to scoring a binary classification model, as the score presents one of two possible values for each prediction: - - - -* 1 = no anomaly detected -* -1 = anomaly detected - - - -" -B23F48A4757500FEA641245CFFA69CB3B72AE0E8_15,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Deployment details - -Note these requirements for deploying an anomaly prediction model. - - - -* The schema for the deplyment input data must match the schema for the training data except for the prediction, or target column. -* The order of the fields for model scoring must be the same as the order of the fields in the training data schema. - - - -" -B23F48A4757500FEA641245CFFA69CB3B72AE0E8_16,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Deployment example - -The following is valid input for an anomaly prediction model: - -{ -""input_data"": [ -{ -""id"": ""observations"", -""values"": -12,34], -22,23], -35,45], -46,34] -] -} -] -} - -The score for this input is [1,1,-1,1] where -1 means the value is an anomaly and 1 means the prediction is in the normal range. - -" -B23F48A4757500FEA641245CFFA69CB3B72AE0E8_17,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Implementation details - -These algorithms support anomaly prediction in time series experiments. - - - - Algorithm Type Transformer - - Pipeline Name Algorithm Type Transformer - PointwiseBoundedHoltWintersAdditive Forecasting N/A - PointwiseBoundedBATS Forecasting N/A - PointwiseBoundedBATSForceUpdate Forecasting N/A - WindowNN Window Flatten - WindowPCA Relationship Flatten - WindowLOF Window Flatten - - - -The algorithms are organized in these categories: - - - -* Forecasting: Algorithms for detecting anomalies using time series forecasting methods -* Relationship: Algorithms for detecting anomalies by analyzing the relationship among data points -* Window: Algorithms for detecting anomalies by applying transformations and ML techniques to rolling windows - - - -" -B23F48A4757500FEA641245CFFA69CB3B72AE0E8_18,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Learn more - -[Saving an AutoAI generated notebook (Watson Machine Learning)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-notebook.html) - -Parent topic:[Building a time series experiment ](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html) -" -AD76780EA50A0FB37454A3A03FF08CA0AD39EF19_0,AD76780EA50A0FB37454A3A03FF08CA0AD39EF19," Scoring a time series model - -After you save an AutoAI time series pipeline as a model, you can deploy and score the model to forecast new values. - -" -AD76780EA50A0FB37454A3A03FF08CA0AD39EF19_1,AD76780EA50A0FB37454A3A03FF08CA0AD39EF19," Deploying a time series model - -After you save a model to a project, follow the steps to deploy the model: - - - -1. Find the model in the project asset list. -2. Promote the model to a deployment space. -3. Promote payload data to the deployment space. -4. From the deployment space, create a deployment. - - - -" -AD76780EA50A0FB37454A3A03FF08CA0AD39EF19_2,AD76780EA50A0FB37454A3A03FF08CA0AD39EF19," Scoring considerations - -To this point, deploying a time series model follows the same steps as deploying a classification or regression model. However, because of the way predictions are structured and generated in a time series model, your input must match your model structure. For example, the way you structure your payload depends on whether you are predicting a single result (univariate) or multiple results (multivariate). - -Note these high-level considerations: - - - -* To get the first forecast window row or rows after the last row in your data, send an empty payload. -* To get the next value, send the result from the empty payload request as your next scoring request, and so on. -* You can send multiple rows as input, to build trends and predict the next value after a trend. -* If you have multiple prediction columns, you need to include a value for each of them in your scoring request - - - -" -AD76780EA50A0FB37454A3A03FF08CA0AD39EF19_3,AD76780EA50A0FB37454A3A03FF08CA0AD39EF19," Scoring an online deployment - -If you create an online deployment, you can pass the payload data by using an input form or by submitting JSON code. This example shows how to structure the JSON code to generate predictions. - -" -AD76780EA50A0FB37454A3A03FF08CA0AD39EF19_4,AD76780EA50A0FB37454A3A03FF08CA0AD39EF19," Predicting a single value - -In the simplest case, given this sample data, you are trying to forecast the next step of value1 with a forecast window of 1, meaning each prediction will be a single step (row). - - - - timestamp value1 - - 2015-02026 21:42 2 - 2015-02026 21:47 4 - 2015-02026 21:52 6 - 2015-02026 21:57 8 - 2015-02026 22:02 10 - - - -You must pass a blank entry as the input data to request the first prediction, which is structured like this: - -{ -""input_data"": [ -{ -""fields"": -""value1"" -], -""values"": ] -} -] -} - -The output that is returned predicts the next step in the model: - -{ -""predictions"": [ -{ -""fields"": -""prediction"" -], -""values"": - -12 -] -] -} -] -} - -The next input passes the result of the previous output to predict the next step: - -{ -""input_data"": [ -{ -""fields"": -""value1"" -], -""values"": -12] -] -} -] -} - -" -AD76780EA50A0FB37454A3A03FF08CA0AD39EF19_5,AD76780EA50A0FB37454A3A03FF08CA0AD39EF19," Predicting multiple values - -In this case, you are predicting two targets, value1 and value2. - - - - timestamp value1 value2 - - 2015-02026 21:42 2 1 - 2015-02026 21:47 4 3 - 2015-02026 21:52 6 5 - 2015-02026 21:57 8 7 - 2015-02026 22:02 10 9 - - - -The input data must still pass a blank entry to request the first prediction. The next input would be structured like this: - -{ -""input_data"": [ -{ -""fields"": -""value1"", -""value2"" -], -""values"": -2, 1], -] -} -] -} - -" -AD76780EA50A0FB37454A3A03FF08CA0AD39EF19_6,AD76780EA50A0FB37454A3A03FF08CA0AD39EF19," Predicting based on new observations - -If instead of predicting the next row based on the prior step you want to enter new observations, enter the input data like this for a univariate model: - -{ -""input_data"": [ -{ -""fields"": -""value1"" -], -""values"": -2], -4], -6] -] -} -] -} - -Enter new observations like this for a multivariate model: - -{ -""input_data"": [ -{ -""fields"": -""value1"", -""value2"" -], -""values"": -2, 1], -4, 3], -6, 5] -] -} -] -} - -Where 2, 4, and 6 are observations for value1 and 1, 3, 5 are observations for value2. - -" -AD76780EA50A0FB37454A3A03FF08CA0AD39EF19_7,AD76780EA50A0FB37454A3A03FF08CA0AD39EF19," Scoring a time series model with Supporting features - -After you deploy your model, you can go to the page detailing your deployment to get prediction values. Choose one of the following ways to test your deployment: - -" -AD76780EA50A0FB37454A3A03FF08CA0AD39EF19_8,AD76780EA50A0FB37454A3A03FF08CA0AD39EF19," Using existing input values - -You can use existing input values in your data set to obtain prediction values. Click Predict to obtain a set of prediction values. The total number of prediction values in the output is defined by prediction horizon that you previously set during the experiment configuration stage. - -" -AD76780EA50A0FB37454A3A03FF08CA0AD39EF19_9,AD76780EA50A0FB37454A3A03FF08CA0AD39EF19," Using new input values - -You can choose to populate the spreadsheet with new input values or use JSON code to obtain a prediction. - -" -AD76780EA50A0FB37454A3A03FF08CA0AD39EF19_10,AD76780EA50A0FB37454A3A03FF08CA0AD39EF19," Using spreadsheet to provide new input data for predicting values - -To add input data to the New observations (optional) spreadsheet, select the Input tab and do one of the following: - - - -* Add pre-existing .csv file containing new observations from your local directory by clicking Browse local files. -* Download the input file template by clicking Download CSV template, enter values, and upload the file. -* Use an existing data asset from your project by clicking Search in space. -* Manually enter input observations in the spreadsheet. - - - -You can also provide future values for Supporting features if you previously enabled your experiment to leverage these values during the experiment configuration stage. Make sure to add these values to the Future supporting features (optional) spreadsheet. - -" -AD76780EA50A0FB37454A3A03FF08CA0AD39EF19_11,AD76780EA50A0FB37454A3A03FF08CA0AD39EF19," Using JSON code to provide input data - -To add input data using JSON code, select the Paste JSON tab and do one of the following: - - - -* Add pre-existing JSON file containing new observations from your local directory by clicking Browse local files. -* Use an existing data asset from your project by clicking Search in space. -* Manually enter or paste JSON code into the editor. - - - -In this code sample, the prediction column is pollution, and the supporting features are temp and press. - -{ -""input_data"": [ -{ -""id"": ""observations"", -""values"": - -96.125, -3.958, -1026.833 -] -] -}, -{ -""id"": ""supporting_features"", -""values"": - -3.208, -1020.667 -] -] -} -] -} - -" -AD76780EA50A0FB37454A3A03FF08CA0AD39EF19_12,AD76780EA50A0FB37454A3A03FF08CA0AD39EF19," Next steps - -[Saving an AutoAI generated notebook (Watson Machine Learning)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-notebook.html) - -Parent topic:[Building a time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html) -" -99843122C08D0D70ED3694A57482595E35FB0D8B_0,99843122C08D0D70ED3694A57482595E35FB0D8B," Tutorial: AutoAI multivariate time series experiment with Supporting features - -Use sample data to train a multivariate time series experiment that predicts pollution rate and temperature with the help of supporting features that influence the prediction fields. - -When you set up the experiment, you load sample data that tracks weather conditions in Beijing from 2010 to 2014. The experiment generates a set of pipelines that use algorithms to predict future pollution and temperature with supporting features, including dew, pressure, snow, and rain. After generating the pipelines, AutoAI compares and tests them, chooses the best performers, and presents them in a leaderboard for you to review. - -" -99843122C08D0D70ED3694A57482595E35FB0D8B_1,99843122C08D0D70ED3694A57482595E35FB0D8B," Data set overview - -For this tutorial, you use the [Beijing PM 2.5](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/56e40ca77f9a72b0ab65b2c7938a99e2) data set from the Samples. This data set describes the weather conditions in Beijing from 2010 to 2014, which are measured in 1-day steps, or increments. You use this data set to configure your AutoAI experiment and select Supporting features. Details about the data set are described here: - - - -* Each column, other than the date column, represents a weather condition that impacts pollution index. -* The Samples entry shows the origin of the data. You can preview the file before you download the file. -* The sample data is structured in rows and columns and saved as a .csv file. - - - -![Data set preview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-sup-dataset-preview.png) - -" -99843122C08D0D70ED3694A57482595E35FB0D8B_2,99843122C08D0D70ED3694A57482595E35FB0D8B," Tasks overview - -In this tutorial, you follow steps to create a multivariate time series experiment that uses Supporting features: - - - -1. [Create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html?context=cdpaas&locale=enstep1) -2. [Create an AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html?context=cdpaas&locale=enstep2) -3. [Configure the experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html?context=cdpaas&locale=enstep3) -4. [Review experiment results](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html?context=cdpaas&locale=enstep4) -5. [Deploy the trained model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html?context=cdpaas&locale=enstep5) -6. [Test the deployed model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html?context=cdpaas&locale=enstep6) - - - -" -99843122C08D0D70ED3694A57482595E35FB0D8B_3,99843122C08D0D70ED3694A57482595E35FB0D8B," Create a project - -Follow these steps to create an empty project and download the [Beijing PM 2.5](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/56e40ca77f9a72b0ab65b2c7938a99e2) data set from the IBM watsonx Samples: - - - -1. From the main navigation pane, click Projects > View all projects, then click New Project. -a. Click Create an empty project. -b. Enter a name and optional description for your project. -c. Click Create. -2. From the main navigation panel, click Samples and download a local copy of the [Beijing PM 2.5](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/56e40ca77f9a72b0ab65b2c7938a99e2) data set. - - - -" -99843122C08D0D70ED3694A57482595E35FB0D8B_4,99843122C08D0D70ED3694A57482595E35FB0D8B," Create an AutoAI experiment - -Follow these steps to create an AutoAI experiment and add sample data to your experiment: - - - -1. On the Assets tab from within your project, click New asset > Build machine learning models automatically. -2. Specify a name and optional description for your experiment. -3. Associate a machine learning service instance with your experiment. -4. Choose an environment definition of 8 vCPU and 32 GB RAM. -5. Click Create. -6. To add sample data, choose one of the these methods: - - - -* If you downloaded your file locally, upload the training data file, PM25.csv by clicking Browse and then following the prompts. -* If you already uploaded your file to your project, click Select from project, then select the Data asset tab and choose Beijing PM 25.csv. - - - - - -" -99843122C08D0D70ED3694A57482595E35FB0D8B_5,99843122C08D0D70ED3694A57482595E35FB0D8B," Configure the experiment - -Follow these steps to configure your multivariate AutoAI time series experiment: - - - -1. Click Yes for the option to create a Time Series Forecast. -2. Choose as prediction columns: pollution, temp. -3. Choose as the date/time column: date. - -![Configuring experiment settings. Yes, to time series forecast and pollution and temp as the prediction columns with date as the date/time column.](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-sup-run-config.png) -4. Click Experiment settings to configure the experiment: -a. In the Prediction page, accept the default selection for Algorithms to include. Algorithms that allow you to use Supporting features are indicated by a checkmark in the column Allows supporting features. -![Configuring experiment settings. Algorithms that support the use of Supporting features](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-sup-predict-general.JPG) - -b. Go to the Data Source page. For this tutorial, you will supply future values of Supporting features while testing. Future values are helpful when values for the supporting features are knowable for the prediction horizon. Accept the default enablement for Leverage future values of supporting features. Additionally, accept the default selection for columns that will be used as Supporting features. -![Supporting features](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-sup-features.JPG) -c. Click Cancel to exit from Experiment settings. -5. Click Run experiment to begin the training. - - - -" -99843122C08D0D70ED3694A57482595E35FB0D8B_6,99843122C08D0D70ED3694A57482595E35FB0D8B," Review experiment results - -The experiment takes several minutes to complete. As the experiment trains, the relationship map shows the transformations that are used to create pipelines. Follow these steps to review experiment results and save the pipeline with the best performance. - - - -1. Optional: Hover over any node in the relationship map to get details on the transformation for a particular pipeline. - -![Relationship map](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-sup-rel-map.png) -2. Optional: After the pipelines are listed on the leaderboard, click Pipeline comparison to see how they differ. For example: - -![Pipeline comparison](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-sup-pipeline-comparison.JPG) -3. When the training completes, the top three best performing pipelines are saved to the leaderboard. Click any pipeline name to review details. - -Note: Pipelines that use Supporting features are indicated by SUP enhancement. - -![Pipeline leaderboard](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-sup-pipeline-leaderboard.png) -4. Select the pipeline with Rank 1 and click Save as to create your model. Then, click Create. This action saves the pipeline under the Models section in the Assets tab. - - - -" -99843122C08D0D70ED3694A57482595E35FB0D8B_7,99843122C08D0D70ED3694A57482595E35FB0D8B," Deploy the trained model - -Before you can use your trained model to make predictions on new data, you must deploy the model. Follow these steps to promote your trained model to a deployment space: - - - -1. You can deploy the model from the model details page. To access the model details page, choose one of these options: - - - -* Click the model’s name in the notification that is displayed when you save the model. -* Open the Assets page for the project that contains the model and click the model’s name in the Machine Learning Model section. - - - -2. Select Promote to Deployment Space, then select or create a deployment space where the model will be deployed. -Optional: Follow these steps to create a deployment space: -a. From the Target space list, select Create a new deployment space. -b. Enter a name for your deployment space. -c. To associate a machine learning instance, go to Select machine learning service (optional) and select a machine learning instance from the list. -d. Click Create. - -3. Once you select or create your space, click Promote. -4. Click the deployment space link from the notification. -5. From the Assets tab of the deployment space: -a. Hover over the model’s name and click the deployment icon ![Deploy icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/deploy-icon.png). -b. In the page that opens, complete the fields: - - - -* Select Online as the Deployment type. - -* Specify a name for the deployment. - -* Click Create. - - - - - -After the deployment is complete, click the Deployments tab and select the deployment name to view the details page. - -" -99843122C08D0D70ED3694A57482595E35FB0D8B_8,99843122C08D0D70ED3694A57482595E35FB0D8B," Test the deployed model - -Follow these steps to test the deployed model from the deployment details page: - - - -1. On the Test tab of the deployment details page, go to New observations (optional) spreadsheet and enter the following values: -pollution (double): 80.417 -temp (double): -5.5 -dew (double): -7.083 -press (double): 1020.667 -wnd_spd (double): 9.518 -snow (double): 0 -rain (double): 0 - -![New observations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-sup-new-obs.JPG) - -2. To add future values of Supporting features, go to Future exogenous features (optional) spreadsheet and enter the following values: -dew (double): -12.667 -press (double): 1023.708 -wnd_spd (double): 9.518 -snow (double): 0 -rain (double): 0.042 - -Note: You must provide the same number of values for future exogenous features as the prediction horizon that you set during experiment configuration stage. - -![Future exogenous values](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-sup-future-exogenous.JPG) - -3. Click Predict. The resulting prediction indicates values for pollution and temperature. - -Note: Prediction values that are shown in the output might differ when you test your deployment. - -![Resulting prediction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-sup-output.JPG) - - - -" -99843122C08D0D70ED3694A57482595E35FB0D8B_9,99843122C08D0D70ED3694A57482595E35FB0D8B," Learn more - -Parent topic:[Building a time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html) -" -3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865_0,3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865," Tutorial: AutoAI univariate time series experiment - -Use sample data to train a univariate (single prediction column) time series experiment that predicts minimum daily temperatures. - -When you set up the experiment, you load data that tracks daily minimum temperatures for the city of Melbourne, Australia. The experiment will generate a set of pipelines that use algorithms to predict future minimum daily temperatures. After generating the pipelines, AutoAI compares and tests them, chooses the best performers, and presents them in a leaderboard for you to review. - -" -3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865_1,3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865," Data set overview - -The [Mini_Daily_Temperatures](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/de4d953f2a766fbc0469723eba0d93ef) data set describes the minimum daily temperatures over 10 years (1981-1990) in the city Melbourne, Australia. The units are in degrees celsius and the data set contains 3650 observations. The source of the data is the Australian Bureau of Meteorology. Details about the data set are described here: - -![Daily Min Temperature Spreadsheet](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai_uni_table.png) - - - -* You will use the Min_Temp column as the prediction column to build pipelines and forecast the future daily minimum temperatures. Before the pipeline training, the date column and Min_Temp column are used together to figure out the appropriate lookback window. -* The prediction column forecasts a prediction for the daily minimum temperature on a specified day. -* The sample data is structured in rows and columns and saved as a .csv file. - - - -" -3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865_2,3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865," Tasks overview - -In this tutorial, you follow these steps to create a univariate time series experiment: - - - -1. [Create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=enstep0) -2. [Create an AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=enstep1) -3. [Configure the experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=enstep2) -4. [Review experiment results](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=enstep3) -5. [Deploy the trained model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=enstep4) -6. [Test the deployed model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=enstep5) - - - -" -3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865_3,3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865," Create a project - -Follow these steps to download the [Mini_Daily_Temperatures](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/de4d953f2a766fbc0469723eba0d93ef) data set from the Samples and create an empty project: - - - -1. From the navigation menu ![Navigation menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/navigation-menu.svg), click Samples and download a local copy of the [Mini_Daily_Temperatures](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/de4d953f2a766fbc0469723eba0d93ef) data set. -2. From the navigation menu ![Navigation menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/navigation-menu.svg), click Projects > View all projects, then click New Project. - - - -1. Click Create an empty project. -2. Enter a name and optional description for your project. -3. Click Create. - - - - - -" -3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865_4,3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865," Create an AutoAI experiment - -Follow these steps to create an AutoAI experiment and add sample data to your experiment: - - - -1. On the Assets tab from within your project, click New asset > Build machine learning models automatically. -2. Specify a name and optional description for your experiment, then select Create. -3. Select Associate a Machine Learning service instance to create a new service instance or associate an existing instance with your project. Click Reload to confirm your configuration. -4. Click Create. -5. To add the sample data, choose one of the these methods: - - - -* If you downloaded your file locally, upload the training data file, Daily_Min_Temperatures.csv, by clicking Browse and then following the prompts. - -* If you already uploaded your file to your project, click Select from project, then select the Data asset tab and choose Daily_Min_Temperatures.csv. - - - - - -" -3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865_5,3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865," Configure the experiment - -Follow these steps to configure your univariate AutoAI time series experiment: - - - -1. Click Yes for the option to create a Time Series Forecast. -2. Choose as prediction columns: Min_Temp. -3. Choose as the date/time column: Date. - -![Configuring experiment settings. Yes to time series forecast and min temp as the prediction column with Date as the date/time column.](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai_uni_run_configuration.png) -4. Click Experiment settings to configure the experiment: - - - -1. In the Data source page, select the Time series tab. - -2. For this tutorial, accept the default value for Number of backtests (4), Gap length (0 steps), and Holdout length (20 steps). - -Note: The validation length changes if you change the value of any of the parameters: Number of backtests, Gap length, or Holdout length. - -c. Click Cancel to exit from the Experiment settings. - - - -![Experiment settings on Data Source page](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai_uni_exp_settings.png) -5. Click Run experiment to begin the training. - - - -" -3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865_6,3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865," Review experiment results - -The experiment takes several minutes to complete. As the experiment trains, a visualization shows the transformations that are used to create pipelines. Follow these steps to review experiment results and save the pipeline with the best performance. - - - -1. (Optional): Hover over any node in the visualization to get details on the transformation for a particular pipeline. - -![Experiment summary generating pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai_uni_pipeline_build.png) -2. (Optional): After the pipelines are listed on the leaderboard, click Pipeline comparison to see how they differ. For example: - -![Metric chart of pipeline comparison](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai_uni_pipeline_comparison.png) -3. (Optional): When the training completes, the top three best performing pipelines are saved to the leaderboard. Click View discarded pipelines to review pipelines with the least performance. - -![Ranked pipeline leaderboard based on accuracy](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai_uni_pipeline_leaderboard.png) -4. Select the pipeline with Rank 1 and click Save as to create your model. Then, select Create. This action saves the pipeline under the Models section in the Assets tab. - - - -" -3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865_7,3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865," Deploy the trained model - -Before you can use your trained model to make predictions on new data, you must deploy the model. Follow these steps to promote your trained model to a deployment space: - - - -1. You can deploy the model from the model details page. To access the model details page, choose one of the these methods: - - - -* Click the model’s name in the notification that is displayed when you save the model. -* Open the Assets page for the project that contains the model and click the model’s name in the Machine Learning Model section. - - - -2. Click Promote to Deployment Space, then select or create a deployment space where the model will be deployed. -(Optional): To create a deployment space, follow these steps: - - - -1. From the Target space list, select Create a new deployment space. - -2. Enter a name for your deployment space. - -3. To associate a machine learning instance, go to Select machine learning service (optional) and select an instance from the list. - -4. Click Create. - - - -3. After you select or create your space, click Promote. -4. Click the deployment space link from the notification. -5. From the Assets tab of the deployment space: - - - -1. Hover over the model’s name and click the deployment icon ![Deploy icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/deploy-icon.png). -2. In the page that opens, complete the fields: - - - -1. Specify a name for the deployment. -2. Select Online as the Deployment type. -3. Click Create. - - - - - - - -After the deployment is complete, click the Deployments tab and select the deployment name to view the details page. - -" -3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865_8,3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865," Test the deployed model - -Follow these steps to test the deployed model from the deployment details page: - - - -1. On the Test tab of the deployment details page, click the terminal icon ![Terminal icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/terminal-icon.png) and enter the following JSON test data: - -{ ""input_data"": [ { - -""fields"": - -""Min_Temp"" - -], - -""values"": - -7], 15] - -] - -} ] } - -Note: The test data replicates the data fields for the model, except the prediction field. -2. Click Predict to predict the future minimum temperature. - - - -![Test tab for deployed model with JSON code as input data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai_uni_test.png) - -Parent topic:[Building a time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html) -" -46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9_0,46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9," Selecting an AutoAI model - -AutoAI automatically prepares data, applies algorithms, and attempts to build model pipelines that are best suited for your data and use case. Learn how to evaluate the model pipelines so that you can save one as a model. - -" -46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9_1,46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9," Reviewing experiment results - -During AutoAI training, your data set is split to a training part and a hold-out part. The training part is used by the AutoAI training stages to generate the AutoAI model pipelines and cross-validation scores that are used to rank them. After AutoAI training, the hold-out part is used for the resulting pipeline model evaluation and computation of performance information such as ROC curves and confusion matrices, which are shown in the leaderboard. The training/hold-out split ratio is 90/10. - -As the training progresses, you are presented with a dynamic infographic and leaderboard. Hover over nodes in the infographic to explore the factors that pipelines share and their unique properties. For a guide to the data in the infographic, click the Legend tab in the information panel. Or, to see a different view of the pipeline creation, click the Experiment details tab of the notification panel, then click Switch views to view the progress map. In either view, click a pipeline node to view the associated pipeline in the leaderboard. The leaderboard contains model pipelines that are ranked by cross-validation scores. - -" -46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9_2,46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9," View the pipeline transformations - -Hover over a node in the infographic to view the transformations for a pipeline. The sequence of data transformations consists of a pre-processing transformer and a sequence of data transformers, if feature engineering was performed for the pipeline. The algorithm is determined by model selection and optimization steps during AutoAI training. - -![Pipeline transformation for AutoAI models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-bank-pipeline-transform.png) - -See [Implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html) to review the technical details for creating the pipelines. - -" -46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9_3,46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9," View the leaderboard - -Each model pipeline is scored for various metrics and then ranked. The default ranking metric for binary classification models is the area under the ROC curve. For multi-class classification models the default metric is accuracy. For regression models, the default metric is the root mean-squared error (RMSE). The highest-ranked pipelines display in a leaderboard, so you can view more information about them. The leaderboard also provides the option to save select model pipelines after you review them. - -![Leaderboard AutoAI models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-bank-leaderboard.png) - -You can evaluate the pipelines as follows: - - - -* Click a pipeline in the leaderboard to view more detail about the metrics and performance. -* Click Compare to view how the top pipelines compare. -* Sort the leaderboard by a different metric. - - - -![Expanding an AutoAI pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-bank-pipeline-expand.png) - -" -46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9_4,46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9," Viewing the confusion matrix - -One of the details you can view for a pipeline for a binary classification experiment is a Confusion matrix. - -The confusion matrix is based on the holdout data, which is the portion of the training dataset that is not used for training the model pipeline but only used to measure its performance on data that was not seen during training. - -In a binary classification problem with a positive class and a negative class, the confusion matrix summarizes the pipeline model’s positive and negative predictions in four quadrants depending on their correctness regarding the positive or negative class labels of the holdout data set. - -For example, the Bank sample experiment seeks to identify customers that take promotions that are offered to them. The confusion matrix for the top-ranked pipeline is: - -![Confusion matrix](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-confusion-matrix.png) - -The positive class is ‘yes’ (meaning a user takes the promotion). You can see that the measurement of true negatives, that is, customers the model predicted correctly they would refuse their promotions, is high. - -Click the items in the navigation menu to view other details about the selected pipeline. For example, Feature importance shows which data features contribute most to your prediction output. - -" -46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9_5,46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9," Save a pipeline as a model - -When you are satisfied with a pipeline, save it using one of these methods: - - - -* Click Save model to save the candidate pipeline as a model to your project so you can test and deploy it. -* Click [Save as notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-notebook.html) to create and save an auto-generated notebook to your project. You can review the code or run the experiment in the notebook. - - - -" -46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9_6,46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9," Next steps - -Promote the trained model to a deployment space so that you can test it with new data and generate predictions. - -" -46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9_7,46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9," Learn more - -[AutoAI implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html) - -Parent topic:[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) -" -C926DFB3758881E6698F630E496F3817101E4176_0,C926DFB3758881E6698F630E496F3817101E4176," AutoAI tutorial: Build a Binary Classification Model - -This tutorial guides you through training a model to predict if a customer is likely to buy a tent from an outdoor equipment store. - -Create an AutoAI experiment to build a model that analyzes your data and selects the best model type and algorithms to produce, train, and optimize pipelines. After you review the pipelines, save one as a model, deploy it, and then test it to get a prediction. - -Watch this video to see a preview of the steps in this tutorial. - -Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. - -This video provides a visual method to learn the concepts and tasks in this documentation. - - - -* Transcript - -Synchronize transcript with video - - - - Time Transcript - - 00:00 In this video, you will see how to build a binary classification model that assesses the likelihood that a customer of an outdoor equipment company will buy a tent. - 00:11 This video uses a data set called ""GoSales"", which you'll find in the Gallery. - 00:16 View the data set. - 00:20 The feature columns are ""GENDER"", ""AGE"", ""MARITAL_STATUS"", and ""PROFESSION"" and contain the attributes on which the machine learning model will base predictions. - 00:31 The label columns are ""IS_TENT"", ""PRODUCT_LINE"", and ""PURCHASE_AMOUNT"" and contain historical outcomes that the models could be trained to predict. - 00:44 Add this data set to the ""Machine Learning"" project and then go to the project. - 00:56 You'll find the GoSales.csv file with your other data assets. - 01:02 Add to the project an ""AutoAI experiment"". - 01:08 This project already has the Watson Machine Learning service associated. - 01:13 If you haven't done that yet, first, watch the video showing how to run an AutoAI experiment based on a sample. - 01:22 Just provide a name for the experiment and then click ""Create"". - 01:30 The AutoAI experiment builder displays. - 01:33 You first need to load the training data. - 01:36 In this case, the data set will be from the project. - 01:40 Select the GoSales.csv file from the list. -" -C926DFB3758881E6698F630E496F3817101E4176_1,C926DFB3758881E6698F630E496F3817101E4176," 01:45 AutoAI reads the data set and lists the columns found in the data set. - 01:50 Since you want the model to predict the likelihood that a given customer will purchase a tent, select ""IS_TENT"" as the column to predict. - 01:59 Now, edit the experiment settings. - 02:03 First, look at the settings for the data source. - 02:06 If you have a large data set, you can run the experiment on a subsample of rows and you can configure how much of the data will be used for training and how much will be used for evaluation. - 02:19 The default is a 90%/10% split, where 10% of the data is reserved for evaluation. - 02:27 You can also select which columns from the data set to include when running the experiment. - 02:35 On the ""Prediction"" panel, you can select a prediction type. - 02:39 In this case, AutoAI analyzed your data and determined that the ""IS_TENT"" column contains true-false information, making this data suitable for a ""Binary classification"" model. - 02:52 The positive class is ""TRUE"" and the recommended metric is ""Accuracy"". - 03:01 If you'd like, you can choose specific algorithms to consider for this experiment and the number of top algorithms for AutoAI to test, which determines the number of pipelines generated. - 03:16 On the ""Runtime"" panel, you can review other details about the experiment. - 03:21 In this case, accepting the default settings makes the most sense. - 03:25 Now, run the experiment. - 03:28 AutoAI first loads the data set, then splits the data into training data and holdout data. - 03:37 Then wait, as the ""Pipeline leaderboard"" fills in to show the generated pipelines using different estimators, such as XGBoost classifier, or enhancements such as hyperparameter optimization and feature engineering, with the pipelines ranked based on the accuracy metric. - 03:58 Hyperparameter optimization is a mechanism for automatically exploring a search space for potential hyperparameters, building a series of models and comparing the models using metrics of interest. - 04:10 Feature engineering attempts to transform the raw data into the combination of features that best represents the problem to achieve the most accurate prediction. -" -C926DFB3758881E6698F630E496F3817101E4176_2,C926DFB3758881E6698F630E496F3817101E4176," 04:21 Okay, the run has completed. - 04:24 By default, you'll see the ""Relationship map"". - 04:28 But you can swap views to see the ""Progress map"". - 04:32 You may want to start with comparing the pipelines. - 04:36 This chart provides metrics for the eight pipelines, viewed by cross validation score or by holdout score. - 04:46 You can see the pipelines ranked based on other metrics, such as average precision. - 04:55 Back on the ""Experiment summary"" tab, expand a pipeline to view the model evaluation measures and ROC curve. - 05:03 During AutoAI training, your data set is split into two parts: training data and holdout data. - 05:11 The training data is used by the AutoAI training stages to generate the model pipelines, and cross validation scores are used to rank them. - 05:21 After training, the holdout data is used for the resulting pipeline model evaluation and computation of performance information, such as ROC curves and confusion matrices. - 05:33 You can view an individual pipeline to see more details in addition to the confusion matrix, precision recall curve, model information, and feature importance. - 05:46 This pipeline had the highest ranking, so you can save this as a machine learning model. - 05:52 Just accept the defaults and save the model. - 05:56 Now that you've trained the model, you're ready to view the model and deploy it. - 06:04 The ""Overview"" tab shows a model summary and the input schema. - 06:09 To deploy the model, you'll need to promote it to a deployment space. - 06:15 Select the deployment space from the list, add a description for the model, and click ""Promote"". - 06:24 Use the link to go to the deployment space. - 06:28 Here's the model you just created, which you can now deploy. - 06:33 In this case, it will be an online deployment. - 06:37 Just provide a name for the deployment and click ""Create"". - 06:41 Then wait, while the model is deployed. - 06:44 When the model deployment is complete, view the deployment. - 06:49 On the ""API reference"" tab, you'll find the scoring endpoint for future reference. -" -C926DFB3758881E6698F630E496F3817101E4176_3,C926DFB3758881E6698F630E496F3817101E4176," 06:56 You'll also find code snippets for various programming languages to utilize this deployment from your application. - 07:05 On the ""Test"" tab, you can test the model prediction. - 07:09 You can either enter test input data or paste JSON input data, and click ""Predict"". - 07:20 This shows that there's a very high probability that the first customer will buy a tent and a very high probability that the second customer will not buy a tent. - 07:33 And back in the project, you'll find the AutoAI experiment and the model on the ""Assets"" tab. - 07:44 Find more videos in the Cloud Pak for Data as a Service documentation. - - - - - -" -C926DFB3758881E6698F630E496F3817101E4176_4,C926DFB3758881E6698F630E496F3817101E4176," Overview of the data sets - -The sample data is structured (in rows and columns) and saved in a .csv file format. - -You can view the sample data file in a text editor or spreadsheet program: -![Spreadsheet of the Go Sales data set that contains customer and purchase information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai_binary_sample_data.png) - -" -C926DFB3758881E6698F630E496F3817101E4176_5,C926DFB3758881E6698F630E496F3817101E4176," What do you want to predict? - -Choose the column whose values that your model predicts. - -In this tutorial, the model predicts the values of the IS_TENT column: - - - -* IS_TENT: Whether the customer bought a tent - - - -The model that is built in this tutorial predicts whether a customer is likely to purchase a tent. - -" -C926DFB3758881E6698F630E496F3817101E4176_6,C926DFB3758881E6698F630E496F3817101E4176," Tasks overview - -This tutorial presents the basic steps for building and training a machine learning model with AutoAI: - - - -1. [Create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html?context=cdpaas&locale=enstep0) -2. [Create an AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html?context=cdpaas&locale=enstep1) -3. [Training the experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html?context=cdpaas&locale=enstep2) -4. [Deploy the trained model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html?context=cdpaas&locale=enstep3) -5. [Test the deployed model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html?context=cdpaas&locale=enstep4) -6. [Creating a batch to score the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html?context=cdpaas&locale=enstep5) - - - -" -C926DFB3758881E6698F630E496F3817101E4176_7,C926DFB3758881E6698F630E496F3817101E4176," Task 1: Create a project - - - -1. From the Samples, download the [GoSales](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/aa07a773f71cf1172a349f33e2028e4e?context=wx) data set file to your local computer. -2. From the Projects page, to create a new project, select New Project. -a. Select Create an empty project. -b. Include your project name. -c. Click Create. - - - -" -C926DFB3758881E6698F630E496F3817101E4176_8,C926DFB3758881E6698F630E496F3817101E4176," Task 2: Create an AutoAI experiment - - - -1. On the Assets tab from within your project, click New asset > Build machine learning models automatically. -2. Specify a name and optional description for your new experiment. -3. Select the Associate a Machine Learning service instance link to associate the Watson Machine Learning Server instance with your project. Click Reload to confirm your configuration. -4. To add a data source, you can choose one of these options: -a. If you downloaded your file locally, upload the training data file, GoSales.csv, from your local computer. Drag the file onto the data panel or click browse and follow the prompts. -b. If you already uploaded your file to your project, click select from project, then select the data asset tab and choose GoSales.csv. - - - -" -C926DFB3758881E6698F630E496F3817101E4176_9,C926DFB3758881E6698F630E496F3817101E4176," Task 3: Training the experiment - - - -1. In Configuration details, select No for the option to create a Time Series Forecast. -2. Choose IS_TENT as the column to predict. AutoAI analyzes your data and determines that the IS_TENT column contains True and False information, making this data suitable for a binary classification model. The default metric for a binary classification is ROC/AUC. - -![Configuring experiment details. No to time series forecast and IS TENT as the column to predict.](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai_binary_run_configuration_test.png) -3. Click Run experiment. As the model trains, an infographic shows the process of building the pipelines. - -Note:You might see slight differences in results based on the Cloud Pak for Data platform and version you use. - -![Experiment summary of AutoAI generated pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai_binary_pipeline_build.png) - -For a list of algorithms or estimators that are available with each machine learning technique in AutoAI, see [AutoAI implementation detail](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html). -4. When all the pipelines are created, you can compare their accuracy on the Pipeline leaderboard. - -![Pipeline leaderboard that ranks generated pipelines based on accuracy](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai_binary_pipeline_leaderboard.png) -5. Select the pipeline with Rank 1 and click Save as to create your model. Then, select Create. This option saves the pipeline under the Models section in the Assets tab. - - - -" -C926DFB3758881E6698F630E496F3817101E4176_10,C926DFB3758881E6698F630E496F3817101E4176," Task 4: Deploy the trained model - - - -1. You can deploy the model from the model details page. You can access the model details page in one of these ways: - - - -1. Clicking the model’s name in the notification displayed when you save the model. -2. Open the Assets tab for the project, select the Models section and select the model’s name. - - - -2. Click Promote to Deployment Space then select or create the space where the model will be deployed. - - - -1. To create a deployment space: - - - -1. Enter a name. -2. Associate it with a Machine Learning Service. -3. Select Create. - - - - - -3. After you create your deployment space or select an existing one, select Promote. -4. Click the deployment space link from the notification. -5. From the Assets tab of the deployment space: - - - -1. Hover over the model’s name and click the deployment icon ![Deploy icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/deploy-icon.png). - - - -1. In the page that opens, complete the fields: - - - -1. Select Online as the Deployment type. -2. Specify a name for the deployment. -3. Click Create. - - - - - - - - - -![Creating an online deployment space to promote the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai_binary_deployment.png) - -After the deployment is complete, click Deployments and select the deployment name to view the details page. - -" -C926DFB3758881E6698F630E496F3817101E4176_11,C926DFB3758881E6698F630E496F3817101E4176," Task 5: Test the deployed model - -You can test the deployed model from the deployment details page: - - - -1. On the Test tab of the deployment details page, complete the form with test values or enter JSON test data by clicking the terminal icon ![Terminal icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/terminal-icon.png) to provide the following JSON input data. - -{""input_data"":[{ - -""fields"": - -""GENDER"",""AGE"",""MARITAL_STATUS"",""PROFESSION"",""PRODUCT_LINE"",""PURCHASE_AMOUNT""], - -""values"": ""M"",27,""Single"", ""Professional"",""Camping Equipment"",144.78]] - -}]} - -Note: The test data replicates the data fields for the model, except for the prediction field. -2. Click Predict to predict whether a customer with the entered attributes is likely to buy a tent. The resulting prediction indicates that a customer with the attributes entered has a high probability of purchasing a tent. - - - -![Result of the Tent model prediction. Prediction equals true, likely to buy a tent](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai_binary_test.png) - -" -C926DFB3758881E6698F630E496F3817101E4176_12,C926DFB3758881E6698F630E496F3817101E4176," Task 6: Creating a batch job to score the model - -For a batch deployment, you provide input data, also known as the model payload, in a CSV file. The data must be structured like the training data, with the same column headers. The batch job processes each row of data and creates a corresponding prediction. - -In a real scenario, you would submit new data to the model to get a score. However, this tutorial uses the same training data GoSales-updated.csv that you downloaded as part of the tutorial setup. Ensure that you delete the IS_TENT column and save the file before you upload it to the batch job. When deploying a model, you can add the payload data to a project, upload it to a space, or link to it in a storage repository such as a Cloud Object Storage bucket. For this tutorial, upload the file directly to the deployment space. - -" -C926DFB3758881E6698F630E496F3817101E4176_13,C926DFB3758881E6698F630E496F3817101E4176," Step 1: Add data to space - -From the Assets page of the deployment space: - - - -1. Click Add to space then choose Data. -2. Upload the file GoSales-updated.csv file that you saved locally. - - - -" -C926DFB3758881E6698F630E496F3817101E4176_14,C926DFB3758881E6698F630E496F3817101E4176," Step 2: Create the batch deployment - -Now you can define the batch deployment. - - - -1. Click the deployment icon next to the model’s name. -2. Enter a name a name for the deployment. - - - -1. Select Batch as the Deployment type. -2. Choose the smallest hardware specification. -3. Click Create. - - - - - -" -C926DFB3758881E6698F630E496F3817101E4176_15,C926DFB3758881E6698F630E496F3817101E4176," Step 3: Create the batch job - -The batch job runs the deployment. To create the job, you must specify the input data and the name for the output file. You can set up a job to run on a schedule or run immediately. - - - -1. Click New job. -2. Specify a name for the job -3. Configure to the smallest hardware specification -4. (Optional): To set a schedule and receive notifications. -5. Upload the input file: GoSales-updated.csv -6. Name the output file: GoSales-output.csv -7. Review and click Create to run the job. - - - -" -C926DFB3758881E6698F630E496F3817101E4176_16,C926DFB3758881E6698F630E496F3817101E4176," Step 4: View the output - -When the deployment status changes to Deployed, return to the Assets page for the deployment space. The file GoSales-output.csv was created and added to your assets list. - -Click the download icon next to the output file and open the file in an editor. You can review the prediction results for the customer information that is submitted for batch processing. - -For each case, the prediction that is returned indicates the confidence score of whether a customer will buy a tent. - -" -C926DFB3758881E6698F630E496F3817101E4176_17,C926DFB3758881E6698F630E496F3817101E4176," Next steps - -[Building an AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html) - -Parent topic:[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) -" -7BF4B8F1F49406EEC43BE3B7350092F9165B0757_0,7BF4B8F1F49406EEC43BE3B7350092F9165B0757," SPSS predictive analytics classification and regression algorithms in notebooks - -You can use generalized linear model, linear regression, linear support vector machine, random trees, or CHAID SPSS predictive analytics algorithms in notebooks. - -" -7BF4B8F1F49406EEC43BE3B7350092F9165B0757_1,7BF4B8F1F49406EEC43BE3B7350092F9165B0757," Generalized Linear Model - -The Generalized Linear Model (GLE) is a commonly used analytical algorithm for different types of data. It covers not only widely used statistical models, such as linear regression for normally distributed targets, logistic models for binary or multinomial targets, and log linear models for count data, but also covers many useful statistical models via its very general model formulation. In addition to building the model, Generalized Linear Model provides other useful features such as variable selection, automatic selection of distribution and link function, and model evaluation statistics. This model has options for regularization, such as LASSO, ridge regression, elastic net, etc., and is also capable of handling very wide data. - -For more details about how to choose distribution and link function, see Distribution and Link Function Combination. - -" -7BF4B8F1F49406EEC43BE3B7350092F9165B0757_2,7BF4B8F1F49406EEC43BE3B7350092F9165B0757,"Example code 1: - -This example shows a GLE setting with specified distribution and link function, specified effects, intercept, conducting ROC curve, and printing correlation matrix. This scenario builds a model, then scores the model. - -Python example: - -from spss.ml.classificationandregression.generalizedlinear import GeneralizedLinear -from spss.ml.classificationandregression.params.effect import Effect - -gle1 = GeneralizedLinear(). -setTargetField(""Work_experience""). -setInputFieldList([""Beginning_salary"", ""Sex_of_employee"", ""Educational_level"", ""Minority_classification"", ""Current_salary""]). -setEffects([ -Effect(fields=""Beginning_salary""], nestingLevels=0]), -Effect(fields=""Sex_of_employee""], nestingLevels=0]), -Effect(fields=""Educational_level""], nestingLevels=0]), -Effect(fields=""Current_salary""], nestingLevels=0]), -Effect(fields=""Sex_of_employee"", ""Educational_level""], nestingLevels=0, 0])]). -setIntercept(True). -setDistribution(""NORMAL""). -setLinkFunction(""LOG""). -setAnalysisType(""BOTH""). -setConductRocCurve(True) - -gleModel1 = gle1.fit(data) -PMML = gleModel1.toPMML() -statXML = gleModel1.statXML() -predictions1 = gleModel1.transform(data) -predictions1.show() - -" -7BF4B8F1F49406EEC43BE3B7350092F9165B0757_3,7BF4B8F1F49406EEC43BE3B7350092F9165B0757,"Example code 2: - -This example shows a GLE setting with unspecified distribution and link function, and variable selection using the forward stepwise method. This scenario uses the forward stepwise method to select distribution, link function and effects, then builds and scores the model. - -Python example: - -from spss.ml.classificationandregression.generalizedlinear import GeneralizedLinear -from spss.ml.classificationandregression.params.effect import Effect - -gle2 = GeneralizedLinear(). -setTargetField(""Work_experience""). -setInputFieldList([""Beginning_salary"", ""Sex_of_employee"", ""Educational_level"", ""Minority_classification"", ""Current_salary""]). -setEffects([ -Effect(fields=""Beginning_salary""], nestingLevels=0]), -Effect(fields=""Sex_of_employee""], nestingLevels=0]), -Effect(fields=""Educational_level""], nestingLevels=0]), -Effect(fields=""Current_salary""], nestingLevels=0])]). -setIntercept(True). -setDistribution(""UNKNOWN""). -setLinkFunction(""UNKNOWN""). -setAnalysisType(""BOTH""). -setUseVariableSelection(True). -setVariableSelectionMethod(""FORWARD_STEPWISE"") - -gleModel2 = gle2.fit(data) -PMML = gleModel2.toPMML() -statXML = gleModel2.statXML() -predictions2 = gleModel2.transform(data) -predictions2.show() - -" -7BF4B8F1F49406EEC43BE3B7350092F9165B0757_4,7BF4B8F1F49406EEC43BE3B7350092F9165B0757,"Example code 3: - -This example shows a GLE setting with unspecified distribution, specified link function, and variable selection using the LASSO method, with two-way interaction detection and automatic penalty parameter selection. This scenario detects two-way interaction for effects, then uses the LASSO method to select distribution and effects using automatic penalty parameter selection, then builds and scores the model. - -Python example: - -from spss.ml.classificationandregression.generalizedlinear import GeneralizedLinear -from spss.ml.classificationandregression.params.effect import Effect - -gle3 = GeneralizedLinear(). -setTargetField(""Work_experience""). -setInputFieldList([""Beginning_salary"", ""Sex_of_employee"", ""Educational_level"", ""Minority_classification"", ""Current_salary""]). -setEffects([ -Effect(fields=""Beginning_salary""], nestingLevels=0]), -Effect(fields=""Sex_of_employee""], nestingLevels=0]), -Effect(fields=""Educational_level""], nestingLevels=0]), -Effect(fields=""Current_salary""], nestingLevels=0])]). -setIntercept(True). -setDistribution(""UNKNOWN""). -setLinkFunction(""LOG""). -setAnalysisType(""BOTH""). -setDetectTwoWayInteraction(True). -setUseVariableSelection(True). -setVariableSelectionMethod(""LASSO""). -setUserSpecPenaltyParams(False) - -gleModel3 = gle3.fit(data) -PMML = gleModel3.toPMML() -statXML = gleModel3.statXML() -predictions3 = gleModel3.transform(data) -predictions3.show() - -" -7BF4B8F1F49406EEC43BE3B7350092F9165B0757_5,7BF4B8F1F49406EEC43BE3B7350092F9165B0757," Linear Regression - -The linear regression model analyzes the predictive relationship between a continuous target and one or more predictors which can be continuous or categorical. - -Features of the linear regression model include automatic interaction effect detection, forward stepwise model selection, diagnostic checking, and unusual category detection based on Estimated Marginal Means (EMMEANS). - -" -7BF4B8F1F49406EEC43BE3B7350092F9165B0757_6,7BF4B8F1F49406EEC43BE3B7350092F9165B0757,"Example code: - -Python example: - -from spss.ml.classificationandregression.linearregression import LinearRegression - -le = LinearRegression(). -setTargetField(""target""). -setInputFieldList([""predictor1"", ""predictor2"", ""predictorn""]). -setDetectTwoWayInteraction(True). -setVarSelectionMethod(""forwardStepwise"") - -leModel = le.fit(data) -predictions = leModel.transform(data) -predictions.show() - -" -7BF4B8F1F49406EEC43BE3B7350092F9165B0757_7,7BF4B8F1F49406EEC43BE3B7350092F9165B0757," Linear Support Vector Machine - -The Linear Support Vector Machine (LSVM) provides a supervised learning method that generates input-output mapping functions from a set of labeled training data. The mapping function can be either a classification function or a regression function. LSVM is designed to resolve large-scale problems in terms of the number of records and the number of variables (parameters). Its feature space is the same as the input space of the problem, and it can handle sparse data where the average number of non-zero elements in one record is small. - -" -7BF4B8F1F49406EEC43BE3B7350092F9165B0757_8,7BF4B8F1F49406EEC43BE3B7350092F9165B0757,"Example code: - -Python example: - -from spss.ml.classificationandregression.linearsupportvectormachine import LinearSupportVectorMachine - -lsvm = LinearSupportVectorMachine(). -setTargetField(""BareNuc""). -setInputFieldList([""Clump"", ""UnifSize"", ""UnifShape"", ""MargAdh"", ""SingEpiSize"", ""BlandChrom"", ""NormNucl"", ""Mit"", ""Class""]). -setPenaltyFunction(""L2"") - -lsvmModel = lsvm.fit(df) -predictions = lsvmModel.transform(data) -predictions.show() - -" -7BF4B8F1F49406EEC43BE3B7350092F9165B0757_9,7BF4B8F1F49406EEC43BE3B7350092F9165B0757," Random Trees - -Random Trees is a powerful approach for generating strong (accurate) predictive models. It's comparable and sometimes better than other state-of-the-art methods for classification or regression problems. - -Random Trees is an ensemble model consisting of multiple CART-like trees. Each tree grows on a bootstrap sample which is obtained by sampling the original data cases with replacement. Moreover, during the tree growth, for each node the best split variable is selected from a specified smaller number of variables that are drawn randomly from the full set of variables. Each tree grows to the largest extent possible, and there is no pruning. In scoring, Random Trees combines individual tree scores by majority voting (for classification) or average (for regression). - -" -7BF4B8F1F49406EEC43BE3B7350092F9165B0757_10,7BF4B8F1F49406EEC43BE3B7350092F9165B0757,"Example code: - -Python example: - -from spss.ml.classificationandregression.ensemble.randomtrees import RandomTrees - - Random trees required a ""target"" field and some input fields. If ""target"" is continuous, then regression trees will be generate else classification . - You can use the SPSS Attribute or Spark ML Attribute to indicate the field to categorical or continuous. -randomTrees = RandomTrees(). -setTargetField(""target""). -setInputFieldList([""feature1"", ""feature2"", ""feature3""]). -numTrees(10). -setMaxTreeDepth(5) - -randomTreesModel = randomTrees.fit(df) -predictions = randomTreesModel.transform(scoreDF) -predictions.show() - -" -7BF4B8F1F49406EEC43BE3B7350092F9165B0757_11,7BF4B8F1F49406EEC43BE3B7350092F9165B0757," CHAID - -CHAID, or Chi-squared Automatic Interaction Detection, is a classification method for building decision trees by using chi-square statistics to identify optimal splits. An extension applicable to regression problems is also available. - -CHAID first examines the crosstabulations between each of the input fields and the target, and tests for significance using a chi-square independence test. If more than one of these relations is statistically significant, CHAID will select the input field that's the most significant (smallest p value). If an input has more than two categories, these are compared, and categories that show no differences in the outcome are collapsed together. This is done by successively joining the pair of categories showing the least significant difference. This category-merging process stops when all remaining categories differ at the specified testing level. For nominal input fields, any categories can be merged; for an ordinal set, only contiguous categories can be merged. Continuous input fields other than the target can't be used directly; they must be binned into ordinal fields first. - -Exhaustive CHAID is a modification of CHAID that does a more thorough job of examining all possible splits for each predictor but takes longer to compute. - -" -7BF4B8F1F49406EEC43BE3B7350092F9165B0757_12,7BF4B8F1F49406EEC43BE3B7350092F9165B0757,"Example code: - -Python example: - -from spss.ml.classificationandregression.tree.chaid import CHAID - -chaid = CHAID(). -setTargetField(""salary""). -setInputFieldList([""educ"", ""jobcat"", ""gender""]) - -chaidModel = chaid.fit(data) -pmmlStr = chaidModel.toPMML() -statxmlStr = chaidModel.statXML() - -predictions = chaidModel.transform(data) -predictions.show() - -Parent topic:[SPSS predictive analytics algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-algorithms.html) -" -CE1B598A354C454F2D201039A2BB6D69BABBF840_0,CE1B598A354C454F2D201039A2BB6D69BABBF840," SPSS predictive analytics clustering algorithms in notebooks - -You can use the scalable Two-Step or the Cluster model evaluation algorithm to cluster data in notebooks. - -" -CE1B598A354C454F2D201039A2BB6D69BABBF840_1,CE1B598A354C454F2D201039A2BB6D69BABBF840," Two-Step Cluster - -Scalable Two-Step is based on the familiar two-step clustering algorithm, but extends both its functionality and performance in several directions. - -First, it can effectively work with large and distributed data supported by Spark that provides the Map-Reduce computing paradigm. - -Second, the algorithm provides mechanisms for selecting the most relevant features for clustering the given data, as well as detecting rare outlier points. Moreover, it provides an enhanced set of evaluation and diagnostic features for enabling insight. - -The two-step clustering algorithm first performs a pre-clustering step by scanning the entire dataset and storing the dense regions of data cases in terms of summary statistics called cluster features. The cluster features are stored in memory in a data structure called the CF-tree. Finally, an agglomerative hierarchical clustering algorithm is applied to cluster the set of cluster features. - -" -CE1B598A354C454F2D201039A2BB6D69BABBF840_2,CE1B598A354C454F2D201039A2BB6D69BABBF840,"Python example code: - -from spss.ml.clustering.twostep import TwoStep - -cluster = TwoStep(). -setInputFieldList([""region"", ""happy"", ""age""]). -setDistMeasure(""LOGLIKELIHOOD""). -setFeatureImportanceMethod(""CRITERION""). -setAutoClustering(True) - -clusterModel = cluster.fit(data) -predictions = clusterModel.transform(data) -predictions.show() - -" -CE1B598A354C454F2D201039A2BB6D69BABBF840_3,CE1B598A354C454F2D201039A2BB6D69BABBF840," Cluster model evaluation - -Cluster model evaluation (CME) aims to interpret cluster models and discover useful insights based on various evaluation measures. - -It's a post-modeling analysis that's generic and independent from any types of cluster models. - -" -CE1B598A354C454F2D201039A2BB6D69BABBF840_4,CE1B598A354C454F2D201039A2BB6D69BABBF840,"Python example code: - -from spss.ml.clustering.twostep import TwoStep - -cluster = TwoStep(). -setInputFieldList([""region"", ""happy"", ""age""]). -setDistMeasure(""LOGLIKELIHOOD""). -setFeatureImportanceMethod(""CRITERION""). -setAutoClustering(True) - -clusterModel = cluster.fit(data) -predictions = clusterModel.transform(data) -predictions.show() - -Parent topic:[SPSS predictive analytics algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-algorithms.html) -" -AD5B9969C7557BFC4CFBB32CCA67F40C52FF824B_0,AD5B9969C7557BFC4CFBB32CCA67F40C52FF824B," Coding and running a notebook - -After you created a notebook to use in the notebook editor, you need to add libraries, code, and data so you can do your analysis. - -To develop analytic applications in a notebook, follow these general steps: - - - -1. Open the notebook in edit mode: click the edit icon (![Edit icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pencil-icon.png)). If the notebook is locked, you might be able to [unlock and edit](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-assets.htmleditassets) it. -2. If the notebook is marked as being untrusted, tell the Jupyter service to trust your notebook content and allow executing all cells by: - - - -1. Clicking Not Trusted in the upper right corner of the notebook. -2. Clicking Trust to execute all cells. - - - -3. Determine if the environment template that is associated with the notebook has the correct hardware size for the anticipated analysis processing throughput. - - - -1. Check the size of the environment by clicking the View notebook info icon (![Edit icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/info_panel.png)) from the notebook toolbar and selecting the Environments page. -2. If you need to change the environment, select another one from the list or, if none fits your needs, create your own environment template. See [Creating emvironment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html). - -If you create an environment template, you can add your own libraries to the template that are preinstalled at the time the environment is started. See [Customize your environment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html) for Python and R. - - - -" -AD5B9969C7557BFC4CFBB32CCA67F40C52FF824B_1,AD5B9969C7557BFC4CFBB32CCA67F40C52FF824B,"4. Import preinstalled libraries. See [Libraries and scripts for notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/libraries.html). -5. Load and access data. You can access data from project assets by running code that is generated for you when you select the asset or programmatically by using preinstalled library functions. See [Load and access data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html). -6. Prepare and analyze the data with the appropriate methods: - - - -* [Build Watson Machine Learning models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-overview.html) -* [Build Decision Optimization models](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html) -* [Use Watson Natural Language Processing](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html) -* [Use SPSS predictive analytics algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-algorithms.html) -* [Use geospatial location analysis methods](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/geo-spatial-lib.html) -* [Use Data skipping for Spark SQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-skipping-spark-sql.html) -* [Apply Parquet encryption](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/parquet-encryption.html) -" -AD5B9969C7557BFC4CFBB32CCA67F40C52FF824B_2,AD5B9969C7557BFC4CFBB32CCA67F40C52FF824B,"* [Use Time series analysis methods](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib.html) - - - -7. If necessary, schedule the notebook to run at a regular time. See [Schedule a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-nb-editor.html). - - - -1. Monitor the status of your job runs from the project's Jobs page. -2. Click your job to open the job's details page to view the runs for your job and the status of each run. If a run failed, you can select the run and view the log tail or download the entire log file to troubleshoot the run. - - - -8. When you're not actively working on the notebook, click File > Stop Kernel to stop the notebook kernel and free up resources. -9. Stop the active runtime (and unnecessary capacity unit consumption) if no other notebook kernels are active under Tool runtimes on the Environments page on the Manage tab of your project. - - - -Video disclaimer: Some minor steps and graphical elements in these videos may differ from your deployment. - -Watch this short video to see how to create a Jupyter notebook and custom environment. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -Watch this short video to see how to run basic SQL queries on Db2 Warehouse data in a Python notebook. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -AD5B9969C7557BFC4CFBB32CCA67F40C52FF824B_3,AD5B9969C7557BFC4CFBB32CCA67F40C52FF824B," Learn more - - - -* [Markdown cheatsheet](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/markd-jupyter.html) -* [Notebook interface](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html) - - - - - -* [Stop active runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.htmlstop-active-runtimes) -* [Load and access data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html) -* [Schedule a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-nb-editor.html) - - - -Parent topic:[Jupyter Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-editor.html) -" -4562C632E1CFAAEDDE1374B29BDD1A1CCE5ECE86_0,4562C632E1CFAAEDDE1374B29BDD1A1CCE5ECE86," Deployment space collaborator roles and permissions - -When you add collaborators to a deployment space, you can specify which actions they can do by assigning them access levels. Learn how to add collaborators to your deployment spaces and the differences between access levels. - -" -4562C632E1CFAAEDDE1374B29BDD1A1CCE5ECE86_1,4562C632E1CFAAEDDE1374B29BDD1A1CCE5ECE86," User roles and permissions in deployment spaces - -You can assign the following roles to collaborators based on the access level that you want to provide: - - - -* Admin: Administrators can control your deployment space assets, users, and settings. -* Editor: Editors can control your space assets. -* Viewer: Viewers can view your deployment space. - - - -The following table provides details on permissions based on user access level: - - - -Deployment space permissions - - Enabled permission Viewer Editor Admin - - View assets and deployments ✓ ✓ ✓ - Comment ✓ ✓ ✓ - Monitor ✓ ✓ ✓ - Test model deployment API ✓ ✓ ✓ - Find implementation details ✓ ✓ ✓ - Configure deployments ✓ ✓ - Batch deployment score ✓ ✓ - Online deployment score ✓ ✓ ✓ - Update assets ✓ ✓ - Import assets ✓ ✓ - Download assets ✓ ✓ - Deploy assets ✓ ✓ - Remove assets ✓ ✓ - Remove deployments ✓ ✓ - View spaces/members ✓ ✓ ✓ - Delete space ✓ - - - -" -4562C632E1CFAAEDDE1374B29BDD1A1CCE5ECE86_2,4562C632E1CFAAEDDE1374B29BDD1A1CCE5ECE86," Service IDs - -You can create service IDs in IBM Cloud to enable an application outside of IBM Cloud access to your IBM Cloud services. Service IDs are not tied to a specific user. Therefore, if a user leaves an organization and is deleted from the account, the service ID remains. Thus, your application or service stays up and running. For more information, see [Creating and working with service IDs](https://cloud.ibm.com/docs/account?topic=account-serviceids). - -To learn more about assigning space access by using a service ID, see [Adding collaborators to your deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/collaborator-permissions-wml.html?context=cdpaas&locale=enadding-collaborators). - -" -4562C632E1CFAAEDDE1374B29BDD1A1CCE5ECE86_3,4562C632E1CFAAEDDE1374B29BDD1A1CCE5ECE86," Adding collaborators to your deployment space - -" -4562C632E1CFAAEDDE1374B29BDD1A1CCE5ECE86_4,4562C632E1CFAAEDDE1374B29BDD1A1CCE5ECE86,"Prerequisites: -All users in your IBM Cloud account with the Admin IAM platform access role for all IAM enabled services can manage space collaborators. For more information, see [IAM Platform access roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.htmlplatform). - -" -4562C632E1CFAAEDDE1374B29BDD1A1CCE5ECE86_5,4562C632E1CFAAEDDE1374B29BDD1A1CCE5ECE86,"Restriction: -You can add collaborators to your deployment space only if they are a part of your organization and if they provisioned Watson Studio. - -To add one or more collaborators to a deployment space: - - - -1. From your deployment space, go to the Manage tab and click Access Control. -2. Click Add collaborators and choose one of the following options: - - - -* If you want to add a user, click Add users. Assign a role that applies to the user. -* If you want to add pre-defined user groups, click . Assign a role that applies to all members of the group. - - - -3. Add the user or user groups that you want to have the same access level and click Add. - - - -Parent topic:[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html) -" -BF75F233FDFFDCA8A25D191E1DF4DF7F51E30823_0,BF75F233FDFFDCA8A25D191E1DF4DF7F51E30823," Creating environment templates - -You can create custom environment templates if you do not want to use the default environments provided by Watson Studio. - -Required permissions : To create an environment template, you must have the Admin or Editor role within the project. - -You can create environment templates for the following types of assets: - - - -* Notebooks in the Notebook editor -* Notebooks in RStudio -* Modeler flows in the SPSS Modeler -* Data Refinery flows -* Jobs that run operational assets, such as Data Refinery flows, or Notebooks in a project - - - -Note: - -To create an environment template: - - - -1. On the Manage tab of your project, select the Environments page and click New template under Templates. -2. Enter a name and a description. -3. Select one of the following engine types: - - - -* Default: Select for Python, R, and RStudio runtimes for Watson Studio. -* Spark: Select for Spark with Python or R runtimes for Watson Studio. -* GPU: Select for more computing power to improve model training performance for Watson Studio. - - - -4. Select the hardware configuration from the Hardware configuration drop-down menu. -5. Select the software version if you selected a runtime of ""Default,"" ""Spark,"" or ""GPU."" - - - -" -BF75F233FDFFDCA8A25D191E1DF4DF7F51E30823_1,BF75F233FDFFDCA8A25D191E1DF4DF7F51E30823," Where to find your custom environment template - -Your new environment template is listed under Templates on the Environments page in the Manage tab of your project. From this page, you can: - - - -* Check which runtimes are active -* Update custom environment templates -* Track the number of capacity units per hour that your runtimes have consumed so far -* Stop active runtimes. - - - -" -BF75F233FDFFDCA8A25D191E1DF4DF7F51E30823_2,BF75F233FDFFDCA8A25D191E1DF4DF7F51E30823," Limitations - -The default environments provided by Watson Studio cannot be edited or modified. - -Notebook environments (Anaconda Python or R distributions): - -: - You can't add a software customization to the default Python and R environment templates included in Watson Studio. You can only add a customization to an environment template that you create. : - If you add a software customization using conda, your environment must have at least 2 GB RAM. : - You can't customize an R environment for a notebook by installing R packages directly from CRAN or GitHub. You can check if the CRAN package you want is available only from conda channels and, if the package is available, add that package name in the customization list as r-. - - - -* After you have started a notebook in an Watson Studio environment, you can't create another conda environment from inside that notebook and use it. Watson Studio environments do not behave like a Conda environment manager. - - - -Spark environments: : - You can't customize the software configuration of a Spark environment template. - -" -BF75F233FDFFDCA8A25D191E1DF4DF7F51E30823_3,BF75F233FDFFDCA8A25D191E1DF4DF7F51E30823," Next steps - - - -* [Customize environment templates for Python or R](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html) - - - -" -BF75F233FDFFDCA8A25D191E1DF4DF7F51E30823_4,BF75F233FDFFDCA8A25D191E1DF4DF7F51E30823," Learn more - -Parent topic:[Managing compute resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html) -" -D21CD926CA1FE170C8C1645CA0EC65AEDDDB4AEF_0,D21CD926CA1FE170C8C1645CA0EC65AEDDDB4AEF," Publishing a notebook as a gist - -A gist is a simple way to share a notebook or parts of a notebook with other users. Unlike when you publish to a GitHub repository, you don't need to manage your gists; you can edit your gists directly in the browser. - -All project collaborators, who have administrator or editor permission, can share notebooks or parts of a notebook as gists. The latest saved version of your notebook is published as a gist. - -Before you can create a gist, you must be logged in to GitHub and have authorized access to gists in GitHub from Watson Studio. See [Publish notebooks on GitHub](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/github-integration.html). If this information is missing, you are prompted for it. - -To publish a notebook as a gist: - - - -1. Open the notebook in edit mode. -2. Click the GitHub integration icon (![Shows the upload icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/upload.png)) and select Publish as gist. - - - -Watch this video to see how to enable GitHub integration. - -Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. - -This video provides a visual method to learn the concepts and tasks in this documentation. - - - -* Transcript - -Synchronize transcript with video - - - - Time Transcript - - 00:00 This video shows you how to publish notebooks from your Watson Studio project to your GitHub account. - 00:07 Navigate to your profile and settings. - 00:11 On the ""Integrations"" tab, visit the link to generate a GitHub personal access token. - 00:17 Provide a descriptive name for the token and select the repo and gist scopes, then generate the token. - 00:29 Copy the token, return to the GitHub integration settings, and paste the token. - 00:36 The token is validated when you save it to your profile settings. - 00:42 Now, navigate to your projects. - 00:44 You enable GitHub integration at the project level on the ""Settings"" tab. -" -D21CD926CA1FE170C8C1645CA0EC65AEDDDB4AEF_1,D21CD926CA1FE170C8C1645CA0EC65AEDDDB4AEF," 00:50 Simply scroll to the bottom and paste the existing GitHub repository URL. - 00:56 You'll find that on the ""Code"" tab in the repo. - 01:01 Click ""Update"" to make the connection. - 01:05 Now, go to the ""Assets"" tab and open the notebook you want to publish. - 01:14 Notice that this notebook has the credentials replaced with X's. - 01:19 It's a best practice to remove or replace credentials before publishing to GitHub. - 01:24 So, this notebook is ready for publishing. - 01:27 You can provide the target path along with a commit message. - 01:31 You also have the option to publish content without hidden code, which means that any cells in the notebook that began with the hidden cell comment will not be published. - 01:42 When you're, ready click ""Publish"". - 01:45 The message tells you that the notebook was published successfully and provides links to the notebook, the repository, and the commit. - 01:54 Let's take a look at the commit. - 01:57 So, there's the commit, and you can navigate to the repository to see the published notebook. - 02:04 Lastly, you can publish as a gist. - 02:07 Gists are another way to share your work on GitHub. - 02:10 Every gist is a git repository, so it can be forked and cloned. - 02:15 There are two types of gists: public and secret. - 02:19 If you start out with a secret gist, you can convert it to a public gist later. - 02:24 And again, you have the option to remove hidden cells. - 02:29 Follow the link to see the published gist. - 02:32 So that's the basics of Watson Studio's GitHub integration. - 02:37 Find more videos in the Cloud Pak for Data as a Service documentation. - - - - - -Parent topic:[Managing the lifecycle of notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-nb-lifecycle.html) -" -11A093CB8F1D24EA066663B3991084A84FC32BF2_0,11A093CB8F1D24EA066663B3991084A84FC32BF2," Creating jobs in deployment spaces - -A job is a way of running a batch deployment, or a self-contained asset like a script, notebook, code package, or flow in Watson Machine Learning. You can select the input and output for your job and choose to run it manually or on a schedule. From a deployment space, you can create, schedule, run, and manage jobs. - -" -11A093CB8F1D24EA066663B3991084A84FC32BF2_1,11A093CB8F1D24EA066663B3991084A84FC32BF2," Creating a batch deployment job - -Follow these steps when you are creating a batch deployment job: - -Important: You must have an existing batch deployment to create a batch job. - - - -1. From the Deployments tab, select your deployment and click New job. The Create a job dialog box opens. -2. In the Define details section, enter your job name, an optional description, and click Next. -3. In the Configure section, select a hardware specification. -You can follow these steps to optionally configure environment variables and job run retention settings: - - - -* Optional: If you are deploying a Python script, an R script, or a notebook, then you can enter environment variables to pass parameters to the job. Click Environment variables to enter the key - value pair. -* Optional: To avoid finishing resources by retaining all historical job metadata, follow one of these options: - - - -* Click By amount to set thresholds for saving a set number of job runs and associated logs. -* Click By duration (days) to set thresholds for saving artifacts for a specified number of days. - - - - - -4. Optional: In the Schedule section, toggle the Schedule off button to schedule a run. You can set a date and time for start of schedule and set a schedule for repetition. Click Next. - -Note: If you don't specify a schedule, the job runs immediately. -5. Optional: In the Notify section, toggle the Off button to turn on notifications associated with this job. Click Next. - -Note: You can receive notifications for three types of events: success, warning, and failure. -6. In the Choose data section, provide inline data that corresponds with your model schema. You can provide input in JSON format. Click Next. See [Example JSON payload for inline data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-jobs.html?context=cdpaas&locale=enexample-json). -7. In the Review and create section, verify your job details, and click Create and run. - - - -Notes: - - - -* Scheduled jobs display on the Jobs tab of the deployment space. -" -11A093CB8F1D24EA066663B3991084A84FC32BF2_2,11A093CB8F1D24EA066663B3991084A84FC32BF2,"* Results of job runs are written to the specified output file and saved as a space asset. -* A data asset can be a data source file that you promoted to the space, a connected data source, or tables from databases and files from file-based data sources. -* If you exclude certain weekdays in your job schedule, the job might not run as you would expect. The reason is due to a discrepancy between the time zone of the user who creates the schedule, and the time zone of the main node where the job runs. -* When you create or modify a scheduled job, an API key is generated. Future runs use this generated API key. - - - -" -11A093CB8F1D24EA066663B3991084A84FC32BF2_3,11A093CB8F1D24EA066663B3991084A84FC32BF2," Example JSON payload for inline data - -{ -""deployment"": { -""id"": """" -}, -""space_id"": """", -""name"": ""test_v4_inline"", -""scoring"": { -""input_data"": [{ -""fields"": ""AGE"", ""SEX"", ""BP"", ""CHOLESTEROL"", ""NA"", ""K""], -""values"": 47, ""M"", ""LOW"", ""HIGH"", 0.739, 0.056], 47, ""M"", ""LOW"", ""HIGH"", 0.739, 0.056]] -}] -} -} - -" -11A093CB8F1D24EA066663B3991084A84FC32BF2_4,11A093CB8F1D24EA066663B3991084A84FC32BF2," Queuing and concurrent job executions - -The maximum number of concurrent jobs for each deployment is handled internally by the deployment service. For batch deployment, by default, two jobs can be run concurrently. Any deployment job request for a batch deployment that already has two running jobs is placed in a queue for execution later. When any of the running jobs is completed, the next job in the queue is run. The queue has no size limit. - -" -11A093CB8F1D24EA066663B3991084A84FC32BF2_5,11A093CB8F1D24EA066663B3991084A84FC32BF2," Limitation on using large inline payloads for batch deployments - -Batch deployment jobs that use large inline payload might get stuck in starting or running state. - -Tip: If you provide huge payloads to batch deployments, use data references instead of inline. - -" -11A093CB8F1D24EA066663B3991084A84FC32BF2_6,11A093CB8F1D24EA066663B3991084A84FC32BF2," Retention of deployment job metadata - -Job-related metadata is persisted and can be accessed until the job and its deployment are deleted. - -" -11A093CB8F1D24EA066663B3991084A84FC32BF2_7,11A093CB8F1D24EA066663B3991084A84FC32BF2," Viewing deployment job details - -When you create or view a batch job, the deployment ID and the job ID are displayed. - -![Job IDs](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/job-ids.png) - - - -* The deployment ID represents the deployment definition, including the hardware and software configurations and related assets. -* The job ID represents the details for a job, including input data and an output location and a schedule for running the job. - - - -Use these IDs to refer to the job in Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) requests or in notebooks that use the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/). - -Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) -" -D1AFA9BB4E0475A56190DC8254E004308BEA484D_0,D1AFA9BB4E0475A56190DC8254E004308BEA484D," Creating notebooks - -You can add a notebook to your project by using one of these methods: creating a notebook file or copying a sample notebook from the Samples. - -Required permissions : You must have the Admin or Editor role in the project to create a notebook. - -Watch this short video to learn the basics of Jupyter notebooks. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -D1AFA9BB4E0475A56190DC8254E004308BEA484D_1,D1AFA9BB4E0475A56190DC8254E004308BEA484D," Creating a notebook file in the notebook editor - -To create a notebook file in the notebook editor: - - - -1. From your project, click New asset > Work with data and models in Python or R notebooks. -2. On the New Notebook page, specify the method to use to create your notebook. You can create a blank notebook, upload a notebook file from your file system, or upload a notebook file from a URL: - - - -* The notebook file you select to upload must follow these requirements: - - - -* The file type must be .ipynb. -* The file name must not exceed 255 characters. -* The file name must not contain these characters: < > : ” / | ( ) ? - - - -* The URL must be a public URL that is shareable and doesn't require authentication. -![Notebook options](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/createnotebook.png) - - - -3. Specify the runtime environment for the language you want to use (Python or R). You can select a provided environment template or an environment template which you created and configured under Templates on the Environments page on the Manage tab of your project. For more information on environments, see [Notebook environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html). -4. Click Create Notebook. The notebook opens in edit mode. - -Note that the time that it takes to create a new notebook or to open an existing one for editing might vary. If no runtime container is available, a container needs to be created and only after it is available, the Jupyter notebook user interface can be loaded. The time it takes to create a container depends on the cluster load and size. Once a runtime container exists, subsequent calls to open notebooks will be significantly faster. - -The opened notebook is locked by you. For more information, see [Locking and unlocking notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/creating-notebooks.html?context=cdpaas&locale=enlocking-and-unlocking). -" -D1AFA9BB4E0475A56190DC8254E004308BEA484D_2,D1AFA9BB4E0475A56190DC8254E004308BEA484D,"5. Tell the service to trust your notebook content and execute all cells. - -When a new notebook is opened in edit mode, the notebook is considered to be untrusted by the Jupyter service by default. When you run an untrusted notebook, content deemed untrusted will not be executed. Untrusted content includes any Javascript, HTML or Javascript in Markdown cells or in any output cells that you did not generate. - - - -1. Click Not Trusted in the upper right corner of the notebook. -2. Click Trust to execute all cells. - - - - - -" -D1AFA9BB4E0475A56190DC8254E004308BEA484D_3,D1AFA9BB4E0475A56190DC8254E004308BEA484D," Adding a notebook from the Samples - -Notebooks from the Samples are based on real-world scenarios and contain many useful examples of computations and visualizations that you can adapt to your analysis needs. - -To copy a sample notebook: - - - -1. In the main menu, click Samples, then filter for Notebooks to show only notebook cards. -2. Find the card for the sample notebook you want, and click the card. You can view the notebook contents to browse the steps and the code that it contains. -3. To work with a copy of the sample notebook, click Add to project. -4. Choose the project for the notebook, and click Add. -5. Optional: Change the name and description for the notebook. -6. Specify the runtime environment. If you created an environment template on the Environments page of your project, it will display in the list of runtimes you can select from. -7. Click Create. The notebook opens in edit mode and is locked by you. Locking the file avoids possible merge conflicts that might be caused by competing changes to the file. To get familiar with the structure of a notebook, see [Parts of a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html). - - - -" -D1AFA9BB4E0475A56190DC8254E004308BEA484D_4,D1AFA9BB4E0475A56190DC8254E004308BEA484D," Locking and unlocking notebooks - -If you open a notebook in edit mode, this notebook is locked by you. While you hold the lock, only you can make changes to the notebook. All other projects users will see the lock icon on the notebook. Only project administrators are able to unlock a locked notebook and open it in edit mode. - -When you close the notebook, the lock is released and another user can select to open the notebook in edit mode. Note that you must close the notebook while the runtime environment is still active. The notebook lock can't be released for you if the runtime was stopped or is in idle state. If the notebook lock is not released for you, you can unlock the notebook from the project's Assets page. Locking the file avoids possible merge conflicts that might be caused by competing changes to the file. - -" -D1AFA9BB4E0475A56190DC8254E004308BEA484D_5,D1AFA9BB4E0475A56190DC8254E004308BEA484D," Finding your notebooks - -You can find and open notebooks from the Assets page of the project. - -You can open a notebook in view or edit mode. When you open a notebook in view mode, you can't change or run the notebook. You can only change or run a notebook when it is opened in edit mode and started in an environment. - -You can open a notebook by: - - - -* Clicking the notebook. This opens the notebook in view mode. To then open the notebook in edit mode, click the pencil icon (![Edit icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pencil-icon.png)) on the notebook toolbar. This starts the environment associated with the notebook. -* Expanding the three vertical dots on the right of the notebook entry, and selecting View or Edit. - - - -" -D1AFA9BB4E0475A56190DC8254E004308BEA484D_6,D1AFA9BB4E0475A56190DC8254E004308BEA484D," Next step - - - -* [Code and run notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/code-run-notebooks.html) - - - -" -D1AFA9BB4E0475A56190DC8254E004308BEA484D_7,D1AFA9BB4E0475A56190DC8254E004308BEA484D," Learn more - - - -* [Provided CPU runtime environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.htmldefault-cpu) -* [Provided Spark runtime environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.htmldefault-spark) -* [Change the environment runtime used by a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.htmlchange-env) - - - -Parent topic:[Jupyter Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-editor.html) -" -3B2719C3B56D1BD40FA0D8C6853DDD078FD13D94_0,3B2719C3B56D1BD40FA0D8C6853DDD078FD13D94," Customizing environment templates - -You can change the name, the description, and the hardware configuration of an environment template that you created. You can customize the software configuration of Jupyter notebook environment templates through conda channels or by using pip. You can provide a list of conda packages, a list of pip packages, or a combination of both. When using conda packages, you can provide a list of additional conda channel locations through which the packages can be obtained. - -Required permissions : You must be have the Admin or Editor role in the project to customize an environment template. - -Restrictions : You cannot change the language of an existing environment template. : You can’t customize the software configuration of a Spark environment template you created. - -To customize an environment template that you created: - - - -1. Under your project's Manage tab, click the Environments page. -2. In the Active Runtimes section, check that no runtime is active for the environment template you want to change. -3. In the Environment Templates section, click the environment template you want to customize. -4. Make your changes. - -For a Juypter notebook environment template, select to create a customization and specify the libraries to add to the standard packages that are available by default. You can also use the customization to upgrade or downgrade packages that are part of the standard software configuration. - -The libraries that are added to an environment template through the customization aren't persisted; however, they are automatically installed each time the environment runtime is started. Note that if you add a library using pip install through a notebook cell and not through the customization, only you will be able to use this library; the library is not available to someone else using the same environment template. - -If you want you can use the provided template to add the custom libraries. There is a different template for Python and for R. The following example shows you how to add Python packages: - - Modify the following content to add a software customization to an environment. - To remove an existing customization, delete the entire content and click Apply. - - Add conda channels below defaults, indented by two spaces and a hyphen. -channels: -- defaults - - To add packages through conda or pip, remove the comment on the following line. - dependencies: - -" -3B2719C3B56D1BD40FA0D8C6853DDD078FD13D94_1,3B2719C3B56D1BD40FA0D8C6853DDD078FD13D94," Add conda packages here, indented by two spaces and a hyphen. - Remove the comment on the following line and replace sample package name with your package name: - - a_conda_package=1.0 - - Add pip packages here, indented by four spaces and a hyphen. - Remove the comments on the following lines and replace sample package name with your package name. - - pip: - - a_pip_package==1.0 - -Important when customizing: - - - -* Before you customize a package, verify that the changes you are planning have the intended effect. - - - -* conda can report the changes required for installing a given package, without actually installing it. You can verify the changes from your notebook. For example, for the library Plotly: - - - -* In a Python notebook, enter: !conda install --dry-run plotly -* In an R notebook, enter: print(system2(""conda"", args=c(""install"",""--dry-run"",""r-plotly""), stdout=TRUE)) - - - -* pip does install the package. However, restarting the runtime again after verification will remove the package. Here too you verify the changes from your notebook. For example, for the library Plotly: - - - -* In a Python notebook, enter: !pip install plotly -* In an R notebook, enter: print(system2(""pip"", args=""install plotly"", stdout=TRUE)) - - - - - -* If you can get a package through conda from the default channels and through pip from PyPI, the preferred method is through conda from the default channels. -* Conda does dependency checking when installing packages which can be memory intensive if you add many packages to the customization. Ensure that you select an environment with sufficient RAM to enable dependency checking at the time the runtime is started. -* To prevent unnecessary dependency checking if you only want packages from one Conda channel, exclude the default channels by removing defaults from the channels list in the template and adding nodefaults. -" -3B2719C3B56D1BD40FA0D8C6853DDD078FD13D94_2,3B2719C3B56D1BD40FA0D8C6853DDD078FD13D94,"* In addition to the Anaconda main channel, many packages for R can be found in Anaconda's R channel. In R environments, this channel is already part of the default channels, hence it does not need to be added separately. -* If you add packages only through pip or only through conda to the customization template, you must make sure that dependencies is not commented out in the template. -* When you specify a package version, use a single = for conda packages and == for pip packages. Wherever possible, specify a version number as this reduces the installation time and memory consumption significantly. If you don't specify a version, the package manager might pick the latest version available, or keep the version that is available in the package. -* You cannot add arbitrary notebook extensions as a customization because notebook extensions must be pre-installed. - - - -5. Apply your changes. - - - -" -3B2719C3B56D1BD40FA0D8C6853DDD078FD13D94_3,3B2719C3B56D1BD40FA0D8C6853DDD078FD13D94," Learn more - - - -* [Examples of customizations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html) -* [Installing custom packages through a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/install-cust-lib.html) - - - -Parent topic:[Managing compute resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html) -" -04B717FD06C5D906268E8530F4B521686065C6D5_0,04B717FD06C5D906268E8530F4B521686065C6D5," Data load support - -You can add automatically generated code to load data from project data assets to a notebook cell. The asset type can be a file or a database connection. - -By clicking in an empty code cell in your notebook, clicking the Code snippets icon (![the Code snippets icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/code-snippets-icon.png)) from the notebook toolbar, selecting Read data and an asset from the project, you can: - - - -* Insert the data source access credentials. This capability is available for all data assets that are added to a project. With the credentials, you can write your own code to access the asset and load the data into data structures of your choice. -* Generate code that is added to the notebook cell. The inserted code serves as a quick start to allow you to easily begin working with a data set or connection. For production systems, you should carefully review the inserted code to determine if you should write your own code that better meets your needs. - -When you run the code cell, the data is accessed and loaded into the data structure you selected. - -Notes: - - - -1. The ability to provide generated code is disabled for some connections if: - - - -* The connection credentials are personal credentials -* The connection uses a secure gateway link -* The connection credentials are stored in vaults - - - -2. If the file type or database connection that you are using doesn't appear in the following lists, you can select to create generic code. For Python this is a StreamingBody object and for R a textConnection object. - - - - - -The following tables show you which data source connections (file types and database connections) support the option to generate code. The options for generating code vary depending on the data source, the notebook coding language, and the notebook runtime compute. - -" -04B717FD06C5D906268E8530F4B521686065C6D5_1,04B717FD06C5D906268E8530F4B521686065C6D5," Supported files types - - - -Table 1. Supported file types - - Data source Notebook coding language Compute engine type Available support to load data - - CSV files - Python Anaconda Python distribution Load data into pandasDataFrame - With Spark Load data into pandasDataFrame and sparkSessionDataFrame - With Hadoop Load data into pandasDataFrame and sparkSessionDataFrame - R Anaconda R distribution Load data into R data frame - With Spark Load data into R data frame and sparkSessionDataFrame - With Hadoop Load data into R data frame and sparkSessionDataFrame - Python Script - Python Anaconda Python distribution Load data into pandasStreamingBody - With Spark Load data into pandasStreamingBody - With Hadoop Load data into pandasStreamingBody - R Anaconda R distribution Load data into rRawObject - With Spark Load data into rRawObject - With Hadoop Load data into rRawObject - JSON files - Python Anaconda Python distribution Load data into pandasDataFrame - With Spark Load data into pandasDataFrame and sparkSessionDataFrame - With Hadoop Load data into pandasDataFrame and sparkSessionDataFrame - R Anaconda R distribution Load data into R data frame - With Spark Load data into R data frame, rRawObject and sparkSessionDataFrame - With Hadoop Load data into R data frame, rRawObject and sparkSessionDataFrame - .xlsx and .xls files - Python Anaconda Python distribution Load data into pandasDataFrame - With Spark Load data into pandasDataFrame - With Hadoop Load data into pandasDataFrame - R Anaconda R distribution Load data into rRawObject - With Spark No data load support - With Hadoop No data load support - Octet-stream file types - Python Anaconda Python distribution Load data into pandasStreamingBody - With Spark Load data into pandasStreamingBody - R Anaconda R distribution Load data in rRawObject - With Spark Load data in rDataObject - PDF file type - Python Anaconda Python distribution Load data into pandasStreamingBody - With Spark Load data into pandasStreamingBody -" -04B717FD06C5D906268E8530F4B521686065C6D5_2,04B717FD06C5D906268E8530F4B521686065C6D5," With Hadoop Load data into pandasStreamingBody - R Anaconda R distribution Load data in rRawObject - With Spark Load data in rDataObject - With Hadoop Load data into rRawData - ZIP file type - Python Anaconda Python distribution Load data into pandasStreamingBody - With Spark Load data into pandasStreamingBody - R Anaconda R distribution Load data in rRawObject - With Spark Load data in rDataObject - JPEG, PNG image files - Python Anaconda Python distribution Load data into pandasStreamingBody - With Spark Load data into pandasStreamingBody - With Hadoop Load data into pandasStreamingBody - R Anaconda R distribution Load data in rRawObject - With Spark Load data in rDataObject - With Hadoop Load data in rDataObject - Binary files - Python Anaconda Python distribution Load data into pandasStreamingBody - With Spark Load data into pandasStreamingBody - Hadoop No data load support - R Anaconda R distribution Load data in rRawObject - With Spark Load data into rRawObject - Hadoop Load data in rDataObject - - - -" -04B717FD06C5D906268E8530F4B521686065C6D5_3,04B717FD06C5D906268E8530F4B521686065C6D5," Supported database connections - - - -Table 2. Supported database connections - - Data source Notebook coding language Compute engine type Available support to load data - - - [Db2 Warehouse on Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-wh.html)
- [IBM Db2 on Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-cloud.html)
- [IBM Db2 Database](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2.html) - Python Anaconda Python distribution Load data into ibmdbpyIda and ibmdbpyPandas - With Spark Load data into ibmdbpyIda, ibmdbpyPandas and sparkSessionDataFrame - With Hadoop Load data into ibmdbpyIda, ibmdbpyPandas and sparkSessionDataFrame - R Anaconda R distribution Load data into ibmdbrIda and ibmdbrDataframe - With Spark Load data into ibmdbrIda, ibmdbrDataFrame and sparkSessionDataFrame - With Hadoop Load data into ibmdbrIda, ibmdbrDataFrame and sparkSessionDataFrame - - [Db2 for z/OS](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-wh.html)
- Python Anaconda Python distribution Load data into ibmdbpyIda and ibmdbpyPandas - With Spark No data load support -" -04B717FD06C5D906268E8530F4B521686065C6D5_4,04B717FD06C5D906268E8530F4B521686065C6D5," - [Amazon Simple Storage Services (S3)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-amazon-s3.html)
- [Amazon Simple Storage Services (S3) with an IAM access policy](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-amazon-s3.html) - Python Anaconda Python distribution Load data into pandasStreamingBody - With Hadoop Load data into pandasStreamingBody and sparkSessionSetup - R Anaconda R distributuion Load data into rRawObject - With Hadoop Load data into rRawObject and sparkSessionSetup - - [IBM Cloud Databases for PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-dbase-postgresql.html)
- [Microsoft SQL Server](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sql-server.html) - Python Anaconda Python distribution Load data into pandasDataFrame - With Spark Load data into pandasDataFrame - R Anaconda R distribution Load data into R data frame - With Spark Load data into R data frame and sparkSessionDataFrame - - [IBM Cognos Analytics](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cognos.html) -" -04B717FD06C5D906268E8530F4B521686065C6D5_5,04B717FD06C5D906268E8530F4B521686065C6D5," Python Anaconda Python distribution Load data into pandasDataFrame

In the generated code:
- Edit the path parameter in the last line of code
- Remove the comment tagging

To read data, see [Reading data from a data source](https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ca_notebook.doc/c_read_notebook.html)
To search data, see [Searching for data objects](https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ca_notebook.doc/c_search_for_data_objects_notebook.html)
To write data, see [Writing data to a data source](https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ca_notebook.doc/c_write_notebook.html) - With Spark No data load support -" -04B717FD06C5D906268E8530F4B521686065C6D5_6,04B717FD06C5D906268E8530F4B521686065C6D5," R Anaconda R distribution Load data into R data frame

In the generated code:
- Edit the path parameter in the last line of code
- Remove the comment tagging

To read data, see [Reading data from a data source](https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ca_notebook.doc/c_read_notebook.html)
To search data, see [Searching for data objects](https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ca_notebook.doc/c_search_for_data_objects_notebook.html)
To write data, see [Writing data to a data source](https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ca_notebook.doc/c_write_notebook.html) - With Spark No data load support - - [Microsoft Azure Cosmos DB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cosmosdb.html) - Python Anaconda Python distribution Load data into pandasDataFrame - With Spark Load data into pandasDataFrame - R Anaconda R distribution No data load support - With Spark No data load support - - [Amazon RDS for MySQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-mysql.html)
- Python Anaconda Python distribution Load data into pandasDataFrame - With Spark Load data into pandasDataFrame - R Anaconda R distribution Load data into R data frame and sparkSessionDataFrame - With Spark No data load support -" -04B717FD06C5D906268E8530F4B521686065C6D5_7,04B717FD06C5D906268E8530F4B521686065C6D5," - [HTTP](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-http.html)
- [Apache Cassandra](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cassandra.html)
- [Amazon RDS for PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-postresql.html) - Python Anaconda Python distribution Load data into pandasDataFrame - With Spark Load data into pandasDataFrame - R Anaconda R distribution Load data into R data frame - With Spark Load data into R data frame - - - -Parent topic:[Loading and accessing data in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html) -" -E0E5646EA00A170BB595E9E0BBCCB69F702FFC7C_0,E0E5646EA00A170BB595E9E0BBCCB69F702FFC7C," Analyzing data and working with models - -You can analyze data and build or work with models in projects. The methods that you choose for preparing data or working models help you determine which tools best fit your needs. - -Each tool has a specific, primary task. Some tools have capabilities for multiple types of tasks. - -You can choose a tool based on how much automation you want: - - - -* Code editor tools: Use to write code in Python or R, all also with Spark. -* Graphical builder tools: Use menus and drag-and-drop functionality on a builder to visually program. -* Automated builder tools: Use to configure automated tasks that require limited user input. - - - - - -Tool to tasks - - Tool Primary task Tool type Work with data Work with models - - [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) Prepare and visualize data Graphical builder ✓ - [Visualizations](https://dataplatform.cloud.ibm.com/docs/content/dataview/idh_idc_cg_help_main.html) Build graphs to visualize data Graphical builder ✓ - [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) Experiment with foundation models and prompts Graphical builder ✓ - [Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html) Tune a foundation model to return output in a certain style or format Graphical builder ✓ ✓ - [Jupyter notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-editor.html) Work with data and models in Python or R notebooks Code editor ✓ ✓ - [Federated learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html) Train models on distributed data Code editor ✓ -" -E0E5646EA00A170BB595E9E0BBCCB69F702FFC7C_1,E0E5646EA00A170BB595E9E0BBCCB69F702FFC7C," [RStudio IDE](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-overview.html) Work with data and models in R Code editor ✓ ✓ - [SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html) Build models as a visual flow Graphical builder ✓ ✓ - [Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html) Solve optimization problems Graphical builder, code editor ✓ ✓ - [AutoAI tool](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) Build machine learning models automatically Automated builder ✓ ✓ - [Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) Automate model lifecycle Graphical builder ✓ ✓ - [Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html) Generate synthetic tabular data Graphical builder ✓ ✓ - - - -" -E0E5646EA00A170BB595E9E0BBCCB69F702FFC7C_2,E0E5646EA00A170BB595E9E0BBCCB69F702FFC7C," Learn more - - - -" -89F9E0463D14DED51B14392A4FD7A69BB53FA1BF_0,89F9E0463D14DED51B14392A4FD7A69BB53FA1BF," Data skipping for Spark SQL - -Data skipping can significantly boost the performance of SQL queries by skipping over irrelevant data objects or files based on a summary metadata associated with each object. - -Data skipping uses the open source Xskipper library for creating, managing and deploying data skipping indexes with Apache Spark. See [Xskipper - An Extensible Data Skipping Framework](https://xskipper.io). - -For more details on how to work with Xskipper see: - - - -* [Quick Start Guide](https://xskipper.io/getting-started/quick-start-guide/) -* [Demo Notebooks](https://xskipper.io/getting-started/sample-notebooks/) - - - -In addition to the open source features in Xskipper, the following features are also available: - - - -* [Geospatial data skipping](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-skipping-spark-sql.html?context=cdpaas&locale=engeospatial-skipping) -* [Encrypting indexes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-skipping-spark-sql.html?context=cdpaas&locale=enencrypting-indexes) -* [Data skipping with joins (for Spark 3 only)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-skipping-spark-sql.html?context=cdpaas&locale=enskipping-with-joins) -* [Samples showing these features](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-skipping-spark-sql.html?context=cdpaas&locale=ensamples) - - - -" -89F9E0463D14DED51B14392A4FD7A69BB53FA1BF_1,89F9E0463D14DED51B14392A4FD7A69BB53FA1BF," Geospatial data skipping - -You can also use data skipping when querying geospatial data sets using [geospatial functions](https://www.ibm.com/support/knowledgecenter/en/SSCJDQ/com.ibm.swg.im.dashdb.analytics.doc/doc/geo_functions.html) from the [spatio-temporal library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/geo-spatial-lib.html). - - - -* To benefit from data skipping in data sets with latitude and longitude columns, you can collect the min/max indexes on the latitude and longitude columns. -* Data skipping can be used in data sets with a geometry column (a UDT column) by using a built-in [Xskipper plugin](https://xskipper.io/api/indexing/plugins). - - - -The next sections show you to work with the geospatial plugin. - -" -89F9E0463D14DED51B14392A4FD7A69BB53FA1BF_2,89F9E0463D14DED51B14392A4FD7A69BB53FA1BF," Setting up the geospatial plugin - -To use the plugin, load the relevant implementations using the Registration module. Note that you can only use Scala in applications in IBM Analytics Engine powered by Apache Spark, not in Watson Studio. - - - -* For Scala: - -import com.ibm.xskipper.stmetaindex.filter.STMetaDataFilterFactory -import com.ibm.xskipper.stmetaindex.index.STIndexFactory -import com.ibm.xskipper.stmetaindex.translation.parquet.{STParquetMetaDataTranslator, STParquetMetadatastoreClauseTranslator} -import io.xskipper._ - -Registration.addIndexFactory(STIndexFactory) -Registration.addMetadataFilterFactory(STMetaDataFilterFactory) -Registration.addClauseTranslator(STParquetMetadatastoreClauseTranslator) -Registration.addMetaDataTranslator(STParquetMetaDataTranslator) -* For Python: - -from xskipper import Xskipper -from xskipper import Registration - -Registration.addMetadataFilterFactory(spark, 'com.ibm.xskipper.stmetaindex.filter.STMetaDataFilterFactory') -Registration.addIndexFactory(spark, 'com.ibm.xskipper.stmetaindex.index.STIndexFactory') -Registration.addMetaDataTranslator(spark, 'com.ibm.xskipper.stmetaindex.translation.parquet.STParquetMetaDataTranslator') -Registration.addClauseTranslator(spark, 'com.ibm.xskipper.stmetaindex.translation.parquet.STParquetMetadatastoreClauseTranslator') - - - -" -89F9E0463D14DED51B14392A4FD7A69BB53FA1BF_3,89F9E0463D14DED51B14392A4FD7A69BB53FA1BF," Index building - -To build an index, you can use the addCustomIndex API. Note that you can only use Scala in applications in IBM Analytics Engine powered by Apache Spark, not in Watson Studio. - - - -* For Scala: - -import com.ibm.xskipper.stmetaindex.implicits._ - -// index the dataset -val xskipper = new Xskipper(spark, dataset_path) - -xskipper -.indexBuilder() -// using the implicit method defined in the plugin implicits -.addSTBoundingBoxLocationIndex(""location"") -// equivalent -//.addCustomIndex(STBoundingBoxLocationIndex(""location"")) -.build(reader).show(false) -* For Python: - -xskipper = Xskipper(spark, dataset_path) - - adding the index using the custom index API -xskipper.indexBuilder() -.addCustomIndex(""com.ibm.xskipper.stmetaindex.index.STBoundingBoxLocationIndex"", ['location'], dict()) -.build(reader) -.show(10, False) - - - -" -89F9E0463D14DED51B14392A4FD7A69BB53FA1BF_4,89F9E0463D14DED51B14392A4FD7A69BB53FA1BF," Supported functions - -The list of supported geospatial functions includes the following: - - - -* ST_Distance -* ST_Intersects -* ST_Contains -* ST_Equals -* ST_Crosses -* ST_Touches -* ST_Within -* ST_Overlaps -* ST_EnvelopesIntersect -* ST_IntersectsInterior - - - -" -89F9E0463D14DED51B14392A4FD7A69BB53FA1BF_5,89F9E0463D14DED51B14392A4FD7A69BB53FA1BF," Encrypting indexes - -If you use a Parquet metadata store, the metadata can optionally be encrypted using Parquet Modular Encryption (PME). This is achieved by storing the metadata itself as a Parquet data set, and thus PME can be used to encrypt it. This feature applies to all input formats, for example, a data set stored in CSV format can have its metadata encrypted using PME. - -In the following section, unless specified otherwise, when referring to footers, columns, and so on, these are with respect to metadata objects, and not to objects in the indexed data set. - -Index encryption is modular and granular in the following way: - - - -* Each index can either be encrypted (with a per-index key granularity) or left in plain text -* Footer + object name column: - - - -* Footer column of the metadata object which in itself is a Parquet file contains, among other things: - - - -* Schema of the metadata object, which reveals the types, parameters and column names for all indexes collected. For example, you can learn that a BloomFilter is defined on column city with a false-positive probability of 0.1. -* Full path to the original data set or a table name in case of a Hive metastore table. - - - -* Object name column stores the names of all indexed objects. - - - -* Footer + metadata column can either be: - - - -* Both encrypted using the same key. This is the default. In this case, the plain text footer configuration for the Parquet objects comprising the metadata in encrypted footer mode, and the object name column is encrypted using the selected key. -* Both in plain text. In this case, the Parquet objects comprising the metadata are in plain text footer mode, and the object name column is not encrypted. - -" -89F9E0463D14DED51B14392A4FD7A69BB53FA1BF_6,89F9E0463D14DED51B14392A4FD7A69BB53FA1BF,"If at least one index is marked as encrypted, then a footer key must be configured regardless of whether plain text footer mode is enabled or not. If plain text footer is set then the footer key is used only for tamper-proofing. Note that in that case the object name column is not tamper proofed. - -If a footer key is configured, then at least one index must be encrypted. - - - - - -Before using index encryption, you should check the documentation on [PME](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/parquet-encryption.html) and make sure you are familiar with the concepts. - -Important: When using index encryption, whenever a key is configured in any Xskipper API, it's always the label NEVER the key itself. - -To use index encryption: - - - -1. Follow all the steps to make sure PME is enabled. See [PME](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/parquet-encryption.html). -2. Perform all regular PME configurations, including Key Management configurations. -3. Create encrypted metadata for a data set: - - - -1. Follow the regular flow for creating metadata. -2. Configure a footer key. If you wish to set a plain text footer + object name column, set io.xskipper.parquet.encryption.plaintext.footer to true (See samples below). -3. In IndexBuilder, for each index you want to encrypt, add the label of the key to use for that index. - -To use metadata during query time or to refresh existing metadata, no setup is necessary other than the regular PME setup required to make sure the keys are accessible (literally the same configuration needed to read an encrypted data set). - - - - - -" -89F9E0463D14DED51B14392A4FD7A69BB53FA1BF_7,89F9E0463D14DED51B14392A4FD7A69BB53FA1BF," Samples - -The following samples show metadata creation using a key named k1 as a footer + object name key, and a key named k2 as a key to encrypt a MinMax for temp, while also creating a ValueList for city, which is left in plain text. Note that you can only use Scala in applications in IBM Analytics Engine powered by Apache Spark, not in Watson Studio. - - - -* For Scala: - -// index the dataset -val xskipper = new Xskipper(spark, dataset_path) -// Configuring the JVM wide parameters -val jvmComf = Map( -""io.xskipper.parquet.mdlocation"" -> md_base_location, -""io.xskipper.parquet.mdlocation.type"" -> ""EXPLICIT_BASE_PATH_LOCATION"") -Xskipper.setConf(jvmConf) -// set the footer key -val conf = Map( -""io.xskipper.parquet.encryption.footer.key"" -> ""k1"") -xskipper.setConf(conf) -xskipper -.indexBuilder() -// Add an encrypted MinMax index for temp -.addMinMaxIndex(""temp"", ""k2"") -// Add a plaintext ValueList index for city -.addValueListIndex(""city"") -.build(reader).show(false) -* For Python - -xskipper = Xskipper(spark, dataset_path) - Add JVM Wide configuration -jvmConf = dict([ -(""io.xskipper.parquet.mdlocation"", md_base_location), -(""io.xskipper.parquet.mdlocation.type"", ""EXPLICIT_BASE_PATH_LOCATION"")]) -Xskipper.setConf(spark, jvmConf) - configure footer key -conf = dict([(""io.xskipper.parquet.encryption.footer.key"", ""k1"")]) -xskipper.setConf(conf) - adding the indexes -xskipper.indexBuilder() -.addMinMaxIndex(""temp"", ""k1"") -.addValueListIndex(""city"") -" -89F9E0463D14DED51B14392A4FD7A69BB53FA1BF_8,89F9E0463D14DED51B14392A4FD7A69BB53FA1BF,".build(reader) -.show(10, False) - - - -If you want the footer + object name to be left in plain text mode (as mentioned above), you need to add the configuration parameter: - - - -* For Scala: - -// index the dataset -val xskipper = new Xskipper(spark, dataset_path) -// Configuring the JVM wide parameters -val jvmComf = Map( -""io.xskipper.parquet.mdlocation"" -> md_base_location, -""io.xskipper.parquet.mdlocation.type"" -> ""EXPLICIT_BASE_PATH_LOCATION"") -Xskipper.setConf(jvmConf) -// set the footer key -val conf = Map( -""io.xskipper.parquet.encryption.footer.key"" -> ""k1"", -""io.xskipper.parquet.encryption.plaintext.footer"" -> ""true"") -xskipper.setConf(conf) -xskipper -.indexBuilder() -// Add an encrypted MinMax index for temp -.addMinMaxIndex(""temp"", ""k2"") -// Add a plaintext ValueList index for city -.addValueListIndex(""city"") -.build(reader).show(false) -* For Python - -xskipper = Xskipper(spark, dataset_path) - Add JVM Wide configuration -jvmConf = dict([ -(""io.xskipper.parquet.mdlocation"", md_base_location), -(""io.xskipper.parquet.mdlocation.type"", ""EXPLICIT_BASE_PATH_LOCATION"")]) -Xskipper.setConf(spark, jvmConf) - configure footer key -conf = dict([(""io.xskipper.parquet.encryption.footer.key"", ""k1""), -(""io.xskipper.parquet.encryption.plaintext.footer"", ""true"")]) -xskipper.setConf(conf) - adding the indexes -xskipper.indexBuilder() -.addMinMaxIndex(""temp"", ""k1"") -" -89F9E0463D14DED51B14392A4FD7A69BB53FA1BF_9,89F9E0463D14DED51B14392A4FD7A69BB53FA1BF,".addValueListIndex(""city"") -.build(reader) -.show(10, False) - - - -" -89F9E0463D14DED51B14392A4FD7A69BB53FA1BF_10,89F9E0463D14DED51B14392A4FD7A69BB53FA1BF," Data skipping with joins (for Spark 3 only) - -With Spark 3, you can use data skipping in join queries such as: - -SELECT -FROM orders, lineitem -WHERE l_orderkey = o_orderkey and o_custkey = 800 - -This example shows a star schema based on the TPC-H benchmark schema (see [TPC-H](http://www.tpc.org/tpch/)) where lineitem is a fact table and contains many records, while the orders table is a dimension table which has a relatively small number of records compared to the fact tables. - -The above query has a predicate on the orders tables which contains a small number of records which means using min/max will not benefit much from data skipping. - -Dynamic data skipping is a feature which enables queries such as the above to benefit from data skipping by first extracting the relevant l_orderkey values based on the condition on the orders table and then using it to push down a predicate on l_orderkey that uses data skipping indexes to filter irrelevant objects. - -To use this feature, enable the following optimization rule. Note that you can only use Scala in applications in IBM Analytics Engine powered by Apache Spark, not in Watson Studio. - - - -* For Scala: - -import com.ibm.spark.implicits. - -spark.enableDynamicDataSkipping() -* For Python: - -from sparkextensions import SparkExtensions - -SparkExtensions.enableDynamicDataSkipping(spark) - - - -Then use the Xskipper API as usual and your queries will benefit from using data skipping. - -For example, in the above query, indexing l_orderkey using min/max will enable skipping over the lineitem table and will improve query performance. - -" -89F9E0463D14DED51B14392A4FD7A69BB53FA1BF_11,89F9E0463D14DED51B14392A4FD7A69BB53FA1BF," Support for older metadata - -Xskipper supports older metadata created by the MetaIndexManager seamlessly. Older metadata can be used for skipping as updates to the Xskipper metadata are carried out automatically by the next refresh operation. - -If you see DEPRECATED_SUPPORTED in front of an index when listing indexes or running a describeIndex operation, the metadata version is deprecated but is still supported and skipping will work. The next refresh operation will update the metadata automatically. -" -E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_0,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69," SPSS predictive analytics data preparation algorithms in notebooks - -Descriptives provides efficient computation of the univariate and bivariate statistics and automatic data preparation features on large scale data. It can be used widely in data profiling, data exploration, and data preparation for subsequent modeling analyses. - -The core statistical features include essential univariate and bivariate statistical summaries, univariate order statistics, metadata information creation from raw data, statistics for visualization of single fields and field pairs, data preparation features, and data interestingness score and data quality assessment. It can efficiently support the functionality required for automated data processing, user interactivity, and obtaining data insights for single fields or the relationships between the pairs of fields inclusive with a specified target. - -" -E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_1,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69,"Python example code: - -from spss.ml.datapreparation.descriptives import Descriptives - -de = Descriptives(). -setInputFieldsList([""Field1"", ""Field2""]). -setTargetFieldList([""Field3""]). -setTrimBlanks(""TRIM_BOTH"") - -deModel = de.fit(df) - -PMML = deModel.toPMML() -statXML = deModel.statXML() - -predictions = deModel.transform(df) -predictions.show() - -" -E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_2,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69," Descriptives Selection Strategy - -When the number of field pairs is too large (for example, larger than the default of 1000), SelectionStrategy is used to limit the number of pairs for which bivariate statistics will be computed. The strategy involves 2 steps: - - - -1. Limit the number of pairs based on the univariate statistics. -2. Limit the number of pairs based on the core association bivariate statistics. - - - -Notice that the pair will always be included under the following conditions: - - - -1. The pair consists of a predictor field and a target field. -2. The pair of predictors or targets is enforced. - - - -" -E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_3,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69," Smart Data Preprocessing - -The Smart Data Preprocessing (SDP) engine is an analytic component for data preparation. It consists of three separate modules: relevance analysis, relevance and redundancy analysis, and smart metadata (SMD) integration. - -Given the data with regular fields, list fields, and map fields, relevance analysis evaluates the associations of input fields with targets, and selects a specified number of fields for subsequent analysis. Meanwhile, it expands list fields and map fields, and extracts the selected fields into regular column-based format. - -Due to the efficiency of relevance analysis, it's also used to reduce the large number of fields in wide data to a moderate level where traditional analytics can work. - -SmartDataPreprocessingRelevanceAnalysis exports these outputs: - - - -* JSON file, containing model information -* new column-based data -* the related data model - - - -" -E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_4,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69,"Python example code: - -from spss.ml.datapreparation.smartdatapreprocessing import SmartDataPreprocessingRelevanceAnalysis - -sdpRA = SmartDataPreprocessingRelevanceAnalysis(). -setInputFieldList([""holderage"", ""vehicleage"", ""claimamt""]). -setTargetFieldList([""vehiclegroup"", ""nclaims""]). -setMaxNumTarget(3). -setInvalidPairsThresEnabled(True). -setRMSSEThresEnabled(True). -setAbsVariCoefThresEnabled(True). -setInvalidPairsThreshold(0.7). -setRMSSEThreshold(0.7). -setAbsVariCoefThreshold(0.05). -setMaxNumSelFields(2). -setConCatRatio(0.3). -setFilterSelFields(True) - -predictions = sdpRA.transform(data) -predictions.show() - -" -E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_5,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69," Sparse Data Convertor - -Sparse Data Convertor (SDC) converts regular data fields into list fields. You just need to specify the fields that you want to convert into list fields, then SDC will merge the fields according to their measurement level. It will generate, at most, three kinds of list fields: continuous list field, categorical list field, and map field. - -" -E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_6,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69,"Python example code: - -from spss.ml.datapreparation.sparsedataconverter import SparseDataConverter - -sdc = SparseDataConverter(). -setInputFieldList([""Age"", ""Sex"", ""Marriage"", ""BP"", ""Cholesterol"", ""Na"", ""K"", ""Drug""]) -predictions = sdc.transform(data) -predictions.show() - -" -E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_7,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69," Binning - -You can use this function to derive one or more new binned fields or to obtain the bin definitions used to determine the bin values. - -" -E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_8,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69,"Python example code: - -from spss.ml.datapreparation.binning.binning import Binning - -binDefinition = BinDefinitions(1, False, True, True, [CutPoint(50.0, False)]) -binField = BinRequest(""integer_field"", ""integer_bin"", binDefinition, None) - -params = [binField] -bining = Binning().setBinRequestsParam(params) - -outputDF = bining.transform(inputDF) - -" -E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_9,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69," Hex Binning - -You can use this function to calculate and assign hexagonal bins to two fields. - -" -E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_10,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69,"Python example code: - -from spss.ml.datapreparation.binning.hexbinning import HexBinning -from spss.ml.param.binningsettings import HexBinningSetting - -params = [HexBinningSetting(""field1_out"", ""field1"", 5, -1.0, 25.0, 5.0), -HexBinningSetting(""field2_out"", ""field2"", 5, -1.0, 25.0, 5.0)] - -hexBinning = HexBinning().setHexBinRequestsParam(params) -outputDF = hexBinning.transform(inputDF) - -" -E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_11,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69," Complex Sampling - -The complexSampling function selects a pseudo-random sample of records from a data source. - -The complexSampling function performs stratified sampling of incoming data using simple exact sampling and simple proportional sampling. The stratifying fields are specified as input and the sampling counts or sampling ratio for each of the strata to be sampled must also be provided. Optionally, the record counts for each strata may be provided to improve performance. - -" -E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_12,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69,"Python example code: - -from spss.ml.datapreparation.sampling.complexsampling import ComplexSampling -from spss.ml.datapreparation.params.sampling import RealStrata, Strata, Stratification - -transformer = ComplexSampling(). -setRandomSeed(123444). -setRepeatable(True). -setStratification(Stratification([""real_field""], [ -Strata(key=RealStrata(11.1)], samplingCount=25), -Strata(key=RealStrata(2.4)], samplingCount=40), -Strata(key=RealStrata(12.9)], samplingRatio=0.5)])). -setFrequencyField(""frequency_field"") - -sampled = transformer.transform(unionDF) - -" -E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_13,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69," Count and Sample - -The countAndSample function produces a pseudo-random sample having a size approximately equal to the \'samplingCount\' input. - -The sampling is accomplished by calling the SamplingComponent with a sampling ratio that's computed as \'samplingCount / totalRecords\' where \'totalRecords\' is the record count of the incoming data. - -" -E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_14,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69,"Python example code: - -from spss.ml.datapreparation.sampling.countandsample import CountAndSample - -transformer = CountAndSample().setSamplingCount(20000).setRandomSeed(123) -sampled = transformer.transform(unionDF) - -" -E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_15,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69," MR Sampling - -The mrsampling function selects a pseudo-random sample of records from a data source at a specified sampling ratio. The size of the sample will be approximately the specified proportion of the total number of records subject to an optional maximum. The set of records and their total number will vary with random seed. Every record in the data source has the same probability of being selected. - -" -E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_16,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69,"Python example code: - -from spss.ml.datapreparation.sampling.mrsampling import MRSampling - -transformer = MRSampling().setSamplingRatio(0.5).setRandomSeed(123).setDiscard(True) -sampled = transformer.transform(unionDF) - -" -E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_17,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69," Sampling Model - -The samplingModel function selects a pseudo-random percentage of the subsequence of input records defined by every Nth record for a given step size N. The total sample size may be optionally limited by a maximum. - -When the step size is 1, the subsequence is the entire sequence of input records. When the sampling ratio is 1.0, selection becomes deterministic, not pseudo-random. - -Note that with distributed data, the samplingModel function applies the selection criteria independently to each data split. The maximum sample size, if any, applies independently to each split and not to the entire data source; the subsequence is started fresh at the start of each split. - -" -E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_18,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69,"Python example code: - -from spss.ml.datapreparation.sampling.samplingcomponent import SamplingModel - -transformer = SamplingModel().setSamplingRatio(1.0).setSamplingStep(2).setRandomSeed(123).setDiscard(False) -sampled = transformer.transform(unionDF) - -" -E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_19,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69," Sequential Sampling - -The sequentialSampling function is similar to the samplingModel function. It also selects a pseudo-random percentage of the subsequence of input records defined by every Nth record for a given step size N. The total sample size may be optionally limited by a maximum. - -When the step size is 1, the subsequence is the entire sequence of input records. When the sampling ratio is 1.0, selection becomes deterministic, not pseudo-random. The main difference between sequentialSampling and samplingModel is that with distributed data, the sequentialSampling function applies the selection criteria to the entire data source, while the samplingModel function applies the selection criteria independently to each data split. - -" -E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_20,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69,"Python example code: - -from spss.ml.datapreparation.sampling.samplingcomponent import SequentialSampling - -transformer = SequentialSampling().setSamplingRatio(1.0).setSamplingStep(2).setRandomSeed(123).setDiscard(False) -sampled = transformer.transform(unionDF) - -Parent topic:[SPSS predictive analytics algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-algorithms.html) -" -3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1_0,3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1," Data sources for scoring batch deployments - -You can supply input data for a batch deployment job in several ways, including directly uploading a file or providing a link to database tables. The types of allowable input data vary according to the type of deployment job that you are creating. - -For supported input types by framework, refer to [Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html). - -Input data can be supplied to a batch job as [inline data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-data-sources.html?context=cdpaas&locale=eninline_data) or [data reference](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-data-sources.html?context=cdpaas&locale=endata_ref). - -" -3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1_1,3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1," Available input types for batch deployments by framework and asset type - - - -Available input types for batch deployments by framework and asset type - - Framework Batch deployment type - - Decision optimization Reference - Python function Inline - PyTorch Inline and Reference - Tensorflow Inline and Reference - Scikit-learn Inline and Reference - Python scripts Reference - Spark MLlib Inline and Reference - SPSS Inline and Reference - XGBoost Inline and Reference - - - -" -3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1_2,3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1," Inline data description - -Inline type input data for batch processing is specified in the batch deployment job's payload. For example, you can pass a CSV file as the deployment input in the UI or as a value for the scoring.input_data parameter in a notebook. When the batch deployment job is completed, the output is written to the corresponding job's scoring.predictions metadata parameter. - -" -3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1_3,3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1," Data reference description - -Input and output data of type data reference that is used for batch processing can be stored: - - - -* In a remote data source, like a Cloud Object Storage bucket or an SQL or no-SQL database. -* As a local or managed data asset in a deployment space. - - - -Details for data references include: - - - -* Data source reference type depends on the asset type. Refer to Data source reference types section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). -* For data_asset type, the reference to input data must be specified as a /v2/assets href in the input_data_references.location.href parameter in the deployment job's payload. The data asset that is specified is a reference to a local or a connected data asset. Also, if the batch deployment job's output data must be persisted in a remote data source, the references to output data must be specified as a /v2/assets href in output_data_reference.location.href parameter in the deployment job's payload. -* Any input and output data_asset references must be in the same space ID as the batch deployment. -* If the batch deployment job's output data must be persisted in a deployment space as a local asset, output_data_reference.location.name must be specified. When the batch deployment job is completed successfully, the asset with the specified name is created in the space. -* Output data can contain information on where in a remote database the data asset is located. In this situation, you can specify whether to append the batch output to the table or truncate the table and update the output data. Use the output_data_references.location.write_mode parameter to specify the values truncate or append. - - - -* Specifying truncate as value truncates the table and inserts the batch output data. -* Specifying append as value appends the batch output data to the remote database table. -" -3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1_4,3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1,"* write_mode is applicable only for the output_data_references parameter. -* write_mode is applicable only for remote database-related data assets. This parameter is not applicable for a local data asset or a Cloud Object Storage based data asset. - - - - - -" -3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1_5,3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1," Example data_asset payload - -""input_data_references"": [{ -""type"": ""data_asset"", -""connection"": { -}, -""location"": { -""href"": ""/v2/assets/?space_id="" -} -}] - -" -3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1_6,3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1," Example connection_asset payload - -""input_data_references"": [{ -""type"": ""connection_asset"", -""connection"": { -""id"": """" -}, -""location"": { -""bucket"": """", -""file_name"": ""/"" -} - -}] - -" -3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1_7,3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1," Structuring the input data - -How you structure the input data, also known as the payload, for the batch job depends on the framework for the asset you are deploying. - -A .csv input file or other structured data formats must be formatted to match the schema of the asset. List the column names (fields) in the first row and values to be scored in subsequent rows. For example, see the following code snippet: - -PassengerId, Pclass, Name, Sex, Age, SibSp, Parch, Ticket, Fare, Cabin, Embarked -1,3,""Braund, Mr. Owen Harris"",0,22,1,0,A/5 21171,7.25,,S -4,1,""Winslet, Mr. Leo Brown"",1,65,1,0,B/5 200763,7.50,,S - -A JSON input file must provide the same information on fields and values, by using this format: - -{""input_data"":[{ -""fields"": , , ...], -""values"": , , ...]] -}]} - -For example: - -{""input_data"":[{ -""fields"": ""PassengerId"",""Pclass"",""Name"",""Sex"",""Age"",""SibSp"",""Parch"",""Ticket"",""Fare"",""Cabin"",""Embarked""], -""values"": 1,3,""Braund, Mr. Owen Harris"",0,22,1,0,""A/5 21171"",7.25,null,""S""], -4,1,""Winselt, Mr. Leo Brown"",1,65,1,0,""B/5 200763"",7.50,null,""S""]] -}]} - -" -3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1_8,3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1," Preparing a payload that matches the schema of an existing model - -Refer to this sample code: - -model_details = client.repository.get_details("""") retrieves details and includes schema -columns_in_schema = [] -for i in range(0, len(model_details['entity']['input'].get('fields'))): -columns_in_schema.append(model_details['entity']['input'].get('fields')[i]) - -X = X[columns_in_schema] where X is a pandas dataframe that contains values to be scored -(...) -scoring_values = X.values.tolist() -array_of_input_fields = X.columns.tolist() -payload_scoring = {""input_data"": [{""fields"": array_of_input_fields],""values"": scoring_values}]} - -Parent topic:[Creating a batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html) -" -653FFEDFAC00F360750F776A3A60F6AAD38ED954_0,653FFEDFAC00F360750F776A3A60F6AAD38ED954," Creating batch deployments in Watson Machine Learning - -A batch deployment processes input data from a file, data connection, or connected data in a storage bucket, and writes the output to a selected destination. - -" -653FFEDFAC00F360750F776A3A60F6AAD38ED954_1,653FFEDFAC00F360750F776A3A60F6AAD38ED954," Before you begin - - - -1. Save a model to a deployment space. -2. Promote or add the input file for the batch deployment to the space. For more information, see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html). - - - -" -653FFEDFAC00F360750F776A3A60F6AAD38ED954_2,653FFEDFAC00F360750F776A3A60F6AAD38ED954," Supported frameworks - -Batch deployment is supported for these frameworks and asset types: - - - -* Decision Optimization -* PMML -* Python functions -* PyTorch-Onnx -* Tensorflow -* Scikit-learn -* Python scripts -* Spark MLlib -* SPSS -* XGBoost - - - -Notes: - - - -* You can create batch deployments only of Python functions and models based on the PMML framework programmatically. -* Your list of deployment jobs can contain two types of jobs: WML deployment job and WML batch deployment. -* When you create a batch deployment (through the UI or programmatically), an extra default deployment job is created of the type WML deployment job. The extra job is a parent job that stores all deployment runs generated for that batch deployment that were triggered by the Watson Machine Learning API. -* The standard WML batch deployment type job is created only when you create a deployment from the UI. You cannot create a WML batch deployment type job by using the API. -* The limitations of WML deployment job are as follows: - - - -* The job cannot be edited. -* The job cannot be deleted unless the associated batch deployment is deleted. -* The job doesn't allow scheduling. -* The job doesn't allow notifications. -* The job doesn't allow changing retention settings. - - - - - -For more information, see [Data sources for scoring batch deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-data-sources.html). For more information, see [Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html) - -" -653FFEDFAC00F360750F776A3A60F6AAD38ED954_3,653FFEDFAC00F360750F776A3A60F6AAD38ED954," Creating a batch deployment - -To create a batch deployment: - - - -1. From the deployment space, click the name of the saved model that you want to deploy. The model detail page opens. -2. Click New deployment. -3. Choose Batch as the deployment type. -4. Enter a name and an optional description for your deployment. -5. Select a [hardware specification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-hardware-configs.html). -6. Click Create. When status changes to Deployed, your deployment is created. - - - -Note: Additionally, you can create a batch deployment by using any of these interfaces: - - - -* Watson Studio user interface, from an Analytics deployment space -* Watson Machine Learning Python Client -* Watson Machine Learning REST APIs - - - -" -653FFEDFAC00F360750F776A3A60F6AAD38ED954_4,653FFEDFAC00F360750F776A3A60F6AAD38ED954," Creating batch deployments programmatically - -See [Machine learning samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html) for sample notebooks that demonstrate creating batch deployments that use the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) and Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/). - -" -653FFEDFAC00F360750F776A3A60F6AAD38ED954_5,653FFEDFAC00F360750F776A3A60F6AAD38ED954," Viewing deployment details - -Click the name of a deployment to view the details. - -![View deployment details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/batch-details.png) - -You can view the configuration details such as hardware and software specifications. You can also get the deployment ID, which you can use in API calls from an endpoint. For more information, see [Looking up a deployment endpoint](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html). - -" -653FFEDFAC00F360750F776A3A60F6AAD38ED954_6,653FFEDFAC00F360750F776A3A60F6AAD38ED954," Learn more - - - -* For more information, see [Creating jobs in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-jobs.html). -* Refer to [Machine Learning samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html) for links to sample notebooks that demonstrate creating batch deployments that use the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning-cp) and Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/). - - - -Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) -" -7F755B81AB25CBD0950D528A240B12262FE6CA08_0,7F755B81AB25CBD0950D528A240B12262FE6CA08," Batch deployment input details for AutoAI models - -Follow these rules when you are specifying input details for batch deployments of AutoAI models. - -Data type summary table: - - - - Data Description - - Type inline, data references - File formats CSV - - - -" -7F755B81AB25CBD0950D528A240B12262FE6CA08_1,7F755B81AB25CBD0950D528A240B12262FE6CA08," Data Sources - -Input/output data references: - - - -* Local/managed assets from the space -* Connected (remote) assets: Cloud Object Storage - - - -Notes: - - - -* For connections of type [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) , you must configure Access key and Secret key, also known as [HMAC credentials](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main).{: new_window} -* Your training data source can differ from your deployment data source, but the schema of the data must match or the deployment will fail. For example, you can train an experiment by using data from a Snowflake database and deploy by using input data from a Db2 database if the schema is an exact match. -* The environment variables parameter of deployment jobs is not applicable. - - - -If you are specifying input/output data references programmatically: - - - -* Data source reference type depends on the asset type. Refer to the Data source reference types section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). -* For AutoAI assets, if the input or output data reference is of type connection_asset and the remote data source is a database then location.table_name and location.schema_name are required parameters. For example: - - - -""input_data_references"": [{ -""type"": ""connection_asset"", -""connection"": { -""id"": -}, -""location"": { -""table_name"": , -""schema_name"": - -} -" -7F755B81AB25CBD0950D528A240B12262FE6CA08_2,7F755B81AB25CBD0950D528A240B12262FE6CA08,"}] - -Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html) -" -4C6242D9F2B3E125780FDF188F994270A6E2340D_0,4C6242D9F2B3E125780FDF188F994270A6E2340D," Batch deployment input details by framework - -Various data types are supported as input for batch deployments, depending on your specific model type. - -For details, follow these links: - - - -* [AutoAI models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-autoai.html) -* [Decision optimization models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-do.html) -* [Python functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-py-function.html) -* [Python scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-py-script.html) -* [Pytorch models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-pytorch.html) -* [Scikit-Learn and XGBoost models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-scikit.html) -* [Spark models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-spark.html) -* [SPSS models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-spss.html) -* [Tensorflow models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-tensorflow.html) - - - -" -4C6242D9F2B3E125780FDF188F994270A6E2340D_1,4C6242D9F2B3E125780FDF188F994270A6E2340D,"For more information, see [Using multiple inputs for an SPSS job](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-SPSS-multiple-input.html). - -Parent topic:[Creating a batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html) -" -722D44681192F1766A0B1BACC328E719526E8DE2_0,722D44681192F1766A0B1BACC328E719526E8DE2," Batch deployment input details for Decision Optimization models - -Follow these rules when you are specifying input details for batch deployments of Decision Optimization models. - -Data type summary table: - - - - Data Description - - Type inline and data references - File formats Refer to [Model input and output data file formats](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIODataDefn.html). - - - -" -722D44681192F1766A0B1BACC328E719526E8DE2_1,722D44681192F1766A0B1BACC328E719526E8DE2," Data sources - -Input/output inline data: - - - -* Inline input data is converted to CSV files and used by the engine. -* CSV output data is converted to output inline data. -* Base64-encoded raw data is supported as input and output. - - - -Input/output data references: - - - -* Tabular data is loaded from CSV, XLS, XLSX, JSON files or database data sources supported by the WDP connection library, converted to CSV files, and used by the engine. -* CSV output data is converted to tabular data and saved to CSV, XLS, XLSX, JSON files, or database data sources supported by the WDP connection library. -* Raw data can be loaded and saved from or to any file data sources that are supported by the WDP connection library. -* No support for compressed files. -* The environment variables parameter of deployment jobs is not applicable. - - - -If you are specifying input/output data references programmatically: - - - -* Data source reference type depends on the asset type. Refer to the Data source reference types section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). -* For S3 or Db2, connection details must be specified in the input_data_references.connection parameter, in the deployment job’s payload. -* For S3 or Db2, location details such as table name, bucket name, or path must be specified in the input_data_references.location.path parameter, in the deployment job’s payload. -* For data_asset, a managed asset can be updated or created. For creation, you can set the name and description for the created asset. -* You can use a pattern in ID or connection properties. For example, see the following code snippet: - - - -* To collect all output CSV as inline data: - -""output_data"": [ { ""id"":""..csv""}] -* To collect job output in a particular S3 folder: - -" -722D44681192F1766A0B1BACC328E719526E8DE2_2,722D44681192F1766A0B1BACC328E719526E8DE2,"""output_data_references"": [ {""id"":""."", ""type"": ""s3"", ""connection"": {...}, ""location"": { ""bucket"": ""do-wml"", ""path"": ""${job_id}/${attachment_name}"" }}] - - - - - -Note:Support for s3 and db2 values for scoring.input_data_references.type and scoring.output_data_references.type is deprecated and will be removed in the future. Use connection_asset or data_asset instead. See the documentation for the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) or Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/){: new_window} for details and examples. - -For more information, see [Model input and output data adaptation](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIODataDefn.html). - -Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html) -" -4F89E6B2B76E64B9618F799611DD1B053D045222_0,4F89E6B2B76E64B9618F799611DD1B053D045222," Batch deployment input details for Python functions - -Follow these rules when you are specifying input details for batch deployments of Python functions. - -Data type summary table: - - - - Data Description - - Type inline - File formats N/A - - - -You can deploy Python functions in Watson Machine Learning the same way that you can deploy models. Your tools and apps can use the Watson Machine Learning Python client or REST API to send data to your deployed functions in the same way that they send data to deployed models. Deploying functions gives you the ability to: - - - -* Hide details (such as credentials) -* Preprocess data before you pass it to models -* Handle errors -* Include calls to multiple models All of these actions take place within the deployed function, instead of in your application. - - - -" -4F89E6B2B76E64B9618F799611DD1B053D045222_1,4F89E6B2B76E64B9618F799611DD1B053D045222," Data sources - -If you are specifying input/output data references programmatically: - - - -* Data source reference type depends on the asset type. Refer to the Data source reference types section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - - - -Notes: - - - -* For connections of type [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) or [Cloud Object Storage (infrastructure)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html), you must configure Access key and Secret key, also known as [HMAC credentials](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main). -* The environment variables parameter of deployment jobs is not applicable. -* Make sure that the output is structured to match the output schema that is described in [Execute a synchronous deployment prediction](https://cloud.ibm.com/apidocs/machine-learningdeployments-compute-predictions). - - - -" -4F89E6B2B76E64B9618F799611DD1B053D045222_2,4F89E6B2B76E64B9618F799611DD1B053D045222," Learn more - -[Deploying Python functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function.html). - -Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html) -" -85A8F36D819B12B355508090E787F4A182686394_0,85A8F36D819B12B355508090E787F4A182686394," Batch deployment input details for Python scripts - -Follow these rules when you specify input details for batch deployments of Python scripts. - -Data type summary table: - - - - Data Description - - Type Data references - File formats Any - - - -" -85A8F36D819B12B355508090E787F4A182686394_1,85A8F36D819B12B355508090E787F4A182686394," Data sources - -Input or output data references: - - - -* Local or managed assets from the space -* Connected (remote) assets: Cloud Object Storage - - - -Notes: - - - -* For connections of type [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) or [Cloud Object Storage(infrastructure)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html), you must configure Access key and Secret key, also known as [HMAC credentials](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main). - - - -If you are specifying input/output data references programmatically: - - - -* Data source reference type depends on the asset type. For more information, see Data source reference types section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). -* You can specify the environment variables that are required for running the Python Script as 'key': 'value' pairs in scoring.environment_variables. The key must be the name of an environment variable and the value must be the corresponding value of the environment variable. -* The deployment job's payload is saved as a JSON file in the deployment container where you run the Python script. The Python script can access the full path file name of the JSON file that uses the JOBS_PAYLOAD_FILE environment variable. -* If input data is referenced as a local or managed data asset, deployment service downloads the input data and places it in the deployment container where you run the Python script. You can access the location (path) of the downloaded input data through the BATCH_INPUT_DIR environment variable. -" -85A8F36D819B12B355508090E787F4A182686394_2,85A8F36D819B12B355508090E787F4A182686394,"* For input data references (data asset or connection asset), downloading of the data must be handled by the Python script. If a connected data asset or a connection asset is present in the deployment jobs payload, you can access it using the JOBS_PAYLOAD_FILE environment variable that contains the full path to the deployment job's payload that is saved as a JSON file. -* If output data must be persisted as a local or managed data asset in a space, you can specify the name of the asset to be created in scoring.output_data_reference.location.name. As part of a Python script, output data can be placed in the path that is specified by the BATCH_OUTPUT_DIR environment variable. The deployment service compresses the data to compressed file format and upload it in the location that is specified in BATCH_OUTPUT_DIR. -* These environment variables are set internally. If you try to set them manually, your values are overridden: - - - -* BATCH_INPUT_DIR -* BATCH_OUTPUT_DIR -* JOBS_PAYLOAD_FILE - - - -* If output data must be saved in a remote data store, you must specify the reference of the output data reference (for example, a data asset or a connected data asset) in output_data_reference.location.href. The Python script must take care of uploading the output data to the remote data source. If a connected data asset or a connection asset reference is present in the deployment jobs payload, you can access it using the JOBS_PAYLOAD_FILE environment variable, which contains the full path to the deployment job's payload that is saved as a JSON file. -* If the Python script does not require any input or output data references to be specified in the deployment job payload, then do not provide the scoring.input_data_references and scoring.output_data_references objects in the payload. - - - -" -85A8F36D819B12B355508090E787F4A182686394_3,85A8F36D819B12B355508090E787F4A182686394," Learn more - -[Deploying scripts in Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-script.html). - -Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html) -" -27A861059A73E83BC02C633EE194DAC6F8ACE374_0,27A861059A73E83BC02C633EE194DAC6F8ACE374," Batch deployment input details for Pytorch models - -Follow these rules when you are specifying input details for batch deployments of Pytorch models. - -Data type summary table: - - - - Data Description - - Type inline, data references - File formats .zip archive that contains JSON files - - - -" -27A861059A73E83BC02C633EE194DAC6F8ACE374_1,27A861059A73E83BC02C633EE194DAC6F8ACE374," Data sources - -Input or output data references: - - - -* Local or managed assets from the space -* Connected (remote) assets: Cloud Object Storage - - - -If you are specifying input/output data references programmatically: - - - -* Data source reference type depends on the asset type. Refer to the Data source reference types section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). -* If you deploy Pytorch models with ONNX format, specify the keep_initializers_as_inputs=True flag and set opset_version to 9 (always set opset_version to the most recent version that is supported by the deployment runtime). - -torch.onnx.export(net, x, 'lin_reg1.onnx', verbose=True, keep_initializers_as_inputs=True, opset_version=9) - - - -Note: The environment variables parameter of deployment jobs is not applicable. - -Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html) -" -CDF460B2BB910F74723297BCB8E940BF370C6FFD_0,CDF460B2BB910F74723297BCB8E940BF370C6FFD," Batch deployment input details for Scikit-learn and XGBoost models - -Follow these rules when you are specifying input details for batch deployments of Scikit-learn and XGBoost models. - -Data type summary table: - - - - Data Description - - Type inline, data references - File formats CSV, .zip archive that contains CSV files - - - -" -CDF460B2BB910F74723297BCB8E940BF370C6FFD_1,CDF460B2BB910F74723297BCB8E940BF370C6FFD," Data source - -If you are specifying input/output data references programmatically: - - - -* Data source reference type depends on the asset type. Refer to the Data source reference types section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - - - -Notes: - - - -* The environment variables parameter of deployment jobs is not applicable. -* For connections of type [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) or [Cloud Object Storage (infrastructure)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html), you must configure Access key and Secret key, also known as [HMAC credentials](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main), - - - -Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html) -" -ADBD308EEB761B4A1516D49F68C880EAF3F08D78,ADBD308EEB761B4A1516D49F68C880EAF3F08D78," Batch deployment input details for Spark models - -Follow these rules when you are specifying input details for batch deployments of Spark models. - -Data type summary table: - - - - Data Description - - Type Inline - File formats N/A - - - -Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html) -" -62BF74E391CFE1696E5218B3DF0926B735A4788F_0,62BF74E391CFE1696E5218B3DF0926B735A4788F," Batch deployment input details for SPSS models - -Follow these rules when you are specifying input details for batch deployments of SPSS models. - -Data type summary table: - - - - Data Description - - Type inline, data references - File formats CSV - - - -" -62BF74E391CFE1696E5218B3DF0926B735A4788F_1,62BF74E391CFE1696E5218B3DF0926B735A4788F," Data sources - -Input or output data references: - - - -* Local or managed assets from the space -* Connected (remote) assets from these sources: - - - -* [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) -* [Db2 Warehouse](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-wh.html) -* [Db2](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2.html) -* [Google Big-Query (googlebq)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-bigquery.html) -* [MySQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mysql.html) -* [Microsoft SQL Server](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sql-server.html) -* [Teradata (teradata)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-teradata.html) -* [PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-postgresql.html) -* [Oracle](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-oracle.html) -* [Snowflake](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-snowflake.html) -* [Informix](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-informix.html) -" -62BF74E391CFE1696E5218B3DF0926B735A4788F_2,62BF74E391CFE1696E5218B3DF0926B735A4788F,"* [Netezza Performance Server](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-puredata.html) - - - - - -Notes: - - - -* For connections of type [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) or [Cloud Object Storage (infrastructure)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html), you must configure Access key and Secret key, also known as [HMAC credentials](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main). -* For SPSS deployments, these data sources are not compliant with Federal Information Processing Standard (FIPS): - - - -* Cloud Object Storage -* Cloud Object Storage (infrastructure) -* Storage volumes - - - - - -If you are specifying input/output data references programmatically: - - - -* Data source reference type depends on the asset type. Refer to the Data source reference types section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). -* SPSS jobs support multiple data source inputs and a single output. If the schema is not provided in the model metadata at the time of saving the model, you must enter id manually and select a data asset for each connection. If the schema is provided in model metadata, id names are populated automatically by using metadata. You select the data asset for the corresponding ids in Watson Studio. For more information, see [Using multiple data sources for an SPSS job](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-SPSS-multiple-input.html). -" -62BF74E391CFE1696E5218B3DF0926B735A4788F_3,62BF74E391CFE1696E5218B3DF0926B735A4788F,"* To create a local or managed asset as an output data reference, the name field must be specified for output_data_reference so that a data asset is created with the specified name. Specifying an href that refers to an existing local data asset is not supported. Note: - - - -Connected data assets that refer to supported databases can be created in the output_data_references only when the input_data_references also refers to one of these sources. - - - -* Table names that are provided in input and output data references are ignored. Table names that are referred in the SPSS model stream are used during the batch deployment. -* Use SQL PushBack to generate SQL statements for IBM SPSS Modeler operations that can be “pushed back” to or run in the database to improve performance. SQL Pushback is only supported by: - - - -* Db2 -* SQL Server -* Netezza Performance Server - - - -* If you are creating a job by using the Python client, you must provide the connection name that is referred in the data nodes of the SPSS model stream in the id field, and the data asset href in location.href for input/output data references of the deployment jobs payload. For example, you can construct the job payload like this: - -job_payload_ref = { -client.deployments.ScoringMetaNames.INPUT_DATA_REFERENCES: [{ -""id"": ""DB2Connection"", -""name"": ""drug_ref_input1"", -""type"": ""data_asset"", -""connection"": {}, -""location"": { -""href"": -} -},{ -""id"": ""Db2 WarehouseConn"", -""name"": ""drug_ref_input2"", -""type"": ""data_asset"", -""connection"": {}, -""location"": { -""href"": -} -}], -client.deployments.ScoringMetaNames.OUTPUT_DATA_REFERENCE: { -""type"": ""data_asset"", -""connection"": {}, -""location"": { -" -62BF74E391CFE1696E5218B3DF0926B735A4788F_4,62BF74E391CFE1696E5218B3DF0926B735A4788F,"""href"": -} -} -} - - - -" -62BF74E391CFE1696E5218B3DF0926B735A4788F_5,62BF74E391CFE1696E5218B3DF0926B735A4788F," Using connected data for an SPSS Modeler flow job - -An SPSS Modeler flow can have a number of input and output data nodes. When you connect to a supported database as an input and output data source, the connection details are selected from the input and output data reference, but the input and output table names are selected from the SPSS model stream file. - -For batch deployment of an SPSS model that uses a database connection, make sure that the modeler stream Input and Output nodes are Data Asset nodes. In SPSS Modeler, the Data Asset nodes must be configured with the table names that are used later for job predictions. Set the nodes and table names before you save the model to Watson Machine Learning. When you are configuring the Data Asset nodes, choose the table name from the Connections; choosing a Data Asset that is created in your project is not supported. - -When you are creating the deployment job for an SPSS model, make sure that the types of data sources are the same for input and output. The configured table names from the model stream are passed to the batch deployment and the input/output table names that are provided in the connected data are ignored. - -For batch deployment of an SPSS model that uses a Cloud Object Storage connection, make sure that the SPSS model stream has single input and output data asset nodes. - -" -62BF74E391CFE1696E5218B3DF0926B735A4788F_6,62BF74E391CFE1696E5218B3DF0926B735A4788F," Supported combinations of input and output sources - -You must specify compatible sources for the SPSS Modeler flow input, the batch job input, and the output. If you specify an incompatible combination of types of data sources, you get an error when you try to run the batch job. - -These combinations are supported for batch jobs: - - - - SPSS model stream input/output Batch deployment job input Batch deployment job output - - File Local, managed, or referenced data asset or connection asset (file) Remote data asset or connection asset (file) or name - Database Remote data asset or connection asset (database) Remote data asset or connection asset (database) - - - -" -62BF74E391CFE1696E5218B3DF0926B735A4788F_7,62BF74E391CFE1696E5218B3DF0926B735A4788F," Specifying multiple inputs - -If you are specifying multiple inputs for an SPSS model stream deployment with no schema, specify an ID for each element in input_data_references. - -For more information, see [Using multiple data sources for an SPSS job](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-SPSS-multiple-input.html). - -In this example, when you create the job, provide three input entries with IDs: sample_db2_conn, sample_teradata_conn, and sample_googlequery_conn and select the required connected data for each input. - -{ -""deployment"": { -""href"": ""/v4/deployments/"" -}, -""scoring"": { -""input_data_references"": [{ -""id"": ""sample_db2_conn"", -""name"": ""DB2 connection"", -""type"": ""data_asset"", -""connection"": {}, -""location"": { -""href"": ""/v2/assets/?space_id="" -}, -}, -{ -""id"": ""sample_teradata_conn"", -""name"": ""Teradata connection"", -""type"": ""data_asset"", -""connection"": {}, -""location"": { -""href"": ""/v2/assets/?space_id="" -}, -}, -{ -""id"": ""sample_googlequery_conn"", -""name"": ""Google bigquery connection"", -""type"": ""data_asset"", -""connection"": {}, -""location"": { -""href"": ""/v2/assets/?space_id="" -}, -}], -""output_data_references"": { -""id"": ""sample_db2_conn"", -""type"": ""data_asset"", -""connection"": {}, -""location"": { -" -62BF74E391CFE1696E5218B3DF0926B735A4788F_8,62BF74E391CFE1696E5218B3DF0926B735A4788F,"""href"": ""/v2/assets/?space_id="" -}, -} -} - -Note: The environment variables parameter of deployment jobs is not applicable. - -Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html) -" -7D385692A31E1E88E675AF0B91F98F55797BC02D_0,7D385692A31E1E88E675AF0B91F98F55797BC02D," Batch deployment input details for Tensorflow models - -Follow these rules when you are specifying input details for batch deployments of Tensorflow models. - -Data type summary table: - - - - Data Description - - Type Inline or data references - File formats .zip archive that contains JSON files - - - -" -7D385692A31E1E88E675AF0B91F98F55797BC02D_1,7D385692A31E1E88E675AF0B91F98F55797BC02D," Data sources - -Input or output data references: - - - -* Local or managed assets from the space -* Connected (remote) assets: Cloud Object Storage - - - -If you are specifying input/output data references programmatically: - - - -* Data source reference type depends on the asset type. Refer to the Data source reference types section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - - - -Notes: - - - -* The environment variables parameter of deployment jobs is not applicable. -* For connections of type [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) or [Cloud Object Storage (infrastructure)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html), you must configure Access key and Secret key, also known as [HMAC credentials](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main). - - - -Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html) -" -09897DCF1128D66144D2B165564C228C16CD5EC5_0,09897DCF1128D66144D2B165564C228C16CD5EC5," Deploying foundation model assets - -Deploy foundation model assets to test the assets, put them into production, and monitor them. - -After you save a prompt template as a project asset, you can promote it to a deployment space. A deployment space is used to organize the assets for deployments and to manage access to deployed assets. Use a Pre-production space to test and validate assets, and use a Production space for deploying assets for productive use. - -For details, see [Deploying a prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/prompt-template-deploy.html). - -" -09897DCF1128D66144D2B165564C228C16CD5EC5_1,09897DCF1128D66144D2B165564C228C16CD5EC5," Learn more - - - -* [Tracking prompt templates ](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-track-prompt-temp.html) -* [Evaluating a prompt template in a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt-spaces.html) - - - -Parent topic:[Deploying and managing assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html) -" -F31A520A8C2C1B9C7F80B14EBCD096BB1121D53D_0,F31A520A8C2C1B9C7F80B14EBCD096BB1121D53D," Managing deployment jobs - -A job is a way of running a batch deployment, script, or notebook in Watson Machine Learning. You can choose to run a job manually or on a schedule that you specify. After you create one or more jobs, you can view and manage them from the Jobs tab of your deployment space. - -From the Jobs tab of your space, you can: - - - -* See the list of the jobs in your space -* View the details of each job. You can change the schedule settings of a job and pick a different environment template. -* Monitor job runs -* Delete jobs - - - -See the following sections for various aspects of job management: - - - -* [Creating a job for a batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-jobs.html?context=cdpaas&locale=encreate-jobs-batch) -* [Viewing jobs in a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-jobs.html?context=cdpaas&locale=enviewing-jobs-in-a-space) -* [Managing job metadata retention ](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-jobs.html?context=cdpaas&locale=endelete-jobs) - - - -" -F31A520A8C2C1B9C7F80B14EBCD096BB1121D53D_1,F31A520A8C2C1B9C7F80B14EBCD096BB1121D53D," Creating a job for a batch deployment - -Important: You must have an existing batch deployment to create a batch job. - -To learn how to create a job for a batch deployment, see [Creating jobs in a batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-jobs.html). - -" -F31A520A8C2C1B9C7F80B14EBCD096BB1121D53D_2,F31A520A8C2C1B9C7F80B14EBCD096BB1121D53D," Viewing jobs in a space - -You can view all of the jobs that exist for your deployment space from the Jobs page. You can also delete a job. - -To view the details of a specific job, click the job. From the job's details page, you can do the following: - - - -* View the runs for that job and the status of each run. If a run failed, you can select the run and view the log tail or download the entire log file to help you troubleshoot the run. A failed run might be related to a temporary connection or environment problem. Try running the job again. If the job still fails, you can send the log to Customer Support. -* When a job is running, a progress indicator on the information page displays information about relative progress of the run. You can use the progress indicator to monitor a long run. -* Edit schedule settings or pick another environment template. -* Run the job manually by clicking the run icon from the job action bar. You must deselect the schedule to run the job manually. - - - -" -F31A520A8C2C1B9C7F80B14EBCD096BB1121D53D_3,F31A520A8C2C1B9C7F80B14EBCD096BB1121D53D," Managing job metadata retention - -The Watson Machine Learning plan that is associated with your IBM Cloud account sets limits on the number of running and stored deployments that you can create. If you exceed your limit, you cannot create new deployments until you delete existing deployments or upgrade your plan. For more information, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html). - -" -F31A520A8C2C1B9C7F80B14EBCD096BB1121D53D_4,F31A520A8C2C1B9C7F80B14EBCD096BB1121D53D," Managing metadata retention and deletion programmatically - -If you are managing a job programmatically by using the Python client or REST API, you can retrieve metadata from the deployment endpoint by using the GET method during the 30 days. - -To keep the metadata for more or less than 30 days, change the query parameter from the default of retention=30 for the POST method to override the default and preserve the metadata. - -Note:Changing the value to retention=-1 cancels the auto-delete and preserves the metadata. - -To delete a job programmatically, specify the query parameter hard_delete=true for the Watson Machine Learning DELETE method to completely remove the job metadata. - -The following example shows how to use DELETE method: - -DELETE /ml/v4/deployment_jobs/{JobsID} - -" -F31A520A8C2C1B9C7F80B14EBCD096BB1121D53D_5,F31A520A8C2C1B9C7F80B14EBCD096BB1121D53D," Learn from samples - -Refer to [Machine learning samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html) for links to sample notebooks that demonstrate creating batch deployments and jobs by using the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) and Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/). - -Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) -" -F4A482326D45DC729EB8D1A6735CEFACD7AE5578_0,F4A482326D45DC729EB8D1A6735CEFACD7AE5578," Creating online deployments in Watson Machine Learning - -Create an online (also called Web service) deployment to load a model or Python code when the deployment is created to generate predictions online, in real time. For example, if you create a classification model to test whether a new customer is likely to participate in a sales promotion, you can create an online deployment for the model. Then, you can enter the new customer data to get an immediate prediction. - -" -F4A482326D45DC729EB8D1A6735CEFACD7AE5578_1,F4A482326D45DC729EB8D1A6735CEFACD7AE5578," Supported frameworks - -Online deployment is supported for these frameworks: - - - -* PMML -* Python Function -* PyTorch-Onnx -* Tensorflow -* Scikit-Learn -* Spark MLlib -* SPSS -* XGBoost - - - -You can create an online deployment [from the user interface](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html?context=cdpaas&locale=enonline-interface) or [programmatically](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html?context=cdpaas&locale=enonline-programmatically). - -To send payload data to an asset that is deployed online, you must know the endpoint URL of the deployment. Examples include, classification of data, or making predictions from the data. For more information, see [Retrieving the deployment endpoint](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html?context=cdpaas&locale=enget-online-endpoint). - -Additionally, you can: - - - -* [Test your online deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html?context=cdpaas&locale=entest-online-deployment) -* [Access the deployment details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html?context=cdpaas&locale=enaccess-online-details) - - - -" -F4A482326D45DC729EB8D1A6735CEFACD7AE5578_2,F4A482326D45DC729EB8D1A6735CEFACD7AE5578," Creating an online deployment from the User Interface - - - -1. From the deployment space, click the name of the asset that you want to deploy. The details page opens. -2. Click New deployment. -3. Choose Online as the deployment type. -4. Provide a name and an optional description for the deployment. -5. If you want to specify a name to be used instead of deployment ID, use the Serving name field. - - - -* The name must be validated to be unique per IBM cloud region (all names in a specific region share a global namespace). -* The name must contain only these characters: [a-z,0-9,_] and must be a maximum 36 characters long. -* Serving name works only as part of the prediction URL. In some cases, you must still use the deployment ID. - - - -6. Click Create to create the deployment. - - - -" -F4A482326D45DC729EB8D1A6735CEFACD7AE5578_3,F4A482326D45DC729EB8D1A6735CEFACD7AE5578," Creating an online deployment programmatically - -Refer to [Machine learning samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html) for links to sample notebooks. These notebooks demonstrate creating online deployments that use the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) and Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/). - -" -F4A482326D45DC729EB8D1A6735CEFACD7AE5578_4,F4A482326D45DC729EB8D1A6735CEFACD7AE5578," Retrieving the online deployment endpoint - -You can find the endpoint URL of a deployment in these ways: - - - -* From the Deployments tab of your space, click your deployment name. A page with deployment details opens. You can find the endpoint there. -* Using the Watson Machine Learning Python client: - - - -1. List the deployments by calling the [Python client method](https://ibm.github.io/watson-machine-learning-sdk/core_api.htmlclient.Deployments.list)client.deployments.list() -2. Find the row with your deployment. The deployment endpoint URL is listed in the url column. - - - - - -Notes: - - - -* If you added Serving name to the deployment, two alternative endpoint URLs show on the screen; one containing the deployment ID, and the other containing your serving name. You can use either one of these URLs with your deployment. -* The API Reference tab also shows code snippets in various programming languages that illustrate how to access the deployment. - - - -For more information, see [Endpoint URLs](https://cloud.ibm.com/apidocs/machine-learningendpoint-url). - -" -F4A482326D45DC729EB8D1A6735CEFACD7AE5578_5,F4A482326D45DC729EB8D1A6735CEFACD7AE5578," Testing your online deployment - -From the Deployments tab of your space, click your deployment name. A page with deployment details opens. The Test tab provides a place where you can enter data and get a prediction back from the deployed model. If your model has a defined schema, a form shows on screen. In the form, you can enter data in one of these ways: - - - -* Enter data directly in the form -* Download a CSV template, enter values, and upload the input data -* Upload a file that contains input data from your local file system or from the space -* Change to the JSON tab and enter your input data as JSON code Regardless of method, the input data must match the schema of the model. Submit the input data and get a score, or prediction, back. - - - -" -F4A482326D45DC729EB8D1A6735CEFACD7AE5578_6,F4A482326D45DC729EB8D1A6735CEFACD7AE5578," Sample deployment code - -When you submit JSON code as the payload, or input data, for a deployment, your input data must match the schema of the model. The 'fields' must match the column headers for the data, and the 'values' must contain the data, in the same order. Use this format: - -{""input_data"":[{ -""fields"": , , ...], -""values"": , , ...]] -}]} - -Refer to this example: - -{""input_data"":[{ -""fields"": ""PassengerId"",""Pclass"",""Name"",""Sex"",""Age"",""SibSp"",""Parch"",""Ticket"",""Fare"",""Cabin"",""Embarked""], -""values"": 1,3,""Braund, Mr. Owen Harris"",0,22,1,0,""A/5 21171"",7.25,null,""S""]] -}]} - -Notes: - - - -* All strings are enclosed in double quotation marks. The Python notation for dictionaries looks similar, but Python strings in single quotation marks are not accepted in the JSON data. -* Missing values can be indicated with null. -* You can specify a hardware specification for an online deployment, for example if you are [scaling a deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-scaling.html). - - - -" -F4A482326D45DC729EB8D1A6735CEFACD7AE5578_7,F4A482326D45DC729EB8D1A6735CEFACD7AE5578," Preparing payload that matches the schema of an existing model - -Refer to this sample code: - -model_details = client.repository.get_details("""") retrieves details and includes schema -columns_in_schema = [] -for i in range(0, len(model_details['entity']['input'].get('fields'))): -columns_in_schema.append(model_details['entity']['input'].get('fields')[i]) - -X = X[columns_in_schema] where X is a pandas dataframe that contains values to be scored -(...) -scoring_values = X.values.tolist() -array_of_input_fields = X.columns.tolist() -payload_scoring = {""input_data"": [{""fields"": array_of_input_fields],""values"": scoring_values}]} - -" -F4A482326D45DC729EB8D1A6735CEFACD7AE5578_8,F4A482326D45DC729EB8D1A6735CEFACD7AE5578," Accessing the online deployment details - -To access your online deployment details: From the Deployments tab of your space, click your deployment name and then click the Deployment details tab. The Deployment details tab contains specific information that is related to the currently opened online deployment and allows for adding a model to the model inventory, to enable activity tracking and model comparison. - -" -F4A482326D45DC729EB8D1A6735CEFACD7AE5578_9,F4A482326D45DC729EB8D1A6735CEFACD7AE5578," Additional information - -Refer to [Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html) for details on managing deployment jobs, and updating, scaling, or deleting an online deployment. - -Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) -" -32AFAFA1C90D43BA1D3330A64491039F63D9FEB5_0,32AFAFA1C90D43BA1D3330A64491039F63D9FEB5," Deploying scripts in Watson Machine Learning - -When a script is copied to a deployment space, you can deploy it for use. Supported script types are Python scripts. [Batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html) is the only supported deployment type for a script. - - - -* When the script is promoted from a project, your software specification is included. -* When you create a deployment job for a script, you must manually override the default environment with the correct environment for your script. For more information, see [Creating a deployment job](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-jobs.html) - - - -" -32AFAFA1C90D43BA1D3330A64491039F63D9FEB5_1,32AFAFA1C90D43BA1D3330A64491039F63D9FEB5," Learn more - - - -* To learn more about supported input and output types and setting environment variables, see [Batch deployment details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html). -* To learn more about software specifications, see [Software specifications and hardware specifications for deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html). - - - -Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) -" -2B6DC49F4AFDE44DD385AE09CAAB02A3F1DB4259_0,2B6DC49F4AFDE44DD385AE09CAAB02A3F1DB4259," Choosing compute resources for running tools in projects - -You use compute resources in projects when you run jobs and most tools. Depending on the tool, you might have a choice of compute resources for the runtime for the tool. - -Compute resources are known as either environment templates or hardware and software specifications. In general, compute resources with larger hardware configurations incur larger usage costs. - -These tools have multiple choices for configuring runtimes that you can choose from: - - - -* [Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html) -* [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html) -* [SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-envs.html) -* [AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-autoai.html) -* [Decision Optimization experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-decisionopt.html) -* [RStudio IDE](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html) -* [Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/synthetic-envs.html) -* [Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-fm-tuning.html) - - - -Prompt Lab does not consume compute resources. Prompt Lab usage is measured by the number of processed tokens. - -" -2B6DC49F4AFDE44DD385AE09CAAB02A3F1DB4259_1,2B6DC49F4AFDE44DD385AE09CAAB02A3F1DB4259," Learn more - - - -* [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) - - - -Parent topic:[Projects ](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html) -" -D83BAAE9C79E5DF9CA904AB1886AC4826447B495_0,D83BAAE9C79E5DF9CA904AB1886AC4826447B495," Examples of environment template customizations - -You can follow examples of how to add custom libraries through conda or pip using the provided templates for Python and R when you create an environment template. - -You can use mamba in place of conda in the following examples with conda. Remember to select the checkbox to install from mamba if you add channels or packages from mamba to the existing environment template. - -Examples exist for: - - - -* [Adding conda packages](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html?context=cdpaas&locale=enadd-conda-package) -* [Adding pip packages](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html?context=cdpaas&locale=enadd-pip-package) -* [Combining conda and pip packages](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html?context=cdpaas&locale=encombine-conda-pip) -* [Adding complex packages with internal dependencies](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html?context=cdpaas&locale=encomplex-packages) -* [Adding conda packages for R notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html?context=cdpaas&locale=enconda-in-r) -* [Setting environment variables](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html?context=cdpaas&locale=enset-vars) - - - -Hints and tips: - - - -" -D83BAAE9C79E5DF9CA904AB1886AC4826447B495_1,D83BAAE9C79E5DF9CA904AB1886AC4826447B495,"* [Best practices](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html?context=cdpaas&locale=enbest-practices) - - - -" -D83BAAE9C79E5DF9CA904AB1886AC4826447B495_2,D83BAAE9C79E5DF9CA904AB1886AC4826447B495," Adding conda packages - -To get latest versions of pandas-profiling: - -dependencies: -- pandas-profiling - -This is equivalent to running conda install pandas-profiling in a notebook. - -" -D83BAAE9C79E5DF9CA904AB1886AC4826447B495_3,D83BAAE9C79E5DF9CA904AB1886AC4826447B495," Adding pip packages - -You can also customize an environment using pip if a particular package is not available in conda channels: - -dependencies: -- pip: -- ibm-watson-machine-learning - -This is equivalent to running pip install ibm-watson-machine-learning in a notebook. - -The customization will actually do more than just install the specified pip package. The default behavior of conda is to also look for a new version of pip itself and then install it. Checking all the implicit dependencies in conda often takes several minutes and also gigabytes of memory. The following customization will shortcut the installation of pip: - -channels: -- empty -- nodefaults - -dependencies: -- pip: -- ibm-watson-machine-learning - -The conda channel empty does not provide any packages. There is no pip package in particular. conda won't try to install pip and will use the already pre-installed version instead. Note that the keyword nodefaults in the list of channels needs at least one other channel in the list. Otherwise conda will silently ignore the keyword and use the default channels. - -" -D83BAAE9C79E5DF9CA904AB1886AC4826447B495_4,D83BAAE9C79E5DF9CA904AB1886AC4826447B495," Combining conda and pip packages - -You can list multiple packages with one package per line. A single customization can have both conda packages and pip packages. - -dependencies: -- pandas-profiling -- scikit-learn=0.20 -- pip: -- watson-machine-learning-client-V4 -- sklearn-pandas==1.8.0 - -Note that the required template notation is sensitive to leading spaces. Each item in the list of conda packages must have two leading spaces. Each item in the list of pip packages must have four leading spaces. The version of a conda package must be specified using a single equals symbol (=), while the version of a pip package must be added using two equals symbols (==). - -" -D83BAAE9C79E5DF9CA904AB1886AC4826447B495_5,D83BAAE9C79E5DF9CA904AB1886AC4826447B495," Adding complex packages with internal dependencies - -When you add many packages or a complex package with many internal dependencies, the conda installation might take long or might even stop without you seeing any error message. To avoid this from happening: - - - -* Specify the versions of the packages you want to add. This reduces the search space for conda to resolve dependencies. -* Increase the memory size of the environment. -* Use a specific channel instead of the default conda channels that are defined in the .condarc file. This avoids running lengthy searches through big channels. - - - -Example of a customization that doesn't use the default conda channels: - - get latest version of the prophet package from the conda-forge channel -channels: -- conda-forge -- nodefaults - -dependencies: -- prophet - -This customization corresponds to the following command in a notebook: - -!conda install -c conda-forge --override-channels prophet -y - -" -D83BAAE9C79E5DF9CA904AB1886AC4826447B495_6,D83BAAE9C79E5DF9CA904AB1886AC4826447B495," Adding conda packages for R notebooks - -The following example shows you how to create a customization that adds conda packages to use in an R notebook: - -channels: -- defaults - -dependencies: -- r-plotly - -This customization corresponds to the following command in a notebook: - -print(system(""conda install r-plotly"", intern=TRUE)) - -The names of R packages in conda generally start with the prefix r-. If you just use plotly in your customization, the installation would succeed but the Python package would be installed instead of the R package. If you then try to use the package in your R code as in library(plotly), this would return an error. - -" -D83BAAE9C79E5DF9CA904AB1886AC4826447B495_7,D83BAAE9C79E5DF9CA904AB1886AC4826447B495," Setting environment variables - -You can set environment variables in your environment by adding a variables section to the software customization template as shown in the following example: - -variables: -my_var: my_value -HTTP_PROXY: https://myproxy:3128 -HTTPS_PROXY: https://myproxy:3128 -NO_PROXY: cluster.local - -The example also shows that you can use the variables section to set a proxy server for an environment. - -Limitation: You cannot override existing environment variables, for example LD_LIBRARY_PATH, using this approach. - -" -D83BAAE9C79E5DF9CA904AB1886AC4826447B495_8,D83BAAE9C79E5DF9CA904AB1886AC4826447B495," Best practices - -To avoid problems that can arise finding packages or resolving conflicting dependencies, start by installing the packages you need manually through a notebook in a test environment. This enables you to check interactively if packages can be installed without errors. After you have verified that the packages were all correctly installed, create a customization for your development or production environment and add the packages to the customization template. - -Parent topic:[Customizing environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html) -" -A7845D8C3E419CEDD06E8C447ADF41E6E3D860C8_0,A7845D8C3E419CEDD06E8C447ADF41E6E3D860C8," IBM Federated Learning - -Federated Learning provides the tools for multiple remote parties to collaboratively train a single machine learning model without sharing data. Each party trains a local model with a private data set. Only the local model is sent to the aggregator to improve the quality of the global model that benefits all parties. - -" -A7845D8C3E419CEDD06E8C447ADF41E6E3D860C8_1,A7845D8C3E419CEDD06E8C447ADF41E6E3D860C8,"Data format -Any data format including but not limited to CSV files, JSON files, and databases for PostgreSQL. - -" -A7845D8C3E419CEDD06E8C447ADF41E6E3D860C8_2,A7845D8C3E419CEDD06E8C447ADF41E6E3D860C8," How Federated Learning works - -Watch this overview video to learn the basic concepts and elements of a Federated Learning experiment. Learn how you can apply the tools for your company's analytics enhancements. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -An example for using Federated Learning is when an aviation alliance wants to model how a global pandemic impacts airline delays. Each participating party in the federation can use their data to train a common model without ever moving or sharing their data. They can do so either in application silos or any other scenario where regulatory or pragmatic considerations prevent users from sharing data. The resulting model benefits each member of the alliance with improved business insights while lowering risk from data migration and privacy issues. - -As the following graphic illustrates, parties can be geographically distributed and run on different platforms. - -![Diagram of a global Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fl-overview.svg) - -" -A7845D8C3E419CEDD06E8C447ADF41E6E3D860C8_3,A7845D8C3E419CEDD06E8C447ADF41E6E3D860C8," Why use IBM Federated Learning - -IBM Federated Learning has a wide range of applications across many enterprise industries. Federated Learning: - - - -* Enables sites with large volumes of data to be collected, cleaned, and trained on an enterprise scale without migration. -* Accommodates for the differences in data format, quality, and constraints. -* Complies with data privacy and security while training models with different data sources. - - - -" -A7845D8C3E419CEDD06E8C447ADF41E6E3D860C8_4,A7845D8C3E419CEDD06E8C447ADF41E6E3D860C8," Learn more - - - -* [Federated Learning tutorials and samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html) - - - -* [Federated Learning Tensorflow tutorial for UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html) -* [Federated Learning Tensorflow samples for API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-samples.html) -* [Federated Learning XGBoost tutorial for UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html) -* [Federated Learning XGBoost sample for API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-samples.html) -* [Federated Learning homomorphic encryption sample for API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-fhe-sample.html) - - - -* [Getting started](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-get-started.html) - - - -* [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html) -* [Federated Learning architecture](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-arch.html) - - - -* [Frameworks, fusion methods, and Python versions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-frames.html) - - - -" -A7845D8C3E419CEDD06E8C447ADF41E6E3D860C8_5,A7845D8C3E419CEDD06E8C447ADF41E6E3D860C8,"* [Hyperparameter definitions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-param.html) - - - -* [Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html) - - - -* [Set up your system](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-setup.html) -* [Creating the initial model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html) -* [Create the data handler](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-handler.html) -* [Starting the aggregator (Admin)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-agg.html) -* [Connecting to the aggregator (Party)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-conn.html) -* [Monitoring and saving the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-mon.html) - - - -* [Applying encryption](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-homo.html) -* [Limitations and troubleshooting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-troubleshoot.html) - - - -Parent topic:[Analyzing data and building models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html) -" -CEE9EF1F47611F6A9C00BB76C91B94B7DA2A62FF_0,CEE9EF1F47611F6A9C00BB76C91B94B7DA2A62FF," Starting the aggregator (Admin) - -An administrator completes the following steps to start the experiment and train the global model. - - - -* [Step 1: Set up the Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-agg.html?context=cdpaas&locale=enfl-setup) -* [Step 2: Create the remote training system](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-agg.html?context=cdpaas&locale=enrts) -* [Step 3: Start the experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-agg.html?context=cdpaas&locale=enstart) - - - -" -CEE9EF1F47611F6A9C00BB76C91B94B7DA2A62FF_1,CEE9EF1F47611F6A9C00BB76C91B94B7DA2A62FF," Step 1: Set up the Federated Learning experiment - -Set up a Federated Learning experiment from a project. - - - -1. From the project, click New asset > Federated Learning. -2. Name the experiment. -Optional: Add an optional description and tags. -3. [Add new collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html) to the project. -4. In the Configure tab, choose the training framework and model type. See [Frameworks, fusion methods, and Python versions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-frames.html) for a table listing supported frameworks, fusion methods, and their attributes. Optional: You can choose to enable the homomorphic encryption feature. For more details, see [Applying encryption](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-homo.html). -5. Click Select under Model specification and upload the .zip file that contains your initial model. -6. In the Define hyperparameters tab, you can choose hyperparameter options available for your framework and fusion method to tune your model. - - - -" -CEE9EF1F47611F6A9C00BB76C91B94B7DA2A62FF_2,CEE9EF1F47611F6A9C00BB76C91B94B7DA2A62FF," Step 2: Create the Remote Training System - -Create Remote Training Systems (RTS) that authenticates the participating parties of the experiment. - - - -1. At Select remote training system, click Add new systems. -![Screenshot of Remote Training System UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fl-rts.png) -2. Configure the RTS. - -| Field name | Definition | Example | | -- | -- | -- | | Name | A name to identify this RTS instance. | Canada Bank Model: Federated Learning Experiment | | Description -(Optional) | Description of the training system. | This Remote Training System is for a -Federated Learning experiment to train a model for -predicting credit card fraud with data from Canadian banks. | | System administrator -(Optional) | Specify a user with read-only access to this RTS. They can see system details, logs, and scripts, but not necessarily participate in the experiment. They should be contacted if issues occur when running the experiment. | Admin (admin@example.com) | | Allowed identities | List project collaborators who can participate in the Federated Learning experiment training. Multiple collaborators can be registered in this RTS, but only one can participate in the experiment. Multiple RTS's are needed to authenticate all participating collaborators. | John Doe (john.doe@example.com) -Jane Doe (jane.doe@example.com) | | Allowed IP addresses -(Optional) | Restrict individual parties from connecting to Federated Learning outside of a specified IP address. - -1. To configure this, click Configure. -2. For Allowed identities, select the user to place IP constraints on. -3. For Allowed IP addresses for user, enter a comma seperated list of IPs and or CIDRs that can connect to the Remote Training System. Note: Both IPv4 and IPv6 are supported. | John -" -CEE9EF1F47611F6A9C00BB76C91B94B7DA2A62FF_3,CEE9EF1F47611F6A9C00BB76C91B94B7DA2A62FF,"1234:5678:90ab:cdef:1234:5678:90ab:cdef: (John’s office IP), 123.123.123.123 (John’s home IP), 0987.6543.21ab.cdef (Remote VM IP) -Jane -123.123.123.0/16 (Jane's home IP), 0987.6543.21ab.cdef (Remote machine IP) | | Tags -(Optional) | Associate keywords with the Remote Training System to make it easier to find. | Canada -Bank -Model -Credit -Fraud | - - - - - -1. Click Add to save the RTS instance. If you are creating multiple remote training instances, you can repeat these steps. -2. Click Add systems to save the RTS as an asset in the project. - -Tip: You can use an RTS definition for future experiments. For example, in the Select remote training system tab, you can select any Remote Training System that you previously created. -3. Each RTS can only authenticate one of its allowed party identities. Create an RTS for each new participating part(ies). - - - -" -CEE9EF1F47611F6A9C00BB76C91B94B7DA2A62FF_4,CEE9EF1F47611F6A9C00BB76C91B94B7DA2A62FF," Step 3: Start the experiment - -Start the Federated Learning aggregator to initiate training of the global model. - - - -1. Click Review and create to view the settings of your current Federated Learning experiment. Then, click Create. ![Screenshot of Review and Create Experiment UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fl-rev.png) -2. The Federated Learning experiment will be in Pending status while the aggregator is starting. When the aggregator starts, the status will change to Setup – Waiting for remote systems. - - - -Parent topic:[Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html) -" -4B48EF3D089F3142B1ED604A32873217F89E052F_0,4B48EF3D089F3142B1ED604A32873217F89E052F," Federated Learning architecture - -IBM Federated Learning has two main components: the aggregator and the remote training parties.  - -" -4B48EF3D089F3142B1ED604A32873217F89E052F_1,4B48EF3D089F3142B1ED604A32873217F89E052F," Aggregator - -The aggregator is a model fusion processor. The admin manages the aggregator. - -The aggregator runs the following tasks: - - - -* Runs as a platform service in regions Dallas, Frankfurt, London, or Tokyo. -* Starts with a Federated Learning experiment. - - - -" -4B48EF3D089F3142B1ED604A32873217F89E052F_2,4B48EF3D089F3142B1ED604A32873217F89E052F," Party - -A party is a user that provides model input to the Federated Learning experiment aggregator. The party can be: - - - -* on any system that can run the Watson Machine Learning Python client and compatible with Watson Machine Learning frameworks. - -Note:The system does not have to be specifically IBM watsonx. For a list of system requirements, see [Set up your system](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-setup.html). -* running on the system in any geographical location. You are recommended to locate each party in the same region where the data is to avoid data extraction out of different regions. - - - -This illustration shows the architecture of IBM Federated Learning. - -A Remote Training System is used to authenticate the party's identity to the aggregator during training. - -![Illustration of the Federated Learning architecture](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fl-arch.svg) - -" -4B48EF3D089F3142B1ED604A32873217F89E052F_3,4B48EF3D089F3142B1ED604A32873217F89E052F," User workflow - - - -1. The data scientist: - - - -1. Identifies the data sources. -2. Creates an initial ""untrained"" model. -3. Creates a data handler file. -These tasks might overlap with a training party entity. - - - -2. A party connects to the aggregator on their system, which can be remote. -3. An admin controls the Federated Learning experiment by: - - - -1. Configuring the experiment to accommodate remote parties. -2. Starting the aggregator. - - - - - -This illustration shows the actions that are associated with each role in the Federated Learning process. - -![Illustration of the Federated Learning group workflow process](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fl-workflow.svg) - -Parent topic:[Get started](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-get-started.html) -" -924550083A3A6ACD177024DF788C02D236874893_0,924550083A3A6ACD177024DF788C02D236874893," Connecting to the aggregator (Party) - -Each party follows these steps to connect to a started aggregator. - - - -1. Open the project and click the Federated Learning experiment. -2. Click View setup information and click the download icon to download the party connector script. ![Screen capture of View Setup Information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fl-view-setup-info.png) -3. Each party must configure the party connector script and provide valid credentials to run the script. This is what a sample completed party connector script looks like: - -from ibm_watson_machine_learning import APIClient - -wml_credentials = { -""url"": ""https://us-south.ml.cloud.ibm.com"", -""apikey"": """" -} - -wml_client = APIClient(wml_credentials) - -wml_client.set.default_project(""XXX-XXX-XXX-XXX-XXX"") - -party_metadata = { - -wml_client.remote_training_systems.ConfigurationMetaNames.DATA_HANDLER: { - -""name"": ""MnistSklearnDataHandler"", - -""path"": ""example.mnist_sklearn_data_handler"", - -""info"": { - -""npz_file"":""./example_data/example_data.npz"" - -} - -party = wml_client.remote_training_systems.create_party(""XXX-XXX-XXX-XXX-XXX"", party_metadata) - -party.monitor_logs() -party.run(aggregator_id=""XXX-XXX-XXX-XXX-XXX"", asynchronous=False) - -Parameters: - - - -* api_key: -Your IAM API key. To create a new API key, go to the [IBM Cloud website](https://cloud.ibm.com/), and click Create an IBM Cloud Pak for Data API key under Manage > Access(IAM) > API keys. - -" -924550083A3A6ACD177024DF788C02D236874893_1,924550083A3A6ACD177024DF788C02D236874893,"Optional: If you're reusing a script from a different project, you can copy the updated project_id, aggregator_id and experiment_id from the setup information window and copy them into the script. - - - -4. Install Watson Machine Learning with the latest Federated Learning package if you have not yet done so: - - - -* If you are using M-series on a Mac, install the latest package with the following script: - - ----------------------------------------------------------------------------------------- - (C) Copyright IBM Corp. 2023. - https://opensource.org/licenses/BSD-3-Clause - ----------------------------------------------------------------------------------------- - - - Script to create a conda environment and install ibm-watson-machine-learning with - the dependencies required for Federated Learning on MacOS. - The name of the conda environment to be created is passed as the first argument. - - Note: This script requires miniforge to be installed for conda. - - -usage="". install_fl_rt22.2_macos.sh conda_env_name"" - -arch=$(uname -m) -os=$(uname -s) - -if (($ < 1)) -then -echo $usage -exit -fi - -ENAME=$1 - -conda create -y -n ${ENAME} python=3.10 -conda activate ${ENAME} -pip install ibm-watson-machine-learning - -if [ ""$os"" == ""Darwin"" -a ""$arch"" == ""arm64"" ] -then -conda install -y -c apple tensorflow-deps -fi - -python - <_.py - - - -" -924550083A3A6ACD177024DF788C02D236874893_3,924550083A3A6ACD177024DF788C02D236874893," More resources - -[Federated Learning library functions](https://ibm.github.io/watson-machine-learning-sdk/) - -Parent topic:[Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html) -" -D579ABA442C4652BAC088173107ECFEBBF4D8290_0,D579ABA442C4652BAC088173107ECFEBBF4D8290," Federated Learning tutorials and samples - -Select the tutorial that fits your needs. To facilitate the learning process of Federated Learning, one tutorial with a UI-based approach and one tutorial with an API calling approach for multiple frameworks and data sets is provided. The results of either are the same. All UI-based tutorials demonstrate how to create the Federated Learning experiment in a low-code environment. All API-based tutorials use two sample notebooks with Python scripts to demonstrate how to build and train the experiment. - -" -D579ABA442C4652BAC088173107ECFEBBF4D8290_1,D579ABA442C4652BAC088173107ECFEBBF4D8290," Tensorflow - -These hands-on tutorials teach you how to create a Federated Learning experiment step by step. These tutorials use the MNIST data set to demonstrate how different parties can contribute data to train a model to recognize handwriting. You can choose between a UI-based or API version of the tutorial. - - - -* [Federated Learning Tensorflow tutorial for UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html) - -* [Federated Learning Tensorflow samples for API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-samples.html) - - - -" -D579ABA442C4652BAC088173107ECFEBBF4D8290_2,D579ABA442C4652BAC088173107ECFEBBF4D8290," XGBoost - -This is a tutorial for Federated Learning that teaches you how to create an experiment step by step with an income in the XGBoost framework. The tutorial demonstrates how different parties can contribute data to train a model about adult incomes. - - - -* [Federated Learning XGBoost tutorial for UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html) - -* [Federated Learning XGBoost sample for API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-samples.html) - - - -" -D579ABA442C4652BAC088173107ECFEBBF4D8290_3,D579ABA442C4652BAC088173107ECFEBBF4D8290," Homomorphic encryption - -This is a tutorial for Federated Learning that teaches you how to use the advanced method of homomorphic encryption step by step. - - - -* [Federated Learning homomorphic encryption sample for API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-fhe-sample.html) - - - -Parent topic:[IBM Federated learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html) -" -CBDD718BCEE7B1FFDE95191BE1749D57B9A1A60D_0,CBDD718BCEE7B1FFDE95191BE1749D57B9A1A60D," Federated Learning homomorphic encryption sample for API - -Download and review sample files that show how to run a Federated Learning experiment with Fully Homomorphic Encryption (FHE). - -" -CBDD718BCEE7B1FFDE95191BE1749D57B9A1A60D_1,CBDD718BCEE7B1FFDE95191BE1749D57B9A1A60D," Homomorphic encryption - -FHE is an advanced, optional method to provide additional security and privacy for your data by encrypting data sent between parties and the aggregator. This method still creates a computational result that is the same as if the computations were done on unencrypted data. For more details on applying homomorphic encryption in Federated Learning, see [Applying encryption](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-homo.html). - -" -CBDD718BCEE7B1FFDE95191BE1749D57B9A1A60D_2,CBDD718BCEE7B1FFDE95191BE1749D57B9A1A60D," Download the Federated Learning sample files - -Download the following notebooks. - -[Federated Learning FHE Demo](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/aa449d3939b73847c502bd7822d0949a) - -Parent topic:[Federated Learning tutorial and samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html) -" -1D1783967CBF46A0B75539BADBAA1D601BC9F412_0,1D1783967CBF46A0B75539BADBAA1D601BC9F412," Frameworks, fusion methods, and Python versions - -These are the available machine learning model frameworks and model fusion methods for the Federated Learning model. The software spec and frameworks are also compatible with specific Python versions. - -" -1D1783967CBF46A0B75539BADBAA1D601BC9F412_1,1D1783967CBF46A0B75539BADBAA1D601BC9F412," Frameworks and fusion methods - -This table lists supported software frameworks for building Federated Learning models. For each framework you can see the supported model types, fusion methods, and hyperparameter options. - - - -Table 1. Frameworks and fusion methods - - Frameworks Model Type Fusion Method Description Hyperparameters - - TensorFlow
Used to build neural networks.
See [Save the Tensorflow model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.htmltf-config). Any Simple Avg Simplest aggregation that is used as a baseline where all parties' model updates are equally weighted. - Rounds
- Termination predicate (Optional)
- Quorum (Optional)
- Max Timeout (Optional) - Weighted Avg Weights the average of updates based on the number of each party sample. Use with training data sets of widely differing sizes. - Rounds
- Termination predicate (Optional)
- Quorum (Optional)
- Max Timeout (Optional) - Scikit-learn
Used for predictive data analysis.
See [Save the Scikit-learn model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.htmlsklearn-config). Classification Simple Avg Simplest aggregation that is used as a baseline where all parties' model updates are equally weighted. - Rounds
- Termination predicate (Optional) - Weighted Avg Weights the average of updates based on the number of each party sample. Use with training data sets of widely differing sizes. - Rounds
- Termination predicate (Optional) - Regression Simple Avg Simplest aggregation that is used as a baseline where all parties' model updates are equally weighted. - Rounds - Weighted Avg Weights the average of updates based on the number of each party sample. Use with training data sets of widely differing sizes. - Rounds -" -1D1783967CBF46A0B75539BADBAA1D601BC9F412_2,1D1783967CBF46A0B75539BADBAA1D601BC9F412," XGBoost XGBoost Classification Use to build classification models that use XGBoost. - Learning rate
- Loss
- Rounds
- Number of classes - XGBoost Regression Use to build regression models that use XGBoost. - Learning rate
- Rounds
- Loss - K-Means/SPAHM Used to train KMeans (unsupervised learning) models when parties have heterogeneous data sets. - Max Iter
- N cluster - Pytorch
Used for training neural network models.
See [Save the Pytorch model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.htmlpytorch). Any Simple Avg Simplest aggregation that is used as a baseline where all parties' model updates are equally weighted. - Rounds
- Epochs
- Quorum (Optional)
- Max Timeout (Optional) - Neural Networks Probabilistic Federated Neural Matching (PFNM) Communication-efficient method for fully connected neural networks when parties have heterogeneous data sets. - Rounds
- Termination accuracy (Optional)
- Epochs
- sigma
- sigma0
- gamma
- iters - - - -" -1D1783967CBF46A0B75539BADBAA1D601BC9F412_3,1D1783967CBF46A0B75539BADBAA1D601BC9F412," Software specifications and Python version by framework - -This table lists the software spec and Python versions available for each framework. - - - -Software specifications and Python version by framework - - Watson Studio frameworks Python version Software Spec Python Client Extras Framework package - - scikit-learn 3.10 runtime-22.2-py3.10 fl-rt22.2-py3.10 scikit-learn 1.1.1 - Tensorflow 3.10 runtime-22.2-py3.10 fl-rt22.2-py3.10 tensorflow 2.9.2 - PyTorch 3.10 runtime-22.2-py3.10 fl-rt22.2-py3.10 torch 1.12.1 - - - -" -1D1783967CBF46A0B75539BADBAA1D601BC9F412_4,1D1783967CBF46A0B75539BADBAA1D601BC9F412," Learn more - -[Hyperparameter definitions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-param.html) - -Parent topic:[IBM Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html) -" -ADEB3C4BA4949F2C87919D5493B71B67028B76EE_0,ADEB3C4BA4949F2C87919D5493B71B67028B76EE," Get started - -Federated Learning is appropriate for any situation where different entities from different geographical locations or Cloud providers want to train an analytical model without sharing their data. - -To get started with Federated Learning, choose from these options: - - - -* Familiarize yourself with the key concepts and [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html). -* Review the [architecture](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-arch.html) for creating a Federated Learning experiment. -* Follow a tutorial for step-by-step instructions for creating a [Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html) or review samples. - - - -" -ADEB3C4BA4949F2C87919D5493B71B67028B76EE_1,ADEB3C4BA4949F2C87919D5493B71B67028B76EE," Learn more - - - -* [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html) - -* [Federated Learning architecture](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-arch.html) - - - -Parent topic:[IBM Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html) -" -51426DCF985B97AF6172727AFCF353A481591560_0,51426DCF985B97AF6172727AFCF353A481591560," Create the data handler - -Each party in a Federated Learning experiment must get a data handler to process their data. You or a data scientist must create the data handler. A data handler is a Python class that loads and transforms data so that all data for the experiment is in a consistent format. - -" -51426DCF985B97AF6172727AFCF353A481591560_1,51426DCF985B97AF6172727AFCF353A481591560," About the data handler class - -The data handler performs the following functions: - - - -* Accesses the data that is required to train the model. For example, reads data from a CSV file into a Pandas data frame. -* Pre-processes the data so data is in a consistent format across all parties. Some example cases are as follows: - - - -* The Date column might be stored as a time epoch or timestamp. -* The Country column might be encoded or abbreviated. - - - -* The data handler ensures that the data formatting is in agreement. - - - -* Optional: feature engineer as needed. - - - - - -The following illustration shows how a data handler is used to process data and make it consumable by the experiment: - -![A use case of the data handler unifying data formats](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fl-data-handler.svg) - -" -51426DCF985B97AF6172727AFCF353A481591560_2,51426DCF985B97AF6172727AFCF353A481591560," Data handler template - -A general data handler template is as follows: - - your import statements - -from ibmfl.data.data_handler import DataHandler - -class MyDataHandler(DataHandler): -"""""" -Data handler for your dataset. -"""""" -def __init__(self, data_config=None): -super().__init__() -self.file_name = None -if data_config is not None: - This can be any string field. - For example, if your data set is in csv format, - can be ""CSV"", "".csv"", ""csv"", ""csv_file"" and more. -if '' in data_config: -self.file_name = data_config[''] - extract other additional parameters from info if any. - - load and preprocess the training and testing data -self.load_and_preprocess_data() - -"""""" - Example: - (self.x_train, self.y_train), (self.x_test, self.y_test) = self.load_dataset() -"""""" - -def load_and_preprocess_data(self): -"""""" -Loads and pre-processeses local datasets, -and updates self.x_train, self.y_train, self.x_test, self.y_test. - - Example: - return (self.x_train, self.y_train), (self.x_test, self.y_test) -"""""" - -pass - -def get_data(self): -"""""" -Gets the prepared training and testing data. - -:return: ((x_train, y_train), (x_test, y_test)) most build-in training modules expect data is returned in this format -:rtype: tuple - -This function should be as brief as possible. Any pre-processing operations should be performed in a separate function and not inside get_data(), especially computationally expensive ones. - - Example: - X, y = load_somedata() - x_train, x_test, y_train, y_test = -" -51426DCF985B97AF6172727AFCF353A481591560_3,51426DCF985B97AF6172727AFCF353A481591560," train_test_split(X, y, test_size=TEST_SIZE, random_state=RANDOM_STATE) - return (x_train, y_train), (x_test, y_test) -"""""" -pass - -def preprocess(self, X, y): -pass - -" -51426DCF985B97AF6172727AFCF353A481591560_4,51426DCF985B97AF6172727AFCF353A481591560,"Parameters - - - -* your_data_file_type: This can be any string field. For example, if your data set is in csv format, your_data_file_type can be ""CSV"", "".csv"", ""csv"", ""csv_file"" and more. - - - -" -51426DCF985B97AF6172727AFCF353A481591560_5,51426DCF985B97AF6172727AFCF353A481591560," Return a data generator defined by Keras or Tensorflow - -The following is a code example that needs to be included as part of the get_data function to return a data generator defined by Keras or Tensorflow: - -train_gen = ImageDataGenerator(rotation_range=8, -width_sht_range=0.08, -shear_range=0.3, -height_shift_range=0.08, -zoom_range=0.08) - -train_datagenerator = train_gen.flow( -x_train, y_train, batch_size=64) - -return train_datagenerator - -" -51426DCF985B97AF6172727AFCF353A481591560_6,51426DCF985B97AF6172727AFCF353A481591560," Data handler examples - - - -* [MNIST Keras data handler](https://github.com/IBMDataScience/sample-notebooks/blob/master/Files/mnist_keras_data_handler.py) -* [Adult XGBoost data handler](https://github.com/IBMDataScience/sample-notebooks/blob/master/Files/adult_sklearn_data_handler.py) - - - -Parent topic:[Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html) -" -C48E63F001DFAE875E1C82B5D163B7A2C9961CE2_0,C48E63F001DFAE875E1C82B5D163B7A2C9961CE2," Applying homomorphic encryption for security and privacy - -Federated learning supports homomorphic encryption as an added measure of security for federated training data. Homomorphic encryption is a form of public key cryptography that enables computations on the encrypted data without first decrypting it, meaning the data can be used in modeling without exposing it to the risk of discovery. - -With homomorphic encryption, the results of the computations remain in encrypted form and when decrypted, result in an output that is the same as the output produced with computations performed on unencrypted data. It uses a public key for encryption and a private key for decryption. - -" -C48E63F001DFAE875E1C82B5D163B7A2C9961CE2_1,C48E63F001DFAE875E1C82B5D163B7A2C9961CE2," How it works with Federated Learning - -Homomorphic encryption is an optional encryption method to add additional security and privacy to a Federated Learning experiment. When homomorphic encryption is applied in a Federated Learning experiment, the parties send their homomorphically encrypted model updates to the aggregator. The aggregator does not have the private key and can only see the homomorphically encrypted model updates. For example, the aggregator cannot reverse engineer the model updates to discover information on the parties' training data. The aggregator fuses the model updates in their encrypted form which results in an encrypted aggregated model. Then the aggregator sends the encrypted aggregated model to the participating parties who can use their private key for decryption and continue with the next round of training. Only the participating parties can decrypt model data. - -" -C48E63F001DFAE875E1C82B5D163B7A2C9961CE2_2,C48E63F001DFAE875E1C82B5D163B7A2C9961CE2," Supported frameworks and fusion methods - -Fully Homomorphic Encryption (FHE) supports the simple average fusion method for these model frameworks: - - - -* Tensorflow -* Pytorch -* Scikit-learn classification -* Scikit-learn regression - - - -" -C48E63F001DFAE875E1C82B5D163B7A2C9961CE2_3,C48E63F001DFAE875E1C82B5D163B7A2C9961CE2," Before you begin - -To get started with using homomorphic encryption, ensure that your experiment meets the following requirements: - - - -* The hardware spec must be minimum small. Depending on the level of encryption that you apply, you might need a larger hardware spec to accommodate the resource consumption caused by more powerful data encryption. See the encryption level table in Configuring the aggregator.- The software spec is fl-rt22.2-py3.10. -* FHE is supported in Python client version 1.0.263 or later. All parties must use the same Python client version. - - - -" -C48E63F001DFAE875E1C82B5D163B7A2C9961CE2_4,C48E63F001DFAE875E1C82B5D163B7A2C9961CE2," Requirements for the parties - -Each party must: - - - -* Run on a Linux x86 system. -* Configure with a root certificate that identifies a certificate authority that is uniform to all parties. -* Configure an RSA public and private key pair with attributes described in the following table. -* Configure with a certificate of the party issued by the certificate authority. The RSA public key must be included in the party's certificate. - - - -Note: You can also choose to use self-signed certificates. - -Homomorphic public and private encryption keys are generated and distributed automatically and securely among the parties for each experiment. Only the parties participating in an experiment have access to the private key generated for the experiment. To support the automatic generation and distribution mechanism, the parties must be configured with the certificates and RSA keys specified previously. - -" -C48E63F001DFAE875E1C82B5D163B7A2C9961CE2_5,C48E63F001DFAE875E1C82B5D163B7A2C9961CE2," RSA key requirements - - - -Table 1. RSA Key Requirements - - Attribute Requirement - - Key size 4096 bit - Public exponent 65537 - Password None - Hash algorithm SHA256 - File format The key and certificate files must be in ""PEM"" format - - - -" -C48E63F001DFAE875E1C82B5D163B7A2C9961CE2_6,C48E63F001DFAE875E1C82B5D163B7A2C9961CE2," Configuring the aggregator (admin) - -As you create a Federated Learning experiment, follow these steps: - - - -1. In the Configure tab, toggle ""Enable homomorphic encryption"". -2. Choose small or above for Hardware specification. Depending on the level of encryption that you apply, you might need a larger hardware spec to accommodate the resource consumption for homomorphic encryption. -3. Ensure that you upload an unencrypted initial model when selecting the model file for Model specification. -4. Select ""Simple average (encrypted)"" for Fusion method. Click Next. -5. Check Show advanced in the Define hyperparameters tab. -6. Select the level of encryption in Encryption level. -Higher encryption levels increase security and precision, and require higher resource consumption (e.g. computation, memory, network bandwidth). The default is encryption level 1. -See the following table for description of the encryption levels: - - - - - -Increasing encryption level and security and precision - - Level Security Precision - - 1 High Good - 2 High High - 3 Very high Good - 4 Very high High - - - -Security is the strength of the encryption, typically measured by the number of operations that an attacker must perform to break the encryption. -Precision is the precision of the encryption system's outcomes. Higher precision levels reduce loss of accuracy of the model due to the encryption. - -" -C48E63F001DFAE875E1C82B5D163B7A2C9961CE2_7,C48E63F001DFAE875E1C82B5D163B7A2C9961CE2," Connecting to the aggregator (party) - -The following steps only show the configuration needed for homomorphic encryption. For a step-by-step tutorial of using homomorphic encryption in Federated Learning, see [FHE sample](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-fhe-sample.html). - -To see how to create a general end-to-end party connector script, see [Connect to the aggregator (party)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-conn.html). - - - -1. Install the Python client with FHE with the following command: -pip install 'ibm_watson_machine_learning[fl-rt23.1-py3.10,fl-crypto]' -2. Configure the party as follows: - -party_config = { -""local_training"": { -""info"": { -""crypto"": { -""key_manager"": { -""key_mgr_info"": { -""distribution"": { -""ca_cert_file_path"": ""path of the root certificate file identifying the certificate authority"", -""my_cert_file_path"": ""path of the certificate file of the party issued by the certificate authority"", -""asym_key_file_path"": ""path of the RSA key file of the party"" -} -} -} -} -} -} -} -} -3. Run the party connector script after configuration. - - - -" -C48E63F001DFAE875E1C82B5D163B7A2C9961CE2_8,C48E63F001DFAE875E1C82B5D163B7A2C9961CE2," Additional resources - -Parent topic:[Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html) -" -4CD539B8153216F80B26729A35AD4CD04A9C27DB_0,4CD539B8153216F80B26729A35AD4CD04A9C27DB," Creating the initial model - -Parties can create and save the initial model before training by following a set of examples. - - - -* [Save the Tensorflow model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=entf-config) -* [Save the Scikit-learn model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=ensklearn-config) -* [Save the Pytorch model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=enpytorch) - - - -Consider the configuration examples that match your model type. - -" -4CD539B8153216F80B26729A35AD4CD04A9C27DB_1,4CD539B8153216F80B26729A35AD4CD04A9C27DB," Save the Tensorflow model - -import tensorflow as tf -from tensorflow.keras import -from tensorflow.keras.layers import -import numpy as np -import os - -class MyModel(Model): -def __init__(self): -super(MyModel, self).__init__() -self.conv1 = Conv2D(32, 3, activation='relu') -self.flatten = Flatten() -self.d1 = Dense(128, activation='relu') -self.d2 = Dense(10) - -def call(self, x): -x = self.conv1(x) -x = self.flatten(x) -x = self.d1(x) -return self.d2(x) - - Create an instance of the model - -model = MyModel() -loss_object = tf.keras.losses.SparseCategoricalCrossentropy( -from_logits=True) -optimizer = tf.keras.optimizers.Adam() -acc = tf.keras.metrics.SparseCategoricalAccuracy(name='accuracy') -model.compile(optimizer=optimizer, loss=loss_object, metrics=[acc]) -img_rows, img_cols = 28, 28 -input_shape = (None, img_rows, img_cols, 1) -model.compute_output_shape(input_shape=input_shape) - -dir = ""./model_architecture"" -if not os.path.exists(dir): -os.makedirs(dir) - -model.save(dir) - -If you choose Tensorflow as the model framework, you need to save a Keras model as the SavedModel format. A Keras model can be saved in SavedModel format by using tf.keras.model.save(). - -To compress your files, run the command zip -r mymodel.zip model_architecture. The contents of your .zip file must contain: - -mymodel.zip -└── model_architecture -├── assets -├── keras_metadata.pb -├── saved_model.pb -└── variables -" -4CD539B8153216F80B26729A35AD4CD04A9C27DB_2,4CD539B8153216F80B26729A35AD4CD04A9C27DB,"├── variables.data-00000-of-00001 -└── variables.index - -" -4CD539B8153216F80B26729A35AD4CD04A9C27DB_3,4CD539B8153216F80B26729A35AD4CD04A9C27DB," Save the Scikit-learn model - - - -* [SKLearn classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=ensk-class) -* [SKLearn regression](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=ensk-reg) -* [SKLearn Kmeans](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=ensk-k) - - - -" -4CD539B8153216F80B26729A35AD4CD04A9C27DB_4,4CD539B8153216F80B26729A35AD4CD04A9C27DB," SKLearn classification - - SKLearn classification - -from sklearn.linear_model import SGDClassifier -import numpy as np -import joblib - -model = SGDClassifier(loss='log', penalty='l2') -model.classes_ = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) - You must specify the class label for IBM Federated Learning using model.classes. Class labels must be contained in a numpy array. - In the example, there are 10 classes. - -joblib.dump(model, ""./model_architecture.pickle"") - -" -4CD539B8153216F80B26729A35AD4CD04A9C27DB_5,4CD539B8153216F80B26729A35AD4CD04A9C27DB," SKLearn regression - - Sklearn regression - -from sklearn.linear_model import SGDRegressor -import pickle - -model = SGDRegressor(loss='huber', penalty='l2') - -with open(""./model_architecture.pickle"", 'wb') as f: -pickle.dump(model, f) - -" -4CD539B8153216F80B26729A35AD4CD04A9C27DB_6,4CD539B8153216F80B26729A35AD4CD04A9C27DB," SKLearn Kmeans - - SKLearn Kmeans -from sklearn.cluster import KMeans -import joblib - -model = KMeans() -joblib.dump(model, ""./model_architecture.pickle"") - -You need to create a .zip file that contains your model in pickle format by running the command zip mymodel.zip model_architecture.pickle. The contents of your .zip file must contain: - -mymodel.zip -└── model_architecture.pickle - -" -4CD539B8153216F80B26729A35AD4CD04A9C27DB_7,4CD539B8153216F80B26729A35AD4CD04A9C27DB," Save the PyTorch model - -import torch -import torch.nn as nn - -model = nn.Sequential( -nn.Flatten(start_dim=1, end_dim=-1), -nn.Linear(in_features=784, out_features=256, bias=True), -nn.ReLU(), -nn.Linear(in_features=256, out_features=256, bias=True), -nn.ReLU(), -nn.Linear(in_features=256, out_features=256, bias=True), -nn.ReLU(), -nn.Linear(in_features=256, out_features=100, bias=True), -nn.ReLU(), -nn.Linear(in_features=100, out_features=50, bias=True), -nn.ReLU(), -nn.Linear(in_features=50, out_features=10, bias=True), -nn.LogSoftmax(dim=1), -).double() - -torch.save(model, ""./model_architecture.pt"") - -You need to create a .zip file containing your model in pickle format. Run the command zip mymodel.zip model_architecture.pt. The contents of your .zip file should contain: - -mymodel.zip -└── model_architecture.pt - -Parent topic:[Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html) -" -3ACF4AABD6BE9C3BC0E0A363C3BFFFDD4A37B442_0,3ACF4AABD6BE9C3BC0E0A363C3BFFFDD4A37B442," Monitoring the experiment and saving the model - -Any party or admin with collaborator access to the experiment can monitor the experiment and save a copy of the model. - -As the experiment runs, you can check the progress of the experiment. After the training is complete, you can view your results, save and deploy the model, and then test the model with new data. - -" -3ACF4AABD6BE9C3BC0E0A363C3BFFFDD4A37B442_1,3ACF4AABD6BE9C3BC0E0A363C3BFFFDD4A37B442," Monitoring the experiment - -When all parties run the party connector script, the experiment starts training automatically. As the training runs, you can view a dynamic diagram of the training progress. For each round of training, you can view the four stages of a training round: - - - -* Sending model: Federated Learning sends the model metrics to each party. -* Training: The process of training the data locally. Each party trains to produce a local model that is fused. No data is exchanged between parties. -* Receiving models: After training is complete, each party sends its local model to the aggregator. The data is not sent and remains private. -* Aggregating: The aggregator combines the models that are sent by each of the remote parties to create an aggregated model. - - - -" -3ACF4AABD6BE9C3BC0E0A363C3BFFFDD4A37B442_2,3ACF4AABD6BE9C3BC0E0A363C3BFFFDD4A37B442," Saving your model - -When the training is complete, a chart that displays the model accuracy over each round of training is drawn. Hover over the points on the chart for more information on a single point's exact metrics. - -A Training rounds table shows details for each training round. The table displays the participating parties' average accuracy of their model training for each round. - -![Screenshot of View Setup Information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fl-display.png) - -When you are done with the viewing, click Save model to project to save the Federated Learning model to your project. - -" -3ACF4AABD6BE9C3BC0E0A363C3BFFFDD4A37B442_3,3ACF4AABD6BE9C3BC0E0A363C3BFFFDD4A37B442," Rerun the experiment - -You can rerun the experiment as many times as you need in your project. - -Note:If you encounter errors when rerunning an experiment, see [Troubleshoot](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-troubleshoot.html) for more details. - -" -3ACF4AABD6BE9C3BC0E0A363C3BFFFDD4A37B442_4,3ACF4AABD6BE9C3BC0E0A363C3BFFFDD4A37B442," Deploying your model - -After you save your Federated Learning model, you can deploy and score the model like other machine learning models in a Watson Studio platform. - -See [Deploying models](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html) for more details. - -Parent topic:[Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html) -" -4B16740C786C0846194987998DAD887250BE95BF_0,4B16740C786C0846194987998DAD887250BE95BF," Hyperparameter definitions - -Definitions of hyperparameters used in the experiment training. One or more of these hyperparameter options might be used, depending on your framework and fusion method. - - - -Hyperparameter definitions - - Hyperparameters Description - - Rounds Int value. The number of training iterations to complete between the aggregator and the remote systems. - Termination accuracy (Optional) Float value. Takes model_accuracy and compares it to a numerical value. If the condition is satisfied, then the experiment finishes early.

For example, termination_predicate: accuracy >= 0.8 finishes the experiment when the mean of model accuracy for participating parties is greater than or equal to 80%. Currently, Federated Learning accepts one type of early termination condition (model accuracy) for classification models only. - Quorum (Optional) Float value. Proceeds with model training after the aggregator reaches a certain ratio of party responses. Takes a decimal value between 0 - 1. The default is 1. The model training starts only after party responses reach the indicated ratio value.
For example, setting this value to 0.5 starts the training after 50% of the registered parties responded to the aggregator call. - Max Timeout (Optional) Int value. Terminates the Federated Learning experiment if the waiting time for party responses exceeds this value in seconds. Takes a numerical value up to 43200. If this value in seconds passes and the quorum ratio is not reached, the experiment terminates.

For example, max_timeout = 1000 terminates the experiment after 1000 seconds if the parties do not respond in that time. - Sketch accuracy vs privacy (Optional) Float value. Used with XGBoost training to control the relative accuracy of sketched data sent to the aggregator. Takes a decimal value between 0 and 1. Higher values will result in higher quality models but with a reduction in data privacy and increase in resource consumption. -" -4B16740C786C0846194987998DAD887250BE95BF_1,4B16740C786C0846194987998DAD887250BE95BF," Number of classes Int value. Number of target classes for the classification model. Required if ""Loss"" hyperparameter is:
- auto
- binary_crossentropy
- categorical_crossentropy
- Learning rate Decimal value. The learning rate, also known as shrinkage. This is used as a multiplicative factor for the leaves values. - Loss String value. The loss function to use in the boosting process.
- binary_crossentropy (also known as logistic loss) is used for binary classification.
- categorical_crossentropy is used for multiclass classification.
- auto chooses either loss function depending on the nature of the problem.
- least_squares is used for regression. - Max Iter Int value. The total number of passes over the local training data set to train a Scikit-learn model. - N cluster Int value. The number of clusters to form and the number of centroids to generate. - Epoch (Optional) Int value. The number of local training iterations to be preformed by each remote party for each round. For example, if you set Rounds to 2 and Epochs to 5, all remote parties train locally 5 times before the model is sent to the aggregator. In round 2, the aggregator model is trained locally again by all parties 5 times and re-sent to the aggregator. - sigma Float value. Determines how far the local model neurons are allowed from the global model. A bigger value allows more matching and produces a smaller global model. Default value is 1. - sigma0 Float value. Defines the permitted deviation of the global network neurons. Default value is 1. - gamma Float value. Indian Buffet Process parameter that controls the expected number of features in each observation. Default value is 1. - - - -Parent topic:[Frameworks, fusion methods, and Python versions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-frames.html) -" -E0D36A6F5028FC5ED005E87FAF9F65F976E62A37_0,E0D36A6F5028FC5ED005E87FAF9F65F976E62A37," Set up your system - -Before you can use IBM Federated Learning, ensure that you have the required hardware, software, and dependencies. - -" -E0D36A6F5028FC5ED005E87FAF9F65F976E62A37_1,E0D36A6F5028FC5ED005E87FAF9F65F976E62A37," Core requirements by role - -Each entity that participates in a Federated Learning experiment must meet the requirements for their role. - -" -E0D36A6F5028FC5ED005E87FAF9F65F976E62A37_2,E0D36A6F5028FC5ED005E87FAF9F65F976E62A37," Admin software requirements - -Designate an admin for the Federated Learning experiment. The admin must have: - - - -* Access to the platform with Watson Studio and Watson Machine Learning enabled. -You must [create a Watson Machine Learning service instance](https://cloud.ibm.com/catalog/services/machine-learning). -* A [project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) for assembling the global model. You must [associate the Watson Machine Learning service instance with your project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html). - - - -" -E0D36A6F5028FC5ED005E87FAF9F65F976E62A37_3,E0D36A6F5028FC5ED005E87FAF9F65F976E62A37," Party hardware and software requirements - -Each party must have a system that meets these minimum requirements. - -Note: Remote parties participating in the same Federated Learning experiment can use different hardware specs and architectures, as long as they each meet the minimum requirement. - -" -E0D36A6F5028FC5ED005E87FAF9F65F976E62A37_4,E0D36A6F5028FC5ED005E87FAF9F65F976E62A37," Supported architectures - - - -* x86 64-bit -* PPC -* Mac M-series -* 4 GB memory or greater - - - -" -E0D36A6F5028FC5ED005E87FAF9F65F976E62A37_5,E0D36A6F5028FC5ED005E87FAF9F65F976E62A37," Supported environments - - - -* Linux -* Mac OS/Unix -* Windows - - - -" -E0D36A6F5028FC5ED005E87FAF9F65F976E62A37_6,E0D36A6F5028FC5ED005E87FAF9F65F976E62A37," Software dependencies - - - -* A supported [Python version and a machine learning framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-frames.html). -* The Watson Machine Learning Python client. - - - -1. If you are using Linux, run pip install 'ibm-watson-machine-learning[fl-rt22.2-py3.10]'. -2. If you are using Mac OS with M-series CPU and Conda, download the installation script and then run ./install_fl_rt22.2_macos.sh . - - - - - -" -E0D36A6F5028FC5ED005E87FAF9F65F976E62A37_7,E0D36A6F5028FC5ED005E87FAF9F65F976E62A37," Network requirements - -An outbound connection from the remote party to aggregator is required. Parties can use firewalls that restrict internal connections with each other. - -" -E0D36A6F5028FC5ED005E87FAF9F65F976E62A37_8,E0D36A6F5028FC5ED005E87FAF9F65F976E62A37," Data sources requirements - -Data must comply with these requirements. - - - -* Data must be in a directory or storage repository that is accessible to the party that uses them. -* Each data source for a federate model must have the same features. IBM Federated Learning supports horizontal federated learning only. -* Data must be in a readable format, but the formats can vary by data source. Suggested formats include: - - - -* Hive -* Excel -* CSV -* XML -* Database - - - - - -Parent topic:[Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html) -" -E5895BC081EDBF0CD7340015DECD0D0180AAC44A,E5895BC081EDBF0CD7340015DECD0D0180AAC44A," Creating a Federated Learning experiment - -Learn how to create a Federated Learning experiment to train a machine learning model. - -Watch this short overview video of how to create a Federated Learning experiment. - -Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -Follow these steps to create a Federated Learning experiment: - - - -* [Set up your system](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-setup.html) -* [Creating the initial model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html) -* [Create the data handler](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-handler.html) -* [Starting the aggregator (Admin)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-agg.html) -* [Connecting to the aggregator (Party)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-conn.html) -* [Monitoring and saving the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-mon.html) - - - -Parent topic:[IBM Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html) -" -8FFE1FB9CAF854DED9CA52190D4874D8280D26B0_0,8FFE1FB9CAF854DED9CA52190D4874D8280D26B0," Terminology - -Terminology that is used in IBM Federated Learning training processes. - -" -8FFE1FB9CAF854DED9CA52190D4874D8280D26B0_1,8FFE1FB9CAF854DED9CA52190D4874D8280D26B0," Terminology - - - -Federated Learning terminology - - Term Definition - - Party Users that contribute different sources of data to train a model collaboratively. Federated Learning ensures that the training occurs with no data exposure risk across the different parties.
A party must have at least Viewer permission in the Watson Studio Federated Learning project. - Admin A party member that configures the Federated Learning experiment to specify how many parties are allowed, which frameworks to use, and sets up the Remote Training Systems (RTS). They start the Federated Learning experiment and see it to the end.
An admin must have at least Editor permission in the Watson Studio Federated Learning project. - Remote Training System An asset that is used to authenticate a party to the aggregator. Project members register in the Remote Training System (RTS) before training. Only one of the members can use one RTS to participate in an experiment as a party. Multiple contributing parties must each authenticate with one RTS for an experiment. - Aggregator The aggregator fuses the model results between the parties to build one model. - Fusion method The algorithm that is used to combine the results that the parties return to the aggregator. - Data handler In IBM Federated Learning, data handler is a class that is used to load and pre-process data. It also helps to ensure that data that is collected from multiple sources are formatted uniformly to be trained. More details about the data handler can be found in [Data Handler](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-handler.html). - Global model The resulting model that is fused between different parties. - Training round A training round is the process of local data training, global model fusion, and update. Training is iterative. The admin can choose the number of training rounds. - - - -Parent topic:[Get started](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-get-started.html) -" -E64B1811E55868CF510B06BFD1A24BA4AC3008F1_0,E64B1811E55868CF510B06BFD1A24BA4AC3008F1," Federated Learning Tensorflow samples - -Download and review sample files that show how to run a Federated Learning experiment by using API calls with a Tensorflow Keras model framework. - -To see a step-by-step UI driven approach rather than sample files, see the [Federated Learning Tensorflow tutorial for UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html). - -" -E64B1811E55868CF510B06BFD1A24BA4AC3008F1_1,E64B1811E55868CF510B06BFD1A24BA4AC3008F1," Download the Federated Learning sample files - -The Federated Learning sample has two parts, both in Jupyter Notebook format that can run in the latest Python environment. - -For single-user demonstrative purposes, the Notebooks are placed in a project. Access the [Federated Learning project](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/cab78523832431e767c41527a42a6727), and click Create project to get all the sample files at once. - -You can also get the Notebook separately. Since, for practical purposes of Federated Learning, one user would run the admin Notebook and multiple users would run the party Notebook. For more details on the admin and party, see [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html). - - - -1. [Federated Learning Tensorflow Demo Part 1 - for Admin](https://github.com/IBMDataScience/sample-notebooks/blob/master/CloudPakForData/notebooks/4.7/Federated_Learning_TF_Demo_Part_1.ipynb) -2. [Federated Learning Tensorflow Demo Part 2 - for Party](https://github.com/IBMDataScience/sample-notebooks/blob/master/CloudPakForData/notebooks/4.7/Federated_Learning_TF_Demo_Part_2.ipynb) - - - -Parent topic:[Federated Learning tutorial and samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html) -" -37DC9376A7FB6EB772D242B85909A023C43C2417_0,37DC9376A7FB6EB772D242B85909A023C43C2417," Federated Learning Tensorflow tutorial - -This tutorial demonstrates the usage of Federated Learning with the goal of training a machine learning model with data from different users without having users share their data. The steps are done in a low code environment with the UI and with a Tensorflow framework. - -Note:This is a step-by-step tutorial for running a UI driven Federated Learning experiment. To see a code sample for an API driven approach, see [Federated Learning Tensorflow samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-samples.html). Tip:In this tutorial, admin refers to the user that starts the Federated Learning experiment, and party refers to one or more users who send their model results after the experiment is started by the admin. While the tutorial can be done by the admin and multiple parties, a single user can also complete a full runthrough as both the admin and the party. For a simpler demonstrative purpose, in the following tutorial only one data set is submitted by one party. For more information on the admin and party, see [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html). - -Watch this short video tutorial of how to create a Federated Learning experiment with Watson Studio. - -Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -In this tutorial you will learn to: - - - -* [Step 1: Start Federated Learning as the admin](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html?context=cdpaas&locale=enstep-1) -* [Step 2: Train model as a party](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html?context=cdpaas&locale=enstep-2) -" -37DC9376A7FB6EB772D242B85909A023C43C2417_1,37DC9376A7FB6EB772D242B85909A023C43C2417,"* [Step 3: Save and deploy the model online](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html?context=cdpaas&locale=enstep-3) - - - -" -37DC9376A7FB6EB772D242B85909A023C43C2417_2,37DC9376A7FB6EB772D242B85909A023C43C2417," Step 1: Start Federated Learning as the admin - -In this tutorial, you train a Federated Learning experiment with a Tensorflow framework and the MNIST data set. - -" -37DC9376A7FB6EB772D242B85909A023C43C2417_3,37DC9376A7FB6EB772D242B85909A023C43C2417," Before you begin - - - -1. Log in to [IBM Cloud](https://cloud.ibm.com/). If you don't have an account, create one with any email. -2. [Create a Watson Machine Learning service instance](https://cloud.ibm.com/catalog/services/machine-learning) if you do not have it set up in your environment. -3. Log in to [watsonx](https://dataplatform.cloud.ibm.com/home2?context=wx). -4. Use an existing [project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) or create a new one. You must have at least admin permission. -5. Associate the Watson Machine Learning service with your project. - - - -1. In your project, click the Manage > Service & integrations. -2. Click Associate service. -3. Select your Watson Machine Learning instance from the list, and click Associate; or click New service if you do not have one to set up an instance. - - - -![Screenshot of associating the service](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fl_tut_add_wml_service.png) - - - -" -37DC9376A7FB6EB772D242B85909A023C43C2417_4,37DC9376A7FB6EB772D242B85909A023C43C2417," Start the aggregator - - - -1. Create the Federated learning experiment asset: - - - -1. Click the Assets tab in your project. -2. Click New asset > Train models on distributed data. -3. Type a Name for your experiment and optionally a description. -4. Verify the associated Watson Machine Learning instance under Select a machine learning instance. If you don't see a Watson Machine Learning instance associated, follow these steps: - - - -1. Click Associate a Machine Learning Service Instance. -2. Select an existing instance and click Associate, or create a New service. -3. Click Reload to see the associated service. - -![Screenshot of associating the service](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fl_demo_2.png) -4. Click Next. - - - - - -2. Configure the experiment. - - - -1. On the Configure page, select a Hardware specification. -2. Under the Machine learning framework dropdown, select Tensorflow 2. -3. Select a Model type. -4. Download the [untrained model](https://github.com/IBMDataScience/sample-notebooks/raw/master/Files/tf_mnist_model.zip). -5. Back in the Federated Learning experiment, click Select under Model specification. -6. Drag the downloaded file named tf_mnist_model.zip onto the Upload file box.1. Select runtime-22.2-py3.10 for the Software Specification dropdown. -7. Give your model a name, and then click Add. - -![Screenshot of importing an initial model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fl_demo_3.png) -8. Click Weighted average for the Fusion method, and click Next. - -![Screenshot of Fusion methods UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fl_demo_4.png) - - - -3. Define the hyperparameters. - - - -1. Accept the default hyperparameters or adjust as needed. -2. When you are finished, click Next. - - - -4. Select remote training systems. - - - -1. Click Add new systems. - -" -37DC9376A7FB6EB772D242B85909A023C43C2417_5,37DC9376A7FB6EB772D242B85909A023C43C2417,"![Screenshot of Add RTS UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fl_demo_7.png) -2. Give your Remote Training System a name. -3. Under Allowed identities, choose the user that is your party, and then click Add. In this tutorial, you can add a dummy user or yourself, for demonstrative purposes. -This user must be added to your project as a collaborator with Editor or higher permissions. Add additional systems by repeating this step for each remote party you intent to use. -4. When you are finished, click Add systems. - -![Screenshot of adding users](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fl_demo_8.png) -5. Return to the Select remote training systems page, verify that your system is selected, and then click Next. - - - -5. Review your settings, and then click Create. -6. Watch the status. Your Federated Learning experiment status is Pending when it starts. When your experiment is ready for parties to connect, the status will change to Setup – Waiting for remote systems. This may take a few minutes. -7. Click View setup information to download the party configuration and the party connector script that can be run on the remote party. -8. Click the download icon besides each of the remote training systems that you created, and then click Party connector script. This gives you the party connector script. Save the script to a directory on your machine. - -![Screenshot of Training UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fl-demo_9_2.png) - - - -" -37DC9376A7FB6EB772D242B85909A023C43C2417_6,37DC9376A7FB6EB772D242B85909A023C43C2417," Step 2: Train model as the party - -Follow these steps to train the model as a party: - - - -1. Ensure that you are using the same Python version as the admin. Using a different Python version might cause compatibility issues. To see Python versions compatible with different frameworks, see [Frameworks and Python version compatibility](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-frames.htmlfl-py-fmwk). -2. Create a new local directory, and put your party connector script in it. -3. [Download the data handler mnist_keras_data_handler.py](https://raw.githubusercontent.com/IBMDataScience/sample-notebooks/master/Files/mnist_keras_data_handler.py) by right-clicking on it and click Save link as. Save it to the same directory as the party connector script. -4. [Download the MNIST handwriting data set](https://api.dataplatform.cloud.ibm.com/v2/gallery-assets/entries/903188bb984a30f38bb889102a1baae5/data) from our Samples. In the the same directory as the party connector script, data handler, and the rest of your files, unzip it by running the unzip command unzip MNIST-pkl.zip. -5. Install Watson Machine Learning. - - - -* If you are using Linux, run pip install 'ibm-watson-machine-learning[fl-rt22.2-py3.10]'. -* If you are using Mac OS with M-series CPU and Conda, download the [installation script](https://raw.github.ibm.com/WML/federated-learning/master/docs/install_fl_rt22.2_macos.sh?token=AAAXW7VVQZF7LYMTX5VOW7DEDULLE) and then run ./install_fl_rt22.2_macos.sh . -" -37DC9376A7FB6EB772D242B85909A023C43C2417_7,37DC9376A7FB6EB772D242B85909A023C43C2417,"You now have the party connector script, mnist_keras_data_handler.py, mnist-keras-test.pkl and mnist-keras-train.pkl, data handler in the same directory. - - - -6. Your party connector script looks similar to the following. Edit it by filling in the data file locations, the data handler, and API key for the user defined in the remote training system. To get your API key, go to Manage > Access(IAM) > API keys in your [IBM Cloud account](https://cloud.ibm.com/iam/apikeys). If you don't have one, click Create API key, fill out the fields, and click Create. - -from ibm_watson_machine_learning import APIClient -wml_credentials = { -""url"": ""https://us-south.ml.cloud.ibm.com"", -""apikey"": """" -} -wml_client = APIClient(wml_credentials) -wml_client.set.default_project(""XXX-XXX-XXX-XXX-XXX"") -party_metadata = { -wml_client.remote_training_systems.ConfigurationMetaNames.DATA_HANDLER: { - Supply the name of the data handler class and path to it. - The info section may be used to pass information to the - data handler. - For example, - ""name"": ""MnistSklearnDataHandler"", - ""path"": ""example.mnist_sklearn_data_handler"", - ""info"": { - ""train_file"": pwd + ""/mnist-keras-train.pkl"", - ""test_file"": pwd + ""/mnist-keras-test.pkl"" - } -""name"": """", -""path"": """", -""info"": { -"""" -} -} -} -party = wml_client.remote_training_systems.create_party(""XXX-XXX-XXX-XXX-XXX"", party_metadata) -party.monitor_logs() -" -37DC9376A7FB6EB772D242B85909A023C43C2417_8,37DC9376A7FB6EB772D242B85909A023C43C2417,"party.run(aggregator_id=""XXX-XXX-XXX-XXX-XXX"", asynchronous=False) -7. Run the party connector script: python3 rts__.py. -From the UI you can monitor the status of your Federated Learning experiment. - - - -" -37DC9376A7FB6EB772D242B85909A023C43C2417_9,37DC9376A7FB6EB772D242B85909A023C43C2417," Step 3: Save and deploy the model online - -In this section, you will learn to save and deploy the model that you trained. - - - -1. Save your model. - - - -1. In your completed Federated Learning experiment, click Save model to project. -2. Give your model a name and click Save. -3. Go to your project home. - - - -2. Create a deployment space, if you don't have one. - - - -1. From the navigation menu ![Navigation menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/navigation-menu.svg), click Deployments. -2. Click New deployment space. -3. Fill in the fields, and click Create. - - - -3. Promote the model to a space. - - - -1. Return to your project, and click the Assets tab. -2. In the Models section, click the model to view its details page. -3. Click Promote to space. -4. Choose a deployment space for your trained model. -5. Select the Go to the model in the space after promoting it option. -6. Click Promote. - - - -4. When the model displays inside the deployment space, click New deployment. - - - -1. Select Online as the Deployment type. -2. Specify a name for the deployment. -3. Click Create. - - - -5. Click the Deployments tab to monitor your model's deployment status. - - - -" -37DC9376A7FB6EB772D242B85909A023C43C2417_10,37DC9376A7FB6EB772D242B85909A023C43C2417," Next steps - -Ready to create your own customized Federated Experiment? See the high level steps in [Creating your Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html). - -Parent topic:[Federated Learning tutorial and samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html) -" -866BBCABEF2C6E3EDDF66300DC2639C938D815F4_0,866BBCABEF2C6E3EDDF66300DC2639C938D815F4," Troubleshooting Federated Learning experiments - -The following are some of the limitations and troubleshoot methods that apply to Federated learning experiments. - -" -866BBCABEF2C6E3EDDF66300DC2639C938D815F4_1,866BBCABEF2C6E3EDDF66300DC2639C938D815F4," Limitations - - - -* If you choose to enable homomorphic encryption, intermediate models can no longer be saved. However, the final model of the training experiment can be saved and used normally. The aggregator will not be able to decrypt the model updates and the intermediate global models. The aggregator can see only the final global model. - - - -" -866BBCABEF2C6E3EDDF66300DC2639C938D815F4_2,866BBCABEF2C6E3EDDF66300DC2639C938D815F4," Troubleshooting - - - -* If a quorum error occurs during homomorphic keys distribution, restart the experiment. -* Changing the name of a Federated Learning experiment causes it to lose its current name, including earlier runs. If this is not intended, create a new experiment with the new name. -* The default software spec is used by every run. If your model type becomes outdated and not compatible with future software specs, re-running an older experiment might run into issues. -* As Remote Training Systems are meant to run on different servers, you might encounter unexpected behavior when you run with multiple parties that are based in the same server. - - - -" -866BBCABEF2C6E3EDDF66300DC2639C938D815F4_3,866BBCABEF2C6E3EDDF66300DC2639C938D815F4," Federated Learning known issues - - - -* [Known issues for Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.htmlwml) - - - -Parent topic:[IBM Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html) -" -D0142FFCD3063427101CCC165C5E5F2B0FA286DB_0,D0142FFCD3063427101CCC165C5E5F2B0FA286DB," Federated Learning XGBoost samples - -These are links to sample files to run Federated Learning by using API calls with an XGBoost framework. To see a step-by-step UI driven approach, go to [Federated Learning XGBoost tutorial for UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html). - -" -D0142FFCD3063427101CCC165C5E5F2B0FA286DB_1,D0142FFCD3063427101CCC165C5E5F2B0FA286DB," Download the Federated Learning sample files - -The Federated Learning samples have two parts, both in Jupyter Notebook format that can run in the latest Python environment. - -For single-user demonstrative purposes, the Notebooks are placed in a project. Go to the following link and click Create project to get all the sample files. - -[Download the Federated Learning project](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/45a71514d67d87bb7900880b4501732c?context=wx) - -You can also get the Notebook separately. For practical purposes of Federated Learning, one user would run the admin Notebook and multiple users would run the party Notebook. For more details on the admin and party, see [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html) - - - -1. [Federated Learning XGBoost Demo Part 1 - for Admin](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/c95a130a2efdddc0a4b38c319a011fed) -2. [Federated Learning XGBoost Demo Part 2 - for Party](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/155a5e78ca72a013e45d54ae87012306) - - - -Parent topic:[Federated Learning tutorial and samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html) -" -FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_0,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E," Federated Learning XGBoost tutorial for UI - -This tutorial demonstrates the usage of Federated Learning with the goal of training a machine learning model with data from different users without having users share their data. The steps are done in a low code environment with the UI and with an XGBoost framework. - -In this tutorial you learn to: - - - -* [Step 1: Start Federated Learning as the admin](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=enstep-1) - - - -* [Before you begin](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=enbefore-you-begin) -* [Start the aggregator](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=enstart-the-aggregator) - - - -* [Step 2: Train model as a party](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=enstep-2) - - - -* [Step 3: Save and deploy the model online](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=enstep-3) -* [Step 4: Score the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=enstep-4) - - - - - -Notes: - - - -" -FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_1,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E,"* This is a step-by-step tutorial for running a UI driven Federated Learning experiment. To see a code sample for an API driven approach, go to [Federated Learning XGBoost samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-samples.html). -* In this tutorial, admin refers to the user that starts the Federated Learning experiment, and party refers to one or more users who send their model results after the experiment is started by the admin. While the tutorial can be done by the admin and multiple parties, a single user can also complete a full run through as both the admin and the party. For a simpler demonstrative purpose, in the following tutorial only one data set is submitted by one party. For more information on the admin and party, see [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html). - - - -" -FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_2,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E," Step 1: Start Federated Learning - -In this section, you learn to start the Federated Learning experiment. - -" -FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_3,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E," Before you begin - - - -1. Log in to [IBM Cloud](https://cloud.ibm.com/). If you don't have an account, create one with any email. -2. [Create a Watson Machine Learning service instance](https://cloud.ibm.com/catalog/services/machine-learning) if you do not have it set up in your environment. -3. Log in to [watsonx](https://dataplatform.cloud.ibm.com/home2?context=wx). -4. Use an existing [project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) or create a new one. You must have at least admin permission. -5. Associate the Watson Machine Learning service with your project. - - - -1. In your project, click the Manage > Service & integrations. -2. Click Associate service. -3. Select your Watson Machine Learning instance from the list, and click Associate; or click New service if you do not have one to set up an instance. - - - -![Screenshot of associating the service](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fl_tut_add_wml_service.png) - - - -" -FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_4,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E," Start the aggregator - - - -1. Create the Federated learning experiment asset: - - - -1. Click the Assets tab in your project. - - - -1. Click New asset > Train models on distributed data. -2. Type a Name for your experiment and optionally a description. -3. Verify the associated Watson Machine Learning instance under Select a machine learning instance. If you don't see a Watson Machine Learning instance associated, follow these steps: - - - -1. Click Associate a Machine Learning Service Instance. -2. Select an existing instance and click Associate, or create a New service. -3. Click Reload to see the associated service. - -![Screenshot of associating the service](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fl_demo_2.png) -4. Click Next. - - - - - - - -2. Configure the experiment. - - - -1. On the Configure page, select a Hardware specification. -2. Under the Machine learning framework dropdown, select scikit-learn. -3. For the Model type, select XGBoost. -4. For the Fusion method, select XGBoost classification fusion - -![Screenshot of selecting XGBoost classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fl_xg_framework.png) - - - -3. Define the hyperparameters. - - - -1. Set the value for the Rounds field to 5. -2. Accept the default values for the rest of the fields. - -![Screenshot of selecting hyperparameters](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fl_xg_hyperparameters.png) -3. Click Next. - - - -4. Select remote training systems. - - - -1. Click Add new systems. - -![Screenshot of Add RTS UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fl_demo_7.png) -2. Give your Remote Training System a name. -3. Under Allowed identities, select the user that will participate in the experiment, and then click Add. You can add as many allowed identities as participants in this Federated Experiment training instance. For this tutorial, choose only yourself. -" -FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_5,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E,"Any allowed identities must be part of the project and have at leastAdmin permission. -4. When you are finished, click Add systems. - -![Screenshot of creating an RTS](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fl_xg_create_rts.png) -5. Return to the Select remote training systems page, verify that your system is selected, and then click Next. - -![Screenshot of selecting RTS](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fl_xg_select_rts.png) - - - -5. Review your settings, and then click Create. -6. Watch the status. Your Federated Learning experiment status is Pending when it starts. When your experiment is ready for parties to connect, the status will change to Setup – Waiting for remote systems. This may take a few minutes. - - - -" -FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_6,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E," Step 2: Train model as a party - - - -1. Ensure that you are using the same Python version as the admin. Using a different Python version might cause compatibility issues. To see Python versions compatible with different frameworks, see [Frameworks and Python version compatibility](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-frames.htmlfl-py-fmwk). -2. Create a new local directory. -3. Download the Adult data set into the directory with this command: wget https://api.dataplatform.cloud.ibm.com/v2/gallery-assets/entries/5fcc01b02d8f0e50af8972dc8963f98e/data -O adult.csv. -4. Download the data handler by running wget https://raw.githubusercontent.com/IBMDataScience/sample-notebooks/master/Files/adult_sklearn_data_handler.py -O adult_sklearn_data_handler.py. -5. Install Watson Machine Learning. - - - -* If you are using Linux, run pip install 'ibm-watson-machine-learning[fl-rt22.2-py3.10]'. -* If you are using Mac OS with M-series CPU and Conda, download the [installation script](https://raw.github.ibm.com/WML/federated-learning/master/docs/install_fl_rt22.2_macos.sh?token=AAAXW7VVQZF7LYMTX5VOW7DEDULLE) and then run ./install_fl_rt22.2_macos.sh . -You now have the party connector script, mnist_keras_data_handler.py, mnist-keras-test.pkl and mnist-keras-train.pkl, data handler in the same directory. - - - -6. Go back to the Federated Learning experiment page, where the aggregator is running. Click View Setup Information. -7. Click the download icon next to the remote training system, and select Party connector script. -" -FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_7,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E,"8. Ensure that you have the party connector script, the Adult data set, and the data handler in the same directory. If you run ls -l, you should see: - -adult.csv -adult_sklearn_data_handler.py -rts__.py -9. In the party connector script: - - - -1. Authenticate using any method. -2. Put in these parameters for the ""data"" section: - -""data"": { -""name"": ""AdultSklearnDataHandler"", -""path"": ""./adult_sklearn_data_handler.py"", -""info"": { -""txt_file"": ""./adult.csv"" -}, -}, - -where: - - - -* name: Class name defined for the data handler. -* path: Path of where the data handler is located. -* info: Create a key value pair for the file type of local data set, or the path of your data set. - - - - - -10. Run the party connector script: python3 rts__.py. -11. When all participating parties connect to the aggregator, the aggregator facilitates the local model training and global model update. Its status is Training. You can monitor the status of your Federated Learning experiment from the user interface. -12. When training is complete, the party receives a Received STOP message on the party. -13. Now, you can save the trained model and deploy it to a space. - - - -" -FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_8,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E," Step 3: Save and deploy the model online - -In this section, you learn how to save and deploy the model that you trained. - - - -1. Save your model. - - - -1. In your completed Federated Learning experiment, click Save model to project. -2. Give your model a name and click Save. -3. Go to your project home. - - - -2. Create a deployment space, if you don't have one. - - - -1. From the navigation menu ![Navigation menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/navigation-menu.svg), click Deployments. -2. Click New deployment space. -3. Fill in the fields, and click Create. - -![Screenshot of creating a deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fl_xg_create_deploy.png) - - - -3. Promote the model to a space. - - - -1. Return to your project, and click the Assets tab. -2. In the Models section, click the model to view its details page. -3. Click Promote to space. -4. Choose a deployment space for your trained model. -5. Select the Go to the model in the space after promoting it option. -6. Click Promote. - - - -4. When the model displays inside the deployment space, click New deployment. - - - -1. Select Online as the Deployment type. -2. Specify a name for the deployment. -3. Click Create. - - - - - -" -FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_9,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E," Step 4: Score the model - -In this section, you learn to create a Python function to process the scoring data to ensure that it is in the same format that was used during training. For comparison, you will also score the raw data set by calling the Python function that we created. - - - -1. Define the Python function as follows. The function loads the scoring data in its raw format and processes the data exactly as it was done during training. Then, score the processed data. - -def adult_scoring_function(): - -import pandas as pd - -from ibm_watson_machine_learning import APIClient - -wml_credentials = { -""url"": ""https://us-south.ml.cloud.ibm.com"", -""apikey"": """" -} -client = APIClient(wml_credentials) -client.set.default_space('') - - converts scoring input data format to pandas dataframe -def create_dataframe(raw_dataset): - -fields = raw_dataset.get(""input_data"")[0].get(""fields"") -values = raw_dataset.get(""input_data"")[0].get(""values"") - -raw_dataframe = pd.DataFrame( -columns = fields, -data = values -) - -return raw_dataframe - - reuse preprocess definition from training data handler -def preprocess(training_data): - -"""""" -Performs the following preprocessing on adult training and testing data: -* Drop following features: 'workclass', 'fnlwgt', 'education', 'marital-status', 'occupation', -'relationship', 'capital-gain', 'capital-loss', 'hours-per-week', 'native-country' -* Map 'race', 'sex' and 'class' values to 0/1 -* ' White': 1, ' Amer-Indian-Eskimo': 0, ' Asian-Pac-Islander': 0, ' Black': 0, ' Other': 0 -* ' Male': 1, ' Female': 0 -* Further details in Kamiran, F. and Calders, T. Data preprocessing techniques for classification without discrimination -" -FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_10,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E,"* Split 'age' and 'education' columns into multiple columns based on value - -:param training_data: Raw training data -:type training_data: pandas.core.frame.DataFrame -:return: Preprocessed training data -:rtype: pandas.core.frame.DataFrame -"""""" -if len(training_data.columns)==15: - drop 'fnlwgt' column -training_data = training_data.drop(training_data.columns[2], axis='columns') - -training_data.columns = ['age', -'workclass', -'education', -'education-num', -'marital-status', -'occupation', -'relationship', -'race', -'sex', -'capital-gain', -'capital-loss', -'hours-per-week', -'native-country', -'class'] - - filter out columns unused in training, and reorder columns -training_dataset = training_data['race', 'sex', 'age', 'education-num', 'class']] - - map 'sex' and 'race' feature values based on sensitive attribute privileged/unpriveleged groups -training_dataset['sex'] = training_dataset['sex'].map({' Female': 0, -' Male': 1}) - -training_dataset['race'] = training_dataset['race'].map({' Asian-Pac-Islander': 0, -' Amer-Indian-Eskimo': 0, -' Other': 0, -' Black': 0, -' White': 1}) - - map 'class' values to 0/1 based on positive and negative classification -training_dataset['class'] = training_dataset['class'].map({' <=50K': 0, ' >50K': 1}) - -training_dataset['age'] = training_dataset['age'].astype(int) -training_dataset['education-num'] = training_dataset['education-num'].astype(int) - -" -FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_11,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E," split age column into category columns -for i in range(8): -if i != 0: -training_dataset['age' + str(i)] = 0 - -for index, row in training_dataset.iterrows(): -if row['age'] < 20: -training_dataset.loc[index, 'age1'] = 1 -elif ((row['age'] < 30) & (row['age'] >= 20)): -training_dataset.loc[index, 'age2'] = 1 -elif ((row['age'] < 40) & (row['age'] >= 30)): -training_dataset.loc[index, 'age3'] = 1 -elif ((row['age'] < 50) & (row['age'] >= 40)): -training_dataset.loc[index, 'age4'] = 1 -elif ((row['age'] < 60) & (row['age'] >= 50)): -training_dataset.loc[index, 'age5'] = 1 -elif ((row['age'] < 70) & (row['age'] >= 60)): -training_dataset.loc[index, 'age6'] = 1 -elif row['age'] >= 70: -training_dataset.loc[index, 'age7'] = 1 - - split age column into multiple columns -training_dataset['ed6less'] = 0 -for i in range(13): -if i >= 6: -training_dataset['ed' + str(i)] = 0 -training_dataset['ed12more'] = 0 - -for index, row in training_dataset.iterrows(): -if row['education-num'] < 6: -training_dataset.loc[index, 'ed6less'] = 1 -elif row['education-num'] == 6: -training_dataset.loc[index, 'ed6'] = 1 -elif row['education-num'] == 7: -training_dataset.loc[index, 'ed7'] = 1 -elif row['education-num'] == 8: -" -FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_12,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E,"training_dataset.loc[index, 'ed8'] = 1 -elif row['education-num'] == 9: -training_dataset.loc[index, 'ed9'] = 1 -elif row['education-num'] == 10: -training_dataset.loc[index, 'ed10'] = 1 -elif row['education-num'] == 11: -training_dataset.loc[index, 'ed11'] = 1 -elif row['education-num'] == 12: -training_dataset.loc[index, 'ed12'] = 1 -elif row['education-num'] > 12: -training_dataset.loc[index, 'ed12more'] = 1 - -training_dataset.drop(['age', 'education-num'], axis=1, inplace=True) - - move class column to be last column -label = training_dataset['class'] -training_dataset.drop('class', axis=1, inplace=True) -training_dataset['class'] = label - -return training_dataset - -def score(raw_dataset): -try: - - create pandas dataframe from input -raw_dataframe = create_dataframe(raw_dataset) - - reuse preprocess from training data handler -processed_dataset = preprocess(raw_dataframe) - - drop class column -processed_dataset.drop('class', inplace=True, axis='columns') - - create data payload for scoring -fields = processed_dataset.columns.values.tolist() -values = processed_dataset.values.tolist() -scoring_dataset = {client.deployments.ScoringMetaNames.INPUT_DATA: [{'fields': fields, 'values': values}]} -print(scoring_dataset) - - score data -prediction = client.deployments.score('', scoring_dataset) -return prediction - -except Exception as e: -return {'error': repr(e)} - -return score -" -FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_13,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E,"2. Replace the variables in the previous Python function: - - - -* API KEY: Your IAM API key. To create a new API key, go to the [IBM Cloud website](https://cloud.ibm.com/), and click Create an IBM Cloud API key under Manage > Access(IAM) > API keys. -* SPACE ID: ID of the Deployment space where the adult income deployment is running. To see your space ID, go to Deployment spaces > YOUR SPACE NAME > Manage. Copy the Space GUID. -* MODEL DEPLOYMENT ID: Online deployment ID for the adult income model. To see your model ID, you can see it by clicking the model in your project. It is in both the address bar and the information pane. - - - -3. Get the Software Spec ID for Python 3.9. For list of other environments run client.software_specifications.list(). software_spec_id = client.software_specifications.get_id_by_name('default_py3.9') -4. Store the Python function into your Watson Studio space. - - stores python function in space -meta_props = { -client.repository.FunctionMetaNames.NAME: 'Adult Income Scoring Function', -client.repository.FunctionMetaNames.SOFTWARE_SPEC_ID: software_spec_id -} -stored_function = client.repository.store_function(meta_props=meta_props, function=adult_scoring_function) -function_id = stored_function['metadata'] -5. Create an online deployment by using the Python function. - - create online deployment for fucntion -meta_props = { -client.deployments.ConfigurationMetaNames.NAME: ""Adult Income Online Scoring Function"", -client.deployments.ConfigurationMetaNames.ONLINE: {} -} -online_deployment = client.deployments.create(function_id, meta_props=meta_props) -function_deployment_id = online_deployment['metadata'] -6. Download the Adult Income data set. This is reused as our scoring data. - -" -FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_14,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E,"import pandas as pd - - read adult csv dataset -adult_csv = pd.read_csv('./adult.csv', dtype='category') - - use 10 random rows for scoring -sample_dataset = adult_csv.sample(n=10) - -fields = sample_dataset.columns.values.tolist() -values = sample_dataset.values.tolist() -7. Score the adult income data by using the Python function created. - -raw_dataset = {client.deployments.ScoringMetaNames.INPUT_DATA: [{'fields': fields, 'values': values}]} - -prediction = client.deployments.score(function_deployment_id, raw_dataset) -print(prediction) - - - -" -FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_15,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E," Next steps - -[Creating your Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html). - -Parent topic:[Federated Learning tutorial and samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html) -" -FD48879C34D316981B4F67C2B82C8179E0042F74_0,FD48879C34D316981B4F67C2B82C8179E0042F74," Credentials for prompting foundation models (IBM Cloud API key and IAM token) - -To prompt foundation models in IBM watsonx.ai programmatically, you need an IBM Cloud API key and sometimes an IBM Cloud IAM token. - -" -FD48879C34D316981B4F67C2B82C8179E0042F74_1,FD48879C34D316981B4F67C2B82C8179E0042F74," IBM Cloud API key - -To use the [foundation models Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html), you need an IBM Cloud API key. - -" -FD48879C34D316981B4F67C2B82C8179E0042F74_2,FD48879C34D316981B4F67C2B82C8179E0042F74,"Python pseudo-code - -my_credentials = { -""url"" : ""https://us-south.ml.cloud.ibm.com"", -""apikey"" : -} -... -model = Model( ... credentials=my_credentials ... ) - -You can create this API key by using multiple interfaces. For full instructions, see [Creating an API key](https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=uicreate_user_key) - -" -FD48879C34D316981B4F67C2B82C8179E0042F74_3,FD48879C34D316981B4F67C2B82C8179E0042F74," IBM Cloud IAM token - -When you click the View code button in the Prompt Lab, a curl command is displayed that you can call outside the Prompt Lab to submit the current prompt and parameters to the selected model and get a generated response. In the command, there is a placeholder for an IBM Cloud IAM token. - -For information about generating that access token, see: [Generating an IBM Cloud IAM token](https://cloud.ibm.com/docs/account?topic=account-iamtoken_from_apikey) - -Parent topic:[Foundation models Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html) -" -52507FE59C92EF1667E463B2C5D709C139673F4D,52507FE59C92EF1667E463B2C5D709C139673F4D," Foundation model terms of use in watsonx.ai - -Review these model terms of use to understand your responsibilities and risks with foundation models. - -By using any foundation model provided with this IBM offering, you acknowledge and understand that: - - - -* Some models that are included in this IBM offering are Non-IBM Products. Review the applicable model information for details on the third-party provider and license terms that apply. See [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html). -* Third Party models have been trained with data that may contain biases and inaccuracies and could generate outputs containing misinformation, obscene or offensive language, or discriminatory content. Users should review and validate the outputs that are generated. -* The output that is generated by all models is provided to augment, not replace, human decision-making by the Client. - - - -Parent topic:[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html) -" -43785386700CF73E37A8F76ADC4EF9FB01EE0AEB_0,43785386700CF73E37A8F76ADC4EF9FB01EE0AEB," Generating accurate output - -Foundation models sometimes generate output that is not factually accurate. If factual accuracy is important for your project, set yourself up for success by learning how and why these models might sometimes get facts wrong and how you can ground generated output in correct facts. - -" -43785386700CF73E37A8F76ADC4EF9FB01EE0AEB_1,43785386700CF73E37A8F76ADC4EF9FB01EE0AEB," Why foundation models get facts wrong - -Foundation models can get facts wrong for a few reasons: - - - -* Pre-training builds word associations, not facts -* Pre-training data sets contain out-of-date facts -* Pre-training data sets do not contain esoteric or domain-specific facts and jargon -* Sampling decoding is more likely to stray from the facts - - - -" -43785386700CF73E37A8F76ADC4EF9FB01EE0AEB_2,43785386700CF73E37A8F76ADC4EF9FB01EE0AEB," Pre-training builds word associations, not facts - -During pre-training, a foundation model builds up a vocabulary of words ([tokens](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html)) encountered in the pre-training data sets. Also during pre-training, statistical relationships between those words become encoded in the model weights. - -For example, ""Mount Everest"" often appears near ""tallest mountain in the world"" in many articles, books, speeches, and other common pre-training sources. As a result, a pre-trained model will probably correctly complete the prompt ""The tallest mountain in the world is "" with the output ""Mount Everest."" - -These word associations can make it seem that facts have been encoded into these models too. For very common knowledge and immutable facts, you might have good luck generating factually accurate output using pre-trained foundation models with simple prompts like the tallest-mountain example. However, it is a risky strategy to rely on only pre-trained word associations when using foundation models in applications where accuracy matters. - -" -43785386700CF73E37A8F76ADC4EF9FB01EE0AEB_3,43785386700CF73E37A8F76ADC4EF9FB01EE0AEB," Pre-training data sets contain out-of-date facts - -Collecting pre-training data sets and performing pre-training runs can take a significant amount of time, sometimes months. If a model was pre-trained on a data set from several years ago, the model vocabulary and word associations encoded in the model weights won't reflect current world events or newly popular themes. For this reason, if you submit the prompt ""The most recent winner of the world cup of football (soccer) is "" to a model pre-trained on information a few years old, the generated output will be out of date. - -" -43785386700CF73E37A8F76ADC4EF9FB01EE0AEB_4,43785386700CF73E37A8F76ADC4EF9FB01EE0AEB," Pre-training data sets do not contain esoteric or domain-specific facts and jargon - -Common foundation model pre-training data sets, such as [The Pile (Wikipedia)](https://en.wikipedia.org/wiki/The_Pile_%28dataset%29), contain hundreds of millions of documents. Given how famous Mount Everest is, it's reasonable to expect a foundation model to have encoded a relationship between ""tallest mountain in the world"" and ""Mount Everest"". However, if a phenomenon, person, or concept is mentioned in only a handful of articles, chances are slim that a foundation model would have any word associations about that topic encoded in its weights. Prompting a pre-trained model about information that was not in its pre-training data sets is unlikely to produce factually accurate generated output. - -" -43785386700CF73E37A8F76ADC4EF9FB01EE0AEB_5,43785386700CF73E37A8F76ADC4EF9FB01EE0AEB," Sampling decoding is more likely to stray from the facts - -Decoding is the process a model uses to choose the words (tokens) in the generated output: - - - -* Greedy decoding always selects the token with the highest probability -* Sampling decoding selects tokens pseudo-randomly from a probability distribution - - - -Greedy decoding generates output that is more predictable and more repetitive. Sampling decoding is more random, which feels ""creative"". If, based on pre-training data sets, the most likely words to follow ""The tallest mountain is "" are ""Mount Everest"", then greedy decoding could reliably generate that factually correct output, whereas sampling decoding might sometimes generate the name of some other mountain or something that's not even a mountain. - -" -43785386700CF73E37A8F76ADC4EF9FB01EE0AEB_6,43785386700CF73E37A8F76ADC4EF9FB01EE0AEB," How to ground generated output in correct facts - -Rather than relying on only pre-trained word associations for factual accuracy, provide context in your prompt text. - -" -43785386700CF73E37A8F76ADC4EF9FB01EE0AEB_7,43785386700CF73E37A8F76ADC4EF9FB01EE0AEB," Use context in your prompt text to establish facts - -When you prompt a foundation model to generate output, the words (tokens) in the generated output are influenced by the words in the model vocabulary and the words in the prompt text. You can use your prompt text to boost factually accurate word associations. - -" -43785386700CF73E37A8F76ADC4EF9FB01EE0AEB_8,43785386700CF73E37A8F76ADC4EF9FB01EE0AEB," Example 1 - -Here's a prompt to cause a model to complete a sentence declaring your favorite color: - -My favorite color is - -Given that only you know what your favorite color is, there's no way the model could reliably generate the correct output. - -Instead, a color will be selected from colors mentioned in the model's pre-training data: - - - -* If greedy decoding is used, whichever color appears most frequently with statements about favorite colors in pre-training content will be selected. -* If sampling decoding is used, a color will be selected randomly from colors mentioned most often as favorites in the pre-training content. - - - -" -43785386700CF73E37A8F76ADC4EF9FB01EE0AEB_9,43785386700CF73E37A8F76ADC4EF9FB01EE0AEB," Example 2 - -Here's a prompt that includes context to establish the facts: - -I recently painted my kitchen yellow, which is my favorite color. - -My favorite color is - -If you prompt a model with text that includes factually accurate context like this, then the output the model generates will be more likely to be accurate. - -For more examples of including context in your prompt, see these samples: - - - -* [Sample 4a - Answer a question based on an article](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample4a) -* [Sample 4b - Answer a question based on an article](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample4b) - - - -" -43785386700CF73E37A8F76ADC4EF9FB01EE0AEB_10,43785386700CF73E37A8F76ADC4EF9FB01EE0AEB," Use less ""creative"" decoding - -When you include context with the needed facts in your prompt, using greedy decoding is likely to generate accurate output. If you need some variety in the output, you can experiment with sampling decoding with low values for parameters like Temperature, Top P, and Top K. However, using sampling decoding increases the risk of inaccurate output. - -" -43785386700CF73E37A8F76ADC4EF9FB01EE0AEB_11,43785386700CF73E37A8F76ADC4EF9FB01EE0AEB," Retrieval-augmented generation - -The retrieval-augmented generation pattern scales out the technique of pulling context into prompts. If you have a knowledge base, such as process documentation in web pages, legal contracts in PDF files, a database of products for sale, a GitHub repository of C++ code files, or any other collection of information, you can use the retrieval-augmented generation pattern to generate factually accurate output based on the information in that knowledge base. - -Retrieval-augmented generation involves three basic steps: - - - -1. Search for relevant content in your knowledge base -2. Pull the most relevant content into your prompt as context -3. Send the combined prompt text to the model to generate output - - - -For more information, see: [Retrieval-augmented generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-rag.html) - -Parent topic:[Prompt tips](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-tips.html) -" -E59B59312D1EB3B2BA78D7E78993883BB3784C2B_0,E59B59312D1EB3B2BA78D7E78993883BB3784C2B," Techniques for avoiding undesirable output - -Every foundation model has the potential to generate output that includes incorrect or even harmful content. Understand the types of undesirable output that can be generated, the reasons for the undesirable output, and steps that you can take to reduce the risk of harm. - -The foundation models that are available in IBM watsonx.ai can generate output that contains hallucinations, personal information, hate speech, abuse, profanity, and bias. The following techniques can help reduce the risk, but do not guarantee that generated output will be free of undesirable content. - -Find techniques to help you avoid the following types of undesirable content in foundation model output: - - - -* [Hallucinations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-hallucinations.html?context=cdpaas&locale=enhallucinations) -* [Personal information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-hallucinations.html?context=cdpaas&locale=enpersonal-info) -* [Hate speech, abuse, and profanity](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-hallucinations.html?context=cdpaas&locale=enhap) -* [Bias](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-hallucinations.html?context=cdpaas&locale=enbias) - - - -" -E59B59312D1EB3B2BA78D7E78993883BB3784C2B_1,E59B59312D1EB3B2BA78D7E78993883BB3784C2B," Hallucinations - -When a foundation model generates off-topic, repetitive, or incorrect content or fabricates details, that behavior is sometimes called hallucination. - -Off-topic hallucinations can happen because of pseudo-randomness in the decoding of the generated output. In the best cases, that randomness can result in wonderfully creative output. But randomness can also result in nonsense output that is not useful. - -The model might return hallucinations in the form of fabricated details when it is prompted to generate text, but is not given enough related text to draw upon. If you include correct details in the prompt, for example, the model is less likely to hallucinate and make up details. - -" -E59B59312D1EB3B2BA78D7E78993883BB3784C2B_2,E59B59312D1EB3B2BA78D7E78993883BB3784C2B," Techniques for avoiding hallucinations - -To avoid hallucinations, test one or more of these techniques: - - - -* Choose a model with pretraining and fine-tuning that matches your domain and the task you are doing. -* Provide context in your prompt. - -If you instruct a foundation model to generate text on a subject that is not common in its pretraining data and you don't add information about the subject to the prompt, the model is more likely to hallucinate. -* Specify conservative values for the Min tokens and Max tokens parameters and specify one or more stop sequences. - -When you specify a high value for the Min tokens parameter, you can force the model to generate a longer response than the model would naturally return for a prompt. The model is more likely to hallucinate as it adds words to the output to reach the required limit. -* For use cases that don't require much creativity in the generated output, use greedy decoding. If you prefer to use sampling decoding, be sure to specify conservative values for the temperature, top-p, and top-k parameters. -* To reduce repetitive text in the generated output, try increasing the repetition penalty parameter. -* If you see repetitive text in the generated output when you use greedy decoding, and if some creativity is acceptable for your use case, then try using sampling decoding instead. Be sure to set moderately low values for the temperature, top-p, and top-k parameters. -* In your prompt, instruct the model what to do when it has no confident or high-probability answer. - -For example, in a question-answering scenario, you can include the instruction: If the answer is not in the article, say “I don't know”. - - - -" -E59B59312D1EB3B2BA78D7E78993883BB3784C2B_3,E59B59312D1EB3B2BA78D7E78993883BB3784C2B," Personal information - -A foundation model's vocabulary is formed from words in its pretraining data. If pretraining data includes web pages that are scraped from the internet, the model's vocabulary might contain the following types of information: - - - -* Names of article authors -* Contact information from company websites -* Personal information from questions and comments that are posted in open community forums - - - -If you use a foundation model to generate text for part of an advertising email, the generated content might include contact information for another company! - -If you ask a foundation model to write a paper with citations, the model might include references that look legitimate but aren't. It might even attribute those made-up references to real authors from the correct field. A foundation model is likely to generate imitation citations, correct in form but not grounded in facts, because the models are good at stringing together words (including names) that have a high probability of appearing together. The fact that the model lends the output a touch of legitimacy, by including the names of real people as authors in citations, makes this form of hallucination compelling and believable. It also makes this form of hallucination dangerous. People can get into trouble if they believe that the citations are real. Not to mention the harm that can come to people who are listed as authors of works they did not write. - -" -E59B59312D1EB3B2BA78D7E78993883BB3784C2B_4,E59B59312D1EB3B2BA78D7E78993883BB3784C2B," Techniques for excluding personal information - -To exclude personal information, try these techniques: - - - -* In your prompt, instruct the model to refrain from mentioning names, contact details, or personal information. - -For example, when you prompt a model to generate an advertising email, instruct the model to include your company name and phone number. Also, instruct the model to “include no other company or personal information”. -* In your larger application, pipeline, or solution, post-process the content that is generated by the foundation model to find and remove personal information. - - - -" -E59B59312D1EB3B2BA78D7E78993883BB3784C2B_5,E59B59312D1EB3B2BA78D7E78993883BB3784C2B," Hate speech, abuse, and profanity - -As with personal information, when pretraining data includes hateful or abusive terms or profanity, a foundation model that is trained on that data has those problematic terms in its vocabulary. If inappropriate language is in the model's vocabulary, the foundation model might generate text that includes undesirable content. - -When you use foundation models to generate content for your business, you must do the following things: - - - -* Recognize that this kind of output is always possible. -* Take steps to reduce the likelihood of triggering the model to produce this kind of harmful output. -* Build human review and verification processes into your solutions. - - - -" -E59B59312D1EB3B2BA78D7E78993883BB3784C2B_6,E59B59312D1EB3B2BA78D7E78993883BB3784C2B," Techniques for reducing the risk of hate speech, abuse, and profanity - -To avoid hate speech, abuse, and profanity, test one or more of these techniques: - - - -* In the Prompt Lab, set the AI guardrails switch to On. When this feature is enabled, any sentence in the input prompt or generated output that contains harmful language is replaced with a message that says that potentially harmful text was removed. -* Do not include hate speech, abuse, or profanity in your prompt to prevent the model from responding in kind. -* In your prompt, instruct the model to use clean language. - -For example, depending on the tone you need for the output, instruct the model to use “formal”, “professional”, “PG”, or “friendly” language. -* In your larger application, pipeline, or solution, post-process the content that is generated by the foundation model to remove undesirable content. - - - -" -E59B59312D1EB3B2BA78D7E78993883BB3784C2B_7,E59B59312D1EB3B2BA78D7E78993883BB3784C2B," Reducing the risk of bias in model output - -During pretraining, a foundation model learns the statistical probability that certain words follow other words based on how those words appear in the training data. Any bias in the training data is trained into the model. - -For example, if the training data more frequently refers to doctors as men and nurses as women, that bias is likely to be reflected in the statistical relationships between those words in the model. As a result, the model is likely to generate output that more frequently refers to doctors as men and nurses as women. Sometimes, people believe that algorithms can be more fair and unbiased than humans because the algorithms are “just using math to decide”. But bias in training data is reflected in content that is generated by foundation models that are trained on that data. - -" -E59B59312D1EB3B2BA78D7E78993883BB3784C2B_8,E59B59312D1EB3B2BA78D7E78993883BB3784C2B," Techniques for reducing bias - -It is difficult to debias output that is generated by a foundation model that was pretrained on biased data. However, you might improve results by including content in your prompt to counter bias that might apply to your use case. - -For example, instead of instructing a model to “list heart attack symptoms”, you might instruct the model to “list heart attack symptoms, including symptoms common for men and symptoms common for women”. - -Parent topic:[Prompt tips](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-tips.html) -" -120CAE8361AE4E0B6FE4D6F0D32EEE9517F11190_0,120CAE8361AE4E0B6FE4D6F0D32EEE9517F11190," Choosing a foundation model in watsonx.ai - -To determine which models might work well for your project, consider model attributes, such as license, pretraining data, model size, and how the model was fine-tuned. After you have a short list of models that best fit your use case, systematically test the models to see which ones consistently return the results you want. - - - -Table 1. Considerations for choosing a foundation model in IBM watsonx.ai - - Model attribute Considerations - - Context length Sometimes called context window length, context window, or maximum sequence length, context length is the maximum allowed value for the number of tokens in the input prompt plus the number of tokens in the generated output. When you generate output with models in watsonx.ai, the number of tokens in the generated output is limited by the Max tokens parameter. For some models, the token length of model output for Lite plans is limited by a dynamic, model-specific, environment-driven upper limit. - Cost The cost of using foundation models is measured in resource units. The price of a resource unit is based on the rate of the billing class for the foundation model. - Fine-tuning After being pretrained, many foundation models are fine-tuned for specific tasks, such as classification, information extraction, summarization, responding to instructions, answering questions, or participating in a back-and-forth dialog chat. A model that was fine-tuned on tasks similar to your planned use typically perform better with zero-shot prompts than models that were not fine-tuned in a way that fits your use case. One way to improve results for a fine-tuned model is to structure your prompt in the same format as prompts in the data sets that were used to fine-tune that model. - Instruction-tuned Instruction-tuned means that the model was fine-tuned with prompts that include an instruction. When a model is instruction-tuned, it typically responds well to prompts that have an instruction even if those prompts don't have examples. -" -120CAE8361AE4E0B6FE4D6F0D32EEE9517F11190_1,120CAE8361AE4E0B6FE4D6F0D32EEE9517F11190," IP indemnity In addition to license terms, review the intellectual property indemnification policy for the model. Some foundation model providers require you to exempt them from liability for any IP infringement that might result from the use of their AI models. For information about contractual protections related to IBM watsonx.ai, see the [IBM watsonx.ai service description](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747). - License In general, each foundation model comes with a different license that limits how the model can be used. Review model licenses to make sure that you can use a model for your planned solution. - Model architecture The architecture of the model influences how the model behaves. A transformer-based model typically has one of the following architectures:
* Encoder-only: Understands input text at the sentence level by transforming input sequences into representational vectors called embeddings. Common tasks for encoder-only models include classification and entity extraction.
* Decoder-only: Generates output text word-by-word by inference from the input sequence. Common tasks for decoder-only models include generating text and answering questions.
* Encoder-decoder: Both understands input text and generates output text based on the input text. Common tasks for encoder-decoder models include translation and summarization. - Regional availability You can work with models that are available in the same IBM Cloud regional data center as your watsonx services. - Supported natural languages Many foundation models work well in English only. But some model creators include multiple languages in the pretraining data sets to fine-tune their model on tasks in different languages, and to test their model's performance in multiple languages. If you plan to build a solution for a global audience or a solution that does translation tasks, look for models that were created with multilingual support in mind. -" -120CAE8361AE4E0B6FE4D6F0D32EEE9517F11190_2,120CAE8361AE4E0B6FE4D6F0D32EEE9517F11190," Supported programming languages Not all foundation models work well for programming use cases. If you are planning to create a solution that summarizes, converts, generates, or otherwise processes code, review which programming languages were included in a model's pretraining data sets and fine-tuning activities to determine whether that model is a fit for your use case. - - - -" -120CAE8361AE4E0B6FE4D6F0D32EEE9517F11190_3,120CAE8361AE4E0B6FE4D6F0D32EEE9517F11190," Learn more - - - -* [Tokens and tokenization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html) -* [Model parameters for prompting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-model-parameters.html) -* [Prompt tips](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-tips.html) -* [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html) -* [Regional availability for foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/regional-datactr.htmldata-centers) - - - -Parent topic:[Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html) -" -42AE491240EF740E6A8C5CF32B817E606F554E49_0,42AE491240EF740E6A8C5CF32B817E606F554E49," Foundation model parameters: decoding and stopping criteria - -You can specify parameters to control how the model generates output in response to your prompt. This topic lists parameters that you can control in the Prompt Lab. - -" -42AE491240EF740E6A8C5CF32B817E606F554E49_1,42AE491240EF740E6A8C5CF32B817E606F554E49," Decoding - -Decoding is the process a model uses to choose the tokens in the generated output. - -Greedy decoding selects the token with the highest probability at each step of the decoding process. Greedy decoding produces output that closely matches the most common language in the model's pretraining data and in your prompt text, which is desirable in less creative or fact-based use cases. A weakness of greedy decoding is that it can cause repetitive loops in the generated output. - -Sampling decoding is more variable, more random than greedy decoding. Variability and randomness is desirable in creative use cases. However, with greater variability comes the risk of nonsensical output. Sampling decoding selects tokens from a probability distribution at each step: - - - -* Temperature sampling refers to selecting a high- or low-probability next token. -* Top-k sampling refers to selecting the next token randomly from a specified number, k, of tokens with the highest probabilities. -* Top-p sampling refers to selecting the next token randomly from the smallest set of tokens for which the cumulative probability exceeds a specified value, p. (Top-p sampling is also called nucleus sampling.) - - - -You can specify values for both Top K and Top P. When both parameters are used, Top K is applied first. When Top P is computed, any tokens below the cutoff set by Top K are considered to have a probability of zero. - - - -Table 1. Supported values, defaults, and usage notes for sampling decoding - - Parameter Supported values Default Use - - Temperature Floating-point number in the range 0.0 (same as greedy decoding) to 2.0 (maximum creativity) 0.7 Higher values lead to greater variability - Top K Integer in the range 1 to 100 50 Higher values lead to greater variability - Top P Floating-point number in the range 0.0 to 1.0 1.0 Higher values lead to greater variability - - - -" -42AE491240EF740E6A8C5CF32B817E606F554E49_2,42AE491240EF740E6A8C5CF32B817E606F554E49," Random seed - -When you submit the same prompt to a model multiple times with sampling decoding, you'll usually get back different generated text each time. This variability is the result of intentional pseudo-randomness built into the decoding process. Random seed refers to the number used to generate that pseudo-random behavior. - - - -* Supported values: Integer in the range 1 to 4 294 967 295 -* Default: Generated based on the current server system time -* Use: To produce repeatable results, set the same random seed value every time. - - - -" -42AE491240EF740E6A8C5CF32B817E606F554E49_3,42AE491240EF740E6A8C5CF32B817E606F554E49," Repetition penalty - -If you notice the result generated for your chosen prompt, model, and parameters consistently contains repetitive text, you can try adding a repetition penalty. - - - -* Supported values: Floating-point number in the range 1.0 (no penalty) to 2.0 (maximum penalty) -* Default: 1.0 -* Use: The higher the penalty, the less likely it is that the result will include repeated text. - - - -" -42AE491240EF740E6A8C5CF32B817E606F554E49_4,42AE491240EF740E6A8C5CF32B817E606F554E49," Stopping criteria - -You can affect the length of the output generated by the model in two ways: specifying stop sequences and setting Min tokens and Max tokens. Text generation stops after the model considers the output to be complete, a stop sequence is generated, or the maximum token limit is reached. - -" -42AE491240EF740E6A8C5CF32B817E606F554E49_5,42AE491240EF740E6A8C5CF32B817E606F554E49," Stop sequences - -A stop sequence is a string of one or more characters. If you specify stop sequences, the model will automatically stop generating output after one of the stop sequences that you specify appears in the generated output. For example, one way to cause a model to stop generating output after just one sentence is to specify a period as a stop sequence. That way, after the model generates the first sentence and ends it with a period, output generation stops. Choosing effective stop sequences depends on your use case and the nature of the generated output you expect. - -Supported values: 0 to 6 strings, each no longer than 40 tokens - -Default: No stop sequence - -" -42AE491240EF740E6A8C5CF32B817E606F554E49_6,42AE491240EF740E6A8C5CF32B817E606F554E49,"Use: - - - -* Stop sequences are ignored until after the number of tokens that are specified in the Min tokens parameter are generated. -* If your prompt includes examples of input-output pairs, ensure the sample output in the examples ends with one of the stop sequences. - - - -" -42AE491240EF740E6A8C5CF32B817E606F554E49_7,42AE491240EF740E6A8C5CF32B817E606F554E49," Minimum and maximum new tokens - -If you're finding the output from the model is too short or too long, try adjusting the parameters that control the number of generated tokens: - - - -* The Min tokens parameter controls the minimum number of tokens in the generated output -* The Max tokens parameter controls the maximum number of tokens in the generated output - - - -The maximum number of tokens that are allowed in the output differs by model. For more information, see the Maximum tokens information in [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html). - -" -42AE491240EF740E6A8C5CF32B817E606F554E49_8,42AE491240EF740E6A8C5CF32B817E606F554E49,"Defaults: - - - -* Min tokens: 0 -* Max tokens: 20 - - - -" -42AE491240EF740E6A8C5CF32B817E606F554E49_9,42AE491240EF740E6A8C5CF32B817E606F554E49,"Use: - - - -* Min tokens must be less than or equal to Max tokens. -* Because the cost of using foundation models in IBM watsonx.ai is based on use, which is partly related to the number of tokens that are generated, specifying the lowest value for Max tokens that works for your use case is a cost-saving strategy. -* For Lite plans, output stops being generated after a dynamic, model-specific, environment-driven upper limit is reached, even if the value specified with the Max tokens parameter is not reached. To determine the upper limit, see the Tokens limits section for the model in [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html) or call the [get_details](https://ibm.github.io/watson-machine-learning-sdk/foundation_models.htmlibm_watson_machine_learning.foundation_models.Model.get_details) function of the foundation models Python library. - - - -Parent topic:[Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) -" -B2593108FA446C4B4B0EF5ADC2CD5D9585B0B63C_0,B2593108FA446C4B4B0EF5ADC2CD5D9585B0B63C," Foundation models built by IBM - -In IBM watsonx.ai, you can use IBM foundation models that are built with integrity and designed for business. - -The Granite family of foundation models includes decoder-only models that can efficiently predict and generate language in English. - -The models were built with trusted data that has the following characteristics: - - - -* Sourced from quality data sets in domains such as finance (SEC Filings), law (Free Law), technology (Stack Exchange), science (arXiv, DeepMind Mathematics), literature (Project Gutenberg (PG-19)), and more. -* Compliant with rigorous IBM data clearance and governance standards. -* Scrubbed of hate, abuse, and profanity, data duplication, and blocklisted URLs, among other things. - - - -IBM is committed to building AI that is open, trusted, targeted, and empowering. For more information about contractual protections related to the IBM Granite foundation models, see the [IBM watsonx.ai service description](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747) and [model license](https://www.ibm.com/support/customer/csol/terms/?id=i126-6883). - -The following Granite models are available in watsonx.ai today: - -granite-13b-chat-v2 : General use model that is optimized for dialogue use cases. This version of the model is able to generate longer, higher-quality responses with a professional tone. The model can recognize mentions of people and can detect tone and sentiment. - -granite-13b-chat-v1 : General use model that is optimized for dialogue use cases. Useful for virtual agent and chat applications that engage in conversation with users. - -granite-13b-instruct-v2 : General use model. This version of the model is optimized for classification, extraction, and summarization tasks. The model can recognize mentions of people and can summarize longer inputs. - -" -B2593108FA446C4B4B0EF5ADC2CD5D9585B0B63C_1,B2593108FA446C4B4B0EF5ADC2CD5D9585B0B63C,"granite-13b-instruct-v1 : General use model. The model was tuned on relevant business tasks, such as detecting sentiment from earnings calls transcripts, extracting credit risk assessments, summarizing financial long-form text, and answering financial or insurance-related questions. - -To learn more about the models, read the following resources: - - - -* [Model information](https://www.ibm.com/blog/watsonx-tailored-generative-ai/) -* [Research paper](https://www.ibm.com/downloads/cas/X9W4O6BM) -* [granite-13b-instruct-v2 model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-instruct-v2?context=wx) -* [granite-13b-instruct-v1 model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-instruct-v1?context=wx) -* [granite-13b-chat-v2 model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-chat-v2?context=wx) -* [granite-13b-chat-v1 model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-chat-v1?context=wx) - - - -To get started with the models, try these samples: - - - -* [Prompt Lab sample: Extract details from a complaint](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample2a) -* [Prompt Lab sample: Generate a numbered list on a particular theme](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample3c) -" -B2593108FA446C4B4B0EF5ADC2CD5D9585B0B63C_2,B2593108FA446C4B4B0EF5ADC2CD5D9585B0B63C,"* [prompt Lab sample: Answer a question based on a document](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample4c) -* [Prompt Lab sample: Answer general knowledge questions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample4d) -* [Prompt Lab sample: Converse in a dialogue](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample7a) - - - - - -* [Sample Python notebook: Use watsonx and a Granite model to analyze car rental customer satisfaction from text](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/61c1e967-8d10-44bb-a846-cc1f27e9e69a?context=wx) - - - -Parent topic:[Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html) -" -5B37710FE7BBD6EFB842FEB7B49B036302E18F81_0,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," Supported foundation models available with watsonx.ai - -A collection of open source and IBM foundation models are deployed in IBM watsonx.ai. - -The following models are available in watsonx.ai: - - - -* flan-t5-xl-3b -* flan-t5-xxl-11b -* flan-ul2-20b -* gpt-neox-20b -* granite-13b-chat-v2 -* granite-13b-chat-v1 -* granite-13b-instruct-v2 -* granite-13b-instruct-v1 -* llama-2-13b-chat -* llama-2-70b-chat -* mpt-7b-instruct2 -* mt0-xxl-13b -* starcoder-15.5b - - - -You can prompt these models in the Prompt Lab or programmatically by using the Python library. - -" -5B37710FE7BBD6EFB842FEB7B49B036302E18F81_1,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," Summary of models - -To understand how the model provider, instruction tuning, token limits, and other factors can affect which model you choose, see [Choosing a model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-model-choose.html). - -The following table lists the supported foundation models that IBM provides. - - - -Table 1. IBM foundation models in watsonx.ai - - Model name Provider Instruction-tuned Billing class Maximum tokens
Context (input + output) More information - - [granite-13b-chat-v2](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=engranite-13b-chat) IBM Yes Class 2 8192 * [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-chat-v2?context=wx)
* [Website](https://www.ibm.com/blog/watsonx-tailored-generative-ai/)
* [Research paper](https://www.ibm.com/downloads/cas/X9W4O6BM) - [granite-13b-chat-v1](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=engranite-13b-chat-v1) IBM Yes Class 2 8192 * [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-chat-v1?context=wx)
* [Website](https://www.ibm.com/blog/watsonx-tailored-generative-ai/)
* [Research paper](https://www.ibm.com/downloads/cas/X9W4O6BM) -" -5B37710FE7BBD6EFB842FEB7B49B036302E18F81_2,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," [granite-13b-instruct-v2](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=engranite-13b-instruct) IBM Yes Class 2 8192 * [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-instruct-v2?context=wx)
* [Website](https://www.ibm.com/blog/watsonx-tailored-generative-ai/)
* [Research paper](https://www.ibm.com/downloads/cas/X9W4O6BM) - [granite-13b-instruct-v1](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=engranite-13b-instruct-v1) IBM Yes Class 2 8192 * [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-instruct-v1?context=wx)
* [Website](https://www.ibm.com/blog/watsonx-tailored-generative-ai/)
* [Research paper](https://www.ibm.com/downloads/cas/X9W4O6BM) - - - -The following table lists the supported foundation models that third parties provide through Hugging Face. - - - -Table 2. Supported third party foundation models in watsonx.ai - - Model name Provider Instruction-tuned Billing class Maximum tokens
Context (input + output) More information - -" -5B37710FE7BBD6EFB842FEB7B49B036302E18F81_3,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," [flan-t5-xl-3b](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=enflan-t5-xl-3b) Google Yes Class 1 4096 * [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/google/flan-t5-xl?context=wx)
* [Research paper](https://arxiv.org/abs/2210.11416) - [flan-t5-xxl-11b](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=enflan-t5-xxl-11b) Google Yes Class 2 4096 * [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/google/flan-t5-xxl?context=wx)
* [Research paper](https://arxiv.org/abs/2210.11416) - [flan-ul2-20b](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=enflan-ul2-20b) Google Yes Class 3 4096 * [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/google/flan-ul2?context=wx)
* [UL2 research paper](https://arxiv.org/abs/2205.05131v1)
* [Flan research paper](https://arxiv.org/abs/2210.11416) -" -5B37710FE7BBD6EFB842FEB7B49B036302E18F81_4,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," [gpt-neox-20b](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=engpt-neox-20b) EleutherAI No Class 3 8192 * [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/eleutherai/gpt-neox-20b?context=wx)
* [Research paper](https://arxiv.org/abs/2204.06745) - [llama-2-13b-chat](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=enllama-2) Meta Yes Class 1 4096 * [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/meta-llama/llama-2-13b-chat?context=wx)
* [Research paper](https://arxiv.org/abs/2307.09288) - [llama-2-70b-chat](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=enllama-2) Meta Yes Class 2 4096 * [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/meta-llama/llama-2-70b-chat?context=wx)
* [Research paper](https://arxiv.org/abs/2307.09288) -" -5B37710FE7BBD6EFB842FEB7B49B036302E18F81_5,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," [mpt-7b-instruct2](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=enmpt-7b-instruct2) Mosaic ML Yes Class 1 2048 * [Model card](https://huggingface.co/ibm/mpt-7b-instruct2)
* [Website](https://www.mosaicml.com/blog/mpt-7b) - [mt0-xxl-13b](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=enmt0-xxl-13b) BigScience Yes Class 2 4096 * [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/bigscience/mt0-xxl?context=wx)
* [Research paper](https://arxiv.org/abs/2211.01786) - [starcoder-15.5b](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=enstarcoder-15.5b) BigCode No Class 2 8192 * [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/bigcode/starcoder?context=wx)
* [Research paper](https://arxiv.org/abs/2305.06161) - - - - - -* For a list of which models are provided in each regional data center, see [Regional availability of foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/regional-datactr.htmldata-centers). -" -5B37710FE7BBD6EFB842FEB7B49B036302E18F81_6,5B37710FE7BBD6EFB842FEB7B49B036302E18F81,"* For information about the billing classes and rate limiting, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.htmlru-metering). - - - -" -5B37710FE7BBD6EFB842FEB7B49B036302E18F81_7,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," Foundation model details - -The available foundation models support a range of use cases for both natural languages and programming languages. To see the types of tasks that these models can do, review and try the [sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html). - -" -5B37710FE7BBD6EFB842FEB7B49B036302E18F81_8,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," flan-t5-xl-3b - -The flan-t5-xl-3b model is provided by Google on Hugging Face. This model is based on the pretrained text-to-text transfer transformer (T5) model and uses instruction fine-tuning methods to achieve better zero- and few-shot performance. The model is also fine-tuned with chain-of-thought data to improve its ability to perform reasoning tasks. - -Note: This foundation model can be tuned by using the Tuning Studio. - -Usage : General use with zero- or few-shot prompts. - -Cost : Class 1. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html). - -Try it out : [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html) - -Size : 3 billion parameters - -Token limits : Context window length (input + output): 4096 - -: Note: Lite plan output is limited to 700 - -Supported natural languages : English, German, French - -Instruction tuning information : The model was fine-tuned on tasks that involve multiple-step reasoning from chain-of-thought data in addition to traditional natural language processing tasks. - -Details about the training data sets used are published. - -Model architecture : Encoder-decoder - -License : [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0.txt) - -Learn more : [Research paper](https://arxiv.org/abs/2210.11416) : [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/google/flan-t5-xl?context=wx) : [Sample notebook: Tune a model to classify CFPB documents in watsonx](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/bf57e8896f3e50c638b5a378780f7502) - -" -5B37710FE7BBD6EFB842FEB7B49B036302E18F81_9,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," flan-t5-xxl-11b - -The flan-t5-xxl-11b model is provided by Google on Hugging Face. This model is based on the pretrained text-to-text transfer transformer (T5) model and uses instruction fine-tuning methods to achieve better zero- and few-shot performance. The model is also fine-tuned with chain-of-thought data to improve its ability to perform reasoning tasks. - -Usage : General use with zero- or few-shot prompts. - -Cost : Class 2. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html). - -Try it out : [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html) : [Sample notebook: Use watsonx and Google flan-t5-xxl to generate advertising copy](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/73243d67b49a6e05f4cdf351b4b35e21?context=wx) : [Sample notebook: Use watsonx and LangChain to make a series of calls to a language model](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/c3dbf23a-9a56-4c4b-8ce5-5707828fc981?context=wx) - -Size : 11 billion parameters - -Token limits : Context window length (input + output): 4096 - -: Note: Lite plan output is limited to 700 - -Supported natural languages : English, German, French - -Instruction tuning information : The model was fine-tuned on tasks that involve multiple-step reasoning from chain-of-thought data in addition to traditional natural language processing tasks. Details about the training data sets used are published. - -Model architecture : Encoder-decoder - -License : [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0.txt) - -" -5B37710FE7BBD6EFB842FEB7B49B036302E18F81_10,5B37710FE7BBD6EFB842FEB7B49B036302E18F81,"Learn more : [Research paper](https://arxiv.org/abs/2210.11416) : [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/google/flan-t5-xxl?context=wx) - -" -5B37710FE7BBD6EFB842FEB7B49B036302E18F81_11,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," flan-ul2-20b - -The flan-ul2-20b model is provided by Google on Hugging Face. This model was trained by using the Unifying Language Learning Paradigms (UL2). The model is optimized for language generation, language understanding, text classification, question answering, common sense reasoning, long text reasoning, structured-knowledge grounding, and information retrieval, in-context learning, zero-shot prompting, and one-shot prompting. - -Usage : General use with zero- or few-shot prompts. - -Cost : Class 3. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html). - -" -5B37710FE7BBD6EFB842FEB7B49B036302E18F81_12,5B37710FE7BBD6EFB842FEB7B49B036302E18F81,"Try it out : [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html) : [Sample notebook: Use watsonx to summarize cybersecurity documents](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/1cb62d6a5847b8ed5cdb6531a08e9104?context=wx) : [Sample notebook: Use watsonx and LangChain to answer questions by using retrieval-augmented generation (RAG)](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/d3a5f957-a93b-46cd-82c1-c8d37d4f62c6?context=wx&audience=wdp) : [Sample notebook: Use watsonx, Elasticsearch, and LangChain to answer questions (RAG)](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/ebeb9fc0-9844-4838-aff8-1fa1997d0c13?context=wx&audience=wdp) : [Sample notebook: Use watsonx, and Elasticsearch Python SDK to answer questions (RAG)](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/bdbc8ad4-9c1f-460f-99ee-5c3a1f374fa7?context=wx&audience=wdp) - -Size : 20 billion parameters - -Token limits : Context window length (input + output): 4096 - -: Note: Lite plan output is limited to 700 - -Supported natural languages : English - -Instruction tuning information : The flan-ul2-20b model is pretrained on the colossal, cleaned version of Common Crawl's web crawl corpus. The model is fine-tuned with multiple pretraining objectives to optimize it for various natural language processing tasks. Details about the training data sets used are published. - -Model architecture : Encoder-decoder - -" -5B37710FE7BBD6EFB842FEB7B49B036302E18F81_13,5B37710FE7BBD6EFB842FEB7B49B036302E18F81,"License : [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0.txt) - -Learn more : [Unifying Language Learning (UL2) research paper](https://arxiv.org/abs/2205.05131v1) : [Fine-tuned Language Model (Flan) research paper](https://arxiv.org/abs/2210.11416) - -: [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/google/flan-ul2?context=wx) - -" -5B37710FE7BBD6EFB842FEB7B49B036302E18F81_14,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," gpt-neox-20b - -The gpt-neox-20b model is provided by EleutherAI on Hugging Face. This model is an autoregressive language model that is trained on diverse English-language texts to support general-purpose use cases. GPT-NeoX-20B has not been fine-tuned for downstream tasks. - -Usage : Works best with few-shot prompts. Accepts special characters, which can be used for generating structured output. : The data set used for training contains profanity and offensive text. Be sure to curate any output from the model before using it in an application. - -Cost : Class 3. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html). - -Try it out : [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html) - -Size : 20 billion parameters - -Token limits : Context window length (input + output): 8192 - -: Note: Lite plan output is limited to 700 - -Supported natural languages : English - -Data used during training : The gpt-neox-20b model was trained on the Pile. For more information about the Pile, see [The Pile: An 800GB Dataset of Diverse Text for Language Modeling](https://arxiv.org/abs/2101.00027). The Pile was not deduplicated before being used for training. - -Model architecture : Decoder - -License : [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0.txt) - -Learn more : [Research paper](https://arxiv.org/abs/2204.06745) - -: [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/eleutherai/gpt-neox-20b?context=wx) - -" -5B37710FE7BBD6EFB842FEB7B49B036302E18F81_15,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," granite-13b-chat-v2 - -The granite-13b-chat-v2 model is provided by IBM. This model is optimized for dialogue use cases and works well with virtual agent and chat applications. - -Usage : Generates dialogue output like a chatbot. Uses a model-specific prompt format. Includes a keyword in its output that can be used as a stop sequence to produce succinct answers. - -Cost : Class 2. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html). - -Try it out : [Sample prompt](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample7a) - -Size : 13 billion parameters - -Token limits : Context window length (input + output): 8192 - -Supported natural languages : English - -Instruction tuning information : The Granite family of models is trained on enterprise-relevant data sets from five domains: internet, academic, code, legal, and finance. Data used to train the models first undergoes IBM data governance reviews and is filtered of text that is flagged for hate, abuse, or profanity by the IBM-developed HAP filter. IBM shares information about the training methods and data sets used. - -Model architecture : Decoder - -License : [Terms of use](https://www.ibm.com/support/customer/csol/terms/?id=i126-6883) : For more information about contractual protections related to IBM watsonx.ai, see the [IBM watsonx.ai service description](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747). - -Learn more : [Model information](https://www.ibm.com/blog/watsonx-tailored-generative-ai/) : [Research paper](https://www.ibm.com/downloads/cas/X9W4O6BM) - -" -5B37710FE7BBD6EFB842FEB7B49B036302E18F81_16,5B37710FE7BBD6EFB842FEB7B49B036302E18F81,": [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-chat-v2?context=wx) - -" -5B37710FE7BBD6EFB842FEB7B49B036302E18F81_17,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," granite-13b-chat-v1 - -The granite-13b-chat-v1 model is provided by IBM. This model is optimized for dialogue use cases and works well with virtual agent and chat applications. - -Usage : Generates dialogue output like a chatbot. Uses a model-specific prompt format. Includes a keyword in its output that can be used as a stop sequence to produce succinct answers. - -Cost : Class 2. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html). - -Try it out : [Sample prompt](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample7a) - -Size : 13 billion parameters - -Token limits : Context window length (input + output): 8192 - -Supported natural languages : English - -Instruction tuning information : The Granite family of models is trained on enterprise-relevant data sets from five domains: internet, academic, code, legal, and finance. Data used to train the models first undergoes IBM data governance reviews and is filtered of text that is flagged for hate, abuse, or profanity by the IBM-developed HAP filter. IBM shares information about the training methods and data sets used. - -Model architecture : Decoder - -License : [Terms of use](https://www.ibm.com/support/customer/csol/terms/?id=i126-6883) : For more information about contractual protections related to IBM watsonx.ai, see the [IBM watsonx.ai service description](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747). - -Learn more : [Model information](https://www.ibm.com/blog/watsonx-tailored-generative-ai/) : [Research paper](https://www.ibm.com/downloads/cas/X9W4O6BM) - -" -5B37710FE7BBD6EFB842FEB7B49B036302E18F81_18,5B37710FE7BBD6EFB842FEB7B49B036302E18F81,": [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-chat-v1?context=wx) - -" -5B37710FE7BBD6EFB842FEB7B49B036302E18F81_19,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," granite-13b-instruct-v2 - -The granite-13b-instruct-v2 model is provided by IBM. This model was trained with high-quality finance data, and is a top-performing model on finance tasks. Financial tasks evaluated include: providing sentiment scores for stock and earnings call transcripts, classifying news headlines, extracting credit risk assessments, summarizing financial long-form text, and answering financial or insurance-related questions. - -Usage : Supports extraction, summarization, and classification tasks. Generates useful output for finance-related tasks. Uses a model-specific prompt format. Accepts special characters, which can be used for generating structured output. - -Cost : Class 2. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html). - -Try it out : [Sample 3b: Generate a numbered list on a particular theme](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample3b) : [Sample 4c: Answer a question based on a document](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample4c) : [Sample 4d: Answer general knowledge questions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample4d) - -: [Sample notebook: Use watsonx and ibm/granite-13b-instruct to analyze car rental customer satisfaction from text](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/61c1e967-8d10-44bb-a846-cc1f27e9e69a?context=wx) - -Size : 13 billion parameters - -Token limits : Context window length (input + output): 8192 - -Supported natural languages : English - -" -5B37710FE7BBD6EFB842FEB7B49B036302E18F81_20,5B37710FE7BBD6EFB842FEB7B49B036302E18F81,"Instruction tuning information : The Granite family of models is trained on enterprise-relevant data sets from five domains: internet, academic, code, legal, and finance. Data used to train the models first undergoes IBM data governance reviews and is filtered of text that is flagged for hate, abuse, or profanity by the IBM-developed HAP filter. IBM shares information about the training methods and data sets used. - -Model architecture : Decoder - -License : [Terms of use](https://www.ibm.com/support/customer/csol/terms/?id=i126-6883) : For more information about contractual protections related to IBM watsonx.ai, see the [IBM watsonx.ai service description](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747). - -Learn more : [Model information](https://www.ibm.com/blog/watsonx-tailored-generative-ai/) : [Research paper](https://www.ibm.com/downloads/cas/X9W4O6BM) - -: [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-instruct-v2?context=wx) - -" -5B37710FE7BBD6EFB842FEB7B49B036302E18F81_21,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," granite-13b-instruct-v1 - -The granite-13b-instruct-v1 model is provided by IBM. This model was trained with high-quality finance data, and is a top-performing model on finance tasks. Financial tasks evaluated include: providing sentiment scores for stock and earnings call transcripts, classifying news headlines, extracting credit risk assessments, summarizing financial long-form text, and answering financial or insurance-related questions. - -Usage : Supports extraction, summarization, and classification tasks. Generates useful output for finance-related tasks. Uses a model-specific prompt format. Accepts special characters, which can be used for generating structured output. - -Cost : Class 2. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html). - -Try it out : [Sample 3b: Generate a numbered list on a particular theme](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample3b) : [Sample 4d: Answer general knowledge questions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample4d) - -: [Sample notebook: Use watsonx and ibm/granite-13b-instruct to analyze car rental customer satisfaction from text](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/61c1e967-8d10-44bb-a846-cc1f27e9e69a?context=wx) - -Size : 13 billion parameters - -Token limits : Context window length (input + output): 8192 - -Supported natural languages : English - -" -5B37710FE7BBD6EFB842FEB7B49B036302E18F81_22,5B37710FE7BBD6EFB842FEB7B49B036302E18F81,"Instruction tuning information : The Granite family of models is trained on enterprise-relevant data sets from five domains: internet, academic, code, legal, and finance. Data used to train the models first undergoes IBM data governance reviews and is filtered of text that is flagged for hate, abuse, or profanity by the IBM-developed HAP filter. IBM shares information about the training methods and data sets used. - -Model architecture : Decoder - -License : [Terms of use](https://www.ibm.com/support/customer/csol/terms/?id=i126-6883) : For more information about contractual protections related to IBM watsonx.ai, see the [IBM watsonx.ai service description](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747). - -Learn more : [Model information](https://www.ibm.com/blog/watsonx-tailored-generative-ai/) : [Research paper](https://www.ibm.com/downloads/cas/X9W4O6BM) - -: [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-instruct-v1?context=wx) - -" -5B37710FE7BBD6EFB842FEB7B49B036302E18F81_23,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," Llama-2 Chat - -The Llama-2 Chat model is provided by Meta on Hugging Face. The fine-tuned model is useful for chat generation. The model is pretrained with publicly available online data and fine-tuned using reinforcement learning from human feedback. - -You can choose to use the 13 billion parameter or 70 billion parameter version of the model. - -Usage : Generates dialogue output like a chatbot. Uses a model-specific prompt format. - -Cost : 13b: Class 1 : 70b: Class 2 : For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html). - -Try it out : [Sample prompt](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample7b) : [Sample notebook: Use watsonx and Meta llama-2-70b-chat to answer questions about an article](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/b59922d8-678f-44e4-b5ef-18138890b444?context=wx) : [Sample notebook: Use watsonx and Meta llama-2-70b-chat to answer questions about an article](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/b59922d8-678f-44e4-b5ef-18138890b444?context=wx) - -Available sizes : 13 billion parameters : 70 billion parameters - -Token limits : Context window length (input + output): 4096 - -: Lite plan output is limited as follows: : - 70b version: 900 : - 13b version: 2048 - -Supported natural languages : English - -Instruction tuning information : Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction data sets and more than one million new examples that were annotated by humans. - -" -5B37710FE7BBD6EFB842FEB7B49B036302E18F81_24,5B37710FE7BBD6EFB842FEB7B49B036302E18F81,"Model architecture : Llama 2 is an auto-regressive decoder-only language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning and reinforcement learning with human feedback. - -License : [License](https://ai.meta.com/llama/license/) - -Learn more : [Research paper](https://arxiv.org/abs/2307.09288) - -: [13b Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/meta-llama/llama-2-13b-chat?context=wx) : [70b Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/meta-llama/llama-2-70b-chat?context=wx) - -" -5B37710FE7BBD6EFB842FEB7B49B036302E18F81_25,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," mpt-7b-instruct2 - -The mpt-7b-instruct2 model is provided by MosaicML on Hugging Face. This model is a fine-tuned version of the base MosaicML Pretrained Transformer (MPT) model that was trained to handle long inputs. This version of the model was optimized by IBM for following short-form instructions. - -Usage : General use with zero- or few-shot prompts. - -Cost : Class 1. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html). - -Try it out : [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html) - -Size : 7 billion parameters - -Token limits : Context window length (input + output): 2048 - -: Note: Lite plan output is limited to 500 - -Supported natural languages : English - -Instruction tuning information : The dataset that was used to train this model is a combination of the Dolly dataset from Databrick and a filtered subset of the Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback training data from Anthropic. - -During filtering, parts of dialog exchanges that contain instruction-following steps were extracted to be used as samples. - -Model architecture : Encoder-decoder - -License : [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0.txt) - -Learn more : [Model card](https://huggingface.co/ibm/mpt-7b-instruct2) : [Blog](https://www.mosaicml.com/blog/mpt-7b) - -" -5B37710FE7BBD6EFB842FEB7B49B036302E18F81_26,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," mt0-xxl-13b - -The mt0-xxl-13b model is provided by BigScience on Hugging Face. The model is optimized to support language generation and translation tasks with English, languages other than English, and multilingual prompts. - -Usage : General use with zero- or few-shot prompts. For translation tasks, include a period to indicate the end of the text you want translated or the model might continue the sentence rather than translate it. - -Cost : Class 2. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html). - -Try it out : [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html) - -: [Sample notebook: Simple introduction to retrieval-augmented generation with watsonx.ai](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/fed7cf6b-1c48-4d71-8c04-0fce0e000d43?context=wx) - -Size : 13 billion parameters - -Token limits : Context window length (input + output): 4096 - -: Note: Lite plan output is limited to 700 - -Supported natural languages : The model is pretrained on multilingual data in 108 languages and fine-tuned with multilingual data in 46 languages to perform multilingual tasks. - -Instruction tuning information : BigScience publishes details about its code and data sets. - -Model architecture : Encoder-decoder - -License : [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0.txt) - -Learn more : [Research paper](https://arxiv.org/abs/2211.01786) - -: [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/bigscience/mt0-xxl?context=wx) - -" -5B37710FE7BBD6EFB842FEB7B49B036302E18F81_27,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," starcoder-15.5b - -The starcoder-15.5b model is provided by BigCode on Hugging Face. This model can generate code and convert code from one programming language to another. The model is meant to be used by developers to boost their productivity. - -Usage : Code generation and code conversion : Note: The model output might include code that is taken directly from its training data, which can be licensed code that requires attribution. - -Cost : Class 2. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html). - -Try it out : [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlcode) : [Sample notebook: Use watsonx and BigCode starcoder-15.5b to generate code based on instruction](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/b5792ad4-555b-4b68-8b6f-ce368093fac6?context=wx) - -Size : 15.5 billion parameters - -Token limits : Context window length (input + output): 8192 - -Supported programming languages : Over 80 programming languages, with an emphasis on Python. - -Data used during training : This model was trained on over 80 programming languages from GitHub. A filter was applied to exclude from the training data any licensed code or code that is marked with opt-out requests. Nevertheless, the model's output might include code from its training data that requires attribution. The model was not instruction-tuned. Submitting input with only an instruction and no examples might result in poor model output. - -Model architecture : Decoder - -License : [License](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) - -Learn more : [Research paper](https://arxiv.org/abs/2305.06161) - -" -5B37710FE7BBD6EFB842FEB7B49B036302E18F81_28,5B37710FE7BBD6EFB842FEB7B49B036302E18F81,": [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/bigcode/starcoder?context=wx) - -Parent topic:[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html) -" -58C6D0A1C6DAD01E3F0F1748DC472C3DDCC07E43_0,58C6D0A1C6DAD01E3F0F1748DC472C3DDCC07E43," Foundation models - -Build generative AI solutions with foundation models in IBM watsonx.ai. - -Foundation models are large AI models that have billions of parameters and are trained on terabytes of data. Foundation models can do various tasks, including text, code, or image generation, classification, conversation, and more. Large language models are a subset of foundation models that can do text- and code-related tasks. Watsonx.ai has a range of deployed large language models for you to try. For details, see [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html). - -" -58C6D0A1C6DAD01E3F0F1748DC472C3DDCC07E43_1,58C6D0A1C6DAD01E3F0F1748DC472C3DDCC07E43," Foundation model architecture - -Foundation models represent a fundamentally different model architecture and purpose for AI systems. The following diagram illustrates the difference between traditional AI models and foundation models. - -![Comparison of traditional AI models to foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fm-overview-diagram.png) - -As shown in the diagram, traditional AI models specialize in specific tasks. Most traditional AI models are built by using machine learning, which requires a large, structured, well-labeled data set that encompasses a specific task that you want to tackle. Often these data sets must be sourced, curated, and labeled by hand, a job that requires people with domain knowledge and takes time. After it is trained, a traditional AI model can do a single task well. The traditional AI model uses what it learns from patterns in the training data to predict outcomes in unknown data. You can create machine learning models for your specific use cases with tools like AutoAI and Jupyter notebooks, and then deploy them. - -In contrast, foundation models are trained on large, diverse, unlabeled data sets and can be used for many different tasks. Foundation models were first used to generate text by calculating the most-probable next word in natural language translation tasks. However, model providers are learning that, when prompted with the right input, foundation models can do various other tasks well. Instead of creating your own foundation models, you use existing deployed models and engineer prompts to generate the results that you need. - -" -58C6D0A1C6DAD01E3F0F1748DC472C3DDCC07E43_2,58C6D0A1C6DAD01E3F0F1748DC472C3DDCC07E43," Methods of working with foundation models - -The possibilities and applications of foundation models are just starting to be discovered. Explore and validate use cases with foundation models in watsonx.ai to automate, simplify, and speed up existing processes or provide value in a new way. - -You can interact with foundation models in the following ways: - - - -* Engineer prompts and inference deployed foundation models directly by using the Prompt Lab -* Inference deployed foundation models programmatically by using the Python library -* Tune foundation models to return output in a certain style or format by using the Tuning Studio - - - -" -58C6D0A1C6DAD01E3F0F1748DC472C3DDCC07E43_3,58C6D0A1C6DAD01E3F0F1748DC472C3DDCC07E43," Learn more - - - -* [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) -* [Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html) -* [Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html) -* [Security and privacy](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html) -* [Model terms of use](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-disclaimer.html) -* [Tokens](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html) -* [Retrieval-augmented generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-rag.html) -* [AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) - - - -Parent topic:[Analyzing data and working with models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html) -" -78A8C07B83DF1B01276353D098E84F12304636E2_0,78A8C07B83DF1B01276353D098E84F12304636E2," Prompt Lab - -In the Prompt Lab in IBM watsonx.ai, you can experiment with prompting different foundation models, explore sample prompts, and save and share your best prompts. - -You use the Prompt Lab to engineer effective prompts that you submit to deployed foundation models for inferencing. You do not use the Prompt Lab to create new foundation models. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -78A8C07B83DF1B01276353D098E84F12304636E2_1,78A8C07B83DF1B01276353D098E84F12304636E2," Requirements - -If you signed up for watsonx.ai and you have a sandbox project, all requirements are met and you're ready to use the Prompt Lab. - -You must meet these requirements to use the Prompt Lab: - - - -* You must have a project. -* You must have the Editor or Admin role in the project. -* The project must have an associated Watson Machine Learning service instance. Otherwise, you are prompted to associate the service when you open the Prompt Lab. - - - -" -78A8C07B83DF1B01276353D098E84F12304636E2_2,78A8C07B83DF1B01276353D098E84F12304636E2," Creating and running a prompt - -To create and run a new prompt, complete the following steps: - - - -1. From the [watsonx.ai home page](https://dataplatform.cloud.ibm.com/wx/home?context=wx), choose a project, and then click Experiment with foundation models and build prompts. - - - - - -1. Select a model. -2. Enter a prompt. -3. If necessary, update model parameters or add prompt variables. -4. Click Generate. -5. To preserve your work, so you can reuse or share a prompt with collaborators in the current project, save your work as a project asset. For more information, see [Saving prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-save.html). - - - -To run a sample prompt, complete the following steps: - - - -1. From the Sample prompts menu in the Prompt Lab, select a sample prompt. - -The prompt is opened in the editor and an appropriate model is selected. -2. Click Generate. - - - -" -78A8C07B83DF1B01276353D098E84F12304636E2_3,78A8C07B83DF1B01276353D098E84F12304636E2," Prompt editing options - -You type your prompt in the prompt editor. The prompt editor has the following modes: - -Freeform : You add your prompt in plain text. Your prompt text is sent to the model exactly as you typed it. : Quotation marks in your text are escaped with a backslash (""). Newline characters are represented by n. Apostrophes are escaped (it'''s) so that they can be handled properly in the cURL command. - -Structured : You add parts of your prompt into the appropriate fields: : - Instruction: Add an instruction if it makes sense for your use case. An instruction is an imperative statement, such as Summarize the following article. : - Examples: Add one or more pairs of examples that contain the input and the corresponding output that you want. Providing a few example input-and-output pairs in your prompt is called few-shot prompting. If you need a specific prefix to the input or the output, you can replace the default labels, ""Input:"" or ""Output:"", with the labels you want to use. A space is added between the example label and the example text. : - Test your input: In the Try area, enter the final input of your prompt. : Structured mode is designed to help new users create effective prompts. Text from the fields is sent to the model in a template format. - -" -78A8C07B83DF1B01276353D098E84F12304636E2_4,78A8C07B83DF1B01276353D098E84F12304636E2," Model and prompt configuration options - -You must specify which model to prompt and can optionally set parameters that control the generated result. - -" -78A8C07B83DF1B01276353D098E84F12304636E2_5,78A8C07B83DF1B01276353D098E84F12304636E2," Model choices - -In the Prompt Lab, you can submit your prompt to any of the models that are supported by watsonx.ai. You can choose recently-used models from the drop-down list. Or you can click View all foundation models to view all the supported models, filter them by task, and read high-level information about the models. - -If you tuned a foundation model by using the Tuning Studio and deployed the tuned model, your tuned model is also available for prompting from the Prompt Lab. - -" -78A8C07B83DF1B01276353D098E84F12304636E2_6,78A8C07B83DF1B01276353D098E84F12304636E2," Model parameters - -To control how the model generates output in response to your prompt, you can specify decoding parameters and stopping criteria. For more information, see [Model parameters for prompting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-model-parameters.html). - -" -78A8C07B83DF1B01276353D098E84F12304636E2_7,78A8C07B83DF1B01276353D098E84F12304636E2," Prompt variables - -To add flexibility to your prompts, you can define prompt variables. A prompt variable is a placeholder keyword that you include in the static text of your prompt at creation time and replace with text dynamically at run time. For more information, see [Building reusable prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html). - -" -78A8C07B83DF1B01276353D098E84F12304636E2_8,78A8C07B83DF1B01276353D098E84F12304636E2," AI guardrails - -When you set the AI guardrails switcher to On, harmful language is automatically removed from the input prompt text and from the output that is generated by the model. Specifically, any sentence in the input or output that contains harmful language is replaced with a message that says that potentially harmful text was removed. - -" -78A8C07B83DF1B01276353D098E84F12304636E2_9,78A8C07B83DF1B01276353D098E84F12304636E2," Prompt code - -If you want to run the prompt programmatically, you can view and copy the prompt code or use the Python library. - -" -78A8C07B83DF1B01276353D098E84F12304636E2_10,78A8C07B83DF1B01276353D098E84F12304636E2," View code - -When you click the View code icon (![](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/code.svg)), a cURL command is displayed that you can call from outside the Prompt Lab to submit the current prompt and parameters to the selected model and get a generated response. - -In the command, there is a placeholder for an IBM Cloud IAM token. For information about generating the access token, see [Generating an IBM Cloud IAM token](https://cloud.ibm.com/docs/account?topic=account-iamtoken_from_apikey). - -" -78A8C07B83DF1B01276353D098E84F12304636E2_11,78A8C07B83DF1B01276353D098E84F12304636E2," Programmatic alternative to the Prompt Lab - -The Prompt Lab graphical interface is a great place to experiment and iterate with your prompts. However, you can also prompt foundation models in watsonx.ai programmatically by using the Python library. For details, see [Foundation models Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html). - -" -78A8C07B83DF1B01276353D098E84F12304636E2_12,78A8C07B83DF1B01276353D098E84F12304636E2," Available prompts - -In the side panel, you can access sample prompts, your session history, and saved prompts. - -" -78A8C07B83DF1B01276353D098E84F12304636E2_13,78A8C07B83DF1B01276353D098E84F12304636E2," Samples - -A collection of sample prompts are available in the Prompt Lab. The samples demonstrate effective prompt text and model parameters for different tasks, including classification, extraction, content generation, question answering, and summarization. - -When you click a sample, the prompt text loads in the editor, an appropriate model is selected, and optimal parameters are configured automatically. - -" -78A8C07B83DF1B01276353D098E84F12304636E2_14,78A8C07B83DF1B01276353D098E84F12304636E2," History - -As you experiment with different prompt text, model choices, and parameters, the details are captured in the session history each time you submit your prompt. To load a previous prompt, click the entry in the history and then click Restore. - -" -78A8C07B83DF1B01276353D098E84F12304636E2_15,78A8C07B83DF1B01276353D098E84F12304636E2," Saved - -From the Saved prompt templates menu, you can load any prompts that you saved to the current project as a prompt template asset. - -" -78A8C07B83DF1B01276353D098E84F12304636E2_16,78A8C07B83DF1B01276353D098E84F12304636E2," Learn more - - - -* [Security and privacy for foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html) -* [Building reusable prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html) -* [Saving prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-save.html) -* [Model parameters for prompting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-model-parameters.html) -* [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html) -* [Prompt tips](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-tips.html) -* Try these tutorials: - - - -* [Prompt a foundation model using Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html) -* [Prompt a foundation model with the retrieval-augmented generation pattern](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html) - - - - - - - -* Watch these other prompt lab videos - - - -Parent topic:[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html) -" -E5D702E67E93752155510B56A3B2F464E190EBA2_0,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample foundation model prompts for common tasks - -Try these samples to learn how different prompts can guide foundation models to do common tasks. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_1,E5D702E67E93752155510B56A3B2F464E190EBA2," How to use this topic - -Explore the sample prompts in this topic: - - - -* Copy and paste the prompt text and input parameter values into the Prompt Lab in IBM watsonx.ai -* See what text is generated. -* See how different models generate different output. -* Change the prompt text and parameters to see how results vary. - - - -There is no one right way to prompt foundation models. But patterns have been found, in academia and industry, that work fairly reliably. Use the samples in this topic to build your skills and your intuition about prompt engineering through experimentation. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_2,E5D702E67E93752155510B56A3B2F464E190EBA2,"Video chapters -[ 0:11 ] Introduction to prompts and Prompt Lab -[ 0:33 ] Key concept: Everything is text completion -[ 1:34 ] Useful prompt pattern: Few-shot prompt -[ 1:58 ] Stopping criteria: Max tokens, stop sequences -[ 3:32 ] Key concept: Fine-tuning -[ 4:32 ] Useful prompt pattern: Zero-shot prompt -[ 5:32 ] Key concept: Be flexible, try different prompts -[ 6:14 ] Next steps: Experiment with sample prompts - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_3,E5D702E67E93752155510B56A3B2F464E190EBA2," Samples overview - -You can find samples that prompt foundation models to generate output that supports the following tasks: - - - -* [Classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=enclassification) -* [Extraction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=enextraction) -* [Generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=engeneration) -* [Question answering (QA)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=enqa) -* [Summarization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensummarization) -* [Code generation and conversion](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=encode) -* [Dialogue](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=endialogue) - - - -The following table shows the foundation models that are used in task-specific samples. A checkmark indicates that the model is used in a sample for the associated task. - - - -Table 1. Models used in samples for certain tasks - - Model Classification Extraction Generation QA Summarization Coding Dialogue - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_4,E5D702E67E93752155510B56A3B2F464E190EBA2," flan-t5-xxl-11b ✓ ✓ - flan-ul2-20b ✓ ✓ ✓ - gpt-neox-20b ✓ ✓ ✓ - granite-13b-chat-v1 ✓ - granite-13b-instruct-v1 ✓ ✓ - granite-13b-instruct-v2 ✓ ✓ ✓ - llama-2 chat ✓ - mpt-7b-instruct2 ✓ ✓ - mt0-xxl-13b ✓ ✓ - starcoder-15.5b ✓ - - - -The following table summarizes the available sample prompts. - - - -Table 2. List of sample prompts - - Scenario Prompt editor Prompt format Model Decoding Notes - - [Sample 1a: Classify a message](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample1a) Freeform Zero-shot * mt0-xxl-13b
* flan-t5-xxl-11b
* flan-ul2-20b Greedy * Uses the class names as stop sequences to stop the model after it prints the class name - [Sample 1b: Classify a message](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample1b) Freeform Few-shot * gpt-neox-20b
* mpt-7b-instruct Greedy * Uses the class names as stop sequences - [Sample 1c: Classify a message](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample1c) Structured Few-shot * gpt-neox-20b
* mpt-7b-instruct Greedy * Uses the class names as stop sequences -" -E5D702E67E93752155510B56A3B2F464E190EBA2_5,E5D702E67E93752155510B56A3B2F464E190EBA2," [Sample 2a: Extract details from a complaint](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample2a) Freeform Zero-shot * flan-ul2-20b
* granite-13b-instruct-v2 Greedy - [Sample 3a: Generate a numbered list on a particular theme](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample3a) Freeform Few-shot * gpt-neox-20b Sampling * Generates formatted output
* Uses two newline characters as a stop sequence to stop the model after one list - [Sample 3b: Generate a numbered list on a particular theme](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample3b) Structured Few-shot * gpt-neox-20b Sampling * Generates formatted output.
* Uses two newline characters as a stop sequence - [Sample 3c: Generate a numbered list on a particular theme](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample3c) Freeform Zero-shot * granite-13b-instruct-v1
* granite-13b-instruct-v2 Greedy * Generates formatted output -" -E5D702E67E93752155510B56A3B2F464E190EBA2_6,E5D702E67E93752155510B56A3B2F464E190EBA2," [Sample 4a: Answer a question based on an article](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample4a) Freeform Zero-shot * mt0-xxl-13b
* flan-t5-xxl-11b
* flan-ul2-20b Greedy * Uses a period ""."" as a stop sequence to cause the model to return only a single sentence - [Sample 4b: Answer a question based on an article](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample4b) Structured Zero-shot * mt0-xxl-13b
* flan-t5-xxl-11b
* flan-ul2-20b Greedy * Uses a period ""."" as a stop sequence
* Generates results for multiple inputs at once - [Sample 4c: Answer a question based on a document](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample4c) Freeform Zero-shot * granite-13b-instruct-v2 Greedy - [Sample 4d: Answer general knowledge questions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample4d) Freeform Zero-shot * granite-13b-instruct-v1 Greedy -" -E5D702E67E93752155510B56A3B2F464E190EBA2_7,E5D702E67E93752155510B56A3B2F464E190EBA2," [Sample 5a: Summarize a meeting transcript](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample5a) Freeform Zero-shot * flan-t5-xxl-11b
* flan-ul2-20b
* mpt-7b-instruct2 Greedy - [Sample 5b: Summarize a meeting transcript](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample5b) Freeform Few-shot * gpt-neox-20b Greedy - [Sample 5c: Summarize a meeting transcript](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample5c) Structured Few-shot * gpt-neox-20b Greedy * Generates formatted output
* Uses two newline characters as a stop sequence to stop the model after one list - [Sample 6a: Generate programmatic code from instructions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample6a) Freeform Few-shot * starcoder-15.5b Greedy * Generates programmatic code as output
* Uses as a stop sequence -" -E5D702E67E93752155510B56A3B2F464E190EBA2_8,E5D702E67E93752155510B56A3B2F464E190EBA2," [Sample 6b: Convert code from one programming language to another](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample6b) Freeform Few-shot * starcoder-15.5b Greedy * Generates programmatic code as output
* Uses as a stop sequence - [Sample 7a: Converse in a dialogue](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample7a) Freeform Custom structure * granite-13b-chat-v1 Greedy * Generates dialogue output like a chatbot
* Uses a special token that is named END_KEY as a stop sequence - [Sample 7b: Converse in a dialogue](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample7b) Freeform Custom structure * llama-2 chat Greedy * Generates dialogue output like a chatbot
* Uses a model-specific prompt format - - - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_9,E5D702E67E93752155510B56A3B2F464E190EBA2," Classification - -Classification is useful for predicting data in distinct categories. Classifications can be binary, with two classes of data, or multi-class. A classification task is useful for categorizing information, such as customer feedback, so that you can manage or act on the information more efficiently. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_10,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 1a: Classify a message - -Scenario: Given a message that is submitted to a customer-support chatbot for a cloud software company, classify the customer's message as either a question or a problem. Depending on the class assignment, the chat is routed to the correct support team for the issue type. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_11,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice -Models that are instruction-tuned can generally complete this task with this sample prompt. Suggestions: mt0-xxl-13b, flan-t5-xxl-11b, or flan-ul2-20b - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_12,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding -Greedy. The model must return one of the specified class names; it cannot be creative and make up new classes. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_13,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria - - - -* Specify two stop sequences: ""Question"" and ""Problem"". After the model generates either of those words, it should stop. -* With such short output, the Max tokens parameter can be set to 5. - - - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_14,E5D702E67E93752155510B56A3B2F464E190EBA2,"Prompt text -Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. - -Classify this customer message into one of two classes: Question, Problem. - -Class name: Question -Description: The customer is asking a technical question or a how-to question -about our products or services. - -Class name: Problem -Description: The customer is describing a problem they are having. They might -say they are trying something, but it's not working. They might say they are -getting an error or unexpected results. - -Message: I'm having trouble registering for a new account. -Class name: - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_15,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 1b: Classify a message - -Scenario: Given a message that is submitted to a customer-support chatbot for a cloud software company, classify the customer's message as either a question or a problem description. Based on the class type, the chat can be routed to the correct support team. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_16,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice -With few-shot examples of both classes, most models can complete this task well, including: gpt-neox-20b and mpt-7b-instruct. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_17,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding -Greedy. The model must return one of the specified class names; it cannot be creative and make up new classes. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_18,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria - - - -* Specify two stop sequences: ""Question"" and ""Problem"". After the model generates either of those words, it should stop. -* With such short output, the Max tokens parameter can be set to 5. - - - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_19,E5D702E67E93752155510B56A3B2F464E190EBA2,"Prompt text -Paste this few-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. - -Message: When I try to log in, I get an error. -Class name: Problem - -Message: Where can I find the plan prices? -Class name: Question - -Message: What is the difference between trial and paygo? -Class name: Question - -Message: The registration page crashed, and now I can't create a new account. -Class name: Problem - -Message: What regions are supported? -Class name: Question - -Message: I can't remember my password. -Class name: Problem - -Message: I'm having trouble registering for a new account. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_20,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 1c: Classify a message - -Scenario: Given a message that is submitted to a customer-support chatbot for a cloud software company, classify the customer's message as either a question or a problem description. Based on the class type, the chat can be routed to the correct support team. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_21,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice -With few-shot examples of both classes, most models can complete this task well, including: gpt-neox-20b and mpt-7b-instruct. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_22,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding -Greedy. The model must return one of the specified class names, not be creative and make up new classes. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_23,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria - - - -* Specify two stop sequences: ""Question"" and ""Problem"". After the model generates either of those words, it should stop. -* With such short output, the Max tokens parameter can be set to 5. - - - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_24,E5D702E67E93752155510B56A3B2F464E190EBA2,"Set up section -Paste these headers and examples into the Examples area of the Set up section: - - - -Table 2. Classification few-shot examples - - Message: Class name: - - When I try to log in, I get an error. Problem - Where can I find the plan prices? Question - What is the difference between trial and paygo? Question - The registration page crashed, and now I can't create a new account. Problem - What regions are supported? Question - I can't remember my password. Problem - - - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_25,E5D702E67E93752155510B56A3B2F464E190EBA2,"Try section -Paste this message in the Try section: - -I'm having trouble registering for a new account. - -Select the model and set parameters, then click Generate to see the result. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_26,E5D702E67E93752155510B56A3B2F464E190EBA2," Extracting details - -Extraction tasks can help you to find key terms or mentions in data based on the semantic meaning of words rather than simple text matches. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_27,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 2a: Extract details from a complaint - -Scenario: Given a complaint from a customer who had trouble booking a flight on a reservation website, identify the factors that contributed to this customer's unsatisfactory experience. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_28,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choices -flan-ul2-20b, granite-13b-instruct-v2 - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_29,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding -Greedy. We need the model to return words that are in the input; the model cannot be creative and make up new words. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_30,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria -The list of extracted factors will not be long, so set the Max tokens parameter to 50. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_31,E5D702E67E93752155510B56A3B2F464E190EBA2,"Prompt text -Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. - -From the following customer complaint, extract all the factors that -caused the customer to be unhappy. - -Customer complaint: -I just tried to book a flight on your incredibly slow website. All -the times and prices were confusing. I liked being able to compare -the amenities in economy with business class side by side. But I -never got to reserve a seat because I didn't understand the seat map. -Next time, I'll use a travel agent! - -Numbered list of all the factors that caused the customer to be unhappy: - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_32,E5D702E67E93752155510B56A3B2F464E190EBA2," Generating natural language - -Generation tasks are what large language models do best. Your prompts can help guide the model to generate useful language. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_33,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 3a: Generate a numbered list on a particular theme - -Scenario: Generate a numbered list on a particular theme. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_34,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice -gpt-neox-20b was trained to recognize and handle special characters, such as the newline character, well. This model is a good choice when you want your generated text to be formatted a specific way with special characters. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_35,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding -Sampling. This is a creative task. Set the following parameters: - - - -* Temperature: 0.7 -* Top P: 1 -* Top K: 50 -* Random seed: 9045 (To get different output each time you click Generate, specify a different value for the Random seed parameter or clear the parameter.) - - - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_36,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria - - - -* To make sure the model stops generating text after one list, specify a stop sequence of two newline characters. To do that, click the Stop sequence text box, press the Enter key twice, then click Add sequence. -* The list will not be very long, so set the Max tokens parameter to 50. - - - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_37,E5D702E67E93752155510B56A3B2F464E190EBA2,"Prompt text -Paste this few-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. - -What are 4 types of dog breed? -1. Poodle -2. Dalmatian -3. Golden retriever -4. Bulldog - -What are 3 ways to incorporate exercise into your day? -1. Go for a walk at lunch -2. Take the stairs instead of the elevator -3. Park farther away from your destination - -What are 4 kinds of vegetable? -1. Spinach -2. Carrots -3. Broccoli -4. Cauliflower - -What are the 3 primary colors? -1. Red -2. Green -3. Blue - -What are 3 ingredients that are good on pizza? - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_38,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 3b: Generate a numbered list on a particular theme - -Scenario: Generate a numbered list on a particular theme. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_39,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice -gpt-neox-20b was trained to recognize and handle special characters, such as the newline character, well. This model is a good choice when you want your generated text to be formatted in a specific way with special characters. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_40,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding -Sampling. This scenario is a creative one. Set the following parameters: - - - -* Temperature: 0.7 -* Top P: 1 -* Top K: 50 -* Random seed: 9045 (To generate different results, specify a different value for the Random seed parameter or clear the parameter.) - - - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_41,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria - - - -* To make sure that the model stops generating text after one list, specify a stop sequence of two newline characters. To do that, click in the Stop sequence text box, press the Enter key twice, then click Add sequence. -* The list will not be long, so set the Max tokens parameter to 50. - - - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_42,E5D702E67E93752155510B56A3B2F464E190EBA2,"Set up section -Paste these headers and examples into the Examples area of the Set up section: - - - -Table 3. Generation few-shot examples - - Input: Output: - - What are 4 types of dog breed? 1. Poodle 2. Dalmatian 3. Golden retriever 4. Bulldog - What are 3 ways to incorporate exercise into your day? 1. Go for a walk at lunch 2. Take the stairs instead of the elevator 3. Park farther away from your destination - What are 4 kinds of vegetable? 1. Spinach 2. Carrots 3. Broccoli 4. Cauliflower - What are the 3 primary colors? 1. Red 2. Green 3. Blue - - - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_43,E5D702E67E93752155510B56A3B2F464E190EBA2,"Try section -Paste this input in the Try section: - -What are 3 ingredients that are good on pizza? - -Select the model and set parameters, then click Generate to see the result. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_44,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 3c: Generate a numbered list on a particular theme - -Scenario: Ask the model to play devil's advocate. Describe a potential action and ask the model to list possible downsides or risks that are associated with the action. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_45,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice -Similar to gpt-neox-20b, the granite-13b-instruct model was trained to recognize and handle special characters, such as the newline character, well. The granite-13b-instruct-v2 oe granite-13b-instruct-v1 model is a good choice when you want your generated text to be formatted in a specific way with special characters. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_46,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding -Greedy. The model must return the most predictable content based on what's in the prompt; the model cannot be too creative. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_47,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria -The summary might run several sentences, so set the Max tokens parameter to 60. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_48,E5D702E67E93752155510B56A3B2F464E190EBA2,"Prompt text -Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. - -You are playing the role of devil's advocate. Argue against the proposed plans. List 3 detailed, unique, compelling reasons why moving forward with the plan would be a bad choice. Consider all types of risks. - -Plan we are considering: -Extend our store hours. -Three problems with this plan are: -1. We'll have to pay more for staffing. -2. Risk of theft increases late at night. -3. Clerks might not want to work later hours. - -Plan we are considering: -Open a second location for our business. -Three problems with this plan are: -1. Managing two locations will be more than twice as time-consuming than managed just one. -2. Creating a new location doesn't guarantee twice as many customers. -3. A new location means added real estate, utility, and personnel expenses. - -Plan we are considering: -Refreshing our brand image by creating a new logo. -Three problems with this plan are: - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_49,E5D702E67E93752155510B56A3B2F464E190EBA2," Question answering - -Question-answering tasks are useful in help systems and other scenarios where frequently asked or more nuanced questions can be answered from existing content. - -To help the model return factual answers, implement the retrieval-augmented generation pattern. For more information, see [Retrieval-augmented generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-rag.html). - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_50,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 4a: Answer a question based on an article - -Scenario: The website for an online seed catalog has many articles to help customers plan their garden and ultimately select which seeds to purchase. A new widget is being added to the website to answer customer questions based on the contents of the article the customer is viewing. Given a question that is related to an article, answer the question based on the article. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_51,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice -Models that are instruction-tuned, such as mt0-xxl-13b, flan-t5-xxl-11b, or flan-ul2-20b, can generally complete this task with this sample prompt. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_52,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding -Greedy. The answers must be grounded in the facts in the article, and if there is no good answer in the article, the model should not be creative and make up an answer. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_53,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria -To cause the model to return a one-sentence answer, specify a period ""."" as a stop sequence. The Max tokens parameter can be set to 50. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_54,E5D702E67E93752155510B56A3B2F464E190EBA2,"Prompt text -Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_55,E5D702E67E93752155510B56A3B2F464E190EBA2,"Tomatoes are one of the most popular plants for vegetable gardens. -Tip for success: If you select varieties that are resistant to -disease and pests, growing tomatoes can be quite easy. For -experienced gardeners looking for a challenge, there are endless -heirloom and specialty varieties to cultivate. Tomato plants come -in a range of sizes. There are varieties that stay very small, less -than 12 inches, and grow well in a pot or hanging basket on a balcony -or patio. Some grow into bushes that are a few feet high and wide, -and can be grown is larger containers. Other varieties grow into -huge bushes that are several feet wide and high in a planter or -garden bed. Still other varieties grow as long vines, six feet or -more, and love to climb trellises. Tomato plants do best in full -sun. You need to water tomatoes deeply and often. Using mulch -prevents soil-borne disease from splashing up onto the fruit when you -water. Pruning suckers and even pinching the tips will encourage the -" -E5D702E67E93752155510B56A3B2F464E190EBA2_56,E5D702E67E93752155510B56A3B2F464E190EBA2,"Answer the following question using only information from the article. -Answer in a complete sentence, with proper capitalization and punctuation. -If there is no good answer in the article, say ""I don't know"". - -Question: Why should you use mulch when growing tomatoes? -Answer: - -You can experiment with asking other questions too, such as: - - - -* How large do tomato plants get? -* Do tomato plants prefer shade or sun? -* Is it easy to grow tomatoes? - - - -Try out-of-scope questions too, such as: - - - -* How do you grow cucumbers? - - - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_57,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 4b: Answer a question based on an article - -Scenario: The website for an online seed catalog has many articles to help customers plan their garden and ultimately select which seeds to purchase. A new widget is being added to the website to answer customer questions based on the contents of the article the customer is viewing. Given a question related to a particular article, answer the question based on the article. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_58,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice -Models that are instruction-tuned, such as mt0-xxl-13b, flan-t5-xxl-11b, or flan-ul2-20b, can generally complete this task with this sample prompt. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_59,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding -Greedy. The answers must be grounded in the facts in the article, and if there is no good answer in the article, the model should not be creative and make up an answer. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_60,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria -To cause the model to return a one-sentence answer, specify a period ""."" as a stop sequence. The Max tokens parameter can be set to 50. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_61,E5D702E67E93752155510B56A3B2F464E190EBA2,"Set up section -Paste this text into the Instruction area of the Set up section: - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_62,E5D702E67E93752155510B56A3B2F464E190EBA2,"Tomatoes are one of the most popular plants for vegetable gardens. -Tip for success: If you select varieties that are resistant to -disease and pests, growing tomatoes can be quite easy. For -experienced gardeners looking for a challenge, there are endless -heirloom and specialty varieties to cultivate. Tomato plants come -in a range of sizes. There are varieties that stay very small, less -than 12 inches, and grow well in a pot or hanging basket on a balcony -or patio. Some grow into bushes that are a few feet high and wide, -and can be grown is larger containers. Other varieties grow into -huge bushes that are several feet wide and high in a planter or -garden bed. Still other varieties grow as long vines, six feet or -more, and love to climb trellises. Tomato plants do best in full -sun. You need to water tomatoes deeply and often. Using mulch -prevents soil-borne disease from splashing up onto the fruit when you -water. Pruning suckers and even pinching the tips will encourage the -" -E5D702E67E93752155510B56A3B2F464E190EBA2_63,E5D702E67E93752155510B56A3B2F464E190EBA2,"Answer the following question using only information from the article. -Answer in a complete sentence, with proper capitalization and punctuation. -If there is no good answer in the article, say ""I don't know"". - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_64,E5D702E67E93752155510B56A3B2F464E190EBA2,"Try section -In the Try section, add an extra test row so you can paste each of these two questions in a separate row: - -Why should you use mulch when growing tomatoes? - -How do you grow cucumbers? - -Select the model and set parameters, then click Generate to see two results. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_65,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 4c: Answer a question based on a document - -Scenario: You are creating a chatbot that can answer user questions. When a user asks a question, you want the agent to answer the question with information from a specific document. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_66,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice -Models that are instruction-tuned, such as granite-13b-instruct-v2, can complete the task with this sample prompt. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_67,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding -Greedy. The answers must be grounded in the facts in the document, and if there is no good answer in the article, the model should not be creative and make up an answer. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_68,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria -Use a Max tokens parameter of 50. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_69,E5D702E67E93752155510B56A3B2F464E190EBA2,"Prompt text -Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. - -Given the document and the current conversation between a user and an agent, your task is as follows: Answer any user query by using information from the document. The response should be detailed. - -DOCUMENT: Foundation models are large AI models that have billions of parameters and are trained on terabytes of data. Foundation models can do various tasks, including text, code, or image generation, classification, conversation, and more. Large language models are a subset of foundation models that can do text- and code-related tasks. -DIALOG: USER: What are foundation models? - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_70,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 4d: Answer general knowledge questions - -Scenario: Answer general questions about finance. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_71,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice -The granite-13b-instruct-v1 model can be used for multiple tasks, including text generation, summarization, question and answering, classification, and extraction. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_72,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding -Greedy. This sample is answering questions, so we don't want creative output. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_73,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria -Set the Max tokens parameter to 200 so the model can return a complete answer. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_74,E5D702E67E93752155510B56A3B2F464E190EBA2,"Prompt text -The model was tuned for question-answering with examples in the following format: - -<|user|> -content of the question -`<|assistant|> -new line for the model's answer - -You can use the exact syntax <|user|> and <|assistant|> in the lines before and after the question or you can replace the values with equivalent terms, such as User and Assistant. - -If you're using version 1, do not include any trailing white spaces after the label, and be sure to add a new line. - -Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. - -<|user|> -Tell me about interest rates -<|assistant|> - -After the model generates an answer, you can ask a follow-up question. The model uses information from the previous question when it generates a response. - -<|user|> -Who sets it? -<|assistant|> - -The model retains information from a previous question when it answers a follow-up question, but it is not optimized to support an extended dialogue. - -Note: When you ask a follow-up question, the previous question is submitted again, which adds to the number of tokens that are used. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_75,E5D702E67E93752155510B56A3B2F464E190EBA2," Summarization - -Summarization tasks save you time by condensing large amounts of text into a few key pieces of information. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_76,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 5a: Summarize a meeting transcript - -Scenario: Given a meeting transcript, summarize the main points as meeting notes so those notes can be shared with teammates who did not attend the meeting. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_77,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice -Models that are instruction-tuned can generally complete this task with this sample prompt. Suggestions: flan-t5-xxl-11b, flan-ul2-20b, or mpt-7b-instruct2. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_78,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding -Greedy. The model must return the most predictable content based on what's in the prompt; the model cannot be too creative. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_79,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria -The summary might run several sentences, so set the Max tokens parameter to 60. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_80,E5D702E67E93752155510B56A3B2F464E190EBA2,"Prompt text -Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. - -Summarize the following transcript. -Transcript: -00:00 [alex] Let's plan the team party! -00:10 [ali] How about we go out for lunch at the restaurant? -00:21 [sam] Good idea. -00:47 [sam] Can we go to a movie too? -01:04 [alex] Maybe golf? -01:15 [sam] We could give people an option to do one or the other. -01:29 [alex] I like this plan. Let's have a party! -Summary: - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_81,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 5b: Summarize a meeting transcript - -Scenario: Given a meeting transcript, summarize the main points as meeting notes so those notes can be shared with teammates who did not attend the meeting. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_82,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice -With few-shot examples, most models can complete this task well. Try: gpt-neox-20b. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_83,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding -Greedy. The model must return the most predictable content based on what's in the prompt, not be too creative. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_84,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria - - - -* To make sure that the model stops generating text after the summary, specify a stop sequence of two newline characters. To do that, click in the Stop sequence text box, press the Enter key twice, then click Add sequence. -* Set the Max tokens parameter to 60. - - - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_85,E5D702E67E93752155510B56A3B2F464E190EBA2,"Prompt text -Paste this few-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. - -Transcript: -00:00 [sam] I wanted to share an update on project X today. -00:15 [sam] Project X will be completed at the end of the week. -00:30 [erin] That's great! -00:35 [erin] I heard from customer Y today, and they agreed to buy our product. -00:45 [alex] Customer Z said they will too. -01:05 [sam] Great news, all around. -Summary: -Sam shared an update that project X will be complete at the end of the week. -Erin said customer Y will buy our product. And Alex said customer Z will buy -our product too. - -Transcript: -00:00 [ali] The goal today is to agree on a design solution. -00:12 [alex] I think we should consider choice 1. -00:25 [ali] I agree -00:40 [erin] Choice 2 has the advantage that it will take less time. -01:03 [alex] Actually, that's a good point. -01:30 [ali] So, what should we do? -01:55 [alex] I'm good with choice 2. -02:20 [erin] Me too. -02:45 [ali] Done! -Summary: -Alex suggested considering choice 1. Erin pointed out choice two will take -less time. The team agreed with choice 2 for the design solution. - -Transcript: -00:00 [alex] Let's plan the team party! -00:10 [ali] How about we go out for lunch at the restaurant? -00:21 [sam] Good idea. -00:47 [sam] Can we go to a movie too? -01:04 [alex] Maybe golf? -01:15 [sam] We could give people an option to do one or the other. -01:29 [alex] I like this plan. Let's have a party! -Summary: - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_86,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 5c: Summarize a meeting transcript - -Scenario: Given a meeting transcript, summarize the main points in a bulleted list so that the list can be shared with teammates who did not attend the meeting. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_87,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice -gpt-neox-20b was trained to recognize and handle special characters, such as the newline character, well. This model is a good choice when you want your generated text to be formatted in a specific way with special characters. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_88,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding -Greedy. The model must return the most predictable content based on what's in the prompt; the model cannot be too creative. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_89,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria - - - -* To make sure that the model stops generating text after one list, specify a stop sequence of two newline characters. To do that, click in the Stop sequence text box, press the Enter key twice, then click Add sequence. -* Set the Max tokens parameter to 60. - - - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_90,E5D702E67E93752155510B56A3B2F464E190EBA2,"Set up section -Paste these headers and examples into the Examples area of the Set up section: - - - -Table 4. Summarization few-shot examples - - Transcript: Summary: - - 00:00 [sam] I wanted to share an update on project X today. 00:15 [sam] Project X will be completed at the end of the week. 00:30 [erin] That's great! 00:35 [erin] I heard from customer Y today, and they agreed to buy our product. 00:45 [alex] Customer Z said they will too. 01:05 [sam] Great news, all around. - Sam shared an update that project X will be complete at the end of the week - Erin said customer Y will buy our product - And Alex said customer Z will buy our product too - 00:00 [ali] The goal today is to agree on a design solution. 00:12 [alex] I think we should consider choice 1. 00:25 [ali] I agree 00:40 [erin] Choice 2 has the advantage that it will take less time. 01:03 [alex] Actually, that's a good point. 01:30 [ali] So, what should we do? 01:55 [alex] I'm good with choice 2. 02:20 [erin] Me too. 02:45 [ali] Done! - Alex suggested considering choice 1 - Erin pointed out choice two will take less time - The team agreed with choice 2 for the design solution - - - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_91,E5D702E67E93752155510B56A3B2F464E190EBA2,"Try section -Paste this message in the Try section: - -00:00 [alex] Let's plan the team party! -00:10 [ali] How about we go out for lunch at the restaurant? -00:21 [sam] Good idea. -00:47 [sam] Can we go to a movie too? -01:04 [alex] Maybe golf? -01:15 [sam] We could give people an option to do one or the other. -01:29 [alex] I like this plan. Let's have a party! - -Select the model and set parameters, then click Generate to see the result. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_92,E5D702E67E93752155510B56A3B2F464E190EBA2," Code generation and conversion - -Foundation models that can generate and convert programmatic code are great resources for developers. They can help developers to brainstorm and troubleshoot programming tasks. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_93,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 6a: Generate programmatic code from instructions - -Scenario: You want to generate code from instructions. Namely, you want to write a function in the Python programming language that returns a sequence of prime numbers that are lower than the number that is passed to the function as a variable. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_94,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice -Models that can generate code, such as starcoder-15.5b, can generally complete this task when a sample prompt is provided. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_95,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding -Greedy. The answer must be a valid code snippet. The model cannot be creative and make up an answer. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_96,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria -To stop the model after it returns a single code snippet, specify as the stop sequence. The Max tokens parameter can be set to 1,000. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_97,E5D702E67E93752155510B56A3B2F464E190EBA2,"Prompt text -Paste this code snippet into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. - -Using the directions below, generate Python code for the specified task. - -Input: - Write a Python function that prints 'Hello World!' string 'n' times. - -Output: -def print_n_times(n): -for i in range(n): -print(""Hello World!"") - - - -Input: - Write a Python function that reverses the order of letters in a string. - The function named 'reversed' takes the argument 'my_string', which is a string. It returns the string in reverse order. - -Output: - -The output contains Python code similar to the following snippet: - -def reversed(my_string): -return my_string[::-1] - -Be sure to test the generated code to verify that it works as you expect. - -For example, if you run reversed(""good morning""), the result is 'gninrom doog'. - -Note: The StarCoder model might generate code that is taken directly from its training data. As a result, generated code might require attribution. You are responsible for ensuring that any generated code that you use is properly attributed, if necessary. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_98,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 6b: Convert code from one programming language to another - -Scenario: You want to convert code from one programming language to another. Namely, you want to convert a code snippet from C++ to Python. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_99,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice -Models that can generate code, such as starcoder-15.5b, can generally complete this task when a sample prompt is provided. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_100,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding -Greedy. The answer must be a valid code snippet. The model cannot be creative and make up an answer. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_101,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria -To stop the model after it returns a single code snippet, specify as the stop sequence. The Max tokens parameter can be set to 300. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_102,E5D702E67E93752155510B56A3B2F464E190EBA2,"Prompt text -Paste this code snippet into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. - -This prompt includes an example input and output pair. The input is C++ code and the output is the equivalent function in Python code. - -The C++ code snippet to be converted is included next. It is a function that counts the number of arithmetic progressions with the sum S and common difference of D, where S and D are integer values that are passed as parameters. - -The final part of the prompt identifies the language that you want the C++ code snippet to be converted into. - -Translate the following code from C++ to Python. - -C++: -include ""bits/stdc++.h"" -using namespace std; -bool isPerfectSquare(long double x) { -long double sr = sqrt(x); -return ((sr - floor(sr)) == 0); -} -void checkSunnyNumber(int N) { -if (isPerfectSquare(N + 1)) { -cout << ""Yes -""; -} else { -cout << ""No -""; -} -} -int main() { -int N = 8; -checkSunnyNumber(N); -return 0; -} - -Python: -from math import - -def isPerfectSquare(x): -sr = sqrt(x) -return ((sr - floor(sr)) == 0) - -def checkSunnyNumber(N): -if (isPerfectSquare(N + 1)): -print(""Yes"") -else: -print(""No"") - -if __name__ == '__main__': -N = 8 -checkSunnyNumber(N) - - - -C++: -include -using namespace std; -int countAPs(int S, int D) { -S = S * 2; -int answer = 0; -for (int i = 1; i <= sqrt(S); i++) { -if (S % i == 0) { -if (((S / i) - D * i + D) % 2 == 0) -answer++; -" -E5D702E67E93752155510B56A3B2F464E190EBA2_103,E5D702E67E93752155510B56A3B2F464E190EBA2,"if ((D * i - (S / i) + D) % 2 == 0) -answer++; -} -} -return answer; -} -int main() { -int S = 12, D = 1; -cout << countAPs(S, D); -return 0; -} - -Python: - -The output contains Python code similar to the following snippet: - -from math import - -def countAPs(S, D): -S = S * 2 -answer = 0 -for i in range(1, int(sqrt(S)) + 1): -if (S % i == 0): -if (((S / i) - D * i + D) % 2 == 0): -answer += 1 -if ((D * i - (S / i) + D) % 2 == 0): -answer += 1 -return answer - -if __name__ == '__main__': -S = 12 -D = 1 -print(countAPs(S, D)) - -The generated Python code functions the same as the C++ function included in the prompt. - -Test the generated Python code to verify that it works as you expect. - -Remember, the StarCoder model might generate code that is taken directly from its training data. As a result, generated code might require attribution. You are responsible for ensuring that any generated code that you use is properly attributed, if necessary. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_104,E5D702E67E93752155510B56A3B2F464E190EBA2," Dialogue - -Dialogue tasks are helpful in customer service scenarios, especially when a chatbot is used to guide customers through a workflow to reach a goal. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_105,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 7a: Converse in a dialogue - -Scenario: Generate dialogue output like a chatbot. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_106,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice -Like other foundation models, granite-13b-chat can be used for multiple tasks. However, it is optimized for carrying on a dialogue. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_107,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding -Greedy. This sample is answering general knowledge, factual questions, so we don't want creative output. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_108,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria - - - -* A helpful feature of the model is the inclusion of a special token that is named END_KEY at the end of each response. When some generative models return a response to the input in fewer tokens than the maximum number allowed, they can repeat patterns from the input. This model prevents such repetition by incorporating a reliable stop sequence for the prompt. Add END_KEY as the stop sequence. -* Set the Max tokens parameter to 200 so the model can return a complete answer. - - - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_109,E5D702E67E93752155510B56A3B2F464E190EBA2,"Prompt text -The model expects the input to follow a specific pattern. - -Start the input with an instruction. For example, the instruction might read as follows: - -Participate in a dialogue with various people as an AI assistant. As the Assistant, you are upbeat, professional, and polite. You do your best to understand exactly what the human needs and help them to achieve their goal as best you can. You do not give false or misleading information. If you don't know an answer, you state that you don't know or aren't sure about the right answer. You prioritize caution over usefulness. You do not answer questions that are unsafe, immoral, unethical, or dangerous. - -Next, add lines to capture the question and answer pattern with the following syntax: - -Human: -content of the question -Assistant: -new line for the model's answer - -You can replace the terms Human and Assistant with other terms. - -If you're using version 1, do not include any trailing white spaces after the Assistant: label, and be sure to add a new line. - -Paste the following prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. - -Participate in a dialogue with various people as an AI assistant. As the Assistant, you are upbeat, professional, and polite. You do your best to understand exactly what the human needs and help them to achieve their goal as best you can. You do not give false or misleading information. You prioritize caution over usefulness. You do not answer questions that are unsafe, immoral, unethical, or dangerous. - -Human: How does a bill become a law? -Assistant: - -After the initial output is generated, continue the dialogue by asking a follow-up question. For example, if the output describes how a bill becomes a law in the United States, you can ask about how laws are made in other countries. - -Human: What about in Canada? -Assistant: - -A few notes about using this sample with the model: - - - -* The prompt input outlines the chatbot scenario and describes the personality of the AI assistant. The description explains that the assistant should indicate when it doesn't know an answer. It also directs the assistant to avoid discussing unethical topics. -" -E5D702E67E93752155510B56A3B2F464E190EBA2_110,E5D702E67E93752155510B56A3B2F464E190EBA2,"* The assistant is able to respond to a follow-up question that relies on information from an earlier exchange in the same dialogue. -* The model expects the input to follow a specific pattern. -* The generated response from the model is clearly indicated by the keyword END_KEY. You can use this keyword as a stop sequence to help the model generate succinct responses. - - - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_111,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 7b: Converse in a dialogue - -Scenario: Generate dialogue output like a chatbot. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_112,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice -Like other foundation models, Llama 2 (in both the 70 billion and 13 billion sizes) can be used for multiple tasks. But both Llama 2 models are optimized for dialogue use cases. The llama-2-70b-chat and llama-2-13b-chat are the only models in watsonx.ai that are fine-tuned for the [INST]<><>[/INST] prompt format. For more information about this prompt format, see [How to prompt Llama 2](https://huggingface.co/blog/llama2how-to-prompt-llama-2). - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_113,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding -Greedy. This sample is answering general knowledge, factual questions, so we don't want creative output. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_114,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria -Set the Max tokens parameter to 100. - -The template for Llama 2 uses an end-of-string (EOS) token () to delimit the end of the agent's response. The model output stops generating text when the EOS token is encountered. - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_115,E5D702E67E93752155510B56A3B2F464E190EBA2,"Prompt text -Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. - -[INST] <> -You are a helpful assistant. Begin each response with the phrase ""Dear user, "". If you are unsure about an answer, truthfully say ""I don't know""<> - -I have a question about the Earth.[/INST] -Dear user, go on. -[INST] - -How many moons are there? -[/INST] - -After the initial output is generated, continue the dialogue to ask about more planets, wrapping your user messages in the same [INST] structure. You don't need to add an token to the start of follow-up questions. - -The output looks something like this: - -[INST] <> -You are a helpful assistant. Begin each response with the phrase ""Dear user, "". If you are unsure about an answer, truthfully say ""I don't know""<> - -I have a question about the Earth.[/INST] -Dear user, go on. -[INST] - -How many moons are there? -[/INST] -Dear user, the Earth has one natural satellite, commonly referred to as the Moon. -[INST] - -What about Saturn? -[/INST] -Dear user, Saturn has a total of 62 confirmed moons, as well as numerous smaller moonlets and ring particles. The largest moon of Saturn is Titan, which is the second-largest moon in the solar system, after Jupiter's moon Ganymede. -[INST] - -And Mars? -[/INST] -Dear user, Mars has two small natural satellites, Phobos and Deimos. Both moons are thought to be captured asteroids, and are irregularly shaped and very small, with diameters of about 22 kilometers and 12 kilometers respectively. - -Notice a few important points about the output that the model generates as you go: - - - -" -E5D702E67E93752155510B56A3B2F464E190EBA2_116,E5D702E67E93752155510B56A3B2F464E190EBA2,"* The so-called system instruction, in the <><> tags, continues to influence the output at each dialogue turn without having to repeat the system instruction. In other words, the assistant responses continue to be prepended with ""Dear user, "". -* In true dialogue fashion, the assistant response to the user input ""How many moons are there?"" takes into account the previous user input ""I have a question about the Earth."" and returns the number of moons orbiting Earth. -* Also in proper dialogue form, the assistant responses continue to follow the topic of the conversation, which is the number of moons. (Otherwise, the generated output to the vague user message ""And Mars?"" could wander off in any direction.) -* Caution: Newline (carriage-return) characters especially, and spaces to a lesser extent, in the prompt text can have a dramatic impact on the output generated. -* When you use Llama 2 for chat use cases, follow the recommended prompt template format as closely as possible. Do not use the [INST]<><>[/INST] prompt format when you use Llama 2 for any other tasks besides chat. - - - -Parent topic:[Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) -" -38DBE0E16434502696281563802B76F3E38B25D2_0,38DBE0E16434502696281563802B76F3E38B25D2," Saving your work - -Prompt engineering involves trial and error. Keep track of your experimentation and save model-and-prompt combinations that generate the output you want. - -When you save your work, you can choose to save it as different asset types. Saving your work as an asset makes it possible to share your work with collaborators in the current project. - - - -Table 1: Asset types - - Asset type When to use this asset type What is saved How to retrieve the asset - - Prompt template asset When you find a combination of prompt static text, prompt variables, and prompt engineering parameters that generate the results you want from a specific model and want to reuse it. Prompt text, model, prompt engineering parameters, and prompt variables.
Note: The output that is generated by the model is not saved. From the Saved prompt templates tab - Prompt session asset When you want to keep track of the steps involved with your experimentation so you know what you've tried and what you haven't. Prompt text, model, prompt engineering parameters, and model output for up to 500 prompts that are submitted during a prompt engineering session. From the History tab - Notebook asset When you want to work with models programmatically, but want to start from the Prompt Lab interface for a better prompt engineering experience. Prompt text, model, prompt engineering parameters, and prompt variable names and default values are formatted as Python code and stored as a notebook. From the Assets page of the project - - - -Each of these asset types is available from the project's Assets page. Project collaborators with the Admin or Editor role can open and work with them. Your prompt template and prompt session assets are locked automatically, but you can unlock them by clicking the lock icon (![Lock icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/lockicon-new.png)). - -" -38DBE0E16434502696281563802B76F3E38B25D2_1,38DBE0E16434502696281563802B76F3E38B25D2," Saving your work - -To save your prompt engineering work, complete the following steps: - - - -1. From the header of the prompt editor, click Save work, and then click Save as. -2. Choose an asset type. -3. Name the asset, and then optionally add a description. -4. Choose the task type that best matches your goal. -5. If you save the prompt as a notebook asset only: Select View in project after saving. -6. Click Save. - - - -" -38DBE0E16434502696281563802B76F3E38B25D2_2,38DBE0E16434502696281563802B76F3E38B25D2," Working with prompts saved in a notebook - -When you save your work as a notebook asset, a Python notebook is built. - -To work with a prompt notebook asset, complete the following steps: - - - -1. Open the notebook asset from the Assets tab of your project. -2. Click the Edit icon (![edit notebook icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/edit.svg)) to instantiate the notebook so you can step through the code. - -The notebook contains runnable code that manages the following steps for you: - - - -* Authenticates with the service. -* Defines a Python class. -* Defines the input text for the model and declares any prompt variables. You can edit the static prompt text and assign values to prompt variables. -* Uses the defined class to call the watsonx.ai inferencing API and pass your input to the foundation model. -* Shows the output that is generated by the foundation model. - - - -3. Use the notebook as is, or change it to meet the needs of your use case. - -The Python code that is generated by using the Prompt Lab executes successfully. You must test and validate any changes that you make to the code. - - - -" -38DBE0E16434502696281563802B76F3E38B25D2_3,38DBE0E16434502696281563802B76F3E38B25D2," Working with saved prompt templates - -To continue working with a saved prompt, open it from the Saved prompt templates tab of the Prompt Lab. - -When you open a saved prompt template, Autosave is on, which means that any changes you make to the prompt will be reflected in the saved prompt template asset. If you want the prompt template that you saved to remain unchanged, click New prompt to start a new prompt. - -For more information, see [Building reusable prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html). - -" -38DBE0E16434502696281563802B76F3E38B25D2_4,38DBE0E16434502696281563802B76F3E38B25D2," Working with saved prompt sessions - -To continue working with a saved prompt session, open it from the History tab of the Prompt Lab. - -To review previous prompt submissions, you can click a prompt entry from the history to open it in the prompt editor. If you prefer the results from the earlier prompt, you can reset it as your current prompt by clicking Restore. When you restore an earlier prompt, your current prompt session is replaced by the earlier version of the prompt session. - -" -38DBE0E16434502696281563802B76F3E38B25D2_5,38DBE0E16434502696281563802B76F3E38B25D2," Learn more - - - -* [Security and privacy for foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html) - - - -Parent topic:[Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) -" -F839CD35991DF790F17239C9C63BFCAE701F3D65_0,F839CD35991DF790F17239C9C63BFCAE701F3D65," Tips for writing foundation model prompts: prompt engineering - -Part art, part science, prompt engineering is the process of crafting prompt text to best effect for a given model and parameters. When it comes to prompting foundation models, there isn't just one right answer. There are usually multiple ways to prompt a foundation model for a successful result. - -Use the Prompt Lab to experiment with crafting prompts. - - - -* For help using the prompt editor, see [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html). -* Try the samples that are available from the Sample prompts tab. -* Learn from documented samples. See [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html). - - - -As you experiment, remember these tips. The tips in this topic will help you successfully prompt most text-generating foundation models. - -" -F839CD35991DF790F17239C9C63BFCAE701F3D65_1,F839CD35991DF790F17239C9C63BFCAE701F3D65," Tip 1: Always remember that everything is text completion - -Your prompt is the text you submit for processing by a foundation model. - -The Prompt Lab in IBM watsonx.ai is not a chatbot interface. For most models, simply asking a question or typing an instruction usually won't yield the best results. That's because the model isn't answering your prompt, the model is appending text to it. - -This image demonstrates prompt text and generated output: - - - -* Prompt text: ""I took my dog "" -* Generated output: ""to the park."" - - - -![Text completion in Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fm-prompt-lab-text-completion.png) - -" -F839CD35991DF790F17239C9C63BFCAE701F3D65_2,F839CD35991DF790F17239C9C63BFCAE701F3D65," Tip 2: Include all the needed prompt components - -Effective prompts usually have one or more of the following components: instruction, context, examples, and cue. - -" -F839CD35991DF790F17239C9C63BFCAE701F3D65_3,F839CD35991DF790F17239C9C63BFCAE701F3D65," Instruction - -An instruction is an imperative statement that tells the model what to do. For example, if you want the model to list ideas for a dog-walking business, your instruction could be: ""List ideas for starting a dog-walking business:"" - -" -F839CD35991DF790F17239C9C63BFCAE701F3D65_4,F839CD35991DF790F17239C9C63BFCAE701F3D65," Context - -Including background or contextual information in your prompt can nudge the model output in a desired direction. Specifically, (tokenized) words that appear in your prompt text are more likely to be included in the generated output. - -" -F839CD35991DF790F17239C9C63BFCAE701F3D65_5,F839CD35991DF790F17239C9C63BFCAE701F3D65," Examples - -To indicate the format or shape that you want the model response to be, include one or more pairs of example input and corresponding desired output showing the pattern you want the generated text to follow. (Including one example in your prompt is called one-shot prompting, including two or more examples in your prompt is called few-shot prompting, and when your prompt has no examples, that's called zero-shot prompting.) - -Note that when you are prompting models that have been fine-tuned, you might not need examples. - -" -F839CD35991DF790F17239C9C63BFCAE701F3D65_6,F839CD35991DF790F17239C9C63BFCAE701F3D65," Cue - -A cue is text at the end of the prompt that is likely to start the generated output on a desired path. (Remember, as much as it seems like the model is responding to your prompt, the model is really appending text to your prompt or continuing your prompt.) - -" -F839CD35991DF790F17239C9C63BFCAE701F3D65_7,F839CD35991DF790F17239C9C63BFCAE701F3D65," Tip 3: Include descriptive details - -The more guidance, the better. Experiment with including descriptive phrases related to aspects of your ideal result: content, style, and length. Including these details in your prompt can cause a more creative or more complete result to be generated. - -For example, you could improve upon the sample instruction given previously: - - - -* Original: ""List ideas for starting a dog-walking business"" -* Improved: ""List ideas for starting a large, wildly successful dog-walking business"" - - - -" -F839CD35991DF790F17239C9C63BFCAE701F3D65_8,F839CD35991DF790F17239C9C63BFCAE701F3D65," Example - -" -F839CD35991DF790F17239C9C63BFCAE701F3D65_9,F839CD35991DF790F17239C9C63BFCAE701F3D65," Before - -In this image, you can see a prompt with the original, simple instruction. This prompt doesn't produce great results. - -![Example prompt text with just a simple instruction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fm-prompt-lab-prompt-too-simple.png) - -" -F839CD35991DF790F17239C9C63BFCAE701F3D65_10,F839CD35991DF790F17239C9C63BFCAE701F3D65," After - -In this image, you can see all the prompt components: instruction (complete with descriptive details), context, example, and cue. This prompt produces a much better result. - -![Example prompt text with an instruction, context, an example, and a cue](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fm-prompt-lab-prompt-components.png) - -You can experiment with this prompt in the Prompt Lab yourself: - -Model: gpt-neox-20b - -Decoding: Sampling - - - -* Temperature: 0.7 -* Top P: 1 -* Top K: 50 -* Repetition penalty: 1.02 - - - -" -F839CD35991DF790F17239C9C63BFCAE701F3D65_11,F839CD35991DF790F17239C9C63BFCAE701F3D65,"Stopping criteria: - - - -* Stop sequence: Two newline characters -* Min tokens: 0 -* Max tokens: 80 - - - -" -F839CD35991DF790F17239C9C63BFCAE701F3D65_12,F839CD35991DF790F17239C9C63BFCAE701F3D65,"Prompt text: - -Copy this prompt text and paste it into the freeform prompt editor in Prompt Lab, then click Generate to see a result. - -With no random seed specified, results will vary each time you submit the prompt. - -Based on the following industry research, suggest ideas for starting a large, wildly -successful dog-walking business. - -" -F839CD35991DF790F17239C9C63BFCAE701F3D65_13,F839CD35991DF790F17239C9C63BFCAE701F3D65,"The most successful dog-walking businesses cater to owners' needs and desires while -also providing great care to the dogs. For example, owners want flexible hours, a -shuttle to pick up and drop off dogs at home, and personalized services, such as -custom meal and exercise plans. Consider too how social media has permeated our lives. -Web-enabled interaction provide images and video that owners will love to share online, -" -F839CD35991DF790F17239C9C63BFCAE701F3D65_14,F839CD35991DF790F17239C9C63BFCAE701F3D65,"Ideas for starting a lemonade business: -- Set up a lemonade stand -- Partner with a restaurant -- Get a celebrity to endorse the lemonade - -Ideas for starting a large, wildly successful dog-walking business: - -" -F839CD35991DF790F17239C9C63BFCAE701F3D65_15,F839CD35991DF790F17239C9C63BFCAE701F3D65," Learn more - - - -* [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html) -* [Avoiding hallucinations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-hallucinations.html) -* [Generating accurate output](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-factual-accuracy.html) - - - -Parent topic:[Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) -" -6049D5AA5DE41309E6281534A464ABD6898A758C_0,6049D5AA5DE41309E6281534A464ABD6898A758C," Building reusable prompts - -Prompt engineering to find effective prompts for a model takes time and effort. Stretch the benefits of your work by building prompts that you can reuse and share with others. - -A great way to add flexibility to a prompt is to add prompt variables. A prompt variable is a placeholder keyword that you include in the static text of your prompt at creation time and replace with text dynamically at run time. - -" -6049D5AA5DE41309E6281534A464ABD6898A758C_1,6049D5AA5DE41309E6281534A464ABD6898A758C," Using variables to change prompt text dynamically - -Variables help you to generalize a prompt so that it can be reused more easily. - -For example, a prompt for a generative task might contain the following static text: - -Write a story about a dog. - -If you replace the text dog with a variable that is named {animal}, you add support for dynamic content to the prompt. - -Write a story about a {animal}. - -With the variable {animal}, the text can still be used to prompt the model for a story about a dog. But now it can be reused to ask for a story about a cat, a mouse, or another animal, simply by swapping the value that is specified for the {animal} variable. - -" -6049D5AA5DE41309E6281534A464ABD6898A758C_2,6049D5AA5DE41309E6281534A464ABD6898A758C," Creating prompt variables - -To create a prompt variable, complete the following steps: - - - -1. From the Prompt Lab, review the text in your prompt for words or phrases that, when converted to a variable, will make the prompt easier to reuse. -2. Click the Prompt variables icon (![{#}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/parameter.svg)) at the start of the page. - -The Prompt variables panel is displayed where you can add variable name-and-value pairs. -3. Click New variable. -4. Click to add a variable name, tab to the next field, and then add a default value. - -The variable name can contain alphanumeric characters or an underscore (_), but cannot begin with a number. - -The default value for the variable is a fallback value; it is used every time that the prompt is submitted, unless someone overwrites the default value by specifying a new value for the variable. -5. Repeat the previous step to add more variables. - -The following table shows some examples of the types of variables that you might want to add. - -| Variable name | Default value | |---------------|---------------| | country | Ireland | | city | Boston | | project | Project X | | company | IBM | -6. Replace static text in the prompt with your variables. - -Select the word or phrase in the prompt that you want to replace, and then click the Prompt variables icon (![{#}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/parameter.svg)) within the text box to see a list of available variables. Click the variable that you want to use from the list. - -The variable replaces the selected text. It is formatted with the syntax {variable name}, where the variable name is surrounded by braces. - -If your static text already contains variables that are formatted with braces, they are ignored unless prompt variables of the same name exist. -" -6049D5AA5DE41309E6281534A464ABD6898A758C_3,6049D5AA5DE41309E6281534A464ABD6898A758C,"7. To specify a value for a variable at run time, open the Prompt variables panel, click Preview, and then add a value for the variable. - -You can also change the variable value from the edit view of the Prompt variables panel, but the value you specify will become the new default value. - - - -When you find a set of prompt static text, prompt variables, and prompt engineering parameters that generates the results you want from a model, save the prompt as a prompt template asset. After you save the prompt template asset, you can reuse the prompt or share it with collaborators in the current project. For more information, see [Saving prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-save.html). - -" -6049D5AA5DE41309E6281534A464ABD6898A758C_4,6049D5AA5DE41309E6281534A464ABD6898A758C," Examples of reusing prompts - -The following examples help illustrate ways that using prompt variables can add versatility to your prompts. - - - -* [Thank you note example](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html?context=cdpaas&locale=enthank-you-example) -* [Devil's advocate example](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html?context=cdpaas&locale=endevil-example) - - - -" -6049D5AA5DE41309E6281534A464ABD6898A758C_5,6049D5AA5DE41309E6281534A464ABD6898A758C," Thank you note example - -Replace static text in the Thank you note generation built-in sample prompt with variables to make the prompt reusable. - -To add versatility to a built-in prompt, complete the following steps: - - - -1. From the Prompt Lab, click Sample prompts to list the built-in sample prompts. From the Generation section, click Thank you note generation. - -The input for the built-in sample prompt is added to the prompt editor and the flan-ul2-20b model is selected. - -Write a thank you note for attending a workshop. - -Attendees: interns -Topic: codefest, AI -Tone: energetic -2. Review the text for words or phrases that make good variable candidates. - -In this example, if the following words are replaced, the prompt meaning will change: - - - -* workshop -* interns -* codefest -* AI -* energetic - - - -3. Create a variable to represent each word in the list. Add the current value as the default value for the variable. - -| Variable name | Value | |---------------|---------------| | event | workshop | | attendees | interns | | topic1 | codefest | | topic2 | AI | | tone | energetic | -4. Click Preview to review the variables that you added. -5. Update the static prompt text to use variables in place of words. - -Write a thank you note for attending a {event}. - -Attendees: {attendees} -Topic: {topic1}, {topic2} -Tone: {tone} - -![Screenshot that shows static text in the prompt editor being replaced with variables.](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fm-prompt-var-replacement.png) - -The original meaning of the prompt is maintained. -6. Now, change the values of the variables to change the meaning of the prompt. - -From the Fill in prompt variables view of the prompt variables panel, add values for the variables. - -" -6049D5AA5DE41309E6281534A464ABD6898A758C_6,6049D5AA5DE41309E6281534A464ABD6898A758C,"| Variable name | Value | |---------------|---------------| | event | human resources presentation | | attendees | expecting parents | | topic1 | resources for new parents | | topic2 | parental leave | | tone | supportive | - -You effectively converted the original prompt into the following prompt: - -Write a thank you note for attending a human resources presentation. - -Attendees: expecting parents -Topic: resources for new parents, parental leave -Tone: supportive - -Click Generate to see how the model responds. -7. Swap the values for the variables to reuse the same prompt again to generate thank you notes for usability test attendees. - -| Variable name | Value | |---------------|-------| | event | usability test | | attendees | user volunteers | | topic1 | testing out new features | | topic2 | sharing early feedback | | tone | appreciative | - -Click Generate to see how the model responds. - - - -" -6049D5AA5DE41309E6281534A464ABD6898A758C_7,6049D5AA5DE41309E6281534A464ABD6898A758C," Devil's advocate example - -Use prompt variables to reuse effective examples that you devise for a prompt. - -You can guide a foundation model to answer in an expected way by adding a few examples that establish a pattern for the model to follow. This kind of prompt is called a few-shot prompt. Inventing good examples for a prompt requires imagination and testing and can be time-consuming. If you successfully create a few-shot prompt that proves to be effective, you can make it reusable by adding prompt variables. - -Maybe you want to use the granite-13b-instruct-v1 model to help you consider risks or problems that might arise from an action or plan under consideration. - -For example, the prompt might have the following instruction and examples: - -You are playing the role of devil's advocate. Argue against the proposed plans. List 3 detailed, unique, compelling reasons why moving forward with the plan would be a bad choice. Consider all types of risks. - -Plan we are considering: -Extend our store hours. -Three problems with this plan are: -1. We'll have to pay more for staffing. -2. Risk of theft increases late at night. -3. Clerks might not want to work later hours. - -Plan we are considering: -Open a second location for our business. -Three problems with this plan are: -1. Managing two locations will be more than twice as time-consuming than managed just one. -2. Creating a new location doesn't guarantee twice as many customers. -3. A new location means added real estate, utility, and personnel expenses. - -Plan we are considering: -Refreshing our brand image by creating a new logo. -Three problems with this plan are: - -You can reuse the prompt by completing the following steps: - - - -1. Replace the text that describes the action that you are considering with a variable. - -For example, you can add the following variable: - -| Variable name | Default value | |---------------|---------------| | plan | Refreshing our brand image by creating a new logo. | -2. Replace the static text that defines the plan with the {plan} variable. - -" -6049D5AA5DE41309E6281534A464ABD6898A758C_8,6049D5AA5DE41309E6281534A464ABD6898A758C,"You are playing the role of devil's advocate. Argue against the proposed plans. List 3 detailed, unique, compelling reasons why moving forward with the plan would be a bad choice. Consider all types of risks. - -Plan we are considering: -Extend our store hours. -Three problems with this plan are: -1. We'll have to pay more for staffing. -2. Risk of theft increases late at night. -3. Clerks might not want to work later hours. - -Plan we are considering: -Open a second location for our business. -Three problems with this plan are: -1. Managing two locations will be more than twice as time-consuming than managed just one. -2. Creating a new location doesn't guarantee twice as many customers. -3. A new location means added real estate, utility, and personnel expenses. - -Plan we are considering: -{plan} -Three problems with this plan are: - -Now you can use the same prompt to prompt the model to brainstorm about other actions. -3. Change the text in the {plan} variable to describe a different plan, and then click Generate to send the new input to the model. - - - -Parent topic:[Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) -" -B1AF301F18E6444DA2842CC71F9AC38505EE5E1F_0,B1AF301F18E6444DA2842CC71F9AC38505EE5E1F," Foundation models Python library - -You can prompt foundation models in IBM watsonx.ai programmatically by using the Python library. - -The Watson Machine Learning Python library is a publicly available library that you can use to work with Watson Machine Learning services. The Watson Machine Learning service hosts the watsonx.ai foundation models. - -" -B1AF301F18E6444DA2842CC71F9AC38505EE5E1F_1,B1AF301F18E6444DA2842CC71F9AC38505EE5E1F," Using the Python library - -After you create a prompt in the Prompt Lab, you can save the prompt as a notebook, and then edit the notebook. Using the generated notebook as a starting point is useful because it handles the initial setup steps, such as getting credentials and the project ID information for you. - -If you want to work with the models directly from a notebook, you can do so by using the Watson Machine Learning Python library. - -The ibm-watson-machine-learning Python library is publicly available on PyPI from the url: [https://pypi.org/project/ibm-watson-machine-learning/](https://pypi.org/project/ibm-watson-machine-learning/). However, you can install it in your development environment by using the following command: - -pip install ibm-watson-machine-learning - -If you installed the library before, include the -U parameter to ensure that you have the latest version. - -pip install -U ibm-watson-machine-learning - -For more information about the available methods for working with foundation models, see [Foundation models Python library](https://ibm.github.io/watson-machine-learning-sdk/foundation_models.html). - -You need to take some steps before you can use the Python library: - - - -* [Setting up credentials](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-credentials.html) -* [Looking up your project ID](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html?context=cdpaas&locale=enproject-id) - - - -" -B1AF301F18E6444DA2842CC71F9AC38505EE5E1F_2,B1AF301F18E6444DA2842CC71F9AC38505EE5E1F," Looking up your project ID - -To prompt foundation models in IBM watsonx.ai programmatically, you need to pass the identifier (ID) of a project that has an instance of IBM Watson Machine Learning associated with it. - -To get the ID of a project, complete the following steps: - - - -1. Navigate to the project in the watsonx web console, open the project, and then click the Manage tab. -2. Copy the project ID from the Details section of the General page. - - - -" -B1AF301F18E6444DA2842CC71F9AC38505EE5E1F_3,B1AF301F18E6444DA2842CC71F9AC38505EE5E1F," Examples - -The following examples show you how to use the library to perform a few basic tasks in a notebook. - -" -B1AF301F18E6444DA2842CC71F9AC38505EE5E1F_4,B1AF301F18E6444DA2842CC71F9AC38505EE5E1F," Example 1: List available foundation models - -You can view [ModelTypes](https://ibm.github.io/watson-machine-learning-sdk/foundation_models.htmlibm_watson_machine_learning.foundation_models.utils.enums.ModelTypes) to see available foundation models. - -" -B1AF301F18E6444DA2842CC71F9AC38505EE5E1F_5,B1AF301F18E6444DA2842CC71F9AC38505EE5E1F,"Python code - -from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes -import json - -print( json.dumps( ModelTypes._member_names_, indent=2 ) ) - -" -B1AF301F18E6444DA2842CC71F9AC38505EE5E1F_6,B1AF301F18E6444DA2842CC71F9AC38505EE5E1F,"Sample output - -[ -""FLAN_T5_XXL"", -""FLAN_UL2"", -""MT0_XXL"", -... -] - -" -B1AF301F18E6444DA2842CC71F9AC38505EE5E1F_7,B1AF301F18E6444DA2842CC71F9AC38505EE5E1F," Example: View details of a foundation model - -You can view details, such as a short description and foundation model limits, by using [get_details()](https://ibm.github.io/watson-machine-learning-sdk/foundation_models.htmlibm_watson_machine_learning.foundation_models.Model.get_details). - -" -B1AF301F18E6444DA2842CC71F9AC38505EE5E1F_8,B1AF301F18E6444DA2842CC71F9AC38505EE5E1F,"Python code - -from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes -from ibm_watson_machine_learning.foundation_models import Model -import json - -my_credentials = { -""url"" : ""https://us-south.ml.cloud.ibm.com"", -""apikey"" : {my-IBM-Cloud-API-key} -} - -model_id = ModelTypes.MPT_7B_INSTRUCT2 -gen_parms = None -project_id = {my-project-ID} -space_id = None -verify = False - -model = Model( model_id, my_credentials, gen_parms, project_id, space_id, verify ) - -model_details = model.get_details() - -print( json.dumps( model_details, indent=2 ) ) - -Note:Replace {my-IBM-Cloud-API-key} and {my-project-ID} with your API key and project ID. - -" -B1AF301F18E6444DA2842CC71F9AC38505EE5E1F_9,B1AF301F18E6444DA2842CC71F9AC38505EE5E1F,"Sample output - -{ -""model_id"": ""ibm/mpt-7b-instruct2"", -""label"": ""mpt-7b-instruct2"", -""provider"": ""IBM"", -""source"": ""Hugging Face"", -""short_description"": ""MPT-7B is a decoder-style transformer pretrained from -scratch on 1T tokens of English text and code. This model was trained by IBM."", -... -} - -" -B1AF301F18E6444DA2842CC71F9AC38505EE5E1F_10,B1AF301F18E6444DA2842CC71F9AC38505EE5E1F," Example: Prompt a foundation model with default parameters - -Prompt a foundation model to generate a response. - -" -B1AF301F18E6444DA2842CC71F9AC38505EE5E1F_11,B1AF301F18E6444DA2842CC71F9AC38505EE5E1F,"Python code - -from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes -from ibm_watson_machine_learning.foundation_models import Model -import json - -my_credentials = { -""url"" : ""https://us-south.ml.cloud.ibm.com"", -""apikey"" : {my-IBM-Cloud-API-key} -} - -model_id = ModelTypes.FLAN_T5_XXL -gen_parms = None -project_id = {my-project-ID} -space_id = None -verify = False - -model = Model( model_id, my_credentials, gen_parms, project_id, space_id, verify ) - -prompt_txt = ""In today's sales meeting, we "" -gen_parms_override = None - -generated_response = model.generate( prompt_txt, gen_parms_override ) - -print( json.dumps( generated_response, indent=2 ) ) - -Note:Replace {my-IBM-Cloud-API-key} and {my-project-ID} with your API key and project ID. - -" -B1AF301F18E6444DA2842CC71F9AC38505EE5E1F_12,B1AF301F18E6444DA2842CC71F9AC38505EE5E1F,"Sample output - -{ -""model_id"": ""google/flan-t5-xxl"", -""created_at"": ""2023-07-27T03:40:17.575Z"", -""results"": [ -{ -""generated_text"": ""will discuss the new product line."", -""generated_token_count"": 8, -""input_token_count"": 10, -""stop_reason"": ""EOS_TOKEN"" -} -], -... -} - -" -B1AF301F18E6444DA2842CC71F9AC38505EE5E1F_13,B1AF301F18E6444DA2842CC71F9AC38505EE5E1F," Learn more - - - -* [Credentials for prompting foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-credentials.html) - - - -Parent topic:[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html) -" -752D982C2F694FFEE2A312CEA6ADF22C2384D4B2_0,752D982C2F694FFEE2A312CEA6ADF22C2384D4B2," Retrieval-augmented generation - -You can use foundation models in IBM watsonx.ai to generate factually accurate output that is grounded in information in a knowledge base by applying the retrieval-augmented generation pattern. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -752D982C2F694FFEE2A312CEA6ADF22C2384D4B2_1,752D982C2F694FFEE2A312CEA6ADF22C2384D4B2,"Video chapters -[ 0:08 ] Scenario description -[ 0:27 ] Overview of pattern -[ 1:03 ] Knowledge base -[ 1:22 ] Search component -[ 1:41 ] Prompt augmented with context -[ 2:13 ] Generating output -[ 2:31 ] Full solution -[ 2:55 ] Considerations for search -[ 3:58 ] Considerations for prompt text -[ 5:01 ] Considerations for explainability - -" -752D982C2F694FFEE2A312CEA6ADF22C2384D4B2_2,752D982C2F694FFEE2A312CEA6ADF22C2384D4B2," Providing context in your prompt improves accuracy - -Foundation models can generate output that is factually inaccurate for various reasons. One way to improve the accuracy of generated output is to provide the needed facts as context in your prompt text. - -" -752D982C2F694FFEE2A312CEA6ADF22C2384D4B2_3,752D982C2F694FFEE2A312CEA6ADF22C2384D4B2," Example - -The following prompt includes context to establish some facts: - -Aisha recently painted the kitchen yellow, which is her favorite color. - -Aisha's favorite color is - -Unless Aisha is a famous person whose favorite color was mentioned in many online articles that are included in common pretraining data sets, without the context at the beginning of the prompt, no foundation model could reliably generate the correct completion of the sentence at the end of the prompt. - -If you prompt a model with text that includes fact-filled context, then the output the model generates is more likely to be accurate. For more details, see [Generating factually accurate output](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-factual-accuracy.html). - -" -752D982C2F694FFEE2A312CEA6ADF22C2384D4B2_4,752D982C2F694FFEE2A312CEA6ADF22C2384D4B2," The retrieval-augmented generation pattern - -You can scale out the technique of including context in your prompts by using information in a knowledge base. - -The following diagram illustrates the retrieval-augmented generation pattern. Although the diagram shows a question-answering example, the same workflow supports other use cases. - -![Diagram that shows adding search results to the input for retrieval-augmented generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fm-rag.png) - -The retrieval-augmented generation pattern involves the following steps: - - - -1. Search in your knowledge base for content that is related to the user's input. -2. Pull the most relevant search results into your prompt as context and add an instruction, such as “Answer the following question by using only information from the following passages.” -3. Only if the foundation model that you're using is not instruction-tuned: Add a few examples that demonstrate the expected input and output format. -4. Send the combined prompt text to the model to generate output. - - - -" -752D982C2F694FFEE2A312CEA6ADF22C2384D4B2_5,752D982C2F694FFEE2A312CEA6ADF22C2384D4B2," The origin of retrieval-augmented generation - -The term retrieval-augmented generation (RAG) was introduced in this paper: [Retrieval-augmented generation for knowledge-intensive NLP tasks](https://arxiv.org/abs/2005.11401). - -> We build RAG models where the parametric memory is a pre-trained seq2seq transformer, and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. - -In that paper, the term ""RAG models"" refers to a specific implementation of a retriever (a specific query encoder and vector-based document search index) and a generator (a specific pre-trained, generative language model). However, the basic search-and-generate approach can be generalized to use different retriever components and foundation models. - -" -752D982C2F694FFEE2A312CEA6ADF22C2384D4B2_6,752D982C2F694FFEE2A312CEA6ADF22C2384D4B2," Knowledge base - -The knowledge base can be any collection of information-containing artifacts, such as: - - - -* Process information in internal company wiki pages -* Files in GitHub (in any format: Markdown, plain text, JSON, code) -* Messages in a collaboration tool -* Topics in product documentation -* Text passages in a database like Db2 -* A collection of legal contracts in PDF files -* Customer support tickets in a content management system - - - -" -752D982C2F694FFEE2A312CEA6ADF22C2384D4B2_7,752D982C2F694FFEE2A312CEA6ADF22C2384D4B2," Retriever - -The retriever can be any combination of search and content tools that reliably returns relevant content from the knowledge base: - - - -* Search tools like IBM Watson Discovery -* Search and content APIs (GitHub has APIs like this, for example) -* Vector databases (such as chromadb) - - - -" -752D982C2F694FFEE2A312CEA6ADF22C2384D4B2_8,752D982C2F694FFEE2A312CEA6ADF22C2384D4B2," Generator - -The generator component can use any model in watsonx.ai, whichever one suits your use case, prompt format, and content you are pulling in for context. - -" -752D982C2F694FFEE2A312CEA6ADF22C2384D4B2_9,752D982C2F694FFEE2A312CEA6ADF22C2384D4B2," Examples - -The following examples demonstrate how to apply the retrieval-augmented generation pattern. - - - -Retrieval-augmented generation examples - - Example Description Link - - Simple introduction Uses a small knowledge base and a simple search component to demonstrate the basic pattern. [Introduction to retrieval-augmented generation](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/fed7cf6b-1c48-4d71-8c04-0fce0e000d43) - Introduction to RAG with Discovery Contains the steps and code to demonstrate the retrieval-augmented generation pattern in IBM watsonx.ai by using IBM Watson Discovery as the search component. [Simple introduction to retrieval-augmented generation with watsonx.ai and Discovery](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/ba4a9e35-2091-49d3-9364-a1284afab7ec) - Real-world example The watsonx.ai documentation has a search-and-answer feature that can answer basic what-is questions by using the topics in the documentation as a knowledge base. [Answering watsonx.ai questions using a foundation model](https://ibm.biz/watsonx-llm-search) - Example with LangChain Contains the steps and code to demonstrate support of retrieval-augumented generation with LangChain in watsonx.ai. It introduces commands for data retrieval, knowledge base building and querying, and model testing. [Use watsonx and LangChain to answer questions by using RAG](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/d3a5f957-a93b-46cd-82c1-c8d37d4f62c6) -" -752D982C2F694FFEE2A312CEA6ADF22C2384D4B2_10,752D982C2F694FFEE2A312CEA6ADF22C2384D4B2," Example with LangChain and an Elasticsearch vector database Demonstrates how to use LangChain to apply an embedding model to documents in an Elasticsearch vector database. The notebook then indexes and uses the data store to generate answers to incoming questions. [Use watsonx, Elasticsearch, and LangChain to answer questions (RAG)](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/ebeb9fc0-9844-4838-aff8-1fa1997d0c13?context=wx&audience=wdp) - Example with the Elasticsearch Python SDK Demonstrates how to use the Elasticsearch Python SDK to apply an embedding model to documents in an Elasticsearch vector database. The notebook then indexes and uses the data store to generate answers to incoming questions. [Use watsonx, and Elasticsearch Python SDK to answer questions (RAG)](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/bdbc8ad4-9c1f-460f-99ee-5c3a1f374fa7?context=wx&audience=wdp) - Example with LangChain and a SingleStore database Shows you how to apply retrieval-augmented generation to large language models in watsonx by using the SingleStore database. [RAG with SingleStore and watsonx](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/daf645b2-281d-4969-9292-5012f3b18215) - - - -" -752D982C2F694FFEE2A312CEA6ADF22C2384D4B2_11,752D982C2F694FFEE2A312CEA6ADF22C2384D4B2," Learn more - -Try these tutorials: - - - -* [Prompt a foundation model by using Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html) -* [Prompt a foundation model with the retrieval-augmented generation pattern](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html) - - - -Parent topic:[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html) -" -38FB0908B90954D96CEFF54BA975DE832286A0A7_0,38FB0908B90954D96CEFF54BA975DE832286A0A7," Security and privacy for foundation models - -Your work with foundation models is secure and private, in the same way that all your work on watsonx is secure and private. - -Foundation models that you interact with through watsonx are hosted in IBM Cloud. Your data is not sent to any third-party or open source platforms. - -The foundation model prompts that you create and engineer in the Prompt Lab or send by using the API are accessible only by you. Your prompts are used only by you and are submitted only to models you choose. Your prompt text is not accessible or used by IBM or any other person or organization. - -You control whether prompts, model choices, and prompt engineering parameter settings are saved. When saved, your data is stored in a dedicated IBM Cloud Object Storage bucket that is associated with your project. - -Data that is stored in your project storage bucket is encrypted at rest and in motion. You can delete your stored data at any time. - -" -38FB0908B90954D96CEFF54BA975DE832286A0A7_1,38FB0908B90954D96CEFF54BA975DE832286A0A7," Privacy of text in Prompt Lab during a session - -Text that you submit by clicking Generate from the prompt editor in Prompt Lab is reformatted as tokens, and then submitted to the foundation model you choose. The submitted message is encrypted in transit. - -Your prompt text is not saved unless you choose to save your work. - -Unsaved prompt text is kept in the web page until the page is refreshed, at which time the prompt text is deleted. - -" -38FB0908B90954D96CEFF54BA975DE832286A0A7_2,38FB0908B90954D96CEFF54BA975DE832286A0A7," Privacy and security of saved work - -How saved work is managed differs based on the asset type that you choose to save: - - - -* Prompt template asset: The current prompt text, model, prompt engineering parameters, and any prompt variables are saved as a prompt template asset and stored in the IBM Cloud Object Storage bucket that is associated with your project. Prompt template assets are retained until they are deleted or changed by you. When autosave is on, if you open a saved prompt and change the text, the text in the saved prompt template asset is replaced. -* Prompt session asset: A prompt session asset includes the prompt input text, model, prompt engineering parameters, and model output. After you create the prompt session asset, prompt information for up to 500 submitted prompts is stored in the project storage bucket where it is retained for 30 days. -* Notebook asset: Your prompt, model, prompt engineering parameters, and any prompt variables are formatted as Python code and stored as a notebook asset in the project storage bucket. - - - -Only people with Admin or Editor role access to the project or the project storage bucket can view saved assets. You control who can access your project and its associated Cloud Object Storage bucket. - - - -* For more information about asset security, see [Data security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html). -* For more information about managing project access, see [Project collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html) - - - -" -38FB0908B90954D96CEFF54BA975DE832286A0A7_3,38FB0908B90954D96CEFF54BA975DE832286A0A7," Logging and text in the Prompt Lab - -Nothing that you add to the prompt editor or submit to a model from the Prompt Lab or by using the API is logged by IBM. Messages that are generated by foundation models and returned to the Prompt Lab also are not logged. - -" -38FB0908B90954D96CEFF54BA975DE832286A0A7_4,38FB0908B90954D96CEFF54BA975DE832286A0A7," Ownership of your content and foundation model output - -Content that you upload into watsonx is yours. - -IBM does not use the content that you upload to watsonx or the output generated by a foundation model to further train or improve any IBM developed models. - -IBM does not claim to have any ownership rights to any foundation model outputs. You remain solely responsible for your content and the output of any foundation model. - -" -38FB0908B90954D96CEFF54BA975DE832286A0A7_5,38FB0908B90954D96CEFF54BA975DE832286A0A7," Learn more - - - -* [Watsonx terms](https://www.ibm.com/support/customer/csol/terms/?id=i126-9640&lc=endetail-document) -* [IBM Watson Machine Learning terms](http://www.ibm.com/support/customer/csol/terms/?id=i126-6883) -* [IBM Watson Studio terms](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747) - - - -Parent topic:[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html) -" -B193A2795BDEF17A5D204CDD18188A767E2FE7B7_0,B193A2795BDEF17A5D204CDD18188A767E2FE7B7," Tokens and tokenization - -A token is a collection of characters that has semantic meaning for a model. Tokenization is the process of converting the words in your prompt into tokens. - -You can monitor foundation model token usage in a project on the Environments page on the Resource usage tab. - -" -B193A2795BDEF17A5D204CDD18188A767E2FE7B7_1,B193A2795BDEF17A5D204CDD18188A767E2FE7B7," Converting words to tokens and back again - -Prompt text is converted to tokens before being processed by foundation models. - -The correlation between words and tokens is complex: - - - -* Sometimes a single word is broken into multiple tokens -* The same word might be broken into a different number of tokens, depending on context (such as: where the word appears, or surrounding words) -* Spaces, newline characters, and punctuation are sometimes included in tokens and sometimes not -* The way words are broken into tokens varies from language to language -* The way words are broken into tokens varies from model to model - - - -For a rough idea, a sentence that has 10 words could be 15 to 20 tokens. - -The raw output from a model is also tokens. In the Prompt Lab in IBM watsonx.ai, the output tokens from the model are converted to words to be displayed in the prompt editor. - -" -B193A2795BDEF17A5D204CDD18188A767E2FE7B7_2,B193A2795BDEF17A5D204CDD18188A767E2FE7B7," Example - -The following image shows how this sample input might be tokenized: - -> Tomatoes are one of the most popular plants for vegetable gardens. Tip for success: If you select varieties that are resistant to disease and pests, growing tomatoes can be quite easy. For experienced gardeners looking for a challenge, there are endless heirloom and specialty varieties to cultivate. Tomato plants come in a range of sizes. - -![Visualization of tokenization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fm-tokenization.png) - -Notice a few interesting points: - - - -* Some words are broken into multiple tokens and some are not -* The word ""Tomatoes"" is broken into multiple tokens at the beginning, but later ""tomatoes"" is all one token -* Spaces are sometimes included at the beginning of a word-token and sometimes spaces are a token all by themselves -* Punctuation marks are tokens - - - -" -B193A2795BDEF17A5D204CDD18188A767E2FE7B7_3,B193A2795BDEF17A5D204CDD18188A767E2FE7B7," Token limits - -Every model has an upper limit to the number of tokens in the input prompt plus the number of tokens in the generated output from the model (sometimes called context window length, context window, context length, or maximum sequence length.) In the Prompt Lab, an informational message shows how many tokens are used in a given prompt submission and the resulting generated output. - -In the Prompt Lab, you use the Max tokens parameter to specify an upper limit on the number of output tokens for the model to generate. The maximum number of tokens that are allowed in the output differs by model. For more information, see the Maximum tokens information in [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html). - -Parent topic:[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html) -" -96597F608C26E68BFC4BDCA45061400D63793523_0,96597F608C26E68BFC4BDCA45061400D63793523," Data formats for tuning foundation models - -Prepare a set of prompt examples to use to tune the model. The examples must contain the type of input that the model will need to process at run time and the appropriate output for the model to generate in response. - -You can add one file as training data. The maximum file size that is allowed is 200 MB. - -Prompt input-and-output example pairs are sometimes also referred to as samples or records. - -Follow these guidelines when you create your training data: - - - -* Add 100 to 1,000 labeled prompt examples to a file. Between 50 to 10,000 examples are allowed. -* Use one of the following formats: - - - -* JavaScript Object Notation (JSON) -* JSON Lines (JSONL) format - - - -* Each example must include one input and output pair. -* The language of the training data must be English. -* If the input or output text includes quotation marks, escape each quotation mark with a backslash(). For example, He said, ""Yes."". -* To represent a carriage return or line break, you can use a backslash followed by n (n) to represent the new line. For example, ...end of paragraph.nStart of new paragraph. - - - -You can control the number of tokens from the input and output that are used during training. If an input or output example from the training data is longer than the specified limit, it will be truncated. Only the allowed maximum number of tokens will be used by the experiment. For more information, see [Controlling the number of tokens used](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.htmltuning-tokens). - -How tokens are counted differs by model, which makes the number of tokens difficult to estimate. For language-based foundation models, you can think of 256 tokens as about 130—170 words and 128 tokens as about 65—85 words. To learn more about tokens, see [Tokens and tokenization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html). - -If you are using the model to classify data, follow these extra guidelines: - - - -" -96597F608C26E68BFC4BDCA45061400D63793523_1,96597F608C26E68BFC4BDCA45061400D63793523,"* Try to limit the number of class labels to 10 or fewer. -* Include an equal number of examples of each class type. - - - -You can use the Prompt Lab to craft examples for the training data. For more information, see [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html). - -" -96597F608C26E68BFC4BDCA45061400D63793523_2,96597F608C26E68BFC4BDCA45061400D63793523," JSON example - -The following example shows an excerpt from a training data file with labeled prompts for a classification task in JSON format. - -{ -[ -{ -""input"":""Message: When I try to log in, I get an error."", -""output"":""Class name: Problem"" -} -{ -""input"":""Message: Where can I find the plan prices?"", -""output"":""Class name: Question"" -} -{ -""input"":""Message: What is the difference between trial and paygo?"", -""output"":""Class name: Question"" -} -{ -""input"":""Message: The registration page crashed, and now I can't create a new account."", -""output"":""Class name: Problem"" -} -{ -""input"":""Message: What regions are supported?"", -""output"":""Class name: Question"" -} -{ -""input"":""Message: I can't remember my password."", -""output"":""Class name: Problem"" -} -{ -""input"":""Message: I'm having trouble registering for a new account."", -""output"":""Classname: Problem"" -} -{ -""input"":""Message: A teammate shared a service instance with me, but I can't access it. What's wrong?"", -""output"":""Class name: Problem"" -} -{ -""input"":""Message: What extra privileges does an administrator have?"", -""output"":""Class name: Question"" -} -{ -""input"":""Message: Can I create a service instance for data in a language other than English?"", -""output"":""Class name: Question"" -} -] -} - -" -96597F608C26E68BFC4BDCA45061400D63793523_3,96597F608C26E68BFC4BDCA45061400D63793523," JSONL example - -The following example shows an excerpt from a training data file with labeled prompts for a classification task in JSONL format. - -{""input"":""Message: When I try to log in, I get an error."",""output"":""Class name: Problem""} -{""input"":""Message: Where can I find the plan prices?"",""output"":""Class name: Question""} -{""input"":""Message: What is the difference between trial and paygo?"",""output"":""Class name: Question""} -{""input"":""Message: The registration page crashed, and now I can't create a new account."",""output"":""Class name: Problem""} -{""input"":""Message: What regions are supported?"",""output"":""Class name: Question""} -{""input"":""Message: I can't remember my password."",""output"":""Class name: Problem""} -{""input"":""Message: I'm having trouble registering for a new account."",""output"":""Classname: Problem""} -{""input"":""Message: A teammate shared a service instance with me, but I can't access it. What's wrong?"",""output"":""Class name: Problem""} -{""input"":""Message: What extra privileges does an administrator have?"",""output"":""Class name: Question""} -{""input"":""Message: Can I create a service instance for data in a language other than English?"",""output"":""Class name: Question""} - -Parent topic:[Tuning a model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.html) -" -FC8DBF139A485E98914CBB73B8BA684B283AE983_0,FC8DBF139A485E98914CBB73B8BA684B283AE983," Deploying a tuned foundation model - -Deploy a tuned model so you can add it to a business workflow and start to use foundation models in a meaningful way. - -" -FC8DBF139A485E98914CBB73B8BA684B283AE983_1,FC8DBF139A485E98914CBB73B8BA684B283AE983," Before you begin - -The tuning experiment that you used to tune the foundation model must be finished. For more information, see [Tuning a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.html). - -" -FC8DBF139A485E98914CBB73B8BA684B283AE983_2,FC8DBF139A485E98914CBB73B8BA684B283AE983," Deploy a tuned model - -To deploy a tuned model, complete the following steps: - - - -1. From the navigation menu, expand Projects, and then click All projects. -2. Click to open your project. -3. From the Assets tab, click the Experiments asset type. -4. Click to open the tuning experiment for the model you want to deploy. -5. From the Tuned models list, find the completed tuning experiment, and then click New deployment. -6. Name the tuned model. - -The name of the tuning experiment is used as the tuned model name if you don't change it. The name has a number after it in parentheses, which counts the deployments. The number starts at one and is incremented by one each time you deploy this tuning experiment. -7. Optional: Add a description and tags. -8. In the Target deployment space field, choose a deployment space. - -The deployment space must be associated with a machine learning instance that is in the same account as the project where the tuned model was created. - -If you don't have a deployment space, choose Create a new deployment space, and then follow the steps in [Creating deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-create.html). - -For more information, see [What is a deployment space?](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-deploy.html?context=cdpaas&locale=endeployment-space) -9. In the Deployment serving name field, add a label for the deployment. - -The serving name is used in the URL for the API endpoint that identifies your deployment. Adding a name is helpful because the human-readable name that you add replaces a long, system-generated ID that is assigned otherwise. - -The serving name also abstracts the deployment from its service instance details. Applications refer to this name that allows for the underlying service instance to be changed without impacting users. - -" -FC8DBF139A485E98914CBB73B8BA684B283AE983_3,FC8DBF139A485E98914CBB73B8BA684B283AE983,"The name can have up to 36 characters. The supported characters are [a-z,0-9,_]. - -The name must be unique across the IBM Cloud region. You might be prompted to change the serving name if the name you choose is already in use. -10. Tip: Select View deployment in deployment space after creating. Otherwise, you need to take more steps to find your deployed model. -11. Click Deploy. - - - -After the tuned model is promoted to the deployment space and deployed, a copy of the tuned model is stored in your project as a model asset. - -" -FC8DBF139A485E98914CBB73B8BA684B283AE983_4,FC8DBF139A485E98914CBB73B8BA684B283AE983," What is a deployment space? - -When you create a new deployment, a tuned model is promoted to a deployment space, and then deployed. A deployment space is separate from the project where you create the asset. A deployment space is associated with the following services that it uses to deploy assets: - - - -* Watson Machine Learning: A product with tools and services you can use to build, train, and deploy machine learning models. This service hosts your turned model. -* IBM Cloud Object Storage: A secure platform for storing structured and unstructured data. Your deployed model asset is stored in a Cloud Object Storage bucket that is associated with your project. - - - -For more information, see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html). - -" -FC8DBF139A485E98914CBB73B8BA684B283AE983_5,FC8DBF139A485E98914CBB73B8BA684B283AE983," Testing the deployed model - -The true test of your tuned model is how it responds to input that follows tuned-for patterns. - -You can test the tuned model from one of the following pages: - - - -* Prompt Lab: A tool with an intuitive user interface for prompting foundation models. You can customize the prompt parameters for each input. You can also save the prompt as a notebook so you can interact with it programmatically. -* Deployment space: Useful when you want to test your model programmatically. From the API Reference tab, you can find information about the available endpoints and code examples. You can also submit input as text and choose to return the output or in a stream, as the output is generated. However, you cannot change the prompt parameters for the input text. - - - -To test your tuned model, complete the following steps: - - - -1. From the navigation menu, select Deployments. -2. Click the name of the deployment space where you deployed the tuned model. -3. Click the name of your deployed model. -4. Follow the appropriate steps based on where you want to test the tuned model: - - - -* From Prompt Lab: - - - -1. Click Open in Prompt Lab, and then choose the project where you want to work with the model. - -Prompt Lab opens and the tuned model that you deployed is selected from the Model field. -2. In the Try section, add a prompt to the Input field that follows the prompt pattern that your tuned model is trained to recognize, and then click Generate. - - - -For more information about how to use the prompt editor, see [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html). -* From the deployment space: - - - -1. Click the Test tab. -2. In the Input data field, add a prompt that follows the prompt pattern that your tuned model is trained to recognize, and then click Generate. - -You can click View parameter settings to see the prompt parameters that are applied to the model by default. To change the prompt parameters, you must go to the Prompt Lab. - - - - - - - -" -FC8DBF139A485E98914CBB73B8BA684B283AE983_6,FC8DBF139A485E98914CBB73B8BA684B283AE983," Learn more - - - -* [Tuning a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.html) -* [Security and privacy for foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html) - - - -Parent topic:[Deploying foundation model assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-found-assets.html) -" -15A014C514B00FF78C689585F393E21BAE922DB2_0,15A014C514B00FF78C689585F393E21BAE922DB2," Methods for tuning foundation models - -Learn more about different tuning methods and how they work. - -Models can be tuned in the following ways: - - - -* Fine-tuning: Changes the parameters of the underlying foundation model to guide the model to generate output that is optimized for a task. - -Note: You currently cannot fine-tune models in Tuning Studio. -* Prompt-tuning: Adjusts the content of the prompt that is passed to the model to guide the model to generate output that matches a pattern you specify. The underlying foundation model and its parameters are not edited. Only the prompt input is altered. - -When you prompt-tune a model, the underlying foundation model can be used to address different business needs without being retrained each time. As a result, you reduce computational needs and inference costs. - - - -" -15A014C514B00FF78C689585F393E21BAE922DB2_1,15A014C514B00FF78C689585F393E21BAE922DB2," How prompt-tuning works - -Foundation models are sensitive to the input that you give them. Your input, or how you prompt the model, can introduce context that the model will use to tailor its generated output. Prompt engineering to find the right prompt often works well. However, it can be time-consuming, error-prone, and its effectiveness can be restricted by the context window length that is allowed by the underlying model. - -Prompt-tuning a model in the Tuning Studio applies machine learning to the task of prompt engineering. Instead of adding words to the input itself, prompt-tuning is a method for finding a sequence of values that, when added as a prefix to the input text, improve the model's ability to generate the output you want. This sequence of values is called a prompt vector. - -Normally, words in the prompt are vectorized by the model. Vectorization is the process of converting text to tokens, and then to numbers defined by the model's tokenizer to identify the tokens. Lastly, the token IDs are encoded, meaning they are converted into a vector representation, which is the input format that is expected by the embedding layer of the model. Prompt-tuning bypasses the model's text-vectorization process and instead crafts a prompt vector directly. This changeable prompt vector is concatenated to the vectorized input text and the two are passed as one input to the embedding layer of the model. Values from this crafted prompt vector affect the word embedding weights that are set by the model and influence the words that the model chooses to add to the output. - -To find the best values for the prompt vector, you run a tuning experiment. You demonstrate the type of output that you want for a corresponding input by providing the model with input and output example pairs in training data. With each training run of the experiment, the generated output is compared to the training data output. Based on what it learns from differences between the two, the experiment adjusts the values in the prompt vector. After many runs through the training data, the model finds the prompt vector that works best. - -" -15A014C514B00FF78C689585F393E21BAE922DB2_2,15A014C514B00FF78C689585F393E21BAE922DB2,"You can choose to start the training process by providing text that is vectorized by the experiment. Or you can let the experiment use random values in the prompt vector. Either way, unless the initial values are exactly right, they will be changed repeatedly as part of the training process. Providing your own initialization text can help the experiment reach a good result more quickly. - -The result of the experiment is a tuned version of the underlying model. You submit input to the tuned model for inferencing and the model generates output that follows the tuned-for pattern. - -For more information about this tuning method, read the research paper named [The Power of Scale for Parameter-Efficient Prompt Tuning](https://arxiv.org/abs/2104.08691). - -" -15A014C514B00FF78C689585F393E21BAE922DB2_3,15A014C514B00FF78C689585F393E21BAE922DB2," Learn more - - - -* [Tuning parameters](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-parameters.html) - - - -Parent topic:[Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html) -" -51747F17F413F1F34CFD73D170DE392D874D03DD_0,51747F17F413F1F34CFD73D170DE392D874D03DD," Parameters for tuning foundation models - -Tuning parameters configure the tuning experiments that you use to tune the model. - -During the experiment, the tuning model repeatedly adjusts the structure of the prompt so that its predictions can get better over time. - -The following diagram illustrates the steps that occur during a tuning training experiment run. The parts of the experiment flow that you can configure are highlighted. These decision points correspond with experiment tuning parameters that you control. - -![Tuning experiment run process](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fm-tuning-training-experiment.png) - -The diagram shows the following steps of the experiment: - - - -1. Starts from the initialization method that you choose to use to initialize the prompt. - -If the initialization method parameter is set to text, then you must add the initialization text. -2. If specified, tokenizes the initialization text and converts it into a prompt vector. -3. Reads the training data, tokenizes it, and converts it into batches. - -The size of the batches is determined by the batch size parameter. -4. Sends input from the examples in the batch to the foundation model for the model to process and generate output. -5. Compares the model's output to the output from the training data that corresponds to the training data input that was submitted. Then, computes the loss gradient, which is the difference between the predicted output and the actual output from the training data. - -At some point, the experiment adjusts the prompt vector that is added to the input based on the performance of the model. When this adjustment occurs depends on how the Accumulation steps parameter is configured. -6. Adjustments are applied to the prompt vector that was initialized in Step 2. The degree to which the vector is changed is controlled by the Learning rate parameter. The edited prompt vector is added as a prefix to the input from the next example in the training data, and is submitted to the model as input. -7. The process repeats until all of the examples in all of the batches are processed. -8. The entire set of batches are processed again as many times as is specified in the Number of epochs parameter. - - - -" -51747F17F413F1F34CFD73D170DE392D874D03DD_1,51747F17F413F1F34CFD73D170DE392D874D03DD,"Note: No layer of the base foundation model is changed during this process. - -" -51747F17F413F1F34CFD73D170DE392D874D03DD_2,51747F17F413F1F34CFD73D170DE392D874D03DD," Parameter details - -The parameters that you change when you tune a model are related to the tuning experiment, not to the underlying foundation model. - - - -Table 1: Tuning parameters - - Parameter name Value options Default value Learn more - - Initialization method Random, Text Random [Initializing prompt tuning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-parameters.html?context=cdpaas&locale=eninitialize) - Initialization text None None [Initializing prompt tuning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-parameters.html?context=cdpaas&locale=eninitialize) - Batch size 1 - 16 16 [Segmenting the training data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-parameters.html?context=cdpaas&locale=ensegment) - Accumulation steps 1 - 128 16 [Segmenting the training data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-parameters.html?context=cdpaas&locale=ensegment) - Learning rate 0.01 - 0.5 0.3 [Managing the learning rate](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-parameters.html?context=cdpaas&locale=enlearning-rate) - Number of epochs (training cycles) 1 - 50 20 [Choosing the number of training runs to complete](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-parameters.html?context=cdpaas&locale=enruns) - - - -" -51747F17F413F1F34CFD73D170DE392D874D03DD_3,51747F17F413F1F34CFD73D170DE392D874D03DD," Segmenting the training data - -When an experiment runs, the experiment first breaks the training data into smaller batches, and then trains on one batch at a time. Each batch must fit in GPU memory to be processed. To reduce the amount of GPU memory that is needed, you can configure the tuning experiment to postpone making adjustments until more than one batch is processed. Tuning runs on a batch and its performance metrics are calculated, but the prompt vector isn't changed. Instead, the performance information is collected over some number of batches before the cumulative performance metrics are evaluated. - -Use the following parameters to control how the training data is segmented: - -Batch size Number of labeled examples (also known as samples) to process at one time. - -For example, for a data set with 1,000 examples and a batch size of 10, the data set is divided into 100 batches of 10 examples each. - -If the training data set is small, specify a smaller batch size to ensure that each batch has enough examples in it. - -Accumulation steps: Number of batches to process before the prompt vector is adjusted. - -For example, if the data set is divided into 100 batches and you set the accumulation steps value to 10, then the prompt vector is adjusted 10 times instead of 100 times. - -" -51747F17F413F1F34CFD73D170DE392D874D03DD_4,51747F17F413F1F34CFD73D170DE392D874D03DD," Initializing prompt tuning - -When you create an experiment, you can choose whether to specify your own text to serve as the initial prompt vector or let the experiment generate it for you. These new tokens start the training process either in random positions, or based on the embedding of a vocabulary or instruction that you specify in text. Studies show that as the size of the underlying model grows beyond 10 billion parameters, the initialization method that is used becomes less important. - -The choice that you make when you create the tuning experiment customizes how the prompt is initialized. - -Initialization method: Choose a method from the following options: - - - -* Text: The Prompt Tuning method is used where you specify the initialization text of the prompt yourself. -* Random: The Prompt Tuning method is used that allows the experiment to add values that are chosen at random to include with the prompt. - - - -Initialization text: The text that you want to add. Specify a task description or instructions similar to what you use for zero-shot prompting. - -" -51747F17F413F1F34CFD73D170DE392D874D03DD_5,51747F17F413F1F34CFD73D170DE392D874D03DD," Managing the learning rate - -The learning rate parameter determines how much to change the prompt vector when the it is adjusted. The higher the number, the greater the change to the vector. - -" -51747F17F413F1F34CFD73D170DE392D874D03DD_6,51747F17F413F1F34CFD73D170DE392D874D03DD," Choosing the number of training runs to complete - -The Number of epochs parameter specifies the number of times to cycle through the training data. - -For example, with a batch size of 10 and a data set with 1,000 examples, one epoch must process 100 batches and update the prompt vector 100 times. If you set the number of epochs to 20, the model is passed through the data set 20 times, which means it processes a total of 2,000 batches during the tuning process. - -The higher the number of epochs and bigger your training data, the longer it takes to tune a model. - -" -51747F17F413F1F34CFD73D170DE392D874D03DD_7,51747F17F413F1F34CFD73D170DE392D874D03DD," Learn more - - - -* [Data formats](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-data.html) - - - -Parent topic:[Tuning a model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.html) -" -8745FB7BF19F0E2B0A78C3CD43AA4BF79A25DBCE_0,8745FB7BF19F0E2B0A78C3CD43AA4BF79A25DBCE," Tuning Studio - -Tune a foundation model with the Tuning Studio to guide an AI foundation model to return useful output. - -Required permissions : To run training experiments, you must have the Admin or Editor role in a project. - -: The Tuning Studio is not available with all plans or in all data centers. See [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html) and [Regional availability for services and features](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/regional-datactr.html). - -Data format : Tabular: JSON, JSONL. For details, see [Data formats](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-data.html). - -Note: You can use the same training data file with one or more tuning experiments. - -Data size : 50 to 10,000 input and output example pairs. The maximum file size is 200 MB. - -You use the Tuning Studio to create a tuned version of an existing foundation model. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -Foundation models are AI models that are pretrained on terabytes of data from across the internet and other public resources. They are unrivaled in their ability to predict the next best word and generate language. While language-generation can be useful for brainstorming and spurring creativity, it is less useful for achieving concrete tasks. Model tuning, and other techniques, such as retrieval-augmented generation, help you to use foundation models in meaningful ways for your business. - -With the Tuning Studio, you can tune a smaller foundation model to improve its performance on natural language processing tasks such as classification, summarization, and generation. Tuning can help a smaller foundation model achieve results comparable to larger models in the same model family. By tuning and deploying the smaller model, you can reduce long-term inference costs. - -" -8745FB7BF19F0E2B0A78C3CD43AA4BF79A25DBCE_1,8745FB7BF19F0E2B0A78C3CD43AA4BF79A25DBCE,"Much like prompt engineering, tuning a foundation model helps you to influence the content and format of the foundation model output. Knowing what to expect from a foundation model is essential if you want to plug the step of inferencing a foundation model into a business workflow. - -The following diagram illustrates how tuning a foundation model can help you guide the model to generate useful output. You provide labeled data that illustrates the format and type of output that you want the model to return, which helps the foundation model to follow the established pattern. - -![How a tuned model relates to a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fm-tune-overview.png) - -You can tune a foundation model to optimize the model's ability to do many things, including: - - - -* Generate new text in a specific style -* Generate text that summarizes or extracts information in a certain way -* Classify text - - - -To learn more about when tuning a model is the right approach, see [When to tune a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-when.html). - -" -8745FB7BF19F0E2B0A78C3CD43AA4BF79A25DBCE_2,8745FB7BF19F0E2B0A78C3CD43AA4BF79A25DBCE," Workflow - -Tuning a model involves the following tasks: - - - -1. Engineer prompts that work well with the model you want to use. - - - -* Find the largest foundation model that works best for the task. -* Experiment until you understand which prompt formats show the most potential for getting good results from the model. - - - -Tuning doesn't mean you can skip prompt engineering altogether. Experimentation is necessary to find the right foundation model for your use case. Tuning means you can do the work of prompt engineering once and benefit from it again and again. - -You can use the Prompt Lab to experiment with prompt engineering. For help, see [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html). -2. Create training data to use for model tuning. -3. Create a tuning experiment to tune the model. -4. Evaluate the tuned model. - -If necessary, change the training data or the experiment parameters and run more experiments until you're satisfied with the results. -5. Deploy the tuned model. - - - -" -8745FB7BF19F0E2B0A78C3CD43AA4BF79A25DBCE_3,8745FB7BF19F0E2B0A78C3CD43AA4BF79A25DBCE," Learn more - - - -* [When to tune](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-when.html) -* [Methods for tuning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-methods.html) -* [Tuning a model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.html) - - - - - -* [Quick start: Tune a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html) -* [Sample notebook: Tune a model to classify CFPB documents in watsonx](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/bf57e8896f3e50c638b5a378780f7502) - - - -Parent topic:[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html) -" -2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E_0,2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E," Tuning a foundation model - -To tune a foundation model, create a tuning experiment that guides the foundation model to return the output you want in the format you want. - -" -2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E_1,2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E," Requirements - -If you signed up for watsonx.ai and specified the Dallas region, all requirements are met and you're ready to use the Tuning Studio. - -The Tuning Studio is available from a project that is created for you automatically when you sign up for watsonx.ai. The project is named sandbox and you can use it to get started with testing and customizing foundation models. - -" -2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E_2,2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E," Before you begin - -Experiment with the Prompt Lab to determine the best model to use for your task. Craft and try prompts until you find the input and output patterns that generate the best results from the model. For more information, see [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html). - -Create a set of example prompts that follow the patterns that generate the best results based on your prompt engineering work. For more information, see [Data formats](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-data.html). - -" -2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E_3,2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E," Tune a model - - - -1. Click the Tune a foundation model with labeled data task. -2. Name the tuning experiment. -3. Optional: Add a description and tags. Add a description as a reminder to yourself and to help collaborators understand the goal of the tuned model. Assigning a tag gives you a way to filter your tuning assets later to show only the assets associated with a tag. -4. Click Create. -5. The flan-t5-xl foundation model is selected for you to tune. - -To read more about the model, click the Preview icon (![Preview icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-preview-icon.png)) that is displayed from the drop-down list. - -For more information, see the [model card](https://huggingface.co/google/flan-t5-xl) -6. Choose how to initialize the prompt from the following options: - -Text : Uses text that you specify. - -Random : Uses values that are generated for you as part of the tuning experiment. - -These options are related to the prompt tuning method for tuning models. For more information about how each option affects the tuning experiment, see [How prompt-tuning works](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-methods.htmlhow-prompt-tuning-works). -7. Required for the Text initialization method only: Add the initialization text that you want to include with the prompt. - - - -* For a classification task, give an instruction that describes what you want to classify and lists the class labels to be used. For example, Classify whether the sentiment of each comment is Positive or Negative. -* For a generative task, describe what you want the model to provide in the output. For example, Make the case for allowing employees to work from home a few days a week. -* For a summarization task, give an instruction such as, Summarize the main points from a meeting transcript. - - - -8. Choose a task type. - -Choose the task type that most closely matches what you want the model to do: - -" -2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E_4,2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E,"Classification : Predicts categorical labels from features. For example, given a set of customer comments, you might want to label each statement as a question or a problem. By separating out customer problems, you can find and address them more quickly. - -Generation : Generates text. For example, writes a promotional email. - -Summarization : Generates text that describes the main ideas that are expressed in a body of text. For example, summarizes a research paper. - -Whichever task you choose, the input is submitted to the underlying foundation model as a generative request type during the experiment. For classification tasks, class names are taken into account in the prompts that are used to tune the model. As models and tuning methods evolve, task-specific enhancements are likely to be added that you can leverage if tasks are represented accurately. -9. Required for classification tasks only: In the Classification output (verbalizer) field, add the class labels that you want the model to use one at a time. - -Important: Specify the same labels that are used in your training data. - -During the tuning experiment, class label information is submitted along with the input examples from the training data. -10. Add the training data that will be used to tune the model. You can upload a file or use an asset from your project. - -To see examples of how to format your file, expand What should your data look like?, and then click Preview template. For more information, see [Data formats](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-data.html). -11. Optional: If you want to limit the size of the input or output examples that are used during training, adjust the maximum number of tokens that are allowed. Expand What should your data look like?, and then drag the sliders to change the values. Limiting the size can reduce the time that it takes to run the tuning experiment. For more information, see [Controlling the number of tokens used](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.html?context=cdpaas&locale=entuning-tokens). -" -2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E_5,2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E,"12. Optional: Click Configure parameters to edit the parameters that are used by the tuning experiment. - -The tuning run is configured with parameter values that represent a good starting point for tuning a model. You can adjust them if you want. - -For more information about the available parameters and what they do, see [Tuning parameters](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-parameters.html). - -After you change parameter values, click Save. -13. Click Start tuning. - - - -The tuning experiment begins. It might take a few minutes to a few hours depending on the size of your training data and the availability of compute resources. When the experiment is finished, the status shows as completed. - -A tuned model asset is not created until after you create a deployment from a completed tuning experiment. For more information, see [Deploying a tuned model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-deploy.html). - -" -2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E_6,2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E," Controlling the number of tokens used - -You can change the number of tokens that are allowed in the model input and output during a tuning experiment. - - - -Table 1: Token number parameters - - Parameter name Value options Default value - - Maximum input tokens 1 - 256 256 - Maximum output tokens 1 - 128 128 - - - -You already have some control over the input size. The input text that is used during a tuning experiment comes from your training data. So, you can manage the input size by keeping your example inputs to a set length. However, you might be getting training data that isn't curated from another team or process. In that case, you can use the Maximum input tokens slider to manage the input size. If you set the parameter to 200 and the training data has an example input with 1,000 tokens, for example, the example is truncated. Only the first 200 tokens of the example input are used. - -The Max output tokens value is important because it controls the number of tokens that the model is allowed to generate as output at training time. You can use the slider to limit the output size, which helps the model to generate concise output. - -For classification tasks, minimizing the size of the output is a good way to force a generative model to return the class label only, without repeating the classification pattern in the output. - -For natural language models, words are converted to tokens. 256 tokens is equal to approximately 130—170 words. 128 tokens is equal to approximately 65—85 words. However, token numbers are difficult to estimate and can differ by model. For more information, see [Tokens and tokenization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html). - -" -2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E_7,2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E," Evaluating the tuning experiment - -When the experiment is finished, a loss function graph is displayed that illustrates the improvement in the model output over time. The epochs are shown on the x-axis and a measure of the difference between predicted and actual results per epoch is shown on the y-axis. The value that is shown per epoch is calculated from the average gradient value from all of the accumulation steps in the epoch. - -The best experiment outcome is represented by a downward-sloping curve. A decreasing curve means that the model gets better at generating the expected outputs in the expected format over time. - -If the gradient value for the last epoch remains too high, you can run another experiment. To help improve the results, try one of the following approaches: - - - -* Augment or edit the training data that you're using. -* Adjust the experiment parameters. - - - -When you're satisfied with the results from the tuning experiment, deploy the tuned foundation model. For more information, see [Deploying a tuned model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-deploy.html). - -" -2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E_8,2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E," Learn more - - - -* [Data formats](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-data.html) -* [Tuning parameters](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-parameters.html) - - - - - -* [Quick start: Tune a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html) -* [Sample notebook: Tune a model to classify CFPB documents in watsonx](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/bf57e8896f3e50c638b5a378780f7502) - - - -Parent topic:[Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html) -" -FBC3C5F81D060CD996489B772ABAC886F12130A3_0,FBC3C5F81D060CD996489B772ABAC886F12130A3," When to tune a foundation model - -Find out when tuning a model can help you use a foundation model to achieve your goals. - -Tune a foundation model when you want to do the following things: - - - -* Reduce the cost of inferencing at scale - -Larger foundation models typically generate better results. However, they are also more expensive to use. By tuning a model, you can get similar, sometimes even better results from a smaller model that costs less to use. -* Get the model's output to use a certain style or format -* Improve the model's performance by teaching the model a specialized task -* Generate output in a reliable form in response to zero-shot prompts - - - -" -FBC3C5F81D060CD996489B772ABAC886F12130A3_1,FBC3C5F81D060CD996489B772ABAC886F12130A3," When not to tune a model - -Tuning a model is not always the right approach for improving the output of a model. For example, tuning a model cannot help you do the following things: - - - -* Improve the accuracy of answers in model output - -If you're using a foundation model for factual recall in a question-answering scenario, tuning will marginally improve answer accuracy. To get factual answers, you must provide factual information as part of your input to the model. Tuning can be used to help the generated factual answers conform to a format that can be more-easily used by a downstream process in a workflow. To learn about methods for returning factual answers, see [Retreival-augmented generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-rag.html). -* Get the model to use a specific vocabulary in its output consistently - -Large language models that are trained on large amounts of data formulate a vocabulary based on that initial set of data. You can introduce significant terms to the model from training data that you use to tune the model. However, the model might not use these preferred terms reliably in its output. -* Teach a foundation model to perform an entirely new task - -Experimenting with prompt engineering is an important first step because it helps you understand the type of output that a foundation model is and is not capable of generating. You can use tuning to tweak, tailor, and shape the output that a foundation model is able to return. - - - -" -FBC3C5F81D060CD996489B772ABAC886F12130A3_2,FBC3C5F81D060CD996489B772ABAC886F12130A3," Learn more - - - -* [Retreival-augmented generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-rag.html) -* [Tuning methods](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-methods.html) - - - -Parent topic:[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html) -" -E3B9F33C36E5636808B137CFA4745E39F3B48D62_0,E3B9F33C36E5636808B137CFA4745E39F3B48D62," SPSS predictive analytics forecasting using data preparation for time series data in notebooks - -Data preparation for time series data (TSDP) provides the functionality to convert raw time data (in Flattened multi-dimensional format, which includes transactional (event) based and column-based data) into regular time series data (in compact row-based format) which is required by the subsequent time series analysis methods. - -The main job of TSDP is to generate time series in terms of the combination of each unique value in the dimension fields with metric fields. In addition, it sorts the data based on the timestamp, extracts metadata of time variables, transforms time series with another time granularity (interval) by applying an aggregation or distribution function, checks the data quality, and handles missing values if needed. - -" -E3B9F33C36E5636808B137CFA4745E39F3B48D62_1,E3B9F33C36E5636808B137CFA4745E39F3B48D62,"Python example code: - -from spss.ml.forecasting.timeseriesdatapreparation import TimeSeriesDataPreparation - -tsdp = TimeSeriesDataPreparation(). -setMetricFieldList([""Demand""]). -setDateTimeField(""Date""). -setEncodeSeriesID(True). -setInputTimeInterval(""MONTH""). -setOutTimeInterval(""MONTH""). -setQualityScoreThreshold(0.0). -setConstSeriesThreshold(0.0) - -tsdpOut = tsdp.transform(data) - -" -E3B9F33C36E5636808B137CFA4745E39F3B48D62_2,E3B9F33C36E5636808B137CFA4745E39F3B48D62," TimeSeriesDataPreparationConvertor - -This is the date/time convertor API that's used to provide some functionalities of the date/time convertor inside TSDP for applications to use. There are two use cases for this component: - - - -* Compute the time points between a specified start and end time. In this case, the start and end time both occur after the first observation in the previous TSDP\'s output. -* Compute the time points between a start index and end index referring to the last observation in the previous TSDP\'s output. - - - -" -E3B9F33C36E5636808B137CFA4745E39F3B48D62_3,E3B9F33C36E5636808B137CFA4745E39F3B48D62," Temporal causal modeling - -Temporal causal modeling (TCM) refers to a suite of methods that attempt to discover key temporal relationships in time series data by using a combination of Granger causality and regression algorithms for variable selection. - -" -E3B9F33C36E5636808B137CFA4745E39F3B48D62_4,E3B9F33C36E5636808B137CFA4745E39F3B48D62,"Python example code: - -from spss.ml.forecasting.timeseriesdatapreparation import TimeSeriesDataPreparation -from spss.ml.common.wrapper import LocalContainerManager -from spss.ml.forecasting.temporalcausal import TemporalCausal -from spss.ml.forecasting.params.predictor import MaxLag, MaxNumberOfPredictor, Predictor -from spss.ml.forecasting.params.temporal import FieldNameList, FieldSettings, Forecast, Fit -from spss.ml.forecasting.reversetimeseriesdatapreparation import ReverseTimeSeriesDataPreparation - -tsdp = TimeSeriesDataPreparation().setDimFieldList([""Demension1"", ""Demension2""]). -setMetricFieldList([""m1"", ""m2"", ""m3"", ""m4""]). -setDateTimeField(""date""). -setEncodeSeriesID(True). -setInputTimeInterval(""MONTH""). -setOutTimeInterval(""MONTH"") -tsdpOutput = tsdp.transform(changedDF) - -lcm = LocalContainerManager() -lcm.exportContainers(""TSDP"", tsdp.containers) - -estimator = TemporalCausal(lcm). -setInputContainerKeys([""TSDP""]). -setTargetPredictorList([Predictor( -targetList="""", """", """"]], -predictorCandidateList="""", """", """"]])]). -setMaxNumPredictor(MaxNumberOfPredictor(False, 4)). -setMaxLag(MaxLag(""SETTING"", 5)). -setTolerance(1e-6) - -tcmModel = estimator.fit(tsdpOutput) -transformer = tcmModel.setDataEncoded(True). -setCILevel(0.95). -setOutTargetValues(False). -" -E3B9F33C36E5636808B137CFA4745E39F3B48D62_5,E3B9F33C36E5636808B137CFA4745E39F3B48D62,"setTargets(FieldSettings(fieldNameList=FieldNameList(seriesIDList=[""da1"", ""db1"", ""m1""]]))). -setReestimate(False). -setForecast(Forecast(outForecast=True, forecastSpan=5, outCI=True)). -setFit(Fit(outFit=True, outCI=True, outResidual=True)) - -predictions = transformer.transform(tsdpOutput) -rtsdp = ReverseTimeSeriesDataPreparation(lcm). -setInputContainerKeys([""TSDP""]). -setDeriveFutureIndicatorField(True) - -rtsdpOutput = rtsdp.transform(predictions) -rtsdpOutput.show() - -" -E3B9F33C36E5636808B137CFA4745E39F3B48D62_6,E3B9F33C36E5636808B137CFA4745E39F3B48D62," Temporal Causal Auto Regressive Model - -Autoregressive (AR) models are built to compute out-of-sample forecasts for predictor series that aren't target series. These predictor forecasts are then used to compute out-of-sample forecasts for the target series. - -" -E3B9F33C36E5636808B137CFA4745E39F3B48D62_7,E3B9F33C36E5636808B137CFA4745E39F3B48D62,"Model produced by TemporalCausal - -TemporalCausal exports outputs: - - - -* a JSON file that contains TemporalCausal model information -* an XML file that contains multi series model - - - -" -E3B9F33C36E5636808B137CFA4745E39F3B48D62_8,E3B9F33C36E5636808B137CFA4745E39F3B48D62,"Python example code: - -from spss.ml.common.wrapper import LocalContainerManager -from spss.ml.forecasting.temporalcausal import TemporalCausal, TemporalCausalAutoRegressiveModel -from spss.ml.forecasting.params.predictor import MaxLag, MaxNumberOfPredictor, Predictor -from spss.ml.forecasting.params.temporal import FieldNameList, FieldSettingsAr, ForecastAr - -lcm = LocalContainerManager() -arEstimator = TemporalCausal(lcm). -setInputContainerKeys([tsdp.uid]). -setTargetPredictorList([Predictor( -targetList = ""da1"", ""db1"", ""m2""]], -predictorCandidateList = ""da1"", ""db1"", ""m1""], -""da1"", ""db2"", ""m1""], -""da1"", ""db2"", ""m2""], -""da1"", ""db3"", ""m1""], -""da1"", ""db3"", ""m2""], -""da1"", ""db3"", ""m3""]])]). -setMaxNumPredictor(MaxNumberOfPredictor(False, 5)). -setMaxLag(MaxLag(""SETTING"", 5)) - -arEstimator.fit(df) - -tcmAr = TemporalCausalAutoRegressiveModel(lcm). -setInputContainerKeys([arEstimator.uid]). -setDataEncoded(True). -setOutTargetValues(True). -setTargets(FieldSettingsAr(FieldNameList( -seriesIDList=[""da1"", ""db1"", ""m1""], -""da1"", ""db2"", ""m2""], -""da1"", ""db3"", ""m3""]]))). -setForecast(ForecastAr(forecastSpan = 5)) - -scored = tcmAr.transform(df) -scored.show() - -" -E3B9F33C36E5636808B137CFA4745E39F3B48D62_9,E3B9F33C36E5636808B137CFA4745E39F3B48D62," Temporal Causal Outlier Detection - -One of the advantages of building TCM models is the ability to detect model-based outliers. Outlier detection refers to a capability to identify the time points in the target series with values that stray too far from their expected (fitted) values based on the TCM models. - -" -E3B9F33C36E5636808B137CFA4745E39F3B48D62_10,E3B9F33C36E5636808B137CFA4745E39F3B48D62," Temporal Causal Root Cause Analysis - -The root cause analysis refers to a capability to explore the Granger causal graph in order to analyze the key/root values that resulted in the outlier in question. - -" -E3B9F33C36E5636808B137CFA4745E39F3B48D62_11,E3B9F33C36E5636808B137CFA4745E39F3B48D62," Temporal Causal Scenario Analysis - -Scenario analysis refers to a capability of the TCM models to ""play-out"" the repercussions of artificially setting the value of a time series. A scenario is the set of forecasts that are performed by substituting the values of a root time series by a vector of substitute values. - -" -E3B9F33C36E5636808B137CFA4745E39F3B48D62_12,E3B9F33C36E5636808B137CFA4745E39F3B48D62," Temporal Causal Summary - -TCM Summary selects Top N models based on one model quality measure. There are five model quality measures: Root Mean Squared Error (RMSE), Root Mean Squared Percentage Error (RMSPE), Bayesian Information Criterion (BIC), Akaike Information Criterion (AIC), and R squared (RSQUARE). Both N and the model quality measure can be set by the user. - -" -E3B9F33C36E5636808B137CFA4745E39F3B48D62_13,E3B9F33C36E5636808B137CFA4745E39F3B48D62," Time Series Exploration - -Time Series Exploration explores the characteristics of time series data based on some statistics and tests to generate preliminary insights about the time series before modeling. It covers not only analytic methods for expert users (including time series clustering, unit root test, and correlations), but also provides an automatic exploration process based on a simple time series decomposition method for business users. - -" -E3B9F33C36E5636808B137CFA4745E39F3B48D62_14,E3B9F33C36E5636808B137CFA4745E39F3B48D62,"Python example code: - -from spss.ml.forecasting.timeseriesexploration import TimeSeriesExploration - -tse = TimeSeriesExploration(). -setAutoExploration(True). -setClustering(True) - -tseModel = tse.fit(data) -predictions = tseModel.transform(data) -predictions.show() - -" -E3B9F33C36E5636808B137CFA4745E39F3B48D62_15,E3B9F33C36E5636808B137CFA4745E39F3B48D62," Reverse Data preparation for time series data - -Reverse Data preparation for time series data (RTSDP) provides functionality that converts the compact row based (CRB) format that's generated by TimeSeriesDataPreperation (TSDP) or TemporalCausalModel (TCM Score) back to the flattened multidimensional (FMD) format. - -" -E3B9F33C36E5636808B137CFA4745E39F3B48D62_16,E3B9F33C36E5636808B137CFA4745E39F3B48D62,"Python example code: - -from spss.ml.common.wrapper import LocalContainerManager -from spss.ml.forecasting.params.temporal import GroupType -from spss.ml.forecasting.reversetimeseriesdatapreparation import ReverseTimeSeriesDataPreparation -from spss.ml.forecasting.timeseriesdatapreparation import TimeSeriesDataPreparation - -manager = LocalContainerManager() -tsdp = TimeSeriesDataPreparation(manager). -setDimFieldList([""Dimension1"", ""Dimension2"", ""Dimension3""]). -setMetricFieldList( -[""Metric1"", ""Metric2"", ""Metric3"", ""Metric4"", ""Metric5"", ""Metric6"", ""Metric7"", ""Metric8"", ""Metric9"", ""Metric10""]). -setDateTimeField(""TimeStamp""). -setEncodeSeriesID(False). -setInputTimeInterval(""WEEK""). -setOutTimeInterval(""WEEK""). -setMissingImputeType(""LINEAR_INTERP""). -setQualityScoreThreshold(0.0). -setConstSeriesThreshold(0.0). -setGroupType( -GroupType([(""Metric1"", ""MEAN""), (""Metric2"", ""SUM""), (""Metric3"", ""MODE""), (""Metric4"", ""MIN""), (""Metric5"", ""MAX"")])) - -tsdpOut = tsdp.transform(changedDF) -rtsdp = ReverseTimeSeriesDataPreparation(manager). -setInputContainerKeys([tsdp.uid]). -setDeriveFutureIndicatorField(True) - -rtdspOut = rtsdp.transform(tsdpOut) - -import com.ibm.spss.ml.forecasting.traditional.TimeSeriesForecastingModelReEstimate - -val tsdp = TimeSeriesDataPreparation(). -setDimFieldList(Array(""da"", ""db"")). -setMetricFieldList(Array(""metric"")). -setDateTimeField(""date""). -" -E3B9F33C36E5636808B137CFA4745E39F3B48D62_17,E3B9F33C36E5636808B137CFA4745E39F3B48D62,"setEncodeSeriesID(false). -setInputTimeInterval(""MONTH""). -setOutTimeInterval(""MONTH"") - -val lcm = LocalContainerManager() -lcm.exportContainers(""k"", tsdp.containers) - -val reestimate = TimeSeriesForecastingModelReEstimate(lcm). -setForecast(ForecastEs(outForecast = true, forecastSpan = 4, outCI = true)). -setFitSettings(Fit(outFit = true, outCI = true, outResidual = true)). -setOutInputData(true). -setInputContainerKeys(Seq(""k"")) - -val rtsdp = ReverseTimeSeriesDataPreparation(tsdp.manager). -setInputContainerKeys(List(tsdp.uid)). -setDeriveFutureIndicatorField(true) - -val pipeline = new Pipeline().setStages(Array(tsdp, reestimate, rtsdp)) -val scored = pipeline.fit(data).transform(data) -scored.show() - -Parent topic:[SPSS predictive analytics algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-algorithms.html) -" -3508F0DDA4CCBDBB07BD583218F4E4260DC01C0D_0,3508F0DDA4CCBDBB07BD583218F4E4260DC01C0D," Geospatial data analysis - -You can use the geospatio-temporal library to expand your data science analysis in Python notebooks to include location analytics by gathering, manipulating and displaying imagery, GPS, satellite photography and historical data. - -The gespatio-temporal library is available in all IBM Watson Studio Spark with Python runtime environments. - -" -3508F0DDA4CCBDBB07BD583218F4E4260DC01C0D_1,3508F0DDA4CCBDBB07BD583218F4E4260DC01C0D," Key functions - -The geospatio-temporal library includes functions to read and write data, topological functions, geohashing, indexing, ellipsoidal and routing functions. - -Key aspects of the library include: - - - -* All calculated geometries are accurate without the need for projections. -* The geospatial functions take advantage of the distributed processing capabilities provided by Spark. -* The library includes native geohashing support for geometries used in simple aggregations and in indexing, thereby improving storage retrieval considerably. -* The library supports extensions of Spark distributed joins. -* The library supports the SQL/MM extensions to Spark SQL. - - - -" -3508F0DDA4CCBDBB07BD583218F4E4260DC01C0D_2,3508F0DDA4CCBDBB07BD583218F4E4260DC01C0D," Getting started with the library - -Before you can start using the library in a notebook, you must register STContext in your notebook to access the st functions. - -To register STContext: - -from pyst import STContext -stc = STContext(spark.sparkContext._gateway) - -" -3508F0DDA4CCBDBB07BD583218F4E4260DC01C0D_3,3508F0DDA4CCBDBB07BD583218F4E4260DC01C0D," Next steps - -After you have registered STContext in your notebook, you can begin exploring the spatio-temporal library for: - - - -* Functions to read and write data -* Topological functions -* Geohashing functions -* Geospatial indexing functions -* Ellipsoidal functions -* Routing functions - - - -Check out the following sample Python notebooks to learn how to use these different functions in Python notebooks: - - - -* [Use the spatio-temporal library for location analytics](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/92c6ab6ea922d1da6a2cc9496a277005) -* [Use spatial indexing to query spatial data](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/a7432f0c29c5bda2fb42749f3628d981) -* [Spatial queries in PySpark](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/27ecffa80bd3a386fffca1d8d1256ba7) - - - -Parent topic:[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html) -" -B61649DF5425DEA0C1F16942BDE0EEC79B3E4F80_0,B61649DF5425DEA0C1F16942BDE0EEC79B3E4F80," Publishing notebooks on GitHub - -To collaborate with stakeholders and other data scientists, you can publish your notebooks in GitHub repositories. You can also use GitHub to back up notebooks for source code management. - -Watch this video to see how to enable GitHub integration. - -Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. - -This video provides a visual method to learn the concepts and tasks in this documentation. - - - -* Transcript - -Synchronize transcript with video - - - - Time Transcript - - 00:00 This video shows you how to publish notebooks from your Watson Studio project to your GitHub account. - 00:07 Navigate to your profile and settings. - 00:11 On the ""Integrations"" tab, visit the link to generate a GitHub personal access token. - 00:17 Provide a descriptive name for the token and select the repo and gist scopes, then generate the token. - 00:29 Copy the token, return to the GitHub integration settings, and paste the token. - 00:36 The token is validated when you save it to your profile settings. - 00:42 Now, navigate to your projects. - 00:44 You enable GitHub integration at the project level on the ""Settings"" tab. - 00:50 Simply scroll to the bottom and paste the existing GitHub repository URL. - 00:56 You'll find that on the ""Code"" tab in the repo. - 01:01 Click ""Update"" to make the connection. - 01:05 Now, go to the ""Assets"" tab and open the notebook you want to publish. - 01:14 Notice that this notebook has the credentials replaced with X's. - 01:19 It's a best practice to remove or replace credentials before publishing to GitHub. - 01:24 So, this notebook is ready for publishing. - 01:27 You can provide the target path along with a commit message. - 01:31 You also have the option to publish content without hidden code, which means that any cells in the notebook that began with the hidden cell comment will not be published. - 01:42 When you're, ready click ""Publish"". - 01:45 The message tells you that the notebook was published successfully and provides links to the notebook, the repository, and the commit. - 01:54 Let's take a look at the commit. -" -B61649DF5425DEA0C1F16942BDE0EEC79B3E4F80_1,B61649DF5425DEA0C1F16942BDE0EEC79B3E4F80," 01:57 So, there's the commit, and you can navigate to the repository to see the published notebook. - 02:04 Lastly, you can publish as a gist. - 02:07 Gists are another way to share your work on GitHub. - 02:10 Every gist is a git repository, so it can be forked and cloned. - 02:15 There are two types of gists: public and secret. - 02:19 If you start out with a secret gist, you can convert it to a public gist later. - 02:24 And again, you have the option to remove hidden cells. - 02:29 Follow the link to see the published gist. - 02:32 So that's the basics of Watson Studio's GitHub integration. - 02:37 Find more videos in the Cloud Pak for Data as a Service documentation. - - - - - -" -B61649DF5425DEA0C1F16942BDE0EEC79B3E4F80_2,B61649DF5425DEA0C1F16942BDE0EEC79B3E4F80," Enabling access to GitHub from your account - -Before you can publish notebooks on GitHub, you must enable your IBM watsonx account to access GitHub. You enable access by creating a personal access token with the required access scope in GitHub and linking the token to your IBM watsonx account. - -Follow these steps to create a personal access token: - - - -1. Click your avatar in the header, and then click Profile and settings. -2. Go to the Integrations tab and click the GitHub personal access tokens link on the dialog and generate a new token. -3. On the New personal access token page, select repo scope and then click to generate a token. -4. Copy the generated access token and paste it in the GitHub integration dialog window in IBM watsonx. - - - -" -B61649DF5425DEA0C1F16942BDE0EEC79B3E4F80_3,B61649DF5425DEA0C1F16942BDE0EEC79B3E4F80," Linking a project to a GitHub repository - -After you have saved the access token, your project must be connected to an existing GitHub repository. You can only link to one existing GitHub repository from a project. Private repositories are supported. - -To link a project to an existing GitHub repository, you must have administrator permission to the project. All project collaborators, who have adminstrator or editor permission, can publish files to this GitHub repository. However, these users must have permission to access the repository. Granting user permissions to repositories must be done in GitHub. - -To connect a project to an existing GitHub repository: - - - -1. Select the Manage tab and go to the Services and Integrations page. -2. Click the Third-party integrations tab. -3. Click Connect integration. -4. Enter your generated access token from Github. - - - -Now you can begin publishing notebooks on GitHub. - -Note:For information on how to change your Git integration, refer to [Managing your integrations](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.htmlintegrations). - -" -B61649DF5425DEA0C1F16942BDE0EEC79B3E4F80_4,B61649DF5425DEA0C1F16942BDE0EEC79B3E4F80," Publishing a notebook on GitHub - -To publish a notebook on GitHub: - - - -1. Open the notebook in edit mode. -2. Click the GitHub integration icon (![Shows the upload icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/upload.png)) and select Publish on GitHub from the opened notebook's action bar. - - - -When you enter the name of the file you want to publish on GitHub, you can specify a folder path in the GitHub repository. Note that notebook files are always pushed to the master branch. - -If you get this error: An error occurred while publishing the notebook. Invalid access token permissions or repository does not exist. make sure that: - - - -* You generated your personal access token, as described in [Enabling access to GitHub from your account](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/enabling-access.html) and the token was not deleted. -* The repository that you want to publish your notebook to still exists. - - - -Parent topic:[Managing the lifecycle of notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-nb-lifecycle.html) -" -3C307031346D4FD7DD1A66E2A2F919713582B075,3C307031346D4FD7DD1A66E2A2F919713582B075," Hiding sensitive code cells in a notebook - -If your notebook includes code cells with sensitive data, such as credentials for data sources, you can hide those code cells from anyone you share your notebook with. Any collaborators in the same project can see the cells, but when you share a notebook with a link, those cells will be hidden from anyone who uses the link. - -To hide code cells: - - - -1. Open the notebook and select the code cell to hide. -2. Insert a comment with the hide tag on the first line of the code cell. - -For the Python and R languages, enter the following syntax: @hidden_cell - -![Syntax for hiding code cells](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/hide_tag.png) - - - -Parent topic:[Sharing notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/share-notebooks.html) -" -AF2AC67B66D3A2DB0D4F2AF2D6743F903F1385D7_0,AF2AC67B66D3A2DB0D4F2AF2D6743F903F1385D7," Installing custom libraries through notebooks - -The prefered way of installing additional Python libraries to use in a notebook is to customize the software configuration of the environment runtime associated with the notebook. You can add the conda or PyPi packages through a customization template when you customize the environment template. - -See [Customizing environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html). - -However, if you want to install packages from somewhere else or packages you created on your local machine, for example, you can install and import the packages through the notebook. - -To install packages other than conda or PyPi packages through your notebook: - - - -1. Add the package to your project storage by clicking the Upload asset to project icon (![Shows the Upload asset to project icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/find_data_icon.png)), and then browsing the package file or dragging it into your notebook sidebar. -2. Add a project token to the notebook by clicking More > Insert project token from the notebook action bar. The code that is generated by this action initializes the variable project, which is required to access the library you uploaded to object storage. - -Example of an inserted project token: - - @hidden_cell - The project token is an authorization token that is used to access project resources like data sources, connections, and used by platform APIs. -from project_lib import Project -project = Project(project_id='7c7a9455-1916-4677-a2a9-a61a75942f58', project_access_token='p-9a4c487075063e610471d6816e286e8d0d222141') -pc = project.project_context - -If you don't have a token, you need to create one. See [Adding a project token](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/token.html). -3. Install the library: - - - - -" -AF2AC67B66D3A2DB0D4F2AF2D6743F903F1385D7_1,AF2AC67B66D3A2DB0D4F2AF2D6743F903F1385D7," Fetch the library file, for example the tar.gz or whatever installable distribution you created -with open(""xxx-0.1.tar.gz"",""wb"") as f: -f.write(project.get_file(""xxx-0.1.tar.gz"").read()) - - Install the library -!pip install xxx-0.1.tar.gz - - - - -1. Now you can import the library: - -import xxx - - - -Parent topic:[Libraries and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/libraries.html) -" -7623F4FA0F93DB33077A8B64F7A7B27FBC84E9E4_0,7623F4FA0F93DB33077A8B64F7A7B27FBC84E9E4," Jupyter kernels and notebook environments - -Jupyter notebooks run in kernels in Jupyter notebook environments or, if the notebooks use Spark APIs, those kernels run in a Spark environment. - -The number of notebook Juypter kernels started in an environment depends on the environment type: - - - -* CPU or GPU environments - -When you open a notebook in edit mode, exactly one interactive session connects to a Jupyter kernel for the notebook language and the environment runtime that you select. The runtime is started per user and not per notebook. This means that if you open a second notebook with the same environment template, a second kernel is started in that runtime. Resources are shared. If you want to avoid sharing runtime resources, you must associate each notebook with its own environment template. - -Important: Stopping a notebook kernel doesn't stop the environment runtime in which the kernel is started because other notebook kernels could still be active in that runtime. Only stop an environment runtime if you are sure that no kernels are active. -* Spark environments - -When you open a notebook in edit mode in a Spark environment, a dedicated Spark cluster is started, even if another notebook was opened in the same Spark environment template. Each notebook kernel has its own Spark driver and set of Spark executors. No resources are shared. - - - -If necessary, you can restart or reconnect to a kernel. When you restart a kernel, the kernel is stopped and then started in the same session, but all execution results are lost. When you reconnect to a kernel after losing a connection, the notebook is connected to the same kernel session, and all previous execution results which were saved are available. - -The kernel remains active even if you leave the notebook or close the web browser window. When you reopen the same notebook, the notebook is connected to the same kernel. Only the output cells that were saved (auto-save happens every 2 minutes) before you left the notebook or closed the web browser window will be visible. You will not see the output for any cells which ran in the background after you left the notebook or closed the window. To see all of the output cells, you need to rerun the notebook. - -" -7623F4FA0F93DB33077A8B64F7A7B27FBC84E9E4_1,7623F4FA0F93DB33077A8B64F7A7B27FBC84E9E4," Learn more - - - -* [Notebook environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) - - - - - -* [Associated Spark services](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html) - - - - - -* [Runtime scope in notebook environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.htmlruntime-scope) - - - -Parent topic:[Creating notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/creating-notebooks.html) -" -A99D0A49CDC1C3C38EFF43A6B1B51B0A177E5573_0,A99D0A49CDC1C3C38EFF43A6B1B51B0A177E5573," Libraries and scripts for notebooks - -Watson Studio includes a large selection of preinstalled open source libraries for Python and R in its runtime environments. You can also use preinstalled IBM libraries or install custom libraries. - -Watson Studio includes the following libraries and the appropriate runtime environments with which you can expand your data analysis: - - - -* The Watson Natural Language Processing library in Python and Python with GPU runtime environments. -* The gespatio-temporal library in Spark with Python runtime environments -* The Xskipper library for data skipping uses the open source in Spark with Python runtime environments -* Parquet encryption in Spark with Python runtime environments -* The tspy library for time series analysis in Spark with Python runtime environments - - - -" -A99D0A49CDC1C3C38EFF43A6B1B51B0A177E5573_1,A99D0A49CDC1C3C38EFF43A6B1B51B0A177E5573," Listing installed libraries - -Many of your favorite open source libraries are pre-installed on runtime environments. All you have to do is import them. See [Import preinstalled libraries and packages](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/libraries.html?context=cdpaas&locale=enimport-lib). - -If a library is not preinstalled, you can add it: - - - -* Through the notebook - -Some libraries require a kernel restart after a version change. If you need to work with a library version that isn't pre-installed in the environment in which you start the notebook, and you install this library version through the notebook, the notebook only runs successfully after you restart the kernel. - -Note that when you run the notebook non-interactively, for example as a notebook job, it fails because the kernel can't be restarted. -* By adding a customization to the environment in which the notebook runs - -If you add a library with a particular version to the software customization, the library is preinstalled at the time the environment is started and no kernel restart is required. Also, if the notebook is run in a scheduled job, it won't fail. - -The advantage of adding an environment customization is that the library is preinstalled each time the environment runtime is started. Libraries that you add through a notebook are persisted for the lifetime of the runtime only. If the runtime is stopped and later restarted, those libraries are not installed. - - - -To see the list of installed libraries in your environment runtime: - - - -1. From the Manage tab, on the project's Environments page, select the environment template. -2. From a notebook, run the appropriate command from a notebook cell: - - - -* Python: !pip list --isolated -* R: installed.packages() - - - -3. Optional: Add custom libraries and packages to the environment. See [customizing an environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html). - - - -" -A99D0A49CDC1C3C38EFF43A6B1B51B0A177E5573_2,A99D0A49CDC1C3C38EFF43A6B1B51B0A177E5573," Importing an installed library - -To import an installed library into your notebook, run the appropriate command from a notebook cell with the library name: - - - -* Python: import library_name -* R: library(library_name) - - - -Alternatively, you can write a script that includes multiple classes and methods and then [import the script into your notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/add-script-to-notebook.html). - -" -A99D0A49CDC1C3C38EFF43A6B1B51B0A177E5573_3,A99D0A49CDC1C3C38EFF43A6B1B51B0A177E5573," Learn more - - - -* [Installing custom libraries](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/install-cust-lib.html) -* [Importing scripts into a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/add-script-to-notebook.html) -* [Watson Natural Language Processing library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html) -* [gespatio-temporal library for location analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/geo-spatial-lib.html) -* [Xskipper library for data skipping](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-skipping-spark-sql.html) -* [Parquet encryption](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/parquet-encryption.html) -* [tspy library for time series analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib.html) - - - -Parent topic:[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html) -" -773FA6558F9FD3115F36AF9E4B11F67C1F501432_0,773FA6558F9FD3115F36AF9E4B11F67C1F501432," Loading and accessing data in a notebook - -You can integrate data into notebooks by accessing the data from a local file, from free data sets, or from a data source connection. You load that data into a data structure or container in the notebook, for example, a pandas.DataFrame, numpy.array, Spark RDD, or Spark DataFrame. - -To work with data in a notebook, you can choose between the following options: - - - -Recommended methods for adding data to your notebook - - Option Recommended method Requirements Details - - Add data from a file on your local system Add a Code snippet that loads your data The file must exist as an asset in your project [Add a file from your local system](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enadd-file-local) and then [Use a code snippet to load the data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enfiles) - Add data from a free data set from the Samples Add a Code snippet that loads your data The data set (file) must exist as an asset in your project [Add a free data set from the Samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enloadcomm) and then [Use a code snippet to load the data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enfiles) -" -773FA6558F9FD3115F36AF9E4B11F67C1F501432_1,773FA6558F9FD3115F36AF9E4B11F67C1F501432," Load data from data source connections Add a Code snippet that loads your data The connection must exist as an asset in your project [Add a connection to your project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) and then [Add a code snippet that loads the data from your data source connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enconns) - Access project assets and metadata programmatically Use ibm-watson-studio-lib The data asset must exist in your project [Use the ibm-watson-studio-lib library to interact with data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/using-ibm-ws-lib.html) - Create and use feature store data Use assetframe-lib library functions The data asset must exist in your project [Use the assetframe-lib library for Python to create and use feature store data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html) - Access data using an API function or an operating system command For example, use wget N/A [Access data using an API function or an operating system command](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enapi-function) - - - -Important: Make sure that the environment in which the notebook is started has enough memory to store the data that you load to the notebook. The environment must have significantly more memory than the total size of the data that is loaded to the notebook. Some data frameworks, like pandas, can hold multiple copies of the data in memory. - -" -773FA6558F9FD3115F36AF9E4B11F67C1F501432_2,773FA6558F9FD3115F36AF9E4B11F67C1F501432," Adding a file from your local system - -To add a file from your local system to your project by using the Jupyterlab notebook editor: - - - -1. Open your notebook in edit mode. -2. From the toolbar, click the Upload asset to project icon (![Shows the Upload asset to project icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/find_data_icon.png)) and add your file. - - - -Tip: You can also drag the file into your notebook sidebar. - -" -773FA6558F9FD3115F36AF9E4B11F67C1F501432_3,773FA6558F9FD3115F36AF9E4B11F67C1F501432," Load data sets from the Samples - -The data sets on the Samples contain open data. Watch this short video to see how to work with public data sets in the Samples. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -To add a data set from the Samples to your project: - - - -1. From the IBM watsonx navigation menu, select Samples. -2. Find the card for the data set that you want to add. ![A view of data sets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/datasets.png) -3. Click Add to project, select the project, and click Add. Clicking View project takes you to the project Overview page. The data asset is added to the list of data assets on the project's Assets page. - - - -" -773FA6558F9FD3115F36AF9E4B11F67C1F501432_4,773FA6558F9FD3115F36AF9E4B11F67C1F501432," Loading data from files - -Prerequisites The file must exist as an asset in your project. For details, see [Adding a file from your local system](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enadd-file-local) or [Loading a data set from the Samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enloadcomm). - -To load data from a project file to your notebook: - - - -1. Open your notebook in edit mode. -2. Click the Code snippets icon (![the Code snippets icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/code-snippets-icon.png)), click Read data, and then select the data file from your project. If you want to change your selection, use Edit icon. -3. From the Load as drop-down list, select the load option that you prefer. If you select Credentials, only file access credentials will be generated. For details, see [Adding credentials](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enadding-creds). -4. Click in an empty code cell in your notebook and then click Insert code to cell to insert the generated code. Alternatively, click to copy the generated code to the clipboard and then paste the code into your notebook. - - - -The generated code serves as a quick start to begin working with a data set. For production systems, carefully review the inserted code to determine whether to write your own code that better meets your needs. - -" -773FA6558F9FD3115F36AF9E4B11F67C1F501432_5,773FA6558F9FD3115F36AF9E4B11F67C1F501432,"To learn which data structures are generated for which notebook language and data format, see [Data load support](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-load-support.htmlfile-types). - -" -773FA6558F9FD3115F36AF9E4B11F67C1F501432_6,773FA6558F9FD3115F36AF9E4B11F67C1F501432," Loading data from data source connections - -Prerequisites Before you can load data from an IBM data service or from an external data source, you must create or add a connection to your project. See [Adding connections to projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -To load data from an existing data source connection into a data structure in your notebook: - - - -1. Open your notebook in edit mode. -2. Click the Code snippets icon (![the Code snippets icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/code-snippets-icon.png)), click Read data, and then select the data source connection from your project. -3. Select the schema and choose a table. If you want to change your selection, use Edit icon. -4. Select the load option. If you select Credentials, only metadata will be generated. For details, see [Adding credentials](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enadding-creds). -5. Click in an empty code cell in your notebook and then insert code to the cell. Alternatively, click to copy the generated code to the clipboard and then paste the code into your notebook. -6. If necessary, enter your personal credentials for locked data connections that are marked with a key icon (![the key symbol for connections with personal credentials](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/privatekey.png)). This is a one-time step that permanently unlocks the connection for you. After you unlock the connection, the key icon is no longer displayed. For more information, see [Adding connections to projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - - - -" -773FA6558F9FD3115F36AF9E4B11F67C1F501432_7,773FA6558F9FD3115F36AF9E4B11F67C1F501432,"The generated code serves as a quick start to begin working with a connection. For production systems, carefully review the inserted code to determine whether to write your own code that better meets your needs. - -To learn which data structures are generated for which notebook language and data format, see [Data load support](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-load-support.htmlfile-types). - -" -773FA6558F9FD3115F36AF9E4B11F67C1F501432_8,773FA6558F9FD3115F36AF9E4B11F67C1F501432," Adding credentials - -You can generate your own code to access the file located in your IBM Cloud Object Storage or a file accessible through a connection. This is useful when, for example, your file format is not supported by the snippet generation tool. With the credentials, you can write your own code to load the data into a data structure in a notebook cell. - -To add the credentials: - - - -1. Click the Code snippets icon (![the Code snippets icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/code-snippets-icon.png)) and then click Read data. -2. Click in an empty code cell in your notebook, select Credentials as the load option, and then load the credentials to the cell. You can also click to copy the credentials to the clipboard and then paste them into your notebook. -3. Insert your credentials into the code in your notebook to access the data. For example, see this code in a [blog for Python](https://medium.com/ibm-data-science-experience/working-with-ibm-cloud-object-storage-in-python-fe0ba8667d5f). - - - -" -773FA6558F9FD3115F36AF9E4B11F67C1F501432_9,773FA6558F9FD3115F36AF9E4B11F67C1F501432," Use an API function or an operating system command to access the data - -You can use API functions or operating system commands in your notebook to access data, for example, the wget command to access data by using the HTTP, HTTPS or FTP protocols. When you use these types of API functions and commands, you must include code that sets the project access token. See [Manually add the project access token](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/token.html). - -For reference information about the API, see [Watson Data API](https://cloud.ibm.com/apidocs/watson-data-api). - -Parent topic:[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html) -" -7BAB40E15D18920009E4168C32265A950A8AFE38_0,7BAB40E15D18920009E4168C32265A950A8AFE38," Managing compute resources - -If you have the Admin role or Editor in a project, you can perform management tasks for environments. - - - -* [Create an environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html) -* [Customize an environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html) -* [Stop active runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html?context=cdpaas&locale=enstop-active-runtimes) -* [Promote an environment template to a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/promote-envs.html) -* [Track capacity unit consumption of runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/track-runtime-usage.html) - - - -" -7BAB40E15D18920009E4168C32265A950A8AFE38_1,7BAB40E15D18920009E4168C32265A950A8AFE38," Stop active runtimes - -You should stop all active runtimes when you no longer need them to prevent consuming extra capacity unit hours (CUHs). - -Jupyter notebook runtimes are started per user and not per notebook. Stopping a notebook kernel doesn't stop the environment runtime in which the kernel is started because you could have started other notebooks in the same environment. You should only stop a notebook runtime if you are sure that no other notebook kernels are active. - -Only runtimes that are started for jobs are automatically shut down after the scheduled job has completed. For example, if you schedule to run a notebook once a day for 2 months, the runtime instance will be activated every day for the duration of the scheduled job and deactivated again after the job has finished. - -Project users with Admin role can stop all runtimes in the project. Users added to the project with Editor role can stop the runtimes they started, but can't stop other project users' runtimes. Users added to the project with the viewer role can't see the runtimes in the project. - -You can stop runtimes from: - - - -* The Environment Runtimes page, which lists all active runtimes across all projects for your account, by clicking Administration > Environment runtimes from the Watson Studio navigation menu. -* Under Tool runtimes on the Environments page on the Manage tab of your project, which lists the active runtimes for a specific project. -* The Environments page when you click the Notebook Info icon (![Notebook Info icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/get-information_32.png)) from the notebook toolbar in the notebook editor. You can stop the runtime under Runtime status. - - - -Idle timeouts for: - - - -* [Jupyter notebook runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html?context=cdpaas&locale=encpu) -" -7BAB40E15D18920009E4168C32265A950A8AFE38_2,7BAB40E15D18920009E4168C32265A950A8AFE38,"* [Spark runtimes for notebooks and Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html?context=cdpaas&locale=enspark) -* [Notebook with GPU runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html?context=cdpaas&locale=engpu) -* [RStudio runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html?context=cdpaas&locale=enrstudio) - - - -" -7BAB40E15D18920009E4168C32265A950A8AFE38_3,7BAB40E15D18920009E4168C32265A950A8AFE38," Jupyter notebook idle timeout - -Runtime idle times differ for the Jupyter notebook runtimes depending on your Watson Studio plan. - - - -Idle timeout for default CPU runtimes - - Plan Idle timeout - - Lite - Idle stop time: 1 hour
- CUH limit: 10 CUHs - Professional - Idle stop time: 1 hour
- CUH limit: no limit - Standard (Legacy) - Idle stop time: 1 hour
- CUH limit: no limit - Enterprise (Legacy) - Idle stop time: 3 hours
- CUH limit: no limit - All plans
Free runtime - Idle stop time: 1 hour
- Maximum lifetime: 12 hours - - - -Important: A runtime is started per user and not per notebook. Stopping a notebook kernel doesn't stop the environment runtime in which the kernel is started because you could have started other notebooks in the same environment. Only stop a runtime if you are sure that no kernels are active. - -" -7BAB40E15D18920009E4168C32265A950A8AFE38_4,7BAB40E15D18920009E4168C32265A950A8AFE38," Spark idle timeout - -All Spark runtimes, for example for notebook and Data Refinery, are stopped after 3 hours of inactivity. The Default Data Refinery XS runtime that is used when you refine data in Data Refinery is stopped after an idle time of 1 hour. - -Spark runtimes that are started when a job is started, for example to run a Data Refinery flow or a notebook, are stopped when the job finishes. - -" -7BAB40E15D18920009E4168C32265A950A8AFE38_5,7BAB40E15D18920009E4168C32265A950A8AFE38," GPU idle timeout - -All GPU runtimes are automatically stopped after 3 hours of inactivity for Enterprise plan users and after 1 hour of inactivity for other paid plan users. - -" -7BAB40E15D18920009E4168C32265A950A8AFE38_6,7BAB40E15D18920009E4168C32265A950A8AFE38," RStudio idle timeout - -An RStudio is stopped for you after an idle time of 2 hour. During this idle time, you will continue to consume CUHs for which you are billed. Long compute-intensive jobs are hard stopped after 24 hours. - -Parent topic:[Projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html) -" -6349E43EA9B4AC5775DB122E0F6C365D5DB810BF,6349E43EA9B4AC5775DB122E0F6C365D5DB810BF," Managing the lifecycle of notebooks and scripts - -After you have created and tested your notebooks, you can add them to pipelines, publish them to a catalog so that other catalog members can use the notebook in their projects, or share read-only copies outside of Watson Studio so that people who aren't collaborators in your Watson Studio projects can see and use them. R scripts and Shiny apps can't be published or shared using functionality in a project at this time. - -You can use any of these methods for notebooks: - - - -* [Add notebooks to a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.html) -* [Share a URL on social media](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/share-notebooks.html) -* [Publish on GitHub](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/github-integration.html) -* [Publish as a gist](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-gist.html) -* [Publish your notebook to a catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/publish-asset-project.html) - - - -Make sure that before you share or publish a notebook, you hide any sensitive code, like credentials, that you don't want others to see! See [Hide sensitive cells in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/hide_code.html). - -Parent topic:[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html) -" -FF69E780BD8FECEAF7A0ADD24C159679F7359F81_0,FF69E780BD8FECEAF7A0ADD24C159679F7359F81," Markdown cheatsheet - -You can use Markdown tagging to improve the readability of a project readme or the Markdown cells in Jupyter notebooks. The differences between Markdown in the readme files and in notebooks are noted. - -Headings: Use #s followed by a blank space for notebook titles and section headings: - - title - major headings - subheadings - 4th level subheadings - -Emphasis: Use this code: Bold: __string__ or string, Italic: _string_ or string, Strikethrough: string - -Mathematical symbols: Use this code: $ mathematical symbols $ - -Monospace font: Surround text with a back single quotation mark (`). Use monospace for file path and file names and for text users enter or message text users see. - -Line breaks: Sometimes Markdown doesn’t make line breaks when you want them. Put two spaces at the end of the line, or use this code for a manual line break:
- -Indented quoting: Use a greater-than sign (>) and then a space, then type the text. The text is indented and has a gray horizontal line to the left of it until the next carriage return. - -Bullets: Use the dash sign (-) with a space after it or a space, a dash, and a space (-), to create a circular bullet. To create a sub bullet, use a tab followed a dash and a space. You can also use an asterisk instead of a dash, and it works the same. - -Numbered lists: Start with 1. followed by a space, then your text. Hit return and numbering is automatic. Start each line with some number and a period, then a space. Tab to indent to get subnumbering. - -Checkboxes in readme files: Use this code for an unchecked box: ( ) -Use this code for a checked box: (x) - -Tables in readme files: Use this code: - - Heading Heading - - text text - text text - -" -FF69E780BD8FECEAF7A0ADD24C159679F7359F81_1,FF69E780BD8FECEAF7A0ADD24C159679F7359F81,"Graphics in notebooks: Drag and drop images to the Markdown cell to attach it to the notebook. To add images to other cell types, use graphics that are hosted on the web with this code, substituting url/name with the full URL and name of the image: - -Graphics in readme files: Use this code: Alt text] - -Geometric shapes: Use this code with a decimal or hex reference number from here: [!UTF-8 Geometric shapes](https://www.w3schools.com/charsets/ref_utf_geometric.asp)&#reference_number; - -Horizontal lines: Use three asterisks: - -Internal links: To link to a section, add an anchor above the section title and then create a link. - -Use this code to create an anchor: -Use this code to create the link: [section title](section-ID) -Make sure that the section_ID is unique within the notebook or readme. - -Alternatively, for notebooks you can skip creating anchors and use this code: [section title](section-title) -For the text in the parentheses, replace spaces and special characters with a hyphen and make all characters lowercase. - -Test all links! - -External links: Use this code: [link text](http://url) - -To create a link that opens in a new window or tab, use this code: link text - -Test all links! - -Parent topic:[Projects ](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html) -" -FD32E17FF88251CDFC3FA01A1AD8EEBDA98EDA06_0,FD32E17FF88251CDFC3FA01A1AD8EEBDA98EDA06," Accessing asset details - -Display details about an asset and preview data assets in a deployment space. - -To display details about the asset, click the asset name. For example, click a model name to view details such as the associated software and hardware specifications, the model creation date, and more. Some details, such as the model name, description, and tags, are editable. - -For data assets, you can also preview the data. - -" -FD32E17FF88251CDFC3FA01A1AD8EEBDA98EDA06_1,FD32E17FF88251CDFC3FA01A1AD8EEBDA98EDA06," Previewing data assets - -To preview a data asset, click the data asset name. - - - -* User's access to the data is based on the API layer. This means that if user's bearer token allows for viewing data, the data preview is displayed. -* For tabular data, only a subset of the data is displayed. Also, column names are displayed but their data types are not inferred. -* For data in XLS files, only the first worksheet is displayed for preview. -* All data from Cloud Object Storage connectors is assumed to be tabular data. - - - -MIME types supported for preview: - - - - Format Mime types - - Image image/bmp, image/cmu-raster, image/fif, image/florian, image/g3fax, image/gif, image/ief, image/jpeg, image/jutvision, image/naplps, image/pict, image/png, image/svg+xml, image/vnd.net-fpx, image/vnd.rn-realflash, image/vnd.rn-realpix, image/vnd.wap.wbmp, image/vnd.xiff, image/x-cmu-raster, image/x-dwg, image/x-icon, image/x-jg, image/x-jps, image/x-niff, image/x-pcx, image/x-pict, image/x-portable-anymap, image/x-portable-bitmap, image/x-portable-greymap, image/x-portable-pixmap, image/x-quicktime, image/x-rgb, image/x-tiff, image/x-windows-bmp, image/x-xwindowdump, image/xbm, image/xpm -" -FD32E17FF88251CDFC3FA01A1AD8EEBDA98EDA06_2,FD32E17FF88251CDFC3FA01A1AD8EEBDA98EDA06," Text application/json, text/asp, text/css, text/csv, text/html, text/mcf, text/pascal, text/plain, text/richtext, text/scriplet, text/tab-separated-values, text/tab-separated-values, text/uri-list, text/vnd.abc, text/vnd.fmi.flexstor, text/vnd.rn-realtext, text/vnd.wap.wml, text/vnd.wap.wmlscript, text/webviewhtml, text/x-asm, text/x-audiosoft-intra, text/x-c, text/x-component, text/x-fortran, text/x-h, text/x-java-source, text/x-la-asf, text/x-m, text/x-pascal, text/x-script, text/x-script.csh, text/x-script.elisp, text/x-script.ksh, text/x-script.lisp, text/x-script.perl, text/x-script.perl-module, text/x-script.python, text/x-script.rexx, text/x-script.tcl, text/x-script.tcsh, text/x-script.zsh, text/x-server-parsed-html, text/x-setext, text/x-sgml, text/x-speech, text/x-uil, text/x-uuencode, text/x-vcalendar, text/xml - Tabular data text/csv, application/excel, application/vnd.ms-excel, application/vnd.openxmlformats-officedocument.spreadsheetml.sheet, data from connections - - - -Parent topic:[Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html) -" -B518A7A2D4AA3B05564C965889116F6A6151A34B_0,B518A7A2D4AA3B05564C965889116F6A6151A34B," Authenticating for programmatic access - -To use Watson Machine Learning with the Python client library or the REST API, you must authenticate to secure your work. Learn about the different ways to authenticate and how to apply them to the service of your choosing. - -You use IBM Cloud® Identity and Access Management (IAM) to make authenticated requests to public IBM Watson™ services. With IAM access policies, you can assign access to more than one resource from a single key. In addition, a user, service ID, and service instance can hold multiple API keys. - -" -B518A7A2D4AA3B05564C965889116F6A6151A34B_1,B518A7A2D4AA3B05564C965889116F6A6151A34B," Security overview - -Refer to the section that describes your security needs. - - - -* [Authentication credentials](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html?context=cdpaas&locale=enterminology) -* [Python client](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html?context=cdpaas&locale=enpython-client) -* [Rest API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html?context=cdpaas&locale=enrest-api) - - - -" -B518A7A2D4AA3B05564C965889116F6A6151A34B_2,B518A7A2D4AA3B05564C965889116F6A6151A34B," Authentication credentials - -These terms relate to the security requirements described in this topic. - - - -* API keys allow you to easily authenticate when you are using the Python client or APIs and can be used across multiple services. API Keys are considered confidential because they are used to grant access. Treat all API keys as you would a password because anyone with your API key can access your service. -* An IAM token is an authentication token that is required to access IBM Cloud services. You can generate a token by using your API key in the token request. For details on using IAM tokens, refer to [Authenticating to Watson Machine Learning API](https://cloud.ibm.com/apidocs/machine-learningauthentication). - - - -To authenticate to a service through its API, pass your credentials to the API. You can pass either a bearer token in an authorization header or an API key. - -" -B518A7A2D4AA3B05564C965889116F6A6151A34B_3,B518A7A2D4AA3B05564C965889116F6A6151A34B," Generating an API key - -To generate an API key from your IBM Cloud user account, go to [Manage access and users - API Keys](https://cloud.ibm.com/iam/apikeys) and create or select an API key for your user account. - -You can also generate and rotate API keys from Profile and settings > User API key. For more information, see [Managing the user API key](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-apikeys.html). - -" -B518A7A2D4AA3B05564C965889116F6A6151A34B_4,B518A7A2D4AA3B05564C965889116F6A6151A34B," Authenticate with an IAM token - -IAM tokens are temporary security credentials that are valid for 60 minutes. When a token expires, you generate a new one. Tokens can be useful for temporary access to resources. For more information, see [Generating an IBM Cloud IAM token by using an API key](https://cloud.ibm.com/docs/account?topic=account-iamtoken_from_apikey). - -" -B518A7A2D4AA3B05564C965889116F6A6151A34B_5,B518A7A2D4AA3B05564C965889116F6A6151A34B," Getting a service-level token - -You can also authenticate with a service-level token. To generate a service-level token: - - - -1. Refer to the IBM Cloud instructions for [creating a Service ID](https://cloud.ibm.com/iam/serviceids). -2. Generate an API key for that Service ID. -3. Open the space where you plan to keep your deployable assets. -4. On the Access control tab, add the Service ID and assign an access role of Admin or Editor. - - - -You can use the service-level token with your API scoring requests. - -" -B518A7A2D4AA3B05564C965889116F6A6151A34B_6,B518A7A2D4AA3B05564C965889116F6A6151A34B," Interfaces - - - -* [Python client](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html?context=cdpaas&locale=enpython) -* [REST API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html?context=cdpaas&locale=enrest-api) - - - -" -B518A7A2D4AA3B05564C965889116F6A6151A34B_7,B518A7A2D4AA3B05564C965889116F6A6151A34B," Python client - -Refer to: [Watson Machine Learning Python client ![external link](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/launch-glyph.png)](https://ibm.github.io/watson-machine-learning-sdk/) - -To create an instance of the Watson Machine Learning Python client object, you need to pass your credentials to Watson Machine Learning API client. - -wml_credentials = { -""apikey"":""123456789"", -""url"": "" https://HIJKL"" -} -from ibm_watson_machine_learning import APIClient -wml_client = APIClient(wml_credentials) - -Note:Even though you do not explicitly provide an instance_id, it will be picked up from the associated space or project for billing purposes. For details on plans and billing for Watson Machine Learning services, refer to [Watson Machine Learning plans and runtime usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html). - -Refer to [sample notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html) for examples of how to authenticate and then score a model by using the Python client. - -" -B518A7A2D4AA3B05564C965889116F6A6151A34B_8,B518A7A2D4AA3B05564C965889116F6A6151A34B," REST API - -Refer to: [Watson Machine Learning REST API ![external link](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/launch-glyph.png)](https://cloud.ibm.com/apidocs/machine-learning) - -To use the Watson Machine Learning REST API, you must obtain an IBM Cloud Identity and Access Management (IAM) token. In this example, you would supply your API key in place of the example key. - -" -B518A7A2D4AA3B05564C965889116F6A6151A34B_9,B518A7A2D4AA3B05564C965889116F6A6151A34B," cURL example - -curl -k -X POST ---header ""Content-Type: application/x-www-form-urlencoded"" ---header ""Accept: application/json"" ---data-urlencode ""grant_type=urn:ibm:params:oauth:grant-type:apikey"" ---data-urlencode ""apikey=123456789"" -""https://iam.cloud.ibm.com/identity/token"" - -The obtained IAM token needs to be prefixed with the word Bearer, and passed in the Authorization header for API calls. - -" -B518A7A2D4AA3B05564C965889116F6A6151A34B_10,B518A7A2D4AA3B05564C965889116F6A6151A34B," Python example - -import requests - - Paste your Watson Machine Learning service apikey here - -apikey = ""123456789"" - - Get an IAM token from IBM Cloud -url = ""https://iam.cloud.ibm.com/identity/token"" -headers = { ""Content-Type"" : ""application/x-www-form-urlencoded"" } -data = ""apikey="" + apikey + ""&grant_type=urn:ibm:params:oauth:grant-type:apikey"" -response = requests.post( url, headers=headers, data=data, auth=( apikey ) -iam_token = response.json()[""access_token""] - -" -B518A7A2D4AA3B05564C965889116F6A6151A34B_11,B518A7A2D4AA3B05564C965889116F6A6151A34B," Node.js example - -var btoa = require( ""btoa"" ); -var request = require( 'request' ); - -// Paste your Watson Machine Learning service apikey here -var apikey = ""123456789""; - -// Use this code as written to get an access token from IBM Cloud REST API -// -var IBM_Cloud_IAM_uid = ""bx""; -var IBM_Cloud_IAM_pwd = ""bx""; - -var options = { url : ""https://iam.cloud.ibm.com/identity/token"", -headers : { ""Content-Type"" : ""application/x-www-form-urlencoded"", -""Authorization"" : ""Basic "" + btoa( IBM_Cloud_IAM_uid + "":"" + IBM_Cloud_IAM_pwd ) }, -body : ""apikey="" + apikey + ""&grant_type=urn:ibm:params:oauth:grant-type:apikey"" }; - -request.post( options, function( error, response, body ) -{ -var iam_token = JSON.parse( body )[""access_token""]; -} ); - -Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) -" -CD27E36E95AE5324468C33CF3A112DC1611CA74C_0,CD27E36E95AE5324468C33CF3A112DC1611CA74C," Customizing with third-party and private Python libraries - -If your model requires custom components such as user-defined transformers, estimators, or user-defined tensors, you can create a custom software specification that is derived from a base, or a predefined specification. Python functions and Python scripts also support custom software specifications. - -You can use custom software specification to reference any third-party libraries, user-created Python packages, or both. Third-party libraries or user-created Python packages must be specified as package extensions so that they can be referenced in a custom software specification. - -You can customize deployment runtimes in these ways: - - - -* [Define customizations in a Watson Studio project and then promote them to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-create-custom-software-spec.html?context=cdpaas&locale=encustom-ws) -* [Create package extensions and custom software specifications in a deployment space by using the Watson Machine Learning Python client](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-create-custom-software-spec.html?context=cdpaas&locale=encustom-wml) - - - -For more information, see [Troubleshooting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-create-custom-software-spec.html?context=cdpaas&locale=ents). - -" -CD27E36E95AE5324468C33CF3A112DC1611CA74C_1,CD27E36E95AE5324468C33CF3A112DC1611CA74C," Defining customizations in a Watson Studio project and then promoting them to a deployment space - -Environments in Watson Studio projects can be customized to include third-party libraries that can be installed from Anaconda or from the PyPI repository. - -For more information, see [Environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html). - -As part of custom environment creation, these steps are performed internally (visible to the user): - - - -* A package extension that contains the details of third-party libraries is created in conda YAML format. -* A custom software specification with the same name as the custom environment is created and the package extension that is created is associated with this custom software specification. - - - -The models or Python functions/scripts created with the custom environment must reference the custom software specification when they are saved in Watson Machine Learning repository in the project scope. - -" -CD27E36E95AE5324468C33CF3A112DC1611CA74C_2,CD27E36E95AE5324468C33CF3A112DC1611CA74C," Propagating software specifications and package extensions from projects to deployment spaces - -To export custom software specifications and package extensions that were created in a Watson Studio project to a deployment space: - - - -1. From your project interface, click the Manage tab. -2. Select Environments. -3. Click the Templates tab. -4. From your custom environment's Options menu, select Promote to space. - - - -![Selecting ""Promote to space"" for a custom environment in Watson Studio interface](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/promote-custom-env-from-ws.png) - -Alternatively, when you promote any model or Python function that is associated with a custom environment from a Watson Studio project to a deployment space, the associated custom software specification and package extension is also promoted to the deployment space. - -If you want to update software specifications and package extensions after you promote them to deployment space, follow these steps: - - - -1. In the deployment space, delete the software specifications, package extensions, and associated models (optional) by using the Watson Machine Learning Python client. -2. In a project, promote the model, function, or script that is associated with the changed custom software specification and package extension to the space. - - - -Software specifications are also included when you import a project or space that includes one. - -" -CD27E36E95AE5324468C33CF3A112DC1611CA74C_3,CD27E36E95AE5324468C33CF3A112DC1611CA74C," Creating package extensions and custom software specifications in a deployment space by using the Watson Machine Learning Python client - -You can use the Watson Machine Learning APIs or Python client to define a custom software specification that is derived from a base specification. - -High-level steps to create a custom software specification that uses third-party libraries or user-created Python packages: - - - -1. Optional: [Save a conda YAML file that contains a list of third-party libraries](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-create-custom-software-spec.html?context=cdpaas&locale=ensave-conda-yaml) or [save a user-created Python library and create a package extension](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-create-custom-software-spec.html?context=cdpaas&locale=ensave-user-created). - -Note: This step is not required if the model does not have any dependency on a third-party library or a user-created Python library. -2. Create a custom software specification -3. Add a reference of the package extensions to the custom software specification that you created. - - - -" -CD27E36E95AE5324468C33CF3A112DC1611CA74C_4,CD27E36E95AE5324468C33CF3A112DC1611CA74C," Saving a conda YAML file that contains a list of third-party libraries - -To save a conda YAML file that contains a list of third-party libraries as a package extension and create a custom software specification that is linked to the package extension: - - - -1. Authenticate and create the client. - -Refer to [Authentication](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html). -2. Create and set the default deployment space, then list available software specifications. - -metadata = { -wml_client.spaces.ConfigurationMetaNames.NAME: -'examples-create-software-spec', -wml_client.spaces.ConfigurationMetaNames.DESCRIPTION: -'For my models' -} -space_details = wml_client.spaces.store(meta_props=metadata) -space_uid = wml_client.spaces.get_id(space_details) - - set the default space -wml_client.set.default_space(space_uid) - - see available meta names for software specs -print('Available software specs configuration:', wml_client.software_specifications.ConfigurationMetaNames.get()) -wml_client.software_specifications.list() - -asset_id = 'undefined' -pe_asset_id = 'undefined' -3. Create the metadata for package extensions to add to the base specification. - -pe_metadata = { -wml_client.package_extensions.ConfigurationMetaNames.NAME: -'My custom library', - optional: - wml_client.software_specifications.ConfigurationMetaNames.DESCRIPTION: -wml_client.package_extensions.ConfigurationMetaNames.TYPE: -'conda_yml' -} -4. Create a yaml file that contains the list of packages and then save it as customlibrary.yaml. - -Example yaml file: - -name: add-regex-package -dependencies: -- regex - -" -CD27E36E95AE5324468C33CF3A112DC1611CA74C_5,CD27E36E95AE5324468C33CF3A112DC1611CA74C,"For more information, see [Examples of customizations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html). -5. Store package extension information. - -pe_asset_details = wml_client.package_extensions.store( -meta_props=pe_metadata, -file_path='customlibrary.yaml' -) -pe_asset_id = wml_client.package_extensions.get_id(pe_asset_details) -6. Create the metadata for the software specification and store the software specification. - - Get the id of the base software specification -base_id = wml_client.software_specifications.get_id_by_name('default_py3.9') - - create the metadata for software specs -ss_metadata = { -wml_client.software_specifications.ConfigurationMetaNames.NAME: -'Python 3.9 with pre-installed ML package', -wml_client.software_specifications.ConfigurationMetaNames.DESCRIPTION: -'Adding some custom libraries like regex', optional -wml_client.software_specifications.ConfigurationMetaNames.BASE_SOFTWARE_SPECIFICATION: -{'guid': base_id}, -wml_client.software_specifications.ConfigurationMetaNames.PACKAGE_EXTENSIONS: -[{'guid': pe_asset_id}] -} - - store the software spec -ss_asset_details = wml_client.software_specifications.store(meta_props=ss_metadata) - - get the id of the new asset -asset_id = wml_client.software_specifications.get_id(ss_asset_details) - - view new software specification details -import pprint as pp - -ss_asset_details = wml_client.software_specifications.get_details(asset_id) -print('Package extensions', pp.pformat( -ss_asset_details['entity']['package_extensions'] -)) - - - -" -CD27E36E95AE5324468C33CF3A112DC1611CA74C_6,CD27E36E95AE5324468C33CF3A112DC1611CA74C," Saving a user-created Python library and creating a package extension - -For more information, see [Requirements for using custom components in models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-custom_libs_overview.html). - -To save a user-created Python package as a package extension and create a custom software specification that is linked to the package extension: - - - -1. Authenticate and create the client. - -Refer to [Authentication](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html). -2. Create and set the default deployment space, then list available software specifications. - -metadata = { -wml_client.spaces.ConfigurationMetaNames.NAME: -'examples-create-software-spec', -wml_client.spaces.ConfigurationMetaNames.DESCRIPTION: -'For my models' -} -space_details = wml_client.spaces.store(meta_props=metadata) -space_uid = wml_client.spaces.get_id(space_details) - - set the default space -wml_client.set.default_space(space_uid) - - see available meta names for software specs -print('Available software specs configuration:', wml_client.software_specifications.ConfigurationMetaNames.get()) -wml_client.software_specifications.list() - -asset_id = 'undefined' -pe_asset_id = 'undefined' -3. Create the metadata for package extensions to add to the base specification. - -Note:You can specify pip_zip only as a value for the wml_client.package_extensions.ConfigurationMetaNames.TYPE metadata property. - -pe_metadata = { -wml_client.package_extensions.ConfigurationMetaNames.NAME: -'My Python library', - optional: - wml_client.software_specifications.ConfigurationMetaNames.DESCRIPTION: -" -CD27E36E95AE5324468C33CF3A112DC1611CA74C_7,CD27E36E95AE5324468C33CF3A112DC1611CA74C,"wml_client.package_extensions.ConfigurationMetaNames.TYPE: -'pip.zip' -} -4. Specify the path of the user-created Python library. - -python_lib_file_path=""my-python-library-0.1.zip"" - -For more information, see [Requirements for using custom components in models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-custom_libs_overview.html). -5. Store package extension information. - -pe_asset_details = wml_client.package_extensions.store( -meta_props=pe_metadata, -file_path=python_lib_file_path -) -pe_asset_id = wml_client.package_extensions.get_id(pe_asset_details) -6. Create the metadata for the software specification and store the software specification. - - Get the id of the base software specification -base_id = wml_client.software_specifications.get_id_by_name('default_py3.9') - - create the metadata for software specs -ss_metadata = { -wml_client.software_specifications.ConfigurationMetaNames.NAME: -'Python 3.9 with pre-installed ML package', -wml_client.software_specifications.ConfigurationMetaNames.DESCRIPTION: -'Adding some custom libraries like regex', optional -wml_client.software_specifications.ConfigurationMetaNames.BASE_SOFTWARE_SPECIFICATION: -{'guid': base_id}, -wml_client.software_specifications.ConfigurationMetaNames.PACKAGE_EXTENSIONS: -[{'guid': pe_asset_id}] -} - - store the software spec -ss_asset_details = wml_client.software_specifications.store(meta_props=ss_metadata) - - get the id of the new asset -asset_id = wml_client.software_specifications.get_id(ss_asset_details) - - view new software specification details -import pprint as pp - -" -CD27E36E95AE5324468C33CF3A112DC1611CA74C_8,CD27E36E95AE5324468C33CF3A112DC1611CA74C,"ss_asset_details = wml_client.software_specifications.get_details(asset_id) -print('Package extensions', pp.pformat( -ss_asset_details['entity']['package_extensions'] -)) - - - -" -CD27E36E95AE5324468C33CF3A112DC1611CA74C_9,CD27E36E95AE5324468C33CF3A112DC1611CA74C," Troubleshooting - -When a conda yml based custom library installation fails with this error: Encountered error while installing custom library, try these alternatives: - - - -* Use a different version of the same package that is available in Anaconda for the concerned Python version. -* Install the library from the pypi repository, by using pip. Edit the conda yml installation file contents: - -name: -dependencies: -- numpy -- pip: -- pandas==1.2.5 - - - -Parent topic:[Customizing deployment runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-customize.html) -" -9DFF39B0FB5FE6195AA75E50040B6D669FCE2BB6_0,9DFF39B0FB5FE6195AA75E50040B6D669FCE2BB6," Requirements for using custom components in ML models - -You can define your own transformers, estimators, functions, classes, and tensor operations in models that you deploy in IBM Watson Machine Learning as online deployments. - -" -9DFF39B0FB5FE6195AA75E50040B6D669FCE2BB6_1,9DFF39B0FB5FE6195AA75E50040B6D669FCE2BB6," Defining and using custom components - -To use custom components in your models, you need to package your custom components in a [Python distribution package](https://packaging.python.org/glossary/term-distribution-package). - -" -9DFF39B0FB5FE6195AA75E50040B6D669FCE2BB6_2,9DFF39B0FB5FE6195AA75E50040B6D669FCE2BB6," Package requirements - - - -* The package type must be: [source distribution](https://packaging.python.org/glossary/term-source-distribution-or-sdis) (distributions of type Wheel and Egg are not supported) -* The package file format must be: .zip -* Any third-party dependencies for your custom components must be installable by pip and must be passed to the install_requires argument of the setup function of the setuptools library. - - - -Refer to: [Creating a source distribution](https://docs.python.org/2/distutils/sourcedist.html) - -" -9DFF39B0FB5FE6195AA75E50040B6D669FCE2BB6_3,9DFF39B0FB5FE6195AA75E50040B6D669FCE2BB6," Storing your custom package - -You must take extra steps when you store your trained model in the Watson Machine Learning repository: - - - -* Store your custom package in the [Watson Machine Learning repository](https://ibm.github.io/watson-machine-learning-sdk/core_api.htmlibm_watson_machine_learning.runtimes.Runtimes.store_library) (use the runtimes.store_library function from the Watson Machine Learning Python client, or the store libraries Watson Machine Learning CLI command.) -* Create a runtime resource object that references your stored custom package, and then [store the runtime resource object](https://ibm.github.io/watson-machine-learning-sdk/core_api.htmlibm_watson_machine_learning.runtimes.Runtimes.store) in the Watson Machine Learning repository (use the runtimes.store function, or the store runtimes command.) -* When you store your trained model in the Watson Machine Learning repository, reference your stored runtime resource in the [metadata](https://ibm.github.io/watson-machine-learning-sdk/core_api.htmlclient.Repository.store_model) that is passed to the store_model function (or the store command.) - - - -" -9DFF39B0FB5FE6195AA75E50040B6D669FCE2BB6_4,9DFF39B0FB5FE6195AA75E50040B6D669FCE2BB6," Supported frameworks - -These frameworks support custom components: - - - -* Scikit-learn -* XGBoost -* Tensorflow -* Python Functions -* Python Scripts -* Decision Optimization - - - -For more information, see [Supported frameworks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html) - -Parent topic:[Customizing deployment runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-customize.html) -" -F8E12F246225210B8C984D447B3E15867D2E8869,F8E12F246225210B8C984D447B3E15867D2E8869," Customizing Watson Machine Learning deployment runtimes - -Create custom Watson Machine Learning deployment runtimes with libraries and packages that are required for your deployments. You can build custom images based on deployment runtime images available in IBM Watson Machine Learning. The images contain preselected open source libraries and selected IBM libraries. - -For a list of requirements for creating private Python packages, refer to [Requirements for using custom components in ML models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-custom_libs_overview.html). - -You can customize your deployment runtimes by [customizing Python runtimes with third-party libraries and user-created Python packages](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-create-custom-software-spec.html) - -Parent topic:[Deploying and managing assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html) -" -82512A3915BF43DF08D9106027A67D5E059B2719_0,82512A3915BF43DF08D9106027A67D5E059B2719," Creating an SPSS Modeler batch job with multiple data sources - -In an SPSS Modeler flow, it's common to have multiple import and export nodes, where multiple import nodes can be fetching data from one or more relational databases. Learn how to use Watson Machine Learning to create an SPSS Modeler batch job with multiple data sources from relational databases. - -Note:The examples use IBM Db2 and IBM Db2 Warehouse, referred to in examples as dashdb. - -" -82512A3915BF43DF08D9106027A67D5E059B2719_1,82512A3915BF43DF08D9106027A67D5E059B2719," Connecting to multiple relational databases as input to a batch job - -The number of import nodes in an SPSS Modeler flow can vary. You might use as many as 60 or 70. However, the number of distinct connections to databases in these cases are just a few, though the table names that are accessed through the connections vary. Rather than specifying the details for every table connection, the approach that is described here focuses on the database connections. Therefore, the batch jobs accept a list of data connections or references by node name that are mapped to connection names in the SPSS Modeler flow's import nodes. - -For example, assume that if a flow has 30 nodes, only three database connections are used to connect to 30 different tables. In this case, you submit three connections (C1, C2, and C3) to the batch job. C1, C2, and C3 are connection names in the import node of the flow and the node name in the input of the batch job. - -When a batch job runs, the data reference for a node is provided by mapping the node name with the connection name in the import node. This example illustrates the steps for creating the mapping. - -The following diagram shows the flow from model creation to job submission: - -![SPSS Modeler job with multiple inputs](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/word_SPSS-multiple-input-job.svg) - -Limitation: The connection reference for a node in a flow is overridden by the reference that is received from the batch job. However, the table name in the import or export node is not overridden. - -" -82512A3915BF43DF08D9106027A67D5E059B2719_2,82512A3915BF43DF08D9106027A67D5E059B2719," Deployment scenario with example - -In this example, an SPSS model is built by using 40 import nodes and a single output. The model has the following configuration: - - - -* Connections to three databases: 1 Db2 Warehouse (dashDB) and 2 Db2. -* The import nodes are read from 40 tables (30 from Db2 Warehouse and 5 each from the Db2 databases). -* A single output table is written to a Db2 database. - - - -![SPSS Modeler flow with multiple inputs](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/word_SPSS-multiple-input-job2.svg) - -" -82512A3915BF43DF08D9106027A67D5E059B2719_3,82512A3915BF43DF08D9106027A67D5E059B2719," Example - -These steps demonstrate how to create the connections and identify the tables. - - - -1. Create a connection in your project. - -To run the SPSS Modeler flow, you start in your project and create a connection for each of the three databases your model connects to. You then configure each import node in the flow to point to a table in one of the connected databases. - -For this example, the database connections in the project are named dashdb_conn, db2_conn1, and db2_conn2. -2. Configure Data Asset to import nodes in your SPSS Modeler flow with connections. - -Configure each node in the flow to reference one of the three connections you created (dashdb_conn, db2_conn1, and db2_conn2), then specify a table for each node. - -Note: You can change the name of the connection at the time of the job run. The table names that you select in the flow are referenced when the job runs. You can't overwrite or change them. -3. Save the SPSS model to the Watson Machine Learning repository. - -For this example, it's helpful to provide the input and output schema when you are saving the model. It simplifies the process of identifying each input when you create and submit the batch job in the Watson Studio user interface. Connections that are referenced in the Data Asset nodes of the SPSS Modeler flow must be provided in the node name field of the input schema. To find the node name, double-click the Data Asset import node in your flow to open its properties: - -![Data Asset import node name](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/spss-node-name.png) - -Note:SPSS models that are saved without schemas are still supported for jobs, but you must enter node name fields manually and provide the data asset when you submit the job. - -This code sample shows how to save the input schema when you save the model (Endpoint: POST /v4/models). - -{ -""name"": ""SPSS Drug Model"", -""label_column"": ""label"", -" -82512A3915BF43DF08D9106027A67D5E059B2719_4,82512A3915BF43DF08D9106027A67D5E059B2719,"""type"": ""spss-modeler_18.1"", -""runtime"": { -""href"": ""/v4/runtimes/spss-modeler_18.1"" -}, -""space"": { -""href"": ""/v4/spaces/"" -}, -""schemas"": { -""input"": [ { ""id"": ""dashdb_conn"", ""fields"": ] }, -{ ""id"": ""db2_conn1 "", ""fields"": ] } , -{ ""id"": ""db2_conn2"", ""fields"": ] } ], -""output"": [{ ""id"": ""db2_conn2 "",""fields"": ] }] -} -} - -Note: The number of fields in each of these connections doesn't matter. They’re not validated or used. What's important is the number of connections that are used. -4. Create the batch deployment for the SPSS model. - -For SPSS models, the creation process of the batch deployment job is the same. You can submit the deployment request with the model that was created in the previous step. -5. Submit SPSS batch jobs. - -You can submit a batch job from the Watson Studio user interface or by using the REST API. If the schema is saved with the model, the Watson Studio user interface makes it simple to accept input from the connections specified in the schema. Because you already created the data connections, you can select a connected data asset for each node name field that displays in the Watson Studio user interface as you define the job. - -The name of the connection that is created at the time of job submission can be different from the one used at the time of model creation. However, it must be assigned to the node name field. - - - -" -82512A3915BF43DF08D9106027A67D5E059B2719_5,82512A3915BF43DF08D9106027A67D5E059B2719," Submitting a job when schema is not provided - -If the schema isn't provided in the model metadata at the time the model is saved, you must enter the import node name manually. Further, you must select the data asset in the Watson Studio user interface for each connection. Connections that are referenced in the Data Asset import nodes of the SPSS Modeler flow must be provided in the node name field of the import/export data references. - -" -82512A3915BF43DF08D9106027A67D5E059B2719_6,82512A3915BF43DF08D9106027A67D5E059B2719," Specifying the connections for a job with data asset - -This code sample demonstrates how to specify the connections for a job that is submitted by using the REST API (Endpoint: /v4/deployment_jobs). - -{ -""deployment"": { -""href"": ""/v4/deployments/"" -}, -""scoring"": { -""input_data_references"": [ -{ -""id"": ""dashdb_conn"", -""name"": ""dashdb_conn"", -""type"": ""data_asset"", -""connection"": {}, -""location"": { -""href"": ""/v2/assets/?space_id="" -}, -""schema"": {} -}, -{ -""id"": ""db2_conn1 "", -""name"": ""db2_conn1 "", -""type"": ""data_asset"", -""connection"": {}, -""location"": { -""href"": ""/v2/assets/?space_id="" -}, -""schema"": {} -}, -{ -""id"": ""db2_conn2 "", -""name"": ""db2_conn2"", -""type"": ""data_asset"", -""connection"": {}, -""location"": { -""href"": ""/v2/assets/?space_id="" -}, -""schema"": {} -}], -""output_data_reference"": { -""id"": ""db2_conn2"" -""name"": ""db2_conn2"", -""type"": ""data_asset "", -""connection"": {}, -""location"": { -""href"": ""/v2/assets/?space_id="" -}, -""schema"": {} -} -} - -Parent topic:[Creating a batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html) -" -315971AE6C6A4EEDE13E9E1449B2A36F548B928F_0,315971AE6C6A4EEDE13E9E1449B2A36F548B928F," Deleting a deployment - -Delete your deployment when you no longer need it to free up resources. You can delete a deployment from a deployment space, or programmatically, by using the Python client or Watson Machine Learning APIs. - -" -315971AE6C6A4EEDE13E9E1449B2A36F548B928F_1,315971AE6C6A4EEDE13E9E1449B2A36F548B928F," Deleting a deployment from a space - -To remove a deployment: - - - -1. Open the Deployments page of your deployment space. -2. Choose Delete from the action menu for the deployment name. -![Deleting a deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/deploy-delete.png) - - - -" -315971AE6C6A4EEDE13E9E1449B2A36F548B928F_2,315971AE6C6A4EEDE13E9E1449B2A36F548B928F," Deleting a deployment by using the Python client - -Use the following method to delete the deployment. - -client.deployments.delete(deployment_uid) - -Returns a SUCCESS message. To check that the deployment was removed, you can list deployments and make sure that the deleted deployment is no longer listed. - -client.deployments.list() - -Returns: - ----- ---- ----- ------- ------------- -GUID NAME STATE CREATED ARTIFACT_TYPE ----- ---- ----- ------- ------------- - -" -315971AE6C6A4EEDE13E9E1449B2A36F548B928F_3,315971AE6C6A4EEDE13E9E1449B2A36F548B928F," Deleting a deployment by using the REST API - -Use the DELETE method for deleting a deployment. - -DELETE /ml/v4/deployments/{deployment_id} - -For more information, see [Delete](https://cloud.ibm.com/apidocs/machine-learningdeployments-delete). - -For example, see the following code snippet: - -curl --location --request DELETE 'https://us-south.ml.cloud.ibm.com/ml/v4/deployments/:deployment_id?space_id=&version=2020-09-01' - -Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) -" -53019DD52EDB5790460DFF9A02363856B83CAFB7_0,53019DD52EDB5790460DFF9A02363856B83CAFB7," Managing predictive deployments - -For proper deployment, you must set up a deployment space and then select and configure a specific deployment type. After you deploy assets, you can manage and update them to make sure they perform well and to monitor their accuracy. - -To be able to deploy assets from a space, you must have a machine learning service instance that is provisioned and associated with that space. For more information, see [Associating a service instance with a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-instance.htmlassociating-instance-with-space). - -Online and batch deployments provide simple ways to create an online scoring endpoint or do batch scoring with your models. - -If you want to implement a custom logic: - - - -* Create a Python function to use for creating your online endpoint -* Write a notebook or script for batch scoring - - - -Note: If you create a notebook or a script to perform batch scoring such an asset runs as a platform job, not as a batch deployment. - -" -53019DD52EDB5790460DFF9A02363856B83CAFB7_1,53019DD52EDB5790460DFF9A02363856B83CAFB7," Deployable assets - -Following is the list of assets that you can deploy from a Watson Machine Learning space, with information on applicable deployment types: - - - -List of assets that you can deploy - - Asset type Batch deployment Online deployment - - Functions Yes Yes - Models Yes Yes - Scripts Yes No - - - -An R Shiny app is the only asset type that is supported for web app deployments. - -Notes: - - - -* A deployment job is a way of running a batch deployment, or a self-contained asset like a flow in Watson Machine Learning. You can select the input and output for your job and choose to run it manually or on a schedule. For more information, see [Creating a deployment job](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-jobs.html). -* Notebooks and flows use notebook environments. You can run them in a deployment space, but they are not deployable. - - - -For more information, see: - - - -* [Creating online deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html) -* [Creating batch deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html) -* [Deploying Python functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function.html) -* [Deploying scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-script.html) - - - -After you deploy assets, you can manage and update them to make sure they perform well and to monitor their accuracy. Some ways to manage or update a deployment are as follows: - - - -" -53019DD52EDB5790460DFF9A02363856B83CAFB7_2,53019DD52EDB5790460DFF9A02363856B83CAFB7,"* [Manage deployment jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-jobs.html). After you create one or more jobs, you can view and manage them from the Jobs tab of your deployment space. -* [Update a deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html). For example, you can replace a model with a better-performing version without having to create a new deployment. -* [Scale a deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-scaling.html) to increase availability and throughput by creating replicas of the deployment. -* [Delete a deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-delete.html) to remove a deployment and free up resources. - - - -" -53019DD52EDB5790460DFF9A02363856B83CAFB7_3,53019DD52EDB5790460DFF9A02363856B83CAFB7," Learn more - - - -* [Full list of asset types that can be added to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html) - - - -Parent topic:[Deploying and managing models](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html) -" -45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2_0,45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2," Writing deployable Python functions - -Learn how to write a Python function and then store it as an asset that allows for deploying models. - -For a list of general requirements for deployable functions refer to [General requirements for deployable functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=enreqs). For information on what happens during a function deployment, refer to [Function deployment process](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=enfundepro) - -" -45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2_1,45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2," General requirements for deployable functions - -To be deployed successfully, a function must meet these requirements: - - - -* The Python function file on import must have the score function object as part of its scope. Refer to [Score function requirements](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=enscore) -* Scoring input payload must meet the requirements that are listed in [Scoring input requirements](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=enscoinreq) -* The output payload expected as output of score must include the schema of the score_response variable for status code 200. Note that the prediction parameter, with an array of JSON objects as its value, is mandatory in the score output. -* When you use the Python client to save a Python function that contains a reference to an outer function, only the code in the scope of the outer function (including its nested functions) is saved. Therefore, the code outside the outer function's scope will not be saved and thus will not be available when you deploy the function. - - - -" -45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2_2,45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2," Score function requirements - - - -* Two ways to add the score function object exist: - - - -* explicitly, by user -* implicitly, by the method that is used to save the Python function as an asset in the Watson Machine Learning repository - - - -* The score function must accept a single, JSON input parameter. -* The score function must return a JSON-serializable object (for example: dictionaries or lists) - - - -" -45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2_3,45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2," Scoring input requirements - - - -* The scoring input payload must include an array with the name values, as shown in this example schema. - -{""input_data"": [!{ -""values"": ""Hello world""]] -}] -} - -Note: -- The input_data parameter is mandatory in the payload. -- The input_data parameter can also include additional name-value pairs. -* The scoring input payload must be passed as input parameter value for score. This way you can ensure that the value of the score input parameter is handled accordingly inside the score. -* The scoring input payload must match the input requirements for the concerned Python function. -* The scoring input payload must include an array that matches the [Example input data schema](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=enexschema). - - - -" -45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2_4,45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2," Example input data schema - -{""input_data"": [!{ -""values"": ""Hello world""]] -}] -} - -" -45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2_5,45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2," Example Python code - -wml_python_function -def my_deployable_function(): - -def score( payload ): - -message_from_input_payload = payload.get(""input_data"")[0].get(""values"")[0] -response_message = ""Received message - {0}"".format(message_from_input_payload) - - Score using the pre-defined model -score_response = { -'predictions': [{'fields': 'Response_message_field'], -'values': response_message]] -}] -} -return score_response - -return score - -score = my_deployable_function() - -You can test your function like this: - -input_data = { ""input_data"": [{ ""fields"": ""message"" ]!, -""values"": ""Hello world"" ]] -} -] -} -function_result = score( input_data ) -print( function_result ) - -It returns the message ""Hello world!"". - -" -45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2_6,45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2," Function deployment process - -The Python code of your Function asset gets loaded as a Python module by the Watson Machine Learning engine by using an import statement. This means that the code will be executed exactly once (when the function is deployed or each time when the corresponding pod gets restarted). The score function that is defined by the Function asset is then called in every prediction request. - -" -45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2_7,45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2," Handling deployable functions - -Use one of these methods to create a deployable Python function: - - - -* [Creating deployable functions through REST API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=enrest) -* [Creating deployable functions through the Python client](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=enpy) - - - -" -45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2_8,45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2," Creating deployable functions through REST API - -For REST APIs, because the Python function is uploaded directly through a file, the file must already contain the score function. Any one time import that needs to be done to be used later within the score function can be done within the global scope of the file. When this file is deployed as a Python function, the one-time imports available in the global scope get executed during the deployment and later simply reused with every prediction request. - -Important:The function archive must be a .gz file. - -Sample score function file: - -Score function.py ---------------------- -def score(input_data): -return {'predictions': [{'values': 'Just a test']]}]} - -Sample score function with one time imports: - -import subprocess -subprocess.check_output('pip install gensim --user', shell=True) -import gensim - -def score(input_data): -return {'predictions': [{'fields': 'gensim_version'], 'values': gensim.__version__]]}]} - -" -45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2_9,45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2," Creating deployable functions through the Python client - -To persist a Python function as an asset, the Python client uses the wml_client.repository.store_function method. You can do that in two ways: - - - -* [Persisting a function through a file that contains the Python function](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=enpersfufile) -* [Persisting a function through the function object](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=enpersfunob) - - - -" -45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2_10,45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2," Persisting a function through a file that contains the Python function - -This method is the same as persisting the Python function file through REST APIs (score must be defined in the scope of the Python source file). For details, refer to [Creating deployable functions through REST API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=enrest). - -Important:When you are calling the wml_client.repository.store_function method, pass the file name as the first argument. - -" -45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2_11,45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2," Persisting a function through the function object - -You can persist Python function objects by creating Python Closures with a nested function named score. The score function is returned by the outer function that is being stored as a function object, when called. This score function must meet the requirements that are listed in [General requirements for deployable functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=enreqs). In this case, any one time imports and initial setup logic must be added in the outer nested function so that they get executed during deployment and get used within the score function. Any recurring logic that is needed during the prediction request must be added within the nested score function. - -Sample Python function save by using the Python client: - -def my_deployable_function(): - -import subprocess -subprocess.check_output('pip install gensim', shell=True) -import gensim - -def score(input_data): -import -message_from_input_payload = payload.get(""input_data"")[0].get(""values"")[0] -response_message = ""Received message - {0}"".format(message_from_input_payload) - - Score using the pre-defined model -score_response = { -'predictions': [{'fields': 'Response_message_field', 'installed_lib_version'], -'values': response_message, gensim.__version__]] -}] -} -return score_response - -return score - -function_meta = { -client.repository.FunctionMetaNames.NAME:""test_function"", -client.repository.FunctionMetaNames.SOFTWARE_SPEC_ID: sw_spec_id -} -func_details = client.repository.store_function(my_deployable_function, function_meta) - -" -45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2_12,45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2,"In this scenario, the Python function takes up the job of creating a Python file taht contains the score function and persisting the function file as an asset in the Watson Machine Learning repository: - -score = my_deployable_function() - -" -45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2_13,45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2," Learn more - - - -* [Python Closures](https://www.programiz.com/python-programming/closure) -* [Closures](https://www.learnpython.org/en/Closures) -* [Nested function, Scope of variable & closures in Python](https://www.codesdope.com/blog/article/nested-function-scope-of-variable-closures-in-pyth/) - - - -Parent topic:[Deploying Python functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function.html) -" -03FF997603B065D2DF1FBB49934CA8C348765ACF_0,03FF997603B065D2DF1FBB49934CA8C348765ACF," Deploying Python functions in Watson Machine Learning - -You can deploy Python functions in Watson Machine Learning the same way that you can deploy models. Your tools and apps can use the Watson Machine Learning Python client or REST API to send data to your deployed functions the same way that they send data to deployed models. Deploying Python functions gives you the ability to hide details (such as credentials). You can also preprocess data before you pass it to models. Additionally, you can handle errors and include calls to multiple models, all within the deployed function instead of in your application. - -" -03FF997603B065D2DF1FBB49934CA8C348765ACF_1,03FF997603B065D2DF1FBB49934CA8C348765ACF," Sample notebooks for creating and deploying Python functions - -For examples of how to create and deploy Python functions by using the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/), refer to these sample notebooks: - - - - Sample name Framework Techniques demonstrated - - [Use Python function to recognize hand-written digits](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/1eddc77b3a4340d68f762625d40b64f9) Python Use a function to store a sample model and deploy it. - [Predict business for cars](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/61a8b600f1bb183e2c471e7a64299f0e) Hybrid(Tensorflow) Set up an AI definition
Prepare the data
Create a Keras model by using Tensorflow
Deploy and score the model
Define, store, and deploy a Python function - [Deploy Python function for software specification](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/56825df5322b91daffd39426038808e9) Core Create a Python function
Create a web service
Score the model - - - -The notebooks demonstrate the six steps for creating and deploying a function: - - - -1. Define the function. -2. Authenticate and define a space. -3. Store the function in the repository. -4. Get the software specification. -5. Deploy the stored function. -6. Send data to the function for processing. - - - -For links to other sample notebooks that use the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/), refer to [Using Watson Machine Learning in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html). - -" -03FF997603B065D2DF1FBB49934CA8C348765ACF_2,03FF997603B065D2DF1FBB49934CA8C348765ACF," Increasing scalability for a function - -When you deploy a function from a deployment space or programmatically, a single copy of the function is deployed by default. To increase scalability, you can increase the number of replicas by editing the configuration of the deployment. More replicas allow for a larger volume of scoring requests. - -The following example uses the Python client API to set the number of replicas to 3. - -change_meta = { -client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: { -""name"":""S"", -""num_nodes"":3} -} - -client.deployments.update(, change_meta) - -" -03FF997603B065D2DF1FBB49934CA8C348765ACF_3,03FF997603B065D2DF1FBB49934CA8C348765ACF," Learn more - - - -* To learn more about defining a deployable Python function, see General requirements for deployable functions section in [Writing and storing deployable Python functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html). -* You can deploy a function from a deployment space through the user interface. For more information, see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html). - - - -Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) -" -8279C6C73A8DB1A593945E5EA339F9EFDE96A61E_0,8279C6C73A8DB1A593945E5EA339F9EFDE96A61E," Scaling a deployment - -When you create an online deployment for a model or function from a deployment space or programmatically, a single copy of the asset is deployed by default. To increase scalability and availability, you can increase the number of copies (replicas) by editing the configuration of the deployment. More copies allow for a larger volume of scoring requests. - -Deployments can be scaled in the following ways: - - - -* Update the configuration for a deployment in a deployment space. -* Programmatically, using the Watson Machine Learning Python client library, or the Watson Machine Learning REST APIs. - - - -" -8279C6C73A8DB1A593945E5EA339F9EFDE96A61E_1,8279C6C73A8DB1A593945E5EA339F9EFDE96A61E," Changing the number of copies of an online deployment from a space - - - -1. Click the Deployment tab of your deployment space. -2. From the action menu for your deployment name, click Edit. -3. In the Edit deployment dialog box, change the number of copies and click Save. - - - -" -8279C6C73A8DB1A593945E5EA339F9EFDE96A61E_2,8279C6C73A8DB1A593945E5EA339F9EFDE96A61E," Increasing the number of replicas of a deployment programmatically - -To view or run a working sample of scaling a deployment programmatically, you can increase the number of replicas in the metadata for a deployment. - -" -8279C6C73A8DB1A593945E5EA339F9EFDE96A61E_3,8279C6C73A8DB1A593945E5EA339F9EFDE96A61E," Python example - -This example uses the Python client to set the number of replicas to 3. - -change_meta = { -client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: { -""name"":""S"", -""num_nodes"":3} -} - -client.deployments.update(, change_meta) - -The HARDWARE_SPEC value includes a name because the API requires a name or an ID to be provided. - -" -8279C6C73A8DB1A593945E5EA339F9EFDE96A61E_4,8279C6C73A8DB1A593945E5EA339F9EFDE96A61E," REST API example - -curl -k -X PATCH -d '[ { ""op"": ""replace"", ""path"": ""/hardware_spec"", ""value"": { ""name"": ""S"", ""num_nodes"": 2 } } ]' - -You must specify a name for the hardware_spec value, but the argument is not applied for scaling. - -Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) -" -462A5BA596AADF9C38762611CA2578398F234BD4_0,462A5BA596AADF9C38762611CA2578398F234BD4," Updating a deployment - -After you create an online or a batch deployment, you can still update your deployment details and update the assets that are associated with your deployment. - -For more information, see: - - - -* [Update deployment details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html?context=cdpaas&locale=enupd-general) -* [Update assets associated with a deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html?context=cdpaas&locale=enupd-assets) - - - -" -462A5BA596AADF9C38762611CA2578398F234BD4_1,462A5BA596AADF9C38762611CA2578398F234BD4," Updating deployment details - -You can update general deployment details, such as deployment name, description, metadata, and tags by using one of these methods: - - - -* [Update deployment details from the UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html?context=cdpaas&locale=enupdate-details-ui). -* [Update deployment details by using the Patch API command](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html?context=cdpaas&locale=enupdate-details-api). - - - -" -462A5BA596AADF9C38762611CA2578398F234BD4_2,462A5BA596AADF9C38762611CA2578398F234BD4," Updating deployment details from the UI - - - -1. From the Deployments tab of your deployment space, click the action menu for the deployment and choose Edit settings. -2. Update the details and then click Save. - -Tip: You can also update a deployment from the information sheet for the deployment. - - - -" -462A5BA596AADF9C38762611CA2578398F234BD4_3,462A5BA596AADF9C38762611CA2578398F234BD4," Updating deployment details by using the Patch API command - -Use the [Watson Machine Learning API Patch](https://cloud.ibm.com/apidocs/machine-learning-cpmodels-update) command to update deployment details. - -curl -X PATCH '/ml/v4/deployments/?space_id=&version=' n--data-raw '[ -{ -""op"": """", -""path"": """", -""value"": """" -}, -{ -""op"": """", -""path"": """", -""value"": """" -} -]' - -For example, to update a description for deployment: - -curl -X PATCH '/ml/v4/deployments/?space_id=&version=' n--data-raw '[ -{ -""op"": ""replace"", -""path"": ""/description"", -""value"": """" -}, -]' - -Notes: - - - -* For , use ""add"", ""remove"", or ""replace"". - - - -" -462A5BA596AADF9C38762611CA2578398F234BD4_4,462A5BA596AADF9C38762611CA2578398F234BD4," Updating assets associated with a deployment - -After you create an online or batch deployment, you can update the deployed asset from the same endpoint. For example, if you have a better performing model, you can replace the deployed model with the improved version. When the update is complete, the new model is available from the REST API endpoint. - -Before you update an asset, make sure that these conditions are true: - - - -* The framework of the new model is compatible with the existing deployed model. -* The input schema exists and matches for the new and deployed model. - -Caution: Failure to follow these conditions can result in a failed deployment. -* For more information, see [Updating an asset from the deployment space UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html?context=cdpaas&locale=enupdate-asset-ui). -* For more information, see [Updating an asset by using the Patch API command](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html?context=cdpaas&locale=enupdate-asset-api). - - - -" -462A5BA596AADF9C38762611CA2578398F234BD4_5,462A5BA596AADF9C38762611CA2578398F234BD4," Updating an asset from the deployment space UI - - - -1. From the Deployments tab of your deployment space, click the action menu for the deployment and choose Edit. -2. Click Replace asset. From the Select an asset dialog box, select the asset that you want to replace the current asset with and click Select asset. -3. Click Save. - - - -Important: Make sure that the new asset is compatible with the deployment. - -![Replacing a deployed asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/deploy-update.png) - -" -462A5BA596AADF9C38762611CA2578398F234BD4_6,462A5BA596AADF9C38762611CA2578398F234BD4," Updating an asset by using the Patch API command - -Use the Watson Machine Learning [API](https://cloud.ibm.com/apidocs/machine-learning)Patch command to update any supported asset. - -Use this method to patch a model for an online deployment. - -curl -X PATCH '/ml/v4/models/?space_id=&project_id=&version=' n--data-raw '[ -{ -""op"": """", -""path"": """", -""value"": """" -}, -{ -""op"": """", -""path"": """", -""value"": """" -} -]' - -For example, patch a model with ID 6f01d512-fe0f-41cd-9a52-1e200c525c84 in space ID f2ddb8ce-7b10-4846-9ab0-62454a449802: - -curl -X PATCH '/ml/v4/models/6f01d512-fe0f-41cd-9a52-1e200c525c84?space_id=f2ddb8ce-7b10-4846-9ab0-62454a449802&project_id=&version=' n--data-raw '[ - -{ -""op"":""replace"", -""path"":""/asset"", -""value"":{ -""id"":""6f01d512-fe0f-41cd-9a52-1e200c525c84"", -""rev"":""1"" -} -} -]' - -A successful output response looks like this: - -{ -""entity"": { -""asset"": { -""href"": ""/v4/models/6f01d512-fe0f-41cd-9a52-1e200c525c84?space_id=f2ddb8ce-7b10-4846-9ab0-62454a449802"", -" -462A5BA596AADF9C38762611CA2578398F234BD4_7,462A5BA596AADF9C38762611CA2578398F234BD4,"""id"": ""6f01d512-fe0f-41cd-9a52-1e200c525c84"" -}, -""custom"": { -}, -""description"": ""Test V4 deployments"", -""name"": ""test_v4_dep_online_space_hardware_spec"", -""online"": { -}, -""space"": { -""href"": ""/v4/spaces/f2ddb8ce-7b10-4846-9ab0-62454a449802"", -""id"": ""f2ddb8ce-7b10-4846-9ab0-62454a449802"" -}, -""space_id"": ""f2ddb8ce-7b10-4846-9ab0-62454a449802"", -""status"": { -""online_url"": { -""url"": ""https://example.com/v4/deployments/349dc1f7-9452-491b-8aa4-0777f784bd83/predictions"" -}, -""state"": ""updating"" -} -}, -""metadata"": { -""created_at"": ""2020-06-08T16:51:08.315Z"", -""description"": ""Test V4 deployments"", -""guid"": ""349dc1f7-9452-491b-8aa4-0777f784bd83"", -""href"": ""/v4/deployments/349dc1f7-9452-491b-8aa4-0777f784bd83"", -""id"": ""349dc1f7-9452-491b-8aa4-0777f784bd83"", -""modified_at"": ""2020-06-08T16:55:28.348Z"", -""name"": ""test_v4_dep_online_space_hardware_spec"", -""parent"": { -""href"": """" -}, -""space_id"": ""f2ddb8ce-7b10-4846-9ab0-62454a449802"" -} -} - -Notes: - - - -* For , use ""add"", ""remove"", or ""replace"". -" -462A5BA596AADF9C38762611CA2578398F234BD4_8,462A5BA596AADF9C38762611CA2578398F234BD4,"* The initial state for the PATCH API output is ""updating"". Keep polling the status until it changes to ""ready"", then retrieve the deployment meta. -* Only the ASSET attribute can be specified for the asset patch. Changing any other attribute results in an error. -* The schema of the current model and the model being patched is compared to the deployed asset. A warning message is returned in the output of the Patch request API if the two don't match. For example, if a mismatch is detected, you can find this information in the output response. - -""status"": { -""message"": { -""text"": ""The input schema of the asset being patched does not match with the currently deployed asset. Please ensure that the score payloads are up to date as per the asset being patched."" -}, -* For more information, see [Updating software specifications by using the API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.htmlupdate-soft-specs-api). - - - -Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) -" -0310B7FB9072E7F7E5D73F5AF90EDE62FAA81286_0,0310B7FB9072E7F7E5D73F5AF90EDE62FAA81286," Managing hardware configurations - -When you deploy certain assets in Watson Machine Learning, you can choose the type, size, and power of the hardware configuration that matches your computing needs. - -" -0310B7FB9072E7F7E5D73F5AF90EDE62FAA81286_1,0310B7FB9072E7F7E5D73F5AF90EDE62FAA81286," Deployment types that require hardware specifications - -Selecting a hardware specification is available for all [batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html) types. For [online deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html), you can select a specific hardware specification if you're deploying: - - - -* Python Functions -* Tensorflow models -* Models with custom software specifications - - - -" -0310B7FB9072E7F7E5D73F5AF90EDE62FAA81286_2,0310B7FB9072E7F7E5D73F5AF90EDE62FAA81286," Hardware configurations available for deploying assets - - - -* XS: 1x4 = 1 vCPU and 4 GB RAM -* S: 2x8 = 2 vCPU and 8 GB RAM -* M: 4x16 = 4 vCPU and 16 GB RAM -* L: 8x32 = 8 vCPU and 32 GB RAM -* XL: 16x64 = 16 vCPU and 64 GB RAM - - - -You can use the XS configuration to deploy: - - - -* Python functions -* Python scripts -* R scripts -* Models based on custom libraries and custom images - - - -For Decision Optimization deployments, you can use these hardware specifications: - - - -* S -* M -* L -* XL - - - -" -0310B7FB9072E7F7E5D73F5AF90EDE62FAA81286_3,0310B7FB9072E7F7E5D73F5AF90EDE62FAA81286," Learn more - - - -* [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) - - - -Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) -" -19BA0BFC40B6212B42F38487F1533BB65647850E_0,19BA0BFC40B6212B42F38487F1533BB65647850E," Importing models to a deployment space - -Import machine learning models trained outside of IBM Watson Machine Learning so that you can deploy and test the models. Review the model frameworks that are available for importing models. - -Here, to import a trained model means: - - - -1. Store the trained model in your Watson Machine Learning repository -2. Optional: Deploy the stored model in your Watson Machine Learning service - - - -and repository means a Cloud Object Storage bucket. For more information, see [Creating deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-create.html). - -You can import a model in these ways: - - - -* [Directly through the UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enui-import) -* [By using a path to a file](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enpath-file-import) -* [By using a path to a directory](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enpath-dir-import) -* [Import a model object](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enobject-import) - - - -For more information, see [Importing models by ML framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=ensupported-formats). - -" -19BA0BFC40B6212B42F38487F1533BB65647850E_1,19BA0BFC40B6212B42F38487F1533BB65647850E,"For more information, see [Things to consider when you import models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enmodel-import-considerations). - -For an example of how to add a model programmatically by using the Python client, refer to this notebook: - - - -* [Use PMML to predict iris species.](https://github.com/IBM/watson-machine-learning-samples/blob/df8e5122a521638cb37245254fe35d3a18cd3f59/cloud/notebooks/python_sdk/deployments/pmml/Use%20PMML%20to%20predict%20iris%20species.ipynb) - - - -For an example of how to add a model programmatically by using the REST API, refer to this notebook: - - - -* [Use scikit-learn to predict diabetes progression](https://github.com/IBM/watson-machine-learning-samples/blob/be84bcd25d17211f41fb34ec262b418f6cd6c87b/cloud/notebooks/rest_api/curl/deployments/scikit/Use%20scikit-learn%20to%20predict%20diabetes%20progression.ipynb) - - - -" -19BA0BFC40B6212B42F38487F1533BB65647850E_2,19BA0BFC40B6212B42F38487F1533BB65647850E," Available ways to import models, per framework type - -This table lists the available ways to import models to Watson Machine Learning, per framework type. - - - -Import options for models, per framework type - - Import option Spark MLlib Scikit-learn XGBoost TensorFlow PyTorch - - [Importing a model object](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enobject-import) ✓ ✓ ✓ - [Importing a model by using a path to a file](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enpath-file-import) ✓ ✓ ✓ ✓ - [Importing a model by using a path to a directory](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enpath-dir-import) ✓ ✓ ✓ ✓ - - - -" -19BA0BFC40B6212B42F38487F1533BB65647850E_3,19BA0BFC40B6212B42F38487F1533BB65647850E," Adding a model by using UI - -Note:If you want to import a model in the PMML format, you can directly import the model .xml file. - -To import a model by using UI: - - - -1. From the Assets tab of your space in Watson Machine Learning, click Import assets. -2. Select Local file and then select Model. -3. Select the model file that you want to import and click Import. - - - -The importing mechanism automatically selects a matching model type and software specification based on the version string in the .xml file. - -" -19BA0BFC40B6212B42F38487F1533BB65647850E_4,19BA0BFC40B6212B42F38487F1533BB65647850E," Importing a model object - -Note:This import method is supported by a limited number of ML frameworks. For more information, see [Available ways to import models, per framework type](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=ensupported-formats). - -To import a model object: - - - -1. If your model is located in a remote location, follow [Downloading a model that is stored in a remote location](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enmodel-download). -2. Store the model object in your Watson Machine Learning repository. For more information, see [Storing model in Watson Machine Learning repository](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enstore-in-repo). - - - -" -19BA0BFC40B6212B42F38487F1533BB65647850E_5,19BA0BFC40B6212B42F38487F1533BB65647850E," Importing a model by using a path to a file - -Note:This import method is supported by a limited number of ML frameworks. For more information, see [Available ways to import models, per framework type](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=ensupported-formats). - -To import a model by using a path to a file: - - - -1. If your model is located in a remote location, follow [Downloading a model that is stored in a remote location](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enmodel-download) to download it. -2. If your model is located locally, place it in a specific directory: - -!cp -!cd -3. For Scikit-learn, XGBoost, Tensorflow, and PyTorch models, if the downloaded file is not a .tar.gz archive, make an archive: - -!tar -zcvf .tar.gz - -The model file must be at the top-level folder of the directory, for example: - -assets/ - -variables/ -variables/variables.data-00000-of-00001 -variables/variables.index -4. Use the path to the saved file to store the model file in your Watson Machine Learning repository. For more information, see [Storing model in Watson Machine Learning repository](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enstore-in-repo). - - - -" -19BA0BFC40B6212B42F38487F1533BB65647850E_6,19BA0BFC40B6212B42F38487F1533BB65647850E," Importing a model by using a path to a directory - -Note:This import method is supported by a limited number of ML frameworks. For more information, see [Available ways to import models, per framework type](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=ensupported-formats). - -To import a model by using a path to a directory: - - - -1. If your model is located in a remote location, refer to [Downloading a model stored in a remote location](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enmodel-download). -2. If your model is located locally, place it in a specific directory: - -!cp -!cd - -For scikit-learn, XGBoost, Tensorflow, and PyTorch models, the model file must be at the top-level folder of the directory, for example: - -assets/ - -variables/ -variables/variables.data-00000-of-00001 -variables/variables.index -3. Use the directory path to store the model file in your Watson Machine Learning repository. For more information, see [Storing model in Watson Machine Learning repository](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enstore-in-repo). - - - -" -19BA0BFC40B6212B42F38487F1533BB65647850E_7,19BA0BFC40B6212B42F38487F1533BB65647850E," Downloading a model stored in a remote location - -Follow this sample code to download your model from a remote location: - -import os -from wget import download - -target_dir = '' -if not os.path.isdir(target_dir): -os.mkdir(target_dir) -filename = os.path.join(target_dir, '') -if not os.path.isfile(filename): -filename = download('', out = target_dir) - -" -19BA0BFC40B6212B42F38487F1533BB65647850E_8,19BA0BFC40B6212B42F38487F1533BB65647850E," Things to consider when you import models - -To learn more about importing a specific model type, see: - - - -* [Models saved in PMML format](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enpmml-import) -* [Spark MLlib models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enspark-ml-lib-import) -* [Scikit-learn models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enscikit-learn-import) -* [XGBoost models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enxgboost-import) -* [TensorFlow models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=entf-import) -* [PyTorch models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enpt-import) - - - -To learn more about frameworks that you can use with Watson Machine Learning, see [Supported frameworks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html). - -" -19BA0BFC40B6212B42F38487F1533BB65647850E_9,19BA0BFC40B6212B42F38487F1533BB65647850E," Models saved in PMML format - - - -* The only available deployment type for models that are imported from PMML is online deployment. -* The PMML file must have the .xml file extension. -* PMML models cannot be used in an SPSS stream flow. -* The PMML file must not contain a prolog. Depending on the library that you are using when you save your model, a prolog might be added to the beginning of the file by default. For example, if your file contains a prolog string such as spark-mllib-lr-model-pmml.xml, remove the string before you import the PMML file to the deployment space. - - - -Depending on the library that you are using when you save your model, a prolog might be added to the beginning of the file by default, like in this example: - -:::::::::::::: -spark-mllib-lr-model-pmml.xml -:::::::::::::: - -You must remove that prolog before you can import the PMML file to Watson Machine Learning. - -" -19BA0BFC40B6212B42F38487F1533BB65647850E_10,19BA0BFC40B6212B42F38487F1533BB65647850E," Spark MLlib models - - - -* Only classification and regression models are available. -* Custom transformers, user-defined functions, and classes are not available. - - - -" -19BA0BFC40B6212B42F38487F1533BB65647850E_11,19BA0BFC40B6212B42F38487F1533BB65647850E," Scikit-learn models - - - -* .pkl and .pickle are the available import formats. -* To serialize or pickle the model, use the joblib package. -* Only classification and regression models are available. -* Pandas Dataframe input type for predict() API is not available. -* The only available deployment type for scikit-learn models is online deployment. - - - -" -19BA0BFC40B6212B42F38487F1533BB65647850E_12,19BA0BFC40B6212B42F38487F1533BB65647850E," XGBoost models - - - -* .pkl and .pickle are the available import formats. -* To serialize or pickle the model, use the joblib package. -* Only classification and regression models are available. -* Pandas Dataframe input type for predict() API is not available. -* The only available deployment type for XGBoost models is online deployment. - - - -" -19BA0BFC40B6212B42F38487F1533BB65647850E_13,19BA0BFC40B6212B42F38487F1533BB65647850E," TensorFlow models - - - -* .pb, .h5, and .hdf5 are the available import formats. -* To save or serialize a TensorFlow model, use the tf.saved_model.save() method. -* tf.estimator is not available. -* The only available deployment types for TensorFlow models are: online deployment and batch deployment. - - - -" -19BA0BFC40B6212B42F38487F1533BB65647850E_14,19BA0BFC40B6212B42F38487F1533BB65647850E," PyTorch models - - - -* The only available deployment type for PyTorch models is online deployment. -* For a Pytorch model to be importable to Watson Machine Learning, it must be previously exported to .onnx format. Refer to this code. - -torch.onnx.export(, , "".onnx"", verbose=True, input_names=, output_names=) - - - -" -19BA0BFC40B6212B42F38487F1533BB65647850E_15,19BA0BFC40B6212B42F38487F1533BB65647850E," Storing a model in your Watson Machine Learning repository - -Use this code to store your model in your Watson Machine Learning repository: - -from ibm_watson_machine_learning import APIClient - -client = APIClient() -sw_spec_uid = client.software_specifications.get_uid_by_name("""") - -meta_props = { -client.repository.ModelMetaNames.NAME: """", -client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_uid, -client.repository.ModelMetaNames.TYPE: """"} - -client.repository.store_model(model=, meta_props=meta_props) - -Notes: - - - -* Depending on the model framework used, can be the actual model object, a full path to a saved model file, or a path to a directory where the model file is located. For more information, see [Available ways to import models, per framework type](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=ensupported-formats). -* For a list of available software specifications to use as , use the client.software_specifications.list() method. -* For a list of available model types to use as model_type, refer to [Software specifications and hardware specifications for deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html). -* When you export a Pytorch model to the .onnx format, specify the keep_initializers_as_inputs=True flag and set opset_version to 9 (Watson Machine Learning deployments use the caffe2 ONNX runtime that doesn't support opset versions higher than 9). - -" -19BA0BFC40B6212B42F38487F1533BB65647850E_16,19BA0BFC40B6212B42F38487F1533BB65647850E,"torch.onnx.export(net, x, 'lin_reg1.onnx', verbose=True, keep_initializers_as_inputs=True, opset_version=9) -* To learn more about how to create the dictionary, refer to [Watson Machine Learning authentication](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html). - - - -Parent topic:[Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html) -" -E008266C010ADFEF841C513AE7BCB91436F9AE9C_0,E008266C010ADFEF841C513AE7BCB91436F9AE9C," Frameworks and software specifications in Watson Machine Learning - -You can use popular tools, libraries, and frameworks to train and deploy your machine learning models and functions. - -" -E008266C010ADFEF841C513AE7BCB91436F9AE9C_1,E008266C010ADFEF841C513AE7BCB91436F9AE9C," Overview of software specifications - -Software specifications define the programming language and version that you use for a building a model or a function. You can use software specifications to configure the software that is used for running your models and functions. You can also define the software version to be used and include your own extensions. For example, you can use conda .yml files or custom libraries. - -" -E008266C010ADFEF841C513AE7BCB91436F9AE9C_2,E008266C010ADFEF841C513AE7BCB91436F9AE9C," Supported frameworks and software specifications - -You can use predefined tools, libraries, and frameworks to train and deploy your machine learning models and functions. Examples of supported frameworks include Scikit-learn, Tensorflow, and more. - -For more information, see [Supported deployment frameworks and software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html). - -![Frameworks and software specifications for model delpoyments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/frameworks-software-specs.png) - -" -E008266C010ADFEF841C513AE7BCB91436F9AE9C_3,E008266C010ADFEF841C513AE7BCB91436F9AE9C," Managing outdated frameworks and software specifications - -Update software specifications and frameworks in your models when they become outdated. Sometimes, you can seamlessly update your assets. In other cases, you must retrain or redeploy your assets. - -For more information, see [Managing outdated software specifications or frameworks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html). - -Parent topic:[Deploying assets with Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-overview.html) -" -29A9834843B2D6E7417C09A5385B83BCB13D814C_0,29A9834843B2D6E7417C09A5385B83BCB13D814C," Managing outdated software specifications or frameworks - -Use these guidelines when you are updating assets that refer to outdated software specifications or frameworks. - -In some cases, asset update is seamless. In other cases, you must retrain or redeploy the assets. For general guidelines, refer to [Migrating assets that refer to discontinued software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=endiscont-soft-spec) or [Migrating assets that refer to discontinued framework versions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=endiscont-framewrk). - -For more information, see the following sections: - - - -* [Updating software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupdate-soft-specs) -* [Updating a machine learning model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupgrade-model) -* [Updating a Python function](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupgr-function) -* [Retraining an SPSS Modeler flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enretrain-spss) - - - -" -29A9834843B2D6E7417C09A5385B83BCB13D814C_1,29A9834843B2D6E7417C09A5385B83BCB13D814C," Managing assets that refer to discontinued software specifications - - - -* During migration, assets that refer to the discontinued software specification are mapped to a comparable-supported default software specification (only in cases where the model type is still supported). -* When you create new deployments of the migrated assets, the updated software specification in the asset metadata is used. -* Existing deployments of the migrated assets are updated to use the new software specification. If deployment or scoring fails due to framework or library version incompatibilities, follow the instructions in [Updating software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupdate-soft-specs). If the problem persists, follow the steps that are listed in [Updating a machine learning model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupgrade-model). - - - -" -29A9834843B2D6E7417C09A5385B83BCB13D814C_2,29A9834843B2D6E7417C09A5385B83BCB13D814C," Migrating assets that refer to discontinued framework versions - - - -* During migration, model types are not be updated. You must manually update this data. For more information, see [Updating a machine learning model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupgrade-model). -* After migration, the existing deployments are removed and new deployments for the deprecated framework are not allowed. - - - -" -29A9834843B2D6E7417C09A5385B83BCB13D814C_3,29A9834843B2D6E7417C09A5385B83BCB13D814C," Updating software specifications - -You can update software specifications from the UI or by using the API. For more information, see the following sections: - - - -* [Updating software specifications from the UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupdate-soft-specs-ui) -* [Updating software specifications by using the API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupdate-soft-specs-api) - - - -" -29A9834843B2D6E7417C09A5385B83BCB13D814C_4,29A9834843B2D6E7417C09A5385B83BCB13D814C," Updating software specifications from the UI - - - -1. From the deployment space, click the model (make sure it does not have any active deployments.) -2. Click the i symbol to check model details. -3. Use the dropdown list to update the software specification. - - - -Refer to the example image: - -![Updating software specifications through the UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/update-software-spec-via-ui.png) - -" -29A9834843B2D6E7417C09A5385B83BCB13D814C_5,29A9834843B2D6E7417C09A5385B83BCB13D814C," Updating software specifications by using the API - -You can update a software specification by using the API Patch command: - -For software_spec field, type /software_spec. For value field, use either the ID or the name of the new software specification. - -Refer to this example: - -curl -X PATCH '/ml/v4/models/6f01d512-fe0f-41cd-9a52-1e200c525c84?space_id=f2ddb8ce-7b10-4846-9ab0-62454a449802&project_id=&version=' n--data-raw '[ -{ -""op"":""replace"", -""path"":""/software_spec"", -""value"":{ -""id"":""6f01d512-fe0f-41cd-9a52-1e200c525c84"" // or ""name"":""tensorflow_rt22.1-py3.9"" -} -} -]' - -For more information, see [Updating an asset by using the Patch API command](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.htmlupdate-asset-api). - -" -29A9834843B2D6E7417C09A5385B83BCB13D814C_6,29A9834843B2D6E7417C09A5385B83BCB13D814C," Updating a machine learning model - -Follow these steps to update a model built with a deprecated framework. - -" -29A9834843B2D6E7417C09A5385B83BCB13D814C_7,29A9834843B2D6E7417C09A5385B83BCB13D814C," Option 1: Save the model with a compatible framework - - - -1. Download the model by using either the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) or the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/). - -The following example shows how to download your model: - -client.repository.download(, filename=""xyz.tar.gz"") -2. Edit model metadata with the model type and version that is supported in the current release. For more information, see [Software specifications and hardware specifications for deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html). - -The following example shows how to edit model metadata: - -model_metadata = { -client.repository.ModelMetaNames.NAME: ""example model"", -client.repository.ModelMetaNames.DESCRIPTION: ""example description"", -client.repository.ModelMetaNames.TYPE: """", -client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: -client.software_specifications.get_uid_by_name("""") -} -3. Save the model to the Watson Machine Learning repository. The following example shows how to save the model to the repository: - -model_details = client.repository.store_model(model=""xyz.tar.gz"", meta_props=model_metadata) -4. Deploy the model. -5. Score the model to generate predictions. - - - -If deployment or scoring fails, the model is not compatible with the new version that was used for saving the model. In this case, use [Option 2](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enretrain-option2). - -" -29A9834843B2D6E7417C09A5385B83BCB13D814C_8,29A9834843B2D6E7417C09A5385B83BCB13D814C," Option 2: Retrain the model with a compatible framework - - - -1. Retrain the model with a model type and version that is supported in the current version. -2. Save the model with the supported model type and version. -3. Deploy and score the model. - - - -It is also possible to update a model by using the API. For more information, see [Updating an asset by using the Patch API command](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.htmlupdate-asset-api). - -" -29A9834843B2D6E7417C09A5385B83BCB13D814C_9,29A9834843B2D6E7417C09A5385B83BCB13D814C," Updating a Python function - -Follow these steps to update a Python function built with a deprecated framework. - -" -29A9834843B2D6E7417C09A5385B83BCB13D814C_10,29A9834843B2D6E7417C09A5385B83BCB13D814C," Option 1: Save the Python function with a compatible runtime or software specification - - - -1. Download the Python function by using either the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) or the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/). -2. Save the Python function with a supported runtime or software specification version. For more information, see [Software specifications and hardware specifications for deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html). -3. Deploy the Python function. -4. Score the Python function to generate predictions. - - - -If your Python function fails during scoring, the function is not compatible with the new runtime or software specification version that was used for saving the Python function. In this case, use [Option 2](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enmodify-option2). - -" -29A9834843B2D6E7417C09A5385B83BCB13D814C_11,29A9834843B2D6E7417C09A5385B83BCB13D814C," Option 2: Modify the function code and save it with a compatible runtime or software specification - - - -1. Modify the Python function code to make it compatible with the new runtime or software specification version. In some cases, you must update dependent libraries that are installed within the Python function code. -2. Save the Python function with the new runtime or software specification version. -3. Deploy and score the Python function. - - - -It is also possible to update a function by using the API. For more information, see [Updating an asset by using the Patch API command](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.htmlupdate-asset-api). - -" -29A9834843B2D6E7417C09A5385B83BCB13D814C_12,29A9834843B2D6E7417C09A5385B83BCB13D814C," Retraining an SPSS Modeler flow - -Some models that were built with SPSS Modeler in IBM Watson Studio Cloud before 1 September 2020 can no longer be deployed by using Watson Machine Learning. This problem is caused by an upgrade of the Python version in supported SPSS Modeler runtimes. If you're using one of the following six nodes in your SPSS Modeler flow, you must rebuild and redeploy your models with SPSS Modeler and Watson Machine Learning: - - - -* XGBoost Tree -* XGBoost Linear -* One-Class SVM -* HDBSCAN -* KDE Modeling -* Gaussian Mixture - - - -To retrain your SPSS Modeler flow, follow these steps: - - - -* If you're using the Watson Studio user interface, open the SPSS Modeler flow in Watson Studio, retrain, and save the model to Watson Machine Learning. After you save the model to the project, you can promote it to a deployment space and create a new deployment. -* If you're using [REST API](https://cloud.ibm.com/apidocs/machine-learning) or [Python client](https://ibm.github.io/watson-machine-learning-sdk/), retrain the model by using SPSS Modeler and save the model to the Watson Machine Learning repository with the model type spss-modeler-18.2. - - - -Parent topic:[Frameworks and software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-frame-and-specs.html) -" -6F51A9033343574AEE2D292CB23F09D542456389,6F51A9033343574AEE2D292CB23F09D542456389," Enabling model tracking with AI factsheets - -If your organization is using AI Factsheets as part of an AI governance strategy, you can track models after adding them to a space. - -Tracking a model populates a factsheet in an associated model use case. The model use cases are maintained in a model inventory in a catalog, providing a way for all stakeholders to view the lifecyle details for a machine learning model. From the inventory, collaborators can view the details for a model as it moves through the model lifecycle, including the request, development, deployment, and evaluation of the model. - -To enable model tracking by using AI Factsheets: - - - -1. From the asset list in your space, click a model name and then click the Model details tab. -2. Click Track this model. -3. Associate the model with an existing model use case in the inventory or create a new use case. -4. Specify the details for the new use case, including specifying a catalog if you have access to more than one, and save to register the model. A link to the model inventory is added to the model details page. -" -035EF4A1D7C465E8A72ACC1C5C98198B4E95068B_0,035EF4A1D7C465E8A72ACC1C5C98198B4E95068B," Adding conditions to the pipeline - -Add conditions to a pipeline to handle various scenarios. - -" -035EF4A1D7C465E8A72ACC1C5C98198B4E95068B_1,035EF4A1D7C465E8A72ACC1C5C98198B4E95068B," Configuring conditions for the pipeline - -As you create a pipeline, you can specify conditions that must be met before you run the pipeline. For example, you can set a condition that the output from a node must satisfy a particular condition before you proceed with the pipeline execution. - -To define a condition: - - - -1. Hover over the link between two nodes. -2. Click Add condition. -3. Choose the type of condition: - - - -* [Condition Response](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-conditions.html?context=cdpaas&locale=ennode) checks a condition on the status of the previous node. -* [Simple condition](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-conditions.html?context=cdpaas&locale=ensimple) is a no-code condition in the form of an if-then statement. -* [Advanced condition](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-conditions.html?context=cdpaas&locale=enadvanced) Advanced condition uses expression code, providing the most features and flexibility. - - - -4. Define and save your expression. - - - -![Defining a condition](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipelines_adding_condition.gif) - -When you define your expression, a summary captures the condition and the expected result. For example: - -If Run AutoAI is Successful, then Create deployment node. - -When you return to the flow, you see an indicator that you defined a condition. Hover over the icon to edit or delete the condition. - -![Viewing a successful condition](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/oflow-condition1.png) - -" -035EF4A1D7C465E8A72ACC1C5C98198B4E95068B_2,035EF4A1D7C465E8A72ACC1C5C98198B4E95068B," Configuring a condition based on node status - -If you select Condition Response as your condition type, the previous node status must satisfy at least one of these conditions to continue with the flow: - - - -* Completed - the node activity is completed without error. -* Completed with warnings - the node activity is completed but with warnings. -* Completed with errors - the node activity is completed, but with errors. -* Failed - the node activity failed to complete. -* Cancelled - the previous action or activity was canceled. - - - -" -035EF4A1D7C465E8A72ACC1C5C98198B4E95068B_3,035EF4A1D7C465E8A72ACC1C5C98198B4E95068B," Configuring a simple condition - -To configure a simple condition, choose the condition that must be satisfied to continue with the flow. - - - -1. Optional: edit the default name. -2. Depending on the node, choose a variable from the drop-down options. For example, if you are creating a condition based on a Run AutoAI node, you can choose Model metric as the variable to base your condition on. -3. Based on the variable, choose an operator from: Equal to, Not equal to, Greater than, Less than, Greater than or equal to, Less than or equal to. -4. Specify the required value. For example, if you are basing a condition on an AutoAI metric, specify a list of values that consists of the available metrics. -5. Optional: click the plus icon to add an And (all conditions must be met) or an Or (either condition must be met) to the expression to build a compound conditional statement. -6. Review the summary and save the condition. - - - -" -035EF4A1D7C465E8A72ACC1C5C98198B4E95068B_4,035EF4A1D7C465E8A72ACC1C5C98198B4E95068B," Configuring an advanced condition - -Use coding constructs to build a more complex condition. The next node runs when the condition is met. You build the advanced condition by using the expression builder. - - - -1. Optional: edit the default name. -2. Add items from the Expression elements panel to the Expression canvas to build your condition. You can also type your conditions and the elements autocomplete. -3. When your expression is complete, review the summary and save the condition. - - - -" -035EF4A1D7C465E8A72ACC1C5C98198B4E95068B_5,035EF4A1D7C465E8A72ACC1C5C98198B4E95068B," Learn more - -For more information on using the code editor to build an expression, see: - - - -* [Functions used in pipelines Expression Builder](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-expr-builder.html) - - - -Parent topic:[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html) -" -8CF8260D0474AD73D9878CCD361C83102B724733_0,8CF8260D0474AD73D9878CCD361C83102B724733," Configuring pipeline nodes - -Configure the nodes of your pipeline to specify inputs and to create outputs as part of your pipeline. - -" -8CF8260D0474AD73D9878CCD361C83102B724733_1,8CF8260D0474AD73D9878CCD361C83102B724733," Specifying the workspace scope - -By default, the scope for a pipeline is the project that contains the pipeline. You can explicitly specify a scope other than the default, to locate an asset used in the pipeline. The scope is the project, catalog, or space that contains the asset. From the user interface, you can browse for the scope. - -" -8CF8260D0474AD73D9878CCD361C83102B724733_2,8CF8260D0474AD73D9878CCD361C83102B724733," Changing the input mode - -When you are configuring a node, you can specify any resources that include data and notebooks in various ways. Such as directly entering a name or ID, browsing for an asset, or by using the output from a prior node in the pipeline to populate a field. To see what options are available for a field, click the input icon for the field. Depending on the context, options can include: - - - -* Select resource: use the asset browser to find an asset such as a data file. -* Assign pipeline parameter: assign a value by using a variable configured with a pipeline parameter. For more information, see [Configuring global objects](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-flow-param.html). -* Select from another node: use the output from a node earlier in the pipeline as the value for this field. -* Enter the expression: enter code to assign values or identify resources. For more information, see [Coding elements](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-expr-builder.html). - - - -" -8CF8260D0474AD73D9878CCD361C83102B724733_3,8CF8260D0474AD73D9878CCD361C83102B724733," Pipeline nodes and parameters - -Configure the following types of pipeline nodes: - -" -8CF8260D0474AD73D9878CCD361C83102B724733_4,8CF8260D0474AD73D9878CCD361C83102B724733," Copy nodes - -Use Copy nodes to add assets to your pipeline or to export pipeline assets. - - - -* Copy assets - -Copy selected assets from a project or space to a nonempty space. You can copy these assets to a space: - AutoAI experiment - Code package job - Connection - Data Refinery flow - Data Refinery job - Data asset - Deployment job - Environment - Function - Job - Model - Notebook - Notebook job - Pipelines job - Script - Script job - SPSS Modeler job #### Input parameters |Parameter|Description| |---|---| |Source assets |Browse or search for the source asset to add to the list. You can also specify an asset with a pipeline parameter, with the output of another node, or by entering the asset ID| |Target|Browse or search for the target space| |Copy mode|Choose how to handle a case where the flow tries to copy an asset and one of the same name exists. One of: ignore, fail, overwrite| #### Output parameters |Parameter|Description| |---|---| |Output assets |List of copied assets| - - - - - -* Export assets - -" -8CF8260D0474AD73D9878CCD361C83102B724733_5,8CF8260D0474AD73D9878CCD361C83102B724733,"Export selected assets from the scope, for example, a project or deployment space. The operation exports all the assets by default. You can limit asset selection by building a list of resources to export. #### Input parameters |Parameter|Description| |---|---| |Assets |Choose Scope to export all exportable items or choose List to create a list of specific items to export| |Source project or space |Name of project or space that contains the assets to export| |Exported file |File location for storing the export file| |Creation mode (optional)|Choose how to handle a case where the flow tries to create an asset and one of the same name exists. One of: ignore, fail, overwrite| #### Output parameters |Parameter|Description| |---|---| |Exported file|Path to exported file| Notes: - If you export a project that contains a notebook, the latest version of the notebook is included in the export file. If the Pipeline with the Run notebook job node was configured to use a different notebook version other than the latest version, the exported Pipeline is automatically reconfigured to use the latest version when imported. This might produce unexpected results or require some reconfiguration after the import. - If assets are self-contained in the exported project, they are retained when you import a new project. Otherwise, some configuration might be required following an import of exported assets. - - - - - -* Import assets - -" -8CF8260D0474AD73D9878CCD361C83102B724733_6,8CF8260D0474AD73D9878CCD361C83102B724733,"Import assets from a ZIP file that contains exported assets. #### Input parameters |Parameter|Description| |---|---| |Path to import target |Browse or search for the assets to import| |Archive file to import |Specify the path to a ZIP file or archive| Notes: After you import a file, paths and references to the imported assets are updated, following these rules: - References to assets from the exported project or space are updated in the new project or space after the import. - If assets from the exported project refer to external assets (included in a different project), the reference to the external asset will persist after the import. - If the external asset no longer exists, the parameter is replaced with an empty value and you must reconfigure the field to point to a valid asset. - - - -" -8CF8260D0474AD73D9878CCD361C83102B724733_7,8CF8260D0474AD73D9878CCD361C83102B724733," Create nodes - -Configure the nodes for creating assets in your pipeline. - - - -* Create AutoAI experiment - -Use this node to train an [AutoAI classification or regression experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) and generate model-candidate pipelines. #### Input parameters |Parameter|Description| |---|---| |AutoAI experiment name|Name of the new experiment| |Scope|A project or a space, where the experiment is going to be created| |Prediction type|The type of model for the following data: binary, classification, or regression| |Prediction column (label)|The prediction column name| |Positive class (optional)|Specify a positive class for a binary classification experiment| |Training data split ratio (optional)|The percentage of data to hold back from training and use to test the pipelines(float: 0.0 - 1.0)| |Algorithms to include (optional)|Limit the list of estimators to be used (the list depends on the learning type)| |Algorithms to use|Specify the list of estimators to be used (the list depends on the learning type)| |Optimize metric (optional)| The metric used for model ranking| |Hardware specification (optional)|Specify a hardware specification for the experiment| |AutoAI experiment description|Description of the experiment| |AutoAI experiment tags (optional)|Tags to identify the experiment| |Creation mode (optional)|Choose how to handle a case where the pipeline tries to create an experiment and one of the same name exists. One of: ignore, fail, overwrite| #### Output parameters |Parameter|Description| |---|---| |AutoAI experiment|Path to the saved model| - - - - - -* Create AutoAI time series experiment - -" -8CF8260D0474AD73D9878CCD361C83102B724733_8,8CF8260D0474AD73D9878CCD361C83102B724733,"Use this node to train an [AutoAI time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) and generate model-candidate pipelines. #### Input parameters |Parameter|Description| |---|---| |AutoAI time series experiment name|Name of the new experiment| |Scope|A project or a space, where the pipeline is going to be created| |Prediction columns (label)|The name of one or more prediction columns| |Date/time column (optional)|Name of date/time column| |Leverage future values of supporting features|Choose ""True"" to enable the consideration for supporting (exogenous) features to improve the prediction. For example, include a temperature feature for predicting ice cream sales.| |Supporting features (optional)|Choose supporting features and add to list| |Imputation method (optional)|Choose a technique for imputing missing values in a data set| |Imputation threshold (optional)|Specify an higher threshold for percentage of missing values to supply with the specified imputation method. If the threshold is exceeded, the experiment fails. For example, if you specify that 10% of values can be imputed, and the data set is missing 15% of values, the experiment fails.| |Fill type|Specify how the specified imputation method fill null values. Choose to supply a mean of all values, and median of all values, or specify a fill value.| |Fill value (optional)|If you selected to sepcify a value for replacing null values, enter the value in this field.| |Final training data set|Choose whether to train final pipelines with just the training data or with training data and holdout data. " -8CF8260D0474AD73D9878CCD361C83102B724733_9,8CF8260D0474AD73D9878CCD361C83102B724733,"If you choose training data, the generated notebook includes a cell for retrieving holdout data| |Holdout size (optional)|If you are splitting training data into training and holdout data, specify a percentage of the training data to reserve as holdout data for validating the pipelines. Holdout data does not exceed a third of the data.| |Number of backtests (optional)|Customize the backtests to cross-validate your time series experiment| |Gap length (optional)|Adjust the number of time points between the training data set and validation data set for each backtest. When the parameter value is non-zero, the time series values in the gap is not used to train the experiment or evaluate the current backtest.| |Lookback window (optional)|A parameter that indicates how many previous time series values are used to predict the current time point.| |Forecast window (optional)|The range that you want to predict based on the data in the lookback window.| |Algorithms to include (optional)|Limit the list of estimators to be used (the list depends on the learning type)| |Pipelines to complete|Optionally adjust the number of pipelines to create. More pipelines increase training time and resources.| |Hardware specification (optional)|Specify a hardware specification for the experiment| |AutoAI time series experiment description (optional)|Description of the experiment| |AutoAI experiment tags (optional)|Tags to identify the experiment| |Creation mode (optional)|Choose how to handle a case where the pipeline tries to create an experiment and one of the same name exists. One of: ignore, fail, overwrite| #### Output parameters |Parameter|Description| |---|---| |AutoAI time series experiment|Path to the saved model| -* Create batch deployment - -" -8CF8260D0474AD73D9878CCD361C83102B724733_10,8CF8260D0474AD73D9878CCD361C83102B724733,"Use this node to create a batch deployment for a machine learning model. #### Input parameters |Parameter|Description| |---|---| |ML asset|Name or ID of the machine learning asset to deploy| |New deployment name (optional)|Name of the new job, with optional description and tags| |Creation mode (optional)|How to handle a case where the pipeline tries to create a job and one of the same name exists. One of: ignore, fail, overwrite| |New deployment description (optional)| Description of the deployment| |New deployment tags (optional)| Tags to identify the deployment| |Hardware specification (optional)|Specify a hardware specification for the job| #### Output parameters |Parameter|Description| |---|---| |New deployment| Path of the newly created deployment| - - - - - -* Create data asset - -Use this node to create a data asset. #### Input parameters |Parameter|Description| |---|---| |File |Path to file in a file storage| |Target scope| Path to the target space or project| |Name (optional)|Name of the data source with optional description, country of origin, and tags| |Description (optional)| Description for the asset| |Origin country (optional)|Origin country for data regulations| |Tags (optional)| Tags to identify assets| |Creation mode|How to handle a case where the pipeline tries to create a job and one of the same name exists. One of: ignore, fail, overwrite| #### Output parameters |Parameter|Description| |---|---| |Data asset|The newly created data asset| - - - - - -* Create deployment space - -" -8CF8260D0474AD73D9878CCD361C83102B724733_11,8CF8260D0474AD73D9878CCD361C83102B724733,"Use this node to create and configure a space that you can use to organize and create deployments. #### Input parameters |Parameter|Description| |---|---| |New space name|Name of the new space with optional description and tags| |New space tags (optional)| Tags to identify the space| |New space COS instance CRN |CRN of the COS service instance| |New space WML instance CRN (optional)|CRN of the Watson Machine Learning service instance| |Creation mode (optional)|How to handle a case where the pipeline tries to create a space and one of the same name exists. One of: ignore, fail, overwrite| |Space description (optional)|Description of the space| #### Output parameters |Parameter|Description| |---|---| |Space|Path of the newly created space| - - - - - -* Create online deployment - -Use this node to create an online deployment where you can submit test data directly to a web service REST API endpoint. #### Input parameters |Parameter|Description| |---|---| |ML asset|Name or ID of the machine learning asset to deploy| |New deployment name (optional)|Name of the new job, with optional description and tags| |Creation mode (optional)|How to handle a case where the pipeline tries to create a job and one of the same name exists. One of: ignore, fail, overwrite| |New deployment description (optional)| Description of the deployment| |New deployment tags (optional)| Tags to identify the deployment| |Hardware specification (optional)|Specify a hardware specification for the job| #### Output parameters |Parameter|Description| |---|---| |New deployment| Path of the newly created deployment| - - - -" -8CF8260D0474AD73D9878CCD361C83102B724733_12,8CF8260D0474AD73D9878CCD361C83102B724733," Wait - -Use nodes to pause a pipeline until an asset is available in the location that is specified in the path. - - - -* Wait for all results - -Use this node to wait until all results from the previous nodes in the pipeline are available so the pipeline can continue. This node takes no inputs and produces no output. When the results are all available, the pipeline continues automatically. - - - - - -* Wait for any result - -Use this node to wait until any result from the previous nodes in the pipeline is available so the pipeline can continue. Run the downstream nodes as soon as any of the upstream conditions are met. This node takes no inputs and produces no output. When any results are available, the pipeline continues automatically. - - - - - -* Wait for file - -" -8CF8260D0474AD73D9878CCD361C83102B724733_13,8CF8260D0474AD73D9878CCD361C83102B724733,"Wait for an asset to be created or updated in the location that is specified in the path from a job or process earlier in the pipeline. Specify a timeout length to wait for the condition to be met. If 00:00:00 is the specified timeout length, the flow waits indefinitely. #### Input parameters |Parameter|Description| |---|---| |File location|Specify the location in the asset browser where the asset resides. Use the format data_asset/filename where the path is relative to the root. The file must exist and be in the location you specify or the node fails with an error. | |Wait mode| By default the mode is for the file to appear. You can change to waiting for the file to disappear| |Timeout length (optional)|Specify the length of time to wait before you proceed with the pipeline. Use the format hh:mm:ss| |Error policy (optional)| See [Handling errors](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-errors.html)| #### Output parameters |Parameter|Description| |---|---| |Return value|Return value from the node| |Execution status| Returns a value of: Completed, Completed with warnings, Completed with errors, Failed, or Canceled| |Status message| Message associated with the status| - - - -" -8CF8260D0474AD73D9878CCD361C83102B724733_14,8CF8260D0474AD73D9878CCD361C83102B724733," Control nodes - -Control the pipeline by adding error handling and logic. - - - -* Loops - -" -8CF8260D0474AD73D9878CCD361C83102B724733_15,8CF8260D0474AD73D9878CCD361C83102B724733,"Loops are a node in a Pipeline that operates like a coded loop. The two types of loops are parallel and sequential. You can use loops when the number of iterations for an operation is dynamic. For example, if you don't know the number of notebooks to process, or you want to choose the number of notebooks at run time, you can use a loop to iterate through the list of notebooks. You can also use a loop to iterate through the output of a node or through elements in a data array. ### Loops in parallel Add a parallel looping construct to the pipeline. A parallel loop runs the iterating nodes independently and possibly simultaneously. For example, to train a machine learning model with a set of hyperparameters to find the best performer, you can use a loop to iterate over a list of hyperparameters to train the notebook variations in parallel. The results can be compared later in the flow to find the best notebook. To see limits on the number of loops you can run simultaneously, see [Limitations](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.htmlpipeline-issues). \\ Input parameters when iterating List types |Parameter|Description| |---|---| |List input| The List input parameter contains two fields, the data type of the list and the list content that the loop iterates over or a standard link to pipeline input or pipeline output.| |Parallelism |Maximum number of tasks to be run simultaneously. Must be greater than zero| \\ Input parameters when iterating String types |Parameter|Description| |---|---| |Text input| Text data that the loop reads from| |Separator| A char used to split the text | |Parallelism (optional)| Maximum number of tasks to be run simultaneously. Must be greater than zero| If the input array element type is JSON or any type that is represented as such, this field might decompose it as dictionary. " -8CF8260D0474AD73D9878CCD361C83102B724733_16,8CF8260D0474AD73D9878CCD361C83102B724733,"Keys are the original element keys and values are the aliases for output names. \ Loops in sequence Add a sequential loop construct to the pipeline. Loops can iterate over a numeric range, a list, or text with a delimiter. A use case for sequential loops is if you want to try an operation 3 time before you determine whether an operation failed. \\ Input parameters |Parameter|Description| |---|---| |List input| The List input parameter contains two fields, the data type of the list and the list content that the loop iterates over or a standard link to pipeline input or pipeline output.| |Text input| Text data that the loop reads from. Specify a character to split the text.| |Range| Specify the start, end, and optional step for a range to iterate over. The default step is 1.| After you configure the loop iterative range, define a subpipeline flow inside the loop to run until the loop is complete. For example, it can invoke notebook, script, or other flow per iteration. \ Terminate loop In a parallel or sequential loop process flow, you can add a Terminate pipeline node to end the loop process anytime. You must customize the conditions for terminating. Attention: If you use the Terminate loop node, your loop cancels any ongoing tasks and terminates without completing its iteration. -* Set user variables - -" -8CF8260D0474AD73D9878CCD361C83102B724733_17,8CF8260D0474AD73D9878CCD361C83102B724733,"Configure a user variable with a key/value pair, then add the list of dynamic variables for this node. For more information on how to create a user variable, see [Configuring global objects](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-flow-param.html). #### Input parameters x |Parameter|Description| |---|---| |Name| Enter the name, or key, for the variable| |Input type|Choose Expression or Pipeline parameter as the input type. - For expressions, use the built-in Expression Builder to create a variable that results from a custom expression. - For pipeline parameters, assign a pipeline parameter and use the parameter value as input for the user variable. - - - - - -* Terminate pipeline - -You can initiate and control the termination of a pipeline with a Terminate pipeline node from the Control category. When the error flow runs, you can optionally specify how to handle notebook or training jobs that were initiated by nodes in the pipeline. You must specify whether to wait for jobs to finish, cancel the jobs then stop the pipeline, or stop everything without canceling. Specify the options for the Terminate pipeline node. #### Input parameters |Parameter|Description| |---|---| |Terminator mode (optional)| Choose the behavior for the error flow| Terminator mode can be: - Terminate pipeline run and all running jobs stops all jobs and stops the pipeline. - Cancel all running jobs then terminate pipeline cancels any running jobs before stopping the pipeline. - Terminate pipeline run after running jobs finish waits for running jobs to finish, then stops the pipeline. - Terminate pipeline that is run without stopping jobs stops the pipeline but allows running jobs to continue. - - - -" -8CF8260D0474AD73D9878CCD361C83102B724733_18,8CF8260D0474AD73D9878CCD361C83102B724733," Update nodes - -Use update nodes to replace or update assets to improve performance. For example, if you want to standardize your tags, you can update to replace a tag with a new tag. - - - -* Update AutoAI experiment - -Update the training details for an [AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html). #### Input parameters |Parameter|Description| |---|---| |AutoAI experiment|Path to a project or a space, where the experiment resides| |AutoAI experiment name (optional)| Name of the experiment to be updated, with optional description and tags| |AutoAI experiment description (optional)|Description of the experiment| |AutoAI experiment tags (optional)|Tags to identify the experiment| #### Output parameters |Parameter|Description| |---|---| |AutoAI experiment|Path of the updated experiment| - - - - - -* Update batch deployment - -Use these parameters to update a batch deployment. #### Input parameters |Parameter|Description| |---|---| |Deployment| Path to the deployment to be updated| |New name for the deployment (optional)|Name or ID of the deployment to be updated | |New description for the deployment (optional)|Description of the deployment| |New tags for the deployment (optional)| Tags to identify the deployment| |ML asset|Name or ID of the machine learning asset to deploy| |Hardware specification|Update the hardware specification for the job| #### Output parameters |Parameter|Description| |---|---| |Deployment|Path of the updated deployment| - - - - - -* Update deployment space - -" -8CF8260D0474AD73D9878CCD361C83102B724733_19,8CF8260D0474AD73D9878CCD361C83102B724733,"Update the details for a space. #### Input parameters |Parameter|Description| |---|---| |Space|Path of the existing space| |Space name (optional)|Update the space name| |Space description (optional)|Description of the space| |Space tags (optional)|Tags to identify the space| |WML Instance (optional)| Specify a new Machine Learning instance| |WML instance| Specify a new Machine Learning instance. Note: Even if you assign a different name for an instance in the UI, the system name is Machine Learning instance. Differentiate between different instances by using the instance CRN| #### Output parameters |Parameter|Description| |---|---| |Space|Path of the updated space| - - - - - -* Update online deployment - -Use these parameters to update an online deployment (web service). #### Input parameters |Parameter|Description| |---|---| |Deployment|Path of the existing deployment| |Deployment name (optional)|Update the deployment name| |Deployment description (optional)|Description of the deployment| |Deployment tags (optional)|Tags to identify the deployment| |Asset (optional)|Machine learning asset (or version) to be redeployed| #### Output parameters |Parameter|Description| |---|---| |Deployment|Path of the updated deployment| - - - -" -8CF8260D0474AD73D9878CCD361C83102B724733_20,8CF8260D0474AD73D9878CCD361C83102B724733," Delete nodes - -Configure parameters for delete operations. - - - -* Delete - -You can delete: - AutoAI experiment - Batch deployment - Deployment space - Online deployment For each item, choose the asset for deletion. - - - -" -8CF8260D0474AD73D9878CCD361C83102B724733_21,8CF8260D0474AD73D9878CCD361C83102B724733," Run nodes - -Use these nodes to train an experiment, execute a script, or run a data flow. - - - -* Run AutoAI experiment - -Trains and stores [AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) pipelines and models. #### Input parameters |Parameter|Description| |---|---| |AutoAI experiment|Browse for the ML Pipeline asset or get the experiment from a pipeline parameter or the output from a previous node. | |Training data asset|Browse or search for the data to train the experiment. Note that you can supply data at runtime by using a pipeline parameter| |Holdout data asset (optional)|Optionally choose a separate file to use for holdout data for testingmodel performance| |Models count (optional)| Specify how many models to save from best performing pipelines. The limit is 3 models| |Run name (optional)|Name of the experiment and optional description and tags| |Model name prefix (optional)| Prefix used to name trained models. Defaults to <(experiment name)> | |Run description (optional)| Description of the new training run| |Run tags (optional)| Tags for new training run| |Creation mode (optional)| Choose how to handle a case where the pipeline flow tries to create an asset and one of the same name exists. " -8CF8260D0474AD73D9878CCD361C83102B724733_22,8CF8260D0474AD73D9878CCD361C83102B724733,"One of: ignore, fail, overwrite| |Error policy (optional)| Optionally, override the default error policy for the node| #### Output parameters |Parameter|Description| |---|---| |Models | List of paths of highest N trained and persisted model (ordered by selected evaluation metric)| |Best model | path of the winning model (based on selected evaluation metric)| |Model metrics | a list of trained model metrics (each item is a nested object with metrics like: holdout_accuracy, holdout_average_precision, ...)| |Winning model metric |elected evaluation metric of the winning model| |Optimized metric| Metric used to tune the model| |Execution status| Information on the state of the job: pending, starting, running, completed, canceled, or failed with errors| |Status message|Information about the state of the job| -* Run Bash script - -" -8CF8260D0474AD73D9878CCD361C83102B724733_23,8CF8260D0474AD73D9878CCD361C83102B724733,"Run an inline Bash script to automate a function or process for the pipeline. You can enter the Bash script code manually, or you can import the bash script from a resource, pipeline parameter, or the output of another node. You can also use a Bash script to process large output files. For example, you can generate a large, comma-separated list that you can then iterate over using a loop. In the following example, the user entered the inline script code manually. The script uses the cpdctl tool to search all notebooks with a set variable tag and aggregates the results in a JSON list. The list can then be used in another node, such as running the notebooks returned from the search. ![Example of a bash script node](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipeline-config-4.png){: height=""50%"" width=""50%""} #### Input parameters |Parameter|Description| |---|---| |Inline script code|Enter a Bash script in the inline code editor. Optional: Alternatively, you can select a resource, assign a pipeline parameter, or select from another node. | |Environment variables (optional)| Specify a variable name (the key) and a data type and add to the list of variables to use in the script.| |Runtime type (optional)| Select either use standalone runtime (default) or a shared runtime. " -8CF8260D0474AD73D9878CCD361C83102B724733_24,8CF8260D0474AD73D9878CCD361C83102B724733,"Use a shared runtime for tasks that require running in shared pods. | |Error policy (optional)| Optionally, override the default error policy for the node| #### Output parameters |Parameter|Description| |---|---| |Output variables |Configure a key/value pair for each custom variable, then click the Add button to populate the list of dynamic variables for the node| |Return value|Return value from the node| |Standard output|Standard output from the script| |Execution status|Information on the state of the job: pending, starting, running, completed, canceled, or failed with errors| |Status message| Message associated with the status| #### Rules for Bash script output The output for a Bash script is often the result of a computed expression and can be large. When you are reviewing the properties for a script with valid large output, you can preview or download the output in a viewer. These rules govern what type of large output is valid. - The output of a list_expression is a calculated expression, so it is valid a large output. - String output is treated as a literal value rather than a calculated expression, so it must follow the size limits that govern inline expressions. For example, you are warned when a literal value exceeds 1 KB and values of 2 KB and higher result in an error. #### Referencing a variable in a Bash script The way that you reference a variable in a script depends on whether the variable was created as an input variable or as an output variable. Output variables are created as a file and require a file path in the reference. Specifically: - Input variables are available using the assigned name - Output variable names require that _PATH be appended to the variable name to indicate that values have to be written to the output file pointed by the {output_name}_PATH variable. #### Using SSH in Bash scripts -The following steps describe how to use ssh to run your remote Bash script. 1. Create a private key and public key. -bash ssh-keygen -t rsa -C ""XXX"" 2. Copy the public key to the remote host. -" -8CF8260D0474AD73D9878CCD361C83102B724733_25,8CF8260D0474AD73D9878CCD361C83102B724733,"bash ssh-copy-id USER@REMOTE_HOST 3. On the remote host, check whether the public key contents are added into /root/.ssh/authorized_keys. -4. Copy the public and private keys to a new directory in the Run Bash script node. bash mkdir -p $HOME/.ssh copy private key content echo ""-----BEGIN OPENSSH PRIVATE KEY----- ... ... -----END OPENSSH PRIVATE KEY-----"" > $HOME/.ssh/id_rsa copy public key content echo ""ssh-rsa ...... "" > $HOME/.ssh/id_rsa.pub chmod 400 $HOME/.ssh/id_rsa.pub chmod 400 $HOME/.ssh/id_rsa ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o GlobalKnownHostsFile=/dev/null -i $HOME/.ssh/id_rsa USER@REMOTE_HOST ""cd /opt/scripts; ls -l; sh 1.sh"" \\ Using SSH utilities in Bash scripts -The following steps describe how to use sshpass to run your remote Bash script. 1. Put your SSH password file in your system path, such as the mounted storage volume path. 2. Use the SSH password directly in the Run Bash script node: bash cd /mnts/orchestration ls -l sshpass chmod 777 sshpass ./sshpass -p PASSWORD ssh -o StrictHostKeyChecking=no USER@REMOTE_HOST ""cd /opt/scripts; ls -l; sh 1.sh"" - - - - - -* Run batch deployment - -Configure this node to run selected deployment jobs. #### Input parameters |Parameter|Description| |---|---| |Deployment|Browse or search for the deployment job | |Input data assets|Specify the data used for the batch job -" -8CF8260D0474AD73D9878CCD361C83102B724733_26,8CF8260D0474AD73D9878CCD361C83102B724733,"Restriction: Input for batch deployment jobs is limited to data assets. Deployments that require JSON input or multiple files as input, are not supported. For example, SPSS models and Decision Optimization solutions that require multiple files as input are not supported.| |Output asset|Name of the output file for the results of the batch job. You can either select Filename and enter a custom file name, or Data asset and select an existing asset in a space.| |Hardware specification (optional)|Browse for a hardware specification to apply for the job| |Error policy (optional)| Optionally, override the default error policy for the node| #### Output parameters |Parameter|Description| |---|---| |Job|Path to the file with results from the deployment job| |Job run|ID for the job| |Execution status|Information on the state of the job: pending, starting, running, completed, canceled, or failed with errors| |Status message| Information about the state of the job| - - - - - -* Run DataStage job - - - - - -* Run Data Refinery job - -" -8CF8260D0474AD73D9878CCD361C83102B724733_27,8CF8260D0474AD73D9878CCD361C83102B724733,"This node runs a specified Data Refinery job. #### Input parameters |Parameter|Description| |---|---| |Data Refinery job |Path to the Data Refinery job.| |Environment | Path of the environment used to run the job Attention: Leave the environments field as is to use the default runtime. If you choose to override, specify an alternate environment for running the job. Be sure any environment that you specify is compatible with the component language and hardware configuration to avoid a runtime error.| |Error policy (optional)| Optionally, override the default error policy for the node| #### Output parameters |Parameter|Description| |---|---| |Job |Path to the results from the Data Refinery job| |Job run|Information about the job run| |Job name |Name of the job | |Execution status|Information on the state of the flow: pending, starting, running, completed, canceled, or failed with errors| |Status message| Information about the state of the flow| - - - - - -* Run notebook job - -" -8CF8260D0474AD73D9878CCD361C83102B724733_28,8CF8260D0474AD73D9878CCD361C83102B724733,"Use these configuration options to specify how to run a Jupyter Notebook in a pipeline. #### Input parameters |Parameter|Description| |---|---| |Notebook job|Path to the notebook job. | |Environment |Path of the environment used to run the notebook. Attention: Leave the environments field as is to use the default environment. If you choose to override, specify an alternate environment for running the job. Be sure any environment that you specify is compatible with the notebook language and hardware configuration to avoid a runtime error.| |Environment variables (optional)|List of environment variables used to run the notebook job| |Error policy (optional)| Optionally, override the default error policy for the node| Notes: - Environment variables that you define in a pipeline cannot be used for notebook jobs you run outside of Watson Pipelines. - You can run a notebook from a code package in a regular package. #### Output parameters |Parameter|Description| |---|---| |Job |Path to the results from the notebook job| |Job run|Information about the job run| |Job name |Name of the job | |Output variables |Configure a key/value pair for each custom variable, then click Add to populate the list of dynamic variables for the node| |Execution status|Information on the state of the run: pending, starting, running, completed, canceled, or failed with errors| |Status message|Information about the state of the notebook run| - - - - - -* Run Pipelines component - -" -8CF8260D0474AD73D9878CCD361C83102B724733_29,8CF8260D0474AD73D9878CCD361C83102B724733,"Run a reusable pipeline component that is created by using a Python script. For more information, see [Creating a custom component](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-custom-comp.html). - If a pipeline component is available, configuring the node presents a list of available components. - The component that you choose specifies the input and output for the node. - Once you assign a component to a node, you cannot delete or change the component. You must delete the node and create a new one. - - - - - -* Run Pipelines job - -Add a pipeline to run a nested pipeline job as part of a containing pipeline. This is a way of adding reusable processes to multiple pipelines. You can use the output from a nested pipeline that is run as input for a node in the containing pipeline. #### Input parameters |Parameter|Description| |---|---| |Pipelines job|Select or enter a path to an existing Pipelines job.| |Environment (optional)| Select the environment to run the Pipelines job in, and assign environment resources. Attention: Leave the environments field as is to use the default runtime. If you choose to override, specify an alternate environment for running the job. Be sure any environment that you specify is compatible with the component language and hardware configuration to avoid a runtime error.| |Job Run Name (optional) |A default job name is used unless you override it by specifying a custom job name. You can see the job name in the Job Details dashboard.| |Values for local parameters (optional) | Edit the default job parameters. This option is available only if you have local parameters in the job. | |Values from parameter sets (optional) |Edit the parameter sets used by this job. " -8CF8260D0474AD73D9878CCD361C83102B724733_30,8CF8260D0474AD73D9878CCD361C83102B724733,"You can choose to use the parameters as defined by default, or use value sets from other pipelines' parameters. | |Error policy (optional)| Optionally, override the default error policy for the node| #### Output parameters |Parameter|Description| |---|---| |Job |Path to the results from the pipeline job| |Job run|Information about the job run| |Job name |Name of the job | |Execution status| Returns a value of: Completed, Completed with warnings, Completed with errors, Failed, or Canceled| |Status message| Message associated with the status| #### Notes for running nested pipeline jobs If you create a pipeline with nested pipelines and run a pipeline job from the top-level, the pipelines are named and saved as project assets that use this convention: - The top-level pipeline job is named ""Trial job - pipeline guid"". - All subsequent jobs are named ""pipeline_ pipeline guid"". -* Run SPSS Modeler job - -" -8CF8260D0474AD73D9878CCD361C83102B724733_31,8CF8260D0474AD73D9878CCD361C83102B724733,"Use these configuration options to specify how to run an SPSS Modeler in a pipeline. #### Input parameters |Parameter|Description| |---|---| |SPSS Modeler job|Select or enter a path to an existing SPSS Modeler job.| |Environment (optional)| Select the environment to run the SPSS Modeler job in, and assign environment resources. Attention: Leave the environments field as is to use the default SPSS Modeler runtime. If you choose to override, specify an alternate environment for running the job. Be sure any environment that you specify is compatible with the hardware configuration to avoid a runtime error.| |Values for local parameters | Edit the default job parameters. This option is available only if you have local parameters in the job. | |Error policy (optional)| Optionally, override the default error policy for the node| #### Output parameters |Parameter|Description| |---|---| |Job |Path to the results from the pipeline job| |Job run|Information about the job run| |Job name |Name of the job | |Execution status| Returns a value of: Completed, Completed with warnings, Completed with errors, Failed, or Canceled| |Status message| Message associated with the status| - - - -" -8CF8260D0474AD73D9878CCD361C83102B724733_32,8CF8260D0474AD73D9878CCD361C83102B724733," Learn more - -Parent topic:[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html) -" -536EF493AB96990DE8E237EDB8A97DB989EF15C8_0,536EF493AB96990DE8E237EDB8A97DB989EF15C8," Creating a pipeline - -Create a pipeline to run an end-to-end scenario to automate all or part of the AI lifecycle. For example, create a pipeline that creates and trains an asset, promotes it to a space, creates a deployment, then scores the model. - -Watch this video to see how to create and run a sample pipeline. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -536EF493AB96990DE8E237EDB8A97DB989EF15C8_1,536EF493AB96990DE8E237EDB8A97DB989EF15C8," Overview: Adding a pipeline to a project - -Follow these steps to add a pipeline to a project: - - - -1. Open a project. -2. Click New task > Automate model lifecycle. -3. Enter a name and an optional description. -4. Click Create to open the canvas. - - - -" -536EF493AB96990DE8E237EDB8A97DB989EF15C8_2,536EF493AB96990DE8E237EDB8A97DB989EF15C8," Pipeline access - -When you use a pipeline to automate a flow, you must have access to all of the elements in the pipeline. Make sure that you create and run pipelines with the proper access to all assets, projects, and spaces used in the pipeline. - -" -536EF493AB96990DE8E237EDB8A97DB989EF15C8_3,536EF493AB96990DE8E237EDB8A97DB989EF15C8," Related services - -In addition to access to all elements in a pipeline, you must have the services available to run all assets you add to a pipeline. For example, if you automate a pipeline that trains and deploys a model, you must have the Watson Studio and Watson Machine Learning services. If a required service is missing, the pipeline will not run. This table lists assets that require services in addition to Watson Studio: - - - - Asset Required service - - AutoAI experiment Watson Machine Learning - Batch deployment job Watson Machine Learning - Online deployment (web service) Watson Machine Learning - - - -" -536EF493AB96990DE8E237EDB8A97DB989EF15C8_4,536EF493AB96990DE8E237EDB8A97DB989EF15C8," Overview: Building a pipeline - -Follow these high-level steps to build and run a pipeline. - - - -1. Drag any node objects onto the canvas. For example, drag a Run notebook job node onto the canvas. -2. Use the action menu for each node to view and select options. -3. Configure a node as required. You are prompted to supply the required input options. For some nodes, you can view or configure output options as well. For examples of configuring nodes, see [Configuring pipeline components](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.html). -4. Drag from one node to another to connect and order the pipeline. -5. Optional: Click the Global objects icon ![global objects icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/global-objects-icon.png) in the toolbar to configure runtime options for the pipeline. -6. When the pipeline is complete, click the Run icon on the toolbar to run the pipeline. You can run a trial to test the pipeline, or you can schedule a job when you are confident in the pipeline. - - - -" -536EF493AB96990DE8E237EDB8A97DB989EF15C8_5,536EF493AB96990DE8E237EDB8A97DB989EF15C8," Configuring nodes - -As you add nodes to a pipeline, you must configure them to provide all of the required details. For example, if you add a node to run an AutoAI experiment, you must configure the node to specify the experiment, load the training data, and specify the output file: - -![AutoAI node parameters](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/OE-run-autoai-node.png) - -" -536EF493AB96990DE8E237EDB8A97DB989EF15C8_6,536EF493AB96990DE8E237EDB8A97DB989EF15C8," Connecting nodes - -When you build a complete pipeline, the nodes must be connected in the order in which they run in the pipeline. To connect nodes, hover over a node and drag a connection to the target node. Disconnected nodes are run in parallel. - -![Connecting nodes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipelines_conecting_nodes_gif.gif) - -" -536EF493AB96990DE8E237EDB8A97DB989EF15C8_7,536EF493AB96990DE8E237EDB8A97DB989EF15C8," Defining pipeline parameters - -A pipeline parameter defines a global variable for the whole pipeline. Use pipeline parameters to specify data from one of these categories: - - - - Parameter type Can specify - - Basic JSON types such as string, integer, or a JSON object - CPDPath Resources available within the platform, such as assets, asset containers, connections, notebooks, hardware specs, projects, spaces, or jobs - InstanceCRN Storage, machine learning instances, and other services. - Other Various configuration types, such as status, timeout length, estimator, error policies and other various configuration types. - - - -To specify a pipeline parameter: - - - -1. Click the global objects icon ![global objects icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/global-objects-icon.png) in the toolbar to open the Manage global objects window. -2. Select the Pipeline parameters tab to configure parameters. -3. Click Add pipeline parameter. -4. Specify a name and an optional description. -5. Select a type and provide any required information. -6. Click Add when the definition is complete, and repeat the previous steps until you finish defining the parameters. -7. Close the Manage global objects dialog. - - - -The parameters are now available to the pipeline. - -" -536EF493AB96990DE8E237EDB8A97DB989EF15C8_8,536EF493AB96990DE8E237EDB8A97DB989EF15C8," Next steps - -[Configure pipeline components](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.html) - -Parent topic:[IBM Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) -" -7F2731C1EBB3F492687A336E1369CD6232512118_0,7F2731C1EBB3F492687A336E1369CD6232512118," Creating a custom component for use in the pipeline - -A custom pipeline component runs a script that you write. You can use custom components to share reusable scripts between pipelines. - -You create custom components as project assets. You can then use the components in pipelines you create in that project. You can create as many custom components for pipelines as needed. Currently, to create a custom component you must create one programmatically, using a Python function. - -" -7F2731C1EBB3F492687A336E1369CD6232512118_1,7F2731C1EBB3F492687A336E1369CD6232512118," Creating a component as a project asset - -To create a custom component, use the Python client to authenticate with IBM Watson Pipelines, code the component, then publish the component to the specified project. After it is available in the project, you can assign it to a node in a pipeline and run it as part of a pipeline flow. - -This example demonstrates the process of publishing a component that adds two numbers together, then assigning the component to a pipeline node. - - - -1. Publish a function as a component with the latest Python client. Run the following code in a Jupyter Notebook in a project of IBM watsonx. - - Install libraries -! pip install ibm-watson-pipelines - - Authentication -from ibm_watson_pipelines import WatsonPipelines - -apikey = '' -project_id = 'your_project_id' - -client = WatsonPipelines.from_apikey(apikey) - - Define the function of the component - - If you define the input parameters, users are required to - input them in the UI - -def add_two_numbers(a: int, b: int) -> int: -print('Adding numbers: {} + {}.'.format(a, b)) -return a + b + 10 - - Other possible functions might be sending a Slack message, - or listing directories in a storage volume, and so on. - - Publish the component -client.publish_component( -name='Add numbers', Appears in UI as component name -func=add_two_numbers, -description='Custom component adding numbers', Appears in UI as component description -project_id=project_id, -overwrite=True, Overwrites an existing component with the same name -) - -To generate a new API key: - - - -1. Go to the [IBM Cloud home page](https://cloud.ibm.com/) -2. Click Manage > Access (IAM) -3. Click API keys -4. Click Create - - - - - - - -1. Drag the node called Run Pipelines component under Run to the canvas. -![Retrieving the custom component node](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ml-orch-custom-comp-1.png) -" -7F2731C1EBB3F492687A336E1369CD6232512118_2,7F2731C1EBB3F492687A336E1369CD6232512118,"2. Choose the name of the component that you want to use. -![Choosing the actual component function](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ml-orch-custom-comp-2.png) -3. Connect and run the node as part of a pipeline job. -![Connecting the component](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ml-orch-custom-comp-3.png) - - - -" -7F2731C1EBB3F492687A336E1369CD6232512118_3,7F2731C1EBB3F492687A336E1369CD6232512118," Manage pipeline components - -To manage your components, use the Python client to manage them. - - - -Table 1. Manage pipeline components - - Method Function - - client.get_components(project_id=project_id) List components from a project - client.get_component(project_id=project_id, component_id=component_id) Get a component by ID - client.get_component(project_id=project_id, name=component_name) Get a component by name - client.publish_component(component name) Publish a new component - client.delete_component(project_id=project_id, component_id=component_id) Delete a component by ID - - - -" -7F2731C1EBB3F492687A336E1369CD6232512118_4,7F2731C1EBB3F492687A336E1369CD6232512118," Import and export - -IBM Watson Pipelines can be imported and exported with pipelines only. - -Parent topic:[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html) -" -05D687FC92FD17804374E20E7F330EDAE142F725_0,05D687FC92FD17804374E20E7F330EDAE142F725," Handling Pipeline errors - -You can specify how to respond to errors in a pipeline globally, with an error policy, and locally, by overriding the policy on the node level. You can also create a custom error-handling response. - -" -05D687FC92FD17804374E20E7F330EDAE142F725_1,05D687FC92FD17804374E20E7F330EDAE142F725," Setting global error policy - -The error policy sets the default behavior for errors in a pipeline. You can override this behavior for any node in the pipeline. - -To set the global error policy: - - - -1. Click the Manage default settings icon on the toolbar. -2. Choose the default response to an error under the Error policy: - - - -* Fail pipeline on error stops the flow and initiates an error-handling flow. -* Continue pipeline on error tries to continue running the pipeline. - -Note: Continue pipeline on error affects nodes that use the default error policy and does not affect node-specific error policies. - - - -3. You can optionally create a custom error-handling response for a flow failure. - - - -" -05D687FC92FD17804374E20E7F330EDAE142F725_2,05D687FC92FD17804374E20E7F330EDAE142F725," Specifying an error response - -If you opt for Fail pipeline on error for either the global error policy or for a node-specific policy, you can further specify what happens on failure. For example, if you check the Show icon on nodes that are linked to an error-handling pipeline, an icon flags a node with an error to help debug the flow. - -" -05D687FC92FD17804374E20E7F330EDAE142F725_3,05D687FC92FD17804374E20E7F330EDAE142F725," Specifying a node-specific error policy - -You can override the default error policy for any node in the pipeline. - - - -1. Click a node to open the configuration pane. -2. Check the option to Override default error policy with: - - - -* Fail pipeline on error -* Continue pipeline on error - - - - - -" -05D687FC92FD17804374E20E7F330EDAE142F725_4,05D687FC92FD17804374E20E7F330EDAE142F725," Viewing all node policies - -To view all node-specific error handling for a pipeline: - - - -1. Click Manage default settings on the toolbar. -2. Click the view all node policies link under Error policy. - - - -A list of all nodes in the pipeline show which nodes use the default policy, and which override the default policy. Click a node name to see the policy details. Use the view filter to show: - - - -* All error policies: all nodes -* Default policy: all nodes that use the default policy -* Override default policy: all nodes that override the default policy -* Fail pipeline on error: all nodes that stop the flow on error -* Continue pipeline on error: all nodes that try to continue the flow on error - - - -" -05D687FC92FD17804374E20E7F330EDAE142F725_5,05D687FC92FD17804374E20E7F330EDAE142F725," Running the Fail on error flow - -If you specify that the flow fails on error, a secondary error handling flow starts when an error is encountered. - -" -05D687FC92FD17804374E20E7F330EDAE142F725_6,05D687FC92FD17804374E20E7F330EDAE142F725," Adding a custom error response - -If Create custom error handling response is checked on default settings for error policy, you can add an error handling node to the canvas so you can configure a custom error response. The response applies to all nodes configured to fail when an error occurs. - -Parent topic:[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html) -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_0,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Functions used in Watson Pipelines's Expression Builder - -Use these functions in Pipelines code editors, for example, to define a user variable or build an advanced condition. - -The Experssion Builder uses the categories for coding functions: - - - -* [Conversion functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-expr-builder.html?context=cdpaas&locale=enconversion) -* [Standard functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-expr-builder.html?context=cdpaas&locale=enofext) -* [Accessing advanced global objects](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-expr-builder.html?context=cdpaas&locale=enadvanced) - - - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_1,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Conversion functions - -Converts a single data element format to another. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_2,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Table for basic data type conversion - - - - Type Accepts Returns Syntax - - double int, uint, string double double(val) - duration string duration duration(string)
Duration must end with ""s"", which stands for seconds. - int int, uint, double, string, timestamp int int(val) - timestamp string timestamp timestamp(string)
Converts strings to timestamps according to RFC3339, that is ""1972-01-01T10:00:20.021-05:00"". - uint int, double, string uint uint(val) - - - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_3,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Example - -For example, to cast a value to type double: - -double(%val%) - -When you cast double to int | uint, result rounds toward zero and errors if result is out of range. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_4,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Standard functions - -Functions that are unique to IBM Watson Pipelines. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_5,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," sub - -Replaces substrings of a string that matches the given regular expression that starts at position offset. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_6,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -(string).sub(substring (string), replacement (string), [occurrence (int), offset (int)]]) - -returns: the string with substrings updated. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_7,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -'aaabbbcccbbb'.sub('[b]+','RE') - -Returns 'aaaREcccRE'. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_8,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," format - -Formats a string or timestamp according to a format specifier and returns the resulting string. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_9,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_10,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C,"format as a method of strings - -(string).format(parameter 1 (string or bool or number)... parameter 10 (string or bool or number)) - -returns: the string that contains the formatted input values. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_11,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C,"format as a method of timestamps - -(timestamp).format(layout(string)) - -returns: the formatted timestamp in string format. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_12,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -'number=%d, text=%s'.format(1, 'str') - -Returns the string 'number=1, text=str'. - -timestamp('2020-07-24T09:07:29.000-00:00').format('%Y/%m/%d') - -Returns the string '2020/07/24'. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_13,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," now - -Returns the current timestamp. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_14,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -now() - -returns: the current timestamp. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_15,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," parseTimestamp - -Returns the current timestamp in string format. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_16,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -parseTimestamp([timestamp_string(string)] [layout(string)]) - -returns: the current timestamp to a string of type string. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_17,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -parseTimestamp('2020-07-24T09:07:29Z') - -Returns '2020-07-24T09:07:29.000-00:00'. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_18,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," min - -Returns minimum value in list. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_19,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -(list).min() - -returns: the minimum value of the list. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_20,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -[1,2,3].min() - -Returns the integer 1. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_21,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," max - -Returns maximum value in list. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_22,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -(list).max() - -returns: the maximum value of the list. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_23,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -[1,2,3].max() - -Returns the integer 3. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_24,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," argmin - -Returns index of minimum value in list. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_25,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -(list).argmin() - -returns: the index of the minimum value of the list. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_26,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -[1,2,3].argmin() - -Returns the integer 0. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_27,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," argmax - -Returns index of maximum value in list. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_28,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -(list).argmax() - -returns: the index of the maximum value of the list. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_29,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -[1,2,3].argmax() - -Returns the integer 2. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_30,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," sum - -Returns the sum of values in list. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_31,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -(list).sum() - -returns: the index of the maximum value of the list. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_32,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -[1,2,3].argmax() - -Returns the integer 2. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_33,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," base64.decode - -Decodes base64-encoded string to bytes. This function returns an error if the string input is not base64-encoded. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_34,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -base64.decode(base64_encoded_string(string)) - -returns: the decoded base64-encoded string in byte format. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_35,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -base64.decode('aGVsbG8=') - -Returns 'hello' in bytes. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_36,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," base64.encode - -Encodes bytes to a base64-encoded string. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_37,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -base64.encode(bytes_to_encode (bytes)) - -returns: the encoded base64-encoded string of the original byte value. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_38,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -base64.decode(b'hello') - -Returns 'aGVsbG8=' in bytes. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_39,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," charAt - -Returns the character at the given position. If the position is negative, or greater than the length of the string, the function produces an error. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_40,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -(string).charAt(index (int)) - -returns: the character of the specified position in integer format. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_41,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -'hello'.charAt(4) - -Returns the character 'o'. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_42,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," indexOf - -Returns the integer index of the first occurrence of the search string. If the search string is not found the function returns -1. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_43,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -(string).indexOf(search_string (string), [offset (int)]) - -returns: the index of the first character occurrence after the offset. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_44,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -'hello mellow'.indexOf('ello', 2) - -Returns the integer 7. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_45,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," lowerAscii - -Returns a new string with ASCII characters turned to lowercase. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_46,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -(string).lowerAscii() - -returns: the new lowercase string. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_47,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -'TacoCat'.lowerAscii() - -Returns the string 'tacocat'. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_48,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," replace - -Returns a new string based on the target, which replaces the occurrences of a search string with a replacement string if present. The function accepts an optional limit on the number of substring replacements to be made. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_49,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -(string).replace(search_string (string), replacement (string), [offset (int)]) - -returns: the new string with occurrences of a search string replaced. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_50,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -'hello hello'.replace('he', 'we') - -Returns the string 'wello wello'. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_51,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," split - -Returns a list of strings that are split from the input by the separator. The function accepts an optional argument that specifies a limit on the number of substrings that are produced by the split. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_52,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -(string).split(separator (string), [limit (int)]) - -returns: the split string as a string list. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_53,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -'hello hello hello'.split(' ') - -Returns the string list ['hello', 'hello', 'hello']. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_54,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," substring - -Returns the substring given a numeric range corresponding to character positions. Optionally you might omit the trailing range for a substring from a character position until the end of a string. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_55,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -(string).substring(start (int), [end (int)]) - -returns: the substring at the specified index of the string. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_56,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -'tacocat'.substring(4) - -Returns the string 'cat'. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_57,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," trim - -Returns a new string, which removes the leading and trailing white space in the target string. The trim function uses the Unicode definition of white space, which does not include the zero-width spaces. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_58,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -(string).trim() - -returns: the new string with white spaces removed. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_59,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -' ttrimn '.trim() - -Returns the string 'trim'. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_60,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," upperAscii - -Returns a new string where all ASCII characters are upper-cased. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_61,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -(string).upperAscii() - -returns: the new string with all characters turned to uppercase. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_62,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -'TacoCat'.upperAscii() - -Returns the string 'TACOCAT'. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_63,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," size - -Returns the length of the string, bytes, list, or map. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_64,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -(string | bytes | list | map).size() - -returns: the length of the string, bytes, list, or map array. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_65,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -'hello'.size() - -Returns the integer 5. - -'hello'.size() - -Returns the integer 5. - -['a','b','c'].size() - -Returns the integer 3. - -{'key': 'value'}.size() - -Returns the integer 1. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_66,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," contains - -Tests whether the string operand contains the substring. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_67,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -(string).contains(substring (string)) - -returns: a Boolean value of whether the substring exists in the string operand. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_68,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -'hello'.contains('ll') - -Returns true. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_69,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," endsWith - -Tests whether the string operand ends with the specified suffix. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_70,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -(string).endsWith(suffix (string)) - -returns: a Boolean value of whether the string ends with specified suffix in the string operand. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_71,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -'hello'.endsWith('llo') - -Returns true. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_72,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," startsWith - -Tests whether the string operand starts with the prefix argument. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_73,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -(string).startsWith(prefix (string)) - -returns: a Boolean value of whether the string begins with specified prefix in the string operand. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_74,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -'hello'.startsWith('he') - -Returns true. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_75,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," matches - -Tests whether the string operand matches regular expression. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_76,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -(string).matches(prefix (string)) - -returns: a Boolean value of whether the string matches the specified regular expression. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_77,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -'Hello'.matches('[Hh]ello') - -Returns true. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_78,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," getDate - -Get the day of the month from the date with time zone (default Coordinated Universal Time), one-based indexing. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_79,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -(timestamp).getDate([time_zone (string)]) - -returns: the day of the month with one-based indexing. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_80,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -timestamp('2020-07-24T09:07:29.000-00:00').getDate() - -Returns 24. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_81,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," getDayOfMonth - -Get the day of the month from the date with time zone (default Coordinated Universal Time), zero-based indexing. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_82,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -(timestamp).getDayOfMonth([time_zone (string)]) - -returns: the day of the month with zero-based indexing. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_83,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -timestamp('2020-07-24T09:07:29.000-00:00').getDayOfMonth() - -Returns 23. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_84,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," getDayOfWeek - -Get day of the week from the date with time zone (default Coordinated Universal Time), zero-based indexing, zero for Sunday. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_85,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -(timestamp).getDayOfWeek([time_zone (string)]) - -returns: the day of the week with zero-based indexing. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_86,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -timestamp('2020-07-24T09:07:29.000-00:00').getDayOfWeek() - -Returns 5. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_87,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," getDayOfYear - -Get the day of the year from the date with time zone (default Coordinated Universal Time), zero-based indexing. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_88,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -(timestamp).getDayOfYear([time_zone (string)]) - -returns: the day of the year with zero-based indexing. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_89,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -timestamp('2020-07-24T09:07:29.000-00:00').getDayOfYear() - -Returns 205. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_90,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," getFullYear - -Get the year from the date with time zone (default Coordinated Universal Time). - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_91,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -(timestamp).getFullYear([time_zone (string)]) - -returns: the year from the date. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_92,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -timestamp('2020-07-24T09:07:29.000-00:00').getFullYear() - -Returns 2020. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_93,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," getMonth - -Get the month from the date with time zone, 0-11. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_94,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -(timestamp).getMonth([time_zone (string)]) - -returns: the month from the date. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_95,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -timestamp('2020-07-24T09:07:29.000-00:00').getMonth() - -Returns 6. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_96,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," getHours - -Get hours from the date with time zone, 0-23. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_97,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -(timestamp).getHours([time_zone (string)]) - -returns: the hour from the date. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_98,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -timestamp('2020-07-24T09:07:29.000-00:00').getHours() - -Returns 9. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_99,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," getMinutes - -Get minutes from the date with time zone, 0-59. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_100,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -(timestamp).getMinutes([time_zone (string)]) - -returns: the minute from the date. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_101,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -timestamp('2020-07-24T09:07:29.000-00:00').getMinutes() - -Returns 7. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_102,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," getSeconds - -Get seconds from the date with time zone, 0-59. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_103,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -(timestamp).getSeconds([time_zone (string)]) - -returns: the second from the date. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_104,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -timestamp('2020-07-24T09:07:29.000-00:00').getSeconds() - -Returns 29. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_105,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," getMilliseconds - -Get milliseconds from the date with time zone, 0-999. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_106,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -(timestamp).getMilliseconds([time_zone (string)]) - -returns: the millisecond from the date. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_107,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - -timestamp('2020-07-24T09:07:29.021-00:00').getMilliseconds() - -Returns 21. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_108,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Access to advanced global objects - -Get node outputs, user variables, and pipeline parameters by using the following Pipelines code. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_109,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Get user variable - -Gets the most up-to-date value of a user variable. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_110,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -vars. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_111,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - - - - Example Output - - vars.my_user_var Gets the value of the user variable my_user_var - - - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_112,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Get parameters - -Gets the flow parameters. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_113,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -params. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_114,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - - - - Example Output - - params.a Gets the value of the parameter a - - - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_115,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Get parameter sets - -Gets the flow parameter sets. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_116,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -param_set.. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_117,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - - - - Example Output - - param_set.ps.a Gets the value of the parameter a from a parameter set ps - param_sets.config Gets the pipeline configuration values - param_sets.config.deadline Gets a date object from the configurations parameter set - param_sets.ps[""$PARAM""] Gets the value of the parameter $PARAM from a parameter set ps - - - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_118,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Get task results - -Get a pipeline task's resulting output and other metrics from a pipeline task after it completes its run. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_119,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax - -tasks.. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_120,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - - - - Example Output - - tasks.run_datastage_job Gets the results dictionary of job output - tasks.run_datastage_job.results.score Gets the value score of job output - tasks.run_datastage_job.results.timestamp Gets the end timestamp of job run - tasks.run_datastage_job.results.error Gets the number of errors from job run - tasks.loop_task.loop.counter Gets the current loop iterative counter of job run - tasks.loop_task.loop.item Gets the current loop iterative item of job run - tasks.run_datastage_job.results.status Gets either success or fail status of job run - tasks.run_datastage_job.results.status_message Gets the status message of job run - tasks.run_datastage_job.results.job_name Gets the job name - tasks.run_datastage_job.results.job Gets the Cloud Pak for Data path of job - tasks.run_datastage_job.results.job_run Gets the Cloud Pak for Data run path of job run - - - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_121,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Get pipeline context objects - -Gets values that are evaluated in the context of a pipeline that is run in a scope (project, space, catalog). - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_122,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - - - - Example Output - - ctx.scope.id Gets scope ID - ctx.scope.type Returns either ""project"", ""space"", or ""catalog"" - ctx.scope.name Gets scope name - ctx.pipeline.id Gets pipeline ID - ctx.pipeline.name Gets pipeline name - ctx.job.id Gets job ID - ctx.run_datastage_job.id Gets job run ID - ctx.run_datastage_job.started_at Gets job run start time - ctx.user.id Gets the user ID - - - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_123,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Get error status - -If the exception handler is triggered, an error object is created and becomes accessible only within the exception handler. - -" -E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_124,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples - - - - Example Output - - error.status Gets either success or fail status of job run, usually failed - error.status_message Gets the error status message - error.job Gets the Cloud Pak for Data path of job - error.run_datastage_job Gets the Cloud Pak for Data run path of job - - - -Parent topic:[Adding conditions to a Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-conditions.html) -" -445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C_0,445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C," Configuring global objects for Watson Pipelines - -Use global objects to create configurable constants to configure your pipeline at run time. Use parameters or user variables in pipelines to specify values at run time, rather than hardcoding the values. Unlike pipeline parameters, user variables can be dynamically set during the flow. - -Learn about creating: - - - -* [Pipeline parameters](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-flow-param.html?context=cdpaas&locale=enflow) -* [Parameter sets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-flow-param.html?context=cdpaas&locale=enparam-set) -* [User variables](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-flow-param.html?context=cdpaas&locale=enuser) - - - -" -445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C_1,445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C," Pipeline parameters - -Use pipeline parameters to specify a value at pipeline runtime. For example, if you want a user to enter a deployment space for pipeline output, use a parameter to prompt for the space name to use when the pipeline runs. Specifying the value of the parameter each time that you run the job helps you use the correct resources. - -About pipeline parameters: - - - -* can be assigned as a node value or assign it for the pipeline job. -* can be assigned to any node, and a status indicator alerts you. -* can be used for multiple nodes. - - - -" -445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C_2,445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C," Defining a pipeline parameter - - - -1. Create a pipeline parameter from the node configuration panel from the toolbar. -2. Enter a name and an optional description. The name must be lower snake case with lowercase letters, numbers, and underscores. For example, lower_snake_case_with_numbers_123 is a valid name. The name must begin with a letter. If the name does not comply, you get a 404 error when you try to run the pipeline. -3. Assign a parameter type. Depending on the parameter type, you might need to provide more details or assign a default value. -4. Click Add to list to save the pipeline parameter. - - - -Note: - -" -445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C_3,445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C," Parameter types - -Parameter types are categorized as: - - - -* Basic: including data types to structure input to a pipeline or options for handling the creation of a duplicate space or asset. -* Resource: for selecting a project, catalog, space, or asset. -* Instance: for selecting a machine learning instance or a Cloud Object Storage instance. -* Other: for specifying details, such as creation mode or error policy. - - - -" -445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C_4,445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C," Example of using pipeline types - -To create a parameter of the type Path: - - - -1. Create a parameter set called MASTER_PARAMETER_SET. -2. Create a parameter called file_path and set the type to Path. -3. Set the value of file_path to mnts/workspace/masterdir. -4. Drag the node Wait for file onto the canvas and set the File location value to MASTER_PARAMETER_SET.file_path. -5. Connect the Wait for file with the Run Bash script node so that the latter node runs after the former. -6. Optional: Test your parameter variable: - - - -1. Add the environment variable parameter to your MASTER_PARAMETER_SET parameter set, for example FILE_PATH. -2. Paste the following command into the Script code of the Run Bash script: - -echo File: $FILE_PATH -cat $FILE_PATH - - - -7. Run the pipeline. The path mnts/workspace/masterdir is in both of the nodes' execution logs to see they passed successfully. - - - -" -445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C_5,445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C," Configuring a node with a pipeline parameter - -When you configure a node with a pipeline parameter, you can choose an existing pipeline parameter or create a new one as part of configuring a node. - -For example: - - - -1. Create a pipeline parameter called creationmode and save it to the parameter list. -2. Configure a Create deployment space node and click to open the configuration panel. -3. Choose the Pipeline parameter as the input for the Creation mode option. -4. Choose the creationmode pipeline parameter and save the configuration. - - - -When you run the flow, the pipeline parameter is assigned when the space is created. - -" -445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C_6,445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C," Parameter sets - -Parameter sets are a group of related parameters to use in a pipeline. For example, you might create one set of parameters to use in a test environment and another for use in a production environment. - -Parameter sets can be created as a project asset. Parameter sets created in the project are then available for use in pipelines in that project. - -" -445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C_7,445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C," Creating a parameter set as a project asset - -You can create a parameter set as a reusable project asset to use in pipelines. - - - -1. Open an existing project or create a project. -2. Click New task > Collect multiple job parameters with specified values to reuse in jobs from the available tasks. -3. Assign a name for the set, and specify the details for each parameter in the set, including: - - - -* Name for the parameter -* Data type -* Prompt -* Default value - - - -4. Optionally create value sets for the parameters in the parameter set. The value sets can be the different values for different contexts. For example, you can create a Test value set with values for a test environment, and a production set for production values. -5. Save the parameter set after you create all the parameters, s. It becomes available for use in pipelines that are created in that project. - - - -" -445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C_8,445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C," Adding a parameter set for use in a pipeline - -To add a parameter set from a project: - - - -1. Click the global objects icon and switch to the Parameter sets tab. -2. Click Add parameter set to add parameter sets from your project that you want to use in your pipeline. -3. You can add or remove parameter sets from the list. The parameter sets you specify for use in your pipeline becomes available when you assign parameters as input in the pipeline. - - - -" -445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C_9,445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C," Creating a parameter set from the parameters list in your pipeline - -You can create a parameter set from the parameters list for your pipeline - - - -1. Click the global objects icon and open the Pipeline Parameters. -2. Select the parameters that you want in the set, then click the Save as parameter set icon. -3. Enter a name and optional description for the set. -4. Save to add the parameter set for use in your pipeline. - - - -" -445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C_10,445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C," Using a parameter set in a pipeline - -To use a parameter set: - - - -1. Choose Assign pipeline parameter as an input type from a node property sheet. -2. Choose the parameter to assign. A list displays all available parameters of the type for that input. Available parameters can be individual parameters, and parameters defined as part of a set. The parameter set name precedes the name of the parameter. For example, Parameter_set_name.Parameter_name. -3. Run the pipeline and select a value set for the corresponding value (if available), assign a value for the parameter, or accept the default value. - - - -Note:You can use a parameter set in the expression builder by using the format param_sets.. If a parameter set value contains an environment variable, you must use this syntax in the expression builder: param_sets.MyParamSet[""$ICU_DATA""]. Attention: If you delete a parameter, make sure that you remove the references to the parameter from your job design. If you do not remove the references, your job might fail. - -" -445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C_11,445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C," Editing a parameter set in a job - -If you use a parameter set when you define a job, you can choose a value set to populate variables with the values in that set. If you change and save the values, then edit the job and save changes, the parameter set values reset to the defaults. - -" -445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C_12,445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C," User variables - -Create user variables to assign values when the flow runs. Unlike pipeline parameters, user variables can be modified during processing. - -" -445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C_13,445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C," Defining a user variable - -You can create user variables for use in your pipeline. User variables, like parameters, are defined on the global level and are not specific to any node. The initial value for a user variable must be set when you define it and cannot be set dynamically as the result of any node output. When you define a user variable, you can use the Set user variables node to update it with node output. - -To create a user variable: - - - -1. Create a variable from the Update variable node configuration panel or from the toolbar. -2. Enter a name and an optional description. The name must be lower snake case with lowercase letters, numbers, and underscores. For example, lower_snake_case_with_numbers_123 is a valid name. The name must begin with a letter. If the name does not comply, you get a 404 error when you try to run the pipeline. -3. Complete the definition of the variable, including choosing a variable type and input type. -4. Click Add to add the variable to the list. It is now available for use in a node. - - - -Parent topic:[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html) -" -484AF9BAF43AC6BCFDFAF7B0D353CCDF119033DF_0,484AF9BAF43AC6BCFDFAF7B0D353CCDF119033DF," Getting started with the Watson Pipelines editor - -The Watson Pipelines editor is a graphical canvas where you can drag and drop nodes that you connect together into a pipeline for automating machine model operations. - -You can open the Pipelines editor by creating a new Pipelines asset or editing an existing Pipelines asset. To create a new asset in your project from the Assets tab, click New asset > Automate model lifecycle. To edit an existing asset, click the pipeline asset name on the Assets tab. - -The canvas opens with a set of annotated tools for you to use to create a pipeline. The canvas includes the following components: - -![Pipeline canvas components](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/Pipeline-canvas.svg) - - - -* The node palette provides nodes that represent various actions for manipulating assets and altering the flow of control in a pipeline. For example, you can add nodes to create assets such as data files, AutoAI experiments, or deployment spaces. You can configure node actions based on conditions if files import successfully, such as feeding data into a notebook. You can also use nodes to run and update assets. As you build your pipeline, you connect the nodes, then configure operations on the nodes to create the pipeline. These pipelines create a dynamic flow that addresses specific stages of the machine learning lifecycle. -* The toolbar includes shortcuts to options related to running, editing, and viewing the pipeline. -* The parameters pane provides context-sensitive options for configuring the elements of your pipeline. - - - -" -484AF9BAF43AC6BCFDFAF7B0D353CCDF119033DF_1,484AF9BAF43AC6BCFDFAF7B0D353CCDF119033DF," The toolbar - -![Pipeline toolbar](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/Pipeline-toolbar.png) - -Use the Pipeline editor toolbar to: - - - -* Run the pipeline as a trial run or a scheduled job -* View the history of pipeline runs -* Cut, copy, or paste canvas objects -* Delete a selected node -* Drop a comment onto the canvas -* Configure global objects, such as pipeline parameters or user variables -* Manage default settings -* Arrange nodes vertically -* View last saved timestamp -* Zoom in or out -* Fit the pipeline to the view -* Show or hide global messages - - - -Hover over an icon on the toolbar to view the shortcut text. - -" -484AF9BAF43AC6BCFDFAF7B0D353CCDF119033DF_2,484AF9BAF43AC6BCFDFAF7B0D353CCDF119033DF," The node palette - -The node palette provides the objects that you need to create an end-to-end pipeline. Click a top-level node in the palette to see the related nodes. - - - - Node category Description Node type - - Copy Use nodes to copy an asset or file, import assets, or export assets Copy assets
Export assets
Import assets - Create Create assets or containers for assets Create AutoAI experiment
Create AutoAI time series experiment
Create batch deployment
Create data asset
Create deployment space
Create online deployment - Wait Specify node-level conditions for advancing the pipeline run Wait for all results
Wait for any result
Wait for file - Control Specify error handling Loop in parallel
Loop in sequence
Set user variables
Terminate pipeline - Update Update the configuration settings for a space, asset, or job. Update AutoAI experiment
Update batch deployment
Update deployment space
Update online deployment - Delete Remove a specified asset, job, or space. Delete AutoAI experiment
Delete batch deployment
Delete deployment space
Delete online deployment - Run Run an existing or ad hoc job. Run AutoAI experiment
Run Bash script
Run batch deployment
Run Data Refinery job
Run notebook job
Run pipeline job
Run Pipelines component job
Run SPSS Modeler job - - - -" -484AF9BAF43AC6BCFDFAF7B0D353CCDF119033DF_3,484AF9BAF43AC6BCFDFAF7B0D353CCDF119033DF," The parameters pane - -Double-click a node to edit its configuration options. Depending on the type, a node can define various input and output options or even allow the user to add inputs or outputs dynamically. You can define the source of values in various ways. For example, you can specify that the source of value for ""ML asset"" input for a batch deployment must be the output from a run notebook node. - -For more information on parameters, see [Configuring pipeline components](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.html). - -" -484AF9BAF43AC6BCFDFAF7B0D353CCDF119033DF_4,484AF9BAF43AC6BCFDFAF7B0D353CCDF119033DF," Next steps - - - -* [Planning a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-planning.html) -* [Explore the sample pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html) -* [Create a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html) - - - -Parent topic:[Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) -" -28243D1C0B8BCF04FE3556990D40D1A31F4CB58D_0,28243D1C0B8BCF04FE3556990D40D1A31F4CB58D," Manage default settings - -You can manage the global settings of your IBM Watson Pipelines such as a default error policy and default rules for node caching. - -Global settings apply to all nodes in the pipeline unless local node settings overwrite them. To update global settings, click the Manage default settings icon ![gear icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ml-setting-icon.png) on the toolbar. You can configure: - - - -* [Error policy](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-global-settings.html?context=cdpaas&locale=enerr-pol) -* [Node caching](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-global-settings.html?context=cdpaas&locale=ennode-cache) - - - -" -28243D1C0B8BCF04FE3556990D40D1A31F4CB58D_1,28243D1C0B8BCF04FE3556990D40D1A31F4CB58D," Setting global error policy - -You can define the behavior of Pipelines when an error occurs. - - - -* Fail pipeline on error stops the flow and initiates an error-handling flow. -* Continue pipeline on error tries to continue running the pipeline. - - - -" -28243D1C0B8BCF04FE3556990D40D1A31F4CB58D_2,28243D1C0B8BCF04FE3556990D40D1A31F4CB58D," Error handling - -You can configure the behavior of Pipelines for error handling. - - - -* Create custom-error handling response: Customize an error-handling response. Add an error handling node to the canvas so you can configure a custom error response. The response applies to all configured nodes to fail when an error occurs. - - - -* Show icon on nodes linked to error handling pipeline: An icon flags a node with an error to help debug the flow. - - - - - -To learn more about error handling, see [Managing pipeline errors](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-errors.html) - -" -28243D1C0B8BCF04FE3556990D40D1A31F4CB58D_3,28243D1C0B8BCF04FE3556990D40D1A31F4CB58D," Setting node caches - -Manual caching for nodes sets the default for how the pipeline caches and stores information. You can override these settings for individual nodes. - -" -28243D1C0B8BCF04FE3556990D40D1A31F4CB58D_4,28243D1C0B8BCF04FE3556990D40D1A31F4CB58D," Default cache usage frequency - -You can change the following cache settings: - -" -28243D1C0B8BCF04FE3556990D40D1A31F4CB58D_5,28243D1C0B8BCF04FE3556990D40D1A31F4CB58D," Caching method - -Choose whether to enable automatic caching for all nodes or choose to manually set cache conditions for specific nodes. - - - -* Enable automatic caching for all nodes (recommended) -All nodes that support caching enable it by default. Setting Creation Mode or Copy Mode in your node's settings to Overwrite automatically disables cache, if the node supports these setting parameters. -* Enable caching for specific nodes in the node properties panel. -In individual nodes, you can select Create data cache at this node in Output to allow caching for individual nodes. A save icon appears on nodes that uses this feature. - - - -" -28243D1C0B8BCF04FE3556990D40D1A31F4CB58D_6,28243D1C0B8BCF04FE3556990D40D1A31F4CB58D," Cache usage - -Choose the conditions for using cached data. - - - -* Do not use cache -* Always use cache -* Use cache when all selected conditions are met - - - -* Retrying from a previous failed run -* Input values for the current pipeline are unchanged from previous run -* Pipeline version is unchanged from previous run - - - - - -To view and download your cache data, see Run tracker in your flow. You can download the results by opening a preview of the node's cache and clicking the download icon. - -" -28243D1C0B8BCF04FE3556990D40D1A31F4CB58D_7,28243D1C0B8BCF04FE3556990D40D1A31F4CB58D," Resetting the cache - -If your cache was enabled, you can choose to reset your cache when you run a Pipelines job. When you click Run again, you can select Clear pipeline cache in Define run settings. By choosing this option, you are overriding the default cache settings to reset the cache for the current run. However, the pipeline still creates cache for subsequent runs while cache is enabled. - -" -28243D1C0B8BCF04FE3556990D40D1A31F4CB58D_8,28243D1C0B8BCF04FE3556990D40D1A31F4CB58D," Managing your Pipelines settings - -Configure other global settings for your Pipelines asset. - -" -28243D1C0B8BCF04FE3556990D40D1A31F4CB58D_9,28243D1C0B8BCF04FE3556990D40D1A31F4CB58D," Autosave - -Choose to automatically save your current Pipelines canvas at a selected frequency. Only changes that impact core pipeline flow are saved. - -Parent topic:[IBM Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) -" -606EF22CF35AF0EDC961776FB893B07A880F11D4_0,606EF22CF35AF0EDC961776FB893B07A880F11D4," IBM Watson Pipelines - -The Watson Pipelines editor provides a graphical interface for orchestrating an end-to-end flow of assets from creation through deployment. Assemble and configure a pipeline to create, train, deploy, and update machine learning models and Python scripts. - -To design a pipeline that you drag nodes onto the canvas, specify objects and parameters, then run and monitor the pipeline. - -" -606EF22CF35AF0EDC961776FB893B07A880F11D4_1,606EF22CF35AF0EDC961776FB893B07A880F11D4," Automating the path to production - -Putting a model into a product is a multi-step process. Data must be loaded and processed, models must be trained and tuned before they are deployed and tested. Machine learning models require more observation, evaluation, and updating over time to avoid bias or drift. - -![Automating the AI lifecycle](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/oflow-cycle-3.svg) - -Automating the pipeline makes it simpler to build, run, and evaluate a model in a cohesive way, to shorten the time from conception to production. You can assemble the pipeline, then rapidly update and test modifications. The Pipelines canvas provides tools to visualize the pipeline, customize it at run time with pipeline parameter variables, and then run it as a trial job or on a schedule. - -The Pipelines editor also allows for more cohesive collaboration between a data scientist and a ModelOps engineer. A data scientist can create and train a model. A ModelOps engineer can then automate the process of training, deploying, and evaluating the model after it is published to a production environment. - -" -606EF22CF35AF0EDC961776FB893B07A880F11D4_2,606EF22CF35AF0EDC961776FB893B07A880F11D4," Next steps - -[Add a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-get-started.html) to your project and get to know the canvas tools. - -" -606EF22CF35AF0EDC961776FB893B07A880F11D4_3,606EF22CF35AF0EDC961776FB893B07A880F11D4," Additional resources - -For more information, see this blog post about [automating the AI lifecycle with a pipeline flow](https://yairschiff.medium.com/automating-the-ai-lifecycle-with-ibm-watson-studio-orchestration-flow-4450f1d725d6). -" -1BD28F052373C2E70130C7539D399D76F9D2AAFE_0,1BD28F052373C2E70130C7539D399D76F9D2AAFE," Accessing the components in your pipeline - -When you use a pipeline to automate a flow, you must have access to all of the elements in the pipeline. Make sure that you create and run pipelines with the proper access to all assets, projects, and spaces used in the pipeline. Collaborators who run the pipeline must also be able to access the pipeline components. - -" -1BD28F052373C2E70130C7539D399D76F9D2AAFE_1,1BD28F052373C2E70130C7539D399D76F9D2AAFE," Managing pipeline credentials - -To run a job, the pipeline must have access to IBM Cloud credentials. Typically, a pipeline uses your personal IBM Cloud API key to execute long-running operations in the pipeline without disruption. If credentials are not available when you create the job, you are prompted to supply an API key or create a new one. - -To generate an API key from your IBM Cloud user account, go to [Manage access and users - API Keys](https://cloud.ibm.com/iam/apikeys) and create or select an API key for your user account. - -You can also generate and rotate API keys from Profile and settings > User API key. For more information, see [Managing the user API key](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-apikeys.html). - -Alternatively, you can request that a key is generated for the pipeline. In either scenario, name and copy the key, protecting it as you would a password. - -" -1BD28F052373C2E70130C7539D399D76F9D2AAFE_2,1BD28F052373C2E70130C7539D399D76F9D2AAFE," Adding assets to a pipeline - -When you create a pipeline, you add assets, such as data, notebooks, deployment jobs, or Data Refinery jobs to the pipeline to orchestrate a sequential process. The strongly recommended method for adding assets to a pipeline is to collect the assets in the project containing the pipeline and use the asset browser to select project assets for the pipeline. - -Attention: Although you can include assets from other projects, doing so can introduce complexities and potential problems in your pipeline and could be prohibited in a future release. The recommended practice is to use assets from the current project. - -Parent topic:[Getting started with Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-get-started.html) -" -F5086D0B6258FEF503CB3219F427FFBFF73135E1_0,F5086D0B6258FEF503CB3219F427FFBFF73135E1," Programming IBM Watson Pipelines - -You can program in a pipeline by using a notebook, or running Bash scripts in a pipeline. - -" -F5086D0B6258FEF503CB3219F427FFBFF73135E1_1,F5086D0B6258FEF503CB3219F427FFBFF73135E1," Programming with Bash scripts - -[Run Bash scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.htmlrun-bash) in a pipeline to compute or process data as part of the flow. - -" -F5086D0B6258FEF503CB3219F427FFBFF73135E1_2,F5086D0B6258FEF503CB3219F427FFBFF73135E1," Programming with notebooks - -You can use a notebook to run an end-to-end pipeline or to run parts of a pipeline, such as model training. - - - -* For details on creating notebooks and for links to sample notebooks, see [Notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-editor.html). -* For details on running a notebook as a pipeline job, see [Run notebook job](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.htmlrun-notebook). - - - -" -F5086D0B6258FEF503CB3219F427FFBFF73135E1_3,F5086D0B6258FEF503CB3219F427FFBFF73135E1," Using the Python client - -Use the [Watson Pipelines Python client](https://pypi.org/project/ibm-watson-pipelines/) for working with pipelines in a notebook. - -To install the library, use pip to install the latest package of ibm-watson-pipelines in your coding environment. For example, run the following code in your notebook environment or console. - -!pip install ibm-watson-pipelines - -Use the client documentation for syntax and descriptions for commands that access pipeline components. - -" -F5086D0B6258FEF503CB3219F427FFBFF73135E1_4,F5086D0B6258FEF503CB3219F427FFBFF73135E1," Go further - -To learn more about how to orchestrate external tasks efficiently, see [Making tasks more efficiently with Tekton](https://medium.com/@rafal.bigaj/tekton-and-friends-how-to-orchestrate-external-tasks-efficiently-3fcacf882f6d), a key continuous delivery framework used for Pipelines. - -Parent topic:[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html) -" -AE57C56703B39C9097516D1466B70A3DE57AA1C4_0,AE57C56703B39C9097516D1466B70A3DE57AA1C4," Running a pipeline - -You can run a pipeline in real time to test a flow as you work. When you are satisfied with a pipeline, you can then define a job to run a pipeline with parameters or to run on a schedule. - -To run a pipeline: - - - -1. Click Run pipeline on the toolbar. -2. Choose an option: - - - -* Trial run runs the pipeline without creating a job. Use this to test a pipeline. -* Create a job presents you with an interface for configuring and scheduling a job to run the pipeline. You can save and reuse run details, such as pipeline parameters, for a version of your pipeline. -* View history compares all of your runs over time. - - - - - -You must make sure requirements are met when you run a pipeline. For example, you might need a deployment space or an API key to run some of your nodes before you can begin. - -" -AE57C56703B39C9097516D1466B70A3DE57AA1C4_1,AE57C56703B39C9097516D1466B70A3DE57AA1C4," Using a job run name - -You can optionally specify a job run name when running a pipeline flow or a pipeline job and see the different jobs in the Job details dashboard. Otherwise, you can also assign a local parameter DSJobInvocationId to either a Run pipeline job node or Run DataStage job node. - -If both the parameter DSJobInvocationId and job name of the node are set, DSJobInvocationId will be used. If neither are set, the default value ""job run"" is used. - - Notes on running a pipeline - - - -* When you run a pipeline from a trial run or a job, click the node output to view the results of a successful run. If the run fails, error messages and logs are provided to help you correct issues. -* Errors in the pipeline are flagged with an error badge. Open the node or condition with an error to change or complete the configuration. -* View the consolidated logs to review operations or identify issues with the pipeline. - - - -" -AE57C56703B39C9097516D1466B70A3DE57AA1C4_2,AE57C56703B39C9097516D1466B70A3DE57AA1C4," Creating a pipeline job - -The following are all the configuration options for defining a job to run the pipeline. - - - -1. Name your pipeline job and choose a version. -2. Input your IBM API key. -3. (Optional) Schedule your job by toggling the Schedule button. - - - -1. Choose the start date and fine tune your schedule to repeat by any minute, hour, day, week, month. -2. Add exception days to prevent the job from running on certain days. -3. Add a time for terminating the job. - - - -4. (Optional) Enter the pipeline parameters needed for your job, for example assigning a space to a deployment node. To see how to create a pipeline parameter, see Defining pipeline parameters in [Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html). -5. (Optional) Choose if you want to be notified of pipeline job status after running. - - - -" -AE57C56703B39C9097516D1466B70A3DE57AA1C4_3,AE57C56703B39C9097516D1466B70A3DE57AA1C4," Saving a version of a pipeline - -You can save a version of a pipeline and revert to it at a later time. For example, if you want to preserve a particular configuration before you make changes, save a version. You can revert the pipeline to a previous version. When you share a pipeline, the latest version is used. - -To save a version: - - - -1. Click the Versions icon on the toolbar. -2. In the Versions pane, click Save version to create a new version with a version number incremented by 1. - - - -When you run the pipeline, you can choose from available saved versions. - -Note: You cannot delete a saved version. - -" -AE57C56703B39C9097516D1466B70A3DE57AA1C4_4,AE57C56703B39C9097516D1466B70A3DE57AA1C4," Exporting pipeline assets - -When you export project or space assets to import them into a deployment space, you can include pipelines in the list of assets you export to a zip file and then import into a project or space. - -Importing a pipeline into a space extends your MLOps capabilities to run jobs for various assets from a space, or to move all jobs from a pre-production to a production space. Note these considerations for working with pipelines in a space: - - - -* Pipelines in a space are read-only. You cannot edit the pipeline. You must edit the pipeline from the project, then export the updated pipeline and import it into the space. -* Although you cannot edit the pipeline in a space, you can create new jobs to run the pipeline. You can also use parameters to assign values for jobs so you can have different values for each job you configure. -* If there is already a pipeline in the space with the same name, the pipeline import will fail. -* If there is no pipeline in the space with the same name, a pipeline with version 1 is created in the space. -* Any supporting assets or references required to run a pipeline job must also be part of the import package or the job will fail. -* If your pipeline contains assets or tools not supported in a space, such as an SPSS modeler job, the pipeline job will fail. - - - -Parent topic:[IBM Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) -" -2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2_0,2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2," Run the built-in sample pipeline - -You can view and run a built-in sample pipeline that uses sample data to learn how to automate machine learning flows in Watson Pipelines. - -" -2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2_1,2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2," What's happening in the sample pipeline? - -The sample pipeline gets training data, trains a machine learning model by using the AutoAI tool, and selects the best pipeline to save as a model. The model is then copied to a deployment space where it is deployed. - -The sample illustrates how you can automate an end-to-end flow to make the lifecycle easier to run and monitor. - -The sample pipeline looks like this: - -![Sample orchestration pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/oflow-tutorial1.png) - -The tutorial steps you through this process: - - - -1. [Prerequisites](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html?context=cdpaas&locale=enset-up) -2. [Preview creating and running the sample pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html?context=cdpaas&locale=enpreview) -3. [Creating the sample pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html?context=cdpaas&locale=encreate-sample) -4. [Running the sample pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html?context=cdpaas&locale=enrun-flow) -5. [Reviewing the results](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html?context=cdpaas&locale=enreview-results) -" -2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2_2,2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2,"6. [Exploring the sample nodes and configuration](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html?context=cdpaas&locale=enexplore-sample) - - - -" -2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2_3,2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2," Prerequisites - -To run this sample, you must first create: - - - -* A [project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html), where you can run the sample pipeline. -* A [deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-create.html), where you can view and test the results. The deployment space is required to run the sample pipeline. - - - -" -2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2_4,2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2," Preview creating and running the sample pipeline - -Watch this video to see how to create and run a sample pipeline. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2_5,2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2," Creating the sample pipeline - -Create the sample pipeline in the Pipelines editor. - - - -1. Open the project where you want to create the pipeline. -2. From the Assets tab, click New asset > Automate model lifecycle. -3. Click the Samples tab, and select the Orchestrate an AutoAI experiment. -4. Enter a name for the pipeline. For example, enter Bank marketing sample. -5. Click Create to open the canvas. - - - -" -2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2_6,2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2," Running the sample pipeline - -To run the sample pipeline: - - - -1. Click Run pipeline on the canvas toolbar, then choose Trial run. -2. Select a deployment space when prompted to provide a value for the deployment_space pipeline parameter. - - - -1. Click Select Space. -2. Expand the Spaces section. -3. Select your deployment space. -4. Click Choose. - - - -3. Provide an API key if it is your first time to run a pipeline. Pipeline assets use your personal IBM Cloud API key to run operations securely without disruption. - - - -* If you have an existing API key, click Use existing API key, paste the API key, and click Save. -* If you don't have an existing API key, click Generate new API key, provide a name, and click Save. Copy the API key, and then save the API key for future use. When you're done, click Close. - - - -4. Click Run to start the pipeline. - - - -" -2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2_7,2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2," Reviewing the results - -When the pipeline run completes, you can view the output to see the results. - -![Sample pipeline run output](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/oflow-results1.png) - -Open the deployment space that you specified as part of the pipeline. You see the new deployment in the space: - -![Sample pipeline deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/oflow-results-space.png) - -If you want to test the deployment, use the deployment space Test page to submit payload data in JSON format and get a score back. For example, click the JSON tab and enter this input data: - -{""input_data"": [{""fields"": ""age"",""job"",""marital"",""education"",""default"",""balance"",""housing"",""loan"",""contact"",""day"",""month"",""duration"",""campaign"",""pdays"",""previous"",""poutcome""],""values"": ""30"",""unemployed"",""married"",""primary"",""no"",""1787"",""no"",""no"",""cellular"",""19"",""oct"",""79"",""1"",""-1"",""0"",""unknown""]]}]} - -When you click Predict, the model generates output with a confidence score for the prediction of whether a customer subscribes to a term deposit promotion. - -![Prediction score for the sample model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/oflow-gall-sample-output.png) - -In this case, the prediction of ""no"" is accompanied by a confidence score of close to 95%, predicting that the client will most likely not subscribe to a term deposit. - -" -2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2_8,2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2," Exploring the sample nodes and configuration - -Get a deeper understanding of how the sample nodes were configured to work in concert in the pipeline sample. - -" -2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2_9,2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2," Viewing the pipeline parameter - -A pipeline parameter specifies a setting for the entire pipeline. In the sample pipeline, a pipeline parameter is used to specify a deployment space where the model that is saved from the AutoAI experiment is stored and deployed. You are prompted to select the deployment space the pipeline parameter links to. - -Click the Global objects icon ![global objects icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/global-objects-icon.png) on the canvas toolbar to view or create pipeline parameters. In the sample pipeline, the pipeline parameter is named deployment_space and is of type Space. Click the name of the pipeline parameter to view the details. In the sample, the pipeline parameter is used with the Create data file node and the Create AutoAI experiment node. - -![Flow parameter to specify deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/oflow-flow-param3.png) - -" -2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2_10,2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2," Loading the training data for the AutoAI experiment - -In this step, a Create data file node is configured to access the data set for the experiment. Click the node to view the configuration. The data file is bank-marketing-data.csv, which provides sample data to predict whether a bank customer signs up for a term deposit. The data rests in a Cloud Object Storage bucket and can be refreshed to keep the model training up to date. - - - - Option Value - - File The location of the data asset for training the AutoAI experiment. In this case, the data file is in a project. - File path The name of the asset, bank-marketing-data.csv. - Target scope For this sample, the target is a deployment space. - - - -" -2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2_11,2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2," Creating the AutoAI experiment - -The node to Create AutoAI experiment is configured with these values: - - - - Option Value - - AutoAI experiment name onboarding-bank-marketing-prediction - Scope For this sample, the target is a deployment space. - Prediction type binary - Prediction column (label) y - Positive class yes - Training data split ration 0.9 - Algorithms to include GradientBoostingClassifierEstimator
XGBClassifierEstimator - Algorithms to use 1 - Metric to optimize ROC AUC - Optimize metric (optional) default - Hardware specification (optional) default - AutoAI experiment description This experiment uses a sample file, which contains text data that is collected from phone calls to a Portuguese bank in response to a marketing campaign. The classification goal is to predict whether a client subscribes to a term deposit, represented by variable y. - AutoAI experiment tags (optional) none - Creation mode (optional) default - - - -Those options define an experiment that uses the bank marketing data to predict whether a customer is likely to enroll in a promotion. - -" -2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2_12,2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2," Running the AutoAI experiment - -In this step, the Run AutoAI experiment node runs the AutoAI experiment onboarding-bank-marketing-prediction, trains the pipelines, then saves the best model. - - - - Option Value - - AutoAI experiment Takes the output from the Create AutoAI node as the input to run the experiment. - Training data assets Takes the output from the Create Data File node as the training data input for the experiment. - Model count 1 - Holdout data asset (optional) none - Models count (optional) 3 - Run name (optional) none - Model name prefix (optional) none - Run description (optional) none - Run tags (optional) none - Creation mode (optional) default - Error policy (optional) default - - - -" -2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2_13,2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2," Deploying the model to a web service - -The Create Web deployment node creates an online deployment that is named onboarding-bank-marketing-prediction-deployment so you can deliver data and get predictions back in real time from the REST API endpoint. - - - - Option Value - - ML asset Takes the best model output from the Run AutoAI node as the input to create the deployment. - Deployment name onboarding-bank-marketing-prediction-deployment - - - -Parent topic:[IBM Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) -" -BB961AB67F88B50475329FCD1EE2F64137480426_0,BB961AB67F88B50475329FCD1EE2F64137480426," Run a sample pipeline to compare models - -Download a pre-populated project with the assets you need to run a sample pipeline. The pipeline compares two AutoAI experiments and compares the output, selecting the best model and deploying it as a Web service. - -The Train AutoAI and reference model sample creates a pre-populated project with the assets you need to run a pre-built pipeline that trains models using a sample data set. After performing some set up and configuration tasks, you can run the sample pipeline to automate the following sequence: - - - -* Copy sample assets into a space. -* Run a notebook and an AutoAI experiment simultaneously, on a common training data set. -* Run another notebook to compare the results from the previous nodes and select the best model, ranked for accuracy. -* Copy the winning model to a space and create a web service deployment for the selected model. - - - -After the run completes, you can inspect the output in the pipeline editor and then switch to the associated deployment space to [view and test the resulting deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample2.html?context=cdpaas&locale=enview-deploy). - -" -BB961AB67F88B50475329FCD1EE2F64137480426_1,BB961AB67F88B50475329FCD1EE2F64137480426," Learning goals - -After running this sample you will know how to: - - - -* Configure a Watson Pipeline -* Run a Watson Pipeline - - - -" -BB961AB67F88B50475329FCD1EE2F64137480426_2,BB961AB67F88B50475329FCD1EE2F64137480426," Downloading the sample - -Follow these steps to create the sample project from the Samples so you can test the capabilities of IBM Watson Pipelines: - - - -1. Open the [Train AutoAI and reference model sample](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/496c1220779cbe5cccc063534600789f) from the Samples. -2. Click Create project to create the project. -3. Open the project and follow the instructions on the Readme page to set up the pipeline assets. - - - -" -BB961AB67F88B50475329FCD1EE2F64137480426_3,BB961AB67F88B50475329FCD1EE2F64137480426," The sample pipeline components - -The sample project includes: - - - -* Pre-built sample Watson Pipeline -* Data set called german_credit_data_biased_training.csv used for training a model to predict credit risk -* Data set called german_credit_test_data.csv used to test the deployed model -* Notebook called reference-model-training-notebook that trains an AutoAI experiment and saves the best pipeline as a model -* Notebook called select-winning-model that compares the models and chooses the best to save to the designated deployment space - - - -" -BB961AB67F88B50475329FCD1EE2F64137480426_4,BB961AB67F88B50475329FCD1EE2F64137480426," Getting started with the sample - -To run the sample pipeline, you will need to perform some set-up tasks: - - - -1. Create a deployment space, for example, dev-space which you'll need when you run the notebooks. From the navigation menu, select Deployments > View All Spaces > New deployment space. Fill in the required fields. - -Note:Make sure you associate a Watson Machine Learning instance with the space or the pipeline run will fail. -2. From the Assets page of the sample project, open the reference-model-training-notebook and follow the steps in the Set up the environment section to acquire and insert an api_key variable as your credentials. -3. After inserting your credentials, click File > Save as version to save the updated notebook to your project. -4. Do the same for the select-winning-model notebook to add credentials and save the updated version of the notebook. - - - -" -BB961AB67F88B50475329FCD1EE2F64137480426_5,BB961AB67F88B50475329FCD1EE2F64137480426," Exploring the pipeline - -After you complete the set up tasks, open the sample pipeline On-boarding - Train AutoAI and reference model and select the best from the Assets page of the sample project. - -You will see the sample pipeline: - -![Sample pipeline from Samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipeline-sample1.png) - -" -BB961AB67F88B50475329FCD1EE2F64137480426_6,BB961AB67F88B50475329FCD1EE2F64137480426," Viewing node configuration - -As you explore the sample pipeline, double-click on the various nodes to view their configuration. For example, if you click on the first node for copying an asset, you will see this configuration: - -![Creating assets configuration](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipeline-sample1-config.png) - -Note that the node that will copy the data asset to a deployment space is configured using a pipeline parameter. The pipeline parameter creates a placeholder for the space you created to use for this pipeline. When you run the pipeline, you are prompted to choose the space. - -" -BB961AB67F88B50475329FCD1EE2F64137480426_7,BB961AB67F88B50475329FCD1EE2F64137480426," Running the pipeline - -When you are ready to run the pipeline, click the Run icon and choose Trial job. You are prompted to choose the deployment space for the pipeline and create or supply an API key for the pipeline if one is not already available. - -As the pipeline runs, you will see status notifications about the progress of the run. Nodes that are processed successfully are marked with a checkmark. - -![Running the pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipeline-sample1-run.png) - -" -BB961AB67F88B50475329FCD1EE2F64137480426_8,BB961AB67F88B50475329FCD1EE2F64137480426," Viewing the output - -When the job completes, click Pipeline output for the run to see a summary of pipeline processes. You can click to expand each section and view the details for each operation. - -![Viewing the pipeline output](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipeline-sample1-output.png) - -" -BB961AB67F88B50475329FCD1EE2F64137480426_9,BB961AB67F88B50475329FCD1EE2F64137480426," Viewing the deployment in your space - -After you are done exploring the pipeline and its output, you can view the assets that were created in the space you designated for the pipeline. - -Open the space. You can see that the models and training data were copied to the space. The winning model is tagged as selected_model. - -![Viewing the associated space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipeline-sample1-space.png) - -" -BB961AB67F88B50475329FCD1EE2F64137480426_10,BB961AB67F88B50475329FCD1EE2F64137480426," Viewing the deployment - -The last step of the pipeline created a web service deployment for the selected model. Click the Deployments tab to view the deployment. - -![Viewing the deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipeline-sample1-deploy1.png) - -" -BB961AB67F88B50475329FCD1EE2F64137480426_11,BB961AB67F88B50475329FCD1EE2F64137480426," Testing the deployment - -You can test the deployment to see the predictions the model will generate. - - - -1. Click the deployment name to view the details. -2. Click the Test tab. -3. Enter this JSON data into the Input form. The payload (input) must match the schema for the model but should not include the prediction column. - - - -{""input_data"":[{ -""fields"": ""CheckingStatus"",""LoanDuration"",""CreditHistory"",""LoanPurpose"",""LoanAmount"",""ExistingSavings"",""EmploymentDuration"",""InstallmentPercent"",""Sex"",""OthersOnLoan"",""CurrentResidenceDuration"",""OwnsProperty"",""Age"",""InstallmentPlans"",""Housing"",""ExistingCreditsCount"",""Job"",""Dependents"",""Telephone"",""ForeignWorker""], -""values"": ""no_checking"",28,""outstanding_credit"",""appliances"",5990,""500_to_1000"",""greater_7"",5,""male"",""co-applicant"",3,""car_other"",55,""none"",""free"",2,""skilled"",2,""yes"",""yes""]] -}]} - -Clicking Predict returns this prediction, indicating a low credit risk for this customer. - -![Viewing the prediction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipeline-sample1-predict.png) - -" -BB961AB67F88B50475329FCD1EE2F64137480426_12,BB961AB67F88B50475329FCD1EE2F64137480426," Next steps - -[Create a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html) using your own assets. - -Parent topic:[Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) -" -D1DB4F3B084CB401795C925F280207CBCB3D94AA_0,D1DB4F3B084CB401795C925F280207CBCB3D94AA," Storage and data access for IBM Watson Pipelines - -Learn where files and data are stored outside of IBM Watson Pipelines and use it in a Pipelines. - -" -D1DB4F3B084CB401795C925F280207CBCB3D94AA_1,D1DB4F3B084CB401795C925F280207CBCB3D94AA," Access data on Cloud Object Storage - -File storage refers to the repository where you store assets to use with the pipeline. It is a Cloud Object Storage bucket that is used as storage for a particular scope, such as a project or deployment space. - -A storage location is referenced by a Cloud Object Storage data connection in its scope. Refer to a file by pointing to a location such as an object key in a dedicated, self-managed bucket. - -Parent topic:[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html) -" -CE13AE6812F1E2CA6AD429D4B01AF25F9F398148_0,CE13AE6812F1E2CA6AD429D4B01AF25F9F398148," Deploying models with Watson Machine Learning - -Using IBM Watson Machine Learning, you can deploy models, scripts, and functions, manage your deployments, and prepare your assets to put into production to generate predictions and insights. - -This graphic illustrates a typical process for a machine learning model. After you build and train a machine learning model, use Watson Machine Learning to deploy the model, manage the input data, and put your machine learning assets to use. - -![Building a machine learning model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ml_overview.svg) - -" -CE13AE6812F1E2CA6AD429D4B01AF25F9F398148_1,CE13AE6812F1E2CA6AD429D4B01AF25F9F398148," IBM Watson Machine Learning architecture and services - -Watson Machine Learning is a service on IBM Cloud with features for training and deploying machine learning models and neural networks. Built on a scalable, open source platform based on Kubernetes and Docker components, Watson Machine Learning enables you to build, train, deploy, and manage machine learning and deep learning models. - -" -CE13AE6812F1E2CA6AD429D4B01AF25F9F398148_2,CE13AE6812F1E2CA6AD429D4B01AF25F9F398148," Deploying and managing models with Watson Machine Learning - -Watson Machine Learning supports popular frameworks, including: TensorFlow, Scikit-Learn, and PyTorch to build and deploy models. For a list of supported frameworks, refer to [Supported frameworks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html). - -To build and train a model: - - - -* Use one of the tools that are listed in [Analyzing data and building models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html). -* [Import a model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html) that you built and trained outside of Watson Studio. - - - -" -CE13AE6812F1E2CA6AD429D4B01AF25F9F398148_3,CE13AE6812F1E2CA6AD429D4B01AF25F9F398148," Deployment infrastructure - - - -* [Deploy trained models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) as a web service or for batch processing. -* [Deploy Python functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function.html) to simplify AI solutions. - - - -" -CE13AE6812F1E2CA6AD429D4B01AF25F9F398148_4,CE13AE6812F1E2CA6AD429D4B01AF25F9F398148," Programming Interfaces - - - -* Use [Python client library](https://ibm.github.io/watson-machine-learning-sdk/) to work with all of your Watson Machine Learning assets in a notebook. -* Use [REST API](https://cloud.ibm.com/apidocs/machine-learning) to call methods from the base URLs for the Watson Machine Learning API endpoints. -* When you call the API, use the URL and add the path for each method to form the complete API endpoint for your requests. For details on checking endpoints, refer to [Looking up a deployment endpoint](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html). - - - -Parent topic:[Deploying and managing models](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html) -" -577964B0C132F5EA793054C3FF67417DDA6511D3_0,577964B0C132F5EA793054C3FF67417DDA6511D3," Watson Machine Learning Python client samples and examples - -Review and use sample Jupyter Notebooks that use Watson Machine Learning Python library to demonstrate machine learning features and techniques. Each notebook lists learning goals so you can find the one that best meets your goals. - -" -577964B0C132F5EA793054C3FF67417DDA6511D3_1,577964B0C132F5EA793054C3FF67417DDA6511D3," Training and deploying models from notebooks - -If you choose to build a machine learning model in a notebook, you must be comfortable with coding in a Jupyter Notebook. A Jupyter Notebook is a web-based environment for interactive computing. You can run small pieces of code that process your data, and then immediately view the results of your computation. Using this tool, you can assemble, test, and run all of the building blocks you need to work with data, save the data to Watson Machine Learning, and deploy the model. - -" -577964B0C132F5EA793054C3FF67417DDA6511D3_2,577964B0C132F5EA793054C3FF67417DDA6511D3," Learn from sample notebooks - -Many ways exist to build and train models and then deploy them. Therefore, the best way to learn is to look at annotated samples that step you through the process by using different frameworks. Review representative samples that demonstrate key features. - -The samples are built by using the V4 version of the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/). - -Video disclaimer: Some minor steps and graphical elements in the videos might differ from your deployment. - -Watch this video to learn how to train, deploy, and test a machine learning model in a Jupyter Notebook. This video mirrors the Use scikit-learn to recognize hand-written digits found in the Deployment samples table. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -Watch this video to learn how to test a model that was created with AutoAI by using the Watson Machine Learning APIs in Jupyter Notebook. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -577964B0C132F5EA793054C3FF67417DDA6511D3_3,577964B0C132F5EA793054C3FF67417DDA6511D3," Helpful variables - -Use the pre-defined PROJECT_ID environment variable to call the Watson Machine Learning Python client APIs. PROJECT_ID is the guide of the project where your environment is running. - -" -577964B0C132F5EA793054C3FF67417DDA6511D3_4,577964B0C132F5EA793054C3FF67417DDA6511D3," Deployment samples - -View or run these Jupyter Notebooks to see how techniques are implemented by using various frameworks. Some of the samples rely on trained models, which are also available for you to download from the public repository. - - - - Sample name Framework Techniques demonstrated - - [Use scikit-learn and custom library to predict temperature](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/9365d34eeacef267026a2b75b92bfa2f) Scikit-learn Train a model with custom defined transformer
Persist the custom-defined transformer and the model in Watson Machine Learning repository
Deploy the model by using Watson Machine Learning Service
Perform predictions that use the deployed model - [Use PMML to predict iris species](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/8bddf7f7e5d004a009c643750b16f5b4) PMML Deploy and score a PMML model - [Use Python function to recognize hand-written digits](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/1eddc77b3a4340d68f762625d40b64f9) Python Use a function to store a sample model, then deploy the sample model. - [Use scikit-learn to recognize hand-written digits](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e20607d75c8473daaade1e77c21717d4) Scikit-learn Train sklearn model
Persist trained model in Watson Machine Learning repository
Deploy model for online scoring by using client library
Score sample records by using client library -" -577964B0C132F5EA793054C3FF67417DDA6511D3_5,577964B0C132F5EA793054C3FF67417DDA6511D3," [Use Spark and batch deployment to predict customer churn](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e20607d75c8473daaade1e77c21719c1) Spark Load a CSV file into an Apache Spark DataFrame
Explore data
Prepare data for training and evaluation
Create an Apache Spark machine learning pipeline
Train and evaluate a model
Persist a pipeline and model in Watson Machine Learning repository
Explore and visualize prediction result by using the plotly package
Deploy a model for batch scoring by using Watson Machine Learning API - [Use Spark and Python to predict Credit Risk](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e20607d75c8473daaade1e77c2173364) Spark Load a CSV file into an Apache® Spark DataFrame
Explore data
Prepare data for training and evaluation
Persist a pipeline and model in Watson Machine Learning repository from tar.gz files
Deploy a model for online scoring by using Watson Machine Learning API
Score sample data by using the Watson Machine Learning API
Explore and visualize prediction results by using the plotly package - [Use SPSS to predict customer churn](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e20607d75c8473daaade1e77c2175eb9) SPSS Work with the instance
Perform an online deployment of the SPSS model
Score data by using deployed model -" -577964B0C132F5EA793054C3FF67417DDA6511D3_6,577964B0C132F5EA793054C3FF67417DDA6511D3," [Use XGBoost to classify tumors](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/ac820b22cc976f5cf6487260f4c8d9c8) XGBoost Load a CSV file into numpy array
Explore data
Prepare data for training and evaluation
Create an XGBoost machine learning model
Train and evaluate a model
Use cross-validation to optimize the model's hyperparameters
Persist a model in Watson Machine Learning repository
Deploy a model for online scoring
Score sample data - [Predict business for cars](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/61a8b600f1bb183e2c471e7a64299f0e) Spark Download an externally trained Keras model with dataset.
Persist an external model in the Watson Machine Learning repository.
Deploy a model for online scoring by using client library.
Score sample records by using client library. - [Deploy Python function for software specification](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/56825df5322b91daffd39426038808e9) Core Create a Python function
Create a web service
Score the model - [Machine Learning artifact management](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/55ef73c276cd1bf2bae266613d08c0f3) Core Export and import artifacts
Load, deploy, and score externally created models - [Use Decision Optimization to plan your diet](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/5502accad754a3c5dcb3a08f531cea5a) Core Create a diet planning model by using Decision Optimization -" -577964B0C132F5EA793054C3FF67417DDA6511D3_7,577964B0C132F5EA793054C3FF67417DDA6511D3," [Use SPSS and batch deployment with Db2 to predict customer churn](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e5e78be14e2260ccb4bcf8181d0955ef) SPSS Load a CSV file into an Apache Spark DataFrame
Explore data
Prepare data for training and evaluation
Persist a pipeline and model in Watson Machine Learning repository from tar.gz files
Deploy a model for online scoring by using Watson Machine Learning API
Score sample data by using the Watson Machine Learning API
Explore and visualize prediction results by using the plotly package - [Use scikit-learn and AI lifecycle capabilities to predict Boston house prices](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/8bddf7f7e5d004a009c643750b1c7b47) Scikit-learn Load a sample data set from scikit-learn
Explore data
Prepare data for training and evaluation
Create a scikit-learn pipeline
Train and evaluate a model
Store a model in the Watson Machine Learning repository
Deploy a model with AutoAI lifecycle capabilities - [German credit risk prediction with Scikit-learn for model monitoring](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/f63c83c7368d2487c943c91a9a28ad67) Scikit-learn Train, create, and deploy a credit risk prediction model with monitoring - [Monitor German credit risk model](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/48e9f342365736c7bb7a8dfc481bca6e) Scikit-learn Train, create, and deploy a credit risk prediction model with IBM Watson OpenScale capabilities - - - -" -577964B0C132F5EA793054C3FF67417DDA6511D3_8,577964B0C132F5EA793054C3FF67417DDA6511D3," AutoAI samples - -View or run these Jupyter Notebooks to see how AutoAI model techniques are implemented. - - - - Sample name Framework Techniques demonstrated - - [Use AutoAI and Lale to predict credit risk](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/8bddf7f7e5d004a009c643750b16d0c0) Hybrid (AutoAI) with Lale Work with Watson Machine Learning experiments to train AutoAI models
Compare trained models quality and select the best one for further refinement
Refine the best model and test new variations
Deploy and score the trained model - [Use AutoAI to predict credit risk](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/029d77a73d72a4134c81383d6f103330) Hybrid (AutoAI) Work with Watson Machine Learning experiments to train AutoAI models
Compare trained models quality and select the best one for further refinement
Refine the best model and test new variations
Deploy and score the trained model - - - -" -577964B0C132F5EA793054C3FF67417DDA6511D3_9,577964B0C132F5EA793054C3FF67417DDA6511D3," Next steps - - - -* To learn more about using notebook editors, see [Notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html). -* To learn more about working with notebooks, see [Coding and running notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/code-run-notebooks.html). - - - - - -* To learn more about authenticating in a notebook, see [Authentication](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html). - - - -Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) -" -67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5_0,67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5," Managing the Watson Machine Learning service endpoint - -You can use IBM Cloud connectivity options for accessing cloud services securely by using service endpoints. When you provision a Watson Machine Learning service instance, you can choose if you want to access your service through the public internet, which is the default setting, or over the IBM Cloud private network. - -For more information, refer to [IBM Cloud service endpoints](https://cloud.ibm.com/docs/account?topic=account-vrf-service-endpoint).{: new_window} - -You can use the Service provisioning page to choose a default endpoint from the following options: - - - -* [Public network](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-endpoint.html?context=cdpaas&locale=enpublic_net) -* [Private network](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-endpoint.html?context=cdpaas&locale=enprivate_net) -* Both, public and private networks - - - -" -67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5_1,67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5," Public network - -You can use public network endpoints to connect to Watson Machine Learning service instance on the public network. Your environment needs to have internet access to connect. - -" -67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5_2,67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5," Private network - -You can use private network endpoints to connect to your IBM Watson Machine Learning service instance over the IBM Cloud Private network. After you configure your Watson Machine Learning service to use private endpoints, the service is not accessible from the public internet. - -" -67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5_3,67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5," Private URLs for Watson Machine Learning - -Private URLs for Watson Machine Learning for each region are as follows: - - - -* Dallas - [https://private.us-south.ml.cloud.ibm.com](https://private.us-south.ml.cloud.ibm.com) -* London - [https://private.eu-gb.ml.cloud.ibm.com](https://private.eu-gb.ml.cloud.ibm.com) -* Frankfurt - [https://private.eu-de.ml.cloud.ibm.com](https://private.eu-de.ml.cloud.ibm.com) -* Tokyo - [https://private.jp-tok.ml.cloud.ibm.com](https://private.jp-tok.ml.cloud.ibm.com) - - - -" -67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5_4,67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5," Using IBM Cloud service endpoints - -Follow these steps to enable private network endpoints on your clusters: - - - -1. Use [IBM Cloud CLI](https://cloud.ibm.com/docs/cli?topic=cli-getting-started) to enable your account to use IBM Cloud service endpoints. -2. Provision a Watson Machine Learning service instance with private endpoints. - - - -" -67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5_5,67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5," Provisioning with service endpoints - -You can provision a Watson Machine Learning service instance with service endpoint by using IBM Cloud UI or IBM Cloud CLI. - -" -67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5_6,67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5," Provisioning a service endpoint with IBM Cloud UI - -To configure the endpoints of your IBM Watson Machine Learning service instance, you can use the Endpoints field on the IBM Cloud catalog page. You can configure a public, private, or a mixed network. - -![Configure endpoint from the service catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ml-endpoints.png) - -" -67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5_7,67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5," IBM Cloud CLI - -If you provision an IBM Watson Machine Learning service instance by using the IBM Cloud CLI, use the command-line option service-endpoints to configure the Watson Machine Learning endpoints. You can specify the value public (the default value), private, or public-and-private: - -ibmcloud resource service-instance-create pm-20 --service-endpoints - -For example: - -ibmcloud resource service-instance-create wml-instance pm-20 standard us-south -p --service-endpoints private - -or - -ibmcloud resource service-instance-create wml-instance pm-20 standard us-south --service-endpoints public-and-private - -Parent topic:[First steps](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-overview.html) -" -80AE02DC6E3E4FF10C8FD97E1C3F5A5E87270D57_0,80AE02DC6E3E4FF10C8FD97E1C3F5A5E87270D57," Assets in deployment spaces - -Learn about various ways of adding and promoting assets to a space. Find the list of asset types that you can add to a space. - -Note these considerations for importing assets into a space: - - - -* Upon import, some assets are automatically assigned a version number, starting with version 1. This version numbering prevents overwriting existing assets if you import their updated versions later. -* Assets or references that are required to run jobs in the space must be part of the import package, or must be added separately. If you don't add these supporting assets or references, jobs fail. - - - -The way to add an asset to a space depends on the asset type. You can add some assets directly to a space (for example a model that was created outside of watsonx). Other asset types originate in a project and must be transferred from a project to a space. The third class includes asset types that you can add to a space only as a dependency of another asset. These asset types do not display in the Assets tab in the UI. - -For more information, see: - - - -* [Asset types that you can directly add to a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html?context=cdpaas&locale=enadd_directly) -* [Asset types that are created in projects and can be transferred into a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html?context=cdpaas&locale=enadd_transfer) -* [Asset types that can be added to a space only as a dependency](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html?context=cdpaas&locale=enadd_dependency) - - - -For more information about working with space assets, see: - - - -" -80AE02DC6E3E4FF10C8FD97E1C3F5A5E87270D57_1,80AE02DC6E3E4FF10C8FD97E1C3F5A5E87270D57,"* [Accessing asset details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-access-detailed-info.html) - - - -" -80AE02DC6E3E4FF10C8FD97E1C3F5A5E87270D57_2,80AE02DC6E3E4FF10C8FD97E1C3F5A5E87270D57," Asset types that you can directly add to a space - - - -* Connection -* Data asset (from a connection or an uploaded file) -* Model - - - -For more information, see: - - - -* For data assets and connections: [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html) -* For models: [Importing models into a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html) - - - -" -80AE02DC6E3E4FF10C8FD97E1C3F5A5E87270D57_3,80AE02DC6E3E4FF10C8FD97E1C3F5A5E87270D57," Assets types that are created in projects and can be transferred into a space - - - -* Connection -* Data Refinery flow -* Environment -* Function -* Job -* Model -* Script - - - -If your asset is located in a standard Watson Studio project, you can transfer the asset to the deployment space by promoting it. - -For more information, see [Promoting assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-promote-assets.html). - -Alternatively, you can export the project and then import it into the deployment space. For more information, see: - - - -* [Exporting a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/export-project.html) -* [Importing spaces and projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-import-to-space.html) - - - -If you export the whole project, any matching custom environments are exported as well. - -" -80AE02DC6E3E4FF10C8FD97E1C3F5A5E87270D57_4,80AE02DC6E3E4FF10C8FD97E1C3F5A5E87270D57," Asset types that can be added to a space only as a dependency - - - -* Hardware Specification -* Package Extension -* Software Specification -* Watson Machine Learning Experiment -* Watson Machine Learning Model Definition - - - -" -80AE02DC6E3E4FF10C8FD97E1C3F5A5E87270D57_5,80AE02DC6E3E4FF10C8FD97E1C3F5A5E87270D57," Learn more - - - -* [Deploying assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) -* [Training and deploying machine learning models in notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html) - - - -Parent topic:[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html) -" -D8BD7C30F776F7218860187F535C6B72D1A8DC74_0,D8BD7C30F776F7218860187F535C6B72D1A8DC74," Adding data assets to a deployment space - -Learn about various ways of adding and promoting data assets to a space and data types that are used in deployments. - -Data can be: - - - -* A data file such as a .csv file -* A connection to data that is located in a repository such as a database. -* Connected data that is located in a storage bucket. For more information, see [Using data from the Cloud Object Storage service](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html?context=cdpaas&locale=encos-data). - - - -Notes: - - - -* For definitions of data-related terms, refer to [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html). - - - -You can add data to a space in one of these ways: - - - -* [Add data and connections to space by using UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html?context=cdpaas&locale=enadd-directly) -* [Promote a data source, such as a file or a connection from an associated project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-promote-assets.html) -* [Save a data asset to a space programmatically](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html?context=cdpaas&locale=enadd-programmatically) -* [Import a space or a project, including data assets, into an existing space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-import-to-space.html). - - - -Data added to a space is managed in a similar way to data added to a Watson Studio project. For example: - - - -" -D8BD7C30F776F7218860187F535C6B72D1A8DC74_1,D8BD7C30F776F7218860187F535C6B72D1A8DC74,"* Adding data to a space creates a new copy of the asset and its attachments within the space, maintaining a reference back to the project asset. If an asset such as a data connection requires access credentials, they persist and are the same whether you are accessing the data from a project or from a space. -* Just like with data connection in a project, you can edit data connection details from the space. -* Data assets are stored in a space in the same way that they are stored in a project. They use the same file structure for the space as the structure used for the project. - - - -" -D8BD7C30F776F7218860187F535C6B72D1A8DC74_2,D8BD7C30F776F7218860187F535C6B72D1A8DC74," Adding data and connections to space by using UI - -To add data or connections to space by using UI: - - - -1. From the Assets tab of your deployment space, click Import assets. -2. Choose between adding a connected data asset, local file, or connection to a data source: - - - -* If you want to add a connected data asset, select Connected data. Choose a connection and click Import. -* If you want to add a local file, select Local file > Data asset. Upload your file and click Done. -* If you want to add a connection to a data source, select Data access > Connection. Choose a connection and click Import. - - - - - -The data asset displays in the space and is available for use as an input data source in a deployment job. - -Note:Some types of connections allow for using your personal platform credentials. If you add a connection or connected data that uses your personal platform credentials, tick the Use my platform login credentials checkbox. - -" -D8BD7C30F776F7218860187F535C6B72D1A8DC74_3,D8BD7C30F776F7218860187F535C6B72D1A8DC74," Adding data to space programmatically - -If you are using APIs to create, update, or delete Watson Machine Learning assets, make sure that you are using only Watson Machine Learning [API calls](https://cloud.ibm.com/apidocs/machine-learning). - -For an example of how to add assets programmatically, refer to this sample notebook: [Use SPSS and batch deployment with Db2 to predict customer churn](https://github.com/IBM/watson-machine-learning-samples/blob/df8e5122a521638cb37245254fe35d3a18cd3f59/cloud/notebooks/python_sdk/deployments/spss/Use%20SPSS%20and%20batch%20deployment%20with%20DB2%20to%20predict%20customer%20churn.ipynb) - -" -D8BD7C30F776F7218860187F535C6B72D1A8DC74_4,D8BD7C30F776F7218860187F535C6B72D1A8DC74," Data source reference types in Watson Machine Learning - -Data source reference types are referenced in Watson Machine Learning requests to represent input data and results locations. Use data_asset and connection_asset for these types of data sources: - - - -* Cloud Object Storage -* Db2 -* Database data - - - -Notes: - - - -* For Decision Optimization, the reference type is url. - - - -" -D8BD7C30F776F7218860187F535C6B72D1A8DC74_5,D8BD7C30F776F7218860187F535C6B72D1A8DC74," Example data_asset payload - -{""input_data_references"": [{ -""type"": ""data_asset"", -""connection"": { -}, -""location"": { -""href"": ""/v2/assets/?space_id="" -} -}] - -" -D8BD7C30F776F7218860187F535C6B72D1A8DC74_6,D8BD7C30F776F7218860187F535C6B72D1A8DC74," Example connection_asset payload - -""input_data_references"": [{ -""type"": ""connection_asset"", -""connection"": { -""id"": """" -}, -""location"": { -""bucket"": """", -""file_name"": ""/"" -} - -}] - -For more information, see: - - - -* Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) - - - -" -D8BD7C30F776F7218860187F535C6B72D1A8DC74_7,D8BD7C30F776F7218860187F535C6B72D1A8DC74," Using data from the Cloud Object Storage service - -Cloud Object Storage service can be used with deployment jobs through a connected data asset or a connection asset. To use data from the Cloud Object Storage service: - - - -1. Create a connection to IBM Cloud Object Storage by adding a Connection to your project or space and selecting Cloud Object Storage (infrastructure) or Cloud Object Storage as the connector. Provide the secret key, access key, and login URL. - -Note:When you are creating a connection to Cloud Object Storage or Cloud Object Storage (Infrastructure), you must specify both access_key and secret_key. If access_key and secret_key are not specified, downloading the data from that connection doesn't work in a batch deployment job. For reference, see [IBM Cloud Object Storage connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) and [IBM Cloud Object Storage (infrastructure) connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html). -2. Add input and output files to the deployment space as connected data by using the Cloud Object Storage connection that you created. - - - -Parent topic:[Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html) -" -451244D4E0CD8A3E96CD15FFAF0F3BDA526CCED2_0,451244D4E0CD8A3E96CD15FFAF0F3BDA526CCED2," Creating deployment spaces - -Create a deployment space to store your assets, deploy assets, and manage your deployments. - -" -451244D4E0CD8A3E96CD15FFAF0F3BDA526CCED2_1,451244D4E0CD8A3E96CD15FFAF0F3BDA526CCED2,"Required permissions: -All users in your IBM Cloud account with the Editor IAM platform access role for all IAM enabled services or for Cloud Pak for Data can manage to create deployment spaces. For more information, see [IAM Platform access roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.htmlplatform). - -A deployment space is not associated with a project. You can publish assets from multiple projects to a space. For example, you might have a test space for evaluating deployments, and a production space for deployments you want to deploy in business applications. - -Follow these steps to create a deployment space: - - - -1. From the navigation menu, select Deployments > New deployment space. Enter a name for your deployment space. -2. Optional: Add a description and tags. - -3. Select a storage service to store your space assets. - - - -* If you have a [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html) repository that is associated with your IBM Cloud account, choose a repository from the list to store your space assets. -* If you do not have a [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html) repository that is associated with your IBM Cloud account, you are prompted to create one. - - - -4. Optional: If you want to deploy assets from your space, select a machine learning service instance to associate with your deployment space. -To associate a machine learning instance to a space, you must: - - - -* Be a space administrator. -* Have admin access to the machine learning service instance that you want to associate with the space. For more information, see [Creating a Watson Machine Learning service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-instance.html). Tip: If you want to evaluate assets in the space, switch to the Manage tab and associate a Watson OpenScale instance. - - - -" -451244D4E0CD8A3E96CD15FFAF0F3BDA526CCED2_2,451244D4E0CD8A3E96CD15FFAF0F3BDA526CCED2,"5. Optional: Assign the space to a deployment stage. Deployment stages are used for [MLOps](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/modelops-overview.html), to manage access for assets in various stages of the AI lifecycle. They are also used in [governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-overview.html), for tracking assets. Choose from: - - - -* Development for assets under development. Assets that are tracked for governance are displayed in the Develop stage of their associated use case. -* Testing for assets that are being validated. Assets that are tracked for governance are displayed in the Validate stage of their associated use case. -* Production for assets in production. Assets that are tracked for governance are displayed in the Operate stage of their associated use case. - - - -6. Optional: Upload space assets, such as [exported project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/export-project.html) or [exported space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-export.html). If the imported space is encrypted, you must enter the password. - -Tip: If you get an import error, clear your browser cookies and then try again. -7. Click Create. - - - -" -451244D4E0CD8A3E96CD15FFAF0F3BDA526CCED2_3,451244D4E0CD8A3E96CD15FFAF0F3BDA526CCED2," Viewing and managing deployment spaces - - - -* To view all deployment spaces that you can access, click Deployments on the navigation menu. -* To view any of the details about the space after you create it, such as the associated service instance or storage ID, open your deployment space and then click the Manage tab. -* Your space assets are stored in a Cloud Object Storage repository. You can access this repository from IBM Cloud. To find the bucket ID, open your deployment space, and click the Manage tab. - - - -" -451244D4E0CD8A3E96CD15FFAF0F3BDA526CCED2_4,451244D4E0CD8A3E96CD15FFAF0F3BDA526CCED2," Learn more - -To learn more about adding assets to a space and managing them, see [Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html). - -To learn more about creating a space and accessing its details programmatically, see [Notebook on managing spaces](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e5e78be14e2260ccb4bcf8181d0967e3). - -To learn more about handling spaces programmatically, see [Python client](https://ibm.github.io/watson-machine-learning-sdk/) or [REST API](https://cloud.ibm.com/apidocs/machine-learning). - -Parent topic:[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html) -" -C11E8DEEDBABE64F4789061D10E55AEA415FD51E_0,C11E8DEEDBABE64F4789061D10E55AEA415FD51E," Deleting deployment spaces - -Delete existing deployment spaces that you don't require anymore. - -Important:Before you delete a deployment space, you must delete all the deployments that are associated with it. Only a project admin can delete a deployment space. For more information, see [Deployment space collaborator roles and permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/collaborator-permissions-wml.html). - -To remove a deployment space, follow these steps: - - - -1. From the navigation menu, click Deployments. -2. In the deployments list, click the Spaces tab and find the deployment space that you want to delete. -3. Hover over the deployment space, select the menu (![Menu icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/open-close-icon.png)) icon, and click Delete. -4. In the confirmation dialog box, click Delete. - - - -" -C11E8DEEDBABE64F4789061D10E55AEA415FD51E_1,C11E8DEEDBABE64F4789061D10E55AEA415FD51E," Learn more - -To learn more about how to clean up a deployment space and delete it programmatically, refer to: - - - -* [Notebook on managing machine learning artifacts](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e5e78be14e2260ccb4bcf8181d093d7b) -* [Notebook on managing spaces](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e5e78be14e2260ccb4bcf8181d0967e3) - - - -Parent topic:[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html) -" -85E9CAC1F581E61092CFF1F6BE38570EE734C115_0,85E9CAC1F581E61092CFF1F6BE38570EE734C115," Exporting space assets from deployment spaces - -You can export assets from a deployment space so that you can share the space with others or reuse the assets in another space. - -For a list of assets that you can export from space, refer to [Assets in a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html). - -" -85E9CAC1F581E61092CFF1F6BE38570EE734C115_1,85E9CAC1F581E61092CFF1F6BE38570EE734C115," Exporting space assets from the UI - -Important:To avoid problems with importing the space, export all dependencies together with the space. For more information, see [Exporting a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/export-project.html). - -To export space assets from the UI: - - - -1. From your deployment space, click the import and export space (![Import or Export space icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/import-export-icon.png)) icon. From the list, select Export space. -2. Click New export file. Specify a file name and an optional description. -Tip: To encrypt sensitive data in the exported archive, type the password in the Password field. -3. Select the assets that you want to export with the space. -4. Click Create to create the export file. -5. After the space is exported, click the download (![Download icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/download-icon.png)) to save the file. - - - -You can reuse this space by choosing Create a space from a file when you create a new space. - -" -85E9CAC1F581E61092CFF1F6BE38570EE734C115_2,85E9CAC1F581E61092CFF1F6BE38570EE734C115," Learn more - - - -* [Importing spaces and projects into existing deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-import-to-space.html). - - - -Parent topic:[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html) -" -A11374B50B49477362FA00BBB32A277776F7E8E2_0,A11374B50B49477362FA00BBB32A277776F7E8E2," Importing space and project assets into deployment spaces - -You can import assets that you export from a deployment space or a project (either a project export or a Git archive) into a new or existing deployment space. This way, you can add assets or update existing assets (for example, replacing a model with its newer version) to use for your deployments. - -You can import a space or a project export file to [a new deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-import-to-space.html?context=cdpaas&locale=enimport-to-new) or an [existing deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-import-to-space.html?context=cdpaas&locale=enimport-to-existing) to populate the space with assets. - -Tip: The export file can come from a Git-enabled project and a Watson Studio project. To create the file to export, create a compressed file for the project that contains the assets to import. Then, follow the steps for importing the compressed file into a new or existing space. - -" -A11374B50B49477362FA00BBB32A277776F7E8E2_1,A11374B50B49477362FA00BBB32A277776F7E8E2," Importing a space or a project to a new deployment space - -To import a space or a project when you are creating a new deployment space: - - - -1. Click New deployment space. -2. Enter the details for the space. For more information, see [Creating deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-create.html). -3. In the Upload space assets section, upload the exported compressed file that contains data assets and click Create. - - - -The assets from the exported file is added as space assets. - -" -A11374B50B49477362FA00BBB32A277776F7E8E2_2,A11374B50B49477362FA00BBB32A277776F7E8E2," Importing a space or a project to an existing deployment space - -To import a space or a project into an existing space: - - - -1. From your deployment space, click the import and export space (![Import or Export space icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/import-export-icon.png)) icon. From the list, select Import space. -2. Add your compressed file that contains assets from a Watson Studio project or deployment space. -Tip: If the space that you are importing is encrypted, enter the password in the Password field. -3. After your asset is imported, click Done. - - - -The assets from the exported file is added as space assets. - -" -A11374B50B49477362FA00BBB32A277776F7E8E2_3,A11374B50B49477362FA00BBB32A277776F7E8E2," Resolving issues with asset duplication - -The importing mechanism compares assets that exist in your space with the assets that are being imported. If it encounters an asset with the same name and of the same type: - - - -* If the asset type supports revisions, the importing mechanism creates a new revision of the existing asset and fixes the new revision. -* If the asset type does not support revisions, the importing mechanism fixes the existing asset. - - - -This table describes how import works to resolve cases where assets are duplicated between the import file and the existing space. - - - -Scenarios for importing duplicated assets - - Your space File being imported Result - - No assets with matching name or type One or more assets with matching name or type All assets are imported. If multiple assets in the import file have the same name, they are imported as duplicate assets in the target space. - One asset with matching name or type One asset with matching name or type Matching asset is updated with new version. Other assets are imported normally. - One asset with matching name or type More than one asset with matching name or type The first matching asset that is processed is imported as a new version for the existing asset in the space, extra assets with matching name are created as duplicates in the space. Other assets are imported normally. - Multiple assets with matching name or type One or more assets with matching name or type Assets with matching names fail to import. Other assets are imported normally. - - - -Warning: Multiple assets of the same name in an existing space or multiple assets of the same name in an import file are not fully supported scenarios. The import works as described for the scenarios in the table, but you cannot use versioning capabilities specific to the import. - -Existing deployments get updated differently, depending on deployment type: - - - -* If a batch deployment was created by using the previous version of the asset, the next invocation of the batch deployment job will refer to the updated state of the asset. -* If an online deployment was created by using the previous version of the asset, the next ""restart"" of the deployment refers to the updated state of the asset. - - - -" -A11374B50B49477362FA00BBB32A277776F7E8E2_4,A11374B50B49477362FA00BBB32A277776F7E8E2," Learn more - - - -* To learn about adding other types of assets to a space, refer to [Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html). -* To learn about exporting assets from a deployment space, refer to [Exporting space assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-export.html). - - - -Parent topic:[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html) -" -4DD17198B8E7413469C1837FFDBAF109B307078C_0,4DD17198B8E7413469C1837FFDBAF109B307078C," Promoting assets to a deployment space - -Learn about how to promote assets from a project to a deployment space and the requirements for promoting specific asset types. - -" -4DD17198B8E7413469C1837FFDBAF109B307078C_1,4DD17198B8E7413469C1837FFDBAF109B307078C," Promoting assets to your deployment space - -You can promote assets from your project to a deployment space. For a list of assets that can be promoted from a project to a deployment space, refer to [Adding assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html). When you are promoting assets, you can: - - - -* Choose an existing space or create a new one. -* Add tags to help identify the promoted asset. -* Choose dependent assets to promote them at the same time. - - - -Follow these steps to promote your assets to your deployment space: - - - -1. From your project, go to the Assets tab. -2. Select the Options (![Options icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/actions.png)) icon and click Promote to space. - - - -Tip: If the asset that you want to promote is a model, you can also click the model name to open the model details page, and then click Promote to deployment space. - -Notes: - - - -* Promoting assets and their dependencies from a project to a space by using the Watson Studio user interface is the recommended method to guarantee that the promotion flow results in a complete asset definition. For example, relying on the [Watson Data API](https://cloud.ibm.com/apidocs/watson-data-api-cpd) to manage the promotion flow of an asset, together with its dependencies, can result in the promoted asset from being inaccessible from the space. -* Promoting assets from default Git-based projects is not supported. -* Depending on your configuration and the type of asset that you promote, large asset attachments, typically more than 2 GB, can cause the promotion action to time out. - - - -For more information, see: - - - -* [Promoting connections and connected data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-promote-assets.html?context=cdpaas&locale=enpromo-conn) -" -4DD17198B8E7413469C1837FFDBAF109B307078C_2,4DD17198B8E7413469C1837FFDBAF109B307078C,"* [Promoting models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-promote-assets.html?context=cdpaas&locale=enpromo-model) -* [Promoting notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-promote-assets.html?context=cdpaas&locale=enpromo-nbs) - - - -" -4DD17198B8E7413469C1837FFDBAF109B307078C_3,4DD17198B8E7413469C1837FFDBAF109B307078C," Promoting connections and connected data - -When you promote a connection that uses personal credentials or Cloud Pak for Data authentication to a deployment space, the credentials are not promoted. You must provide the credentials information again or allow Cloud Pak for Data authentication. Because Storage Volume connections support only personal credentials, to be able to use this type of asset after it is promoted to a space, you must provide the credentials again. - -Some types of connections allow for using your personal platform credentials. If you promote a connection or connected data that uses your personal platform credentials, tick the Use my platform login credentials checkbox. - -Although you can promote any kind of data connection to a space, where you can use the connection is governed by factors such as model and deployment type. For example, you can access any of the connected data by using a script. However, in batch deployments you are limited to particular types of data, as listed in [Creating a batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html). - -" -4DD17198B8E7413469C1837FFDBAF109B307078C_4,4DD17198B8E7413469C1837FFDBAF109B307078C," Promoting models - -When you promote a model to a space: - - - -* Components that are required for a successful deployment, such as a custom software specification, model definition, or pipeline definition are automatically promoted as well. -* The data assets that were used to train the model are not promoted with it. Information on data assets used to train the model is included in model metadata. - - - -" -4DD17198B8E7413469C1837FFDBAF109B307078C_5,4DD17198B8E7413469C1837FFDBAF109B307078C," Promoting notebooks and scripts - -Tip: If you are using the Notebook editor, you must save a version of the notebook before you can promote it. - - - -* If you created a job for a notebook and you selected Log and updated version as the job run result output, the notebook cannot be promoted to a deployment space. -* If you are working in a notebook that you created before IBM Cloud Pak for Data 4.0, and you want to promote this notebook to a deployment space, follow these steps to enable promoting it: - - - -1. Save a new version of the notebook. -2. Select the newly created version. -3. Select either Log and notebook or Log only as the job run result output under Advanced configuration. -4. Run your job again. - -Now you can promote it manually from the project Assets page or programmatically by using CPDCTL commands. - - - - - - - -* If you want to promote a notebook programmatically, use CPDCTL commands to move the notebook or script to a deployment space. To learn how to use CPDCTL to move notebooks or scripts to spaces, refer to [CPDCTL code samples](https://github.com/IBM/cpdctl/tree/master/samples). For the reference guide, refer to [CPDCTL command reference](https://github.com/IBM/cpdctl/blob/master/README_command_reference.mdnotebook_promote). - - - -Parent topic:[Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html) -" -47CC4851C049D805F02BD2058CD5C2FFA157981C_0,47CC4851C049D805F02BD2058CD5C2FFA157981C," Deployment spaces - -Deployment spaces contain deployable assets, deployments, deployment jobs, associated input and output data, and the associated environments. You can use spaces to deploy various assets and manage your deployments. - -Deployment spaces are not associated with projects. You can publish assets from multiple projects to a space, and you can deploy assets to more than one space. For example, you might have a test space for evaluating deployments, and a production space for deployments that you want to deploy in business applications. - -The deployments dashboard is an aggregate view of deployment activity available to you, across spaces. For details, refer to [Deployments dashboard](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/operator-view.html). - -When you open a space from the UI, you see these elements: - -![Detailed information about a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/DeploymentSpace.svg) - -You can share a space with other people. When you add collaborators to a deployment space, you can specify which actions they can do by assigning them access levels. For details on space collaborator permissions, refer to [Deployment space collaborator roles and permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/collaborator-permissions-wml.html). - -" -47CC4851C049D805F02BD2058CD5C2FFA157981C_1,47CC4851C049D805F02BD2058CD5C2FFA157981C," Learn more - - - -* [Creating deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-create.html) -* [Managing assets in a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html) -* [Creating deployments from a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) -" -88A9F08917918D1D74C1C2CA702E999747EEB422_0,88A9F08917918D1D74C1C2CA702E999747EEB422," Jupyter Notebook editor - -The Jupyter Notebook editor is largely used for interactive, exploratory data analysis programming and data visualization. Only one person can edit a notebook at a time. All other users can access opened notebooks in view mode only, while they are locked. - -You can use the preinstalled open source libraries that come with the notebook runtime environments, add your own libraries, and benefit from the IBM libraries provided at no extra cost. - -When your notebooks are ready, you can create jobs to run the notebooks directly from the Jupyter Notebook editor. Your job configurations can use environment variables that are passed to the notebooks with different values when the notebooks run. - -" -88A9F08917918D1D74C1C2CA702E999747EEB422_1,88A9F08917918D1D74C1C2CA702E999747EEB422," Learn more - - - -* [Quick start: Analyze data in a Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html) -* [Create notebooks in the Jupyter Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/creating-notebooks.html) -* [Runtime environments for notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) -* [Libraries and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/libraries.html) -* [Code and run notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/code-run-notebooks.html) -* [Schedule a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-nb-editor.html) -* [Share and publish notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/share-notebooks.html) - - - -Parent topic:[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html) -" -CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_0,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," Compute resource options for the notebook editor in projects - -When you run a notebook in the notebook editor in a project, you choose an environment template, which defines the compute resources for the runtime environment. The environment template specifies the type, size, and power of the hardware configuration, plus the software configuration. For notebooks, environment templates include a supported language of Python and R. - - - -* [Types of environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=entypes) -* [Runtime releases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=enruntime-releases) -* [CPU environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=endefault-cpu) -* [Spark environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=endefault-spark) -* [GPU environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=endefault-gpu) -* [Default hardware specifications for scoring models with Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=enwml) -* [Data files in notebook environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=endata-files) -" -CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_1,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342,"* [Compute usage by service](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=encompute) -* [Runtime scope](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=enscope) -* [Changing environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=enchange-env) - - - -" -CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_2,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," Types of environments - -You can use these types of environments for running notebook: - - - -* [Anaconda CPU environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=endefault-cpu) for standard workloads. -* [Spark environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=endefault-spark) for parallel processing that is provided by the platform or by other services. -* [GPU environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=endefault-gpu) for compute-intensive machine learning models. - - - -Most environment types for notebooks have default environment templates so you can get started quickly. Otherwise, you can [create custom environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html). - - - -Environment types for notebooks - - Environment type Default templates Custom templates - - Anaconda CPU ✓ ✓ - Spark clusters ✓ ✓ - GPU ✓ ✓ - - - -" -CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_3,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," Runtime releases - -The default environments for notebooks are added as an affiliate of a runtime release and prefixed with Runtime followed by the release year and release version. - -A runtime release specifies a list of key data science libraries and a language version, for example Python 3.10. All environments of a runtime release are built based on the library versions defined in the release, thus ensuring the consistent use of data science libraries across all data science applications. - -The Runtime 22.2 and Runtime 23.1 releases are available for Python 3.10 and R 4.2. - -While a runtime release is supported, IBM will update the library versions to address security requirements. Note that these updates will not change the . versions of the libraries, but only the versions. This ensures that your notebook assets will continue to run. - -" -CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_4,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," Library packages included in Runtimes - -For specific versions of popular data science library packages included in Watson Studio runtimes refer to these tables: - - - -Table 3. Packages and their versions in the various Runtime releases for Python - - Library Runtime 22.2 on Python 3.10 Runtime 23.1 on Python 3.10 - - Keras 2.9 2.12 - Lale 0.7 0.7 - LightGBM 3.3 3.3 - NumPy 1.23 1.23 - ONNX 1.12 1.13 - ONNX Runtime 1.12 1.13 - OpenCV 4.6 4.7 - pandas 1.4 1.5 - PyArrow 8.0 11.0 - PyTorch 1.12 2.0 - scikit-learn 1.1 1.1 - SciPy 1.8 1.10 - SnapML 1.8 1.13 - TensorFlow 2.9 2.12 - XGBoost 1.6 1.6 - - - - - -Table 4. Packages and their versions in the various Runtime releases for R - - Library Runtime 22.2 on R 4.2 Runtime 23.1 on R 4.2 - - arrow 8.0 11.0 - car 3.0 3.0 - caret 6.0 6.0 - catools 1.18 1.18 - forecast 8.16 8.16 - ggplot2 3.3 3.3 - glmnet 4.1 4.1 - hmisc 4.7 4.7 - keras 2.9 2.12 - lme4 1.1 1.1 - mvtnorm 1.1 1.1 - pandoc 2.12 2.12 - psych 2.2 2.2 - python 3.10 3.10 - randomforest 4.7 4.7 - reticulate 1.25 1.25 - sandwich 3.0 3.0 - scikit-learn 1.1 1.1 - spatial 7.3 7.3 - tensorflow 2.9 2.12 - tidyr 1.2 1.2 - xgboost 1.6 1.6 - - - -In addition to the libraries listed in the tables, runtimes include many other useful libraries. To see the full list, select the Manage tab in your project, then click Templates, select the Environments tab, and then click on one of the listed environments. - -" -CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_5,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," CPU environment templates - -You can select any of the following default CPU environment templates for notebooks. The default environment templates are listed under Templates on the Environments page on the Manage tab of your project. - -DO Indicates that the environment templates includes the CPLEX and the DOcplex libraries to model and solve decision optimization problems that exceed the complexity that is supported by the Community Edition of the libraries in the other default Python environments. See [Decision Optimization notebooks](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DONotebooks.html). - -NLP Indicates that the environment templates includes the Watson Natural Language Processing library with pre-trained models for language processing tasks that you can run on unstructured data. See [Using the Watson Natural Language Processing library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html). This default environment should be large enough to run the pre-trained models. - - - -Default CPU environment templates for notebooks - - Name Hardware configuration CUH rate per hour - - Runtime 22.2 on Python 3.10 XXS 1 vCPU and 4 GB RAM 0.5 - Runtime 22.2 on Python 3.10 XS 2 vCPU and 8 GB RAM 1 - Runtime 22.2 on Python 3.10 S 4 vCPU and 16 GB RAM 2 - Runtime 23.1 on Python 3.10 XXS 1 vCPU and 4 GB RAM 0.5 - Runtime 23.1 on Python 3.10 XS 2 vCPU and 8 GB RAM 1 - Runtime 23.1 on Python 3.10 S 4 vCPU and 16 GB RAM 2 - DO + NLP Runtime 22.2 on Python 3.10 XS 2 vCPU and 8 GB RAM 6 - NLP + DO Runtime 23.1 on Python 3.10 XS 2 vCPU and 8 GB RAM 6 - Runtime 22.2 on R 4.2 S 4 vCPU and 16 GB RAM 2 - Runtime 23.1 on R 4.2 S 4 vCPU and 16 GB RAM 2 - - - -" -CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_6,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342,"You should stop all active CPU runtimes when you no longer need them to prevent consuming extra capacity unit hours (CUHs). See [CPU idle timeout](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.htmlstop-active-runtimes). - - Notebooks and CPU environments - -When you open a notebook in edit mode in a CPU runtime environment, exactly one interactive session connects to a Jupyter kernel for the notebook language and the environment runtime that you select. The runtime is started per single user and not per notebook. This means that if you open a second notebook with the same environment template in the same project, a second kernel is started in the same runtime. Runtime resources are shared by the Jupyter kernels that you start in the runtime. Runtime resources are also shared if the CPU has GPU. - -If you want to avoid sharing runtimes but want to use the same environment template for multiple notebooks in a project, you should create custom environment templates with the same specifications and associate each notebook with its own template. - -If necessary, you can restart or reconnect to the kernel. When you restart a kernel, the kernel is stopped and then started in the same session again, but all execution results are lost. When you reconnect to a kernel after losing a connection, the notebook is connected to the same kernel session, and all previous execution results which were saved are available. - -" -CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_7,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," Spark environment templates - -You can select any of the following default Spark environment templates for notebooks. The default environment templates are listed under Templates on the Environments page on the Manage tab of your project. - - - -Default Spark environment templates for notebooks - - Name Hardware configuration CUH rate per hour - - Default Spark 3.3 & R 4.2 2 Executors each: 1 vCPU and 4 GB RAM;
Driver: 1 vCPU and 4 GB RAM 1 - Default Spark 3.4 & R 4.2 2 Executors each: 1 vCPU and 4 GB RAM;
Driver: 1 vCPU and 4 GB RAM 1 - - - -You should stop all active Spark runtimes when you no longer need them to prevent consuming extra capacity unit hours (CUHs). See [Spark idle timeout](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.htmlstop-active-runtimes). - -" -CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_8,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," Large Spark environments - -If you have the Watson Studio Professional plan, you can create custom environment templates for larger Spark environments. - -Professional plan users can have up to 35 executors and can choose from the following options for both driver and executor: - - - -Hardware configurations for Spark environments - - Hardware configuration - - 1 vCPU and 4 GB RAM - 1 vCPU and 8 GB RAM - 1 vCPU and 12 GB RAM - - - -The CUH rate per hour increases by 0.5 for every vCPU that is added. For example, 1x Driver: 3vCPU with 12GB of RAM and 4x Executors: 2vCPU with 8GB of RAM amounts to (3 + (4 * 2)) = 11 vCPUs and 5.5 CUH. - - Notebooks and Spark environments - -You can select the same Spark environment template for more than one notebook. Every notebook associated with that environment has its own dedicated Spark cluster and no resources are shared. - -When you start a Spark environment, extra resources are needed for the Jupyter Enterprise Gateway, Spark Master, and the Spark worker daemons. These extra resources amount to 1 vCPU and 2 GB of RAM for the driver and 1 GB RAM for each executor. You need to take these extra resources into account when selecting the hardware size of a Spark environment. For example: if you create a notebook and select Default Spark 3.3 & Python 3.10, the Spark cluster consumes 3 vCPU and 12 GB RAM but, as 1 vCPU and 4 GB RAM are required for the extra resources, the resources remaining for the notebook are 2 vCPU and 8 GB RAM. - -" -CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_9,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," File system on a Spark cluster - -If you want to share files across executors and the driver or kernel of a Spark cluster, you can use the shared file system at /home/spark/shared. - -If you want to use your own custom libraries, you can store them under /home/spark/shared/user-libs/. There are four subdirectories under /home/spark/shared/user-libs/ that are pre-configured to be made available to Python and R or Java runtimes. - -The following tables lists the pre-configured subdirectories where you can add your custom libaries. - - - -Table 5. Pre-configured subdirectories for custom libraries - - Directory Type of library - - /home/spark/shared/user-libs/python3/ Python 3 libraries - /home/spark/shared/user-libs/R/ R packages - /home/spark/shared/user-libs/spark2/ Java JAR files - - - -To share libraries across a Spark driver and executors: - - - -1. Download your custom libraries or JAR files to the appropriate pre-configured directory. -2. Restart the kernel from the notebook menu by clicking Kernel > Restart Kernel. This loads your custom libraries or JAR files in Spark. - - - -Note that these libraries are not persisted. When you stop the environment runtime and restart it again later, you need to load the libraries again. - -" -CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_10,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," GPU environment templates - -You can select the following GPU environment template for notebooks. The environment templates are listed under Templates on the Environments page on the Manage tab of your project. - -The GPU environment template names indicate the accelerator power. The GPU environment templates include the Watson Natural Language Processing library with pre-trained models for language processing tasks that you can run on unstructured data. See [Using the Watson Natural Language Processing library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html). - - Indicates that the environment template requires the Watson Studio Professional plan. See [Offering plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html). - - - -Default GPU environment templates for notebooks - - Name Hardware configuration CUH rate per hour - - GPU V100 Runtime 22.2 on Python 3.10 40 vCPU + 172 GB RAM + 1 NVIDIA TESLA V100 (1 GPU) 68 - GPU V100 Runtime 23.1 on Python 3.10 40 vCPU + 172 GB RAM + 1 NVIDIA TESLA V100 (1 GPU) 68 - GPU 2xV100 Runtime 22.2 on Python 3.10 80 vCPU and 344 GB RAM + 2 NVIDIA TESLA V100 (2 GPU) 136 - GPU 2xV100 Runtime 23.1 on Python 3.10 80 vCPU and 344 GB RAM + 2 NVIDIA TESLA V100 (2 GPU) 136 - - - -You should stop all active GPU runtimes when you no longer need them to prevent consuming extra capacity unit hours (CUHs). See [GPU idle timeout](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.htmlstop-active-runtimes). - - Notebooks and GPU environments - -GPU environments for notebooks are available only in the Dallas IBM Cloud service region. - -You can select the same Python and GPU environment template for more than one notebook in a project. In this case, every notebook kernel runs in the same runtime instance and the resources are shared. To avoid sharing runtime resources, create multiple custom environment templates with the same specifications and associate each notebook with its own template. - -" -CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_11,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," Default hardware specifications for scoring models with Watson Machine Learning - -When you invoke the Watson Machine Learning API within a notebook, you consume compute resources from the Watson Machine Learning service as well as the compute resources for the notebook kernel. - -You can select any of the following hardware specifications when you connect to Watson Machine Learning and create a deployment. - - - -Hardware specifications available when invoking the Watson Machine Learning service in a notebook - - Capacity size Hardware configuration CUH rate per hour - - Extra small 1x4 = 1 vCPU and 4 GB RAM 0.5 - Small 2x8 = 2 vCPU and 8 GB RAM 1 - Medium 4x16 = 4 vCPU and 16 GB RAM 2 - Large 8x32 = 8 vCPU and 32 GB RAM 4 - - - -" -CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_12,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," Data files in notebook environments - -If you are working with large data sets, you should store the data sets in smaller chunks in the IBM Cloud Object Storage associated with your project and process the data in chunks in the notebook. Alternatively, you should run the notebook in a Spark environment. - -Be aware that the file system of each runtime is non-persistent and cannot be shared across environments. To persist files in Watson Studio, you should use IBM Cloud Object Storage. The easiest way to use IBM Cloud Object Storage in notebooks in projects is to leverage the [project-lib package for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/project-lib-python.html) or the [project-lib package for R](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/project-lib-r.html). - -" -CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_13,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," Compute usage by service - -The notebook runtimes consumes compute resources as CUH from Watson Studio, while running default or custom environments. You can monitor the Watson Studio CUH consumption in the project on the Resource usage page on the Manage tab of the project. - -Notebooks can also consume CUH from the Watson Machine Learning service when the notebook invokes the Watson Machine Learning to score a model. You can monitor the total monthly amount of CUH consumption for the Watson Machine Learning service on the Resource usage page on the Manage tab of the project. - -" -CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_14,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," Track CUH consumption for Watson Machine Learning in a notebook - -To calculate capacity unit hours consumed by a notebook, run this code in the notebook: - -CP = client.service_instance.get_details() -CUH = CUH[""entity""][""capacity_units""]/(36001000) -print(CUH) - -For example: - -'capacity_units': {'current': 19773430} - -19773430/(36001000) - -returns 5.49 CUH - -For details, see the Service Instances section of the [IBM Watson Machine Learning API](https://cloud.ibm.com/apidocs/machine-learning) documentation. - -" -CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_15,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," Runtime scope - -Environment runtimes are always scoped to an environment template and a user within a project. If different users in a project work with the same environment, each user will get a separate runtime. - -If you select to run a version of a notebook as a scheduled job, each scheduled job will always start in a dedicated runtime. The runtime is stopped when the job finishes. - -" -CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_16,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," Changing the environment of a notebook - -You can switch environments for different reasons, for example, you can: - - - -* Select an environment with more processing power or more RAM -* Change from using an environment without Spark to a Spark environment - - - -You can only change the environment of a notebook if the notebook is unlocked. You can change the environment: - - - -* From the notebook opened in edit mode: - - - -1. Save your notebook changes. -2. Click the Notebook Info icon (![Notebook Info icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/get-information_32.png)) from the notebook toolbar and then click Environment. -3. Select another template with the compute power and memory capacity from the list. -4. Select Change environment. This stops the active runtime and starts the newly selected environment. - - - -* From the Assets page of your project: - - - -1. Select the notebook in the Notebooks section, click Actions > Change Environment and select another environment. The kernel must be stopped before you can change the environment. This new runtime environment will be instantiated the next time the notebook is opened for editing. - - - -* In the notebook job by editing the job template. See [Editing job settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.htmlview-job-details). - - - -" -CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_17,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," Next steps - - - -* [Creating a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/creating-notebooks.html) -* [Creating your own environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html) -* [Customizing an environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html) -* [Stopping active notebook runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.htmlstop-active-runtimes) - - - -" -CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_18,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," Learn more - - - -* [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) - - - -Parent topic:[Compute resources for tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) -" -C6B0055426C9E91760F4923ED42BE91D64FCA6C8_0,C6B0055426C9E91760F4923ED42BE91D64FCA6C8," Notebooks and scripts - -You can create, edit and execute Python and R code using Jupyter notebooks and scripts in code editors, for example the notebook editor or an integrated development environment (IDE), like RStudio. - -Notebooks : A Jupyter notebook is a web-based environment for interactive computing. You can use notebooks to run small pieces of code that process your data, and you can immediately view the results of your computation. Notebooks include all of the building blocks you need to work with data, namely the data, the code computations that process the data, the visualizations of the results, and text and rich media to enhance understanding. - -Scripts : A script is a file containing a set of commands and comments. The script can be saved and used later to re-execute the saved commands. Unlike in a notebook, the commands in a script can only be executed in a linear fashion. - - Notebooks - -Required permissions : Editor or Admin role in a project - -Tools : Notebook editor - -Programming languages : Python and R - -Data format : All types - -Code support is available for loading and accessing data from project assets for: - -: Data assets, such as CSV, JSON and .xlsx and .xls files : Database connections and connected data assets - -See [Data load support](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-load-support.html). for the supported file and database types. - -Data size : 5 GB. If your files are larger, you must load the data in multiple parts. - -" -C6B0055426C9E91760F4923ED42BE91D64FCA6C8_1,C6B0055426C9E91760F4923ED42BE91D64FCA6C8," Scripts - -Required permissions : Editor or Admin role in a project - -Tools : RStudio - -Programming languages : R - -Data format : All types - -Code support is available for loading and accessing data from project assets for: : Data assets, such as CSV, JSON and .xlsx and .xls files : Database connections and connected data assets - -See [Data load support](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-load-support.html). for the supported file and database types. - -Data size : 5 GB. If your files are larger, you must load the data in multiple parts. - -" -C6B0055426C9E91760F4923ED42BE91D64FCA6C8_2,C6B0055426C9E91760F4923ED42BE91D64FCA6C8," Working in the notebook editor - -The notebook editor is largely used for interactive, exploratory data analysis programming and data visualization. Only one person can edit a notebook at a time. All other users can access opened notebooks in view mode only, while they are locked. - -You can use the preinstalled open source libraries that come with the notebook runtime environments, add your own libraries, and benefit from the IBM libraries provided at no extra cost. - -When your notebooks are ready, you can create jobs to run the notebooks directly from the notebook editor. Your job configurations can use environment variables that are passed to the notebooks with different values when the notebooks run. - -" -C6B0055426C9E91760F4923ED42BE91D64FCA6C8_3,C6B0055426C9E91760F4923ED42BE91D64FCA6C8," Working in RStudio - -RStudio is an integrated development environment for working with R scripts or Shiny apps. Although the RStudio IDE cannot be started in a Spark with R environment runtime, you can use Spark in your R scripts and Shiny apps by accessing Spark kernels programmatically. - -R scripts and Shiny apps can only be created and used in the RStudio IDE. You can't create jobs for R scripts or R Shiny deployments. - -" -C6B0055426C9E91760F4923ED42BE91D64FCA6C8_4,C6B0055426C9E91760F4923ED42BE91D64FCA6C8," Learn more - - - -* [Quick start: Analyze data in a Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html) -* [RStudio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-overview.html) -* [Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-editor.html) - - - -Parent topic:[Analyzing data and building models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html) -" -A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20_0,A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20," Deployments dashboard - -The deployments dashboard provides an aggregate view of deployment activity available to you, across spaces. You can get a broad view of deployment activity such as the status of job runs or a list of online deployments. You can also use filters and views to focus on specific job runs or category of runs such as failed runs. ModelOps or DevOps users can review and monitor the activity for an organization. - -" -A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20_1,A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20," Accessing the Deployments dashboard - -From the navigation menu, click Deployments. If you don't have any deployment spaces, you are prompted to create a space. This following illustration shows an example of the Deployments dashboard: - -![Deployments dashboard](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/deployment-dashboard.png) - -The dashboard view has two tabs: - - - -* [Activity](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/operator-view.html?context=cdpaas&locale=enactivity): Use the Activity tab to review all of the deployment activity across spaces. You can sort and filter this view to focus on a particular type of activity, such as failed deployments, or jobs with active runs. You can also review metrics such as the number of deployment spaces with active deployments. -* [Spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/operator-view.html?context=cdpaas&locale=enspaces): Use the Spaces tab to list all the spaces that you can access. You can read the overview information, such as the number of deployments and job runs in a space, or click a space name to view details and create deployments or jobs. - - - -" -A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20_2,A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20," Viewing activity - -View the overview information for finished runs, active runs, or online deployments, or drill down to view details. - -" -A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20_3,A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20," Finished runs - -The Finished runs section shows activity in jobs over a specified time interval. The default is to view finished jobs for the last 8 hours. It shows jobs that are completed, canceled, or failed across all of your deployment spaces within the specified time frame. Click View finished runs to view a list of runs. - -The view provides more detail on the finished runs and a visualization that shows run times. - -![Viewing detail for finished jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/operator-view3.png) - -Filter the view to focus on a particular type of activity: - -![Filtering job detail](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/operator-view4.png) - - - -* Jobs with active runs - Shows jobs that have active runs (running, started, or queued) across all spaces you can access. -* Active runs - Shows runs that are in the running, started, or queued state across all jobs you can access. -* Jobs with finished runs - Shows jobs with runs that are completed, canceled, or failed. -* Finished runs - Shows runs that are completed, canceled, or failed. - - - -" -A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20_4,A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20," Active runs - -The Active runs section displays runs that are currently running or are in the starting or queued state. Click View active runs to view a list of the runs. - -" -A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20_5,A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20," Online deployments - -The Deployments section shows all online and R-Shiny deployments, which are sorted into categories for by status. Click View deployments to view the list of deployments that you can access. - -From any view, you can start from the overview and drill down to see the details for a particular job or run. You can also filter the view to focus on a particular type of deployment. - -" -A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20_6,A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20," Viewing spaces - -View a list of spaces that you can access, with overview information such as number of deployments and collaborators. Click the name of a space to view details or add assets, and to create new deployments or jobs. Use filters to modify the view from the default list of all spaces to show Active spaces, with deployments or jobs, or Inactive spaces, with no deployments or jobs. - -" -A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20_7,A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20," Next steps - -[Use spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html) to organize your deployment activity. - -Parent topic:[Deploying and managing models and functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html) -" -BD995B62F35EC624DA9E86F9A3383B73B54D9ED7_0,BD995B62F35EC624DA9E86F9A3383B73B54D9ED7," The parts of a notebook - -You can see some information about a notebook before you open it on the Assets page of a project. When you open a notebook in edit mode, you can do much more with the notebook by using multiple menu options, toolbars, an information pane, and by editing and running the notebook cells. - -You can view the following information about a notebook by clicking the Notebooks asset type in the Assets page of your project: - - - -* The name of the notebook -* The date when the notebook was last modified and the person who made the change -* The programming language of the notebook -* Whether the notebook is currently locked - - - -When you open a notebook in edit mode, the notebook editor includes the following features: - - - -* [Menu bar and toolbar](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html?context=cdpaas&locale=enmenu-bar-and-toolbar) -* [Notebook action bar](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html?context=cdpaas&locale=ennotebook-action-bar) -* [The cells in a Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html?context=cdpaas&locale=enthe-cells-in-a-jupyter-notebook) - - - -* [Jupyter Code cells](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html?context=cdpaas&locale=enjupyter-code-cells) -* [Jupyter markdown cells](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html?context=cdpaas&locale=enjupyter-markdown-cells) -" -BD995B62F35EC624DA9E86F9A3383B73B54D9ED7_1,BD995B62F35EC624DA9E86F9A3383B73B54D9ED7,"* [Raw Jupyter NBConvert cells](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html?context=cdpaas&locale=enraw-jupyter-nbconvert-cells) - - - -* [Spark job progress bar](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html?context=cdpaas&locale=enspark-job-progress-bar) -* [Project token for authorization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/token.html) - - - -![menu and toolbar](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/toolbar.png) - -You can select notebook features that affect the way the notebook functions and perform the most-used operations within the notebook by clicking an icon. - - Notebook action bar - -![Notebook action bar](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/action-bar-Blue.png) - -You can select features that enhance notebook collaboration. From the action bar, you can: - - - -* Publish your notebook as a gist or on GitHub. -* Create a permanent URL so that anyone with the link can view your notebook. -* Create jobs in which to run your notebook. See [Schedule a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-nb-editor.html). -* Download your notebook. -* Add a project token so that code can access the project resources. See [Add code to set the project token](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/token.html). -* Generate code snippets to add data from a data asset or a connection to a notebook cell. -* View your notebook information. You can: - - - -" -BD995B62F35EC624DA9E86F9A3383B73B54D9ED7_2,BD995B62F35EC624DA9E86F9A3383B73B54D9ED7,"* Change the name of your notebook by editing it in the Name field. -* Edit the description of your notebook in the Description field. -* View the date when the notebook was created. -* View the environment details and runtime status; you can change the notebook runtime from here. See [Notebook environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html). - - - -* Save versions of your notebook. -* Upload assets to the project. - - - -" -BD995B62F35EC624DA9E86F9A3383B73B54D9ED7_3,BD995B62F35EC624DA9E86F9A3383B73B54D9ED7," The cells in a Jupyter notebook - -A Jupyter notebook consists of a sequence of cells. The flow of a notebook is sequential. You enter code into an input cell, and when you run the cell, the notebook executes the code and prints the output of the computation to an output cell. - -You can change the code in an input cell and re-run the cell as often as you like. In this way, the notebook follows a read-evaluate-print loop paradigm. You can choose to use tags to describe cells in a notebook. - -The behavior of a cell is determined by a cell’s type. The different types of cells include: - -" -BD995B62F35EC624DA9E86F9A3383B73B54D9ED7_4,BD995B62F35EC624DA9E86F9A3383B73B54D9ED7," Jupyter code cells - -Where you can edit and write new code. - -![code cells](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/code_cells_notebook_bigger.png) - -" -BD995B62F35EC624DA9E86F9A3383B73B54D9ED7_5,BD995B62F35EC624DA9E86F9A3383B73B54D9ED7," Jupyter markdown cells - -Where you can document the computational process. You can input headings to structure your notebook hierarchically. - -You can also add and edit image files as attachments to the notebook. The markdown code and images are rendered when the cell is run. - -![markdown cells](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/markdownCells_notebook.png) - -See [Markdown for Jupyter notebooks cheatsheet](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/markd-jupyter.html). - -" -BD995B62F35EC624DA9E86F9A3383B73B54D9ED7_6,BD995B62F35EC624DA9E86F9A3383B73B54D9ED7," Raw Jupyter NBConvert cells - -Where you can write output directly or put code that you don’t want to run. Raw cells are not evaluated by the notebook. - -![raw convert cells](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/rawconvert_cells_notebook_bigger.png) - -" -BD995B62F35EC624DA9E86F9A3383B73B54D9ED7_7,BD995B62F35EC624DA9E86F9A3383B73B54D9ED7," Spark job progress bar - -When you run code in a notebook that triggers Spark jobs, it is often challenging to determine why your code is not running efficiently. - -To help you better understand what your code is doing and assist you in code debugging, you can monitor the execution of the Spark jobs for a code cell. - -To enable Spark monitoring for a cell in a notebook: - - - -* Select the code cell you want to monitor. -* Click the Enable Spark Monitoring icon (![Shows the enable Spark monitoring icon.](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ProgressBars-Active.png)) on the notebook toolbar. - - - -The progress bars you see display the real time runtime progress of your jobs on the Spark cluster. Each Spark job runs on the cluster in one or more stages, where each stage is a list of tasks that can be run in parallel. The monitoring pane can become very large is the Spark job has many stages. - -The job monitoring pane also displays the duration of each job and the status of the job stages. A stage can have one of the following statuses: - - - -* Running: Stage active and started. -* Completed: Stage completed. -* Skipped: The results of this stage were cached from a earlier operation and so the task doesn't have to run again. -* Pending: Stage hasn't started yet. - - - -Click the icon again to disable monitoring in a cell. - -Note: Spark monitoring is currently only supported in notebooks that run on Python. - -Parent topic:[Creating notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/creating-notebooks.html) -" -B3F8FB433FC6730284E636B068A5DE98C002DABD_0,B3F8FB433FC6730284E636B068A5DE98C002DABD," Planning your notebooks and scripts experience - -To make a plan for using Jupyter notebooks and scripts, first understand the choices that you have, the implications of those choices, and how those choices affect the order of implementation tasks. - -You can perform most notebook and script related tasks with Editor or Admin role in an analytics project. - -Before you start working with notebooks and scripts, you should consider the following questions as most tasks need to be completed in a particular order: - - - -* Which programming language do you want to work in? -* What will your notebooks be doing? -* What libraries do you want to work with? -* How can you use the notebook or script in IBM watsonx? - - - -To create a plan for using Jupyter notebooks or scripts, determine which of the following tasks you must complete. - - - - Task Mandatory? Timing - - [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=enproject) Yes This must be your very first task - [Adding data assets to the project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=endata-assets) Yes Before you begin creating notebooks - [Picking a programming language](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=enprogramming-lang) Yes Before you select the tool - [Selecting a tool](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=enselect-tool) Yes After you've picked the language -" -B3F8FB433FC6730284E636B068A5DE98C002DABD_1,B3F8FB433FC6730284E636B068A5DE98C002DABD," [Checking the library packages](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=enprogramming-libs) Yes Before you select a runtime environment - [Choosing an appropriate runtime environment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=enruntime-env) Yes Before you open the development environment - [Managing the notebooks and scripts lifecycle](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=enmanage-lifecycle) No When the notebook or script is ready - [Uses for notebooks and scripts after creation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=enuse-options) No When the notebook is ready - - - -" -B3F8FB433FC6730284E636B068A5DE98C002DABD_2,B3F8FB433FC6730284E636B068A5DE98C002DABD," Creating a project - -You need to create a project before you can start working in notebooks. - -Projects You can create an empty project, one from file, or from URL. In this project: - - - -* You can use the Juypter Notebook and RStudio. -* Notebooks are assets in the project. -* Notebook collaboration is based on locking by user at the project level. -* R scripts and Shiny apps are not assets in the project. -* There is no collaboration on R scripts or Shiny apps. - - - -" -B3F8FB433FC6730284E636B068A5DE98C002DABD_3,B3F8FB433FC6730284E636B068A5DE98C002DABD," Picking a programming language - -You can choose to work in the following languages: - -Notebooks : Python and R - -Scripts : R scripts and R Shiny apps - -" -B3F8FB433FC6730284E636B068A5DE98C002DABD_4,B3F8FB433FC6730284E636B068A5DE98C002DABD," Selecting a tool - -In IBM watsonx, you can work with notebook and scripts in the following tool: - -Juypter Notebook editor : In the Juypter Notebook editor, you can create Python or R notebooks. Notebooks are assets in a project. Collaboration is only at the project level. The notebook is locked by a user when opened and can only be unlocked by the same user or a project admin. - -RStudio : In RStudio, you can create R scripts and Shiny apps. R scripts are not assets in a project, which means that there is no collaboration at the project level. - -" -B3F8FB433FC6730284E636B068A5DE98C002DABD_5,B3F8FB433FC6730284E636B068A5DE98C002DABD," Checking the library packages - -When you open a notebook in a runtime environment, you have access to a large selection of preinstalled data science library packages. Many environments also include libraries provided by IBM at no extra charge, such as the Watson Natural Language Processing library in Python environments, libraries to help you access project assets, or libraries for time series or geo-spatial analysis in Spark environments. - -For a list of the library packages and the versions included in an environment template, select the template on the Templates page from the Manage tab on the project's Environments page. - -If libraries are missing in a template, you can add them: - -Through the notebook or script : You can use familiar package install commands for your environment. For example, in Python notebooks, you can use mamba, conda or pip. - -By creating a custom environment template : When you create a custom template, you can create a software customization and add the libraries you want to include. For details, see [Customizing environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html). - -" -B3F8FB433FC6730284E636B068A5DE98C002DABD_6,B3F8FB433FC6730284E636B068A5DE98C002DABD," Choosing a runtime environment - -Choosing the compute environment for your notebook depends on the amount of data you want to process and the complexity of the data analysis processes. - -Watson Studio offers many default environment templates with different hardware sizes and software configurations to help you quickly get started, without having to create your own templates. These included templates are listed on the Templates page from the Manage tab on the project's Environments page. For more information about the included environments, see [Environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html). - -If the available templates don't suit your needs, you can create custom templates and determine the hardware size and software configuration. For details, see [Customizing environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html). - -Important: Make sure that the environment has enough memory to store the data that you load to the notebook. Oftentimes this means that the environment must have significantly more memory than the total size of the data loaded to the notebook because some data frameworks, like pandas, can hold multiple copies of the data in memory. - -" -B3F8FB433FC6730284E636B068A5DE98C002DABD_7,B3F8FB433FC6730284E636B068A5DE98C002DABD," Working with data - -To work with data in a notebook, you need to: - - - -* Add the data to your project, which turns the data into a project asset. See [Adding data to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj//manage-data/add-data-project.html) for the different methods for adding data to a project. -* Use generated code that loads data from the asset to a data structure in your notebook. For a list of the supported data types, see [Data load support](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-load-support.html). -* Write your own code to load data if the data source isn't added as a project asset or support for adding generated code isn't available for the project asset. - - - -" -B3F8FB433FC6730284E636B068A5DE98C002DABD_8,B3F8FB433FC6730284E636B068A5DE98C002DABD," Managing the notebooks and scripts lifecycle - -After you have created and tested a notebook in your tool, you can: - - - -* Publish it to a catalog so that other catalog members can use the notebook in their projects. See [Publishing assets from a project into a catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/publish-asset-project.html). -* Share a read-only copy outside of Watson Studio so that people who aren't collaborators in your projects can see and use it. See [Sharing notebooks with a URL](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/share-notebooks.html). -* Publish it to a GitHub repository. See [Publishing notebooks on GitHub](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/github-integration.html). -* Publish it as a gist. See [Publishing a notebook as a gist](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-gist.html). - - - -R scripts and Shiny apps can't be published or shared using functionality in a project. - -" -B3F8FB433FC6730284E636B068A5DE98C002DABD_9,B3F8FB433FC6730284E636B068A5DE98C002DABD," Uses for notebooks and scripts after creation - -The options for a notebook that is created and ready to use in IBM watsonx include: - - - -* Running it as a job in a project. See [Creating and managing jobs in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html). -* Running it as part of a Watson Pipeline. See [Configuring pipeline nodes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.html). - -To ensure that a notebook can be run as a job or in a pipeline: - - - -* Ensure that no cells require interactive input by a user. -* Ensure that the notebook logs enough detailed information to enable understanding the progress and any failures by looking at the log. -* Use environment variables in the code to access configurations if a notebook or script requires them, for example the input data file or the number of training runs. - - - -* Using the Watson Machine Learning Python client to build, train and then deploy your models. See [Watson Machine Learning Python client samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html). -* Using the Watson Machine Learning REST API to build, train and then deploy your models. - - - -R scripts and Shiny apps can only be created and used in the RStudio IDE in IBM watsonx. You can't create jobs for R scripts or R Shiny deployments. - -Parent topic:[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html) -" -1483016BE71021F31B8193239D319F34D8E01C9C_0,1483016BE71021F31B8193239D319F34D8E01C9C," Supported machine learning tools, libraries, frameworks, and software specifications - -In IBM Watson Machine Learning, you can use popular tools, libraries, and frameworks to train and deploy machine learning models and functions. The environment for these models and functions is made up of specific hardware and software specifications. - -Software specifications define the language and version that you use for a model or function. You can use software specifications to configure the software that is used for running your models and functions. By using software specifications, you can precisely define the software version to be used and include your own extensions (for example, by using conda .yml files or custom libraries). - -You can get a list of available software and hardware specifications and then use their names and IDs for use with your deployment. For more information, see [Python client](https://ibm.github.io/watson-machine-learning-sdk/) or [REST API](https://cloud.ibm.com/apidocs/machine-learning). - -" -1483016BE71021F31B8193239D319F34D8E01C9C_1,1483016BE71021F31B8193239D319F34D8E01C9C," Predefined software specifications - -You can use popular tools, libraries, and frameworks to train and deploy machine learning models and functions. - -This table lists the predefined (base) model types and software specifications. - - - -List of predefined (base) model types and software specifications - - Framework** Versions Model Type Default software specification - - AutoAI 0.1 NA autoai-kb_rt22.2-py3.10
autoai-ts_rt22.2-py3.10
hybrid_0.1
autoai-kb_rt23.1-py3.10
autoai-ts_rt23.1-py3.10
autoai-tsad_rt23.1-py3.10
autoai-tsad_rt22.2-py3.10 - Decision Optimization 20.1 do-docplex_20.1
do-opl_20.1
do-cplex_20.1
do-cpo_20.1 do_20.1 - Decision Optimization 22.1 do-docplex_22.1
do-opl_22.1
do-cplex_22.1
do-cpo_22.1 do_22.1 - Hybrid/AutoML 0.1 wml-hybrid_0.1 hybrid_0.1 - PMML 3.0 to 4.3 pmml. (or) pmml..*3.0 - 4.3 pmml-3.0_4.3 - PyTorch 1.12 pytorch-onnx_1.12
pytorch-onnx_rt22.2 runtime-22.2-py3.10
pytorch-onnx_rt22.2-py3.10
pytorch-onnx_rt22.2-py3.10-edt - PyTorch 2.0 pytorch-onnx_2.0
pytorch-onnx_rt23.1 runtime-23.1-py3.10
pytorch-onnx_rt23.1-py3.10
pytorch-onnx_rt23.1-py3.10-edt
pytorch-onnx_rt23.1-py3.10-dist -" -1483016BE71021F31B8193239D319F34D8E01C9C_2,1483016BE71021F31B8193239D319F34D8E01C9C," Python Functions 0.1 NA runtime-22.2-py3.10
runtime-23.1-py3.10 - Python Scripts 0.1 NA runtime-22.2-py3.10
runtime-23.1-py3.10 - Scikit-learn 1.1 scikit-learn_1.1 runtime-22.2-py3.10
runtime-23.1-py3.10 - Spark 3.3 mllib_3.3 spark-mllib_3.3 - SPSS 17.1 spss-modeler_17.1 spss-modeler_17.1 - SPSS 18.1 spss-modeler_18.1 spss-modeler_18.1 - SPSS 18.2 spss-modeler_18.2 spss-modeler_18.2 - Tensorflow 2.9 tensorflow_2.9
tensorflow_rt22.2 runtime-22.2-py3.10
tensorflow_rt22.2-py3.10 - Tensorflow 2.12 tensorflow_2.12
tensorflow_rt23.1 runtime-23.1-py3.10
tensorflow_rt23.1-py3.10-dist
tensorflow_rt23.1-py3.10-edt
tensorflow_rt23.1-py3.10 - XGBoost 1.6 xgboost_1.6 or scikit-learn_1.1 (see notes) runtime-22.2-py3.10
runtime-23.1-py3.10 - - - -When you have assets that rely on discontinued software specifications or frameworks, in some cases the migration is seamless. In other cases, your action is required to retrain or redeploy assets. - - - -* Existing deployments of models that are built with discontinued framework versions or software specifications are removed on the date of discontinuation. -* No new deployments of models that are built with discontinued framework versions or software specifications are allowed. - - - -" -1483016BE71021F31B8193239D319F34D8E01C9C_3,1483016BE71021F31B8193239D319F34D8E01C9C," Learn more - - - -* To learn more about how to customize software specifications, see [Customizing with third-party and private Python libraries](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-create-custom-software-spec.html). -* To learn more about how to use and customize environments, see [Environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html). -* To learn more about how to use software specifications for deployments, see the following Jupyter notebooks: - - - -* [Using REST API and cURL](https://github.com/IBM/watson-machine-learning-samples/tree/master/cloud/notebooks/rest_api/curl/deployments) -* [Using the Python client](https://github.com/IBM/watson-machine-learning-samples/tree/master/cloud/notebooks/python_sdk/deployments) - - - - - -Parent topic:[Frameworks and software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-frame-and-specs.html) -" -6406A3BCB4E9210A9FB00AF248F11F392AF5C205,6406A3BCB4E9210A9FB00AF248F11F392AF5C205," Promoting an environment template to a space - -If you created an environment template and associated it with an asset that you promoted to a deployment space, you can also promote the environment template to the same space. Promoting the environment template to the same space enables running the asset in the same environment that was used in the project. - -You can only promote environment templates that you created. - -To promote an environment template associated with an asset that you promoted to a deployment space: - - - -1. From the Manage tab of your project on the Environments page under Templates, select the custom environment template and click Actions > Promote. -2. Select the space that you promoted your asset to as the target deployment space and optionally provide a description and tags. - - - -Parent topic:[Environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) -" -B2117B2CD0FEA469149B23FACB6A9F7F32905AFD_0,B2117B2CD0FEA469149B23FACB6A9F7F32905AFD," Deploying a prompt template - -Deploy a prompt template so you can add it to a business workflow or so you can evaluate the prompt template to measure performance. - -" -B2117B2CD0FEA469149B23FACB6A9F7F32905AFD_1,B2117B2CD0FEA469149B23FACB6A9F7F32905AFD," Before you begin - -Save a prompt template that contains at least one variable as a project asset. See [Building reusable prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html). - -" -B2117B2CD0FEA469149B23FACB6A9F7F32905AFD_2,B2117B2CD0FEA469149B23FACB6A9F7F32905AFD," Promote a prompt template to a deployment space - -To deploy a prompt template, complete the following steps: - - - -1. Open the project containing the prompt template. -2. Click Promote to space for the template. - -![Promoting a prompt template to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/xgov-deploy-prompt1.png) -3. In the Target deployment space field, choose a deployment space or create a new space. Note the following: - -The deployment space must be associated with a machine learning instance that is in the same account as the project where the prompt template was created. - -If you don't have a deployment space, choose Create a new deployment space, and then follow the steps in [Creating deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-create.html). - -If you plan to evaluate the prompt template in the space, the recommended Deployment stage type for the space is Production. For more information on evaluating, see [Evaluating a prompt template in a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt-spaces.html). - -Note: The deployment space stage cannot be changed after the space is created. - - - - - -1. Tip: Select View deployment in deployment space after creating. Otherwise, you need to take more steps to find your deployed asset. -2. From the Assets tab of the deployment space, click Deploy. You create an online deployment, which means you can send data to the endpoint and receive a response in real-time. - -![Deploying a prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/xgov-deploy-prompt2.png) -3. Optional: In the Deployment serving name field, add a unique label for the deployment. - -" -B2117B2CD0FEA469149B23FACB6A9F7F32905AFD_3,B2117B2CD0FEA469149B23FACB6A9F7F32905AFD,"The serving name is used in the URL for the API endpoint that identifies your deployment. Adding a name is helpful because the human-readable name that you add replaces a long, system-generated unique ID that is assigned otherwise. - -The serving name also abstracts the deployment from its service instance details. Applications refer to this name, which allows for the underlying service instance to be changed without impacting users. - -The name can have up to 36 characters. The supported characters are [a-z,0-9,_]. - -The name must be unique across the IBM Cloud region. You might be prompted to change the serving name if the name you choose is already in use. - - - -" -B2117B2CD0FEA469149B23FACB6A9F7F32905AFD_4,B2117B2CD0FEA469149B23FACB6A9F7F32905AFD," Testing the deployed prompt template - -After the deployment successfully completes, click the deployment name to view the deployment. - -![Deploying a prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/xgov-deploy-prompt3.png) - - - -* API reference tab includes the API endpoints and code snippets that you need to add this prompt template to an application. -* Test tab supports testing the prompt template. Enter test data as text, streamed text, or in a JSON file. For details on testing a prompt template, see. - - - -If the watsonx.governance service is enabled, you also see these tabs: - - - -* Evaluate provides the tools for evaluating the prompt template in the space. Click Activate to choose the dimensions to evaluate. For details, see [Evaluating prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt.html). -* AI Factsheets displays all of the metadata that is collected for the prompt template. Use these details for tracking the prompt template for governance and compliance goals. See [Tracking prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-track-prompt-temp.html). - - - -For more information, see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html). - -" -B2117B2CD0FEA469149B23FACB6A9F7F32905AFD_5,B2117B2CD0FEA469149B23FACB6A9F7F32905AFD," Learn more - - - -* [Tracking prompt templates ](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-track-prompt-temp.html) -* [Evaluating a prompt template in a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt-spaces.html) -* [Security and privacy for foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html) - - - -Parent topic:[Deploying and managing assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html) -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_0,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Managing feature groups with assetframe-lib for Python (beta) - -You can use the assetframe-lib to create, view and edit feature group information for data assets in Watson Studio notebooks. - -Feature groups define additional metadata on columns of your data asset that can be used in downstream Machine Learning tasks. See [Managing feature groups](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html) for more information about using feature groups in the UI. - -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_1,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Setting up the assetframe-lib and ibm-watson-studio-lib libraries - -The assetframe-lib library for Python is pre-installed and can be imported directly in a notebook in Watson Studio. However, it relies on the [ibm-watson-studio-lib](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/using-ibm-ws-lib.html) library. The following steps describe how to set up both libraries. - -To insert the project token to your notebook: - - - -1. Click the More icon on your notebook toolbar and then click Insert project token. - -If a project token exists, a cell is added to your notebook with the following information: - -from ibm_watson_studio_lib import access_project_or_space -wslib = access_project_or_space({""token"":""""}) - - is the value of the project token. - -If you are told in a message that no project token exists, click the link in the message to be redirected to the project's Access Control page where you can create a project token. You must be eligible to create a project token. - -To create a project token: - - - -1. From the Manage tab, select the Access Control page, and click New access token under Access tokens. -2. Enter a name, select Editor role for the project, and create a token. -3. Go back to your notebook, click the More icon on the notebook toolbar and then click Insert project token. - - - -2. Import assetframe-lib and initialize it with the created ibm-watson-studio-lib instance. - -from assetframe_lib import AssetFrame -AssetFrame._wslib = wslib - - - -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_2,0A507FF5262BAD7A3FB3F3C478388CFF78949941," The assetframe-lib functions and methods - -The assetframe-lib library exposes a set of functions and methods that are grouped in the following way: - - - -* [Creating an asset frame](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=encreate-assetframe) -* [Creating, retrieving and removing features](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=encreate-features) -* [Specifying feature attributes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=enspecify-featureatt) - - - -* [Role](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=enrole) -* [Description](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=endescription) -* [Fairness information for favorable and unfavorable outcomes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=enfairnessinfo) -* [Fairness information for monitored and reference groups](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=enmonitoredreference) -* [Value descriptions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=envalue-desc) -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_3,0A507FF5262BAD7A3FB3F3C478388CFF78949941,"* [Recipe](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=enrecipe) -* [Tags](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=entags) - - - -* [Previewing data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=enpreview-data) -* [Getting fairness information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=enget-fairness) - - - -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_4,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Creating an asset frame - -An asset frame is used to define feature group metadata on an existing data asset or on a pandas DataFrame. You can have exactly one feature group for each asset. If you create an asset frame on a pandas DataFrame, you can store the pandas DataFrame along with the feature group metadata as a data asset in your project. - -You can use one of the following functions to create your asset frame: - - - -* AssetFrame.from_data_asset(asset_name, create_default_features=False) - -This function creates a new asset frame wrapping an existing data asset in your project. If there is already a feature group for this asset, for example created in the user interface, it is read from the asset metadata. - - - -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_5,0A507FF5262BAD7A3FB3F3C478388CFF78949941,"Parameters: -- asset_name: (Required) The name of a data asset in your project. -- create_default_features: (Optional) Creates features for all columns in the data asset. - - - -* AssetFrame.from_pandas(name, dataframe, create_default_features=False) - -This function creates a new asset frame wrapping a pandas DataFrame. - -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_6,0A507FF5262BAD7A3FB3F3C478388CFF78949941,"Parameters: - - - -* name: (Required) The name of the asset frame. This name will be used as the name of the data asset if you store your feature group in your project in a later step. -* dataframe: (Required) A pandas DataFrame that you want to store along with feature group information. -* create_default_features: (Optional) Create features for all columns in the dataframe. - -Example of creating a asset frame from a pandas DataFrame: - - Create an asset frame from a pandas DataFrame and set - the name of the asset frame. -af = AssetFrame.from_pandas(dataframe=credit_risk_df, name=""Credit Risk Training Data"") - - - - - -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_7,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Creating, retrieving and removing features - -A feature defines metadata that can be used by downstream Machine Learning tasks. You can create one feature per column in your data set. - -You can use one of the following functions to create, retrieve or remove columns from your asset frame: - - - -* add_feature(column_name, role='Input') - -This function adds a new feature to your asset frame with the given role. - -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_8,0A507FF5262BAD7A3FB3F3C478388CFF78949941,"Parameters: - - - -* column_name: (Required) The name of the column to create a feature for. -* role: (Optional) The role of the feature. It defaults to Input. - -Valid roles are: - - - -* Input: The input for a machine learning model - -* Target: The target of a prediction model - -* Identifier: The identifier of a row in your data set. - - - - - -* create_default_features() - -This function creates features for all columns in your data set. The roles of the features will default to Input. -* get_features() - -This function retrieves all features of the asset frame. -* get_feature(column_name) - -This function retrieves the feature for the given column name. - -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_9,0A507FF5262BAD7A3FB3F3C478388CFF78949941,"Parameters: - - - -* column_name: (Required) The string name of the column to create the feature for. - - - -* get_features_by_role(role) - -This function retrieves all features of the dataframe with the given role. - -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_10,0A507FF5262BAD7A3FB3F3C478388CFF78949941,"Parameters: - - - -* role: (Required) The role that the features must have. This can be Input, Target or Identifier. - - - -* remove_feature(feature_or_column_name) - -This function removes the feature from the asset frame. - -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_11,0A507FF5262BAD7A3FB3F3C478388CFF78949941,"Parameters: - - - -* feature_or_column_name: (Required) A feature or the name of the column to remove the feature for. - - - - - -Example that shows creating features for all columns in the data set and retrieving one of those columns for further specifications: - - Create features for all columns in the data set and retrieve a column - for further specifications. -af.create_default_features() -risk_feat = af.get_feature('Risk') - -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_12,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Specifying feature attributes - -Features specify additional metadata on columns that may be used in downstream Machine Learning tasks. - -You can use the following function to retrieve the column that the feature is defined for: - - - -* get_column_name() - -This function retrieves the column name that the feature is defined for. - - - -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_13,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Role - -The role specifies the intended usage of the feature in a Machine Learning task. - -Valid roles are: - - - -* Input: The feature can be used as an input to a Machine Learning model. -* Identifier: The feature uniquely identifies a row in the data set. -* Target: The feature can be used as a target in a prediction algorithm. - - - -At this time, a feature must have exactly one role. - -You can use the following methods to work with the role: - - - -* set_roles(roles) - -This method sets the roles of the feature. - -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_14,0A507FF5262BAD7A3FB3F3C478388CFF78949941,"Parameters: - - - -* roles : (Required) The roles to be used. Either as a single string or an array of strings. - - - -* get_roles() - -This method returns all roles of the feature. - - - -Example that shows getting a feature and setting a role: - - Set the role of the feature 'Risk' to 'Target' to use it as a target in a prediction model. -risk_feat = af.get_feature('Risk') -risk_feat.set_roles('Target') - -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_15,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Description - -An optional description of the feature. It defaults to None. - -You can use the following methods to work with the description. - - - -* set_description(description) - -This method sets the description of the feature. - -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_16,0A507FF5262BAD7A3FB3F3C478388CFF78949941,"Parameters: - - - -* description: (Required) Either a string or None to remove the description. - - - -* get_description() - -This method returns the description of the feature. - - - -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_17,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Fairness information for favorable and unfavorable outcomes - -You can specify favorable and unfavorable labels for a feature with a Target role. - -You can use the following methods to set and retrieve favorable or unfavorable labels. - -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_18,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Favorable outcomes - -You can use the following methods to set and get favorable labels: - - - -* set_favorable_labels(labels) - -This method sets favorable labels for the feature. - -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_19,0A507FF5262BAD7A3FB3F3C478388CFF78949941,"Parameters: - - - -* labels: (Required) A string or list of strings with favorable labels. - - - -* get_favorable_labels() - -This method returns the favorable labels of the feature. - - - -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_20,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Unfavorable outcomes - -You can use the following methods to set and get unfavorable labels: - - - -* set_unfavorable_labels(labels) - -This method sets unfavorable labels for the feature. - -Parameters: - - - -* labels: (Required) A string or list of strings with unfavorable labels. - - - -* get_unfavorable_labels() - -This method gets the unfavorable labels of the feature. - - - -Example that shows setting favorable and unfavorable labels: - - Set favorable and unfavorable labels for the target feature 'Risk'. -risk_feat = af.get_feature('Risk') -risk_feat.set_favorable_labels(""No Risk"") -risk_feat.set_unfavorable_labels(""Risk"") - -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_21,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Fairness information for monitored and reference groups - -Some columns in your data might by prone to unfair bias. You can specify monitored and reference groups for further usage in Machine Learning tasks. They can be specified for features with the role Input. - -You can either specify single values or ranges of numeric values as a string with square brackets and a start and end value, for example [0,15]. - -You can use the following methods to set and retrieve monitored and reference groups: - - - -* set_monitored_groups(groups) - -This method sets monitored groups for the feature. - -Parameters: - - - -* groups: (Required) A string or list of strings with monitored groups. - - - -* get_monitored_groups() - -This method gets the monitored groups of the feature. -* set_reference_groups(groups) - -This method sets reference groups for the feature. - -Parameters: - - - -* groups: (Required) A string or list of strings with reference groups. - - - -* get_reference_groups() - -This method gets the reference groups of the feature. - - - -Example that shows setting monitored and reference groups: - - Set monitored and reference groups for the features 'Sex' and 'Age'. -sex_feat = af.get_feature(""Sex"") -sex_feat.set_reference_groups(""male"") -sex_feat.set_monitored_groups(""female"") - -age_feat = af.get_feature(""Age"") -age_feat.set_monitored_groups(""[0,25]"") -age_feat.set_reference_groups(""[26,80]"") - -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_22,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Value descriptions - -You can use value descriptions to specify descriptions for column values in your data. - -You can use the following methods to set and retrieve descriptions: - - - -* set_value_descriptions(value_descriptions) - -This method sets value descriptions for the feature. - -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_23,0A507FF5262BAD7A3FB3F3C478388CFF78949941,"Parameters: - - - -* value_descriptions: (Required) A Pyton dictionary or list of dictionaries of the following format: {'value': '', 'description': ''} - - - -* get_value_descriptions() - -This method returns all value descriptions of the feature. -* get_value_description(value) - -This method returns the value description for the given value. - -Parameters: - - - -* value: (Required) The value to retrieve the value description for. - - - -* add_value_description(value, description) - -This method adds a value description with the given value and description to the list of value descriptions for the feature. - -Parameters: - - - -* value: (Required) The string value of the value description. -* description: (Required) The string description of the value description. - - - -* remove_value_description(value) - -This method removes the value description with the given value from the list of value descriptions of the feature. - -Parameters: - - - -* value: (Required) A value of the value description to be removed. - - - - - -Example that shows how to set value descriptions: - -plan_feat = af.get_feature(""InstallmentPlans"") -val_descriptions = [ -{'value': 'stores', -'description': 'customer has additional business installment plan'}, -{'value': 'bank', -'description': 'customer has additional personal installment plan'}, -{'value': 'none', -'description': 'customer has no additional installment plan'} -] -plan_feat.set_value_descriptions(val_descriptions) - -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_24,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Recipe - -You can use the recipe to describe how a feature was created, for example with a formula or a code snippet. It defaults to None. - -You can use the following methods to work with the recipe. - - - -* set_recipe(recipe) - -This method sets the recipe of the feature. - -Parameters: - - - -* recipe: (Required) Either a string or None to remove the recipe. - - - -* get_recipe() - -This method returns the recipe of the feature. - - - -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_25,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Tags - -You can use tags to attach additional labels or information to your feature. - -You can use the following methods to work with tags: - - - -* set_tags(tags) - -This method sets the tags of the feature. - -Parameters: - - - -* tags: (Required) Either as a single string or an array of strings. - - - -* get_tags() - -This method returns all tags of the feature. - - - -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_26,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Previewing data - -You can preview the data of your data asset or pandas DataFrame with additional information about your features like fairness information. - -The data is displayed like a pandas DataFrame with optional header information about feature roles, descriptions or recipes. Fairness information is displayed with coloring for favorable or unfavorable labels, monitored and reference groups. - -At this time, you can retrieve up to 100 rows of sample data for a data asset. - -Use the following function to preview data: - - - -* head(num_rows=5, display_options=['role']) - -This function returns the first num_rows rows of the data set in a pandas DataFrame. - -Parameters: - - - -* num_rows : (Optional) The number of rows to retrieve. -* display_options: (Optional) The column header can display additional information for a column in your data set. - -Use these options to display feature attributes: - - - -* role: Displays the role of a feature for this column. -* description: Displays the description of a feature for this column. -* recipe: Displays the recipe of a feature for this column. - - - - - - - -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_27,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Getting fairness information - -You can retrieve the fairness information of all features in your asset frame as a Python dictionary. This includes all features containing monitored or reference groups (or both) as protected attributes and the target feature with favorable or unfavorable labels. - -If the data type of a column with fairness information is numeric, the values of labels and groups are transformed to numeric values if possible. - -Fairness information can be used directly in [AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html) or [AI Fairness 360](https://www.ibm.com/opensource/open/projects/ai-fairness-360/). - -You can use the following function to retrieve fairness information of your asset frame: - - - -* get_fairness_info(target=None) - -This function returns a Python dictionary with favorable and unfavorable labels of the target column and protected attributes with monitored and reference groups. - -Parameters: - - - -* target: (Optional) The target feature. If there is only one feature with role Target, it will be used automatically. - -Example that shows how to retrieve fairness information: - -af.get_fairness_info() - -Output showing fairness information: - -{ -'favorable_labels': ['No Risk'], -'unfavorable_labels': ['Risk'], -'protected_attributes': [ -{'feature': 'Sex', -'monitored_group': 'female'], -'reference_group': 'male']}, -{'feature': 'Age', -'monitored_group': 0.0, 25]], -'reference_group': 26, 80]] -}] -} - - - - - -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_28,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Saving feature group information - -After you have fully specified or updated your features, you can save the whole feature group definition as metadata for your data asset. - -If you created the asset frame from a pandas DataFrame, a new data asset will be created in the project storage with the name of the asset frame. - -You can use the following method to store your feature group information: - - - -* to_data_asset(overwrite_data=False) - -This method saves feature group information to the assets metadata. It creates a new data asset, if the asset frame was created from a pandas DataFrame. - -Parameters: - - - -* overwrite_data: (Optional) Also overwrite the asset contents with the data from the asset frame. Defaults to False. - - - - - -" -0A507FF5262BAD7A3FB3F3C478388CFF78949941_29,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Learn more - -See the [Creating and using feature store data](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e756adfa2855bdfc20f588f9c1986382) sample project in the Samples. - -Parent topic:[Loading and accessing data in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html) -" -A724F6E91162B52C519F6887F06DF40626C0F698_0,A724F6E91162B52C519F6887F06DF40626C0F698," Using Python functions to work with Cloud Object Storage - -To access and work with data that is in IBM Cloud Object Storage, you can use Python functions from a notebook. - -With your IBM Cloud Object Storage credentials, you can access and load data from IBM Cloud Object Storage to use in a notebook. This data can be any object of type file-like-object, for example, byte buffers or string buffers. The data that you upload can reside in a different IBM Cloud Object Storage bucket than the project's bucket. - -You can also upload data from a local system into IBM Cloud Object Storage from within a notebook. This data can be a compressed file or Pickle object. - -See [Working With IBM Cloud Object Storage In Python](https://medium.com/ibm-data-science-experience/working-with-ibm-cloud-object-storage-in-python-fe0ba8667d5f) for more information. - -" -A724F6E91162B52C519F6887F06DF40626C0F698_1,A724F6E91162B52C519F6887F06DF40626C0F698," Learn more - - - -* Use [ibm-watson-studio-lib for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html) to interact with Watson Studio projects and project assets. The library also contains functions that simplify fetching files from IBM Cloud Object Storage. -* [Control access to COS buckets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/cos_buckets.html) - - - -Parent topic:[Loading and accessing data in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html) -" -F43870B5B6CE4D191950FDAAE6AAFC36F05360C9_0,F43870B5B6CE4D191950FDAAE6AAFC36F05360C9," Compute resource options for RStudio in projects - -When you run RStudio in a project, you choose an environment template for the runtime environment. The environment template specifies the type, size, and power of the hardware configuration, plus the software template. - - - -* [Types of environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html?context=cdpaas&locale=entypes) -* [Default environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html?context=cdpaas&locale=endefault) -* [Compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html?context=cdpaas&locale=encompute) -* [Runtime scope](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html?context=cdpaas&locale=enscope) -* [Changing the runtime](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html?context=cdpaas&locale=enchange-env) - - - -" -F43870B5B6CE4D191950FDAAE6AAFC36F05360C9_1,F43870B5B6CE4D191950FDAAE6AAFC36F05360C9," Types of environments - -You can use this type of environment with RStudio: - - - -* Default RStudio CPU environments for standard workloads - - - -" -F43870B5B6CE4D191950FDAAE6AAFC36F05360C9_2,F43870B5B6CE4D191950FDAAE6AAFC36F05360C9," Default environment templates - -You can select any of the following default environment templates for RStudio in a project. These default environment templates are listed under Templates on the Environments page on the Manage tab of your project. All environment templates use RStudio with Runtime 23.1 on the R 4.2 programming language. - - - -Default RStudio environment templates - - Name Hardware configuration Local storage CUH rate per hour - - Default RStudio L 16 vCPU and 64 GB RAM 2 GB 8 - Default RStudio M 8 vCPU and 32 GB RAM 2 GB 4 - Default RStudio XS 2 vCPU and 8 GB RAM 2 GB 1 - - - -If you don't explicitly select an environment, Default RStudio M is the default. The hardware configuration of the available RStudio environments is preset and cannot be changed. - -For compute-intensive processing on a large data set, consider pushing your data processing to Spark from your RStudio session. See [Using Spark in RStudio](https://medium.com/ibm-data-science-experience/access-ibm-analytics-for-apache-spark-from-rstudio-eb11bf8b401b). - -To prevent consuming extra capacity unit hours (CUHs), stop all active RStudio runtimes when you no longer need them. See [RStudio idle timeout](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.htmlstop-active-runtimes). - -" -F43870B5B6CE4D191950FDAAE6AAFC36F05360C9_3,F43870B5B6CE4D191950FDAAE6AAFC36F05360C9," Compute usage in projects - -RStudio consumes compute resources as CUH from the Watson Studio service in projects. - -You can monitor the Watson Studio CUH consumption on the Resource usage page on the Manage tab of your project. - -" -F43870B5B6CE4D191950FDAAE6AAFC36F05360C9_4,F43870B5B6CE4D191950FDAAE6AAFC36F05360C9," Runtime scope - -An RStudio environment runtime is always scoped to a project and a user. Each user can only have one RStudio runtime per project at one time. If you start RStudio in a project in which you already have an active RStudio session, the existing active session is disconnected and you can continue working in the new RStudio session. - -" -F43870B5B6CE4D191950FDAAE6AAFC36F05360C9_5,F43870B5B6CE4D191950FDAAE6AAFC36F05360C9," Changing the RStudio runtime - -If you notice that processing is very slow, you can restart RStudio and select a larger environment runtime. - -To change the RStudio environment runtime: - - - -1. Save any data from your current session before switching to another environment. -2. Stop the active RStudio runtime under Tool runtimes on the Environments page on the Manage tab of your project. -3. Restart RStudio from the Launch IDE menu on your project's action bar and select another environment with the compute power and memory capacity that better meets your data processing requirements. - - - -" -F43870B5B6CE4D191950FDAAE6AAFC36F05360C9_6,F43870B5B6CE4D191950FDAAE6AAFC36F05360C9," Learn more - - - -* [RStudio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-overview.html) -* [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) - - - -Parent topic:[Choosing compute resources for tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) -" -BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D_0,BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D," RStudio - -R is a popular statistical analysis and machine-learning package that enables data management and includes tests, models, analyses and graphics. RStudio, included in IBM Watson Studio, provides an integrated development environment for working with R scripts. - -" -BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D_1,BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D," Accessing RStudio - -RStudio is integrated in IBM Watson Studio projects and can be launched after you create a project. With RStudio integration in projects, you can access and use the data files that are stored in the IBM Cloud Object Storage bucket associated with your project in RStudio. - -To start RStudio in your project: - - - -1. Click RStudio from the Launch IDE menu on your project's action bar. -2. Select an environment. -3. Click Launch. - -The environment runtime is initiated and the development environment opens. - - - -Sometimes, when you start an RStudio session, you might experience a corrupted RStudio state from a previous session and your session will not start. If this happens, select to reset the workspace at the time you select the RStudio environment and then start the RStudio IDE again. By resetting the workspace, RStudio is started using the default settings with a clean RStudio workspace. - -" -BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D_2,BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D," Working with data files - -In RStudio, you can work with data files from different sources: - - - -* Files in the RStudio server file structure, which you can view by clicking Files in the bottom right section of RStudio. This is where you can create folders, upload files from your local system, and delete files. - -To access these files in R, you need to set the working directory to the directory with the files. You can do this by navigating to the directory with the files and clicking More > Set as Working Directory. - -Be aware that files stored in the Home directory of your RStudio instance are persistent within your instance only and cannot be shared across environments nor within your project. - - - -Video disclaimer: Some minor steps and graphical elements in the videos on this page may differ from your deployment. - -Watch this video to see how to load data to RStudio. - -This video provides a visual method to learn the concepts and tasks in this documentation. - - - -* Project data assets that are stored in the IBM Cloud Object Storage bucket associated with your project. When RStudio is launched, the IBM Cloud Object Storage bucket content is mounted to the project-objectstorage directory in your RStudio Home directory. - -If you want data files to appear in the project-objectstorage directory, you must add them as assets to your project. See [Adding files as project assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-overview.html?context=cdpaas&locale=enadding-files). - -If new data assets are added to the project while you are in RStudio and you want to access them, you need to refresh the project-objectstorage folder. - -See how to [read and write data to and from Cloud Object Storage](https://medium.com/ibm-data-science-experience/read-and-write-data-to-and-from-bluemix-object-storage-in-rstudio-276282347ce1). -* Data stored in a database system. - -Watch this video to see how to connect to external data sources in RStudio. - -This video provides a visual method to learn the concepts and tasks in this documentation. -" -BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D_3,BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D,"* Files stored in local storage that are mounted to /home/rstudio. The home directory has a storage limitation of 2 GB and is used to store the RStudio session workspace. Note that you are allocated 2 GB for your home directory storage across all of your projects, irrespective of whether you use RStudio in each project. As a consequence, you should only store R script files and small data files in the home directory. It is not intended for large data files or large generated output. All large data files should be uploaded as project assets, which are mounted to the project-objectstorage directory from where you can access them. - - - -" -BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D_4,BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D," Adding files as project assets - -If you worked with data files and want them appear in the project-objectstorage directory, you must add them to your project as data assets. To add these files as data assets to the project: - - - -1. On the Assets page of the project, click the Upload asset to project icon (![Shows the Upload asset to project icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/find_data_icon.png)) and select the Files tab. -2. Select the files you want to add to the project as assets. -3. From the Actions list, select Add as data asset and apply your changes. - - - -" -BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D_5,BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D," Capacity consumption and runtime scope - -An RStudio environment runtime is always scoped to an environment template and an RStudio session user. Only one RStudio session can be active per Watson Studio user at one time. If you started RStudio in another project, you are asked if you want to stop that session and start a new RStudio session in the context of the current project you're working in. - -Runtime usage is calculated by the number of capacity unit hours (CUHs) consumed by the active environment runtime. The CUHs consumed by an active RStudio runtime in a project are billed to the account of the project creator. See [Capacity units per hour billing for RStudio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/track-runtime-usage.htmlrstudio). - -You can see which RStudio environment runtimes are active on the project's Environments page. You can stop your runtime from this page. - -Remember: The CUH counter continues to increase while the runtime is active so stop the runtime if you aren't using RStudio. If you don't explicitly stop the runtime, it is stopped for you after an idle time of 2 hour. During this idle time, you will continue to consume CUHs for which you are billed. Long compute-intensive jobs are hard stopped after 24 hours. - -Watch this video to see an overview of the RStudio IDE. - -Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. - -This video provides a visual method to learn the concepts and tasks in this documentation. - - - -* Transcript - -Synchronize transcript with video - - - - Time Transcript - - 00:00 This video is a quick tour of the RStudio integrated development environment inside a Watson Studio project. - 00:07 From any project, you can launch the RStudio IDE. - 00:12 RStudio is a free and open-source integrated development environment for R, a programming language for statistical computing and graphics. - 00:22 In RStudio, there are four panes: the source pane, the console pane, the environment pane, and the files pane. -" -BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D_6,BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D," 00:32 The panes help you organize your work and separate the different tasks you'll do with R. - 00:39 You can drag to resize the panes or use the icons to minimize and maximize a pane. - 00:47 You can also rearrange the panes in global options. - 00:53 The console pane is your interface to R. - 00:56 It's exactly what you would see in terminal window or user interfaces bundled with R. - 01:01 The console pane does have some added features that you'll find helpful. - 01:06 To run code from the console, just type the command. - 01:11 Start typing a command to see a list of commands that begin with the letters you started typing. - 01:17 Highlight a command in the list and press ""Enter"" to insert it. - 01:24 Use the up arrow to scroll through the commands you've previously entered. - 01:31 As you issue more commands, you can scroll through the results. - 01:36 Use the menu option to clear the console. - 01:39 You can also use tab completion to see a list of the functions, objects, and data sets beginning with that text. - 01:47 And use the arrows to highlight a command to see help for that command. - 01:51 When you're ready, just press ""Enter"" to insert it. - 01:55 Next, you'll see a list of the options for that command in the current context. - 01:59 For example, the first argument for the read.csv function is the file. - 02:05 RStudio will display a list of the folders and files in your working directory, so you can easily locate the file to include with the argument. - 02:16 Lastly, if you use the tab completion with a function that expects a package name, such as a library, you'll see a list of all the installed packages. - 02:28 Next, let's look at the source pane, which is simply a text editor for you to write your R code. - 02:34 The text editor supports R command files and plain text, as well as several other languages, and includes language-specific highlighting in context. - 02:47 And you'll notice the tab completion is also available in the text editor. -" -BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D_7,BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D," 02:53 From the text editor, you can run a single line of code, or select several lines of code to run, and you'll see the results in the console pane. - 03:08 You can save your code as an R script to share or run again later. - 03:15 The view function opens a new tab that shows the dataframe in spreadsheet format. - 03:22 Or you can display it in its own window. - 03:25 Now, you can scroll through the data, sort the columns, search for specific values, or filter the rows using the sliders and drop-down menus. - 03:41 The environment pane contains an ""Environment"" tab, a ""History"" tab, and a ""Connections"" tab, and keeps track of what's been happening in this R session. - 03:51 The ""Environment"" tab contains the R objects that exist in your global environment, created during the session. - 03:58 So, when you create a new object in the console pane, it automatically displays in the environment pane. - 04:04 You can also view the objects related to a specific package, and even see the source code for a specific function. - 04:12 You can also see a list of the data sets, expand a data set to inspect its individual elements, and view them in the source pane. - 04:22 You can save the contents of an environment as an .RData file, so you can load that .RData file at a later date. - 04:29 From here, you can also clear the objects from the workspace. - 04:33 If you want to delete specific items, use the grid view. - 04:38 For example, you can easily find large items to delete to free up memory in your R session. - 04:45 The ""Environment"" tab also allows you to import a data set. - 04:50 You can see a preview of the data set and change options before completing the import. - 04:55 The imported data will display in the source pane. - 05:00 The ""History"" tab displays a history of each of the commands that you run at the command line. - 05:05 Just like the ""Environment"" tab, you can save the history as an .Rhistory file, so you can open it at a later date. - 05:11 And this tab has the same options to clear all of the history and individual entries in the history. -" -BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D_8,BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D," 05:17 Select a command and send it to the console to rerun the command. - 05:23 You can also copy a command to the source pane to include it in a script. - 05:31 On the ""Connections"" tab, you can create a new connection to a data source. - 05:36 The choices in this dialog box are dependent upon which packages you have installed. - 05:41 For example, a ""BLUDB"" connection allows you to connect to a Db2 Warehouse on Cloud service. - 05:49 The files pane contains the ""Files"", ""Plots"", ""Packages"", ""Help"", and ""Viewer"" tabs. - 05:55 The ""Files"" tab displays the contents of your working directory. - 05:59 RStudio will load files from this directory and save files to this directory. - 06:04 Navigate to a file and click the file to view it in the source pane. - 06:09 From here, you can create new folders and upload files, either by selecting individual files to upload or selecting a .zip file containing all of the files to upload. - 06:25 From here, you can also delete and rename files and folders. - 06:30 In order to access the file in R, you need to set the data folder as a working directory. - 06:36 You'll see that the setwd command was executed in the console. - 06:43 You can access the data assets in your project by opening the project folder. - 06:50 The ""Plots"" tab displays the results of R's plot functions, such as: plot, hist, ggplot, and xyplot - 07:00 You can navigate through different plots using the arrows or zoom to see a graph full screen. - 07:09 You can also delete individual plots or all plots from here. - 07:13 Use the ""Export"" option to save the plot as a graphic or print file at the specified resolution. - 07:21 The ""Packages"" tab displays the packages you currently have installed in your system library. - 07:26 The search bar lets you quickly find a specific package. - 07:30 The checked packages are the packages that were already loaded, using the library command, in the current session. - 07:38 You can check additional packages from here to load them or uncheck packages to detach them from the current session. -" -BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D_9,BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D," 07:45 The console pane displays the results. - 07:48 Use the ""X"" next to a package name to remove it from the system library. - 07:54 You can also find new packages to install or update to the latest version of any package. - 08:03 Clicking any of the packages opens the ""Help"" tab with additional information for that package. - 08:09 From here, you can search for functions to get more help. - 08:13 And from the console, you can use the help command, or simply type a question mark followed by the function, to get help with that function. - 08:21 The ""Viewer"" tab displays HTML output. - 08:25 Some R functions generate HTML to display reports and interactive graphs. - 08:31 The R Markdown package creates reports that you can view in the ""Viewer"" tab. - 08:38 The Shiny package creates web apps that you can view in the ""Viewer"" tab. - 08:44 And other packages build on the htmlwidgets framework and include Java-based, interactive visualizations. - 08:54 You can also publish the visualization to the free site, called ""RPubs.com"". - 09:01 This is been a brief overview of the RStudio IDE. - 09:05 Find more videos on RStudio in the Cloud Pak for Data as a Service documentation. - - - - - -" -BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D_10,BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D," Learn more - - - -* [RStudio environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html) -* [Using Spark in RStudio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-spark.html) - - - -Parent topic:[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html) -" -2BCC4276EA71978FFA874621715BE92A9667390F_0,2BCC4276EA71978FFA874621715BE92A9667390F," Using Spark in RStudio - -Although the RStudio IDE cannot be started in a Spark with R environment runtime, you can use Spark in your R scripts and Shiny apps by accessing Spark kernels programmatically. RStudio uses the sparklyr package to connect to Spark from R. The sparklyr package includes a dplyr interface to Spark data frames as well as an R interface to Spark’s distributed machine learning pipelines. - -You can connect to Spark from RStudio: - - - -* By connecting to a Spark kernel that runs locally in the RStudio container in IBM Watson Studio - - - -RStudio includes sample code snippets that show you how to connect to a Spark kernel in your applications for both methods. - -To use Spark in RStudio after you have launched the IDE: - - - -1. Locate the ibm_sparkaas_demos directory under your home directory and open it. The directory contains the following R scripts: - - - -* A readme with details on the included R sample scripts -* spark_kernel_basic_local.R includes sample code of how to connect to a local Spark kernel -* spark_kernel_basic_remote.R includes sample code of how to connect to a remote Spark kernel -* The files sparkaas_flights.Rand sparkaas_mtcars.R are two examples of how to use Spark in a small sample application - - - -2. Use the sample code snippets in your R scripts or applications to help you get started using Spark. - - - -" -2BCC4276EA71978FFA874621715BE92A9667390F_1,2BCC4276EA71978FFA874621715BE92A9667390F," Connecting to Spark from RStudio - -To connect to Spark from RStudio using the Sparklyr R package, you need a Spark with R environment. You can either use the default Spark with R environment that is provided or create a custom Spark with R environment. To create a custom environment, see [Creating environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html). - -Follow these steps after you launch RStudio in an RStudio environment: - -Use the following sample code to get a listing of the Spark environment details and to connect to a Spark kernel from your RStudio session: - - load spark R packages -library(ibmwsrspark) -library(sparklyr) - - load kernels -kernels <- load_spark_kernels() - - display kernels -display_spark_kernels() - - get spark kernel Configuration - -conf <- get_spark_config(kernels[1]) - Set spark configuration -conf$spark.driver.maxResultSize <- ""1G"" - connect to Spark kernel - -sc <- spark_connect(config = conf) - -Then to disconnect from Spark, use: - - disconnect -spark_disconnect(sc) - -Examples of these commands are provided in the readme under /home/wsuser/ibm_sparkaas_demos. - -Parent topic:[RStudio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-overview.html) -" -42F34465DD884E8110BB08A708A138532999714F_0,42F34465DD884E8110BB08A708A138532999714F," Compute resource options for AutoAI experiments in projects - -When you run an AutoAI experiment in a project, the type, size, and power of the hardware configuration available depend on the type of experiment you build. - - - -* [Default hardware configurations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-autoai.html?context=cdpaas&locale=endefault) -* [Compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-autoai.html?context=cdpaas&locale=encompute) - - - -" -42F34465DD884E8110BB08A708A138532999714F_1,42F34465DD884E8110BB08A708A138532999714F," Default hardware configurations - -The type of hardware configuration available for your AutoAI experiment depends on the type of experiment you are building. A standard AutoAI experiment, with a single data source, has a single, default hardware configuration. An AutoAI experiment with joined data has options for increasing computational power. - -" -42F34465DD884E8110BB08A708A138532999714F_2,42F34465DD884E8110BB08A708A138532999714F," Capacity units per hour for AutoAI experiments - - - -Hardware configurations available in projects for AutoAI with a single data source - - Capacity type Capacity units per hour - - 8 vCPU and 32 GB RAM 20 - - - -The runtimes for AutoAI stop automatically when processing is complete. - -" -42F34465DD884E8110BB08A708A138532999714F_3,42F34465DD884E8110BB08A708A138532999714F," Compute usage in projects - -AutoAI consumes compute resources as CUH from the Watson Machine Learning service. - -You can monitor the total monthly amount of CUH consumption for the Watson Machine Learning service on the Resource usage page on the Manage tab of your project. - -" -42F34465DD884E8110BB08A708A138532999714F_4,42F34465DD884E8110BB08A708A138532999714F," Learn more - - - -* [AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) -* [Watson Machine Learning service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html) -* [Compute resource options for assets and deployments in spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-cuh-deploy-spaces.html) -* [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) - - - -Parent topic:[Choosing compute resources for tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) -" -B08A6B7A0F11FD3AB62A14F44FD4E1A771174C61_0,B08A6B7A0F11FD3AB62A14F44FD4E1A771174C61," Compute options for model training and scoring - -When you train or score a model or function, you choose the type, size, and power of the hardware configuration that matches your computing needs. - - - -* [Default hardware configurations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-cuh-deploy-spaces.html?context=cdpaas&locale=endefault) -* [Compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-cuh-deploy-spaces.html?context=cdpaas&locale=encompute) - - - -" -B08A6B7A0F11FD3AB62A14F44FD4E1A771174C61_1,B08A6B7A0F11FD3AB62A14F44FD4E1A771174C61," Default hardware configurations - -Choose the hardware configuration for your Watson Machine Learning asset when you train the asset or when you deploy it. - - - -Hardware configurations available for training and deploying assets - - Capacity type Capacity units per hour - - Extra small: 1x4 = 1 vCPU and 4 GB RAM 0.5 - Small: 2x8 = 2 vCPU and 8 GB RAM 1 - Medium: 4x16 = 4 vCPU and 16 GB RAM 2 - Large: 8x32 = 8 vCPU and 32 GB RAM 4 - Extra large: 16x64 = 16 vCPU and 64 GB RAM 8 - - - -" -B08A6B7A0F11FD3AB62A14F44FD4E1A771174C61_2,B08A6B7A0F11FD3AB62A14F44FD4E1A771174C61," Compute usage for Watson Machine Learning assets - -Deployments and scoring consume compute resources as capacity unit hours (CUH) from the Watson Machine Learning service. - -To check the total monthly CUH consumption for your Watson Machine Learning services, from the navigation menu, select Administration -> Environment runtimes. - -Additionally, you can monitor the monthly resource usage in each specific deployment space. To do that, from your deployment space, go to the Manage tab and then select Resource usage. The summary shows CUHs used by deployment type: separately for AutoAI deployments, Federated Learning deployments, batch deployments, and online deployments. - -" -B08A6B7A0F11FD3AB62A14F44FD4E1A771174C61_3,B08A6B7A0F11FD3AB62A14F44FD4E1A771174C61," Compute usage details - -The rate of consumed CUHs is determined by the computing requirements of your deployments. It is based on such variables as: - - - -* type of deployment -* type of framework -* complexity of scoring Scaling a deployment to support more concurrent users and requests also increases CUH consumption. As many variables affect resource consumption for a deployment, it is recommended that you run tests on your models and deployments to analyze CUH consumption. - - - -The way that online deployments consume capacity units is based on framework. For some frameworks, CUHs are charged for the number of hours that the deployment asset is active in a deployment space. For example, SPSS models in online deployment mode that run for 24 hours a day, seven days a week, consume CUHs and are charged for that period. An active online deployment has no idle time. For other frameworks, CUHs are charged according to scoring duration. Refer to the CUH consumption table for details on how CUH usage is calculated. - -Compute time is calculated to the millisecond, with a 1-minute minimum for each distinct operation. For example: - - - -* A training run that takes 12 seconds is billed as 1 minute -* A training run that takes 83.555 seconds is billed exactly as calculated - - - -" -B08A6B7A0F11FD3AB62A14F44FD4E1A771174C61_4,B08A6B7A0F11FD3AB62A14F44FD4E1A771174C61," CUH consumption by deployment and framework type - -CUH consumption is calculated by using these formulas: - - - - Deployment type Framework CUH calculation - - Online AutoAI, AI function, SPSS, Scikit-Learn custom libraries, Tensorflow, RShiny Deployment active duration * Number of nodes * CUH rate for capacity type framework - Online Spark, PMML, Scikit-Learn, Pytorch, XGBoost Score duration in seconds * Number of nodes * CUH rate for capacity type framework - Batch all frameworks Job duration in seconds * Number of nodes * CUH rate for capacity type framework - - - -" -B08A6B7A0F11FD3AB62A14F44FD4E1A771174C61_5,B08A6B7A0F11FD3AB62A14F44FD4E1A771174C61," Learn more - - - -* [Deploying assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html) -* [Watson Machine Learning service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html) -* [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) - - - -Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) -" -5B66F4F408827FE62B0584882D7F25FB9C6CA839_0,5B66F4F408827FE62B0584882D7F25FB9C6CA839," Compute resource options for Decision Optimization - -When you run a Decision Optimization model, you use the Watson Machine Learning instance that is linked to the deployment space associated with your experiment. - - - -* [Default hardware configurations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-decisionopt.html?context=cdpaas&locale=endefault) -* [Compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-decisionopt.html?context=cdpaas&locale=encompute) - - - -" -5B66F4F408827FE62B0584882D7F25FB9C6CA839_1,5B66F4F408827FE62B0584882D7F25FB9C6CA839," Default hardware configuration - -The following hardware configuration is used by default when running models in an experiment: - - - - Capacity type Capacity units per hour (CUH) - - 2 vCPU and 8 GB RAM 6 - - - -The CUH is consumed only when the model is running and not when you are adding data or editing your model. - -You can also switch to any other experiment environment as required. See the [Decision Optimization plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.htmldo) for a list of environments for Decision Optimization experiments. - -For more information on how to configure Decision Optimization experiment environments, see [Configuring environments](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/configureEnvironments.html). - -" -5B66F4F408827FE62B0584882D7F25FB9C6CA839_2,5B66F4F408827FE62B0584882D7F25FB9C6CA839," Compute usage in projects - -Decision Optimization experiments consume compute resources as CUH from the Watson Machine Learning service. - -You can monitor the total monthly amount of CUH consumption for the Watson Machine Learning service on the Resource usage page on the Manage tab of your project. - -" -5B66F4F408827FE62B0584882D7F25FB9C6CA839_3,5B66F4F408827FE62B0584882D7F25FB9C6CA839," Learn more - - - -* [Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html) -* [Watson Machine Learning plans and compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html) -* [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) - - - -Parent topic:[Choosing compute resources for tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) -" -17AC1BECAE0867381BC236D4C0CC8FC4B8921A0A_0,17AC1BECAE0867381BC236D4C0CC8FC4B8921A0A," Compute resource options for Tuning Studio experiments in projects - -A Tuning Studio experiment has a single hardware configuration. - -The following table shows the hardware configuration that is used when tuning foundation models in a tuning experiment. - - - -Hardware configuration available in projects for Tuning Studio - - Capacity type Capacity units per hour - - - - -NVIDIA A100 80GB GPU|43| - -" -17AC1BECAE0867381BC236D4C0CC8FC4B8921A0A_1,17AC1BECAE0867381BC236D4C0CC8FC4B8921A0A," Compute usage in projects - -Tuning Studio consumes compute resources as CUH from the Watson Machine Learning service. - -You can monitor the total monthly amount of CUH consumption for the Watson Machine Learning service on the Resource usage page on the Manage tab of your project. - -" -17AC1BECAE0867381BC236D4C0CC8FC4B8921A0A_2,17AC1BECAE0867381BC236D4C0CC8FC4B8921A0A," Learn more - - - -* [Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html) -* [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html) -* [Compute resource options for assets and deployments in spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-cuh-deploy-spaces.html) -* [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) - - - -Parent topic:[Choosing compute resources for tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) -" -9DAE797269714235C8D9287B5D358BCF72E2C9F5_0,9DAE797269714235C8D9287B5D358BCF72E2C9F5," SPSS predictive analytics algorithms for scoring - -A PMML-compliant scoring engine supports: - - - -* PMML-compliant models (4.2 and earlier versions) produced by various vendors, except for Baseline Model, ScoreCard Model, Sequence Model, and Text Model. Refer to the [Data Mining Group (DMG) web site](http://www.dmg.org/) for a list of supported models. -* Non-PMML models produced by IBM SPSS products: Discriminant and Bayesian networks -* PMML 4.2 transformations completely - - - -Different kinds of models can produce various scoring results. For example: - - - -* Classification models (those with a categorical target: Bayes Net, General Regression, Mining, Naive Bayes, k-Nearest Neighbor, Neural Network, Regression, Ruleset, Support Vector Machine, and Tree) produce: - - - -* Predicted values -* Probabilities -* Confidence values - - - -* Regression models (those with a continuous target: General Regression, Mining, k-Nearest Neighbor, Neural Network, Regression, and Tree) produce predicted values; some also produce standard errors. -* Cox regression (in General Regression) produces predicted survival probability and cumulative hazard values. -* Tree models also produce Node ID. -* Clustering models produce Cluster ID and Cluster affinity. -* Anomaly Detection (represented as Clustering) produces anomaly index and top reasons. -* Association models produce Consequent, Rule ID, and confidence for top matching rules. - - - -" -9DAE797269714235C8D9287B5D358BCF72E2C9F5_1,9DAE797269714235C8D9287B5D358BCF72E2C9F5,"Python example code: - -from spss.ml.score import Score - -with open(""linear.pmml"") as reader: -pmmlString = reader.read() - -score = Score().fromPMML(pmmlString) -scoredDf = score.transform(data) -scoredDf.show() - -Parent topic:[SPSS predictive analytics algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-algorithms.html) -" -2A3F647A7F4EB8FB4270D4E78245F18BFDE29AD8_0,2A3F647A7F4EB8FB4270D4E78245F18BFDE29AD8," Sharing notebooks with a URL - -You can create a URL to share the last saved version of a notebook on social media or with people outside of Watson Studio. The URL shows a read-only view of the notebook. Anyone who has the URL can view or download the notebook. - -" -2A3F647A7F4EB8FB4270D4E78245F18BFDE29AD8_1,2A3F647A7F4EB8FB4270D4E78245F18BFDE29AD8,"Required permissions: - -You must have the Admin or Editor role in the project to share a notebook URL. The shared notebook shows the author of the shared version and when the notebook version was last updated. - -" -2A3F647A7F4EB8FB4270D4E78245F18BFDE29AD8_2,2A3F647A7F4EB8FB4270D4E78245F18BFDE29AD8," Sharing a notebook URL - -To share a notebook URL: - - - -1. Open the notebook in edit mode. -2. If necessary, add code to [hide sensitive code cells](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/hide_code.html). -3. Create a saved version of the notebook by clicking File > Save Version. -4. Click the Share icon (![Share icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/share_icon.png)) from the notebook action bar. - -![Share notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/share_notebook.png) -5. Select to share the link. -6. Choose a sharing option: - - - -* Choose Only text and output to hide all code cells. -* Choose All content excluding sensitive code cells to hide code cells that you marked as sensitive. -* Choose All content, including code to show everything, even code cells that you marked as sensitive. Make sure that you remove your credential and other sensitive information before you choose this option and every time before you save a new version of the notebook. - - - -7. Copy the link or choose a social media site on which to share the URL. - - - -Note: The URL remains valid while the project and notebook exist and while the notebook is shared. If you [unshare the notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/share-notebooks.html?context=cdpaas&locale=enunsharing), the URL becomes invalid. When you unshare, and then re-share the notebook, the URL will be the same again. - -" -2A3F647A7F4EB8FB4270D4E78245F18BFDE29AD8_3,2A3F647A7F4EB8FB4270D4E78245F18BFDE29AD8," Updating a shared notebook - -To update a shared notebook: - - - -1. Open the notebook in edit mode. -2. Make changes to the notebook. -3. Create a new version of the notebook by clicking File > Save Version. - - - -Note: Clicking File > Save saves your changes but it doesn't create a new version of the notebook; the shared URL still points to the older version of the notebook. - -" -2A3F647A7F4EB8FB4270D4E78245F18BFDE29AD8_4,2A3F647A7F4EB8FB4270D4E78245F18BFDE29AD8," Unsharing a notebook URL - -To unshare a notebook URL: - - - -1. Open the notebook in edit mode. -2. Click the Share icon (![Share icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/share_icon.png)) from the notebook action bar. - -![Share notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/share_notebook.png) -3. Unselect the Share with anyone who has the link toggle. - - - -Parent topic:[Managing the lifecycle of notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-nb-lifecycle.html) -" -C0E0C248B3934E34883814B5F9CEB792D734042A_0,C0E0C248B3934E34883814B5F9CEB792D734042A," Compute resource options for Data Refinery in projects - -When you create or edit a Data Refinery flow in a project, you use the Default Data Refinery XS runtime environment. However, when you run a Data Refinery flow in a job, you choose an environment template for the runtime environment. The environment template specifies the type, size, and power of the hardware configuration, plus the software template. - - - -* [Types of environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html?context=cdpaas&locale=entypes) -* [Default environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html?context=cdpaas&locale=endefault) -* [Compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html?context=cdpaas&locale=encompute) -* [Changing the runtime](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html?context=cdpaas&locale=enchange-env) -* [Runtime logs for jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html?context=cdpaas&locale=enlogs) - - - -" -C0E0C248B3934E34883814B5F9CEB792D734042A_1,C0E0C248B3934E34883814B5F9CEB792D734042A," Types of environments - -You can use these types of environments with Data Refinery: - - - -* Default Data Refinery XS runtime environment for running jobs on small data sets. -* Spark environments for running jobs on larger data sets. The Spark environments have [default environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html?context=cdpaas&locale=endefault) so you can get started quickly. Otherwise, you can [create custom environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html) for Spark environments. You should use a Spark & R environment only if you are working on a large data set. If your data set is small, you should select the Default Data Refinery XS runtime. The reason is that, although the SparkR cluster in a Spark & R environment is fast and powerful, it requires time to create, which is noticeable when you run a Data Refinery job on small data set. - - - -" -C0E0C248B3934E34883814B5F9CEB792D734042A_2,C0E0C248B3934E34883814B5F9CEB792D734042A," Default environment templates - -When you work in Data Refinery, the Default Data Refinery XS environment runtime is started and appears as an active runtime under Tool runtimes on the Environments page on the Manage tab of your project. This runtime stops after an hour of inactivity in the Data Refinery interface. However, you can stop it manually under Tool runtimes on the Environments page. - -When you create a job to run a Data Refinery flow in a project, you select an environment template. After a runtime for a job is started, it is listed as an active runtime under Tool runtimes on the Environments page on the Manage tab of your project. The runtime for a job stops when the Data Refinery job stops running. - -Compute usage is tracked by capacity unit hours (CUH). - - - -Preset environment templates available in projects for Data Refinery - - Name Hardware configuration Capacity units per hour (CUH) - - Default Data Refinery XS 3 vCPU and 12 GB RAM 1.5 - Default Spark 3.3 & R 4.2 2 Executors each: 1 vCPU and 4 GB RAM;
Driver: 1 vCPU and 4 GB RAM 1.5 - - - -All default environment templates for Data Refinery are HIPAA ready. - -The Spark default environment templates are listed under Templates on the Environments page on the Manage tab of your project. - -" -C0E0C248B3934E34883814B5F9CEB792D734042A_3,C0E0C248B3934E34883814B5F9CEB792D734042A," Compute usage in projects - -You can monitor the Watson Studio CUH consumption on the Resource usage page on the Manage tab of your project. - -" -C0E0C248B3934E34883814B5F9CEB792D734042A_4,C0E0C248B3934E34883814B5F9CEB792D734042A," Changing the runtime - -You can't change the runtime for working in Data Refinery. - -You can change the runtime for a Data Refinery flow job by editing the job template. See [Creating jobs in Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.htmlcreate-jobs-in-dr). - -" -C0E0C248B3934E34883814B5F9CEB792D734042A_5,C0E0C248B3934E34883814B5F9CEB792D734042A," Runtime logs for jobs - -To view the accumulated logs for a Data Refinery job: - - - -1. From the project's Jobs page, click the job that ran the Data Refinery flow for which you want to see logs. -2. Click the job run. You can view the log tail or download the complete log file. - - - -" -C0E0C248B3934E34883814B5F9CEB792D734042A_6,C0E0C248B3934E34883814B5F9CEB792D734042A," Next steps - - - -* [Creating your own environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html) -* [Creating jobs in Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.htmlcreate-jobs-in-dr) -* [Stopping active runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.htmlstop-active-runtimes) - - - -" -C0E0C248B3934E34883814B5F9CEB792D734042A_7,C0E0C248B3934E34883814B5F9CEB792D734042A," Learn more - - - -* [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) - - - -Parent topic:[Choosing compute resources for tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) -" -6F544922DE2638796837398F7EC15A4AFE6B0781,6F544922DE2638796837398F7EC15A4AFE6B0781," SPSS predictive analytics algorithms - -You can use the following SPSS predictive analytics algorithms in your notebooks. Code samples are provided for Python notebooks. - -Notebooks must run in a Spark with Python environment runtime. To run the algorithms described in this section, you don't need the SPSS Modeler service. - - - -* [Data preparation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/datapreparation-guides.html) -* [Classification and regression](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/classificationandregression-guides.html) -* [Clustering](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/clustering-guides.html) -* [Forecasting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/forecasting-guides.html) -* [Survival analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/survivalanalysis-guides.html) -* [Score](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/score-guides.html) - - - -Parent topic:[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html) -" -54029DD42BAE3A23D68D928AC3B6C04D0C735DEC_0,54029DD42BAE3A23D68D928AC3B6C04D0C735DEC," Compute resource options for SPSS Modeler in projects - -When you run an SPSS Modeler flow in a project, you choose an environment template for the runtime environment. The environment template specifies the type, size, and power of the hardware configuration, plus the software template. - - - -* [Types of environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-envs.html?context=cdpaas&locale=entypes_spss) -* [Default environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-envs.html?context=cdpaas&locale=endefault_spss) -* [Compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-envs.html?context=cdpaas&locale=encompute_spss) -* [Changing the runtime](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-envs.html?context=cdpaas&locale=enchange-env_spss) - - - -" -54029DD42BAE3A23D68D928AC3B6C04D0C735DEC_1,54029DD42BAE3A23D68D928AC3B6C04D0C735DEC," Types of environments - -You can use this type of environment with SPSS Modeler: - - - -* Default SPSS Modeler CPU environments for standard workloads - - - -" -54029DD42BAE3A23D68D928AC3B6C04D0C735DEC_2,54029DD42BAE3A23D68D928AC3B6C04D0C735DEC," Default environment templates - -You can select any of the following default environment templates for SPSS Modeler in a project. The included environment templates are listed under Templates on the Environments page on the Manage tab of your project. - - - -Default SPSS Modeler environment templates - - Name Hardware configuration Local storage CUH rate per hour - - Default SPSS Modeler S 2 vCPU and 8 GB RAM 128 GB 1 - Default SPSS Modeler M 4 vCPU and 16 GB RAM 128 GB 2 - Default SPSS Modeler L 6 vCPU and 24 GB RAM 128 GB 3 - - - -After selecting an environment, any other SPSS Modeler flows opened in that project will use the same runtime. The hardware configuration of the available SPSS Modeler environments is preset and cannot be changed. - -" -54029DD42BAE3A23D68D928AC3B6C04D0C735DEC_3,54029DD42BAE3A23D68D928AC3B6C04D0C735DEC," Compute usage in projects - -SPSS Modeler consumes compute resources as CUH from the Watson Studio service in projects. - -You can monitor the Watson Studio CUH consumption on the Resource usage page on the Manage tab of your project. - -" -54029DD42BAE3A23D68D928AC3B6C04D0C735DEC_4,54029DD42BAE3A23D68D928AC3B6C04D0C735DEC," Changing the SPSS Modeler runtime - -If you notice that processing is very slow, you can restart SPSS Modeler and select a larger environment runtime. - -To change the SPSS Modeler environment runtime: - - - -1. Save any data from your current session before switching to another environment. -2. Stop the active SPSS Modeler runtime under Tool runtimes on the Environments page on the Manage tab of your project. -3. Restart SPSS Modeler and select another environment with the compute power and memory capacity that better meets your requirements. - - - -" -54029DD42BAE3A23D68D928AC3B6C04D0C735DEC_5,54029DD42BAE3A23D68D928AC3B6C04D0C735DEC," Learn more - - - -* [SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html) - - - -Parent topic:[Choosing compute resources for tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) -" -2D81FCD3E78A5CC7B435198A59522AE6BF8640ED_0,2D81FCD3E78A5CC7B435198A59522AE6BF8640ED," SPSS predictive analytics survival analysis algorithms in notebooks - -You can use non-parametric distribution fitting, parametric distribution fitting, or parametric regression modeling SPSS predictive analytics algorithms in notebooks. - -" -2D81FCD3E78A5CC7B435198A59522AE6BF8640ED_1,2D81FCD3E78A5CC7B435198A59522AE6BF8640ED," Non-Parametric Distribution Fitting - -Survival analysis analyzes data where the outcome variable is the time until the occurrence of an event of interest. The distribution of the event times is typically described by a survival function. - -Non-parametric Distribution Fitting (NPDF) provides an estimate of the survival function without making any assumptions concerning the distribution of the data. NPDF includes Kaplan-Meier estimation, life tables, and specialized extension algorithms to support left censored, interval censored, and recurrent event data. - -" -2D81FCD3E78A5CC7B435198A59522AE6BF8640ED_2,2D81FCD3E78A5CC7B435198A59522AE6BF8640ED,"Python example code: - -from spss.ml.survivalanalysis import NonParametricDistributionFitting -from spss.ml.survivalanalysis.params import DefinedStatus, Points, StatusItem - -npdf = NonParametricDistributionFitting(). -setAlgorithm(""KM""). -setBeginField(""time""). -setStatusField(""status""). -setStrataFields([""treatment""]). -setGroupFields([""gender""]). -setUndefinedStatus(""INTERVALCENSORED""). -setDefinedStatus( -DefinedStatus( -failure=StatusItem(points = Points(""1"")), -rightCensored=StatusItem(points = Points(""0"")))). -setOutMeanSurvivalTime(True) - -npdfModel = npdf.fit(df) -predictions = npdfModel.transform(data) -predictions.show() - -" -2D81FCD3E78A5CC7B435198A59522AE6BF8640ED_3,2D81FCD3E78A5CC7B435198A59522AE6BF8640ED," Parametric Distribution Fitting - -Survival analysis analyzes data where the outcome variable is the time until the occurrence of an event of interest. The distribution of the event times is typically described by a survival function. - -Parametric Distribution Fitting (PDF) provides an estimate of the survival function by comparing the functions for several known distributions (exponential, Weibull, log-normal, and log-logistic) to determine which, if any, describes the data best. In addition, the distributions for two or more groups of cases can be compared. - -" -2D81FCD3E78A5CC7B435198A59522AE6BF8640ED_4,2D81FCD3E78A5CC7B435198A59522AE6BF8640ED,"Python excample code: - -from spss.ml.survivalanalysis import ParametricDistributionFitting -from spss.ml.survivalanalysis.params import DefinedStatus, Points, StatusItem - -pdf = ParametricDistributionFitting(). -setBeginField(""begintime""). -setEndField(""endtime""). -setStatusField(""status""). -setFreqField(""frequency""). -setDefinedStatus( -DefinedStatus( -failure=StatusItem(points=Points(""F"")), -rightCensored=StatusItem(points=Points(""R"")), -leftCensored=StatusItem(points=Points(""L""))) -). -setMedianRankEstimation(""RRY""). -setMedianRankObtainMethod(""BetaFDistribution""). -setStatusConflictTreatment(""DERIVATION""). -setEstimationMethod(""MRR""). -setDistribution(""Weibull""). -setOutProbDensityFunc(True). -setOutCumDistFunc(True). -setOutSurvivalFunc(True). -setOutRegressionPlot(True). -setOutMedianRankRegPlot(True). -setComputeGroupComparison(True) - -pdfModel = pdf.fit(data) -predictions = pdfModel.transform(data) -predictions.show() - -" -2D81FCD3E78A5CC7B435198A59522AE6BF8640ED_5,2D81FCD3E78A5CC7B435198A59522AE6BF8640ED," Parametric regression modeling - -Parametric regression modeling (PRM) is a survival analysis technique that incorporates the effects of covariates on the survival times. PRM includes two model types: accelerated failure time and frailty. Accelerated failure time models assume that the relationship of the logarithm of survival time and the covariates is linear. Frailty, or random effects, models are useful for analyzing recurrent events, correlated survival data, or when observations are clustered into groups. - -PRM automatically selects the survival time distribution (exponential, Weibull, log-normal, or log-logistic) that best describes the survival times. - -" -2D81FCD3E78A5CC7B435198A59522AE6BF8640ED_6,2D81FCD3E78A5CC7B435198A59522AE6BF8640ED,"Python example code: - -from spss.ml.survivalanalysis import ParametricRegression -from spss.ml.survivalanalysis.params import DefinedStatus, Points, StatusItem - -prm = ParametricRegression(). -setBeginField(""startTime""). -setEndField(""endTime""). -setStatusField(""status""). -setPredictorFields([""age"", ""surgery"", ""transplant""]). -setDefinedStatus( -DefinedStatus( -failure=StatusItem(points=Points(""0.0"")), -intervalCensored=StatusItem(points=Points(""1.0"")))) - -prmModel = prm.fit(data) -PMML = prmModel.toPMML() -statXML = prmModel.statXML() -predictions = prmModel.transform(data) -predictions.show() - -Parent topic:[SPSS predictive analytics algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-algorithms.html) -" -FAE139F839DAB4C6EB794D689DACCEFF869C718F_0,FAE139F839DAB4C6EB794D689DACCEFF869C718F," Switching the platform for a space - -You can switch the platform for some spaces between the Cloud Pak for Data as a Service and the watsonx platform. When you switch a space to another platform, you can use the tools that are specific to that platform. - -For example, you might switch an existing space from Cloud Pak for Data as a Service to watsonx to consolidate your collaborative work on one platform. See [Comparison between watsonx and Cloud Pak for Data as a Service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html). - -Note: You cannot promote Prompt Lab assets created with foundation model inferencing to a space. - - - -* [Requirements](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/switch-platform-space.html?context=cdpaas&locale=enrequirements) -* [Restrictions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/switch-platform-space.html?context=cdpaas&locale=enrestrictions) -* [What happens when you switch a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/switch-platform-space.html?context=cdpaas&locale=enconsequences) -* [Switch the platform for a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/switch-platform-space.html?context=cdpaas&locale=enmove-one) - - - -" -FAE139F839DAB4C6EB794D689DACCEFF869C718F_1,FAE139F839DAB4C6EB794D689DACCEFF869C718F," Requirements - -You can switch a space from one platform to the other if you have the required accounts and permissions. - -Required accounts : You must be signed up for both Cloud Pak for Data as a Service and watsonx. - -Required permissions : You must have the Admin role in the space that you want to switch. - -Required services : The current account that you are working in must have both of these services provisioned: : - Watson Studio : - Watson Machine Learning - -" -FAE139F839DAB4C6EB794D689DACCEFF869C718F_2,FAE139F839DAB4C6EB794D689DACCEFF869C718F," Restrictions - -To switch a space from Cloud Pak for Data as a Service to watsonx, all the assets in the space must be supported by both platforms. - -Spaces that contain any of the following asset types, but no other types of assets, are eligible to switch from Cloud Pak for Data as a Service to watsonx: - - - -* Connected data asset -* Connection -* Data asset from a file -* Deployment -* Jupyter notebook -* Model -* Python function -* Script - - - -You can’t switch a space that contains assets that are specific to Cloud Pak for Data as a Service. If you add any assets that you created with services other than Watson Studio and Watson Machine Learning to a project, you can't switch that space to watsonx. Although Pipelines assets are supported in both Cloud Pak for Data as a Service and watsonx spaces, you can't switch a space that contains pipeline assets because pipelines can reference unsupported assets. - -For more information about asset types, see [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html). - -" -FAE139F839DAB4C6EB794D689DACCEFF869C718F_3,FAE139F839DAB4C6EB794D689DACCEFF869C718F," What happens when you switch the platform for a space - -Switching a space between platforms has the following effects: - -Collaborators : Collaborators in the project receive notifications of the switch on the original platform. If any collaborators do not have accounts for the destination platform, those collaborators can no longer access the project. - -Jobs : Scheduled jobs are retained. Any jobs that are running at the time of the switch continue until completion on the original platform. Any jobs that are scheduled for times after the switch are run on the destination platform. Job history is not retained. - -Environments : Custom hardware and software specifications are retained. - -Space history : Recent activity and asset activities are not retained. - -Resource usage : Resource usage is cumulative because you continue to use the same service instances. - -Storage : The space's IBM Cloud Object Storage bucket remains the same. - -" -FAE139F839DAB4C6EB794D689DACCEFF869C718F_4,FAE139F839DAB4C6EB794D689DACCEFF869C718F," Switch the platform for a space - -You can switch the platform for a space from within the space on the original platform. You can switch between Cloud Pak for Data as a Service and watsonx. - -To switch the platform for a space: - - - -1. From the space you want to switch, open the Manage tab, select the General page, and in the Controls section, click Switch platform. If you don't see a Switch platform button or the button is not active, you can't switch the space. -2. Select the destination platform and click Switch platform. - - - -" -FAE139F839DAB4C6EB794D689DACCEFF869C718F_5,FAE139F839DAB4C6EB794D689DACCEFF869C718F," Learn more - - - -* [Comparison between watsonx and Cloud Pak for Data as a Service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html) -* [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html) - - - -Parent topic:[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html) -" -384EB2033AD74EA7044AFC8BF1DDB06FF392CB08_0,384EB2033AD74EA7044AFC8BF1DDB06FF392CB08," Compute resource options for Synthetic Data Generator in projects - -To create data with the Synthetic Data Generator, you must have the Watson Studio and Watson Machine Learning services provisioned. Running a synthetic data flow consumes compute resources from the Watson Studio service. - -" -384EB2033AD74EA7044AFC8BF1DDB06FF392CB08_1,384EB2033AD74EA7044AFC8BF1DDB06FF392CB08," Capacity units per hour for Synthetic Data Generator - - - - Capacity type Capacity units per hour - - 2 vCPU and 8 GB RAM 7 - - - -" -384EB2033AD74EA7044AFC8BF1DDB06FF392CB08_2,384EB2033AD74EA7044AFC8BF1DDB06FF392CB08," Compute usage in projects - -Running a synthetic data flow consumes compute resources from the Watson Studio service. - -You can monitor the total monthly amount of CUH consumption for Watson Studio on the Resource usage page on the Manage tab of your project. - -" -384EB2033AD74EA7044AFC8BF1DDB06FF392CB08_3,384EB2033AD74EA7044AFC8BF1DDB06FF392CB08," Learn more - - - -* [Synthetic data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html) -* [Watson Machine Learning service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html) -* [Watson Studio service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html) -* [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) - - - -Parent topic:[Choosing compute resources for tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) -" -8A411252B81F0E159C1F63EE64F63A987D1BEF9F_0,8A411252B81F0E159C1F63EE64F63A987D1BEF9F," Manually adding the project access token - -All projects have an authorization token that is used to access data assets, for example files and connections, and is used by platform APIs. This token is called the project access token, or simply access token in the project user interface. This project access token must be set in notebooks so that project and platform functions can access the project resources. - -When you load data to your notebook by clicking Read data on the Code snippets pane, selecting the asset and the load option, the project access token is added for you, if the generated code that is inserted uses project functions. - -However, when you use API functions in your notebook that require the project token, for example, if you're using Wget to access data by using the HTTP, HTTPS or FTP protocols, or the ibm-watson-studio-lib library, you must add the project access token to the notebook yourself. - -To add a project access token to a notebook if you are not using the generated code: - - - -1. From the Manage tab, select Access Control and click New access token under Access tokens. Only project administrators can create project access tokens. - -Enter a name and select the access role. To enable using API functions in a notebook, the access token must have the Editor access role. An access token with Viewer access role enables read access only to a notebook. -2. Add the project access token to a notebook by clicking More > Insert project token from the notebook action bar. - -By running the inserted hidden code cell, a project object is created that you can use for functions in the ibm-watson-studio-lib library. For example to get the name of the current project run: - -project.get_name() - -For details on the available ibm-watson-studio-lib functions, see [Accessing project assets with ibm-watson-studio-lib](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/using-ibm-ws-lib.html). - -" -8A411252B81F0E159C1F63EE64F63A987D1BEF9F_1,8A411252B81F0E159C1F63EE64F63A987D1BEF9F,"Note that a project administrator can revoke a project access token at any time. An access token has no expiration date and is valid until it is revoked. - - - -Parent topic:[Loading and accessing data in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html) -" -E76A86B7EE87A78FA06482285BAD02694ABCC3CA_0,E76A86B7EE87A78FA06482285BAD02694ABCC3CA," Watson Studio environments compute usage - -Compute usage is calculated by the number of capacity unit hours (CUH) consumed by an active environment runtime in Watson Studio. Watson Studio plans govern how you are billed monthly for the resources you consume. - - - -Capacity units included in each plan per month - - Feature Lite Professional Standard (legacy) Enterprise (legacy) - - Processing usage 10 CUH
per month Unlimited CUH
billed for usage per month 10 CUH per month
+ pay for more 5000 CUH per month
+ pay for more - - - - - -Capacity units included in each plan per month - - Feature Lite Professional - - Processing usage 10 CUH per month Unlimited CUH
billed for usage per month - - - -" -E76A86B7EE87A78FA06482285BAD02694ABCC3CA_1,E76A86B7EE87A78FA06482285BAD02694ABCC3CA," Capacity units per hour for notebooks - - - -Notebooks - - Capacity type Language Capacity units per hour - - 1 vCPU and 4 GB RAM Python
R 0.5 - 2 vCPU and 8 GB RAM Python
R 1 - 4 vCPU and 16 GB RAM Python
R 2 - 8 vCPU and 32 GB RAM Python
R 4 - 16 vCPU and 64 GB RAM Python
R 8 - Driver: 1 vCPU and 4 GB RAM; 1 Executor: 1 vCPU and 4 GB RAM Spark with Python
Spark with R 1
CUH per additional executor is 0.5 - Driver: 1 vCPU and 4 GB RAM; 1 Executor: 2 vCPU and 8 GB RAM Spark with Python
Spark with R 1.5
CUH per additional executor is 1 - Driver: 2 vCPU and 8 GB RAM; 1 Executor: 1 vCPU and 4 GB RAM; Spark with Python
Spark with R 1.5
CUH per additional executor is 0.5 - Driver: 2 vCPU and 8 GB RAM; 1 Executor: 2 vCPU and 8 GB RAM; Spark with Python
Spark with R 2
CUH per additional executor is 1 - - - -The rate of capacity units per hour consumed is determined for: - - - -* Default Python or R environments by the hardware size and the number of users in a project using one or more runtimes - -For example: The IBM Runtime 22.2 on Python 3.10 XS with 2 vCPUs will consume 1 CUH if it runs for one hour. If you have a project with 7 users working on notebooks 8 hours a day, 5 days a week, all using the IBM Runtime 22.2 on Python 3.10 XS environment, and everyone shuts down their runtimes when they leave in the evening, runtime consumption is 5 x 7 x 8 = 280 CUH per week. - -The CUH calculation becomes more complex when different environments are used to run notebooks in the same project and if users have multiple active runtimes, all consuming their own CUHs. Additionally, there might be notebooks, which are scheduled to run during off-hours, and long-running jobs, likewise consuming CUHs. -" -E76A86B7EE87A78FA06482285BAD02694ABCC3CA_2,E76A86B7EE87A78FA06482285BAD02694ABCC3CA,"* Default Spark environments by the hardware configuration size of the driver, and the number of executors and their size. - - - -" -E76A86B7EE87A78FA06482285BAD02694ABCC3CA_3,E76A86B7EE87A78FA06482285BAD02694ABCC3CA," Capacity units per hour for notebooks with Decision Optimization - -The rate of capacity units per hour consumed is determined by the hardware size and the price for Decision Optimization. - - - -Decision Optimization notebooks - - Capacity type Language Capacity units per hour - - 1 vCPU and 4 GB RAM Python + Decision Optimization 0.5 + 5 = 5.5 - 2 vCPU and 8 GB RAM Python + Decision Optimization 1 + 5 = 6 - 4 vCPU and 16 GB RAM Python + Decision Optimization 2 + 5 = 7 - 8 vCPU and 32 GB RAM Python + Decision Optimization 4 + 5 = 9 - 16 vCPU and 64 GB RAM Python + Decision Optimization 8 + 5 = 13 - - - -" -E76A86B7EE87A78FA06482285BAD02694ABCC3CA_4,E76A86B7EE87A78FA06482285BAD02694ABCC3CA," Capacity units per hour for notebooks with Watson Natural Language Processing - -The rate of capacity units per hour consumed is determined by the hardware size and the price for Watson Natural Language Processing. - - - -Watson Natural Language Processing notebooks - - Capacity type Language Capacity units per hour - - 1 vCPU and 4 GB RAM Python + Watson Natural Language Processing 0.5 + 5 = 5.5 - 2 vCPU and 8 GB RAM Python + Watson Natural Language Processing 1 + 5 = 6 - 4 vCPU and 16 GB RAM Python + Watson Natural Language Processing 2 + 5 = 7 - 8 vCPU and 32 GB RAM Python + Watson Natural Language Processing 4 + 5 = 9 - 16 vCPU and 64 GB RAM Python + Watson Natural Language Processing 8 + 5 = 13 - - - -" -E76A86B7EE87A78FA06482285BAD02694ABCC3CA_5,E76A86B7EE87A78FA06482285BAD02694ABCC3CA," Capacity units per hour for Synthetic Data Generator - - - - Capacity type Capacity units per hour - - 2 vCPU and 8 GB RAM 7 - - - -" -E76A86B7EE87A78FA06482285BAD02694ABCC3CA_6,E76A86B7EE87A78FA06482285BAD02694ABCC3CA," Capacity units per hour for SPSS Modeler flows - - - -SPSS Modeler flows - - Name Capacity type Capacity units per hour - - Default SPSS XS 4 vCPU 16 GB RAM 2 - - - -" -E76A86B7EE87A78FA06482285BAD02694ABCC3CA_7,E76A86B7EE87A78FA06482285BAD02694ABCC3CA," Capacity units per hour for Data Refinery and Data Refinery flows - - - -Data Refinery and Data Refinery flows - - Name Capacity type Capacity units per hour - - Default Data Refinery XS runtime 3 vCPU and 12 GB RAM 1.5 - Default Spark 3.3 & R 4.2 2 Executors each: 1 vCPU and 4 GB RAM; Driver: 1 vCPU and 4 GB RAM 1.5 - - - -" -E76A86B7EE87A78FA06482285BAD02694ABCC3CA_8,E76A86B7EE87A78FA06482285BAD02694ABCC3CA," Capacity units per hour for RStudio - - - -RStudio - - Name Capacity type Capacity units per hour - - Default RStudio XS 2 vCPU and 8 GB RAM 1 - Default RStudio M 8 vCPU and 32 GB RAM 4 - Default RStudio L 16 vCPU and 64 GB RAM 8 - - - -" -E76A86B7EE87A78FA06482285BAD02694ABCC3CA_9,E76A86B7EE87A78FA06482285BAD02694ABCC3CA," Capacity units per hour for GPU environments - - - -GPU environments - - Capacity type GPUs Language Capacity units per hour - - 1 x NVIDIA Tesla V100 1 Python with GPU 68 - 2 x NVIDIA Tesla V100 2 Python with GPU 136 - - - -" -E76A86B7EE87A78FA06482285BAD02694ABCC3CA_10,E76A86B7EE87A78FA06482285BAD02694ABCC3CA," Runtime capacity limit - -You are notified when you're about to reach the monthly runtime capacity limit for your Watson Studio service plan. When this happens, you can: - - - -* Stop active runtimes you don't need. -* Upgrade your service plan. For up-to-date information, see the[Services catalog page for Watson Studio](https://dataplatform.cloud.ibm.com/data/catalog/data-science-experience?context=wx&target=services). - - - -Remember: The CUH counter continues to increase while a runtime is active so stop the runtimes you aren't using. If you don't explicitly stop a runtime, the runtime is stopped after an idle timeout. During the idle time, you will continue to consume CUHs for which you are billed. - -" -E76A86B7EE87A78FA06482285BAD02694ABCC3CA_11,E76A86B7EE87A78FA06482285BAD02694ABCC3CA," Track runtime usage for a project - -You can view the environment runtimes that are currently active in a project, and monitor usage for the project from the project's Environments page. - -" -E76A86B7EE87A78FA06482285BAD02694ABCC3CA_12,E76A86B7EE87A78FA06482285BAD02694ABCC3CA," Track runtime usage for an account - -The CUH consumed by the active runtimes in a project are billed to the account that the project creator has selected in his or her profile settings at the time the project is created. This account can be the account of the project creator, or another account that the project creator has access to. If other users are added to the project and use runtimes, their usage is also billed against the account that the project creator chose at the time of project creation. - -You can track the runtime usage for an account on the Environment Runtimes page if you are the IBM Cloud account owner or administrator. - -To view the total runtime usage across all of the projects and see how much of your plan you have currently used, choose Administration > Environment runtimes. - -A list of the active runtimes billed to your account is displayed. You can see who created the runtimes, when, and for which projects, as well as the capacity units that were consumed by the active runtimes at the time you view the list. - -" -E76A86B7EE87A78FA06482285BAD02694ABCC3CA_13,E76A86B7EE87A78FA06482285BAD02694ABCC3CA," Learn more - - - -* [Idle runtime timeouts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.htmlstop-active-runtimes) -* [Monitor account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) -* [Upgrade your service](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/upgrade.html) - - - -Parent topic:[Managing compute resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html) -" -DE60E212953766B4698982B3B631D1A25A019F2E_0,DE60E212953766B4698982B3B631D1A25A019F2E," Accessing project assets with ibm-watson-studio-lib - -The ibm-watson-studio-lib library for Python and R contains a set of functions that help you to interact with IBM Watson Studio projects and project assets. You can think of the library as a programmatical interface to a project. Using the ibm-watson-studio-lib library, you can access project metadata and assets, including files and connections. The library also contains functions that simplify fetching files associated with the project. - -" -DE60E212953766B4698982B3B631D1A25A019F2E_1,DE60E212953766B4698982B3B631D1A25A019F2E," Next steps - - - -* Start using ibm-watson-studio-lib in new notebooks: - - - -* [ibm-watson-studio-lib for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html) -* [ibm-watson-studio-lib for R](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html) - - - - - -Parent topic:[Loading and accessing data in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html) -" -15D57C8193B99B8525BC2999EF82EF1CD7EAE8AD_0,15D57C8193B99B8525BC2999EF82EF1CD7EAE8AD," Watson Natural Language Processing task catalog - -Watson Natural Language Processing encapsulates natural language functionality in standardized components called blocks or workflows. Each block or workflow can be loaded and run in a notebook, some directly on input data, others in a given order. - -This topic contains descriptions of the natural language processing tasks supported in the Watson Natural Language Processing library. It lists the task names, the languages that are supported, dependencies to other blocks and includes sample code of how you use the natural language processing functionality in a Python notebook. - -The following natural language processing tasks are supported as blocks or workflows in the Watson Natural Language Processing library: - - - -* [Language detection](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-language-detection.html) -* [Syntax analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-syntax.html) -* [Noun phrase extraction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-noun-phrase.html) -* [Keyword extraction and ranking](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-keyword.html) -* [Entity extraction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html) -* [Sentiment classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-sentiment.html) -* [Tone classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-tone.html) -" -15D57C8193B99B8525BC2999EF82EF1CD7EAE8AD_1,15D57C8193B99B8525BC2999EF82EF1CD7EAE8AD,"* [Emotion classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-emotion.html) -* [Concepts extraction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-concept-ext.html) -* [Relations extraction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-relation-extraction.html) -* [Hierarchical text categorization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-hierarchical-cat.html) - - - -" -15D57C8193B99B8525BC2999EF82EF1CD7EAE8AD_2,15D57C8193B99B8525BC2999EF82EF1CD7EAE8AD," Language codes - -Many of the pre-trained models are available in many languages. The following table lists the language codes and the corresponding language. - - - -Language codes and their corresponding language equivalents - - Language code Corresponding language Language code Corresponding language - - af Afrikaans ar Arabic - bs Bosnian ca Catalan - cs Czech da Danish - de German el Greek - en English es Spanish - fi Finnish fr French - he Hebrew hi Hindi - hr Croatian it Italian - ja Japanese ko Korean - nb Norwegian Bokmål nl Dutch - nn Norwegian Nynorsk pl Polish - pt Portuguese ro Romanian - ru Russian sk Slovak - sr Serbian sv Swedish - tr Turkish zh_cn Chinese (Simplified) - zh_tw Chinese (Traditional) - - - -Parent topic:[Watson Natural language Processing library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html) -" -156F8A58809D3A4D8F80D02481E5ADDE513EDEAA_0,156F8A58809D3A4D8F80D02481E5ADDE513EDEAA," Concepts extraction block - -The Watson Natural Language Processing Concepts block extracts general DBPedia concepts (concepts drawn from language-specific Wikipedia versions) that are directly referenced or alluded to, but not directly referenced, in the input text. - -" -156F8A58809D3A4D8F80D02481E5ADDE513EDEAA_1,156F8A58809D3A4D8F80D02481E5ADDE513EDEAA,"Block name - -concepts_alchemy__stock - -" -156F8A58809D3A4D8F80D02481E5ADDE513EDEAA_2,156F8A58809D3A4D8F80D02481E5ADDE513EDEAA,"Supported languages - -The Concepts block is available for the following languages. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.htmllang-codes). - -de, en, es, fr, it, ja, ko, pt - -" -156F8A58809D3A4D8F80D02481E5ADDE513EDEAA_3,156F8A58809D3A4D8F80D02481E5ADDE513EDEAA,"Capabilities - -Use this block to assign concepts from [DBPedia](https://www.dbpedia.org/) (2016 edition). The output types are based on DBPedia. - -" -156F8A58809D3A4D8F80D02481E5ADDE513EDEAA_4,156F8A58809D3A4D8F80D02481E5ADDE513EDEAA,"Dependencies on other blocks - -The following block must run before you can run the Concepts extraction block: - - - -* syntax_izumo__stock - - - -" -156F8A58809D3A4D8F80D02481E5ADDE513EDEAA_5,156F8A58809D3A4D8F80D02481E5ADDE513EDEAA,"Code sample - -import watson_nlp - - Load Syntax and a Concepts model for English -syntax_model = watson_nlp.load('syntax_izumo_en_stock') -concepts_model = watson_nlp.load('concepts_alchemy_en_stock') - Run the syntax model on the input text -syntax_prediction = syntax_model.run('IBM announced new advances in quantum computing') - - Run the concepts model on the result of syntax -concepts = concepts_model.run(syntax_prediction) -print(concepts) - -Output of the code sample: - -{ -""concepts"": [ -{ -""text"": ""IBM"", -""relevance"": 0.9842190146446228, -""dbpedia_resource"": ""http://dbpedia.org/resource/IBM"" -}, -{ -""text"": ""Quantum_computing"", -""relevance"": 0.9797260165214539, -""dbpedia_resource"": ""http://dbpedia.org/resource/Quantum_computing"" -}, -{ -""text"": ""Computing"", -""relevance"": 0.9080164432525635, -""dbpedia_resource"": ""http://dbpedia.org/resource/Computing"" -}, -{ -""text"": ""Shor's_algorithm"", -""relevance"": 0.7580527067184448, -""dbpedia_resource"": ""http://dbpedia.org/resource/Shor's_algorithm"" -}, -{ -""text"": ""Quantum_dot"", -""relevance"": 0.7069802284240723, -""dbpedia_resource"": ""http://dbpedia.org/resource/Quantum_dot"" -}, -{ -""text"": ""Quantum_algorithm"", -""relevance"": 0.7063655853271484, -""dbpedia_resource"": ""http://dbpedia.org/resource/Quantum_algorithm"" -}, -{ -""text"": ""Qubit"", -""relevance"": 0.7063655853271484, -" -156F8A58809D3A4D8F80D02481E5ADDE513EDEAA_6,156F8A58809D3A4D8F80D02481E5ADDE513EDEAA,"""dbpedia_resource"": ""http://dbpedia.org/resource/Qubit"" -}, -{ -""text"": ""DNA_computing"", -""relevance"": 0.7044616341590881, -""dbpedia_resource"": ""http://dbpedia.org/resource/DNA_computing"" -}, -{ -""text"": ""Computation"", -""relevance"": 0.7044616341590881, -""dbpedia_resource"": ""http://dbpedia.org/resource/Computation"" -}, -{ -""text"": ""Computer"", -""relevance"": 0.7044616341590881, -""dbpedia_resource"": ""http://dbpedia.org/resource/Computer"" -} -], -""producer_id"": { -""name"": ""Alchemy Concepts"", -""version"": ""0.0.1"" -} -} - -Parent topic:[Watson Natural Language Processing block catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html) -" -B32394103127310AF0F4BF240CFD0B26399B685D_0,B32394103127310AF0F4BF240CFD0B26399B685D," Emotion classification - -The Emotion model in the Watson Natural Language Processing classification workflow classifies the emotion in the input text. - -Workflow nameensemble_classification-workflow_en_emotion-stock - -" -B32394103127310AF0F4BF240CFD0B26399B685D_1,B32394103127310AF0F4BF240CFD0B26399B685D,"Supported languages - - - -* English and French - - - -" -B32394103127310AF0F4BF240CFD0B26399B685D_2,B32394103127310AF0F4BF240CFD0B26399B685D,"Capabilities - -The Emotion classification model is a pre-trained document classification model for the task of classifying the emotion in the input document. The model identifies the emotion of a document, and classifies it as: - - - -* Anger -* Disgust -* Fear -* Joy -* Sadness - - - -Unlike the Sentiment model, which classifies each individual sentence, the Emotion model classifies the entire input document. As such, the Emotion model works optimally when the input text to classify is no longer than 1000 characters. If you would like to classify texts longer than 1000 characters, split the text into sentences or paragraphs for example and apply the Emotion model on each sentence or paragraph. - -A document may be classified into multiple categories or into no category. - - - -Capabilities of emotion classification based on an example - - Capabilities Example - - Identifies the emotion of a document and classifies it ""I'm so annoyed that this code won't run --> anger, sadness - - - -" -B32394103127310AF0F4BF240CFD0B26399B685D_3,B32394103127310AF0F4BF240CFD0B26399B685D,"Dependencies on other blocks - -None - -" -B32394103127310AF0F4BF240CFD0B26399B685D_4,B32394103127310AF0F4BF240CFD0B26399B685D,"Code sample - -import watson_nlp - - Load the Emotion workflow model for English -emotion_model = watson_nlp.load('ensemble_classification-workflow_en_emotion-stock') - - Run the Emotion model -emotion_result = emotion_model.run(""I'm so annoyed that this code won't run"") -print(emotion_result) - -Output of the code sample: - -{ -""classes"": [ -{ -""class_name"": ""anger"", -""confidence"": 0.6074999913276445 -}, -{ -""class_name"": ""sadness"", -""confidence"": 0.2913303280964709 -}, -{ -""class_name"": ""fear"", -""confidence"": 0.10266377929247113 -}, -{ -""class_name"": ""disgust"", -""confidence"": 0.018745421312542355 -}, -{ -""class_name"": ""joy"", -""confidence"": 0.0020577122567564804 -} -], -""producer_id"": { -""name"": ""Voting based Ensemble"", -""version"": ""0.0.1"" -} - -Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html) -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_0,A8A2D53661EB9EF173F7CC4794096A134123DACA," Entity extraction - -The Watson Natural Language Processing Entity extraction models extract entities from input text. - -For details, on available extraction types, refer to these sections: - - - -* [Machine-learning-based extraction for general entities](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html?context=cdpaas&locale=enmachine-learning-general) -* [Machine-learning-based extraction for PII entities](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html?context=cdpaas&locale=enmachine-learning-pii) -* [Rule-based extraction for general entities](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html?context=cdpaas&locale=enrule-based-general) -* [Rule-based extraction for PII entities](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html?context=cdpaas&locale=enrule-based-pii) - - - -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_1,A8A2D53661EB9EF173F7CC4794096A134123DACA," Machine-learning-based extraction for general entities - -The machine-learning-based extraction models are trained on labeled data for the more complex entity types such as person, organization and location. - -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_2,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Capabilities - -The entity models extract entities from the input text. The following types of entities are recognized: - - - -* Date -* Duration -* Facility -* Geographic feature -* Job title -* Location -* Measure -* Money -* Ordinal -* Organization -* Person -* Time - - - - - -Capabilities of machine-learning-based extraction based on an example - - Capabilities Examples - - Extracts entities from the input text. IBM's CEO Arvind Krishna is based in the US -> IBMOrganization , CEOJobTitle, Arvind KrishnaPerson, USLocation - - - -Available workflows and blocks differ, depending on the runtime used. - - - -Blocks and workflows for handling general entities with their corresponding runtimes - - Block or workflow name Available in runtime - - entity-mentions_transformer-workflow_multilingual_slate.153m.distilled [Runtime 23.1](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html?context=cdpaas&locale=enruntime-231) - entity-mentions_transformer-workflow_multilingual_slate.153m.distilled-cpu [Runtime 23.1](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html?context=cdpaas&locale=enruntime-231) - entity-mentions_bert_multi_stock [Runtime 22.2](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html?context=cdpaas&locale=enruntime-222) - - - -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_3,A8A2D53661EB9EF173F7CC4794096A134123DACA," Machine-learning-based workflows for general entities in Runtime 23.1 - -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_4,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Workflow names - - - -* entity-mentions_transformer-workflow_multilingual_slate.153m.distilled: this workflow can be used on both CPUs and GPUs. -* entity-mentions_transformer-workflow_multilingual_slate.153m.distilled-cpu: this workflow is optimized for CPU-based runtimes. - - - -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_5,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Supported languages - -Entity extraction is available for the following languages. - -For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.htmllang-codes): - -ar, cs, da, de, en, es, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pt, ro, ru, sk, sv, tr, zh-cn - -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_6,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Code sample - -import watson_nlp - Load the workflow model -entities_workflow = watson_nlp.load('entity-mentions_transformer-workflow_multilingual_slate.153m.distilled') - Run the entity extraction workflow on the input text -entities = entities_workflow.run('IBM's CEO Arvind Krishna is based in the US', language_code=""en"") -print(entities.get_mention_pairs()) - -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_7,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Output of the code sample: - -[('IBM', 'Organization'), ('CEO', 'JobTitle'), ('Arvind Krishna', 'Person'), ('US', 'Location')] - -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_8,A8A2D53661EB9EF173F7CC4794096A134123DACA," Machine-learning-based blocks for general entities in Runtime 22.2 - -Block namesentity-mentions_bert_multi_stock - -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_9,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Supported languages - -Entity extraction is available for the following languages. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.htmllang-codes). - -ar, cs, da, de, en, es, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pt, ro, ru, sk, sv, tr, zh-cn - -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_10,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Dependencies on other blocks - -The following block must run before you can run the Entity extraction block: - - - -* syntax_izumo__stock - - - -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_11,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Code sample - -import watson_nlp - - Load Syntax Model for English, and the multilingual BERT Entity model -syntax_model = watson_nlp.load('syntax_izumo_en_stock') -bert_entity_model = watson_nlp.load('entity-mentions_bert_multi_stock') - - Run the syntax model on the input text -syntax_prediction = syntax_model.run('IBM's CEO Arvind Krishna is based in the US') - - Run the entity mention model on the result of syntax model -bert_entity_mentions = bert_entity_model.run(syntax_prediction) -print(bert_entity_mentions.get_mention_pairs()) - -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_12,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Output of the code sample: - -[('IBM', 'Organization'), ('CEO', 'JobTitle'), ('Arvind Krishna', 'Person'), ('US', 'Location')] - -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_13,A8A2D53661EB9EF173F7CC4794096A134123DACA," Machine-learning-based extraction for PII entities - -Block namesentity-mentions_bilstm_en_pii - - - -Blocks for handling Personal Identifiable Information (PII) entities with their corresponding runtimes - - Block name Available in runtime - - entity-mentions_bilstm_en_pii Runtime 22.2, Runtime 23.1 - - - -The entity-mentions_bilstm_en_pii machine-learning based extraction model is trained on labeled data for types person and location. - -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_14,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Capabilities - -The entity-mentions_bilstm_en_pii block recognizes the following types of entities: - - - -Entities extracted by the entity-mentions_bilstm_en_pii block - - Entity type name Description Supported languages - - Location All geo-political regions, continents, countries, and street names, states, provinces, cities, towns or islands. en - Person Any being; living, nonliving, fictional or real. en - - - -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_15,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Dependencies on other blocks - -The following block must run before you can run the entity-mentions_bilstm_en_pii block: - - - -* syntax_izumo_en_stock - - - -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_16,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Code sample - -import os - -import watson_nlp - - Load Syntax and a Entity Mention BiLSTM model for English - -syntax_model = watson_nlp.load('syntax_izumo_en_stock') - -entity_model = watson_nlp.load('entity-mentions_bilstm_en_pii') - -text = 'Denver is the capital of Colorado. The total estimated government spending in Colorado in fiscal year 2016 was $36.0 billion. IBM office is located in downtown Denver. Michael Hancock is the mayor of Denver.' - - Run the syntax model on the input text - -syntax_prediction = syntax_model.run(text) - - Run the entity mention model on the result of the syntax analysis - -entity_mentions = entity_model.run(syntax_prediction) - -print(entity_mentions) - -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_17,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Output of the code sample: - -{ -""mentions"": [ -{ -""span"": { -""begin"": 0, -""end"": 6, -""text"": ""Denver"" -}, -""type"": ""Location"", -""producer_id"": { -""name"": ""BiLSTM Entity Mentions"", -""version"": ""1.0.0"" -}, -""confidence"": 0.6885626912117004, -""mention_type"": ""MENTT_UNSET"", -""mention_class"": ""MENTC_UNSET"", -""role"": """" -}, -{ -""span"": { -""begin"": 25, -""end"": 33, -""text"": ""Colorado"" -}, -""type"": ""Location"", -""producer_id"": { -""name"": ""BiLSTM Entity Mentions"", -""version"": ""1.0.0"" -}, -""confidence"": 0.8509215116500854, -""mention_type"": ""MENTT_UNSET"", -""mention_class"": ""MENTC_UNSET"", -""role"": """" -}, -{ -""span"": { -""begin"": 78, -""end"": 86, -""text"": ""Colorado"" -}, -""type"": ""Location"", -""producer_id"": { -""name"": ""BiLSTM Entity Mentions"", -""version"": ""1.0.0"" -}, -""confidence"": 0.9928259253501892, -""mention_type"": ""MENTT_UNSET"", -""mention_class"": ""MENTC_UNSET"", -""role"": """" -}, -{ -""span"": { -""begin"": 151, -""end"": 166, -""text"": ""downtown Denver"" -}, -""type"": ""Location"", -""producer_id"": { -""name"": ""BiLSTM Entity Mentions"", -""version"": ""1.0.0"" -}, -""confidence"": 0.48378944396972656, -""mention_type"": ""MENTT_UNSET"", -""mention_class"": ""MENTC_UNSET"", -""role"": """" -}, -{ -""span"": { -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_18,A8A2D53661EB9EF173F7CC4794096A134123DACA,"""begin"": 168, -""end"": 183, -""text"": ""Michael Hancock"" -}, -""type"": ""Person"", -""producer_id"": { -""name"": ""BiLSTM Entity Mentions"", -""version"": ""1.0.0"" -}, -""confidence"": 0.9972871541976929, -""mention_type"": ""MENTT_UNSET"", -""mention_class"": ""MENTC_UNSET"", -""role"": """" -} -], -""producer_id"": { -""name"": ""BiLSTM Entity Mentions"", -""version"": ""1.0.0"" -} -} - -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_19,A8A2D53661EB9EF173F7CC4794096A134123DACA," Rule-based extraction for general entities - -The rule-based model entity-mentions_rbr_xx_stock identifies syntactically regular entities. - -Block nameentity-mentions_rbr_xx_stock - -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_20,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Capabilities - -Rule-based extraction handles syntactically regular entity types. The entity block extract entities from the input text. The following types of entities are recognized: - - - -* PhoneNumber -* EmailAddress -* Number -* Percent -* IPAddress -* HashTag -* TwitterHandle -* URLDate - - - - - -Capabilities of rule-based extraction based on an example - - Capabilities Examples - - Extracts syntactically regular entity types from the input text. My email is john@us.ibm.com -> john@us.ibm.comEmailAddress - - - -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_21,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Supported languages - -Entity extraction is available for the following languages. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.htmllang-codes). - -ar, cs, da, de, en, es, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pt, ro, ru, sk, sv, tr, zh-cn, zh-tw - -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_22,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Dependencies on other blocks - -None - -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_23,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Code sample - -import watson_nlp - - Load a rule-based Entity Mention model for English -rbr_entity_model = watson_nlp.load('entity-mentions_rbr_en_stock') - - Run the entity model on the input text -rbr_entity_mentions = rbr_entity_model.run('My email is john@us.ibm.com') -print(rbr_entity_mentions) - -Output of the code sample: - -{ -""mentions"": [ -{ -""span"": { -""begin"": 12, -""end"": 27, -""text"": ""john@us.ibm.com"" -}, -""type"": ""EmailAddress"", -""producer_id"": { -""name"": ""RBR mentions"", -""version"": ""0.0.1"" -}, -""confidence"": 0.8, -""mention_type"": ""MENTT_UNSET"", -""mention_class"": ""MENTC_UNSET"", -""role"": """" -} -], -""producer_id"": { -""name"": ""RBR mentions"", -""version"": ""0.0.1"" -} -} - -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_24,A8A2D53661EB9EF173F7CC4794096A134123DACA," Rule-based extraction for PII entities - -The rule-based model entity-mentions_rbr_multi_pii handles the majority of the types by identifying common formats of PII entities and performing possible checksum or validations as appropriate for each entity type. For example, credit card number candidates are validated using the Luhn algorithm. - -Block nameentity-mentions_rbr_multi_pii - -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_25,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Capabilities - -The entity block entity-mentions_rbr_multi_pii recognizes the following types of entities: - - - -Entities extracted by the entity-mentions_rbr_multi_pii block - - Entity type name Description Supported languages - - BankAccountNumber.CreditCardNumber.Amex Credit card number for card types AMEX (15 digits). Checked through the Luhn algorithm. All - BankAccountNumber.CreditCardNumber.Master Credit card number for card types Master card (16 digits). Checked through the Luhn algorithm. All - BankAccountNumber.CreditCardNumber.Other Credit card number for left-over category of other types. Checked through the Luhn algorithm. All - BankAccountNumber.CreditCardNumber.Visa Credit card number for card types VISA (16 to 19 digits). Checked through the Luhn algorithm. All - EmailAddress Email addresses, for example: john@gmail.com ar, cs, da, de, en, es, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sv, tr, zh-cn - IPAddress IPv4 and IPv6 addresses, for example, 10.142.250.123 ar, cs, da, de, en, es, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sv, tr, zh-cn - PhoneNumber Any specific phone number, for example, 0511-123-456 ar, cs, da, de, en, es, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sv, tr, zh-cn - - - -Some PII entity type names are country-specific. The _ in the following entity types is a placeholder for a country code. - - - -* BankAccountNumber.BBAN._ : These are more variable national bank account numbers and the extraction is mostly language-specific without a general checksum algorithm. -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_26,A8A2D53661EB9EF173F7CC4794096A134123DACA,"* BankAccountNumber.IBAN._ : Highly standardized IBANs are supported in a language-independent way and with a checksum algorithm. -* NationalNumber.NationalID._: These national IDs don’t have a (published) checksum algorithm, and are being extracted on a language-specific basis. -* NationalNumber.Passport._ : Checksums are implemented only for the countries where a checksum algorithm exists. These are specifically extracted language with additional context restrictions. -* NationalNumber.TaxID._ : These IDs don't have a (published) checksum algorithm, and are being extracted on a language-specific basis. - - - -Which entity types are available for which languages and which country code to use is listed in the following table. - - - -Country-specific PII entity types - - Country Entity Type Name Description Supported Languages - - Austria BankAccountNumber.BBAN.AT Basic bank account number de - BankAccountNumber.IBAN.AT International bank account number all - NationalNumber.Passport.AT Passport number de - NationalNumber.TaxID.AT Tax identification number de - Belgium BankAccountNumber.BBAN.BE Basic bank account number fr, nl - BankAccountNumber.IBAN.BE International bank account number all - NationalNumber.NationalID.BE National identification number fr, nl - NationalNumber.Passport.BE Passport number fr, nl - Bulgaria BankAccountNumber.BBAN.BG Basic bank account number bg - BankAccountNumber.IBAN.BG International bank account number all - NationalNumber.NationalID.BG National identification number bg - Canada NationalNumber.SocialInsuranceNumber.CA Social insurance number. Checksum algorithm is implemented. en, fr - Croatia BankAccountNumber.BBAN.HR Basic bank account number hr - BankAccountNumber.IBAN.HR International bank account number all - NationalNumber.NationalID.HR National identification number hr - NationalNumber.TaxID.HR Tax identification number hr - Cyprus BankAccountNumber.BBAN.CY Basic bank account number el - BankAccountNumber.IBAN.CY International bank account number all - NationalNumber.TaxID.CY Tax identification number el -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_27,A8A2D53661EB9EF173F7CC4794096A134123DACA," Czechia BankAccountNumber.BBAN.CZ Basic bank account number cs - BankAccountNumber.IBAN.CZ International bank account number cs - NationalNumber.NationalID.CZ National identification number cs - NationalNumber.TaxID.CZ Tax identification number cs - Denmark BankAccountNumber.BBAN.DK Basic bank account number da - BankAccountNumber.IBAN.DK International bank account number all - NationalNumber.NationalID.DK National identification number da - Estonia BankAccountNumber.BBAN.EE Basic bank account number et - BankAccountNumber.IBAN.EE International bank account number all - NationalNumber.NationalID.EE National identification number et - Finland BankAccountNumber.BBAN.FI Basic bank account number fi - BankAccountNumber.IBAN.FI International bank account number all - NationalNumber.NationalID.FI National identification number fi - NationalNumber.Passport.FI Passport number fi - France BankAccountNumber.BBAN.FR Basic bank account number fr - BankAccountNumber.IBAN.FR International bank account number all - NationalNumber.Passport.FR Passport number fr - NationalNumber.SocialInsuranceNumber.FR Social insurance number. Checksum algorithm is implemented. fr - Germany BankAccountNumber.BBAN.DE Basic bank aAccount number de - BankAccountNumber.IBAN.DE International bank account number all - NationalNumber.Passport.DE Passport number de - NationalNumber.SocialInsuranceNumber.DE Social insurance number. Checksum algorithm is implemented. de - Greece BankAccountNumber.BBAN.GR Basic bank account number el - BankAccountNumber.IBAN.GR International bank account number all - NationalNumber.Passport.GR Passport number el - NationalNumber.TaxID.GR Tax identification number el - NationalNumber.NationalID.GR National ID number el - Hungary BankAccountNumber.BBAN.HU Basic bank account number hu - BankAccountNumber.IBAN.HU International bank account number all - NationalNumber.NationalID.HU National identification number hu - NationalNumber.TaxID.HU Tax identification number hu - Iceland BankAccountNumber.BBAN.IS Basic bank account number is - BankAccountNumber.IBAN.IS International bank account number all - NationalNumber.NationalID.IS National identification number is -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_28,A8A2D53661EB9EF173F7CC4794096A134123DACA," Ireland BankAccountNumber.BBAN.IE Basic bank account number en - BankAccountNumber.IBAN.IE International bank account number all - NationalNumber.NationalID.IE National identification number en - NationalNumber.Passport.IE Passport number en - NationalNumber.TaxID.IE Tax identification number en - Italy BankAccountNumber.BBAN.IT Basic bank account number it - BankAccountNumber.IBAN.IT International bank account number all - NationalNumber.NationalID.IT National identification number it - NationalNumber.Passport.IT Passport number it - Latvia BankAccountNumber.BBAN.LV Basic bank account number lv - BankAccountNumber.IBAN.LV International bank account number all - NationalNumber.NationalID.LV National identification number lv - Liechtenstein BankAccountNumber.BBAN.LI Basic bank account number de - BankAccountNumber.IBAN.LI International bank account number all - Lithuania BankAccountNumber.BBAN.LT Basic bank account number lt - BankAccountNumber.IBAN.LT International bank account number all - NationalNumber.NationalID.LT National identification number lt - Luxembourg BankAccountNumber.BBAN.LU Basic bank account number de, fr - BankAccountNumber.IBAN.LU International bank account number all - NationalNumber.TaxID.LU Tax identification number de, fr - Malta BankAccountNumber.BBAN.MT Basic bank account number mt - BankAccountNumber.IBAN.MT International bank account number all - Netherlands BankAccountNumber.BBAN.NL Basic bank account number nl - BankAccountNumber.IBAN.NL International bank account number all - NationalNumber.NationalID.NL National identification number nl - NationalNumber.Passport.NL Passport number nl - Norway BankAccountNumber.BBAN.NO Basic bank account number no - BankAccountNumber.IBAN.NO International bank account number all - NationalNumber.NationalID.NO National identification number no - NationalNumber.NationalID.NO.Old National identification number old no - NationalNumber.Passport.NO Passport number no - Poland BankAccountNumber.BBAN.PL Basic bank account number pl - BankAccountNumber.IBAN.PL International bank account number all - NationalNumber.NationalID.PL National identification number pl -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_29,A8A2D53661EB9EF173F7CC4794096A134123DACA," NationalNumber.Passport.PL Passport number pl - NationalNumber.TaxID.PL Tax identification number pl - Portugal BankAccountNumber.IBAN.PT International bank account number all - BankAccountNumber.BBAN.PT Basic bank account number pt - NationalNumber.NationalID.PT National identification number pt - NationalNumber.NationalID.PT.Old National identification number, obsolete format pt - NationalNumber.TaxID.PT Tax identification number pt - Romania BankAccountNumber.BBAN.RO Basic bank account number ro - BankAccountNumber.IBAN.RO International bank account number all - NationalNumber.NationalID.RO National identification number ro - NationalNumber.TaxID.RO Tax identification number ro - Slovakia BankAccountNumber.IBAN.SK International bank account number all - BankAccountNumber.BBAN.SK Basic bank account number sk - NationalNumber.TaxID.SK Tax identification number sk - NationalNumber.NationalID.SK National identification number sk - Slovenia BankAccountNumber.IBAN.SI International bank account number all - Spain BankAccountNumber.IBAN.ES International bank account number all - BankAccountNumber.BBAN.ES Basic bank account number es - NationalNumber.NationalID.ES National identification number es - NationalNumber.Passport.ES Passport number es - NationalNumber.TaxID.ES Tax identification number es - Sweden BankAccountNumber.IBAN.SE International bank account number all - BankAccountNumber.BBAN.SE Basic bank account number sv - NationalNumber.NationalID.SE National identification number sv - NationalNumber.Passport.SE Passport number sv - Switzerland BankAccountNumber.IBAN.CH International bank account number all - BankAccountNumber.BBAN.CH Basic bank account number de, fr, it - NationalNumber.NationalID.CH National identification number de, fr, it - NationalNumber.Passport.CH Passport number de, fr, it - NationalNumber.NationalID.CH.Old National identification number, obsolete format de, fr, it - United Kingdom of Great Britain and Northern Ireland BankAccountNumber.IBAN.GB International bank account number all - NationalNumber.SocialSecurityNumber.GB.NHS National Health Service number all -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_30,A8A2D53661EB9EF173F7CC4794096A134123DACA," NationalNumber.SocialSecurityNumber.GB.NINO National Social Security Insurance number all - NationalNumber.NationalID.GB.Old National ID number, obsolete format all - NationalNumber.Passport.GB Passport Number. Checksum algorithm is not implemented and hence come with additional context restrictions. all - United States NationalNumber.SocialSecurityNumber.US Social Security number. Checksum algorithm is not implemented and hence come with additional context restrictions. en - NationalNumber.Passport.US Passport Number. Checksum algorithm is not implemented and hence come with additional context restrictions. en - - - -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_31,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Dependencies on other blocks - -None - -" -A8A2D53661EB9EF173F7CC4794096A134123DACA_32,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Code sample - -import watson_nlp - - Load the RBR PII model. Note that this is a multilingual model supporting multiple languages. -rbr_entity_model = watson_nlp.load('entity-mentions_rbr_multi_pii') - - Run the RBR model. Note that language code of the input text is passed as a parameter to the run method. -rbr_entity_mentions = rbr_entity_model.run('Please find my credit card number here: 378282246310005. Thanks for the payment.', language_code='en') -print(rbr_entity_mentions) - -Output of the code sample: - -{ -""mentions"": [ -{ -""span"": { -""begin"": 40, -""end"": 55, -""text"": ""378282246310005"" -}, -""type"": ""BankAccountNumber.CreditCardNumber.Amex"", -""producer_id"": { -""name"": ""RBR mentions"", -""version"": ""0.0.1"" -}, -""confidence"": 0.8, -""mention_type"": ""MENTT_UNSET"", -""mention_class"": ""MENTC_UNSET"", -""role"": """" -} -], -""producer_id"": { -""name"": ""RBR mentions"", -""version"": ""0.0.1"" -} -} - -Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html) -" -1EC0AABFA78901776901CB2C57AFF822855B6B5E_0,1EC0AABFA78901776901CB2C57AFF822855B6B5E," Hierarchical text categorization - -The Watson Natural Language Processing Categories block assigns individual nodes within a hierarchical taxonomy to an input document. For example, in the text IBM announces new advances in quantum computing, examples of extracted categories are technology and computing/hardware/computer and technology and computing/operating systems. These categories represent level 3 and level 2 nodes in a hierarchical taxonomy. - -This block differs from the Classification block in that training starts from a set of seed phrases associated with each node in the taxonomy, and does not require labeled documents. - -Note that the Hierarchical text categorization block can only be used in a notebook that is started in an environment based on Runtime 22.2 or Runtime 23.1 that includes the Watson Natural Language Processing library. - -" -1EC0AABFA78901776901CB2C57AFF822855B6B5E_1,1EC0AABFA78901776901CB2C57AFF822855B6B5E,"Block name - -categories_esa_en_stock - -" -1EC0AABFA78901776901CB2C57AFF822855B6B5E_2,1EC0AABFA78901776901CB2C57AFF822855B6B5E,"Supported languages - -The Categories block is available for the following languages. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.htmllang-codes). - -de, en - -" -1EC0AABFA78901776901CB2C57AFF822855B6B5E_3,1EC0AABFA78901776901CB2C57AFF822855B6B5E,"Capabilities - -Use this block to determine the topics of documents on the web by categorizing web pages into a taxonomy of general domain topics, for ad placement and content recommendation. The model was tested on data from news reports and general web pages. - -For a list of the categories that can be returned, see [Category types](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-returned-categories.html). - -" -1EC0AABFA78901776901CB2C57AFF822855B6B5E_4,1EC0AABFA78901776901CB2C57AFF822855B6B5E,"Dependencies on other blocks - -The following block must run before you can run the hierarchical categorization block: - - - -* syntax_izumo__stock - - - -" -1EC0AABFA78901776901CB2C57AFF822855B6B5E_5,1EC0AABFA78901776901CB2C57AFF822855B6B5E,"Code sample - -import watson_nlp - - Load Syntax and a Categories model for English -syntax_model = watson_nlp.load('syntax_izumo_en_stock') -categories_model = watson_nlp.load('categories_esa_en_stock') - - Run the syntax model on the input text -syntax_prediction = syntax_model.run('IBM announced new advances in quantum computing') - - Run the categories model on the result of syntax -categories = categories_model.run(syntax_prediction) -print(categories) - -Output of the code sample: - -{ -""categories"": [ -{ -""labels"": -""technology & computing"", -""computing"" -], -""score"": 0.992489, -""explanation"": ] -}, -{ -""labels"": -""science"", -""physics"" -], -""score"": 0.945449, -""explanation"": ] -} -], -""producer_id"": { -""name"": ""ESA Hierarchical Categories"", -""version"": ""1.0.0"" -} -} - -Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html) -" -BCA763BF5F62BDC635AC2E0E7C9C6A47B04745A4_0,BCA763BF5F62BDC635AC2E0E7C9C6A47B04745A4," Keyword extraction and ranking - -The Watson Natural Language Processing Keyword extraction with ranking block extracts noun phrases from input text based on their relevance. - -" -BCA763BF5F62BDC635AC2E0E7C9C6A47B04745A4_1,BCA763BF5F62BDC635AC2E0E7C9C6A47B04745A4,"Block name - -keywords_text-rank__stock - -" -BCA763BF5F62BDC635AC2E0E7C9C6A47B04745A4_2,BCA763BF5F62BDC635AC2E0E7C9C6A47B04745A4,"Supported language - -Keyword extraction with text ranking is available for the following languages. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.htmllang-codes). - -ar, cs, da, de, en, es, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pt, ro, ru, sk, sv, tr, zh-cn - -" -BCA763BF5F62BDC635AC2E0E7C9C6A47B04745A4_3,BCA763BF5F62BDC635AC2E0E7C9C6A47B04745A4,"Capabilities - -The keywords and text rank block ranks noun phrases extracted from an input document based on how relevant they are within the document. - - - -Capabilities of keyword extraction and ranking based on an example - - Capabilities Examples - - Ranks extracted noun phrases based on relevance ""Anna went to school at University of California Santa Cruz. Anna joined the university in 2015."" -> Anna, University of California Santa Cruz - - - -" -BCA763BF5F62BDC635AC2E0E7C9C6A47B04745A4_4,BCA763BF5F62BDC635AC2E0E7C9C6A47B04745A4,"Dependencies on other blocks - -The following blocks must run before you can run the Keyword extraction with ranking block: - - - -* syntax_izumo__stock -* noun-phrases_rbr__stock - - - -" -BCA763BF5F62BDC635AC2E0E7C9C6A47B04745A4_5,BCA763BF5F62BDC635AC2E0E7C9C6A47B04745A4,"Code sample - -import watson_nlp -text = ""Anna went to school at University of California Santa Cruz. Anna joined the university in 2015."" - - Load Syntax, Noun Phrases and Keywords models for English -syntax_model = watson_nlp.load('syntax_izumo_en_stock') -noun_phrases_model = watson_nlp.load('noun-phrases_rbr_en_stock') -keywords_model = watson_nlp.load('keywords_text-rank_en_stock') - - Run the Syntax and Noun Phrases models -syntax_prediction = syntax_model.run(text, parsers=('token', 'lemma', 'part_of_speech')) -noun_phrases = noun_phrases_model.run(text) - - Run the keywords model -keywords = keywords_model.run(syntax_prediction, noun_phrases, limit=2) -print(keywords) - -Output of the code sample: - -'keywords': -[{'text': 'University of California Santa Cruz', 'relevance': 0.939524, 'count': 1}, -{'text': 'Anna', 'relevance': 0.891002, 'count': 2}] - -Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html) -" -E1074D5C232CB13E3CD1FB6E832753626D2FE30E_0,E1074D5C232CB13E3CD1FB6E832753626D2FE30E," Language detection - -The Watson Natural Language Processing Language Detection identifies the language of input text. - -Block namelang-detect_izumo_multi_stock - -" -E1074D5C232CB13E3CD1FB6E832753626D2FE30E_1,E1074D5C232CB13E3CD1FB6E832753626D2FE30E,"Supported languages - -The Language Detection block is able to detect the following languages: - -af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw - -" -E1074D5C232CB13E3CD1FB6E832753626D2FE30E_2,E1074D5C232CB13E3CD1FB6E832753626D2FE30E,"Capabilities - -Use this block to detect the language of an input text. - -" -E1074D5C232CB13E3CD1FB6E832753626D2FE30E_3,E1074D5C232CB13E3CD1FB6E832753626D2FE30E,"Dependencies on other blocks - -None - -" -E1074D5C232CB13E3CD1FB6E832753626D2FE30E_4,E1074D5C232CB13E3CD1FB6E832753626D2FE30E,"Code sample - - Load the language detection model -lang_detection_model = watson_nlp.load('lang-detect_izumo_multi_stock') - - Run it on input text -detected_lang = lang_detection_model.run('IBM announced new advances in quantum computing') - - Retrieve language ISO code -print(detected_lang.to_iso_format()) - -Output of the code sample: - -EN - -Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html) -" -883359C27F09C3368292819B64149182441721E1_0,883359C27F09C3368292819B64149182441721E1," Noun phrase extraction - -The Watson Natural Language Processing Noun phrase extraction block extracts noun phrases from input text. - -" -883359C27F09C3368292819B64149182441721E1_1,883359C27F09C3368292819B64149182441721E1,"Block name - -noun-phrases_rbr__stock - -Note: The ""rbr"" abbreviation in model name means rule-based reasoning. RBR models handle syntactically regular entity types such as number, email and phone. - -" -883359C27F09C3368292819B64149182441721E1_2,883359C27F09C3368292819B64149182441721E1,"Supported languages - -Noun phrase extraction is available for the following languages. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.htmllang-codes). - -ar, cs, da, de, es, en, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pt, ro, ru, sk, sv, tr, zh_cn, zh_tw - -" -883359C27F09C3368292819B64149182441721E1_3,883359C27F09C3368292819B64149182441721E1,"Capabilities - -The Noun phrase extraction block extracts non-overlapping noun phrases from the input text. - - - -Capabilities of noun phrase extraction based on an example - - Capabilities Examples - - Extraction of non-overlapping noun phrases ""Anna went to school at University of California Santa Cruz"" -> Anna, school, University of California Santa Cruz - - - -" -883359C27F09C3368292819B64149182441721E1_4,883359C27F09C3368292819B64149182441721E1,"Dependencies on other blocks - -None - -" -883359C27F09C3368292819B64149182441721E1_5,883359C27F09C3368292819B64149182441721E1,"Code sample - -import watson_nlp - - Load the model for English -noun_phrases_model = watson_nlp.load('noun-phrases_rbr_en_stock') - - Run the model on the input text -noun_phrases = noun_phrases_model.run('Anna went to school at University of California Santa Cruz') -print(noun_phrases) - -Output of the code sample: - -{ -""noun_phrases"": [ -{ -""span"": { -""begin"": 0, -""end"": 4, -""text"": ""Anna"" -} -}, -{ -""span"": { -""begin"": 13, -""end"": 19, -""text"": ""school"" -} -}, -{ -""span"": { -""begin"": 23, -""end"": 58, -""text"": ""University of California Santa Cruz"" -} -} -], -""producer_id"": { -""name"": ""RBR Noun phrases"", -""version"": ""0.0.1"" -} -} - -Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html) -" -B4B2E864E1ABD4EA20845750E9567225BB3F417E_0,B4B2E864E1ABD4EA20845750E9567225BB3F417E," Relations extraction - -Watson Natural Language Processing Relations extraction encapsulates algorithms for extracting relations between two entity mentions. For example, in the text Lionel Messi plays for FC Barcelona. a relation extraction model may decide that the entities Lionel Messi and F.C. Barcelona are in a relationship with each other, and the relationship type is works for. - -" -B4B2E864E1ABD4EA20845750E9567225BB3F417E_1,B4B2E864E1ABD4EA20845750E9567225BB3F417E,"Capabilities - -Use this model to detect relations between discovered entities. - -The following table lists common relations types that are available out-of-the-box after you have run the entity models. - - - -Table 1. Available common relation types between entities - - Relation Description - - affiliatedWith Exists between two entities that have an affiliation or are similarly connected. - basedIn Exists between an Organization and the place where it is mainly, only, or intrinsically located. - bornAt Exists between a Person and the place where they were born. - bornOn Exists between a Person and the Date or Time when they were born. - clientOf Exists between two entities when one is a direct business client of the other (that is, pays for certain services or products). - colleague Exists between two Persons who are part of the same Organization. - competitor Exists between two Organizations that are engaged in economic competition. - contactOf Relates contact information with an entity. - diedAt Exists between a Person and the place at which he, she, or it died. - diedOn Exists between a Person and the Date or Time on which he, she, or it died. - dissolvedOn Exists between an Organization or URL and the Date or Time when it was dissolved. - educatedAt Exists between a Person and the Organization at which he or she is or was educated. - employedBy Exists between two entities when one pays the other for certain work or services; monetary reward must be involved. In many circumstances, marking this relation requires world knowledge. - foundedOn Exists between an Organization or URL and the Date or Time on which it was founded. - founderOf Exists between a Person and a Facility, Organization, or URL that they founded. - locatedAt Exists between an entity and its location. - managerOf Exists between a Person and another entity such as a Person or Organization that he or she manages as his or her job. - memberOf Exists between an entity, such as a Person or Organization, and another entity to which he, she, or it belongs. - ownerOf Exists between an entity, such as a Person or Organization, and an entity that he, she, or it owns. The owner does not need to have permanent ownership of the entity for the relation to exist. -" -B4B2E864E1ABD4EA20845750E9567225BB3F417E_2,B4B2E864E1ABD4EA20845750E9567225BB3F417E," parentOf Exists between a Person and their children or stepchildren. - partner Exists between two Organizations that are engaged in economic cooperation. - partOf Exists between a smaller and a larger entity of the same type or related types in which the second entity subsumes the first. If the entities are both events, the first must occur within the time span of the second for the relation to be recognized. - partOfMany Exists between smaller and larger entities of the same type or related types in which the second entity, which must be plural, includes the first, which can be singular or plural. - populationOf Exists between a place and the number of people located there, or an organization and the number of members or employees it has. - measureOf This relation indicates the quantity of an entity or measure (height, weight, etc) of an entity. - relative Exists between two Persons who are relatives. To identify parents, children, siblings, and spouses, use the parentOf, siblingOf, and spouseOf relations. - residesIn Exists between a Person and a place where they live or previously lived. - shareholdersOf Exists between a Person or Organization, and an Organization of which the first entity is a shareholder. - siblingOf Exists between a Person and their sibling or stepsibling. - spokespersonFor Exists between a Person and an Facility, Organization, or Person that he or she represents. - spouseOf Exists between two Persons that are spouses. - subsidiaryOf Exists between two Organizations when the first is a subsidiary of the second. - - - -" -B4B2E864E1ABD4EA20845750E9567225BB3F417E_3,B4B2E864E1ABD4EA20845750E9567225BB3F417E,"In [Runtime 22.2](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-relation-extraction.html?context=cdpaas&locale=enruntime-222), relation extraction is provided as an analysis block, which depends on the Syntax analysis block and a entity mention extraction block. Starting with [Runtime 23.1](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-relation-extraction.html?context=cdpaas&locale=enruntime-231), relation extraction is provided as a workflow, which is directly run on the input text. - -" -B4B2E864E1ABD4EA20845750E9567225BB3F417E_4,B4B2E864E1ABD4EA20845750E9567225BB3F417E," Relation extraction in Runtime 23.1 - -" -B4B2E864E1ABD4EA20845750E9567225BB3F417E_5,B4B2E864E1ABD4EA20845750E9567225BB3F417E,"Workflow name - -relations_transformer-workflow_multilingual_slate.153m.distilled - -Supported languages The Relations Workflow is available for the following languages. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.htmllang-codes). - -ar, de, en, es, fr, it, ja, ko, pt - -" -B4B2E864E1ABD4EA20845750E9567225BB3F417E_6,B4B2E864E1ABD4EA20845750E9567225BB3F417E,"Code sample - -import watson_nlp - - Load the workflow model -relations_workflow = watson_nlp.load('relations_transformer-workflow_multilingual_slate.153m.distilled') - - Run the relation extraction workflow on the input text -relations = relations_workflow.run('Anna Smith is an engineer. Anna works at IBM.', language_code=""en"") -print(relations.get_relation_pairs_by_type()) - -" -B4B2E864E1ABD4EA20845750E9567225BB3F417E_7,B4B2E864E1ABD4EA20845750E9567225BB3F417E,"Output of the code sample - -{'employedBy': [(('Anna', 'Person'), ('IBM', 'Organization'))]} - -" -B4B2E864E1ABD4EA20845750E9567225BB3F417E_8,B4B2E864E1ABD4EA20845750E9567225BB3F417E," Relation extraction in Runtime 22.2 - -" -B4B2E864E1ABD4EA20845750E9567225BB3F417E_9,B4B2E864E1ABD4EA20845750E9567225BB3F417E,"Block name - -relations_transformer_en_stock - -" -B4B2E864E1ABD4EA20845750E9567225BB3F417E_10,B4B2E864E1ABD4EA20845750E9567225BB3F417E,"Supported languages - -The Relations extraction block is available for English only. - -" -B4B2E864E1ABD4EA20845750E9567225BB3F417E_11,B4B2E864E1ABD4EA20845750E9567225BB3F417E,"Dependencies on other blocks - -The following block must run before you can run the relations_transformer_en_stock block: - - - -* syntax_izumo_en_stock - - - -This must be followed by one of the following entity models on which the relations extraction block can build its results: - - - -* entity-mentions_rbr_en_stock -* entity-mentions_bert_multi_stock - - - -" -B4B2E864E1ABD4EA20845750E9567225BB3F417E_12,B4B2E864E1ABD4EA20845750E9567225BB3F417E,"Code sample - -import watson_nlp - - Load the models for English -syntax_model = watson_nlp.load('syntax_izumo_en_stock') -entity_mentions_model = watson_nlp.load('entity-mentions_bert_multi_stock') -relation_model = watson_nlp.load('relations_transformer_en_stock') - - Run the prerequisite models -syntax_prediction = syntax_model.run('Anna Smith is an engineer. Anna works at IBM.') -entity_mentions = entity_mentions_model.run(syntax_prediction) - - Run the relations model -relations_on_mentions = relation_model.run(syntax_prediction, mentions_prediction=entity_mentions) -print(relations_on_mentions.get_relation_pairs_by_type()) - -Output of the code sample: - -{'employedBy': [(('Anna', 'Person'), ('IBM', 'Organization'))]} - -Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html) -" -A152F3047C3B41F06773051EA4B5B6B14DDE709E_0,A152F3047C3B41F06773051EA4B5B6B14DDE709E," Sentiment classification - -The Watson Natural Language Processing Sentiment classification models classify the sentiment of the input text. - -" -A152F3047C3B41F06773051EA4B5B6B14DDE709E_1,A152F3047C3B41F06773051EA4B5B6B14DDE709E,"Supported languages - -Sentiment classification is available for the following languages. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.htmllang-codes). - -ar, cs, da, de, en, es, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sv, tr, zh-cn - -" -A152F3047C3B41F06773051EA4B5B6B14DDE709E_2,A152F3047C3B41F06773051EA4B5B6B14DDE709E," Sentiment - -The sentiment of text can be positive, negative or neutral. - -The sentiment model computes the sentiment for each sentence in the input document. The aggregated sentiment for the entire document is also calculated using the sentiment transformer workflow in Runtime 23.1. If you are using the sentiment models in Runtime 22.2 the overall document sentiment can be computed by the helper method called predict_document_sentiment. - -The classifications returned contain a probability. The sentiment score varies from -1 to 1. A score greater than 0 denotes a positive sentiment, a score less than 0 a negative sentiment, and a score of 0 a neutral sentiment. - -" -A152F3047C3B41F06773051EA4B5B6B14DDE709E_3,A152F3047C3B41F06773051EA4B5B6B14DDE709E," Sentence sentiment workflows in runtime 23.1 - -" -A152F3047C3B41F06773051EA4B5B6B14DDE709E_4,A152F3047C3B41F06773051EA4B5B6B14DDE709E,"Workflow names - - - -* sentiment-aggregated_transformer-workflow_multilingual_slate.153m.distilled -* sentiment-aggregated_transformer-workflow_multilingual_slate.153m.distilled-cpu - - - -The sentiment-aggregated_transformer-workflow_multilingual_slate.153m.distilled workflow can be used on both CPUs and GPUs. - -The sentiment-aggregated_transformer-workflow_multilingual_slate.153m.distilled-cpu workflow is optimized for CPU-based runtimes. - -Code sample using the sentiment-aggregated_transformer-workflow_multilingual_slate.153m.distilled workflow - - Load the Sentiment workflow -sentiment_model = watson_nlp.load('sentiment-aggregated_transformer-workflow_multilingual_slate.153m.distilled-cpu') - - Run the sentiment model on the result of the syntax results -sentiment_result = sentiment_model.run('The rooms are nice. But the beds are not very comfortable.') - - Print the sentence sentiment results -print(sentiment_result) - -" -A152F3047C3B41F06773051EA4B5B6B14DDE709E_5,A152F3047C3B41F06773051EA4B5B6B14DDE709E,"Output of the code sample - -{ -""document_sentiment"": { -""score"": -0.339735, -""label"": ""SENT_NEGATIVE"", -""mixed"": true, -""sentiment_mentions"": [ -{ -""span"": { -""begin"": 0, -""end"": 19, -""text"": ""The rooms are nice."" -}, -""sentimentprob"": { -""positive"": 0.9720447063446045, -""neutral"": 0.011838269419968128, -""negative"": 0.016117043793201447 -} -}, -{ -""span"": { -""begin"": 20, -""end"": 58, -""text"": ""But the beds are not very comfortable."" -}, -""sentimentprob"": { -""positive"": 0.0011594508541747928, -""neutral"": 0.006315878126770258, -""negative"": 0.9925248026847839 -} -} -] -}, -""targeted_sentiments"": { -""targeted_sentiments"": {}, -""producer_id"": { -""name"": ""Aggregated Sentiment Workflow"", -""version"": ""0.0.1"" -} -}, -""producer_id"": { -""name"": ""Aggregated Sentiment Workflow"", -""version"": ""0.0.1"" -} -} - -" -A152F3047C3B41F06773051EA4B5B6B14DDE709E_6,A152F3047C3B41F06773051EA4B5B6B14DDE709E," Sentence sentiment blocks in 22.2 runtimes - -" -A152F3047C3B41F06773051EA4B5B6B14DDE709E_7,A152F3047C3B41F06773051EA4B5B6B14DDE709E,"Block name - -sentiment_sentence-bert_multi_stock - -" -A152F3047C3B41F06773051EA4B5B6B14DDE709E_8,A152F3047C3B41F06773051EA4B5B6B14DDE709E,"Dependencies on other blocks - -The following block must run before you can run the Sentence sentiment block: - - - -* syntax_izumo__stock - - - -Code sample using the sentiment_sentence-bert_multi_stock block - -import watson_nlp -from watson_nlp.toolkit.sentiment_analysis_utils import predict_document_sentiment - Load Syntax and a Sentiment model for English -syntax_model = watson_nlp.load('syntax_izumo_en_stock') -sentiment_model = watson_nlp.load('sentiment_sentence-bert_multi_stock') - - Run the syntax model on the input text -syntax_result = syntax_model.run('The rooms are nice. But the beds are not very comfortable.') - - Run the sentiment model on the result of the syntax results -sentiment_result = sentiment_model.run_batch(syntax_result.get_sentence_texts(), syntax_result.sentences) - - Print the sentence sentiment results -print(sentiment_result) - - Get the aggregated document sentiment -document_sentiment = predict_document_sentiment(sentiment_result, sentiment_model.class_idxs) -print(document_sentiment) - -Output of the code sample: - -[{ -""score"": 0.9540348989256836, -""label"": ""SENT_POSITIVE"", -""sentiment_mention"": { -""span"": { -""begin"": 0, -""end"": 19, -""text"": ""The rooms are nice."" -}, -""sentimentprob"": { -""positive"": 0.919123649597168, -""neutral"": 0.05862388014793396, -""negative"": 0.022252488881349564 -} -}, -""producer_id"": { -""name"": ""Sentence Sentiment Bert Processing"", -""version"": ""0.1.0"" -} -}, { -""score"": -0.9772116371114815, -""label"": ""SENT_NEGATIVE"", -""sentiment_mention"": { -""span"": { -""begin"": 20, -""end"": 58, -""text"": ""But the beds are not very comfortable."" -" -A152F3047C3B41F06773051EA4B5B6B14DDE709E_9,A152F3047C3B41F06773051EA4B5B6B14DDE709E,"}, -""sentimentprob"": { -""positive"": 0.015949789434671402, -""neutral"": 0.025898978114128113, -""negative"": 0.9581512808799744 -} -}, -""producer_id"": { -""name"": ""Sentence Sentiment Bert Processing"", -""version"": ""0.1.0"" -} -}] -{ -""score"": -0.335185, -""label"": ""SENT_NEGATIVE"", -""mixed"": true, -""sentiment_mentions"": [ -{ -""span"": { -""begin"": 0, -""end"": 19, -""text"": ""The rooms are nice."" -}, -""sentimentprob"": { -""positive"": 0.919123649597168, -""neutral"": 0.05862388014793396, -""negative"": 0.022252488881349564 -} -}, -{ -""span"": { -""begin"": 20, -""end"": 58, -""text"": ""But the beds are not very comfortable."" -}, -""sentimentprob"": { -""positive"": 0.015949789434671402, -""neutral"": 0.025898978114128113, -""negative"": 0.9581512808799744 -} -} -] -} - -" -A152F3047C3B41F06773051EA4B5B6B14DDE709E_10,A152F3047C3B41F06773051EA4B5B6B14DDE709E," Targets sentiment extraction - -Targets sentiment extraction extracts sentiments expressed in text and identifies the targets of those sentiments. - -It can handle multiple targets with different sentiment in one sentence as opposed to the sentiment block described above. - -For example, given the input sentence The served food was delicious, yet the service was slow., the Targets sentiment block identifies that there is a positive sentiment expressed in the target ""food"", and a negative sentiment expressed in ""service"". - -The model has been fine-tuned on English data only. Although you can use the model on the other languages listed under Supported languages, the results might vary. - -" -A152F3047C3B41F06773051EA4B5B6B14DDE709E_11,A152F3047C3B41F06773051EA4B5B6B14DDE709E," Targets sentiment workflows in Runtime 23.1 - -" -A152F3047C3B41F06773051EA4B5B6B14DDE709E_12,A152F3047C3B41F06773051EA4B5B6B14DDE709E,"Workflow names - - - -* targets-sentiment_transformer-workflow_multilingual_slate.153m.distilled -* targets-sentiment_transformer-workflow_multilingual_slate.153m.distilled-cpu - - - -The targets-sentiment_transformer-workflow_multilingual_slate.153m.distilled workflow can be used on both CPUs and GPUs. - -The targets-sentiment_transformer-workflow_multilingual_slate.153m.distilled-cpu workflow is optimized for CPU-based runtimes. - -Code sample for the targets-sentiment_transformer-workflow_multilingual_slate.153m.distilled workflow - -import watson_nlp - Load Targets Sentiment model for English -targets_sentiment_model = watson_nlp.load('targets-sentiment_transformer-workflow_multilingual_slate.153m.distilled') - Run the targets sentiment model on the input text -targets_sentiments = targets_sentiment_model.run('The rooms are nice, but the bed was not very comfortable.') - Print the targets with the associated sentiment -print(targets_sentiments) - -" -A152F3047C3B41F06773051EA4B5B6B14DDE709E_13,A152F3047C3B41F06773051EA4B5B6B14DDE709E,"Output of the code sample: - -{ -""targeted_sentiments"": { -""rooms"": { -""score"": 0.990798830986023, -""label"": ""SENT_POSITIVE"", -""mixed"": false, -""sentiment_mentions"": [ -{ -""span"": { -""begin"": 4, -""end"": 9, -""text"": ""rooms"" -}, -""sentimentprob"": { -""positive"": 0.990798830986023, -""neutral"": 0.0, -""negative"": 0.00920116901397705 -} -} -] -}, -""bed"": { -""score"": -0.9920912981033325, -""label"": ""SENT_NEGATIVE"", -""mixed"": false, -""sentiment_mentions"": [ -{ -""span"": { -""begin"": 28, -""end"": 31, -""text"": ""bed"" -}, -""sentimentprob"": { -""positive"": 0.00790870189666748, -""neutral"": 0.0, -""negative"": 0.9920912981033325 -} -} -] -} -}, -""producer_id"": { -""name"": ""Transformer-based Targets Sentiment Extraction Workflow"", -""version"": ""0.0.1"" -} -} - -" -A152F3047C3B41F06773051EA4B5B6B14DDE709E_14,A152F3047C3B41F06773051EA4B5B6B14DDE709E," Targets sentiment blocks in 22.2 runtimes - -Block nametargets-sentiment_sequence-bert_multi_stock - -" -A152F3047C3B41F06773051EA4B5B6B14DDE709E_15,A152F3047C3B41F06773051EA4B5B6B14DDE709E,"Dependencies on other blocks - -The following block must run before you can run the Targets sentiment extraction block: - - - -* syntax_izumo__stock - - - -Code sample using the sentiment-targeted_bert_multi_stock block - -import watson_nlp - - Load Syntax and the Targets Sentiment model for English -syntax_model = watson_nlp.load('syntax_izumo_en_stock') -targets_sentiment_model = watson_nlp.load('targets-sentiment_sequence-bert_multi_stock') - - Run the syntax model on the input text -syntax_result = syntax_model.run('The rooms are nice, but the bed was not very comfortable.') - - Run the targets sentiment model on the syntax results -targets_sentiments = targets_sentiment_model.run(syntax_result) - - Print the targets with the associated sentiment -print(targets_sentiments) - -Output of the code sample: - -{ -""targeted_sentiments"": { -""rooms"": { -""score"": 0.9989274144172668, -""label"": ""SENT_POSITIVE"", -""mixed"": false, -""sentiment_mentions"": [ -{ -""span"": { -""begin"": 4, -""end"": 9, -""text"": ""rooms"" -}, -""sentimentprob"": { -""positive"": 0.9989274144172668, -""neutral"": 0.0, -""negative"": 0.0010725855827331543 -} -} -] -}, -""bed"": { -""score"": -0.9977545142173767, -""label"": ""SENT_NEGATIVE"", -""mixed"": false, -""sentiment_mentions"": [ -{ -""span"": { -""begin"": 28, -""end"": 31, -""text"": ""bed"" -}, -""sentimentprob"": { -""positive"": 0.002245485782623291, -""neutral"": 0.0, -""negative"": 0.9977545142173767 -} -} -] -} -}, -""producer_id"": { -""name"": ""BERT TSA"", -""version"": ""0.0.1"" -} -" -A152F3047C3B41F06773051EA4B5B6B14DDE709E_16,A152F3047C3B41F06773051EA4B5B6B14DDE709E,"} - -Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html) -" -DCE29488A4D041B77F6E9B1B514F41335FAE0696_0,DCE29488A4D041B77F6E9B1B514F41335FAE0696," Syntax analysis - -The Watson Natural Language Processing Syntax block encapsulates syntax analysis functionality. - -" -DCE29488A4D041B77F6E9B1B514F41335FAE0696_1,DCE29488A4D041B77F6E9B1B514F41335FAE0696,"Block names - - - -* syntax_izumo__stock -* syntax_izumo__stock-dp (Runtime 23.1 only) - - - -" -DCE29488A4D041B77F6E9B1B514F41335FAE0696_2,DCE29488A4D041B77F6E9B1B514F41335FAE0696,"Supported languages - -The Syntax analysis block is available for the following languages. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.htmllang-codes). - -Language codes to use for model syntax_izumo__stock: af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw - -Language codes to use for model syntax_izumo__stock-dp: af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh - - - -List of the supported languages for each syntax task - - Task Supported language codes - - Tokenization af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw, zh - Part-of-speech tagging af, ar, bs, ca, cs, da, de, nl, nn, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw, zh -" -DCE29488A4D041B77F6E9B1B514F41335FAE0696_3,DCE29488A4D041B77F6E9B1B514F41335FAE0696," Lemmatization af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw, zh - Sentence detection af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw, zh - Paragraph detection af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw, zh - Dependency parsing af, ar, bs, cs, da, de, en, es, fi, fr, hi, hr, it, ja, nb, nl, nn, pt, ro, ru, sk, sr, sv - - - -" -DCE29488A4D041B77F6E9B1B514F41335FAE0696_4,DCE29488A4D041B77F6E9B1B514F41335FAE0696,"Capabilities - -Use this block to perform tasks like sentence detection, tokenization, part-of-speech tagging, lemmatization and dependency parsing in different languages. For most tasks, you will likely only need sentence detection, tokenization, and part-of-speech tagging. For these use cases use the syntax_model_xx_stock model. If you want to run dependency parsing in Runtime 23.1, use the syntax_model_xx_stock-dp model. In Runtime 22.2, dependency parsing is included in the syntax_model_xx_stock model. - -The analysis for Part-of-speech (POS) tagging and dependencies follows the Universal Parts of Speech tagset ([Universal POS tags](https://universaldependencies.org/u/pos/)) and the Universal Dependencies v2 tagset ([Universal Dependency Relations](https://universaldependencies.org/u/dep/)). - -The following table shows you the capabilities of each task based on the same example and the outcome to the parse. - - - -Capabilities of each syntax task based on an example - - Capabilities Examples Parser attributes - - Tokenization I don't like Mondays"" --> ""I"" , ""do"", ""n't"", ""like"", ""Mondays token - Part-Of_Speech detection ""I don't like Mondays"" --> ""I""\POS_PRON, ""do""\POS_AUX, ""n't""\POS_PART, ""like""\POS_VERB, ""Mondays""\POS_PROPN part_of_speech - Lemmatization I don't like Mondays"" --> ""I"", ""do"", ""not"", ""like"", ""Monday lemma - Dependency parsing I don't like Mondays"" --> ""I""-SUBJECT->""like""<-OBJECT-""Mondays dependency - Sentence detection ""I don't like Mondays"" --> returns this sentence sentence - Paragraph detection (Currently paragraph detection is still experimental and returns similar results to sentence detection.) ""I don't like Mondays"" --> returns this sentence as being a paragraph sentence - - - -" -DCE29488A4D041B77F6E9B1B514F41335FAE0696_5,DCE29488A4D041B77F6E9B1B514F41335FAE0696,"Dependencies on other blocks - -None - -" -DCE29488A4D041B77F6E9B1B514F41335FAE0696_6,DCE29488A4D041B77F6E9B1B514F41335FAE0696,"Code sample - -import watson_nlp - - Load Syntax for English -syntax_model = watson_nlp.load('syntax_izumo_en_stock') - - Detect tokens, lemma and part-of-speech -text = 'I don't like Mondays' -syntax_prediction = syntax_model.run(text, parsers=('token', 'lemma', 'part_of_speech')) - - Print the syntax result -print(syntax_prediction) - -Output of the code sample: - -{ -""text"": ""I don't like Mondays"", -""producer_id"": { -""name"": ""Izumo Text Processing"", -""version"": ""0.0.1"" -}, -""tokens"": [ -{ -""span"": { -""begin"": 0, -""end"": 1, -""text"": ""I"" -}, -""lemma"": ""I"", -""part_of_speech"": ""POS_PRON"" -}, -{ -""span"": { -""begin"": 2, -""end"": 4, -""text"": ""do"" -}, -""lemma"": ""do"", -""part_of_speech"": ""POS_AUX"" -}, -{ -""span"": { -""begin"": 4, -""end"": 7, -""text"": ""n't"" -}, -""lemma"": ""not"", -""part_of_speech"": ""POS_PART"" -}, -{ -""span"": { -""begin"": 8, -""end"": 12, -""text"": ""like"" -}, -""lemma"": ""like"", -""part_of_speech"": ""POS_VERB"" -}, -{ -""span"": { -""begin"": 13, -""end"": 20, -""text"": ""Mondays"" -}, -""lemma"": ""Monday"", -""part_of_speech"": ""POS_PROPN"" -} -], -""sentences"": [ -{ -""span"": { -""begin"": 0, -""end"": 20, -""text"": ""I don't like Mondays"" -} -} -], -""paragraphs"": [ -{ -""span"": { -""begin"": 0, -" -DCE29488A4D041B77F6E9B1B514F41335FAE0696_7,DCE29488A4D041B77F6E9B1B514F41335FAE0696,"""end"": 20, -""text"": ""I don't like Mondays"" -} -} -] -} - -Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html) -" -ABCA967CD96AB805BE518E8A52EF984499C62F6C_0,ABCA967CD96AB805BE518E8A52EF984499C62F6C," Tone classification - -The Tone model in the Watson Natural Language Processing classification workflow classifies the tone in the input text. - -" -ABCA967CD96AB805BE518E8A52EF984499C62F6C_1,ABCA967CD96AB805BE518E8A52EF984499C62F6C,"Workflow name - -ensemble_classification-workflow_en_tone-stock - -" -ABCA967CD96AB805BE518E8A52EF984499C62F6C_2,ABCA967CD96AB805BE518E8A52EF984499C62F6C,"Supported languages - - - -* English and French - - - -" -ABCA967CD96AB805BE518E8A52EF984499C62F6C_3,ABCA967CD96AB805BE518E8A52EF984499C62F6C,"Capabilities - -The Tone classification model is a pre-trained document classification model for the task of classifying the tone in the input document. The model identifies the tone of the input document and classifies it as: - - - -* Excited -* Frustrated -* Impolite -* Polite -* Sad -* Satisfied -* Sympathetic - - - -Unlike the Sentiment model, which classifies each individual sentence, the Tone model classifies the entire input document. As such, the Tone model works optimally when the input text to classify is no longer than 1000 characters. If you would like to classify texts longer than 1000 characters, split the text into sentences or paragraphs for example and apply the Tone model on each sentence or paragraph. - -A document may be classified into multiple categories or into no category. - - - -Capabilities of tone classification - - Capabilities Example - - Identifies the tone of a document and classifies it ""I'm really happy with how this was handled, thank you!"" --> excited, satisfied - - - -" -ABCA967CD96AB805BE518E8A52EF984499C62F6C_4,ABCA967CD96AB805BE518E8A52EF984499C62F6C,"Dependencies on other blocks - -None - -" -ABCA967CD96AB805BE518E8A52EF984499C62F6C_5,ABCA967CD96AB805BE518E8A52EF984499C62F6C,"Code sample - -import watson_nlp - - Load the Tone workflow model for English -tone_model = watson_nlp.load('ensemble_classification-workflow_en_tone-stock') - - Run the Tone model -tone_result = tone_model.run(""I'm really happy with how this was handled, thank you!"") -print(tone_result) - -Output of the code sample: - -{ -""classes"": [ -{ -""class_name"": ""excited"", -""confidence"": 0.6896854620082722 -}, -{ -""class_name"": ""satisfied"", -""confidence"": 0.6570277557333078 -}, -{ -""class_name"": ""polite"", -""confidence"": 0.33628806679460566 -}, -{ -""class_name"": ""sympathetic"", -""confidence"": 0.17089694967744093 -}, -{ -""class_name"": ""sad"", -""confidence"": 0.06880583874412932 -}, -{ -""class_name"": ""frustrated"", -""confidence"": 0.010365418217209686 -}, -{ -""class_name"": ""impolite"", -""confidence"": 0.002470793624966174 -} -], -""producer_id"": { -""name"": ""Voting based Ensemble"", -""version"": ""0.0.1"" -} -} - -Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html) -" -9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_0,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Classifying text with a custom classification model - -You can train your own models for text classification using strong classification algorithms from three different families: - - - -* Classic machine learning using SVM (Support Vector Machines) -* Deep learning using CNN (Convolutional Neural Networks) -* A transformer-based algorithm using a pre-trained transformer model: - - - -* Runtime 23.1: Slate IBM Foundation model -* Runtime 22.x: Google BERT Multilingual model - - - - - -The Watson Natural Language Processing library also offers an easy to use Ensemble classifier that combines different classification algorithms and majority voting. - -The algorithms support multi-label and multi-class tasks and special cases, like if the document belongs to one class only (single-label task), or binary classification tasks. - -Note:Training classification models is CPU and memory intensive. Depending on the size of your training data, the environment might not be large enough to complete the training. If you run into issues with the notebook kernel during training, create a custom notebook environment with a larger amount of CPU and memory, and use that to run your notebook. Especially for transformer-based algorithms, you should use a GPU-based environment, if it is available to you. See [Creating your own environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html). - -Topic sections: - - - -* [Input data format for training](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=eninput-data) -* [Input data requirements](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=eninput-data-reqs) -" -9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_1,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B,"* [Stopwords](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=enstopwords) -* [Training SVM algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=entrain-svm) -* [Training the CNN algorithm](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=entrain-cnn) -* [Training the transformer algorithm by using the Slate IBM Foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=entrain-slate) -* [Training a custom transformer model by using a model provided by Hugging Face](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=entrain-huface) -* [Training the multilingual BERT algorithm](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=entrain-bert) -* [Training an ensemble model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=entrain-ensemble) -" -9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_2,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B,"* [Training best practices](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=enbest-practices) -* [Applying the model on new data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=enapply-model) -* [Choosing the right algorithm for your use case](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=enchoose-algorithm) - - - -" -9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_3,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Input data format for training - -Classification blocks accept training data in CSV and JSON formats. - - - -* The CSV Format - -The CSV file should contain no header. Each row in the CSV file represents an example record. Each record has one or more columns, where the first column represents the text and the subsequent columns represent the labels associated with that text. - -Note: - - - -* The SVM and CNN algorithms do not support training data where an instance has no labels. So, if you are using the SVM algorithm, or the CNN algorithm, or an Ensemble including one of these algorithms, each CSV row must have at least one label, i.e., 2 columns. -* The BERT-based and Slate-based Transformer algorithms support training data where each instance has 0, 1 or more than one label. - -Example 1,label 1 -Example 2,label 1,label 2 - - - -* The JSON Format - -The training data is represented as an array with multiple JSON objects. Each JSON object represents one training instance, and must have a text and a labels field. The text represents the training example, and labels stores the labels associated with the example (0, 1, or more than one label). - -[ -{ -""text"": ""Example 1"", -""labels"": ""label 1""] -}, -{ -""text"": ""Example 2"", -""labels"": ""label 1"", ""label 2""] -}, -{ -""text"": ""Example 3"", -""labels"": ] -} -] - -Note: - - - -* ""labels"": [] denotes an example with no labels. The SVM and CNN algorithms do not support training data where an instance has no labels. So, if you are using the SVM algorithm, or the CNN algorithm, or an Ensemble including one of these algorithms, each JSON object must have at least one label. -* The BERT-based and Slate-based Transformer algorithms support training data where each instance has 0, 1 or more than one label. - - - - - -" -9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_4,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Input data requirements - -For SVM and CNN algorithms: - - - -* Minimum number of unique labels required: 2 -* Minimum number of text examples required per label: 5 - - - -For the BERT-based and Slate-based Transformer algorithms: - - - -* Minimum number of unique labels required: 1 -* Minimum number of text examples required per label: 5 - - - -Note that the training data in CSV or JSON format is converted to a DataStream before training. Instead of training data files, you can also pass data streams directly to the training functions of classification blocks. - -" -9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_5,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Stopwords - -You can provide your own stopwords that will be removed during preprocessing. Stopwords file inputs are expected in a standard format: a single text file with one phrase per line. Stopwords can be provided as a list or as a file in a standard format. - -Stopwords can be used only with the Ensemble classifier. - -" -9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_6,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Training SVM algorithms - -SVM is a support vector machine classifier that can be trained using predictions on any kind of input provided by the embedding or vectorization blocks as feature vectors, for example, by USE (Universal Sentence Encoder) embeddings and TF-IDF vectorizers. It supports multi-class and multi-label text classification and produces confidence scores via Platt Scaling. - -For all options that are available for configuring SVM training, enter: - -help(watson_nlp.blocks.classification.svm.SVM.train) - -To train SVM algorithms: - - - -1. Begin with these preprocessing steps: - -import watson_nlp -from watson_core.data_model.streams.resolver import DataStreamResolver -from watson_nlp.blocks.classification.svm import SVM - -training_data_file = """" - - Create datastream from training data -data_stream_resolver = DataStreamResolver(target_stream_type=list, expected_keys={'text': str, 'labels': list}) -training_data = data_stream_resolver.as_data_stream(training_data_file) - - Load a Syntax model -syntax_model = watson_nlp.load('syntax_izumo_en_stock') - - Create Syntax stream -text_stream, labels_stream = training_data[0], training_data[1] -syntax_stream = syntax_model.stream(text_stream) - - - - - -1. Train the classification model using USE embeddings. See [Pretrained USE embeddings available out-of-the-box](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=enuse-embeddings) for a list of the pretrained blocks that are available. - - download embedding -use_embedding_model = watson_nlp.load('embedding_use_en_stock') - -" -9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_7,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B,"use_train_stream = use_embedding_model.stream(syntax_stream, doc_embed_style='raw_text') - NOTE: doc_embed_style can be changed to avg_sent as well. For more information check the documentation for Embeddings - Or the USE run function API docs -use_svm_train_stream = watson_nlp.data_model.DataStream.zip(use_train_stream, labels_stream) - - Train SVM using Universal Sentence Encoder (USE) training stream -classification_model = SVM.train(use_svm_train_stream) - - - -" -9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_8,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Pretrained USE embeddings available out-of-the-box - -USE embeddings are wrappers around Google Universal Sentence Encoder embeddings available in TFHub. These embeddings are used in the document classification SVM algorithm. - -The following table lists the pretrained blocks for USE embeddings that are available and the languages that are supported. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.htmllang-codes). - - - -List of pretrained USE embeddings with their supported languages - - Block name Model name Supported languages - - use embedding_use_en_stock English only - use embedding_use_multi_small ar, de, en, es, fr, it, ja, ko, nl, pl, pt, ru, tr, zh-cn, zh-tw - use embedding_use_multi_large ar, de, en, es, fr, it, ja, ko, nl, pl, pt, ru, tr, zh-cn, zh-tw - - - -When using USE embeddings, consider the following: - - - -* Choose embedding_use_en_stock if your task involves English text. -* Choose one of the multilingual USE embeddings if your task involves text in a non-English language, or you want to train multilingual models. -* The USE embeddings exhibit different trade-offs between quality of the trained model and throughput at inference time, as described below. Try different embeddings to decide the trade-off between quality of result and inference throughput that is appropriate for your use case. - - - -* embedding_use_multi_small has reasonable quality, but it is fast at inference time -* embedding_use_en_stock is a English-only version of embedding_embedding_use_multi_small, hence it is smaller and exhibits higher inference throughput -" -9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_9,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B,"* embedding_use_multi_large is based on Transformer architecture, and therefore it provides higher quality of result, with lower throughput at inference time - - - - - -" -9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_10,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Training the CNN algorithm - -CNN is a simple convolutional network architecture, built for multi-class and multi-label text classification on short texts. It utilizes GloVe embeddings. GloVe embeddings encode word-level semantics into a vector space. The GloVe embeddings for each language are trained on the Wikipedia corpus in that language. For information on using GloVe embeddings, see the open source GloVe embeddings documentation. - -For all the options that are available for configuring CNN training, enter: - -help(watson_nlp.blocks.classification.cnn.CNN.train) - -To train CNN algorithms: - -import watson_nlp -from watson_core.data_model.streams.resolver import DataStreamResolver -from watson_nlp.blocks.classification.cnn import CNN - -training_data_file = """" - - Create datastream from training data -data_stream_resolver = DataStreamResolver(target_stream_type=list, expected_keys={'text': str, 'labels': list}) -training_data = data_stream_resolver.as_data_stream(training_data_file) - - Load a Syntax model -syntax_model = watson_nlp.load('syntax_izumo_en_stock') - - Create Syntax stream -text_stream, labels_stream = training_data[0], training_data[1] -syntax_stream = syntax_model.stream(text_stream) - - Download GloVe embeddings -glove_embedding_model = watson_nlp.load('embedding_glove_en_stock') - - Train CNN -classification_model = CNN.train(watson_nlp.data_model.DataStream.zip(syntax_stream, labels_stream), embedding=glove_embedding_model.embedding) - -" -9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_11,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Training the transformer algorithm by using the IBM Slate model - -The transformer algorithm using the pretrained Slate IBM Foundation model can be used for multi-class and multi-label text classification on short texts. - -The pretrained Slate IBM Foundation model is only available in Runtime 23.1. - -For all the options available for configuring Transformer training, enter: - -help(watson_nlp.blocks.classification.transformer.Transformer.train) - -To train Transformer algorithms: - -import watson_nlp -from watson_nlp.blocks.classification.transformer import Transformer -from watson_core.data_model.streams.resolver import DataStreamResolver -training_data_file = ""train_data.json"" - - create datastream from training data -data_stream_resolver = DataStreamResolver(target_stream_type=list, expected_keys={'text': str, 'labels': list}) -train_stream = data_stream_resolver.as_data_stream(training_data_file) - - Load pre-trained Slate model -pretrained_model_resource = watson_nlp.load('pretrained-model_slate.153m.distilled_many_transformer_multilingual_uncased ') - - Train model -classification_model = Transformer.train(train_stream, pretrained_model_resource) - -" -9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_12,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Training a custom transformer model by using a model provided by Hugging Face - -Note: This training method is only available in Runtime 23.1. - -You can train your custom transformer-based model by using a pretrained model from Hugging Face. - -To use a Hugging Face model, specify the model name as the pretrained_model_resource parameter in the train method of watson_nlp.blocks.classification.transformer.Transformer. Go to [https://huggingface.co/models](https://huggingface.co/models) to copy the model name. - -To get a list of all the options available for configuring a transformer training, type this code: - -help(watson_nlp.blocks.classification.transformer.Transformer.train) - -For information on how to train transformer algorithms, refer to this code example: - -import watson_nlp -from watson_nlp.blocks.classification.transformer import Transformer -from watson_core.data_model.streams.resolver import DataStreamResolver -training_data_file = ""train_data.json"" - - create datastream from training data -data_stream_resolver = DataStreamResolver(target_stream_type=list, expected_keys={'text': str, 'labels': list}) -train_stream = data_stream_resolver.as_data_stream(training_data_file) - - Specify the name of the Hugging Face model -huggingface_model_name = 'xml-roberta-base' - - Train model -classification_model = Transformer.train(train_stream, pretrained_model_resource=huggingface_model_name) - -" -9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_13,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Training the multilingual BERT algorithm - -BERT is a transformer-based architecture, built for multi-class and multi-label text classification on short texts. - -Note: The Google BERT Multilingual model is available in 22.2 runtimes only. - -For all the options available for configuring BERT training, enter: - -help(watson_nlp.blocks.classification.bert.BERT.train) - -To train BERT algorithms: - -import watson_nlp -from watson_nlp.blocks.classification.bert import BERT -from watson_core.data_model.streams.resolver import DataStreamResolver -training_data_file = """" - - create datastream from training data -data_stream_resolver = DataStreamResolver(target_stream_type=list, expected_keys={'text': str, 'labels': list}) -train_stream = data_stream_resolver.as_data_stream(training_data_file) - - Load pre-trained BERT model -pretrained_model_resource = watson_nlp.load('pretrained-model_bert_multi_bert_multi_uncased') - - Train model -classification_model = BERT.train(train_stream, pretrained_model_resource) - -" -9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_14,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Training an ensemble model - -The Ensemble model is a weighted ensemble of these three algorithms: CNN, SVM with TF-IDF and SVM with USE. It computes the weighted mean of a set of classification predictions using confidence scores. The ensemble model is very easy to use. - -" -9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_15,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Using the Runtime 22.2 and Runtime 23.1 environments - -The GenericEnsemble classifier allows more flexibility for the user to choose from the three base classifiers TFIDF-SVM, USE-SVM and CNN. For texts ranging from 50 to 1000 characters, using the combination of TFIDF-SVM and USE-SVM classifiers often yields a good balance of quality and performance. On some medium or long documents (500-1000+ characters), adding the CNN to the Ensemble could help increase quality, but it usually comes with a significant runtime performance impact (lower throughput and increased model loading time). - -For all of the options available for configuring Ensemble training, enter: - -help(watson_nlp.workflows.classification.GenericEnsemble) - -To train Ensemble algorithms: - -import watson_nlp -from watson_nlp.workflows.classification import GenericEnsemble -from watson_nlp.workflows.classification.base_classifier import GloveCNN -from watson_nlp.workflows.classification.base_classifier import TFidfSvm -from watson_nlp.workflows.classification.base_classifier import UseSvm - -training_data_file = """" - - Create datastream from training data -data_stream_resolver = DataStreamResolver(target_stream_type=list, expected_keys={'text': str, 'labels': list}) -training_data = data_stream_resolver.as_data_stream(training_data_file) - - Syntax Model -syntax_model = watson_nlp.load('syntax_izumo_en_stock') - USE Embedding Model -use_model = watson_nlp.load('embedding_use_en_stock') - GloVE Embedding model -glove_model = watson_nlp.load('embedding_glove_en_stock') - -ensemble_model = GenericEnsemble.train(training_data, syntax_model, -base_classifiers_params=[ -" -9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_16,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B,"TFidfSvm.TrainParams(syntax_model=syntax_model), -GloveCNN.TrainParams(syntax_model=syntax_model, glove_embedding_model=glove_model, cnn_epochs=5), -UseSvm.TrainParams(syntax_model=syntax_model, use_embedding_model=use_model, doc_embed_style='raw_text')], -use_ewl=True) - -" -9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_17,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Pretrained stopword models available out-of-the-box - -The text model for identifying stopwords is used in training the document classification ensemble model. - -The following table lists the pretrained stopword models and the language codes that are supported (xx stands for the language code). For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.htmllang-codes). - - - -List of pretrained stopword models with their supported languages - - Resource class Model name Supported languages - - text text_stopwords_classification_ensemble_xx_stock ar, de, es, en, fr, it, ja, ko - - - -" -9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_18,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Training best practices - -There are certain constraints on the quality and quantity of data to ensure that the classifications model training can complete in a reasonable amount of time and also meets various performance criteria. These are listed below. Note that none are hard restrictions. However, the further one deviates from these guidelines, the greater the chance that the model fails to train or the model will not be satisfactory. - - - -* Data quantity - - - -* The highest number of classes classification model has been tested on is 1200. -* The best suited text size for training and testing data for classification is around 3000 code points. However, larger texts can also be processed, but the runtime performance might be slower. -* Training time will increase based on the number of examples and number of labels. -* Inference time will increased based on the number of labels. - - - -* Data quality - - - -* Size of each sample (for example, number of phrases in each training sample) can affect quality. -* Class separation is important. In other words, classes among the training (and test) data should be semantically distinguishable from each another in order to avoid misclassifications. Since the classifier algorithms in Watson Natural Language Processing rely on word embeddings, training classes that contain text examples with too much semantic overlap may make high-quality classification computationally intractable. While more sophisticated heuristics may exist for assessing the semantic similarity between classes, you should start with a simple ""eye test"" of a few examples from each class to discern whether or not they seem adequately separated. -* It is recommended to use balanced data for training. Ideally there should be roughly equal numbers of examples from each class in the training data, otherwise the classifiers may be biased towards classes with larger representation in the training data. -* It is best to avoid circumstances where some classes in the training data are highly under-represented as compared to other classes. - - - - - -Limitations and caveats: - - - -" -9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_19,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B,"* The BERT classification block has a predefined sequence length of 128 code points. However, this can be configured at train time by changing the parameter max_seq_length. The maximum value allowed for this parameter is 512. This means that the BERT classification block can only be used to classify short text. Text longer than max_seq_length is trimmed and discarded during classification training and inference. -* The CNN classification block has a predefined sequence length of 1000 code points. This limit can be configured at train time by changing the parameter max_phrase_len. There is no maximum limit for this parameter, but increasing the maximum phrase length will affect CPU and memory consumption. -* SVM blocks do not have such limit on sequence length and can be used with longer texts. - - - -" -9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_20,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Applying the model on new data - -After you have trained the model on a data set, apply the model on new data using the run() method, as you would use on any of the existing pre-trained blocks. - -" -9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_21,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B,"Sample code - - - -* For the Ensemble and BERT models, for example for Ensemble: - - run Ensemble model on new text -ensemble_prediction = ensemble_classification_model.run(""new input text"") -* For SVM and CNN models, for example for CNN: - - run Syntax model first -syntax_result = syntax_model.run(""new input text"") - run CNN model on top of syntax result -cnn_prediction = cnn_classification_model.run(syntax_result) - - - -" -9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_22,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Choosing the right algorithm for your use case - -You need to choose the model algorithm that best suits your use case. - -When choosing between SVM, CNN, and Transformers, consider the following: - - - -* BERT and Transformer-based Slate - - - -* Choose when high quality is required and higher computing resources are available. - - - -* CNN - - - -* Choose when decent size data is available -* Choose if GloVe embedding is available for the required language -* Choose if you have the option between single label versus multi-label -* CNN fine tunes embeddings, so it could give better performance for unknown terms or newer domains. - - - -* SVM - - - -* Choose if an easier and simpler model is required -* SVM has the fastest training and inference time -* Choose if your data set size is small - - - - - -If you select SVM, you need to consider the following when choosing between the various implementations of SVM: - - - -* SVMs train multi-label classifiers. -* The larger the number of classes, the longer the training time. -* TF-IDF: - - - -* Choose TF-IDF vectorization with SVM if the data set is small, i.e. has a small number of classes, a small number of examples and shorter text size, for example, sentences containing fewer phrases. -* TF-IDF with SVM can be faster than other algorithms in the classification block. -* Choose TF-IDF if embeddings for the required language are not available. - - - -* USE: - - - -* Choose Universal Sentence Encoder (USE) with SVM if the data set has one or more sentences in input text. -* USE can perform better on data sets where understanding the context of words or sentences is important. - - - - - -The Ensemble model combines multiple individual (diverse) models together to deliver superior prediction power. Consider the following key data for this model type: - - - -* The ensemble model combines CNN, SVM with TF-IDF and SVM with USE. -* It is the easiest model to use. -* It can give better performance than the individual algorithms. -* It works for all kinds of data sets. However, training time for large datasets (more than 20000 examples) can be high. -" -9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_23,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B,"* An ensemble model allows you to set weights. These weights decides how the ensemble model combines the results of individual classifiers. Currently, the selection of weights is a heuristics and needs to be set by trial and error. The default weights that are provided in the function itself are a good starting point for the exploration. - - - -Parent topic:[Creating your own models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-create-model.html) -" -97C26F347FD5A13FBC5B24FC567FCF7ADF8CE0C3_0,97C26F347FD5A13FBC5B24FC567FCF7ADF8CE0C3," Creating your own models - -Certain algorithms in Watson Natural Language Processing can be trained with your own data, for example you can create custom models based on your own data for entity extraction, to classify data, to extract sentiments, and to extract target sentiments. - -Starting with Runtime 23.1 you can use the new built-in transformer-based IBM foundation model called Slate to create your own models. The Slate model has been trained on a very large data set that was preprocessed to filter hate, bias, and profanity. - -To create your own classification, entity extraction model, or sentiment model you can fine-tune the Slate model on your own data. To train the model in reasonable time, it's recommended to use GPU-based environments. - - - -* [Detecting entities with a custom dictionary](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-entities-dict.html) -* [Detecting entities with regular expressions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-entities-regex.html) -* [Detecting entities with a custom transformer model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-entities-transformer.html) -* [Classifying text with a custom classification model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html) -* [Extracting sentiment with a custom transformer model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-extract-sentiment.html) -* [Extracting targets sentiment with a custom transformer model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-target-sentiment.html) - - - -" -97C26F347FD5A13FBC5B24FC567FCF7ADF8CE0C3_1,97C26F347FD5A13FBC5B24FC567FCF7ADF8CE0C3," Language support for custom models - -You can create custom models and use the following pretrained dictionary and classification models for the shown languages. For a list of the language codes and the corresponding languages, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.htmllang-codes). - - - -Supported languages for out-of-the-box custom models - - Custom model Supported language codes - - Dictionary models af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw (all languages supported in the Syntax part of speech tagging) - Regexes af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw (all languages supported in the Syntax part of speech tagging) - SVM classification with TFIDF af, ar, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw - SVM classification with USE ar, de, en, es, fr, it, ja, ko, nl, pl, pt, ru, tr, zh_cn, zh_tw - CNN classification with GloVe ar, de, en, es, fr, it, ja, ko, nl, pt, zh_cn -" -97C26F347FD5A13FBC5B24FC567FCF7ADF8CE0C3_2,97C26F347FD5A13FBC5B24FC567FCF7ADF8CE0C3," BERT Multilingual classification af, ar, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw - Transformer model af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw - Stopword lists ar, de, en, es, fr, it, ja, ko - - - -" -97C26F347FD5A13FBC5B24FC567FCF7ADF8CE0C3_3,97C26F347FD5A13FBC5B24FC567FCF7ADF8CE0C3," Saving and loading custom models - -If you want to use your custom model in another notebook, save it as a Data Asset to your project. This way, you can export the model as part of a project export. - -Use the ibm-watson-studio-lib library to save and load custom models. - -To save a custom model in your notebook as a data asset to export and use in another project: - - - -1. Ensure that you have an access token on the Access control page on the Manage tab of your project. Only project admins can create access tokens. The access token can have viewer or editor access permissions. Only editors can inject the token into a notebook. -2. Add the project token to a notebook by clicking More > Insert project token from the notebook action bar and then run the cell. When you run the inserted hidden code cell, a wslib object is created that you can use for functions in the ibm-waton-studio-lib library. For details on the available ibm-watson-studio-lib functions, see [Using ibm-watson-studio-lib for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html). -3. Run the train() method to create a custom dictionary, regular expression, or classification model and assign this custom model to a variable. For example: - -custom_block = CNN.train(train_stream, embedding_model.embedding, verbose=2) -4. If you want to save a custom dictionary or regular expression model, convert it to a RBRGeneric block. Converting a custom dictionary or regular expression model to a RBRGeneric block is useful if you want to load and execute the model using the [API for Watson Natural Language Processing for Embed](https://www.ibm.com/docs/en/watson-libraries?topic=home-api-reference). To date, Watson Natural Language Processing for Embed supports running dictionary and regular expression models only as RBRGeneric blocks. To convert a model to a RBRGeneric block, run the following commands: - - Create the custom regular expression model -" -97C26F347FD5A13FBC5B24FC567FCF7ADF8CE0C3_4,97C26F347FD5A13FBC5B24FC567FCF7ADF8CE0C3,"custom_regex_block = watson_nlp.resources.feature_extractor.RBR.train(module_folder, language='en', regexes=regexes) - - Save the model to the local file system -custom_regex_model_path = 'some/path' -custom_regex_block.save(custom_regex_model_path) - - The model was saved in a file ""executor.zip"" in the provided path, in this case ""some/path/executor.zip"" -model_path = os.path.join(custom_regex_model_path, 'executor.zip') - - Re-load the model as a RBRGeneric block -custom_block = watson_nlp.blocks.rules.RBRGeneric(watson_nlp.toolkit.rule_utils.RBRExecutor.load(model_path), language='en') -5. Save the model as a Data Asset to your project using ibm-watson-studio-lib: - -wslib.save_data("""", custom_block.as_bytes(), overwrite=True) - -When saving transformer models, you have the option to save the model in CPU format. If you plan to use the model only in CPU environments, using this format will make your custom model run more efficiently. To do that, set the CPU format option as follows: - -wslib.save_data('', data=custom_model.as_bytes(cpu_format=True), overwrite=True) - - - -To load a custom model to a notebook that was imported from another project: - - - -1. Ensure that you have an access token on the Access control page on the Manage tab of your project. Only project admins can create access tokens. The access token can have viewer or editor access permissions. Only editors can inject the token into a notebook. -" -97C26F347FD5A13FBC5B24FC567FCF7ADF8CE0C3_5,97C26F347FD5A13FBC5B24FC567FCF7ADF8CE0C3,"2. Add the project token to a notebook by clicking More > Insert project token from the notebook action bar and then run the cell. When you run the the inserted hidden code cell, a wslib object is created that you can use for functions in the ibm-watson-studio-lib library. For details on the available ibm-watson-studio-lib functions, see [Using ibm-watson-studio-lib for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html). -3. Load the model using ibm-watson-studio-lib and watson-nlp: - -custom_block = watson_nlp.load(wslib.load_data("""")) - - - -Parent topic:[Watson Natural Language Processing library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html) -" -34BC2F43F99778FFA7E2C3E414C3CFB32509276D_0,34BC2F43F99778FFA7E2C3E414C3CFB32509276D," Detecting entities with a custom dictionary - -If you have a fixed set of terms that you want to detect, like a list of product names or organizations, you can create a dictionary. Dictionary matching is very fast and resource-efficient. - -Watson Natural Language Processing dictionaries contain advanced matching capabilities that go beyond a simple string match, including: - - - -* Dictionary terms can consist of a single token, for example wheel, or multiple tokens, for example, steering wheel. -* Dictionary term matching can be case-sensitive or case-insensitive. With a case-sensitive match, you can ensure that acronyms, like ABS don't match terms in the regular language, like abs that have a different meaning. -* You can specify how to consolidate matches when multiple dictionary entries match the same text. Given the two dictionary entries, Watson and Watson Natural Language Processing, you can configure which entry should match in ""I like Watson Natural Language Processing"": either only Watson Natural Language Processing, as it contains Watson, or both. -* You can specify to match the lemma instead of enumerating all inflections. This way, the single dictionary entry mouse will detect both mouse and mice in the text. -* You can attach a label to each dictionary entry, for example Organization category to include additional metadata in the match. - - - -All of these capabilities can be configured, so you can pick the right option for your use case. - -" -34BC2F43F99778FFA7E2C3E414C3CFB32509276D_1,34BC2F43F99778FFA7E2C3E414C3CFB32509276D," Types of dictionary files - -Watson Natural Language Processing supports two types of dictionary files: - - - -* Term list (ending in .dict) - -Example of a term list: - -Arthur -Allen -Albert -Alexa -* Table (ending in .csv) - -Example of a table: - -""label"", ""entry"" -""ORGANIZATION"", ""NASA"" -""COUNTRY"", ""USA"" -""ACTOR"", ""Christian Bale"" - - - -You can use multiple dictionaries during the same extraction. You can also use both types at the same time, for example, run a single extraction with three dictionaries, one term list and two tables. - -" -34BC2F43F99778FFA7E2C3E414C3CFB32509276D_2,34BC2F43F99778FFA7E2C3E414C3CFB32509276D," Creating dictionary files - -Begin by creating a module directory inside your notebook. This is a directory inside the notebook file system that will be used temporarily to store your dictionary files. - -To create dictionary files in your notebook: - - - -1. Create a module directory. Note that the name of the module folder cannot contain any dashes as this will cause errors. - -import os -import watson_nlp -module_folder = ""NLP_Dict_Module_1"" -os.makedirs(module_folder, exist_ok=True) -2. Create dictionary files, and store them in the module directory. You can either read in an external list or CSV file, or you can create dictionary files like so: - - Create a term list dictionary -term_file = ""names.dict"" -with open(os.path.join(module_folder, term_file), 'w') as dictionary: -dictionary.write('Bruce') -dictionary.write('n') -dictionary.write('Peter') -dictionary.write('n') - - Create a table dictionary -table_file = 'Places.csv' -with open(os.path.join(module_folder, table_file), 'w') as places: -places.write(""""label"", ""entry"""") -places.write(""n"") -places.write(""""SIGHT"", ""Times Square"""") -places.write(""n"") -places.write(""""PLACE"", ""5th Avenue"""") -places.write(""n"") - - - -" -34BC2F43F99778FFA7E2C3E414C3CFB32509276D_3,34BC2F43F99778FFA7E2C3E414C3CFB32509276D," Loading the dictionaries and configuring matching options - -The dictionaries can be loaded using the following helper methods. - - - -* To load a single dictionary, use watson_nlp.toolkit.rule_utils.DictionaryConfig () -* To load multiple dictionaries, use watson_nlp.toolkit.rule_utils.DictionaryConfig.load_all([)]) - - - -For each dictionary, you need to specify a dictionary configuration. The dictionary configuration is a Python dictionary, with the following attributes: - - - - Attribute Value Description Required - - name string The name of the dictionary Yes - source string The path to the dictionary, relative to module_folder Yes - dict_type file or table Whether the dictionary artifact is a term list (file) or a table of mappings (table) No. The default is file - consolidate ContainedWithin (Keep the longest match and deduplicate) / NotContainedWithin (Keep the shortest match and deduplicate) / ContainsButNotEqual (Keep longest match but keep duplicate matches) / ExactMatch (Deduplicate) / LeftToRight (Keep the leftmost longest non-overlapping span) What to do with dictionary matches that overlap. No. The default is to not consolidate matches. - case exact / insensitive Either match exact case or be case insensitive. No. The default is exact match. - lemma True / False Match the terms in the dictionary with the lemmas from the text. The dictionary should contain only lemma forms. For example, add mouse in the dictionary to match both mouse and mice in text. Do not add mice in the dictionary. To match terms that consist of multiple tokens in text, separate the lemmas of those terms in the dictionary by a space character. No. The default is False. -" -34BC2F43F99778FFA7E2C3E414C3CFB32509276D_4,34BC2F43F99778FFA7E2C3E414C3CFB32509276D," mappings.columns (columns as attribute of mappings: {}) list [ string ] List of column headers in the same order as present in the table csv Yes if dict_type: table - mappings.entry (entry as attribute of mappings: {}) string The name of the column header that contains the string to match against the document. Yes if dict_type: table - label string The label to attach to matches. No - - - -" -34BC2F43F99778FFA7E2C3E414C3CFB32509276D_5,34BC2F43F99778FFA7E2C3E414C3CFB32509276D,"Code sample - - Load the dictionaries -dictionaries = watson_nlp.toolkit.rule_utils.DictionaryConfig.load_all([{ -'name': 'Names', -'source': term_file, -'case':'insensitive' -}, { -'name': 'places_and_sights_mappings', -'source': table_file, -'dict_type': 'table', -'mappings': { -'columns': 'label', 'entry'], -'entry': 'entry' -} -}]) - -" -34BC2F43F99778FFA7E2C3E414C3CFB32509276D_6,34BC2F43F99778FFA7E2C3E414C3CFB32509276D," Training a model that contains dictionaries - -After you have loaded the dictionaries, create a dictionary model and train the model using the RBR.train() method. In the method, specify: - - - -* The module directory -* The language of the dictionary entries -* The dictionaries to use - - - -" -34BC2F43F99778FFA7E2C3E414C3CFB32509276D_7,34BC2F43F99778FFA7E2C3E414C3CFB32509276D,"Code sample - -custom_dict_block = watson_nlp.resources.feature_extractor.RBR.train(module_folder, -language='en', dictionaries=dictionaries) - -" -34BC2F43F99778FFA7E2C3E414C3CFB32509276D_8,34BC2F43F99778FFA7E2C3E414C3CFB32509276D," Applying the model on new data - -After you have trained the dictionaries, apply the model on new data using the run() method, as you would use on any of the existing pre-trained blocks. - -" -34BC2F43F99778FFA7E2C3E414C3CFB32509276D_9,34BC2F43F99778FFA7E2C3E414C3CFB32509276D,"Code sample - -custom_dict_block.run('Bruce is at Times Square') - -Output of the code sample: - -{(0, 5): ['Names'], (12, 24): ['SIGHT']} - -To show the labels or the name of the dictionary: - -RBR_result = custom_dict_block.executor.get_raw_response('Bruce is at Times Square', language='en') -print(RBR_result) - -Output showing the labels: - -{'annotations': {'View_Names': [{'label': 'Names', 'match': {'location': {'begin': 0, 'end': 5}, 'text': 'Bruce'}}], 'View_places_and_sights_mappings': [{'label': 'SIGHT', 'match': {'location': {'begin': 12, 'end': 24}, 'text': 'Times Square'}}]}, 'instrumentationInfo': {'annotator': {'version': '1.0', 'key': 'Text match extractor for NLP_Dict_Module_1'}, 'runningTimeMS': 3, 'documentSizeChars': 32, 'numAnnotationsTotal': 2, 'numAnnotationsPerType': [{'annotationType': 'View_Names', 'numAnnotations': 1}, {'annotationType': 'View_places_and_sights_mappings', 'numAnnotations': 1}], 'interrupted': False, 'success': True}} - -Parent topic:[Creating your own models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-create-model.html) -" -6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD_0,6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD," Detecting entities with regular expressions - -Similar to detecting entities with dictionaries, you can use regex pattern matches to detect entities. - -Regular expressions are not provided in files like dictionaries but in-memory within a regex configuration. You can use multiple regex configurations during the same extraction. - -Regexes that you define with Watson Natural Language Processing can use token boundaries. This way, you can ensure that your regular expression matches within one or more tokens. This is a clear advantage over simpler regular expression engines, especially when you work with a language that is not separated by whitespace, such as Chinese. - -Regular expressions are processed by a dedicated component called Rule-Based Runtime, or RBR for short. - -" -6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD_1,6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD," Creating regex configurations - -Begin by creating a module directory inside your notebook. This is a directory inside the notebook file system that is used temporarily to store the files created by the RBR training. This module directory can be the same directory that you created and used for dictionary-based entity extraction. Dictionaries and regular expressions can be used in the same training run. - -To create the module directory in your notebook, enter the following in a code cell. Note that the module directory can't contain a dash (-). - -import os -import watson_nlp -module_folder = ""NLP_RBR_Module_2"" -os.makedirs(module_folder, exist_ok=True) - -A regex configuration is a Python dictionary, with the following attributes: - - - -Available attributes in regex configurations with their values, descriptions of use and indication if required or not - - Attribute Value Description Required - - name string The name of the regular expression. Matches of the regular expression in the input text are tagged with this name in the output. Yes - regexes list (string of perl based regex patterns) Should be non-empty. Multiple regexes can be provided. Yes - flags Delimited string of valid flags Flags such as UNICODE or CASE_INSENSITIVE control the matching. Can also be a combination of flags. For the supported flags, see [Pattern (Java Platform SE 8)](https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html). No (defaults to DOTALL) - token_boundary.min int token_boundary indicates whether to match the regular expression only on token boundaries. Specified as a dict object with min and max attributes. No (returns the longest non-overlapping match at each character position in the input text) -" -6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD_2,6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD," token_boundary.max int max is an optional attribute for token_boundary and needed when the boundary needs to extend for a range (between min and max tokens). token_boundary.max needs to be >= token_boundary.min No (if token_boundary is specified, the min attribute can be specified alone) - groups list (string labels for matching groups) String index in list corresponds to matched group in pattern starting with 1 where 0 index corresponds to entire match. For example: regex: (a)(b) on ab with group: ['full', 'first', 'second'] will yield full: ab, first: a, second: b No (defaults to label match on full match) - - - -The regex configurations can be loaded using the following helper methods: - - - -* To load a single regex configuration, use watson_nlp.toolkit.RegexConfig.load() -* To load multiple regex configurations, use watson_nlp.toolkit.RegexConfig.load_all([)]) - - - -" -6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD_3,6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD,"Code sample - -This sample shows you how to load two different regex configurations. The first configuration detects person names. It uses the groups attribute to allow easy access to the full, first and last name at a later stage. - -The second configuration detects acronyms as a sequence of all-uppercase characters. By using the token_boundary attribute, it prevents matches in words that contain both uppercase and lowercase characters. - -from watson_nlp.toolkit.rule_utils import RegexConfig - - Load some regex configs, for instance to match First names or acronyms -regexes = RegexConfig.load_all([ -{ -'name': 'full names', -'regexes': '(A-Z]a-z]) (A-Z]a-z])'], -'groups': 'full name', 'first name', 'last name'] -}, -{ -'name': 'acronyms', -'regexes': '(A-Z]+)'], -'groups': 'acronym'], -'token_boundary': { -'min': 1, -'max': 1 -} -} -]) - -" -6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD_4,6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD," Training a model that contains regular expressions - -After you have loaded the regex configurations, create an RBR model using the RBR.train() method. In the method, specify: - - - -* The module directory -* The language of the text -* The regex configurations to use - - - -This is the same method that is used to train RBR with dictionary-based extraction. You can pass the dictionary configuration in the same method call. - -" -6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD_5,6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD,"Code sample - - Train the RBR model -custom_regex_block = watson_nlp.resources.feature_extractor.RBR.train(module_path=module_folder, language='en', regexes=regexes) - -" -6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD_6,6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD," Applying the model on new data - -After you have trained the dictionaries, apply the model on new data using the run() method, as you would use on any of the existing pre-trained blocks. - -" -6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD_7,6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD,"Code sample - -custom_regex_block.run('Bruce Wayne works for NASA') - -Output of the code sample: - -{(0, 11): ['regex::full names'], (0, 5): ['regex::full names'], (6, 11): ['regex::full names'], (22, 26): ['regex::acronyms']} - -To show the matching subgroups or the matched text: - -import json - Get the raw response including matching groups -full_regex_result = custom_regex_block.executor.get_raw_response('Bruce Wayne works for NASA‘, language='en') -print(json.dumps(full_regex_result, indent=2)) - -Output of the code sample: - -{ -""annotations"": { -""View_full names"": [ -{ -""label"": ""regex::full names"", -""fullname"": { -""location"": { -""begin"": 0, -""end"": 11 -}, -""text"": ""Bruce Wayne"" -}, -""firstname"": { -""location"": { -""begin"": 0, -""end"": 5 -}, -""text"": ""Bruce"" -}, -""lastname"": { -""location"": { -""begin"": 6, -""end"": 11 -}, -""text"": ""Wayne"" -} -} -], -""View_acronyms"": [ -{ -""label"": ""regex::acronyms"", -""acronym"": { -""location"": { -""begin"": 22, -""end"": 26 -}, -""text"": ""NASA"" -} -} -] -}, -... -} - -Parent topic:[Creating your own models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-create-model.html) -" -D71261B71A4CF5A1AD5E148EDE7751B630060BDF_0,D71261B71A4CF5A1AD5E148EDE7751B630060BDF," Detecting entities with a custom transformer model - -If you don't have a fixed set of terms or you cannot express entities that you like to detect as regular expressions, you can build a custom transformer model. The model is based on the pretrained Slate IBM Foundation model. - -When you use the pretrained model, you can build multi-lingual models. You don't have to have separate models for each language. - -You need sufficient training data to achieve high quality (2000 – 5000 per entity type). If you have GPUs available, use them for training. - -Note:Training transformer models is CPU and memory intensive. The predefined environments are not large enough to complete the training. Create a custom notebook environment with a larger amount of CPU and memory, and use that to run your notebook. If you have GPUs available, it's highly recommended to use them. See [Creating your own environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html). - -" -D71261B71A4CF5A1AD5E148EDE7751B630060BDF_1,D71261B71A4CF5A1AD5E148EDE7751B630060BDF," Input data format - -The training data is represented as an array with multiple JSON objects. Each JSON object represents one training instance, and must have a text and a mentions field. The text field represents the training sentence text, and mentions is an array of JSON objects with the text, type, and location of each mention: - -[ -{ -""text"": str, -""mentions"": { -""location"": { -""begin"": int, -""end"": int -}, -""text"": str, -""type"": str -},...] -},... -] - -Example: - -[ -{ -""id"": 38863234, -""text"": ""I'm moving to Colorado in a couple months."", -""mentions"": { -""text"": ""Colorado"", -""type"": ""Location"", -""location"": { -""begin"": 14, -""end"": 22 -} -}, -{ -""text"": ""couple months"", -""type"": ""Duration"", -""location"": { -""begin"": 28, -""end"": 41 -} -}] -} -] - -" -D71261B71A4CF5A1AD5E148EDE7751B630060BDF_2,D71261B71A4CF5A1AD5E148EDE7751B630060BDF," Training your model - -The transformer algorithm is using the pretrained Slate model. The pretrained Slate model is only available in Runtime 23.1. - -To get the options available for configuring Transformer training, enter: - -help(watson_nlp.workflows.entity_mentions.transformer.Transformer.train) - -" -D71261B71A4CF5A1AD5E148EDE7751B630060BDF_3,D71261B71A4CF5A1AD5E148EDE7751B630060BDF,"Sample code - -import watson_nlp -from watson_nlp.toolkit.entity_mentions_utils.train_util import prepare_stream_of_train_records_from_JSON_collection - - load the syntax models for all languages to be supported -syntax_model = watson_nlp.load('syntax_izumo_en_stock') -syntax_models = [syntax_model] - - load the pretrained Slate model -pretrained_model_resource = watson_nlp.load('pretrained-model_slate.153m.distilled_many_transformer_multilingual_uncased') - - prepare the train and dev data - entity_train_data is a directory with one or more json files in the input format specified above -train_data_stream = prepare_stream_of_train_records_from_JSON_collection('entity_train_data') -dev_data_stream = prepare_stream_of_train_records_from_JSON_collection('entity_train_data') - - train a transformer workflow model -trained_workflow = watson_nlp.workflows.entity_mentions.transformer.Transformer.train( -train_data_stream=train_data_stream, -dev_data_stream=dev_data_stream, -syntax_models=syntax_models, -template_resource=pretrained_model_resource, -num_train_epochs=3, -) - -" -D71261B71A4CF5A1AD5E148EDE7751B630060BDF_4,D71261B71A4CF5A1AD5E148EDE7751B630060BDF," Applying the model on new data - -Apply the trained transformer workflow model on new data by using the run() method, as you would use on any of the existing pre-trained blocks. - -" -D71261B71A4CF5A1AD5E148EDE7751B630060BDF_5,D71261B71A4CF5A1AD5E148EDE7751B630060BDF,"Code sample - -trained_workflow.run('Bruce is at Times Square') - -" -D71261B71A4CF5A1AD5E148EDE7751B630060BDF_6,D71261B71A4CF5A1AD5E148EDE7751B630060BDF," Storing and loading the model - -The custom transformer model can be stored as any other model as described in ""Loading and storing models"", using ibm_watson_studio_lib. - -To load the custom transformer model, extra steps are required: - - - -1. Ensure that you have an access token on the Access control page on the Manage tab of your project. Only project admins can create access tokens. The access token can have Viewer or Editor access permissions. Only editors can inject the token into a notebook. -2. Add the project token to the notebook by clicking More > Insert project token from the notebook action bar and then run the cell. - -By running the inserted hidden code cell, a wslib object is created that you can use for functions in the ibm-watson-studio-lib library. For information on the available ibm-watson-studio-lib functions, see [Using ibm-watson-studio-lib for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html). -3. Download and extract the model to your local runtime environment: - -import zipfile -model_zip = 'trained_workflow_file' -model_folder = 'trained_workflow_folder' -wslib.download_file('trained_workflow', file_name=model_zip) - -with zipfile.ZipFile(model_zip, 'r') as zip_ref: -zip_ref.extractall(model_folder) -4. Load the model from the extracted folder: - -trained_workflow = watson_nlp.load(model_folder) - - - -Parent topic:[Creating your own models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-create-model.html) -" -355EA8BD00A0246EACFEF090AF6A6B6F2BD92D4F_0,355EA8BD00A0246EACFEF090AF6A6B6F2BD92D4F," Extracting sentiment with a custom transformer model - -You can train your own models for sentiment extraction based on the Slate IBM Foundation model. This pretrained model can be find-tuned for your use case by training it on your specific input data. - -The Slate IBM Foundation model is available only in Runtime 23.1. - -Note: Training transformer models is CPU and memory intensive. Depending on the size of your training data, the environment might not be large enough to complete the training. If you run into issues with the notebook kernel during training, create a custom notebook environment with a larger amount of CPU and memory, and use that to run your notebook. Use a GPU-based environment for training and also inference time, if it is available to you. See [Creating your own environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html). - - - -* [Input data format for training](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-extract-sentiment.html?context=cdpaas&locale=eninput) -* [Loading the pretrained model resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-extract-sentiment.html?context=cdpaas&locale=enload) -* [Training the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-extract-sentiment.html?context=cdpaas&locale=entrain) -* [Applying the model on new data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-extract-sentiment.html?context=cdpaas&locale=enapply) - - - -" -355EA8BD00A0246EACFEF090AF6A6B6F2BD92D4F_1,355EA8BD00A0246EACFEF090AF6A6B6F2BD92D4F," Input data format for training - -You need to provide a training and development data set to the training function. The development data is usually around 10% of the training data. Each training or development sample is represented as a JSON object. It must have a text and a labels field. The text represents the training example text, and the labels field is an array, which contains exactly one label of positive, neutral, or negative. - -The following is an example of an array with sample training data: - -[ -{ -""text"": ""I am happy"", -""labels"": ""positive""] -}, -{ -""text"": ""I am sad"", -""labels"": ""negative""] -}, -{ -""text"": ""The sky is blue"", -""labels"": ""neutral""] -} -] - -The training and development data sets are created as data streams from arrays of JSON objects. To create the data streams, you might use the utility method prepare_data_from_json: - -import watson_nlp -from watson_nlp.toolkit.sentiment_analysis_utils.training import train_util as utils - -training_data_file = ""train_data.json"" -dev_data_file = ""dev_data.json"" - -train_stream = utils.prepare_data_from_json(training_data_file) -dev_stream = utils.prepare_data_from_json(dev_data_file) - -" -355EA8BD00A0246EACFEF090AF6A6B6F2BD92D4F_2,355EA8BD00A0246EACFEF090AF6A6B6F2BD92D4F," Loading the pretrained model resources - -The pretrained Slate IBM Foundation model needs to be loaded before it passes to the training algorithm. In addition, you need to load the syntax analysis models for the languages that are used in your input texts. - -To load the model: - - Load the pretrained Slate IBM Foundation model -pretrained_model_resource = watson_nlp.load('pretrained-model_slate.153m.distilled_many_transformer_multilingual_uncased') - - Download relevant syntax analysis models -syntax_model_en = watson_nlp.load('syntax_izumo_en_stock') -syntax_model_de = watson_nlp.load('syntax_izumo_de_stock') - - Create a list of all syntax analysis models -syntax_models = [syntax_model_en, syntax_model_de] - -" -355EA8BD00A0246EACFEF090AF6A6B6F2BD92D4F_3,355EA8BD00A0246EACFEF090AF6A6B6F2BD92D4F," Training the model - -For all options that are available for configuring sentiment transformer training, enter: - -help(watson_nlp.workflows.sentiment.AggregatedSentiment.train_transformer) - -The train_transformer method creates a workflow model, which automatically runs syntax analysis and the trained sentiment classification. In a subsequent step, enable language detection so that the workflow model can run on input text without any prerequisite information. - -The following is a sample call using the input data and pretrained model from the previous section (Training the model): - -from watson_nlp.workflows.sentiment import AggregatedSentiment - -sentiment_model = AggregatedSentiment.train_transformer( -train_data_stream = train_stream, -dev_data_stream = dev_stream, -syntax_model=syntax_models, -pretrained_model_resource=pretrained_model_resource, -label_list=['negative', 'neutral', 'positive'], -learning_rate=2e-5, -num_train_epochs=10, -combine_approach=""NON_NEUTRAL_MEAN"", -keep_model_artifacts=True -) -lang_detect_model = watson_nlp.load('lang-detect_izumo_multi_stock') - -sentiment_model.enable_lang_detect(lang_detect_model) - -" -355EA8BD00A0246EACFEF090AF6A6B6F2BD92D4F_4,355EA8BD00A0246EACFEF090AF6A6B6F2BD92D4F," Applying the model on new data - -After you train the model on a data set, apply the model on new data by using the run() method, as you would use on any of the existing pre-trained blocks. - -Sample code: - -input_text = 'new input text' -sentiment_predictions = sentiment_model.run(input_text) - -Parent topic:[Creating your own models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-create-model_cloud.html) -" -D174298E1DD7898C08771488715D83FC7A7740AE_0,D174298E1DD7898C08771488715D83FC7A7740AE," Working with pre-trained models - -Watson Natural Language Processing provides pre-trained models in over 20 languages. They are curated by a dedicated team of experts, and evaluated for quality on each specific language. These pre-trained models can be used in production environments without you having to worry about license or intellectual property infringements. - -" -D174298E1DD7898C08771488715D83FC7A7740AE_1,D174298E1DD7898C08771488715D83FC7A7740AE," Loading and running a model - -To load a model, you first need to know its name. Model names follow a standard convention encoding the type of model (like classification or entity extraction), type of algorithm (like BERT or SVM), language code, and details of the type system. - -To find the model that matches your needs, use the task catalog. See [Watson NLP task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html). - -You can find the expected input for a given block class (for example to the Entity Mentions model) by using help() on the block class run() method: - -import watson_nlp - -help(watson_nlp.blocks.keywords.TextRank.run) - -Watson Natural Language Processing encapsulates natural language functionality through blocks and workflows. Each block or workflow supports functions to: - - - -* load(): load a model -* run(): run the model on input arguments -* train(): train the model on your own data (not all blocks and workflows support training) -* save(): save the model that has been trained on your own data - - - -" -D174298E1DD7898C08771488715D83FC7A7740AE_2,D174298E1DD7898C08771488715D83FC7A7740AE," Blocks - -Two types of blocks exist: - - - -* [Blocks that operate directly on the input document](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-pretrained.html?context=cdpaas&locale=enoperate-data) -* [Blocks that depend on other blocks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-pretrained.html?context=cdpaas&locale=enoperate-blocks) - - - -[Workflows](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-pretrained.html?context=cdpaas&locale=enworkflows) run one more blocks on the input document, in a pipeline. - -" -D174298E1DD7898C08771488715D83FC7A7740AE_3,D174298E1DD7898C08771488715D83FC7A7740AE," Blocks that operate directly on the input document - -An example of a block that operates directly on the input document is the Syntax block, which performs natural language processing operations such as tokenization, lemmatization, part of speech tagging or dependency parsing. - -Example: running syntax analysis on a text snippet: - -import watson_nlp - - Load the syntax model for English -syntax_model = watson_nlp.load('syntax_izumo_en_stock') - - Run the syntax model and print the result -syntax_prediction = syntax_model.run('Welcome to IBM!') -print(syntax_prediction) - -" -D174298E1DD7898C08771488715D83FC7A7740AE_4,D174298E1DD7898C08771488715D83FC7A7740AE," Blocks that depend on other blocks - -Blocks that depend on other blocks cannot be applied on the input document directly. They are applied on the output of one or more preceeding blocks. For example, the Keyword Extraction block depends on the Syntax and Noun Phrases block. - -These blocks can be loaded but can only be run in a particular order on the input document. For example: - -import watson_nlp -text = ""Anna went to school at University of California Santa Cruz. -Anna joined the university in 2015."" - - Load Syntax, Noun Phrases and Keywords models for English -syntax_model = watson_nlp.load('syntax_izumo_en_stock') -noun_phrases_model = watson_nlp.load('noun-phrases_rbr_en_stock') -keywords_model = watson_nlp.load('keywords_text-rank_en_stock') - - Run the Syntax and Noun Phrases models -syntax_prediction = syntax_model.run(text, parsers=('token', 'lemma', 'part_of_speech')) -noun_phrases = noun_phrases_model.run(text) - - Run the keywords model -keywords = keywords_model.run(syntax_prediction, noun_phrases, limit=2) -print(keywords) - -" -D174298E1DD7898C08771488715D83FC7A7740AE_5,D174298E1DD7898C08771488715D83FC7A7740AE," Workflows - -Workflows are predefined end-to-end pipelines from a raw document to a final block, where all necessary blocks are chained as part of the workflow pipeline. For instance, the Entity Mentions block offered in Runtime 22.2 requires syntax analysis results, so the end-to-end process would be: input text -> Syntax analysis -> Entity Mentions -> Entity Mentions results. Starting with Runtime 23.1, you can call the Entity Mentions workflow. Refer to this sample: - -import watson_nlp - - Load the workflow model -mentions_workflow = watson_nlp.load('entity-mentions_transformer-workflow_multilingual_slate.153m.distilled') - - Run the entity extraction workflow on the input text -mentions_workflow.run('IBM announced new advances in quantum computing', language_code=""en"") - -Parent topic:[Watson Natural Language Processing library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html) -" -174D6FDF73627D7B2258D7F351C3D0156C06D1DC_0,174D6FDF73627D7B2258D7F351C3D0156C06D1DC," Category types - -The categories that are returned by the the Watson Natural Language Processing Categories block are based on the IAB Tech Lab Content Taxonomy, which provides common language categories that can be used when describing content. - -The following table lists the IAB categories taxonomy returned by the Categories block. - - - - LEVEL 1 LEVEL 2 LEVEL 3 LEVEL 4 - - Automotive - Automotive Auto Body Styles - Automotive Auto Body Styles Commercial Trucks - Automotive Auto Body Styles Sedan - Automotive Auto Body Styles Station Wagon - Automotive Auto Body Styles SUV - Automotive Auto Body Styles Van - Automotive Auto Body Styles Convertible - Automotive Auto Body Styles Coupe - Automotive Auto Body Styles Crossover - Automotive Auto Body Styles Hatchback - Automotive Auto Body Styles Microcar - Automotive Auto Body Styles Minivan - Automotive Auto Body Styles Off-Road Vehicles - Automotive Auto Body Styles Pickup Trucks - Automotive Auto Type - Automotive Auto Type Budget Cars - Automotive Auto Type Certified Pre-Owned Cars - Automotive Auto Type Classic Cars - Automotive Auto Type Concept Cars - Automotive Auto Type Driverless Cars - Automotive Auto Type Green Vehicles - Automotive Auto Type Luxury Cars - Automotive Auto Type Performance Cars - Automotive Car Culture - Automotive Dash Cam Videos - Automotive Motorcycles - Automotive Road-Side Assistance - Automotive Scooters - Automotive Auto Buying and Selling - Automotive Auto Insurance - Automotive Auto Parts - Automotive Auto Recalls - Automotive Auto Repair - Automotive Auto Safety - Automotive Auto Shows - Automotive Auto Technology - Automotive Auto Technology Auto Infotainment Technologies - Automotive Auto Technology Auto Navigation Systems - Automotive Auto Technology Auto Safety Technologies - Automotive Auto Rentals - Books and Literature - Books and Literature Art and Photography Books - Books and Literature Biographies - Books and Literature Children's Literature - Books and Literature Comics and Graphic Novels - Books and Literature Cookbooks - Books and Literature Fiction - Books and Literature Poetry - Books and Literature Travel Books - Books and Literature Young Adult Literature - Business and Finance - Business and Finance Business - Business and Finance Business Business Accounting & Finance - Business and Finance Business Human Resources - Business and Finance Business Large Business - Business and Finance Business Logistics -" -174D6FDF73627D7B2258D7F351C3D0156C06D1DC_1,174D6FDF73627D7B2258D7F351C3D0156C06D1DC," Business and Finance Business Marketing and Advertising - Business and Finance Business Sales - Business and Finance Business Small and Medium-sized Business - Business and Finance Business Startups - Business and Finance Business Business Administration - Business and Finance Business Business Banking & Finance - Business and Finance Business Business Banking & Finance Angel Investment - Business and Finance Business Business Banking & Finance Bankruptcy - Business and Finance Business Business Banking & Finance Business Loans - Business and Finance Business Business Banking & Finance Debt Factoring & Invoice Discounting - Business and Finance Business Business Banking & Finance Mergers and Acquisitions - Business and Finance Business Business Banking & Finance Private Equity - Business and Finance Business Business Banking & Finance Sale & Lease Back - Business and Finance Business Business Banking & Finance Venture Capital - Business and Finance Business Business I.T. - Business and Finance Business Business Operations - Business and Finance Business Consumer Issues - Business and Finance Business Consumer Issues Recalls - Business and Finance Business Executive Leadership & Management - Business and Finance Business Government Business - Business and Finance Business Green Solutions - Business and Finance Business Business Utilities - Business and Finance Economy - Business and Finance Economy Commodities - Business and Finance Economy Currencies - Business and Finance Economy Financial Crisis - Business and Finance Economy Financial Reform - Business and Finance Economy Financial Regulation - Business and Finance Economy Gasoline Prices - Business and Finance Economy Housing Market - Business and Finance Economy Interest Rates - Business and Finance Economy Job Market - Business and Finance Industries - Business and Finance Industries Advertising Industry - Business and Finance Industries Education industry - Business and Finance Industries Entertainment Industry - Business and Finance Industries Environmental Services Industry - Business and Finance Industries Financial Industry - Business and Finance Industries Food Industry - Business and Finance Industries Healthcare Industry - Business and Finance Industries Hospitality Industry - Business and Finance Industries Information Services Industry - Business and Finance Industries Legal Services Industry - Business and Finance Industries Logistics and Transportation Industry - Business and Finance Industries Agriculture - Business and Finance Industries Management Consulting Industry - Business and Finance Industries Manufacturing Industry - Business and Finance Industries Mechanical and Industrial Engineering Industry - Business and Finance Industries Media Industry - Business and Finance Industries Metals Industry - Business and Finance Industries Non-Profit Organizations - Business and Finance Industries Pharmaceutical Industry - Business and Finance Industries Power and Energy Industry - Business and Finance Industries Publishing Industry - Business and Finance Industries Real Estate Industry - Business and Finance Industries Apparel Industry - Business and Finance Industries Retail Industry - Business and Finance Industries Technology Industry - Business and Finance Industries Telecommunications Industry - Business and Finance Industries Automotive Industry -" -174D6FDF73627D7B2258D7F351C3D0156C06D1DC_2,174D6FDF73627D7B2258D7F351C3D0156C06D1DC," Business and Finance Industries Aviation Industry - Business and Finance Industries Biotech and Biomedical Industry - Business and Finance Industries Civil Engineering Industry - Business and Finance Industries Construction Industry - Business and Finance Industries Defense Industry - Careers - Careers Apprenticeships - Careers Career Advice - Careers Career Planning - Careers Job Search - Careers Job Search Job Fairs - Careers Job Search Resume Writing and Advice - Careers Remote Working - Careers Vocational Training - Education - Education Adult Education - Education Private School - Education Secondary Education - Education Special Education - Education College Education - Education College Education College Planning - Education College Education Postgraduate Education - Education College Education Postgraduate Education Professional School - Education College Education Undergraduate Education - Education Early Childhood Education - Education Educational Assessment - Education Educational Assessment Standardized Testing - Education Homeschooling - Education Homework and Study - Education Language Learning - Education Online Education - Education Primary Education - Events and Attractions - Events and Attractions Amusement and Theme Parks - Events and Attractions Fashion Events - Events and Attractions Historic Site and Landmark Tours - Events and Attractions Malls & Shopping Centers - Events and Attractions Museums & Galleries - Events and Attractions Musicals - Events and Attractions National & Civic Holidays - Events and Attractions Nightclubs - Events and Attractions Outdoor Activities - Events and Attractions Parks & Nature - Events and Attractions Party Supplies and Decorations - Events and Attractions Awards Shows - Events and Attractions Personal Celebrations & Life Events - Events and Attractions Personal Celebrations & Life Events Anniversary - Events and Attractions Personal Celebrations & Life Events Wedding - Events and Attractions Personal Celebrations & Life Events Baby Shower - Events and Attractions Personal Celebrations & Life Events Bachelor Party - Events and Attractions Personal Celebrations & Life Events Bachelorette Party - Events and Attractions Personal Celebrations & Life Events Birth - Events and Attractions Personal Celebrations & Life Events Birthday - Events and Attractions Personal Celebrations & Life Events Funeral - Events and Attractions Personal Celebrations & Life Events Graduation - Events and Attractions Personal Celebrations & Life Events Prom - Events and Attractions Political Event - Events and Attractions Religious Events - Events and Attractions Sporting Events - Events and Attractions Theater Venues and Events - Events and Attractions Zoos & Aquariums - Events and Attractions Bars & Restaurants - Events and Attractions Business Expos & Conferences -" -174D6FDF73627D7B2258D7F351C3D0156C06D1DC_3,174D6FDF73627D7B2258D7F351C3D0156C06D1DC," Events and Attractions Casinos & Gambling - Events and Attractions Cinemas and Events - Events and Attractions Comedy Events - Events and Attractions Concerts & Music Events - Events and Attractions Fan Conventions - Family and Relationships - Family and Relationships Bereavement - Family and Relationships Dating - Family and Relationships Divorce - Family and Relationships Eldercare - Family and Relationships Marriage and Civil Unions - Family and Relationships Parenting - Family and Relationships Parenting Adoption and Fostering - Family and Relationships Parenting Daycare and Pre-School - Family and Relationships Parenting Internet Safety - Family and Relationships Parenting Parenting Babies and Toddlers - Family and Relationships Parenting Parenting Children Aged 4-11 - Family and Relationships Parenting Parenting Teens - Family and Relationships Parenting Special Needs Kids - Family and Relationships Single Life - Fine Art - Fine Art Costume - Fine Art Dance - Fine Art Design - Fine Art Digital Arts - Fine Art Fine Art Photography - Fine Art Modern Art - Fine Art Opera - Fine Art Theater - Food & Drink - Food & Drink Alcoholic Beverages - Food & Drink Vegan Diets - Food & Drink Vegetarian Diets - Food & Drink World Cuisines - Food & Drink Barbecues and Grilling - Food & Drink Cooking - Food & Drink Desserts and Baking - Food & Drink Dining Out - Food & Drink Food Allergies - Food & Drink Food Movements - Food & Drink Healthy Cooking and Eating - Food & Drink Non-Alcoholic Beverages - Healthy Living - Healthy Living Children's Health - Healthy Living Fitness and Exercise - Healthy Living Fitness and Exercise Participant Sports - Healthy Living Fitness and Exercise Running and Jogging - Healthy Living Men's Health - Healthy Living Nutrition - Healthy Living Senior Health - Healthy Living Weight Loss - Healthy Living Wellness - Healthy Living Wellness Alternative Medicine - Healthy Living Wellness Alternative Medicine Herbs and Supplements - Healthy Living Wellness Alternative Medicine Holistic Health - Healthy Living Wellness Physical Therapy - Healthy Living Wellness Smoking Cessation - Healthy Living Women's Health - Hobbies & Interests - Hobbies & Interests Antiquing and Antiques - Hobbies & Interests Magic and Illusion - Hobbies & Interests Model Toys - Hobbies & Interests Musical Instruments - Hobbies & Interests Paranormal Phenomena - Hobbies & Interests Radio Control -" -174D6FDF73627D7B2258D7F351C3D0156C06D1DC_4,174D6FDF73627D7B2258D7F351C3D0156C06D1DC," Hobbies & Interests Sci-fi and Fantasy - Hobbies & Interests Workshops and Classes - Hobbies & Interests Arts and Crafts - Hobbies & Interests Arts and Crafts Beadwork - Hobbies & Interests Arts and Crafts Candle and Soap Making - Hobbies & Interests Arts and Crafts Drawing and Sketching - Hobbies & Interests Arts and Crafts Jewelry Making - Hobbies & Interests Arts and Crafts Needlework - Hobbies & Interests Arts and Crafts Painting - Hobbies & Interests Arts and Crafts Photography - Hobbies & Interests Arts and Crafts Scrapbooking - Hobbies & Interests Arts and Crafts Woodworking - Hobbies & Interests Beekeeping - Hobbies & Interests Birdwatching - Hobbies & Interests Cigars - Hobbies & Interests Collecting - Hobbies & Interests Collecting Comic Books - Hobbies & Interests Collecting Stamps and Coins - Hobbies & Interests Content Production - Hobbies & Interests Content Production Audio Production - Hobbies & Interests Content Production Freelance Writing - Hobbies & Interests Content Production Screenwriting - Hobbies & Interests Content Production Video Production - Hobbies & Interests Games and Puzzles - Hobbies & Interests Games and Puzzles Board Games and Puzzles - Hobbies & Interests Games and Puzzles Card Games - Hobbies & Interests Games and Puzzles Roleplaying Games - Hobbies & Interests Genealogy and Ancestry - Home & Garden - Home & Garden Gardening - Home & Garden Remodeling & Construction - Home & Garden Smart Home - Home & Garden Home Appliances - Home & Garden Home Entertaining - Home & Garden Home Improvement - Home & Garden Home Security - Home & Garden Indoor Environmental Quality - Home & Garden Interior Decorating - Home & Garden Landscaping - Home & Garden Outdoor Decorating - Medical Health - Medical Health Diseases and Conditions - Medical Health Diseases and Conditions Allergies - Medical Health Diseases and Conditions Ear, Nose and Throat Conditions - Medical Health Diseases and Conditions Endocrine and Metabolic Diseases - Medical Health Diseases and Conditions Endocrine and Metabolic Diseases Hormonal Disorders - Medical Health Diseases and Conditions Endocrine and Metabolic Diseases Menopause -" -174D6FDF73627D7B2258D7F351C3D0156C06D1DC_5,174D6FDF73627D7B2258D7F351C3D0156C06D1DC," Medical Health Diseases and Conditions Endocrine and Metabolic Diseases Thyroid Disorders - Medical Health Diseases and Conditions Eye and Vision Conditions - Medical Health Diseases and Conditions Foot Health - Medical Health Diseases and Conditions Heart and Cardiovascular Diseases - Medical Health Diseases and Conditions Infectious Diseases - Medical Health Diseases and Conditions Injuries - Medical Health Diseases and Conditions Injuries First Aid - Medical Health Diseases and Conditions Lung and Respiratory Health - Medical Health Diseases and Conditions Mental Health - Medical Health Diseases and Conditions Reproductive Health - Medical Health Diseases and Conditions Reproductive Health Birth Control - Medical Health Diseases and Conditions Reproductive Health Infertility - Medical Health Diseases and Conditions Reproductive Health Pregnancy - Medical Health Diseases and Conditions Blood Disorders - Medical Health Diseases and Conditions Sexual Health - Medical Health Diseases and Conditions Sexual Health Sexual Conditions - Medical Health Diseases and Conditions Skin and Dermatology - Medical Health Diseases and Conditions Sleep Disorders - Medical Health Diseases and Conditions Substance Abuse - Medical Health Diseases and Conditions Bone and Joint Conditions - Medical Health Diseases and Conditions Brain and Nervous System Disorders - Medical Health Diseases and Conditions Cancer - Medical Health Diseases and Conditions Cold and Flu - Medical Health Diseases and Conditions Dental Health - Medical Health Diseases and Conditions Diabetes - Medical Health Diseases and Conditions Digestive Disorders - Medical Health Medical Tests - Medical Health Pharmaceutical Drugs - Medical Health Surgery - Medical Health Vaccines - Medical Health Cosmetic Medical Services - Movies - Movies Action and Adventure Movies - Movies Romance Movies - Movies Science Fiction Movies - Movies Indie and Arthouse Movies - Movies Animation Movies - Movies Comedy Movies - Movies Crime and Mystery Movies - Movies Documentary Movies - Movies Drama Movies - Movies Family and Children Movies - Movies Fantasy Movies - Movies Horror Movies - Movies World Movies - Music and Audio - Music and Audio Adult Contemporary Music - Music and Audio Adult Contemporary Music Soft AC Music - Music and Audio Adult Contemporary Music Urban AC Music - Music and Audio Adult Album Alternative - Music and Audio Alternative Music - Music and Audio Children's Music - Music and Audio Classic Hits - Music and Audio Classical Music - Music and Audio College Radio - Music and Audio Comedy (Music and Audio) - Music and Audio Contemporary Hits/Pop/Top 40 - Music and Audio Country Music - Music and Audio Dance and Electronic Music - Music and Audio World/International Music - Music and Audio Songwriters/Folk -" -174D6FDF73627D7B2258D7F351C3D0156C06D1DC_6,174D6FDF73627D7B2258D7F351C3D0156C06D1DC," Music and Audio Gospel Music - Music and Audio Hip Hop Music - Music and Audio Inspirational/New Age Music - Music and Audio Jazz - Music and Audio Oldies/Adult Standards - Music and Audio Reggae - Music and Audio Blues - Music and Audio Religious (Music and Audio) - Music and Audio R&B/Soul/Funk - Music and Audio Rock Music - Music and Audio Rock Music Album-oriented Rock - Music and Audio Rock Music Alternative Rock - Music and Audio Rock Music Classic Rock - Music and Audio Rock Music Hard Rock - Music and Audio Rock Music Soft Rock - Music and Audio Soundtracks, TV and Showtunes - Music and Audio Sports Radio - Music and Audio Talk Radio - Music and Audio Talk Radio Business News Radio - Music and Audio Talk Radio Educational Radio - Music and Audio Talk Radio News Radio - Music and Audio Talk Radio News/Talk Radio - Music and Audio Talk Radio Public Radio - Music and Audio Urban Contemporary Music - Music and Audio Variety (Music and Audio) - News and Politics - News and Politics Crime - News and Politics Disasters - News and Politics International News - News and Politics Law - News and Politics Local News - News and Politics National News - News and Politics Politics - News and Politics Politics Elections - News and Politics Politics Political Issues - News and Politics Politics War and Conflicts - News and Politics Weather - Personal Finance - Personal Finance Consumer Banking - Personal Finance Financial Assistance - Personal Finance Financial Assistance Government Support and Welfare - Personal Finance Financial Assistance Student Financial Aid - Personal Finance Financial Planning - Personal Finance Frugal Living - Personal Finance Insurance - Personal Finance Insurance Health Insurance - Personal Finance Insurance Home Insurance - Personal Finance Insurance Life Insurance - Personal Finance Insurance Motor Insurance - Personal Finance Insurance Pet Insurance - Personal Finance Insurance Travel Insurance - Personal Finance Personal Debt - Personal Finance Personal Debt Credit Cards - Personal Finance Personal Debt Home Financing - Personal Finance Personal Debt Personal Loans - Personal Finance Personal Debt Student Loans - Personal Finance Personal Investing - Personal Finance Personal Investing Hedge Funds - Personal Finance Personal Investing Mutual Funds - Personal Finance Personal Investing Options - Personal Finance Personal Investing Stocks and Bonds - Personal Finance Personal Taxes - Personal Finance Retirement Planning - Personal Finance Home Utilities - Personal Finance Home Utilities Gas and Electric - Personal Finance Home Utilities Internet Service Providers - Personal Finance Home Utilities Phone Services - Personal Finance Home Utilities Water Services - Pets - Pets Birds - Pets Cats - Pets Dogs - Pets Fish and Aquariums - Pets Large Animals - Pets Pet Adoptions - Pets Reptiles - Pets Veterinary Medicine -" -174D6FDF73627D7B2258D7F351C3D0156C06D1DC_7,174D6FDF73627D7B2258D7F351C3D0156C06D1DC," Pets Pet Supplies - Pop Culture - Pop Culture Celebrity Deaths - Pop Culture Celebrity Families - Pop Culture Celebrity Homes - Pop Culture Celebrity Pregnancy - Pop Culture Celebrity Relationships - Pop Culture Celebrity Scandal - Pop Culture Celebrity Style - Pop Culture Humor and Satire - Real Estate - Real Estate Apartments - Real Estate Retail Property - Real Estate Vacation Properties - Real Estate Developmental Sites - Real Estate Hotel Properties - Real Estate Houses - Real Estate Industrial Property - Real Estate Land and Farms - Real Estate Office Property - Real Estate Real Estate Buying and Selling - Real Estate Real Estate Renting and Leasing - Religion & Spirituality - Religion & Spirituality Agnosticism - Religion & Spirituality Spirituality - Religion & Spirituality Astrology - Religion & Spirituality Atheism - Religion & Spirituality Buddhism - Religion & Spirituality Christianity - Religion & Spirituality Hinduism - Religion & Spirituality Islam - Religion & Spirituality Judaism - Religion & Spirituality Sikhism - Science - Science Biological Sciences - Science Chemistry - Science Environment - Science Genetics - Science Geography - Science Geology - Science Physics - Science Space and Astronomy - Shopping - Shopping Coupons and Discounts - Shopping Flower Shopping - Shopping Gifts and Greetings Cards - Shopping Grocery Shopping - Shopping Holiday Shopping - Shopping Household Supplies - Shopping Lotteries and Scratchcards - Shopping Sales and Promotions - Shopping Children's Games and Toys - Sports - Sports American Football - Sports Boxing - Sports Cheerleading - Sports College Sports - Sports College Sports College Football - Sports College Sports College Basketball - Sports College Sports College Baseball - Sports Cricket - Sports Cycling - Sports Darts - Sports Disabled Sports - Sports Diving - Sports Equine Sports - Sports Equine Sports Horse Racing - Sports Extreme Sports - Sports Extreme Sports Canoeing and Kayaking - Sports Extreme Sports Climbing - Sports Extreme Sports Paintball - Sports Extreme Sports Scuba Diving - Sports Extreme Sports Skateboarding - Sports Extreme Sports Snowboarding - Sports Extreme Sports Surfing and Bodyboarding - Sports Extreme Sports Waterskiing and Wakeboarding - Sports Australian Rules Football - Sports Fantasy Sports - Sports Field Hockey - Sports Figure Skating - Sports Fishing Sports - Sports Golf - Sports Gymnastics - Sports Hunting and Shooting - Sports Ice Hockey - Sports Inline Skating - Sports Lacrosse - Sports Auto Racing - Sports Auto Racing Motorcycle Sports - Sports Martial Arts - Sports Olympic Sports - Sports Olympic Sports Summer Olympic Sports - Sports Olympic Sports Winter Olympic Sports - Sports Poker and Professional Gambling - Sports Rodeo - Sports Rowing - Sports Rugby - Sports Rugby Rugby League - Sports Rugby Rugby Union - Sports Sailing - Sports Skiing - Sports Snooker/Pool/Billiards - Sports Soccer - Sports Badminton - Sports Softball -" -174D6FDF73627D7B2258D7F351C3D0156C06D1DC_8,174D6FDF73627D7B2258D7F351C3D0156C06D1DC," Sports Squash - Sports Swimming - Sports Table Tennis - Sports Tennis - Sports Track and Field - Sports Volleyball - Sports Walking - Sports Water Polo - Sports Weightlifting - Sports Baseball - Sports Wrestling - Sports Basketball - Sports Beach Volleyball - Sports Bodybuilding - Sports Bowling - Sports Sports Equipment - Style & Fashion - Style & Fashion Beauty - Style & Fashion Beauty Hair Care - Style & Fashion Beauty Makeup and Accessories - Style & Fashion Beauty Nail Care - Style & Fashion Beauty Natural and Organic Beauty - Style & Fashion Beauty Perfume and Fragrance - Style & Fashion Beauty Skin Care - Style & Fashion Women's Fashion - Style & Fashion Women's Fashion Women's Accessories - Style & Fashion Women's Fashion Women's Accessories Women's Glasses - Style & Fashion Women's Fashion Women's Accessories Women's Handbags and Wallets - Style & Fashion Women's Fashion Women's Accessories Women's Hats and Scarves - Style & Fashion Women's Fashion Women's Accessories Women's Jewelry and Watches - Style & Fashion Women's Fashion Women's Clothing - Style & Fashion Women's Fashion Women's Clothing Women's Business Wear - Style & Fashion Women's Fashion Women's Clothing Women's Casual Wear - Style & Fashion Women's Fashion Women's Clothing Women's Formal Wear - Style & Fashion Women's Fashion Women's Clothing Women's Intimates and Sleepwear - Style & Fashion Women's Fashion Women's Clothing Women's Outerwear - Style & Fashion Women's Fashion Women's Clothing Women's Sportswear - Style & Fashion Women's Fashion Women's Shoes and Footwear - Style & Fashion Body Art - Style & Fashion Children's Clothing - Style & Fashion Designer Clothing - Style & Fashion Fashion Trends - Style & Fashion High Fashion - Style & Fashion Men's Fashion - Style & Fashion Men's Fashion Men's Accessories - Style & Fashion Men's Fashion Men's Accessories Men's Jewelry and Watches - Style & Fashion Men's Fashion Men's Clothing - Style & Fashion Men's Fashion Men's Clothing Men's Business Wear - Style & Fashion Men's Fashion Men's Clothing Men's Casual Wear - Style & Fashion Men's Fashion Men's Clothing Men's Formal Wear - Style & Fashion Men's Fashion Men's Clothing Men's Outerwear -" -174D6FDF73627D7B2258D7F351C3D0156C06D1DC_9,174D6FDF73627D7B2258D7F351C3D0156C06D1DC," Style & Fashion Men's Fashion Men's Clothing Men's Sportswear - Style & Fashion Men's Fashion Men's Clothing Men's Underwear and Sleepwear - Style & Fashion Men's Fashion Men's Shoes and Footwear - Style & Fashion Personal Care - Style & Fashion Personal Care Bath and Shower - Style & Fashion Personal Care Deodorant and Antiperspirant - Style & Fashion Personal Care Oral care - Style & Fashion Personal Care Shaving - Style & Fashion Street Style - Technology & Computing - Technology & Computing Artificial Intelligence - Technology & Computing Augmented Reality - Technology & Computing Computing - Technology & Computing Computing Computer Networking - Technology & Computing Computing Computer Peripherals - Technology & Computing Computing Computer Software and Applications - Technology & Computing Computing Computer Software and Applications 3-D Graphics - Technology & Computing Computing Computer Software and Applications Photo Editing Software - Technology & Computing Computing Computer Software and Applications Shareware and Freeware - Technology & Computing Computing Computer Software and Applications Video Software - Technology & Computing Computing Computer Software and Applications Web Conferencing - Technology & Computing Computing Computer Software and Applications Antivirus Software - Technology & Computing Computing Computer Software and Applications Browsers - Technology & Computing Computing Computer Software and Applications Computer Animation - Technology & Computing Computing Computer Software and Applications Databases - Technology & Computing Computing Computer Software and Applications Desktop Publishing - Technology & Computing Computing Computer Software and Applications Digital Audio - Technology & Computing Computing Computer Software and Applications Graphics Software - Technology & Computing Computing Computer Software and Applications Operating Systems - Technology & Computing Computing Data Storage and Warehousing - Technology & Computing Computing Desktops - Technology & Computing Computing Information and Network Security - Technology & Computing Computing Internet - Technology & Computing Computing Internet Cloud Computing - Technology & Computing Computing Internet Web Development - Technology & Computing Computing Internet Web Hosting - Technology & Computing Computing Internet Email - Technology & Computing Computing Internet Internet for Beginners - Technology & Computing Computing Internet Internet of Things - Technology & Computing Computing Internet IT and Internet Support - Technology & Computing Computing Internet Search - Technology & Computing Computing Internet Social Networking - Technology & Computing Computing Internet Web Design and HTML - Technology & Computing Computing Laptops - Technology & Computing Computing Programming Languages - Technology & Computing Consumer Electronics - Technology & Computing Consumer Electronics Cameras and Camcorders -" -174D6FDF73627D7B2258D7F351C3D0156C06D1DC_10,174D6FDF73627D7B2258D7F351C3D0156C06D1DC," Technology & Computing Consumer Electronics Home Entertainment Systems - Technology & Computing Consumer Electronics Smartphones - Technology & Computing Consumer Electronics Tablets and E-readers - Technology & Computing Consumer Electronics Wearable Technology - Technology & Computing Robotics - Technology & Computing Virtual Reality - Television - Television Animation TV - Television Soap Opera TV - Television Special Interest TV - Television Sports TV - Television Children's TV - Television Comedy TV - Television Drama TV - Television Factual TV - Television Holiday TV - Television Music TV - Television Reality TV - Television Science Fiction TV - Travel - Travel Travel Accessories - Travel Travel Locations - Travel Travel Locations Africa Travel - Travel Travel Locations Asia Travel - Travel Travel Locations Australia and Oceania Travel - Travel Travel Locations Europe Travel - Travel Travel Locations North America Travel - Travel Travel Locations Polar Travel - Travel Travel Locations South America Travel - Travel Travel Preparation and Advice - Travel Travel Type - Travel Travel Type Adventure Travel - Travel Travel Type Family Travel - Travel Travel Type Honeymoons and Getaways - Travel Travel Type Hotels and Motels - Travel Travel Type Rail Travel - Travel Travel Type Road Trips - Travel Travel Type Spas - Travel Travel Type Air Travel - Travel Travel Type Beach Travel - Travel Travel Type Bed & Breakfasts - Travel Travel Type Budget Travel - Travel Travel Type Business Travel - Travel Travel Type Camping - Travel Travel Type Cruises - Travel Travel Type Day Trips - Video Gaming - Video Gaming Console Games - Video Gaming eSports - Video Gaming Mobile Games - Video Gaming PC Games - Video Gaming Video Game Genres - Video Gaming Video Game Genres Action Video Games - Video Gaming Video Game Genres Role-Playing Video Games - Video Gaming Video Game Genres Simulation Video Games - Video Gaming Video Game Genres Sports Video Games - Video Gaming Video Game Genres Strategy Video Games - Video Gaming Video Game Genres Action-Adventure Video Games - Video Gaming Video Game Genres Adventure Video Games - Video Gaming Video Game Genres Casual Games - Video Gaming Video Game Genres Educational Video Games - Video Gaming Video Game Genres Exercise and Fitness Video Games - Video Gaming Video Game Genres MMOs -" -D92A34A349CEE727B017AF7D40B880B232220959_0,D92A34A349CEE727B017AF7D40B880B232220959," Watson Natural Language Processing library usage samples - -The sample notebooks demonstrate how to use the different Watson Natural Language Processing blocks and how to train your own models. - -" -D92A34A349CEE727B017AF7D40B880B232220959_1,D92A34A349CEE727B017AF7D40B880B232220959," Sample project and notebooks - -To help you get started with the Watson Natural Language Processing library, you can download a sample project and notebooks from the Samples. - -You can access the Samples by selecting Samples from the Cloud Pak for Data navigation menu. - -" -D92A34A349CEE727B017AF7D40B880B232220959_2,D92A34A349CEE727B017AF7D40B880B232220959,"Sample notebooks - - - -* [Financial complaint analysis](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/39047aede50128e7cbc8ea19660fe1f6) - -This notebook shows you how to analyze financial customer complaints using Watson Natural Language Processing. It uses data from the Consumer Complaint Database published by the Consumer Financial Protection Bureau (CFPB). The notebook teaches you to use the Tone classification and Emotion classification models. -* [Car complaint analysis](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/4b8aa2c1ee67a6cd1172a1cf760f65f7) - -This notebook demonstrates how to analyze car complaints using Watson Natural Language Processing. It uses publicly available complaint records from car owners stored by the National Highway and Transit Association (NHTSA) of the US Department of Transportation. This notebook shows you how use syntax analysis to extract the most frequently used nouns, which typically depict the problems that review authors talk about and combine these results with structured data using association rule mining. -* [Complaint classification with Watson Natural Language Processing](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/636001e59902133a4a23fd89f011c232) - -This notebook demonstrates how to train different text classifiers using Watson Natural Language Processing. The classifiers predict the product group from the text of a customer complaint. This could be used, for example to route a complaint to the appropriate staff member. The data that is used in this notebook is taken from the Consumer Complaint Database that is published by the Consumer Financial Protection Bureau (CFPB), a U.S. government agency and is publicly available. You will learn how to train a custom CNN model and a VotingEnsemble model and evaluate their quality. -* [Entity extraction on Financial Complaints with Watson Natural Language Processing](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/636001e59902133a4a23fd89f0112100) - -" -D92A34A349CEE727B017AF7D40B880B232220959_3,D92A34A349CEE727B017AF7D40B880B232220959,"This notebook demonstrates how to extract named entities from financial customer complaints using Watson Natural Language Processing. It uses data from the Consumer Complaint Database published by the Consumer Financial Protection Bureau (CFPB). In the notebook you will learn how to do dictionary-based term extraction to train a custom extraction model based on given dictionaries and extract entities using the BERT or a transformer model. - - - -" -D92A34A349CEE727B017AF7D40B880B232220959_4,D92A34A349CEE727B017AF7D40B880B232220959,"Sample project - -If you don't want to download the sample notebooks to your project individually, you can download the entire sample project [Text Analysis with Watson Natural Language Processing](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/636001e59902133a4a23fd89f010e4cb) from the IBM watsonx Gallery. - -The sample project contains the sample notebooks listed in the previous section, including: - - - -* Analyzing hotel reviews using Watson Natural Language Processing - -This notebook shows you how to use syntax analysis to extract the most frequently used nouns from the hotel reviews, classify the sentiment of the reviews and use targets sentiment analysis. The data file that is used by this notebook is included in the project as a data asset. - - - -You can run all of the sample notebooks with the NLP + DO Runtime 23.1 on Python 3.10 XS environment except for the Analyzing hotel reviews using Watson Natural Language Processing notebook. To run this notebook, you need to create an environment template that is large enough to load the CPU-optimized models for sentiment and targets sentiment analysis. - -Parent topic:[Watson Natural Language Processing library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html) -" -715ABFB108ED8F6361D07762656DBD0443C57904_0,715ABFB108ED8F6361D07762656DBD0443C57904," Extracting targets sentiment with a custom transformer model - -You can train your own models for targets sentiment extraction based on the Slate IBM Foundation model. This pretrained model can be find-tuned for your use case by training it on your specific input data. - -The Slate IBM Foundation model is available only in Runtime 23.1. - -Note: Training transformer models is CPU and memory intensive. Depending on the size of your training data, the environment might not be large enough to complete the training. If you run into issues with the notebook kernel during training, create a custom notebook environment with a larger amount of CPU and memory, and use that to run your notebook. Use a GPU-based environment for training and also inference time, if it is available to you. See [Creating your own environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html). - - - -* [Input data format for training](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-target-sentiment.html?context=cdpaas&locale=eninput) -* [Loading the pretrained model resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-target-sentiment.html?context=cdpaas&locale=enload) -* [Training the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-target-sentiment.html?context=cdpaas&locale=entrain) -* [Applying the model on new data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-target-sentiment.html?context=cdpaas&locale=enapply) -" -715ABFB108ED8F6361D07762656DBD0443C57904_1,715ABFB108ED8F6361D07762656DBD0443C57904,"* [Storing and loading the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-target-sentiment.html?context=cdpaas&locale=enstore) - - - -" -715ABFB108ED8F6361D07762656DBD0443C57904_2,715ABFB108ED8F6361D07762656DBD0443C57904," Input data format for training - -You must provide a training and development data set to the training function. The development data is usually around 10% of the training data. Each training or development sample is represented as a JSON object. It must have a text and a target_mentions field. The text represents the training example text, and the target_mentions field is an array, which contains an entry for each target mention with its text, location, and sentiment. - -Consider using Watson Knowledge Studio to enable your domain subject matter experts to easily annotate text and create training data. - -The following is an example of an array with sample training data: - -[ -{ -""text"": ""Those waiters stare at you your entire meal, just waiting for you to put your fork down and they snatch the plate away in a second."", -""target_mentions"": -{ -""text"": ""waiters"", -""location"": { -""begin"": 6, -""end"": 13 -}, -""sentiment"": ""negative"" -} -] -} -] - -The training and development data sets are created as data streams from arrays of JSON objects. To create the data streams, you may use the utility method read_json_to_stream. It requires the syntax analysis model for the language of your input data. - -Sample code: - -import watson_nlp -from watson_nlp.toolkit.targeted_sentiment.training_data_reader import read_json_to_stream - -training_data_file = 'train_data.json' -dev_data_file = 'dev_data.json' - - Load the syntax analysis model for the language of your input data -syntax_model = watson_nlp.load('syntax_izumo_en_stock') - - Prepare train and dev data streams -train_stream = read_json_to_stream(json_path=training_data_file, syntax_model=syntax_model) -dev_stream = read_json_to_stream(json_path=dev_data_file, syntax_model=syntax_model) - -" -715ABFB108ED8F6361D07762656DBD0443C57904_3,715ABFB108ED8F6361D07762656DBD0443C57904," Loading the pretrained model resources - -The pretrained Slate IBM Foundation model needs to be loaded before passing it to the training algorithm. - -To load the model: - - Load the pretrained Slate IBM Foundation model -pretrained_model_resource = watson_nlp.load('pretrained-model_slate.153m.distilled_many_transformer_multilingual_uncased') - -" -715ABFB108ED8F6361D07762656DBD0443C57904_4,715ABFB108ED8F6361D07762656DBD0443C57904," Training the model - -For all options that are available for configuring sentiment transformer training, enter: - -help(watson_nlp.blocks.targeted_sentiment.SequenceTransformerTSA.train) - -The train method will create a new targets sentiment block model. - -The following is a sample call that uses the input data and pretrained model from the previous section (Training the model): - - Train the model -custom_tsa_model = watson_nlp.blocks.targeted_sentiment.SequenceTransformerTSA.train( -train_stream, -dev_stream, -pretrained_model_resource, -num_train_epochs=5 -) - -" -715ABFB108ED8F6361D07762656DBD0443C57904_5,715ABFB108ED8F6361D07762656DBD0443C57904," Applying the model on new data - -After you train the model on a data set, apply the model on new data by using the run() method, as you would use on any of the existing pre-trained blocks. Because the created custom model is a block model, you need to run syntax analysis on the input text and pass the results to the run() methods. - -Sample code: - -input_text = 'new input text' - - Run syntax analysis first -syntax_model = watson_nlp.load('syntax_izumo_en_stock') -syntax_analysis = syntax_model.run(input_text, parsers=('token',)) - - Apply the new model on top of the syntax predictions -tsa_predictions = custom_tsa_model.run(syntax_analysis) - -" -715ABFB108ED8F6361D07762656DBD0443C57904_6,715ABFB108ED8F6361D07762656DBD0443C57904," Storing and loading the model - -The custom targets sentiment model can be stored as any other model as described in ""Loading and storing models"", using ibm_watson_studio_lib. - -To load the custom targets sentiment model, additional steps are required: - - - -1. Ensure that you have an access token on the Access control page on the Manage tab of your project. Only project admins can create access tokens. The access token can have Viewer or Editor access permissions. Only editors can inject the token into a notebook. -2. Add the project token to the notebook by clicking More > Insert project token from the notebook action bar. Then run the cell. - -By running the inserted hidden code cell, a wslib object is created that you can use for functions in the ibm-watson-studio-lib library. For information on the available ibm-watson-studio-lib functions, see [Using ibm-watson-studio-lib for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html). -3. Download and extract the model to your local runtime environment: - -import zipfile -model_zip = 'custom_TSA_model_file' -model_folder = 'custom_TSA' -wslib.download_file('custom_TSA_model', file_name=model_zip) - -with zipfile.ZipFile(model_zip, 'r') as zip_ref: -zip_ref.extractall(model_folder) -4. Load the model from the extracted folder: - -custom_TSA_model = watson_nlp.load(model_folder) - - - -Parent topic:[Creating your own models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-create-model_cloud.html) -" -F7E8527824E15B4194A3FD12CEEE049F910016DB_0,F7E8527824E15B4194A3FD12CEEE049F910016DB," Watson Natural Language Processing library - -The Watson Natural Language Processing library provides natural language processing functions for syntax analysis and pre-trained models for a wide variety of text processing tasks, such as sentiment analysis, keyword extraction, and classification. The Watson Natural Language Processing library is available for Python only. - -With Watson Natural Language Processing, you can turn unstructured data into structured data, making the data easier to understand and transferable, in particular if you are working with a mix of unstructured and structured data. Examples of such data are call center records, customer complaints, social media posts, or problem reports. The unstructured data is often part of a larger data record that includes columns with structured data. Extracting meaning and structure from the unstructured data and combining this information with the data in the columns of structured data: - - - -* Gives you a deeper understanding of the input data -* Can help you to make better decisions. - - - -Watson Natural Language Processing provides pre-trained models in over 20 languages. They are curated by a dedicated team of experts, and evaluated for quality on each specific language. These pre-trained models can be used in production environments without you having to worry about license or intellectual property infringements. - -Although you can create your own models, the easiest way to get started with Watson Natural Language Processing is to run the pre-trained models on unstructured text to perform language processing tasks. - -Some examples of language processing tasks available in Watson Natural Language Processing pre-trained models: - - - -* Language detection: detect the language of the input text -* Syntax: tokenization, lemmatization, part of speech tagging, and dependency parsing -* Entity extraction: find mentions of entities (like person, organization, or date) -* Noun phrase extraction: extract noun phrases from the input text -* Text classification: analyze text and then assign a set of pre-defined tags or categories based on its content -* Sentiment classification: is the input document positive, negative or neutral? -* Tone classification: classify the tone in the input document (like excited, frustrated, or sad) -* Emotion classification: classify the emotion of the input document (like anger or disgust) -" -F7E8527824E15B4194A3FD12CEEE049F910016DB_1,F7E8527824E15B4194A3FD12CEEE049F910016DB,"* Keywords extraction: extract noun phrases that are relevant in the input text -* Concepts: find concepts from DBPedia in the input text -* Relations: detect relations between two entities -* Hierarchical categories: assign individual nodes within a hierarchical taxonomy to the input document -* Embeddings: map individual words or larger text snippets into a vector space - - - -Watson Natural Language Processing encapsulates natural language functionality through blocks and workflows. Blocks and workflows support functions to load, run, train, and save a model. - -For more information, refer to [Working with pre-trained models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-pretrained.html). - -Some examples of how you can use the Watson Natural Language Processing library: - -Running syntax analysis on a text snippet: - -import watson_nlp - - Load the syntax model for English -syntax_model = watson_nlp.load('syntax_izumo_en_stock') - - Run the syntax model and print the result -syntax_prediction = syntax_model.run('Welcome to IBM!') -print(syntax_prediction) - -Extracting entities from a text snippet: - -import watson_nlp -entities_workflow = watson_nlp.load('entity-mentions_transformer-workflow_multilingual_slate.153m.distilled') -entities = entities_workflow.run('IBM's CEO Arvind Krishna is based in the US', language_code=""en"") -print(entities.get_mention_pairs()) - -For examples of how to use the Watson Natural Language Processing library, refer to [Watson Natural Language Processing library usage samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-samples.html). - -" -F7E8527824E15B4194A3FD12CEEE049F910016DB_2,F7E8527824E15B4194A3FD12CEEE049F910016DB," Using Watson Natural Language Processing in a notebook - -You can run your Python notebooks that use the Watson Natural Language Processing library in any of the environments that listed here. The GPU environment templates include the Watson Natural Language Processing library. - -DO + NLP: Indicates that the environment template includes both the CPLEX and the DOcplex libraries to model and solve decision optimization problems and the Watson Natural Language Processing library. - - : Indicates that the environment template requires the Watson Studio Professional plan. See [Offering plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html). - - - -Environment templates that include the Watson Natural Language Processing library - - Name Hardware configuration CUH rate per hour - - NLP + DO Runtime 23.1 on Python 3.10 XS 2vCPU and 8 GB RAM 6 - DO + NLP Runtime 22.2 on Python 3.10 XS 2 vCPU and 8 GB RAM 6 - GPU V100 Runtime 23.1 on Python 3.10 40 vCPU + 172 GB + 1 NVIDIA® V100 (1 GPU) 68 - GPU 2xV100 Runtime 23.1 on Python 3.10 80 vCPU + 344 GB + 2 NVIDIA® V100 (2 GPU) 136 - GPU V100 Runtime 22.2 on Python 3.10 40 vCPU + 172 GB + 1 NVIDIA® V100 (1 GPU) 68 - GPU 2xV100 Runtime 22.2 on Python 3.10 80 vCPU + 344 GB + 2 NVIDIA® V100 (2 GPU) 136 - - - -Normally these environments are sufficient to run notebooks that use prebuilt models. If you need a larger environment, for example to train your own models, you can create a custom template that includes the Watson Natural Language Processing library. Refer to [Creating your own environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html). - - - -* Create a custom template without GPU by selecting the engine type Default, the hardware configuration size that you need, and choosing NLP + DO Runtime 23.1 on Python 3.10 or DO + NLP Runtime 22.2 on Python 3.10 as the software version. -" -F7E8527824E15B4194A3FD12CEEE049F910016DB_3,F7E8527824E15B4194A3FD12CEEE049F910016DB,"* Create a custom template with GPU by selecting the engine type GPU, the hardware configuration size that you need, and choosing GPU Runtime 23.1 on Python 3.10 or GPU Runtime 22.2 on Python 3.10 as the software version. - - - -" -F7E8527824E15B4194A3FD12CEEE049F910016DB_4,F7E8527824E15B4194A3FD12CEEE049F910016DB," Learn more - - - -* [Creating your own environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html) - - - -Parent topic:[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html) -" -0ECEAC44DA213D067B5B5EA66694E6283457A441_0,0ECEAC44DA213D067B5B5EA66694E6283457A441," ibm-watson-studio-lib for Python - -The ibm-watson-studio-lib library for Python provides access to assets. It can be used in notebooks that are created in the notebook editor. ibm-watson-studio-lib provides support for working with data assets and connections, as well as browsing functionality for all other asset types. - -There are two kinds of data assets: - - - -* Stored data assets refer to files in the storage associated with the current project. The library can load and save these files. For data larger than one megabyte, this is not recommended. The library requires that the data is kept in memory in its entirety, which might be inefficient when processing huge data sets. -* Connected data assets represent data that must be accessed through a connection. Using the library, you can retrieve the properties (metadata) of the connected data asset and its connection. The functions do not return the data of a connected data asset. You can either use the code that is generated for you when you click Read data on the Code snippets pane to access the data or you must write your own code. - - - -Note: The ibm-watson-studio-lib functions do not encode or decode data when saving data to or getting data from a file. Additionally, the ibm-watson-studio-lib functions can't be used to access connected folder assets (files on a path to the project storage). - -" -0ECEAC44DA213D067B5B5EA66694E6283457A441_1,0ECEAC44DA213D067B5B5EA66694E6283457A441," Setting up the ibm-watson-studio-lib library - -The ibm-watson-studio-lib library for Python is pre-installed and can be imported directly in a notebook in the notebook editor. To use the ibm-watson-studio-lib library in your notebook, you need the ID of the project and the project token. - -To insert the project token to your notebook: - - - -1. Click the More icon on your notebook toolbar and then click Insert project token. - -If a project token exists, a cell is added to your notebook with the following information: - -from ibm_watson_studio_lib import access_project_or_space -wslib = access_project_or_space({""token"":""""}) - - is the value of the project token. - -If you are told in a message that no project token exists, click the link in the message to be redirected to the project's Access Control page where you can create a project token. You must be eligible to create a project token. For details, see [Manually adding the project token](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/token.html). - -To create a project token: - - - -1. From the Manage tab, select the Access Control page, and click New access token under Access tokens. -2. Enter a name, select Editor role for the project, and create a token. -3. Go back to your notebook, click the More icon on the notebook toolbar and then click Insert project token. - - - - - -" -0ECEAC44DA213D067B5B5EA66694E6283457A441_2,0ECEAC44DA213D067B5B5EA66694E6283457A441," Helper functions - -You can get information about the supported functions in the ibm-watson-studio-lib library programmatically by using help(wslib), or for an individual function by using help(wslib., for example help(wslib.get_connection). - -You can use the helper function wslib.show(...) for formatted printing of Python dictionaries and lists of dictionaries, which are the common result output type of the ibm-watson-studio-lib functions. - -" -0ECEAC44DA213D067B5B5EA66694E6283457A441_3,0ECEAC44DA213D067B5B5EA66694E6283457A441," The ibm-watson-studio-lib functions - -The ibm-watson-studio-lib library exposes a set of functions that are grouped in the following way: - - - -* [Get project information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=enget-infos) -* [Get authentication token](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=enget-auth-token) -* [Fetch data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=enfetch-data) -* [Save data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=ensave-data) -* [Get connection information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=enget-conn-info) -* [Get connected data information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=enget-conn-data-info) -* [Access assets by ID instead of name](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=enaccess-by-id) -" -0ECEAC44DA213D067B5B5EA66694E6283457A441_4,0ECEAC44DA213D067B5B5EA66694E6283457A441,"* [Access project storage directly](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=endirect-proj-storage) -* [Spark support](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=enspark-support) -* [Browse project assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=enbrowse-assets) - - - -" -0ECEAC44DA213D067B5B5EA66694E6283457A441_5,0ECEAC44DA213D067B5B5EA66694E6283457A441," Get project information - -While developing code, you might not know the exact names of data assets or connections. The following functions provide lists of assets, from which you can pick the relevant ones. In all examples, you can use wslib.show(assets) to pretty-print the list. The index of each item is printed in front of the item. - - - -* list_connections() - -This function returns a list of the connections. The list of returned connections is not sorted by any criterion and can change when you call the function again. You can pass a dictionary item instead of a name to the get_connection function. - -For example: - - Import the lib -from ibm_watson_studio_lib import access_project_or_space -wslib = access_project_or_space({""token"":""""}) - -assets = wslib.list_connections() -wslib.show(assets) -connprops = wslib.get_connection(assets[0]) -wslib.show(connprops) -* list_connected_data() - -This function returns the connected data assets. The list of returned connected data assets is not sorted by any criterion and can change when you call the function again. You can pass a dictionary item instead of a name to the get_connected_data function. -* list_stored_data() - -This function returns a list of the stored data assets (data files). The list of returned data assets is not sorted by any criterion and can change when you call the function again. You can pass a dictionary item instead of a name to the load_data and save_datafunctions. - -Note: A heuristic is applied to distinguish between connected data assets and stored data assets. However, there may be cases where a data asset of the wrong kind appears in the returned lists. -* wslib.here - -By using this entry point, you can retrieve metadata about the project that the lib is working with. The entry point wslib.here provides the following functions: - - - -* get_name() - -This function returns the name of the project. -* get_description() - -This function returns the description of the project. -* get_ID() - -" -0ECEAC44DA213D067B5B5EA66694E6283457A441_6,0ECEAC44DA213D067B5B5EA66694E6283457A441,"This function returns the ID of the project. -* get_storage() - -This function returns storage information for the project. - - - - - -" -0ECEAC44DA213D067B5B5EA66694E6283457A441_7,0ECEAC44DA213D067B5B5EA66694E6283457A441," Get authentication token - -Some tasks require an authentication token. For example, if you want to run your own requests against the [Watson Data API](https://cloud.ibm.com/apidocs/watson-data-api-cpd), you need an authentication token. - -You can use the following function to get the bearer token: - - - -* get_current_token() - - - -For example: - -from ibm_watson_studio_lib import access_project_or_space -wslib = access_project_or_space({""token"":""""}) -token = wslib.auth.get_current_token() - -This function returns the bearer token that is currently used by the ibm-watson-studio-lib library. - -" -0ECEAC44DA213D067B5B5EA66694E6283457A441_8,0ECEAC44DA213D067B5B5EA66694E6283457A441," Fetch data - -You can use the following functions to fetch data from a stored data asset (a file) in your project. - - - -* load_data(asset_name_or_item, attachment_type_or_item=None) - -This function loads the data of a stored data asset into a BytesIO buffer. The function is not recommended for very large files. - -The function takes the following parameters: - - - -* asset_name_or_item: (Required) Either a string with the name of a stored data asset or an item like those returned by list_stored_data(). -* attachment_type_or_item: (Optional) Attachment type to load. A data asset can have more than one attachment with data. Without this parameter, the default attachment type, namely data_asset is loaded. Specify this parameter if the attachment type is not data_asset. For example, if a plain text data asset has an attached profile from Natural Language Analysis, this can be loaded as attachment type data_profile_nlu. - -Here is an example that shows you how to load the data of a data asset: - - - - - - - Import the lib -from ibm_watson_studio_lib import access_project_or_space -wslib = access_project_or_space({""token"":""""}) - - Fetch the data from a file -my_file = wslib.load_data(""MyFile.csv"") - - Read the CSV data file into a pandas DataFrame -my_file.seek(0) -import pandas as pd -pd.read_csv(my_file, nrows=10) - - - - -* download_file(asset_name_or_item, file_name=None, attachment_type_or_item=None) - -This function downloads the data of a stored data asset and stores it in the specified file in the file system of your runtime. The file is overwritten if it already exists. - -The function takes the following parameters: - - - -" -0ECEAC44DA213D067B5B5EA66694E6283457A441_9,0ECEAC44DA213D067B5B5EA66694E6283457A441,"* asset_name_or_item: (Required) Either a string with the name of a stored data asset or an item like those returned by list_stored_data(). -* file_name: (Optional) The name of the file that the downloaded data is stored to. It defaults to the asset's attachment name. -* attachment_type_or_item: (Optional) The attachment type to download. A data asset can have more than one attachment with data. Without this parameter, the default attachment type, namely data_asset is downloaded. Specify this parameter if the attachment type is not data_asset. For example, if a plain text data asset has an attached profile from Natural Language Analysis, this can be downlaoded loaded as attachment type data_profile_nlu. - -Here is an example that shows you how to you can use download_file to make your custom Python script available in your notebook: - - - - - - - Import the lib -from ibm_watson_studio_lib import access_project_or_space -wslib = access_project_or_space({""token"":""""}) - - Let's assume you have a Python script ""helpers.py"" with helper functions on your local machine. - Upload the script to your project using the Data Panel on the right of the opened notebook. - - Download the script to the file system of your runtime -wslib.download_file(""helpers.py"") - - import the required functions to use them in your notebook -from helpers import my_func -my_func() - - -" -0ECEAC44DA213D067B5B5EA66694E6283457A441_10,0ECEAC44DA213D067B5B5EA66694E6283457A441," Save data - -The functions to save data in your project storage do multiple things: - - - -* Store the data in project storage -* Add the data as a data asset (by creating an asset or overwriting an existing asset) to your project so you can see the data in the data assets list in your project. -* Associate the asset with the file in the storage. - - - -You can use the following functions to save data: - - - -* save_data(asset_name_or_item, data, overwrite=None, mime_type=None, file_name=None) - -This function saves data in memory to the project storage. - -The function takes the following parameters: - - - -* asset_name_or_item: (Required) The name of the created asset or list item that is returned by list_stored_data(). You can use the item if you like to overwrite an existing file. -* data: (Required) The data to upload. This can be any object of type bytes-like-object, for example a byte buffer. -* overwrite: (Optional) Overwrites the data of a stored data asset if it already exists. By default, this is set to false. If an asset item is passed instead of a name, the behavior is to overwrite the asset. -* mime_type: (Optional) The MIME type for the created asset. By default the MIME type is determined from the asset name suffix. If you use asset names without a suffix, specify the MIME type here. For example mime_type=application/text for plain text data. This parameter is ignored when overwriting an asset. -* file_name: (Optional) The file name to be used in the project storage. The data is saved in the storage associated with the project. When creating a new asset, the file name is derived from the asset name, but might be different. If you want to access the file directly, you can specify a file name. This parameter is ignored when overwriting an asset. - -Here is an example that shows you how to save data to a file: - - - - - - - Import the lib -from ibm_watson_studio_lib import access_project_or_space -" -0ECEAC44DA213D067B5B5EA66694E6283457A441_11,0ECEAC44DA213D067B5B5EA66694E6283457A441,"wslib = access_project_or_space({""token"":""""}) - - let's assume you have the pandas DataFrame pandas_df which contains the data - you want to save as a csv file -wslib.save_data(""my_asset_name.csv"", pandas_df.to_csv(index=False).encode()) - - the function returns a dict which contains the asset_name, asset_id, file_name and additional information upon successful saving of the data - - - - -* upload_file(file_path, asset_name=None, file_name=None, overwrite=False, mime_type=None) This function saves data in the file system in the runtime to a file associated with your project. The function takes the following parameters: - - - -* file_path: (Required) The path to the file in the file system. -* asset_name: (Optional) The name of the data asset that is created. It defaults to the name of the file to be uploaded. -* file_name: (Optional) The name of the file that is created in the storage associated with the project. It defaults to the name of the file to be uploaded. -* overwrite: (Optional) Overwrites an existing file in storage. Defaults to false. -* mime_type: (Optional) The MIME type for the created asset. By default the MIME type is determined from the asset name suffix. If you use asset names without a suffix, specify the MIME type here. For example mime_type='application/text' for plain text data. This parameter is ignored when overwriting an asset. - -Here is an example that shows you how you can upload a file to the project: - - - - - - - Import the lib -from ibm_watson_studio_lib import access_project_or_space -wslib = access_project_or_space({""token"":""""}) - - Let's assume you have downloaded a file and want to save it - in your project. -import urllib.request -" -0ECEAC44DA213D067B5B5EA66694E6283457A441_12,0ECEAC44DA213D067B5B5EA66694E6283457A441,"urllib.request.urlretrieve(""https://some/url/data_file.csv"", ""data_file.csv"") -wslib.upload_file(""data_file.csv"") - - The function returns a dictionary which contains the asset_name, asset_id, file_name and additional information upon successful saving of the data. - - -" -0ECEAC44DA213D067B5B5EA66694E6283457A441_13,0ECEAC44DA213D067B5B5EA66694E6283457A441," Get connection information - -You can use the following function to access the connection metadata of a given connection. - - - -* get_connection(name_or_item) - -This function returns the properties (metadata) of a connection which you can use to fetch data from the connection data source. Use wslib.show(connprops) to view the properties. The special key ""."" in the returned dictionary provides information about the connection asset. - -The function takes the following required parameter: - - - -* name_or_item: Either a string with the name of a connection or an item like those returned by list_connections(). - -Note that when you work with notebooks, you can click Read data on the Code snippets pane to generate code to load data from a connection into a pandas DataFrame for example. - - - - - -" -0ECEAC44DA213D067B5B5EA66694E6283457A441_14,0ECEAC44DA213D067B5B5EA66694E6283457A441," Get connected data information - -You can use the following function to access the metadata of a connected data asset. - - - -* get_connected_data(name_or_item) - -This function returns the properties of a connected data asset, including the properties of the underlying connection. Use wslib.show() to view the properties. The special key ""."" in the returned dictionary provides information about the data and the connection assets. - -The function takes the following required parameter: - - - -* name_or_item: Either a string with the name of a connected data asset or an item like those returned by list_connected_data(). - -Note that when you work with notebooks, you can click Read data on the Code snippets pane to generate code to load data from a connected data asset into a pandas DataFrame for example. - - - - - -" -0ECEAC44DA213D067B5B5EA66694E6283457A441_15,0ECEAC44DA213D067B5B5EA66694E6283457A441," Access asset by ID instead of name - -You should preferably always access data assets and connections by a unique name. Asset names are not necessarily always unique and the ibm-watson-studio-lib functions will raise an exception when a name is ambiguous. You can rename data assets in the UI to resolve the conflict. - -Accessing assets by a unique ID is possible but is discouraged as IDs are valid only in the current project and will break code when transferred to a different project. This can happen for example, when projects are exported and re-imported. You can get the ID of a connection, connected or stored data asset by using the corresponding list function, for example list_connections(). - -The entry point wslib.by_id provides the following functions: - - - -* get_connection(asset_id) - -This function accesses a connection by the connection asset ID. -* get_connected_data(asset_id) - -This function accesses a connected data asset by the connected data asset ID. -* load_data(asset_id, attachment_type_or_item=None) - -This function loads the data of a stored data asset by passing the asset ID. See [load_data()](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=enfetch-data) for a description of the other parameters you can pass. -* save_data(asset_id, data, overwrite=None, mime_type=None, file_name=None) - -This function saves data to a stored data asset by passing the asset ID. This implies overwrite=True. See [save_data()](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=ensave-data) for a description of the other parameters you can pass. -* download_file(asset_id, file_name=None, attachment_type_or_item=None) - -" -0ECEAC44DA213D067B5B5EA66694E6283457A441_16,0ECEAC44DA213D067B5B5EA66694E6283457A441,"This function downloads the data of a stored data asset by passing the asset ID. See [download_file()](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=enfetch-data) for a description of the other parameters you can pass. - - - -" -0ECEAC44DA213D067B5B5EA66694E6283457A441_17,0ECEAC44DA213D067B5B5EA66694E6283457A441," Access project storage directly - -You can fetch data from project storage and store data in project storage without synchronizing the project assets using the entry point wslib.storage. - -The entry point wslib.storage provides the following functions: - - - -* fetch_data(filename) - -This function returns the data in a file as a BytesIO buffer. The file does not need to be registered as a data asset. - -The function takes the following required parameter: - - - -* filename: The name of the file in the projectstorage. - - - -* store_data(filename, data, overwrite=False) - -This function saves data in memory to storage, but does not create a new data asset. The function returns a dictionary which contains the file name, file path and additional information. Use wslib.show() to print the information. - -The function takes the following parameters: - - - -* filename: (Required) The name of the file in the project storage. -* data: (Required) The data to save as a bytes-like object. -* overwrite: (Optional) Overwrites the data of a file in storage if it already exists. By default, this is set to false. - - - -* download_file(storage_filename, local_filename=None) - -This function downloads the data in a file in storage and stores it in the specified local file. The local file is overwritten if it already existed. - -The function takes the following parameters: - - - -* storage_filename: (Required) The name of the file in storage to download. -* local_filename: (Optional) The name of the file in the local file system of your runtime to downloaded the file to. Omit this parameter to use the storage file name. - - - -* register_asset(storage_path, asset_name=None, mime_type=None) - -This function registers the file in storage as a data asset in your project. This operation fails if a data asset with the same name already exists. - - - -You can use this function if you have very large files that you cannot upload via save_data(). You can upload large files directly to the IBM Cloud Object Storage bucket of your project, for example via the UI, and then register them as data assets using register_asset(). - -" -0ECEAC44DA213D067B5B5EA66694E6283457A441_18,0ECEAC44DA213D067B5B5EA66694E6283457A441,"The function takes the following parameters: - - - -* storage_path: (Required) The path of the file in storage. -* asset_name: (Optional) The name of the created asset. It defaults to the file name. -* mime_type: (Optional) The MIME type for the created asset. By default the MIME type is determined from the asset name suffix. Use this parameter to specify a MIME type if your file name does not have a file extension or if you want to set a different MIME type. - -Note: You can register a file several times as a different data asset. Deleting one of those assets in the project also deletes the file in storage, which means that other asset references to the file might be broken. - - - -" -0ECEAC44DA213D067B5B5EA66694E6283457A441_19,0ECEAC44DA213D067B5B5EA66694E6283457A441," Spark support - -The entry point wslib.spark provides functions to access files in storage with Spark. To get help information about the available functions, use help(wslib.spark.API). - -The entry point wslib.spark provides the following functions: - - - -* provide_spark_context(sc) - -Use this function to enable Spark support. - -The function takes the following required parameter: - - - -* sc: The SparkContext. It is provided in the notebook runtime. - -The following example shows you how to set up Spark support: - -from ibm_watson_studio_lib import access_project_or_space -wslib = access_project_or_space({""token"":""""}) -wslib.spark.provide_spark_context(sc) - - - -* get_data_url(asset_name) - -This function returns a URL to access a file in storage from Spark via Hadoop. - -The function takes the following required parameter: - - - -* asset_name: The name of the asset. - - - -* storage.get_data_url(file_name) - -This function returns a URL to access a file in storage from Spark via Hadoop. The function expects the file name and not the asset name. - -The function takes the following required parameter: - - - -* file_name: The name of a file in the project storage. - - - - - -" -0ECEAC44DA213D067B5B5EA66694E6283457A441_20,0ECEAC44DA213D067B5B5EA66694E6283457A441," Browse project assets - -The entry point wslib.assets provides generic, read-only access to assets of any type. For selected asset types, there are dedicated functions that provide additional data. To get help on the available functions, use help(wslib.assets.API). - -The following naming conventions apply: - - - -* Functions named list_ return a list of Python dictionaries. Each dictionary represents one asset and includes a small set of properties (metadata) that identifies the asset. -* Functions named get_ return a single Python dictionary with the properties for the asset. - - - -To pretty-print a dictionary or list of dictionaries, use wslib.show(). - -The functions expect either the name of an asset, or an item from a list as the parameter. By default, the functions return only a subset of the available asset properties. By setting the parameter raw=True, you can get the full set of asset properties. - -The entry point wslib.assets provides the following functions: - - - -* list_assets(asset_type, name=None, query=None, selector=None, raw=False) - -This function lists all assets for the given type with respect to the given constraints. - -The function takes the following parameters: - - - -* asset_type: (Required) The type of the assets to list, for example data_asset. See list_asset_types() for a list of the available asset types. Use asset type asset for the list of all available assets in the project. -* name: (Optional) The name of the asset to list. Use this parameter if more than one asset with the same name exists. You can only specify either name and query. -* query: (Optional) A query string that is passed to the Watson Data API to search for assets. You can only specify either name and query. -* selector: (Optional) A custom filter function on the candidate asset dictionary items. If the selector function returns True, the asset is included in the returned asset list. -" -0ECEAC44DA213D067B5B5EA66694E6283457A441_21,0ECEAC44DA213D067B5B5EA66694E6283457A441,"* raw: (Optional) Returns all of the available metadata. By default, the parameter is set to False and only a subset of the properties is returned. - - - -Examples of using the list_assets function: - - - - Import the lib -from ibm_watson_studio_lib import access_project_or_space -wslib = access_project_or_space({""token"":""""}) - - List all assets in the project -all_assets = wslib.assets.list_assets(""asset"") -wslib.show(all_assets) - - List all data assets with name 'MyFile.csv' -assets_by_name = wslib.assets.list_assets(""data_asset"", name=""MyFile.csv"") - - List all data assets whose name starts with ""MyF"" -assets_by_query = wslib.assets.list_assets(""data_asset"", query=""asset.name:(MyF)"") - - List all data assets which are larger than 1MB -sizeFilter = lambda x: x['metadata'] > 1000000 -large_assets = wslib.assets.list_assets(""data_asset"", selector=sizeFilter, raw=True) - - List all notebooks -notebooks = wslib.assets.list_assets(""notebook"") - - - -* list_asset_types(raw=False) - -This function lists all available asset types. - -The function can take the following parameter: - - - -* raw: (Optional) Returns the full set of metadata. By default, the parameter is False and only a subset of the properties is returned. - - - -* list_datasource_types(raw=False) - -This function lists all available data source types. - -The function can take the following parameter: - - - -* raw: (Optional) Returns the full set of metadata. By default, the parameter is False and only a subset of the properties is returned. - - - -* get_asset(name_or_item, asset_type=None, raw=False) - -The function returns the metadata of an asset. - -The function takes the following parameters: - - - -" -0ECEAC44DA213D067B5B5EA66694E6283457A441_22,0ECEAC44DA213D067B5B5EA66694E6283457A441,"* name_or_item: (Required) The name of the asset or an item like those returned by list_assets() -* asset_type: (Optional) The type of the asset. If the parameter name_or_item contains a string for the name of the asset, setting asset_type is required. -* raw: (Optional) Returns the full set of metadata. By default, the parameter is False and only a subset of the properties is returned. - -Example of using the list_assets and get_asset functions: - -notebooks = wslib.assets.list_assets('notebook') -wslib.show(notebooks) - -notebook = wslib.assets.get_asset(notebooks[0]) -wslib.show(notebook) - - - -* get_connection(name_or_item, with_datasourcetype=False, raw=False) - -This function returns the metadata of a connection. - -The function takes the following parameters: - - - -* name_or_item: (Required) The name of the connection or an item like those returned by list_connections() -* with_datasourcetype: (Optional) Returns additional information about the data source type of the connection. -* raw: (Optional) Returns the full set of metadata. By default, the parameter is False and only a subset of the properties is returned. - - - -* get_connected_data(name_or_item, with_datasourcetype=False, raw=False) - -This function returns the metadata of a connected data asset. - -The function takes the following parameters: - - - -* name_or_item: (Required) The name of the connected data asset or an item like those returned by list_connected_data() -* with_datasourcetype: (Optional) Returns additional information about the data source type of the associated connected data asset. -* raw: (Optional) Returns the full set of metadata. By default, the parameter is False and only a subset of the properties is returned. - - - -* get_stored_data(name_or_item, raw=False) - -This function returns the metadata of a stored data asset. - -" -0ECEAC44DA213D067B5B5EA66694E6283457A441_23,0ECEAC44DA213D067B5B5EA66694E6283457A441,"The function takes the following parameters: - - - -* name_or_item: (Required) The name of the stored data asset or an item like those returned by list_stored_data() -* raw: (Optional) Returns the full set of metadata. By default, the parameter is False and only a subset of the properties is returned. - - - -* list_attachments(name_or_item_or_asset, asset_type=None, raw=False) - -This function returns a list of the attachments of an asset. - -The function takes the following parameters: - - - -* name_or_item_or_asset: (Required) The name of the asset or an item like those returned by list_stored_data() or get_asset(). -* asset_type: (Optional) The type of the asset. It defaults to type data_asset. -* raw: (Optional) Returns the full set of metadata. By default, the parameter is False and only a subset of the properties is returned. - - - -Example of using the list_attachments function to read an attachment of a stored data asset: - -assets = wslib.list_stored_data() -wslib.show(assets) - -asset = assets[0] -attachments = wslib.assets.list_attachments(asset) -wslib.show(attachments) -buffer = wslib.load_data(asset, attachments[0]) - - - -Parent topic:[Using ibm-watson-studio-lib](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/using-ibm-ws-lib.html) -" -B019692A5844A9A72292A35B8953AA67836F8201_0,B019692A5844A9A72292A35B8953AA67836F8201," ibm-watson-studio-lib for R - -The ibm-watson-studio-lib library for R provides access to assets. It can be used in notebooks that are created in the notebook editor or in RStudio in a project. ibm-watson-studio-lib provides support for working with data assets and connections, as well as browsing functionality for all other asset types. - -There are two kinds of data assets: - - - -* Stored data assets refer to files in the storage associated with the current project. The library can load and save these files. For data larger than one megabyte, this is not recommended. The library requires that the data is kept in memory in its entirety, which might be inefficient when processing huge data sets. -* Connected data assets represent data that must be accessed through a connection. Using the library, you can retrieve the properties (metadata) of the connected data asset and its connection. The functions do not return the data of a connected data asset. You can either use the code that is generated for you when you click Read data on the Code snippets panel to access the data or you must write your own code. - - - -Note: The ibm-watson-studio-lib functions do not encode or decode data when saving data to or getting data from a file. Additionally, the ibm-watson-studio-lib functions can't be used to access connected folder assets (files on a path to the project storage). - -" -B019692A5844A9A72292A35B8953AA67836F8201_1,B019692A5844A9A72292A35B8953AA67836F8201," Setting up the ibm-watson-studio-lib library - -The ibm-watson-studio-lib library for R is pre-installed and can be imported directly in a notebook in the notebook editor. To use the ibm-watson-studio-lib library in your notebook, you need the ID of the project and the project token. - -To insert the project token to your notebook: - - - -1. Click the More icon on your notebook toolbar and then click Insert project token. - -If a project token exists, a cell is added to your notebook with the following information: - -library(ibmWatsonStudioLib) -wslib <- access_project_or_space(list(""token""="""")) - - is the value of the project token. - -If you are told in a message that no project token exists, click the link in the message to be redirected to the project's Access Control page where you can create a project token. You must be eligible to create a project token. For details, see [Manually adding the project token](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/token.html). - -To create a project token: - - - -1. From the Manage tab, select the Access Control page, and click New access token under Access tokens. -2. Enter a name, select Editor role for the project, and create a token. -3. Go back to your notebook, click the More icon on the notebook toolbar and then click Insert project token. - - - - - -" -B019692A5844A9A72292A35B8953AA67836F8201_2,B019692A5844A9A72292A35B8953AA67836F8201," The ibm-watson-studio-lib functions - -The ibm-watson-studio-lib library exposes a set of functions that are grouped in the following way: - - - -* [Get project information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=enget-infos) -* [Get authentication token](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=enget-auth-token) -* [Fetch data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=enfetch-data) -* [Save data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=ensave-data) -* [Get connection information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=enget-conn-info) -* [Get connected data information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=enget-conn-data-info) -* [Access assets by ID instead of name](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=enaccess-by-id) -" -B019692A5844A9A72292A35B8953AA67836F8201_3,B019692A5844A9A72292A35B8953AA67836F8201,"* [Access project storage directly](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=endirect-proj-storage) -* [Spark support](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=enspark-support) -* [Browse project assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=enbrowse-assets) - - - -" -B019692A5844A9A72292A35B8953AA67836F8201_4,B019692A5844A9A72292A35B8953AA67836F8201," Get project information - -While developing code, you might not know the exact names of data assets or connections. The following functions provide lists of assets, from which you can pick the relevant ones. In all examples, you can use wslib$show(assets) to pretty-print the list. The index of each item is printed in front of the item. - - - -* list_connections() - -This function returns a list of the connections. The list of returned connections is not sorted by any criterion and can change when you call the function again. You can pass a list item instead of a name to get_connection function. - - Import the lib -library(""ibmWatsonStudioLib"") -wslib <- access_project_or_space(list(""token""="""")) - -assets <- wslib$list_connections() -wslib$show(assets) -connprops <- wslib$get_connection(assets[0]) -* list_connected_data() - -This function returns the connected data assets. The list of returned connected data assets is not sorted by any criterion and can change when you call the function again. You can pass a list item instead of a name to the get_connected_data function. -* list_stored_data() - -This function returns a list of the stored data assets (data files). The list of returned data assets is not sorted by any criterion and can change when you call the function again. You can pass a list item instead of a name to the load_data and save_data functions. - -Note: A heuristic is applied to distinguish between connected data assets and stored data assets. However, there may be cases where a data asset of the wrong kind appears in the returned lists. -* wslib$here By using this entry point, you can retrieve metadata about the project that the lib is working with. The entry point wslib$here provides the following functions: - - - -* get_name() - -This function returns the name of the project. -* get_description() - -This function returns the description of the project. -* get_ID() - -This function returns the ID of the project. -* get_storage() - -This function returns storage information for the project. - - - - - -" -B019692A5844A9A72292A35B8953AA67836F8201_5,B019692A5844A9A72292A35B8953AA67836F8201," Get authentication token - -Some tasks require an authentication token. For example, if you want to run your own requests against the [Watson Data API](https://cloud.ibm.com/apidocs/watson-data-api-cpd), you need an authentication token. - -You can use the following function to get the bearer token: - - - -* get_current_token() - - - -For example: - -library(""ibmWatsonStudioLib"") -wslib <- access_project_or_space(list(""token""="""")) -token <- wslib$auth$get_current_token() - -This function returns the bearer token that is currently used by the ibm-watson-studio-lib library. - -" -B019692A5844A9A72292A35B8953AA67836F8201_6,B019692A5844A9A72292A35B8953AA67836F8201," Fetch data - -You can use the following functions to fetch data from a stored data asset (a file) in your project. - - - -* load_data(asset_name_or_item, attachment_type_or_item = NULL) - -This function loads the data of a stored data asset into a bytes buffer. The function is not recommended for very large files. - -The function takes the following parameters: - - - -* asset_name_or_item: (Required) Either a string with the name of a stored data asset or an item like those returned by list_stored_data(). -* attachment_type_or_item: (Optional) Attachment type to load. A data asset can have more than one attachment with data. Without this parameter, the default attachment type, namely data_asset is loaded. Specify this parameter if the attachment type is not data_asset. For example, if a plain text data asset has an attached profile from Natural Language Analysis, this can be loaded as attachment type data_profile_nlu. - -Here is an example that shows you how to load the data of a data asset: - - Import the lib -library(""ibmWatsonStudioLib"") -wslib <- access_project_or_space(list(""token""="""")) - - Fetch the data from a file -my_file <- wslib$load_data(""MyFile.csv"") - - Read the CSV data file into a data frame -df <- read.csv(text = rawToChar(my_file)) -head(df) - - - -* download_file(asset_name_or_item, file_name = NULL, attachment_type_or_item = NULL) - -This function downloads the data of a stored data asset and stores it in the specified file in the file system of your runtime. The file is overwritten if it already exists. - -The function takes the following parameters: - - - -* asset_name_or_item: (Required) Either a string with the name of a stored data asset or an item like those returned by list_stored_data(). -" -B019692A5844A9A72292A35B8953AA67836F8201_7,B019692A5844A9A72292A35B8953AA67836F8201,"* file_name: (Optional) The name of the file that the downloaded data is stored to. It defaults to the asset's attachment name. -* attachment_type_or_item: (Optional) The attachment type to download. A data asset can have more than one attachment with data. Without this parameter, the default attachment type, namely data_asset is downloaded. Specify this parameter if the attachment type is not data_asset. For example, if a plain text data asset has an attached profile from Natural Language Analysis, this can be downlaoded loaded as attachment type data_profile_nlu. - -Here is an example that shows you how to you can use download_file to make your custom R script available in your notebook: - - Import the lib -library(""ibmWatsonStudioLib"") -wslib <- access_project_or_space(list(""token""="""")) - - Let's assume you have a R script ""helpers.R"" with helper functions on your local machine. - Upload the script to your project using the Data Panel on the right. - - Download the script to the file system of your runtime -wslib$download_file(""helpers.R"") - - Source the script to use the contained functions, e.g. ‘my_func’, in your notebook. -source(""helpers.R"") -my_func() - - - - - -" -B019692A5844A9A72292A35B8953AA67836F8201_8,B019692A5844A9A72292A35B8953AA67836F8201," Save data - -The functions to store data in your project storage do multiple things: - - - -* Store the data in project storage -* Add the data as a data asset (by creating an asset or overwriting an existing asset) to your project so you can see the data in the data assets list in your project. -* Associate the asset with the file in the storage. - - - -You can use the following functions to save data: - - - -* save_data(asset_name_or_item, data, overwrite = NULL, mime_type = NULL, file_name = NULL) - -This function saves data in memory to the project storage. - -The function takes the following parameters: - - - -* asset_name_or_item: (Required) The name of the created asset or list item that is returned by list_stored_data(). You can use the item if you like to overwrite an existing file. -* data: (Required) The data to upload. The expected data type is raw. -* overwrite: (Optional) Overwrites the data of a stored data asset if it already exists. Defaults to FALSE. If an asset item is passed instead of a name, the behavior is to overwrite the asset. -* mime_type: (Optional) The MIME type for the created asset. By default the MIME type is determined from the asset name suffix. If you use asset names without a suffix, specify the MIME type here. For example mime_type=application/text for plain text data. This parameter is ignored when overwriting an asset. -* file_name: (Optional) The file name to be used in the project storage. The data is saved in the storage associated with the project. When creating a new asset, the file name is derived from the asset name, but might be different. If you want to access the file directly, you can specify a file name. This parameter is ignored when overwriting an asset. - -Here is an example that shows you how to save data to a file: - - Import the lib -library(""ibmWatsonStudioLib"") -wslib <- access_project_or_space(list(""token""="""")) - -" -B019692A5844A9A72292A35B8953AA67836F8201_9,B019692A5844A9A72292A35B8953AA67836F8201," let's assume you have a data frame df which contains the data - you want to save as a csv file -csv <- capture.output(write.csv(df, row.names=FALSE), type=""output"") -csv_raw <- charToRaw(paste0(csv, collapse='n')) -wslib$save_data(""my_asset_name.csv"", csv_raw) - - the function returns a list which contains the asset_name, asset_id, file_name and additional information upon successful saving of the data - - - -* upload_file(file_path, asset_name = NULL, file_name = NULL, overwrite = FALSE, mime_type = NULL) - -This function saves data in the file system in the runtime to a file associated with your project. - -The function takes the following parameters: - - - -* file_path: (Required) The path to the file in the file system. -* asset_name: (Optional) The name of the data asset that is created. It defaults to the name of the file to be uploaded. -* file_name: (Optional) The name of the file that is created in the storage associated with the project. It defaults to the name of the file to be uploaded. -* overwrite: (Optional) Overwrites an existing file in storage. Defaults to FALSE. -* mime_type: (Optional) The MIME type for the created asset. By default the MIME type is determined from the asset name suffix. If you use asset names without a suffix, specify the MIME type here. For example mime_type='application/text' for plain text data. This parameter is ignored when overwriting an asset. - -Here is an example that shows you how you can upload a file to the project: - - Import the lib -library(""ibmWatsonStudioLib"") -wslib <- access_project_or_space(list(""token""="""")) - - Let's assume you have downloaded a file and want to save it - in your project. -" -B019692A5844A9A72292A35B8953AA67836F8201_10,B019692A5844A9A72292A35B8953AA67836F8201,"download.file(""https://some/url/data_file.csv"", ""data_file.csv"") -wslib$upload_file(""data_file.csv"") - - The function returns a list which contains the asset_name, asset_id, file_name and additional information upon successful saving of the data. - - - - - -" -B019692A5844A9A72292A35B8953AA67836F8201_11,B019692A5844A9A72292A35B8953AA67836F8201," Get connection information - -You can use the following function to access the connection metadata of a given connection. - - - -* get_connection(name_or_item) - -This function returns the properties (metadata) of a connection which you can use to fetch data from the connection data source. Use wslib$show(connprops) to view the properties. The special key ""."" in the returned list item provides information about the connection asset. - -The function takes the following required parameter: - - - -* name_or_item: Either a string with the name of a connection or an item like those returned by list_connections(). - -Note that when you work with notebooks, you can click Read data on the Code snippets panel to generate code to load data from a connection into a pandas DataFrame for example. - - - - - -" -B019692A5844A9A72292A35B8953AA67836F8201_12,B019692A5844A9A72292A35B8953AA67836F8201," Get connected data information - -You can use the following function to access the metadata of a connected data asset. - - - -* get_connected_data(name_or_item) - -This function returns the properties of a connected data asset, including the properties of the underlying connection. Use wslib$show() to view the properties. The special key ""."" in the returned list provides information about the data and the connection assets. - -The function takes the following required parameter: - - - -* name_or_item: Either a string with the name of a connected data asset or an item like those returned by list_connected_data(). - -Note that when you work with notebooks, you can click Read data on the Code snippets panel to generate code to load data from a connected data asset into a pandas DataFrame for example. - - - - - -" -B019692A5844A9A72292A35B8953AA67836F8201_13,B019692A5844A9A72292A35B8953AA67836F8201," Access asset by ID instead of name - -You should preferably always access data assets and connections by a unique name. Asset names are not necessarily always unique and the ibm-watson-studio-lib functions will raise an exception when a name is ambiguous. You can rename data assets in the UI to resolve the conflict. - -Accessing assets by a unique ID is possible but is discouraged as IDs are valid only in the current project and will break code when transferred to a different project. This can happen for example, when projects are exported and re-imported. You can get the ID of a connection, connected or stored data asset by using the corresponding list function, for example list_connections(). - -The entry point wslib$by_id provides the following functions: - - - -* get_connection(asset_id) - -This function accesses a connection by the connection asset ID. -* get_connected_data(asset_id) - -This function accesses a connected data asset by the connected data asset ID. -* load_data(asset_id, attachment_type_or_item = NULL) - -This function loads the data of a stored data asset by passing the asset ID. See [load_data()](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=enfetch-data) for a decsription of the other parameters you can pass. -* save_data(asset_id, data, overwrite = NULL, mime_type = NULL, file_name = NULL) - -This function saves data to a stored data asset by passing the asset ID. This implies overwrite=TRUE. See [save_data()](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=ensave-data) for a description of the other parameters you can pass. -* download_file(asset_id, file_name = NULL, attachment_type_or_item = NULL) - -" -B019692A5844A9A72292A35B8953AA67836F8201_14,B019692A5844A9A72292A35B8953AA67836F8201,"This function downloads the data of a stored data asset by passing the asset ID. See [download_file()](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html?context=cdpaas&locale=enfetch-data) for a description of the other parameters you can pass. - - - -" -B019692A5844A9A72292A35B8953AA67836F8201_15,B019692A5844A9A72292A35B8953AA67836F8201," Access project storage directly - -You can fetch data from project storage and store data in project storage without synchronizing the project assets using the entry point wslib$storage. - -The entry point wslib$storage provides the following functions: - - - -* fetch_data(filename) - -This function returns the data in a file as a bytes buffer. The file does not need to be registered as data asset. - -The function takes the following required parameter: - - - -* filename: The name of the file in the project. - - - -* store_data(filename, data, overwrite = FALSE) - -This function saves data in memory to storage, but does not create a new data asset. The function returns a list which contains the file name, file path and additional information. Use Use wslib$show() to print the information. - -The function takes the following parameters: - - - -* filename: (Required) The name of the file in the project storage. -* data: (Required) The data to save as a raw object. -* overwrite: (Optional) Overwrites the data of a file in storage if it already exists. By default, this is set to false. - - - -* download_file(storage_filename, local_filename = NULL) - -This function downloads the data in a file in storage and stores it in the specified local file. The local file is overwritten if it already existed. - -The function takes the following parameters: - - - -* storage_filename: (Required) The name of the file in storage to download. -* local_filename: (Optional) The name of the file in the local file system of your runtime to downloaded the file to. Omit this parameter to use the storage file name. - - - -* register_asset(storage_path, asset_name = NULL, mime_type = NULL) - -This function registers the file in storage as a data asset in your project. This operation fails if a data asset with the same name already exists. You can use this function if you have very large files that you cannot upload via save_data(). You can upload large files directly to the IBM Cloud Object Storage bucket of your project, for example via the UI, and then register them as data assets using register_asset(). - -The function takes the following parameters: - - - -" -B019692A5844A9A72292A35B8953AA67836F8201_16,B019692A5844A9A72292A35B8953AA67836F8201,"* storage_path: (Required) The path of the file in storage. -* asset_name: (Optional) The name of the created asset. It defaults to the file name. -* mime_type: (Optional) The MIME type for the created asset. By default the MIME type is determined from the asset name suffix. Use this parameter to specify a MIME type if your file name does not have a file extension or if you want to set a different MIME type. - -Note: You can register a file several times as a different data asset. Deleting one of those assets in the project also deletes the file in storage, which means that other asset references to the file might be broken. - - - - - -" -B019692A5844A9A72292A35B8953AA67836F8201_17,B019692A5844A9A72292A35B8953AA67836F8201," Spark support - -The entry point wslib$spark provides functions to access files in storage with Spark. - -The entry point wslib$spark provides the following functions: - - - -* provide_spark_context(sc) - -Use this function to enable Spark support. - -The function takes the following required parameter: - - - -* sc: The SparkContext. It is provided in the notebook runtime. - -The following example shows you how to set up Spark support: - -library(ibmWatsonStudioLib) -wslib <- access_project_or_space(list(""token""="""")) -wslib$spark$provide_spark_context(sc) - - - -* get_data_url(asset_name) - -This function returns a URL to access a file in storage from Spark via Hadoop. - -The function takes the following required parameter: - - - -* asset_name: The name of the asset. - - - -* storage.get_data_url(file_name) - -This function returns a URL to access a file in storage from Spark via Hadoop. The function expects the file name and not the asset name. - -The function takes the following required parameter: - - - -* file_name: The name of a file in the project storage. - - - - - -" -B019692A5844A9A72292A35B8953AA67836F8201_18,B019692A5844A9A72292A35B8953AA67836F8201," Browse project assets - -The entry point wslib$assets provides generic, read-only access to assets of any type. For selected asset types, there are dedicated functions that provide additional data. - -The following naming conventions apply: - - - -* Functions named list_ return a list of named lists. Each contained list represents one asset and includes a small set of properties (metadata) that identifies the asset. -* Functions named get_ return a single named list with the properties for the asset. - - - -To pretty-print a list or list of named lists, use wslib$show(). - -The functions expect either the name of an asset, or an item from a list as the parameter. By default, the functions return only a subset of the available asset properties. By setting the parameter raw_info=TRUE, you can get the full set of asset properties. - -The entry point wslib$assets provides the following functions: - - - -* list_assets(asset_type, name = NULL, query = NULL, selector = NULL, raw_info = FALSE) - -This function lists all assets for the given type with respect to the given constraints. - -The function takes the following parameters: - - - -* asset_type: (Required) The type of the assets to list, for example data_asset. See list_asset_types() for a list of the available asset types. Use asset type asset for the list of all available assets in the project. -* name: (Optional) The name of the asset to list. Use this parameter if more than one asset with the same name exists. You can only specify either name and query. -* query: (Optional) A query string that is passed to the Watson Data API to search for assets. You can only specify either name and query. -* selector: (Optional) A custom filter function on the candidate asset list items. If the selector function returns TRUE, the asset is included in the returned asset list. -* raw_info: (Optional) Returns all of the available metadata. By default, the parameter is set to FALSE and only a subset of the properties is returned. - -" -B019692A5844A9A72292A35B8953AA67836F8201_19,B019692A5844A9A72292A35B8953AA67836F8201,"Examples of using the list_assets function: - - Import the lib -library(""ibmWatsonStudioLib"") -wslib <- access_project_or_space(list(""token""="""")) - - List all assets in the project -all_assets <- wslib$assets$list_assets(""asset"") -wslib$show(all_assets) - - List all data assets with name 'MyFile.csv' -assets_by_name <- wslib$assets$list_assets(""data_asset"", name = ""MyFile.csv"") - - List all data assets whose name starts with ""MyF"" -assets_by_query <- wslib$assets$list_assets(""data_asset"", query = ""asset.name:(MyF)"") - - List all data assets which are larger than 1MB -sizeFilter <- function(asset) asset$metadata$size > 1000000 -large_assets <- wslib$assets$list_assets(""data_asset"", selector = sizeFilter, raw_info = TRUE) -wslib$show(large_assets) - - List all notebooks -notebooks <- wslib$assets$list_assets(""notebook"") - - - -* list_asset_types(raw_info = FALSE) - -This function lists all available asset types. - -The function can take the following parameter: - - - -* raw_info: (Optional) Returns the full set of metadata. By default, the parameter is FALSE and only a subset of the properties is returned. - - - -* list_datasource_types(raw_info = FALSE) - -This function lists all available data source types. - -The function can take the following parameter: - - - -* raw_info: (Optional) Returns the full set of metadata. By default, the parameter is FALSE and only a subset of the properties is returned. - - - -* get_asset(name_or_item, asset_type=None, raw_info = FALSE) - -The function returns the metadata of an asset. - -The function takes the following parameters: - - - -" -B019692A5844A9A72292A35B8953AA67836F8201_20,B019692A5844A9A72292A35B8953AA67836F8201,"* name_or_item: (Required) The name of the asset or an item like those returned by list_assets() -* asset_type: (Optional) The type of the asset. If the parameter name_or_item contains a string for the name of the asset, setting asset_type is required. -* raw_info: (Optional) Returns the full set of metadata. By default, the parameter is FALSE and only a subset of the properties is returned. - -Example of using the list_assets and get_asset functions: - -notebooks <- wslib$assets$list_assets(""notebook"") -wslib$show(notebooks) - -notebook <- wslib$assets$get_asset(notebooks[1]]) -wslib$show(notebook) - - - -* get_connection(name_or_item, with_datasourcetype=False, raw_info = FALSE) - -This function returns the metadata of a connection. - -The function takes the following parameters: - - - -* name_or_item: (Required) The name of the connection or an item like those returned by list_connections() -* with_datasourcetype: (Optional) Returns additional information about the data source type of the connection. -* raw_info: (Optional) Returns the full set of metadata. By default, the parameter is FALSE and only a subset of the properties is returned. - - - -* get_connected_data(name_or_item, with_datasourcetype=False, raw_info = FALSE) - -This function returns the metadata of a connected data asset. - -The function takes the following parameters: - - - -* name_or_item: (Required) The name of the connected data asset or an item like those returned by list_connected_data() -* with_datasourcetype: (Optional) Returns additional information about the data source type of the associated connected data asset. -* raw_info: (Optional) Returns the full set of metadata. By default, the parameter is FALSE and only a subset of the properties is returned. - - - -* get_stored_data(name_or_item, raw_info = FALSE) - -" -B019692A5844A9A72292A35B8953AA67836F8201_21,B019692A5844A9A72292A35B8953AA67836F8201,"This function returns the metadata of a stored data asset. - -The function takes the following parameters: - - - -* name_or_item: (Required) The name of the stored data asset or an item like those returned by list_stored_data() -* raw_info: (Optional) Returns the full set of metadata. By default, the parameter is FALSE and only a subset of the properties is returned. - - - -* list_attachments(name_or_item_or_asset, asset_type=None, raw_info = FALSE) - -This function returns a list of the attachments of an asset. - -The function takes the following parameters: - - - -* name_or_item_or_asset: (Required) The name of the asset or an item like those returned by list_stored_data() or get_asset(). -* asset_type: (Optional) The type of the asset. It defaults to type data_asset. -* raw_info: (Optional) Returns the full set of metadata. By default, the parameter is FALSE and only a subset of the properties is returned. - -Example of using the list_attachments function to read an attachment of a stored data asset: - -assets <- wslib$list_stored_data() -wslib$show(assets) - -asset <- assets[1]] -attachments <- wslib$assets$list_attachments(asset) -wslib$show(attachments) -buffer <- wslib$load_data(asset, attachments[1]]) - - - - - -Parent topic:[Using ibm-watson-studio-lib](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/using-ibm-ws-lib.html) -" -23060B35041C9ABD00099B1E0B1D83DAFF453C6D_0,23060B35041C9ABD00099B1E0B1D83DAFF453C6D," Collaboration roles for governance - -Review the collaboration roles for managing access to governance tools such as inventories, AI use cases, and evaluations. - -" -23060B35041C9ABD00099B1E0B1D83DAFF453C6D_1,23060B35041C9ABD00099B1E0B1D83DAFF453C6D," User roles and permissions for governance - -The permissions that you allow you to work with governance artifacts depend on your watsonx roles: - - - -* IAM Platform access roles determine your permissions for the IBM Cloud account. At least the Viewer role is required to work with services. -* IAM Service access roles determine your permissions within services. -* Workspace collaborator roles determine what actions you have permission to perform within workspaces in IBM watsonx. - - - -For details, see [Levels of user access roles in IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html). - -" -23060B35041C9ABD00099B1E0B1D83DAFF453C6D_2,23060B35041C9ABD00099B1E0B1D83DAFF453C6D," Roles for governance - -If you have the IAM Platform Admin role, you can: - - - -* Provision watsonx.governance -* Create inventory -* Create platform assets catalog -* Enable external model tracking -* Create attachment fact definitions -* Customize report templates - - - -If you have these workspace roles for an inventory, you can: - - - -Governance permissions for inventories - - Enabled permission Viewer Editor Admin/Owner - - Create and edit AI use cases ✓ ✓ - View AI use cases ✓ ✓ ✓ - Add collaborators to an inventory ✓ - Delete inventory ✓ - Evaluate model deployment ✓ ✓ - Add collaborators to a use case ✓ ✓ - Generate reports ✓ ✓ ✓ - Add attachments to a use case ✓ ✓ - Update asset type definitions
(For example: model_entry_user, modelfacts_user) ✓ - - - -If you have these workspace roles for an AI use case, you can: - - - -Governance permissions for AI use cases - - Enabled permission Editor/Collaborator Admin/Owner - - Delete AI use cases ✓ - Add collaborators to the use case ✓ - Edit AI use case ✓ ✓ - Edit use case ✓ ✓ - Add values to custom facts ✓ ✓ - Upload attachments to use case ✓ ✓ - - - -If you have these workspace roles for a project or space, you can: - - - -Governance permissions for project and space roles - - Enabled permission Viewer Editor/Collaborator Admin/Owner - - Track/untrack prompt template ✓ ✓ - Upload attachments to use case ✓ ✓ - Add values to custom facts ✓ ✓ - View AI factsheet ✓ ✓ ✓ - Generate report ✓ ✓ ✓ - - - -" -23060B35041C9ABD00099B1E0B1D83DAFF453C6D_3,23060B35041C9ABD00099B1E0B1D83DAFF453C6D," Learn more - -Parent topic:[Governing assets in AI use cases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-use-cases.html) -" -074C9BAEB0177E3CF57BAC36E5FCBD13063498A1_0,074C9BAEB0177E3CF57BAC36E5FCBD13063498A1," Governing assets in AI use cases - -Create an AI use case to track and govern AI assets from request through production. Factsheets capture details about the asset for each stage of the AI lifecycle to help you meet governance and compliance goals. - -To learn about AI use cases, you can follow a tutorial in the Getting started with watsonx.governance sample project. Assets in the sample are prompt templates for a car insurance claim processing use case. The prompts use car insurance claims as input and then use large language models to help insurance agents process the claims. One prompt summarizes claims, another prompt extracts key information such as make and model, and the last prompt generates suggestions for the insurance agent. - -In Projects, start a new project, then choose to create a project from a sample. The project gallery includes the getting started sample. - -![Getting started sample project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/xgov-get-started-project.png) - -When your project is ready, open the Readme for a step-by-step tutorial. - -![Getting started sample project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/xgov-get-started-project-readme.png) - -" -074C9BAEB0177E3CF57BAC36E5FCBD13063498A1_1,074C9BAEB0177E3CF57BAC36E5FCBD13063498A1," Get started with AI use cases - -Set up or work with AI use cases: - - - -* [Create an inventory](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-inventory-manage.html) for storing AI use cases -* [Set up an AI use case](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-use-cases.html) -* [Track assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-tracking-overview.html) in an AI use case -* [View factsheets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-factsheet-viewing.html) for tracked assets - - - -Parent topic:[Governing AI assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-overview.html) -" -0F13ADFC739D217925DDCEBB152284565BD43DE8_0,0F13ADFC739D217925DDCEBB152284565BD43DE8," Customizing details for a use case or factsheet - -You can programmatically customize the information that is collected in factsheets for AI use cases. Use customized factsheets as part of your AI Governance strategy. - -" -0F13ADFC739D217925DDCEBB152284565BD43DE8_1,0F13ADFC739D217925DDCEBB152284565BD43DE8," Updating a model or model use case programmatically - -You might want to update a model use case or model factsheet with additional information. For example, some companies have a standard set of details they want to accompany a model use case or model facts. - -Currently, you must update the tenant-level asset types by modifying the user attributes that uses the [Watson Data REST API](https://cloud.ibm.com/apidocs/watson-data-apiintroduction) to update the asset. - -" -0F13ADFC739D217925DDCEBB152284565BD43DE8_2,0F13ADFC739D217925DDCEBB152284565BD43DE8," Updating a custom asset type - -Follow these steps to update a custom asset type: - - - -1. Provide the bss_account_id query parameter for the [getcatalogtype method](https://cloud.ibm.com/apidocs/watson-data-apigetcatalogtype). -2. Provide asset_type as model_entry_user if you are updating attributes for model_entry. Provide asset_type as modelfacts_user if you are updating attributes for model facts. -3. Retrieve the current asset type definition by using the [getcatalogtype method](https://cloud.ibm.com/apidocs/watson-data-apigetcatalogtype) where asset_type is either modelfacts_user or model_entry_user. -4. Update the current asset type definition with the custom attributes by adding them to properties JSON object following the schema that is defined in the API documentation. The following types of attributes are supported to view and edit from the user interface of the model use case or model: - - - -* string -* date -* integer - - - -5. After the JSON is updated with the new properties, start the changes by using the [replaceassettype method](https://cloud.ibm.com/apidocs/watson-data-apireplaceassettype). Provide the asset_type , bss_account_id, and request payload. - - - -When the update is complete, you can view the custom attributes in the AI use case details page and model details page. - -" -0F13ADFC739D217925DDCEBB152284565BD43DE8_3,0F13ADFC739D217925DDCEBB152284565BD43DE8," Example 1: Retrieving and updating the model_entry_user asset type - -Note:This example updates the use case user data. You can use the same format but substitute modelfacts_user to retrieve and update details for the model factsheet. - -This curl command retrieves the asset type model_entry_user: - -curl -X GET --header 'Accept: application/json' --header ""Authorization: ZenApiKey ${MY_TOKEN}"" 'https://api.dataplatform.cloud.ibm.com:443/v2/asset_types/model_entry_user?bss_account_id=' - -This snippet is a sample response payload for model use case user details: - -{ -""description"": ""The model use case to capture user defined attributes."", -""fields"": [], -""relationships"": [], -""properties"": {}, -""decorates"": [{ -""asset_type_name"": ""model_entry"" -}], -""global_search_searchable"": [], -""localized_metadata_attributes"": { -""name"": { -""default"": ""Additional details"", -""en"": ""Additional details"" -} -}, -""attribute_only"": false, -""name"": ""model_entry_user"", -""version"": 1, -""scope"": ""ACCOUNT"" -} - -This curl command updates the model_entry_user asset type: - -curl -X PUT --header 'Content-Type: application/json' --header 'Accept: application/json' --header ""Authorization: ZenApiKey ${MY_TOKEN}"" -d '@requestbody.json' 'https://api.dataplatform.cloud.ibm.com:443/v2/asset_types/model_entry_user?bss_account_id=' - -The requestbody.json contents look like this: - -{ -""description"": ""The model use case to capture user defined attributes."", -""fields"": [], -" -0F13ADFC739D217925DDCEBB152284565BD43DE8_4,0F13ADFC739D217925DDCEBB152284565BD43DE8,"""relationships"": [], -""properties"": { -""user_attribute1"": { -""type"": ""string"", -""description"": ""User attribute1"", -""placeholder"": ""User attribute1"", -""is_array"": false, -""required"": true, -""hidden"": false, -""readonly"": false, -""default_value"": ""None"", -""label"": { -""default"": ""User attribute1"" -} -}, -""user_attribute2"": { -""type"": ""integer"", -""description"": ""User attribute2"", -""placeholder"": ""User attribute2"", -""is_array"": false, -""required"": true, -""hidden"": false, -""readonly"": false, -""label"": { -""default"": ""User attribute2"" -} -}, -""user_attribute3"": { -""type"": ""date"", -""description"": ""User attribute3"", -""placeholder"": ""User attribute3"", -""is_array"": false, -""required"": true, -""hidden"": false, -""readonly"": false, -""default_value"": ""None"", -""label"": { -""default"": ""User attribute3"" -} - -} -""decorates"": [{ -""asset_type_name"": ""model_entry"" -}], -""global_search_searchable"": [], -""attribute_only"": false, -""localized_metadata_attributes"": { -""name"": { -""default"": ""Additional details"", -""en"": ""Additional details"" -} -} -} - -" -0F13ADFC739D217925DDCEBB152284565BD43DE8_5,0F13ADFC739D217925DDCEBB152284565BD43DE8," Updating user details by using the Python client - -You can also update and replace an asset type with properties by using a Python script. For details, see [fact sheet elements description](https://s3.us.cloud-object-storage.appdomain.cloud/factsheets-client/index.htmlfactsheet-asset-elements). - -After you update asset type definitions with custom attributes, you can provide values for those attributes from the model use case overview and model details pages. You can also update values to the custom attributes that use these Python API client methods: - - - -* [Model Asset Utilities](https://s3.us.cloud-object-storage.appdomain.cloud/factsheets-client/index.htmlibm_aigov_facts_client.factsheet.asset_utils_model.ModelAssetUtilities.set_custom_fact) -* [Model Entry Utilities](https://s3.us.cloud-object-storage.appdomain.cloud/factsheets-client/index.htmlibm_aigov_facts_client.factsheet.asset_utils_model.ModelEntryUtilities.set_custom_fact) - - - -" -0F13ADFC739D217925DDCEBB152284565BD43DE8_6,0F13ADFC739D217925DDCEBB152284565BD43DE8," Capturing cell facts for a model - -When a data scientist develops a model in a notebook, they generate visualizations for key model details, such as ROC curve, confusion matrix, panda profiling report, or the output of any cell execution. To capture those facts as part of a model use case, use the ['capture_cell_facts`](https://s3.us.cloud-object-storage.appdomain.cloud/factsheets-client/index.htmlcapture-cell-facts) function in the AI Factsheets Python client library. - -" -0F13ADFC739D217925DDCEBB152284565BD43DE8_7,0F13ADFC739D217925DDCEBB152284565BD43DE8," Troubleshooting custom fields - -After you customize fields and make them available to users, a user trying to update fields in the Additional details section of model details might get this error: - -Update failed. To update an asset attribute, you must be a catalog Admin or an asset owner or member with the Editor role. Ask a catalog Admin to update your catalog role or ask an asset member with the Editor role to add you as a member. - -If the user already has edit permission on the model and is still getting the error message, follow these steps to resolve it. - - - -1. Invoke the API command for [createassetattributenewv2](https://cloud.ibm.com/apidocs/watson-data-api-cpdcreateassetattributenewv2). -2. Use this payload with the command: - -{ -""name"": ""modelfacts_system"", -""entity"": { -} -} - -where asset_id is the model_id. Enter either project_id or space_id or catalog_id where the model exists. - - - -" -0F13ADFC739D217925DDCEBB152284565BD43DE8_8,0F13ADFC739D217925DDCEBB152284565BD43DE8," Learn more - -Find out about working with an inventory programmatically, by using the [IBM_AIGOV_FACTS_CLIENT documentation](https://s3.us.cloud-object-storage.appdomain.cloud/factsheets-client/index.html). -" -0BD226C12CB659BF7711FE30C594E548525DBBD2_0,0BD226C12CB659BF7711FE30C594E548525DBBD2," Governing external models - -Enable governance for models that are created in notebooks or outside of Cloud Pak for Data. Track the results of model evaluations and model details in factsheets. - -In addition to governing models trained by using Watson Machine Learning, you can govern models that are created by using third-party tools such as Amazon Web Services or Microsoft Azure. For a list of supported providers, see [Supported machine learning providers](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-frameworks-ovr.html). Additionally, models that are developed in notebooks are considered external models, so you can use AI Factsheets to govern models that you develop, deploy, and monitor on platforms other than Cloud Pak for Data. - -Use the model evaluations provided with watsonx.governance to measure performance metrics for a model you imported from an external provider. Capture the facts in factsheets for the model and the evaluation metrics as part of an AI use case. Use the tracked data as part of your governance and compliance strategy. - -" -0BD226C12CB659BF7711FE30C594E548525DBBD2_1,0BD226C12CB659BF7711FE30C594E548525DBBD2," Before you begin - -Before you can begin, make sure that you, or a user with an Admin role, does the following: - - - -* Enable the tracking of external models in an inventory. -* Assign an owner for the inventory. - - - -For details, see [Managing inventories](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-inventory-manage.html). - -" -0BD226C12CB659BF7711FE30C594E548525DBBD2_2,0BD226C12CB659BF7711FE30C594E548525DBBD2," Preparing to track external models - -These points are an overview of the process for preserving facts for an external model. - - - -* Tracked external models are listed under AI use cases in the main navigation menu. -* You can use the API in a model notebook to save an external model asset to an inventory. -* Associate the external model asset with an AI use case in the inventory to start preserving the facts. Along with model metadata, new fields External model identifier and External deployment identifier describe how the models and deployments are identified in external systems, for example: AWS or Azure. -* You can also automatically add external models to an inventory when they are evaluated in watsonx.governance. The destination inventory is established following these rules: - - - -* The external model is created in the Platform assets catalog if its corresponding development-time model exists in the Platform assets catalog or if there is no development-time model that is created in any inventory. -* If the corresponding development-time model is created in an inventory by using the Python client, then the model is created in that inventory. - - - - - -" -0BD226C12CB659BF7711FE30C594E548525DBBD2_3,0BD226C12CB659BF7711FE30C594E548525DBBD2," Associating an external model asset with an AI use case - -Automatic external model tracking adds any external models that are evaluated in watsonx.governance to the inventory where the development-time model exists. After the model is in the inventory, you can associate an external model asset with a use case in the following ways: - - - -* Use the API to save the external model asset to any inventory programmatically from a notebook. The external model asset can then be associated with an AI use case. -* Associate the external model that is created with Watson OpenScale evaluation with an AI use case. - - - -" -0BD226C12CB659BF7711FE30C594E548525DBBD2_4,0BD226C12CB659BF7711FE30C594E548525DBBD2," Creating an external model asset with the API - - - -1. Create a model in a notebook. -2. Save the model. For example, you can save to an S3 bucket. -3. Use the API to create an external model asset (a representation of the external model) in an inventory. For more information on API commands that interact with the inventory, see the [IBM_AIGOV_FACTS_CLIENT documentation](https://s3.us.cloud-object-storage.appdomain.cloud/factsheets-client/index.htmlexternalmodelfactselements). - - - -" -0BD226C12CB659BF7711FE30C594E548525DBBD2_5,0BD226C12CB659BF7711FE30C594E548525DBBD2," Registering an external model asset with an inventory - - - -1. Open the Assets tab in the inventory where you want to track the model. -2. Select the External model asset that you want to track. -3. Return to the Assets tab in the Inventory and click Add to AI use case. -4. Select an existing AI use case or create a new one. -5. Follow the prompts to save the details to the inventory. - - - -" -0BD226C12CB659BF7711FE30C594E548525DBBD2_6,0BD226C12CB659BF7711FE30C594E548525DBBD2," Registering an external model from Watson OpenScale - -If you are validating an external model in Watson OpenScale, you can associate an external model with an AI use case to track the lifecycle facts. - - - -1. Add an external model to the OpenScale dashboard. -2. If you already defined an AI use case with the API, the system recognizes the use case association. -3. As you create and monitor a deployment, the facts are registered with the associated use case. These facts display in the Validate or Operate stage, depending on how you classified the machine learning provider for the model. - - - -" -0BD226C12CB659BF7711FE30C594E548525DBBD2_7,0BD226C12CB659BF7711FE30C594E548525DBBD2," Populating the AI use case - -When facts are saved for an external model asset, they are associated with the pillar that represents their phase in the lifecycle, as follows: - - - -* If the external model asset is created from a notebook without deployment, it displays in the Develop pillar. -* If the external model asset is created from a notebook with deployment, it displays in the Test pillar. -* When the external model deployment is evaluated in OpenScale, it displays in the Validate or Operate stage, depending on how you classified the machine learning provider for the model. - - - -" -0BD226C12CB659BF7711FE30C594E548525DBBD2_8,0BD226C12CB659BF7711FE30C594E548525DBBD2," Example: tracking a Sagemaker model - -This sample model, created in Sagemaker, is registered for tracking and moves through the Test, Validate, and Operate phases. - -![Sample external model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/factsheet-external1.png) - -" -0BD226C12CB659BF7711FE30C594E548525DBBD2_9,0BD226C12CB659BF7711FE30C594E548525DBBD2," Viewing facts for an external model - -Viewing facts for an external model is slightly different from viewing facts for a Watson Machine Learning model. These rules apply: - - - -* Click the Assets tab of the inventory containing the external model assets to view facts. -* Unlike Watson Machine Learning model use cases, which have different fact sheets for models and deployments, fact sheets for external models combine information for the model and deployments on the same page. -* Multiple assets with the same name can be created in an inventory. To differentiate them the tags development, pre-production and production are assigned automatically to reflect their state. - - - -Parent topic:[Governing assets in AI use cases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-use-cases.html) -" -3334DCCDB9872C1E7F698751B138AA5AF6CC8335_0,3334DCCDB9872C1E7F698751B138AA5AF6CC8335," Viewing a factsheet for a tracked asset - -Review the details that are captured for each tracked asset in an AI use case or print a report to share or archive. - -" -3334DCCDB9872C1E7F698751B138AA5AF6CC8335_1,3334DCCDB9872C1E7F698751B138AA5AF6CC8335," What is captured in a factsheet? - -From the point where you start tracking an asset in an AI use case, facts are collected in a factsheet for the asset. As the asset moves from one phase of the lifecycle to the next, the facts are added to the appropriate section. For example, a factsheet for a prompt template collects information for these categories: - - - - Category Description - - Governance basic details for the governance, including the name of the use case, version number, and approach information - Foundation model name and provider for the foundation model - Prompt template prompt template name, description, input, and variables - Prompt parameters options used to create the prompt template, such as decoding method - Evaluation results of the most recent evaluation - Attachments attached files and supporting documents - - - -Important: The factsheet records the most recent activity in any category. For example, if you evaluate a deployed prompt template in a pre-production space, and then evaluate it in a production space, the details from the production evaluation are captured in the factsheet, over-writing the previous data. Thus, the factsheet maintains a complete record of the current state of the asset. - -![Vewing a factsheet for a tracked prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/xgov-view-factsheet1.png) - -" -3334DCCDB9872C1E7F698751B138AA5AF6CC8335_2,3334DCCDB9872C1E7F698751B138AA5AF6CC8335," Next steps - -Click Export report to save a report of the factsheet. - -Parent topic:[Governing assets in AI use cases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-create-use-case.html) -" -391DBD504569F02CCC48B181E3B953198C8F3C8A_0,391DBD504569F02CCC48B181E3B953198C8F3C8A," Managing an inventory for AI use cases - -Create or manage an inventory for storing and reviewing AI use cases. AI use cases collect governance facts for AI assets your organization tracks. You can view all the AI use cases in an inventory or open one to explore the details of an AI asset. - -" -391DBD504569F02CCC48B181E3B953198C8F3C8A_1,391DBD504569F02CCC48B181E3B953198C8F3C8A," Creating an inventory for AI use cases - -You must have Admin rights to create and manage an inventory. For more information, see [Collaboration roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-collab-roles.html). - - - -1. Click AI use cases from the navigation menu. -2. Click the settings icon ![gear icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ml-setting-icon.png) for the AI use cases view. - -![Opening settings for AI use cases inventory](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/xgov-use-case-settings.png) -3. Click New inventory on the Inventory management tab. -4. Assign a name, add an optional description, and associate a Cloud Object Storage instance. -5. (Optional) Click General to extend the functions of an inventory with these options: - - - -* If there is no Platform Access Catalog available for your account, you are prompted to create one. A Platform Access Catalog (PAC) is a platform catalog that provides a repository for inventory assets. It is required for governing external models or managing attachments and reports. -* Enable the option for External model governance to govern models that are trained with machine learning providers other than the Watson Machine Learning. For a list of supported providers, see [Supported machine learning providers](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-frameworks-ovr.html). - - - - - -" -391DBD504569F02CCC48B181E3B953198C8F3C8A_2,391DBD504569F02CCC48B181E3B953198C8F3C8A," Adding collaborators to an inventory - -Inventories are meant to be collaborative so that multiple people that perform different roles can contribute to governance of key assets. To add collaborators to an inventory: - - - -1. From the Inventory management tab, click Set access from the overflow menu for the inventory. -2. Click Add collaborators to add collaborators individually, or by user group. -3. Assign a role of Admin, Editor, or Viewer. -4. Collaborators are added to the list for the inventory. You can remove or change the assigned access as needed. - - - -" -391DBD504569F02CCC48B181E3B953198C8F3C8A_3,391DBD504569F02CCC48B181E3B953198C8F3C8A," Managing external models, report templates, and attachments - -You can extend inventory management to include the ability to govern external models, customize report templates, and manage attachments for factsheets. - -Before you can access these services, you must have access to a Platform Access Catalog. A Platform Access Catalog is a common catalog for storing data connections and is required for governing external models and notebooks, and for customizing report templates and creating attachment groups. - -" -391DBD504569F02CCC48B181E3B953198C8F3C8A_4,391DBD504569F02CCC48B181E3B953198C8F3C8A," Creating a Platform Assets Catalog - -If you have Admin access, you can create a Platform Access Catalog if one does not exist. - - - -1. From the General tab of Inventory Management, you are prompted to create a Platform Access Catalog. -2. Click Get started and follow the prompts to name the catalog, associate it with a Cloud Object Storage instance, and specify some configuration details. -3. After the catalog is created, you can add users as collaborators in the catalog. - - - -" -391DBD504569F02CCC48B181E3B953198C8F3C8A_5,391DBD504569F02CCC48B181E3B953198C8F3C8A," Enabling governance of external models - -Enable governance for models that are created in notebooks or outside of Cloud Pak for Data. Track the results of model evaluations and model details in factsheets. - - - -1. From the General tab of an inventory, enable the option for External model management. -2. Select an inventory for tracking external models. -3. Select an owner, then click Apply. - - - -Note: When external models are added, they are listed under AI use cases in the main navigation menu. - -" -391DBD504569F02CCC48B181E3B953198C8F3C8A_6,391DBD504569F02CCC48B181E3B953198C8F3C8A," Managing report templates - -As an inventory administrator, you can manage report templates to customize the report templates for inventory users. - -For details, see [Managing report templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-manage-reports.html). - -" -391DBD504569F02CCC48B181E3B953198C8F3C8A_7,391DBD504569F02CCC48B181E3B953198C8F3C8A," Managing attachments - -As an inventory administrator, you can create and manage attachment groups for AI use cases to provide the structure for users to attach supporting files to enrich a use case or a factsheet. For example, if you want every use case to include approval documents, you can create a group to define placeholders for those documents in each use case. Users can then upload the documents to those placeholder slots. - -For more information, see [Managing attachments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-manage-attachments.html) - -" -391DBD504569F02CCC48B181E3B953198C8F3C8A_8,391DBD504569F02CCC48B181E3B953198C8F3C8A," Learn more - -Parent topic:[Governing assets in AI use cases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-use-cases.html) -" -6A186D2C83108A0288BDFE3D4CEA201AC0837503_0,6A186D2C83108A0288BDFE3D4CEA201AC0837503," Managing attachments for AI use cases - -Create attachment groups and define attachment slots for an AI use case or factsheet. - -" -6A186D2C83108A0288BDFE3D4CEA201AC0837503_1,6A186D2C83108A0288BDFE3D4CEA201AC0837503," Adding attachment groups - -If you have admin access to an inventory, you can define attachment groups and manage attachment definitions for the AI use cases or factsheets in the inventory. Use an attachment group to organize a set of related attachment facts and render them together. Attachments can provide supporting information and extra details for a use case. Data scientists might want to attach visualizations from their model. Model requesters might want to attach a file of requirements to describe a business need. - -" -6A186D2C83108A0288BDFE3D4CEA201AC0837503_2,6A186D2C83108A0288BDFE3D4CEA201AC0837503," Creating an attachment group - - - -1. Open the AI uses cases settings and click the Attachments tab. If you do not see this tab, you might have insufficient access. -2. Choose whether to add an attachment group to an AI use case or to the factsheet template. -3. Click Add group. -4. Enter a name and an optional description. -5. When you define the attachment group, an identifier is created from the name of the group. The identifier can be used for programmatic access to the group. Click Show identifier to view and edit the ID. -6. Save your changes to create the attachment group. - - - -" -6A186D2C83108A0288BDFE3D4CEA201AC0837503_3,6A186D2C83108A0288BDFE3D4CEA201AC0837503," Adding attachment facts to a group - -From an attachment group, add attachment fact definitions that specify how a user can add an attachment to a factsheet. Attachment definitions display as available slots in the attachment section for a use case or factsheet. - -Use the up and down arrow keys to reorder attachments in the list. - -In this example, an attachment group for approvals defines attachment facts for approvals from risk and compliance and from the model validator. - -![Defining an attachment group and attachment facts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/xgov-attach-group.png) - -When you save your attachment fact definitions, an attachment slot and description display on the use case or factsheet for attaching a file. A pin icon indicates an available attachment slot. Any user with at least edit access to the use case or factsheet can upload attachments. - -![Defining an attachment group and attachment facts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/xgov-attach-group2.png) - -Parent topic:[Creating and managing inventories](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-inventory-manage.html) -" -538ECAE0B5AA21E499F39C2637764A05BFF7B6B6_0,538ECAE0B5AA21E499F39C2637764A05BFF7B6B6," Managing and customizing report templates - -If the default report templates that are provided with AI Factsheets do not meet your needs, you can download a default report template, customize it, and upload the new template. - -" -538ECAE0B5AA21E499F39C2637764A05BFF7B6B6_1,538ECAE0B5AA21E499F39C2637764A05BFF7B6B6," Customizing a custom report template - -Any user with at least Editor access can create a report from an AI use case that captures all the details from An AI use case. You can use reports for compliance verification, archiving, or other purposes. - -If the default templates for the reports do not meet the needs of your organization, you can customize the report templates, the branding file, or the default stylesheet. For example, you can replace the IBM logo with your own logo image file. You must have the Admin role for managing inventories to customize report templates. - -Follow these steps to customize a report template. - -" -538ECAE0B5AA21E499F39C2637764A05BFF7B6B6_2,538ECAE0B5AA21E499F39C2637764A05BFF7B6B6," Downloading a report - -To download a report template from the UI: - - - -1. Open the AI uses cases settings and click the Report templates tab. If you do not see this tab, you might have insufficient access. -2. In the options menu for a report template, click Download. ![Downloading a report template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/xgov-report-template.png) -3. Open the .ftl file in an editor. -4. Edit the template by using instructions from [Apache FreeMarker](https://freemarker.apache.org/) or the API commands. - - - -To download a report template by using APIs: - - - -1. Use the GET endpoint for /v1/aigov/report_templates in the [IBM Watson Data API](https://cloud.ibm.com/apidocs/watson-data-api) to list the available templates. Note the ID for the template that you want to download. -2. Use the GET endpoint /v1/aigov/report_templates/{template_id}/content with the template ID to download the template file. -3. Open the .ftl file in an editor. -4. Edit the template by using instructions from [Apache FreeMarker](https://freemarker.apache.org/) or the API commands. - - - -" -538ECAE0B5AA21E499F39C2637764A05BFF7B6B6_3,538ECAE0B5AA21E499F39C2637764A05BFF7B6B6," Uploading a template - - - -1. Open the AI uses cases settings and click the Report templates tab. If you do not see this tab, you might have insufficient access. -2. Click Add template. -3. Specify a name for the template and an optional description. -4. Choose the type of template: model or model use case. The reports are available for external models and Watson Machine Learning models. -5. Upload the updated FTL file. - - - -Restriction:The ftl file that you upload must not import any other files. Support is not yet available for import statements other than system templates in the ftl file. - -The custom template displays in the Report templates section and is available for creating reports. Click Edit or Delete from the action menu for a custom template to update the template details or to remove the template. - -Parent topic:[Creating and managing inventories](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-inventory-manage.html) -" -C6223EEB52369B6B2BAA2B489C9DA41C882154B9_0,C6223EEB52369B6B2BAA2B489C9DA41C882154B9," Watsonx.governance - -Use watsonx.governance to accelerate responsible, transparent, and explainable AI workflows with an AI governance solution that provides end-to-end monitoring for machine learning and generative AI models. Monitor your foundation model and machine learning assets from request to production. Collect facts about models that are built with IBM tools or third-party providers in a single dashboard to aid in meeting compliance and governance goals. - -" -C6223EEB52369B6B2BAA2B489C9DA41C882154B9_1,C6223EEB52369B6B2BAA2B489C9DA41C882154B9," Develop a comprehensive governance solution - -Using watsonx.governance, you can extend the best practices of AI governance from predictive machine learning models to generative AI while monitoring and mitigating the risks associated with models, users, and data sets. The benefits of this approach include: - - - -* Responsible AI: extend the practices of responsible AI from governing predictive machine learning models to the use of generative AI with any foundation or model provider. -* Explainability: Use automation to improve transparency and explainability for tracked models. Use tools for detecting and mitigating risks that are associated with AI. -* Transparent and regulatory policies: Mitigate AI risks by tracking the end-to-end AI lifecycle to aid compliance with internal policies and external regulations for enterprise-wide AI solutions. - - - -" -C6223EEB52369B6B2BAA2B489C9DA41C882154B9_2,C6223EEB52369B6B2BAA2B489C9DA41C882154B9," Use the AI risk atlas a guide - -Start your governance journey by reviewing the [Risk Atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) to learn about the potential risks of working with AI models. The Risk Atlas provides a guide to understanding some of the risks of working with AI models, including generative AI, foundation models, and machine learning models. In addition to describing potential risks, it provides real-world context. It is intended as an educational resource and is not meant as a prescriptive tool. - -" -C6223EEB52369B6B2BAA2B489C9DA41C882154B9_3,C6223EEB52369B6B2BAA2B489C9DA41C882154B9," Governance in action - -This illustration depicts a typical governance flow, from request to monitoring in production. - -![watsonx.governance flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/wastonx-gov-concept.svg) - -" -C6223EEB52369B6B2BAA2B489C9DA41C882154B9_4,C6223EEB52369B6B2BAA2B489C9DA41C882154B9," Components of watsonx.governance - -Watsonx.governance includes these tools for addressing your governance needs in an integrated solution: - - - -* Watson OpenScale provides tools for configuring monitors that evaluate your deployed assets against thresholds you specify. For example, you can configure threshold that alerts you when predictive machine learning models perform below a specified threshold for fairness in monitored outcomes, or drift from accuracy. Alerts for foundation models can warn you when a threshold is breached for the presence of hateful or abusive language or the detection of personal identifiable information. A Model Health monitor provides real-time performance tracking for deployed models. -* AI Factsheets collects the metadata for machine learning models and prompt templates you explicitly track. Develop AI use cases to gather all of the information for managing a model or prompt template from the request phase through development and into production. Manage multiple versions or a model, or compare different approaches to solving a business problem within a use case. Factsheets display information about the models including creation information, data that is used, and where the asset is in the lifecycle. A common model inventory dashboard gives you a view of all tracked assets, or you can view the details of a particular model, all in service of meeting policy and compliance goals. - - - -" -C6223EEB52369B6B2BAA2B489C9DA41C882154B9_5,C6223EEB52369B6B2BAA2B489C9DA41C882154B9," Extend governance with watsonx.ai - -To create an end-to-end experience for developing assets and then adding them to governance, use watsonx.ai with watsonx.governance. Watsonx.ai extends the Watson Studio and Watson Machine Learning services to work with foundation models, including capabilities for saving prompt templates for a curated collection of large language model assets. - -For more information on watsonx.ai, see: - - - -* [Overview of IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html) -* [Signing up for IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html) - - - -" -C6223EEB52369B6B2BAA2B489C9DA41C882154B9_6,C6223EEB52369B6B2BAA2B489C9DA41C882154B9," Next steps - - - -" -A85E898F28AC27DAA8961337A9B468004C1B8B21_0,A85E898F28AC27DAA8961337A9B468004C1B8B21," Planning for AI governance - -Plan how to use watsonx.governance to accelerate responsible, transparent, and explainable AI workflows with an AI governance solution that provides end-to-end monitoring for machine learning and generative AI models. - -" -A85E898F28AC27DAA8961337A9B468004C1B8B21_1,A85E898F28AC27DAA8961337A9B468004C1B8B21," Governance capabilities - -Note:To govern metadata from foundation models, you must have watsonx.ai provisioned. - -Consider these watsonx.governance capabilities as you plan your governance strategy: - - - -* Collect metadata in factsheets about machine learning models and prompt templates for large language models. -* Customize the metadata facts that are captured in factsheets for machine learning and foundation models. -* Monitor machine learning deployments for fairness, drift, and quality to ensure that your models are meeting specified standards. -* Monitor foundation models for breaches of toxic language thresholds or detection of personal identifiable information. -* Evaluate prompt templates with metrics designed to measure performance and to test for the presence of prohibited content, such as hateful speech. -* Collect model health data including data size, latency, and throughput to help you assess performance issues and manage resource consumption. -* Assign a single risk score to tracked models to indicate the relative impact of the associated model. For example, a model that predicts sensitive information such as a credit score might be assigned a higher risk score than a model that projects ice cream sales. -* Use the automated transaction analysis tools to improve transparency and explainability for your AI assets. For example, see how a feature contributes to a prediction and test what-if scenarios to explore different outcomes. - - - -" -A85E898F28AC27DAA8961337A9B468004C1B8B21_2,A85E898F28AC27DAA8961337A9B468004C1B8B21," Planning for governance - -Consider these governance strategies: - - - -* [Build your governance team](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-plan.html?context=cdpaas&locale=enpeople) -* [Set up your governance structures](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-plan.html?context=cdpaas&locale=enstructure) -* [Manage collaboration with roles and access control](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-plan.html?context=cdpaas&locale=encollab) -* [Develop a communication plan](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-plan.html?context=cdpaas&locale=encommunicate). -* [Implement a simple solution](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-plan.html?context=cdpaas&locale=ensimple) -* [Plan for more complex solutions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-plan.html?context=cdpaas&locale=encomplex) - - - -" -A85E898F28AC27DAA8961337A9B468004C1B8B21_3,A85E898F28AC27DAA8961337A9B468004C1B8B21," Build a governance team - -Consider the expertise that you need on your governance team. A typical governance plan might include the following roles. In some cases, the same person might fill multiple roles. In other cases, a role might represent a team of people. - - - -* Model owner: The owner creates an AI use case to track a solution to a business need. The owner requests the model or prompt template, manages the approval process, and tracks the solution through the AI Lifecycle. -* Model developer/Data scientist: The developer works with the data in a data set or a large language model (LLM) and creates the machine learning model or LLM prompt template. -* Model validator: the validator tests the solution to determine whether it meets the goals that are stated in the AI use case. -* Risk and compliance manager: The risk manager determines the policies and compliance thresholds for the AI use case. For example, the risk manager might determine the rules to apply for testing a solution for fairness or for screening output for hateful and abusive speech. -* MLOps engineer: The MLOps engineer moves a solution from a pre-production (test) environment, to a production environment when a solution is deemed ready to be fully deployed. -* App developer: Following deployment, and app developer runs evaluations against the deployment to monitor how the solution performs against the metric threshold set by the risk and compliance owner. If performance drops below specified thresholds, the app developer works with the other stakeholders to address problems and update the model or prompt template. - - - -" -A85E898F28AC27DAA8961337A9B468004C1B8B21_4,A85E898F28AC27DAA8961337A9B468004C1B8B21," Set up a governance structure - -After identifying roles and assembling a team, plan your governance structure. - - - -1. Create an inventory for storing AI use cases. An inventory is where you store and view AI use cases and the factsheets that are associated with the assets being governed. Depending on your governance requirements, store all use cases in a single inventory, or create multiple inventories for your governance efforts. -2. Create projects for collaboration. If you are using IBM tools, create a Watson Studio project. The project can hold the data that is required to train or test the AI solution and the model or prompt template being governed. Use the access control to restrict access to the approved collaborators. -3. Create a pre-production deployment space. Use the space to test your model or prompt template by using test data. Like a project, a space provides access control features so you can include the required collaborators. -4. Configure test and validation evaluations. Provide the model or prompt template details and configure a set of evaluations to test the performance of your solution. For example, you might test a machine learning model for dimensions such as fairness, quality, and drift, and test a prompt template against metrics such as perplexity (how accurate the output is), or toxicity (whether the output contains hateful or abusive speech.) By testing on known (labeled) data, you can evaluate the performance before moving a solution to production. -5. Configure a production space. When the model or prompt template is ready to be deployed to a production environment, move the solution and all dependencies to a production space. A production space typically has a tighter access control list. -6. Configure evaluations for the deployed model. Provide the model details and configure evaluations for the solution. You now test against live data rather than test data. It is important to monitor your solution so that you are alerted if thresholds are crossed, indicating a potential problem with the deployed solution. - - - -" -A85E898F28AC27DAA8961337A9B468004C1B8B21_5,A85E898F28AC27DAA8961337A9B468004C1B8B21," Manage collaboration for governance - -Watsonx.governance is built on a collaborative platform to allow for all approved team members to contribute to the goals of solving business problems. - -To plan for collaboration, consider how to manage access to the inventories, projects, and spaces you use for governance. - -Use roles along with access control features to ensure that your team has appropriate access to meet goals. - -" -A85E898F28AC27DAA8961337A9B468004C1B8B21_6,A85E898F28AC27DAA8961337A9B468004C1B8B21," Develop a communication plan - -Some of the workflow around defining an AI use case and moving assets through the lifecycle rely on effective communication. Decide how your team will communicate and establish the details. For example,: - - - -* Will you use email for decision-making or a messaging tool such as Slack? -* Is there a formal process for adding comments to an asset as it moves through a workflow? - - - -Create your communication plan and share it with your team. - -" -A85E898F28AC27DAA8961337A9B468004C1B8B21_7,A85E898F28AC27DAA8961337A9B468004C1B8B21," Implement a simple governance solution - -As you roll out your governance strategy, start with a simple implementation, then consider how to build incrementally to a more comprehensive solution. The simplest implementation requires an AI use case in an inventory, with an asset moving from request to production. - -For the most straightforward implementation of AI governance, you can use a IBM Knowledge Catalog to track and inventory models. An AI use case in an inventory consists of a set of factsheets containing lineage, history, and other relevant information about a model's lifecycle. A watsonx administrator must create an inventory and add data scientists, data engineers, and other users as collaborators. - -![Inventories store factsheets with metadata about governed assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/xgov-simple.svg) - -AI use case owners can request and track assets: - - - -* Business users create AI use cases in the inventory to request machine-learning models or LLM prompt templates. -* Data scientists associate the trained asset with an AI use case to create AI factsheets. - - - -AI factsheets accumulate information about the model or prompt templates in the following ways: - - - -* All actions that are associated with the tracked asset are automatically saved, including deployments and evaluations. -* All changes to input data assets are automatically saved. -* Data scientists can add tags, business terms, supporting documentation, and other information. -* Data scientists can associate challenger models with the AI use cases to compare model performance. - - - -Validators and other stakeholders review AI factsheets to ensure compliance and certify asset progress from development to production. They can also generate reports from the factsheets to print, share, or archive details. - -" -A85E898F28AC27DAA8961337A9B468004C1B8B21_8,A85E898F28AC27DAA8961337A9B468004C1B8B21," Plan for more complex solutions - -You can extend your AI governance implementation at any time. Consider these options to extend governance: - - - -* MLOps engineers can extend model tracking to include external models that are created with third-party machine learning models. -* MLOps engineers can add custom properties to factsheets to track more information. -* Compliance analysts can customize the default report templates to generate tailored reports for the organization. -* Record the results of IBM Watson OpenScale evaluations for fairness and other metrics as part of model tracking. - - - -" -A85E898F28AC27DAA8961337A9B468004C1B8B21_9,A85E898F28AC27DAA8961337A9B468004C1B8B21," Governing assets that are created locally or externally - -Watsonx.governance provides the tools for you to govern assets you created using IBM tools, such as machine learning models created by using AutoAI or foundation model prompt templates created in a watsonx project. You can also govern machine learning models that are created by using non-IBM tools, such as Microsoft Azure or Amazon Web Services. As you develop your governance plan, consider these differences: - - - -* IBM assets developed with tools such as Watson Studio are available for governance earlier in the lifecycle. You can track the factsheet for a local asset from the Development phase, and have visibility into details such as the training data and creation details from an earlier stage. -* An inventory owner or administrator must enable governance for external models. -* When governance is enabled for external models, they can be added to an AI use case explicitly, or automatically, when they are evaluated with Watson OpenScale. - - - -For a list of supported machine learning model providers, see [Supported machine learning providers in Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-frameworks-ovr.html). - -" -A85E898F28AC27DAA8961337A9B468004C1B8B21_10,A85E898F28AC27DAA8961337A9B468004C1B8B21," Next steps - -To begin governance, follow the steps in [Provisioning and launching IBM watsonx.governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-provision-launch.html) to provision Watson OpenScale with AI Factsheets. - -Parent topic:[Watsonx.governance overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-overview.html) -" -71479786E864B942786028481E30DFB35E422BA8_0,71479786E864B942786028481E30DFB35E422BA8," Tracking a machine learning model - -Track machine learning models in an AI use case to meet governance and compliance goals. - -" -71479786E864B942786028481E30DFB35E422BA8_1,71479786E864B942786028481E30DFB35E422BA8," Tracking machine learning models in an AI use case - -Track machine learning models that are trained in a project and saved as a model asset. You can add a machine learning model to an AI use case from a project or space. - - - -1. Open the project or space that contains the model asset that you want to govern. -2. From the action menu for the asset, click Track in AI use case. -3. Select an existing AI use case or follow the prompts to create a new one. -4. Choose an existing approach or create a new approach. An approach creates a version set for all assets in the same approach. -5. Choose a version numbering scheme. All of the assets in an approach share a common version. Choose from: - - - -* Experimental if you plan to update frequently. -* Stable if the assets are not changing rapidly. -* Custom if you want to start a new version number. Version numbering must follow a schema of major.minor.patch. - - - - - -![Tracking a model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/xgov-track-model1.png) - -Watch this video to see how to track a machine learning model in an AI use case. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -Once tracking is enabled, all collaborators for the use case can review details for the asset. - -![Viewing a tracked model in an AI use case](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/xgov-track-model2.png) - -For a machine learning model, facts include creation details, training data used, and information from evaluation metrics. - -![Viewing a factsheet for a tracked model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/xgov-track-model2.png) - -For details on tracking a machine learning model that is created in a Jupyter Notebook or trained with a third-party machine learning provider, see [Tracking external models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-external-models.html). - -" -71479786E864B942786028481E30DFB35E422BA8_2,71479786E864B942786028481E30DFB35E422BA8," Learn more - -Parent topic:[Tracking assets in use cases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-tracking-overview.html) -" -0E6365D1DD3EC522C4DA68B662F05A0120617593_0,0E6365D1DD3EC522C4DA68B662F05A0120617593," Tracking prompt templates - -Track a prompt template in an AI use case to capture and share facts about the asset to help you meet governance and compliance goals. - -" -0E6365D1DD3EC522C4DA68B662F05A0120617593_1,0E6365D1DD3EC522C4DA68B662F05A0120617593," Tracking prompt templates - -A prompt template is the saved prompt input for a foundation model. A prompt template can include variables so that it can be run with different options. For example, if you have a prompt that summarizes meeting notes for project-X, you can define a variable so that the same prompt can run for project-Y. - -You can add a saved prompt template to an AI use case to track the details for the prompt template. In addition to recording details about the prompt template creation information and source model details, the factsheet tracks information from prompt template evaluations to capture performance metrics. You can evaluate prompt templates before or after you start tracking a prompt template. - -Important: Before you start tracking a prompt template in an AI use case, make sure the prompt template is stable. After you enable tracking, the prompt template is locked, and you can no longer update it. This is to preserve the integrity of the prompt template so that all of the facts collected in the factsheet apply to a single version of the prompt template. If you are still experimenting with a prompt template, do not start tracking it in an AI use case. - -" -0E6365D1DD3EC522C4DA68B662F05A0120617593_2,0E6365D1DD3EC522C4DA68B662F05A0120617593," Before you begin - -Before you can track a prompt template, these conditions must be met. - - - -* Be an administrator or editor for the project that contains the prompt template. -* The prompt template must include at least one variable. For more information, see [Building reusable prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html). - - - -Watch this video to see how to track a prompt template in an AI use case. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -0E6365D1DD3EC522C4DA68B662F05A0120617593_3,0E6365D1DD3EC522C4DA68B662F05A0120617593," Tracking a prompt template or machine learning model in an AI use case - -You can add a prompt template to an AI use case from a project or space. - - - -1. Open the project or space that contains the prompt template that you want to govern. -2. From the action menu for the asset, click View AI use case. ![Tracking a prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/xgov-track-prompt1.png) - -3. If this prompt template is not already part of an AI use case, you are prompted to Track in AI use case. When you start tracking a prompt template, it is locked and you can no longer edit it. To make changes, you must create a new prompt template. ![Starting to track a prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/xgov-prompt-temp-track.png) - -4. Select an existing AI use case or follow the prompts to create a new one. -5. Choose an existing approach or create a new approach. An approach represents one facet of a complete solution. Each approach creates a version set for all assets in the same approach. -6. Choose a version numbering scheme. All the assets in an approach share a common version. Choose from: - - - -* Experimental if you plan to update frequently. -* Stable if the assets are not changing rapidly. -* Custom if you want to start a new version number. Version numbering must follow a schema of major.minor.patch. - - - - - -When tracking is enabled, all collaborators for the use case can review details for the prompt template. - -![Viewing a tracked prompt template in an AI use case](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/xgov-track-prompt2.png) - -Details are captured for each lifecycle stage for a prompt template. - - - -* Develop provides information about how the prompt is defined, including the prompt itself, creation date, foundation model that is used, prompt parameters set, and variables defined. -* Evaluate displays the dimension metrics from evaluating your prompt template. -" -0E6365D1DD3EC522C4DA68B662F05A0120617593_4,0E6365D1DD3EC522C4DA68B662F05A0120617593,"* Operate provides details that are related to how the prompt template is deployed for productive use. - - - -![Viewing the lifecycle for a tracked prompt template in an AI use case](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/xgov-track-prompt3.png) - -" -0E6365D1DD3EC522C4DA68B662F05A0120617593_5,0E6365D1DD3EC522C4DA68B662F05A0120617593," Viewing the factsheet for a tracked prompt template - -Click the name of the prompt template in an AI use case to view the associated factsheet. - -![Viewing the factsheet for a tracked prompt template in an AI use case](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/xgov-prompt-factsheet1.png) - -The factsheet for a prompt template collects this type of data: - - - -* Governance collects basic information such as the name of the AI use case, the description, and the approach name and version data. -* Foundation model displays the name of the foundation model, the license ID, and the model publisher. -* Prompt template shows the prompt name, ID, prompt input, and variables. -* Prompt parameters collect the configuration options for the prompt template, including the decoding method and stopping criteria. -* Evaluation displays the data from evaluation, including alerts, and metric data from the evaluation. For example, this prompt template shows the metrics data for quality evaluations on the prompt template. One threshold alert was triggered by the evaluation: ![Viewing evaluation metrics for a prompt template in an AI use case](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/xgov-prompt-factsheet2.png) -* Validate shows the data for how the prompt template was evaluated, including the data set used for the validation, alerts triggered, and evaluation metric data. -* Attachments shows information about attachments that support the use case. - - - -Note: As the prompt template moves from one stage of the lifecycle to the next, facts are added to the factsheet for the prompt template. The factsheet always represents the latest state of the prompt template. For example, if you validate a prompt template in a pre-production deployment space, and then again in a production deployment space, the details from the production phase are recorded in the factsheet, overwriting previous evaluation results. - -" -0E6365D1DD3EC522C4DA68B662F05A0120617593_6,0E6365D1DD3EC522C4DA68B662F05A0120617593," Moving a prompt template through lifecycle stages - -When a prompt template is tracked, you can see details from creating the prompt template, and evaluating performance against appropriate metrics. The next stage in the lifecycle is to _validate the prompt template. This involves testing the prompt template with new data. If you are the prompt engineer who is tasked with validating the asset, follow these steps to validate the prompt template and capture the validation data in the associated factsheet. - - - -1. From the project containing the prompt template, export the project to a compressed ZIP file. -2. Create a new project and populate it with the exported ZIP file. -3. Upload validation data, evaluate the prompt template, and save the results to the validation project. -4. From the project, promote the prompt template to a new or existing deployment space that is designated as a Production stage. The stage is assigned when the space is created and cannot be updated, so create a new space if you do not have a production space available. ![Create or select a deployment space with Production stage](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/xgov-space-prod.png) -5. After you promote the prompt template to a deployment space, you can configure continuous monitoring. -6. Details from monitoring the prompt template in a production space are displayed in the Operate lifecycle stage of the AI use case. - - - -" -0E6365D1DD3EC522C4DA68B662F05A0120617593_7,0E6365D1DD3EC522C4DA68B662F05A0120617593," Learn more - - - -* See [Deploying a prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/prompt-template-deploy.html) for details on preparing a prompt template for production. -* See [Evaluating prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt.html) for details on evaluating a prompt template for dimensions such as accuracy or to test for the presence of hateful or abusive speech. - - - -Parent topic:[Tracking assets in an AI use case](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-tracking-overview.html) -" -F30CF59ADFFCBE4164483B5A63260724A1DFC7CA_0,F30CF59ADFFCBE4164483B5A63260724A1DFC7CA," Tracking assets in an AI use case - -Track machine learning models or prompt templates in AI use cases to capture details about them in factsheets. Use the information collected in the AI use case to monitor the progress of assets through the AI lifecycle, from request to production. - -Define an AI use case to identify a business problem and request a solution. A solution might be a predictive machine learning model or a generative AI prompt template. When an asset is developed, associate it with the use case to capture details about the asset in factsheets. As the asset moves through the AI lifecycle, from development to testing and then to production, the factsheets collect the data to support governance or compliance goals. - -" -F30CF59ADFFCBE4164483B5A63260724A1DFC7CA_1,F30CF59ADFFCBE4164483B5A63260724A1DFC7CA," Creating approaches to compare ways to solve a problem - -Each AI use case can contain at least one approach. An approach is one facet of the solution to the business problem represented by the AI use case. For example, you might create two approaches to compare by using different frameworks for predictive models to see which one performs best. Or, created approaches to track several prompt templates in a use case. - -Approaches also capture version information. The same version number is applied to all assets in an approach. If you have a stable version of an asset, you might maintain that version in an approach and create a new approach for the next round of iteration and experimentation. - -This use case includes three approaches for organizing three prompt templates for an insurance claims processing use case: - -![Multiple approaches for an insurance claim use case](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/xgov-prompt-approach1.png) - -" -F30CF59ADFFCBE4164483B5A63260724A1DFC7CA_2,F30CF59ADFFCBE4164483B5A63260724A1DFC7CA," Adding assets to a use case - -You can track these assets in an AI use case: - - - -* [Prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-track-prompt-temp.html) include the prompt input for a foundation model and variables that are defined to make the prompt reusable for generating new output. -* [Machine learning models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-track-ml-model.html) that are created by using a Watson Machine Learning tool such as AutoAI or SPSS Modeler. -* [External models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-external-models.html) are models that are created in Jupyter Notebooks or models that are created by using a third-party machine learning provider. - - - -Parent topic:[Governing assets in AI use cases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-use-cases.html) -" -8BE1A39CDBAAA858051954548474DD3E307B20CB_0,8BE1A39CDBAAA858051954548474DD3E307B20CB," Setting up an AI use case - -Create an AI use case to define a business problem and track the related AI assets through their lifecycle. View details about governed assets or generate reports to help meet governance and compliance goals. - -" -8BE1A39CDBAAA858051954548474DD3E307B20CB_1,8BE1A39CDBAAA858051954548474DD3E307B20CB," Creating AI use cases in an inventory - -An inventory presents a view of all the AI use cases that you can access that are assigned to that inventory. Use multiple inventories to manage groups of AI use cases. For example, you might create an inventory for governing prompt templates and another for governing machine learning assets. Add collaborators to inventories so they can view or contribute to AI uses cases. - -" -8BE1A39CDBAAA858051954548474DD3E307B20CB_2,8BE1A39CDBAAA858051954548474DD3E307B20CB," Before you begin - - - -* Enable watsonx.governance and provision Watson OpenScale. -* You must have access to an existing inventory or have sufficient access to [create a new inventory](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-inventory-manage.html). - - - -For details on watsonx.governance roles and managing access for governance, see [Collaboration roles for governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-collab-roles.html). If you do not have sufficient access to create or contribute to an inventory, contact your administrator. - -" -8BE1A39CDBAAA858051954548474DD3E307B20CB_3,8BE1A39CDBAAA858051954548474DD3E307B20CB," Viewing AI use cases - - - -1. Click AI use cases from the navigation menu to view all existing AI use cases you can access, or click Request a model with an AI use case from the home page. From the primary view, you can search for a specific use case or filter the view to focus on certain use cases. For example, filter the view by Inventory to view all the AI use cases in a particular inventory. - -![Viewing AI use cases in an inventory](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/xgov-inv-use-cases.png) -2. Click the name of an AI use case to open it and view the details on these tabs: - - - -* Overview shows the essential details for the use case. -* Lifecycle shows the assets that are tracked in the use case, which is organized by the phases of the AI lifecycle. -* Access lists collaborators for the use case and assigned roles. - - - -3. Click the name of an asset to view the associated factsheet. - - - -" -8BE1A39CDBAAA858051954548474DD3E307B20CB_4,8BE1A39CDBAAA858051954548474DD3E307B20CB," Generating a report from a use case - -You can generate reports from use cases or factsheets to share or preserve records. By default, the reports generate these default reports: - - - -* Basic report contains the set of facts visible on the Overview and Lifecycle tabs. -* Full report contains all facts about the use case and the models, prompt templates, and deployments it contains. - - - -The inventory admin can customize reports to include custom branding or to change the fields included in reports. For details, see [Customizing report templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-manage-reports.html). To create a report: - - - -1. Open a use case in an inventory. -2. Click the Export report icon to generate a PDF record of the use case. -3. Choose a format option and export the report. - - - -" -8BE1A39CDBAAA858051954548474DD3E307B20CB_5,8BE1A39CDBAAA858051954548474DD3E307B20CB," Creating an AI use case - - - -1. Click AI use cases from the navigation menu. -2. Click New AI use case. -3. Enter a name and choose an inventory for the use case. If you do not have access to an inventory, you must create one before you can define a use case. See [Managing an inventory for AI use cases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-inventory-manage.html) for details. -4. Complete optional fields as needed: - - - - - - Option Notes - - Description Define the business problem and provide any details about the proposed solution. - Risk level Assign a risk level that reflects the nature of the business problem and the anticipated solution according to your governance policies. For example, assign a risk level of High for a model that processes sensitive personal data. - Supporting data Enter links to supporting documents that support or clarify the purpose of the use case - Owner For a use case with multiple owners, you can edit ownership - Status By default, a new AI use case is assigned a status of default, as it is typically waiting for assets to be added for tracking. You can manually change the status. For example, change to Awaiting development if you do not require any additional review or approval for a requested model. Change to Developed if you already have a model to add to governance. Review the complete list of status options in the following section. - Tags Assign or create tags to make your AI use cases easier to find or group. - - - -" -8BE1A39CDBAAA858051954548474DD3E307B20CB_6,8BE1A39CDBAAA858051954548474DD3E307B20CB," Use case status details - -Update the status field to provide users of the use case an immediate reflection of the current state. - - - - Status Description - - Ready for use case approval Use case is defined and ready for review - Use case approved Use case ready for model or prompt template development - Use case rejected Use case not ready for model or prompt development - Awaiting development Awaiting delivery of AI asset (model or prompt) - Development in progress AI asset (model or prompt) in development - Developed Trained model or prompt template added to use case - Ready for AI asset validation AI asset ready for testing or evaluation - Validation complete AI asset is tested or evaluated - Ready for AI asset approval Waiting for approval to move AI asset to production - Promote to production space AI asset is promoted to a production environment - Deployed for operation AI asset deployed for production - In operation AI asset is live in a production environment - Under revision AI asset requires updating - Decommissioned AI asset removed from production environment - - - -" -8BE1A39CDBAAA858051954548474DD3E307B20CB_7,8BE1A39CDBAAA858051954548474DD3E307B20CB," Adding collaborators to an AI use case - -Add collaborators so they can view or contribute to the AI use case. - - - -1. From the Access tab of the AI use case, click Add members. -2. Search for a member by name or email address. -3. Assign an access level and click Add. For details on permissions, see [Collaboration roles for governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-collab-roles.html). - - - -" -8BE1A39CDBAAA858051954548474DD3E307B20CB_8,8BE1A39CDBAAA858051954548474DD3E307B20CB," Next steps - -After you create an AI use case, use it to track assets. Depending on your governance strategy, your next step might be to: - - - -* Send a link to the use case to a reviewer for approval. -* Send a link to a data scientist to create the requested asset. -* [Add an asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-tracking-overview.html) for tracking in the use case. - - - -Parent topic:[Governing assets in AI use cases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-use-cases.html) -" -1F14865C04B28B02EE0760D7099554A916E26926_0,1F14865C04B28B02EE0760D7099554A916E26926," Creating the catalog for platform connections - -You can create a Platform assets catalog to share connections across your organization. Any user who you add as a collaborator to the catalog can see these connections. - -You can add an unlimited number of collaborators and connection assets to the Platform assets catalog. - -If you are signed up for both Cloud Pak for Data as a Service and watsonx, you share a single Platform assets catalog between the two platforms. Any connection assets that you add to the catalog on either platform are available in both platforms. However, if you add other types of assets to the Platform assets catalog on Cloud Pak for Data as a Service, you can't access those types of assets on watsonx. - -" -1F14865C04B28B02EE0760D7099554A916E26926_1,1F14865C04B28B02EE0760D7099554A916E26926," Requirements - -Before you create the Platform assets catalog, understand the required permissions and the requirements for storage and duplicate handling. - -Required permission : You must have the IAM Administrator role in the IBM Cloud account. : To view your roles, go to Administration > Access (IAM). Then select Roles in the IBM Cloud console. - -Storage requirement : You must specify the IBM Cloud Object Storage instance configured during [IBM Cloud account setup](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html). If you are not an administrator for the IBM Cloud Object Storage instance, it must be [configured to allow catalog creation](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html). - -Duplicate asset handling : Assets are considered duplicates if they have the same asset type and the same name. : Select how to handle duplicate assets: : - Update original assets : - Overwrite original assets : - Allow duplicates (default) : - Preserve original assets and reject duplicates : You can change the duplicate handling preferences at any time on the catalog Settings page. - -" -1F14865C04B28B02EE0760D7099554A916E26926_2,1F14865C04B28B02EE0760D7099554A916E26926," Creating the Platform assets catalog - -To create the Platform assets catalog: - - - -1. From the main menu, choose Data > Platform connections. -2. Click Create catalog. -3. Select the IBM Cloud Object Storage service. If you don't have an existing service instance, [create a IBM Cloud Object Storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html) and then refresh the page. -4. Click Create. The Platform assets catalog is created in a dedicated storage bucket. Initially, you are the only collaborator in the catalog. -5. Add collaborators to the catalog. Go to the Access control page in the catalog and add collaborators. You assign each user a [role](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/platform-assets.html?context=cdpaas&locale=enroles): - - - -* Assign the Admin role at least one other user so that you are not the only person who can add collaborators. -* Assign the Editor role to all users who are responsible for adding connections to the catalog. -* Assign the Viewer role to the users who need to find connections and use them in projects. - - - -You can give all the users access to the Platform assets catalog by assigning the Viewer role to the Public Access group. By default, all users in your account are members of the Public Access group. See [add collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/catalog-collaborators.html). - -6. Add connections to the catalog. You can delegate this step to other collaborators who have the Admin or Editor role. See [Add connections to the Platform assets catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - - - -" -1F14865C04B28B02EE0760D7099554A916E26926_3,1F14865C04B28B02EE0760D7099554A916E26926," Platform assets catalog collaborator roles - -The Platform assets catalog roles provide the permissions in the following table. - - - - Action Viewer Editor Admin - - View connections ✓ ✓ ✓ - Use connections in projects ✓ ✓ ✓ - Use connections in spaces ✓ ✓ ✓ - View collaborators ✓ ✓ ✓ - Add connections ✓ ✓ - Modify connections ✓ ✓ - Delete connections ✓ ✓ - Add or remove collaborators ✓ - Change collaborator roles ✓ - Delete the catalog ✓ - - - -Parent topic:[Setting up the platform for administrators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html) -" -58B70FAC914F72C4AE9116DE6E26880E1CEDCFF4_0,58B70FAC914F72C4AE9116DE6E26880E1CEDCFF4," Stop using services or IBM watsonx - -You can stop using any services or IBM watsonx at any time, whether you are accessing the services from your own or someone else's IBM Cloud account. - -The method you choose to stop using IBM watsonx depends on your goal: - - - -* To remove your access to IBM watsonx in all IBM Cloud accounts that you belong to, [leave IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/stopapps.html?context=cdpaas&locale=endeactivate). -* To stop the use of a service in your IBM Cloud account, [delete your service](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/stopapps.html?context=cdpaas&locale=endeleteapps) in your IBM Cloud account. -* To stop all use of all IBM Cloud services in your account, [delete your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/stopapps.html?context=cdpaas&locale=endeletecloud). - - - -When other users in your account stop using IBM watsonx, they are cleaned up appropriately. - -" -58B70FAC914F72C4AE9116DE6E26880E1CEDCFF4_1,58B70FAC914F72C4AE9116DE6E26880E1CEDCFF4," Leave IBM watsonx - -If you want to leave IBM watsonx: - - - -1. Log in to IBM watsonx. -2. Click your avatar and then Profile. -3. On the Profile page, click Leave watsonx. If you change your mind about leaving, you can [sign up to re-activate your profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html). - - - -Use this process when you want to stop using IBM watsonx, you are not the account owner, and you want to keep your IBM Cloud account. - -These are the results when you leave IBM watsonx: - - - -* Your profile is deleted and you can't log in to IBM watsonx. -* Your projects and deployment spaces remain until you delete your services. -* Your IBM Cloud account remains active. -* Your IBM Cloud services are not affected. - - - -" -58B70FAC914F72C4AE9116DE6E26880E1CEDCFF4_2,58B70FAC914F72C4AE9116DE6E26880E1CEDCFF4," Delete a service - -To remove any of your services: - - - -1. Log in to IBM watsonx. -2. Click Administration > Services > Service instances. -3. Click the menu next to the service you want to remove and choose Delete. - - - -This action is the same as deleting the service in IBM Cloud. If you change your mind within 30 days, you can get your services and data back by reprovisioning the service. - -These are the results when you delete the Watson Studio service: - - - -* Your IBM watsonx profile remains. -* You can no longer access that service from that IBM Cloud account. -* You can still access your services from other accounts. -* Your billing for that service stops. -* Your data in IBM Cloud Object Storage remains. -* Your projects remain. -* You remain a collaborator in all your projects in other IBM Cloud accounts. - - - -" -58B70FAC914F72C4AE9116DE6E26880E1CEDCFF4_3,58B70FAC914F72C4AE9116DE6E26880E1CEDCFF4," Closing a IBM Cloud account - -If you want to stop using IBM Cloud services altogether and delete all your data, you can deactivate your IBM Cloud account. Follow these steps to close your Lite account: - - - -1. Sign in to your IBM Cloud account. -2. In the IBM Cloud console, go to the Manage > Account > Account settings page. -3. Click Close Account. After an account is closed for 30 days, all data is deleted and all services are removed. - - - -If you are not the owner of the account, you do not see a Close Account button. - -These are the results when your IBM Cloud account is in the Canceled state: - - - -* All your data in IBM Cloud is permanently deleted in 30 days. -* The projects and catalogs in your account are deleted. -* Your IBM watsonx profile and your IBM Cloud profile are deleted. -* All the IBM Cloud services in you account are deleted in 30 days. -* You are removed as a collaborator from projects and catalogs in other accounts within 30 days. - - - -If you want to close a Pay-As-You-Go or Subscription account, contact [Support](https://cloud.ibm.com/unifiedsupport/supportcenter). - -" -58B70FAC914F72C4AE9116DE6E26880E1CEDCFF4_4,58B70FAC914F72C4AE9116DE6E26880E1CEDCFF4," Learn more - - - -* [Removing users from the account or from the workspace](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-removeusers.html) -* [IBM Cloud docs: Leaving an account](https://cloud.ibm.com/docs/account?topic=account-account-membership) -* [IBM Cloud docs: Managing your account settings](https://cloud.ibm.com/docs/account?topic=account-account_settings) - - - -Parent topic:[Managing IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html) -" -F964EFDA57733A3B39890B30FF22BD5C47EED893_0,F964EFDA57733A3B39890B30FF22BD5C47EED893," Managing IBM watsonx - -As the owner or an administrator of the IBM Cloud account, you can monitor and manage services and the platform. - - - -* [Configuring services](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html?context=cdpaas&locale=encore) - - - -An IBM Cloud account administrator is a user in the account who was assigned the Administrator role in IBM Cloud for the All Identity and Access enabled services option in IAM. If you're not sure of your roles, see [Determine your roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/your-roles.html). - -You perform some administrative tasks within IBM watsonx, and others in IBM Cloud. Some tasks require steps in both areas, depending on your goals. - -" -F964EFDA57733A3B39890B30FF22BD5C47EED893_1,F964EFDA57733A3B39890B30FF22BD5C47EED893," Configuring services - -The services that are included in watsonx.ai are Watson Studio and Watson Machine Learning. - - - - Task In IBM watsonx? In IBM Cloud? - - [Manage services in IBM Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/create-services.htmlmanage) ✓ ✓ - [Switch service region](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html?context=cdpaas&locale=enregion) ✓ - [Upgrade your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/upgrade.htmlaccount) ✓ ✓ - [Upgrade your services](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/upgrade.htmlapp) ✓ - [Configure private service endpoints](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/endpoints-vrf.html) ✓ - [Remove users](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-removeusers.html) ✓ ✓ - [Stop using IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/stopapps.html) ✓ ✓ - [Monitor account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) ✓ ✓ - [View and manage environment runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.htmlmonitor-cuh) ✓ - [Set up IBM Cloud Object Storage for use with IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html) ✓ ✓ - [Manage users and access](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-access.html) ✓ ✓ -" -F964EFDA57733A3B39890B30FF22BD5C47EED893_2,F964EFDA57733A3B39890B30FF22BD5C47EED893," [Set resources scope](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html) ✓ - [Set type of credentials for connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html) ✓ - [Manage IBM Cloud account in IBM Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html) ✓ - [Manage all projects in the account](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-manage-projects.html) ✓ ✓ - [Secure IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html) ✓ ✓ - [Set up IBM Cloud App ID (beta)](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid.html) ✓ - [Delegate encryption keys for IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.htmlbyok) ✓ ✓ - - - -" -F964EFDA57733A3B39890B30FF22BD5C47EED893_3,F964EFDA57733A3B39890B30FF22BD5C47EED893," Switch service region - -The platform and services are available in multiple IBM Cloud service regions and you can have services in more than one region. Your projects, catalogs, and data are specific to the region in which they were saved and can be accessed only from your services in that region. If you provision Watson Studio services in both the Dallas and the Frankfurt regions, you can't access projects that you created in the Frankfurt region from the Dallas region. - -To switch your service region: - - - -1. Log in to IBM watsonx. -2. Click the Region Switcher![Region Switcher icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/region_switcher.png) in the home page header. -3. Select the region that contains your services and projects. - - - -For wider browsers, you can select the region from the dropdown menu. - -" -F964EFDA57733A3B39890B30FF22BD5C47EED893_4,F964EFDA57733A3B39890B30FF22BD5C47EED893," Learn more - - - -* [Watson Studio offering plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html) -* [Watson Machine Learning plans and compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html) -* [Roles in the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html) - - - -Parent topic:[Administration](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html) -" -39AD64C9004E83507A968C5C0B1C8EF952B3EACE_0,39AD64C9004E83507A968C5C0B1C8EF952B3EACE," Setting up IBM Cloud Object Storage for use with IBM watsonx - -An IBM Cloud Object Storage service instance is provisioned automatically with a Lite plan when you join IBM watsonx. Workspaces, such as projects, require IBM Cloud Object Storage to store files that are related to assets, including uploaded data files or notebook files. - -You can also connect to IBM Cloud Object Storage as a data source. See [IBM Cloud Object Storage connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html). - -" -39AD64C9004E83507A968C5C0B1C8EF952B3EACE_1,39AD64C9004E83507A968C5C0B1C8EF952B3EACE," Overview of setting up Cloud Object Storage - -To set up Cloud Object Storage, complete these tasks: - - - -1. [Generate an administrative key](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=engen-key). -2. [Ensure that Global location is set in each user's profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=englobal). -3. [Provide access to Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=enaccess). - - - -* [Assign roles to enable access](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=enassign). -* [Enable storage delegation](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=enstor-del). - - - -4. [Optional: Protect sensitive data](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=enprotect). -5. [Optional: Encrypt your IBM Cloud Object Storage instance with your own key](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=enbyok). - - - -Watch the following video to see how administrators set up Cloud Object Storage for use with Cloud Pak for Data as a Service. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -39AD64C9004E83507A968C5C0B1C8EF952B3EACE_2,39AD64C9004E83507A968C5C0B1C8EF952B3EACE," Generate an administrative key - -You generate an administrative key for Cloud Object Storage by creating an initial test project. The test project can be deleted after its creation. Its sole purpose is to generate the key. - -To automatically generate the administrative key for your Cloud Object Storage instance: - - - -1. From the IBM watsonx main menu, select Projects > View all projects and then click New project. -2. Specify to create an empty project. -3. Enter a project name, such as ""Test Project"". -4. Select your Cloud Object Storage instance. -5. Click Create. The administrative key is generated. -6. Delete the test project. - - - -" -39AD64C9004E83507A968C5C0B1C8EF952B3EACE_3,39AD64C9004E83507A968C5C0B1C8EF952B3EACE," Ensure that Global location is set for Cloud Object Storage in each user's profile - -Cloud Object Storage requires the Global location to be configured in each user's profile. The Global location is configured automatically, but it might be changed by mistake. An error occurs when a project is created if the Global location is not enabled in the user's profile. Ask users to check that Global location is enabled. - -[Check for the Global location in each user's profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html). - -" -39AD64C9004E83507A968C5C0B1C8EF952B3EACE_4,39AD64C9004E83507A968C5C0B1C8EF952B3EACE," Provide access to Cloud Object Storage - -You can provide different levels of access to Cloud Object Storage for people who need to work in IBM watsonx. Using the storage delegation setting on the Cloud Object Storage instance, you can provide quick access to most users to create projects and catalogs. However, another option is to provide targeted access by using IAM roles and access groups. Role-based access enacts stricter controls for viewing the Cloud Object Storage instance directly and for creating projects and catalogs. If you decide to provide controlled access with IAM roles and access groups, you must disable storage delegation for the Cloud Object Storage instance. - -You enable storage delegation for the Cloud Object Storage instance to provide access to nonadministrative users. Users with minimal IAM permissions can create projects and catalogs, which automatically create buckets in the Cloud Object Storage instance. See [Enable storage delegation for nonadministrative users](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=enstor-del). - -You provide more controlled access with IAM roles and access groups. For example, the Cloud Object Storage Manager role provides permissions to create projects and spaces together with the corresponding buckets in the Cloud Object Storage instance. It also provides permissions to view all buckets and encryption root keys in the Cloud Object Storage instance, to view the metadata for a bucket and delete buckets, and to perform other administrative tasks that are related to buckets. See [Assign roles to enable access](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html?context=cdpaas&locale=enassign). - -No role assignments are needed for collaborators who work with the data in a project or catalog. Users who are given collaborator roles can work in the project or catalog without storage delegation or an IAM role. See [Project collaborator roles and permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborator-permissions.html). - -" -39AD64C9004E83507A968C5C0B1C8EF952B3EACE_5,39AD64C9004E83507A968C5C0B1C8EF952B3EACE," Assign roles to enable access - -The IBM Cloud account owner or administrator assigns appropriate roles to users to provide access to Cloud Object Storage. Storage delegation must be disabled when using role-based access. - -Rather than assigning each individual user a set of roles, you can create an access group. Access groups expedite role assignments by grouping permissions. For instructions on creating access groups, see [IBM Cloud docs: Setting up access groups](https://cloud.ibm.com/docs/account?topic=account-groups&interface=ui). - -" -39AD64C9004E83507A968C5C0B1C8EF952B3EACE_6,39AD64C9004E83507A968C5C0B1C8EF952B3EACE," Enable storage delegation - -Storage delegation for the Cloud Object Storage instance allows nonadministrative users to create projects, the Platform assets catalog, and the corresponding Cloud Object Storage buckets. Storage delegation provides wide access to Cloud Object Storage and allows users with minimal permissions to create projects. Storage delegation for projects also includes deployment spaces. - -To enable storage delegation for the Cloud Object Storage instance: - - - -1. From the navigation menu, select Administration > Configurations and settings > Storage delegation. -2. Set storage delegation for Projects to on. -3. Optional. If you want a non-administrative user to create the Platform assets catalog, set storage delegation for Catalogs to on. - - - -![Storage delegation](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/cos-delegation.png) - -" -39AD64C9004E83507A968C5C0B1C8EF952B3EACE_7,39AD64C9004E83507A968C5C0B1C8EF952B3EACE," Optional: Encrypt your IBM Cloud Object Storage instance with your own key - -Encryption protects the data for your projects and catalogs. Data at rest in Cloud Object Storage is encrypted by default with randomly generated keys that are managed by IBM. For increased protection, you can create and manage your own encryption keys with IBM Key Protect. IBM Key Protect for IBM Cloud is a centralized key management system for generating, managing, and deleting encryption keys used by IBM Cloud services. - -For more information, see [IBM Cloud docs: IBM Key Protect for IBM Cloud](https://cloud.ibm.com/docs/services/key-protect?topic=key-protect-aboutabout). - -Not all [Watson Studio service plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html) support the use of your own encryption keys. Check your specific plan for details. - -To encrypt your Cloud Object Storage instance with your own key, you need an instance of the IBM Key Project service. Although Key Protect is a paid service, each account is allowed five keys without charge. - -In IBM Cloud, provision Key Protect and generate a key: - - - -1. Create an instance of Key Protect for your account from the IBM Cloud catalog. See [IBM Cloud docs: Provisioning the Key Protect service](https://cloud.ibm.com/docs/key-protect?topic=key-protect-provision&interface=ui). -2. Grant a service authorization between your Key Protect instance and your Cloud Object Storage instance. Do not associate a key with a bucket. If you don't grant the authorization, users cannot create projects and catalogs with the Cloud Object Storage instance. For more information, see [IBM Cloud docs: Using authorizations to grant access between services](https://cloud.ibm.com/docs/account?topic=account-serviceauth&interface=ui). You can also grant a service authorization for a root key from Watson Studio, by choosing Manage > Access (IAM). -" -39AD64C9004E83507A968C5C0B1C8EF952B3EACE_8,39AD64C9004E83507A968C5C0B1C8EF952B3EACE,"3. Create a root key to protect your Cloud Object Storage instance. See [IBM Cloud docs: Creating root keys](https://cloud.ibm.com/docs/key-protect?topic=key-protect-create-root-keys&interface=uicreate_root_keys). - - - -In IBM watsonx, add the key to the Cloud Object Storage instance: - - - -1. Select Administration > Configurations and settings > Storage delegation. -2. Slide the toggle for Projects, Catalogs, or both to select data for encryption with your key. -3. Click Add... under Encryption keys to add an encryption key. -4. Select the Key Protect instance and the Key Protect key. -5. Click OK to add the encryption key. - - - -Important: If you change or remove the key, you lose access to existing encrypted data in the Cloud Object Storage instance. - -" -39AD64C9004E83507A968C5C0B1C8EF952B3EACE_9,39AD64C9004E83507A968C5C0B1C8EF952B3EACE," Optional: Protect sensitive data stored on Cloud Object Storage - -When you join IBM watsonx, a single Cloud Object Storage instance is automatically provisioned for you. The Cloud Object Storage instance contains separate buckets for each project to store data assets and related files. The ability to create projects and thus to add buckets to Cloud Object Storage is available only to users with the Platform Administrator role and the Manager role for the Cloud Object Storage Service. Although only users with these roles can create projects and their accompanying buckets, any user with the Editor or Viewer role can see the data files. For some businesses, the data files contain sensitive information and require stricter access controls. - -" -39AD64C9004E83507A968C5C0B1C8EF952B3EACE_10,39AD64C9004E83507A968C5C0B1C8EF952B3EACE," Control access to Cloud Object Storage with multiple instances - -For paid plans, you can control access to sensitive data files by creating one or more Cloud Object Storage instances and assigning access to specific users. Project creators select the appropriate Cloud Object Storage instance when they create a project. The data assets and files for the project are stored in a bucket in the selected instance. Users with Editor or Viewer roles can work in the projects, but they cannot see the assets directly in the related Cloud Object Storage bucket. You can assign access to a specific Cloud Object Storage instance either to an individual user or to an access group. You must be the account owner or administrator to create service instances and assign access. - -Extra fees are not incurred by creating more than one Cloud Object Storage instances because charges are determined by overall storage utilization. The number of instances is not a factor for Cloud Object Storage fees. - -Only one instance of Cloud Object Storage is allowed for the Lite plan. You can change your pricing plan from the IBM Cloud catalog. - -To create a Cloud Object Storage instance and assign access: - - - -1. Select Services > Services catalog from the navigation menu. -2. Select Storage > Cloud Object Storage. -3. Click Create. A Service name is generated for you on IBM Cloud. -4. Select Manage > Access(IAM). -5. Select Users or Access groups. -6. Click Assign access. -7. In the Services list, choose Cloud Object Storage. -8. For Resources, choose: - - - -* Scope = Specific resources -* Attribute type = Service instance -* Operator = string equals -* Value = name of Cloud Object Storage - - - -9. For Roles and actions, choose: - - - -* Service access = Manager -* Platform access = Administrator - - - -10. Click Add and Assign. - - - -The specified Cloud Object Storage instance can be accessed only by the user or access group with the Service role of Manager and the Platform role of Administrator. Other users can work in the projects but cannot create projects or view assets directly in the bucket. - -" -39AD64C9004E83507A968C5C0B1C8EF952B3EACE_11,39AD64C9004E83507A968C5C0B1C8EF952B3EACE," Next step - -Finish the remaining steps for [setting up the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html). - -" -39AD64C9004E83507A968C5C0B1C8EF952B3EACE_12,39AD64C9004E83507A968C5C0B1C8EF952B3EACE," Learn more - - - -* [Security for IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html) -* [Data security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html) - - - -Parent topic:[Setting up the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html) -" -322E404E76067637F1D0AFDF44CBE309C2A53221_0,322E404E76067637F1D0AFDF44CBE309C2A53221," Accessibility features in IBM watsonx content and documentation - -IBM is committed to accessibility. Accessibility features that follow compliance guidelines are included in IBM watsonx content and documentation to benefit users with disabilities. Parts of the user interface of IBM watsonx are accessible, but not entirely. Only documentation is compliant, with a subset of parts of the overall product. - -IBM watsonx documentation uses the latest W3C Standard, [WAI-ARIA 1.0](https://www.w3.org/TR/wai-aria/) to ensure compliance with the [United States Access Board Section 508 Standards](https://www.access-board.gov/ict/), and the [ Web Content Accessibility Guidelines (WCAG) 2.0](https://www.w3.org/TR/WCAG20/). - -The IBM watsonx online product documentation is enabled for accessibility. Accessibility features help users who have a disability, such as restricted mobility or limited vision, to use information technology products successfully. Documentation is provided in HTML so that it is easily accessible through assistive technology. With the accessibility features of IBM watsonx, you can do the following tasks: - - - -* Use screen-reader software and digital speech synthesizers to hear what is displayed on the screen. Consult the product documentation of the assistive technology for details on using assistive technologies with HTML-based information. -* Use screen magnifiers to magnify what is displayed on the screen. -* Operate specific or equivalent features by using only the keyboard. - - - -For more information about the commitment that IBM has to accessibility, see [IBM Accessibility](http://www.ibm.com/able). - -" -322E404E76067637F1D0AFDF44CBE309C2A53221_1,322E404E76067637F1D0AFDF44CBE309C2A53221," TTY service - -In addition to standard IBM help desk and support websites, IBM has established a TTY telephone service for use by deaf or hard of hearing customers to access sales and support services: - -800-IBM-3383 (800-426-3383) within North America - -" -322E404E76067637F1D0AFDF44CBE309C2A53221_2,322E404E76067637F1D0AFDF44CBE309C2A53221," Additional interface information - -The IBM watsonx user interfaces do not have content that flashes 2 - 55 times per second. - -The IBM watsonx web user interfaces rely on cascading stylesheets to render content properly and to provide a usable experience. If you are a low-vision user, you can adjust your operating system display settings, and use settings such as high contrast mode. You can control font size by using the device or web browser settings. -" -C3552C5E0F334C8BC3557960821DC5EF931851A1_0,C3552C5E0F334C8BC3557960821DC5EF931851A1," Activities for assets - -For some asset types, you can see the activities of each asset in projects. The activities graph shows the history of the events that are performed on the asset for some tools. An event is an action that changes or copies the asset. For example, editing the asset description is an event, but viewing the asset is not an event. - -" -C3552C5E0F334C8BC3557960821DC5EF931851A1_1,C3552C5E0F334C8BC3557960821DC5EF931851A1," Requirements and restrictions - -You can view the activities of assets under the following circumstances. - - - -* Workspaces -You can view the asset activities in projects. - - - - - -* Limitations -Activities have the following limitations: - - - -* Activities graphs are currently available only for Watson Machine Learning models and data assets. -* Activities graphs do not appear in Microsoft Internet Explorer 11 browsers. - - - - - -" -C3552C5E0F334C8BC3557960821DC5EF931851A1_2,C3552C5E0F334C8BC3557960821DC5EF931851A1," Activities events - -To view activities for an asset in a project, click the asset name and click ![Activities icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/activities.svg). The activities panel shows a timeline of events. Summary information about the asset shows where asset was created, what the last event for it was, and when the last event happened. The first event for each asset is its creation. - -Activities events can describe actions that are applicable to all asset types or actions that are specific to an asset type: - - - -* [General events](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/asset-activities.html?context=cdpaas&locale=engeneral) -* [Events specific to Watson Machine Learning models](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/asset-activities.html?context=cdpaas&locale=enwml) -* [Events specific to data assets from files and connected data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/asset-activities.html?context=cdpaas&locale=endata) - - - -You can see this type of information about each event: - - - -* Where: In which catalog or project the event occurred. -* Who: The name of the user who performed the action, unless the action was automated. Automated actions generate events, but don't show usernames. -* What: A description of the action. Some events show details about the original and updated values. -* When: The date and time of the event. - - - -Activities also track relationships between assets. In the activities panel, the creation of a new asset based on the original asset is shown at the top of the list. Click See details to view asset details. - -" -C3552C5E0F334C8BC3557960821DC5EF931851A1_3,C3552C5E0F334C8BC3557960821DC5EF931851A1," General events - -You can see these general events: - - - -* Name updated -* Description updated -* Tags updated - - - -" -C3552C5E0F334C8BC3557960821DC5EF931851A1_4,C3552C5E0F334C8BC3557960821DC5EF931851A1," Events specific to Watson Machine Learning models - -Activities tracking is available for all Watson Machine Learning service plans, however, you wouldn't see events for actions that are not available with your plan. - -In addition to general events, you can see these events that are specific to models: - - - -* Model created -* Model deployed -* Model re-evaluated -* Model retrained -* Set as active model - - - -A model asset shows this information in the Created from field, depending on how it was created: - - - -* The name of the associated data asset -* The name of the associated connection asset -* The project name where it was created - - - -" -C3552C5E0F334C8BC3557960821DC5EF931851A1_5,C3552C5E0F334C8BC3557960821DC5EF931851A1," Events specific to data assets from files and connected data assets - -In addition to general events, you can see these events that are specific to data assets from files and connected data assets: - - - -* Added to project from a Data Refinery flow -* Added to a project from a file -* Data classes updated -* Schema updated by a Data Refinery flow -* Profile created -* Profile updated -* Profile deleted -* Downloaded - - - -A data asset shows this information in the Created from field, depending on how it was created: - - - -* The name of the Data Refinery flow that created it -* Its associated connection name -* The project name where it was created or came from - - - -Parent topic:[Finding and viewing an asset in a catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/view-asset.html) -" -256ED6CA079A147359A51199DC333B23C2708B42_0,256ED6CA079A147359A51199DC333B23C2708B42," Asset types and properties - -You create content, in the form of assets, when you work with tools in collaborative workspaces. An asset is an item that contains information about a data set, a model, or another item that works with data. - -You add assets by importing them or creating them with tools. You work with assets in collaborative workspaces. The workspace that you use depends on your tasks. - - - -* Projects -Where you collaborate with others to work with data and create assets. Most tools are in projects and you run assets that contain code in projects. For example, you can import data, prepare data, analyze data, or create models in projects. See [Projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html). - - - - - -* Deployment spaces -Where you deploy and run assets that are ready for testing or production. You move assets from projects into deployment spaces and then create deployments from those assets. You monitor and update deployments as necessary. See [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html). - - - -You can find any asset in any of the workspaces for which you are a collaborator by searching for it from the global search bar. See [Searching for assets across the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html). - -You can create many different types of assets, but all assets have some common properties: - - - -* [Asset types](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=entypes) -* [Common properties for assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=encommon) -" -256ED6CA079A147359A51199DC333B23C2708B42_1,256ED6CA079A147359A51199DC333B23C2708B42,"* [Data asset types and their properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=endata) - - - -" -256ED6CA079A147359A51199DC333B23C2708B42_2,256ED6CA079A147359A51199DC333B23C2708B42," Asset types - -To create most types of assets, you must use a specific tool. - -The following table lists the types of assets that you can create, the tools you need to create them, and the workspaces where you can add them. - - - -Asset types - - Asset type Description Tools to create it Workspaces - - [AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) Automatically generates candidate predictive model pipelines. AutoAI Projects - [Connected data asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=endata) Represents data that is accessed through a connection to a remote data source. Connected data tool Projects, Spaces - [Connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=endata) Contains the information to connect to a data source. Connection tool Projects, Spaces - [Data asset from a file](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=endata) Represents a file that you uploaded from your local system. Upload pane Projects, Spaces - [Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html) Prepares data. Data Refinery Projects, Spaces - [Decision Optimization experiment](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html) Solves optimization problems. Decision Optimization Projects - [Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html) Trains a common model on a set of remote data sources. Federated Learning Projects -" -256ED6CA079A147359A51199DC333B23C2708B42_3,256ED6CA079A147359A51199DC333B23C2708B42," [Folder asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=endata) Represents a folder in IBM Cloud Object Storage. Connected data tool Projects, Spaces - [Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html) Runs Python or R code to analyze data or build models. Jupyter notebook editor, AutoAI, Prompt Lab Projects - Model Contains information about a saved or imported model. Various tools that run experiments or train models Projects, Spaces - [Model use case](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-create-use-case.html) Tracks the lifecycle of a model from request to production. watsonx.governance Inventory - [Pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) Automates the model lifecycle. Watson Pipelines Projects - [Prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) A single prompt. Prompt Lab Projects - [Prompt session](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) The history of a working session in the Prompt Lab. Prompt Lab Projects - [Python function](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function.html) Contains Python code to support a model in production. Jupyter notebook editor Projects, Spaces -" -256ED6CA079A147359A51199DC333B23C2708B42_4,256ED6CA079A147359A51199DC333B23C2708B42," [Script](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-script.html) Contains a Python or R script to support a model in production. Jupyter notebook editor, RStudio Projects, Spaces - [SPSS Modeler flow](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html) Runs a flow to prepare data and build a model. SPSS Modeler Projects - [Visualization](https://dataplatform.cloud.ibm.com/docs/content/dataview/idh_idc_cg_help_main.html) Shows visualizations from a data asset. Visualization page in data assets Projects - [Synthetic data flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html) Generates synthetic tabular data. Synthetic Data Generator Projects - [Tuned model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-deploy.html) A tuned foundation model. Tuning Studio Projects - [Tuning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.html) A tuning experiment that builds a tuned foundation model. Tuning Studio Projects - - - -" -256ED6CA079A147359A51199DC333B23C2708B42_5,256ED6CA079A147359A51199DC333B23C2708B42," Common properties for assets - -Assets accumulate information in properties when you create them, use them, or when they are updated by automated processes. Some properties are provided by users and can be edited by users. Other properties are automatically provided by the system. Most system-provided properties can't be edited by users. - -" -256ED6CA079A147359A51199DC333B23C2708B42_6,256ED6CA079A147359A51199DC333B23C2708B42," Common properties for assets everywhere - -Most types of assets have the properties that are listed in the following table in all the workspaces where those asset types exist. - - - -Common properties for assets - - Property Description Editable? - - Name The asset name. Can contain up to 255 characters. Supports multibyte characters. Cannot be empty, contain Unicode control characters, or contain only blank spaces. Asset names do not need to be unique within a project or deployment space. Yes - Description Optional. Supports multibyte characters and hyperlinks. Yes - Creation date The timestamp of when the asset was created or imported. No - Creator or Owner The username or email address of the person who created or imported the asset. No - Last modified date The timestamp of when the asset was last modified. No - Last editor The username or email address of the person who last modified the asset. No - - - -" -256ED6CA079A147359A51199DC333B23C2708B42_7,256ED6CA079A147359A51199DC333B23C2708B42," Common properties for assets that run in tools - -Some assets are associated with running a tool. For example, an AutoAI experiment asset runs in the AutoAI tool. Assets that run in tools are also known as operational assets. Every time that you run assets in tools, you start a job. You can monitor and schedule jobs. Jobs use compute resources. Compute resources are measured in capacity unit hours (CUH) and are tracked. Depending on your service plans, you can have a limited amount of CUH per month, or pay for the CUH that you use every month. - -For many assets that run in tools, you have a choice of the compute environment configuration to use. Typically, larger and faster environment configurations consume compute resources faster. - -In addition to basic properties, most assets that run in tools contain the following types of information in projects: - - - -Properties for assets in projects - - Properties Description Editable? Workspaces - - Environment definition The environment template, hardware specification, and software specification for running the asset. See [Environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html). Yes Projects, Spaces - Settings Information that defines how the asset is run. Specific to each type of asset. Yes Projects - Associated data assets The data that the asset is working on. Yes Projects - Jobs Information about how to run the asset, including the environment definition, schedule, and notification options. See [Jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html). Yes Projects, Spaces - - - -" -256ED6CA079A147359A51199DC333B23C2708B42_8,256ED6CA079A147359A51199DC333B23C2708B42," Data asset types and their properties - -Data asset types contain metadata and other information about data, including how to access the data. - -How you create a data asset depends on where your data is: - - - -* If your data is in a file, you upload the file from your local system to a project or deployment space. -* If your data is in a remote data source, you first create a connection asset that defines the connection to that data source. Then, you create a data asset by selecting the connection, the path or other structure, and the table or file that contains the data. This type of data asset is called a connected data asset. - - - -The following graphic illustrates how data assets from files point to uploaded files in Cloud Object Storage. Connected data assets require a connection asset and point to data in a remote data source. - -![This graphic shows that data assets from files point to uploaded files and connected data assets require a connection asset and point to data in a remote data source.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/data-assets.svg) - -You can create the following types of data assets in a project or deployment space: - - - -* Data asset from a file -Represents a file that you uploaded from your local system. The file is stored in the object storage container on the IBM Cloud Object Storage instance that is associated with the workspace. The contents of the file can include structured data, unstructured textual data, images, and other types of data. You can create a data asset with a file of any format. However, you can do more actions on CSV files than other file types. See [Properties of data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=enprop-data). - -You can create a data asset from a file by uploading a file in a workspace. You can also create data files with tools and convert them to assets. For example, you can create data assets from files with the Data Refinery, Jupyter notebook, and RStudio tools. -* Connected data asset -" -256ED6CA079A147359A51199DC333B23C2708B42_9,256ED6CA079A147359A51199DC333B23C2708B42,"Represents a table, file, or folder that is accessed through a connection to a remote data source. The connection is defined in the connection asset that is associated with the connected data asset. You can create a connected data asset for every supported connection. When you access a connected data asset, the data is dynamically retrieved from the data source. See [Properties of data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=enprop-data). - -You can import connected data assets from a data source with the connected data tool in a workspace. -* Folder asset -Represents a folder in IBM Cloud Object Storage. A folder data asset is special case of a connected data asset. You create a folder data asset by specifying the path to the folder and the IBM Cloud Object Storage connection asset. You can view the files and subfolders that share the path with the folder data asset. The files that you can view within the folder data asset are not themselves data assets. For example, you can create a folder data asset for a path that contains news feeds that are continuously updated. See [Properties of data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=enprop-data). - -You can import folder assets from IBM Cloud Object Storage with the connected data tool in a workspace. -* Connection asset -Contains the information necessary to create a connection to a data source. See [Properties of connection assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html?context=cdpaas&locale=enconn). - -You can create connections with the connection tool in a workspace. - - - -Learn more about creating and importing data assets: - - - -* [Adding data to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html) -" -256ED6CA079A147359A51199DC333B23C2708B42_10,256ED6CA079A147359A51199DC333B23C2708B42,"* [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html) - - - -" -256ED6CA079A147359A51199DC333B23C2708B42_11,256ED6CA079A147359A51199DC333B23C2708B42," Properties of data assets from files and connected data assets - -In addition to basic properties, data assets from files and connected data assets have the properties or pages that are listed in the following table. - - - -Properties of data assets from files and connected data assets - - Property or page Description Editable? Workspaces - - Tags Optional. Text labels that users create to simplify searching. A tag consists of one string of up to 255 characters. It can contain spaces, letters, numbers, underscores, dashes, and the symbols # and @. Yes Projects - Format The MIME type of a file. Automatically detected. Yes Projects, Spaces - Source Information about the data file in storage or the data source and connection. No Projects, Spaces - Asset details Information about the size of the data, the number of columns and rows, and the asset version. No Projects, Spaces - Preview asset A preview of the data that includes a limited set of columns and rows from the original data source. See [Asset contents or previews](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html). No Projects, Spaces - Profile page Metadata and statistics about the content of the data. See [Profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html). Yes Projects - Visualizations page Charts and graphs that users create to understand the data. See [Visualizations](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/visualizations.html). Yes Projects - Feature group page Information about which columns in the data asset are used as features in models. See [Managing feature groups](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html). Yes Projects, Spaces - - - -" -256ED6CA079A147359A51199DC333B23C2708B42_12,256ED6CA079A147359A51199DC333B23C2708B42," Properties of connection assets - -The properties of connection assets depend on the data source that you select when you create a connection. See [Connection types](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html). Connection assets for most data sources have the properties that are listed in the following table. - - - -Properties of connection assets - - Properties Description Editable? Workspaces - - Connection details The information that identifies the data source. For example, the database name, hostname, IP address, port, instance ID, bucket, endpoint URL, and so on. Yes Projects, Spaces - Credential setting Whether the credentials are shared across the platform (default) or each user must enter their personal credentials. Not all data sources support personal credentials. Yes Projects, Spaces - Authentication method The format of the credentials information. For example, an API key or a username and password. Yes Projects, Spaces - Credentials The username and password, API key, or other credentials, as required by the data source and the specified authentication method. Yes Projects, Spaces - Certificates Whether the data source port is configured to accept SSL connections and other information about the SSL certificate. Yes Projects, Spaces - Private connectivity The method to connect to a database that is not externalized to the internet. See [Connecting to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). Yes Projects, Spaces - - - -" -256ED6CA079A147359A51199DC333B23C2708B42_13,256ED6CA079A147359A51199DC333B23C2708B42," Learn more - - - -* [Profiles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html) -* [Searching for assets across the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html) -* [Asset contents or previews](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html) -* [Activities](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/asset-activities.html) -* [Visualizations](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/visualizations.html) -* [Getting started](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-wdp.html) -* [Connection types](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) - - - -Parent topic:[Overview of IBM watsonx.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html) -" -C562415979D3B38AA74C27FD68F13D54FFE47FE5_0,C562415979D3B38AA74C27FD68F13D54FFE47FE5," Adding associated services to a project - -To run some tools, you must associate a Watson Machine Learning service instance with the project. - -After you [create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html), you can add an associated service to it at any time. - -Required permission : You must have the Admin role in the project to add an associated service. - -For some types of assets, you must associate the IBM Watson Machine Learning service with the project. You are prompted to associate the IBM Watson Machine Learning the first time you open tools like Prompt Lab, AutoAI, SPSS Modeler, and Decision Optimization. - -You can also add the Watson Machine Learning service to a project directly: - - - -1. Go to the project's Manage tab and select the Services and integrations page. -2. In the IBM Services section, click Associate Service. -3. Select your IBM Watson Machine Learning service instance and click Associate. - - - -" -C562415979D3B38AA74C27FD68F13D54FFE47FE5_1,C562415979D3B38AA74C27FD68F13D54FFE47FE5," Learn more - - - -* [Creating and managing IBM Cloud services](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/create-services.html) -* [IBM Cloud services for IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-services.html) - - - -Parent topic:[Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html) -" -E45C894CB0DF39AD55A35183D62A6CBD570076CA_0,E45C894CB0DF39AD55A35183D62A6CBD570076CA," Browser support - -The supported web browsers provide the best experience for IBM watsonx. - -Use the latest versions of these web browers with IBM watsonx: - - - -* Chrome -* Microsoft Edge -* Mozilla Firefox -Tip for Firefox on Mac users: Horizontal scrolling within the UI can be interpreted by your Mac as an attempt to swipe between pages. If this behavior is undesired or if the browser crashes after the service prompts you to stay on the page, consider disabling the Swipe between pages gesture in Launchpad > System Preferences > Trackpad > More Gestures. -* Firefox ESR (see Mozilla Firefox Extended Support Release for more details) - - - -" -E45C894CB0DF39AD55A35183D62A6CBD570076CA_1,E45C894CB0DF39AD55A35183D62A6CBD570076CA," Learn more - - - -* [Language support](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/localization.html) - - - -Parent topic:[FAQ](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html) -" -E2089A0F2315F9897F6E3DDE50E4FC3EBD2E65AA_0,E2089A0F2315F9897F6E3DDE50E4FC3EBD2E65AA," Project collaborators - -Collaborators are the people you add to the project to work together. After you create a project, add collaborators to share knowledge and resources freely, shift workloads flexibly, and help one another complete jobs. - -Required permissions : To manage collaborators, both of the following conditions must be true: : - You must have the Admin role in the project. : - You must belong to the project creator's IBM Cloud account. - - - -* [Add collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html?context=cdpaas&locale=enadd-collaborators) -* [Add service IDs](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html?context=cdpaas&locale=enserviceids) -* [Change collaborator roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html?context=cdpaas&locale=enchange-role) -* [Remove a collaborator](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html?context=cdpaas&locale=enremove-a-collaborator) - - - -" -E2089A0F2315F9897F6E3DDE50E4FC3EBD2E65AA_1,E2089A0F2315F9897F6E3DDE50E4FC3EBD2E65AA," Add collaborators - -To add a collaborator as a Viewer or Editor of your project, they must either be: - - - -* A member of the project creator's IBM Cloud account, or; -* A member of the same organization single sign-on (SAML federation on IBM Cloud). - - - -To add a collaborator as an Admin of your project, they must be a member of the project creator's IBM Cloud account. - -Watch this video to see how to add collaborators and grant them access to your projects. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -To add collaborators to your project: - - - -1. From your project, click the Access Control page on the Manage tab. -2. Click Add collaborators then select Add users. -3. Add the collaborators who you want to have the same access level: - - - -* Type email addresses into the Find users field. -* Copy multiple email addresses, separated by commas, and paste them into the Find users field. - - - -4. Choose the [role](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborator-permissions.html) for the collaborators and click Add: - - - -* Viewer: View the project. -* Editor: Control project assets. -* Admin: Control project assets, collaborators, and settings. - - - -5. Add more collaborators with the same or different access levels. -6. Click Add. - - - -The invited users are added to your project immediately. - -" -E2089A0F2315F9897F6E3DDE50E4FC3EBD2E65AA_2,E2089A0F2315F9897F6E3DDE50E4FC3EBD2E65AA," Add service IDs - -You can create service IDs in IBM Cloud to enable an application outside of IBM Cloud access to your IBM Cloud services. Because service IDs are not tied to a specific user, if a user happens to leave an organization and is deleted from the account, the service ID remains ensuring that your application or service stays up and running. See [Creating and working with service IDs](https://cloud.ibm.com/docs/account?topic=account-serviceids). - -To add a service ID to your project: - - - -1. From your project, select the Access Control page on the Manage tab. -2. Click Add collaborators and select Add service IDs. -3. In the Find service IDs field, search for the service name or description and select the one you want. -4. Add other service IDs that you want to have the same access level. -5. Select the access level. -6. Click Add. - - - -" -E2089A0F2315F9897F6E3DDE50E4FC3EBD2E65AA_3,E2089A0F2315F9897F6E3DDE50E4FC3EBD2E65AA," Change collaborator roles - -To change the role for a project collaborator or service ID: - - - -1. Go to the Access Control page on the Manage tab. -2. In the row for the collaborator or service ID, click the edit icon next to the role name. -3. Select the new role and click Save. - - - -" -E2089A0F2315F9897F6E3DDE50E4FC3EBD2E65AA_4,E2089A0F2315F9897F6E3DDE50E4FC3EBD2E65AA," Remove a collaborator - -To remove a collaborator or service ID from a project, go to the Access Control page on the Manage tab. In the row for the collaborator or service ID, click the remove icon. - -" -E2089A0F2315F9897F6E3DDE50E4FC3EBD2E65AA_5,E2089A0F2315F9897F6E3DDE50E4FC3EBD2E65AA," Learn more - - - -* [Collaborator permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborator-permissions.html) -* [Setup additional account users](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-addl-users.html) - - - -Parent topic:[Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html) -" -DE79F406DB76B8D50A2B8AB35D4A385983AA5F54_0,DE79F406DB76B8D50A2B8AB35D4A385983AA5F54," Project collaborator roles and permissions - -When you add a collaborator to a project, you specify which actions that the user can do by assigning a role. - -These roles provide these permissions for projects: - - - - Action Viewer Editor Admin - - View all information for data assets ✓ ✓ ✓ - View jobs ✓ ✓ ✓ - Add and read data assets ✓ ✓ - View Data Refinery flows and SPSS Modeler flows ✓ ✓ - View all other types of assets ✓ ✓ ✓ - Create, add, modify, or delete all types of assets ✓ ✓ - Submit inference requests to foundation models, including tuned foundation models ✓ ✓ - Run and schedule assets that run in tools and jobs ✓ ✓ - Create and modify data asset visualizations ✓ ✓ ✓ - Save visualizations to your project ✓ ✓ - Create and modify data asset profiles ✓ ✓ - Share notebooks ✓ ✓ - Promote assets to deployment spaces ✓ ✓ - Edit the project readme ✓ ✓ - Use project access tokens ✓ ✓ - Manage environment templates ✓ ✓ - Stop your own environment runtimes ✓ ✓ - Export a project to desktop ✓ ✓ - Manage project collaborators * ✓ - Set up integrations ✓ - Manage associated services ✓ - Manage project access tokens ✓ - Mark project as sensitive ✓ - - - -* To add collaborators or change collaborator roles, users with the Admin role in the project must also belong to the project creator's IBM Cloud account. - -" -DE79F406DB76B8D50A2B8AB35D4A385983AA5F54_1,DE79F406DB76B8D50A2B8AB35D4A385983AA5F54," Learn more - - - -* [Adding collaborators to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html) -* [Determine your roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/your-roles.html) - - - -Parent topic:[Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html) -" -41AD83283A66CC3C467F70EA638B9C1C6681A160_0,41AD83283A66CC3C467F70EA638B9C1C6681A160," Comparison of IBM watsonx as a Service and Cloud Pak for Data as a Service - -IBM watsonx as a Service and Cloud Pak for Data as a Service have similar platform functionality and are compatible in many ways. The watsonx platform provides a subset of the tools and services that are provided by Cloud Pak for Data as a Service. However, watsonx.ai and watsonx.governance on watsonx provide more functionality than the same set of tools on Cloud Pak for Data as a Service. - - - -* [Common platform functionality](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html?context=cdpaas&locale=enplatform) -* [Services on each platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html?context=cdpaas&locale=enservices) -* [Data science and MLOps tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html?context=cdpaas&locale=entools) -* [AI governance tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html?context=cdpaas&locale=engov) - - - -" -41AD83283A66CC3C467F70EA638B9C1C6681A160_1,41AD83283A66CC3C467F70EA638B9C1C6681A160," Common platform functionality - -The following platform functionality is common to both watsonx and Cloud Pak for Data as a Service: - - - -* Security, compliance, and isolation -* Compute resources for running workloads -* Global search for assets across the platform -* The Platform assets catalog for sharing connections across the platform -* Role-based user management within workspaces -* A services catalog for adding services -* View compute usage from the Administration menu -* Connections to remote data sources -* Connection credentials that are personal or shared -* Sample assets and projects - - - -If you are signed up for both watsonx and Cloud Pak for Data as a Service, you can switch between platforms. See [Switching your platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/platform-switcher.html). - -" -41AD83283A66CC3C467F70EA638B9C1C6681A160_2,41AD83283A66CC3C467F70EA638B9C1C6681A160," Services on each platform - -Both platforms provide services for data science and MLOps and AI governance use cases: - - - -* Watson Studio -* Watson Machine Learning -* Watson OpenScale - - - -However, the services for watsonx.ai and watsonx.governance on the watsonx platform include features for working with foundation models and generative AI that are not included in these services on Cloud Pak for Data as a Service. - -Cloud Pak for Data as a Service also provides services for these use cases: - - - -* Data integration -* Data governance - - - -" -41AD83283A66CC3C467F70EA638B9C1C6681A160_3,41AD83283A66CC3C467F70EA638B9C1C6681A160," Data science and AI tools - -Both platforms provide a common set of data science and AI tools. However, on watsonx, you can also perform foundation model inferencing with the Prompt Lab tool or with a Python library in notebooks. Foundation model inferencing and the Prompt Lab tool are not available on Cloud Pak for Data as a Service. - -The following table shows which data science and AI tools are available on each platform. - - - -Tools on watsonx and Cloud Pak for Data - - Tool On watsonx? On Cloud Pak for Data? - - [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) ✓ No - [Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html) ✓ No - [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) ✓ ✓ - [Visualizations](https://dataplatform.cloud.ibm.com/docs/content/dataview/idh_idc_cg_help_main.html) ✓ ✓ - [Jupyter notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-editor.html) ✓ ✓ - [Federated learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html) ✓ ✓ - [RStudio IDE](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-overview.html) ✓ ✓ - [SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html) ✓ ✓ - [Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html) ✓ ✓ -" -41AD83283A66CC3C467F70EA638B9C1C6681A160_4,41AD83283A66CC3C467F70EA638B9C1C6681A160," [AutoAI tool](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) ✓ ✓ - [Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) ✓ ✓ - - - -If you are signed up for Cloud Pak for Data as a Service, you can access watsonx and you can move your projects and deployment spaces that meet the requirements from one platform to the other. See [Switching the platform for a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html) and [Switching the platform for a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/switch-platform-space.html). - -" -41AD83283A66CC3C467F70EA638B9C1C6681A160_5,41AD83283A66CC3C467F70EA638B9C1C6681A160," AI governance tools - -Both platforms contain the same AI use case inventory and evaluation tools. However, on watsonx, you can track and evaluate generative AI assets and dimensions. See [Comparison of governance solutions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-comparison.html). - -" -41AD83283A66CC3C467F70EA638B9C1C6681A160_6,41AD83283A66CC3C467F70EA638B9C1C6681A160," Learn more - - - -* [Switching your platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/platform-switcher.html) -* [Switching the platform for a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html) -* [Switching the platform for a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/switch-platform-space.html) -* [Overview of IBM watsonx as a Service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html) -* [Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html) - - - -Parent topic:[Overview of watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html) -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_0,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Frequently asked questions - -Find answers to frequently asked questions about watsonx.ai. - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_1,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Account and setup questions - - - -* [How do I sign up for watsonx?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=ensign-up-wxai) -* [Can I try watsonx for free?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enfree) -* [How do I upgrade watsonx.ai and watsonx.governance?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enupgrade) -* [Which regions can I provision watsonx.ai and watsonx.governance in?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/regional-datactr.html) -* [Which web browsers are supported for watsonx?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/browser-support.html) -* [How can I get the most runtime from my Watson Studio Lite plan?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enws-lite) -* [How do I change languages for the product and the documentation?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/localization.html) -* [How do I find my IBM Cloud account owner or administrator?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enaccountadmin) -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_2,812C39CF410F9FE3F0D0E7C62ED1BC015370C849,"* [Can I provide feedback?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enfeedback) - - - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_3,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Foundation model questions - - - -* [What foundation models are available and where do they come from?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enfm-available) -* [What data was used to train foundation models?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enfm-data) -* [Do I need to check generated output for biased, inappropriate, or incorrect content?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enfm-check) -* [Is there a limit to how much text generation I can do?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enfm-token-limit) -* [Does prompt engineering train the foundation model?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enfm-train) -* [Does IBM have access to or use my data in any way?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enfm-privacy) -* [What APIs are available?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enfm-apis) - - - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_4,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Project questions - - - -* [How do I load very large files to my project?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enverylarge) - - - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_5,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," IBM Cloud Object Storage questions - - - -* [What is saved in IBM Cloud Object Storage for workspaces?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=ensaved-in-cos) -* [Do I need to upgrade IBM Cloud Object Storage when I upgrade other services?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enupgrade-cos) -* [Why am I unable to add storage to an existing project or to see the IBM Cloud Object Storage selection in the New Project dialog?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=encosstep) - - - - Notebook questions - - - -* [Can I install libraries or packages to use in my notebooks?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=eninstall-libraries) -* [Can I call functions that are defined in one notebook from another notebook?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enfunctions-defined) -* [Can I add arbitrary notebook extensions?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enarbitrary) -* [How do I access the data from a CSV file in a notebook?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=encsv-file) -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_6,812C39CF410F9FE3F0D0E7C62ED1BC015370C849,"* [How do I access the data from a compressed file in a notebook?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=encompressed-file) - - - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_7,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Security and reliability questions - - - -* [How secure is IBM watsonx?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=ensecurity) -* [Is my data and notebook protected from sharing outside of my collaborators?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enprotected-notebooks) -* [Do I need to back up my notebooks?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enbackup-notebooks) - - - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_8,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Sharing and collaboration questions - - - -* [What are the implications of sharing a notebook?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=ensharing-notebooks) -* [How can I share my work outside of RStudio?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enhow-share) -* [How do I share my SPSS Modeler flow with another project?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enshare-spss) - - - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_9,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Machine learning questions - - - -* [How do I run an AutoAI experiment?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enrun-autoai) -* [What is available for automated model building?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enwml-autoai) -* [What frameworks and libraries are available for my machine learning models?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enwml-frameworks) -* [What is an API Key?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enwml-api-key) - - - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_10,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Watson OpenScale questions - - - -* [What is Watson OpenScale?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enfaq-whatsa) -* [How do I convert a prediction column from an integer data type to a categorical data type?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enwos-faqs-convert-data-types) -* [Why does Watson OpenScale need access to training data?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=entrainingdata) -* [What does it mean if the fairness score is greater than 100 percent?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enfairness-score-over100) -* [How is model bias mitigated by using Watson OpenScale?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enwos-001-bias) -* [Is it possible to check for model bias on sensitive attributes, such as race and sex, even when the model is not trained on them?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enwos-002-attrib) -* [Is it possible to mitigate bias for regression-based models?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enwos-003-regress) -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_11,812C39CF410F9FE3F0D0E7C62ED1BC015370C849,"* [What are the different methods of debiasing in Watson OpenScale?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enwos-004-methods-bias) -* [Configuring a model requires information about the location of the training data and the options are Cloud Object Storage and Db2. If the data is in Netezza, can Watson OpenScale use Netezza?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enconfigmodel) -* [Why doesn't Watson OpenScale see the updates that were made to the model?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=ennew-model-missing) -* [What are the various kinds of risks associated in using a machine learning model? ](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enwos-risk) -* [Must I keep monitoring the Watson OpenScale dashboard to make sure that my models behave as expected?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enwos-dashboard-email) -* [In Watson OpenScale, what data is used for Quality metrics computation?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enwos-quality-data) -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_12,812C39CF410F9FE3F0D0E7C62ED1BC015370C849,"* [In Watson OpenScale, can the threshold be set for a metric other than 'Area under ROC' during configuration?](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html?context=cdpaas&locale=enwos-thresholds) - - - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_13,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," IBM watsonx.ai questions - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_14,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," How do I sign up for watsonx? - -Go to [Try IBM watsonx.ai](https://dataplatform.cloud.ibm.com/registration/stepone?context=wx&apps=data_science_experience,watson_data_platform,cos&uucid=0b526de8c1c419db&utm_content=WXAWW) or [Try watsonx.governance](https://dataplatform.cloud.ibm.com/registration/stepone?context=wx&apps=data_science_experience,watson_machine_learning,cos,aiopenscale&uucid=0cf8ca3f38ace12f&utm_content=WXGWW®ions=us-south). If you sign up for watsonx.governance, you automatically provision watsonx.ai as well. - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_15,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Can I try watsonx for free? - -Yes, when you sign up for IBM watsonx.ai, you automatically provision the free version of the underlying services: Watson Studio, Watson Machine Learning, and IBM Cloud Object Storage. When you sign up for IBM watsonx.governance, you automatically provision the free version of Watson OpenScale and the free versions of the services for IBM watsonx.ai. - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_16,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," How do I upgrade watsonx.ai and watsonx.governance? - -When you're ready to upgrade any of the underlying services for watsonx.ai or watsonx.governance, you can upgrade in place without losing any of your work or data. - -You must be the owner or administrator of the IBM Cloud account for a service to upgrade it. See [Upgrading services on watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/upgrade.html). - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_17,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," How can I get the most runtime from my Watson Studio Lite plan? - -The Watson Studio Lite plan allows for 10 CUH per month. You can maximize your available CUH by setting your assets to use environments with lower CUH rates. For example, you can [change your notebook environment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.htmlchange-env). To see the available environments and the required CUH, go to the [Services catalog page for Watson Studio](https://dataplatform.cloud.ibm.com/data/catalog/data-science-experience?context=wx&target=wx). - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_18,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," How do I find my IBM Cloud account owner? - -If you have an enterprise account or work in an IBM Cloud that you don't own, you might need to ask an account owner to give you access to a workspace or another role. - -To find your IBM Cloud account owner: - - - -1. From the navigation menu, choose Administration > Access (IAM). -2. From the avatar menu, make sure you're in the right account, or switch accounts, if necessary. -3. Click Users, and find the username with the word owner next to it. - - - -To understand roles, see [Roles for IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html). To determine your roles, see [Determine your roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/your-roles.html). - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_19,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Can I provide feedback? - -Yes, we encourage feedback as we continue to develop this platform. From the navigation menu, select Support > Share an idea. - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_20,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Foundation models - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_21,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," What foundation models are available and where do they come from? - -See the complete list of [supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html). - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_22,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," What data was used to train foundation models? - -Links to details about each model, including pretraining data and fine-tuning, are available here: [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html). - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_23,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Do I need to check generated output for biased, inappropriate, or incorrect content? - -Yes, you must review the generated output of foundation models. Third Party models have been trained with data that might contain biases and inaccuracies and can generate outputs containing misinformation, obscene or offensive language, or discriminatory content. - -In the Prompt Lab, when you toggle AI guardrails on, any sentence in the prompt text or model output that contains harmful language will be replaced with a message saying potentially harmful text has been removed. - -See [Avoiding undesirable output](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-hallucinations.html). - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_24,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Is there a limit to how much text generation I can do? - -With the free trial of watsonx.ai, you can use up to 25,000 tokens per month. Your token usage is the sum of your input and output tokens. - -With a paid service plan, there is no token limit, but you are charged for the tokens that you submit as input plus the tokens that you receive in the generated output. - -See [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html). - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_25,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Does prompt engineering train the foundation model? - -No, submitting prompts to a foundation model does not train the model. The models available in watsonx.ai are pretrained, so you do not need to train the models before you use them. - -See [Prompt tips](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-tips.html). - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_26,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Does IBM have access to or use my data in any way? - -No, IBM does not have access to your data. - -Your work on watsonx.ai, including your data and the models that you create, are private to your account: - - - -* Your data is accessible only by you. Your data is used to train only your models. Your data will never be accessible or used by IBM or any other person or organization. Your data is stored in dedicated storage buckets and is encrypted at rest and in motion. -* Your models are accessible only by you. Your models will never be accessible or used by IBM or any other person or organization. Your models are secured in the same way as your data. - - - -Learn more about security and your options: - - - -* [Security and privacy of foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html) -* [Security for IBM watsonx.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html) -* [Data security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html) - - - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_27,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," What APIs are available? - -You can prompt foundation models in watsonx.ai programmatically using the Python library. - -See [Foundation models Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html). - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_28,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Projects - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_29,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," How do I load very large files to my project? - -You can't load data files larger than 5 GB to your project. If your files are larger, you must use the Cloud Object Storage API and load the data in multiple parts. See the [curl commands](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/store-large-objs-in-cos.html) for working with Cloud Object Storage directly on IBM Cloud. - -See [Adding very large objects to a project's Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/store-large-objs-in-cos.html). - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_30,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," IBM Cloud Object Storage - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_31,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," What is saved in IBM Cloud Object Storage for workspaces? - -When you create a project or deployment space, you specify a IBM Cloud Object Storage and create a bucket that is dedicated to that workspace. These types of objects are stored in the IBM Cloud Object Storage bucket for the workspace: - - - -* Files for data assets that you uploaded into the workspace. -* Files associated with assets that run in tools, such as, notebooks and models. -* Metadata about assets, such as the asset type, format, and tags. - - - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_32,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Do I need to upgrade IBM Cloud Object Storage when I upgrade other services? - -You must upgrade your IBM Cloud Object Storage instance only when you run out of storage space. Other services can use any IBM Cloud Object Storage plan and you can upgrade any service or your IBM Cloud Object Storage service independently. - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_33,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Why am I unable to add storage to an existing project or to see the IBM Cloud Object Storage selection in the New Project dialog? - -IBM Cloud Object Storage requires an extra step for users who do not have administrative privileges for it. The account administrator must [enable nonadministrative users to create projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.htmlcos-delegation). - -If you have administrator privileges and do not see the latest IBM Cloud Object Storage, try again later because server-side caching might cause a delay in rendering the latest values. - - Notebooks - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_34,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Can I install libraries or packages to use in my notebooks? - -You can install Python libraries and R packages through a notebook, and those libraries and packages will be available to all your notebooks that use the same environment template. For instructions, see [Import custom or third-party libraries](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/install-cust-lib.html). If you get an error about missing operating system dependencies when you install a library or package, notify IBM Support. To see the preinstalled libraries and packages and the libraries and packages that you installed, from within a notebook, run the appropriate command: - - - -* Python: !pip list -* R: installed.packages() - - - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_35,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Can I call functions that are defined in one notebook from another notebook? - -There is no way to call one notebook from another notebook on the platform. However, you can put your common code into a library outside of the platform and then install it. - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_36,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Can I add arbitrary notebook extensions? - -No, you can't extend your notebook capabilities by adding arbitrary extensions as a customization because all notebook extensions must be preinstalled. - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_37,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," How do I access the data from a CSV file in a notebook? - -After you load a CSV file into object storage, load the data by clicking the Code snippets icon (![the Code snippets icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/code-snippets-icon.png)) in an opened notebook, clicking Read data and selecting the CSV file from the project. Then, click in an empty code cell in your notebook and insert the generated code. - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_38,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," How do I access the data from a compressed file in a notebook? - -After you load the compressed file to object storage, get the file credentials by clicking the Code snippets icon (![the Code snippets icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/code-snippets-icon.png)) in an opened notebook, clicking Read data and selecting the compressed file from the project. Then, click in an empty code cell in your notebook and load the credentials to the cell. Alternatively, click to copy the credentials to the clipboard and paste them into your notebook. - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_39,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Security and reliability - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_40,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," How secure is IBM watsonx? - -The IBM watsonx platform is very secure and resilient. See [Security of IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html). - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_41,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Is my data and notebook protected from sharing outside of my collaborators? - -The data that is loaded into your project and notebooks is secure. Only the collaborators in your project can access your data or notebooks. Each platform account acts as a separate tenant of the Spark and IBM Cloud Object Storage services. Tenants cannot access other tenant's data. - -If you want to share your notebook with the public, then hide your data service credentials in your notebook. For the Python and R languages, enter the following syntax: @hidden_cell - -Be sure to save your notebook immediately after you enter the syntax to hide cells with sensitive data. - -Only then should you share your work. - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_42,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Do I need to back up my notebooks? - -No. Your notebooks are stored in IBM Cloud Object Storage, which provides resiliency against outages. - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_43,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Sharing and collaboration - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_44,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," What are the implications of sharing a notebook? - -When you share a notebook, the permalink never changes. Any person with the link can view your notebook. You can stop sharing the notebook by clearing the checkbox to share it. Updates are not automatically shared. When you update your notebook, you can sync the shared notebook by reselecting the checkbox to share it. - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_45,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," How can I share my work outside of RStudio? - -One way of sharing your work outside of RStudio is connecting it to a shared GitHub repository that you and your collaborators can work from. Read this [blog post](https://www.r-bloggers.com/rstudio-and-github/) for more information. - -However, the best method to share your work with the members of a project is to use notebooks in the project that uses the R kernel. - -RStudio is a great environment to work in for prototyping and working individually on R projects, but it is not yet integrated with projects. - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_46,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," How do I share my SPSS Modeler flow with another project? - -By design, modeler flows can be used only in the project where the flow is created or imported. If you need to use a modeler flow in a different project, you must download the flow from current project (source project) to your local environment and then import the flow to another project (target project). - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_47,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," IBM Watson Machine Learning - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_48,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," How do I run an AutoAI experiment? - -Go to [Creating an AutoAI experiment from sample data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html) to watch a short video to see how to create and run an AutoAI experiment and then follow a tutorial to set up your own sample. - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_49,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," What is available for automated model building? - -The AutoAI graphical tool automatically analyzes your data and generates candidate model pipelines that are customized for your predictive modeling problem. These model pipelines are created iteratively as AutoAI analyzes your data set and discovers data transformations, algorithms, and parameter settings that work best for your problem setting. Results are displayed on a leaderboard, showing the automatically generated model pipelines ranked according to your problem optimization objective. For details, see [AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html). - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_50,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," What frameworks and libraries are supported for my machine learning models? - -You can use popular tools, libraries, and frameworks to train and deploy machine learning models by using IBM Watson Machine Learning. The [supported frameworks topic](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html) lists supported versions and features, as well as deprecated versions scheduled to be discontinued. - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_51,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," What is an API Key? - -API keys allow you to easily authenticate when using the CLI or APIs that can be used across multiple services. API Keys are considered confidential since they are used to grant access. Treat all API keys as you would a password since anyone with your API key can impersonate your service. - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_52,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Watson OpenScale - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_53,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," What is Watson OpenScale - -IBM Watson OpenScale tracks and measures outcomes from your AI models, and helps ensure they remain fair, explainable, and compliant wherever your models were built or are running. Watson OpenScale also detects and helps correct the drift in accuracy when an AI model is in production - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_54,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," How do I convert a prediction column from an integer data type to a categorical data type? - -For fairness monitoring, the prediction column allows only an integer numerical value even though the prediction label is categorical. How do I configure a categorical feature that is not an integer? Is a manual conversion required? - -The training data might have class labels such as “Loan Denied”, “Loan Granted”. The prediction value that is returned by IBM Watson Machine Learning scoring end point has values such as “0.0”, “1.0"". The scoring end point also has an optional column that contains the text representation of prediction. For example, if prediction=1.0, the predictionLabel column might have a value “Loan Granted”. If such a column is available, when you configure the favorable and unfavorable outcome for the model, specify the string values “Loan Granted” and “Loan Denied”. If such a column is not available, then you need to specify the integer and double values of 1.0, 0.0 for the favorable, and unfavorable classes. - -IBM Watson Machine Learning has a concept of output schema that defines the schema of the output of IBM Watson Machine Learning scoring end point and the role for the different columns. The roles are used to identify which column contains the prediction value, which column contains the prediction probability, and the class label value, and so on. The output schema is automatically set for models that are created by using model builder. It can also be set by using the IBM Watson Machine Learning Python client. Users can use the output schema to define a column that contains the string representation of the prediction. Set the modeling_role for the column to ‘decoded-target’. Read the [documentation for the IBM Watson Machine Learning Python client](https://ibm.github.io/watson-machine-learning-sdk/). Search for “OUTPUT_DATA_SCHEMA” to understand the output schema and the API to use is to store_model API that accepts the OUTPUT_DATA_SCHEMA as a parameter. - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_55,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Why does Watson OpenScale need access to training data? - -You must either provide Watson OpenScale access to training data that is stored in Db2 or IBM Cloud Object Storage, or you must run a Notebook to access the training data. - -Watson OpenScale needs access to your training data for the following reasons: - - - -* To generate contrastive explanations: To create explanations, access to statistics, such as median value, standard deviation, and distinct values from the training data is required. -* To display training data statistics: To populate the bias details page, Watson OpenScale must have training data from which to generate statistics. -* To build a drift detection model: The Drift monitor uses training data to create and calibrate drift detection. - - - -In the Notebook-based approach, you are expected to upload the statistics and other information when you configure a deployment in Watson OpenScale. Watson OpenScale no longer has access to the training data outside of the Notebook, which is run in your environment. It has access only to the information uploaded during the configuration. - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_56,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," What does it mean if the fairness score is greater than 100 percent? - -Depending on your fairness configuration, your fairness score can exceed 100 percent. It means that your monitored group is getting relatively more “fair” outcomes as compared to the reference group. Technically, it means that the model is unfair in the opposite direction. - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_57,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," How is model bias mitigated by using Watson OpenScale? - -The debiasing capability in Watson OpenScale is enterprise grade. It is robust, scalable and can handle a wide variety of models. Debiasing in Watson OpenScale consists of a two-step process: Learning Phase: Learning customer model behavior to understand when it acts in a biased manner. - -Application Phase: Identifying whether the customer’s model acts in a biased manner on a specific data point and, if needed, fixing the bias. For more information, see [Understanding how debiasing works](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-debias-ovr.html) and [Debiasing options](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-insight-debias.html). - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_58,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Is it possible to check for model bias on sensitive attributes, such as race and sex, even when the model is not trained on them? - -Yes. Recently, Watson OpenScale delivered a ground-breaking feature called “Indirect Bias detection.” Use it to detect whether the model is exhibiting bias indirectly for sensitive attributes, even though the model is not trained on these attributes. For more information, see [Understanding how debiasing works](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-debias-ovr.htmlmf-debias-indirect). - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_59,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Is it possible to mitigate bias for regression-based models? - -Yes. You can use Watson OpenScale to mitigate bias on regression-based models. No additional configuration is needed from you to use this feature. Bias mitigation for regression models is done out-of-box when the model exhibits bias. - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_60,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," What are the different methods of debiasing in Watson OpenScale? - -You can use both Active Debiasing and Passive Debiasing for debiasing. For more information, see [Debiasing options](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-insight-debias.htmlit-dbo-active). - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_61,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Configuring a model requires information about the location of the training data and the options are Cloud Object Storage and Db2. If the data is in Netezza, can Watson OpenScale use Netezza? - -Use this [Watson OpenScale Notebook](https://github.com/IBM/watson-openscale-samples/blob/main/Cloud%20Pak%20for%20Data/Batch%20Support/Configuration%20generation%20for%20OpenScale%20batch%20subscription.ipynb) to read the data from Netezza and generate the training statistics and also the drift detection model. - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_62,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Why doesn't Watson OpenScale see the updates that were made to the model? - -Watson OpenScale works on a deployment of a model, not on the model itself. You must create a new deployment and then configure this new deployment as a new subscription in Watson OpenScale. With this arrangement, you are able to compare the two versions of the model. - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_63,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," What are the various kinds of risks associated in using a machine learning model? - -Multiple kinds of risks that are associated with machine learning models, such as any change in input data that is also known as Drift can cause the model to make inaccurate decisions, impacting business predictions. Training data can be cleaned to be free from bias but runtime data might induce biased behavior of model. - -Traditional statistical models are simpler to interpret and explain, but unable to explain the outcome of the machine learning model can pose a serious threat to the usage of the model. - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_64,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," Must I keep monitoring the Watson OpenScale dashboard to make sure that my models behave as expected? - -No, you can set up email alerts for your production model deployments in Watson OpenScale. You receive email alerts whenever a risk evaluation test fails, and then you can come and check the issues and address them. - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_65,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," In Watson OpenScale, what data is used for Quality metrics computation? - -Quality metrics are calculated that use manually labeled feedback data and monitored deployment responses for this data. - -" -812C39CF410F9FE3F0D0E7C62ED1BC015370C849_66,812C39CF410F9FE3F0D0E7C62ED1BC015370C849," In Watson OpenScale, can the threshold be set for a metric other than 'Area under ROC' during configuration? - -No, currently, the threshold can be set only for the 'Area under ROC' metric. -" -2BB452B4C9E3458BC02A9D392961E9C643E402DE_0,2BB452B4C9E3458BC02A9D392961E9C643E402DE," Feature differences between watsonx deployments - -IBM watsonx as a Service and watsonx on Cloud Pak for Data software have some differences in features and implementation. IBM watsonx as a Service is a set of IBM Cloud services. Watsonx services on Cloud Pak for Data 4.8 are offered as software that you must install and maintain. Services that are available on both deployments also have differences in features on IBM watsonx as a Service compared to watsonx software on Cloud Pak for Data 4.8. - - - -* [Platform differences](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/feature-matrix.html?context=cdpaas&locale=enplatform) -* [Common features across services](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/feature-matrix.html?context=cdpaas&locale=encommon) -* [Watson Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/feature-matrix.html?context=cdpaas&locale=enws) -* [Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/feature-matrix.html?context=cdpaas&locale=enwml) -* [Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/feature-matrix.html?context=cdpaas&locale=enwos) - - - -" -2BB452B4C9E3458BC02A9D392961E9C643E402DE_1,2BB452B4C9E3458BC02A9D392961E9C643E402DE," Platform differences - -IBM watsonx as a Service and watsonx software on Cloud Pak for Data share a common code base, however, they differ in the following key ways: - - - -Platform differences - - Features As a service Software - - Software, hardware, and installation IBM watsonx is fully managed by IBM on IBM Cloud. Software updates are automatic. Scaling of compute resources and storage is automatic. You sign up at [https://dataplatform.cloud.ibm.com](https://dataplatform.cloud.ibm.com). You provide and maintain hardware. You install, maintain, and upgrade the software. See [Software requirements](https://www.ibm.com/docs/SSQNUZ_4.8.x/sys-reqs/software-reqs.html). - Storage You provision a IBM Cloud Object Storage service instance to provide storage. See [IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-object-storage.html). You provide persistent storage on a Red Hat OpenShift cluster. See [Storage requirements](https://www.ibm.com/docs/SSQNUZ_4.8.x/sys-reqs/storage-requirements.html). - Compute resources for running workloads Users choose the appropriate runtime for their jobs. Compute usage is billed based on the rate for the runtime environment and the duration of the job. See [Monitor account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html). You set up the number of Red Hat OpenShift nodes with the appropriate number of vCPUs. See [Hardware requirements](https://www.ibm.com/docs/SSQNUZ_4.8.x/sys-reqs/hardware-reqs.html) and [Monitoring the platform](https://www.ibm.com/docs/SSQNUZ_4.8.x/cpd/admin/platform-management.html). -" -2BB452B4C9E3458BC02A9D392961E9C643E402DE_2,2BB452B4C9E3458BC02A9D392961E9C643E402DE," Cost You buy each service that you need at the appropriate plan level. Many services bill for compute and other resource consumption. See each service page in the [IBM Cloud catalog](https://cloud.ibm.com/catalog) or in the services catalog on IBM watsonx, by selecting Administration > Services > Services catalog from the navigation menu. You buy a software license based on the services that you need. See [Cloud Pak for Data](https://cloud.ibm.com/catalog/content/ibm-cp-datacore-6825cc5d-dbf8-4ba2-ad98-690e6f221701-global). - Security, compliance, and isolation The data security, network security, security standards compliance, and isolation of IBM watsonx are managed by IBM Cloud. You can set up extra security and encryption options. See [Security of IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html). Red Hat OpenShift Container Platform provides basic security features. Cloud Pak for Data is assessed for various Privacy and Compliance regulations and provides features that you can use in preparation for various privacy and compliance assessments. You are responsible for additional security features, encryption, and network isolation. See [Security considerations](https://www.ibm.com/docs/SSQNUZ_4.8.x/cpd/plan/security.html). - Available services Most watsonx services are available in both deployment environments.
See [Services for IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-services.html). Includes many other services for other components and solutions. See [Services for Cloud Pak for Data 4.8](https://www.ibm.com/docs/SSQNUZ_4.8.x/svc-nav/head/services.html). -" -2BB452B4C9E3458BC02A9D392961E9C643E402DE_3,2BB452B4C9E3458BC02A9D392961E9C643E402DE," User management You add users and user groups and manage their account roles and permissions with IBM Cloud Identity and Access Management. See [Add users to the account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-addl-users.html).
You can also set up SAML federation on IBM Cloud. See [IBM Cloud docs: How IBM Cloud IAM works](https://cloud.ibm.com/docs/account?topic=account-iamoverview). You can add users and create user groups from the Administration menu. You can use the Identity and Access Management Service or use your existing SAML SSO or LDAP provider for identity and password management. You can create dynamic, attribute-based user groups. See [User management](https://www.ibm.com/docs/SSQNUZ_4.8.x/cpd/admin/users.html). - - - -" -2BB452B4C9E3458BC02A9D392961E9C643E402DE_4,2BB452B4C9E3458BC02A9D392961E9C643E402DE," Common features across services - -The following features that are provided with the platform are effectively the same for services on IBM watsonx as a Service and watsonx software on Cloud Pak for Data 4.8: - - - -* Global search for assets across the platform -* The Platform assets catalog for sharing connections across the platform -* Role-based user management within collaborative workspaces across the platform -* Common infrastructure for assets and workspaces -* A services catalog for adding services -* View compute usage from the Administration menu - - - -The following table describes differences in features across services between IBM watsonx as a Service and watsonx software on Cloud Pak for Data 4.8: - - - -Differences in common features across services - - Feature As a service Software - - Manage all projects Users with the Manage projects permission from the IAM service access Manager role for the IBM Cloud Pak for Data service can join any project with the Admin role and then manage or delete the project. Users with the Manage projects permission can join any project with the Admin role and then manage or delete the project. - Connections to remote data sources Most supported data sources are common to both deployment environments.
See [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html). See [Supported data sources](https://www.ibm.com/docs/SSQNUZ_4.8.x/cpd/access/data-sources.html). - Connection credentials that are personal or shared Connections in projects and catalogs can require personal credentials or allow shared credentials. Shared credentials can be disabled at the account level. Platform connections can require personal credentials or allow shared credentials. Shared credentials can be disabled at the platform level. - Connection credentials from secrets in a vault Not available Available - Kerberos authentication Not available Available for [some services and connections](https://www.ibm.com/docs/SSQNUZ_4.8.x/cpd/plan/kerberos.html) - Sample assets and projects from the Samples app Available Not available - Custom JDBC connector Not available Available starting in 4.8.0 - - - -" -2BB452B4C9E3458BC02A9D392961E9C643E402DE_5,2BB452B4C9E3458BC02A9D392961E9C643E402DE," Watson Studio - -The following Watson Studio features are effectively the same on IBM watsonx as a Service and watsonx software on Cloud Pak for Data 4.8: - - - -* Collaboration in projects and deployment spaces -* Accessing project assets programmatically -* Project import and export by using a project ZIP file -* Jupyter notebooks -* Job scheduling -* Data Refinery -* Watson Natural Language Processing for Python - - - -This table describes the feature differences between the Watson Studio service on the as-a-service and software deployment environments, differences between offering plans, and whether additional services are required. For more information about feature differences between offering plans on IBM watsonx, see [Watson Studio offering plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html). - - - -Differences in Watson Studio - - Feature As a service Software - - Sandbox project Created automatically Not available - Create project Create:
* An empty project
* A project from a sample in the Samples
* A project from file Create:
* An empty project
* A project from file
* A project with Git integration - Git integration * Publish notebooks on GitHub
* Publish notebooks as gist * Integrate a project with Git
* sync assets to repository in one project and use those assets into another project - Project terminal for advanced Git operations Not available Available in projects with default Git integration - Organize assets in projects with folders Not available Available starting with 4.8.0 - Foundation model inferencing Available Requires the watsonx.ai service. - Foundation model tuning Available Not available - Supported foundation models Most foundation models are available on both deployments. See [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html) Requires that the models are installed on the cluster. See [Supported foundation models](https://www.ibm.com/docs/SSQNUZ_4.8.x/wsj/analyze-data/fm-models.html). - AI guardrails for prompting Available Not available -" -2BB452B4C9E3458BC02A9D392961E9C643E402DE_6,2BB452B4C9E3458BC02A9D392961E9C643E402DE," Prompt variables Available Not available - Synthentic data generation Available Requires the Synthetic Data Generator service. - JupyterLab Not available Available in projects with Git integration - Visual Studio Code integration Not available Available - RStudio Cannot integrate with Git Can integrate with Git. Requires an RStudio Server Runtimes service. - Python scripts Not available Work with Python scripts in JupyterLab. Requires a Watson Studio Runtimes service. - Generate code to load data to a notebook by using the Flight service Not available Available - Manage notebook lifecycle Not available Use CPDCTL for notebook lifecycle management - Code package assets (set of dependent files in a folder structure) Not available Use CPDCTL to create code package assets in a deployment space - Promote notebooks to spaces Not available Available manually from the project's Assets page or programmatically by using CPDCTL - Python with GPU Support available for a single GPU type only (Nvidia K80) Support available for multiple Nvidia GPU types. Requires a Watson Studio Runtimes service. - Create and use custom images Not available Create custom images for Python (with and without GPU), R, JupyterLab (with and without GPU), RStudio, and SPSS environments. Requires a Watson Studio Runtimes and other applicable services. - Anaconda Repository Not available Use to create custom environments and custom images - Hadoop integration Not available Build and train models, and run Data Refinery flows on a Hadoop cluster. Requires the Execution Engine for Apache Hadoop service. - Decision Optimization Available Requires the Decision Optimization service. - SPSS Modeler Available Requires the SPSS Modeler service. - Watson Pipelines Available Requires the Watson Pipelines service. - - - -" -2BB452B4C9E3458BC02A9D392961E9C643E402DE_7,2BB452B4C9E3458BC02A9D392961E9C643E402DE," Watson Machine Learning - -The following Watson Machine Learning features are effectively the same on IBM watsonx as a Service and watsonx software on Cloud Pak for Data 4.8: - - - -* Collaboration in projects and deployment spaces -* Deploy models -* Deploy functions -* Watson Machine Learning REST APIs -* Watson Machine Learning Python client -* Create online deployments -* Scale and update deployments -* Define and use custom components -* Use Federated Learning to train a common model with separate and secure data sources -* Monitor deployments across spaces -* Updated forms for testing online deployment -* Use nested pipelines -* AutoAI data imputation -* AutoAI fairness evaluation -* AutoAI time series supporting features - - - -This table describes the differences in features between the Watson Machine Learning service on the as-a-service and software deployment environments, differences between offering plans, and whether additional services are required. For details about functionality differences between offering plans on IBM watsonx, see [Watson Machine Learning offering plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html). - - - -Feature differences between Watson Machine Learning deployments - - Feature As a service Software - - AutoAI training input Current [supported data sources](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) [Supported data sources](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) change by release - AutoAI experiment compute configuration 8 CPU and 32 GB [Different sizes available](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) - AutoAI limits on data size
and number of prediction targets Set limits [Limits differ by compute configuration](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) - AutoAI incremental learning Not available Available -" -2BB452B4C9E3458BC02A9D392961E9C643E402DE_8,2BB452B4C9E3458BC02A9D392961E9C643E402DE," Deploy using popular frameworks
and software specifications Check for latest [supported versions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-frame-and-specs.html) [Supported versions](https://www.ibm.com/docs/SSQNUZ_4.8.x/wsj/analyze-data/ml-manage-frame-and-specs.html) differ by release - Connect to databases for batch deployments Check for [support by deployment type](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html) Check for support by [deployment type](https://www.ibm.com/docs/SSQNUZ_4.8.x/wsj/analyze-data/deploy-batch-details.html)
and by version - Deploy and score Python scripts Available via Python client Create scripts in JupyterLab or Python client, then deploy - Deploy and batch score R Scripts Not available Available - Deploy Shiny apps Not available Create and deploy Shiny apps
Deploy from code package - Evaluate jobs for fairness, or drift Requires the Watson OpenScale service Requires the Watson OpenScale service - Evaluate online deployments in a space
for fairness, drift or explainability Not available Available
Requires the Watson OpenScale service - Control space creation No restrictions by role Use permissions to control who can view and create spaces - Import from GIT project to space Not available Available - Code package automatically created when importing
from Git project to space Not available Available - Update RShiny app from code package Not available Available - Track model details in a model inventory Register models to view factsheets with lifecycle details. Requires the IBM Knowledge Catalog service. Available
Requires the AI Factsheets service. - Create and use custom images Not available Create custom images for Python or SPSS - Notify collaborators about Pipeline events Not available Use Send Mail to notify collaborators -" -2BB452B4C9E3458BC02A9D392961E9C643E402DE_9,2BB452B4C9E3458BC02A9D392961E9C643E402DE," Import project or space file into a nonempty space Not available Available - Deep Learning Experiments Not available Requires the Watson Machine Learning Accelerator service - Provision and manage IBM Cloud service instances Add instances for Watson Machine Learning
or Watson OpenScale Services are provisioned on the cluster
by the administrator - - - -" -2BB452B4C9E3458BC02A9D392961E9C643E402DE_10,2BB452B4C9E3458BC02A9D392961E9C643E402DE," Watson OpenScale - -The following Watson OpenScale features are effectively the same on IBM watsonx as a Service and watsonx software on Cloud Pak for Data 4.8: - - - -* Evaluate deployments for fairness -* Evaluate the quality of deployments -* Monitor deployments for drift -* View and compare model results in an Insights dashboard -* Add deployments from the machine learning provider of your choice -* Set alerts to trigger when evaluations fall below a specified threshold -* Evaluate deployments in a user interface or notebook -* Custom evaluations and metrics -* View details about evaluations in model factsheets - - - -This table describes the differences in features between the Watson OpenScale service on the as-a-service and software deployment environments, differences between offering plans, and whether additional services are required. - - - -Differences IBM Watson OpenScale - - Feature As a service Software - - Upload pre-scored test data Not available Available - IBM SPSS Collaboration and Deployment Services Not available Available - Batch processing Not available Available - Support access control by user groups Not available Available - Free database and Postgres plans Available Postgres available starting in 4.8 - Set up multiple instances Not available Available - Integration with OpenPages Not available Available - - - -" -2BB452B4C9E3458BC02A9D392961E9C643E402DE_11,2BB452B4C9E3458BC02A9D392961E9C643E402DE," Learn more - - - -* [Services for IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-services.html) -* [Services for Cloud Pak for Data 4.8](https://www.ibm.com/docs/SSQNUZ_4.8.x/svc-nav/head/services.html) -* [Cloud deployment environment options for Cloud Pak for Data 4.8](https://www.ibm.com/docs/SSQNUZ_4.8.x/cpd/plan/deployment-environments.html) - - - -Parent topic:[Overview of watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html) -" -EC03E18490E47DB0EFBD6A00BDA7DDB85B0A14D7_0,EC03E18490E47DB0EFBD6A00BDA7DDB85B0A14D7," Get help - -You can get help with IBM watsonx through documentation, training, support, and community resources. - - - -* [Platform setup](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-help.html?context=cdpaas&locale=enplatform) -* [Training](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-help.html?context=cdpaas&locale=entraining) -* [Community resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-help.html?context=cdpaas&locale=encommunity) -* [Samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-help.html?context=cdpaas&locale=ensamples) -* [Support](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-help.html?context=cdpaas&locale=ensupport) - - - -" -EC03E18490E47DB0EFBD6A00BDA7DDB85B0A14D7_1,EC03E18490E47DB0EFBD6A00BDA7DDB85B0A14D7," Help with platform setup - -You must be the account owner or administrator for a billable IBM Cloud account to set up the IBM watsonx platform for your organization. To learn how to set up IBM watsonx, see [Setting up the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html). - -" -EC03E18490E47DB0EFBD6A00BDA7DDB85B0A14D7_2,EC03E18490E47DB0EFBD6A00BDA7DDB85B0A14D7," Training - -Start training your model by data preparation, analysis, and visualization. Learn how to build, deploy, and trust your models. Use the following tutorials and videos to get started with IBM watsonx: - - - -* [Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html) -* [Video library](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html) - - - -" -EC03E18490E47DB0EFBD6A00BDA7DDB85B0A14D7_3,EC03E18490E47DB0EFBD6A00BDA7DDB85B0A14D7," Community resources - -Share and gain knowledge using IBM Community and get the most out of our services. - -Explore blogs, forums, and other resources in these communities: - - - -* [watsonx.ai Community](https://community.ibm.com/community/user/watsonx/communities/community-home?communitykey=81927b7e-9a92-4236-a0e0-018a27c4ad6e) -* [Data Science Community](https://community.ibm.com/community/user/datascience/home) -* [Watson Studio Community](https://community.ibm.com/community/user/watsonstudio/home) - - - -Find more blogs and forums on the following platforms: - - - -* [IBM Data and AI on Medium](https://medium.com/ibm-data-ai) -* [Watson Studio Stack Overflow](https://stackoverflow.com/questions/tagged/watson-studio) - - - -" -EC03E18490E47DB0EFBD6A00BDA7DDB85B0A14D7_4,EC03E18490E47DB0EFBD6A00BDA7DDB85B0A14D7," Samples - -You can use sample projects, notebooks, and data sets to get started fast. - -Find samples in the following locations: - - - -* [Samples](https://dataplatform.cloud.ibm.com/gallery) -* [IBM Data Science assets in GitHub](https://github.com/IBMDataScience) - - - -" -EC03E18490E47DB0EFBD6A00BDA7DDB85B0A14D7_5,EC03E18490E47DB0EFBD6A00BDA7DDB85B0A14D7," Support - -IBM Cloud provides you with three paid support options to customize your experience according to your business needs. Choose from [Basic, Advanced, or Premium support plan](https://cloud.ibm.com/docs/get-support?topic=get-support-support-plans). The level of support that you select determines the severity that you can assign to support cases and your level of access to the tools available in the Support Center. - -You can also go to [IBM Cloud Support Center](https://cloud.ibm.com/unifiedsupport/supportcenter) to open a support case, browse FAQs, or ask questions to IBM Chat Bot. - -" -EC03E18490E47DB0EFBD6A00BDA7DDB85B0A14D7_6,EC03E18490E47DB0EFBD6A00BDA7DDB85B0A14D7," Learn more - - - -* [Known issues](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html) -* [FAQ](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html) -" -292D19849E8FBE48869F5E3A50439964563A90D1_0,292D19849E8FBE48869F5E3A50439964563A90D1," Quick start: Analyze data in a Jupyter notebook - -You can create a notebook in which you run code to prepare, visualize, and analyze data, or build and train a model. Read about Jupyter notebooks, then watch a video and take a tutorial that’s suitable for users with some knowledge of Python code. - -Your basic workflow includes these tasks: - - - -1. Open your sandbox project. Projects are where you can collaborate with others to work with data. -2. Add your data to the project. You can add CSV files or data from a remote data source through a connection. -3. Create a notebook in the project. -4. Add code to the notebook to load and analyze your data. -5. Run your notebook and share the results with your colleagues. - - - -" -292D19849E8FBE48869F5E3A50439964563A90D1_1,292D19849E8FBE48869F5E3A50439964563A90D1," Read about notebooks - -A Jupyter notebook is a web-based environment for interactive computing. You can run small pieces of code that process your data, and you can immediately view the results of your computation. Notebooks include all of the building blocks you need to work with data: - - - -* The data -* The code computations that process the data -* Visualizations of the results -* Text and rich media to enhance understanding - - - -[Read more about notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-editor.html) - -" -292D19849E8FBE48869F5E3A50439964563A90D1_2,292D19849E8FBE48869F5E3A50439964563A90D1," Watch a video about notebooks - -![Watch Video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) Watch this video to learn the basics of Jupyter notebooks. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -292D19849E8FBE48869F5E3A50439964563A90D1_3,292D19849E8FBE48869F5E3A50439964563A90D1," Try a tutorial to create a notebook - -In this tutorial, you will complete these tasks: - - - -* [Task 1: Open a project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html?context=cdpaas&locale=enstep01) -* [Task 2: Add a notebook to your project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html?context=cdpaas&locale=enstep02) -* [Task 3: Load a file and save the notebook.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html?context=cdpaas&locale=enstep03) -* [Task 4: Find and edit the notebook.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html?context=cdpaas&locale=enstep04) -* [Task 5: Share read-only version of the notebook.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html?context=cdpaas&locale=enstep05) -* [Task 6: Schedule a notebook to run at a different time.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html?context=cdpaas&locale=enstep06) - - - -This tutorial will take approximately 15 minutes to complete. - -Expand all sections - - - -* Tips for completing this tutorial - -" -292D19849E8FBE48869F5E3A50439964563A90D1_4,292D19849E8FBE48869F5E3A50439964563A90D1,"### Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.The following animated image shows how to use the video picture-in-picture and table of contents features: ![How to use picture-in-picture and chapters](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/pip-and-chapters.gif){: width=""560px"" height=""315px"" data-tearsheet=""this""} ### Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235){: new_window}. ### Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. ![Side-by-side tutorial and UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/tutorial-side-by-side.png){: width=""560px"" height=""315px"" data-tearsheet=""this""} Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later. -" -292D19849E8FBE48869F5E3A50439964563A90D1_5,292D19849E8FBE48869F5E3A50439964563A90D1,"[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 1: Open a project - -You need a project to store the notebook and data asset. You can use your sandbox project or create a project. Follow these steps to open a project and add a data asset to the project: 1. From the navigation menu ![Navigation menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/navigation-menu.svg){: iih}, choose Projects > View all projects 1. Open your sandbox project. If you want to use a new project: 1. Click New project. 1. Select Create an empty project. 1. Enter a name and optional description for the project. 1. Choose an existing [object storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html){: new_window} or create a new one. 1. Click Create. 1. From the navigation menu, click Samples. 1. Search for an interesting data set, and select the data set. 1. Click Add to project. 1. Select the project from the list, and click Add. 1. After the data set is added, click View Project. 1. In the project, click the Assets tab to see the data set. For more information, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html){: new_window}. -### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the Assets tab in the project. - -" -292D19849E8FBE48869F5E3A50439964563A90D1_6,292D19849E8FBE48869F5E3A50439964563A90D1,"![The following image shows the Assets tab in the project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/gs-notebook-assets-tab-01.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 2: Add a notebook to your project - -![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 00:06. Follow these steps to create a new notebook in your project. 1. In your project, on the Assets tab, click New asset > Work with data and models in Python or R notebooks. 1. Type a name and description (optional). 1. Select a runtime environment for this notebook. 1. Click Create. Wait for the notebook editor to load. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows blank notebook. - -![The following image shows the blank notebook.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/gs-notebook-editor.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 3: Load a file and save the notebook - -" -292D19849E8FBE48869F5E3A50439964563A90D1_7,292D19849E8FBE48869F5E3A50439964563A90D1,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 00:23. Now you can access the data asset in your notebook that you uploaded to your project earlier. Follow these steps to load data into a data frame: 1. Click in an empty code cell in your notebook. 1. Click the Code snippets icon ( ![the Code snippets icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/code-snippets-icon.png){: iih}). 1. In the side pane, click Read data. 1. Click Select data from project. 1. Locate the data asset from the project, and click Select. 1. In the Load as drop-down list, select the load option that you prefer. 1. Click Insert code to cell. The code to read and load the data asset is inserted into the cell. 1. Click Run to run your code. The first few rows of your data set will display. 1. To save a version of your notebook, click File > Save Version. You can also just save your notebook with File > Save. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the notebook with the pandas DataFrame. - -![The following image shows the notebook with the pandas DataFrame.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/gs-notebook-cell01.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 4: Find and edit the notebook - -" -292D19849E8FBE48869F5E3A50439964563A90D1_8,292D19849E8FBE48869F5E3A50439964563A90D1,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 01:19. Follow these steps to locate the saved notebook on the Assets tab, and edit the notebook: 1. In the project navigation trail, click your project name to return to your project. 1. Click the Assets tab to find the notebook. 1. When you click the notebook, it will open in READ ONLY mode. 1. To edit the notebook, click the pencil icon ![Pencil icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/edit.svg){: iih}. 1. Click the Information icon ![Information icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/information.svg){: iih} to open the Information panel. 1. On the General tab, edit the name and description of the notebook. 1. Click the Environment tab to see how you can change the environment used to run the notebook or update the runtime status to either stop and restart. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the notebook with the Information panel displayed. - -![The following image shows the notebook with the Information panel displayed.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/gs-notebook-environment.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 5: Share read-only version of the notebook - -" -292D19849E8FBE48869F5E3A50439964563A90D1_9,292D19849E8FBE48869F5E3A50439964563A90D1,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 01:52. Follow these steps to create a link to the notebook to share with colleagues: 1. Click the Share icon ![Share icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/share.svg){: iih} if you would like to share the read-only view of the notebook. 1. Click to turn on the Share with anyone who has the link toggle button. 1. Select what content you would like to share through a link or social media. 1. Click the Copy icon ![Copy icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/copy.svg){: iih} to copy a direct link to this notebook. 1. Click Close. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the Share dialog box. - -![The following image shows the Share dialog box.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/gs-notebook-share.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 6: Schedule a notebook to run at a different time - -" -292D19849E8FBE48869F5E3A50439964563A90D1_10,292D19849E8FBE48869F5E3A50439964563A90D1,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 02:08. Follow these steps to create a job to schedule the notebook to run at a specific time or repeat based on a schedule: 1. Click the Jobs icon, and select Create a job. -![Create a job](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/gs-notebook-create-job.png) 1. Provide the name and description of the job, and click Next. 1. Select the notebook version and environment runtime, and click Next. 1. (Optional) Click the toggle button to schedule a run. Specify the date, time and if you would like the job to repeat, and click Next. 1. (Optional) click the toggle button to receive notifications for this job, and click Next. 1. Review the details, and click either Create (to create the job, but not run the job immediately) or Create and run (to run the job immediately). 1. The job will display in the Jobs tab in the project. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the Jobs tab. - -![The following image shows the Jobs tab.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/gs-notebook-job.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html?context=cdpaas&locale=envideo-preview) - - - -" -292D19849E8FBE48869F5E3A50439964563A90D1_11,292D19849E8FBE48869F5E3A50439964563A90D1," Next steps - -Now you can use this data set for further analysis. For example, you or other users can do any of these tasks: - - - -* [Cleansing and shaping data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html) -* [Build and train a model with the data asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html) - - - -" -292D19849E8FBE48869F5E3A50439964563A90D1_12,292D19849E8FBE48869F5E3A50439964563A90D1," Additional resources - - - -* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html). -* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands-on experience: - -![Notebook icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/notebook.svg)[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models. - -![Project icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/ibm-cloud--projects.svg)[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets. - -![Data set icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/data--set.svg)[Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models. - -![Prompt icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/prompt.svg)[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model. - -" -292D19849E8FBE48869F5E3A50439964563A90D1_13,292D19849E8FBE48869F5E3A50439964563A90D1,"![Model icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/model.svg)[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab. - - - -Parent topic:[Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html) -" -316974F0A70EE2199BF6CD912E62BFB53D200F0A_0,316974F0A70EE2199BF6CD912E62BFB53D200F0A," Quick start: Build and deploy a machine learning model in a Jupyter notebook - -You can create, train, and deploy machine learning models with Watson Machine Learning in a Jupyter notebook. Read about the Jupyter notebooks, then watch a video and take a tutorial that’s suitable for intermediate users and requires coding. - -Required services : Watson Studio : Watson Machine Learning - -Your basic workflow includes these tasks: - - - -1. Open your sandbox project. Projects are where you can collaborate with others to work with data. -2. Add a notebook to the project. You can create a blank notebook or import a notebook from a file or GitHub repository. -3. Add code and run the notebook. -4. Review the model pipelines and save the desired pipeline as a model. -5. Deploy and test your model. - - - -" -316974F0A70EE2199BF6CD912E62BFB53D200F0A_1,316974F0A70EE2199BF6CD912E62BFB53D200F0A," Read about Jupyter notebooks - -A Jupyter notebook is a web-based environment for interactive computing. If you choose to build a machine learning model in a notebook, you should be comfortable with coding in a Jupyter notebook. You can run small pieces of code that process your data, and then immediately view the results of your computation. Using this tool, you can assemble, test, and run all of the building blocks you need to work with data, save the data to Watson Machine Learning, and deploy the model. - -[Read more about training models in notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html) - -[Learn about other ways to build models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html) - -" -316974F0A70EE2199BF6CD912E62BFB53D200F0A_2,316974F0A70EE2199BF6CD912E62BFB53D200F0A," Watch a video about creating a model in a Jupyter notebook - -![Watch Video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) Watch this video to see how to train, deploy, and test a machine learning model in a Jupyter notebook. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -316974F0A70EE2199BF6CD912E62BFB53D200F0A_3,316974F0A70EE2199BF6CD912E62BFB53D200F0A," Try a tutorial to create a model in a Jupyter notebook - -In this tutorial, you will complete these tasks: - - - -* [Task 1: Open a project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=enstep01) -* [Task 2: Add a notebook to your project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=enstep02) -* [Task 3: Set up the environment.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=enstep03) -* [Task 4: Run the notebook:](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=enstep04) - - - -* Build and train a model. -* Save a pipeline as a model. -* Deploy the model. -* Test the deployed model. - - - -* [Task 5: View and test the deployed model in the deployment space.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=enstep05) - - - - - -* [(Optional) Clean up.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=enstep06) - - - -This tutorial will take approximately 30 minutes to complete. - -" -316974F0A70EE2199BF6CD912E62BFB53D200F0A_4,316974F0A70EE2199BF6CD912E62BFB53D200F0A," Sample data - -The sample data used in this tutorial is from data that is part of scikit-learn and will be used to train a model to recognize images of hand-written digits, from 0-9. - -Expand all sections - - - -* Tips for completing this tutorial - -" -316974F0A70EE2199BF6CD912E62BFB53D200F0A_5,316974F0A70EE2199BF6CD912E62BFB53D200F0A,"### Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.The following animated image shows how to use the video picture-in-picture and table of contents features: ![How to use picture-in-picture and chapters](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/pip-and-chapters.gif){: width=""560px"" height=""315px"" data-tearsheet=""this""} ### Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235){: new_window}. ### Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. ![Side-by-side tutorial and UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/tutorial-side-by-side.png){: width=""560px"" height=""315px"" data-tearsheet=""this""} Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later. -" -316974F0A70EE2199BF6CD912E62BFB53D200F0A_6,316974F0A70EE2199BF6CD912E62BFB53D200F0A,"[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 1: Open a project - -You need a project to store the data and the AutoAI experiment. You can use your sandbox project or create a project. 1. From the navigation menu ![Navigation menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/navigation-menu.svg){: iih}, choose Projects > View all projects 1. Open your sandbox project. If you want to use a new project: 1. Click New project. 1. Select Create an empty project. 1. Enter a name and optional description for the project. 1. Choose an existing [object storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html){: new_window} or create a new one. 1. Click Create. 1. When the project opens, click the Manage tab and select the Services and integrations page. ![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 00:07. 1. On the IBM services tab, click Associate service. 1. Select your Watson Machine Learning instance. If you don't have a Watson Machine Learning service instance provisioned yet, follow these steps: 1. Click New service. 1. Select Watson Machine Learning. 1. Click Create. 1. Select the new service instance from the list. 1. Click Associate service. 1. If necessary, click Cancel to return to the Services & Integrations page. For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html){: new_window}. -" -316974F0A70EE2199BF6CD912E62BFB53D200F0A_7,316974F0A70EE2199BF6CD912E62BFB53D200F0A,"For more information on associated services, see [Adding associated services](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html){: new_window}. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the new project. - -![The following image shows the new project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/notebook-new-project.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 2: Add a notebook to your project - -" -316974F0A70EE2199BF6CD912E62BFB53D200F0A_8,316974F0A70EE2199BF6CD912E62BFB53D200F0A,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 00:18. You will use a sample notebook in this tutorial. Follow these steps to add the sample notebook to your project: 1. Access the [Use sckit-learn to recognize hand-written digits notebook](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e20607d75c8473daaade1e77c21717d4){: new_window} in the Samples. 1. Click Add to project. 1. Select the project from the list, and click Add. 1. Verify the notebook name and description (optional). 1. Select a runtime environment for this notebook. 1. Click Create. Wait for the notebook editor to load. 1. From the menu, click Kernel > Restart & Clear Output, then confirm by clicking Restart and Clear All Outputs to clear the output from the last saved run. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the new notebook. - -![The following image shows the new notebook.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/notebook-new-notebook.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 3: Set up the environment - -" -316974F0A70EE2199BF6CD912E62BFB53D200F0A_9,316974F0A70EE2199BF6CD912E62BFB53D200F0A,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 00:44. The first section in the notebook sets up the environment by specifying your IBM Cloud credentials and Watson Machine Learning service instance location. Follow these steps to set up the environment in your notebook: 1. Scroll to the Set up the environment section. 1. Choose a method to obtain the API key and location. - Run the IBM Cloud CLI commands in the notebook from a command prompt. - Use the IBM Cloud console. 1. Launch the [API keys section in the IBM Cloud Console](https://cloud.ibm.com/iam/apikeys){: new_window}, and [create an API key](https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=uicreate_user_key){: new_window}. 1. Access your [IBM Cloud resource list](https://cloud.ibm.com/resources){: new_window}, view your Watson Machine Learning service instance, and note the Location. 1. See the Watson Machine Learning [API Docs](https://cloud.ibm.com/apidocs/machine-learning){: new_window} for the correct endpoint URL. For example, Dallas is in us-south. 1. Paste your API key and location into cell 1. 1. Run cells 1 and 2. 1. Run cell 3 to install the ibm-watson-machine-learning package. 1. Run cell 4 to import the API client and create the API client instance using your credentials. 1. Run cell 5 to see a list of all existing deployment spaces. If you do not have a deployment space, then follow these steps: 1. Open another tab with your watsonx deployment. 1. From the navigation menu ![Navigation menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/navigation-menu.svg){: iih}, click Deployments. 1. Click New deployment space. 1. " -316974F0A70EE2199BF6CD912E62BFB53D200F0A_10,316974F0A70EE2199BF6CD912E62BFB53D200F0A,"Add a name and optional description for the deployment. 1. Click Create, then View new space. 1. Click the Manage tab. 1. Copy the Space GUID and close the tab, this value will be your space_id. 1. Copy and paste the appropriate deployment space ID into cell 6, then run cell 6 and cell 7 to set the default space. \ ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the notebook with all of the environment variables set up. -![The following image shows the notebook with all of the environment variables set up.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/notebook-cell07.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 4: Run the notebook - -" -316974F0A70EE2199BF6CD912E62BFB53D200F0A_11,316974F0A70EE2199BF6CD912E62BFB53D200F0A,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 02:14. Now that all of the environment variables are set up, you can run the rest of the cells in the notebook. Follow these steps to read through the comments, run the cells, and review the output: 1. Run the cells in the Explore data section. 1. Run the cells in the Create a scikit-learn model section to. 1. Prepare the data by splitting it into three data sets (train, test, and score). 1. Create the pipeline. 1. Train the model. 1. Evaluate the model using the test data. 1. Run the cells in the Publish model section to publish the model, get model details, and get all models. 1. Run the cells in the Create model deployment section. 1. Run the cells in the Get deployment details section. 1. Run the cells in the Score section, which sends a scoring request to the deployed model and shows the prediction. 1. Click *File > Save to save the notebook and its output. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the notebook with the prediction. - -![The following image shows the notebook with the prediction.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/notebook-prediction.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 5: View and test the deployed model in the deployment space - -" -316974F0A70EE2199BF6CD912E62BFB53D200F0A_12,316974F0A70EE2199BF6CD912E62BFB53D200F0A,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 04:07. You can also view the model deployment directly from the deployment space. Follow these steps to test the deployed model in the space. 1. From the navigation menu ![Navigation menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/navigation-menu.svg){: iih}, click Deployments. 1. Click the Spaces tab. 1. Select the appropriate deployment space from the list. 1. Click Scikit model. 1. Click Deployment of scikit model. 1. Review the Endpoint and Code snippets. 1. Click the Test tab. " -316974F0A70EE2199BF6CD912E62BFB53D200F0A_13,316974F0A70EE2199BF6CD912E62BFB53D200F0A,"You can test the deployed model by pasting the following JSON code: json {""input_data"": [{""values"": 0.0, 0.0, 5.0, 16.0, 16.0, 3.0, 0.0, 0.0, 0.0, 0.0, 9.0, 16.0, 7.0, 0.0, 0.0, 0.0, 0.0, 0.0, 12.0, 15.0, 2.0, 0.0, 0.0, 0.0, 0.0, 1.0, 15.0, 16.0, 15.0, 4.0, 0.0, 0.0, 0.0, 0.0, 9.0, 13.0, 16.0, 9.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 14.0, 12.0, 0.0, 0.0, 0.0, 0.0, 5.0, 12.0, 16.0, 8.0, 0.0, 0.0, 0.0, 0.0, 3.0, 15.0, 15.0, 1.0, 0.0, 0.0], 0.0, 0.0, 6.0, 16.0, 12.0, 1.0, 0.0, 0.0, 0.0, 0.0, 5.0, 16.0, 13.0, 10.0, 0.0, 0.0, 0.0, 0.0, 0.0, 5.0, 5.0, 15.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 8.0, 15.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 13.0, 13.0, 0.0, 0.0, 0.0, 0.0, 0.0, 6.0, 16.0, 9.0, 4.0, 1.0, 0.0, 0.0, 3.0, 16.0, 16.0, 16.0, 16.0, 10.0, 0.0, 0.0, 5.0, 16.0, 11.0, 9.0, 6.0, 2.0]]}]} 1. Click Predict. The resulting prediction indicates that the hand-written digits are 5 and 4. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the Test tab with the prediction. -![The following image shows the *Test* tab with the prediction.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/notebook-test-tab.png){: width=""100%"" } -" -316974F0A70EE2199BF6CD912E62BFB53D200F0A_14,316974F0A70EE2199BF6CD912E62BFB53D200F0A,"[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=envideo-preview) - - - - - -* (Optional) Task 6: Clean up - -If you'd like to remove all of the assets created by the notebook, create a new notebook based on the [Machine Learning artifacts management notebook](https://github.com/IBM/watson-machine-learning-samples/blob/master/cloud/notebooks/python_sdk/instance-management/Machine%20Learning%20artifacts%20management.ipynb){: new_window}. A link to this notebook is also available in the Clean up section of the Use scikit-learn to recognize hand-written digits notebook used in this tutorial. -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html?context=cdpaas&locale=envideo-preview) - - - -" -316974F0A70EE2199BF6CD912E62BFB53D200F0A_15,316974F0A70EE2199BF6CD912E62BFB53D200F0A," Next steps - -Now you can use this data set for further analysis. For example, you or other users can do any of these tasks: - - - -* [Cleansing and shaping data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html) -* [Analyze the data in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html) - - - -" -316974F0A70EE2199BF6CD912E62BFB53D200F0A_16,316974F0A70EE2199BF6CD912E62BFB53D200F0A," Additional resources - - - -* Try these other methods to build models: - - - -* [Build and deploy a machine learning model with AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html) -* [Build and deploy a machine learning model with SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html) -* [Build and deploy a Decision Optimization model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html) - - - -* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html). -* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands-on experience: - -![Notebook icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/notebook.svg)[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models. - -![Project icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/ibm-cloud--projects.svg)[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets. - -" -316974F0A70EE2199BF6CD912E62BFB53D200F0A_17,316974F0A70EE2199BF6CD912E62BFB53D200F0A,"![Data set icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/data--set.svg)[Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models. - -![Prompt icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/prompt.svg)[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model. - -![Model icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/model.svg)[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab. -* Find more [Python client samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html). - - - -Parent topic:[Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html) -" -F870AF12BC30438B0DAB4FF5365B5279F2F9A93A_0,F870AF12BC30438B0DAB4FF5365B5279F2F9A93A," Quick start: Build a model using SPSS Modeler - -You can create, train, and deploy models using SPSS Modeler. Read about SPSS Modeler, then watch a video and follow a tutorial that’s suitable for beginners and requires no coding. - -Your basic workflow includes these tasks: - - - -1. Open your sandbox project. Projects are where you can collaborate with others to work with data. -2. Add an SPSS Modeler flow to the project. -3. Configure the nodes on the canvas, and run the flow. -4. Review the model details and save the model. -5. Deploy and test your model. - - - -" -F870AF12BC30438B0DAB4FF5365B5279F2F9A93A_1,F870AF12BC30438B0DAB4FF5365B5279F2F9A93A," Read about SPSS Modeler - -With SPSS Modeler flows, you can quickly develop predictive models using business expertise and deploy them into business operations to improve decision making. Designed around the long-established SPSS Modeler client software and the industry-standard CRISP-DM model it uses, the flows interface supports the entire data mining process, from data to better business results. - -SPSS Modeler offers a variety of modeling methods taken from machine learning, artificial intelligence, and statistics. The methods available on the node palette allow you to derive new information from your data and to develop predictive models. Each method has certain strengths and is best suited for particular types of problems. - -[Read more about SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html) - -[Learn about other ways to build models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html) - -" -F870AF12BC30438B0DAB4FF5365B5279F2F9A93A_2,F870AF12BC30438B0DAB4FF5365B5279F2F9A93A," Watch a video about creating a model using SPSS Modeler - -![Watch Video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) Watch this video to see how to create and run an SPSS Modeler flow to train a machine learning model. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -F870AF12BC30438B0DAB4FF5365B5279F2F9A93A_3,F870AF12BC30438B0DAB4FF5365B5279F2F9A93A," Try a tutorial to create a model using SPSS Modeler - -In this tutorial, you will complete these tasks: - - - -* [Task 1: Open a project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=enstep01) -* [Task 2: Add a data set to your project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=enstep02) -* [Task 3: Create the SPSS Modeler flow.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=enstep03) -* [Task 4: Add the nodes to the SPSS Modeler flow.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=enstep04) -* [Task 5: Run the SPSS Modeler flow and explore the model details.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=enstep05) -* [Task 6: Evaluate the model.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=enstep06) -* [Task 7: Deploy and test the model with new data.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=enstep07) - - - -This tutorial will take approximately 30 minutes to complete. - -" -F870AF12BC30438B0DAB4FF5365B5279F2F9A93A_4,F870AF12BC30438B0DAB4FF5365B5279F2F9A93A," Example data - -The data set used in this tutorial is from the University of California, Irvine, and is the result of an extensive study based on hospital admissions over a period of time. The model will use three important factors to help predict chronic kidney disease. - -Expand all sections - - - -* Tips for completing this tutorial - -" -F870AF12BC30438B0DAB4FF5365B5279F2F9A93A_5,F870AF12BC30438B0DAB4FF5365B5279F2F9A93A,"### Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.The following animated image shows how to use the video picture-in-picture and table of contents features: ![How to use picture-in-picture and chapters](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/pip-and-chapters.gif){: width=""560px"" height=""315px"" data-tearsheet=""this""} ### Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235){: new_window}. ### Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. ![Side-by-side tutorial and UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/tutorial-side-by-side.png){: width=""560px"" height=""315px"" data-tearsheet=""this""} Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later. -" -F870AF12BC30438B0DAB4FF5365B5279F2F9A93A_6,F870AF12BC30438B0DAB4FF5365B5279F2F9A93A,"[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 1: Open a project - -You need a project to store the SPSS Modeler flow. You can use your sandbox project or create a project. 1. From the navigation menu ![Navigation menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/navigation-menu.svg){: iih}, choose Projects > View all projects 1. Open your sandbox project. If you want to use a new project: 1. Click New project. 1. Select Create an empty project. 1. Enter a name and optional description for the project. 1. Choose an existing [object storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html){: new_window} or create a new one. 1. Click Create. For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html){: new_window}. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the new project. - -![The following image shows the new project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/spss-new-project.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 2: Add the data set to your project - -" -F870AF12BC30438B0DAB4FF5365B5279F2F9A93A_7,F870AF12BC30438B0DAB4FF5365B5279F2F9A93A,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 00:13. This tutorial uses a sample data set. Follow these steps to add the sample data set to your project: 1. Access the [UCI ML Repository: Chronic Kidney Disease Data Set](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/a25870b7249ad55605de7a2e59567a7e){: new_window} in the Samples. 1. Click Preview. There are three important factors that help predict chronic kidney disease which are available as part of this analysis: the age of the test subject, the serum creatinine test results, and diabetes test results. And the class value indicates if the patient has been previously diagnosed for kidney disease. 1. Click Add to project. 1. Select the project from the list, and click Add. 1. Click View Project. 1. From your project's Assets page, locate the UCI ML Repository Chronic Kidney Disease Data Set.csv file. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the Assets tab in the project. - -![The following image shows the *Assets* tab in the project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/spss-project-asset.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 3: Create the SPSS Modeler flow - -" -F870AF12BC30438B0DAB4FF5365B5279F2F9A93A_8,F870AF12BC30438B0DAB4FF5365B5279F2F9A93A,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 01:11. Follow these steps to create an SPSS Modeler flow in the project: 1. Click New asset > Build models as a visual flow. 1. Type a name and description for the flow. 1. For the runtime definition, accept the Default SPSS Modeler S definition. 1. Click Create. This opens up the Flow Editor that you'll use to create the flow. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the flow editor. - -![The following image shows the flow editor.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/spss-flow-editor.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 4: Add the nodes to the SPSS Modeler flow - -" -F870AF12BC30438B0DAB4FF5365B5279F2F9A93A_9,F870AF12BC30438B0DAB4FF5365B5279F2F9A93A,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 01:31. After you load the data, you must transform the data. Create a simple flow by dragging transformers and estimators onto the canvas and connecting them to the data source. Use the following nodes from the palette: - Data Asset: loads the csv file from the project - Partition: divides the data into training and testing segments - Type: sets the data type. Use it to designate the class field as a target type. - C5.0: a classification algorithm - Analysis: view the model and check its accuracy - Table: preview the data with predictions Follow these steps to create the flow: 1. Add the data asset node: 1. From the Import section, drag the Data Asset node onto the canvas. 1. Double-click the Data Asset node to select the data set. 1. Select Data asset > UCI ML Repository Chronic Kidney Disease Data Set.csv. 1. Click Select. 1. View the Data Asset properties. 1. Click Save. 1. Add the Partition node: 1. From the Field Operations section, drag the Partition node onto the canvas. 1. Connect the Data Asset node to the Partition node. 1. Double-click the Partition node to view its properties. The default partition divides half of the data for training and the other half for testing. 1. Click Save. 1. Add the Type node: 1. From the Field Operations section, drag the Type node onto the canvas. 1. Connect the Partition node to the Type node. 1. Double-click the Type node to view its properties. The Type node specifies the measurement level for each field. This source data file uses four different measurement levels: Continuous, Categorical, Nominal, Ordinal, and Flag. 1. Search for the class field. For each field, the role indicates the part that each field plays in modeling. Change the classRole to Target - the field you want to predict. 1. Click Save. 1. Add the C5.0 classification algorithm node: 1. " -F870AF12BC30438B0DAB4FF5365B5279F2F9A93A_10,F870AF12BC30438B0DAB4FF5365B5279F2F9A93A,"From the Modeling section, drag the C5.0 node onto the canvas. 1. Connect the Type node to the C5.0 node. 1. Double-click the C5.0 node to view its properties. By default, the C5.0 algorithm builds a decision tree. A C5.0 model works by splitting the sample based on the field that provides the maximum information gain. Each sub-sample defined by the first split is then split again, usually based on a different field, and the process repeats until the subsamples can't be split any further. Finally, the lowest-level splits are reexamined, and those that don't contribute significantly to the value of the model are removed. 1. Toggle on Use settings defined in this node. 1. For Target, select class. 1. In the Inputs section, click Add columns. 1. Clear the checkbox next to Field name. 1. Select age, sc, dm. 1. Click OK. 1. Click Save. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the completed flow. -![flow showing Data Asset node, Partition node, Type node, and C5.0 class node](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/spss-completed-flow.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 5: Run the SPSS Modeler flow and explore the model details - -" -F870AF12BC30438B0DAB4FF5365B5279F2F9A93A_11,F870AF12BC30438B0DAB4FF5365B5279F2F9A93A,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 04:20. Now that you have designed the flow, follow these steps to run the flow, and examine the tree diagram to see the decision points: 1. Right-click the C5.0 node and select Run. Running the flow generates a new model nugget on the canvas. 1. Right-click the model nugget and select View Model to view the model details. 1. View the Model Information which provides a model summary. 1. Click Top Decision Rules. A table displays a series of rules that were used to assign individual records to child nodes based on the values of different input fields. 1. Click Feature Importance. A chart shows the relative importance of each predictor in estimating the model. From this, you can see that serum creatinine is easily the most significant factor, with diabetes being the next most significant factor. 1. Click Tree Diagram. The same model is displayed in the form of a tree, with a node at each decision point. 1. Hover over the top node, which provides a summary for all the records in the data set. Almost 40% of the cases in the data set are classified as not diagnosed with kidney disease. The tree can provide additional clues as to what factors might be responsible. 1. Notice the two branches stemming from the top node, which indicates a split by serum creatinine. - Review the branch that shows records where the serum creatinine is greater than 1.25. In this case, 100% of those patients have a positive kidney disease diagnosis. - Review the branch that shows records where the serum creatinine is less than or equal to 1.25. Almost 80% of those patients don't have a positive kidney disease diagnosis, but almost 20% with lower serum creatinine were still diagnosed with kidney disease. 1. " -F870AF12BC30438B0DAB4FF5365B5279F2F9A93A_12,F870AF12BC30438B0DAB4FF5365B5279F2F9A93A,"Notice the branches stemming from sc<=1.250, which is split by diabetes. - Review the branch that shows patients with low serum creatinine (sc<=1.250) and diagnosed diabetes (dm=yes). 100% of these patients were also diagnosed with kidney disease. - Review the branch that shows patients with low serum creatinine (sc<=1.250) and no diabetes (dm=no), 85% were not diagnosed with kidney disease, but 15% of them were still diagnosed with kidney disease. 1. Notice the branches stemming from dm = no, which is split by the last significant factor, age. - Review the branch that shows patients 14 years old or younger (age <= 14). This branch shows that 75% of young patients with low serum creatinine and no diabetes were at risk of getting kidney disease. - Review the branch that shows patients older than 14 years old (age > 14). This branch shows that only 12% of patients over 14 years old with low serum creatinine and no diabetes were at risk of getting kidney disease. 1. Close the model details. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the tree diagram. -![The following image shows the tree diagram.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/spss-tree-diagram.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 6: Evaluate the model - -" -F870AF12BC30438B0DAB4FF5365B5279F2F9A93A_13,F870AF12BC30438B0DAB4FF5365B5279F2F9A93A,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 07:24. Follow these steps to use the Analysis and Table nodes to evaluate the model: 1. From the Outputs section, drag the Analysis node onto the canvas. 1. Connect the Model nugget to the Analysis node. 1. Right-click the Analysis node, and select Run. 1. From the Outputs panel, open the Analysis, which shows that the model correctly predicted a kidney disease diagnosis almost 95% of the time. Close the Analysis. 1. Right-click the Analysis node, and select Save branch as a model. 1. For the Model name, type Kidney Disease Analysis{: .cp}. 1. Click Save. 1. Click Close. 1. From the Outputs section, drag the Table node onto the canvas. 1. Connect the Model nugget to the Table node. 1. Right-click the Table node, and select Preview data. 1. When the Preview displays, scroll to the last two columns. The $C-Class column contains the prediction of kidney disease, and the $CC-Class column indicates the confidence score for that prediction. 1. Close the Preview. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the preview table with the predictions. - -![The following image shows the preview table with the predictions.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/spss-preview-predictions.png){: width=""100%"" } -" -F870AF12BC30438B0DAB4FF5365B5279F2F9A93A_14,F870AF12BC30438B0DAB4FF5365B5279F2F9A93A,"[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 7: Deploy and test the model with new data - -![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 09:10. Lastly, follow these steps to deploy this model and predict the outcome with new data. 1. Return to the Project's Assets tab. 1. Click the Models section, and open the Kidney Disease Analysis model. 1. Click Promote to deployment space. 1. Choose an existing deployment space. If you don't have a deployment space, you can create a new one: 1. Provide a space name. 1. Select a storage service. 1. Select a machine learning service. 1. Click Create. 1. Click Close. 1. Select Go to the model in the space after promoting it. 1. Click Promote. 1. When the model displays inside the deployment space, click New deployment. 1. Select Online as the Deployment type. 1. Specify a name for the deployment. 1. Click Create. 1. When the deployment is complete, click the deployment name to view the deployment details page. 1. Go to the Test tab. You can test the deployed model from the deployment details page in two ways: test with a form or test with JSON code. 1. " -F870AF12BC30438B0DAB4FF5365B5279F2F9A93A_15,F870AF12BC30438B0DAB4FF5365B5279F2F9A93A,"Click the JSON input, then copy the following test data and paste it to replace the existing JSON text: json { ""input_data"": [ { ""fields"": ""age"", ""bp"", ""sg"", ""al"", ""su"", ""rbc"", ""pc"", ""pcc"", ""ba"", ""bgr"", ""bu"", ""sc"", ""sod"", ""pot"", ""hemo"", ""pcv"", ""wbcc"", ""rbcc"", ""htn"", ""dm"", ""cad"", ""appet"", ""pe"", ""ane"", ""class"" ], ""values"": ""62"", ""80"", ""1.01"", ""2"", ""3"", ""normal"", ""normal"", ""notpresent"", ""notpresent"", ""423"", ""53"", ""1.8"", """", """", ""9.6"", ""31"", ""7500"", """", ""no"", ""yes"", ""no"", ""poor"", ""no"", ""yes"", ""ckd"" ] ] } ] } 1. Click Predict to predict whether a 62 year old with diabetes and a serum creatinine ratio of 1.8 would likely be diagnosed with kidney disease. The resulting prediction indicates that this patient has a high probability of a kidney disease diagnosis. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the Test tab for the model deployment with a prediction. -![The following image shows the Test tab for the model deployment with a prediction.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/spss-deployment-prediction.gif){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html?context=cdpaas&locale=envideo-preview) - - - -" -F870AF12BC30438B0DAB4FF5365B5279F2F9A93A_16,F870AF12BC30438B0DAB4FF5365B5279F2F9A93A," Next steps - -Now you can use this data set for further analysis. For example, you can perform tasks such as: - - - -* [Cleansing and shaping data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html) -* [Analyze the data in a Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html) - - - -" -F870AF12BC30438B0DAB4FF5365B5279F2F9A93A_17,F870AF12BC30438B0DAB4FF5365B5279F2F9A93A," Additional resources - - - -* Find more [SPSS Modeler tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials.html) -* Try these other methods to build models: - - - -* [Build and deploy a machine learning model with AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html) -* [Build and deploy a machine learning model with SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html) -* [Build and deploy a Decision Optimization model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html) - - - -* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html). -* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands-on experience: - -![Notebook icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/notebook.svg)[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models. - -![Project icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/ibm-cloud--projects.svg)[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets. - -" -F870AF12BC30438B0DAB4FF5365B5279F2F9A93A_18,F870AF12BC30438B0DAB4FF5365B5279F2F9A93A,"![Data set icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/data--set.svg)[Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models. - -![Prompt icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/prompt.svg)[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model. - -![Model icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/model.svg)[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab. -* Contribute to the [SPSS Modeler community](https://ibm.biz/spss-modeler-community) - - - -Parent topic:[Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html) -" -E14C56A78F56157E862DE99906254B291F5B3321_0,E14C56A78F56157E862DE99906254B291F5B3321," Quick start: Build and deploy a machine learning model with AutoAI - -You can automate the process of building a machine learning model with the AutoAI tool. Read about the AutoAI tool, then watch a video and take a tutorial that’s suitable for beginners and does not require coding. - -Your basic workflow includes these tasks: - - - -1. Open your sandbox project. Projects are where you can collaborate with others to work with data. -2. Add your data to the project. You can add CSV files or data from a remote data source through a connection. -3. Create an AutoAI experiment in the project. -4. Review the model pipelines and save the desired pipeline as a model to deploy or as a notebook to customize. -5. Deploy and test your model. - - - -" -E14C56A78F56157E862DE99906254B291F5B3321_1,E14C56A78F56157E862DE99906254B291F5B3321," Read about AutoAI - -The AutoAI graphical tool automatically analyzes your data and generates candidate model pipelines customized for your predictive modeling problem. These model pipelines are created iteratively as AutoAI analyzes your dataset and discovers data transformations, algorithms, and parameter settings that work best for your problem setting. Results are displayed on a leaderboard, showing the automatically generated model pipelines ranked according to your problem optimization objective. - -[Read more about AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) - -[Learn about other ways to build models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html) - -" -E14C56A78F56157E862DE99906254B291F5B3321_2,E14C56A78F56157E862DE99906254B291F5B3321," Watch a video about creating a model using AutoAI - -![Watch Video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) Watch this video to see how to create and run an AutoAI experiment based on the bank marketing sample. - -Note: This video shows tasks 2-5 of this tutorial. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -E14C56A78F56157E862DE99906254B291F5B3321_3,E14C56A78F56157E862DE99906254B291F5B3321," Try a tutorial to create a model using AutoAI - -This tutorial guides you through training a model to predict if a customer is likely subscribe to a term deposit based on a marketing campaign. - -In this tutorial, you will complete these tasks: - - - -* [Task 1: Open a project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html?context=cdpaas&locale=enstep01) -* [Task 2: Build and train the model.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html?context=cdpaas&locale=enstep02) -* [Task 3: Promote the model to a deployment space and deploy the trained model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html?context=cdpaas&locale=enstep03) -* [Task 4: Test the deployed model.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html?context=cdpaas&locale=enstep04) -* [Task 5: Create a batch job to score the model.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html?context=cdpaas&locale=enstep05) - - - -This tutorial will take approximately 30 minutes to complete. - -" -E14C56A78F56157E862DE99906254B291F5B3321_4,E14C56A78F56157E862DE99906254B291F5B3321," Sample data - -The sample data that is used in the guided experience is UCI: Bank marketing data used to predict whether a customer enrolls in a marketing promotion. - -![Spreadsheet of the Bank marketing data set](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai_bank_sample_description.png) - -Expand all sections - - - -* Tips for completing this tutorial - -" -E14C56A78F56157E862DE99906254B291F5B3321_5,E14C56A78F56157E862DE99906254B291F5B3321,"### Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.The following animated image shows how to use the video picture-in-picture and table of contents features: ![How to use picture-in-picture and chapters](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/pip-and-chapters.gif){: width=""560px"" height=""315px"" data-tearsheet=""this""} ### Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235){: new_window}. ### Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. ![Side-by-side tutorial and UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/tutorial-side-by-side.png){: width=""560px"" height=""315px"" data-tearsheet=""this""} Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later. -" -E14C56A78F56157E862DE99906254B291F5B3321_6,E14C56A78F56157E862DE99906254B291F5B3321,"[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 1: Open a project - -You need a project to store the data and the AutoAI experiment. You can use your sandbox project or create a project. 1. From the navigation menu ![Navigation menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/navigation-menu.svg){: iih}, choose Projects > View all projects 1. Open your sandbox project. If you want to use a new project: 1. Click New project. 1. Select Create an empty project. 1. Enter a name and optional description for the project. 1. Choose an existing [object storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html){: new_window} or create a new one. 1. Click Create. 1. When the project opens, click the Manage tab and select the Services and integrations page. 1. On the IBM services tab, click Associate service. 1. Select your Watson Machine Learning instance. If you don't have a Watson Machine Learning service instance provisioned yet, follow these steps: 1. Click New service. 1. Select Watson Machine Learning. 1. Click Create. 1. Select the new service instance from the list. 1. Click Associate service. 1. If necessary, click Cancel to return to the Services & Integrations page. For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html){: new_window}. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the new project. - -" -E14C56A78F56157E862DE99906254B291F5B3321_7,E14C56A78F56157E862DE99906254B291F5B3321,"![The following image shows the new project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/autoai-new-project.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 2: Build and train the model - -" -E14C56A78F56157E862DE99906254B291F5B3321_8,E14C56A78F56157E862DE99906254B291F5B3321,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 00:08. Now that you have a project, you are ready to build and train the model using AutoAI. Follow these steps to create the AutoAI experiment, review the model pipelines, and select a pipeline to save as a model: 1. Click the Assets tab in your project, and then click New asset > Build machine learning models automatically. 1. On the Build machine learning models automatically page, complete the basic fields: 1. Click the Samples panel. 1. Select Bank marketing sample data, and click Next. The project name and description will be filled in for you. 1. Confirm that the Machine Learning service instance that you associated with your project is selected in the Watson Machine Learning Service Instance field. 1. Click Create. 1. In this sample AutoAI experiment, you will see that the Bank marketing sample data is already selected for your experiment. ![Choose a prediction column](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai_bank_sample_predict_label.png){: biw} 1. Review the preset experiment settings. Based on the data set and the selected column to predict, AutoAI analyzes a subset of the data and chooses a prediction type and metric to optimize. In this case, the prediction type is Binary Classification, the positive class is Yes, and the optimized metric is ROC AUC & run time. 1. Click Run experiment. As the model trains, you see an infographic that shows the process of building the pipelines. -" -E14C56A78F56157E862DE99906254B291F5B3321_9,E14C56A78F56157E862DE99906254B291F5B3321,"![Build model pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai_bank_sample_pipeline_build2.png){: biw} For a list of algorithms, or estimators, available with each machine learning technique in AutoAI, see: [AutoAI implementation detail](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html). 1. After the experiment run is complete, you can view and compare the ranked pipelines in a leaderboard. ![Pipeline leaderboard](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai_bank_sample_leaderboard2.png){: biw} 1. You can click Pipeline comparison to see how they differ. ![Pipeline comparison metric chart](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai_bank_sample_metric.png){: biw} 1. Click the highest ranked pipeline to see the pipeline details. 1. Click Save as, select Model, and click Create. This saves the pipeline as a model in your project. 1. When the model is saved, click the View in project link in the notification to view the model in your project. Alternatively, you can navigate to the Assets tab in the project, and click the model name in the Models section. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the model. - -![The following image shows the model.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/autoai-model.png){: width=""100%"" } -" -E14C56A78F56157E862DE99906254B291F5B3321_10,E14C56A78F56157E862DE99906254B291F5B3321,"[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 3: Promote the model to a deployment space and deploy the trained model - -![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 04:57. Before you can deploy the model, you need to promote the model to a deployment space. Follow these steps to promote the model to a deployment space to deploy the model: 1. Click Promote to deployment space. 1. Choose an existing deployment space. If you don't have a deployment space: 1. Click Create a new deployment space. 1. Provide a space name and optional description. 1. Select a storage service. 1. Select a machine learning service. 1. Click Create. 1. Click Close. 1. Select your new deployment space from the list. 1. Select the Go to the model in the space after promoting it option. 1. Click Promote. Note: If you didn't select the option to go to the model in the space after promoting it, you can use the navigation menu to navigate to Deployments to select your deployment space and model.1. With the model open, click New deployment. 1. Select Online as the Deployment type. 1. Specify a name for the deployment. 1. Click Create. 1. When the deployment is complete, click the deployment name to view the deployment details page. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the new deployment. - -" -E14C56A78F56157E862DE99906254B291F5B3321_11,E14C56A78F56157E862DE99906254B291F5B3321,"![The following image shows the new deployment.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/autoai-deployment.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 4: Test the deployed model - -" -E14C56A78F56157E862DE99906254B291F5B3321_12,E14C56A78F56157E862DE99906254B291F5B3321,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 06:22. Now that you have the model deployed, you can test that that online deployment using the user interface or through the Watson Machine Learning APIs. Follow these steps to use the user interface to test the model with new data: 1. Click the Test tab. You can test the deployed model from the deployment details page in two ways: test with a form or test with JSON code. 1. Click the JSON input tab, copy the following test data, and paste it to replace the existing JSON text: json { ""input_data"": [ { ""fields"": ""age"", ""job"", ""marital"", ""education"", ""default"", ""balance"", ""housing"", ""loan"", ""contact"", ""day"", ""month"", ""duration"", ""campaign"", ""pdays"", ""previous"", ""poutcome"" ], ""values"": 27, ""unemployed"", ""married"", ""primary"", ""no"", 1787, ""no"", ""no"", ""cellular"", 19, ""oct"", 79, 1, -1, 0, ""unknown"" ] ] } ] } 1. Click Predict to predict whether a customer with the specified attributes is likely to sign up for a particular kind of account. The resulting prediction indicates that this customer has a high probability of not enrolling in the marketing promotion. 1. Click the X to close the Prediction results window. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the results of testing the deployment. The values for your prediction might differ from the values in the following image. - -" -E14C56A78F56157E862DE99906254B291F5B3321_13,E14C56A78F56157E862DE99906254B291F5B3321,"![The following image shows the results of testing the deployment.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/autoai-deployment-test.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 5: Create a batch job to score the model - -Now that you have tested the deployed model with a single prediction, you can create a batch deployment to score multiple records at the same time. ### Task 5a: Set up batch deployment ![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 07:00. For a batch deployment, you provide input data, also known as the model payload, in a CSV file. The data must be structured like the training data, with the same column headers. The batch job processes each row of data and creates a corresponding prediction. Follow these steps to upload the payload data to the deployment space: 1. " -E14C56A78F56157E862DE99906254B291F5B3321_14,E14C56A78F56157E862DE99906254B291F5B3321,"Copy and paste the following text into a text editor, and save the file as bank-payload.csv. txt age,job,marital,education,default,balance,housing,loan,contact,day,month,duration,campaign,pdays,previous,poutcome 30,unemployed,married,primary,no,1787,no,no,cellular,19,oct,79,1,-1,0,unknown 33,services,married,secondary,no,4789,yes,yes,cellular,11,may,220,1,339,4,failure 35,management,single,tertiary,no,1350,yes,no,cellular,16,apr,185,1,330,1,failure 30,management,married,tertiary,no,1476,yes,yes,unknown,3,jun,199,4,-1,0,unknown 59,blue-collar,married,secondary,no,0,yes,no,unknown,5,may,226,1,-1,0,unknown 35,management,single,tertiary,no,747,no,no,cellular,23,feb,141,2,176,3,failure 36,self-employed,married,tertiary,no,307,yes,no,cellular,14,may,341,1,330,2,other 39,technician,married,secondary,no,147,yes,no,cellular,6,may,151,2,-1,0,unknown 41,entrepreneur,married,tertiary,no,221,yes,no,unknown,14,may,57,2,-1,0,unknown 43,services,married,primary,no,-88,yes,yes,cellular,17,apr,313,1,147,2,failure 39,services,married,secondary,no,9374,yes,no,unknown,20,may,273,1,-1,0,unknown 43,admin.,married,secondary,no,264,yes,no,cellular,17" -E14C56A78F56157E862DE99906254B291F5B3321_15,E14C56A78F56157E862DE99906254B291F5B3321,",apr,113,2,-1,0,unknown 36,technician,married,tertiary,no,1109,no,no,cellular,13,aug,328,2,-1,0,unknown 20,student,single,secondary,no,502,no,no,cellular,30,apr,261,1,-1,0,unknown 31,blue-collar,married,secondary,no,360,yes,yes,cellular,29,jan,89,1,241,1,failure 40,management,married,tertiary,no,194,no,yes,cellular,29,aug,189,2,-1,0,unknown 56,technician,married,secondary,no,4073,no,no,cellular,27,aug,239,5,-1,0,unknown 37,admin.,single,tertiary,no,2317,yes,no,cellular,20,apr,114,1,152,2,failure 25,blue-collar,single,primary,no,-221,yes,no,unknown,23,may,250,1,-1,0,unknown 31,services,married,secondary,no,132,no,no,cellular,7,jul,148,1,152,1,other 1.Click your deployment space in the navigation trail. ![Navigation trail](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/autoai-breadcrumbs.png) 1. Click the Assets tab. 1. Drag the bank-payload.csv file into the side panel, and wait for the file to upload. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the Assets tab in the deployment space. -" -E14C56A78F56157E862DE99906254B291F5B3321_16,E14C56A78F56157E862DE99906254B291F5B3321,"![Assets tab in the deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/autoai-assets-tab.png){: width=""100%"" } ### Task 5b: Create the batch deployment ![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 07:30. To process a batch of inputs and have the output written to a file instead of displayed in real time, create a batch deployment job. 1. Go to the Assets tab in the deployment space. 1. Click the ![Overflow menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/overflow-menu--vertical.svg){: iih} Overflow menu for your model, and choose Deploy. 1. For the Deployment type, select Batch. 1. Type a name for the deployment. 1. Choose the smallest hardware specification. 1. Click Create. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows batch deployment. - -" -E14C56A78F56157E862DE99906254B291F5B3321_17,E14C56A78F56157E862DE99906254B291F5B3321,"![Batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/autoai-batch-page.png){: width=""100%"" } ### Task 5c: Create the batch job ![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 07:44. The batch job runs the deployment. To create the job, you specify the input data and the name for the output file. You can set up a job to run on a schedule, or run immediately. Follow these steps to create a batch job: 1. On the deployment page, click New job. 1. Specify a name for the job, and click Next. 1. Select the smallest hardware specification, and click Next. 1. Optional: Set a schedule, and click Next. 1. Optional: Choose to receive notifications, and click Next. 1. On the Choose data screen, select the Input data: 1. Click Select data source. 1. Select Data asset > bank-payload.csv. 1. Click Confirm. 1. Back on the Choose data screen, specify the Output file: 1. Click Add. 1. Click Select data source. 1. Ensure that the Create new tab is selected. 1. For the Name, type bank-output.csv{: .cp}. 1. Click Confirm. 1. Click Next for the final step. 1. Review the settings, and click Create and run to run the job immediately. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the job details for the batch deployment. - -" -E14C56A78F56157E862DE99906254B291F5B3321_18,E14C56A78F56157E862DE99906254B291F5B3321,"![Create a job for the batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai_bank_job.png){: width=""100%"" } ### Task 5d: View the output ![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 08:42. Follow these steps to review the output file from the batch job. 1. Click the job name to see the status. 1. When the status changes to Completed, click your deployment space name in the navigation trail. 1. Click the Assets tab. 1. Click the bank-output.csv file to review the prediction results for the customer information that is submitted for batch processing. For each case, the prediction returned these customers are unlikely to subscribe to the bank promotion. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the results of the batch deployment job. - -![The following image shows the results of the batch deployment job.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/autoai_bank_sample_batch_output.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html?context=cdpaas&locale=envideo-preview) - - - -" -E14C56A78F56157E862DE99906254B291F5B3321_19,E14C56A78F56157E862DE99906254B291F5B3321," Next steps - -Now you can use this data set for further analysis. For example, you or other users can do any of these tasks: - - - -* [Cleansing and shaping data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html) -* [Analyze the data in a Juypter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html) - - - -" -E14C56A78F56157E862DE99906254B291F5B3321_20,E14C56A78F56157E862DE99906254B291F5B3321," Additional resources - - - -* Try these additional tutorials to get more hands-on experience with building models using AutoAI: - - - -* [Build a Binary classification Model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html) -* [Build a univariate time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html) -* [Build a text analysis experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-text-analysis.html) - - - -* Try these other methods to build models: - - - -* [Build and deploy a model in a Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html) -* [Build and deploy a model with SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html) -* [Build and deploy a Decision Optimization model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html) - - - -* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html). -* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands-on experience: - -![Notebook icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/notebook.svg)[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models. - -" -E14C56A78F56157E862DE99906254B291F5B3321_21,E14C56A78F56157E862DE99906254B291F5B3321,"![Project icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/ibm-cloud--projects.svg)[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets. - -![Data set icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/data--set.svg)[Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models. - -![Prompt icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/prompt.svg)[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model. - -![Model icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/model.svg)[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab. - - - -Parent topic:[Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html) -" -C535650C17CDE010EACBF5B6BF85FD8E593B77D6_0,C535650C17CDE010EACBF5B6BF85FD8E593B77D6," Quick start: Build, run, and deploy a Decision Optimization model - -You can build and run Decision Optimization models to help you make the best decisions to solve business problems based on your objectives. Read about Decision Optimization, then watch a video and take a tutorial that’s suitable for users with some knowledge of prescriptive analytics, but does not require coding. - -Your basic workflow includes these tasks: - - - -1. Open your sandbox project. Projects are where you can collaborate with others to work with data. -2. Add a Decision Optimization Experiment to the project. You can add compressed files or data from sample files. -3. Associate a Watson Machine Learning Service with the project. -4. Create a deployment space to associate with the project's Watson Machine Learning Service. -5. Review the data, model objectives, and constraints in the Modeling Assistant. -6. Run one or more scenarios to test your model and review the results. -7. Deploy your model. - - - -" -C535650C17CDE010EACBF5B6BF85FD8E593B77D6_1,C535650C17CDE010EACBF5B6BF85FD8E593B77D6," Read about Decision Optimization - -Decision Optimization can analyze data and create an optimization model (with the Modeling Assistant) based on a business problem. First, an optimization model is derived by converting a business problem into a mathematical formulation that can be understood by the optimization engine. The formulation consists of objectives and constraints that define the model that the final decision is based on. The model, together with your input data, forms a scenario. The optimization engine solves the scenario by applying the objectives and constraints to limit millions of possibilities and provides the best solution. This solution satisfies the model formulation or relaxes certain constraints if the model is infeasible. You can test scenarios using different data, or by modifying the objectives and constraints and re-running them and viewing solutions. Once satisfied you can deploy your model. - -[Read more about Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html) - -" -C535650C17CDE010EACBF5B6BF85FD8E593B77D6_2,C535650C17CDE010EACBF5B6BF85FD8E593B77D6," Watch a video about creating a Decision Optimization model - -![Watch Video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) Watch this video to see how to run a sample Decision Optimization experiment to create, solve, and deploy a Decision Optimization model with Watson Studio and Watson Machine Learning. - -Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. The user interface is frequently improved. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -C535650C17CDE010EACBF5B6BF85FD8E593B77D6_3,C535650C17CDE010EACBF5B6BF85FD8E593B77D6," Try a tutorial to create a model that uses Decision Optimization - -In this tutorial, you will complete these tasks: - - - -* [Task 1: Open a project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=enstep01) -* [Task 2: Create a Decision Optimization experiment in the project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=enstep02) -* [Task 3: Build a model and visualize a scenario result.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=enstep03) -* [Task 4: Change model objectives and constraints.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=enstep04) -* [Task 5: Deploy the model.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=enstep05) -* [Task 6: Test the model.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=enstep06) - - - -This tutorial will take approximately 30 minutes to complete. - -Expand all sections - - - -* Tips for completing this tutorial - -" -C535650C17CDE010EACBF5B6BF85FD8E593B77D6_4,C535650C17CDE010EACBF5B6BF85FD8E593B77D6,"### Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.The following animated image shows how to use the video picture-in-picture and table of contents features: ![How to use picture-in-picture and chapters](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/pip-and-chapters.gif){: width=""560px"" height=""315px"" data-tearsheet=""this""} ### Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235){: new_window}. ### Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. ![Side-by-side tutorial and UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/tutorial-side-by-side.png){: width=""560px"" height=""315px"" data-tearsheet=""this""} Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later. -" -C535650C17CDE010EACBF5B6BF85FD8E593B77D6_5,C535650C17CDE010EACBF5B6BF85FD8E593B77D6,"[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 1: Open a project - -You need a project to store the data and the AutoAI experiment. You can use your sandbox project or create a project. 1. From the navigation menu ![Navigation menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/navigation-menu.svg){: iih}, choose Projects > View all projects 1. Open your sandbox project. If you want to use a new project: 1. Click New project. 1. Select Create an empty project. 1. Enter a name and optional description for the project. 1. Choose an existing [object storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html){: new_window} or create a new one. 1. Click Create. 1. When the project opens, click the Manage tab and select the Services and integrations page. 1. On the IBM services tab, click Associate service. 1. Select your Watson Machine Learning instance. If you don't have a Watson Machine Learning service instance provisioned yet, follow these steps: 1. Click New service. 1. Select Watson Machine Learning. 1. Click Create. 1. Select the new service instance from the list. 1. Click Associate service. 1. If necessary, click Cancel to return to the Services & Integrations page. For more information, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html){: new_window}. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the new project. - -" -C535650C17CDE010EACBF5B6BF85FD8E593B77D6_6,C535650C17CDE010EACBF5B6BF85FD8E593B77D6,"![The following image shows the new project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/do-new-project.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 2: Create a Decision Optimization experiment - -" -C535650C17CDE010EACBF5B6BF85FD8E593B77D6_7,C535650C17CDE010EACBF5B6BF85FD8E593B77D6,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 00:20. Now, follow these steps to create the Decision Optimization experiment in your project: 1. From your new project, click New asset > Solve optimization problems. 1. Select Local file. 1. Click Get sample files to view the GitHub repository containing the sample files. 1. In the DO-Samples repository, open the watsonx.ai and Cloud Pak for Data as a Service folder. 1. Click the HouseConstructionScheduling.zip file containing the house construction sample files. 1. Click Download to save the zip file to your computer. 1. Return to the Create a Decision Optimization experiment page, and click Browse. 1. Select the HouseConstructionScheduling.zip file from your computer. 1. Click Open. 1. If you don't already have a Watson Machine Learning service associated with this project, click Add a Machine Learning service. 1. Review your Watson Machine Learning service instances. You can use an existing service, or create a new service instance from here: click New service, select Machine Learning, and click Create. 1. Select your Watson Machine Learning instance from the list, and click Associate. 1. If necessary, click Cancel to return to the Services & integrations page. For more information on associated services, see [Adding associated services](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html){: new_window}. 1. Choose a deployment space to associate with this experiment. If you do not have an existing deployment space, create one: 1. In the Select deployment space section, click New deployment space. 1. In the Name field, type House sample{: .cp} to provide a name for the deployment space. 1. Click Create. 1. When the space is ready, and click Close to return to the Create a Decision Optimization experiment page. Your new deployment space is selected. 1. " -C535650C17CDE010EACBF5B6BF85FD8E593B77D6_8,C535650C17CDE010EACBF5B6BF85FD8E593B77D6,"Click Create to open the Decision Optimization experiment. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the experiment with the sample files. -![The following image shows the experiment with the sample files.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/do-exp-builder.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 3: Build a model and visualize a scenario result - -" -C535650C17CDE010EACBF5B6BF85FD8E593B77D6_9,C535650C17CDE010EACBF5B6BF85FD8E593B77D6,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 01:47. Follow these steps to build a model and visualize the result using the Decision Optimization Modeling Assistant. 1. In the left pane, click Build model to open the Modeling Assistant. This model was built with the Modeling Assistant so you can see that the objectives and constraints are in natural language, but you can also formulate your model in Python, OPL or import CPLEX and CPO models. 1. Click Run to run the scenario to solve the model and wait for the run to complete. 1. When the run completes, the Explore solution view displays. Under the Results tab, click Solution assets to see the resulting (best) values for the decision variables. These solution tables are displayed in alphabetical order by default. 1. In the left pane, select Visualization. 1. Under the Solutions tab, select Gantt to view the scenario with the optimal schedule. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the Visualization page with a Gantt chart. - -![The following image shows the Visualization page with a Gantt chart.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/do-gantt-chart.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 4: Change model objectives and constraints - -" -C535650C17CDE010EACBF5B6BF85FD8E593B77D6_10,C535650C17CDE010EACBF5B6BF85FD8E593B77D6,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 03:01. Now, you want to make a change to your model formulation to consider an additional objective. Follow these steps to change the model objectives and constraints: 1. Click Build model. 1. In the left pane, click the Overflow menu ![Overflow menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/overflow-menu--vertical.svg){: iih} next to Scenario 1, and select Duplicate. 1. For the name, type Scenario 2{: .cp}, and click Create. 1. For Scenario 2, add an objective to the model to optimize the quality of work based on the expertise of each contractor. 1. Under Add to model, in the search field, type overall quality{: .cp}, and press Enter. 1. Expand the Objective section. 1. Click Maximize overall quality of Subcontractor-Activity assignments according to table of assignment values to add it as an objective. This new objective is now listed under the Objectives section along with the Minimize time to complete all Activities objective. 1. For the objective that you just added, click table of assignment values, and select Expertise. A list of Expertise parameters displays. 1. From this list, click definition to change the field that defines contractor expertise, and select Skill Level. 1. Click Run to run the scenario to build the model and wait for the run to complete. 1. Return to the Explore solution page to view the Objectives and Solution assets. 1. In the left pane, select Visualization. 1. Under the Solutions tab, select Gantt to view the scenario with the optimal schedule. 1. " -C535650C17CDE010EACBF5B6BF85FD8E593B77D6_11,C535650C17CDE010EACBF5B6BF85FD8E593B77D6,"Click Overview in the left pane to compare statistics between Scenario 1 and Scenario 2. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the Visualization page with the new Gantt chart. -![The following image shows the Visualization page with the new Gantt chart.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/do-gantt-chart-02.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 5: Deploy the model - -" -C535650C17CDE010EACBF5B6BF85FD8E593B77D6_12,C535650C17CDE010EACBF5B6BF85FD8E593B77D6,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 04:07. Next, follow these steps to promote the model to a deployment space, and create a deployment: 1. Click the Overflow menu ![Overflow menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/overflow-menu--vertical.svg){: iih} next to Scenario 1, and select Save for deployment. 1. In the Model name field, type House Construction{: .cp}, and click Next. 1. Review the model information, and click Save. 1. After the model is successfully saved, a notification bar displays with a link to the model. Click View in project. 1. If you miss the notification, then click the project name in the navigation trail. 1. Click the Assets tab in the project. 1. Click the House Construction model. 1. Click Promote to deployment space. 1. For the Target space, select House sample (or your deployment space) from the list. 1. Check the option to Check Go to the model in the space after deploying it. 1. Click Promote. 1. After the model is successfully promoted, the House Construction model displays in the deployment space. 1. Click New deployment. 1. For the deployment name, type House deployment{: .cp}. 1. For the Hardware definition, select 2 CPU and 8 GB RAM from the list. 1. Click Create. 1. Wait for the deployment status to change to Deployed. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the House deployment. - -" -C535650C17CDE010EACBF5B6BF85FD8E593B77D6_13,C535650C17CDE010EACBF5B6BF85FD8E593B77D6,"![The following image shows the Visualization page with the House deployment.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/do-house-deployment.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 6: Test a model - -" -C535650C17CDE010EACBF5B6BF85FD8E593B77D6_14,C535650C17CDE010EACBF5B6BF85FD8E593B77D6,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 04:55. To test the model with a scenario, you must upload data files from your computer to the deployment space. Follow these steps to test the model by creating a job using the CSV files included with the sample zip file: 1. Click House sample (or your deployment space) in the navigation trail to return to the deployment space. 1. Click the Assets tab. 1. In the HouseConstructionScheduling.zip file on your computer, you will find several CSV files in the .containers > Scenario 1 folder. 1. Click the Upload asset icon ![Upload asset to project icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/data--set.svg){: iih} to open the Data panel. 1. Drag the Subcontractor.csv, Activity.csv, and Expertise.csv files into the Drop files here or browse for files to upload area in the Data panel. 1. Click the Deployments tab. 1. Click House deployment. 1. Now to submit a job to score the model, click New job. 1. For the job name, type House construction job{: .cp}. 1. Click Next. 1. Select the default values on the Configure page, and click Next. 1. Select the default values on the Schedule page, and click Next. 1. Select the default values on the Notify page, and click Next. 1. On the Choose data page, in the Input section, select the corresponding data assets that you previously loaded into your space for each input ID. 1. In the Output section, you will provide the name for each solution table to be created. 1. For Output ID ScheduledActivities.csv, click Select data source > Create new, type ScheduledActivities.csv{: .cp} for the name, and click Confirm. 1. " -C535650C17CDE010EACBF5B6BF85FD8E593B77D6_15,C535650C17CDE010EACBF5B6BF85FD8E593B77D6,"For Output ID NotScheduledActivities.csv, click Select data source > Create new, type NotScheduledActivities.csv{: .cp} for the name, and click Confirm. 1. For Output ID stats.csv, click Select data source > Create new, type stats.csv{: .cp} for the name, and click Confirm. 1. For Output ID kpis.csv, click Select data source > Create new, type kpis.csv{: .cp} for the name, and click Confirm. 1. For Output ID solution.json, click Select data source > Create new, type solution.json{: .cp} for the name, and click Confirm. 1. For Output ID log.txt, click Select data source > Create new, type log.txt{: .cp} for the name, and click Confirm. 1. Review the information on the Choose data page, and then click Next. 1. Review the information on the Review and create page, and then click Create and run. 1. From the House deployment model page, click the job that you created named House construction job to see its status. 1. After the job run completes, click House sample (or your deployment space) to return to the deployment space. 1. On the Assets tab, you will see the output files: - ScheduledActivities.csv - NotScheduledactivities.csv - stats.csv - kpis.csv - solution.json - log.txt 1. For each of these assets, click the Download icon, and then view each of these files. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the completed batch job. -![The following image shows the Visualization page with the completed batch job.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/do-completed-batch-job.png){: width=""100%"" } -" -C535650C17CDE010EACBF5B6BF85FD8E593B77D6_16,C535650C17CDE010EACBF5B6BF85FD8E593B77D6,"[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html?context=cdpaas&locale=envideo-preview) - - - -" -C535650C17CDE010EACBF5B6BF85FD8E593B77D6_17,C535650C17CDE010EACBF5B6BF85FD8E593B77D6," Next steps - -Now you can use this data set for further analysis. For example, you or other users can do any of these tasks: - - - -* [Learn to build this model from scratch with the Modeling Assistant](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.html) -* [Leverage this deployed model in an end user application using the Watson Machine Learning Rest API](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployModelRest.html) -* [Deploy Decision Optimization models using the Watson Machine Learning Python Client](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployPythonClient.html) - - - -" -C535650C17CDE010EACBF5B6BF85FD8E593B77D6_18,C535650C17CDE010EACBF5B6BF85FD8E593B77D6," Additional resources - - - -* Try these other methods to build models: - - - -* [Build and deploy a machine learning model with AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html) -* [Build and deploy a machine learning model with SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html) -* [Build and deploy a Decision Optimization model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html) -* [Submit jobs by using the Watson Machine Learning API](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/Paralleljobs.html) - - - -* [Building and running Decision Optimization Experiments](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/buildingmodels.html) -* [Deploying Decision Optimization models with UI](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployModelUI-WML.html) -* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html). -* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands-on experience: - -![Notebook icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/notebook.svg)[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models. - -" -C535650C17CDE010EACBF5B6BF85FD8E593B77D6_19,C535650C17CDE010EACBF5B6BF85FD8E593B77D6,"![Project icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/ibm-cloud--projects.svg)[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets. - -![Data set icon]Upload asset to project iconData sets][] that you can add to your project to refine, analyze, and build models. - -![Prompt icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/prompt.svg)[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model. - -![Model icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/model.svg)[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab. - - - - - -* Contribute to the [Decision Optimization community](https://ibm.biz/decision-optimization-community) - - - -Parent topic:[Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html) -" -7FEB0313C4AA5133F215A847F2ABAA025E83BB38_0,7FEB0313C4AA5133F215A847F2ABAA025E83BB38," Quick start: Evaluate and track a prompt template - -Take this tutorial to learn how to evaluate and track a prompt template. You can evaluate prompt templates in projects or deployment spaces to measure the performance of foundation model tasks and understand how your model generates responses. Then, you can track the prompt template in an AI use case to capture and share facts about the asset to help you meet governance and compliance goals. - -Required services : watsonx.governance - -Your basic workflow includes these tasks: - - - -1. Open a project that contains the prompt template to evaluate. Projects are where you can collaborate with others to work with assets. -2. Evaluate a prompt template using test data. -3. Review the results on the AI Factsheet. -4. Track the evaluated prompt template in an AI use case. -5. Deploy and test your evaluated prompt template. - - - -" -7FEB0313C4AA5133F215A847F2ABAA025E83BB38_1,7FEB0313C4AA5133F215A847F2ABAA025E83BB38," Read about prompt templates - -With watsonx.governance, you can evaluate prompt templates in projects to measure how effectively your foundation models generate responses for the following task types: - - - -* Classification -* Summarization -* Generation -* Question answering -* Entity extraction - - - -[Read more about evaluating prompt templates in projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt.html) - -[Read more about evaluating prompt templates in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt-spaces.html) - -" -7FEB0313C4AA5133F215A847F2ABAA025E83BB38_2,7FEB0313C4AA5133F215A847F2ABAA025E83BB38," Watch a video about evaluating and tracking a prompt template - -![Watch Video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) Watch this video to preview the steps in this tutorial. There might be slight differences in the user interface shown in the video. The video is intended to be a companion to the written tutorial. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -7FEB0313C4AA5133F215A847F2ABAA025E83BB38_3,7FEB0313C4AA5133F215A847F2ABAA025E83BB38," Try a tutorial to evaluating and tracking a prompt template - -In this tutorial, you will complete these tasks: - - - -* [Task 1: Create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=enstep01) -* [Task 2: Evaluate the sample prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=enstep02) -* [Task 3: Create a model inventory and AI use case](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=enstep03) -* [Task 4: Start tracking the prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=enstep04) -* [Task 5: Create a new project for validation](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=enstep05) -* [Task 6: Validate the prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=enstep06) -* [Task 7: Deploy the prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=enstep07) - - - -Expand all sections - - - -* Tips for completing this tutorial - -" -7FEB0313C4AA5133F215A847F2ABAA025E83BB38_4,7FEB0313C4AA5133F215A847F2ABAA025E83BB38,"### Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.The following animated image shows how to use the video picture-in-picture and table of contents features: ![How to use picture-in-picture and chapters](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/pip-and-chapters.gif){: width=""560px"" height=""315px"" data-tearsheet=""this""} ### Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235){: new_window}. ### Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. ![Side-by-side tutorial and UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/tutorial-side-by-side.png){: width=""560px"" height=""315px"" data-tearsheet=""this""} Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later. -" -7FEB0313C4AA5133F215A847F2ABAA025E83BB38_5,7FEB0313C4AA5133F215A847F2ABAA025E83BB38,"[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 1: Create a project - -" -7FEB0313C4AA5133F215A847F2ABAA025E83BB38_6,7FEB0313C4AA5133F215A847F2ABAA025E83BB38,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 00:08. You need a project store the prompt template and the evaluation. Follow these steps to create a project based on a sample: 1. Access the [Getting started with watsonx governance](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/1b6c8d6e-a45c-4bf1-84ee-8fe9a6daa56d){: external} project in the Samples. 1. Click Create project. 1. Accept the default values for the project name, and click Create. 1. Click View new project when the project is successfully created. 1. Associate a Watson Machine Learning service with the project: 1. When the project opens, click the Manage tab, and select the Services and integrations page. 1. On the IBM services tab, click Associate service. 1. Select your Watson Machine Learning instance. If you don't have a Watson Machine Learning service instance provisioned yet, follow these steps: 1. Click New service. 1. Select Watson Machine Learning. 1. Click Create. 1. Select the new service instance from the list. 1. Click Associate service. 1. If necessary, click Cancel to return to the Services & Integrations page. 1. Click the Assets tab in the project to see the sample assets. For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html){: new_window}.For more information on associated services, see [Adding associated services](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html){: new_window}. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the project Assets tab. " -7FEB0313C4AA5133F215A847F2ABAA025E83BB38_7,7FEB0313C4AA5133F215A847F2ABAA025E83BB38,"You are now ready to evaluate the sample prompt template in the project. -![Sample project assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-gov-sample-project.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 2: Evaluate the sample prompt template - -" -7FEB0313C4AA5133F215A847F2ABAA025E83BB38_8,7FEB0313C4AA5133F215A847F2ABAA025E83BB38,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 00:36. The sample project contains a few prompt templates and CSV files used as test data. Follow these steps to download the test data and evaluate one of the sample prompt templates: 1. On the project's Assets tab, click the Overflow menu ![Overflow menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/overflow-menu--vertical.svg){: iih} next to the Insurance claim summarization test data.csv file. 1. Click Insurance claim summarization to open the prompt template in Prompt Lab. 1. Click the Prompt variables icon ![Prompt variables](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/parameter.svg){: iih}. Note: To run evaluations, you must create at least one prompt variable.1. Scroll to the Try section. Notice the {input} variable in the Input field. You must include the prompt variable as input for testing your prompt. 1. Click Evaluate. 1. Expand the Generative AI Quality section to see a list of dimensions. The available metrics depend on the task type of the prompt. For example, summarization has different metrics than classification. 1. Click Next. 1. Select the test data: 1. Click Browse. 1. Select the Insurance claim summarization test data.csv file. 1. Click Open. 1. For the Input column, select Insurance _Claim. 1. For the Reference output column, select Summary. 1. Click Next. 1. Click Evaluate. When the evaluation completes, you see the test results on the Evaluate tab. 1. Click the AI Factsheet tab. 1. View the information on each of the sections on the tab. 1. " -7FEB0313C4AA5133F215A847F2ABAA025E83BB38_9,7FEB0313C4AA5133F215A847F2ABAA025E83BB38,"Click Evaluation > Develop > Test to see the test results again. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the results of the evaluation. Now you can start tracking the prompt template in an AI use case. -![Prompt template evaluation test results](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-gov-evaluate-prompt-template.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 3: Create a model inventory and AI use case - -" -7FEB0313C4AA5133F215A847F2ABAA025E83BB38_10,7FEB0313C4AA5133F215A847F2ABAA025E83BB38,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 01:54. You use a model inventory for storing and reviewing AI use cases. AI use cases collect governance facts for AI assets that your organization tracks. You can view all the AI use cases in an inventory. Follow these steps to create a model inventory and AI use case: ### Create a model inventory 1. From the navigation menu ![Navigation menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/navigation-menu.svg){: iih}, choose AI governance > AI use cases. 1. Manage your inventories: - If you have existing inventory, then you skip to [Create a new AI use case](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=ennew-ai-use-case) can use that inventory. - If you don't have any inventories, then click Manage inventories. 1. Click New inventory. 1. For the name, copy and paste the following text: txt Golden Bank Insurance Inventory 1. For the description, copy and paste the following text: txt Model inventory for insurance related processing 1. Clear the Add collaborators after creation option. 1. Select your Cloud Object Storage instance from the list. 1. Click Create. 1. Close the Manage inventories page. \ ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the model inventory. You are now ready to create an AI use case. - -" -7FEB0313C4AA5133F215A847F2ABAA025E83BB38_11,7FEB0313C4AA5133F215A847F2ABAA025E83BB38,"![Model inventory](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-gov-model-inventory.png){: width=""100%"" } ### Create an AI use case 1. Click New AI use case. 1. For the Name, copy and paste the following text: txt Insurance claims processing AI use case 1. Select an existing model inventory. 1. Click Create to accept the default values for the rest of the fields. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the AI use case. You are now ready to track the prompt template. - -![AI use case](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-gov-new-ai-use-case.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 4: Start tracking the prompt template - -" -7FEB0313C4AA5133F215A847F2ABAA025E83BB38_12,7FEB0313C4AA5133F215A847F2ABAA025E83BB38,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 02:33. You can track your prompt template in an AI use case to report the development and test process to your peers. Follow these steps to start tracking the prompt template: 1. From the navigation menu ![Navigation menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/navigation-menu.svg){: iih}, choose Projects > View all projects. 1. Select the Getting started with watsonx governance project. 1. Click the Assets tab. 1. From the Overflow menu ![overflow menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/overflow-menu--vertical.svg){: iih} for the Claims processing summarization prompt template, select View AI Factsheet. 1. On the AI Factsheet tab, click the Governance page. 1. Click Track an AI use case. 1. Select the Insurance claims processing AI use case. 1. Select LLM Prompt Engineering for the approach. 1. Click Next. 1. For the model version, select Experimental. 1. Accept the default value for the version number. 1. Click Next. 1. Click Track asset. 1. Click the View details icon ![View details icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/arrow-up-right.svg){: iih} to open the AI use case. 1. Click the Lifecycle tab to see the prompt template in the Develop phase. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the Lifecycle tab in the AI use case with the prompt temmplate in the Develop phase. You are now ready to continue to the Validate phase - -" -7FEB0313C4AA5133F215A847F2ABAA025E83BB38_13,7FEB0313C4AA5133F215A847F2ABAA025E83BB38,"![The Lifecycle tab in the AI use case](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-gov-ai-factsheet-governance-page.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 5: Create a new project for validation - -" -7FEB0313C4AA5133F215A847F2ABAA025E83BB38_14,7FEB0313C4AA5133F215A847F2ABAA025E83BB38,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 03:22. Typically, the prompt engineer evaluates the prompt with test data, and the validation engineer validates the prompt. The validation engineer has access to the validation data that prompt engineers might not have. In this case, validation data occurs in a different project. Follow these steps to export the development project and import it as a new validation project to move the asset into the validation phase of the AI lifecycle: 1. From the navigation menu ![Navigation menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/navigation-menu.svg){: iih}, choose Projects > View all projects. 1. Select the Getting started with watsonx governance project. 1. Click the Import/Export icon ![Import/Export icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/import-export.svg){: iih} > Export project. 1. Check the box to select all assets. 1. Click Export. 1. For the project name, copy and paste the following text, and then click Save. txt validation project.zip 1. From the navigation menu ![Navigation menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/navigation-menu.svg){: iih}, choose Projects > View all projects. 1. Click New project. 1. Select Create a project from a sample of file. 1. Click Browse. 1. Select the validation project.zip, and click Open. 1. For the project name, copy and paste the following text: txt Validation project 1. Click Create. 1. When the project is created, click View new project. 1. " -7FEB0313C4AA5133F215A847F2ABAA025E83BB38_15,7FEB0313C4AA5133F215A847F2ABAA025E83BB38,"Follow the same steps as in [Step 1](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=enstep01) to associate your Watson Machine Learning service with this project. \ ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the validation project Assets tab. You are now ready to evaluate the sample prompt template in the validation project. -![Validation project assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-gov-vaildation-project.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 6: Validate the prompt template - -" -7FEB0313C4AA5133F215A847F2ABAA025E83BB38_16,7FEB0313C4AA5133F215A847F2ABAA025E83BB38,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 04:18. Now you are ready to evaluate the prompt template in this validation project using the same evaluation process as before. Use the same test data set for evaluation. And select the same Input and Output columns as before. Follow these steps to validate the prompt template: 1. Click the Assets tab in the Validation project. 1. Repeat the steps in [Task 2](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=enstep02) to evaluate the Claims processing summarization prompt template. 1. Click the AI Factsheet tab when the evaluation is complete. 1. View both sets of test results: 1. Click Evaluation > Develop > Test. 1. Click Evaluation > Validate > Test. \ ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the validation test results. You are now ready to promote the prompt template to a deployment space, and then deploy the prompt template. - -![Prompt template evaluation test results](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-gov-evaluate-prompt-template-validation.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 7: Deploy the prompt template - -" -7FEB0313C4AA5133F215A847F2ABAA025E83BB38_17,7FEB0313C4AA5133F215A847F2ABAA025E83BB38,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 05:00. ### Promote the prompt template to a deployment space You promote the prompt template to a deployment space in preparation for deploying it. Follow these steps to prompte the prompt template: 1. Click Validation project in the projects navigation trail. 1. From the Overflow menu ![overflow menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/overflow-menu--vertical.svg){: iih} for the Claims processing summarization prompt template, select Promote to space. 1. For the Target space, select Create a new deployment space. 1. For the Space name, copy and paste the following text: txt Insurance claims deployment space 1. For the Deployment stage, select Production. 1. Select your machine learning service from the list. 1. Click Create. 1. Click Close. 1. Select the Insurance claims deployment space deployment space from the list. 1. Check the option to Go to the space after promoting the prompt template. 1. Click Promote. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the prompt template in the deployment space. You are now ready to create a deployment. - -" -7FEB0313C4AA5133F215A847F2ABAA025E83BB38_18,7FEB0313C4AA5133F215A847F2ABAA025E83BB38,"![Prompt template in deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-gov-deployment-space.png){: width=""100%"" } ### Deploy the prompt template Now you can deploy the prompt template from inside the deployment space. Follow these steps to create a deployment: 1. From the Overflow menu ![overflow menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/overflow-menu--vertical.svg){: iih} for the Insurance claims summarization prompt template, select Deploy. 1. For the deployment name, copy and paste the following text: txt Insurance claims summarization deployment 1. Click Create. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the deployed prompt template. - -" -7FEB0313C4AA5133F215A847F2ABAA025E83BB38_19,7FEB0313C4AA5133F215A847F2ABAA025E83BB38,"![Deployed prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-gov-deployment.png){: width=""100%"" } ### View the deployed prompt template Follow these steps to view the deployed prompt template in its current phase of the lifecycle: 1. View the deployment when it is ready. The API reference tab provides information for you to use the prompt template deployment in your application. 1. Click the Test tab. The Test tab allows you to submit an instruction and Input to test the deployment. 1. Click Generate. 1. Click the AI Factsheet tab. The AI Factsheet shows that the prompt template is now in the operate phase. 1. Scroll down, and click the arrow for more details. 1. Select the Evaluation > Operate > Deployment 1 page. 1. Click the View details icon ![View details icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/arrow-up-right.svg){: iih} to open the AI use case. 1. Click the Lifecycle tab. 1. Click the Insurance claim summarization prompt template in the Operate phase. When you are done, click Close. 1. Click the Insurance claims summarization deployment prompt template deployment in the Operate phase. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the prompt template prompt template in the Operate phase of the lifecycle. - -![Prompt template in the Operate phase](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-gov-operate-prompt-template.png){: width=""100%"" } -" -7FEB0313C4AA5133F215A847F2ABAA025E83BB38_20,7FEB0313C4AA5133F215A847F2ABAA025E83BB38,"[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html?context=cdpaas&locale=envideo-preview) - - - -" -7FEB0313C4AA5133F215A847F2ABAA025E83BB38_21,7FEB0313C4AA5133F215A847F2ABAA025E83BB38," Next steps - -You are now ready to try the [Prompt a foundation model with the retrieval-augmented generation pattern tutorial](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html). - -" -7FEB0313C4AA5133F215A847F2ABAA025E83BB38_22,7FEB0313C4AA5133F215A847F2ABAA025E83BB38," Additional resources - - - -* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html). -* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands-on experience: - -![Notebook icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/notebook.svg)[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models. - -![Project icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/ibm-cloud--projects.svg)[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets. - -![Data set icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/data--set.svg)[Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models. - -![Prompt icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/prompt.svg)[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model. - -" -7FEB0313C4AA5133F215A847F2ABAA025E83BB38_23,7FEB0313C4AA5133F215A847F2ABAA025E83BB38,"![Model icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/model.svg)[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab. - - - -Parent topic:[Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html) -" -4E83416B551F557D5BDA600450E6CCB7742EB51D_0,4E83416B551F557D5BDA600450E6CCB7742EB51D," Quick start: Prompt a foundation model with the retrieval-augmented generation pattern - -Take this tutorial to learn how to use foundation models in IBM watsonx.ai to generate factually accurate output grounded in information in a knowledge base by applying the retrieval-augmented generation pattern. Foundation models can generate output that is factually inaccurate for a variety of reasons. One way to improve the accuracy of generated output is to provide the needed facts as context in your prompt text. This tutorial uses a sample notebook using the retrieval-augmented generation pattern method to improve the accuracy of the generated output. - -Required services : Watson Studio : Watson Machine Learning - -Your basic workflow includes these tasks: - - - -1. Open a project. Projects are where you can collaborate with others to work with data. -2. Add a notebook to your project. You can create your own notebook, or add a [sample notebook](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) to your project. -3. Add and edit code, then run the notebook. -4. Review the notebook output. - - - -" -4E83416B551F557D5BDA600450E6CCB7742EB51D_1,4E83416B551F557D5BDA600450E6CCB7742EB51D," Read about retrieval-augmented generation pattern - -You can scale out the technique of including context in your prompts by leveraging information in a knowledge base. The retrieval-augmented generation pattern involves three basic steps: - - - -* Search for relevant content in your knowledge base -* Pull the most relevant content into your prompt as context -* Send the combined prompt text to the model to generate output - - - -[Read more about the retrieval-augmented generation pattern](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-rag.html?context=wx) - -" -4E83416B551F557D5BDA600450E6CCB7742EB51D_2,4E83416B551F557D5BDA600450E6CCB7742EB51D," Watch a video about using the retrieval-augmented generation pattern - -![Watch Video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) Watch this video to preview the steps in this tutorial. There might be slight differences in the user interface shown in the video. The video is intended to be a companion to the written tutorial. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -4E83416B551F557D5BDA600450E6CCB7742EB51D_3,4E83416B551F557D5BDA600450E6CCB7742EB51D," Try a tutorial to prompt a foundation model with the retrieval-augmented generation pattern - -In this tutorial, you will complete these tasks: - - - -* [Task 1: Open a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=enstep01) -* [Task 2: Add a sample notebook to your project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=enstep02) -* [Task 3: Edit the notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=enstep03) -* [Task 4: Run the notebook and review the output](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=enstep04) - - - -Expand all sections - - - -* Tips for completing this tutorial - -" -4E83416B551F557D5BDA600450E6CCB7742EB51D_4,4E83416B551F557D5BDA600450E6CCB7742EB51D,"### Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.The following animated image shows how to use the video picture-in-picture and table of contents features: ![How to use picture-in-picture and chapters](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/pip-and-chapters.gif){: width=""560px"" height=""315px"" data-tearsheet=""this""} ### Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [watsonx.ai Community discussion forum](https://community.ibm.com/community/user/watsonx/communities/community-home/digestviewer?communitykey=81927b7e-9a92-4236-a0e0-018a27c4ad6e){: new_window}. ### Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. ![Side-by-side tutorial and UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/tutorial-side-by-side-wx.png){: width=""560px"" height=""315px"" data-tearsheet=""this""} Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later. -" -4E83416B551F557D5BDA600450E6CCB7742EB51D_5,4E83416B551F557D5BDA600450E6CCB7742EB51D,"[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 1: Open a project - -You need a project to store the sample notebook. Watch a video to see how to create a sandbox project and associate a service. Then follow the steps to verify that you have an existing project or create a sandbox project. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -4E83416B551F557D5BDA600450E6CCB7742EB51D_6,4E83416B551F557D5BDA600450E6CCB7742EB51D,"Follow the steps to verify that you have an existing project or create a project. 1. From the watsonx home screen, scroll to the Projects section. If you see any projects listed, then skip to [Associate the Watson Machine Learning service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=enassociate). If you don't see any projects, then follow these steps to create a project. 1. Click Create a sandbox project. When the project is created, you will see the sandbox in the Projects section. 1. Open an existing project or the new sandbox project. \ Associate the Watson Machine Learning service with the project You will use Watson Machine Learning to prompt the foundation model, so follow these steps to associate your Watson Machine Learning service instance with your project. 1. In the project, click the Manage tab. 1. Click the Services & Integrations page. 1. Check if this project has an associated Watson Machine Learning service. If there is no associated service, then follow these steps: 1. Click Associate service. 1. Check the box next to your Watson Machine Learning service instance. 1. Click Associate. 1. If necessary, click Cancel to return to the Services & Integrations page. For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html){: new_window}. \ ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the Manage tab with the associated service. You are now ready to add the sample notebook to your project. - -![Manage tab in the project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-fm-associated-service.png){: width=""100%"" } -" -4E83416B551F557D5BDA600450E6CCB7742EB51D_7,4E83416B551F557D5BDA600450E6CCB7742EB51D,"[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 2: Add the sample notebook to your project - -The sample notebook uses a small knowledge base and a simple search component to demonstrate the basic pattern. The scenario used in this notebook is for a company that sells seeds for planting in a garden. The website for an online seed catalog has many articles to help customers plan their garden and ultimately select which seeds to purchase. The new widge is being added to the website to answer customer questions on the contents of the articles. Watch this video to see how to add a sample notebook to a project, and then follow the steps to add the notebook to your project. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -1. Access the [Simple introduction to retrieval-augmented generation with watsonx.ai](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/fed7cf6b-1c48-4d71-8c04-0fce0e000d43){: new_window} in the Samples. 1. Click Add to project. 1. Select your project from the list, and click Add. 1. Type the notebook name and description (optional). 1. Select a runtime environment for this notebook. 1. Click Create. Wait for the notebook editor to load. 1. From the menu, click Kernel > Restart & Clear Output, then confirm by clicking Restart and Clear All Outputs to clear the output from the last saved run. -" -4E83416B551F557D5BDA600450E6CCB7742EB51D_8,4E83416B551F557D5BDA600450E6CCB7742EB51D,"For more information on associated services, see [Adding associated services](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html){: new_window}. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the notebook open in Edit mode. Now you are ready to set up the prerequisites for running the notebook. - -![Notebook open in Edit mode](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-fm-notebook-begin.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 3: Edit the notebook - -![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 00:57. Before you can run the notebook, you need to set up the environment. Follow these steps to verify the notebook prerequisites: 1. Scroll to the For IBM watsonx on IBM Cloud section in the notebook to see the two prerequisites to run the notebook. 1. Under the Create an IBM Cloud API key section, you need to pass your credentials to the Watson Machine Learning API using an API key. If you don't already have a saved API key, then follow these steps to create an API key. -" -4E83416B551F557D5BDA600450E6CCB7742EB51D_9,4E83416B551F557D5BDA600450E6CCB7742EB51D,"1. Access the [IBM Cloud console API keys page](https://cloud.ibm.com/iam/apikeys){: new_window}. 1. Click Create an IBM Cloud API key. If you have any existing API keys, the button may be labelled Create. 1. Type a name and description. 1. Click Create. 1. Copy the API key. 1. Download the API key for future use. 1. Review the Associate an instance of the Watson Machine Learning service with the current project section. You completed this prerequisite in [Task 1](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=enstep01). 1. Scroll to the Run the cell to provide the IBM Cloud API key section: 1. Click the Run icon ![Run icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/notebook-run.png){: iih} to run the cell. 1. Paste the API key, and press Enter. 1. Under Run the cell to set the credentials for IBM watsonx on IBM Cloud, click the Run icon ![Run icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/notebook-run.png){: iih} to run the cell and set the credentials. \ ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following images shows the notebook with the prerequisites completed. Now you are ready to run the notebook and review the output. - -![Notebook with the prerequisites completed](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-fm-notebook-apikey.png){: width=""100%"" } -" -4E83416B551F557D5BDA600450E6CCB7742EB51D_10,4E83416B551F557D5BDA600450E6CCB7742EB51D,"[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 4: Run the notebook and review the output - -" -4E83416B551F557D5BDA600450E6CCB7742EB51D_11,4E83416B551F557D5BDA600450E6CCB7742EB51D,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 01:03. The sample notebook includes information about the retrieval-augmented generation and how you can adapt the notebook for your specific use case. Follow these steps to run the notebook and review the output: 1. Scroll to the Step 2: Create a Knowledge Base section in the notebook: 1. Click the Run icon ![Run icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/notebook-run.png){: iih} for each of the three cells in that section. 1. Review the output for the three cells in the section. The code in these cells sets up the knowledge base as a collection of two articles. These articles were written as samples for watsonx.ai, they are not real articles published anywhere else. The authors and publication dates are fictional. 1. Scroll to the Step 3: Build a simple search component section in the notebook: 1. Click the Run icon ![Run icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/notebook-run.png){: iih} for each of the two cells in that section. 1. Review the output for the two cells in the section. The code in these cells builds a simple search component. Many articles that discuss retrieval-augmented generation assume the retrieval component uses a vector database. However, to perform the general retrieval-augmented generation pattern, any search-and-retrieve method that can reliably return relevant content from the knowledge base will do. In this notebook, the search component is a trivial search function that returns the index of one or the other of the two articles in the knowledge base, based on a simple regular expression match. 1. Scroll to the Step 4: Craft prompt text section in the notebook: 1. " -4E83416B551F557D5BDA600450E6CCB7742EB51D_12,4E83416B551F557D5BDA600450E6CCB7742EB51D,"Click the Run icon ![Run icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/notebook-run.png){: iih} for each of the two cells in that section. 1. Review the output for the two cells in the section. The code in these cells crafts the prompt text. There is no one, best prompt for any given task. However, models that have been instruction-tuned, such as bigscience/mt0-xxl-13b, google/flan-t5-xxl-11b, or google/flan-ul2-20b, can generally perform this task with a sample prompt. Conservative decoding methods tend towards succinct answers. In the prompt, notice two string placeholders (marked with %s) that will be replaced at generation time: - The first placeholder will be replaced with the text of the relevant article from the knowledge base - The second placeholder will be replaced with the question to be answered 1. Scroll to the Step 5: Generate output using the foundation models Python library section in the notebook: 1. Click the Run icon ![Run icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/notebook-run.png){: iih} for each of the three cells in that section. 1. Review the output for the three cells in the section. The code in these cells generates output by using the Python library. You can prompt foundation models in watsonx.ai programmatically using the Python library. For more information about the library, see the following topics: - [Introduction to the foundation models Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html?context=wx){: new_window} - [Foundation models Python library reference](https://ibm.github.io/watson-machine-learning-sdk/foundation_models.html){: new_window} 1. " -4E83416B551F557D5BDA600450E6CCB7742EB51D_13,4E83416B551F557D5BDA600450E6CCB7742EB51D,"Scroll to the Step 6: Pull everything together to perform retrieval-augmented generation section in the notebook: 1. Click the Run icon ![Run icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/notebook-run.png){: iih} for each of the two cells in that section. This code pulls everything together to perform retrieval-augmented generation. 1. Review the output for the first cell in the section. The code in this cell sets up the user input elements. 1. For the second cell in the section, type a question related to tomatoes or cucumbers to see the answer and the source. For example, Do I use mulch with tomatoes?. 1. Review the answer to your question. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the completed notebook. -![The completed notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-fm-notebook-complete.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html?context=cdpaas&locale=envideo-preview) - - - -" -4E83416B551F557D5BDA600450E6CCB7742EB51D_14,4E83416B551F557D5BDA600450E6CCB7742EB51D," Next steps - - - -* ![Watch Video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) Watch the video beginning at 02:55 to learn about considerations for applying the retrieval-augmented generation pattern to a production solution. -* Try the [Prompt a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html) tutorial using Prompt Lab. - - - -" -4E83416B551F557D5BDA600450E6CCB7742EB51D_15,4E83416B551F557D5BDA600450E6CCB7742EB51D," Additional resources - - - -* [Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html) -* [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) -* [Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html) -* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html). -* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands-on experience: - -![Notebook icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/notebook.svg)[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models. - -![Project icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/ibm-cloud--projects.svg)[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets. - -![Data set icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/data--set.svg)[Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models. - -" -4E83416B551F557D5BDA600450E6CCB7742EB51D_16,4E83416B551F557D5BDA600450E6CCB7742EB51D,"![Prompt icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/prompt.svg)[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model. - -![Model icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/model.svg)[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab. - - - -Parent topic:[Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html) -" -220E465DBC0C22FF06F80DF18B25044DD1EBC787_0,220E465DBC0C22FF06F80DF18B25044DD1EBC787," Quick start: Generate synthetic tabular data - -Take this tutorial to learn how to generate synthetic tabular data in IBM watsonx.ai. The benefit to synthetic data is that you can procure the data on-demand, then customize to fit your use case, and produce it in large quantities. This tutorial helps you learn how to use the graphical flow editor tool, Synthetic Data Generator, to generate synthetic tabular data based on production data or a custom data schema using visual flows and modeling algorithms. - -Required services : Watson Studio - -Your basic workflow includes these tasks: - - - -1. Open a project. Projects are where you can collaborate with others to work with data. -2. Add your data to the project. You can add CSV files or data from a remote data source through a connection. -3. Create and run a synthetic data flow to the project. You use the graphical flow editor tool Synthetic Data Generator to generate synthetic tabular data based on production data or a custom data schema using visual flows and modeling algorithms. -4. Review the synthetic data flow and output. - - - -" -220E465DBC0C22FF06F80DF18B25044DD1EBC787_1,220E465DBC0C22FF06F80DF18B25044DD1EBC787," Read about synthetic data - -Synthetic data is information that has been generated on a computer to augment or replace real data to improve AI models, protect sensitive data, and mitigate bias. Synthetic data helps to mitigate many of the logistical, ethical, and privacy issues that come with training machine learning models on real-world examples. - -[Read more about Synthetic data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html) - -" -220E465DBC0C22FF06F80DF18B25044DD1EBC787_2,220E465DBC0C22FF06F80DF18B25044DD1EBC787," Watch a video about generating synthetic tabular data - -![Watch Video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) Watch this video to preview the steps in this tutorial. There might be slight differences in the user interface shown in the video. The video is intended to be a companion to the written tutorial. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -220E465DBC0C22FF06F80DF18B25044DD1EBC787_3,220E465DBC0C22FF06F80DF18B25044DD1EBC787," Try a tutorial to generate synthetic tabular data - -In this tutorial, you will complete these tasks: - - - -* [Task 1: Open a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html?context=cdpaas&locale=enstep01) -* [Task 2: Add data to your project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html?context=cdpaas&locale=enstep02) -* [Task 3: Create a synthetic data flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html?context=cdpaas&locale=enstep03) -* [Task 4: Review the data flow and output](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html?context=cdpaas&locale=enstep04) - - - -Expand all sections - - - -* Tips for completing this tutorial - -" -220E465DBC0C22FF06F80DF18B25044DD1EBC787_4,220E465DBC0C22FF06F80DF18B25044DD1EBC787,"### Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.The following animated image shows how to use the video picture-in-picture and table of contents features: ![How to use picture-in-picture and chapters](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/pip-and-chapters.gif){: width=""560px"" height=""315px"" data-tearsheet=""this""} ### Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235){: new_window}. ### Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. ![Side-by-side tutorial and UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/tutorial-side-by-side.png){: width=""560px"" height=""315px"" data-tearsheet=""this""} Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later. -" -220E465DBC0C22FF06F80DF18B25044DD1EBC787_5,220E465DBC0C22FF06F80DF18B25044DD1EBC787,"[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 1: Open a project - -You need a project to to store the assets. Watch a video to see how to create a sandbox project and associate a service. Then follow the steps to verify that you have an existing project or create a sandbox project. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -1. From the watsonx home screen, scroll to the Projects section. If you see any projects listed, then skip to [Task 2](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html?context=cdpaas&locale=enstep02). If you don't see any projects, then follow these steps to create a project. 1. Click Create a sandbox project. When the project is created, you will see the sandbox project in the Projects section. For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html){: new_window}. \ ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the home screen with the sandbox listed in the Projects section. You are now ready to open the Prompt Lab. - -![Home screen with sandbox project listed.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-home-screen.png){: width=""100%"" } -" -220E465DBC0C22FF06F80DF18B25044DD1EBC787_6,220E465DBC0C22FF06F80DF18B25044DD1EBC787,"[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 2: Add data to your project - -![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 00:24. The data set used in this tutorial contains typical information that a company gathers about their customers, and is available in the Samples. Follow these steps to find the data set in the Samples and add it to your project: 1. Access the [Customers data set](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/4bfbe430a82e23821aed0647b506da93){: new_window} in the Samples. 1. Click Add to project. 1. Select your project from the list, and click Add. 1. After the data set is added, click View Project. For more information on adding data assets from the Samples to your project, see [Loading and accessing data in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html). ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the Assets tab in the project. Now you are ready to create the synthetic data flow. - -![The following image shows the Assets tab in the project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-synthetic-data-assets-tab.png){: width=""100%"" } -" -220E465DBC0C22FF06F80DF18B25044DD1EBC787_7,220E465DBC0C22FF06F80DF18B25044DD1EBC787,"[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 3: Create a synthetic data flow - -" -220E465DBC0C22FF06F80DF18B25044DD1EBC787_8,220E465DBC0C22FF06F80DF18B25044DD1EBC787,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 00:43. Use the Synthetic Data Generator to create a data flow that generates synthetic tabular data based on production data or a custom data schema using visual flows and modeling algorithms. Follow these steps to create a synthetic data flow asset in your project: 1. From the Assets tab in your project, click New asset > Generate synthetic tabular data. 1. For the name, type Bank customers{: .cp}. 1. Click Create. 1. On the Welcome to Synthetic Data Generator screen, click First time user, and click Continue. This option provides a guided experience for you to build the data flow. 1. Review the two use cases: - Leverage your existing data: Generate a structured synthetic data set based on your production data. You can connect to a database, import or upload a file, mask, and generate your output before exporting. - Create from custom data: Generate a structured synthetic data set based on meta data. You can define the data within each table column, their distributions, and any correlations. 1. Select the Leverage your existing data use case, and click Next to import existing data. 1. Click Select data from project to use the customers data asset that you added from the Samples. 1. Select Data asset > customers.csv. 1. Click Select. 1. Click Next. 1. In the list of columns, search for creditcard_number{: .cp}. 1. In the Anonymize column for CREDITCARD_NUMBER, select Yes to mask customers' credit card numbers. 1. Click Next. 1. Accept the default settings on the Mimic options page. These options generate synthetic data, based on your production data, using a set of candidate statistical distributions to modify each column in your data. Click Next. 1. For the File name, type bank_customers.csv{: .cp}, and click Next. 1. Review the settings, and click Save and run. The Synthetic Data Generator tool displays with the data flow. " -220E465DBC0C22FF06F80DF18B25044DD1EBC787_9,220E465DBC0C22FF06F80DF18B25044DD1EBC787,"Wait for the run to complete. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the data flow open in the Synthetic Data Generator. Now you can explore the data flow and view the output. -![The following image shows the data flow open in the Synthetic Data Generator.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-synthetic-data-flow.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 4: Review the data flow and output - -" -220E465DBC0C22FF06F80DF18B25044DD1EBC787_10,220E465DBC0C22FF06F80DF18B25044DD1EBC787,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 01:48. When the run completes, you can explore the data flow. Follow these steps to review the synthetic data flow and the results: 1. Click the Palette icon ![Palette icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/side-panel-close-filled.svg){: iih} to close the node panel. 1. Double-click the Import node to see the settings. 1. Review the Data properties. The tool read the data set from the project and filled in the appropriate data properties. 1. Expand the Types section. The tool read the values and columns in the data set. 1. Click Cancel. 1. Double-click the Anonymize node to see the settings. 1. Verify that the CREDITCARD_NUMBER column is set to be anonymized. 1. Expand the Anonymize values section. Here you can customize how the values are anonymized. 1. Click Cancel. 1. Double-click the Mimic node to see the settings. 1. Review the default settings to mimic the data in the source customers data set. 1. Click Cancel. 1. Double-click the Generate node to see the settings. 1. Review the list of Synthesized columns. 1. Optional: Review the Correlations and Advanced Options. 1. Click Cancel. 1. Double-click the Export node to see the settings. 1. Optional: By default the exported data is stored in the project. Click Change path to store the exported data in a connection, such as Db2 Warehouse. 1. Click Cancel. 1. Click your project name to return to the Assets tab. ![Project breadcrumbs](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-sandbox-breadcrumbs.png){: biw} 1. " -220E465DBC0C22FF06F80DF18B25044DD1EBC787_11,220E465DBC0C22FF06F80DF18B25044DD1EBC787,"Click bank_customers.csv to see a preview of the generated synthetic tabular data. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the exported, generated synthetic tabular data set. -![The following image shows the exported, generated synthetic tabular data set.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-synthetic-data.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html?context=cdpaas&locale=envideo-preview) - - - -" -220E465DBC0C22FF06F80DF18B25044DD1EBC787_12,220E465DBC0C22FF06F80DF18B25044DD1EBC787," Next steps - -Try these additional tutorials to get more hands-on experience with watsonx.ai: - - - -* [Refine data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html) -* [Analyze data in a Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html) -* [Build machine learning models](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.htmltutorials-for-building-deploying-and-trusting-models) - - - -" -220E465DBC0C22FF06F80DF18B25044DD1EBC787_13,220E465DBC0C22FF06F80DF18B25044DD1EBC787," Additional resources - - - -* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html). -* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands-on experience: - -![Notebook icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/notebook.svg)[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models. - -![Project icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/ibm-cloud--projects.svg)[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets. - -![Data set icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/data--set.svg)[Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models. - -![Prompt icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/prompt.svg)[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model. - -" -220E465DBC0C22FF06F80DF18B25044DD1EBC787_14,220E465DBC0C22FF06F80DF18B25044DD1EBC787,"![Model icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/model.svg)[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab. -* [Overview of watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html) - - - -Parent topic:[Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html) -" -870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B_0,870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B," Quick start: Automate the lifecycle for a model with pipelines - -You can create an end-to-end pipeline to deliver concise, pre-processed, and up-to-date data stored in an external data source. Read about Watson Pipelines, then watch a video and take a tutorial. - -Required services : Watson Studio : Watson Machine Learning - -Your basic workflow includes these tasks: - - - -1. Open your sandbox project. Projects are where you can collaborate with others to work with data. -2. Add connections and data to the project. You can add CSV files or data from a remote data source through a connection. -3. Create a pipeline in the project. -4. Add nodes to the pipeline to perform tasks. -5. Run the pipeline and view the results. - - - -" -870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B_1,870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B," Read about pipelines - -The Watson Pipelines editor provides a graphical interface for orchestrating an end-to-end flow of assets from creation through deployment. Assemble and configure a pipeline to create, train, deploy, and update machine learning models and Python scripts. Putting a model into production is a multi-step process. Data must be loaded and processed, models must be trained and tuned before they are deployed and tested. Machine learning models require more observation, evaluation, and updating over time to avoid bias or drift. - -[Read more about pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) - -[Learn about other ways to build models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html) - -" -870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B_2,870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B," Watch a video about pipelines - -![Watch Video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) Watch this video to preview the steps in this tutorial. You might notice slight differences in the user interface that is shown in the video. The video is intended to be a companion to the written tutorial. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B_3,870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B," Try a tutorial to create a model with Pipelines - -This tutorial guides you through exploring and running an AI pipeline to build and deploy a model. The model predicts if a customer is likely subscribe to a term deposit based on a marketing campaign. - -In this tutorial, you will complete these tasks: - - - -* [Task 1: Open a project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=enstep01) -* [Task 2: Create a deployment space.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=enstep02) -* [Task 3: Create the sample pipeline.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=enstep03) -* [Task 4: Explore an existing pipeline.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=enstep04) -* [Task 5: Run the pipeline.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=enstep05) -* [Task 6: View the assets, deployed model, and online deployment.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=enstep06) - - - -This tutorial takes approximately 30 minutes to complete. - -" -870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B_4,870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B," Sample data - -The sample data that is used in the guided experience is UCI: Bank marketing data used to predict whether a customer enrolls in a marketing promotion. - -![Spreadsheet of the Bank marketing data set](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai_bank_sample_description.png) - -Expand all sections - - - -* Tips for completing this tutorial - -" -870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B_5,870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B,"### Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.The following animated image shows how to use the video picture-in-picture and table of contents features: ![How to use picture-in-picture and chapters](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/pip-and-chapters.gif){: width=""560px"" height=""315px"" data-tearsheet=""this""} ### Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235){: new_window}. ### Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. ![Side-by-side tutorial and UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/tutorial-side-by-side.png){: width=""560px"" height=""315px"" data-tearsheet=""this""} Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later. -" -870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B_6,870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B,"[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 1: Open a project - -You need a project to store Prompt Lab assets. Watch a video to see how to create a sandbox project and associate a service. Then follow the steps to verify that you have an existing project or create a sandbox project. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -1. From the watsonx home screen, scroll to the Projects section. If you see any projects listed, then skip to [Task 2](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=enstep02). If you don't see any projects, then follow these steps to create a project. 1. Click Create a sandbox project. When the project is created, you will see the sandbox project in the Projects section. For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html){: new_window}. \ ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the home screen with the sandbox listed in the Projects section. You are now ready to open the Prompt Lab. - -![Home screen with sandbox project listed.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-home-screen.png){: width=""100%"" } -" -870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B_7,870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B,"[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 2: Create a deployment space - -![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 00:16. Deployment spaces help you to organize supporting resources such as input data and environments; deploy models or functions to generate predictions or solutions; and view or edit deployment details. Follow these steps to create a deployment space. 1. From the watsonx navigation menu ![Navigation menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/navigation-menu.svg){: iih}, choose Deployments. If you have an existing deployment space, you can skip to [Task 2](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=enstep02). 1. Click New deployment space. 1. Type a name for your deployment space. 1. Select a storage service from the list. 1. Select your provisioned machine learning service from the list. 1. Click Create. \ ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the empty deployment space: - -![The following image shows the empty deployment space.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-pipeline-deployment-space.png){: width=""100%"" } -" -870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B_8,870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B,"[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 3: Create the sample pipeline - -![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 00:08. You create and run pipelines in a project. Follow these steps to create a pipeline based on a sample in a project: 1. On the watsonx home page, select your sandbox or a different existing project from the drop down list. ![Project list drop down](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-project-dropdown.png){: biw} 1. Click Customize my journey, and then select View all tasks. 1. Select Automate model lifecycle. 1. Click Samples. 1. Select Orchestrate an AutoAI experiment, and click Next. 1. Optional: Change the name for the pipeline. 1. Click Create. The sample pipeline gets training data, trains a machine learning model by using the AutoAI tool, and selects the best pipeline to save as a model. The model is deployed to a space. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the sample pipeline. - -![The following image shows the sample pipeline.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-pipeline-start.png){: width=""100%"" } -" -870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B_9,870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B,"[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 4: Explore the existing pipeline - -" -870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B_10,870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 00:30. The sample pipeline includes several nodes to create assets and use those assets to build a model. Follow these steps to view the nodes: 1. Click the Global objects![Global objects icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/settings-adjust.svg){: iih} icon to view the pipeline parameters. Expand the deployment_space parameter. This pipeline includes a parameter to specify a deployment space where the best model from the AutoAI experiment is stored and deployed. Click the X to close the window. 1. Double-click the Create data file node to see that it is configured to access the data set for the experiment. Click Cancel to close the properties pane. 1. Double-click the Create AutoAI experiment node. View the experiment name, the scope, which is where the experiment is stored, the prediction type (binary classification, multiclass classification, or regression), the prediction column, and positive class. The rest of the parameters are all optional. Click Cancel to close the properties pane. 1. Double-click the Run AutoAI experiment node. This node runs the AutoAI experiment onboarding-bank-marketing-prediction, trains the pipelines, then saves the best model. The first two parameters are required. The first parameter takes the output from the Create AutoAI experiment node as the input to run the experiment. The second parameter takes the output from the Create data file node as the training data input for the experiment. The rest of the parameters are all optional. Click Cancel to close the properties pane. 1. Double-click the Create Web service node. This node creates a deployment with the name onboarding-bank-marketing-prediction-deployment. The first parameter takes the best model output from the Run AutoAI experiment node as the input to create the deployment with the specified name. " -870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B_11,870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B,"The rest of the parameters are all optional. Click Cancel to close the properties pane. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the properties for the Create web service node. You are now ready to run the sample pipeline. -![The following image the properties for the Create web service node.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-pipeline-properties.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 5: Run the pipeline - -" -870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B_12,870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 03:43. Now that the pipeline is complete, follow these steps to run the pipeline: 1. From the toolbar, click Run pipeline > Trial run. 1. In the Values for pipeline parameters section, select your deployment space: 1. Click Select Space. 1. Click Spaces. 1. Select your deployment space from [Task 1](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=enstep01). 1. Click Choose. 1. Provide an API key if this occasion is your first time running a pipeline. Pipeline assets use your personal IBM Cloud API key to run operations securely without disruption. - If you have an existing API key, click Use existing API key, paste the API key, and click Save. - If you don't have an existing API key, click Generate new API key, provide a name, and click Save. Copy the API key, and then save the API key for future use. When you're done, click Close. 1. Click Run to start running the pipeline. 1. Monitor the pipeline progress. 1. Scroll through consolidated logs while the pipeline is running. The trial run might take up to 10 minutes to complete. 1. As each operation completes, select the node for that operation on the canvas. 1. On the Node Inspector tab, view the details of the operation. 1. Click the Node output tab to see a summary of the output for each node operation. \ ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the pipeline after it completed the trial run. You are now ready to review the assets that the pipeline created. - -" -870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B_13,870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B,"![Completed run of pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-pipeline-completed-run.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 6: View the assets, deployed model, and online deployment - -![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 04:27. The pipeline created several assets in the deployment space. Follow these steps to view the assets: 1. From the watsonx navigation menu ![Navigation menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/navigation-menu.svg){: iih}, choose Deployments. 1. Click the name for your deployment space. 1. On the Assets tab, view All assets. 1. Click the bank-marketing-data.csv data asset. The Create data file node created this asset. 1. Click the model beginning with the name onboarding-bank-marketing-prediction. The Run AutoAI experiment node generated several model candidates, and chose this as the best model. 1. Click the Model details tab, and scroll through the model and training information. 1. Click the Deployments tab, and open the onboarding-bank-marketing-prediction-deployment. 1. Click the Test tab. 1. Click the JSON input tab. 1. " -870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B_14,870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B,"Replace the sample text with the following JSON text, and click Predict.JSON { ""input_data"": [ { ""fields"": ""age"", ""job"", ""marital"", ""education"", ""default"", ""balance"", ""housing"", ""loan"", ""contact"", ""day"", ""month"", ""duration"", ""campaign"", ""pdays"", ""previous"", ""poutcome"" ], ""values"": 35, ""management"", ""married"", ""tertiary"", ""no"", 0, ""yes"", ""no"", ""cellular"", 1, ""jun"", 850, 10, -1, 4, ""unknown"" ] ] } ] }### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the results of the test; the prediction is to approve the applicant. The confidence scores for your test might be different from the scores that are shown in the image. -![Test results predictions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-pipeline-results.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html?context=cdpaas&locale=envideo-preview) - - - - - -* Try these other methods to build models: - - - -* [Build and deploy a model with AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html) -* [Build and deploy a model in a Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html) -" -870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B_15,870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B,"* [Build and deploy a model with SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html) -* [Build and deploy a Decision Optimization model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html) - - - -* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html). -* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands-on experience: - -![Notebook icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/notebook.svg)[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models. - -![Project icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/ibm-cloud--projects.svg)[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets. - -![Data set icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/data--set.svg)[Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models. - -" -870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B_16,870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B,"![Prompt icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/prompt.svg)[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model. - -![Model icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/model.svg)[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab. - - - -" -870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B_17,870BF64E17FEB1BBDAE7B35E9941DB781F26AD6B," Learn more - - - -* [Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) - - - -Parent topic:[Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html) -" -C038BA342A00562BCB7A569E4E2ACB7349C9CEF9_0,C038BA342A00562BCB7A569E4E2ACB7349C9CEF9," Quick start: Prompt a foundation model using Prompt Lab - -Take this tutorial to learn how to use the Prompt Lab in watsonx.ai. There are usually multiple ways to prompt a foundation model for a successful result. In the Prompt Lab, you can experiment with prompting different foundation models, explore sample prompts, as well as save and share your best prompts. See [Prompt tips](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-tips.html) to help you successfully prompt most text-generating foundation models. - -Required services : Watson Studio : Watson Machine Learning - -Your basic workflow includes these tasks: - - - -1. Open a project. Projects are where you can collaborate with others to work with data. -2. Open the Prompt Lab. The Prompt Lab lets you experiment with prompting different foundation models, explore sample prompts, as well as save and share your best prompts. -3. Type your prompt in the prompt editor. You can type prompts in either freeform and structured mode. -4. Select the model to use. You can submit your prompt to any of the models supported by watsonx.ai. -5. Save your work as a projet asset. Saving your work as a project asset makes your work available to collaborators in the current project. - - - -" -C038BA342A00562BCB7A569E4E2ACB7349C9CEF9_1,C038BA342A00562BCB7A569E4E2ACB7349C9CEF9," Read about prompting a foundation model - -Foundation models are very large AI models. They have billions of parameters and are trained on terabytes of data. Foundation models can perform a variety of tasks, including text-, code-, or image generation, classification, conversation, and more. Large language models are a subset of foundation models used for text- and code-related tasks. In IBM watsonx.ai, there is a collection of deployed large language models that you can use, as well as tools for experimenting with prompts. - -[Read more about Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) - -" -C038BA342A00562BCB7A569E4E2ACB7349C9CEF9_2,C038BA342A00562BCB7A569E4E2ACB7349C9CEF9," Watch a video about prompting a foundation model - -![Watch Video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) Watch this video to preview the steps in this tutorial. There might be slight differences in the user interface shown in the video. The video is intended to be a companion to the written tutorial. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -C038BA342A00562BCB7A569E4E2ACB7349C9CEF9_3,C038BA342A00562BCB7A569E4E2ACB7349C9CEF9," Try a tutorial to prompt a foundation model - -In this tutorial, you will complete these tasks: - - - -* [Task 1: Open a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=enstep01) -* [Task 2: Use the Prompt Lab in Freeform mode](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=enstep02) -* [Task 3: Use the Prompt Lab in Structured mode](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=enstep03) -* [Task 4: Use the sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=enstep04) -* [Task 5: Choose a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=enstep05) -* [Task 6: Adjust model parameters](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=enstep06) -* [Task 7: Save your work](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=enstep07) - - - -Expand all sections - - - -* Tips for completing this tutorial - -" -C038BA342A00562BCB7A569E4E2ACB7349C9CEF9_4,C038BA342A00562BCB7A569E4E2ACB7349C9CEF9,"### Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.The following animated image shows how to use the video picture-in-picture and table of contents features: ![How to use picture-in-picture and chapters](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/pip-and-chapters.gif){: width=""560px"" height=""315px"" data-tearsheet=""this""} ### Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [watsonx.ai Community discussion forum](https://community.ibm.com/community/user/watsonx/communities/community-home/digestviewer?communitykey=81927b7e-9a92-4236-a0e0-018a27c4ad6e){: new_window}. ### Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. ![Side-by-side tutorial and UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/tutorial-side-by-side-wx.png){: width=""560px"" height=""315px"" data-tearsheet=""this""} Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later. -" -C038BA342A00562BCB7A569E4E2ACB7349C9CEF9_5,C038BA342A00562BCB7A569E4E2ACB7349C9CEF9,"[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 1: Open a project - -You need a project to store Prompt Lab assets. Watch a video to see how to create a sandbox project and associate a service. Then follow the steps to verify that you have an existing project or create a sandbox project. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -1. From the watsonx home screen, scroll to the Projects section. If you see any projects listed, then skip to [Task 2](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=enstep02). If you don't see any projects, then follow these steps to create a project. 1. Click Create a sandbox project. When the project is created, you will see the sandbox project in the Projects section. For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html){: new_window}. \ ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the home screen with the sandbox listed in the Projects section. You are now ready to open the Prompt Lab. - -![Home screen with sandbox project listed.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-home-screen.png){: width=""100%"" } -" -C038BA342A00562BCB7A569E4E2ACB7349C9CEF9_6,C038BA342A00562BCB7A569E4E2ACB7349C9CEF9,"[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 2: Use the Prompt Lab in Freeform mode - -![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 00:03. You can type your prompt text in a freeform, plain text editor and then click Generate to send your prompt to the model. Follow these steps to use the Prompt Lab in Freeform mode: 1. From the home screen, click the Experiment with foundation models and build prompts tile. 1. Select each checkbox to accept the acknowledgements, and then click Skip tour. 1. Click the Freeform tab to prompt a foundation model in Freeform mode. 1. Click Switch mode. 1. Copy and paste the following text in the text field, and then click Generate to see the output for the Class name: Problem.Classify this customer message into one of two classes: question, problem. Class name: Question Description: The customer is asking a technical question or a how-to question about our products or services. Class name: Problem Description: The customer is describing a problem they are having. They might say they are trying something, but it's not working. They might say they are getting an error or unexpected results. Message: I'm having trouble registering for a new account. Class name: -### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following images shows the generated output for the prompt in Freeform mode. Now you are ready to prompt a foundation model in Structured mode. - -" -C038BA342A00562BCB7A569E4E2ACB7349C9CEF9_7,C038BA342A00562BCB7A569E4E2ACB7349C9CEF9,"![Generated output for the prompt in Freeform mode.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-prompt-freeform.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 3: Use the Prompt Lab in Structured mode - -![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 00:19. You can type your prompt in a structured format. The structured format is helpful for few-shot prompting, when your prompt has multiple examples. Follow these steps to use the Prompt Lab in Structured mode: 1. Click the Structured tab. 1. Click Switch mode. 1. In the Instruction field, copy and paste the following text: Given a message submitted to a customer-support chatbot for a cloud software company, classify the customer's message as either a question or a problem description so the chat can be routed to the correct support team.{: .cp} 1. In the Setup field, copy and paste the following text in each column: | Input | Output | | ----- | ----- | | When I try to log in, I get an error. | Problem | | Where can I find the plan prices? | Question | | What is the difference between trial and paygo? | Question | | The registration page crashed, and now I can't create a new account. | Problem | | What regions are supported? | Question | | I can't remember my password. | Problem | -" -C038BA342A00562BCB7A569E4E2ACB7349C9CEF9_8,C038BA342A00562BCB7A569E4E2ACB7349C9CEF9,"1. In the Try field, copy and paste the following text: I'm having trouble registering for a new account.{: .cp} 1. Click Generate to see the output Problem. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following images shows the generated output for the prompt in Structured mode. Now you are ready to try the sample prompts. - -![Generated output for the prompt in Structured mode](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-prompt-structured.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 4: Use the sample prompts - -" -C038BA342A00562BCB7A569E4E2ACB7349C9CEF9_9,C038BA342A00562BCB7A569E4E2ACB7349C9CEF9,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 00:33. If you’re not sure how to begin, sample prompts can get your started. Follow these steps to use the sample prompts: 1. Open the Sample prompts icon ![Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/sample-prompts-icon.png){: iih} to display the list. 1. Scroll through the list, and click the Marketing email generation sample prompt. 1. View the selected model. When you load a sample prompt, an appropriate model is selected for you. 1. Open the Model Parameters![Model parameters](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/settings-adjust.svg){: iih} panel. The appropriate decoding and stopping criteria parameters are set automatically too. 1. Click Generate to submit the sample prompt to the model, and see the sample email output. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the generated output from a sample prompt. Now you are ready to customize the sample prompt output by selecting a different model and parameters. - -![Generated output from a sample prompt](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-sample-prompt-output.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 5: Choose a foundation model - -" -C038BA342A00562BCB7A569E4E2ACB7349C9CEF9_10,C038BA342A00562BCB7A569E4E2ACB7349C9CEF9,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 01:04. You can submit the same prompt to a different model. Follow these steps to choose a different foundation model: 1. Click Model > View all foundation models. 1. Click a model to learn more about a model, and see detail such as the model architecture, pretraining data, fine-tuning information, and performance against benchmarks. 1. Click Back to return to the list of models. 1. Select either the flan-t5-xxl-11b or mt0-xxl-13b foundation model, and click Select model. 1. Hover over the model output column and click the X icon to delete the previous output. 1. Click the same sample prompt, Marketing email generation, from the list. 1. Click Generate to generate output using the new model. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows generated output using a different model. You are now ready to adjust the model parameters. - -![Generated output using a different model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-sample-prompt-output-new-model.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 6: Adjust model parameters - -" -C038BA342A00562BCB7A569E4E2ACB7349C9CEF9_11,C038BA342A00562BCB7A569E4E2ACB7349C9CEF9,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 01:28. You can experiment with changing decoding or stopping criteria parameters. Follow these steps to adjust model parameters. Note: The model parameters vary based on the currently selected model.The following table defines the model parameters available for the flan-t5-xxl-11b foundation model. | Model parameters | Meaning | | ----- | ----- | | Decoding | Set decoding to Greedy to always select words with the highest probability. Set decoding to Sampling to customize the variability of word selection. | | Temperature | Control the creativity of generated text. Higher values will lead to more randomly generated outputs. | | Top P (nucleus sampling) | Set to < 1.0 to use only the smallest set of most probable tokens with probabilities that add up to top_p or higher. | | Top K | Set the number of highest probability vocabulary tokens to keep for top-k-filtering. Lower values make it less likely the model will go off topic. | | Random seed | Control the random sampling of the generated tokens when sampling is enabled. Setting the random see to the same number for each generation ensures experimental repeatability. | | Repetition penalty | Set a repetition penalty to counteract the model's tendency to repeat prompt text verbatim or get stuck in a loop. 1.00 indicates no penalty. | | Stop sequences | Set stop sequences to one ore more strings to cause the text generation to stop if or when they are produced as part of the output. | | Min tokens | Define the minimum number to tokens to generate. Stop sequences encountered prior to the minimum number of tokens being generated are ignored. | | Max tokens | Define the maximum number to tokens to generate. | -" -C038BA342A00562BCB7A569E4E2ACB7349C9CEF9_12,C038BA342A00562BCB7A569E4E2ACB7349C9CEF9,"1. Change the Top K parameter to 10 to make it less likely the model will go off topic. 1. Click X to delete the previous model output. 1. Click the same sample prompt from the list. 1. Click Generate to generate output using the new model parameters. 1. Click the Session history icon ![Session history icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-session-history-icon.png){: iih} after submitting multiple prompts to view your session history. 1. Click any entry to work with a previous prompt, model specification, and parameter settings, and then click Restore. 1. Edit the prompt, change the model, or adjust decoding and stopping criteria parameters. 1. Click Generate to generate output using the updated information. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows generated output using different model parameters. You are now ready to save your work. - -![Generated output using a different model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-session-history.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=envideo-preview) - - - -> - - - -* Task 7: Save your work - -" -C038BA342A00562BCB7A569E4E2ACB7349C9CEF9_13,C038BA342A00562BCB7A569E4E2ACB7349C9CEF9,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 02:15. You can save your work in three formats: | Asset type | Description | | ----- | ----- | | Prompt template | Save the current prompt only, without its history. | | Prompt session | Save history and data from the current session. | | Notebook | Save the current prompt as a notebook. | -Follow these steps to save your work: 1. Click Save work > Save as. 1. Select Prompt template. 1. For the name, type Sample prompts{: .cp}. 1. Select the View in project after saving option. 1. Click Save. 1. On the project's Assets tab, click the Sample prompts asset to load that prompt in the Prompt Lab and get right back to work. 1. Click the Saved prompts![Saved prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/saved-prompts-icon.png){: iih} to see saved prompt from your sandbox project. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the project's Assets tab with the prompt template asset: - -" -C038BA342A00562BCB7A569E4E2ACB7349C9CEF9_14,C038BA342A00562BCB7A569E4E2ACB7349C9CEF9,"![Project's Assets tab](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-saved-prompt-in-project.png){: width=""100%"" } ![Checkmark icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} The following image shows saved prompt in the Prompt Lab: ![Saved prompt in Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-saved-prompt-in-prompt-lab.png){: biw} -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html?context=cdpaas&locale=envideo-preview) - - - -" -C038BA342A00562BCB7A569E4E2ACB7349C9CEF9_15,C038BA342A00562BCB7A569E4E2ACB7349C9CEF9," Next steps - -You are now ready to: - - - -* Use the [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) to prompt [foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html) and save your work to a project. -* Try the [Prompt a foundation model with the retrieval-augmented generation pattern tutorial](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html) - - - -" -C038BA342A00562BCB7A569E4E2ACB7349C9CEF9_16,C038BA342A00562BCB7A569E4E2ACB7349C9CEF9," Additional resources - - - -* [Saving your work](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-save.html) -* [Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html) -* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html). -* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands-on experience: - -![Notebook icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/notebook.svg)[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models. - -![Project icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/ibm-cloud--projects.svg)[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets. - -![Data set icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/data--set.svg)[Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models. - -" -C038BA342A00562BCB7A569E4E2ACB7349C9CEF9_17,C038BA342A00562BCB7A569E4E2ACB7349C9CEF9,"![Prompt icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/prompt.svg)[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model. - -![Model icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/model.svg)[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab. - - - -Parent topic:[Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html) -" -98AA3E34D14723232D266A85CBB9E2B1816B1AA5_0,98AA3E34D14723232D266A85CBB9E2B1816B1AA5," Quick start: Refine data - -You can save data preparation time by quickly transforming large amounts of raw data into consumable, high-quality information that is ready for analytics. Read about the Data Refinery tool, then watch a video and take a tutorial that’s suitable for beginners and does not require coding. - -Your basic workflow includes these tasks: - - - -1. Open your sandbox project. Projects are where you can collaborate with others to work with data. -2. Add your data to the project. You can add CSV files or data from a remote data source through a connection. -3. Open the data in Data Refinery. -4. Perform steps using operations to refine the data. -5. Create and run a job to transform the data. - - - -" -98AA3E34D14723232D266A85CBB9E2B1816B1AA5_1,98AA3E34D14723232D266A85CBB9E2B1816B1AA5," Read about Data Refinery - -Use Data Refinery to cleanse and shape tabular data with a graphical flow editor. You can also use interactive templates to code operations, functions, and logical operators. When you cleanse data, you fix or remove data that is incorrect, incomplete, improperly formatted, or duplicated. When you shape data, you customize it by filtering, sorting, combining or removing columns, and performing operations. - -You create a Data Refinery flow as a set of ordered operations on data. Data Refinery includes a graphical interface to profile your data to validate it and over 20 customizable charts that give you perspective and insights into your data. When you save the refined data set, you typically load it to a different location than where you read it from. In this way, your source data remains untouched by the refinement process. - -[Read more about refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) - -" -98AA3E34D14723232D266A85CBB9E2B1816B1AA5_2,98AA3E34D14723232D266A85CBB9E2B1816B1AA5," Watch a video about refining data - -![Watch Video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) Watch this video to see how to refine data. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -98AA3E34D14723232D266A85CBB9E2B1816B1AA5_3,98AA3E34D14723232D266A85CBB9E2B1816B1AA5," Try a tutorial to refine data - -In this tutorial, you will complete these tasks: - - - -* [Task 1: Open a project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=enstep01) -* [Task 2: Open the data set in Data Refinery.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=enstep02) -* [Task 3: Review the data with Profile and Visualizations.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=enstep03) -* [Task 4: Refine the data.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=enstep04) -* [Task 5: Run a job for the Data Refinery flow.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=enstep05) -* [Task 6: Create another data asset from the Data Refinery flow.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=enstep06) -* [Task 7: View the data assets and your Data Refinery flow in your project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=enstep07) - - - -This tutorial will take approximately 30 minutes to complete. - -Expand all sections - - - -* Tips for completing this tutorial - -" -98AA3E34D14723232D266A85CBB9E2B1816B1AA5_4,98AA3E34D14723232D266A85CBB9E2B1816B1AA5,"### Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.The following animated image shows how to use the video picture-in-picture and table of contents features: ![How to use picture-in-picture and chapters](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/pip-and-chapters.gif){: width=""560px"" height=""315px"" data-tearsheet=""this""} ### Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235){: new_window}. ### Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. ![Side-by-side tutorial and UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/tutorial-side-by-side.png){: width=""560px"" height=""315px"" data-tearsheet=""this""} Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later. -" -98AA3E34D14723232D266A85CBB9E2B1816B1AA5_5,98AA3E34D14723232D266A85CBB9E2B1816B1AA5,"[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 1: Open a project - -You need a project to store the data and the Data Refinery flow. You can use your sandbox project or create a project. 1. From the navigation menu ![Navigation menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/navigation-menu.svg){: iih}, choose Projects > View all projects 1. Open your sandbox project. If you want to use a new project: 1. Click New project. 1. Select Create an empty project. 1. Enter a name and optional description for the project. 1. Choose an existing [object storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html){: new_window} or create a new one. 1. Click Create. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows a new, empty project. - -![The following image shows a new, empty project.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/dr-new-project.png){: width=""100%"" } For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html){: new_window}. -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 2: Open the data set in Data Refinery - -" -98AA3E34D14723232D266A85CBB9E2B1816B1AA5_6,98AA3E34D14723232D266A85CBB9E2B1816B1AA5,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 00:05. Follow these steps to add a data asset to your project and create a Data Refinery flow. The data set you will use in this tutorial is available in the Samples. 1. Access the [Airline data](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/8fa07e57e69f7d0cb970c86c6ae52d41){: new_window} in the Samples. 1. Click Add to project. 1. Select your project from the list, and click Add. 1. After the data set is added, click View Project. For more information on adding a data asset from the Samples to a project, see [Loading and accessing data in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html). 1. On the Assets tab, click the airline-data.csv data asset to preview its content. 1. Click Prepare data to open a sample of the file in Data Refinery, and wait until Data Refinery reads and processes a sample of the data. 1. Close the Information and Steps panels. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the airline data asset open in Data Refinery. - -![The following image shows the airline data asset open in Data Refinery.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/dr-asset-open-in-dr.png){: width=""100%"" } -" -98AA3E34D14723232D266A85CBB9E2B1816B1AA5_7,98AA3E34D14723232D266A85CBB9E2B1816B1AA5,"[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 3: Review the data with Profile and Visualizations - -![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 00:47. IBM Knowledge Catalog automatically profiles and classifies the content of an asset based on the values in those columns. Follow these steps to use the Profile and Visualizations tabs to explore the data. Tip: Use the Profile and Visualizations pages to view changes in the data as you refine it.1. Click the Profile tab to review the [frequency distribution](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/metrics.html){: new_window} of the data so that you can find the outliers. 1. Scroll through the columns to the see the statistics for each column. The statistics show the interquartile range, minimum, maximum, median and standard deviation in each column. 1. Hover over a bar to see additional details. The following image shows the Profile tab: -" -98AA3E34D14723232D266A85CBB9E2B1816B1AA5_8,98AA3E34D14723232D266A85CBB9E2B1816B1AA5,"![Profile tab](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/airline-profile-integer.png) 1. Click the Visualizations tab. 1. Select the UniqueCarrier column to visualize. Suggested [charts](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/visualizations.html){: new_window} have a blue dot next to their icons. 1. Click the Pie chart. Use the different perspectives available in the charts to identify patterns, connections, and relationships within the data. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the Visualizations tab. You are now ready to refine the data. - -![Visualizations tab](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/airline-viz.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 4: Refine the data - -" -98AA3E34D14723232D266A85CBB9E2B1816B1AA5_9,98AA3E34D14723232D266A85CBB9E2B1816B1AA5,"### Data Refinery operations Data Refinery uses two kinds of operations to refine data, GUI operations and coding operations. You will use both kinds of operations in this tutorial. - [GUI operations](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/gui_operations.html){: new_window} can consist of multiple steps. Select an operation from New step. A subset of the GUI operations is also available from each column's overflow menu (![overflow menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/overflow-menu--vertical.svg){: iih}). When you open a file in Data Refinery, the Convert column type operation is automatically applied as the first step to convert any non-string data types to inferred data types (for example, to Integer, Date, Boolean, etc.). You can undo or edit this step. - [Coding operations](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/code_operations.html){: new_window} are interactive templates for coding operations, functions, and logical operators. Most of the operations have interactive help. Click the operation name in the command-line text box to see the coding operations and their syntax options. ![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 01:16. Refining data is a series of steps to build a Data Refinery flow. As you go through this task, view the Steps panel to follow your progress. You can select a step to delete or edit it. If you make a mistake, you can also click the Undo icon ![Undo icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/undo.png){: iih}. " -98AA3E34D14723232D266A85CBB9E2B1816B1AA5_10,98AA3E34D14723232D266A85CBB9E2B1816B1AA5,"Follow these steps to refine the data: 1. Go back to the Data tab. 1. Select the Year column. Click the Overflow menu (![Overflow menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/overflow-menu--vertical.svg){: iih}) and choose Sort descending. 1. Click Steps to see the new step in the Steps panel. 1. Focus on the delays for a specific airline. This tutorial uses United Airlines (UA), but you can choose any airline. 1. Click New step, and then choose the GUI operation Filter. 1. Choose the UniqueCarrier column. 1. For Operator, choose Is equal to. 1. For Value, type the string for the airline for which you want to see delay information. For example, UA{: .cp}. -![Filter operation](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/filter.png) 1. Click Apply. Scroll to the UniqueCarrier column to see the results. 1. Create a new column that adds the arrival and departure delay times together. 1. Select the DepDelay column. 1. Notice that the Convert column type operation was automatically applied as the first step to convert the String data types in all the columns whose values are numbers to Integer data types. 1. Click New step, and then choose the GUI operation Calculate. 1. For Operator, choose Addition. 1. Select Column, and then choose the ArrDelay column. 1. Select Create new column for results. 1. For New column name, type TotalDelay{: .cp}. -" -98AA3E34D14723232D266A85CBB9E2B1816B1AA5_11,98AA3E34D14723232D266A85CBB9E2B1816B1AA5,"![Calculate operation](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/calculate.png) 1. You can position the new column at the end of the list of columns or next to the original column. In this case, select Next to original column. 1. Click Apply. The new column, TotalDelay, is added. 1. Move the new TotalDelay column to the beginning of the data set: 1. In the command-line text box, choose the select operation. 1. Click the word select, and then choose select(`, everything()). 1. Click , and then choose the TotalDelay column. When you finish, the command should look like this: select(TotalDelay, everything()) 1. Click Apply. The TotalDelay column is now the first column. 1. Reduce the data to four columns: Year, Month, DayofMonth, and TotalDelay. Use the group_by coding operation to divide the columns into groups of year, month, and day. 1. In the command-line text box, choose the group_by operation. 1. Click , and then choose the Year column. 1. Before the closing parenthesis, type: ,Month,DayofMonth{: .cp}. When you finish, the command should look like this: group_by(Year,Month,DayofMonth) 1. Click Apply. 1. Use the select coding operation for the TotalDelay column. In the command-line text box, select the select operation. -Click , and choose the TotalDelay column. The command should look like this: select(TotalDelay) 1. Click Apply. The shaped data now consists of the Year, Month, DayofMonth, and TotalDelay columns. The following screen image shows the first four rows of the data. -" -98AA3E34D14723232D266A85CBB9E2B1816B1AA5_12,98AA3E34D14723232D266A85CBB9E2B1816B1AA5,"![The first four rows of the Data Refinery flow with the Year, Month, DayofMonth, and TotalDelay columns](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/four_columns_with_totaldelay.png) 1. Show the mean of the values of the TotalDelay column, and create a new AverageDelay column: 1. Click New step, and then choose the GUI operation Aggregate. 1. For the Column, select TotalDelay. 1. For Operator, select Mean. 1. For Name of the aggregated column, type AverageDelay{: .cp}. -![Aggregate operation](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/aggregate.png){: height=""500px""} 1. Click Apply. The new column AverageDelay is the average of all the delay times. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the first four rows of the data. - -![The following screen image shows the first four rows of the data.](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/four_columns_with_delay.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 5: Run a job for the Data Refinery flow - -" -98AA3E34D14723232D266A85CBB9E2B1816B1AA5_13,98AA3E34D14723232D266A85CBB9E2B1816B1AA5,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 04:16. When you run a job for the Data Refinery flow, the steps are run on the entire data set. You select the runtime and add a one-time or repeating schedule. The output of the Data Refinery flow is added to the data assets in the project. Follow these steps to run a job to create the refined data set. 1. From the Data Refinery toolbar, click the Jobs icon, and select Save and create a job. -![Save and create a job](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/save-create-job.png) 1. Type a name and description for the job, and click Next. 1. Select a runtime environment, and click Next. 1. (Optional) Click the toggle button to schedule a run. Specify the date, time and if you would like the job to repeat, and click Next. 1. (Optional) Turn on notifications for this job, and click Next. 1. Review the details, and click Create and run to run the job immediately. -" -98AA3E34D14723232D266A85CBB9E2B1816B1AA5_14,98AA3E34D14723232D266A85CBB9E2B1816B1AA5,"![create job](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/create-run.png) 1. When the job is created, click the job details link in the notification to view the job in your project. Alternatively, you can navigate to the Jobs tab in the project, and click the job name to open it. 1. When the Status for the job is Completed, use the project navigation trail to navigate back to the Assets tab in the project. 1. Click the Data > Data assets section to see the output of the Data Refinery flow, airline-data_shaped.csv. 1. Click the Flows > Data Refinery flows section to see the Data Refinery flow, airline-data.csv_flow. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the Assets tab with the Data Refinery flow and shaped asset. - -![The following image shows the Assets tab with the Data Refinery flow and shaped asset.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/dr-project-shaped-data.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 6: Create another data asset from the Data Refinery flow - -" -98AA3E34D14723232D266A85CBB9E2B1816B1AA5_15,98AA3E34D14723232D266A85CBB9E2B1816B1AA5,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 05:26. Follow these steps to further refine the data set by editing the Data Refinery flow: 1. Click airline-data.csv_flow to open the flow in Data Refinery. 1. Sort the AverageDelay column in descending order. 1. Select the AverageDelay column. 1. Click the column Overflow menu (![Overflow menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/overflow-menu--vertical.svg){: iih}), and then select Sort descending. 1. Click the Flow settings icon ![Flow settings icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/settings.png){: iih}. 1. Click the Target data set panel. 1. Click Edit properties. 1. In the Format target properties dialog, change the data asset name to airline-data_sorted_shaped.csv{: .cp}. -![changed output file name](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/output_changed-new.png) 1. Click Save to return to the Flow settings. 1. Click Apply to save the settings. 1. From the Data Refinery toolbar, click the Jobs icon and select Save and view jobs. -![Save and view jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/save-view-jobs.png) 1. Select the job for the airline data, and then click View. 1. From the Job window toolbar, click the Run job icon. -" -98AA3E34D14723232D266A85CBB9E2B1816B1AA5_16,98AA3E34D14723232D266A85CBB9E2B1816B1AA5,"![Run jobs icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/run-job.png) ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the completed job details. - -![The following image shows the completed job details.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/dr-completed-job.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 7: View the data assets and your Data Refinery flow in your project - -" -98AA3E34D14723232D266A85CBB9E2B1816B1AA5_17,98AA3E34D14723232D266A85CBB9E2B1816B1AA5,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 06:40. Now follow these steps to view the three data assets, the original, the first refined data set, and the second refined data set: 1. When the job completes, go to the project page. 1. Click the Assets tab. 1. In the Data assets section, you will see the original data set that you uploaded and the output of the two Data Refinery flows. airline-data_sorted_shaped.csvairline-data_csv_shapedairline-data.csv 1. Click the airline-data_csv_shaped data asset to see the mean delay unsorted. Navigate back to the Assets tab. 1. Click airline-data_sorted_shaped.csv data asset to see the mean delay sorted in descending order. Navigate back to the Assets tab. 1. Click the *Flows > Data Refinery flows section shows the Data Refinery flow: airline-data.csv_flow. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the Assets tab with all of the assets displayed. - -![The following image shows the Assets tab with all of the assets displayed.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/dr-final-assets-tab.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html?context=cdpaas&locale=envideo-preview) - - - -" -98AA3E34D14723232D266A85CBB9E2B1816B1AA5_18,98AA3E34D14723232D266A85CBB9E2B1816B1AA5," Next steps - -Now the data is ready to be used. For example, you or other users can do any of these tasks: - - - -* [Analyze the data in a Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html) -* [Build and train a model with the data asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html) - - - -" -98AA3E34D14723232D266A85CBB9E2B1816B1AA5_19,98AA3E34D14723232D266A85CBB9E2B1816B1AA5," Additional resources - - - -* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html). -* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands-on experience: - -![Notebook icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/notebook.svg)[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models. - -![Project icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/ibm-cloud--projects.svg)[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets. - -![Data set icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/data--set.svg)[Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models. - -![Prompt icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/prompt.svg)[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model. - -" -98AA3E34D14723232D266A85CBB9E2B1816B1AA5_20,98AA3E34D14723232D266A85CBB9E2B1816B1AA5,"![Model icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/model.svg)[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab. - - - -Parent topic:[Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html) -" -8109B6380043CE464115025DD32A7A821FD56DB7_0,8109B6380043CE464115025DD32A7A821FD56DB7," Quick start: Tune a foundation model - -There are a couple of reasons to tune your foundation model. By tuning a model on many labeled examples, you can enhance the model performance compared to prompt engineering alone. By tuning a base model to perform similarly to a bigger model in the same model family, you can reduce costs by deploying that smaller model. - -Required services : Watson Studio : Watson Machine Learning - -Your basic workflow includes these tasks: - - - -1. Open a project. Projects are where you can collaborate with others to work with data. -2. Add your data to the project. You can upload data files, or add data from a remote data source through a connection. -3. Create a Tuning experiment in the project. The tuning experiment uses the Tuning Studio experiment builder. -4. Review the results of the experiment and the tuned model. The results include a Loss Function chart and the details of the tuned model. -5. Deploy and test your tuned model. Test your model in the Prompt Lab. - - - -" -8109B6380043CE464115025DD32A7A821FD56DB7_1,8109B6380043CE464115025DD32A7A821FD56DB7," Read about tuning a foundation model - -Prompt tuning adjusts the content of the prompt that is passed to the model. The underlying foundation model and its parameters are not edited. Only the prompt input is altered. You tune a model with the Tuning Studio to guide an AI foundation model to return the output you want. - -[Read more about Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html) - -" -8109B6380043CE464115025DD32A7A821FD56DB7_2,8109B6380043CE464115025DD32A7A821FD56DB7," Watch a video about tuning a foundation model - -![Watch Video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) Watch this video to preview the steps in this tutorial. There might be slight differences in the user interface that is shown in the video. The video is intended to be a companion to the written tutorial. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -8109B6380043CE464115025DD32A7A821FD56DB7_3,8109B6380043CE464115025DD32A7A821FD56DB7," Try a tutorial to tune a foundation model - -In this tutorial, you will complete these tasks: - - - -* [Task 1: Open a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=enstep01) -* [Task 2: Test your base model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=enstep02) -* [Task 3: Add your data to the project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=enstep03) -* [Task 4: Create a Tuning experiment in the project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=enstep04) -* [Task 5: Configure the Tuning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=enstep05) -* [Task 6: Deploy your tuned model to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=enstep06) -* [Task 7: Test your tuned model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=enstep07) - - - -Expand all sections - - - -* Tips for completing this tutorial - -" -8109B6380043CE464115025DD32A7A821FD56DB7_4,8109B6380043CE464115025DD32A7A821FD56DB7,"### Use the video picture-in-picture Tip: Start the video, then as you scroll through the tutorial, the video moves to picture-in-picture mode. Close the video table of contents for the best experience with picture-in-picture. You can use picture-in-picture mode so you can follow the video as you complete the tasks in this tutorial. Click the timestamps for each task to follow along.The following animated image shows how to use the video picture-in-picture and table of contents features: ![How to use picture-in-picture and chapters](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/pip-and-chapters.gif){: width=""560px"" height=""315px"" data-tearsheet=""this""} ### Get help in the community If you need help with this tutorial, you can ask a question or find an answer in the [Cloud Pak for Data Community discussion forum](https://community.ibm.com/community/user/cloudpakfordata/communities/community-home/digestviewer?communitykey=c0c16ff2-10ef-4b50-ae4c-57d769937235){: new_window}. ### Set up your browser windows For the optimal experience completing this tutorial, open Cloud Pak for Data in one browser window, and keep this tutorial page open in another browser window to switch easily between the two applications. Consider arranging the two browser windows side-by-side to make it easier to follow along. ![Side-by-side tutorial and UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/tutorial-side-by-side.png){: width=""560px"" height=""315px"" data-tearsheet=""this""} Tip: If you encounter a guided tour while completing this tutorial in the user interface, click Maybe later. -" -8109B6380043CE464115025DD32A7A821FD56DB7_5,8109B6380043CE464115025DD32A7A821FD56DB7,"[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 1: Open a project - -![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 00:04. You need a project to store the tuning experiment. Watch a video to see how to create a sandbox project and associate a service. Then follow the steps to verify that you have an existing project or create a sandbox project. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -8109B6380043CE464115025DD32A7A821FD56DB7_6,8109B6380043CE464115025DD32A7A821FD56DB7,"### Verify a existing project or create a new project 1. From the watsonx home screen, scroll to the Projects section. If you see any projects that are listed, then skip to [Associate the Watson Machine Learning service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=enassociate). If you don't see any projects, then follow these steps to create a project. 1. Click Create a sandbox project. When the project is created, you see the sandbox in the Projects section. 1. Open an existing project or the new sandbox project. \ Associate the Watson Machine Learning service with the project You use Watson Machine Learning to tune the foundation model, so follow these steps to associate your Watson Machine Learning service instance with your project. 1. In the project, click the Manage tab. 1. Click the Services & Integrations page. 1. Check whether this project has an associated Watson Machine Learning service. If there is no associated service, then follow these steps: 1. Click Associate service. 1. Check the box next to your Watson Machine Learning service instance. 1. Click Associate. 1. If necessary, click Cancel to return to the Services & Integrations page. For more information or to watch a video, see [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html){: new_window} and [Adding associated services to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html). \ ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the Manage tab with the associated service. You are now ready to add the sample notebook to your project. - -" -8109B6380043CE464115025DD32A7A821FD56DB7_7,8109B6380043CE464115025DD32A7A821FD56DB7,"![Manage tab in the project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-fm-associated-service.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 2: Test your base model - -" -8109B6380043CE464115025DD32A7A821FD56DB7_8,8109B6380043CE464115025DD32A7A821FD56DB7,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 00:19. You can test your tuned model in the Prompt Lab. Follow these steps to test your tuned model: 1. Return to the watsonx home screen. 1. Verify that your sandbox project is selected. ![Select the sandbox project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-select-sandbox.png){: biw} 1. Click the Experiment with foundation models and build prompts tile. 1. Select your tuned model. 1. Click the model drop-down list, and select View all foundation models. 1. Select the flan-t5-xl-3b model. 1. Click Select model. 1. On the Structured mode page, type the Instruction: txt Summarize customer complaints 1. Provide the examples and test input. | Example input | Example output | | ----- | ----- | | I forgot in my initial date I was using Capital One and this debt was in their hands and never was done. | Debt collection, sub-product: credit card debt, issue: took or threatened to take negative or legal action sub-issue | | I am a victim of identity theft and this debt does not belong to me. Please see the identity theft report and legal affidavit. | Debt collection, dub-product, I do not know, issue. attempts to collect debt not owed. sub-issue debt was a result of identity theft | -" -8109B6380043CE464115025DD32A7A821FD56DB7_9,8109B6380043CE464115025DD32A7A821FD56DB7,"1. In the Try text field, copy and paste the following prompt: txt After I reviewed my credit report, I am still seeing information that is reporting on my credit file that is not mine. please help me in getting these items removed from my credit file. 1. Click Generate, and review the results. 1. Click Save work > Save as. 1. Select Prompt template. 1. For the name, type Base model prompt{: .cp}. 1. Click Save. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows results in the Prompt Lab. - -![The following image shows results in the Prompt Lab.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-ts-base-prompt-lab.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 3: Add your data to the project - -" -8109B6380043CE464115025DD32A7A821FD56DB7_10,8109B6380043CE464115025DD32A7A821FD56DB7,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 01:12. You need to add the training data to your project. On the Samples page, you can find the customer complaints data set. This data set includes fictitious data of typical customer complaints regarding credit reports. Follow these steps to add the data set from the Samples to the project: 1. Access the [Customer complaints data set](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/725afa8c-0f58-47ac-b88c-26961c4f20a0){: new_window} on the Samples page. 1. Click Add to project. 1. Select your sandbox project. 1. Click Add. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the Samples asset added to the project. The next step is to create the Tuning experiment. - -![The following image shows the Samples asset added to the project. The next step is to create the Tuning experiment.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-ts-sample-data.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 4: Create a Tuning experiment in the project - -" -8109B6380043CE464115025DD32A7A821FD56DB7_11,8109B6380043CE464115025DD32A7A821FD56DB7,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 01:32. Now you are ready to create a tuning experiment in your sandbox project that uses the data set you just added to the project. Follow these steps to create a Tuning experiment: 1. Return to the watsonx home screen. 1. Verify that your sandbox project is selected. ![Select the sandbox project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-select-sandbox.png){: biw} 1. Click Tune a foundation model with labeled data. 1. For the name, type: txt Summarize customer complaints tuned model 1. For the description, type: txt Tuning Studio experiment to tune a foundation model to handle customer complaints. 1. Click Create. The Tuning Studio displays. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the Tuning experiment open in Tuning Studio. Now you are ready to configure the tuning experiment. - -![The following image shows the Tuning experiment open in Tuning Studio. Now you are ready to configure the tuning experiment.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-ts-new-exp.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 5: Configure the Tuning experiment - -" -8109B6380043CE464115025DD32A7A821FD56DB7_12,8109B6380043CE464115025DD32A7A821FD56DB7,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 01:47. In the Tuning Studio, you can configure the tuning experiment. The foundation model to tune is completed for you. Follow these steps to configure the tuning experiment: 1. For the foundation model to tune, select flan-t5-xl-3b. 1. Select Text for the method to initialize the prompt. There are two options: - Text: Uses text that you specify. - Random: Uses values that are generated for you as part of the tuning experiment. 1. For the Text field, type: txt Summarize the complaint provided into one sentence. The following table shows example text for each task type: | Task type | Example | | ----- | ----- | | Classification | Classify whether the sentiment of each comment is Positive or Negative | | Generation | Make the case for allowing employees to work from home a few days a week | | Summarization | Summarize the main points from a meeting transcript | -" -8109B6380043CE464115025DD32A7A821FD56DB7_13,8109B6380043CE464115025DD32A7A821FD56DB7,"1. Select Summarization for the task type that most closely matches what you want the model to do. There are three task types: - Summarization generates text that describes the main ideas that are expressed in a body of text. - Generation generates text such as a promotional email. - Classification predicts categorical labels from features. For example, given a set of customer comments, you might want to label each statement as a question or a problem. When you use the classification task, you need to list the class labels that you want the model to use. Specify the same labels that are used in your tuning training data. 1. Select your training data from the project. 1. Click Select from project. 1. Click Data asset. 1. Select the customer complaints training data.json file. 1. Click Select asset. 1. Click Start tuning. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the configured tuning experiment. Next, you review the results and deploy the tuned model. - -![The following image shows the configured tuning experiment.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-ts-configured-exp.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 6: Deploy your tuned model to a deployment space - -" -8109B6380043CE464115025DD32A7A821FD56DB7_14,8109B6380043CE464115025DD32A7A821FD56DB7,"![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 03:17. When the experiment run is complete, you see the tuned model and the Loss function chart. Loss function measures the difference between predicted and actual results with each training run. Follow these steps to view the loss function chart and the tuned model: 1. Review the Loss function chart. A downward sloping curve means that the model is getting better at generating the expected output. ![Completed tuning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-ts-exp-complete.png){: biw} 1. Below the chart, click the Summarize customer complaints tuned model. 1. Scroll through the model details. 1. Click Deploy. 1. For the name, type: txt Summarize customer complaints tuned model 1. For the Target deployment space, select an existing deployment space. If you don't have an existing deployment space, follow these steps: 1. For the Target deployment space, select Create a new deployment space. 1. For the deployment space name, type: txt Foundation models deployment space 1. Select a storage service from the list. 1. Select your provisioned machine learning service from the list. 1. Click Create. 1. Click Close. 1. For the Target deployment space, verify that Foundation models deployment space is selected. 1. Check the View deployment in deployment space after creating option. 1. Click Create. 1. On the Deployments page, click the Summarize customer complaints tuned mode deployment to view the details. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows the deployment in the deployment space. " -8109B6380043CE464115025DD32A7A821FD56DB7_15,8109B6380043CE464115025DD32A7A821FD56DB7,"You are now ready to test the deployed model. -![The following image shows the deployment in the deployment space.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-ts-deployment.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=envideo-preview) - - - - - -* Task 7: Test your tuned model - -![preview tutorial video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) To preview this task, watch the video beginning at 04:04. You can test your tuned model in the Prompt Lab. Follow these steps to test your tuned model: 1. From the model deployment page, click Open in prompt lab, and then select your sandbox project. The Prompt Lab displays. 1. Select your tuned model. 1. Click the model drop-down list, and select View all foundation models. 1. Select the Summarize customer complaints tuned model model. 1. Click Select model. 1. On the Structured mode page, type the Instruction: Summarize customer complaints{: .cp} 1. On the Structured mode page, provide the examples and test input. | Example input | Example output | | ----- | ----- | | I forgot in my initial date I was using Capital One and this debt was in their hands and never was done. | Debt collection, sub-product: credit card debt, issue: took or threatened to take negative or legal action sub-issue | | I am a victim of identity theft and this debt does not belong to me. Please see the identity theft report and legal affidavit. | Debt collection, dub-product, I do not know, issue. attempts to collect debt not owed. sub-issue debt was a result of identity theft | -" -8109B6380043CE464115025DD32A7A821FD56DB7_16,8109B6380043CE464115025DD32A7A821FD56DB7,"1. In the Try text field, copy and paste the following prompt: txt After I reviewed my credit report, I am still seeing information that is reporting on my credit file that is not mine. please help me in getting these items removed from my credit file. 1. Click Generate, and review the results. ### ![Checkpoint icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/checkmark--filled-blue.svg){: iih} Check your progress The following image shows results in the Prompt Lab. - -![The following image shows results in the Prompt Lab.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-ts-prompt-lab.png){: width=""100%"" } -[Back to the top](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html?context=cdpaas&locale=envideo-preview) - - - -" -8109B6380043CE464115025DD32A7A821FD56DB7_17,8109B6380043CE464115025DD32A7A821FD56DB7," Next steps - -Try these other tutorials: - - - -* [Prompt a foundation model in the Prompt Lab tutorial](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html) -* [Prompt a foundation model with retrieval-augmented generation pattern](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html) - - - -" -8109B6380043CE464115025DD32A7A821FD56DB7_18,8109B6380043CE464115025DD32A7A821FD56DB7," Additional resources - - - -* [Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html) -* [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) -* [Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html) -* [Security and privacy for foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html) -* View more [videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html). -* Find sample data sets, projects, models, prompts, and notebooks in the Samples to gain hands-on experience: - -![Notebook icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/notebook.svg)[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models. - -![Project icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/ibm-cloud--projects.svg)[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets. - -" -8109B6380043CE464115025DD32A7A821FD56DB7_19,8109B6380043CE464115025DD32A7A821FD56DB7,"![Data set icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/data--set.svg)[Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models. - -![Prompt icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/prompt.svg)[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model. - -![Model icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/model.svg)[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab. - - - -Parent topic:[Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html) -" -F495F5206C908FB1A31F18A8AB3CE9465164564C_0,F495F5206C908FB1A31F18A8AB3CE9465164564C," Getting started with IBM watsonx as a Service - -You can sign up for IBM watsonx.ai or IBM watsonx.governance and explore the tutorials, resources, and tools to immediately get started working with models or governing models. If you are an administrator, follow the steps to set up watsonx for your organization. - -" -F495F5206C908FB1A31F18A8AB3CE9465164564C_1,F495F5206C908FB1A31F18A8AB3CE9465164564C," Start working - -To start working: - - - -1. If you haven't already, [sign up](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html) for watsonx.ai or watsonx.governance. -2. Click a task tile on the watsonx home page and start working. For example, click Experiment with foundation models and build prompts to open the Prompt Lab. Then, choose a sample prompt and start experimenting. Your first project, where you save your work, is created automatically. See [Your sandbox project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/sandbox.html). -3. Explore your resources: - - - -* Take a [Quick start tutorial](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html) -* Click a category in the Samples area of the home page to try out a notebook, a prompt, or a sample project. - - - - - -If you are an existing Cloud Pak for Data as a Service user, you can [switch to watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/platform-switcher.html). - -" -F495F5206C908FB1A31F18A8AB3CE9465164564C_2,F495F5206C908FB1A31F18A8AB3CE9465164564C," Set up the platform as an administrator - -To set up the watsonx platform for your organization, see [Setting up the platform as an administrator](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html). - -" -F495F5206C908FB1A31F18A8AB3CE9465164564C_3,F495F5206C908FB1A31F18A8AB3CE9465164564C," Learn about watsonx - -To understand watsonx, start with these resources: - - - -* [Overview of watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html) -* [Video library](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html) -* [Projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html) -* [Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html) -* [Read blogs on Medium and the IBM Community](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-help.htmlcommunity) - - - -Other information: - - - -* [Get help](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-help.html) -* [Browser support](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/browser-support.html) -" -AAE40F1CC335A650C1EB806E404394DA596FB433_0,AAE40F1CC335A650C1EB806E404394DA596FB433," Known issues and limitations - -The following limitations and known issues apply to watsonx. - - - -* [Regional limitations](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/region-lims.html) -* [Notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=ennotebooks) -* [Machine learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=enwmlissues) -* [SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=enspssissues) -* [Connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=enconnectissues) -* [Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=enpipeline-issues) -* [watsonx.governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=enxgov-issues) - - - - Notebook issues - -You might encounter some of these issues when getting started with and using notebooks. - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_1,AAE40F1CC335A650C1EB806E404394DA596FB433," Manual installation of some tensor libraries is not supported - -Some tensor flow libraries are preinstalled, but if you try to install additional tensor flow libraries yourself, you get an error. - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_2,AAE40F1CC335A650C1EB806E404394DA596FB433," Connection to notebook kernel is taking longer than expected after running a code cell - -If you try to reconnect to the kernel and immediately run a code cell (or if the kernel reconnection happened during code execution), the notebook doesn't reconnect to the kernel and no output is displayed for the code cell. You need to manually reconnect to the kernel by clicking Kernel > Reconnect. When the kernel is ready, you can try running the code cell again. - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_3,AAE40F1CC335A650C1EB806E404394DA596FB433," Using the predefined sqlContext object in multiple notebooks causes an error - -You might receive an Apache Spark error if you use the predefined sqlContext object in multiple notebooks. Create a new sqlContext object for each notebook. See [this Stack Overflow explanation](http://stackoverflow.com/questions/38117849/you-must-build-spark-with-hive-export-spark-hive-true/3811811238118112). - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_4,AAE40F1CC335A650C1EB806E404394DA596FB433," Connection failed message - -If your kernel stops, your notebook is no longer automatically saved. To save it, click File > Save manually, and you should get a Notebook saved message in the kernel information area, which appears before the Spark version. If you get a message that the kernel failed, to reconnect your notebook to the kernel click Kernel > Reconnect. If nothing you do restarts the kernel and you can't save the notebook, you can download it to save your changes by clicking File > Download as > Notebook (.ipynb). Then you need to create a new notebook based on your downloaded notebook file. - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_5,AAE40F1CC335A650C1EB806E404394DA596FB433," Hyperlinks to notebook sections don't work in preview mode - -If your notebook contains sections that you link to from an introductory section at the top of the notebook for example, the links to these sections will not work if the notebook was opened in view-only mode in Firefox. However, if you open the notebook in edit mode, these links will work. - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_6,AAE40F1CC335A650C1EB806E404394DA596FB433," Can't connect to notebook kernel - -If you try to run a notebook and you see the message Connecting to Kernel, followed by Connection failed. Reconnecting and finally by a connection failed error message, the reason might be that your firewall is blocking the notebook from running. - -If Watson Studio is installed behind a firewall, you must add the WebSocket connection wss://dataplatform.cloud.ibm.com to the firewall settings. Enabling this WebSocket connection is required when you're using notebooks and RStudio. - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_7,AAE40F1CC335A650C1EB806E404394DA596FB433," Insufficient resources available error when opening or editing a notebook - -If you see the following message when opening or editing a notebook, the environment runtime associated with your notebook has resource issues: - -Insufficient resources available -A runtime instance with the requested configuration can't be started at this time because the required hardware resources aren't available. -Try again later or adjust the requested sizes. - -To find the cause, try checking the status page for IBM Cloud incidents affecting Watson Studio. Additionally, you can open a support case at the IBM Cloud Support portal. - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_8,AAE40F1CC335A650C1EB806E404394DA596FB433," Machine learning issues - -You might encounter some of these issues when working with machine learning tools. - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_9,AAE40F1CC335A650C1EB806E404394DA596FB433," Region requirements - -You can only associate a Watson Machine Learning service instance with your project when the Watson Machine Learning service instance and the Watson Studio instance are located in the same region. - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_10,AAE40F1CC335A650C1EB806E404394DA596FB433," Accessing links if you create a service instance while associating a service with a project - -While you are associating a Watson Machine Learning service to a project, you have the option of creating a new service instance. If you choose to create a new service, the links on the service page might not work. To access the service terms, APIs, and documentation, right click the links to open them in new windows. - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_11,AAE40F1CC335A650C1EB806E404394DA596FB433," Federated Learning assets cannot be searched in All assets, search results, or filter results in the new projects UI - -You cannot search Federated Learning assets from the All assets view, the search results, or the filter results of your project. - -Workaround: Click the Federated Learning asset to open the tool. - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_12,AAE40F1CC335A650C1EB806E404394DA596FB433," Deployment issues - - - -* A deployment that is inactive (no scores) for a set time (24 hours for the free plan or 120 hours for a paid plan) is automatically hibernated. When a new scoring request is submitted, the deployment is reactivated and the score request is served. Expect a brief delay of 1 to 60 seconds for the first score request after activation, depending on the model framework. -* For some frameworks, such as SPSS modeler, the first score request for a deployed model after hibernation might result in a 504 error. If this happens, submit the request again; subsequent requests should succeed. - - - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_13,AAE40F1CC335A650C1EB806E404394DA596FB433," Watson Machine Learning limitations - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_14,AAE40F1CC335A650C1EB806E404394DA596FB433," AutoAI known limitations - - - -* Currently, AutoAI experiments do not support double-byte character sets. AutoAI only supports CSV files with ASCII characters. Users must convert any non-ASCII characters in the file name or content, and provide input data as a CSV as defined in [this CSV standard](https://tools.ietf.org/html/rfc4180). -* To interact programmatically with an AutoAI model, use the REST API instead of the Python client. The APIs for the Python client required to support AutoAI are not generally available at this time. - - - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_15,AAE40F1CC335A650C1EB806E404394DA596FB433," Data module not found in IBM Federated Learning - -The data handler for IBM Federated Learning is trying to extract a data module from the FL library but is unable to find it. You might see the following error message: - -ModuleNotFoundError: No module named 'ibmfl.util.datasets' - -The issue possibly results from using an outdated DataHandler. Please review and update your DataHandler to conform to the latest spec. Here is the link to the most recent [MNIST data handler](https://github.com/IBMDataScience/sample-notebooks/blob/master/Files/mnist_keras_data_handler.py) or ensure your sample versions are up-to-date. - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_16,AAE40F1CC335A650C1EB806E404394DA596FB433," SPSS Modeler issues - -You might encounter some of these issues when working in SPSS Modeler. - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_17,AAE40F1CC335A650C1EB806E404394DA596FB433," SPSS Modeler runtime restrictions - -Watson Studio does not include SPSS functionality in Peru, Ecuador, Colombia and Venezuela. - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_18,AAE40F1CC335A650C1EB806E404394DA596FB433," Merge node and unicode characters - -The Merge node treats the following very similar Japanese characters as the same character. -![Japanese characters](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/SPSSmergenode.png) - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_19,AAE40F1CC335A650C1EB806E404394DA596FB433," Connection issues - -You might encounter this issue when working with connections. - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_20,AAE40F1CC335A650C1EB806E404394DA596FB433," Cloudera Impala connection does not work with LDAP authentication - -If you create a connection to a Cloudera Impala data source and the Cloudera Impala server is set up for LDAP authentication, the username and password authentication method in IBM watsonx will not work. - -Workaround: Disable the Enable LDAP Authentication option on the Impala server. See [Configuring LDAP Authentication](https://docs.cloudera.com/cdp-private-cloud-base/latest/impala-secure/topics/impala-ldap.html) in the Cloudera documentation. - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_21,AAE40F1CC335A650C1EB806E404394DA596FB433," Watson Pipelines known issues - -The issues pertain to Watson Pipelines. - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_22,AAE40F1CC335A650C1EB806E404394DA596FB433," Nesting loops more than 2 levels can result in pipeline error - -Nesting loops more than 2 levels can result in an error when you run the pipeline, such as Error retrieving the run. Reviewing the logs can show an error such as text in text not resolved: neither pipeline_input nor node_output. If you are looping with output from a Bash script, the log might list an error like this: PipelineLoop can't be run; it has an invalid spec: non-existent variable in $(params.run-bash-script-standard-output). To resolve the problem, do not nest loops more than 2 levels. - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_23,AAE40F1CC335A650C1EB806E404394DA596FB433," Asset browser does not always reflect count for total numbers of asset type - -When selecting an asset from the asset browser, such as choosing a source for a Copy node, you see that some of the assets list the total number of that asset type available, but notebooks do not. That is a current limitation. - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_24,AAE40F1CC335A650C1EB806E404394DA596FB433," Cannot delete pipeline versions - -Currently, you cannot delete saved versions of pipelines that you no longer need. - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_25,AAE40F1CC335A650C1EB806E404394DA596FB433," Deleting an AutoAI experiment fails under some conditions - -Using a Delete AutoAI experiment node to delete an AutoAI experiment that was created from the Projects UI does not delete the AutoAI asset. However, the rest of the flow can complete successfully. - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_26,AAE40F1CC335A650C1EB806E404394DA596FB433," Cache appears enabled but is not enabled - -If the Copy assets Pipelines node's Copy mode is set to Overwrite, cache is displayed as enabled but remains disabled. - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_27,AAE40F1CC335A650C1EB806E404394DA596FB433," Watson Pipelines limitations - -These limitations apply to Watson Pipelines. - - - -* [Single pipeline limits](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=enpipeline-limits) -* [Limitations by configuration size](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=enconfig-size) -* [Input and output size limits](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=eninput-limit) -* [Batch input limited to data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html?context=cdpaas&locale=enbatch-input) - - - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_28,AAE40F1CC335A650C1EB806E404394DA596FB433," Single pipeline limits - -These limitation apply to a single pipeline, regardless of configuration. - - - -* Any single pipeline cannot contain more than 120 standard nodes -* Any pipeline with a loop cannot contain more than 600 nodes across all iterations (for example, 60 iterations - 10 nodes each) - - - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_29,AAE40F1CC335A650C1EB806E404394DA596FB433," Limitations by configuration size - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_30,AAE40F1CC335A650C1EB806E404394DA596FB433," Small configuration - -A SMALL configuration supports 600 standard nodes (across all active pipelines) or 300 nodes run in a loop. For example: - - - -* 30 standard pipelines with 20 nodes run in parallel = 600 standard nodes -* 3 pipelines containing a loop with 10 iterations and 10 nodes in each iteration = 300 nodes in a loop - - - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_31,AAE40F1CC335A650C1EB806E404394DA596FB433," Medium configuration - -A MEDIUM configuration supports 1200 standard nodes (across all active pipelines) or 600 nodes run in a loop. For example: - - - -* 30 standard pipelines with 40 nodes run in parallel = 1200 standard nodes -* 6 pipelines containing a loop with 10 iterations and 10 nodes in each iteration = 600 nodes in a loop - - - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_32,AAE40F1CC335A650C1EB806E404394DA596FB433," Large configuration - -A LARGE configuration supports 4800 standard nodes (across all active pipelines) or 2400 nodes run in a loop. For example: - - - -* 80 standard pipelines with 60 nodes run in parallel = 4800 standard nodes -* 24 pipelines containing a loop with 10 iterations and 10 nodes in each iteration = 2400 nodes in a loop - - - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_33,AAE40F1CC335A650C1EB806E404394DA596FB433," Input and output size limits - -Input and output values, which include pipeline parameters, user variables, and generic node inputs and outputs, cannot exceed 10 KB of data. - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_34,AAE40F1CC335A650C1EB806E404394DA596FB433," Batch input limited to data assets - -Currently, input for batch deployment jobs is limited to data assets. This means that certain types of deployments, which require JSON input or multiple files as input, are not supported. For example, SPSS models and Decision Optimization solutions that require multiple files as input are not supported. - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_35,AAE40F1CC335A650C1EB806E404394DA596FB433," Issues with Cloud Object Storage - -These issue apply to working with Cloud Object Storage. - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_36,AAE40F1CC335A650C1EB806E404394DA596FB433," Issues with Cloud Object Storage when Key Protect is enabled - -Key Protect in conjunction with Cloud Object Storage is not supported for working with Watson Machine Learning assets. If you are using Key Protect, you might encounter these issues when you are working with assets in Watson Studio. - - - -* Training or saving these Watson Machine Learning assets might fail: - - - -* Auto AI -* Federated Learning -* Watson Pipelines - - - -* You might be unable to save an SPSS model or a notebook model to a project - - - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_37,AAE40F1CC335A650C1EB806E404394DA596FB433," Issues with watsonx.governance - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_38,AAE40F1CC335A650C1EB806E404394DA596FB433," Delay showing prompt template deployment data in a factsheet - -When a deployment is created for a prompt template, the facts for the deployment are not added to factsheet immediately. You must first evaluate the deployment or view the lifecycle tracking page to add the facts to the factsheet. - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_39,AAE40F1CC335A650C1EB806E404394DA596FB433," Display issues for existing Factsheet users - -If you previously used factsheets with IBM Knowledge Catalog and you create a new AI use case in watsonx.governance, you might see some display issues, such as duplicate Risk level fields in the General information and Details section of the AI use case interface. - -To resolve display problems, update the model_entry_user asset type definition. For details on updating a use case programmatically, see [Customizing details for a use case or factsheet](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-customize-user-facts.html). - -" -AAE40F1CC335A650C1EB806E404394DA596FB433_40,AAE40F1CC335A650C1EB806E404394DA596FB433," Redundant attachment links in factsheet - -A factsheet tracks all of the events for an asset over all phases of the lifecycle. Attachments show up in each stage, creating some redundancy in the factsheet. -" -E5EA38444D60150C0FD2EB498BF33793DDE5FED2_0,E5EA38444D60150C0FD2EB498BF33793DDE5FED2," Language support for the product and the documentation - -IBM watsonx is translated into multiple languages. - -" -E5EA38444D60150C0FD2EB498BF33793DDE5FED2_1,E5EA38444D60150C0FD2EB498BF33793DDE5FED2," Supported languages - -The IBM watsonx user interface is translated into these languages: - - - -* Brazilian Portuguese -* Simplified Chinese -* Traditional Chinese -* French -* German -* Italian -* Japanese -* Korean -* Spanish -* Swedish - - - -The documentation is automatically translated into these languages: - - - -* Brazilian Portuguese -* Simplified Chinese -* French -* German -* Italian -* Japanese -* Korean -* Spanish - - - -IBM is not responsible for any damages or losses resulting from the use of automatically (machine) translated content. - -When the translated documentation is not as current as the English content, you see a message and have the option of switching to the English content. - -" -E5EA38444D60150C0FD2EB498BF33793DDE5FED2_2,E5EA38444D60150C0FD2EB498BF33793DDE5FED2," Changing languages - -To change the language for this documentation, scroll to the end of any documentation page, and select a language from the language selector. - -![Screen capture of the language switcher](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lang-switcher.png) - -To change the language for both the product user interface and this documentation, select a different language for your browser: - - - -* In the Google Chrome browser, you can change the language in the advanced settings. -* In the Mozilla Firefox browser, you can change the language in the general settings. - - - -" -E5EA38444D60150C0FD2EB498BF33793DDE5FED2_3,E5EA38444D60150C0FD2EB498BF33793DDE5FED2," Learn more - - - -* [Browser support](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/browser-support.html) - - - -Parent topic:[FAQ](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html) -" -F0DB4483C93A5D14DBF8076C4DD42D22A4F8542D_0,F0DB4483C93A5D14DBF8076C4DD42D22A4F8542D," Notices - -These notices apply to the watsonx platform. - -The Offering includes some or all of the following that IBM provides under the [SIL Open Font License 1.1](https://opensource.org/license/openfont-html/): - - - -* AMSFONTS -* AMSFONTS (matplotlib) -* CARLOGO (matplotlib) -* CMMI9 (libtasn1) -* cvxopt 1.3.0 -* FONTS (harfbuzz) -* FONTS (pillow) -* FONT AWESOME (Apache ORC) -* FONT AWESOME - FONT (bazel) -* FONT AWESOME 4.2.0 (arrow) -* FONT-AWESOME-IE7.MIN.CSS (Jetty) -* FONT AWESOME (nbconvert) -* FONTAWESOME-FONTS -* HELVETICA-NEUE -* READLINE.PS (Readline) -* FONT-AWESOME (Notebook) -* FONTAWESOME -* FONTAWESOME (Tables) -* FONT AWESOME FONTS -* FONTAWESOME (FONT) (JupyterLab) -* Font-Awesome v4.6.3 -* Font-Awesome v4.3.0 -* Font-Awesome v4.7.0 -* handsontable v0.25.1 -* minio 7.1.7 -* IBM PLEX TYPEFACE (carbon-components) -* nbconvert v5.2.1 -* nbconvert v5.1.1 -* nbconvert 6.4.4 -* nbconvert 6.5.0 -* nbdime 3.1.1 -* NotoNastaliqUrdu-Regular.ttf (pillow) -* NOTO-FONTS (pillow) -* QTAWESOME-FONTS (qtawesome) -* qtawesome v3.3.0 -* READLINE.PS (Readline) -* RLUSERMAN.PS (Readline) -* STIX FONT (matplotlib) - - - -The Offering includes some or all of the following that IBM provides under the [UBUNTU FONT LICENCE Version 1.0](https://ubuntu.com/legal/font-licence): - - - -* Font_license (Werkzeug) - - - -" -F0DB4483C93A5D14DBF8076C4DD42D22A4F8542D_1,F0DB4483C93A5D14DBF8076C4DD42D22A4F8542D," Learn more - -[Foundation model use terms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-disclaimer.html) -" -E88EDBB9A31F8B7C70FB3BA48136D9C3CD6767AC_0,E88EDBB9A31F8B7C70FB3BA48136D9C3CD6767AC," Overview of IBM watsonx as a Service - -IBM watsonx.ai is a studio of integrated tools for working with generative AI capabilities that are powered by foundation models and for building machine learning models. The IBM watsonx.ai component provides a secure and collaborative environment where you can access your organization's trusted data, automate AI processes, and deliver AI in your applications. The IBM watsonx.governance component provides end-to-end monitoring for machine learning and generative AI models to accelerate responsible, transparent, and explainable AI workflows. - -Watch this short video that introduces watsonx.ai. - -Looking for watsonx.data? Go to [IBM watsonx.data documentation](https://cloud.ibm.com/docs/watsonxdata?topic=watsonxdata-getting-started). - -You can accomplish the following goals with watsonx: - - - -* Build machine learning models -Build models by using open source frameworks and code-based, automated, or visual data science tools. -* Experiment with foundation models -Test prompts to generate, classify, summarize, or extract content from your input text. Choose from IBM models or open source models from Hugging Face. -* Manage the AI lifecycle -Manage and automate the full AI model lifecycle with all the integrated tools and runtimes to train, validate, and deploy AI models. -* Govern AI -Track and document the detailed history of AI models to help ensure compliance. - - - -Watsonx.ai provides these tools for working with data and models: - - - -Tools for working with data and models - - What you can use What you can do Best to use when - - [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) Access and refine data from diverse data source connections.

Materialize the resulting data sets as snapshots in time that might combine, join, or filter data for other data scientists to analyze and explore. You need to visualize the data when you want to shape or cleanse it.

You want to simplify the process of preparing large amounts of raw data for analysis. -" -E88EDBB9A31F8B7C70FB3BA48136D9C3CD6767AC_1,E88EDBB9A31F8B7C70FB3BA48136D9C3CD6767AC," [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) Experiment with IBM and open source foundation models by inputting prompts. You want to engineer prompts for your generative AI solution. - [Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html) Tailor the output that a foundation model returns to better meet your needs. You want to adjust foundation model outputs for use in your generative AI solution. - [AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) Use AutoAI to automatically select algorithms, engineer features, generate pipeline candidates, and train machine learning model pipeline candidates.

Then, evaluate the ranked pipelines and save the best as models.

Deploy the trained models to a space, or export the model training pipeline that you like from AutoAI into a notebook to refine it. You want an advanced and automated way to build a good set of training pipelines and machine learning models quickly.

You want to be able to export the generated pipelines to refine them. - [Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html) Prompt foundation models with the Python library.

Use notebooks and scripts to write your own feature engineering, model training, and evaluation code in Python or R. Use training data sets that are available in the project, or connections to data sources such as databases, data lakes, or object storage.

Code with your favorite open source frameworks and libraries. You want to use Python or R coding skills to have full control over the code that you use to work with models. -" -E88EDBB9A31F8B7C70FB3BA48136D9C3CD6767AC_2,E88EDBB9A31F8B7C70FB3BA48136D9C3CD6767AC," [SPSS Modeler flows](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html) Use SPSS Modeler flows to create your own machine learning model training, evaluation, and scoring flows. Use training data sets that are available in the project, or connections to data sources such as databases, data lakes, or object storage. You want a simple way to explore data and define machine learning model training, evaluation, and scoring flows. - [RStudio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-overview.html) Analyze data and build and test machine learning models by working with R in RStudio. You want to use a development environment to work in R. - [Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html) Prepare data, import models, solve problems and compare scenarios, visualize data, find solutions, produce reports, and save models to deploy with Watson Machine Learning. You need to evaluate millions of possibilities to find the best solution to a prescriptive analytics problem. - [Federated learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html) Train a common machine learning model that uses distributed data. You need to train a machine learning model without moving, combining, or sharing data that is distributed across multiple locations. - [Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) Use pipelines to create repeatable and scheduled flows that automate notebook, Data Refinery, and machine learning pipelines, from data ingestion to model training, testing, and deployment. You want to automate some or all of the steps in an MLOps flow. -" -E88EDBB9A31F8B7C70FB3BA48136D9C3CD6767AC_3,E88EDBB9A31F8B7C70FB3BA48136D9C3CD6767AC," [Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html) Generate synthetic tabular data based on production data or a custom data schema using visual flows and modeling algorithms. You want to mask or mimic production data or you want to generate synthetic data from a custom data schema. - - - -Watsonx.governance provides these tools for governing models. - - - -Tools for governing models - - What you can use What you can do Best to use when - - [Factsheets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-create-use-case.html) View model lifecycle status, general model and deployment details, training information and metrics, and deployment metrics. You want to make sure that your model is compliant and performing as expected. - [Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/getting-started.html) Monitor model output and explain model predictions. You need to keep your models fair and be able to explain model predictions. - - - -" -E88EDBB9A31F8B7C70FB3BA48136D9C3CD6767AC_4,E88EDBB9A31F8B7C70FB3BA48136D9C3CD6767AC," Security and privacy of your data and models - -Your work on watsonx, including your data and the models that you create, are private to your account: - - - -* Your data is accessible only by you. Your data is used to train only your models. Your data will never be accessible or used by IBM or any other person or organization. Your data is stored in dedicated storage buckets from your IBM Cloud Object Storage service instance. Data is encrypted at rest and in motion. -* The models that you create are accessible only by you. Your models will never be accessible or used by IBM or any other person or organization. Your models are secured in the same way as your data. - - - -Learn more about security and your options: - - - -* [Security and privacy of foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html) -* [Data security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html) -* [Security of watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html) - - - -" -E88EDBB9A31F8B7C70FB3BA48136D9C3CD6767AC_5,E88EDBB9A31F8B7C70FB3BA48136D9C3CD6767AC," Underlying architecture - -Watsonx includes the following functionality as the secure and scalable foundation for your organization to collaborate efficiently: - - - -* Software and hardware -Watsonx is fully managed by IBM on IBM Cloud. Software updates are automatic. Scaling of compute resources and storage is automatic. -* Storage -A IBM Cloud Object Storage service instance is automatically provisioned for you to provide storage. -* Compute resources -You can choose the appropriate runtime for your jobs. Compute resource usage is billed based on the rate for the runtime environment and its active duration. -* Security, compliance, and isolation -The data security, network security, security standards compliance, and isolation of watsonx are managed by IBM Cloud. You can set up extra security and encryption options. -* User management -You add users and user groups and manage their account roles and permissions with IBM Cloud Identity and Access Management. You assign roles within each collaborative workspace across the platform. -* Global search -You can search for assets across the platform. -* Shared connections to data sources -You can share connections with others across the platform in the Platform assets catalog. -* Samples -You can experiment with IBM-curated sample data sets, notebooks, projects, and models. - - - -Watsonx.ai on the watsonx platform includes the Watson Studio, Watson Machine Learning, and IBM Cloud Object Storage services. Watsonx.governance on the watsonx platform includes the watsonx.governance service. - -" -E88EDBB9A31F8B7C70FB3BA48136D9C3CD6767AC_6,E88EDBB9A31F8B7C70FB3BA48136D9C3CD6767AC," Workspaces and assets - -Watsonx is organized as a set of collaborative workspaces where you can work with your team or organization. Each workspace has a set of members with roles that provide permissions to perform actions. Most users work with assets, which are the items that users add to the platform. Data assets contain metadata that represents data, while assets that you create in tools, such as models, run code to work with data. You build assets in projects, and manage the deployment of completed assets in deployment spaces. - -" -E88EDBB9A31F8B7C70FB3BA48136D9C3CD6767AC_7,E88EDBB9A31F8B7C70FB3BA48136D9C3CD6767AC," Projects and tools - -Projects are where your data science and model builder teams work with data to create assets, such as, saved prompts, notebooks, models, or pipelines. Your first project, which is known as your sandbox project, is created automatically when you sign up for watsonx.ai. - -The following image shows what the Overview page of a project might look like. - -![Overview page for a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-project-overview.png) - -" -E88EDBB9A31F8B7C70FB3BA48136D9C3CD6767AC_8,E88EDBB9A31F8B7C70FB3BA48136D9C3CD6767AC," Deployment spaces - -Deployment spaces are where your ModelOps team deploys models and other deployable assets to production and then tests and manages deployments in production. After you build models and deployable assets in projects, you promote them to deployment spaces. - -The following image shows what the Overview page of a deployment space might look like. - -![Overview page for a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-deployment-overview.png) - -" -E88EDBB9A31F8B7C70FB3BA48136D9C3CD6767AC_9,E88EDBB9A31F8B7C70FB3BA48136D9C3CD6767AC," Samples - -The platform includes an integrated collection of samples that provides models, data assets, prompts, notebooks, and sample projects. Sample notebooks provide examples of data science and machine learning code. Sample projects contain sets of data, models, other assets, and detailed instructions on how to solve a particular business problem. - -The following image shows what Samples looks like. - -![Samples page](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/wx-samples.png) - - - -* See a tour of the samples collection - - - -" -E88EDBB9A31F8B7C70FB3BA48136D9C3CD6767AC_10,E88EDBB9A31F8B7C70FB3BA48136D9C3CD6767AC," Learn more - - - -* [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -* [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html) -* [AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) -* [Your sandbox project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/sandbox.html) -* [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html) -* [Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html) -* [Foundation models Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html) -* [Comparison of IBM watsonx as a Service and Cloud Pak for Data as a Service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html) -" -64057AA641F9259654E5F08D996209EF8027A3AF_0,64057AA641F9259654E5F08D996209EF8027A3AF," Switching between the IBM watsonx as a Service and Cloud Pak for Data as a Service platforms - -If you are a Cloud Pak for Data as a Service user, you have access to IBM watsonx as a Service and you can switch between the two platforms. - -Important:Foundation model inferencing and the Prompt Lab tool to work with foundation models are available only in the Dallas and Frankfurt regions. Your Watson Studio and Watson Machine Learning service instances are shared between watsonx and Cloud Pak for Data as a Service. If your Watson Studio and Watson Machine Learning service instances are provisioned in another region, you can't use foundation model inferencing or the Prompt Lab. - -If you signed up for watsonx only, you can't switch to Cloud Pak for Data as a Service and you don't have a Switch platform option. To switch to Cloud Pak for Data as a Service, you must sign up for it. - -To switch between platforms: - - - -1. Log in to either IBM watsonx as a Service or Cloud Pak for Data as a Service. Your region must be Dallas. -2. On the platform home page, click the Switch platform icon (![switch platform icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/Platform-switcher-icon.svg)) next to your avatar, and select the platform. - - - -" -64057AA641F9259654E5F08D996209EF8027A3AF_1,64057AA641F9259654E5F08D996209EF8027A3AF," Service instances and resource consumption - -When you switch platforms, you continue using the same Watson Studio and Watson Machine Learning service instances. - -The resources that you consume for each of these service instances is cumulative. For example, suppose you use 3 CUH for Watson Studio on Cloud Pak for Data as a Service in the first half of July. Then, you switch to watsonx and use 3 CUH for Watson Studio in the second half of July. Your total CUH for the Watson Studio service for July is 6 CUH. - -" -64057AA641F9259654E5F08D996209EF8027A3AF_2,64057AA641F9259654E5F08D996209EF8027A3AF," Switch projects and deployment spaces between platforms - -You can switch a project or a deployment space from one platform to the other if that project or space meets the requirements and restrictions. See [Switching the platform for a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html) and [Switching the platform for a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/switch-platform-space.html). - -" -64057AA641F9259654E5F08D996209EF8027A3AF_3,64057AA641F9259654E5F08D996209EF8027A3AF," Platform assets catalog - -You share a single Platform assets catalog between the two platforms and any previously or newly added connection assets in your Platform assets catalog are available on both platforms. However, if you add other types of assets to the Platform assets catalog on Cloud Pak for Data as a Service, you can't access those types of assets on watsonx. - -" -64057AA641F9259654E5F08D996209EF8027A3AF_4,64057AA641F9259654E5F08D996209EF8027A3AF," Notifications - -Your notifications are specific to each platform. - -" -64057AA641F9259654E5F08D996209EF8027A3AF_5,64057AA641F9259654E5F08D996209EF8027A3AF," Learn more - - - -* [Switching the platform for a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html) -* [Switching the platform for a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/switch-platform-space.html) -* [Comparison of IBM watsonx as a Service and Cloud Pak for Data as a Service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html) - - - -Parent topic:[Getting started with IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-wdp.html) -" -E3526B694C68C40EDC206E216B454E63B83F3EBA_0,E3526B694C68C40EDC206E216B454E63B83F3EBA," Asset contents or previews - -In projects and other workspaces, you can see a preview of data assets that contain relational data. - - - -* [Requirements and restrictions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=enrequire) -* [Previews of data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=endata) - - - -" -E3526B694C68C40EDC206E216B454E63B83F3EBA_1,E3526B694C68C40EDC206E216B454E63B83F3EBA," Requirements and restrictions - -You can view the contents or previews of assets under the following conditions and restrictions. - - - -* Workspaces -You can view the preview or contents of assets in these workspaces: - - - -* Projects -* Deployment spaces - - - - - - - -* Types of assets - - - -* Data assets from files -* Connected data assets -* Models -* Notebooks - - - - - - - -* Required permissions -To see the asset contents or preview, these conditions must be true: - - - -* You have any collaborator role in the workspace. - - - - - - - -* Restrictions for data assets - -Additional requirements apply to connected data assets and data assets from files. See [Requirements for data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=enrequire-data). Previews are not available for data assets that were added as managed assets by using the [Watson Data API](https://cloud.ibm.com/apidocs/watson-data-apicreateattachmentnewv2). - - - -" -E3526B694C68C40EDC206E216B454E63B83F3EBA_2,E3526B694C68C40EDC206E216B454E63B83F3EBA," Previews of data assets - -The previews of data assets show a view of the data. - -You can see when the data in the preview was last fetched and refresh the preview data by clicking the refresh icon. - - - -* [Requirements for data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=enrequire-data) -* [Preview information for data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=enpreview-info) -* [File extensions and mime types of previewed files](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=enfiles) - - - -" -E3526B694C68C40EDC206E216B454E63B83F3EBA_3,E3526B694C68C40EDC206E216B454E63B83F3EBA," Requirements for data assets - -The additional requirements for viewing previews of data assets depend on whether the data is accessed through a connection or from a file. - -" -E3526B694C68C40EDC206E216B454E63B83F3EBA_4,E3526B694C68C40EDC206E216B454E63B83F3EBA," Connected data assets - -You can see previews of data assets that are accessed through a connection if all these conditions are true: - - - -* You have access to the data asset and its associated connection. See [Requirements and restrictions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=enrequire). -* The data asset contains structured data. Structured data resides in fixed fields within a record or file, for example, relational database data or spreadsheets. -* You have credentials for the connection: - - - -* For connections with shared credentials, the username in the connection details has access to the object at the data source. -* For connections with personal credentials, you must enter your personal credentials when you see a key icon (![the key symbol for private connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/privatekey.png)). This is a one-time step that permanently unlocks the connection for you. See [Adding connections to projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - - - - - -" -E3526B694C68C40EDC206E216B454E63B83F3EBA_5,E3526B694C68C40EDC206E216B454E63B83F3EBA," Data assets from files - -You can see previews of data assets from files if the following conditions are true: - - - -* You have access to the data asset. See [Requirements and restrictions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=enrequire). -* The file is stored in IBM Cloud Object Storage. For preview of text or image files from an IBM Cloud Object Storage connection to work, the connection credentials must include an access key and a secret key. If you’re using an existing Cloud Object Storage connection that doesn’t have these keys, edit the connection asset and add them. See [IBM Cloud Object Storage connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html). -* The file type is supported. See [File extensions and mime types of previewed files](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html?context=cdpaas&locale=enfiles). - - - -" -E3526B694C68C40EDC206E216B454E63B83F3EBA_6,E3526B694C68C40EDC206E216B454E63B83F3EBA," Preview information for data assets - -For structured data, the preview displays a limited number of rows and columns: - - - -* The number of rows in the preview is limited to 1,000. -* The amount of data is limited to 800 KB. The more columns the data asset has, the fewer rows that appear in the preview. - - - -Previews show different information for different types of data assets and files. - -" -E3526B694C68C40EDC206E216B454E63B83F3EBA_7,E3526B694C68C40EDC206E216B454E63B83F3EBA," Structured data - -For structured data, the preview shows column names, data types, and a subset of columns and rows of data. The supported formats of structured data area: Relational data, CSV, TSV, Avro, partitioned data, and Parquet (projects). - -Assets from file based connections like Apache Kafka and Apache Cassandra are not supported. - -" -E3526B694C68C40EDC206E216B454E63B83F3EBA_8,E3526B694C68C40EDC206E216B454E63B83F3EBA," Unstructured data - -Unstructured data files must be stored in IBM Cloud Object Storage to have previews. - -For these unstructured data files, the preview shows the whole document: Text, JSON, HTML, PDF, images, and Microsoft Excel documents. HTML files are supported in text format. Images stored in IBM Cloud Object Storage support JPG, JPEG, PNG, GIF, BMP, and BMP1. Microsoft Excel document previews show the first sheet. - -For connected folder assets, the preview shows the files and subfolders, which you can also preview. - -" -E3526B694C68C40EDC206E216B454E63B83F3EBA_9,E3526B694C68C40EDC206E216B454E63B83F3EBA," File extensions and mime types of previewed files - -These types of files that contain structured data have previews: - - - -Structured data files - - Extension Mime type - - AVRO - CSV text/csv - CSV1 application/csv - JSON application/json - PARQ - TSV - TXT text/plain - XLSX application/vnd.openxmlformats-officedocument.spreadsheetml.sheet - XLS application/vnd.ms-excel - XLSM application/vnd.ms-excel.sheet.macroEnabled.12 - - - -These types of image files have previews: - - - -Image files - - Extension Mime type - - BMP image/bmp - GIF image/gif - JPG image/jpeg - JPEG image/jpeg - PNG image/png - - - -These types of document files have previews: - - - -Document files - - Extension Mime type - - HTML text/html - PDF application/pdf - TXT text/plain - - - -" -E3526B694C68C40EDC206E216B454E63B83F3EBA_10,E3526B694C68C40EDC206E216B454E63B83F3EBA," Learn more - - - -* [Searching for assets across the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html) -* [Profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html) -* [Activities](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/asset-activities.html) -* [Visualizations](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/visualizations.html) - - - -Parent topic:[Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html) -" -AA213D259727545C26401AD5CFB4916B6EFBD18D_0,AA213D259727545C26401AD5CFB4916B6EFBD18D," Profiles of data assets - -An asset profile includes generated information and statistics about the asset content. You can see the profile on an asset's Profile page. - - - -* [Requirements and restrictions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html?context=cdpaas&locale=enprereqs) -* [Creating a profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html?context=cdpaas&locale=encreate-profile) -* [Profile information](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html?context=cdpaas&locale=enprofile-results) - - - -" -AA213D259727545C26401AD5CFB4916B6EFBD18D_1,AA213D259727545C26401AD5CFB4916B6EFBD18D," Requirements and restrictions - -You can view the profile of assets under the following circumstances. - -" -AA213D259727545C26401AD5CFB4916B6EFBD18D_2,AA213D259727545C26401AD5CFB4916B6EFBD18D,"Required permissions -: To view a data asset's Profile page, you can have any role in a project or catalog. : To create or update a profile, you must have the Admin or Editor role in the project or catalog. - -" -AA213D259727545C26401AD5CFB4916B6EFBD18D_3,AA213D259727545C26401AD5CFB4916B6EFBD18D,"Workspaces -: You can view the asset profile in projects. - -" -AA213D259727545C26401AD5CFB4916B6EFBD18D_4,AA213D259727545C26401AD5CFB4916B6EFBD18D,"Types of assets -: These types of assets have a profile: - - - -* Data assets from relational or nonrelational databases from a connection to the data sources, except Cloudant -* Data assets from partitioned data sets, where a partitioned data set consists of multiple files and is represented by a single folder uploaded from the local file system or from file-based connections to the data sources -* Data assets from files uploaded from the local file system or from file-based connections to the data sources, with these formats: - - - -* CSV -* XLS, XLSM, XLSX (Only the first sheet in a workbook is profiled.) -* TSV -* Avro -* Parquet - - - -However, structured data files are not profiled when data assets do not explicitly reference them, such as in these circumstances: - - - -* The files are within a connected folder asset. Files that are accessible from a connected folder asset are not treated as assets and are not profiled. -* The files are within an archive file. The archive file is referenced by the data asset and the compressed files are not profiled. - - - - - -" -AA213D259727545C26401AD5CFB4916B6EFBD18D_5,AA213D259727545C26401AD5CFB4916B6EFBD18D," Creating a profile - -In projects, you can create a profile for a data asset by clicking Create profile. You can update an existing profile when the data changes. - -" -AA213D259727545C26401AD5CFB4916B6EFBD18D_6,AA213D259727545C26401AD5CFB4916B6EFBD18D," Profiling results - -When you create or update an asset profile, the columns in the data asset are analyzed. By default, the profile is created based on the first 5,000 rows of data. If the data asset has more than 250 columns, the profile is created based on the first 1,000 rows of data. - -The profile of a data asset shows information about each column in the data set: - - - -* When was the profile created or last updated. -* How many columns and rows were analyzed. -* The data types for columns and data types distribution. -* The data formats for columns and formats distribution. -* The percentage of matching, mismatching, or missing data for each column. -* The frequency distribution for all values identified in a column. -* Statistics about the data for each column: - - - -* The number of distinct values indicates how many different values exist in the sampled data for the column. -* The percentage of unique values indicates the percentage of distinct values that appear only once in the column. -* The minimum, maximum, or mean, and sometimes the standard deviation in that column. Depending on a column’s data format, the statistics vary slightly. For example, statistics for a column of data type integer have minimum, maximum, and mean values and a standard deviation value while statistics for a column of data type string have minimum length, maximum length, and mean length values. - - - - - -Parent topic:[Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html) -" -B7BAFAD14D0BF628C14FE0821AF01AEE98A0AE62_0,B7BAFAD14D0BF628C14FE0821AF01AEE98A0AE62," Creating a project - -You create a project to collaborate with your team on working with data and other resources to achieve a particular goal, such as building a model. - -Your [sandbox project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/sandbox.html) is created automatically when you sign up for watsonx.ai. - -You can create an empty project, start from a sample project that provides sample data and other assets, or import a previously exported project. See [Importing a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/import-project.html). The number of projects you can create per data center is 100. - -Your project resources can include data, collaborators, tools, assets that run code, like notebooks and models, and other types of assets. - - - -* [Requirements](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html?context=cdpaas&locale=enrequirements) -* [Create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html?context=cdpaas&locale=encreate-a-project) - - - -" -B7BAFAD14D0BF628C14FE0821AF01AEE98A0AE62_1,B7BAFAD14D0BF628C14FE0821AF01AEE98A0AE62," Requirements and restrictions - -Before you create a project, understand the requirements for storage and the project name. - -Storage requirement : You must associate an [IBM Cloud Object Storage instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html) with your project to store assets. Each project has a separate bucket to hold the project's assets. If you are not an administrator for the IBM Cloud Object Storage instance, it must be [configured to allow project creation](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html). When a new project is created, the Cloud Object Storage bucket defaults to Regional resiliency. Regional buckets distribute data across several data centers that are within the same metropolitan area. If one of these data centers suffers an outage or destruction, availability and performance are not affected. - -Project name requirements : Your project name must follow these requirements: : - Must be unique in the account. : - Must contain 1 - 255 characters. : - Can't contain these characters: % \ : - Can't contain leading or trailing underscores (_). : - Can't contain leading or trailing spaces. Leading or trailing spaces are automatically truncated. - -" -B7BAFAD14D0BF628C14FE0821AF01AEE98A0AE62_2,B7BAFAD14D0BF628C14FE0821AF01AEE98A0AE62," Creating a project - -To create a project: - - - -1. Choose Projects > View all projects from the navigation menu and click New project. -2. Choose whether to create an empty project or to create a project based on an exported project file or a sample project. -3. If you chose to create a project from a file or a sample, upload a project file or select a sample project. See [Importing a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/import-project.html). -4. If you chose to create a new project, add a name on the New project screen. -5. You can [mark the project as sensitive](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/mark-sensitive.html). The project has a sensitive tag and project collaborators can't move data assets out of the project. You cannot change this setting after the project is created. -6. Choose an existing [object storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html) or create a new one. -7. Click Create. You can start adding resources to your project. - - - -The object storage bucket name for the project is based on the project name without spaces or nonalphanumberic characters plus a unique identifier. - -Watch this video to see how to create both an empty project, imported project, and a project from a sample. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -B7BAFAD14D0BF628C14FE0821AF01AEE98A0AE62_3,B7BAFAD14D0BF628C14FE0821AF01AEE98A0AE62," Next steps - - - -* [Add collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html) -* [Add data](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html) - - - -" -B7BAFAD14D0BF628C14FE0821AF01AEE98A0AE62_4,B7BAFAD14D0BF628C14FE0821AF01AEE98A0AE62," Learn more - - - -* [Object storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html) -* [Importing a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/import-project.html) -* [Troubleshooting Cloud Object Storage for projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-cos.html) - - - -Parent topic:[Projects ](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html) -" -C32FE380CF3083B6D85554063B5ACB153FC1C8BE_0,C32FE380CF3083B6D85554063B5ACB153FC1C8BE," Quick start tutorials - -Take quick start tutorials to learn how to perform specific tasks, such as refine data or build a model. These tutorials help you quickly learn how to do a specific task or set of related tasks. - -The quick start tutorials are categorized by task: - - - -* [Preparing data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html?context=cdpaas&locale=enprepare) -* [Analyzing and visualizing data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html?context=cdpaas&locale=enanalyze) -* [Building, deploying, and trusting models](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html?context=cdpaas&locale=enbuild) -* [Working with foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html?context=cdpaas&locale=enprompt) - - - -Each tutorial requires one or more service instances. Some services are included in multiple tutorials. The tutorials are grouped by task. You can start with any task. Each of these tutorials provides a description of the tool, a video, the instructions, and additional learning resources. - -The tags for each tutorial describe the level of expertise ( - -Beginner - -, - -Intermediate - -, or - -Advanced - -), and the amount of coding required ( - -No code - -, - -Low code - -, or - -All code - -). - -After completing these tutorials, see the [Other learning resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html?context=cdpaas&locale=enresources) section to continue your learning. - -" -C32FE380CF3083B6D85554063B5ACB153FC1C8BE_1,C32FE380CF3083B6D85554063B5ACB153FC1C8BE," Preparing data - -To get started with preparing, transforming, and integrating data, understand the overall workflow, choose a tutorial, and check out other learning resources for working on the platform. - -Your data preparation workflow has these basic steps: - - - -1. Create a project. -2. If necessary, create the service instance that provides the tool you want to use and associate it with the project. -3. Add data to your project. You can add data files from your local system, data from a remote data source that you connect to, data from a catalog, or sample data. -4. Choose a tool to analyze your data. Each of the tutorials describes a tool. -5. Run or schedule a job to prepare your data. - - - -" -C32FE380CF3083B6D85554063B5ACB153FC1C8BE_2,C32FE380CF3083B6D85554063B5ACB153FC1C8BE," Tutorials for preparing data - -Each of these tutorials provides a description of the tool, a video, the instructions, and additional learning resources: - - - - Tutorial Description Expertise for tutorial - - [Refine and visualize data with Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html) Prepare and visualize tabular data with a graphical flow editor. Select operations to manipulate data.

Beginner

No code - [Generate synthetic tabular data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html) Generate synthetic tabular data using a graphical flow editor. Select operations to generate data.

Beginner

No code - - - -" -C32FE380CF3083B6D85554063B5ACB153FC1C8BE_3,C32FE380CF3083B6D85554063B5ACB153FC1C8BE," Analyzing and visualizing data - -To get started with analyzing and visualizing data, understand the overall workflow, choose a tutorial, and check out other learning resources for working with other tools. - -Your analyzing and visualizing data workflow has these basic steps: - - - -1. Create a project. -2. If necessary, create the service instance that provides the tool you want to use and associate it with the project. -3. Add data to your project. You can add data files from your local system, data from a remote data source that you connect to, data from a catalog, or sample data. -4. Choose a tool to analyze your data. Each of the tutorials describes a tool. - - - -" -C32FE380CF3083B6D85554063B5ACB153FC1C8BE_4,C32FE380CF3083B6D85554063B5ACB153FC1C8BE," Tutorials for analyzing and visualizing data - -Each of these tutorials provides a description of the tool, a video, the instructions, and additional learning resources: - - - - Tutorial Description Expertise for tutorial - - [Analyze data in a Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html) Load data, run, and share a notebook. Understand generated Python code.

Intermediate

All code - [Refine and visualize data with Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html) Prepare and visualize tabular data with a graphical flow editor. Select operations to manipulate data.

Beginner

No code - - - -" -C32FE380CF3083B6D85554063B5ACB153FC1C8BE_5,C32FE380CF3083B6D85554063B5ACB153FC1C8BE," Building, deploying, and trusting models - -To get started with building, deploying, and trusting models, understand the overall workflow, choose a tutorial, and check out other learning resources for working on the platform. - -The model workflow has three main steps: build a model asset, deploy the model, and build trust in the model. - -![Overview of model workflow](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ml-engineer-overview-wx.svg) - -" -C32FE380CF3083B6D85554063B5ACB153FC1C8BE_6,C32FE380CF3083B6D85554063B5ACB153FC1C8BE," Tutorials for building, deploying, and trusting models - -Each tutorial provides a description of the tool, a video, the instructions, and additional learning resources: - - - - Tutorial Description Expertise for tutorial - - [Build and deploy a machine learning model with AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html) Automatically build model candidates with the AutoAI tool. Build, deploy, and test a model without coding.

Beginner

No code - [Build and deploy a machine learning model in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-notebook.html) Build a model by updating and running a notebook that uses Python code and the Watson Machine Learning APIs. Build, deploy, and test a scikit-learn model that uses Python code.

Intermediate

All code - [Build and deploy a machine learning model with SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build-spss.html) Build a C5.0 model that uses the SPSS Modeler tool. Drop data and operation nodes on a canvas and select properties.

Beginner

No code - [Build and deploy a Decision Optimization model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html) Automatically build scenarios with the Modeling Assistant. Solve and explore scenarios, then deploy and test a model without coding.

Intermediate

No code -" -C32FE380CF3083B6D85554063B5ACB153FC1C8BE_7,C32FE380CF3083B6D85554063B5ACB153FC1C8BE," [Automate the lifecycle for a model with pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-pipeline.html) Create and run a pipeline to automate building and deploying a machine learning model. Drop operation nodes on a canvas and select properties.

Beginner

No code - - - -" -C32FE380CF3083B6D85554063B5ACB153FC1C8BE_8,C32FE380CF3083B6D85554063B5ACB153FC1C8BE," Prompting foundation models - -To get started with prompting foundation models, understand the overall workflow, choose a tutorial, and check out other learning resources for working on the platform. - -Your prompt engineering workflow has these basic steps: - - - -1. Create a project. -2. If necessary, create the service instance that provides the tool you want to use and associate it with the project. -3. Choose a tool to prompt foundation models. Each of the tutorials describes a tool. -4. Save and share your best prompts. - - - -" -C32FE380CF3083B6D85554063B5ACB153FC1C8BE_9,C32FE380CF3083B6D85554063B5ACB153FC1C8BE," Tutorials for working with foundation models - -Each tutorial provides a description of the tool, a video, the instructions, and additional learning resources: - - - - Tutorial Description Expertise for tutorial - - [Prompt a foundation model using Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html) Experiment with prompting different foundation models, explore sample prompts, and save and share your best prompts. Prompt a model using Prompt Lab without coding.

Beginner

No code - [Prompt a foundation model with the retrieval-augmented generation pattern](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html) Prompt a foundation model by leveraging information in a knowledge base. Use the retrieval-augmented generation pattern in a Jupyter notebook that uses Python code.

Intermediate

All code - [Tune a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html) Tune a foundation model to enhance model performance. Use the Tuning Studio to tune a model without coding.

Intermediate

No code - [Evaluate and track a prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html) Evaluate a prompt template to measure the performance of foundation model and track the prompt template through its lifecycle. Use the evaluation tool and an AI use case to track the prompt template.

Beginner

No code - - - -" -C32FE380CF3083B6D85554063B5ACB153FC1C8BE_10,C32FE380CF3083B6D85554063B5ACB153FC1C8BE," Other learning resources - -" -C32FE380CF3083B6D85554063B5ACB153FC1C8BE_11,C32FE380CF3083B6D85554063B5ACB153FC1C8BE," Guided tutorials - -Access the [Build an AI model sample project](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/c6008d167803ef95c1b37da931604cac) to follow a guided tutorial in the Samples. After you create the sample project, the readme provides instructions: - - - -* Choose Explore and prepare data to remove anomalies in the data with Data Refinery. -* Choose Build a model in a notebook to build a model with Python code. -* Choose Build and deploy a model to automate building a model with the AutoAI tool. - - - -Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. - - - -* Watch a preview of the guided tutorial video series - - - -" -C32FE380CF3083B6D85554063B5ACB153FC1C8BE_12,C32FE380CF3083B6D85554063B5ACB153FC1C8BE," Documentation - -" -C32FE380CF3083B6D85554063B5ACB153FC1C8BE_13,C32FE380CF3083B6D85554063B5ACB153FC1C8BE," General - - - -* [Projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html) -* [Adding data to your project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html) - - - -" -C32FE380CF3083B6D85554063B5ACB153FC1C8BE_14,C32FE380CF3083B6D85554063B5ACB153FC1C8BE," Preparing data - - - -* [Refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) - - - -[Synthetic data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html) - -" -C32FE380CF3083B6D85554063B5ACB153FC1C8BE_15,C32FE380CF3083B6D85554063B5ACB153FC1C8BE," Analyzing and visualizing data - - - -* [Notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html) - - - -" -C32FE380CF3083B6D85554063B5ACB153FC1C8BE_16,C32FE380CF3083B6D85554063B5ACB153FC1C8BE," Building, deploying, and trusting models - - - -* [Analyzing data and building models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html) -* [Deploying and managing models](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html) - - - -" -C32FE380CF3083B6D85554063B5ACB153FC1C8BE_17,C32FE380CF3083B6D85554063B5ACB153FC1C8BE," Prompting a foundation model - - - -* [Retrieval-augmented generation pattern](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-rag.html) -* [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) -* [Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html) - - - -" -C32FE380CF3083B6D85554063B5ACB153FC1C8BE_18,C32FE380CF3083B6D85554063B5ACB153FC1C8BE," Videos - - - -* [A comprehensive set of videos](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html) that show many common tasks in watsonx. - - - -" -C32FE380CF3083B6D85554063B5ACB153FC1C8BE_19,C32FE380CF3083B6D85554063B5ACB153FC1C8BE," Samples - -Find sample data sets, projects, models, prompts, and notebooks in the Samples area to gain hands-on experience: - -![Notebook icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/notebook.svg)[Notebooks](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=notebook) that you can add to your project to get started analyzing data and building models. - -![Project icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/ibm-cloud--projects.svg)[Projects](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=project-template) that you can import containing notebooks, data sets, prompts, and other assets. - -![Data set icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/data--set.svg)[Data sets](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=dataset) that you can add to your project to refine, analyze, and build models. - -![Prompt icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/prompt.svg)[Prompts](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=example-prompt) that you can use in the Prompt Lab to prompt a foundation model. - -![Model icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/model.svg)[Foundation models](https://dataplatform.cloud.ibm.com/gallery?context=wx&format=foundation-model) that you can use in the Prompt Lab. - -" -C32FE380CF3083B6D85554063B5ACB153FC1C8BE_20,C32FE380CF3083B6D85554063B5ACB153FC1C8BE," Training - - - -* [Watson Studio Methodology](https://www.ibm.com/training/course/W7067G) is an IBM Training e-Learning course that provides an in-depth look at Watson Studio. -* [Take control of your data with Watson Studio](https://developer.ibm.com/learningpaths/get-started-watson-studio/) is a learning path that consists of step-by-step tutorials that explain the process of working with data using Watson Studio. - - - -Parent topic:[Getting started](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-wdp.html) -" -8D2B29253C00AE6A20730D0C9AD3284DC0FCABF5_0,8D2B29253C00AE6A20730D0C9AD3284DC0FCABF5," Regional availability for services and features - -IBM watsonx is deployed on the IBM Cloud multi-zone region network. The availability of services and features can vary across regional data centers. - -You can view the regional availability for every service in the [Services catalog](https://dataplatform.cloud.ibm.com/data/catalog?target=services&context=wx). - -" -8D2B29253C00AE6A20730D0C9AD3284DC0FCABF5_1,8D2B29253C00AE6A20730D0C9AD3284DC0FCABF5," Regional availability of the Watson Studio and Watson Machine Learning services - -Watsonx.ai includes the Watson Studio and Watson Machine Learning services to provide foundation and machine learning model tools. - -The Watson Studio and Watson Machine Learning services are available in the following regional data centers: - - - -* Dallas (us-south), in Texas US -* Frankfurt (eu-de), in Germany - - - -" -8D2B29253C00AE6A20730D0C9AD3284DC0FCABF5_2,8D2B29253C00AE6A20730D0C9AD3284DC0FCABF5," Regional availability of foundation models - -The following table shows the IBM Cloud data centers where each foundation model is available. A checkmark indicates that the model is hosted in the region. - - - -Table 1. IBM Cloud data center support - - Model name Dallas Frankfurt - - flan-t5-xl-3b ✓ - flan-t5-xxl-11b ✓ ✓ - flan-ul2-20b ✓ ✓ - gpt-neox-20b ✓ ✓ - granite-13b-chat-v2 ✓ ✓ - granite-13b-chat-v1 ✓ ✓ - granite-13b-instruct-v2 ✓ ✓ - granite-13b-instruct-v1 ✓ ✓ - llama-2-13b-chat ✓ ✓ - llama-2-70b-chat ✓ ✓ - mpt-7b-instruct2 ✓ ✓ - mt0-xxl-13b ✓ ✓ - starcoder-15.5b ✓ ✓ - - - -" -8D2B29253C00AE6A20730D0C9AD3284DC0FCABF5_3,8D2B29253C00AE6A20730D0C9AD3284DC0FCABF5," Tool and environment limitations for the Frankfurt region - - - -Table 2. Frankfurt regional limitations - - Service Limitation - - Watson Studio If you need a Spark runtime, you must use the [Spark environment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/jupyter-spark.html) in Watson Studio for the SPSS Modeler and notebook editor tools. - Watson Studio [Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html) is not supported. - Watson Studio [Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html) is not supported. - - - -" -8D2B29253C00AE6A20730D0C9AD3284DC0FCABF5_4,8D2B29253C00AE6A20730D0C9AD3284DC0FCABF5," Regional availability of watsonx.governance - -Watsonx.governance Lite and Essentials plans are available only in the Dallas region. - -" -8D2B29253C00AE6A20730D0C9AD3284DC0FCABF5_5,8D2B29253C00AE6A20730D0C9AD3284DC0FCABF5," Regional availability of Watson OpenScale - -Watson OpenScale legacy plans are available only in the Frankfurt region. - -" -8D2B29253C00AE6A20730D0C9AD3284DC0FCABF5_6,8D2B29253C00AE6A20730D0C9AD3284DC0FCABF5," Regional availability of the Cloud Object Storage service - -The region for the Cloud Object Storage service is Global. Cloud Object Storage buckets for workspaces are Regional buckets. For more information, see [IBM Cloud docs: Cloud Object Storage endpoints and storage locations](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-endpoints). - -" -8D2B29253C00AE6A20730D0C9AD3284DC0FCABF5_7,8D2B29253C00AE6A20730D0C9AD3284DC0FCABF5," Learn more - - - -* [IBM Cloud docs: IBM Cloud global data centers](https://www.ibm.com/cloud/data-centers) -* [Services in the IBM watsonx catalog](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-services.html) - - - -Parent topic:[Services and integrations](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/svc-int.html) -" -F908A5F1D788E2597335215A464817436A3D3ED1_0,F908A5F1D788E2597335215A464817436A3D3ED1," Levels of user access roles in IBM watsonx - -Every user of IBM watsonx has multiple levels of roles with the corresponding permissions, or actions. The permissions determine what actions a user can perform on the platform or within a service. Some roles are set in IBM Cloud, and others are set in IBM watsonx. - -The IBM Cloud account owner or administrator sets the Identity and Access (IAM) Platform and Service access roles in the IBM Cloud account. Workspace administrators in watsonx set the collaborator roles for workspaces, for example, projects and deployment spaces. - -Familiarity with the IBM Cloud IAM feature, Access groups, Platform roles, and Service roles is required to configure user access for IBM watsonx. See [IBM Cloud docs: IAM access](https://cloud.ibm.com/docs/account?topic=account-userroles) for a description of IBM Cloud IAM Platform and Service roles. - -This illustration shows the different levels of roles assigned to each user so that they can work in IBM watsonx. - -Levels of roles in IBM watsonx - -![Levels of roles in IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/roles_venn.svg) - -The levels of roles are: - - - -* [IAM Platform access roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html?context=cdpaas&locale=enplatform) determine your permissions for the IBM Cloud account. At least the Viewer role is required to work with services. -* [IAM Service access roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html?context=cdpaas&locale=enservice) determine your permissions within services. -" -F908A5F1D788E2597335215A464817436A3D3ED1_1,F908A5F1D788E2597335215A464817436A3D3ED1,"* [Workspace collaborator roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html?context=cdpaas&locale=enworkspace) determine what actions you have permission to perform within workspaces in IBM watsonx. - - - -" -F908A5F1D788E2597335215A464817436A3D3ED1_2,F908A5F1D788E2597335215A464817436A3D3ED1," IAM Platform access roles - -The IAM Platform access roles are assigned and managed in the IBM Cloud account. - -IAM Platform access roles provide permissions to manage the IBM Cloud account and to access services within IBM watsonx. The Platform access roles are Viewer, Operator, Editor, and Administrator. The Platform roles are available to all services on IBM Cloud. - -The Viewer role has minimal, view-only permissions. Users need at least Viewer role to see the services in IBM watsonx. A Viewer can: - - - -* View, but not modify, available service instances and assets -* Associate services with projects. -* Become collaborator in projects or deployment spaces. -* Create projects and deployment spaces if assigned appropriate permissions for Cloud Object Storage. - - - -The Operator role has permissions to configure existing service instances. - -The Editor role provides access to these actions: - - - -* All Viewer role permissions. -* Provision instances of services. -* Update plans for service instances. - - - -The Administrator role provides the same permissions as the Owner role for the account. With Administrator role, you can: - - - -* All Viewer, Operator, and Editor permissions. -* Perform all management actions for services. -* Add users to the [IBM Cloud account and assign roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-addl-users.html) -* Perform administrative tasks in IBM watsonx -* [Manage services for IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html) - - - -To understand IAM Platform access roles, see [IBM Cloud docs: What is IBM Cloud Identity and Access Management?](https://cloud.ibm.com/docs/account?topic=account-iamoverview). - -" -F908A5F1D788E2597335215A464817436A3D3ED1_3,F908A5F1D788E2597335215A464817436A3D3ED1," IAM Service access roles - -Service roles apply to individual services and define actions permitted within the service. IBM Cloud Object Storage has its own set of Service access roles. See [Setting up IBM Cloud Object Storage for use with IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html). - -" -F908A5F1D788E2597335215A464817436A3D3ED1_4,F908A5F1D788E2597335215A464817436A3D3ED1," Workspace collaborator roles - -Your role in a specific workspace determines what actions you can perform in that workspace. Your IAM roles do not affect your role within a workspace. For example, you can be the Administrator of the Cloud account, but this does not automatically make you an administrator for a project or catalog. The Admin collaborator role for a project (or other workspace) must be explicitly assigned. Similarly, roles are specific to each project. You may have Admin role in a project, which gives you full control of the contents of that project, including managing collaborators and assets. But you can have the Viewer role in another project, which allows you to only view the contents of that project. - -Projects and deployment spaces have these roles: - - - -* Admin: Control assets, collaborators, and settings in the workspace. -* Editor: Control assets in the workspace. -* Viewer: View the workspace and its contents. - - - -The permissions that are associated with each role are specific to the type of workspace: - - - -* [Project collaborator roles and permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborator-permissions.html) -* [Deployment space collaborator roles and permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/collaborator-permissions-wml.html) - - - -" -F908A5F1D788E2597335215A464817436A3D3ED1_5,F908A5F1D788E2597335215A464817436A3D3ED1," Learn more - - - -* [IBM Cloud docs: What is IBM Cloud Identity and Access Management?](https://cloud.ibm.com/docs/account?topic=account-iamoverview) -* [IBM Cloud docs: IAM access](https://cloud.ibm.com/docs/account?topic=account-userroles) -* [Setting up IBM watsonx for your organization](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html) -* [Managing IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html) -* [Find your IBM Cloud account owner or administrator](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.htmlaccountadmin) -* [Determine your roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/your-roles.html) - - - -Parent topic:[Adding users to the account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-addl-users.html) -" -ABFAAF84948B090C8EA099FF44CC8CD878371073_0,ABFAAF84948B090C8EA099FF44CC8CD878371073," IBM Cloud account security - -Account security mechanisms for IBM watsonx are provided by IBM Cloud. These security mechanisms, including SSO and role-based, group-based, and service-based access control, protect access to resources and provide user authentication. - - - - Mechanism Purpose Responsibility Configured on - - [Access (IAM) roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html?context=cdpaas&locale=eniam-access-roles) Provide role-based access control for services Customer IBM Cloud - [Access groups](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html?context=cdpaas&locale=enaccess-groups) Configure access groups and policies Customer IBM Cloud - [Resource groups](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html?context=cdpaas&locale=enresource-groups) Organize resources into groups and assign access Customer IBM Cloud - [Service IDs](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html?context=cdpaas&locale=enservice-ids) Enables an application outside of IBM Cloud access to your IBM Cloud services Customer IBM Cloud - [Service ID API keys](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html?context=cdpaas&locale=enservice-id-api-keys) Authenticates an application to a Service ID Customer IBM Cloud - [Activity Tracker](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html?context=cdpaas&locale=enactivity-tracker) Monitor events related to IBM watsonx Customer IBM Cloud -" -ABFAAF84948B090C8EA099FF44CC8CD878371073_1,ABFAAF84948B090C8EA099FF44CC8CD878371073," [Multifactor authentication (MFA)](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html?context=cdpaas&locale=enmultifactor-authentication) Require users to authenticate with a method beyond ID and password Customer IBM Cloud - [Single sign-on authentication](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html?context=cdpaas&locale=ensingle-sign-on) Connect with an identity provider (IdP) for single sign-on (SSO) authentication by using SAML federation Shared IBM Cloud - - - -" -ABFAAF84948B090C8EA099FF44CC8CD878371073_2,ABFAAF84948B090C8EA099FF44CC8CD878371073," IAM access roles - -You can use IAM access roles to provide users access to all resources that belong to a resource group. You can also give users access to manage resource groups and create new service instances that are assigned to a resource group. - -For step-by-step instructions, see [IBM Cloud docs: Assigning access to resources](https://cloud.ibm.com/docs/account?topic=account-access-getstarted) - -" -ABFAAF84948B090C8EA099FF44CC8CD878371073_3,ABFAAF84948B090C8EA099FF44CC8CD878371073," Access groups - -After you set up and organize resource groups in your account, you can streamline access management by using access groups. Create access groups to organize a set of users and service IDs into a single entity. You can then assign a policy to all group members by assigning it to the access group. Thus you can assign a single policy to the access group instead of assigning the same policy multiple times per individual user or service ID. - -By using access groups, you can minimally manage the number of assigned policies by giving the same access to all identities in an access group. - -For more information see: - - - -* [IBM Cloud docs: Setting up access groups](https://cloud.ibm.com/docs/account?topic=account-groups&interface=ui). - - - -" -ABFAAF84948B090C8EA099FF44CC8CD878371073_4,ABFAAF84948B090C8EA099FF44CC8CD878371073," Resource groups - -Use resource groups to organize your account's resources into logical groups that help with access control. Rather than assigning access to individual resources, you assign access to the group. Resources are any service that is managed by IAM, such as databases. Whenever you create a service instance from the Cloud catalog, you must assign it to a resource group. - -Resource groups work with access group policies to provide a way to manage access to resources by groups of users. By including a user in an access group, and assigning the access group to a resource group, you provide access to the resources contained in the group. Those resources are not available to nonmembers. The Lite account comes with a single resource group, named ""Default"", so all resources are placed in the Default resource group. With paid accounts, Administrators can create multiple resource groups to support your business and provide access to resources on an as-needed basis. - -For step-by-step instructions, see [IBM Cloud docs: Managing resource groups](https://cloud.ibm.com/docs/account?topic=account-rgs) - -For tips on configuring resource groups to provide secure access, see [IBM Cloud docs: Best practices for organizing resources and assigning access](https://cloud.ibm.com/docs/account?topic=account-account_setup) - -" -ABFAAF84948B090C8EA099FF44CC8CD878371073_5,ABFAAF84948B090C8EA099FF44CC8CD878371073," Service IDs - -You can create service IDs in IBM Cloud to enable an application outside of IBM Cloud access to your IBM Cloud services. Service IDs are not tied to a specific user. If a user leaves an organization and is deleted from the account, the service ID remains intact to ensure that your service continues to work. Access policies that are assigned to each service ID ensure that your application has the appropriate access for authenticating with your IBM Cloud services. See [Project collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html). - -One way in which Service IDs and access policies can be used is to manage access to the Cloud Object Storage buckets. See [Controlling access to Cloud Object Storage buckets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/cos_buckets.html). - -For more information, see [IBM Cloud docs: Creating and working with service IDs](https://cloud.ibm.com/docs/account?topic=account-serviceids). - -" -ABFAAF84948B090C8EA099FF44CC8CD878371073_6,ABFAAF84948B090C8EA099FF44CC8CD878371073," Service ID API keys - -For extra protection, Service IDs can be combined with unique API keys. The API key that is associated with a Service ID can be set for one-time use or unlimited use. For more information, see [IBM Cloud docs: Managing service IDs API keys](https://cloud.ibm.com/docs/account?topic=account-serviceidapikeys). - -" -ABFAAF84948B090C8EA099FF44CC8CD878371073_7,ABFAAF84948B090C8EA099FF44CC8CD878371073," Activity Tracker - -The Activity Tracker collects and stores audit records for API calls (events) made to resources that run in the IBM Cloud. You can use Activity Tracker to monitor the activity of your IBM Cloud account to investigate abnormal activity and critical actions, and to comply with regulatory audit requirements. The events that are collected comply with the Cloud Auditing Data Federation (CADF) standard. IBM services that generate Activity Tracker events follow the IBM Cloud security policy. - -For a list of events that apply to IBM watsonx, see [Activity Tracker events](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/at-events.html). - -For instructions on configuring Activity Tracker, see [IBM Cloud docs: Getting started with IBM Cloud Activity Tracker](https://cloud.ibm.com/docs/activity-tracker?topic=activity-tracker-getting-started). - -" -ABFAAF84948B090C8EA099FF44CC8CD878371073_8,ABFAAF84948B090C8EA099FF44CC8CD878371073," Multifactor authentication - -Multifactor authentication (or MFA) adds an extra layer of security by requiring multiple types of authentication methods upon login. After entering a valid username and password, users must also satisfy a second authentication method. For example, a time-sensitive passcode is sent to the user, either through text or email. The correct passcode must be entered to complete the login process. - -For more information, see [IBM Cloud docs: Types of multifactor authentication](https://cloud.ibm.com/docs/account?topic=account-types). - -" -ABFAAF84948B090C8EA099FF44CC8CD878371073_9,ABFAAF84948B090C8EA099FF44CC8CD878371073," Single sign-on authentication - -Single sign-on (SSO) is an authentication method that enables users to log in to multiple, related applications that use one set of credentials. - -IBM watsonx supports SSO using Security Assertion Markup Language (SAML) federated IDs. SAML federation requires coordination with IBM to configure. SAML connects IBMids with the user credentials that are provided by an identity provider (IdP). For companies that have configured SAML federation with IBM, users can log in to IBM watsonx with their company credentials. SAML federation is the recommended method for SSO configuration with IBM watsonx. - -The [IBMid Enterprise Federation Adoption Guide](https://ibm.ent.box.com/notes/78040808400?s=yqjnprek2rm99jgqhlm04xz0nsjda69a) describes the steps that are required to federate your identity provider (IdP). You need an IBM Sponsor, which is an IBM employee that works as the contact person between you and the IBMid team. - -For an overview of SAML federation, see [IBM Cloud SAML Federation Guide](https://www.ibm.com/cloud/blog/ibm-cloud-saml-federation-guide). This blog discusses both SAML federation and IBM Cloud App ID. IBM Cloud App ID is supported as a Beta version with IBM watsonx. - -Parent topic:[Security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html) -" -DA7407D415B3EFF25CA3DD588BBA677CC8CD3494_0,DA7407D415B3EFF25CA3DD588BBA677CC8CD3494," Collaborator security - -IBM watsonx provides attribute-based access control to protect workspaces such as projects and catalogs. You control access to workspaces by assigning roles and by restricting collaborators. - - - -Table 1. Collaborator security mechanisms for IBM watsonx - - Mechanism Purpose Responsibility Configured on - - [Collaborator roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-collab.html?context=cdpaas&locale=encollaborator-roles) Assign roles to control access to workspaces Customer IBM watsonx - - - -" -DA7407D415B3EFF25CA3DD588BBA677CC8CD3494_1,DA7407D415B3EFF25CA3DD588BBA677CC8CD3494," Collaborator roles - -Everyone working in IBM watsonx is assigned a role that determines the workspaces that they can access and the tasks that they can perform. Collaborator roles control access to projects, deployment spaces, and catalogs using permissions specific to the role. Roles are assigned in IBM watsonx to provide Admin, Editor, or Viewer permissions. - -Users also have an IAM Platform access role for the Cloud account and they may also have an IAM Service access role for workspaces. To understand how the roles provide secure access, see [Roles in IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html). - -To understand the permissions for each collaborator role, see [Project collaborator roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborator-permissions.html). - -Parent topic:[Security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html) -" -F0B03CE62E6DC8AC7598A8B8316C4BF8CEA132D5_0,F0B03CE62E6DC8AC7598A8B8316C4BF8CEA132D5," Data security - -In IBM watsonx, data security mechanisms, such as encryption, protect sensitive customer and corporate data, both in transit and at rest. A secure , and other mechanisms protect your valuable corporate data. A secure IBM Cloud Object Storage instance stores data assets from projects, catalogs, and deployment spaces. - - - - Mechanism Purpose Responsibility Configured on - - [Configuring Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html?context=cdpaas&locale=enconfiguring-cloud-object-storage) IBM Cloud Object Storage is required to store assets Customer IBM Cloud - [Controlling access with service credentials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html?context=cdpaas&locale=encontrolling-access-with-service-credentials) Authorize a Cloud Object Storage instance for a specific project Customer IBM Cloud and IBM watsonx - [Encrypting at rest data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html?context=cdpaas&locale=enencrypting-at-rest-data) Default encryption is provided. Use IBM Key Protect to manage your own keys. Shared IBM Cloud - [Encrypting in motion data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html?context=cdpaas&locale=enencrypting-in-motion-data) Encryption methods such as HTTPS, SSL, and TLS are used to protect data in motion. IBM, Third-party clouds IBM Cloud, Cloud providers - [Backups](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html?context=cdpaas&locale=enbackups) Use IBM Cloud Backup to manage backups for your data. Shared IBM Cloud - - - -" -F0B03CE62E6DC8AC7598A8B8316C4BF8CEA132D5_1,F0B03CE62E6DC8AC7598A8B8316C4BF8CEA132D5," Configuring Cloud Object Storage - -IBM Cloud Object Storage provides storage for projects, catalogs, and deployment spaces. You are required to associate an IBM Cloud Object Storage instance when you create projects, catalogs, or deployment spaces to store files for assets, such as uploaded data files or notebook files. The Lite plan instance is free to use for storage capacity up to 25 GB per month. - -You can also access data sources in an IBM Cloud Object Storage instance. To access data IBM Cloud Object Storage, you create an IBM Cloud Object Storage connection when you want to connect to data stored in IBM Cloud Object Storage. An IBM Cloud Object Storage connection has a different purpose from the IBM Cloud Object Storage instance that you associate with a project, deployment space, or catalog. - -The IBM Cloud Identity and Access Management (IAM) service securely authenticates users and controls access to IBM Cloud Object Storage. See [IBM Cloud docs: Getting started with IAM](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-iam) for instructions on setting up access control for Cloud Object Storage on IBM Cloud. - -See [IBM Cloud docs: Getting started with IBM Cloud Object Storage](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-getting-started-cloud-object-storage) - -" -F0B03CE62E6DC8AC7598A8B8316C4BF8CEA132D5_2,F0B03CE62E6DC8AC7598A8B8316C4BF8CEA132D5," Controlling access with service credentials - -Cloud Object Storage credentials consist of a service credential and a Service ID. Policies are assigned to Service IDs to control access. The credentials are used to create a secure connection to the Cloud Object Storage instance, with access control as determined by the policy. - -For more information, see [Controlling access to Cloud Object Storage buckets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/cos_buckets.html) - -" -F0B03CE62E6DC8AC7598A8B8316C4BF8CEA132D5_3,F0B03CE62E6DC8AC7598A8B8316C4BF8CEA132D5," Encrypting at rest data - -By default, at rest data is encrypted with randomly generated keys that are managed by IBM. If the default keys are sufficient protection for your data, no additional action is needed. To provide extra protection for at rest data, you can create and manage your own keys with IBM® Key Protect for IBM Cloud™. Key Protect is a full-service encryption solution that allows data to be secured and stored in IBM Cloud Object Storage. - -To encrypt your Cloud Object Storage instance with your own key, create an instance of the IBM Key Project service from the IBM Cloud catalog. Not all Watson Studio plans support customer-generated encryption keys. - - - -* For instructions on encrypting your Cloud Object Storage instance with your own key, see [Setting up IBM Cloud Object Storage for use with IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html) -* For an overview of how to encrypt data with your own keys, see [IBM Cloud docs: Encrypting data with your own keys](https://cloud.ibm.com/docs/overview?topic=overview-key-encryption) -* For the complete documentation for Key Protect, see [IBM Cloud docs: IBM Key Protect](https://cloud.ibm.com/docs/key-protect) -* For an overview of how encryption works in the IBM Cloud Security Architecture, see [Data security architecture](https://www.ibm.com/cloud/architecture/architectures/data-security-arch) - - - -" -F0B03CE62E6DC8AC7598A8B8316C4BF8CEA132D5_4,F0B03CE62E6DC8AC7598A8B8316C4BF8CEA132D5," Encrypting in motion data - -Data is encrypted when transmitted by IBM on any public networks and within the Cloud Service's private data center network. Encryption methods such as HTTPS, SSL, and TLS are used to protect data in motion. - -" -F0B03CE62E6DC8AC7598A8B8316C4BF8CEA132D5_5,F0B03CE62E6DC8AC7598A8B8316C4BF8CEA132D5," Backups - -To avoid loss of important data, create and properly store backups. You can use IBM Cloud Backup to securely back up your data between IBM Cloud servers in one or more IBM Cloud data centers. See [IBM Cloud docs: Getting started with IBM Cloud Backup](https://cloud.ibm.com/docs/Backup?topic=Backup-getting-started) - -Learn More For more information, see [IBM Cloud docs: Getting started with Security and Compliance Center](https://cloud.ibm.com/docs/security-compliance). - -Parent topic:[Security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html) -" -4EF4409D11DD4D5360D711EEA8E1E71DAC4C0BD7_0,4EF4409D11DD4D5360D711EEA8E1E71DAC4C0BD7," Enterprise security - -An enterprise is a hierarchy of IBM Cloud accounts that contains a parent account at the highest level with child account groups as the middle level and optional individual accounts that you can add at the lowest level. To provide security between the levels of accounts, enterprises isolate user and access management between the enterprise account and its child accounts. - -The users and their assigned access in the enterprise account are entirely separate from users in the child accounts, and no access is inherited between the two types of accounts. User and access management in each enterprise and each account is entirely separate and must be managed by the account owner or a user given the Administrator role in the specific account. - -Resources and services within an enterprise function the same as in stand-alone accounts. Each account in an enterprise can contain resource groups that manage access to multiple resources. For account security and how to use resource groups, see [IBM Cloud account security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html). - -" -4EF4409D11DD4D5360D711EEA8E1E71DAC4C0BD7_1,4EF4409D11DD4D5360D711EEA8E1E71DAC4C0BD7," Use cases - -The user lists for each account are only visible to the users who are invited to that account. Just because a user is invited and given access to manage the entire enterprise, it doesn't mean that they can view the users who are invited to each child account. - -Both user management and access management are entirely separate in each account and in the enterprise itself. This separation means that users who manage your enterprise can't access account resources within the child accounts unless you specifically enable them to. For example, your financial officer can have the Administrator role on the Billing account management service within the enterprise account. The financial officer must be invited to a child account with the appropriate access rights to view offers or update spending limits for the child account. - -![Role inheritance for enterprises](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/enterprise_role_hierarchy.svg) - -" -4EF4409D11DD4D5360D711EEA8E1E71DAC4C0BD7_2,4EF4409D11DD4D5360D711EEA8E1E71DAC4C0BD7," Learn more - -For an overview of enterprise accounts, see [IBM Cloud docs: What is an enterprise?](https://cloud.ibm.com/docs/account?topic=account-what-is-enterprise) - -For step-by-step instructions for setting up an enterprise hierarchy of accounts, see [IBM Cloud docs: Setting up an enterprise](https://cloud.ibm.com/docs/account?topic=account-enterprise-tutorial) - -For tips for setting up an enterprise, see [IBM Cloud docs: Best practices for setting up an enterprise](https://cloud.ibm.com/docs/account?topic=account-enterprise-best-practices) - -Parent topic:[Security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html) -" -AAC63365F37F6B307BA343F45706E388D24245D4_0,AAC63365F37F6B307BA343F45706E388D24245D4," Network security - -IBM watsonx provides network security mechanisms to protect infrastructure, data, and applications from potential threats and unauthorized access. Network security mechanisms provide secure connections to data sources and control traffic across both the public internet and internal networks. - - - -Table 1. Network security mechanisms for IBM watsonx - - Mechanism Purpose Responsibility Configured on - - [Private network service endpoints](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=enprivate-network-service-endpoints) Access services through secure private network endpoints Customer IBM Cloud - [Access to private data sources](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=enaccess-to-private-data-sources) Connect to data sources that are protected by a firewall Customer IBM watsonx - [Integrations](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=enintegrations) Secure connections to Third-party clouds through a firewall Customer and Third-party clouds IBM watsonx - [Connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=enconnections) Secure connections to data sources Customer IBM watsonx - [Connections to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=ensecure) The Satellite Connector and Satellite location provide secure connections to data sources in a hybrid environment Customer IBM Cloud and IBM watsonx -" -AAC63365F37F6B307BA343F45706E388D24245D4_1,AAC63365F37F6B307BA343F45706E388D24245D4," [VPNs](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=envpns) Share data securely across public networks Customer IBM Cloud - [Allow specific IP addresses](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=enallow-specific-ip-addresses) Protect from access by unknown IP addresses Customer IBM Cloud - [Allow third party URLs](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=enthirdpartyurls) Allow third party URLs on an internal network Customer Customer firewall - [Multi-tenancy](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=enmulti-tenancy) Provide isolation in a SaaS environment IBM and Third-party clouds IBM Cloud, Cloud providers - - - -" -AAC63365F37F6B307BA343F45706E388D24245D4_2,AAC63365F37F6B307BA343F45706E388D24245D4," Private network service endpoints - -Use private network service endpoints to securely connect to endpoints over IBM private cloud, rather than connecting to resources over the public network. With Private network service endpoints, services are no longer served on an internet routable IP address and thus are more secure. Service endpoints require virtual routing and forwarding (VRF) to be enabled on your account. VRF is automatically enabled for Virtual Private Clouds (VPCs). - -For more information about service endpoints, see: - - - -* [Securing connections to services with private service endpoints](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/endpoints-vrf.html?audience=wdp) -* [Blog: Introducing Private Service Endpoints in IBM Cloud Databases](https://www.ibm.com/cloud/blog/introducing-private-service-endpoints-in-ibm-cloud-databases?mhsrc=ibmsearch_a&mhq=private%20cloud%20endpoints) -* [IBM Cloud docs: Secure access to services using service endpoints](https://cloud.ibm.com/docs/account?topic=account-service-endpoints-overview) -* [IBM Cloud docs: Enabling VRF and service endpoints](https://cloud.ibm.com/docs/account?topic=account-vrf-service-endpoint) -* [IBM Cloud docs: Public and private network endpoints](https://cloud.ibm.com/docs/watson?topic=watson-public-private-endpoints&mhsrc=ibmsearch_a&mhq=public%20cloud%20endpoints) - - - -" -AAC63365F37F6B307BA343F45706E388D24245D4_3,AAC63365F37F6B307BA343F45706E388D24245D4," Access to private data sources - -Private data sources are on-premises data sources that are protected by a firewall. IBM watsonx requires access through the firewall to reach the data sources. To provide secure access, you create inbound firewall rules to allow access for the IP address ranges for IBM watsonx. The inbound rules are created in the configuration tool for your firewall. - -See [Configuring firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall_ovrvw.html) - -" -AAC63365F37F6B307BA343F45706E388D24245D4_4,AAC63365F37F6B307BA343F45706E388D24245D4," Integrations - -You can configure integrations with third-party cloud platforms to allow IBM watsonx users to access data sources hosted on those clouds. The following security mechanisms apply to integrations with third-party clouds: - - - -1. An authorized account on the third-party cloud, with appropriate permissions to view account credentials -2. Permissions to allow secure connections through the firewall of the cloud provider (for specific IP ranges) - - - -For example, you have a data source on AWS that you are running notebooks on. You need to integrate with AWS and then generate a connection to the database. The integration and connection are secure. After you configure firewall access, you can grant appropriate permissions to users and provide them with credentials to access data. - -See [Integrations with other cloud platforms](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-cloud.html) - -" -AAC63365F37F6B307BA343F45706E388D24245D4_5,AAC63365F37F6B307BA343F45706E388D24245D4," Connections - -Connections require valid credentials to access data. The account owner or administrator configures the type of credentials that are required, either shared or personal, at the account level. Shared credentials make the data source and its credentials accessible to all collaborators in the project. Personal credentials require each collaborator to provide their own credentials to use the data source. - -Connections require valid credentials to access data. The account owner or administrator configures the type of credentials that are required at the account level. The connection creator enters a valid credential. The options are: - - - -* Either shared or personal allows users to specify personal or shared credentials when creating a new connection by selecting a radio button and entering the correct credential. -* Personal credentials require each collaborator to provide their own credentials to use the data source. -* Shared credentials make the data source and its credentials accessible to all collaborators in the project. Users enter a common credential which was created by the creator of the connection. - - - -For more information about connections, see: - - - -* [Adding connections to projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) -* [Adding data from a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) -* [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html) -* [Managing your account settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.htmlset-the-credentials-for-connections) - - - -" -AAC63365F37F6B307BA343F45706E388D24245D4_6,AAC63365F37F6B307BA343F45706E388D24245D4," Connections to data behind a firewall - -Secure connections provide secure communication among resources in a hybrid cloud deployment, some of which might reside behind a firewall. You have the following options for secure connections between your environment and the cloud: - - - -* [Satellite Connector](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=enlink) -* [Satellite location](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html?context=cdpaas&locale=enlocation) - - - -" -AAC63365F37F6B307BA343F45706E388D24245D4_7,AAC63365F37F6B307BA343F45706E388D24245D4," Satellite Connector - -A Satellite Connector uses a lightweight Docker-based communication that creates secure and auditable communications from your on-prem, cloud, or Edge environment back to IBM Cloud. Your infrastructure needs only a container host, such as Docker. For more information, see [Satellite Connector overview](https://cloud.ibm.com/docs/satellite?topic=satellite-understand-connectors&interface=ui). - -See [Connecting to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.htmlsatctr) for instructions on configuring a Satellite Connector. - -Satellite Connector is the replacement for the deprecated Secure Gateway. For the Secure Gateway deprecation announcement, see [IBM Cloud docs: Secure Gateway Deprecation Overview](https://cloud.ibm.com/docs/SecureGateway?topic=SecureGateway-dep-overview) - -" -AAC63365F37F6B307BA343F45706E388D24245D4_8,AAC63365F37F6B307BA343F45706E388D24245D4," Satellite location - -A Satellite location provides the same secure communications to IBM Cloud as a Satellite Connector but adds high availability access by default plus the ability to communicate from IBM Cloud to your on-prem location. A Satellite location requires at least three x86 hosts in your infrastructure for the HA control plane. A Satellite location is a superset of the capabilities of the Satellite Connector. If you need only client data communication, set up a Satellite Connector. - -See [Connecting to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.htmlsl) for instructions on configuring a Satellite location. - -" -AAC63365F37F6B307BA343F45706E388D24245D4_9,AAC63365F37F6B307BA343F45706E388D24245D4," VPNs - -Virtual Private Networks (VPNs) create virtual point-to-point connections by using tunneling protocols, and encryption and dedicated connections. They provide a secure method for sharing data across public networks. - -Following are the VPN technologies on IBM Cloud: - - - -* [IPSec VPN](https://cloud.ibm.com/catalog/infrastructure/ipsec-vpn): The VPN facilitates connectivity from your secure network to IBM IaaS platform’s private network. Any user on the account can be given VPN access. -* [VPN for VPC](https://cloud.ibm.com/vpc-ext/provision/vpngateway): With Virtual Private Cloud (VPC), you can provision generation 2 virtual server instances for VPC with high network performance. -* The Secure Gateway deprecation announcement provides information and scenarios for using VPNs as an alternative. See [IBM Cloud docs: Migration options](https://cloud.ibm.com/docs/SecureGateway?topic=SecureGateway-dep-migration-optionsvirtual-private-network). - - - -" -AAC63365F37F6B307BA343F45706E388D24245D4_10,AAC63365F37F6B307BA343F45706E388D24245D4," Allow specific IP addresses - -Use this mechanism to control access to the IBM cloud console and to IBM watsonx. Access is allowed from the specified IP addresses only; access from all other IP addresses is denied. You can specify the allowed IP addresses for an individual user or for an account. - -When allowing specific IP addresses for Watson Studio, you must include the CIDR ranges for the Watson Studio nodes in each region (as well as the individual client system IPs that are allowed). You can include the CIDR ranges in IBM watsonx by following these steps: - - - -1. From the main menu, choose Administration > Cloud integrations. -2. Click Firewall configuration to display the IP addresses for the current region. Use CIDR notation. -3. Copy each CIDR range into the IP address restrictions for either a user or an account. Be sure to enter the allowed individual client IP addresses as well. Enter the IP addresses as a comma-separated list. Then, click Apply. -4. Repeat for each region to allow access for Watson Studio. - - - -For step-by-step instructions for both user and account restrictions, see [IBM Cloud docs: Allowing specific IP addresses](https://cloud.ibm.com/docs/account?topic=account-ips) - -" -AAC63365F37F6B307BA343F45706E388D24245D4_11,AAC63365F37F6B307BA343F45706E388D24245D4," Allow third party URLs on an internal network - -If you are running IBM watsonx behind a firewall, you must allowlist third party URLs to provide outbound browser access. The URLs include resources from IBM Cloud and other domains. IBM watsonx requires access to these domains for outbound browser traffic through the firewall. - -This list provides access only for core IBM watsonx functions. Specific services might require additional URLs. The list does not cover URLs required by the IBM Cloud console and its outbound requests. - - - -Table 2. Third party URLs allowlist for IBM watsonx - - Domain Description - - *.bluemix.net IBM legacy Cloud domain - still used in some flows - *.appdomain.cloud IBM Cloud app domain - cloud.ibm.com IBM Cloud global domain - *.cloud.ibm.com Various IBM Cloud subdomains - dataplatform.cloud.ibm.com IBM watsonx Dallas region - *.dataplatform.cloud.ibm.com CIBM watsonx subdomains - eum.instana.io Instana client side instrumentation - eum-orange-saas.instana.io Instana client side instrumentation - cdnjs.cloudflare.com Cloudflare CDN for some static resources - nebula-cdn.kampyle.com Medallia NPS - resources.digital-cloud-ibm.medallia.eu Medallia NPS - udc-neb.kampyle.com Medallia NPS - ubt.digital-cloud-ibm.medallia.eu Medallia NPS - cdn.segment.com Segment JS - api.segment.io Segment API - cdn.walkme.com WalkMe static resources - papi.walkme.com WalkMe API - ec.walkme.com WalkMe API - playerserver.walkme.com WalkMe player server - s3.walkmeusercontent.com WalkMe static resources - - - -" -AAC63365F37F6B307BA343F45706E388D24245D4_12,AAC63365F37F6B307BA343F45706E388D24245D4," Multi-tenancy - -IBM watsonx is hosted as a secure and compliant multi-tenant solution on IBM Cloud. See [Multi-Tenant](https://www.ibm.com/cloud/learn/multi-tenant) - -Parent topic:[Security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html) -" -3A81B302EE01FDC0AC111CFF3ABFDB96E3A0CDD6_0,3A81B302EE01FDC0AC111CFF3ABFDB96E3A0CDD6," Security for IBM watsonx - -Security mechanisms in IBM watsonx provide protection for data, applications, identity, and resources. You can configure security mechanisms on five levels for IBM Cloud security functions. - -" -3A81B302EE01FDC0AC111CFF3ABFDB96E3A0CDD6_1,3A81B302EE01FDC0AC111CFF3ABFDB96E3A0CDD6," Security levels in IBM watsonx - -Security for IBM watsonx is configured on levels to ensure that your data, application endpoints, and identity are protected on any cloud. The security levels are: - - - -1. [Network security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.html) – Network security protects the network infrastructure and the points where your database or applications interact with the cloud. For example, you can protect your network by allowing IP addresses, by connecting securely to databases and third-party clouds, and by securing endpoints. -2. [Enterprise security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-enterprise.html) – Enterprises are multiple IBM Cloud accounts in a hierarchy. For example, your company might have many teams that require one or more separate accounts for development, testing, and production environments. Or, you can configure an enterprise to isolate workloads in separate accounts to meet compliance guidelines. -3. [Account security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html) – Account security includes IAM and Access group roles, Service IDs, monitoring, and other security mechanisms that are configured on IBM Cloud for your IBM Cloud account. -4. [Data security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html) – Data security protects the IBM Cloud Object Storage service instance, provides data encryption for at-rest and in-motion data, and other security mechanisms related to data. -5. [Collaborator security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-collab.html) – Protect your workspaces by assigning role-based access controls to collaborators in IBM watsonx. - - - -" -3A81B302EE01FDC0AC111CFF3ABFDB96E3A0CDD6_2,3A81B302EE01FDC0AC111CFF3ABFDB96E3A0CDD6,"IBM watsonx conforms to IBM Cloud security requirements. See [IBM Cloud docs: How do I know that my data is safe?](https://cloud.ibm.com/docs/overview?topic=overview-security). - -" -3A81B302EE01FDC0AC111CFF3ABFDB96E3A0CDD6_3,3A81B302EE01FDC0AC111CFF3ABFDB96E3A0CDD6," Resiliency - -IBM watsonx is disaster resistant: - - - -* The metadata for your projects and catalogs is stored in a three-node dedicated Cloudant Enterprise cluster that spans multiple geographic locations. -* The files that are associated with projects and catalogs are protected by the level of resiliency that is specified by the IBM Cloud Object Storage plan. - - - -" -3A81B302EE01FDC0AC111CFF3ABFDB96E3A0CDD6_4,3A81B302EE01FDC0AC111CFF3ABFDB96E3A0CDD6," Compliance - -See [Keep your data secure and compliant](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html). - -" -3A81B302EE01FDC0AC111CFF3ABFDB96E3A0CDD6_5,3A81B302EE01FDC0AC111CFF3ABFDB96E3A0CDD6," Learn more - - - -* [watsonx terms](https://www.ibm.com/support/customer/csol/terms/?id=i126-9640&lc=endetail-document) -* [IBM Watson Machine Learning terms](http://www.ibm.com/support/customer/csol/terms/?id=i126-6883) -* [IBM Watson Studio terms](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747) -* [IBM Cloud Object Storage terms](https://www.ibm.com/software/sla/sladb.nsf/sla/bm-7857-03) -* [Managing security and compliance in IBM Cloud](https://cloud.ibm.com/docs/overview?topic=overview-manage-security-compliance) -* [Software Product Compatibility Reports: IBM Watson Studio](https://www.ibm.com/software/reports/compatibility/clarity-reports/report/html/softwareReqsForProduct?deliverableId=95E9BEA0B35711E7A9EB066095601ABB). -* [Software Product Compatibility Reports: IBM Watson Machine Learning service](https://www.ibm.com/software/reports/compatibility/clarity-reports/report/html/softwareReqsForProduct?deliverableId=850D9360405711E5B2E4A36A7B0C4479). - - - -Parent topic:[Administering your accounts and services](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html) -" -581F43AA02D6C6861D2FDF220617CF3FBB903AE5_0,581F43AA02D6C6861D2FDF220617CF3FBB903AE5," Keeping your data secure and compliant - -Customer data security is paramount. The following information outlines some of the ways that customer data is protected when using IBM watsonx and what you are expected to do to help in these efforts. - - - -* [Customer responsibility](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html?context=cdpaas&locale=encustomer-responsibility) -* [HIPAA readiness](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html?context=cdpaas&locale=enhipaa) -* [IBM's commitment to GDPR](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html?context=cdpaas&locale=engdpr) -* [Content and Data Protection](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html?context=cdpaas&locale=encontent-and-data-protection) -* [GDPR statement that applies to IBM Watson Machine Learning log files](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html?context=cdpaas&locale=enlogfiles) -* [Secure deletion from the IBM Watson Machine Learning service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html?context=cdpaas&locale=ensecure-deletion) - - - -" -581F43AA02D6C6861D2FDF220617CF3FBB903AE5_1,581F43AA02D6C6861D2FDF220617CF3FBB903AE5," Customer responsibility - -Clients are responsible for ensuring their own compliance with various laws and regulations, including the European Union General Data Protection Regulation (GDPR). Clients are solely responsible for obtaining advice of competent legal counsel as to the identification and interpretation of any relevant laws and regulations that may affect the clients’ business and any actions the clients may need to take to comply with such laws and regulations. The products, services, and other capabilities described herein are not suitable for all customer situations and may have restricted availability. IBM does not provide legal, accounting, or auditing advice or represent or warrant that its services or products will ensure that clients are in compliance with any law or regulation. - -" -581F43AA02D6C6861D2FDF220617CF3FBB903AE5_2,581F43AA02D6C6861D2FDF220617CF3FBB903AE5," HIPAA readiness - -Watson Studio and Watson Machine Learning meet the required IBM controls that are commensurate with the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Security and Privacy Rule requirements. - -These requirements include the appropriate administrative, physical, and technical safeguards required of Business Associates in 45 CFR Part 160 and Subparts A and C of Part 164. HIPAA readiness applies to the following plans: - - - -* The Watson Studio Professional plan in the Dallas (US South) region -* The Watson Machine Learning Standard plan in the Dallas (US South) region - - - -For other services, you must check the plan page in IBM Cloud for each to determine if it is HIPAA ready and whether you need to reprovision the service after you enable HIPAA support. - -HIPAA support from IBM requires that you agree to the terms of the [Business Associate Addendum (BAA) agreement](https://www.ibm.com/support/customer/csol/terms/?ref=i126-7356-04-12-2019-zz-en) with IBM for your IBM Cloud account. The BAA outlines IBM responsibilities, but also your responsibilities to maintain HIPAA compliance. After you enable HIPAA support in your IBM Cloud account, you cannot disable it. See [IBM Cloud Docs: Enabling the HIPAA Supported setting](https://cloud.ibm.com/docs/account?topic=account-eu-hipaa-supported). - -To enable HIPAA support for your IBM Cloud account: - - - -1. Log in to your IBM Cloud account. -2. Click Manage > Account and then Account settings. -3. In the HIPAA Supported section, click On. -4. Read the BAA and then select Accept and click Submit. - - - -" -581F43AA02D6C6861D2FDF220617CF3FBB903AE5_3,581F43AA02D6C6861D2FDF220617CF3FBB903AE5," IBM's commitment to GDPR - -Learn more about IBM’s own [GDPR readiness journey and our GDPR capabilities](https://www.ibm.com/data-responsibility/gdpr/) and offerings to support your compliance journey. - -" -581F43AA02D6C6861D2FDF220617CF3FBB903AE5_4,581F43AA02D6C6861D2FDF220617CF3FBB903AE5," Content and Data Protection - -The Data Processing and Protection data sheet (Data Sheet) provides information specific to the IBM Cloud Service regarding the type of Content enabled to be processed, the processing activities involved, the data protection features, and specifics on retention and return of Content. Any details or clarifications and terms, including customer responsibilities, around use of the Cloud Service and data protection features, if any, are set forth in this section. There may be more than one Data Sheet applicable to a customer's use of the IBM Cloud Service based upon options selected by customer. The Data Sheet may only be available in English and not available in local languages. Despite any practices of local law or custom, the parties agree that they understand English and it is an appropriate language regarding acquisition and use of the IBM Cloud Services. The following Data Sheets apply to the IBM Cloud Service and its available options. Customer acknowledges that i) IBM may modify Data Sheets from time to time at IBM's sole discretion and ii) such modifications will supersede prior versions. The intent of any modification to Data Sheet(s) will be to - - - -1. improve or clarify existing commitments, -2. maintain alignment to current adopted standards and applicable laws, or -3. provide additional commitments. No modification to Data Sheets will materially degrade the data protection of a IBM Cloud Service. - - - -See the [Learn more](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html?context=cdpaas&locale=enlearn-more) section for links to some of the data sheets that you can view. - -" -581F43AA02D6C6861D2FDF220617CF3FBB903AE5_5,581F43AA02D6C6861D2FDF220617CF3FBB903AE5,"You, the customer, are responsible to take necessary actions to order, enable, or use available data protection features for a IBM Cloud Service and accept responsibility for use of the IBM Cloud Services if you fail to take such actions, including meeting any data protection or other legal requirements regarding Content. [IBM's Data Processing Addendum](http://ibm.com/dpa) (DPA) and DPA Exhibits apply and are referenced in as part of the Agreement, if and to the extent the European General Data Protection Regulation (EU/2016/679) (GDPR) applies to personal data contained in Content. The applicable Data Sheets for this IBM Cloud Service will serve as the DPA Exhibits. If the DPA applies, IBM's obligation to provide notice of changes to Subprocessors and Customer's right to object to such changes will apply as set out in DPA. - -" -581F43AA02D6C6861D2FDF220617CF3FBB903AE5_6,581F43AA02D6C6861D2FDF220617CF3FBB903AE5," GDPR statement that applies to IBM Watson Machine Learning log files - -Disclaimer: Client’s use of the deep learning training process includes the ability to write to the training log files. Personal data must not be written to these training log files as they are accessible to other users within Client’s Enterprise as well as to IBM as necessary to support the Cloud Service. - -Please pay close attention to data privacy principals when selecting a dataset for training data. Processing of PI is governed by vigorous legal requirements and is only allowed if it is based on an explicit legal basis. These regulations mandate that PI is processed only for the purpose it was collected for. No other processing in a manner that is incompatible with this initial purpose is permissible. For these and other constrains these regulations place on your use of PI, we highly recommend that you do not use ""real"" PI in your training dataset unless it is allowed or permissible. You may substitute real PI using test data that is available on the public sphere. - -" -581F43AA02D6C6861D2FDF220617CF3FBB903AE5_7,581F43AA02D6C6861D2FDF220617CF3FBB903AE5," Secure deletion from the IBM Watson Machine Learning service - -Anyone that has personally identifiable information and data (PII) stored as part of using the IBM Watson Machine Learning service, has the right to obtain from the controller the erasure of that data without undue delay. The controller has the obligation to erase personal data without undue delay where one of the following conditions exist: - - - -* There is PII data stored in the IBM Watson Machine Learning service -* User email address and full name are stored as metadata related to the Machine Learning repository assets. -* User provided service credentials. -* Repository asset content, which is usually out of Machine Learning service control and potentially can contain any type of PII data in it. In this case, when users want to track PII data stored in assets, such as a model, they must: - - - -* Get training data reference from the model metadata. -* Scan training data for occurrence of PII data of particular user. -* If such data can be found in the training data set, the model should be considered as potentially holding this data in its content. - - - - - -Repository asset content, such as models, can be securely deleted by performing one of the methods [for permanently deleting personal data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.html?context=cdpaas&locale=enoptions-for-permanently-deleting-personal-data). - -" -581F43AA02D6C6861D2FDF220617CF3FBB903AE5_8,581F43AA02D6C6861D2FDF220617CF3FBB903AE5," Options for permanently deleting personal data - -There are several options that users can choose to delete their personal data permanently: - - - -* Remove the entire IBM Watson Machine Learning service instance from IBM Cloud. This is possible by sending an un-provisioning request via different channels, such as the IBM Cloud UI, CLI, or REST API. -* Use the [Watson Machine Learning REST ](https://cloud.ibm.com/apidocs/machine-learning-cp) to delete models or model deployments. - - - -For the IBM Watson Machine Learning service, personally identifiable information and data is removed completely from all data sources, including backups, after 30 days. - -" -581F43AA02D6C6861D2FDF220617CF3FBB903AE5_9,581F43AA02D6C6861D2FDF220617CF3FBB903AE5," Learn more - - - -* [watsonx terms](https://www.ibm.com/support/customer/csol/terms/?id=i126-9640&lc=endetail-document) -* [IBM Watson Machine Learning terms](http://www.ibm.com/support/customer/csol/terms/?id=i126-6883) -* [IBM Watson Studio terms](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747) -* [IBM Cloud Object Storage terms](https://www.ibm.com/software/sla/sladb.nsf/sla/bm-7857-03) -* [How do I know that my data is safe?](https://cloud.ibm.com/docs/overview?topic=overview-security) -* [Data Security and Privacy Principles for IBM Cloud Services](https://www-03.ibm.com/software/sla/sladb.nsf/pdf/7745WW2/$file/Z126-7745-WW-2_05-2017_en_US.pdf) -* [IBM and GDPR](https://www.ibm.com/data-responsibility/gdpr/) -* [Software Product Compatibility Reports: IBM Watson Studio](https://www.ibm.com/software/reports/compatibility/clarity-reports/report/html/softwareReqsForProduct?deliverableId=95E9BEA0B35711E7A9EB066095601ABB) -* [Software Product Compatibility Reports: IBM Watson Machine Learning](https://www.ibm.com/software/reports/compatibility/clarity-reports/report/html/softwareReqsForProduct?deliverableId=6B5148E0537F11E6865BC3F213DB63F7) -" -581F43AA02D6C6861D2FDF220617CF3FBB903AE5_10,581F43AA02D6C6861D2FDF220617CF3FBB903AE5,"* [Software Product Compatibility Reports: IBM Watson Machine Learning Service](https://www.ibm.com/software/reports/compatibility/clarity-reports/report/html/softwareReqsForProduct?deliverableId=850D9360405711E5B2E4A36A7B0C4479) - - - -Parent topic:[Security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html) -" -B0DA6CD45BFD0D3A91F0B3C4E7615DE23FE4F350_0,B0DA6CD45BFD0D3A91F0B3C4E7615DE23FE4F350," Setting up the Watson Studio and Watson Machine Learning services - -The Watson Studio and Watson Machine Learning services are provisioned automatically with a Lite plan when you sign up for IBM watsonx. To set up Watson Studio and Watson Machine Learning for an organization, you upgrade the service plans. You allow the node IP addresses access through the firewall. - -To set up the Watson Studio and Watson Machine Learning services, complete these tasks: - - - -1. [Upgrade the services](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/set-up-ws.html?context=cdpaas&locale=enupgrade). -2. [Allow IP addresses](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/set-up-ws.html?context=cdpaas&locale=ennode-ips). - - - -" -B0DA6CD45BFD0D3A91F0B3C4E7615DE23FE4F350_1,B0DA6CD45BFD0D3A91F0B3C4E7615DE23FE4F350," Step 1: Upgrade the services to the appropriate plans - -Required roles : You must be the IBM Cloud account Owner or Administrator. - -To upgrade the services: - - - -1. Determine the Watson Studio service plan that you need. The features and compute resources of Watson Studio vary across the service plans. See [Watson Studio service plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html). -2. While logged in to IBM watsonx, from the main menu, click Administration > Services > Service instances. -3. Click the menu next to the Watson Studio service and choose Upgrade service. -4. Choose the plan you want and click Upgrade. -5. Repeat the steps for the Watson Machine Learning service. The resources and number of deployment jobs vary across the Watson Machine Learning service plans. See [Watson Machine Learning service plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html). - - - -Make sure that object storage is configured to allow these users to create catalogs and projects. See [Setting up IBM Cloud Object Storage for use with IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.htmlcos-delegation). - -All users in your IBM Cloud account with the Editor IAM platform access role for all IAM enabled services can now create projects and use all the Watson Studio and Watson Machine Learning tools. - -" -B0DA6CD45BFD0D3A91F0B3C4E7615DE23FE4F350_2,B0DA6CD45BFD0D3A91F0B3C4E7615DE23FE4F350," Step 2: Allow IP addresses for Watson Studio for your region - -The IP addresses for the Watson Studio nodes in each region must be configured as allowed IP addresses for the IBM Cloud account. When allowing specific IP addresses for Watson Studio, you include the CIDR ranges for the Watson Studio nodes in each region to allow a secure connection through the firewall. - -Required roles : You must have the Editor or higher IBM Cloud IAM Platform role to allow IP addresses. - -First look up the CIDR blocks in IBM watsonx, and then enter them into the Access(IAM) > Settings screen in IBM Cloud. Follow these steps: - - - -1. From the IBM watsonx main menu, select Administration > Cloud integrations. -2. Click Firewall configuration to display the IP addresses for the current region. -3. Checkmark Show IP ranges in CIDR notation. -4. Click the icon to copy a CIDR block to the clipboard. -5. Enter the CIDR block of IP addresses into the Access(IAM) > Settings > Restrict IP address access > Allowed IP addresses for the IBM Cloud account. -6. Then click Save. -7. Repeat for each CIDR block until all are entered. -8. Repeat for each region. - - - -For step-by-step instructions, see [IBM Cloud docs: Allowing specific IP addresses](https://cloud.ibm.com/docs/account?topic=account-ips). - -" -B0DA6CD45BFD0D3A91F0B3C4E7615DE23FE4F350_3,B0DA6CD45BFD0D3A91F0B3C4E7615DE23FE4F350," Next steps - -Finish the remaining steps for [setting up the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html). - -Parent topic:[Setting up the platform for administrators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html) -" -91838636DE3442E218FD7BECCDE866113D10DDF3_0,91838636DE3442E218FD7BECCDE866113D10DDF3," Managing users and access - -As the account owner or administrator, you add the people in your organization to the IBM Cloud account and then assign them access permissions using roles that provide access to the services that they need. - -" -91838636DE3442E218FD7BECCDE866113D10DDF3_1,91838636DE3442E218FD7BECCDE866113D10DDF3," User management on IBM Cloud - -People who work in IBM watsonx must have a valid IBMid and be a member of the IBM Cloud account. Alternately, they must have a valid ID in a supported user registry. User management includes adding users to the account and then assigning appropriate roles to provide access to the services and actions that they need. See [Adding users to the account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-addl-users.html). - -" -91838636DE3442E218FD7BECCDE866113D10DDF3_2,91838636DE3442E218FD7BECCDE866113D10DDF3," Access management using IBM Cloud Identity and Access Management (IAM) - -You control the actions that a user can perform for a specific service by assigning permissions with IBM Cloud IAM. You create user access groups containing roles to provide permissions for users. You can also assign roles and permissions to individual users. If necessary, you can create custom roles to satisfy your business requirements. - -" -91838636DE3442E218FD7BECCDE866113D10DDF3_3,91838636DE3442E218FD7BECCDE866113D10DDF3," Learn more - - - -* [Signing up for your organization's watsonx account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.htmlorgacct) -* [Logging in to watsonx.ai through IBM Cloud App ID (beta)](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.htmlappid) -* [IBM Cloud docs: Assigning access to resources by using access groups](https://cloud.ibm.com/docs/account?topic=account-access-getstarted) -* [IBM Cloud docs: Creating custom roles](https://cloud.ibm.com/docs/account?topic=account-custom-roles) -* [IBM Cloud docs: IAM access](https://cloud.ibm.com/docs/account?topic=account-userroles) -* [IBM Cloud docs: What is IBM Cloud Identity and Access Management](https://cloud.ibm.com/docs/account?topic=account-iamoverview) -* [IBM Cloud docs: Setting up access groups](https://cloud.ibm.com/docs/account?topic=account-groups) -* [IBM Cloud docs: Best practices for organizing resources and assigning access](https://cloud.ibm.com/docs/account?topic=account-account_setup&interface=ui) - - - -Parent topic:[Setting up the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html) -" -9FD50170823EF108E2CF4EBF083B0085845FC3BE_0,9FD50170823EF108E2CF4EBF083B0085845FC3BE," Setting up the IBM Cloud account - -As an IBM Cloud account owner or administrator, you sign up for IBM watsonx.ai and set up payment for services in the IBM Cloud account. - -These steps describe the typical tasks for an IBM Cloud account owner to set up the account for an organization: - - - -1. [Sign up for watsonx.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-account.html?context=cdpaas&locale=ensign-up). -2. [Update your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-account.html?context=cdpaas&locale=enpaid-account) to add or update billing information. -3. [(Optional) Configure restrictions for the account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-account.html?context=cdpaas&locale=enrestrict). - - - -" -9FD50170823EF108E2CF4EBF083B0085845FC3BE_1,9FD50170823EF108E2CF4EBF083B0085845FC3BE," Step 1: Sign up for watsonx - -To sign up for watsonx.ai: - - - -1. Go to [Try IBM watsonx.ai](https://dataplatform.cloud.ibm.com/registration/stepone?context=wx) or [Try watsonx.governance](https://dataplatform.cloud.ibm.com/registration/stepone?context=wx&apps=data_science_experience,watson_machine_learning,cos,aiopenscale&uucid=0cf8ca3f38ace12f&utm_content=WXGWW®ions=us-south). -2. Select the service region. -3. Agree to the terms, Data Use Policy, and Cookie Use. -4. Log in with your IBMid (usually an email address) if you have an existing IBM Cloud account. If you don't have an IBM Cloud account, click Create an IBM Cloud account to create a new account. You must enter a credit card to create a Pay-As-You-Go IBM Cloud account. However, you are not charged until you buy paid service plans. - - - -Lite plans for Watson Studio and Watson Machine Learning are automatically provisioned for you. - -" -9FD50170823EF108E2CF4EBF083B0085845FC3BE_2,9FD50170823EF108E2CF4EBF083B0085845FC3BE," Step 2: Update your IBM Cloud account - -You can skip this step if your IBM Cloud account has billing information with a Pay-As-You-Go or a subscription plan. - -You must update your IBM Cloud account in the following circumstances: - - - -* You have a Trial account from signing up for watsonx. -* You have a Trial account that you [registered through an academic institution](https://ibm.biz/academic). -* You have a [Lite account](https://cloud.ibm.com/docs/account?topic=account-accountsliteaccount) that you created before 25 October 2021. -* You want to change a Pay-As-You-Go plan to a subscription plan. - - - -" -9FD50170823EF108E2CF4EBF083B0085845FC3BE_3,9FD50170823EF108E2CF4EBF083B0085845FC3BE," Setting up a Pay-As-You-Go account - -You set up a Pay-As-You-Go by adding a credit card number and billing information. You pay only for billable services that you use, with no long-term contracts or commitments. You can provision paid plans for all services in the IBM Cloud services catalog, including plans in the watsonx services catalog. - -To set up a Pay-As-You-Go account: - - - -1. From the watsonx navigation menu, select Administration > Account and billing > Account. -2. Click Manage in IBM Cloud. -3. Log in to IBM Cloud. -4. Select Account settings. -5. Click Add credit card and enter your credit card and billing information. -6. Click Create account to submit your information. - - - -After your payment information is processed, your account is upgraded and you receive a monthly invoice for billable resource usage or instance fees. - -" -9FD50170823EF108E2CF4EBF083B0085845FC3BE_4,9FD50170823EF108E2CF4EBF083B0085845FC3BE," Setting up a subscription account - -With subscriptions, you commit to a minimum spending amount for a certain period and receive a discount on the overall cost. Subscriptions are limited to service plans in the watsonx catalog. - -Subscription credits are activated using a unique code that you receive by email. To activate the subscription, you apply the subscription code to an account. Be careful when selecting the account, because after you apply the subscription to an account, you can't undo it. - -To set up a watsonx subscription: - - - -1. From the watsonx navigation menu, select Administration > Account and billing > Upgrade service plans. -2. On the Upgrade service plans page, click Contact sales. - - - -Complete and submit the form to communicate with IBM Sales that you want to set up a subscription account for watsonx. An associate from IBM Sales will contact you to set up a subscription. When your subscription is ready, you receive an email from IBM containing a unique subscription code. - -To apply the subscription code to your account: - - - -1. Locate the unique code from the email that you received from IBM. -2. Log in to your IBM Cloud account, and select Manage > Account from the header. Be sure to select the correct account. -3. Select Account settings and locate the Subscription and feature codes section on the page. -4. Click Apply code. -5. Copy and paste the code from the email into the Apply a code field and click Apply. - - - -Your subscription account is active and you can upgrade your watsonx.ai services. - -" -9FD50170823EF108E2CF4EBF083B0085845FC3BE_5,9FD50170823EF108E2CF4EBF083B0085845FC3BE," Step 3: (Optional) Configure restrictions for the account - -Complete these optional tasks to secure your account: - - - -* Restrict the scope of resources that are available in IBM watsonx to the current account. See [Set the scope of resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.htmlset-the-scope-for-resources). -* Restrict access to specific IP addresses to protect the IBM Cloud account from unwanted access from unknown IP addresses. See [Allow specific IP addresses](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.htmlallow-specific-ip-addresses). - - - -" -9FD50170823EF108E2CF4EBF083B0085845FC3BE_6,9FD50170823EF108E2CF4EBF083B0085845FC3BE," Next steps - - - -* [Add users to the account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-addl-users.html) -* [Add more security constraints](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.html) - - - -Parent topic:[Setting up the platform for administrators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html) -" -D0DECE55336BC8393593243D829B9D4B1E6159FD_0,D0DECE55336BC8393593243D829B9D4B1E6159FD," Add users to the account - -As an Administrator, you add the people in your organization who need access to IBM watsonx to the IBM Cloud account and then assign them the appropriate roles for their tasks. - - - -1. [Add nonadministrative users](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-addl-users.html?context=cdpaas&locale=enusers) to the IBM Cloud account and assign access groups or roles so that they can work in IBM watsonx. The new users receive an email invitation to join the account. They must accept the invitation to be added to the account. -2. Set up access groups to simplify permissions and role assignment. -3. Optional: [Add administrative users](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-addl-users.html?context=cdpaas&locale=enadminuser) to the IBM Cloud account. - - - -" -D0DECE55336BC8393593243D829B9D4B1E6159FD_1,D0DECE55336BC8393593243D829B9D4B1E6159FD," Add nonadministrative users to your IBM Cloud account - -You invite users to your IBM Cloud account by sending an email invitation. The user accepts the invitation to join the account. You must assign them roles (or access groups) to provide the necessary permissions to work in IBM watsonx. For a baseline role assignment, you can provide minimum permissions by assigning the following roles in the Manage > Access(IAM) > Users > Invite users > Access policy screen in IBM Cloud: - - - -Table 1. Minimum roles for new IBM watsonx users - - Level Role Description - - Service All Identity and Access enabled services Can access all services that use IAM for access management; usually assigned only to administrators in a production environment - Resources All resources Scope of resources for which user has access - Resource group access Viewer Can view but not modify resource groups - Service access Reader Can perform read-only actions within a service - Platform access Viewer Can view but not modify service instances - - - -" -D0DECE55336BC8393593243D829B9D4B1E6159FD_2,D0DECE55336BC8393593243D829B9D4B1E6159FD," IBM account membership - -To be authorized for IBM watsonx, users must have existing IBMids. If the invited user does not have an IBMid, it is created for them when they join the account. - -" -D0DECE55336BC8393593243D829B9D4B1E6159FD_3,D0DECE55336BC8393593243D829B9D4B1E6159FD," Assigning roles - -To assign minimum permissions to individual users: - - - -1. From IBM watsonx, click Administration > Access (IAM) to open the Manage access and users page for your IBM Cloud account. -2. Click Users > Invite users+. -3. Enter one or more email addresses that are separated by commas, spaces, or line breaks. The limit is 100 email addresses. The settings apply to all the email addresses. -4. Click the Access policy tile. -5. Select All Identity and Access enabled services, then click Next to assign Resource access. -6. For Resources, choose All resources. Click Next. -7. For Resource group access, choose Viewer. Click Next -8. For Roles and action, choose the following minimum permissions: - - - -* In the Service access section, select Reader -* In the Platform access section, select Viewer. - - - -9. Review the settings and edit if necessary. -10. Click Add to save the policy. -11. Click Invite to send an email invitation to each email address. The policies are assigned to the users when they accept the invitation to join the account. - - - -" -D0DECE55336BC8393593243D829B9D4B1E6159FD_4,D0DECE55336BC8393593243D829B9D4B1E6159FD," Modifying a user's role - -When you change a user's role, their access to services changes. Their ability to complete work in IBM watsonx can be impacted if they do not have the necessary access. - -" -D0DECE55336BC8393593243D829B9D4B1E6159FD_5,D0DECE55336BC8393593243D829B9D4B1E6159FD," Optional: Add administrative users to your IBM Cloud account - -You can add administrative users with the Administrator role for account management. This role also provides the Manager role for all services in the account. - -To add a user as an IBM Cloud account administrator: - - - -1. Follow the steps to [add a non-administrative user](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-addl-users.html?context=cdpaas&locale=enusers), except change these settings for an individual user's roles: - - - -* In the Service access section, select Manager. -* In the Platform access section, select Administrator. - - - -2. Alternatively, create an access group containing these roles and assign the user to the access group. -3. Click Invite. The new users receive an email invitation to join the account. They must accept the invitation to be added to the account. -4. After the user joins the account, add account management permissions. Click the user's name, then Access > Assign access under Access policies. -5. For the service to assign access to, choose All Account Management Services. -6. Next, in the Platform access section, select Administrator and click Add. -7. Click Assign. - - - -" -D0DECE55336BC8393593243D829B9D4B1E6159FD_6,D0DECE55336BC8393593243D829B9D4B1E6159FD," Next steps - - - -* Finish [setting up the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html). -* [Upgrade your service instances](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/upgrade.htmlapp) to billable plans. - - - -" -D0DECE55336BC8393593243D829B9D4B1E6159FD_7,D0DECE55336BC8393593243D829B9D4B1E6159FD," Learn more - - - -* [Roles in IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html) -* [IBM Cloud docs: Account types](https://cloud.ibm.com/docs/account?topic=account-accounts) -* [IBM Cloud docs: IAM access](https://cloud.ibm.com/docs/account?topic=account-userroles) -* [IBM Cloud docs: What is IBM Cloud Identity and Access Management](https://cloud.ibm.com/docs/account?topic=account-iamoverview) -* [IBM Cloud docs: Setting up access groups](https://cloud.ibm.com/docs/account?topic=account-groups) -* [IBM Cloud docs: Giving access to resources in resource groups](https://cloud.ibm.com/docs/account?topic=account-rgs_manage_access) - - - -Parent topic:[Managing users and access](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-access.html) -" -27DB2218237B89F557D3702F4270288E4460E9CB_0,27DB2218237B89F557D3702F4270288E4460E9CB," Setting up the IBM watsonx platform for administrators - -To set up the watsonx platform for your organization, sign up for IBM watsonx.ai, upgrade to a paid plan, set up the services that you need, and add your users with the appropriate permissions. - -IBM watsonx.ai on the watsonx platform includes cloud-based services that provide data preparation, data science, and AI modeling capabilities. The watsonx platform is protected by the same powerful security constraints that are available on IBM Cloud. - - - -Table 1. Configuration steps for IBM watsonx - - Task Location Required Role Description - - [Set up the IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-account.html) IBM Cloud Account Owner Set up a paid account. - [Manage users and access](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-access.html) IBM Cloud Administrator Invite users to join the account, create user access groups, and assign roles or access groups to users to provide access. - [Set up IBM Cloud Object Storage for use with IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html) IBM Cloud and IBM watsonx Administrator Create a test project to initialize IBM Cloud Object Storage and set the location to Global in each user's profile. - [Set up the Watson Studio and Watson Machine Learning services](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/set-up-ws.html) IBM Cloud and IBM watsonx Administrator Upgrade to a paid plan. - [Create the Platform assets catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/platform-assets.html) IBM watsonx Administrator or Manager role for the Cloud Pak for Data service Add connections to the platform assets catalog for use by collaborators. -" -27DB2218237B89F557D3702F4270288E4460E9CB_1,27DB2218237B89F557D3702F4270288E4460E9CB," [Set up watsonx.governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-setup-wos.html) IBM Cloud and IBM watsonx Administrator or Editor Create access policies and assign roles to users. - [Configure firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall_ovrvw.html) (if necessary) IBM watsonx and cloud provider firewall configuration Administrator Configure inbound access through a firewall. - Optional. [Configure security mechanisms](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html) IBM Cloud Administrator IBM watsonx has five security levels to ensure that data, application endpoints, and identity are protected. For a list of common security mechanisms, see [Common security mechanisms](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html?context=cdpaas&locale=ensecurity). - Optional. [Connect to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html) IBM Cloud Administrator Securely connect to databases that are hosted behind a firewall. - Optional. [Configure integrations with other cloud platforms](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-cloud.html) IBM Cloud and IBM watsonx Administrator Connect to services on other cloud platforms. - - - -" -27DB2218237B89F557D3702F4270288E4460E9CB_2,27DB2218237B89F557D3702F4270288E4460E9CB," Common security mechanisms - -As an IBM Cloud account owner or administrator, you set up security for the account by providing single sign-on, IAM role-based access control, secure communication, and other security constraints. - -Following are common security mechanisms for the IBM watsonx platform: - - - -* Encrypt your instance with your own key. See [Encrypt your IBM Cloud Object Storage instance with your own key](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.htmlbyok). -* Use IBM Key Protect to encrypt key data assets in Cloud Object Storage. See [Encrypting at rest data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.htmlencrypting-at-rest-data). -* Support single sign-on using SAML federation or Active Directory. See [SSO with Federated IDs](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-account.htmlsso-with-federated-ids). -* Configure secure connections to databases that are behind a firewall. See [Connecting to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html) -* Configure secure communication between services with Service Endpoints. See [Private network service endpoints](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.htmlprivate-network-service-endpoints). -* Control access at the IP address level. See [Allow specific IP addresses](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.htmlallow-specific-ip-addresses). -" -27DB2218237B89F557D3702F4270288E4460E9CB_3,27DB2218237B89F557D3702F4270288E4460E9CB,"* Require personal credentials when creating connections. The default setting is shared credentials. See [Managing your account settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.htmlset-the-credentials-for-connections). - - - -" -27DB2218237B89F557D3702F4270288E4460E9CB_4,27DB2218237B89F557D3702F4270288E4460E9CB," Learn more - - - -* HIPAA readiness is available for some regions and plans. See [HIPAA readiness](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.htmlhipaa). -* See [Security for IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html) for a complete list of security constraints available in IBM watsonx. -* See [Overview of watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html) to understand the architecture of the platform. - - - -Parent topic:[Getting started](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-wdp.html) -" -323D19F0757433758D1743B0A62DACC98D286EC5_0,323D19F0757433758D1743B0A62DACC98D286EC5," Signing up for IBM watsonx as a Service - -IBM watsonx as a Service contains two components: watsonx.ai and watsonx.governance. You can sign up for a personal version of either watsonx.ai or watsonx.governance at no initial cost, or sign up through an email invitation to join your organization's account. Watsonx.ai provides all the tools that you need to work with foundation models and machine learning models. Watsonx.governance provides the tools that you need to govern models. - -After you sign up for the watsonx.ai, you can add the watsonx.governance component from the services catalog. If you sign up for watsonx.governance, watsonx.ai is included automatically. - - - -* [Signing up for a personal account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html?context=cdpaas&locale=enpersonal) -* [Signing up for your organization's account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html?context=cdpaas&locale=enorgacct) -* [Switching to your organization's account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html?context=cdpaas&locale=enswitching) -* [Logging in using IBM App ID (beta)](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html?context=cdpaas&locale=enappid) - - - -" -323D19F0757433758D1743B0A62DACC98D286EC5_1,323D19F0757433758D1743B0A62DACC98D286EC5," Signing up for a personal account - -When you sign up for watsonx.ai or watsonx.governance, you need an IBMid for an IBM Cloud account. If you don't already have an IBMid, you can create one while you sign up for watsonx.ai or watsonx.governance. For your IBM Cloud account, you enter your email address, personal information, and credit card information, which is used to verify your identity. You are charged only if you upgrade to a billable plan and then consume billable services. Lite plans do not incur charges. - -The free version of watsonx.ai contains Lite plans for the IBM Watson Studio and Watson Machine Learning services that provide the tools for working with foundation models and machine learning models. The free version of watsonx.governance contains the watsonx.ai services plus a Lite plan for the watsonx.governance service that provides the tools for governing models. The Cloud Object Storage service is also included to provide storage. - -To sign up for watsonx: - - - -1. Go to [Try IBM watsonx.ai](https://dataplatform.cloud.ibm.com/registration/stepone?context=wx&apps=data_science_experience,watson_data_platform,cos&uucid=0b526de8c1c419db&utm_content=WXAWW) or [Try watsonx.governance](https://dataplatform.cloud.ibm.com/registration/stepone?context=wx&apps=data_science_experience,watson_machine_learning,cos,aiopenscale&uucid=0cf8ca3f38ace12f&utm_content=WXGWW®ions=us-south). -2. Select the IBM Cloud service region. You can select the Dallas or Frankfurt region. -3. Enter your IBM Cloud account username and password. If you don't have an IBM Cloud account, [create one](https://cloud.ibm.com/registration). -" -323D19F0757433758D1743B0A62DACC98D286EC5_2,323D19F0757433758D1743B0A62DACC98D286EC5,"4. If you see the Select account screen, select the account and resource group where you want to use watsonx. If you belong to an account with existing services, you can select it instead of your account. The Select account screen does not display if you have only one account and resource group. -5. Click Continue. The account activation process begins. - - - -Note:Stay with your default browser during the activation process. If you land on the IBM Cloud Dashboard, return to the [Try IBM watsonx.ai](https://dataplatform.cloud.ibm.com/registration/stepone?context=wx){: new_window} page or the [Try watsonx.governance](https://dataplatform.cloud.ibm.com/registration/stepone?context=wx&apps=data_science_experience,watson_machine_learning,cos,aiopenscale&uucid=0cf8ca3f38ace12f&utm_content=WXGWW®ions=us-south){: external} page and follow the link to log in with an existing account. - -After the activation process completes, your watsonx home page is shown. - -Bookmark your home page so that you can go directly to the watsonx site for your region to log in with your personal credentials. - -If you're in your own account, you have the necessary permissions for complete access to projects and deployment spaces. You can access another account by [switching to that account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html?context=cdpaas&locale=enswitching). - -To set up an account for your organization, so that other users can share services and resources, see [Set up an account for your organization](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-account.html). - -" -323D19F0757433758D1743B0A62DACC98D286EC5_3,323D19F0757433758D1743B0A62DACC98D286EC5," Signing up for your organization's account - -Before you can access your organization's watsonx account, you must be a member of your organization's IBM Cloud account. Account administrators can invite users to join their organization's IBM Cloud account. - -The administrator provides the following information: - - - -* The IBM Cloud account name for watsonx. -* The resource group name for the watsonx account. -* The IBM Cloud service region. - - - -When the account administrator invites you, you receive an email from IBM Cloud with the title ""You are invited to join an account in IBM Cloud."" with the name of the account. - -To join your organization's account: - - - -1. Click the Join now link. The expiry is 30 days. -2. You are asked to log in with your IBMid. IBMids are assigned to IBM Cloud account members. If you don't have an IBMid, one is created for you when you join. -3. Continue to the next screen and confirm that your information is correct, then accept the invite. -4. Login in from the Welcome screen. You are now logged in to the IBM Cloud account. -5. Go to [Try IBM watsonx.ai](https://dataplatform.cloud.ibm.com/registration/stepone?context=wx&apps=data_science_experience,watson_data_platform,cos&uucid=0b526de8c1c419db&utm_content=WXAWW) or [Try watsonx.governance](https://dataplatform.cloud.ibm.com/registration/stepone?context=wx&apps=data_science_experience,watson_machine_learning,cos,aiopenscale&uucid=0cf8ca3f38ace12f&utm_content=WXGWW®ions=us-south). -6. Follow the prompts to sign up with your IBMid. -7. On the Select account screen, select your organization's account and resource group. -8. Click Continue. - - - -You can see the name of the account you are currently working in on the menu bar. - -![Account name](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/images/ent-accountname.png) - -" -323D19F0757433758D1743B0A62DACC98D286EC5_4,323D19F0757433758D1743B0A62DACC98D286EC5," Switching to your organization's account - -You can switch to your organization's existing IBM Cloud account (or any other account for which you are a member) to share watsonx resources that are provisioned for that account. - -If you are not already an account member, the account administrator must invite you to the IBM Cloud account. You receive an email invitation to join the account. After you accept the invitation, you can access the account and watsonx. - -To switch to your organization's account: - - - -1. Log in to watsonx with your personal credentials. -2. Select your organization's account name from the account list on the page header. If you don't see the account list, click the Account Switcher![Account Switcher icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/account_switcher.png) to open it. - - - -To switch regions: - - - -1. Select the region from the region list on the page header. If you don't see the region list, click the Region Switcher![Region Switcher icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/region_switcher.png) to open it. - - - -" -323D19F0757433758D1743B0A62DACC98D286EC5_5,323D19F0757433758D1743B0A62DACC98D286EC5," Logging in to watsonx through IBM Cloud App ID (beta) - -IBM Cloud App ID integrates user authentication on IBM Cloud with user registries that are hosted on other identity providers. If App ID is configured for your IBM Cloud account, your administrator provides an alias to log in to watsonx. With App ID, you do not need to sign in to IBM Cloud. Instead, you log in to watsonx with the App ID alias. - -You cannot switch accounts when you log in through App ID. - -To log in with App ID: - - - -1. Go to watsonx and choose to log in with App ID (Beta). -2. Enter the alias that was [provided to you by your administrator](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid.html). You are redirected to your company's login page. -3. Enter your company credentials on your company's login page. You are redirected back to watsonx. - - - -Select the Remember App ID checkbox to save the App ID alias for future logins. - -" -323D19F0757433758D1743B0A62DACC98D286EC5_6,323D19F0757433758D1743B0A62DACC98D286EC5," Next steps - - - -* Go back to [Get started](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-wdp.html) and choose the right path for you. -* Add services from the services catalog. See [Creating and managing services](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/create-services.html). - - - -" -323D19F0757433758D1743B0A62DACC98D286EC5_7,323D19F0757433758D1743B0A62DACC98D286EC5," Learn more - - - -* [Get help](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-help.html) -* [Browser support](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/browser-support.html) -* [Setting up IBM Cloud App ID (beta)](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid.html) - - - -Parent topic:[Getting started](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-wdp.html) -" -E334A64775AE571C661CDCC847669F0E20C207FF_0,E334A64775AE571C661CDCC847669F0E20C207FF," Video library - -Watch short videos for data scientists, data engineers, and data stewards to learn about watsonx. The videos and accompanying tutorials are task-focused and provide hands-on experience by using the tools in watsonx. - -Note: These videos provides a visual method to learn the concepts and tasks in this documentation. If you are having difficulty viewing any of the videos on this page, visit the [Video playlists](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx-docs.html) page. - -" -E334A64775AE571C661CDCC847669F0E20C207FF_1,E334A64775AE571C661CDCC847669F0E20C207FF," First watch the IBM watsonx.ai overview video. - -![Watch Video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) Select any video from the lists below to watch here. - -" -E334A64775AE571C661CDCC847669F0E20C207FF_2,E334A64775AE571C661CDCC847669F0E20C207FF," Quick start - -IBM watsonx.ai overview - - - -* Classify text -* Summarize large, complex documents -* Generate content -* Extract text from complex documents - - - -Get started - - - -* Create a project -* Collaborate in projects -* Tour the samples collection -* Load and analyze public data sets - - - -Work with data - - - -* Prepare data with Data Refinery - -* Generate synthetic tabular data - -* Analyze data in a Jupyter notebook - - - -IBM watsonx.governance - - - -* Track a model in an AI use case - -* Evaluate a prompt template - -* Track a prompt template - - - -Work with foundation models - - - -* Prompt a foundation model using Prompt Lab -* Prompt tips: Get started prompting foundation models -* Introduction to the retrieval-augmented generation pattern -* Tune a foundation model - - - -Build models - - - -* Build and deploy a model with AutoAI - -* Build and deploy a model in a Jupyter notebook - -* Build and deploy a model with SPSS Modeler -" -5287EF2A9E06AAEC6A8FE651FEBA2D46D2F07502_0,5287EF2A9E06AAEC6A8FE651FEBA2D46D2F07502," Visualizations of assets - -In your project, you can create visualizations of data assets to further explore and discover insights. To create and view visualizations, open a data asset and go to the Visualization tab. - -" -5287EF2A9E06AAEC6A8FE651FEBA2D46D2F07502_1,5287EF2A9E06AAEC6A8FE651FEBA2D46D2F07502," Requirements and restrictions - -You can view the visualization of assets under the following circumstances. - - - -* Required permissions -To view this page, you can have any role in a project. To edit or update information on this page, you must have the Editor or Admin role. -* Workspaces -You can view the asset visualization in projects. -* Types of assets -These types of assets create a visualization: - - - -* Data asset from file: Avro, CSV, JSON, Parquet, TSV, SAV, Microsoft Excel .xls and .xlsx files, SAS, delimited text files -* Connected data assets - - - - - - - -* Collaboration -Visualization assets created by a user can be viewed or edited by other collaborators of the same project, depending on the assigned permissions. - - - -" -5287EF2A9E06AAEC6A8FE651FEBA2D46D2F07502_2,5287EF2A9E06AAEC6A8FE651FEBA2D46D2F07502," Learn more - - - -* [Visualizing your data](https://dataplatform.cloud.ibm.com/docs/content/dataview/idh_idc_cg_help_main.html) - - - -Parent topic:[Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html) -" -438B5ABAC4D30492C2F192EA551E9514DF877831_0,438B5ABAC4D30492C2F192EA551E9514DF877831," IBM watsonx APIs - -You can perform many of the tasks for watsonx with APIs. - -" -438B5ABAC4D30492C2F192EA551E9514DF877831_1,438B5ABAC4D30492C2F192EA551E9514DF877831," APIs for managing assets - -You can use a collection of REST APIs to manage data-related assets and the people who need to use these assets. See [Watson Data API](http://ibm.biz/wdp-api). - -" -438B5ABAC4D30492C2F192EA551E9514DF877831_2,438B5ABAC4D30492C2F192EA551E9514DF877831," Connections in the Watson Data API - -Use the Watson Data API to create a connection in a catalog or project. See [Connections in the Watson Data API](https://cloud.ibm.com/apidocs/watson-data-apiconnections). - -" -438B5ABAC4D30492C2F192EA551E9514DF877831_3,438B5ABAC4D30492C2F192EA551E9514DF877831," Python library for foundation models - -For the full library reference, see [Foundation models Python library](https://ibm.github.io/watson-machine-learning-sdk/foundation_models.html). - -For examples of how to use the foundation models Python library, see [Foundation models Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html). - -" -438B5ABAC4D30492C2F192EA551E9514DF877831_4,438B5ABAC4D30492C2F192EA551E9514DF877831," APIs for machine learning - -Watson Machine Learning allows for managing spaces, deployments, and assets programmatically by using: - - - -* [REST API](https://cloud.ibm.com/apidocs/machine-learning) -* [Python client library](https://ibm.github.io/watson-machine-learning-sdk/) - - - -For links to sample Jupyter Notebooks that demonstrate how to manage spaces, deployments, and assets programmatically, see [Machine Learning Python client samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html). - -" -438B5ABAC4D30492C2F192EA551E9514DF877831_5,438B5ABAC4D30492C2F192EA551E9514DF877831," APIs for factsheets - -AI Factsheets allows managing settings, model entries, and report templates programmatically by using: - - - -* [REST API](https://cloud.ibm.com/apidocs/factsheets) -* [Python client library](https://s3.us.cloud-object-storage.appdomain.cloud/factsheets-client/index.htmlfactsheet-asset-elements) - - - -" -438B5ABAC4D30492C2F192EA551E9514DF877831_6,438B5ABAC4D30492C2F192EA551E9514DF877831," Learn more - - - -* [Watson Data API](https://cloud.ibm.com/apidocs/watson-data-api) -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_0,5BC1631D896899D03E7D8DD2296C21656DD169FF," What's new - -Check back each week to learn about new features and updates for IBM watsonx.ai. - -Tip: Occasionally, you must take a specific action after an update. To see all required actions, search this page for “Action required”. - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_1,5BC1631D896899D03E7D8DD2296C21656DD169FF," Week ending 15 December 2023 - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_2,5BC1631D896899D03E7D8DD2296C21656DD169FF," Create user API keys for jobs and other operations - -15 Dec 2023 - -Certain runtime operations in IBM watsonx, such as jobs and model training, require an API key as a credential for secure authorization. With user API keys, you can now generate and rotate an API key directly in IBM watsonx as needed to help ensure your operations run smoothly. The API keys are managed in IBM Cloud, but you can conveniently create and rotate them in IBM watsonx. - -The user API key is account-specific and is created from Profile and settings under your account profile. - -For more information, see [Managing the user API key](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-apikeys.html). - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_3,5BC1631D896899D03E7D8DD2296C21656DD169FF," New watsonx tutorials and videos - -15 Dec 2023 - -Try the new watsonx.governance and watsonx.ai tutorials to help you learn how to tune a foundation model, and evaluate and track a prompt template. - - - -New tutorials - - Tutorial Description Expertise for tutorial - - [Tune a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html) Tune a foundation model to enhance model performance. Use the Tuning Studio to tune a model without coding.

Intermediate

No code - [Evaluate and track a prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-evaluate-prompt.html) Evaluate a prompt template to measure the performance of foundation model and track the prompt template through its lifecycle. Use the evaluation tool and an AI use case to track the prompt template.

Beginner

No code - - - -![Watch a video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/lc-video.png) Find more watsonx.governance and watsonx.ai videos in the [Video library](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/videos-wx.html). - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_4,5BC1631D896899D03E7D8DD2296C21656DD169FF," New login session expiration and sign out due to inactivity - -15 Dec 2023 - -You are now signed out of IBM Cloud due to session expiration. Your session can expire due to login session expiration (24 hours by default) or inactivity (2 hours by default). You can change the default durations in the Access (IAM) settings in IBM Cloud. For more information, see [Set the login session expiration](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.htmlset-expiration). - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_5,5BC1631D896899D03E7D8DD2296C21656DD169FF," IBM Cloud Databases for DataStax connector is deprecated - -15 Dec 2023 - -The IBM Cloud Databases for DataStax connector is deprecated and will be discontinued in a future release. - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_6,5BC1631D896899D03E7D8DD2296C21656DD169FF," Week ending 08 December 2023 - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_7,5BC1631D896899D03E7D8DD2296C21656DD169FF," The Tuning Studio is available - -7 Dec 2023 - -The Tuning Studio helps you to guide a foundation model to return useful output. With the Tuning Studio, you can prompt tune the flan-t5-xl-3b foundation model to improve its performance on natural language processing tasks such as classification, summarization, and generation. Prompt tuning helps smaller, more computationally-efficient foundation models achieve results comparable to larger models in the same model family. By tuning and deploying a tuned version of a smaller model, you can reduce long-term inference costs. The Tuning Studio is available to users of paid plans in the Dallas region. - - - -* For more information, see [Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html). -* To get started, see [Quick start: Tune a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html). -* To run a sample notebook, go to [Tune a model to classify CFPB documents in watsonx](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/bf57e8896f3e50c638b5a378780f7502). - - - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_8,5BC1631D896899D03E7D8DD2296C21656DD169FF," New client properties in Db2 connections for workload management - -08 Dec 2023 - -You can now specify properties in the following fields for monitoring purposes: Application name, Client accounting information, Client hostname, and Client user. These fields are optional and are available for the following connections: - - - -* [IBM Db2](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2.html) -* [IBM Db2 for z/OS](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2zos.html) -* [IBM Db2 Warehouse](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-wh.html) -* [IBM Watson Query](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-data-virtual.html) - - - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_9,5BC1631D896899D03E7D8DD2296C21656DD169FF," Week ending 1 December 2023 - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_10,5BC1631D896899D03E7D8DD2296C21656DD169FF," Watsonx.governance is available! - -1 Dec 2023 - -Watsonx.governance extends the governance capabilities of Watson OpenScale to evaluate foundation model assets as well as machine learning assets. For example, evaluate foundation model prompt templates for dimensions such as accuracy or to detect the presence of hateful and abusive speech. You can also define AI use cases to address business problems, then track prompt templates or model data in factsheets to support compliance and governance goals. Watsonx.governance plans and features are available only in the Dallas region. - - - -* To view plan details, see [watsonx.governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-plan-options.html) plans. -* For details on governance features, see [watsonx.governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-overview.html). -* To get started, see [Provisioning and launching watsonx.governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-provision-launch.html). - - - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_11,5BC1631D896899D03E7D8DD2296C21656DD169FF," Explore with the AI risk atlas - -1 Dec 2023 - -You can now explore some of the risks of working with generative AI, foundation models, and machine learning models. Read about risks for privacy, fairness, explainability, value alignment, and other areas. See [AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html). - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_12,5BC1631D896899D03E7D8DD2296C21656DD169FF," New versions of the IBM Granite models are available - -30 Nov 2023 - -The latest versions of the Granite models include these changes: - -granite-13b-chat-v2: Tuned to be better at question-answering, summarization, and generative tasks. With sufficient context, generates responses with the following improvements over the previous version: - - - -* Generates longer, higher-quality responses with a professional tone -* Supports chain-of-thought responses -* Recognizes mentions of people and can detect tone and sentiment better -* Handles white spaces in input more gracefully - - - -Due to extensive changes, test and revise any prompts that were engineered for v1 before you switch to the latest version. - -granite-13b-instruct-v2: Tuned specifically for classification, extraction, and summarization tasks. The latest version differs from the previous version in the following ways: - - - -* Returns more coherent answers of varied lengths and with a diverse vocabulary -* Recognizes mentions of people and can summarize longer inputs -* Handles white spaces in input more gracefully - - - -Engineered prompts that work well with v1 are likely to work well with v2 also, but be sure to test before you switch models. - -The latest versions of the Granite models are categorized as Class 2 models. - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_13,5BC1631D896899D03E7D8DD2296C21656DD169FF," Some foundation models are now available at lower cost - -30 Nov 2023 - -Some popular foundation models were recategorized into lower-cost billing classes. - -The following foundation models changed from Class 3 to Class 2: - - - -* granite-13b-chat-v1 -* granite-13b-instruct-v1 -* llama-2-70b - - - -The following foundation model changed from Class 2 to Class 1: - - - -* llama-2-13b - - - -For more information about the billing classes, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html). - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_14,5BC1631D896899D03E7D8DD2296C21656DD169FF," A new sample notebook is available: Introduction to RAG with Discovery - -30 Nov 2023 - -Use the Introduction to RAG with Discovery notebook to learn how to apply the retrieval-augmented generation pattern in IBM watsonx.ai with IBM Watson Discovery as the search component. For more information, see [Introduction to RAG with Discovery](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/ba4a9e35-2091-49d3-9364-a1284afab7ec). - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_15,5BC1631D896899D03E7D8DD2296C21656DD169FF," Understand feature differences between watsonx as a service and software deployments - -30 Nov 2023 - -You can now compare the features and implementation of IBM watsonx as a Service and watsonx on Cloud Pak for Data software, version 4.8. See [Feature differences between watsonx deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/feature-matrix.html). - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_16,5BC1631D896899D03E7D8DD2296C21656DD169FF," Change to how stop sequences are handled - -30 Nov 2023 - -When a stop sequence, such as a newline character, is specified in the Prompt Lab, the model output text ends after the first occurrence of the stop sequence. The model output stops even if the occurrence comes at the beginning of the output. Previously, the stop sequence was ignored if it was specified at the start of the model output. - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_17,5BC1631D896899D03E7D8DD2296C21656DD169FF," Week ending 10 November 2023 - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_18,5BC1631D896899D03E7D8DD2296C21656DD169FF," A smaller version of the Llama-2 Chat model is available - -9 Nov 2023 - -You can now choose between using the 13b or 70b versions of the Llama-2 Chat model. Consider these factors when you make your choice: - - - -* Cost -* Performance - - - -The 13b version is a Class 2 model, which means it is cheaper to use than the 70b version. To compare benchmarks and other factors, such as carbon emissions for each model size, see the [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/meta-llama/llama-2-13b-chat?context=wx). - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_19,5BC1631D896899D03E7D8DD2296C21656DD169FF," Use prompt variables to build reusable prompts - -Add flexibility to your prompts with prompt variables. Prompt variables function as placeholders in the static text of your prompt input that you can replace with text dynamically at inference time. You can save prompt variable names and default values in a prompt template asset to reuse yourself or share with collaborators in your project. For more information, see [Building reusable prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html). - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_20,5BC1631D896899D03E7D8DD2296C21656DD169FF," Announcing support for Python 3.10 and R4.2 frameworks and software specifications on runtime 23.1 - -9 Nov 2023 - -Action required - -You can now use IBM Runtime 23.1, which includes the latest data science frameworks based on Python 3.10 and R 4.2, to run Watson Studio Jupyter notebooks and R scripts, train models, and run Watson Machine Learning deployments. Update your assets and deployments to use IBM Runtime 23.1 frameworks and software specifications. - - - -* For information on the IBM Runtime 23.1 release and the included environments for Python 3.10 and R 4.2, see [Changing notebook environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.htmlchange-env). -* For details on deployment frameworks, see [Managing frameworks and software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-frame-and-specs.html). - - - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_21,5BC1631D896899D03E7D8DD2296C21656DD169FF," Use Apache Spark 3.4 to run notebooks and scripts - -Spark 3.4 with Python 3.10 and R 4.2 is now supported as a runtime for notebooks and RStudio scripts in projects. For details on available notebook environments, see [Compute resource options for the notebook editor in projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html) and [Compute resource options for RStudio in projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html). - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_22,5BC1631D896899D03E7D8DD2296C21656DD169FF," Week ending 27 October 2023 - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_23,5BC1631D896899D03E7D8DD2296C21656DD169FF," Use a Satellite Connector to connect to an on-prem database - -26 Oct 2023 - -Use the new Satellite Connector to connect to a database that is not accessible via the internet (for example, behind a firewall). Satellite Connector uses a lightweight Docker-based communication that creates secure and auditable communications from your on-prem environment back to IBM Cloud. For instructions, see [Connecting to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_24,5BC1631D896899D03E7D8DD2296C21656DD169FF," Secure Gateway is deprecated - -26 Oct 2023 - -IBM Cloud announced the deprecation of Secure Gateway. For information, see the [Overview and timeline](https://cloud.ibm.com/docs/SecureGateway?topic=SecureGateway-dep-overview). - -Action required - -If you currently have connections that are set up with Secure Gateway, plan to use an alternative communication method. In IBM watsonx, you can use the Satellite Connector as a replacement for Secure Gateway. See [Connecting to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_25,5BC1631D896899D03E7D8DD2296C21656DD169FF," Week ending 20 October 2023 - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_26,5BC1631D896899D03E7D8DD2296C21656DD169FF," Maximum token sizes increased - -16 Oct 2023 - -Limits that were previously applied to the maximum number of tokens allowed in the output from foundation models are removed from paid plans. You can use larger maximum token values during prompt engineering from both the Prompt Lab and the Python library. The exact number of tokens allowed differs by model. For more information about token limits for paid and Lite plans, see [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html). - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_27,5BC1631D896899D03E7D8DD2296C21656DD169FF," Week ending 13 October 2023 - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_28,5BC1631D896899D03E7D8DD2296C21656DD169FF," New notebooks in Samples - -12 Oct 2023 - -Two new notebooks are available that use a vector database from Elasticsearch in the retrieval phase of the retrieval-augmented generation pattern. The notebooks demonstrate how to find matches based on the semantic similarity between the indexed documents and the query text that is submitted from a user. - - - -* [Sample notebook: Use watsonx, Elasticsearch, and LangChain to answer questions (RAG)](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/ebeb9fc0-9844-4838-aff8-1fa1997d0c13?context=wx&audience=wdp) -* [Sample notebook: Use watsonx, and Elasticsearch Python SDK to answer questions (RAG)](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/bdbc8ad4-9c1f-460f-99ee-5c3a1f374fa7?context=wx&audience=wdp) - - - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_29,5BC1631D896899D03E7D8DD2296C21656DD169FF," Intermediate solutions in Decision Optimization - -12 Oct 2023 - -You can now choose to see a sample of intermediate solutions while a Decision Optimization experiment is running. This can be useful for debugging or to see how the solver is progressing. For large models that take longer to solve, with intermediate solutions you can now quickly and easily identify any potential problems with the solve, without having to wait for the solve to complete. ![Graphical display showing run statistics with intermediate solutions.](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/do-rundisplay.png) You can configure the Intermediate solution delivery parameter in the Run configuration and select a frequency for these solutions. For more information, see [Run models](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__runmodel) and [Run configuration parameters](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_RunParameters/runparams.htmlRunConfig__section_runconfig) - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_30,5BC1631D896899D03E7D8DD2296C21656DD169FF," New Decision Optimization saved model dialog - -When you save a model for deployment from the Decision Optimization user interface, you can now review the input and output schema, and more easily select the tables that you want to include. You can also add, modify or delete run configuration parameters, review the environment, and the model files used. All these items are displayed in the same Save as model for deployment dialog. For more information, see [Deploying a Decision Optimization model by using the user interface](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployModelUI-WML.html). - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_31,5BC1631D896899D03E7D8DD2296C21656DD169FF," Week ending 6 October 2023 - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_32,5BC1631D896899D03E7D8DD2296C21656DD169FF," Additional foundation models in Frankfurt - -5 Oct 2023 - -All foundation models that are available in the Dallas data center are now also available in the Frankfurt data center. The watsonx.ai Prompt Lab and foundation model inferencing are now supported in the Frankfurt region for these models: - - - -* granite-13b-chat-v1 -* granite-13b-instruct-v1 -* llama-2-70b-chat -* gpt-neox-20b -* mt0-xxl-13b -* starcoder-15.5b - - - -For more information on these models, see [Supported foundation models available with watsonx.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html). - -For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html). - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_33,5BC1631D896899D03E7D8DD2296C21656DD169FF," Control the placement of a new column in the Concatenate operation (Data Refinery) - -6 Oct 2023 - -You now have two options to specify the position of the new column that results from the Concatenate operation: As the right-most column in the data set or next to the original column. - -![Concatenate operation column position](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/dr-concat-position.png) - -Previously, the new column was placed at the beginning of the data set. - -Important: - -Action required - -Edit the Concatenate operation in any of your existing Data Refinery flows to specify the new column position. Otherwise, the flow might fail. - -For information about Data Refinery operations, see [GUI operations in Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/gui_operations.html). - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_34,5BC1631D896899D03E7D8DD2296C21656DD169FF," Week ending 29 September 2023 - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_35,5BC1631D896899D03E7D8DD2296C21656DD169FF," IBM Granite foundation models for natural language generation - -28 Sept 2023 - -The first two models from the Granite family of IBM foundation models are now available in the Dallas region: - - - -* granite-13b-chat-v1: General use model that is optimized for dialogue use cases -* granite-13b-instruct-v1: General use model that is optimized for question answering - - - -Both models are 13B-parameter decoder models that can efficiently predict and generate language in English. They, like all models in the Granite family, are designed for business. Granite models are pretrained on multiple terabytes of data from both general-language sources, such as the public internet, and industry-specific data sources from the academic, scientific, legal, and financial fields. - -Try them out today in the Prompt Lab or run a [sample notebook](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/61c1e967-8d10-44bb-a846-cc1f27e9e69a) that uses the granite-13b-instruct-v1 model for sentiment analysis. - -Read the [Building AI for business: IBM’s Granite foundation models](https://www.ibm.com/blog/building-ai-for-business-ibms-granite-foundation-models/) blog post to learn more. - - - -* For more information on these models, see [Supported foundation models available with watsonx.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html). -* For a description of sample prompts, see [Sample foundation model prompts for common tasks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html). -* For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html). - - - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_36,5BC1631D896899D03E7D8DD2296C21656DD169FF," Week ending 22 September 2023 - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_37,5BC1631D896899D03E7D8DD2296C21656DD169FF," Decision Optimization Java models - -20 Sept 2023 - -Decision Optimization Java models can now be deployed in Watson Machine Learning. By using the Java worker API, you can create optimization models with OPL, CPLEX, and CP Optimizer Java APIs. You can now easily create your models locally, package them and deploy them on Watson Machine Learning by using the boilerplate that is provided in the public [Java worker GitHub](https://github.com/IBMDecisionOptimization/cplex-java-worker/blob/master/README.md). For more information, see [Deploying Java models for Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployJava.html). - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_38,5BC1631D896899D03E7D8DD2296C21656DD169FF," New notebooks in Samples - -21 Sept 2023 - -You can use the following new notebooks in Samples: - - - -* [Use watsonx and LangChain to answer questions using RAG](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/d3a5f957-a93b-46cd-82c1-c8d37d4f62c6) -* [Use watsonx and BigCode starcoder-15.5b to generate code](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/b5792ad4-555b-4b68-8b6f-ce368093fac6) - - - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_39,5BC1631D896899D03E7D8DD2296C21656DD169FF," Week ending 15 September 2023 - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_40,5BC1631D896899D03E7D8DD2296C21656DD169FF," Prompt engineering and synthetic data quick start tutorials - -14 Sept 2023 - -Try the new tutorials to help you learn how to: - - - -* Prompt foundation models: There are usually multiple ways to prompt a foundation model for a successful result. In the Prompt Lab, you can experiment with prompting different foundation models, explore sample prompts, as well as save and share your best prompts. One way to improve the accuracy of generated output is to provide the needed facts as context in your prompt text using the retrieval-augmented generation pattern. -* Generate synthetic data: You can generate synthetic tabular data in watsonx.ai. The benefit to synthetic data is that you can procure the data on-demand, then customize to fit your use case, and produce it in large quantities. - - - - - - Tutorial Description Expertise for tutorial - - [Prompt a foundation model using Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html) Experiment with prompting different foundation models, explore sample prompts, and save and share your best prompts. Prompt a model using Prompt Lab without coding.

Beginner

No code - [Prompt a foundation model with the retrieval-augmented generation pattern](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html) Prompt a foundation model by leveraging information in a knowledge base. Use the retrieval-augmented generation pattern in a Jupyter notebook that uses Python code.

Intermediate

All code - [Generate synthetic tabular data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-generate-data.html) Generate synthetic tabular data using a graphical flow editor. Select operations to generate data.

Beginner

No code - - - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_41,5BC1631D896899D03E7D8DD2296C21656DD169FF," Watsonx.ai Community - -14 Sept 2023 - -You can now join the [watsonx.ai Community](https://community.ibm.com/community/user/watsonx/communities/community-home?communitykey=81927b7e-9a92-4236-a0e0-018a27c4ad6e) for AI architects and builders to learn, share ideas, and connect with others. - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_42,5BC1631D896899D03E7D8DD2296C21656DD169FF," Week ending 8 September 2023 - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_43,5BC1631D896899D03E7D8DD2296C21656DD169FF," Generate synthetic tabular data with Synthetic Data Generator - -7 Sept 2023 - -Now available in the Dallas and Frankfurt regions, Synthetic Data Generator is a new graphical editor tool on watsonx.ai that you can use to generate tabular data to use for training models. Using visual flows and a statistical model, you can create synthetic data based on your existing data or a custom data schema. You can choose to mask your original data and export your synthetic data to a database or as a file. - -To get started, see [Synthetic data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html). - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_44,5BC1631D896899D03E7D8DD2296C21656DD169FF," Llama-2 Foundation Model for natural language generation and chat - -7 Sept 2023 - -The Llama-2 Foundation Model from Meta is now available in the Dallas region. Llama-2 Chat model is an auto-regressive language model that uses an optimized transformer architecture. The model is pretrained with publicly available online data, and then fine-tuned using reinforcement learning from human feedback. The model is intended for commercial and research use in English-language assistant-like chat scenarios. - - - -* For more information on the Llama-2 model, see [Supported foundation models available with watsonx.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html). -* For a description of sample prompts, see [Sample foundation model prompts for common tasks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html). -* For pricing details for Llama-2, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html). - - - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_45,5BC1631D896899D03E7D8DD2296C21656DD169FF," LangChain extension for the foundation models Python library - -7 Sept 2023 - -You can now use the LangChain framework with foundation models in watsonx.ai with the new LangChain extension for the foundation models Python library. - -This sample notebook demonstrates how to use the new extension: [Sample notebook](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/c3dbf23a-9a56-4c4b-8ce5-5707828fc981?context=wx) - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_46,5BC1631D896899D03E7D8DD2296C21656DD169FF," Introductory sample for the retrieval-augmented generation pattern - -7 Sept 2023 - -Retrieval-augmented generation is a simple, powerful technique for leveraging a knowledge base to get factually accurate output from foundation models. - -See: [Introduction to retrieval-augmented generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-rag.html) - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_47,5BC1631D896899D03E7D8DD2296C21656DD169FF," Week ending 1 September 2023 - -31 Aug 2023 - -As of today it is not possible to add comments to a notebook from the notebook action bar. Any existing comments were removed. - -![Comments icon in the notebook action bar](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/notebook-comments.png) - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_48,5BC1631D896899D03E7D8DD2296C21656DD169FF," StarCoder Foundation Model for code generation and code translation - -31 Aug 2023 - -The StarCoder model from Hugging Face is now available in the Dallas region. Use StarCoder to create prompts for generating code or for transforming code from one programming language to another. One sample prompt demonstrates how to use StarCoder to generate Python code from a set of instruction. A second sample prompt demonstrates how to use StarCoder to transform code written in C++ to Python code. - - - -* For more information on the StarCoder model, see [Supported foundation models available with watsonx.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html). -* For a description of the sample prompts, see [Sample foundation model prompts for common tasks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html). - - - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_49,5BC1631D896899D03E7D8DD2296C21656DD169FF," IBM watsonx.ai is available in the Frankfurt region - -31 Aug 2023 - -Watsonx.ai is now generally available in the Frankfurt data center and can be selected as the preferred region when signing-up. The Prompt Lab and foundation model inferencing are supported in the Frankfurt region for these models: - - - -* mpt-7b-instruct2 -* flan-t5-xxl-11b -* flan-ul2-20b -* For more information on the supported models, see [Supported foundation models available with watsonx.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html). - - - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_50,5BC1631D896899D03E7D8DD2296C21656DD169FF," Week ending 25 August 2023 - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_51,5BC1631D896899D03E7D8DD2296C21656DD169FF," Additional cache enhancements available for Watson Pipelines - -21 August 2023 - -More options are available for customizing your pipeline flow settings. You can now exercise greater control over when the cache is used for pipeline runs. For details, see [Managing default settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-global-settings.html). - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_52,5BC1631D896899D03E7D8DD2296C21656DD169FF," Week ending 18 August 2023 - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_53,5BC1631D896899D03E7D8DD2296C21656DD169FF," Plan name updates for Watson Machine Learning service - -18 August 2023 - -Starting immediately, plan names are updated for the IBM Watson Machine Learning service, as follows: - - - -* The v2 Standard plan is now the Essentials plan. The plan is designed to give your organization the resources required to get started working with foundation models and machine learning assets. -* The v2 Professional plan is now the Standard plan. This plan provides resources designed to support most organizations through asset creation to productive use. - - - -Changes to the plan names do not change your terms of service. That is, if you are registered to use the v2 Standard plan, it will now be named Essentials, but all of the plan details will remain the same. Similarly, if you are registered to use the v2 Professional plan, there are no changes other than the plan name change to Standard. - -For details on what is included with each plan, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html). For pricing information, find your plan on the [Watson Machine Learning plan page](https://cloud.ibm.com/catalog/services/watson-machine-learning) in the IBM Cloud catalog. - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_54,5BC1631D896899D03E7D8DD2296C21656DD169FF," Week ending 11 August 2023 - -7 August 2023 - -On 31 August 2023, you will no longer be able to add comments to a notebook from the notebook action bar. Any existing comments that were added that way will be removed. - -![Comments icon in the notebook action bar](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/notebook-comments.png) - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_55,5BC1631D896899D03E7D8DD2296C21656DD169FF," Week ending 4 August 2023 - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_56,5BC1631D896899D03E7D8DD2296C21656DD169FF," Increased token limit for Lite plan - -4 August 2023 - -If you are using the Lite plan to test foundation models, the token limit for prompt input and output is now increased from 25,000 to 50,000 per account per month. This gives you more flexibility for exploring foundation models and experimenting with prompts. - - - -* For details on watsonx.ai plans, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html). -* For details on working with prompts, see [Engineer prompts with the Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html). - - - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_57,5BC1631D896899D03E7D8DD2296C21656DD169FF," Custom text analytics template (SPSS Modeler) - -4 August 2023 - -For SPSS Modeler, you can now upload a custom text analytics template to a project. This provides you with more flexibility to capture and extract key concepts in a way that is unique to your context. - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_58,5BC1631D896899D03E7D8DD2296C21656DD169FF," Week ending 28 July 2023 - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_59,5BC1631D896899D03E7D8DD2296C21656DD169FF," Foundation models Python library available - -27 July 2023 - -You can now prompt foundation models in watsonx.ai programmatically using a Python library. - -See: [Foundation models Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html) - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_60,5BC1631D896899D03E7D8DD2296C21656DD169FF," Week ending 14 July 2023 - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_61,5BC1631D896899D03E7D8DD2296C21656DD169FF," Control AI guardrails - -14 July 2023 - -You can now control whether AI guardrails are on or off in the Prompt Lab. AI guardrails remove potentially harmful text from both the input and output fields. Harmful text can include hate speech, abuse, and profanity. To prevent the removal of potentially harmful text, set the AI guardrails switch to off. See [Hate speech, abuse, and profanity](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-hallucinations.htmlhap). - -![The Prompt Lab with AI guardrails set on](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/guardrails.png) - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_62,5BC1631D896899D03E7D8DD2296C21656DD169FF," Microsoft Azure SQL Database connection supports Azure Active Directory authentication (Azure AD) - -14 July 2023 - -You can now select Active Directory for the Microsoft Azure SQL Database connection. Active Directory authentication is an alternative to SQL Server authentication. With this enhancement, administrators can centrally manage user permissions to Azure. For more information, see [Microsoft Azure SQL Database connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azure-sql.html). - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_63,5BC1631D896899D03E7D8DD2296C21656DD169FF," Week ending 7 July 2023 - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_64,5BC1631D896899D03E7D8DD2296C21656DD169FF," Welcome to IBM watsonx.ai! - -7 July 2023 - -IBM watsonx.ai delivers all the tools that you need to work with machine learning and foundation models. - -Get started: - - - -* [Learn about watsonx.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html) -* [Learn about foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html) -* [Engineer prompts with the Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) -* [Take quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html) -* [Watson Natural Language Processing](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html) - - - -" -5BC1631D896899D03E7D8DD2296C21656DD169FF_65,5BC1631D896899D03E7D8DD2296C21656DD169FF," Try generative AI search and answer in this documentation - -7 July 2023 - -You can see generative AI in action by trying the new generative AI search and answer option in the watsonx.ai documentation. The answers are generated by a large language model running in watsonx.ai and based on the documentation content. This feature is only available when you are viewing the documentation while logged in to watsonx.ai. - -Enter a question in the documentation search field and click the Try generative AI search and answer icon (![Try generative AI search and answer icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/bee.png)). The Generative AI search and answer pane opens and answers your question. - -![Shows the generative AI search and answer pane](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/gen-ai-search.png) -" -CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1_0,CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1," Watson Machine Learning plans and compute usage - -You use Watson Machine Learning resources, which are measured in capacity unit hours (CUH), when you train AutoAI models, run machine learning models, or score deployed models. You use Watson Machine Learning resources, measured in resource units (RU), when you run inferencing services with foundation models. This topic describes the various plans you can choose, what services are included, and how computing resources are calculated. - -" -CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1_1,CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1," Watson Machine Learning in Cloud Pak for Data as a Service and watsonx - -Important:The Watson Machine Learning plan includes details for watsonx.ai. Watsonx.ai is a studio of integrated tools for working with generative AI, powered by foundation models, and machine learning models. If you are using Cloud Pak for Data as a Service, then the details for working with foundation models and metering prompt inferencing using Resource Units do not apply to your plan. - -For more information on watsonx.ai, see: - - - -* [Overview of IBM watsonx.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/overview-wx.html) -* [Comparison of IBM watsonx and Cloud Pak for Data as a Service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html) -* [Signing up for IBM watsonx.ai](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html) - - - -If you are enabled for both watsonx and Cloud Pak for Data as a Service, you can switch between the two platforms. - -" -CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1_2,CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1," Choosing a Watson Machine Learning plan - -View a comparison of plans and consider the details to choose a plan that fits your needs. - - - -* [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html?context=cdpaas&locale=enwml-plan) -* [Capacity Unit Hours (CUH), tokens, and Resource Units (RU)](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html?context=cdpaas&locale=enwml-meters) -* [Watson Machine Learning plan details](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html?context=cdpaas&locale=enwml-plan-details) -* [Capacity Unit Hours metering](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html?context=cdpaas&locale=encuh-metering) -* [Monitoring CUH and RU usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html?context=cdpaas&locale=enwml-track-usage) - - - -" -CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1_3,CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1," Watson Machine Learning plans - -Watson Machine Learning plans govern how you are billed for models you train and deploy with Watson Machine Learning and for prompts you use with foundation models. Choose a plan based on your needs: - - - -* Lite is a free plan with limited capacity. Choose this plan if you are evaluating Watson Machine Learning and want to try out the capabilities. The Lite plan does not support running a foundation model tuning experiment on watsonx. -* Essentials is a pay-as-you-go plan that gives you the flexibility to build, deploy, and manage models to match your needs. -* Standard is a high-capacity enterprise plan that is designed to support all of an organization's machine learning needs. Capacity unit hours are provided at a flat rate, while resource unit consumption is pay-as-you-go. - - - -For plan details and pricing, see [IBM Cloud Machine Learning](https://cloud.ibm.com/catalog/services/machine-learning). - -" -CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1_4,CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1," Capacity Unit Hours (CUH), tokens, and Resource Units (RU) - -For metering and billing purposes, machine learning models and deployments or foundation models are measured with these units: - - - -* Capacity Unit Hours (CUH) measure compute resource consumption per unit hour for usage and billing purposes. CUH measures all Watson Machine Learning activity except for Foundation Model inferencing. -* Resource Units (RU) measure foundation model inferencing consumption. Inferencing is the process of calling the foundation model to generate output in response to a prompt. Each RU equals 1,000 tokens. A token is a basic unit of text (typically 4 characters or 0.75 words) used in the input or output for a foundation model prompt. Choose a plan that corresponds to your usage requirements. For details on tokens, see [Tokens and tokenization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html). -* A rate limit monitors and restricts the number of inferencing requests per second processed for foundation models for a given Watson Machine Learning plan instance. The rate limit is higher for paid plans than for the free Lite plan. - - - -" -CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1_5,CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1," Watson Machine Learning plan details - -The Lite plan provides enough free resources for you to evaluate the capabilities of watsonx.ai. You can then choose a paid plan that matches the needs of your organization, based on plan features and capacity. - - - -Table 1. Plan details - - Plan features Lite Essentials Standard - - Machine Learning usage in CUH 20 CUH per month CUH billing based on CUH rate multiplied by hours of consumption 2500 CUH per month - Foundation model inferencing in tokens or Resource Units (RU) 50,000 tokens per month Billed for usage (1000 tokens = 1 RU) Billed for usage (1000 tokens = 1 RU) - Max parallel Decision Optimization batch jobs per deployment 2 5 100 - Deployment jobs retained per space 100 1000 3000 - Deployment time to idle 1 day 3 days 3 days - HIPAA support NA NA Dallas region only
Must be enabled in your [IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security.htmlhipaa) - Rate limit per plan ID 2 inference requests per second 8 inference requests per second 8 inference requests per second - - - -Note: If you upgrade from Essentials to Standard, you cannot revert to an Essentials plan. You must create a new plan. - -For all plans: - - - -* Foundational Model inferencing Resource Units (RU) can be used for Prompt Lab inferencing, including input and output. That is, the prompt you enter for input is counted in addition to the generated output. (watsonx only) -* Foundation model inferencing is available only for the Dallas and Frankfurt data centers. (watsonx only) -* Foundation model tuning in the Tuning Studio is available only in the Dallas data center. (watsonx only) -* Three model classes determine the RU rate. The price per RU differs according to model class. (watsonx only) -* Capacity-unit-hour (CUH) rate consumption for training is based on training tool, hardware specification, and runtime environment. -* Capacity-unit-hour (CUH) rate consumption for deployment is based on deployment type, hardware specification, and software specification. -" -CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1_6,CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1,"* Watson Machine Learning places limits on the number of [deployment jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html) retained for each single [deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html). If you exceed your limit, you cannot create new deployment jobs until you delete existing jobs or upgrade your plan. By default, jobs metadata will be auto-delete after 30 days. You can override this value when creating a job. See [Managing jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-jobs.html). -* Time to idle refers to the amount of time to consider a deployment active between scoring requests. If a deployment does not receive scoring requests for a given duration, it is treated as inactive, or idle, and billing stops for all frameworks other than SPSS. -* A plan allows for at least the stated rate limit, and the actual rate limit can be higher than the stated limit. For example, the Lite plan might process more than 2 requests per second without issuing an error. If you have a paid plan and believe you are reaching the rate limit in error, contact IBM Support for assistance. - - - -For plan details and pricing, see [IBM Cloud Machine Learning](https://cloud.ibm.com/catalog/services/watson-machine-learning). - -" -CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1_7,CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1," Resource unit metering (watsonx) - -Resource Units billing is based on the rate of the billing class for the foundation model multipled by the number of Resource Units (RU). A Resource Unit is equal to 1000 tokens from the input and output of foundation model inferencing. The three foundation model billing classes have different RU rates. - - - -Table 2. Foundation model billing details - - Model Origin Billing class Price per RU - - granite-13b-instruct-v2 IBM Class 2 $0.0018 per RU - granite-13b-instruct-v1 IBM Class 2 $0.0018 per RU - granite-13b-chat-v2 IBM Class 2 $0.0018 per RU - granite-13b-chat-v1 IBM Class 2 $0.0018 per RU - flan-t5-xl-3b Open source Class 1 $0.0006 per RU - flan-t5-xxl-11b Open source Class 2 $0.0018 per RU - flan-ul2-20b Open source Class 3 $0.0050 per RU - gpt-neox-20b Open source Class 3 $0.0050 per RU - llama-2-13b-chat Open source Class 1 $0.0006 per RU - llama-2-70b-chat Open source Class 2 $0.0018 per RU - mpt-7b-instruct2 Open source Class 1 $0.0006 per RU - mt0-xxl-13b Open source Class 2 $0.0018 per RU - starcoder-15.5b Open source Class 2 $0.0018 per RU - Tuned foundation model Custom Class 1 $0.0006 per RU - - - - - -* For more information about each model, see [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html). -* For information about tuned foundation models, see [Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html). -* For information about regional support for each model, see [Regional availability for foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/regional-datactr.htmldata-centers). - - - -Note: You do not consume tokens when you use the generative AI search and answer app for this documentation site. - -" -CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1_8,CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1," Capacity Unit Hours metering (watsonx and Watson Machine Learning) - -CUH consumption is affected by the computational hardware resources you apply for a task as well as other factors such as the software specification and model type. - -" -CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1_9,CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1," CUH consumption rates by asset type - - - -Table 3. CUH consumption rates by asset type - - Asset type Capacity type Capacity units per hour - - AutoAI experiment 8 vCPU and 32 GB RAM 20 - Decision Optimization training 2 vCPU and 8 GB RAM
4 vCPU and 16 GB RAM
8 vCPU and 32 GB RAM
16 vCPU and 64 GB RAM 6
7
9
13 - Decision Optimization deployments 2 vCPU and 8 GB RAM
4 vCPU and 16 GB RAM
8 vCPU and 32 GB RAM
16 vCPU and 64 GB RAM 30
40
50
60 - Machine Learning models
(training, evaluating, or scoring) 1 vCPU and 4 GB RAM
2 vCPU and 8 GB RAM
4 vCPU and 16 GB RAM
8 vCPU and 32 GB RAM
16 vCPU and 64 GB RAM 0.5
1
2
4
8 - Foundation model tuning experiment
(watsonx only) NVIDIA A100 80GB GPU 43 - - - -" -CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1_10,CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1," CUH consumption by deployment and framework type - -CUH consumption for deployments is calculated using these formulas: - - - -Table 4. CUH consumption by deployment and framework type - - Deployment type Framework CUH calculation - - Online AutoAI, Python functions and scripts, SPSS, Scikit-Learn custom libraries, Tensorflow, RShiny deployment_active_duration no_of_nodes CUH_rate_for_capacity_type_framework - Online Spark, PMML, Scikit-Learn, Pytorch, XGBoost score_duration_in_seconds no_of_nodes CUH_rate_for_capacity_type_framework - Batch all frameworks job_duration_in_seconds no_of_nodes CUH_rate_for_capacity_type_framework - - - -" -CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1_11,CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1," Monitoring resource usage - -You can track CUH or RU usage for assets you own or collaborate on in a project or space. If you are an account owner or administrator, you can track CUH or RU usage for an entire account. - -" -CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1_12,CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1," Tracking CUH or RU usage in a project - -To monitor CUH or RU consumption in a project: - - - -1. Navigate to the Manage tab for a project. -2. Click Resources to review a summary of resource consumption for assets in the project or space, or to review resource consumption details for particular assets. - -![Tracking resources in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/Resource-tracking.png) - - - -" -CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1_13,CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1," Tracking CUH usage for an account - -You can track the runtime usage for an account on the Environment Runtimes page if you are the IBM Cloud account owner or administrator or the Watson Machine Learning service owner. For details, see [Monitoring resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html). - -" -CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1_14,CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1," Tracking CUH consumption for machine learning in a notebook - -To calculate capacity unit hours in a notebook, use: - -CP = client.service_instance.get_details() -CUH = CUH[""entity""][""capacity_units""]/(36001000) -print(CUH) - -For example: - -'capacity_units': {'current': 19773430} - -19773430/(36001000) - -returns 5.49 CUH - -For details, see the Service Instances section of the [IBM Watson Machine Learning API](https://cloud.ibm.com/apidocs/machine-learning) documentation. - -" -CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1_15,CAAA2E09B5B6F7AC550E936268B45D3CB7A412A1," Learn more - - - -* [Compute options for AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-autoai.html) -* [Compute options for model training and scoring](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-cuh-deploy-spaces.html) - - - -Parent topic:[Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/wml.html) -" -3DF4040F4F2E5704EF44E6742585EE853A2F2A37_0,3DF4040F4F2E5704EF44E6742585EE853A2F2A37," Watson Studio service plans - -The plan you choose for Watson Studio affects the features and capabilities that you can use. - -When you provision or upgrade Watson Studio, you can choose between a Lite and a Professional plan. - -See the plan pages in [IBM Cloud catalog: Watson Studio](https://cloud.ibm.com/catalog/services/watson-studio) for pricing and feature information. - -IBM Cloud account owners can choose between the Lite (unpaid) and Professional (paid) plan. - -Under the Professional plan, you can provision multiple Watson Studio instances in an IBM Cloud account. The Professional plan allows unlimited users and charges for compute usage which is measured in capacity unit hours (CUH). The Professional plan is the only paid plan option. - -Under the Lite plan, you can provision one Watson Studio instance per IBM Cloud account. The Lite plan allows only one user and limits the CUH to 10 hours per month. Collaborators in your projects must have their own Watson Studio Lite plans. - -Both Watson Studio plans contain these features without additional services: - - - -* Watson services APIs to run in notebooks. -* [Jupyter notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html) to analyze data with Python or R code. -* [RStudio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-overview.html) to analyze data with R code. -* [SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html) to develop predictive models on a graphical canvas. -* [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) to shape and cleanse data. -* [Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) to orchestrate an end-to-end flow of assets from creation through deployment. -" -3DF4040F4F2E5704EF44E6742585EE853A2F2A37_1,3DF4040F4F2E5704EF44E6742585EE853A2F2A37,"* [Small runtime environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) as compute resources for analytical tools. -* [Spark runtime environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.htmldefault-spark). The maximum number of Spark executors that can be used is restricted by the service plan. -* [Environments with the Watson Natural Language Processing library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html) with pre-trained models for language processing tasks that you can run on unstructured data. -* [Environments with Decision Optimization libraries](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html) to model and solve decision optimization problems that exceed the complexity that is supported by the Community Edition of the libraries in the other default Python environments. -* [Connectors to data sources](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html). -* [Collaboration](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html) in projects and deployment spaces. -* [Samples](https://dataplatform.cloud.ibm.com/gallery) for resources to help you learn and samples that you can use. - - - -Both Watson Studio plans contain these features that also require the Watson Machine Learning service: - - - -* [Machine learning models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-overview.html) to build analytical models. -* [AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) to automatically create a set of model candidates. -" -3DF4040F4F2E5704EF44E6742585EE853A2F2A37_2,3DF4040F4F2E5704EF44E6742585EE853A2F2A37,"* [Federated learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html) to collaboratively train a model with multiple remote parties without sharing data. -* [Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html) to build models that solve business problems. -* [Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html) to generate synthetic tabular data. - - - -The Watson Studio Professional plan includes features that are not available in the Lite plan, including the following: - - - -* [Encrypt your IBM Cloud Object Storage instance with your own key](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.htmlbyok). -* [Large runtime environments with 8 or more vCPUs](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) as compute resources for analytical tools. -* [GPU environments for running notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.htmldefault-gpu). -* [Export projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/export-project.html). - - - -The Professional plan charges for [compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) consumed per month. Compute usage is measured in capacity unit hours (CUH). For details on computing resource allocation and consumption, see [Runtime usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/track-runtime-usage.html). - - - -Table 1. Feature differences between Watson Studio plans - - Feature Lite Professional - - Custom encryption keys ✓ -" -3DF4040F4F2E5704EF44E6742585EE853A2F2A37_3,3DF4040F4F2E5704EF44E6742585EE853A2F2A37," Connectors ✓ ✓ - Large environments ✓ - Spark environments 2 executors Up to 35 executors - GPU environments ✓ Dallas region only - Export projects ✓ - Collaborators 1 Unlimited - Processing usage 10 CUH per month Unlimited - pay per CUH - HIPAA readiness ✓ Dallas region only - - - -" -3DF4040F4F2E5704EF44E6742585EE853A2F2A37_4,3DF4040F4F2E5704EF44E6742585EE853A2F2A37," Learn more - - - -* [Watson Studio service overview](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/wsl.html) -* [Getting started](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-wdp.html) -* [Analyzing data and building models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html) -* [Upgrade your plan](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/upgrade.html). - - - -Parent topic:[Watson Studio](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/wsl.html) -" -B508DA024EE4722C3919C4D1118CF0410713A9C5_0,B508DA024EE4722C3919C4D1118CF0410713A9C5," Adding data to a project - -After you create a project, the next step is to add data assets to it so that you can work with data. All the collaborators in the project are automatically authorized to access the data in the project. - -Different asset types can have duplicate names. However, you can't add an asset type with the same name multiple times. - -You can use the following methods to add data assets to projects: - - - - Method When to use - - [Add local files](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html?context=cdpaas&locale=enfiles) You have data in CSV or similar files on your local system. - [Add Samples data sets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html?context=cdpaas&locale=encommunity) You want to use sample data sets. - [Add database connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) You need to connect to a remote data source. - [Add data from a connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html) You need one or more tables or files from a remote data source. - [Add connected folder assets from IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/folder-asset.html) You need a folder in IBM Cloud Object Storage that contains a dynamic set of files, such as a news feed. - [Convert files in project storage to assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html?context=cdpaas&locale=enos) You want to convert files that you created in the project into data assets. - - - -" -B508DA024EE4722C3919C4D1118CF0410713A9C5_1,B508DA024EE4722C3919C4D1118CF0410713A9C5," Add local files - -You can add a file from your local system as a data asset in a project. - -Required permissions : You must have the Editor or Admin role in the project. - -Restrictions : - The file cannot be empty. : - The file name can't exceed 255 characters. - -: - The maximum size for files that you can load with the UI is 5 GB. You can [load larger files to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/store-large-objs-in-cos.html) with APIs. - -Important: You can't add executable files to a project. All other types files that you add to a project are not checked for malicious code. You must ensure that your files do not contain malware or other types of malicious software that other collaborators might download. - -To add data files to a project: - - - -1. From your project's Assets page, click the Upload asset to project icon (![Shows the find data icon.](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/find_data_icon.png)). You can also click the same icon (![Shows the find data icon.](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/find_data_icon.png)) from within a notebook or canvas. -2. In the pane that opens, browse for the files or drag them onto the pane. You must stay on the page until the load is complete. - - - -The files are saved in the object storage that is associated with your project and are listed as data assets on the Assets page of your project. - -When you click the data asset name, you can see this information about data assets from files: - - - -* The asset name and description -* The tags for the asset -* The name of the person who created the asset -* The size of the data -* The date when the asset was added to the project -* The date when the asset was last modified -" -B508DA024EE4722C3919C4D1118CF0410713A9C5_2,B508DA024EE4722C3919C4D1118CF0410713A9C5,"* A [preview](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html) of the data, for CSV, Avro, Parquet, TSV, Microsoft Excel, PDF, text, JSON, and image files -* A [profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html) of the data, for CSV, Avro, Parquet, TSV, and Microsoft Excel files - - - -You can update the contents of a data asset from a file by adding a file with the same name and format to the project and then choosing to replace the existing data asset. - -You can remove the data asset by choosing the Delete option from the action menu next to the asset name. Choose the Prepare data option to refine the data with [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html). - -" -B508DA024EE4722C3919C4D1118CF0410713A9C5_3,B508DA024EE4722C3919C4D1118CF0410713A9C5," Add Samples data sets - -You can add data sets from Samples to your project: - - - -1. In Samples, find the card for the data set that you want to add. -2. Click the Add to Project icon from the action bar, select the project, and click Add. - - - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -B508DA024EE4722C3919C4D1118CF0410713A9C5_4,B508DA024EE4722C3919C4D1118CF0410713A9C5," Convert files in project storage to assets - -The storage for the project contains the data assets that you uploaded to the project, but it can also contain other files. For example, you can save a DataFrame in a notebook in the project environment storage. You can convert files in project storage to assets. - -To convert files in project storage to assets: - - - -1. From the Assets tab of your project, click Import asset. -2. Select Project files. -3. Select the data_asset folder. -4. Select the asset and click Import. - - - -" -B508DA024EE4722C3919C4D1118CF0410713A9C5_5,B508DA024EE4722C3919C4D1118CF0410713A9C5," Next steps - - - -* [Refine the data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) -* [Analyze the data and work with models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html) - - - -" -B508DA024EE4722C3919C4D1118CF0410713A9C5_6,B508DA024EE4722C3919C4D1118CF0410713A9C5," Learn more - - - -* [Downloading data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/download.html) -* [Publishing data assets to a catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/publish-asset-project.html) - - - -Parent topic:[Preparing data](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/get-data.html) -" -FD3B0E405075464C05C52E5BB0C414A870B06334_0,FD3B0E405075464C05C52E5BB0C414A870B06334," Administering a project - -If you have the Admin role in a project, you can perform administrative tasks for the project. - - - -* [Manage collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html) -* [Mark data assets in project as sensitive](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/mark-sensitive.html) -* [Stop all active runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.htmlstop-active-runtimes) -* [Export a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/export-project.html) -* [Manage project access tokens](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/token.html) -* [Remove assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-assets.htmlremove-asset) -* [Edit a locked asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-assets.htmleditassets) -* [Delete the project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html?context=cdpaas&locale=endelete-project) -* [Copy a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html?context=cdpaas&locale=encopy-project) -* [Switch the platform for a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html) - - - -Note: In the activity log, the user ID for some activities might display icp4d-dev instead of admin. - -" -FD3B0E405075464C05C52E5BB0C414A870B06334_1,FD3B0E405075464C05C52E5BB0C414A870B06334," Delete a project - -If you have the Admin role in a project, you can delete it. All project assets, associated files in the project's storage, and the associated storage for the project are also deleted. Data in a remote data source that is accessed through a connection is not affected. - -To delete a project, choose Project > View All Projects and then choose Delete from the ACTIONS menu next to the project name. - -" -FD3B0E405075464C05C52E5BB0C414A870B06334_2,FD3B0E405075464C05C52E5BB0C414A870B06334," Copy a project - -You can copy an existing project by [exporting it](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/export-project.html), and then [importing it](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/import-project.html) with a different name. - -Parent topic:[Projects ](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html) -" -1D21C3DD339C26916F6DF1EF3124C9D2D39C9ACC_0,1D21C3DD339C26916F6DF1EF3124C9D2D39C9ACC," Amazon S3 connection - -To access your data in Amazon S3, create a connection asset for it. - -Amazon S3 (Amazon Simple Storage Service) is a service that is offered by Amazon Web Services (AWS) that provides object storage through a web service interface. - -For other types of S3-compliant connections, you can use the [Generic S3 connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-generics3.html). - -" -1D21C3DD339C26916F6DF1EF3124C9D2D39C9ACC_1,1D21C3DD339C26916F6DF1EF3124C9D2D39C9ACC," Create a connection to Amazon S3 - -To create the connection asset, you need these connection details: - - - -* Bucket: Bucket name that contains the files. If your AWS credentials have permissions to list buckets and access all buckets, then you only need to supply the credentials. If your credentials don't have the privilege to list buckets and can only access a particular bucket, then you need to specify the bucket. -* Endpoint URL: Use for an AWS GovCloud instance. Include the region code. For example, https://s3..amazonaws.com. For the list of region codes, see [AWS service endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.htmlregional-endpoints). -* Region: Amazon Web Services (AWS) region. If you specify an Endpoint URL that is not for the AWS default region (us-west-2), then you should also enter a value for Region. - - - -Select Server proxy to access the Amazon S3 data source through a proxy server. Depending on its setup, a proxy server can provide load balancing, increased security, and privacy. The proxy server settings are independent of the authentication credentials and the personal or shared credentials selection. - - - -* Proxy host: The proxy URL. For example, https://proxy.example.com. -* Proxy port number: The port number to connect to the proxy server. For example, 8080 or 8443. -* The Proxy username and Proxy password fields are optional. - - - -" -1D21C3DD339C26916F6DF1EF3124C9D2D39C9ACC_2,1D21C3DD339C26916F6DF1EF3124C9D2D39C9ACC," Credentials - -The combination of Access key and Secret key is the minimum credentials. - -If the Amazon S3 account owner has set up temporary credentials or a Role ARN (Amazon Resource Name), enter the values provided by the Amazon S3 account owner for the applicable authentication combination: - - - -* Access key, Secret key, and Session token -* Access key, Secret key, Role ARN, Role session name, and optional Duration seconds -* Access key, Secret key, Role ARN, Role session name, External ID, and optional Duration seconds - - - -For setup instructions for the Amazon S3 account owner, see [Setting up temporary credentials or a Role ARN for Amazon S3](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-az3-tempcreds.html). - -" -1D21C3DD339C26916F6DF1EF3124C9D2D39C9ACC_3,1D21C3DD339C26916F6DF1EF3124C9D2D39C9ACC," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -1D21C3DD339C26916F6DF1EF3124C9D2D39C9ACC_4,1D21C3DD339C26916F6DF1EF3124C9D2D39C9ACC," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -1D21C3DD339C26916F6DF1EF3124C9D2D39C9ACC_5,1D21C3DD339C26916F6DF1EF3124C9D2D39C9ACC," Where you can use this connection - -You can use Amazon S3 connections in the following workspaces and tools: - -" -1D21C3DD339C26916F6DF1EF3124C9D2D39C9ACC_6,1D21C3DD339C26916F6DF1EF3124C9D2D39C9ACC,"Projects - - - -* Data Refinery -* Decision Optimization -* Notebooks. Click Read data on the Code snippets pane to get the connection credentials and load the data into a data structure. See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.htmlconns). -* SPSS Modeler -* Synthetic Data Generator - - - -" -1D21C3DD339C26916F6DF1EF3124C9D2D39C9ACC_7,1D21C3DD339C26916F6DF1EF3124C9D2D39C9ACC,"Catalogs - - - -* Platform assets catalog - - - -" -1D21C3DD339C26916F6DF1EF3124C9D2D39C9ACC_8,1D21C3DD339C26916F6DF1EF3124C9D2D39C9ACC," Amazon S3 setup - -See the [Amazon Simple Storage Service User Guide](https://docs.aws.amazon.com/AmazonS3/latest/userguide/setting-up-s3.html) for the setup steps. - -" -1D21C3DD339C26916F6DF1EF3124C9D2D39C9ACC_9,1D21C3DD339C26916F6DF1EF3124C9D2D39C9ACC," Restriction - -Folders cannot be named with the slash symbol (/) because the slash symbol is a delimiter for the file structure. - -" -1D21C3DD339C26916F6DF1EF3124C9D2D39C9ACC_10,1D21C3DD339C26916F6DF1EF3124C9D2D39C9ACC," Supported file types - -The Amazon S3 connection supports these file types: Avro, CSV, Delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML. - -" -1D21C3DD339C26916F6DF1EF3124C9D2D39C9ACC_11,1D21C3DD339C26916F6DF1EF3124C9D2D39C9ACC," Learn more - -[Amazon S3 documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) - -Related connection: [Generic S3 connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-generics3.html) - -Parent topic: [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -E73062F1E8466AB5604358A0AD0D66F31C81507C_0,E73062F1E8466AB5604358A0AD0D66F31C81507C," Setting up temporary credentials or a Role ARN for Amazon S3 - -Instead of adding another IAM user to your Amazon S3 account, you can grant them access with temporary security credentials and a Session token. Or, you can create a Role ARN (Amazon Resource Name) and then grant permission to that role to access the account. The trusted user can then use the role. - -You can assign role policies to the temporary credentials to limit the permissions. For example, you can assign read-only access or access to a particular S3 bucket. - -Prerequisite: You must be the IAM owner of the Amazon S3 account. - -You can set up one of the following authentication combinations: - - - -* Access key, Secret key, and Session token -* Access key, Secret key, Role ARN, Role session name, and optional Duration seconds -* Access key, Secret key, Role ARN, Role session name, External ID, and optional Duration seconds - - - -" -E73062F1E8466AB5604358A0AD0D66F31C81507C_1,E73062F1E8466AB5604358A0AD0D66F31C81507C," Access key, Secret key, and Session token - -Use the AWS Security Token Service (AWS STS) operations in the AWS API to obtain temporary security credentials. These credentials consist of an Access key, a Secret key, and a Session token that expires within a configurable amount of time. For instructions, see the AWS documentation: [Requesting temporary security credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html). - -" -E73062F1E8466AB5604358A0AD0D66F31C81507C_2,E73062F1E8466AB5604358A0AD0D66F31C81507C," Access key, Secret key, Role ARN, Role session name, and optional Duration seconds - -If someone else has their own S3 account, you can create a temporary role for that person to access your S3 account. Create the role either with the AWS Management Console or the AWS CLI. See [Creating a role to delegate permissions to an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html) - -The Role ARN is the Amazon Resource Name for connection's role. -The Role session name identifies the session to S3 administrators. For example, your IAM username. -The Duration seconds parameter is optional. The minimum is 15 minutes. The maximum is 36 hours, the default is 1 hour. The duration seconds timer starts every time that the connection is established. - -You then provide values for the Access key, Secret key, Role ARN, Role session name, and optional Duration seconds to the user who will create the connection. - -" -E73062F1E8466AB5604358A0AD0D66F31C81507C_3,E73062F1E8466AB5604358A0AD0D66F31C81507C," Access key, Secret key, Role ARN, Role session name, External ID, and optional Duration seconds - -If someone else has their own S3 account, you can create a temporary role for that person to access your S3 account. With this combination, the External ID is a unique string that you specify and that the user must enter for extra security. First, create the role either with the AWS Management Console or the AWS CLI. See [Creating a role to delegate permissions to an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html). To create the External ID, see [How to use an external ID when granting access to your AWS resources to a third party](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html). - -You then provide the values for the Access key, Secret key, Role ARN, Role session name, External ID, and optional Duration seconds to the user who will create the connection. - -" -E73062F1E8466AB5604358A0AD0D66F31C81507C_4,E73062F1E8466AB5604358A0AD0D66F31C81507C," Learn more - -[Amazon Resource Names (ARNs)](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) - -Parent topic:[Amazon S3 connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-amazon-s3.html) -" -BE1C1C3648CB7F6F6C394334554B1ECEDC3504DD_0,BE1C1C3648CB7F6F6C394334554B1ECEDC3504DD," Amazon RDS for MySQL connection - -To access your data in Amazon RDS for MySQL, create a connection asset for it. - -Amazon RDS for MySQL is a MySQL relational database that runs on the Amazon Relational Database Service (RDS). - -" -BE1C1C3648CB7F6F6C394334554B1ECEDC3504DD_1,BE1C1C3648CB7F6F6C394334554B1ECEDC3504DD," Supported versions - -MySQL database versions 5.6 through 8.0 - -" -BE1C1C3648CB7F6F6C394334554B1ECEDC3504DD_2,BE1C1C3648CB7F6F6C394334554B1ECEDC3504DD," Create a connection to Amazon RDS for MySQL - -To create the connection asset, you need these connection details: - - - -* Database name -* Hostname or IP address -* Port number -* Username and password -* SSL certificate (if required by the database server) - - - -" -BE1C1C3648CB7F6F6C394334554B1ECEDC3504DD_3,BE1C1C3648CB7F6F6C394334554B1ECEDC3504DD," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -BE1C1C3648CB7F6F6C394334554B1ECEDC3504DD_4,BE1C1C3648CB7F6F6C394334554B1ECEDC3504DD," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -BE1C1C3648CB7F6F6C394334554B1ECEDC3504DD_5,BE1C1C3648CB7F6F6C394334554B1ECEDC3504DD," Where you can use this connection - -You can use Amazon RDS for MySQL connections in the following workspaces and tools: - -" -BE1C1C3648CB7F6F6C394334554B1ECEDC3504DD_6,BE1C1C3648CB7F6F6C394334554B1ECEDC3504DD,"Projects - - - -* Data Refinery -* Notebooks. Click Read data on the Code snippets pane to get the connection credentials and load the data into a data structure. See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.htmlconns). -* Decision Optimization -* SPSS Modeler -* Synthetic Data Generator - - - -" -BE1C1C3648CB7F6F6C394334554B1ECEDC3504DD_7,BE1C1C3648CB7F6F6C394334554B1ECEDC3504DD,"Catalogs - - - -* Platform assets catalog - - - -" -BE1C1C3648CB7F6F6C394334554B1ECEDC3504DD_8,BE1C1C3648CB7F6F6C394334554B1ECEDC3504DD," Amazon RDS for MySQL setup - -For setup instructions, see these topics: - - - -* [Creating an Amazon RDS DB Instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateDBInstance.html) -* [Connecting to a DB Instance Running the MySQL Database Engine](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ConnectToInstance.html) - - - -" -BE1C1C3648CB7F6F6C394334554B1ECEDC3504DD_9,BE1C1C3648CB7F6F6C394334554B1ECEDC3504DD," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the [ Amazon RDS for MySQL documentation](https://aws.amazon.com/rds/mysql) for the correct syntax. - -" -BE1C1C3648CB7F6F6C394334554B1ECEDC3504DD_10,BE1C1C3648CB7F6F6C394334554B1ECEDC3504DD," Learn more - -[Amazon RDS for MySQL](https://aws.amazon.com/rds/mysql) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -82199291DA8DBB7656B03232F5BE43BA4D343654_0,82199291DA8DBB7656B03232F5BE43BA4D343654," Amazon RDS for Oracle connection - -To access your data in Amazon RDS for Oracle, create a connection asset for it. - -Amazon RDS for Oracle is an Oracle relational database that runs on the Amazon Relational Database Service (RDS). - -" -82199291DA8DBB7656B03232F5BE43BA4D343654_1,82199291DA8DBB7656B03232F5BE43BA4D343654," Supported Oracle versions and editions - - - -* Oracle Database 19c (19.0.0.0) -* Oracle Database 12c Release 2 (12.2.0.1) -* Oracle Database 12c Release 1 (12.1.0.2) - - - -" -82199291DA8DBB7656B03232F5BE43BA4D343654_2,82199291DA8DBB7656B03232F5BE43BA4D343654," Create a connection to Amazon RDS for Oracle - -To create the connection asset, you'll need these connection details: - - - -* Either the Oracle Service name or the Oracle System ID (SID) for the database. -* Hostname or IP address of the database -* Port number of the database. (Default is 1521) -* SSL certificate (if required by the database server) - - - -" -82199291DA8DBB7656B03232F5BE43BA4D343654_3,82199291DA8DBB7656B03232F5BE43BA4D343654," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -82199291DA8DBB7656B03232F5BE43BA4D343654_4,82199291DA8DBB7656B03232F5BE43BA4D343654," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -82199291DA8DBB7656B03232F5BE43BA4D343654_5,82199291DA8DBB7656B03232F5BE43BA4D343654," Where you can use this connection - -" -82199291DA8DBB7656B03232F5BE43BA4D343654_6,82199291DA8DBB7656B03232F5BE43BA4D343654,"Projects - -You can use Amazon RDS for Oracle connections in the following workspaces and tools: - - - -* Data Refinery -* Decision Optimization - - - -" -82199291DA8DBB7656B03232F5BE43BA4D343654_7,82199291DA8DBB7656B03232F5BE43BA4D343654,"Catalogs - - - -* Platform assets catalog - - - -" -82199291DA8DBB7656B03232F5BE43BA4D343654_8,82199291DA8DBB7656B03232F5BE43BA4D343654," Amazon RDS for Oracle setup - -To set up the Oracle database on Amazon, see these topics: - - - -* [Creating an Amazon RDS DB Instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateDBInstance.html) -* [Creating an Oracle DB instance and connecting to a database on an Oracle DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_GettingStarted.CreatingConnecting.Oracle.html) -* [Connecting to your Oracle DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ConnectToOracleInstance.html) - - - -" -82199291DA8DBB7656B03232F5BE43BA4D343654_9,82199291DA8DBB7656B03232F5BE43BA4D343654," Learn more - -[Amazon RDS for Oracle](https://aws.amazon.com/rds/oracle/) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -EC284AB3DC23241975F76E0519B531407042AEF1_0,EC284AB3DC23241975F76E0519B531407042AEF1," Amazon RDS for PostgreSQL connection - -To access your data in Amazon RDS for PostgreSQL, create a connection asset for it. - -Amazon RDS for PostgreSQL is a PostgreSQL relational database that runs on the Amazon Relational Database Service (RDS). - -" -EC284AB3DC23241975F76E0519B531407042AEF1_1,EC284AB3DC23241975F76E0519B531407042AEF1," Supported versions - -PostgreSQL database versions 9.4, 9.5, 9.6, 10, 11 and 12 - -" -EC284AB3DC23241975F76E0519B531407042AEF1_2,EC284AB3DC23241975F76E0519B531407042AEF1," Create a connection to Amazon RDS for PostgreSQL - -To create the connection asset, you need these connection details: - - - -* Database name -* Hostname or IP address -* Port number -* Username and password -* SSL certificate (if required by the database server) - - - -" -EC284AB3DC23241975F76E0519B531407042AEF1_3,EC284AB3DC23241975F76E0519B531407042AEF1," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -EC284AB3DC23241975F76E0519B531407042AEF1_4,EC284AB3DC23241975F76E0519B531407042AEF1," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -EC284AB3DC23241975F76E0519B531407042AEF1_5,EC284AB3DC23241975F76E0519B531407042AEF1," Where you can use this connection - -You can use Amazon RDS for PostgreSQL connections in the following workspaces and tools: - -" -EC284AB3DC23241975F76E0519B531407042AEF1_6,EC284AB3DC23241975F76E0519B531407042AEF1,"Projects - - - -* Data Refinery -* Decision Optimization -* Notebooks. Click Read data on the Code snippets pane to get the connection credentials and load the data into a data structure. See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.htmlconns). -* SPSS Modeler -* Synthetic Data Generator - - - -" -EC284AB3DC23241975F76E0519B531407042AEF1_7,EC284AB3DC23241975F76E0519B531407042AEF1,"Catalogs - - - -* Platform assets catalog - - - -" -EC284AB3DC23241975F76E0519B531407042AEF1_8,EC284AB3DC23241975F76E0519B531407042AEF1," Amazon RDS for PostgreSQL setup - -For setup instructions, see these topics: - - - -* [Creating an Amazon RDS DB Instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateDBInstance.html) -* [Connecting to a DB Instance Running the PostgreSQL Database Engine](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ConnectToPostgreSQLInstance.html) - - - -" -EC284AB3DC23241975F76E0519B531407042AEF1_9,EC284AB3DC23241975F76E0519B531407042AEF1," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the [ Amazon RDS for PostgreSQL documentation](https://aws.amazon.com/rds/postgresql/) for the correct syntax. - -" -EC284AB3DC23241975F76E0519B531407042AEF1_10,EC284AB3DC23241975F76E0519B531407042AEF1," Learn more - -[Amazon RDS for PostgreSQL](https://aws.amazon.com/rds/postgresql/) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -47B819E4861856FAB3C5627661EDD8E59FBED8A2_0,47B819E4861856FAB3C5627661EDD8E59FBED8A2," Microsoft Azure SQL Database connection - -To access your data in a Microsoft Azure SQL Database, create a connection asset for it. - -Microsoft Azure SQL Database is a managed cloud database provided as part of Microsoft Azure. - -" -47B819E4861856FAB3C5627661EDD8E59FBED8A2_1,47B819E4861856FAB3C5627661EDD8E59FBED8A2," Create a connection to Microsoft Azure SQL Database - -To create the connection asset, you need these connection details: - - - -* Database name -* Hostname or IP address -* Port number -* Select Use Active Directory if the server has been set up to use Azure Active Directory authentication (Azure AD). Enter your Azure AD user and password. -* Username and password -* SSL certificate (if required by the database server) - - - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -47B819E4861856FAB3C5627661EDD8E59FBED8A2_2,47B819E4861856FAB3C5627661EDD8E59FBED8A2," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -47B819E4861856FAB3C5627661EDD8E59FBED8A2_3,47B819E4861856FAB3C5627661EDD8E59FBED8A2," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -47B819E4861856FAB3C5627661EDD8E59FBED8A2_4,47B819E4861856FAB3C5627661EDD8E59FBED8A2," Where you can use this connection - -You can use Microsoft Azure SQL Database connections in the following workspaces and tools: - - - -* Decision Optimization -* SPSS Modeler -* Synthetic Data Generator - - - -" -47B819E4861856FAB3C5627661EDD8E59FBED8A2_5,47B819E4861856FAB3C5627661EDD8E59FBED8A2,"Catalogs - - - -* Platform assets catalog - - - -" -47B819E4861856FAB3C5627661EDD8E59FBED8A2_6,47B819E4861856FAB3C5627661EDD8E59FBED8A2," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the [Azure SQL Database documentation](https://docs.microsoft.com/en-ca/azure/azure-sql/database/connect-query-content-reference-guide) for the correct syntax. - -" -47B819E4861856FAB3C5627661EDD8E59FBED8A2_7,47B819E4861856FAB3C5627661EDD8E59FBED8A2," Microsoft Azure SQL Database setup - -[Getting started with single databases in Azure SQL Database](https://docs.microsoft.com/en-ca/azure/azure-sql/database/quickstart-content-reference-guide) - -" -47B819E4861856FAB3C5627661EDD8E59FBED8A2_8,47B819E4861856FAB3C5627661EDD8E59FBED8A2," Learn more - -[Azure SQL Database documentation](https://docs.microsoft.com/en-ca/azure/azure-sql/database/) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -270242015183DA2517DF613FD951623042268BEE_0,270242015183DA2517DF613FD951623042268BEE," Microsoft Azure Blob Storage connection - -To access your data in Microsoft Azure Blob Storage, create a connection asset for it. - -Azure Blob Storage is used for storing large amounts of data in the cloud. - -" -270242015183DA2517DF613FD951623042268BEE_1,270242015183DA2517DF613FD951623042268BEE," Create a connection to Microsoft Azure Blob Storage - -To create the connection asset, you need these connection details: - -Connection string: Authentication is managed by the Azure portal access keys. - -" -270242015183DA2517DF613FD951623042268BEE_2,270242015183DA2517DF613FD951623042268BEE," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -270242015183DA2517DF613FD951623042268BEE_3,270242015183DA2517DF613FD951623042268BEE," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -270242015183DA2517DF613FD951623042268BEE_4,270242015183DA2517DF613FD951623042268BEE," Where you can use this connection - -You can use Azure Blob Storage connections in the following workspaces and tools: - -" -270242015183DA2517DF613FD951623042268BEE_5,270242015183DA2517DF613FD951623042268BEE,"Projects - - - -* Data Refinery -* SPSS Modeler -* Synthetic Data Generator - - - -" -270242015183DA2517DF613FD951623042268BEE_6,270242015183DA2517DF613FD951623042268BEE,"Catalogs - - - -* Platform assets catalog - - - -" -270242015183DA2517DF613FD951623042268BEE_7,270242015183DA2517DF613FD951623042268BEE," Azure Blob Storage connection string setup - -Set up blob storage and access keys on the Microsoft Azure portal. For instructions see: - - - -* [Quickstart: Upload, download, and list blobs with the Azure portal](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-portal) -* [Manage storage account access keys](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage) - - - -Example connection string, which you can find in the ApiKeys section of the container: - -DefaultEndpointsProtocol=https;AccountName=sampleaccount;AccountKey=samplekey;EndpointSuffix=core.windows.net - -" -270242015183DA2517DF613FD951623042268BEE_8,270242015183DA2517DF613FD951623042268BEE," Supported file types - -The Azure Blob Storage connection supports these file types: Avro, CSV, Delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML. - -" -270242015183DA2517DF613FD951623042268BEE_9,270242015183DA2517DF613FD951623042268BEE," Learn more - -[Microsoft Azure](https://azure.microsoft.com) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -BC75F4741F871360B8E3CE356754C329323306F7_0,BC75F4741F871360B8E3CE356754C329323306F7," Microsoft Azure Data Lake Storage connection - -To access your data in Microsoft Azure Data Lake Storage, create a connection asset for it. - -Azure Data Lake Storage (ADLS) is a scalable data storage and analytics service that is hosted in Azure, Microsoft's public cloud. The Microsoft Azure Data Lake Storage connection supports access to both Gen1 and Gen2 Azure Data Lake Storage repositories. - -" -BC75F4741F871360B8E3CE356754C329323306F7_1,BC75F4741F871360B8E3CE356754C329323306F7," Create a connection to Microsoft Azure Data Lake Storage - -To create the connection asset, you need these connection details: - - - -* WebHDFS URL: The WebHDFS URL for accessing HDFS. -To connect to a Gen 2 ADLS, use the format, https://.dfs.core.windows.net/ -Where is the name you used when you created the ADLS instance. -For , use the name of the container you created. For more information, see the [Microsoft Data Lake Storage Gen2 documentation](https://docs.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/path/read). - -* Tenant ID: The Azure Active Directory tenant ID -* Client ID: The client ID for authorizing access to Microsoft Azure Data Lake Storage -* Client secret: The authentication key that is associated with the client ID for authorizing access to Microsoft Azure Data Lake Storage - - - -Select Server proxy to access the Azure Data Lake Storage data source through a proxy server. Depending on its setup, a proxy server can provide load balancing, increased security, and privacy. The proxy server settings are independent of the authentication credentials and the personal or shared credentials selection. - - - -* Proxy host: The proxy URL. For example, https://proxy.example.com. -* Proxy port number: The port number to connect to the proxy server. For example, 8080 or 8443. -* The Proxy protocol selection for HTTP or HTTPS is optional. - - - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -BC75F4741F871360B8E3CE356754C329323306F7_2,BC75F4741F871360B8E3CE356754C329323306F7," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -BC75F4741F871360B8E3CE356754C329323306F7_3,BC75F4741F871360B8E3CE356754C329323306F7," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -BC75F4741F871360B8E3CE356754C329323306F7_4,BC75F4741F871360B8E3CE356754C329323306F7," Where you can use this connection - -You can use Microsoft Azure Data Lake Storage connections in the following workspaces and tools: - -" -BC75F4741F871360B8E3CE356754C329323306F7_5,BC75F4741F871360B8E3CE356754C329323306F7,"Projects - - - -* Decision Optimization -* SPSS Modeler -* Synthetic Data Generator - - - -" -BC75F4741F871360B8E3CE356754C329323306F7_6,BC75F4741F871360B8E3CE356754C329323306F7,"Catalogs - - - -* Platform assets catalog - - - -" -BC75F4741F871360B8E3CE356754C329323306F7_7,BC75F4741F871360B8E3CE356754C329323306F7," Azure Data Lake Storage authentication setup - -To set up authentication, you need a tenant ID, client (or application) ID, and client secret. - - - -* Gen1: - - - -1. Create an Azure Active Directory (Azure AD) web application, get an application ID, authentication key, and a tenant ID. -2. Then, you must assign the Azure AD application to the Azure Data Lake Storage account file or folder. Follow Steps 1, 2, and 3 at [Service-to-service authentication with Azure Data Lake Storage using Azure Active Directory](https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-service-to-service-authenticate-using-active-directory). - - - -* Gen2: - - - -1. Follow instructions in [Acquire a token from Azure AD for authorizing requests from a client application](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-app). These steps create a new identity. After you create the identity, set permissions to grant the application access to your ADLS. The Microsoft Azure Data Lake Storage connection will use the associated Client ID, Client secret, and Tenant ID for the application. -2. Give the Azure App access to the storage container using Storage Explorer. For instructions, see [Use Azure Storage Explorer to manage directories and files in Azure Data Lake Storage Gen2](https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-explorermanaging-access). - - - - - -" -BC75F4741F871360B8E3CE356754C329323306F7_8,BC75F4741F871360B8E3CE356754C329323306F7," Supported file types - -The Microsoft Azure Data Lake Storage connection supports these file types: Avro, CSV, Delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML. - -" -BC75F4741F871360B8E3CE356754C329323306F7_9,BC75F4741F871360B8E3CE356754C329323306F7," Learn more - -[Azure Data Lake](https://azure.microsoft.com/en-us/solutions/data-lake) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -C832EB23963C0F4BCDDB2DD45C8E67001AEE8F4F_0,C832EB23963C0F4BCDDB2DD45C8E67001AEE8F4F," Microsoft Azure File Storage connection - -To access your data in Microsoft Azure File Storage, create a connection asset for it. - -Azure Files are Microsoft's cloud file system. They are managed file shares that are accessible via the Server Message Block (SMB) protocol or the Network File System (NFS) protocol. - -" -C832EB23963C0F4BCDDB2DD45C8E67001AEE8F4F_1,C832EB23963C0F4BCDDB2DD45C8E67001AEE8F4F," Create a connection to Microsoft Azure File Storage - -To create the connection asset, you need these connection details: - -Connection string: Authentication is managed by the Azure portal access keys. - -" -C832EB23963C0F4BCDDB2DD45C8E67001AEE8F4F_2,C832EB23963C0F4BCDDB2DD45C8E67001AEE8F4F," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -C832EB23963C0F4BCDDB2DD45C8E67001AEE8F4F_3,C832EB23963C0F4BCDDB2DD45C8E67001AEE8F4F," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -C832EB23963C0F4BCDDB2DD45C8E67001AEE8F4F_4,C832EB23963C0F4BCDDB2DD45C8E67001AEE8F4F," Where you can use this connection - -You can use Microsoft Azure File Storage connections in the following workspaces and tools: - -" -C832EB23963C0F4BCDDB2DD45C8E67001AEE8F4F_5,C832EB23963C0F4BCDDB2DD45C8E67001AEE8F4F,"Projects - - - -* Decision Optimization -* SPSS Modeler -* Synthetic Data Generator - - - -" -C832EB23963C0F4BCDDB2DD45C8E67001AEE8F4F_6,C832EB23963C0F4BCDDB2DD45C8E67001AEE8F4F,"Catalogs - - - -* Platform assets catalog - - - -" -C832EB23963C0F4BCDDB2DD45C8E67001AEE8F4F_7,C832EB23963C0F4BCDDB2DD45C8E67001AEE8F4F," Azure File Storage setup - -Set up storage and access keys on the Microsoft Azure portal. For instructions see [Manage storage account access keys](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage). -Example connection string, which you can find in the ApiKeys section of the container: - -DefaultEndpointsProtocol=https;AccountName=sampleaccount;AccountKey=samplekey;EndpointSuffix=core.windows.net - -Choose the method to create and manage your Azure Files: - - - -* [Quickstart: Create and manage Azure Files share with Windows virtual machines](https://docs.microsoft.com/en-us/azure/storage/files/storage-files-quick-create-use-windows) -* [Quickstart: Create and manage Azure file shares with the Azure portal](https://docs.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-portal) -* [Quickstart: Create and manage an Azure file share with Azure PowerShell](https://docs.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-powershell) -* [Quickstart: Create and manage Azure file shares using Azure CLI](https://docs.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-cli) -* [Quickstart: Create and manage Azure file shares with Azure Storage Explorer](https://docs.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-storage-explorer) - - - -" -C832EB23963C0F4BCDDB2DD45C8E67001AEE8F4F_8,C832EB23963C0F4BCDDB2DD45C8E67001AEE8F4F," Restriction - -Microsoft Azure's maximum file size is 1 TB. - -" -C832EB23963C0F4BCDDB2DD45C8E67001AEE8F4F_9,C832EB23963C0F4BCDDB2DD45C8E67001AEE8F4F," Supported file types - -The Azure File Storage connection supports these file types: Avro, CSV, Delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML. - -" -C832EB23963C0F4BCDDB2DD45C8E67001AEE8F4F_10,C832EB23963C0F4BCDDB2DD45C8E67001AEE8F4F," Known issue - -During the upload, the data is appended in portions to a temporary blob and then converted into the file. Depending on the size of the streamed content, there might be a delay in creating the file. Wait until all the data is uploaded. - -" -C832EB23963C0F4BCDDB2DD45C8E67001AEE8F4F_11,C832EB23963C0F4BCDDB2DD45C8E67001AEE8F4F," Learn more - -[Azure Files](https://azure.microsoft.com/en-us/services/storage/files/) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -4800C7E7C443EC310D775747125585F4671534FC_0,4800C7E7C443EC310D775747125585F4671534FC," Google BigQuery connection - -To access your data in Google BigQuery, you must create a connection asset for it. - -Google BigQuery is a fully managed, serverless data warehouse that enables scalable analysis over petabytes of data. - -" -4800C7E7C443EC310D775747125585F4671534FC_1,4800C7E7C443EC310D775747125585F4671534FC," Create a connection to Google BigQuery - -To create the connection asset, choose an authentication method. Choices include authentication with or without workload identity federation. - -" -4800C7E7C443EC310D775747125585F4671534FC_2,4800C7E7C443EC310D775747125585F4671534FC,"Without workload identity federation - - - -* Credentials: The contents of the Google service account key JSON file -* Client ID, Client secret, Access token, and Refresh token - - - -" -4800C7E7C443EC310D775747125585F4671534FC_3,4800C7E7C443EC310D775747125585F4671534FC,"With workload identity federation -You use an external identity provider (IdP) for authentication. An external identity provider uses Identity and Access Management (IAM) instead of service account keys. IAM provides increased security and centralized management. You can use workload identity federation authentication with an access token or with a token URL. - -You can configure a Google BigQuery connection for workload identity federation with any identity provider that complies with the OpenID Connect (OIDC) specification and that satisfies the Google Cloud requirements that are described in [Prepare your external IdP](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providersoidc). The requirements include: - - - -* The identity provider must support OpenID Connect 1.0. -* The identity provider's OIDC metadata and JWKS endpoints must be publicly accessible over the internet. Google Cloud uses these endpoints to download your identity provider's key set and uses that key set to validate tokens. -* The identity provider is configured so that your workload can obtain ID tokens that meet these criteria: - - - -* Tokens are signed with the RS256 or ES256 algorithm. -* Tokens contain an aud claim. - - - - - -For examples of the workload identity federation configuration steps and the Google BigQuery connection details for Amazon Web Services (AWS) and Microsoft Azure, see [Workload identity federation examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/wif-examples.html). - -" -4800C7E7C443EC310D775747125585F4671534FC_4,4800C7E7C443EC310D775747125585F4671534FC," Workload Identity Federation with access token connection details - - - -* Access token: An access token from the identity provider to connect to BigQuery. -* Security Token Service audience: The security token service audience that contains the project ID, pool ID, and provider ID. Use this format: - -//iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/providers/PROVIDER_ID - -For more information, see [Authenticate a workload using the REST API](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-cloudsrest). -* Service account email: The email address of the Google service account to be impersonated. For more information, see [Create a service account for the external workload](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-cloudscreate_a_service_account_for_the_external_workload). -* Service account token lifetime (optional): The lifetime in seconds of the service account access token. The default lifetime of a service account access token is one hour. For more information, see [URL-sourced credentials](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providersurl-sourced-credentials). -* Token format: Text or JSON with the Token field name for the name of the field in the JSON response that contains the token. -* Token field name: The name of the field in the JSON response that contains the token. This field appears only when the Token format is JSON. -* Token type: AWS Signature Version 4 request, Google OAuth 2.0 access token, ID token, JSON Web Token (JWT), or SAML 2.0. - - - -" -4800C7E7C443EC310D775747125585F4671534FC_5,4800C7E7C443EC310D775747125585F4671534FC," Workload Identity Federation with token URL connection details - - - -* Security Token Service audience: The security token service audience that contains the project ID, pool ID, and provider ID. Use this format: - -//iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/providers/PROVIDER_ID - -For more information, see [Authenticate a workload using the REST API](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-cloudsrest). -* Service account email: The email address of the Google service account to be impersonated. For more information, see [Create a service account for the external workload](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-cloudscreate_a_service_account_for_the_external_workload). -* Service account token lifetime (optional): The lifetime in seconds of the service account access token. The default lifetime of a service account access token is one hour. For more information, see [URL-sourced credentials](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providersurl-sourced-credentials). -* Token URL: The URL to retrieve a token. -* HTTP method: HTTP method to use for the token URL request: GET, POST, or PUT. -* Request body (for POST or PUT methods): The body of the HTTP request to retrieve a token. -* HTTP headers: HTTP headers for the token URL request in JSON or as a JSON body. Use format: ""Key1""=""Value1"",""Key2""=""Value2"". -* Token format: Text or JSON with the Token field name for the name of the field in the JSON response that contains the token. -* Token field name: The name of the field in the JSON response that contains the token. This field appears only when the Token format is JSON. -" -4800C7E7C443EC310D775747125585F4671534FC_6,4800C7E7C443EC310D775747125585F4671534FC,"* Token type: AWS Signature Version 4 request, Google OAuth 2.0 access token, ID token, JSON Web Token (JWT), or SAML 2.0. - - - -" -4800C7E7C443EC310D775747125585F4671534FC_7,4800C7E7C443EC310D775747125585F4671534FC," Other properties - -Project ID (optional) - -Output JSON string format: JSON string format for output values that are complex data types (for example, nested or repeated). - - - -* Pretty: Values are formatted before sending them to output. Use this option to visually read a few rows. -* Raw: (Default) No formatting. Use this option for the best performance. - - - -" -4800C7E7C443EC310D775747125585F4671534FC_8,4800C7E7C443EC310D775747125585F4671534FC," Permissions - -The connection to Google BigQuery requires the following BigQuery permissions: - - - -* bigquery.job.create -* bigquery.tables.get -* bigquery.tables.getData - - - -Use one of three ways to gain these permissions: - - - -* Use the predefined BigQuery Cloud IAM role bigquery.admin, which includes these permissions; -* Use a combination of two roles, one from each column in the following table; or -* Create a custom role. See [Create and manage custom roles](https://cloud.google.com/iam/docs/creating-custom-roles). - - - - - - First role Second role - - bigquery.dataEditor bigquery.jobUser - bigquery.dataOwner bigquery.user - bigquery.dataViewer - - - -For more information about permissions and roles in Google BigQuery, see [Predefined roles and permissions](https://cloud.google.com/bigquery/docs/access-control). - -" -4800C7E7C443EC310D775747125585F4671534FC_9,4800C7E7C443EC310D775747125585F4671534FC," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -4800C7E7C443EC310D775747125585F4671534FC_10,4800C7E7C443EC310D775747125585F4671534FC," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -4800C7E7C443EC310D775747125585F4671534FC_11,4800C7E7C443EC310D775747125585F4671534FC," Where you can use this connection - -You can use Google BigQuery connections in the following workspaces and tools: - -" -4800C7E7C443EC310D775747125585F4671534FC_12,4800C7E7C443EC310D775747125585F4671534FC,"Projects - - - -* Data Refinery -* SPSS Modeler -* Synthetic Data Generator - - - -" -4800C7E7C443EC310D775747125585F4671534FC_13,4800C7E7C443EC310D775747125585F4671534FC,"Catalogs - - - -* Platform assets catalog - - - -" -4800C7E7C443EC310D775747125585F4671534FC_14,4800C7E7C443EC310D775747125585F4671534FC," Google BigQuery setup - -[Quickstart by using the Cloud Console](https://cloud.google.com/bigquery/docs/quickstarts/quickstart-web-ui) - -" -4800C7E7C443EC310D775747125585F4671534FC_15,4800C7E7C443EC310D775747125585F4671534FC," Learn more - - - -* [Google BigQuery documentation](https://cloud.google.com/bigquery/docs) -* [Google BigQuery workload identity federation examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/wif-examples.html) - - - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -B481BDE61EEBA2CC6B2A0C1D9C43D8DD56AB2A08_0,B481BDE61EEBA2CC6B2A0C1D9C43D8DD56AB2A08," Box connection - -To access your data in Box, create a connection asset for it. - -The Box platform is a cloud content management and file sharing service. - -" -B481BDE61EEBA2CC6B2A0C1D9C43D8DD56AB2A08_1,B481BDE61EEBA2CC6B2A0C1D9C43D8DD56AB2A08," Prerequisite: Create a custom app in Box - -Before you create a connection to Box, you create a custom app in the Box Developer Console. You can create an app for application-level access that users can use to share files, or you can create an app for enterprise-wide access to all user accounts. With enterprise-wide access, users do not need to share files and folders with the application. - - - -1. Go to the [Box Developer Console](https://app.box.com/developers/console), and follow the wizard to create a Custom App. For the Authentication Method, select OAuth 2.0 with JWT (Server Authentication). -2. Make the following selections in the Configuration page. Otherwise, keep the default settings. - - - -1. Select one of two choices for App Access Level: - - - -* Keep the default App Access Only selection to allow access where users share files. - -* Select App + Enterprise Access to create an app with enterprise-wide access to all user accounts. - - - -2. Under Add and Manage Public Keys, click Generate a Public/Private Keypair. This selection requires that two-factor authentication is enabled on the Box account, but you can disable it afterward. The generated key pair produces a config (_config.json) file for you to download. You will need the information in this file to create the connection in your project. - - - -3. If you selected an App + Enterprise Access, under Advanced Features, select both of these check boxes: - - - -* Make API calls using the as-user header -* Generate user access tokens - - - -4. Submit the app client ID to the Box enterprise administrator for authorization: Go to your application in the [Box Developer Console](https://app.box.com/developers/console) and select the General link from the left sidebar in your application. Scroll down to the App Authorization section. - - - -" -B481BDE61EEBA2CC6B2A0C1D9C43D8DD56AB2A08_2,B481BDE61EEBA2CC6B2A0C1D9C43D8DD56AB2A08," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -B481BDE61EEBA2CC6B2A0C1D9C43D8DD56AB2A08_3,B481BDE61EEBA2CC6B2A0C1D9C43D8DD56AB2A08," Create the Box connection - -Enter the values from the downloaded config file for these settings: - - - -* Client ID -* Client Secret -* Enterprise ID -* Private Key (Replace each n with a newline) -* Private Key Password (The passphrase value in the config file) -* Public Key (The publicKeyID value in the config file) - - - -" -B481BDE61EEBA2CC6B2A0C1D9C43D8DD56AB2A08_4,B481BDE61EEBA2CC6B2A0C1D9C43D8DD56AB2A08," Enterprise-wide app - -If you configured an enterprise-wide access app, enter the username of the Box user account in the Username field. - -" -B481BDE61EEBA2CC6B2A0C1D9C43D8DD56AB2A08_5,B481BDE61EEBA2CC6B2A0C1D9C43D8DD56AB2A08," Application-level app - -Users must explicitly share their files with the app's email address in order for the app to access the files. - - - -1. Make a REST call to the connection to find out the app email address. For example: - -PUT https://api.dataplatform.cloud.ibm.com/v2/connections/{connection_id}/actions/get_user_info?project_id={project_id} - -Request body: - -{} - -Returns: - -{ -""login_name"": ""AutomationUser_123467_aBcDEFg12h@boxdevedition.com"" -} -2. Share the files and folders in Box that you want accessible from Watson Studio with the login name that was returned by the REST call. - - - -" -B481BDE61EEBA2CC6B2A0C1D9C43D8DD56AB2A08_6,B481BDE61EEBA2CC6B2A0C1D9C43D8DD56AB2A08," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -B481BDE61EEBA2CC6B2A0C1D9C43D8DD56AB2A08_7,B481BDE61EEBA2CC6B2A0C1D9C43D8DD56AB2A08," Where you can use this connection - -You can use the Box connection in the following workspaces and tools: - -" -B481BDE61EEBA2CC6B2A0C1D9C43D8DD56AB2A08_8,B481BDE61EEBA2CC6B2A0C1D9C43D8DD56AB2A08,"Projects - - - -* Data Refinery -* Synthetic Data Generator - - - -" -B481BDE61EEBA2CC6B2A0C1D9C43D8DD56AB2A08_9,B481BDE61EEBA2CC6B2A0C1D9C43D8DD56AB2A08,"Catalogs - - - -* Platform assets catalog - - - -" -B481BDE61EEBA2CC6B2A0C1D9C43D8DD56AB2A08_10,B481BDE61EEBA2CC6B2A0C1D9C43D8DD56AB2A08," Limitation - -If you have thousands of files in a Box folder, the connection might not be able to retrieve the files before a time-out. Jobs or profiling that use the Box files might not work. - -Workaround: Reorganize the file hierarchy in Box so that there are fewer files in the same folder. - -" -B481BDE61EEBA2CC6B2A0C1D9C43D8DD56AB2A08_11,B481BDE61EEBA2CC6B2A0C1D9C43D8DD56AB2A08," Supported file types - -The Box connection supports these file types: Avro, CSV, Delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML. - -" -B481BDE61EEBA2CC6B2A0C1D9C43D8DD56AB2A08_12,B481BDE61EEBA2CC6B2A0C1D9C43D8DD56AB2A08," Learn more - -[Managing custom apps](https://support.box.com/hc/articles/360044196653-Managing-custom-apps) - -Parent topic: [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -E03DD29F683C4F22A7084C9AB8F1488C380170F0_0,E03DD29F683C4F22A7084C9AB8F1488C380170F0," Apache Cassandra connection - -To access your data in Apache Cassandra, create a connection asset for it. - -Apache Cassandra is an open source, distributed, NoSQL database. - -" -E03DD29F683C4F22A7084C9AB8F1488C380170F0_1,E03DD29F683C4F22A7084C9AB8F1488C380170F0," Supported versions - -Apache Cassandra 2.0 or later - -" -E03DD29F683C4F22A7084C9AB8F1488C380170F0_2,E03DD29F683C4F22A7084C9AB8F1488C380170F0," Create a connection to Apache Cassandra - -To create the connection asset, you need these connection details: - - - -* Hostname or IP address -* Port number -* Keyspace -* Username and password -* SSL certificate (if required by the database server) - - - -" -E03DD29F683C4F22A7084C9AB8F1488C380170F0_3,E03DD29F683C4F22A7084C9AB8F1488C380170F0," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -E03DD29F683C4F22A7084C9AB8F1488C380170F0_4,E03DD29F683C4F22A7084C9AB8F1488C380170F0," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -E03DD29F683C4F22A7084C9AB8F1488C380170F0_5,E03DD29F683C4F22A7084C9AB8F1488C380170F0," Where you can use this connection - -You can use Apache Cassandra connections in the following workspaces and tools: - - - -* Data Refinery -* Decision Optimization -* Notebooks. Click Read data on the Code snippets pane to get the connection credentials and load the data into a data structure. See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.htmlconns). -* SPSS Modeler -* Synthetic Data Generator - - - -" -E03DD29F683C4F22A7084C9AB8F1488C380170F0_6,E03DD29F683C4F22A7084C9AB8F1488C380170F0,"Catalogs - - - -* Platform assets catalog - - - -" -E03DD29F683C4F22A7084C9AB8F1488C380170F0_7,E03DD29F683C4F22A7084C9AB8F1488C380170F0," Apache Cassandra setup - - - -* [Installing Cassandra](https://cassandra.apache.org/doc/latest/getting_started/installing.html) -* [Configuring Cassandra](https://cassandra.apache.org/doc/latest/getting_started/configuring.html) -* [CREATE KEYSPACE](https://cassandra.apache.org/doc/latest/cql/ddl.htmlcreate-keyspace) - - - -" -E03DD29F683C4F22A7084C9AB8F1488C380170F0_8,E03DD29F683C4F22A7084C9AB8F1488C380170F0," Learn more - - - -* [cassandra.apache.org](https://cassandra.apache.org/) -* [Cassandra Documentation](https://cassandra.apache.org/doc/latest/architecture/overview.html) - - - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -D5D7BD00BD17EFE339C848F46345F2192FDA5C11_0,D5D7BD00BD17EFE339C848F46345F2192FDA5C11," Google Cloud Storage connection - -To access your data in Google Cloud Storage, create a connection asset for it. - -Google Cloud Storage is an online file storage web service for storing and accessing data on Google Cloud Platform Infrastructure. - -" -D5D7BD00BD17EFE339C848F46345F2192FDA5C11_1,D5D7BD00BD17EFE339C848F46345F2192FDA5C11," Create a connection to Google Cloud Storage - -To create the connection asset, you need these connection details: - - - -* Project ID -* Credentials: The contents of the Google service account key JSON file -* Client ID and Client secret -* Access token -* Refresh token - - - -" -D5D7BD00BD17EFE339C848F46345F2192FDA5C11_2,D5D7BD00BD17EFE339C848F46345F2192FDA5C11," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -D5D7BD00BD17EFE339C848F46345F2192FDA5C11_3,D5D7BD00BD17EFE339C848F46345F2192FDA5C11," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -D5D7BD00BD17EFE339C848F46345F2192FDA5C11_4,D5D7BD00BD17EFE339C848F46345F2192FDA5C11," Where you can use this connection - -You can use Google Cloud Storage connections in the following workspaces and tools: - -" -D5D7BD00BD17EFE339C848F46345F2192FDA5C11_5,D5D7BD00BD17EFE339C848F46345F2192FDA5C11,"Projects - - - -* Data Refinery -* Decision Optimization -* SPSS Modeler -* Synthetic Data Generator - - - -" -D5D7BD00BD17EFE339C848F46345F2192FDA5C11_6,D5D7BD00BD17EFE339C848F46345F2192FDA5C11,"Catalogs - - - -* Platform assets catalog - - - -" -D5D7BD00BD17EFE339C848F46345F2192FDA5C11_7,D5D7BD00BD17EFE339C848F46345F2192FDA5C11," Supported file types - -The Google Cloud Storage connection supports these file types: Avro, CSV, Delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML. - -" -D5D7BD00BD17EFE339C848F46345F2192FDA5C11_8,D5D7BD00BD17EFE339C848F46345F2192FDA5C11," Learn more - -[Google Cloud Storage documentation](https://cloud.google.com/storage/docs/introduction) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -1D393A73EBC623578DD6DA2C09C20E97FAE074D4_0,1D393A73EBC623578DD6DA2C09C20E97FAE074D4," IBM Cloudant connection - -To access your data in IBM Cloudant, create a connection asset for it. - -Cloudant is a JSON document database available in IBM Cloud. - -" -1D393A73EBC623578DD6DA2C09C20E97FAE074D4_1,1D393A73EBC623578DD6DA2C09C20E97FAE074D4," Create a connection to Cloudant - -To create the connection asset, you need these connection details: - - - -* URL to the Cloudant database -* Database name -* Username and password - - - -" -1D393A73EBC623578DD6DA2C09C20E97FAE074D4_2,1D393A73EBC623578DD6DA2C09C20E97FAE074D4," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -1D393A73EBC623578DD6DA2C09C20E97FAE074D4_3,1D393A73EBC623578DD6DA2C09C20E97FAE074D4," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -1D393A73EBC623578DD6DA2C09C20E97FAE074D4_4,1D393A73EBC623578DD6DA2C09C20E97FAE074D4," Where you can use this connection - -You can use Cloudant connections in the following workspaces and tools: - -" -1D393A73EBC623578DD6DA2C09C20E97FAE074D4_5,1D393A73EBC623578DD6DA2C09C20E97FAE074D4,"Projects - - - -* Data Refinery -* Decision Optimization -* SPSS Modeler -* Synthetic Data Generator - - - -" -1D393A73EBC623578DD6DA2C09C20E97FAE074D4_6,1D393A73EBC623578DD6DA2C09C20E97FAE074D4,"Catalogs - - - -* Platform assets catalog - - - -" -1D393A73EBC623578DD6DA2C09C20E97FAE074D4_7,1D393A73EBC623578DD6DA2C09C20E97FAE074D4," Cloudant setup - -To set up the Cloudant database on IBM Cloud, see [Getting started with IBM Cloudant](https://cloud.ibm.com/docs/Cloudant?topic=Cloudant-getting-started-with-cloudant). -When you create your Cloudant service, for Authentication method, select IAM and legacy credentials. - -" -1D393A73EBC623578DD6DA2C09C20E97FAE074D4_8,1D393A73EBC623578DD6DA2C09C20E97FAE074D4," Restriction - -IBM Cloud Query (CQ) is not supported. - -" -1D393A73EBC623578DD6DA2C09C20E97FAE074D4_9,1D393A73EBC623578DD6DA2C09C20E97FAE074D4," Learn more - -[IBM Cloudant docs](https://cloud.ibm.com/docs/Cloudant) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -DEDC196B72E081228279B523BA3585AED93C2370_0,DEDC196B72E081228279B523BA3585AED93C2370," Cloudera Impala connection - -To access your data in Cloudera Impala, create a connection asset for it. - -Cloudera Impala provides SQL queries directly on your Apache Hadoop data stored in HDFS or HBase. - -" -DEDC196B72E081228279B523BA3585AED93C2370_1,DEDC196B72E081228279B523BA3585AED93C2370," Supported versions - -Cloudera Impala 1.3+ - -" -DEDC196B72E081228279B523BA3585AED93C2370_2,DEDC196B72E081228279B523BA3585AED93C2370," Create a connection to Cloudera Impala - -To create the connection asset, you need these connection details: - - - -* Database name -* Hostname or IP address -* Port number -* Username and password -* SSL certificate (if required by the database server) - - - -" -DEDC196B72E081228279B523BA3585AED93C2370_3,DEDC196B72E081228279B523BA3585AED93C2370," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -DEDC196B72E081228279B523BA3585AED93C2370_4,DEDC196B72E081228279B523BA3585AED93C2370," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -DEDC196B72E081228279B523BA3585AED93C2370_5,DEDC196B72E081228279B523BA3585AED93C2370," Where you can use this connection - -You can use Cloudera Impala connections in the following workspaces and tools: - -" -DEDC196B72E081228279B523BA3585AED93C2370_6,DEDC196B72E081228279B523BA3585AED93C2370,"Projects - - - -* Data Refinery -* SPSS Modeler -* Synthetic Data Generator - - - -" -DEDC196B72E081228279B523BA3585AED93C2370_7,DEDC196B72E081228279B523BA3585AED93C2370,"Catalogs - - - -* Platform assets catalog - - - -" -DEDC196B72E081228279B523BA3585AED93C2370_8,DEDC196B72E081228279B523BA3585AED93C2370," Cloudera Impala setup - -[Cloudera Impala installation](https://docs.cloudera.com/documentation/enterprise/5-14-x/topics/cm_ig_install_impala.html) - -" -DEDC196B72E081228279B523BA3585AED93C2370_9,DEDC196B72E081228279B523BA3585AED93C2370," Restriction - -You can use this connection only for source data. You cannot write to data or export data with this connection. - -" -DEDC196B72E081228279B523BA3585AED93C2370_10,DEDC196B72E081228279B523BA3585AED93C2370," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the [Impala SQL Language Reference](https://docs.cloudera.com/documentation/enterprise/5-5-x/topics/impala_langref.html) for the correct syntax. - -" -DEDC196B72E081228279B523BA3585AED93C2370_11,DEDC196B72E081228279B523BA3585AED93C2370," Learn more - -[Cloudera Impala documentation](https://docs.cloudera.com/documentation/enterprise/5-5-x/topics/impala.html) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -981BCFC7F5817524CB8E0C9FE04E3F267A268926_0,981BCFC7F5817524CB8E0C9FE04E3F267A268926," IBM Cognos Analytics connection - -To access your data in Cognos Analytics, create a connection asset for it. - -Cognos Analytics is an AI-fueled business intelligence platform that supports the entire analytics cycle, from discovery to operationalization. - -" -981BCFC7F5817524CB8E0C9FE04E3F267A268926_1,981BCFC7F5817524CB8E0C9FE04E3F267A268926," Supported versions - -IBM Cognos Analytics 11 - -" -981BCFC7F5817524CB8E0C9FE04E3F267A268926_2,981BCFC7F5817524CB8E0C9FE04E3F267A268926," Supported content types - - - -* Report (except Reports that require prompts) -* Query - - - -" -981BCFC7F5817524CB8E0C9FE04E3F267A268926_3,981BCFC7F5817524CB8E0C9FE04E3F267A268926," Create a connection to Cognos Analytics - -To create the connection asset, you need these connection details: - - - -* Gateway URL -* SSL certificate (if required by the database server) - - - -" -981BCFC7F5817524CB8E0C9FE04E3F267A268926_4,981BCFC7F5817524CB8E0C9FE04E3F267A268926," Credentials - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -981BCFC7F5817524CB8E0C9FE04E3F267A268926_5,981BCFC7F5817524CB8E0C9FE04E3F267A268926," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -981BCFC7F5817524CB8E0C9FE04E3F267A268926_6,981BCFC7F5817524CB8E0C9FE04E3F267A268926," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -981BCFC7F5817524CB8E0C9FE04E3F267A268926_7,981BCFC7F5817524CB8E0C9FE04E3F267A268926," Where you can use this connection - -You can use Cognos Analytics connections in the following workspaces and tools: - - - -* Data Refinery -* Notebooks. Click Read data on the Code snippets pane to get the connection credentials and load the data into a data structure. See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.htmlconns). -* SPSS Modeler -* Synthetic Data Generator - - - -" -981BCFC7F5817524CB8E0C9FE04E3F267A268926_8,981BCFC7F5817524CB8E0C9FE04E3F267A268926,"Catalogs - - - -* Platform assets catalog - - - -" -981BCFC7F5817524CB8E0C9FE04E3F267A268926_9,981BCFC7F5817524CB8E0C9FE04E3F267A268926," Cognos Analytics setup - -Instructions for setting up Cognos Analytics: [Getting started in Cognos Analytics](https://www.ibm.com/support/knowledgecenter/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ca_gtstd.doc/c_gtstd_ica_overview.html). - -" -981BCFC7F5817524CB8E0C9FE04E3F267A268926_10,981BCFC7F5817524CB8E0C9FE04E3F267A268926," Restrictions - - - -* You can use this connection only for source data. You cannot write to data or export data with this connection. -* Notebooks: Self-signed certificates are not supported for notebooks. The SSL certificate that is imported into the Cognos Analytics server must be signed by a trusted root authority. To confirm that the certificate is signed by a trusted root authority, enter the Cognos Analytics URL into a browser and verify that there is a padlock to the left of the URL. If the certificate is self-signed, the Cognos Analytics server administrator must replace it with a trusted TLS certificate. - - - -" -981BCFC7F5817524CB8E0C9FE04E3F267A268926_11,981BCFC7F5817524CB8E0C9FE04E3F267A268926," Running SQL statements - -To ensure that your SQL statements run correctly, refer to [Working with Queries in SQL](https://www.ibm.com/docs/SSEP7J_11.2.0/com.ibm.swg.ba.cognos.ug_cr_rptstd.doc/c_cr_rptstd_wrkdat_working_with_sql_mdx_rel.html) in the Cognos Analytics documentation for the correct syntax. - -" -981BCFC7F5817524CB8E0C9FE04E3F267A268926_12,981BCFC7F5817524CB8E0C9FE04E3F267A268926," Learn more - -[Cognos Analytics documentation](https://www.ibm.com/docs/cognos-analytics/11.0.0) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -C119B8D62C156451A8B8665E8969422803527DF3_0,C119B8D62C156451A8B8665E8969422803527DF3," IBM Cloud Databases for MySQL connection - -To access your data in IBM Cloud Databases for MySQL, create a connection asset for it. - -IBM Cloud Databases for MySQL extends the capabilities of MySQL by offering an auto-scaling deployment system managed on IBM Cloud that delivers high availability, redundancy, and automated backups. IBM Cloud Databases for MySQL was formerly known as IBM Cloud Compose for MySQL. - -" -C119B8D62C156451A8B8665E8969422803527DF3_1,C119B8D62C156451A8B8665E8969422803527DF3," Create a connection to IBM Cloud Databases for MySQL - -To create the connection asset, you need these connection details: - - - -* Database name -* Hostname or IP address -* Port number -* Username and password - - - -" -C119B8D62C156451A8B8665E8969422803527DF3_2,C119B8D62C156451A8B8665E8969422803527DF3," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -C119B8D62C156451A8B8665E8969422803527DF3_3,C119B8D62C156451A8B8665E8969422803527DF3," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -C119B8D62C156451A8B8665E8969422803527DF3_4,C119B8D62C156451A8B8665E8969422803527DF3," Where you can use this connection - -You can use IBM Cloud Databases for MySQL connections in the following workspaces and tools: - -" -C119B8D62C156451A8B8665E8969422803527DF3_5,C119B8D62C156451A8B8665E8969422803527DF3,"Projects - - - -* SPSS Modeler -* Synthetic Data Generator - - - -" -C119B8D62C156451A8B8665E8969422803527DF3_6,C119B8D62C156451A8B8665E8969422803527DF3,"Catalogs - - - -* Platform assets catalog - - - -" -C119B8D62C156451A8B8665E8969422803527DF3_7,C119B8D62C156451A8B8665E8969422803527DF3," IBM Cloud Databases for MySQL setup - -[IBM Cloud Databases for MySQL](https://cloud.ibm.com/catalog/services/compose-for-mysql) - -" -C119B8D62C156451A8B8665E8969422803527DF3_8,C119B8D62C156451A8B8665E8969422803527DF3," Restriction - -For SPSS Modeler, you can use this connection only to import data. You cannot export data to this connection or to an IBM Cloud Databases for MySQL connected data asset. - -" -C119B8D62C156451A8B8665E8969422803527DF3_9,C119B8D62C156451A8B8665E8969422803527DF3," Learn more - -[IBM Cloud Databases for MySQL Help](https://help.compose.com/docs/mysql-compose-for-mysqlsection-compose-for-mysql-for-all) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -CD1C58AD8E180AB922890FA8182FF7F51C589962_0,CD1C58AD8E180AB922890FA8182FF7F51C589962," IBM Cloud Object Storage (infrastructure) connection - -To access your data in IBM Cloud Object Storage (infrastructure), create a connection asset for it. - -The Cloud Object Storage (infrastructure) connection is for object storage that was formerly on SoftLayer. SoftLayer was replaced by IBM Cloud. You cannot provision a new instance for Cloud Object Storage (infrastructure). This connection is for users who set up an earlier instance on SoftLayer. - -" -CD1C58AD8E180AB922890FA8182FF7F51C589962_1,CD1C58AD8E180AB922890FA8182FF7F51C589962," Create a connection to Cloud Object Storage (infrastructure) - -To create the connection asset, you need this information. - -" -CD1C58AD8E180AB922890FA8182FF7F51C589962_2,CD1C58AD8E180AB922890FA8182FF7F51C589962," Required connection values - -The Login URL is required, plus one of the following values for authentication: - - - -* Access Key and Secret Key -* Credentials If you plan to use the S3 API, you must enter an Access Key. - - - -" -CD1C58AD8E180AB922890FA8182FF7F51C589962_3,CD1C58AD8E180AB922890FA8182FF7F51C589962," Connection values in the Cloud Object Storage Resource list - -The values for these fields are found in the Cloud Object Storage Resource list. - -To find the Login URL: - - - -1. Go to the Cloud Object Storage Resource list at [https://cloud.ibm.com/resources](https://cloud.ibm.com/resources). -2. Expand the Storage resource. -3. Click the Cloud Object Storage service. From the menu, select Endpoints. -4. Copy the value of the public endpoint that is in the same region as the bucket that you want to use. - - - -To find the values for Access key and the Secret Key: - - - -1. Go to the Cloud Object Storage Resource list at [https://cloud.ibm.com/resources](https://cloud.ibm.com/resources). -2. Expand the Storage resource. -3. Click the Cloud Object Storage service, and then click the Service credentials tab. -4. Expand the Key name that you want to use. Copy the values without the quotation marks: -5. Access Key: access_key_id -6. Secret Key: secret_access_key - -Note: Alternatively, you can use the contents of the JSON file in Credentials to copy the values for the Access Key and Secret Key. - - - -To find the Credentials: - - - -1. Go to the Cloud Object Storage Resource list at [https://cloud.ibm.com/resources](https://cloud.ibm.com/resources). -2. Expand the Storage resource. -3. Click the Cloud Object Storage service, and then click the Service credentials tab. -4. Expand the Key name that you want to use. -5. Copy the entire JSON file. Include the opening and closing braces { } symbols. - - - -For Certificates -(Optional) Enter the self-signed SSL certificate that was created by a tool such as OpenSSL. - -" -CD1C58AD8E180AB922890FA8182FF7F51C589962_4,CD1C58AD8E180AB922890FA8182FF7F51C589962," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -CD1C58AD8E180AB922890FA8182FF7F51C589962_5,CD1C58AD8E180AB922890FA8182FF7F51C589962," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -CD1C58AD8E180AB922890FA8182FF7F51C589962_6,CD1C58AD8E180AB922890FA8182FF7F51C589962," Where you can use this connection - -" -CD1C58AD8E180AB922890FA8182FF7F51C589962_7,CD1C58AD8E180AB922890FA8182FF7F51C589962,"Projects - - - -* Decision Optimization -* SPSS Modeler -* Synthetic Data Generator - - - -" -CD1C58AD8E180AB922890FA8182FF7F51C589962_8,CD1C58AD8E180AB922890FA8182FF7F51C589962,"Catalogs - - - -* Platform assets catalog - - - -" -CD1C58AD8E180AB922890FA8182FF7F51C589962_9,CD1C58AD8E180AB922890FA8182FF7F51C589962," Supported file types - -The Cloud Object Storage (infrastructure) connection supports these file types: Avro, CSV, Delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML. - -" -CD1C58AD8E180AB922890FA8182FF7F51C589962_10,CD1C58AD8E180AB922890FA8182FF7F51C589962," Learn more - -Related connection: [IBM Cloud Object Storage connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -EA3E8B5797EE8A3FD8AA34EA9A1EEB6D81B6A779_0,EA3E8B5797EE8A3FD8AA34EA9A1EEB6D81B6A779," IBM Cloud Object Storage connection - -To access your data in IBM Cloud Object Storage (COS), create a connection asset for it. - -IBM Cloud Object Storage on IBM Cloud provides unstructured data storage for cloud applications. Cloud Object Storage offers S3 API and application binding with regional and cross-regional resiliency. - -" -EA3E8B5797EE8A3FD8AA34EA9A1EEB6D81B6A779_1,EA3E8B5797EE8A3FD8AA34EA9A1EEB6D81B6A779," Create a connection to IBM Cloud Object Storage - -To create the connection asset, you need these connection details: - - - -* Bucket name. (Optional. If you do not enter the bucket name, then the credentials must have permission to list all the buckets.) -* Login URL. To find the Login URL: - - - -1. Go to the Cloud Object Storage Resource list at [https://cloud.ibm.com/resources](https://cloud.ibm.com/resources). -2. Expand the Storage resource. -3. Click the Cloud Object Storage service. From the menu, select Endpoints. -4. Optional: Use the Select resiliency and Select location menus to filter the choices. -5. Copy the value of the public endpoint that is in the same region as the bucket that you want to use. - - - -* SSL certificate: (Optional). A self-signed certificate that was created by a tool such as OpenSSL. - - - -" -EA3E8B5797EE8A3FD8AA34EA9A1EEB6D81B6A779_2,EA3E8B5797EE8A3FD8AA34EA9A1EEB6D81B6A779," Credentials - -Use one of the following combination of values for authentication: - - - -* Service credentials - - - - - -* Resource instance ID and API key - - - - - -* Resource instance ID, API key, Access key, and Secret key (In this combination, the Resource instance ID and API key are used for authentication. The Access key and Secret key are stored.) -* Access key and Secret key - - - -To find the value for Service credentials: - - - -1. Go to the Cloud Object Storage Resource list at [https://cloud.ibm.com/resources](https://cloud.ibm.com/resources). -2. Expand the Storage resource. -3. Click the Cloud Object Storage service, and then click the Service credentials tab. -4. Expand the Key name that you want to use. -5. Copy the entire JSON file. Include the opening and closing braces { } symbols. - - - -To find the values for the API key, Access key, Secret key, and the Resource instance ID: - - - -1. Go to the Cloud Object Storage Resource list at [https://cloud.ibm.com/resources](https://cloud.ibm.com/resources). -2. Expand the Storage resource. -3. Click the Cloud Object Storage service, and then click the Service credentials tab. -4. Expand the Key name that you want to use. Copy the values without the quotation marks: -5. API key: apikey -6. Access key: access_key_id -7. Secret key: secret_access_key -8. Resource instance ID: resource_instance_id - - - -" -EA3E8B5797EE8A3FD8AA34EA9A1EEB6D81B6A779_3,EA3E8B5797EE8A3FD8AA34EA9A1EEB6D81B6A779," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -EA3E8B5797EE8A3FD8AA34EA9A1EEB6D81B6A779_4,EA3E8B5797EE8A3FD8AA34EA9A1EEB6D81B6A779," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -EA3E8B5797EE8A3FD8AA34EA9A1EEB6D81B6A779_5,EA3E8B5797EE8A3FD8AA34EA9A1EEB6D81B6A779," Where you can use this connection - -You can use IBM Cloud Object Storage connections in the following workspaces and tools: - -" -EA3E8B5797EE8A3FD8AA34EA9A1EEB6D81B6A779_6,EA3E8B5797EE8A3FD8AA34EA9A1EEB6D81B6A779,"Projects - - - -* AutoAI -* Data Refinery -* Decision Optimization -* SPSS Modeler -* Synthetic Data Generator - - - -" -EA3E8B5797EE8A3FD8AA34EA9A1EEB6D81B6A779_7,EA3E8B5797EE8A3FD8AA34EA9A1EEB6D81B6A779,"Catalogs - - - -* Platform assets catalog - - - -" -EA3E8B5797EE8A3FD8AA34EA9A1EEB6D81B6A779_8,EA3E8B5797EE8A3FD8AA34EA9A1EEB6D81B6A779," Connecting to the Cloud Object Storage service with the S3 API - -To connect to Cloud Object Storage with the S3 API, you need the Login URL, an Access key and a Secret key. - -The API key is a token that is used to call the Watson IoT Platform HTTP APIs. Users are assigned roles and they can generate an API key that they can use to authorize calls to API endpoints. For more information, see the [IBM Cloud Object Storage S3 API documentation](https://cloud.ibm.com/docs/cloud-object-storage/api-reference?topic=cloud-object-storage-compatibility-api). - -" -EA3E8B5797EE8A3FD8AA34EA9A1EEB6D81B6A779_9,EA3E8B5797EE8A3FD8AA34EA9A1EEB6D81B6A779," IBM Cloud Object Storage setup - -[Getting started with IBM Cloud Object Storage](https://cloud.ibm.com/docs/cloud-object-storage) - -" -EA3E8B5797EE8A3FD8AA34EA9A1EEB6D81B6A779_10,EA3E8B5797EE8A3FD8AA34EA9A1EEB6D81B6A779," Supported file types - -The IBM Cloud Object Storage connection supports these file types: Avro, CSV, Delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML. - -" -EA3E8B5797EE8A3FD8AA34EA9A1EEB6D81B6A779_11,EA3E8B5797EE8A3FD8AA34EA9A1EEB6D81B6A779," Learn more - -[Controlling access to COS buckets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/cos_buckets.html) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -2BD7112457A8B916F3B4701580570C85AE1B520E_0,2BD7112457A8B916F3B4701580570C85AE1B520E," Microsoft Azure Cosmos DB connection - -To access your data in Microsoft Azure Cosmos DB, create a connection asset for it. - -Azure Cosmos DB is a fully managed NoSQL database service. - -" -2BD7112457A8B916F3B4701580570C85AE1B520E_1,2BD7112457A8B916F3B4701580570C85AE1B520E," Create a connection to Microsoft Azure Cosmos DB - -To create the connection asset, you need these connection details: - - - -* Hostname -* Port number -* Master key: The Azure Cosmos Database primary read-write key - - - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -2BD7112457A8B916F3B4701580570C85AE1B520E_2,2BD7112457A8B916F3B4701580570C85AE1B520E," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -2BD7112457A8B916F3B4701580570C85AE1B520E_3,2BD7112457A8B916F3B4701580570C85AE1B520E," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -2BD7112457A8B916F3B4701580570C85AE1B520E_4,2BD7112457A8B916F3B4701580570C85AE1B520E," Where you can use this connection - -You can use Microsoft Azure Cosmos DB connections in the following workspaces and tools: - -" -2BD7112457A8B916F3B4701580570C85AE1B520E_5,2BD7112457A8B916F3B4701580570C85AE1B520E,"Projects - - - -* Data Refinery -* Decision Optimization -* Notebooks. Click Read data on the Code snippets pane to get the connection credentials and load the data into a data structure. See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.htmlconns). -* SPSS Modeler -* Synthetic Data Generator - - - -" -2BD7112457A8B916F3B4701580570C85AE1B520E_6,2BD7112457A8B916F3B4701580570C85AE1B520E,"Catalogs - - - -* Platform assets catalog - - - -" -2BD7112457A8B916F3B4701580570C85AE1B520E_7,2BD7112457A8B916F3B4701580570C85AE1B520E," Azure Cosmos DB setup - - - -* Set up Azure Cosmos DB: [Azure portal](https://docs.microsoft.com/en-us/azure/cosmos-db/create-cosmosdb-resources-portal) -* Secure access to data in Azure Cosmos DB: [Master keys](https://docs.microsoft.com/en-us/azure/cosmos-db/secure-access-to-datamaster-keys) - - - -" -2BD7112457A8B916F3B4701580570C85AE1B520E_8,2BD7112457A8B916F3B4701580570C85AE1B520E," Restrictions - -Only the Core (SQL) API is supported. - -" -2BD7112457A8B916F3B4701580570C85AE1B520E_9,2BD7112457A8B916F3B4701580570C85AE1B520E," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the [Azure Cosmos DB documentation](https://docs.microsoft.com/azure/cosmos-db/) for the correct syntax. - -" -2BD7112457A8B916F3B4701580570C85AE1B520E_10,2BD7112457A8B916F3B4701580570C85AE1B520E," Learn more - -[Azure Cosmos DB](https://azure.microsoft.com/en-us/services/cosmos-db/) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -D377DA7CF67645F321593FA8B1536BE2F0753333_0,D377DA7CF67645F321593FA8B1536BE2F0753333," IBM Watson Query connection - -To access your data in Watson Query, create a connection asset for it. A Watson Query connection is created automatically in a catalog or project when you publish a virtual object to a catalog or assign it to a project. The Watson Query connection was formerly named the Data Virtualization connection. - -Watson Query integrates data sources across multiple types and locations and turns all this data into one logical data view. This virtual data view makes the job of getting value out of your data easy. - -" -D377DA7CF67645F321593FA8B1536BE2F0753333_1,D377DA7CF67645F321593FA8B1536BE2F0753333," Create a Watson Query connection - -To create the connection asset, you need these connection details: - - - -* Database name -* Hostname or IP address of the database -* Port number -* Instance ID -* [Credentials information](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-data-virtual.html?context=cdpaas&locale=encreds) -* Application name (optional): The name of the application that is currently using the connection. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html). -* Client accounting information (optional): The value of the accounting string from the client information that is specified for the connection. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html). -* Client hostname (optional): The hostname of the machine on which the application that is using the connection is running. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html). -* Client user (optional): The name of the user on whose behalf the application that is using the connection is running. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html). -* SSL certificate (if required by the database server) - - - -" -D377DA7CF67645F321593FA8B1536BE2F0753333_2,D377DA7CF67645F321593FA8B1536BE2F0753333," Credentials - -Connecting to a Watson Query instance in IBM Cloud - - - -* API key: Enter an IAM API key. Prerequisites: - - - -1. Add the user ID as an IAM user or as a service ID to your IBM Cloud account. For instructions, see the Console user experience section of the [Identity and access management (IAM) on IBM Cloud](https://cloud.ibm.com/docs/Db2whc?topic=Db2whc-iamconsole-ux) topic. -2. The Watson Query Manager of the Watson Query instance must add IAM users by selecting Data > Watson Query > Administration > User management from the IBM watsonx navigation menu. - - - - - -Connecting to a Watson Query instance in Cloud Pak for Data (on-prem) - - - -* User credentials: Enter your Cloud Pak for Data username and password. -* API key: Enter an API key value with your Cloud Pak for Data username and a Cloud Pak for Data API key. Use this syntax: user_name:api_key - - - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -D377DA7CF67645F321593FA8B1536BE2F0753333_3,D377DA7CF67645F321593FA8B1536BE2F0753333," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -D377DA7CF67645F321593FA8B1536BE2F0753333_4,D377DA7CF67645F321593FA8B1536BE2F0753333," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -D377DA7CF67645F321593FA8B1536BE2F0753333_5,D377DA7CF67645F321593FA8B1536BE2F0753333," Where you can use this connection - -You can use Watson Query connections in the following workspaces and tools: - -" -D377DA7CF67645F321593FA8B1536BE2F0753333_6,D377DA7CF67645F321593FA8B1536BE2F0753333,"Projects - - - -* Data Refinery -* SPSS Modeler -* Synthetic Data Generator - - - -" -D377DA7CF67645F321593FA8B1536BE2F0753333_7,D377DA7CF67645F321593FA8B1536BE2F0753333,"Catalogs - - - -* Platform assets catalog - - - -" -D377DA7CF67645F321593FA8B1536BE2F0753333_8,D377DA7CF67645F321593FA8B1536BE2F0753333," Setup - -[Getting started with Watson Query ](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-virtualize.html) - -" -D377DA7CF67645F321593FA8B1536BE2F0753333_9,D377DA7CF67645F321593FA8B1536BE2F0753333," Restriction - -You can use this connection only for source data. You cannot write to data with this connection. - -" -D377DA7CF67645F321593FA8B1536BE2F0753333_10,D377DA7CF67645F321593FA8B1536BE2F0753333," Learn more - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -EBA93B500AF79FE9BC96FF2FFC71078766532A86_0,EBA93B500AF79FE9BC96FF2FFC71078766532A86," IBM Cloud Databases for DataStax connection - -To access your data in IBM Cloud Databases for DataStax, create a connection asset for it. - -Important: The IBM Cloud Databases for DataStax connector is deprecated and will be discontinued in a future release. - -IBM Cloud Databases for DataStax is a scale-out NoSQL database in IBM Cloud that is built on Apache Cassandra. It’s designed to power real-time applications with high availability and massive scalability. - -" -EBA93B500AF79FE9BC96FF2FFC71078766532A86_1,EBA93B500AF79FE9BC96FF2FFC71078766532A86," Create a connection to IBM Cloud Databases for DataStax - -To create the connection asset, you need these connection details: - - - -* Host name -* Port number -* Username and password -* Keyspace -* SSL certificate (if required by the database server) - - - -Recommended values to insert into ""SSL certificate"", ""Key certificate"" and ""Private key"" fields can be found in secure-connect-bundle.zip. It can be downloaded from the Databases for DataStax instance (tab Overview). After downloading secure connect bundle, unzip it, and you'll find the following: - - - -* SSL certificate property: contents of ca.crt -* Private key property: contents of key - - - -In order to paste contents of key into Private key property, it has to be parsed to one-line. It can be done for example by executing following in the console: tr -d 'n' < key. The output from this command can be put into Private key property. - - - -* Key certificate property: contents of cert - - - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -EBA93B500AF79FE9BC96FF2FFC71078766532A86_2,EBA93B500AF79FE9BC96FF2FFC71078766532A86," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -EBA93B500AF79FE9BC96FF2FFC71078766532A86_3,EBA93B500AF79FE9BC96FF2FFC71078766532A86," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -EBA93B500AF79FE9BC96FF2FFC71078766532A86_4,EBA93B500AF79FE9BC96FF2FFC71078766532A86," Where you can use this connection - -" -EBA93B500AF79FE9BC96FF2FFC71078766532A86_5,EBA93B500AF79FE9BC96FF2FFC71078766532A86,"Projects - - - -* Data Refinery -* Decision Optimization -* SPSS Modeler -* Synthetic Data Generator - - - -" -EBA93B500AF79FE9BC96FF2FFC71078766532A86_6,EBA93B500AF79FE9BC96FF2FFC71078766532A86,"Catalogs - - - -* Platform assets catalog - - - -" -EBA93B500AF79FE9BC96FF2FFC71078766532A86_7,EBA93B500AF79FE9BC96FF2FFC71078766532A86," IBM Cloud Databases for DataStax setup - -[Getting Started with IBM Cloud Databases for DataStax](https://cloud.ibm.com/docs/databases-for-cassandra) - -" -EBA93B500AF79FE9BC96FF2FFC71078766532A86_8,EBA93B500AF79FE9BC96FF2FFC71078766532A86," Learn more - -[IBM Cloud Databases for DataStax Documentation](https://cloud.ibm.com/databases/databases-for-cassandra/create) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -323360F59C9E5C3D6BE4B7CD36927C23C0DFA268_0,323360F59C9E5C3D6BE4B7CD36927C23C0DFA268," IBM Data Virtualization Manager for z/OS connection - -To access your data in Data Virtualization Manager for z/OS, create a connection asset for it. - -Use the Data Virtualization Manager for z/OS connection to access data in your z/OS mainframe environment. - -" -323360F59C9E5C3D6BE4B7CD36927C23C0DFA268_1,323360F59C9E5C3D6BE4B7CD36927C23C0DFA268," Supported versions - -IBM Data Virtualization Manager for z/OS 1.1.0 - -" -323360F59C9E5C3D6BE4B7CD36927C23C0DFA268_2,323360F59C9E5C3D6BE4B7CD36927C23C0DFA268," Create a connection to Data Virtualization Manager for z/OS - -To create the connection asset, you need these connection details: - - - -* Hostname or IP address -* Port number -* Username and password -* SSL certificate (if required by the database server) - - - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -323360F59C9E5C3D6BE4B7CD36927C23C0DFA268_3,323360F59C9E5C3D6BE4B7CD36927C23C0DFA268," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -323360F59C9E5C3D6BE4B7CD36927C23C0DFA268_4,323360F59C9E5C3D6BE4B7CD36927C23C0DFA268," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -323360F59C9E5C3D6BE4B7CD36927C23C0DFA268_5,323360F59C9E5C3D6BE4B7CD36927C23C0DFA268," Where you can use this connection - -You can use Data Virtualization Manager for z/OS connections in the following workspaces and tools: - - - -* Data Refinery -* SPSS Modeler -* Synthetic Data Generator - - - -" -323360F59C9E5C3D6BE4B7CD36927C23C0DFA268_6,323360F59C9E5C3D6BE4B7CD36927C23C0DFA268,"Catalogs - - - -* Platform assets catalog - - - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -456A228A64C2AD85E66F1FE0DE558B4A426B197C_0,456A228A64C2AD85E66F1FE0DE558B4A426B197C," IBM Db2 Big SQL connection - -To access your data in IBM Db2 Big SQL, create a connection asset for it. - -IBM Db2 Big SQL is a high performance massively parallel processing (MPP) SQL engine for Hadoop that makes querying enterprise data from across the organization an easy and secure experience. - -" -456A228A64C2AD85E66F1FE0DE558B4A426B197C_1,456A228A64C2AD85E66F1FE0DE558B4A426B197C," Supported versions - -Db2 Big SQL for Version 4.1+ - -" -456A228A64C2AD85E66F1FE0DE558B4A426B197C_2,456A228A64C2AD85E66F1FE0DE558B4A426B197C," Create a connection to IBM Db2 Big SQL - -To create the connection asset, you need these connection details: - - - -* Database name -* Hostname or IP address -* Port number -* Username and password -* SSL certificate (if required by the database server) - - - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -456A228A64C2AD85E66F1FE0DE558B4A426B197C_3,456A228A64C2AD85E66F1FE0DE558B4A426B197C," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -456A228A64C2AD85E66F1FE0DE558B4A426B197C_4,456A228A64C2AD85E66F1FE0DE558B4A426B197C," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -456A228A64C2AD85E66F1FE0DE558B4A426B197C_5,456A228A64C2AD85E66F1FE0DE558B4A426B197C," Where you can use this connection - -You can use IBM Db2 Big SQL connections in the following workspaces and tools: - -" -456A228A64C2AD85E66F1FE0DE558B4A426B197C_6,456A228A64C2AD85E66F1FE0DE558B4A426B197C,"Projects - - - -* Data Refinery -* Decision Optimization -* SPSS Modeler -* Synthetic Data Generator - - - -" -456A228A64C2AD85E66F1FE0DE558B4A426B197C_7,456A228A64C2AD85E66F1FE0DE558B4A426B197C,"Catalogs - - - -* Platform assets catalog - - - -" -456A228A64C2AD85E66F1FE0DE558B4A426B197C_8,456A228A64C2AD85E66F1FE0DE558B4A426B197C," IBM Db2 Big SQL setup - -[Installing IBM Db2 Big SQL](https://www.ibm.com/docs/SSCRJT_5.0.3/com.ibm.swg.im.bigsql.doc/doc/hdp_bigsql_versions.html) - -" -456A228A64C2AD85E66F1FE0DE558B4A426B197C_9,456A228A64C2AD85E66F1FE0DE558B4A426B197C," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the [ IBM Db2 Big SQL documentation](https://www.ibm.com/docs/SSCRJT_5.0.4/com.ibm.swg.im.bigsql.doc/doc/bi_sql_access.html) for the correct syntax. - -" -456A228A64C2AD85E66F1FE0DE558B4A426B197C_10,456A228A64C2AD85E66F1FE0DE558B4A426B197C," Learn more - -[Db2 Big SQL documentation](https://www.ibm.com/docs/en/db2-big-sql/5.0.3) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -8C953B2300B547AD82EDD9697CAB8A5985F5EAD3_0,8C953B2300B547AD82EDD9697CAB8A5985F5EAD3," IBM Db2 on Cloud connection - -To access your data in IBM Db2 on Cloud, you must create a connection asset for it. - -Db2 on Cloud is an SQL database that is managed by IBM Cloud and is provisioned for you in the cloud. - -" -8C953B2300B547AD82EDD9697CAB8A5985F5EAD3_1,8C953B2300B547AD82EDD9697CAB8A5985F5EAD3," Create a connection to Db2 on Cloud - -To create the connection asset, you need the following connection details: - - - -* Database name -* Hostname or IP address -* Port number -* Username and password - - - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -8C953B2300B547AD82EDD9697CAB8A5985F5EAD3_2,8C953B2300B547AD82EDD9697CAB8A5985F5EAD3," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -8C953B2300B547AD82EDD9697CAB8A5985F5EAD3_3,8C953B2300B547AD82EDD9697CAB8A5985F5EAD3," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -8C953B2300B547AD82EDD9697CAB8A5985F5EAD3_4,8C953B2300B547AD82EDD9697CAB8A5985F5EAD3," Where you can use this connection - -You can use Db2 on Cloud connections in the following workspaces and tools: - -" -8C953B2300B547AD82EDD9697CAB8A5985F5EAD3_5,8C953B2300B547AD82EDD9697CAB8A5985F5EAD3,"Projects - - - -* Data Refinery -* Decision Optimization -* Notebooks. Click Read data on the Code snippets pane to get the connection credentials and load the data into a data structure. See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.htmlconns). -* SPSS Modeler -* Synthetic Data Generator - - - -" -8C953B2300B547AD82EDD9697CAB8A5985F5EAD3_6,8C953B2300B547AD82EDD9697CAB8A5985F5EAD3,"Catalogs - - - -* Platform assets catalog - - - -" -8C953B2300B547AD82EDD9697CAB8A5985F5EAD3_7,8C953B2300B547AD82EDD9697CAB8A5985F5EAD3," Db2 on Cloud setup - -[Getting started with Db2 on Cloud](https://cloud.ibm.com/docs/Db2onCloud?topic=Db2onCloud-getting-started) - -" -8C953B2300B547AD82EDD9697CAB8A5985F5EAD3_8,8C953B2300B547AD82EDD9697CAB8A5985F5EAD3," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the [ Structured Query Language (SQL)](https://www.ibm.com/docs/SSFMBX/com.ibm.swg.im.dashdb.sql.ref.doc/doc/c0004100.html) topic in the Db2 on Cloud documentation for the correct syntax. - -" -8C953B2300B547AD82EDD9697CAB8A5985F5EAD3_9,8C953B2300B547AD82EDD9697CAB8A5985F5EAD3," Learn more - - - -* [Db2 on Cloud documentation](https://cloud.ibm.com/docs/Db2onCloud?topic=Db2onCloud-about) -* [SSL connectivity](https://cloud.ibm.com/docs/Db2onCloud?topic=Db2onCloud-ssl_support) - - - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -C61D407536D31A069AA857469A0EEBFEF1C0E1B8_0,C61D407536D31A069AA857469A0EEBFEF1C0E1B8," IBM Db2 Warehouse connection - -To access your data in IBM Db2 Warehouse, create a connection asset for it. - -IBM Db2 Warehouse is an analytics data warehouse that gives you a high level of control over your data and applications. You can use the IBM Db2 Warehouse connection to connect to a database in these products: - - - -* IBM Db2 Warehouse in IBM Cloud -* IBM Db2 Warehouse on-prem - - - -" -C61D407536D31A069AA857469A0EEBFEF1C0E1B8_1,C61D407536D31A069AA857469A0EEBFEF1C0E1B8," Create a connection to Db2 Warehouse - -To create the connection asset, you need these connection details: - - - -* Database name -* Hostname or IP address of the database server -* Port number -* API key or Username and password -* Application name (optional): The name of the application that is currently using the connection. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html). -* Client accounting information (optional): The value of the accounting string from the client information that is specified for the connection. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html). -* Client hostname (optional): The hostname of the machine on which the application that is using the connection is running. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html). -* Client user (optional): The name of the user on whose behalf the application that is using the connection is running. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html). -* SSL certificate (if required by the database server) - - - -" -C61D407536D31A069AA857469A0EEBFEF1C0E1B8_2,C61D407536D31A069AA857469A0EEBFEF1C0E1B8," Credentials - -For Credentials, you can enter either an API key or a username and password - -" -C61D407536D31A069AA857469A0EEBFEF1C0E1B8_3,C61D407536D31A069AA857469A0EEBFEF1C0E1B8," Authenticating with an API Key - -You can use an API key to authenticate to Db2 Warehouse in IBM Cloud. - -" -C61D407536D31A069AA857469A0EEBFEF1C0E1B8_4,C61D407536D31A069AA857469A0EEBFEF1C0E1B8,"Db2 Warehouse in IBM Cloud -First add the user ID as an IAM user or as a service ID. For instructions, see the Console user experience section of the [Identity and access management (IAM) on IBM Cloud](https://cloud.ibm.com/docs/Db2whc?topic=Db2whc-iamconsole-ux) topic. - -If users want to authenticate with Db2 Warehouse with an IAM API key, the administrator of the Db2 Warehouse instance can add the IAM users by using the User management console, and then the users can each create an API key for themselves by using the IAM access management console. - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -C61D407536D31A069AA857469A0EEBFEF1C0E1B8_5,C61D407536D31A069AA857469A0EEBFEF1C0E1B8," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -C61D407536D31A069AA857469A0EEBFEF1C0E1B8_6,C61D407536D31A069AA857469A0EEBFEF1C0E1B8," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -C61D407536D31A069AA857469A0EEBFEF1C0E1B8_7,C61D407536D31A069AA857469A0EEBFEF1C0E1B8," Where you can use this connection - -You can use Db2 Warehouse connections in the following workspaces and tools: - - - -* Data Refinery -* Decision Optimization -* Notebooks. Click Read data on the Code snippets pane to get the connection credentials and load the data into a data structure. See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.htmlconns). -* SPSS Modeler -* Synthetic Data Generator - - - -" -C61D407536D31A069AA857469A0EEBFEF1C0E1B8_8,C61D407536D31A069AA857469A0EEBFEF1C0E1B8,"Catalogs - - - -* Platform assets catalog - - - -" -C61D407536D31A069AA857469A0EEBFEF1C0E1B8_9,C61D407536D31A069AA857469A0EEBFEF1C0E1B8," Db2 Warehouse setup - - - -* IBM Db2 Warehouse on Cloud: [Getting started with Db2 Warehouse on Cloud](https://cloud.ibm.com/docs/Db2whc?topic=Db2whc-getting-started) -* IBM Db2 Warehouse on-prem: [Setting up Db2 Warehouse](https://www.ibm.com/docs/SSCJDQ/com.ibm.swg.im.dashdb.doc/admin/local_setup.html) - - - -" -C61D407536D31A069AA857469A0EEBFEF1C0E1B8_10,C61D407536D31A069AA857469A0EEBFEF1C0E1B8," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the product documentation in [Learn more ](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-wh.html?context=cdpaas&locale=enlm) for the correct syntax. - -" -C61D407536D31A069AA857469A0EEBFEF1C0E1B8_11,C61D407536D31A069AA857469A0EEBFEF1C0E1B8," Known issue - -On Data Refinery, system-level schemas aren’t filtered out. - -" -C61D407536D31A069AA857469A0EEBFEF1C0E1B8_12,C61D407536D31A069AA857469A0EEBFEF1C0E1B8," Learn more - - - -* IBM Db2 Warehouse on Cloud [product documentation](https://cloud.ibm.com/docs/Db2whc) (IBM Cloud) -* IBM Db2 Warehouse on-prem [product documentation](https://www.ibm.com/docs/SSCJDQ/com.ibm.swg.im.dashdb.doc/local_overview.html) - - - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -9B89A3B2D7E1868F15A28EACF6D5C6214F7F12B7_0,9B89A3B2D7E1868F15A28EACF6D5C6214F7F12B7," IBM Db2 connection - -To access your data in an IBM Db2 database, you must create a connection asset for it. - -IBM Db2 is a database that contains relational data. - -" -9B89A3B2D7E1868F15A28EACF6D5C6214F7F12B7_1,9B89A3B2D7E1868F15A28EACF6D5C6214F7F12B7," Supported versions - -IBM Db2 10.1 and later - -" -9B89A3B2D7E1868F15A28EACF6D5C6214F7F12B7_2,9B89A3B2D7E1868F15A28EACF6D5C6214F7F12B7," Create a connection to Db2 - -To create the connection asset, you need the following connection details: - - - -* Database -* Hostname or IP address -* Username and password See [Credentials](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2.html?context=cdpaas&locale=encreds). -* Port -* Application name (optional): The name of the application that is currently using the connection. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html). -* Client accounting information (optional): The value of the accounting string from the client information that is specified for the connection. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html). -* Client hostname (optional): The hostname of the machine on which the application that is using the connection is running. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html). -* Client user (optional): The name of the user on whose behalf the application that is using the connection is running. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html). - - - - - -* SSL certificate (if required by your database server) - - - -" -9B89A3B2D7E1868F15A28EACF6D5C6214F7F12B7_3,9B89A3B2D7E1868F15A28EACF6D5C6214F7F12B7,"For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -9B89A3B2D7E1868F15A28EACF6D5C6214F7F12B7_4,9B89A3B2D7E1868F15A28EACF6D5C6214F7F12B7," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -9B89A3B2D7E1868F15A28EACF6D5C6214F7F12B7_5,9B89A3B2D7E1868F15A28EACF6D5C6214F7F12B7," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -9B89A3B2D7E1868F15A28EACF6D5C6214F7F12B7_6,9B89A3B2D7E1868F15A28EACF6D5C6214F7F12B7," Where you can use this connection - -You can use Db2 connections in the following workspaces and tools: - -" -9B89A3B2D7E1868F15A28EACF6D5C6214F7F12B7_7,9B89A3B2D7E1868F15A28EACF6D5C6214F7F12B7,"Projects - - - -* Data Refinery -* Decision Optimization -* Notebooks. Click Read data on the Code snippets pane to get the connection credentials and load the data into a data structure. See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.htmlconns). -* SPSS Modeler -* Synthetic Data Generator - - - -" -9B89A3B2D7E1868F15A28EACF6D5C6214F7F12B7_8,9B89A3B2D7E1868F15A28EACF6D5C6214F7F12B7,"Catalogs - - - -* Platform assets catalog - - - -" -9B89A3B2D7E1868F15A28EACF6D5C6214F7F12B7_9,9B89A3B2D7E1868F15A28EACF6D5C6214F7F12B7," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the [Structured Query Language (SQL)](https://www.ibm.com/docs/SSEPGG_11.5.0/com.ibm.db2.luw.sql.ref.doc/doc/c0004100.html) topic in the IBM Db2 product documentation for the correct syntax. - -" -9B89A3B2D7E1868F15A28EACF6D5C6214F7F12B7_10,9B89A3B2D7E1868F15A28EACF6D5C6214F7F12B7," Learn more - -[IBM Db2 product documentation](https://www.ibm.com/docs/db2) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6_0,2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6," IBM Db2 for i connection - -To access your data in IBM Db2 for i, create a connection asset for it. - -Db2 for i is the relational database manager that is fully integrated on your system. Because it is integrated on the system, Db2 for i is easy to use and manage. - -" -2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6_1,2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6," Supported versions - -IBM DB2 for i 7.2+ - -" -2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6_2,2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6," Prerequisites - -" -2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6_3,2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6," Obtain the certificate file - -A certificate file on the Db2 for i server is required to use this connection. To obtain an IBM Db2 Connect Unlimited Edition license certificate file, go to [IBM Db2 Connect: Pricing](https://www.ibm.com/products/db2-connect/pricing) and [Installing the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/com.ibm.db2.luw.apdv.gs.doc/doc/t0010264.html). For installation instructions, see [Activating the license certificate file for Db2 Connect Unlimited Edition](https://www.ibm.com/docs/SSEPGG_11.5.0/com.ibm.db2.luw.licensing.doc/doc/t0057375.html). - -" -2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6_4,2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6," Run the bind command - -Run the following commands from the Db2 client that is configured to access the Db2 for i server. -You need to run the bind command only once per remote database per Db2 client version. - -db2 connect to DBALIAS user USERID using PASSWORD -db2 bind path@ddcs400.lst blocking all sqlerror continue messages ddcs400.msg grant public -db2 connect reset - -For information about bind commands, see [Binding applications and utilities](https://www.ibm.com/docs/SSEPGG_11.5.0/com.ibm.db2.luw.qb.dbconn.doc/doc/c0005595.html). - -" -2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6_5,2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6," Run catalog commands - -Run the following catalog commands from the Db2 client that is configured to access the Db2 for i server: - - - -1. db2 catalog tcpip node node_name remote hostname_or_address server port_no_or_service_name - -Example: -db2 catalog tcpip node db2i123 remote 192.0.2.0 server 446 - -2. db2 catalog dcs database local_name as real_db_name - -Example: -db2 catalog dcs database db2i123 as db2i123 - -3. db2 catalog database local_name as alias at node node_name authentication server - -Example: -db2 catalog database db2i123 as db2i123 at node db2i123 authentication server - - - -For information about catalog commands, see [CATALOG TCPIP NODE](https://www.ibm.com/docs/SSEPGG_11.5.0/com.ibm.db2.luw.admin.cmd.doc/doc/r0001944.html) and [CATALOG DCS DATABASE](https://www.ibm.com/docs/SSEPGG_11.5.0/com.ibm.db2.luw.admin.cmd.doc/doc/r0001937.html). - -" -2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6_6,2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6," Create a connection to Db2 for i - -To create the connection asset, you need these connection details: - - - -* Hostname or IP address -* Port number -* Location: The unique name of the Db2 location you want to access -* Username and password -* SSL certificate (if required by the database server) - - - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6_7,2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6_8,2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6_9,2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6," Where you can use this connection - -You can use Db2 for i connections in the following workspaces and tools: - -" -2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6_10,2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6,"Projects - - - -* SPSS Modeler -* Synthetic Data Generator - - - -" -2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6_11,2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6,"Catalogs - - - -* Platform assets catalog - - - -" -2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6_12,2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6," Restriction - -For SPSS Modeler, you can use this connection only to import data. You cannot export data to this connection or to a Db2 for i connection connected data asset. - -" -2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6_13,2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the [ Db2 for i SQL reference](https://www.ibm.com/docs/ssw_ibm_i_72/db2/rbafzintro.htm) for the correct syntax. - -" -2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6_14,2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6," Learn more - -[IBM Db2 for i documentation](https://www.ibm.com/docs/i) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -BE7F45C3E17998A50B8414D623007ED668B37C04_0,BE7F45C3E17998A50B8414D623007ED668B37C04," IBM Db2 for z/OS connection - -To access your data in IBM Db2 for z/OS, create a connection asset for it. - -Db2 for z/OS is an enterprise data server for IBM Z. It manages core business data across an enterprise and supports key business applications. - -" -BE7F45C3E17998A50B8414D623007ED668B37C04_1,BE7F45C3E17998A50B8414D623007ED668B37C04," Supported versions - -IBM Db2 for z/OS version 11 and later - -" -BE7F45C3E17998A50B8414D623007ED668B37C04_2,BE7F45C3E17998A50B8414D623007ED668B37C04," Prerequisites - -" -BE7F45C3E17998A50B8414D623007ED668B37C04_3,BE7F45C3E17998A50B8414D623007ED668B37C04," Obtain the certificate file - -A certificate file on the Db2 for z/OS server is required to use this connection.These steps must be done on the Db2 for z/OS server: Obtain an IBM Db2 Connect Unlimited Edition license certificate file from [IBM Db2 Connect: Pricing](https://www.ibm.com/products/db2-connect/pricing) and [Installing the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/en/db2/11.5?topic=apis-installing-data-server-driver-jdbc-sqlj). For installation instructions, see [Activating the license certificate file for Db2 Connect Unlimited Edition](https://www.ibm.com/docs/en/db2/11.5?topic=li-activating-license-certificate-file-db2-connect-unlimited-edition). - -" -BE7F45C3E17998A50B8414D623007ED668B37C04_4,BE7F45C3E17998A50B8414D623007ED668B37C04," Run the bind command - -Run the following commands from the Db2 client that is configured to access the Db2 for z/OS server. -You need to run the bind command only once per remote database per Db2 client version. - -db2 connect to DBALIAS user USERID using PASSWORD -db2 bind path@ddcsmvs.lst blocking all sqlerror continue messages ddcsmvs.msg grant public -db2 connect reset - -For information about bind commands, see [Binding applications and utilities (Db2 Connect Server)](https://www.ibm.com/docs/SSEPGG_11.5.0/com.ibm.db2.luw.qb.dbconn.doc/doc/c0005595.html?pos=2). - -" -BE7F45C3E17998A50B8414D623007ED668B37C04_5,BE7F45C3E17998A50B8414D623007ED668B37C04," Run catalog commands - -Run the following catalog commands from the Db2 client that is configured to access the Db2 for z/OS server: - - - -1. db2 catalog tcpip node node_name remote hostname_or_address server port_no_or_service_name - -Example: -db2 catalog tcpip node db2z123 remote 192.0.2.0 server 446 - -2. db2 catalog dcs database local_name as real_db_name - -Example: -db2 catalog dcs database db2z123 as db2z123 - -3. db2 catalog database local_name as alias at node node_name authentication server - -Example: -db2 catalog database db2z123 as db2z123 at node db2z123 authentication server - - - -For information about catalog commands, see [CATALOG TCPIP NODE](https://www.ibm.com/docs/SSEPGG_11.5.0/com.ibm.db2.luw.admin.cmd.doc/doc/r0001944.html) and [CATALOG DCS DATABASE](https://www.ibm.com/docs/SSEPGG_11.5.0/com.ibm.db2.luw.admin.cmd.doc/doc/r0001937.html). - -" -BE7F45C3E17998A50B8414D623007ED668B37C04_6,BE7F45C3E17998A50B8414D623007ED668B37C04," Create a connection to Db2 for z/OS - -To create the connection asset, you need these connection details: - - - -* Hostname or IP address -* Port number -* Collection ID: The ID of the collections of packages to use -* Location: The unique name of the Db2 location you want to access -* Username and password -* Application name (optional): The name of the application that is currently using the connection. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html). -* Client accounting information (optional): The value of the accounting string from the client information that is specified for the connection. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html). -* Client hostname (optional): The hostname of the machine on which the application that is using the connection is running. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html). -* Client user (optional): The name of the user on whose behalf the application that is using the connection is running. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html). -* SSL certificate (if required by the database server) - - - -" -BE7F45C3E17998A50B8414D623007ED668B37C04_7,BE7F45C3E17998A50B8414D623007ED668B37C04,"For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -BE7F45C3E17998A50B8414D623007ED668B37C04_8,BE7F45C3E17998A50B8414D623007ED668B37C04," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -BE7F45C3E17998A50B8414D623007ED668B37C04_9,BE7F45C3E17998A50B8414D623007ED668B37C04," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -BE7F45C3E17998A50B8414D623007ED668B37C04_10,BE7F45C3E17998A50B8414D623007ED668B37C04," Where you can use this connection - -You can use Db2 for z/OS connections in the following workspaces and tools: - -" -BE7F45C3E17998A50B8414D623007ED668B37C04_11,BE7F45C3E17998A50B8414D623007ED668B37C04,"Projects - - - -* Notebooks. Click Read data on the Code snippets pane to get the connection credentials and load the data into a data structure. See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.htmlconns). -* Decision Optimization -* SPSS Modeler -* Synthetic Data Generator - - - -" -BE7F45C3E17998A50B8414D623007ED668B37C04_12,BE7F45C3E17998A50B8414D623007ED668B37C04,"Catalogs - - - -* Platform assets catalog - - - -" -BE7F45C3E17998A50B8414D623007ED668B37C04_13,BE7F45C3E17998A50B8414D623007ED668B37C04," Restriction - -For SPSS Modeler, you can use this connection only to import data. You cannot export data to this connection or to a Db2 for z/OS connected data asset. - -" -BE7F45C3E17998A50B8414D623007ED668B37C04_14,BE7F45C3E17998A50B8414D623007ED668B37C04," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the [ Db2 for z/OS and SQL concepts](https://www.ibm.com/docs/db2-for-zos/12?topic=zos-db2-sql-concepts) for the correct syntax. - -" -BE7F45C3E17998A50B8414D623007ED668B37C04_15,BE7F45C3E17998A50B8414D623007ED668B37C04," Learn more - -[IBM Db2 for z/OS documentation](https://www.ibm.com/docs/db2-for-zos) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -92C74E7245DFE20BF93F1D73039ED4DF95375C6F_0,92C74E7245DFE20BF93F1D73039ED4DF95375C6F," IBM Cloud Databases for PostgreSQL connection - -To access your data in IBM Cloud Databases for PostgreSQL, you must create a connection asset for it. - -IBM Cloud Databases for PostgreSQL is an open source object-relational database that is highly customizable. It’s a feature-rich enterprise database with JSON support. - -" -92C74E7245DFE20BF93F1D73039ED4DF95375C6F_1,92C74E7245DFE20BF93F1D73039ED4DF95375C6F," Create a connection to IBM Cloud Databases for PostgreSQL - -To create the connection asset, you need the following connection details: - - - -* Database name -* Hostname or IP address of the database -* Port number -* Username and password -* SSL certificate (if required by the database server) - - - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -92C74E7245DFE20BF93F1D73039ED4DF95375C6F_2,92C74E7245DFE20BF93F1D73039ED4DF95375C6F," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -92C74E7245DFE20BF93F1D73039ED4DF95375C6F_3,92C74E7245DFE20BF93F1D73039ED4DF95375C6F," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -92C74E7245DFE20BF93F1D73039ED4DF95375C6F_4,92C74E7245DFE20BF93F1D73039ED4DF95375C6F," Where you can use this connection - -You can use IBM Cloud Databases for PostgreSQL connections in the following workspaces and tools: - -" -92C74E7245DFE20BF93F1D73039ED4DF95375C6F_5,92C74E7245DFE20BF93F1D73039ED4DF95375C6F,"Projects - - - -* Notebooks. Click Read data on the Code snippets pane to get the connection credentials and load the data into a data structure. See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.htmlconns). -* SPSS Modeler -* Synthetic Data Generator - - - -" -92C74E7245DFE20BF93F1D73039ED4DF95375C6F_6,92C74E7245DFE20BF93F1D73039ED4DF95375C6F,"Catalogs - - - -* Platform assets catalog - - - -" -92C74E7245DFE20BF93F1D73039ED4DF95375C6F_7,92C74E7245DFE20BF93F1D73039ED4DF95375C6F," IBM Cloud Databases for PostgreSQL setup - -[IBM Cloud Databases for PostgreSQL setup](https://cloud.ibm.com/catalog/services/databases-for-postgresql) - -" -92C74E7245DFE20BF93F1D73039ED4DF95375C6F_8,92C74E7245DFE20BF93F1D73039ED4DF95375C6F," Restriction - -For SPSS Modeler, you can use this connection only to import data. You cannot export data to this connection or to an IBM Cloud Databases for PostgreSQL connected data asset. - -" -92C74E7245DFE20BF93F1D73039ED4DF95375C6F_9,92C74E7245DFE20BF93F1D73039ED4DF95375C6F," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the [IBM Cloud Databases for PostgreSQL documentation](https://www.postgresql.org/docs/9.1/ecpg-commands.html) for the correct syntax. - -" -92C74E7245DFE20BF93F1D73039ED4DF95375C6F_10,92C74E7245DFE20BF93F1D73039ED4DF95375C6F," Learn more - -[ IBM Cloud Databases for PostgreSQL documentation](https://cloud.ibm.com/catalog/services/databases-for-postgresql) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -7551CB2BCD77B26AE0E05154A3E8CC51C070D707_0,7551CB2BCD77B26AE0E05154A3E8CC51C070D707," Apache Derby connection - -To access your data in Apache Derby, create a connection asset for it. - -Apache Derby is a relational database management system developed by the Apache Software Foundation. - -" -7551CB2BCD77B26AE0E05154A3E8CC51C070D707_1,7551CB2BCD77B26AE0E05154A3E8CC51C070D707," Create a connection to Apache Derby - -To create the connection asset, you need these connection details: - - - -* Database name -* Hostname or IP address -* Port number -* Username and password -* SSL certificate (if required by the database server) - - - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -7551CB2BCD77B26AE0E05154A3E8CC51C070D707_2,7551CB2BCD77B26AE0E05154A3E8CC51C070D707," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -7551CB2BCD77B26AE0E05154A3E8CC51C070D707_3,7551CB2BCD77B26AE0E05154A3E8CC51C070D707," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -7551CB2BCD77B26AE0E05154A3E8CC51C070D707_4,7551CB2BCD77B26AE0E05154A3E8CC51C070D707," Where you can use this connection - -You can use Apache Derby connections in the following workspaces and tools: - -" -7551CB2BCD77B26AE0E05154A3E8CC51C070D707_5,7551CB2BCD77B26AE0E05154A3E8CC51C070D707,"Projects - - - -* Data Refinery -* SPSS Modeler -* Synthetic Data Generator - - - -" -7551CB2BCD77B26AE0E05154A3E8CC51C070D707_6,7551CB2BCD77B26AE0E05154A3E8CC51C070D707,"Catalogs - - - -* Platform assets catalog - - - -" -7551CB2BCD77B26AE0E05154A3E8CC51C070D707_7,7551CB2BCD77B26AE0E05154A3E8CC51C070D707," Apache Derby setup - -[Apache Derby installation](https://db.apache.org/derby/papers/DerbyTut/install_software.htmlderby_download) - -" -7551CB2BCD77B26AE0E05154A3E8CC51C070D707_8,7551CB2BCD77B26AE0E05154A3E8CC51C070D707," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the [Apache Derby documentation](https://db.apache.org/derby/docs/10.8/ref/index.html) for the correct syntax. - -" -7551CB2BCD77B26AE0E05154A3E8CC51C070D707_9,7551CB2BCD77B26AE0E05154A3E8CC51C070D707," Learn more - -[Apache Derby documentation](https://db.apache.org/derby/papers/DerbyTut/install_software.htmlderby) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -E700C898EC2EFE96C76C2CAC042E063529B23D3B_0,E700C898EC2EFE96C76C2CAC042E063529B23D3B," Dremio connection - -To access your data in Dremio, create a connection asset for it. - -Dremio is an open data lake platform. It supports all the major third-party data sources. - -" -E700C898EC2EFE96C76C2CAC042E063529B23D3B_1,E700C898EC2EFE96C76C2CAC042E063529B23D3B," Create a connection to Dremio - -To create the connection asset, you need these connection details: - - - -* Username and password -* Hostname: You can create a Dremio Cloud instance only in the European Union (EU) or the United States (US). Use sql.dremio.cloud for the US, and use sql.eu.dremio.cloud for the EU. Dremio Software can be hosted anywhere. -* Port number: The default port for Dremio Cloud instances is 443 and for Dremio Software it is 31010. -* Dremio Cloud Project ID: See [Obtaining the ID of a Project](https://docs.dremio.com/cloud/cloud-entities/projects/.obtaining-the-id-of-a-project). -* SSL certificate: - - - -* Select Port is SSL-enabled if you provided Dremio Cloud Project ID. -* Select Port is SSL-enabled and provide SSL Certificate if you want to connect to Dremio Software with SSL. - - - - - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -E700C898EC2EFE96C76C2CAC042E063529B23D3B_2,E700C898EC2EFE96C76C2CAC042E063529B23D3B," Choose the method for creating a connection based on where you are in the platform - -" -E700C898EC2EFE96C76C2CAC042E063529B23D3B_3,E700C898EC2EFE96C76C2CAC042E063529B23D3B,"In a project -Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -" -E700C898EC2EFE96C76C2CAC042E063529B23D3B_4,E700C898EC2EFE96C76C2CAC042E063529B23D3B,"In a deployment space -Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -E700C898EC2EFE96C76C2CAC042E063529B23D3B_5,E700C898EC2EFE96C76C2CAC042E063529B23D3B," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -E700C898EC2EFE96C76C2CAC042E063529B23D3B_6,E700C898EC2EFE96C76C2CAC042E063529B23D3B," Where you can use this connection - -You can use the Dremio connection in the following workspaces and tools: - -" -E700C898EC2EFE96C76C2CAC042E063529B23D3B_7,E700C898EC2EFE96C76C2CAC042E063529B23D3B,"Projects - - - -* Data Refinery - - - -" -E700C898EC2EFE96C76C2CAC042E063529B23D3B_8,E700C898EC2EFE96C76C2CAC042E063529B23D3B,"Catalogs - - - -* Platform assets catalog - - - -" -E700C898EC2EFE96C76C2CAC042E063529B23D3B_9,E700C898EC2EFE96C76C2CAC042E063529B23D3B," Dremio setup - -Dremio can be set up in various deployments, see [Dremio Cluster Deployment](https://docs.dremio.com/current/get-started/cluster-deployments/). To set up Dremio Cloud, see [Dremio Cloud](https://docs.dremio.com/cloud/). - -" -E700C898EC2EFE96C76C2CAC042E063529B23D3B_10,E700C898EC2EFE96C76C2CAC042E063529B23D3B," Restrictions - -You can use this connection only for reading data. You cannot write data or export data with this connection. - -" -E700C898EC2EFE96C76C2CAC042E063529B23D3B_11,E700C898EC2EFE96C76C2CAC042E063529B23D3B," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the [Dremio SQL Reference](https://docs.dremio.com/software/sql-reference/) for the correct syntax. - -" -E700C898EC2EFE96C76C2CAC042E063529B23D3B_12,E700C898EC2EFE96C76C2CAC042E063529B23D3B," Learn more - - - -* [Dremio Software documentation](https://docs.dremio.com/current/) -* [Dremio Cloud documentation](https://docs.dremio.com/cloud/) - - - -Parent topic: [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -1C42359BDF665EBCA2BAE4B9B3D5108F8097BF34_0,1C42359BDF665EBCA2BAE4B9B3D5108F8097BF34," Dropbox connection - -To access your data in Dropbox, create a connection asset for it. - -Dropbox is a cloud storage service that lets you host and synchronize files on your devices. - -" -1C42359BDF665EBCA2BAE4B9B3D5108F8097BF34_1,1C42359BDF665EBCA2BAE4B9B3D5108F8097BF34," Create a connection to Dropbox - -To create the connection asset, you need an Access token. - -" -1C42359BDF665EBCA2BAE4B9B3D5108F8097BF34_2,1C42359BDF665EBCA2BAE4B9B3D5108F8097BF34," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -1C42359BDF665EBCA2BAE4B9B3D5108F8097BF34_3,1C42359BDF665EBCA2BAE4B9B3D5108F8097BF34," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -1C42359BDF665EBCA2BAE4B9B3D5108F8097BF34_4,1C42359BDF665EBCA2BAE4B9B3D5108F8097BF34," Where you can use this connection - -You can use Dropbox connections in the following workspaces and tools: - -" -1C42359BDF665EBCA2BAE4B9B3D5108F8097BF34_5,1C42359BDF665EBCA2BAE4B9B3D5108F8097BF34,"Projects - - - -* Data Refinery -* SPSS Modeler -* Synthetic Data Generator (Synthetic Data Generator service) - - - -" -1C42359BDF665EBCA2BAE4B9B3D5108F8097BF34_6,1C42359BDF665EBCA2BAE4B9B3D5108F8097BF34,"Catalogs - - - -* Platform assets catalog - - - -" -1C42359BDF665EBCA2BAE4B9B3D5108F8097BF34_7,1C42359BDF665EBCA2BAE4B9B3D5108F8097BF34," Dropbox setup - -[Dropbox plans](https://www.dropbox.com/plans) - -" -1C42359BDF665EBCA2BAE4B9B3D5108F8097BF34_8,1C42359BDF665EBCA2BAE4B9B3D5108F8097BF34," Supported file types - -The Dropbox connection supports these file types: Avro, CSV, Delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML. - -" -1C42359BDF665EBCA2BAE4B9B3D5108F8097BF34_9,1C42359BDF665EBCA2BAE4B9B3D5108F8097BF34," Learn more - -[Dropbox quick start guides](https://help.dropbox.com/guide) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -36D7217BC75C917100AE7DF27DC14FDB919D1609_0,36D7217BC75C917100AE7DF27DC14FDB919D1609," Elasticsearch connection - -To access your data in Elasticsearch, create a connection asset for it. - -Elasticsearch is a distributed, open source search and analytics engine. Use the Elasticsearch connection to access JSON documents in Elasticsearch indexes. - -" -36D7217BC75C917100AE7DF27DC14FDB919D1609_1,36D7217BC75C917100AE7DF27DC14FDB919D1609," Supported versions - -Elasticsearch version 6.0 or later - -" -36D7217BC75C917100AE7DF27DC14FDB919D1609_2,36D7217BC75C917100AE7DF27DC14FDB919D1609," Create a connection to Elasticsearch - -To create the connection asset, you need these connection details: - - - -* Username and password -(Optional) Anonymous access -* SSL certificate (if required by the database server) - - - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -36D7217BC75C917100AE7DF27DC14FDB919D1609_3,36D7217BC75C917100AE7DF27DC14FDB919D1609," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -36D7217BC75C917100AE7DF27DC14FDB919D1609_4,36D7217BC75C917100AE7DF27DC14FDB919D1609," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -36D7217BC75C917100AE7DF27DC14FDB919D1609_5,36D7217BC75C917100AE7DF27DC14FDB919D1609," Where you can use this connection - -You can use Elasticsearch connections in the following workspaces and tools: - -" -36D7217BC75C917100AE7DF27DC14FDB919D1609_6,36D7217BC75C917100AE7DF27DC14FDB919D1609,"Projects - - - -* Data Refinery -* SPSS Modeler - - - -" -36D7217BC75C917100AE7DF27DC14FDB919D1609_7,36D7217BC75C917100AE7DF27DC14FDB919D1609,"Catalogs - - - -* Platform assets catalog - - - -" -36D7217BC75C917100AE7DF27DC14FDB919D1609_8,36D7217BC75C917100AE7DF27DC14FDB919D1609," Elasticsearch setup - -[Set up Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html) - -" -36D7217BC75C917100AE7DF27DC14FDB919D1609_9,36D7217BC75C917100AE7DF27DC14FDB919D1609," Restrictions - - - -* For Elasticsearch versions earlier than version 7, read is limited to 10,000 rows. -* For Data Refinery, the only supported action on the target file is to append all the rows of the Data Refinery flow output to the existing data set. - - - -" -36D7217BC75C917100AE7DF27DC14FDB919D1609_10,36D7217BC75C917100AE7DF27DC14FDB919D1609," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the [Elasticsearch Guide for SQL](https://www.elastic.co/guide/en/elasticsearch/reference/current/xpack-sql.html) for the correct syntax. - -" -36D7217BC75C917100AE7DF27DC14FDB919D1609_11,36D7217BC75C917100AE7DF27DC14FDB919D1609," Learn more - - - -* [Elasticsearch](https://www.elastic.co/elasticsearch/) -* [Elastic Docs](https://www.elastic.co/guide/en/elastic-stack/current/overview.html) - - - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -8A69DD7BEAD4ADB82C87A200309CD63ECBD625D8_0,8A69DD7BEAD4ADB82C87A200309CD63ECBD625D8," FTP (remote file system) connection - -To access your data with the FTP protocol, create a connection asset for it. - -FTP is a standard communication protocol that is used to transfer files from a server to a client on a computer network. - -" -8A69DD7BEAD4ADB82C87A200309CD63ECBD625D8_1,8A69DD7BEAD4ADB82C87A200309CD63ECBD625D8," Create an FTP connection - -To create the connection asset, you need these connection details: - - - -* Connection mode: The connection method configured on the FTP server: - - - -* Anonymous -* Basic authentication (with username and password) -* SFTP Tectia: Transfer data sets that are in Multiple Virtual Storage (MVS) format to or from an IBM z/OS mainframe computer. MVS data sets use a period (.) to separate the qualifiers in the data set names. To write to an MVS data set, select Access MVS Dataset and enter the file transfer advice (FTADV) strings in key-value pairs separated by commas. For information, see the [Tectia documentation](https://info.ssh.com/hubfs/2021%20Support%20manuals%20documents/TectiaServer_zOS_UserManual.pdf). -* SSH: File transfer over a secure channel that uses the Secure Shell protocol. Also requires username and password. -* SSL: File transfer that uses File Transport Protocol (FTP), which supports secure transmission via SSL (sslTLSv2) protocol. Also requires username and password. - - - -* Hostname or IP address -* Port number of the FTP server -* SSH mode: Private key and Key passphrase - - - - - -* Authentication method: - - - -* Username and password -* Username, password, private key. If you use an encrypted private key, you will need a key passphrase. -* Username and private key. If you use an encrypted private key, you will need a key passphrase. - - - - - -If you use a private key, make sure that the key is an RSA private key that is generated by the ssh-keygen tool. The private key must be in the PEM format. - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). This selection is available for the SSH connection mode only. - -" -8A69DD7BEAD4ADB82C87A200309CD63ECBD625D8_2,8A69DD7BEAD4ADB82C87A200309CD63ECBD625D8," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -8A69DD7BEAD4ADB82C87A200309CD63ECBD625D8_3,8A69DD7BEAD4ADB82C87A200309CD63ECBD625D8," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -8A69DD7BEAD4ADB82C87A200309CD63ECBD625D8_4,8A69DD7BEAD4ADB82C87A200309CD63ECBD625D8," Where you can use this connection - -You can use FTP connections in the following workspaces and tools: - -" -8A69DD7BEAD4ADB82C87A200309CD63ECBD625D8_5,8A69DD7BEAD4ADB82C87A200309CD63ECBD625D8,"Projects - - - -* Data Refinery -* SPSS Modeler -* Synthetic Data Generator - - - -" -8A69DD7BEAD4ADB82C87A200309CD63ECBD625D8_6,8A69DD7BEAD4ADB82C87A200309CD63ECBD625D8,"Catalogs - - - -* Platform assets catalog - - - -" -8A69DD7BEAD4ADB82C87A200309CD63ECBD625D8_7,8A69DD7BEAD4ADB82C87A200309CD63ECBD625D8," Supported file types - -The FTP connection supports these file types: Avro, CSV, Delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML. - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -631559E8401C52C3AAC6D64E1F9DA0F765FC4846_0,631559E8401C52C3AAC6D64E1F9DA0F765FC4846," Generic S3 connection - -To access your data from a storage service that is compatible with the Amazon S3 API, create a connection asset for it. - -" -631559E8401C52C3AAC6D64E1F9DA0F765FC4846_1,631559E8401C52C3AAC6D64E1F9DA0F765FC4846," Create a Generic S3 connection - -To create the connection asset, you need these connection details: - - - -* Endpoint URL: The endpoint URL to access to S3 -* Bucket(optional): The name of the bucket that contains the files -* Region (optional): S3 region. Specify a region that matches the regional endpoint. -* Access key: The access key (username) that authorizes access to S3 -* Secret key: The password associated with the Access key ID that authorizes access to S3 -* The SSL certificate of the trusted host. The certificate is required when the host certificate is not signed by a known certificate authority. -* Disable chunked encoding: Select if the storage does not support chunked encoding. -* Enable global bucket access: Consult the documentation for your S3 data source for whether to select this property. -* Enable path style access: Consult the documentation for your S3 data source for whether to select this property. - - - -" -631559E8401C52C3AAC6D64E1F9DA0F765FC4846_2,631559E8401C52C3AAC6D64E1F9DA0F765FC4846," Choose the method for creating a connection based on where you are in the platform - -" -631559E8401C52C3AAC6D64E1F9DA0F765FC4846_3,631559E8401C52C3AAC6D64E1F9DA0F765FC4846,"In a project -Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -" -631559E8401C52C3AAC6D64E1F9DA0F765FC4846_4,631559E8401C52C3AAC6D64E1F9DA0F765FC4846,"In a deployment space -Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -631559E8401C52C3AAC6D64E1F9DA0F765FC4846_5,631559E8401C52C3AAC6D64E1F9DA0F765FC4846," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -631559E8401C52C3AAC6D64E1F9DA0F765FC4846_6,631559E8401C52C3AAC6D64E1F9DA0F765FC4846," Where you can use this connection - -You can use the Generic S3 connection in the following workspaces and tools: - -" -631559E8401C52C3AAC6D64E1F9DA0F765FC4846_7,631559E8401C52C3AAC6D64E1F9DA0F765FC4846,"Projects - - - -* Data Refinery - - - -" -631559E8401C52C3AAC6D64E1F9DA0F765FC4846_8,631559E8401C52C3AAC6D64E1F9DA0F765FC4846,"Catalogs - - - -* Platform assets catalog - - - -" -631559E8401C52C3AAC6D64E1F9DA0F765FC4846_9,631559E8401C52C3AAC6D64E1F9DA0F765FC4846," Generic S3 connection setup - -For setup information, consult the documentation of the S3-compatible data source that you are connecting to. - -" -631559E8401C52C3AAC6D64E1F9DA0F765FC4846_10,631559E8401C52C3AAC6D64E1F9DA0F765FC4846," Supported file types - -The Generic S3 connection supports these file types: Avro, CSV, delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML. - -Related connection: [Amazon S3 connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-amazon-s3.html) - -Parent topic: [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -908121C9993CBEDB4916C19D84605A9728FD27E5_0,908121C9993CBEDB4916C19D84605A9728FD27E5," Greenplum connection - -To access your data in Greenplum, you must create a connection asset for it. - -Greenplum is a massively parallel processing (MPP) database server that supports next generation data warehousing and large-scale analytics processing. - -" -908121C9993CBEDB4916C19D84605A9728FD27E5_1,908121C9993CBEDB4916C19D84605A9728FD27E5," Supported versions - -Greenplum 3.2+ - -" -908121C9993CBEDB4916C19D84605A9728FD27E5_2,908121C9993CBEDB4916C19D84605A9728FD27E5," Create a connection to Greenplum - -To create the connection asset, you need the following connection details: - - - -* Username and password -* SSL certificate (if required by the database server) - - - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -908121C9993CBEDB4916C19D84605A9728FD27E5_3,908121C9993CBEDB4916C19D84605A9728FD27E5," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -908121C9993CBEDB4916C19D84605A9728FD27E5_4,908121C9993CBEDB4916C19D84605A9728FD27E5," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -908121C9993CBEDB4916C19D84605A9728FD27E5_5,908121C9993CBEDB4916C19D84605A9728FD27E5," Where you can use this connection - -You can use Greenplum connections in the following workspaces and tools: - - - -* SPSS Modeler -* Synthetic Data Generator - - - -" -908121C9993CBEDB4916C19D84605A9728FD27E5_6,908121C9993CBEDB4916C19D84605A9728FD27E5,"Catalogs - - - -* Platform assets catalog - - - -" -908121C9993CBEDB4916C19D84605A9728FD27E5_7,908121C9993CBEDB4916C19D84605A9728FD27E5," Greenplum setup - -[Greenplum Database Installation Guide](https://docs.vmware.com/en/VMware-Greenplum/5/greenplum-database/install_guide-install_guide.html) - -" -908121C9993CBEDB4916C19D84605A9728FD27E5_8,908121C9993CBEDB4916C19D84605A9728FD27E5," Restriction - -For SPSS Modeler, you can use this connection only to import data. You cannot export data to this connection or to a Greenplum connected data asset. - -" -908121C9993CBEDB4916C19D84605A9728FD27E5_9,908121C9993CBEDB4916C19D84605A9728FD27E5," Learn more - - - -* [Greenplum database](https://greenplum.org/) -* [Greenplum documentation](https://docs.Greenplum.org/6-8/common/gpdb-features.html) - - - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -049DC7FC73042985F3258EF2CF3BB05114F7F175_0,049DC7FC73042985F3258EF2CF3BB05114F7F175," Apache HDFS connection - -To access your data in Apache HDFS, create a connection asset for it. - -Apache Hadoop Distributed File System (HDFS) is a distributed file system that is designed to run on commodity hardware. Apache HDFS was formerly Hortonworks HDFS. - -" -049DC7FC73042985F3258EF2CF3BB05114F7F175_1,049DC7FC73042985F3258EF2CF3BB05114F7F175," Create a connection to Apache HDFS - -To create the connection asset, you need these connection details. The WebHDFS URL is required. -The available properties in the connection form depend on whether you select Connect to Apache Hive so that you can write tables to the Hive data source. - - - -* WebHDFS URL to access HDFS. -* Hive host: Hostname or IP address of the Apache Hive server. -* Hive database: The database in Apache Hive. -* Hive port number: The port number of the Apache Hive server. The default value is 10000. -* Hive HTTP path: The path of the endpoint such as gateway/default/hive when the server is configured for HTTP transport mode. -* SSL certificate (if required by the Apache Hive server). - - - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -049DC7FC73042985F3258EF2CF3BB05114F7F175_2,049DC7FC73042985F3258EF2CF3BB05114F7F175," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -049DC7FC73042985F3258EF2CF3BB05114F7F175_3,049DC7FC73042985F3258EF2CF3BB05114F7F175," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -049DC7FC73042985F3258EF2CF3BB05114F7F175_4,049DC7FC73042985F3258EF2CF3BB05114F7F175," Where you can use this connection - -You can use Apache HDFS connections in the following workspaces and tools: - -" -049DC7FC73042985F3258EF2CF3BB05114F7F175_5,049DC7FC73042985F3258EF2CF3BB05114F7F175,"Projects - - - -* Data Refinery -* SPSS Modeler -* Synthetic Data Generator - - - -" -049DC7FC73042985F3258EF2CF3BB05114F7F175_6,049DC7FC73042985F3258EF2CF3BB05114F7F175,"Catalogs - - - -* Platform assets catalog - - - -" -049DC7FC73042985F3258EF2CF3BB05114F7F175_7,049DC7FC73042985F3258EF2CF3BB05114F7F175," Apache HDFS setup - -[Install and set up a Hadoop cluster](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.htmlPrerequisites) - -" -049DC7FC73042985F3258EF2CF3BB05114F7F175_8,049DC7FC73042985F3258EF2CF3BB05114F7F175," Supported file types - -The Apache HDFS connection supports these file types: Avro, CSV, Delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML. - -" -049DC7FC73042985F3258EF2CF3BB05114F7F175_9,049DC7FC73042985F3258EF2CF3BB05114F7F175," Learn more - -[Apache HDFS Users Guide](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -4451C208E0350C4C480F50929BD6735588B6F2BC_0,4451C208E0350C4C480F50929BD6735588B6F2BC," Apache Hive connection - -To access your data in Apache Hive, you must create a connection asset for it. - -Apache Hive is a data warehouse software project that provides data query and analysis and is built on top of Apache Hadoop. - -" -4451C208E0350C4C480F50929BD6735588B6F2BC_1,4451C208E0350C4C480F50929BD6735588B6F2BC," Supported versions - -Apache Hive 1.0.x, 1.1.x, 1.2.x. 2.0.x, 2.1.x, 3.0.x, 3.1.x. - -" -4451C208E0350C4C480F50929BD6735588B6F2BC_2,4451C208E0350C4C480F50929BD6735588B6F2BC," Create a connection to Apache Hive - -To create the connection asset, you need the following connection details: - - - -* Database name -* Hostname or IP address -* Port number -* HTTP path (Optional): The path of the endpoint such as the gateway, default, or hive if the server is configured for the HTTP transport mode. -* If required by the database server, the SSL certificate - - - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -4451C208E0350C4C480F50929BD6735588B6F2BC_3,4451C208E0350C4C480F50929BD6735588B6F2BC," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -4451C208E0350C4C480F50929BD6735588B6F2BC_4,4451C208E0350C4C480F50929BD6735588B6F2BC," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -4451C208E0350C4C480F50929BD6735588B6F2BC_5,4451C208E0350C4C480F50929BD6735588B6F2BC," Where you can use this connection - -You can use the Apache Hive connection in the following workspaces and tools: - - - -* Data Refinery -* SPSS Modeler -* Synthetic Data Generator - - - -" -4451C208E0350C4C480F50929BD6735588B6F2BC_6,4451C208E0350C4C480F50929BD6735588B6F2BC,"Catalogs - - - -* Platform assets catalog - - - -" -4451C208E0350C4C480F50929BD6735588B6F2BC_7,4451C208E0350C4C480F50929BD6735588B6F2BC," Apache Hive setup - -[Apache Hive installation and configuration](https://cwiki.apache.org/confluence/display/Hive/GettingStartedGettingStarted-InstallationandConfiguration) - -" -4451C208E0350C4C480F50929BD6735588B6F2BC_8,4451C208E0350C4C480F50929BD6735588B6F2BC," Restriction - -" -4451C208E0350C4C480F50929BD6735588B6F2BC_9,4451C208E0350C4C480F50929BD6735588B6F2BC," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the [SQL Operations](https://cwiki.apache.org/confluence/display/Hive/GettingStartedGettingStarted-SQLOperations) in the Apache Hive documentation for the correct syntax. - -" -4451C208E0350C4C480F50929BD6735588B6F2BC_10,4451C208E0350C4C480F50929BD6735588B6F2BC," Learn more - -[Apache Hive documentation](https://cwiki.apache.org/confluence/display/Hive/GettingStarted) - -Parent topic: [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -2245768521F36E6F2DF594E1BF3111DD63CC824A_0,2245768521F36E6F2DF594E1BF3111DD63CC824A," HTTP connection - -To access your data from a URL, create an HTTP connection asset for it. - -" -2245768521F36E6F2DF594E1BF3111DD63CC824A_1,2245768521F36E6F2DF594E1BF3111DD63CC824A," Supported file - -Use the full path in the URL to the file that you want to read. You cannot browse for files. - -" -2245768521F36E6F2DF594E1BF3111DD63CC824A_2,2245768521F36E6F2DF594E1BF3111DD63CC824A," Certificates - -Enter the SSL certificate of the host to be trusted. The SSL certificate is needed only when the host certificate is not signed by a known certificate authority. - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -2245768521F36E6F2DF594E1BF3111DD63CC824A_3,2245768521F36E6F2DF594E1BF3111DD63CC824A," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -2245768521F36E6F2DF594E1BF3111DD63CC824A_4,2245768521F36E6F2DF594E1BF3111DD63CC824A," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -2245768521F36E6F2DF594E1BF3111DD63CC824A_5,2245768521F36E6F2DF594E1BF3111DD63CC824A," Where you can use this connection - -You can use HTTP connections in the following workspaces and tools: - -" -2245768521F36E6F2DF594E1BF3111DD63CC824A_6,2245768521F36E6F2DF594E1BF3111DD63CC824A,"Projects - - - -* Data Refinery -* Notebooks. Click Read data on the Code snippets pane to get the connection credentials and load the data into a data structure. See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.htmlconns). -* SPSS Modeler -* Synthetic Data Generator - - - -" -2245768521F36E6F2DF594E1BF3111DD63CC824A_7,2245768521F36E6F2DF594E1BF3111DD63CC824A,"Catalogs - - - -* Platform assets catalog - - - -" -2245768521F36E6F2DF594E1BF3111DD63CC824A_8,2245768521F36E6F2DF594E1BF3111DD63CC824A," Restriction - -You can use this connection only for source data. You cannot write to data or export data with this connection. - -" -2245768521F36E6F2DF594E1BF3111DD63CC824A_9,2245768521F36E6F2DF594E1BF3111DD63CC824A," Supported file types - -The HTTP connection supports these file types: Avro, CSV, Delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML. - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -FA15A8A5795BAEC1D8933A768407294110203E03_0,FA15A8A5795BAEC1D8933A768407294110203E03," IBM Informix connection - -To access your data in an IBM Informix database, create a connection asset for it. - -IBM Informix is a database that contains relational, object-relational, or dimensional data. You can use the Informix connection to access data from an on-prem Informix database server or from IBM Informix on Cloud. - -" -FA15A8A5795BAEC1D8933A768407294110203E03_1,FA15A8A5795BAEC1D8933A768407294110203E03," Supported Informix versions (on-prem) - - - -* Informix 14.10 and later. This version does not support the Progress DataDirect JDBC driver, which is used by the Informix connection. The Informix connection supports Informix 14.10 features that are comparable to previous Informix versions, but not the new features. Issues related to DataDirect's JDBC driver are not supported. -* Informix 12.10 and later -* Informix 11.0 and later -* Informix 10.0 and later -* Informix 9.2 and later - - - -" -FA15A8A5795BAEC1D8933A768407294110203E03_2,FA15A8A5795BAEC1D8933A768407294110203E03," Create a connection to Informix - -To create the connection asset, you need these connection details: - - - -* Name of the database server -* Name of the database -* Hostname or IP address of the database -* Port number (Default is 1526) -* Username and password - - - -On-prem Informix database servers: For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -FA15A8A5795BAEC1D8933A768407294110203E03_3,FA15A8A5795BAEC1D8933A768407294110203E03," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -FA15A8A5795BAEC1D8933A768407294110203E03_4,FA15A8A5795BAEC1D8933A768407294110203E03," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -FA15A8A5795BAEC1D8933A768407294110203E03_5,FA15A8A5795BAEC1D8933A768407294110203E03," Where you can use this connection - -You can use Informix connections in the following workspaces and tools: - -" -FA15A8A5795BAEC1D8933A768407294110203E03_6,FA15A8A5795BAEC1D8933A768407294110203E03,"Projects - - - -* SPSS Modeler -* Synthetic Data Generator - - - -" -FA15A8A5795BAEC1D8933A768407294110203E03_7,FA15A8A5795BAEC1D8933A768407294110203E03,"Catalogs - - - -* Platform assets catalog - - - -" -FA15A8A5795BAEC1D8933A768407294110203E03_8,FA15A8A5795BAEC1D8933A768407294110203E03," Informix setup - -To set up Informix, see these topics: - - - -* Informix on-prem: [Creating a database server after installation](https://www.ibm.com/docs/SSGU8G_14.1.0/com.ibm.inst.doc/ids_inst_023.htm) -* Informix on Cloud: [Getting started with Informix on Cloud](https://cloud.ibm.com/docs/InformixOnCloud/InformixOnCloud.html) - - - -" -FA15A8A5795BAEC1D8933A768407294110203E03_9,FA15A8A5795BAEC1D8933A768407294110203E03," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the [Guide to SQL: Syntax](https://www.ibm.com/docs/SSGU8G_14.1.0/com.ibm.sqls.doc/sqls.htm) in the product documentation for the correct syntax. - -" -FA15A8A5795BAEC1D8933A768407294110203E03_10,FA15A8A5795BAEC1D8933A768407294110203E03," Learn more - - - -* [Informix product documentation](https://www.ibm.com/docs/informix-servers/14.10) (on-prem) -* [IBM Informix on Cloud](https://www.ibm.com/cloud/informix) -* [IBM Informix on Cloud FAQ](https://www.ibm.com/cloud/informix/faq) - - - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -102D3D188E3EDC4A4AA55F731966EBB22C827822_0,102D3D188E3EDC4A4AA55F731966EBB22C827822," Looker connection - -To access your data in Looker, create a connection asset for it. - -Looker is a business intelligence software and big data analytics platform that helps you explore, analyze and share real-time business analytics. - -" -102D3D188E3EDC4A4AA55F731966EBB22C827822_1,102D3D188E3EDC4A4AA55F731966EBB22C827822," Create a connection to Looker - -To create the connection asset, you need these connection details: - - - -* Hostname or IP address -* Port number of the Looker server -* Client ID and Client secret - - - -Before you configure the connection, set up API3 credentials for your Looker instance. For details, see [Looker API Authentication](https://www.ibm.com/links?url=https%3A%2F%2Fdocs.looker.com%2Freference%2Fapi-and-integration%2Fapi-auth). - -" -102D3D188E3EDC4A4AA55F731966EBB22C827822_2,102D3D188E3EDC4A4AA55F731966EBB22C827822," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -102D3D188E3EDC4A4AA55F731966EBB22C827822_3,102D3D188E3EDC4A4AA55F731966EBB22C827822," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -102D3D188E3EDC4A4AA55F731966EBB22C827822_4,102D3D188E3EDC4A4AA55F731966EBB22C827822," Where you can use this connection - -You can use Looker connections in the following workspaces and tools: - -" -102D3D188E3EDC4A4AA55F731966EBB22C827822_5,102D3D188E3EDC4A4AA55F731966EBB22C827822,"Projects - - - -* SPSS Modeler -* Synthetic Data Generator - - - -" -102D3D188E3EDC4A4AA55F731966EBB22C827822_6,102D3D188E3EDC4A4AA55F731966EBB22C827822,"Catalogs - - - -* Platform assets catalog - - - -" -102D3D188E3EDC4A4AA55F731966EBB22C827822_7,102D3D188E3EDC4A4AA55F731966EBB22C827822," Looker setup - -[Set up and administer Looker](https://docs.looker.com/admin-options) - -" -102D3D188E3EDC4A4AA55F731966EBB22C827822_8,102D3D188E3EDC4A4AA55F731966EBB22C827822," Restriction - -You can use this connection only for source data. You cannot write to data or export data with this connection. - -" -102D3D188E3EDC4A4AA55F731966EBB22C827822_9,102D3D188E3EDC4A4AA55F731966EBB22C827822," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the Looker documentation, [Using SQL Runner](https://docs.looker.com/data-modeling/learning-lookml/sql-runner-create-queries), for the correct syntax. - -" -102D3D188E3EDC4A4AA55F731966EBB22C827822_10,102D3D188E3EDC4A4AA55F731966EBB22C827822," Supported file types - -The Looker connection supports these file types: CSV, Delimited text, Excel, JSON. - -" -102D3D188E3EDC4A4AA55F731966EBB22C827822_11,102D3D188E3EDC4A4AA55F731966EBB22C827822," Learn more - -[Looker documentation](https://docs.looker.com/) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -FECB1C7B603627E1CF1386AD0EBDFE57FA485F93_0,FECB1C7B603627E1CF1386AD0EBDFE57FA485F93," MariaDB connection - -To access your data in MariaDB, create a connection asset for it. - -MariaDB is an open source relational database. You can use the MariaDB connection to connect to either a MariaDB server or to a Microsoft Azure Database for MariaDB service in the cloud. - -" -FECB1C7B603627E1CF1386AD0EBDFE57FA485F93_1,FECB1C7B603627E1CF1386AD0EBDFE57FA485F93," Supported versions - - - -* MariaDB server: 10.5.5 -* Microsoft Azure Database for MariaDB: 10.3 - - - -" -FECB1C7B603627E1CF1386AD0EBDFE57FA485F93_2,FECB1C7B603627E1CF1386AD0EBDFE57FA485F93," Create a connection to MariaDB - -To create the connection asset, you need these connection details: - - - -* Database name -* Hostname or IP address -* Port number -* Username and password -* SSL certificate (if required by the database server) - - - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -FECB1C7B603627E1CF1386AD0EBDFE57FA485F93_3,FECB1C7B603627E1CF1386AD0EBDFE57FA485F93," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -FECB1C7B603627E1CF1386AD0EBDFE57FA485F93_4,FECB1C7B603627E1CF1386AD0EBDFE57FA485F93," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -FECB1C7B603627E1CF1386AD0EBDFE57FA485F93_5,FECB1C7B603627E1CF1386AD0EBDFE57FA485F93," Where you can use this connection - -You can use MariaDB connections in the following workspaces and tools: - -" -FECB1C7B603627E1CF1386AD0EBDFE57FA485F93_6,FECB1C7B603627E1CF1386AD0EBDFE57FA485F93,"Projects - - - -* Data Refinery -* Decision Optimization -* SPSS Modeler -* Synthetic Data Generator - - - -" -FECB1C7B603627E1CF1386AD0EBDFE57FA485F93_7,FECB1C7B603627E1CF1386AD0EBDFE57FA485F93,"Catalogs - - - -* Platform assets catalog - - - -" -FECB1C7B603627E1CF1386AD0EBDFE57FA485F93_8,FECB1C7B603627E1CF1386AD0EBDFE57FA485F93," MariaDB setup - -Setup depends on whether you are connecting from a local MariaDB server or a Microsoft Azure Database for MariaDB database service in the cloud. - - - -* MariaDB server: [MariaDB Administration](https://mariadb.com/kb/en/mariadb-administration/) -* Microsoft Azure Database for MariaDB: [Quickstart: Create an Azure Database for MariaDB server by using the Azure portal](https://docs.microsoft.com/en-us/azure/mariadb/quickstart-create-mariadb-server-database-using-azure-portal) - - - -" -FECB1C7B603627E1CF1386AD0EBDFE57FA485F93_9,FECB1C7B603627E1CF1386AD0EBDFE57FA485F93," Learn more - - - -* [MariaDB Foundation](https://mariadb.org/) -* [Microsoft Azure Database for MariaDB](https://azure.microsoft.com/en-us/services/mariadb/) - - - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -FD0BA49BF07FCC9CAF384A50B3012F98E4F1D81E_0,FD0BA49BF07FCC9CAF384A50B3012F98E4F1D81E," MongoDB connection - -To access your data in MongoDB, create a connection asset for it. - -MongoDB is a distributed database that stores data in JSON-like documents. - -" -FD0BA49BF07FCC9CAF384A50B3012F98E4F1D81E_1,FD0BA49BF07FCC9CAF384A50B3012F98E4F1D81E," Supported editions and versions - -" -FD0BA49BF07FCC9CAF384A50B3012F98E4F1D81E_2,FD0BA49BF07FCC9CAF384A50B3012F98E4F1D81E," MongoDB editions - - - -* MongoDB Community -* IBM Cloud Databases for MongoDB. See [IBM Cloud Databases for MongoDB connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mongodb.html) for this data source. -* MongoDB Atlas -* WiredTiger Storage Engine - - - -" -FD0BA49BF07FCC9CAF384A50B3012F98E4F1D81E_3,FD0BA49BF07FCC9CAF384A50B3012F98E4F1D81E," MongoDB versions - - - -* MongoDB 3.6 and later, 4.x, 5.x, and 6.x -* Microsoft Azure Cosmos DB for MongoDB 3.6 and later, 4.x - - - -" -FD0BA49BF07FCC9CAF384A50B3012F98E4F1D81E_4,FD0BA49BF07FCC9CAF384A50B3012F98E4F1D81E," Create a connection to MongoDB - -To create the connection asset, you need these connection details: - - - -* Database name -* Hostname or IP address -* Port number -* Authentication database: The name of the database in which the user was created. -* Username and password -* SSL certificate (if required by the database server) - - - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -FD0BA49BF07FCC9CAF384A50B3012F98E4F1D81E_5,FD0BA49BF07FCC9CAF384A50B3012F98E4F1D81E," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -FD0BA49BF07FCC9CAF384A50B3012F98E4F1D81E_6,FD0BA49BF07FCC9CAF384A50B3012F98E4F1D81E," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -FD0BA49BF07FCC9CAF384A50B3012F98E4F1D81E_7,FD0BA49BF07FCC9CAF384A50B3012F98E4F1D81E," Where you can use this connection - -You can use MongoDB connections in the following workspaces and tools: - -" -FD0BA49BF07FCC9CAF384A50B3012F98E4F1D81E_8,FD0BA49BF07FCC9CAF384A50B3012F98E4F1D81E,"Projects - - - -* Data Refinery -* Decision Optimization -* SPSS Modeler -* Synthetic Data Generator - - - -" -FD0BA49BF07FCC9CAF384A50B3012F98E4F1D81E_9,FD0BA49BF07FCC9CAF384A50B3012F98E4F1D81E,"Catalogs - - - -* Platform assets catalog - - - -" -FD0BA49BF07FCC9CAF384A50B3012F98E4F1D81E_10,FD0BA49BF07FCC9CAF384A50B3012F98E4F1D81E," MongoDB setup - -[MongoDB installation](https://docs.mongodb.com/manual/installation/) - -" -FD0BA49BF07FCC9CAF384A50B3012F98E4F1D81E_11,FD0BA49BF07FCC9CAF384A50B3012F98E4F1D81E," Restrictions - - - -* You can only use this connection for source data. You cannot write to data or export data with this connection. -* MongoDB Query Language (MQL) is not supported. - - - -" -FD0BA49BF07FCC9CAF384A50B3012F98E4F1D81E_12,FD0BA49BF07FCC9CAF384A50B3012F98E4F1D81E," Learn more - - - -* [MongoDB tutorials](https://docs.mongodb.com/manual/tutorial/) -* [mongodb.com](https://www.mongodb.com/) - - - -Related connection: [IBM Cloud Databases for MongoDB connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mongodb.html) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -3721517369CF4EA8476BCBB39040542BA2A212D8_0,3721517369CF4EA8476BCBB39040542BA2A212D8," IBM Cloud Databases for MongoDB connection - -To access your data in IBM Cloud Databases for MongoDB, create a connection asset for it. - -IBM Cloud Databases for MongoDB is a MongoDB database that is managed by IBM Cloud. It uses a JSON document store with a rich query and aggregation framework. - -" -3721517369CF4EA8476BCBB39040542BA2A212D8_1,3721517369CF4EA8476BCBB39040542BA2A212D8," Supported editions - - - -* MongoDB Community Edition -* MongoDB Enterprise Edition - - - -" -3721517369CF4EA8476BCBB39040542BA2A212D8_2,3721517369CF4EA8476BCBB39040542BA2A212D8," Create a connection to IBM Cloud Databases for MongoDB - -To create the connection asset, you need these connection details: - - - -* Database name -* Hostname or IP address -* Port number -* Authentication database: The name of the database in which the user was created. -* Username and password -* SSL certificate (if required by the database server) - - - -" -3721517369CF4EA8476BCBB39040542BA2A212D8_3,3721517369CF4EA8476BCBB39040542BA2A212D8," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -3721517369CF4EA8476BCBB39040542BA2A212D8_4,3721517369CF4EA8476BCBB39040542BA2A212D8," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -3721517369CF4EA8476BCBB39040542BA2A212D8_5,3721517369CF4EA8476BCBB39040542BA2A212D8," Where you can use this connection - -You can use IBM Cloud Databases for MongoDB connections in the following workspaces and tools: - -" -3721517369CF4EA8476BCBB39040542BA2A212D8_6,3721517369CF4EA8476BCBB39040542BA2A212D8,"Projects - - - -* Data Refinery -* Decision Optimization -* SPSS Modeler -* Synthetic Data Generator - - - -" -3721517369CF4EA8476BCBB39040542BA2A212D8_7,3721517369CF4EA8476BCBB39040542BA2A212D8,"Catalogs - - - -* Platform assets catalog - - - -" -3721517369CF4EA8476BCBB39040542BA2A212D8_8,3721517369CF4EA8476BCBB39040542BA2A212D8," IBM Cloud Databases for MongoDB setup - -[Getting Started Tutorial](https://cloud.ibm.com/docs/databases-for-mongodb?topic=databases-for-mongodb-getting-started-tutorial) - -" -3721517369CF4EA8476BCBB39040542BA2A212D8_9,3721517369CF4EA8476BCBB39040542BA2A212D8," Restrictions - - - -* You can only use this connection for source data. You cannot write to data or export data with this connection. -* MongoDB Query Language (MQL) is not supported. - - - -Related connection: [MongoDB connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mongo.html) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -BA78048BAD0EE0F455762254F562704769EA4149_0,BA78048BAD0EE0F455762254F562704769EA4149," MySQL connection - -To access your data in MySQL, create a connection asset for it. - -MySQL is an open-source relational database management system. - -" -BA78048BAD0EE0F455762254F562704769EA4149_1,BA78048BAD0EE0F455762254F562704769EA4149," Supported versions - - - -* MySQL Enterprise Edition 5.0+ -* MySQL Community Edition 4.1, 5.0, 5.1, 5.5, 5.6, 5.7 - - - -" -BA78048BAD0EE0F455762254F562704769EA4149_2,BA78048BAD0EE0F455762254F562704769EA4149," Create a connection to MySQL - -To create the connection asset, you need these connection details: - - - -* Database name -* Hostname or IP Address -* Port number -* Character Encoding -* Username and password -* SSL certificate (if required by the database server) - - - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -BA78048BAD0EE0F455762254F562704769EA4149_3,BA78048BAD0EE0F455762254F562704769EA4149," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -BA78048BAD0EE0F455762254F562704769EA4149_4,BA78048BAD0EE0F455762254F562704769EA4149," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -BA78048BAD0EE0F455762254F562704769EA4149_5,BA78048BAD0EE0F455762254F562704769EA4149," Where you can use this connection - -You can use MySQL connections in the following workspaces and tools: - -" -BA78048BAD0EE0F455762254F562704769EA4149_6,BA78048BAD0EE0F455762254F562704769EA4149,"Projects - - - -* Decision Optimization -* SPSS Modeler -* Synthetic Data Generator - - - -" -BA78048BAD0EE0F455762254F562704769EA4149_7,BA78048BAD0EE0F455762254F562704769EA4149,"Catalogs - - - -* Platform assets catalog - - - -" -BA78048BAD0EE0F455762254F562704769EA4149_8,BA78048BAD0EE0F455762254F562704769EA4149," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the [MySQL documentation](https://dev.mysql.com/doc/) for the correct syntax. - -" -BA78048BAD0EE0F455762254F562704769EA4149_9,BA78048BAD0EE0F455762254F562704769EA4149," MySQL setup - -[MySQL Installation ](https://dev.mysql.com/doc/mysql-getting-started/en/) - -" -BA78048BAD0EE0F455762254F562704769EA4149_10,BA78048BAD0EE0F455762254F562704769EA4149," Learn more - -[MySQL documentation](https://dev.mysql.com/doc/) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -4CA6000DC674CB2486D905F8531FC19BC88F887A_0,4CA6000DC674CB2486D905F8531FC19BC88F887A," OData connection - -To access your data in OData, create a connection asset for it. - -The OData (Open Data) protocol is a REST-based data access protocol. The OData connection reads data from a data source that uses the OData protocol. - -" -4CA6000DC674CB2486D905F8531FC19BC88F887A_1,4CA6000DC674CB2486D905F8531FC19BC88F887A," Supported versions - -The OData connection is supported on OData protocol version 2 or version 4. - -" -4CA6000DC674CB2486D905F8531FC19BC88F887A_2,4CA6000DC674CB2486D905F8531FC19BC88F887A," Create a connection to OData - -To create the connection asset, you need these connection details: - -Credentials type: - - - -* API Key -* Basic -* None - - - -Encryption: -SSL certificate (if required by the database server) - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -4CA6000DC674CB2486D905F8531FC19BC88F887A_3,4CA6000DC674CB2486D905F8531FC19BC88F887A," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -4CA6000DC674CB2486D905F8531FC19BC88F887A_4,4CA6000DC674CB2486D905F8531FC19BC88F887A," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -4CA6000DC674CB2486D905F8531FC19BC88F887A_5,4CA6000DC674CB2486D905F8531FC19BC88F887A," Where you can use this connection - -You can use the OData connection in the following workspaces and tools: - -" -4CA6000DC674CB2486D905F8531FC19BC88F887A_6,4CA6000DC674CB2486D905F8531FC19BC88F887A,"Projects - - - -* Data Refinery -* SPSS Modeler -* Synthetic Data Generator - - - -" -4CA6000DC674CB2486D905F8531FC19BC88F887A_7,4CA6000DC674CB2486D905F8531FC19BC88F887A,"Catalogs - - - -* Platform assets catalog - - - -" -4CA6000DC674CB2486D905F8531FC19BC88F887A_8,4CA6000DC674CB2486D905F8531FC19BC88F887A," OData setup - -To set up the OData service, see [How to Use Web API OData to Build an OData V4 Service without Entity Framework](https://www.odata.org/blog/how-to-use-web-api-odata-to-build-an-odata-v4-service-without-entity-framework/). - -" -4CA6000DC674CB2486D905F8531FC19BC88F887A_9,4CA6000DC674CB2486D905F8531FC19BC88F887A," Restrictions - - - -* For Data Refinery, you can use this connection only as a source. You cannot use this connection as a target connection or as a target connected data asset. -* For SPSS Modeler, you cannot create new entity sets. - - - -" -4CA6000DC674CB2486D905F8531FC19BC88F887A_10,4CA6000DC674CB2486D905F8531FC19BC88F887A," Learn more - -[www.odata.org](https://www.odata.org/) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -96B0DF7161FF334810F77FE93235BD4D548164A7_0,96B0DF7161FF334810F77FE93235BD4D548164A7," Oracle connection - -To access your data in Oracle, you must create a connection asset for it. - -Oracle is a multi-model database management system. - -" -96B0DF7161FF334810F77FE93235BD4D548164A7_1,96B0DF7161FF334810F77FE93235BD4D548164A7," Supported versions - - - -* Oracle 19c and 21c - - - -" -96B0DF7161FF334810F77FE93235BD4D548164A7_2,96B0DF7161FF334810F77FE93235BD4D548164A7," Create a connection to Oracle - -To create the connection asset, you need the following connection details: - - - -* Service name or Database (SID) -* Hostname or IP address -* Port number -* SSL certificate (if required by the database server) -* Alternate servers: A list of alternate database servers to use for failover for new or lost connections. -Syntax: (servername1[:port1]]]]...) - -The server name (servername1, servername2, and so on) is required for each alternate server entry. The port number (port1, port2, and so on) and the connection properties (property=value) are optional for each alternate server entry. If the port is unspecified, the port number of the primary server is used. - -If the port number of the primary server is not specified, the default port number 1521 is used. - -The optional connection properties are the ServiceName and SID. -* Metadata discovery: The setting determines whether comments on columns (remarks) and aliases for schema objects such as tables or views (synonyms) are retrieved when assets are added by using this connection. - - - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -96B0DF7161FF334810F77FE93235BD4D548164A7_3,96B0DF7161FF334810F77FE93235BD4D548164A7," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. For more information, see [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -96B0DF7161FF334810F77FE93235BD4D548164A7_4,96B0DF7161FF334810F77FE93235BD4D548164A7," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -96B0DF7161FF334810F77FE93235BD4D548164A7_5,96B0DF7161FF334810F77FE93235BD4D548164A7," Where you can use this connection - -You can use Oracle connections in the following workspaces and tools: - -" -96B0DF7161FF334810F77FE93235BD4D548164A7_6,96B0DF7161FF334810F77FE93235BD4D548164A7,"Projects - - - -* Data Refinery -* SPSS Modeler -* Synthetic Data Generator - - - -" -96B0DF7161FF334810F77FE93235BD4D548164A7_7,96B0DF7161FF334810F77FE93235BD4D548164A7,"Catalogs - - - -* Platform assets catalog - - - -" -96B0DF7161FF334810F77FE93235BD4D548164A7_8,96B0DF7161FF334810F77FE93235BD4D548164A7," Oracle setup - -[Oracle installation](https://docs.oracle.com/cd/E11882_01/server.112/e10897/install.htmADMQS002) - -" -96B0DF7161FF334810F77FE93235BD4D548164A7_9,96B0DF7161FF334810F77FE93235BD4D548164A7," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the [Oracle Supported SQL Syntax and Functions](https://docs.oracle.com/en/database/oracle/oracle-database/21/gmswn/database-gateway-sqlserver-supported-sql-syntax-functions.html) for the correct syntax. - -" -96B0DF7161FF334810F77FE93235BD4D548164A7_10,96B0DF7161FF334810F77FE93235BD4D548164A7," Learn more - -[Oracle product documentation](https://docs.oracle.com/en/database/) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -4517071B8CDD91311A13DECDD9D0A7FD761AA616_0,4517071B8CDD91311A13DECDD9D0A7FD761AA616," IBM Planning Analytics connection - -To access your data in Planning Analytics, create a connection asset for it. - -Planning Analytics (formerly known as ""TM1"") is an enterprise performance management database that stores data in in-memory multidimensional OLAP cubes. - -" -4517071B8CDD91311A13DECDD9D0A7FD761AA616_1,4517071B8CDD91311A13DECDD9D0A7FD761AA616," Supported versions - -IBM Planning Analytics, version 2.0.5 or later - -" -4517071B8CDD91311A13DECDD9D0A7FD761AA616_2,4517071B8CDD91311A13DECDD9D0A7FD761AA616," Create a connection to Planning Analytics - -To create the connection asset, you need these connection details: - - - -* TM1 server API root URL -* Authentication type (Basic or CAM Credentials) -* Username and password -* SSL certificate (if required by the database server) - - - -For authentication setup information, see [Authenticating and managing sessions](https://www.ibm.com/docs/SSD29G_2.0.0/com.ibm.swg.ba.cognos.tm1_rest_api.2.0.0.doc/dg_tm1_odata_auth.html). - -" -4517071B8CDD91311A13DECDD9D0A7FD761AA616_3,4517071B8CDD91311A13DECDD9D0A7FD761AA616," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -4517071B8CDD91311A13DECDD9D0A7FD761AA616_4,4517071B8CDD91311A13DECDD9D0A7FD761AA616," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -4517071B8CDD91311A13DECDD9D0A7FD761AA616_5,4517071B8CDD91311A13DECDD9D0A7FD761AA616," Where you can use this connection - -You can use Planning Analytics connections in the following workspaces and tools: - - - -* Data Refinery -* Decision Optimization experiments -* Notebooks. Click Read data on the Code snippets pane to get the connection credentials and load the data into a data structure. See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.htmlconns). -* SPSS Modeler -* Synthetic Data Generator - - - -" -4517071B8CDD91311A13DECDD9D0A7FD761AA616_6,4517071B8CDD91311A13DECDD9D0A7FD761AA616,"Catalogs - - - -* Platform assets catalog - - - -" -4517071B8CDD91311A13DECDD9D0A7FD761AA616_7,4517071B8CDD91311A13DECDD9D0A7FD761AA616," Planning Analytics setup - -Enable TM1 REST APIs on the TM1 Server. See TMI REST API [Installation and configuration](https://www.ibm.com/docs/SSD29G_2.0.0/com.ibm.swg.ba.cognos.tm1_rest_api.2.0.0.doc/dg_tm1_odata_install.html). - -" -4517071B8CDD91311A13DECDD9D0A7FD761AA616_8,4517071B8CDD91311A13DECDD9D0A7FD761AA616," Cube dimension order - -" -4517071B8CDD91311A13DECDD9D0A7FD761AA616_9,4517071B8CDD91311A13DECDD9D0A7FD761AA616,"Versions earlier than TM1 11.4 -For best performance, do not combine string and numeric data in a single cube. However, if the cube does include both string and numeric data, the string elements must be in the last dimension when the cube is created. Reordering dimensions later is ignored. - -" -4517071B8CDD91311A13DECDD9D0A7FD761AA616_10,4517071B8CDD91311A13DECDD9D0A7FD761AA616,"Version TM1 11.4 or later -The default setting in Planning Analytics for cube creation is current. This setting might cause errors or unexpected results when you use the Planning Analytics connection. Instead, set the interaction property use_creation_order value to true. - -" -4517071B8CDD91311A13DECDD9D0A7FD761AA616_11,4517071B8CDD91311A13DECDD9D0A7FD761AA616," Restriction - -For Data Refinery, you can use this connection only as a source. You cannot use this connection as a target connection or as a target connected data asset. - -" -4517071B8CDD91311A13DECDD9D0A7FD761AA616_12,4517071B8CDD91311A13DECDD9D0A7FD761AA616," Learn more - -[Planning Analytics product documentation](https://www.ibm.com/docs/planning-analytics/2.0.0) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -1BED610A414085E625BD32AAE5FFAC81B41F97E0_0,1BED610A414085E625BD32AAE5FFAC81B41F97E0," PostgreSQL connection - -To access your data in PostgreSQL, you must create a connection asset for it. - -PostgreSQL is an open source and customizable object-relational database. - -" -1BED610A414085E625BD32AAE5FFAC81B41F97E0_1,1BED610A414085E625BD32AAE5FFAC81B41F97E0," Supported versions - - - -* PostgreSQL 15.0 and later -* PostgreSQL 14.0 and later -* PostgreSQL 13.0 and later -* PostgreSQL 12.0 and later -* PostgreSQL 11.0 and later -* PostgreSQL 10.1 and later -* PostgreSQL 9.6 and later - - - -" -1BED610A414085E625BD32AAE5FFAC81B41F97E0_2,1BED610A414085E625BD32AAE5FFAC81B41F97E0," Create a connection to PostgreSQL - -To create the connection asset, you need the following connection details: - - - -* Database name -* Hostname or IP address -* Port number -* Username and password -* SSL certificate (if required by the database server) - - - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -1BED610A414085E625BD32AAE5FFAC81B41F97E0_3,1BED610A414085E625BD32AAE5FFAC81B41F97E0," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -1BED610A414085E625BD32AAE5FFAC81B41F97E0_4,1BED610A414085E625BD32AAE5FFAC81B41F97E0," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -1BED610A414085E625BD32AAE5FFAC81B41F97E0_5,1BED610A414085E625BD32AAE5FFAC81B41F97E0," Where you can use this connection - -You can use PostgreSQL connections in the following workspaces and tools: - -" -1BED610A414085E625BD32AAE5FFAC81B41F97E0_6,1BED610A414085E625BD32AAE5FFAC81B41F97E0,"Projects - - - -* Data Refinery -* Decision Optimization -* SPSS Modeler -* Synthetic Data Generator - - - -" -1BED610A414085E625BD32AAE5FFAC81B41F97E0_7,1BED610A414085E625BD32AAE5FFAC81B41F97E0,"Catalogs - - - -* Platform assets catalog - - - -" -1BED610A414085E625BD32AAE5FFAC81B41F97E0_8,1BED610A414085E625BD32AAE5FFAC81B41F97E0," PostgreSQL setup - -[PostgreSQL installation](https://www.pgadmin.org/docs/pgadmin4/latest/getting_started.html) - -" -1BED610A414085E625BD32AAE5FFAC81B41F97E0_9,1BED610A414085E625BD32AAE5FFAC81B41F97E0," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the [SQL Syntax](https://www.postgresql.org/docs/current/sql-syntax.html) in the PostgreSQL documentation. - -" -1BED610A414085E625BD32AAE5FFAC81B41F97E0_10,1BED610A414085E625BD32AAE5FFAC81B41F97E0," Learn more - -[PostgreSQL documentation](https://www.postgresql.org/docs/) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022_0,2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022," Presto connection - -To access your data in Presto, create a connection asset for it. - -Presto is a fast and reliable SQL engine for Data Analytics and the Open Lakehouse. - -" -2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022_1,2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022," Supported versions - - - -* Version 0.279 and earlier - - - -" -2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022_2,2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022," Create a connection to Presto - -To create the connection asset, you need these connection details: - - - -* Hostname or IP address -* Port -* Username -* Password (required if you connect to Presto with SSL enabled) -* SSL certificate (if required by the Presto server) - - - -" -2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022_3,2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022," Connecting to Presto within IBM watsonx.data - -To connect to a Presto server within watsonx.data on IBM Cloud, use these connection details: - - - -* Username: ibmlhapikey -* Password (for SSL-enabled, which is the default): An IBM Cloud API key. For more information, see [Connecting to Presto server](https://cloud.ibm.com/docs/watsonxdata?topic=watsonxdata-con-presto-serv). - - - -To connect to a Presto server within watsonx.data on Cloud Pak for Data or stand-alone watsonx.data, use the username and password that you use for the watsonx.data console. - -" -2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022_4,2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022," Choose the method for creating a connection based on where you are in the platform - -" -2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022_5,2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022,"In a project -Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -" -2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022_6,2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022,"In a deployment space -Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022_7,2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022_8,2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022," Where you can use this connection - -You can use the Presto connection in the following workspaces and tools: - -" -2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022_9,2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022,"Projects - - - -* Data Refinery -* SPSS Modeler -* Synthetic Data Generator - - - -" -2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022_10,2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022,"Catalogs - - - -* Platform assets catalog - - - -" -2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022_11,2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022," Presto setup - -To set up Presto, see [Presto installation](https://prestodb.io/docs/current/installation.html). - -" -2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022_12,2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022," Restriction - -You can use this connection only for source data. You cannot write to data or export data with this connection. - -" -2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022_13,2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022," Limitation - -The Presto connection does not support the Apache Cassandra Time data type. - -" -2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022_14,2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the [SQL Statement Syntax](https://prestodb.io/docs/current/sql.html) for the correct syntax. - -" -2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022_15,2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022," Learn more - -[Presto documentation](https://prestodb.io/docs/current/index.html) - -Parent topic: [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -2B5427922D2D03F0FDDFE73BBDE7E8B8DCDA6A60_0,2B5427922D2D03F0FDDFE73BBDE7E8B8DCDA6A60," IBM Netezza Performance Server connection - -To access your data in IBM Netezza Performance Server, you must create a connection asset for it. - -Netezza Performance Server is a platform for high-performance data warehousing and analytics. - -" -2B5427922D2D03F0FDDFE73BBDE7E8B8DCDA6A60_1,2B5427922D2D03F0FDDFE73BBDE7E8B8DCDA6A60," Supported versions - - - -* IBM Netezza Performance Server 11.x -* IBM Netezza appliance software 7.0.x, 7.1.x, 7.2.x - - - -" -2B5427922D2D03F0FDDFE73BBDE7E8B8DCDA6A60_2,2B5427922D2D03F0FDDFE73BBDE7E8B8DCDA6A60," Create a connection to Netezza Performance Server - -To create the connection asset, you need the following connection details: - - - -* Database name -* Hostname or IP address -* Port number -* Username and password -* SSL certificate (if required by the database server) - - - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -2B5427922D2D03F0FDDFE73BBDE7E8B8DCDA6A60_3,2B5427922D2D03F0FDDFE73BBDE7E8B8DCDA6A60," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -2B5427922D2D03F0FDDFE73BBDE7E8B8DCDA6A60_4,2B5427922D2D03F0FDDFE73BBDE7E8B8DCDA6A60," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -2B5427922D2D03F0FDDFE73BBDE7E8B8DCDA6A60_5,2B5427922D2D03F0FDDFE73BBDE7E8B8DCDA6A60," Where you can use this connection - -You can use Netezza Performance Server connections in the following workspaces and tools: - -" -2B5427922D2D03F0FDDFE73BBDE7E8B8DCDA6A60_6,2B5427922D2D03F0FDDFE73BBDE7E8B8DCDA6A60,"Projects - - - -* SPSS Modeler -* Synthetic Data Generator - - - -" -2B5427922D2D03F0FDDFE73BBDE7E8B8DCDA6A60_7,2B5427922D2D03F0FDDFE73BBDE7E8B8DCDA6A60,"Catalogs - - - -* Platform assets catalog - - - -" -2B5427922D2D03F0FDDFE73BBDE7E8B8DCDA6A60_8,2B5427922D2D03F0FDDFE73BBDE7E8B8DCDA6A60," Netezza Performance Server setup - - - -* [Netezza Performance Server Getting started](https://www.ibm.com/docs/SSTNZ3/get-started/get_strt.html) -* [PureData System for Analytics Initial system setup](https://www.ibm.com/docs/psfa/7.2.1?topic=overview-initial-system-setup-information) - - - -" -2B5427922D2D03F0FDDFE73BBDE7E8B8DCDA6A60_9,2B5427922D2D03F0FDDFE73BBDE7E8B8DCDA6A60," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the product documentation: - - - -* [Netezza Performance Server SQL command reference](https://www.ibm.com/docs/SSTNZ3/nps-cpds-20X/dbuser/r_dbuser_ntz_sql_command_reference.html) -* [PureData System for Analytics IBM Netezza SQL Extensions toolkit](https://www.ibm.com/docs/en/psfa/7.2.1?topic=netezza-sql-extensions-toolkit) - - - -" -2B5427922D2D03F0FDDFE73BBDE7E8B8DCDA6A60_10,2B5427922D2D03F0FDDFE73BBDE7E8B8DCDA6A60," Learn more - - - -* [IBM Netezza Performance Server documentation](https://www.ibm.com/docs/netezza) -* [IBM PureData System for Analytics documentation](https://www.ibm.com/docs/psfa) - - - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -85DFC4B40DA36A5D66892B5B231C9743C67D7E71_0,85DFC4B40DA36A5D66892B5B231C9743C67D7E71," Amazon Redshift connection - -To access your data in Amazon Redshift, create a connection asset for it. - -Amazon Redshift is a data warehouse product that forms part of the larger cloud-computing platform Amazon Web Services (AWS). - -" -85DFC4B40DA36A5D66892B5B231C9743C67D7E71_1,85DFC4B40DA36A5D66892B5B231C9743C67D7E71," Create a connection to Amazon Redshift - -To create the connection asset, you need these connection details: - - - -* Database name -* Hostname or IP address -* Port number -* Username and password -* SSL certificate (if required by the database server) - - - -" -85DFC4B40DA36A5D66892B5B231C9743C67D7E71_2,85DFC4B40DA36A5D66892B5B231C9743C67D7E71," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -85DFC4B40DA36A5D66892B5B231C9743C67D7E71_3,85DFC4B40DA36A5D66892B5B231C9743C67D7E71," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -85DFC4B40DA36A5D66892B5B231C9743C67D7E71_4,85DFC4B40DA36A5D66892B5B231C9743C67D7E71," Where you can use this connection - -You can use Amazon Redshift connections in the following workspaces and tools: - -" -85DFC4B40DA36A5D66892B5B231C9743C67D7E71_5,85DFC4B40DA36A5D66892B5B231C9743C67D7E71,"Projects - - - -* Data Refinery -* SPSS Modeler -* Synthetic Data Generator - - - -" -85DFC4B40DA36A5D66892B5B231C9743C67D7E71_6,85DFC4B40DA36A5D66892B5B231C9743C67D7E71,"Catalogs - - - -* Platform assets catalog - - - -" -85DFC4B40DA36A5D66892B5B231C9743C67D7E71_7,85DFC4B40DA36A5D66892B5B231C9743C67D7E71," Amazon Redshift setup - -See [Amazon Redshift setup prerequisites](https://docs.aws.amazon.com/redshift/latest/gsg/rs-gsg-prereq.html) for setup information. - -" -85DFC4B40DA36A5D66892B5B231C9743C67D7E71_8,85DFC4B40DA36A5D66892B5B231C9743C67D7E71," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the [ Amazon Redshift documentation](https://docs.aws.amazon.com/redshift/latest/dg/cm_chap_SQLCommandRef.html) for the correct syntax. - -" -85DFC4B40DA36A5D66892B5B231C9743C67D7E71_9,85DFC4B40DA36A5D66892B5B231C9743C67D7E71," Learn more - -[Amazon Redshift documentation](https://docs.aws.amazon.com/redshift/index.html) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -02326495F914D005BDE7360F0826E8C140816613_0,02326495F914D005BDE7360F0826E8C140816613," Salesforce.com connection - -To access your data in Salesforce.com, create a connection asset for it. - -Salesforce.com is a cloud-based software company which provides customer relationship management (CRM). The Salesforce.com connection supports the standard SQL query language to select, insert, update, and delete data from Salesforce.com products and other supported products that use the Salesforce API. - -" -02326495F914D005BDE7360F0826E8C140816613_1,02326495F914D005BDE7360F0826E8C140816613," Other supported products that use the Salesforce API - - - -* Salesforce AppExchange -* FinancialForce -* Service Cloud -* ServiceMax -* Veeva CRM - - - -" -02326495F914D005BDE7360F0826E8C140816613_2,02326495F914D005BDE7360F0826E8C140816613," Create a connection to Salesforce.com - -To create the connection asset, you need these connection details: - - - -* The username to access the Salesforce.com server. -* The password and security token to access the Salesforce.com server. In the Password field, append your security token to the end of your password. For example, MypasswordMyAccessToken. For information about access tokens, see [Reset Your Security Token](https://help.salesforce.com/articleView?id=sf.user_security_token.htm&type=5). -* The Salesforce.com server name. The default is login.salesforce.com. - - - -" -02326495F914D005BDE7360F0826E8C140816613_3,02326495F914D005BDE7360F0826E8C140816613," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -02326495F914D005BDE7360F0826E8C140816613_4,02326495F914D005BDE7360F0826E8C140816613," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -02326495F914D005BDE7360F0826E8C140816613_5,02326495F914D005BDE7360F0826E8C140816613," Where you can use this connection - -You can use Salesforce.com connections in the following workspaces and tools: - -" -02326495F914D005BDE7360F0826E8C140816613_6,02326495F914D005BDE7360F0826E8C140816613,"Projects - - - -* SPSS Modeler -* Synthetic Data Generator - - - -" -02326495F914D005BDE7360F0826E8C140816613_7,02326495F914D005BDE7360F0826E8C140816613,"Catalogs - - - -* Platform assets catalog - - - -" -02326495F914D005BDE7360F0826E8C140816613_8,02326495F914D005BDE7360F0826E8C140816613," Restriction - -You can only use this connection for source data. You cannot write to data or export data with this connection. - -" -02326495F914D005BDE7360F0826E8C140816613_9,02326495F914D005BDE7360F0826E8C140816613," Known issue - -The following objects in the SFORCE schema are not supported: APPTABMEMBER, CONTENTDOCUMENTLINK, CONTENTFOLDERITEM, CONTENTFOLDERMEMBER, DATACLOUDADDRESS, DATACLOUDCOMPANY, DATACLOUDCONTACT, DATACLOUDANDBCOMPANY, DATASTATISTICS, ENTITYPARTICLE, EVENTBUSSUBSCRIBER, FIELDDEFINITION, FLEXQUEUEITEM, ICONDEFINITION, IDEACOMMENT, LISTVIEWCHARINSTANCE, LOGINEVENT, OUTGOINGEMAIL, OUTGOINGEMAILRELATION, OWNERCHANGEOPTIONINFO, PICKLISTVALUEINFO, PLATFORMACTION, RECORDACTIONHISTORY, RELATIONSHIPDOMAIN, RELATIONSHIPINFO, SEARCHLAYOUT, SITEDETAIL, USERAPPMENUITEM, USERENTITYACCESS, USERFIELDACCESS, USERRECORDACCESS, VOTE. - -" -02326495F914D005BDE7360F0826E8C140816613_10,02326495F914D005BDE7360F0826E8C140816613," Learn more - - - -* [Get Started with Salesforce](https://help.salesforce.com/s/articleView?id=sf.basics_welcome_salesforce_users.htm&type=5) -* [Salesforce editions with API access](https://help.salesforce.com/s/articleView?id=000326486&type=1) - - - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -9288D1E76019A9D1F08873E515B9798906CC1C4B_0,9288D1E76019A9D1F08873E515B9798906CC1C4B," SAP ASE connection - -To access your data in SAP ASE, create a connection asset for it. - -SAP ASE is a relational model database server. SAP ASE was formerly Sybase. - -" -9288D1E76019A9D1F08873E515B9798906CC1C4B_1,9288D1E76019A9D1F08873E515B9798906CC1C4B," Supported versions - -SAP Sybase ASE 11.5+, 16.0+ - -" -9288D1E76019A9D1F08873E515B9798906CC1C4B_2,9288D1E76019A9D1F08873E515B9798906CC1C4B," Create a connection to SAP ASE - -To create the connection asset, you need these connection details: - - - -* Database name -* Hostname or IP address -* Port number -* Username and password -* SSL certificate (if required by the database server) - - - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -9288D1E76019A9D1F08873E515B9798906CC1C4B_3,9288D1E76019A9D1F08873E515B9798906CC1C4B," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -9288D1E76019A9D1F08873E515B9798906CC1C4B_4,9288D1E76019A9D1F08873E515B9798906CC1C4B," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -9288D1E76019A9D1F08873E515B9798906CC1C4B_5,9288D1E76019A9D1F08873E515B9798906CC1C4B," Where you can use this connection - -You can use SAP ASE connections in the following workspaces and tools: - -" -9288D1E76019A9D1F08873E515B9798906CC1C4B_6,9288D1E76019A9D1F08873E515B9798906CC1C4B,"Projects - - - -* SPSS Modeler -* Synthetic Data Generator - - - -" -9288D1E76019A9D1F08873E515B9798906CC1C4B_7,9288D1E76019A9D1F08873E515B9798906CC1C4B,"Catalogs - - - -* Platform assets catalog - - - -" -9288D1E76019A9D1F08873E515B9798906CC1C4B_8,9288D1E76019A9D1F08873E515B9798906CC1C4B," SAP ASE setup - -[Get Started with SAP ASE](https://www.sap.com/canada/products/sybase-ase/get-started.html) - -" -9288D1E76019A9D1F08873E515B9798906CC1C4B_9,9288D1E76019A9D1F08873E515B9798906CC1C4B," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the [SAP ASE documentation](https://help.sap.com/viewer/product/SAP_ASE/16.0.4.1/en-US?task=whats_new_task) for the correct syntax. - -" -9288D1E76019A9D1F08873E515B9798906CC1C4B_10,9288D1E76019A9D1F08873E515B9798906CC1C4B," Learn more - -[SAP ASE technical information](https://www.sap.com/canada/products/sybase-ase/technical-information.html) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -BC1AD7048032258F29E1D4081A2EEC98B36D13CF_0,BC1AD7048032258F29E1D4081A2EEC98B36D13CF," SAP IQ connection - -To access your data in SAP IQ, create a connection asset for it. - -SAP IQ is a column-based, petabyte scale, relational database software system used for business intelligence, data warehousing, and data marts. SAP IQ was formerly Sybase IQ. - -" -BC1AD7048032258F29E1D4081A2EEC98B36D13CF_1,BC1AD7048032258F29E1D4081A2EEC98B36D13CF," Create a connection to SAP IQ - -To create the connection asset, you need these connection details: - - - -* Database name -* Hostname or IP address -* Port number -* Username and password -* SSL certificate (if required by the database server) - - - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -BC1AD7048032258F29E1D4081A2EEC98B36D13CF_2,BC1AD7048032258F29E1D4081A2EEC98B36D13CF," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -BC1AD7048032258F29E1D4081A2EEC98B36D13CF_3,BC1AD7048032258F29E1D4081A2EEC98B36D13CF," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -BC1AD7048032258F29E1D4081A2EEC98B36D13CF_4,BC1AD7048032258F29E1D4081A2EEC98B36D13CF," Where you can use this connection - -You can use SAP IQ connections in the following workspaces and tools: Projects - - - -* SPSS Modeler -* Synthetic Data Generator - - - -" -BC1AD7048032258F29E1D4081A2EEC98B36D13CF_5,BC1AD7048032258F29E1D4081A2EEC98B36D13CF,"Catalogs - - - -* Platform assets catalog - - - -" -BC1AD7048032258F29E1D4081A2EEC98B36D13CF_6,BC1AD7048032258F29E1D4081A2EEC98B36D13CF," SAP IQ setup - -[Get Started with SAP IQ](https://www.sap.com/canada/products/sybase-iq-big-data-management/get-started.html) - -" -BC1AD7048032258F29E1D4081A2EEC98B36D13CF_7,BC1AD7048032258F29E1D4081A2EEC98B36D13CF," Restriction - -You can use this connection only for source data. You cannot write to data or export data with this connection. - -" -BC1AD7048032258F29E1D4081A2EEC98B36D13CF_8,BC1AD7048032258F29E1D4081A2EEC98B36D13CF," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the [SAP IQ SQL Reference](https://help.sap.com/docs/SAP_IQ/a898e08b84f21015969fa437e89860c8/7b5bd4e8cdcb4593aba6f2895572b0a9.html) for the correct syntax. - -" -BC1AD7048032258F29E1D4081A2EEC98B36D13CF_9,BC1AD7048032258F29E1D4081A2EEC98B36D13CF," Learn more - -[SAP IQ technical information](https://www.sap.com/canada/products/sybase-iq-big-data-management/technical-information.html) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -6CD7B46E0C35165BFE21BE4967B68481E9BE840F_0,6CD7B46E0C35165BFE21BE4967B68481E9BE840F," SAP OData connection - -To access your data in SAP OData, create a connection asset for it. - -Use the SAP OData connection to extract data from a SAP system through its exposed OData services. - -" -6CD7B46E0C35165BFE21BE4967B68481E9BE840F_1,6CD7B46E0C35165BFE21BE4967B68481E9BE840F," Supported SAP OData products - -The SAP OData connection is supported on SAP products that support the OData protocol version 2. Example products are S4/HANA (on premises or cloud), ERP, and CRM. - -" -6CD7B46E0C35165BFE21BE4967B68481E9BE840F_2,6CD7B46E0C35165BFE21BE4967B68481E9BE840F," Create a connection to SAP OData - -To create the connection asset, you need these connection details: - -Credentials type: - - - -* API Key -* Basic -* None - - - -Encryption: -SSL certificate (if required by the database server) - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -6CD7B46E0C35165BFE21BE4967B68481E9BE840F_3,6CD7B46E0C35165BFE21BE4967B68481E9BE840F," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -6CD7B46E0C35165BFE21BE4967B68481E9BE840F_4,6CD7B46E0C35165BFE21BE4967B68481E9BE840F," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -6CD7B46E0C35165BFE21BE4967B68481E9BE840F_5,6CD7B46E0C35165BFE21BE4967B68481E9BE840F," Where you can use this connection - -You can use the SAP OData connection in the following workspaces and tools: - -" -6CD7B46E0C35165BFE21BE4967B68481E9BE840F_6,6CD7B46E0C35165BFE21BE4967B68481E9BE840F,"Projects - - - -* Data Refinery -* SPSS Modeler -* Synthetic Data Generator - - - -" -6CD7B46E0C35165BFE21BE4967B68481E9BE840F_7,6CD7B46E0C35165BFE21BE4967B68481E9BE840F,"Catalogs - - - -* Platform assets catalog - - - -" -6CD7B46E0C35165BFE21BE4967B68481E9BE840F_8,6CD7B46E0C35165BFE21BE4967B68481E9BE840F," SAP OData setup - -See [Prerequisites for using the SAP ODATA Connector](https://www.ibm.com/support/pages/node/886655) for the SAP Gateway setup instructions. - -" -6CD7B46E0C35165BFE21BE4967B68481E9BE840F_9,6CD7B46E0C35165BFE21BE4967B68481E9BE840F," Restrictions - - - -* For Data Refinery, you can use this connection only as a source. You cannot use this connection as a target connection or as a target connected data asset. -* For SPSS Modeler, you cannot create new entity sets. - - - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -8B9532ADFC4FE3D9213BBE56DA3323C759426287_0,8B9532ADFC4FE3D9213BBE56DA3323C759426287," SingleStoreDB connection - -To access your data in SingleStoreDB, create a connection asset for it. - -SingleStoreDB is a fast, distributed, and highly scalable cloud-based SQL database. You can use SingleStoreDB to power real-time and data-intensive applications. - -Use SingleStoreDB and watsonx.ai for generative AI applications. Benefits include semantic search, fast ingest, and low-latency response times for foundation models and traditional machine learning. - -" -8B9532ADFC4FE3D9213BBE56DA3323C759426287_1,8B9532ADFC4FE3D9213BBE56DA3323C759426287," Create a connection to SingleStoreDB - -To create the connection asset, you need these connection details: - - - -* Database name -* Hostname or IP address -* Port number -* Username and password -* SSL certificate (if required by the database server) - - - -" -8B9532ADFC4FE3D9213BBE56DA3323C759426287_2,8B9532ADFC4FE3D9213BBE56DA3323C759426287," Choose the method for creating a connection based on where you are in the platform - -" -8B9532ADFC4FE3D9213BBE56DA3323C759426287_3,8B9532ADFC4FE3D9213BBE56DA3323C759426287,"In a project -Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -" -8B9532ADFC4FE3D9213BBE56DA3323C759426287_4,8B9532ADFC4FE3D9213BBE56DA3323C759426287,"In a deployment space -Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -8B9532ADFC4FE3D9213BBE56DA3323C759426287_5,8B9532ADFC4FE3D9213BBE56DA3323C759426287," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -8B9532ADFC4FE3D9213BBE56DA3323C759426287_6,8B9532ADFC4FE3D9213BBE56DA3323C759426287," Where you can use this connection - -You can use the SingleStoreDB connection in the following workspaces and tools: - -" -8B9532ADFC4FE3D9213BBE56DA3323C759426287_7,8B9532ADFC4FE3D9213BBE56DA3323C759426287,"Projects - - - -* Data Refinery -* Decision Optimization -* SPSS Modeler - - - -" -8B9532ADFC4FE3D9213BBE56DA3323C759426287_8,8B9532ADFC4FE3D9213BBE56DA3323C759426287,"Catalogs - - - -* Platform assets catalog - - - -" -8B9532ADFC4FE3D9213BBE56DA3323C759426287_9,8B9532ADFC4FE3D9213BBE56DA3323C759426287," SingleStoreDB setup - -To set up SingleStoreDB, see [Getting Started with SingleStoreDB Cloud](https://docs.singlestore.com/cloud/getting-started-with-singlestoredb-cloud/). - -" -8B9532ADFC4FE3D9213BBE56DA3323C759426287_10,8B9532ADFC4FE3D9213BBE56DA3323C759426287," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the SingleStore Docs [SQL Reference](https://docs.singlestore.com/db/v8.1/reference/sql-reference/) for the correct syntax. - -" -8B9532ADFC4FE3D9213BBE56DA3323C759426287_11,8B9532ADFC4FE3D9213BBE56DA3323C759426287," Learn more - - - -* [SingleStoreDB Cloud](https://docs.singlestore.com/) -* [SingleStoreDB with IBM](https://www.ibm.com/products/singlestore) for information about the IBM partnership with SingleStoreDB that provides a single source of procurement, support, and security. - - - -Parent topic: [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -0B5C4D75EA0A1CD2ADE09184EE2B159E23033CAE_0,0B5C4D75EA0A1CD2ADE09184EE2B159E23033CAE," Snowflake connection - -To access your data in Snowflake, you must create a connection asset for it. - -Snowflake is a cloud-based data storage and analytics service. - -" -0B5C4D75EA0A1CD2ADE09184EE2B159E23033CAE_1,0B5C4D75EA0A1CD2ADE09184EE2B159E23033CAE," Create a connection to Snowflake - -To create the connection asset, you need the following connection details: - - - -* Account name: The full name of your account -* Database name -* Role: The default access control role to use in the Snowflake session -* Warehouse: The virtual warehouse - - - -" -0B5C4D75EA0A1CD2ADE09184EE2B159E23033CAE_2,0B5C4D75EA0A1CD2ADE09184EE2B159E23033CAE," Credentials - -Authentication method: - - - -* Username and password -* Key-Pair: Enter the contents of the private key and the key passphrase (if configured). These properties must be set up by the Snowflake administrator. For information, see [Key Pair Authentication & Key Pair Rotation](https://docs.snowflake.com/en/user-guide/key-pair-auth) in the Snowflake documentation. -* Okta URL endpoint: If your company uses native Okta SSO authentication, enter the Okta URL endpoint for your Okta account. Example: https://.okta.com. Leave this field blank if you want to use the default authentication of Snowflake. For information about federated authentication provided by Okta, see [Native SSO](https://docs.snowflake.com/en/user-guide/admin-security-fed-auth-use.htmlnative-sso-okta-only). - - - -" -0B5C4D75EA0A1CD2ADE09184EE2B159E23033CAE_3,0B5C4D75EA0A1CD2ADE09184EE2B159E23033CAE," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -0B5C4D75EA0A1CD2ADE09184EE2B159E23033CAE_4,0B5C4D75EA0A1CD2ADE09184EE2B159E23033CAE," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -0B5C4D75EA0A1CD2ADE09184EE2B159E23033CAE_5,0B5C4D75EA0A1CD2ADE09184EE2B159E23033CAE," Where you can use this connection - -You can use Snowflake connections in the following workspaces and tools: - -" -0B5C4D75EA0A1CD2ADE09184EE2B159E23033CAE_6,0B5C4D75EA0A1CD2ADE09184EE2B159E23033CAE,"Projects - - - -* Data Refinery -* Decision Optimization -* SPSS Modeler -* Synthetic Data Generator - - - -" -0B5C4D75EA0A1CD2ADE09184EE2B159E23033CAE_7,0B5C4D75EA0A1CD2ADE09184EE2B159E23033CAE,"Catalogs - - - -* Platform assets catalog - - - -" -0B5C4D75EA0A1CD2ADE09184EE2B159E23033CAE_8,0B5C4D75EA0A1CD2ADE09184EE2B159E23033CAE," Snowflake setup - -[General Configuration ](https://docs.snowflake.com/en/user-guide/gen-conn-config.html) - -" -0B5C4D75EA0A1CD2ADE09184EE2B159E23033CAE_9,0B5C4D75EA0A1CD2ADE09184EE2B159E23033CAE," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the [Snowflake SQL Command Reference](https://docs.snowflake.com/en/sql-reference-commands.html) for the correct syntax. - -" -0B5C4D75EA0A1CD2ADE09184EE2B159E23033CAE_10,0B5C4D75EA0A1CD2ADE09184EE2B159E23033CAE," Learn more - -[Snowflake in 20 Minutes](https://docs.snowflake.com/en/user-guide/getting-started-tutorial.html) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -7946DCF2F69A7420490A7B5CA677C2273DE5764B_0,7946DCF2F69A7420490A7B5CA677C2273DE5764B," Microsoft SQL Server connection - -You can create a connection asset for Microsoft SQL Server. - -Microsoft SQL Server is a relational database management system. - -" -7946DCF2F69A7420490A7B5CA677C2273DE5764B_1,7946DCF2F69A7420490A7B5CA677C2273DE5764B," Supported versions - - - -* Microsoft SQL Server 2000+ -* Microsoft SQL Server 2000 Desktop Engine (MSDE 2000) -* Microsoft SQL Server 7.0 - - - -" -7946DCF2F69A7420490A7B5CA677C2273DE5764B_2,7946DCF2F69A7420490A7B5CA677C2273DE5764B," Create a connection to Microsoft SQL Server - -To create the connection asset, you need the following connection details: - - - -* Database name -* Hostname or IP address -* Either the Port number or the Instance name. If the server is configured for dynamic ports, use the Instance name. -* Username and password -* Select Use Active Directory if the Microsoft SQL Server has been set up in a domain that uses NTLM (New Technology LAN Manager) authentication. Then enter the name of the domain that is associated with the username and password -* SSL certificate (if required by the database server) - - - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -7946DCF2F69A7420490A7B5CA677C2273DE5764B_3,7946DCF2F69A7420490A7B5CA677C2273DE5764B," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. - -" -7946DCF2F69A7420490A7B5CA677C2273DE5764B_4,7946DCF2F69A7420490A7B5CA677C2273DE5764B," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -7946DCF2F69A7420490A7B5CA677C2273DE5764B_5,7946DCF2F69A7420490A7B5CA677C2273DE5764B," Where you can use this connection - -You can use Microsoft SQL Server connections in the following workspaces and tools: - -" -7946DCF2F69A7420490A7B5CA677C2273DE5764B_6,7946DCF2F69A7420490A7B5CA677C2273DE5764B,"Projects - - - -* Data Refinery -* Decision Optimization -* Notebooks. Click Read data on the Code snippets pane to get the connection credentials and load the data into a data structure. See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.htmlconns). -* SPSS Modeler -* Synthetic Data Generator - - - -" -7946DCF2F69A7420490A7B5CA677C2273DE5764B_7,7946DCF2F69A7420490A7B5CA677C2273DE5764B,"Catalogs - - - -* Platform assets catalog - - - -" -7946DCF2F69A7420490A7B5CA677C2273DE5764B_8,7946DCF2F69A7420490A7B5CA677C2273DE5764B," Microsoft SQL Server setup - -[Microsoft SQL Server installation](https://docs.microsoft.com/en-us/sql/database-engine/install-windows/install-sql-server?view=sql-server-ver15) - -" -7946DCF2F69A7420490A7B5CA677C2273DE5764B_9,7946DCF2F69A7420490A7B5CA677C2273DE5764B," Restriction - -Except for NTLM authentication, Windows Authentication is not supported. - -" -7946DCF2F69A7420490A7B5CA677C2273DE5764B_10,7946DCF2F69A7420490A7B5CA677C2273DE5764B," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the [Transact-SQL Reference](https://docs.microsoft.com/en-us/sql/t-sql/language-reference?view=sql-server-ver15) for the correct syntax. - -" -7946DCF2F69A7420490A7B5CA677C2273DE5764B_11,7946DCF2F69A7420490A7B5CA677C2273DE5764B," Learn more - -[Microsoft SQL Server documentation](https://docs.microsoft.com/en-us/sql/sql-server/?view=sql-server-ver15) - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -F64414C7F435B4B5E5A681A5F561C07780037836_0,F64414C7F435B4B5E5A681A5F561C07780037836," IBM Cloud Data Engine connection - -To access your data in IBM Cloud Data Engine, create a connection asset for it. - -IBM Cloud Data Engine is a service on IBM Cloud that you use to build, manage, and consume data lakes and their table assets in IBM Cloud Object Storage (COS). IBM Cloud Data Engine provides functions to load, prepare, and query big data that is stored in various formats. It also includes a metastore with table definitions. IBM Cloud Data Engine was formerly named ""IBM Cloud SQL Query."" - -" -F64414C7F435B4B5E5A681A5F561C07780037836_1,F64414C7F435B4B5E5A681A5F561C07780037836," Prerequisites - -" -F64414C7F435B4B5E5A681A5F561C07780037836_2,F64414C7F435B4B5E5A681A5F561C07780037836," Create a connection to IBM Cloud Data Engine - -To create the connection asset, you need these connection details: - - - -* The Cloud Resource Name (CRN) of the IBM Cloud Data Engine instance. Go to the IBM Cloud Data Engine service instance in your resources list in your IBM Cloud dashboard and copy the value of the CRN from the deployment details. -* Target Cloud Object Storage: A default location where IBM Cloud Data Engine stores query results. You can specify any Cloud Object Storage bucket that you have access to. You can also select the default Cloud Object Storage bucket that is created when you open the IBM Cloud Data Engine web console for the first time from IBM Cloud dashboard. See the Target location field in the IBM Cloud Data Engine web console. -* IBM Cloud API key: An API key for a user or service ID that has access to your IBM Cloud Data Engine and Cloud Object Storage services (for both the Cloud Object Storage data that you want to query and the default target Cloud Object Storage location). - - - -You can create a new API key for your own user: - - - -1. In the IBM Cloud console, go to Manage > Access (IAM). -2. In the left navigation, select API keys. -3. Select Create an IBM Cloud API Key. - - - -" -F64414C7F435B4B5E5A681A5F561C07780037836_3,F64414C7F435B4B5E5A681A5F561C07780037836," Credentials - -IBM Cloud Data Engine uses the SSO credentials that are specified as a single API key, which authenticates a user or service ID. -The API key must have the following properties: - - - -* Manage permission for the IBM Cloud Data Engine instance -* Read access to all Cloud Object Storage locations that you want to read from -* Write access to the default Cloud Object Storage target location -* Write access to the IBM Cloud Data Engine instance - - - -" -F64414C7F435B4B5E5A681A5F561C07780037836_4,F64414C7F435B4B5E5A681A5F561C07780037836," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -F64414C7F435B4B5E5A681A5F561C07780037836_5,F64414C7F435B4B5E5A681A5F561C07780037836," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -F64414C7F435B4B5E5A681A5F561C07780037836_6,F64414C7F435B4B5E5A681A5F561C07780037836," Where you can use this connection - -You can use IBM Cloud Data Engine connections in the following workspaces and tools: - -" -F64414C7F435B4B5E5A681A5F561C07780037836_7,F64414C7F435B4B5E5A681A5F561C07780037836,"Projects - - - -* Data Refinery -* Notebooks. See the Notebook [tutorial](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e82c765fd1165439caccfc4ce8579a25) for using the IBM Cloud Data Engine (SQL Query) API to run SQL statements. -* SPSS Modeler -* Synthetic Data Generator - - - -" -F64414C7F435B4B5E5A681A5F561C07780037836_8,F64414C7F435B4B5E5A681A5F561C07780037836,"Catalogs - - - -* Platform assets catalog - - - -" -F64414C7F435B4B5E5A681A5F561C07780037836_9,F64414C7F435B4B5E5A681A5F561C07780037836," Restrictions - -You can only use this connection for source data. You cannot write to data or export data with this connection. - -" -F64414C7F435B4B5E5A681A5F561C07780037836_10,F64414C7F435B4B5E5A681A5F561C07780037836," IBM Cloud Data Engine setup - -To set up IBM Cloud Data Engine on IBM Cloud Object Storage, see [Getting started with IBM Cloud Data Engine](https://cloud.ibm.com/docs/sql-query/sql-query.htmloverview?cm_sp=Cloud-Product-_-OnPageNavLink-IBMCloudPlatform_IBMCloudObjectStorage-_-COSsql_LearnMore). - -" -F64414C7F435B4B5E5A681A5F561C07780037836_11,F64414C7F435B4B5E5A681A5F561C07780037836," Supported encryption - -By default, all objects that are stored in IBM Cloud Object Storage are encrypted by using randomly generated keys and an all-or-nothing-transform (AONT). For details, see [Encrypting your data](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-encryption). Additionally, you can use managed keys to encrypt the SQL query texts and error messages that are stored in the job information. See [Encrypting SQL queries with Key Protect](https://cloud.ibm.com/docs/sql-query?topic=sql-query-keyprotect). - -" -F64414C7F435B4B5E5A681A5F561C07780037836_12,F64414C7F435B4B5E5A681A5F561C07780037836," Running SQL statements - -[Video to learn how you can get started to run a basic query](https://cloud.ibm.com/docs/sql-query?topic=sql-query-overviewrunning) - -" -F64414C7F435B4B5E5A681A5F561C07780037836_13,F64414C7F435B4B5E5A681A5F561C07780037836," Learn more - - - -* [IBM Cloud Data Engine](https://www.ibm.com/cloud/sql-query) -* [Connecting to a Cloud Data Lake with IBM Cloud Pak for Data](https://www.ibm.com/cloud/blog/connecting-to-a-cloud-data-lake-with-ibm-cloud-pak-for-data) - - - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -4B8EBFDF4EA9E571D53720FE09A2CE610AECBCB9_0,4B8EBFDF4EA9E571D53720FE09A2CE610AECBCB9," Tableau connection - -To access your data in Tableau, you must create a connection asset for it. - -Tableau is an interactive data visualization platform. - -" -4B8EBFDF4EA9E571D53720FE09A2CE610AECBCB9_1,4B8EBFDF4EA9E571D53720FE09A2CE610AECBCB9," Supported products - -Tableau Server 2020.3.3 and Tableau Cloud - -" -4B8EBFDF4EA9E571D53720FE09A2CE610AECBCB9_2,4B8EBFDF4EA9E571D53720FE09A2CE610AECBCB9," Create a connection to Tableau - -To create the connection asset, you need the following connection details: - - - -* Hostname or IP address -* Port number -* Site: The name of the Tableau site to use -* For Authentication method, you need either a username and password or an Access token (with Access token name and Access token secret). -* SSL certificate (if required by the database server) - - - -" -4B8EBFDF4EA9E571D53720FE09A2CE610AECBCB9_3,4B8EBFDF4EA9E571D53720FE09A2CE610AECBCB9," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -4B8EBFDF4EA9E571D53720FE09A2CE610AECBCB9_4,4B8EBFDF4EA9E571D53720FE09A2CE610AECBCB9," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -4B8EBFDF4EA9E571D53720FE09A2CE610AECBCB9_5,4B8EBFDF4EA9E571D53720FE09A2CE610AECBCB9," Where you can use this connection - -You can use Tableau connections in the following workspaces and tools: Projects - - - -* SPSS Modeler -* Synthetic Data Generator - - - -" -4B8EBFDF4EA9E571D53720FE09A2CE610AECBCB9_6,4B8EBFDF4EA9E571D53720FE09A2CE610AECBCB9,"Catalogs - - - -* Platform assets catalog - - - -" -4B8EBFDF4EA9E571D53720FE09A2CE610AECBCB9_7,4B8EBFDF4EA9E571D53720FE09A2CE610AECBCB9," Tableau setup - - - -* [Get Started with Tableau Server on Linux](https://help.tableau.com/current/server-linux/en-us/get_started_server.htm) -* [Get Started with Tableau Server on Windows](https://help.tableau.com/current/server/en-us/get_started_server.htm) -* [Get Started with Tableau Cloud](https://help.tableau.com/current/online/en-us/to_get_started.htm) - - - -" -4B8EBFDF4EA9E571D53720FE09A2CE610AECBCB9_8,4B8EBFDF4EA9E571D53720FE09A2CE610AECBCB9," Restriction - -You can use this connection only for source data. You cannot write to data or export data with this connection. - -" -4B8EBFDF4EA9E571D53720FE09A2CE610AECBCB9_9,4B8EBFDF4EA9E571D53720FE09A2CE610AECBCB9," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the [Run Initial SQL](https://help.tableau.com/current/online/en-us/connect_basic_initialsql.htm) for the correct syntax. - -" -4B8EBFDF4EA9E571D53720FE09A2CE610AECBCB9_10,4B8EBFDF4EA9E571D53720FE09A2CE610AECBCB9," Learn more - - - -* [Tableau](https://www.tableau.com/) -* [SSL for Tableau Server on Linux](https://help.tableau.com/current/server-linux/en-us/ssl.htm) -* [SSL for Tableau Server on Windows](https://help.tableau.com/current/server/en-us/ssl.htm) -* [Security in Tableau Cloud](https://help.tableau.com/current/online/en-us/to_security.htm) - - - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -71A4244C07321F32F283E49CFD6D6AFA19639744_0,71A4244C07321F32F283E49CFD6D6AFA19639744," Teradata connection - -To access your data in Teradata, you must create a connection asset for it. - -Teradata provides database and analytics-related services and products. - -" -71A4244C07321F32F283E49CFD6D6AFA19639744_1,71A4244C07321F32F283E49CFD6D6AFA19639744," Supported versions - -Teradata databases 15.10, 16.10, 17.00, 17.10, and 17.20 - -" -71A4244C07321F32F283E49CFD6D6AFA19639744_2,71A4244C07321F32F283E49CFD6D6AFA19639744," Create a connection to Teradata - -To create the connection asset, you need the following connection details: - - - -* Database name -* Hostname or IP address -* Port number -* Client character set: IMPORTANT: Do not enter a value unless you are instructed by IBM support. The character set value overrides the Teradata JDBC drivers normal mapping of the Teradata session character sets. Data corruption can occur if you specify the wrong character set. If no value is specified, UTF16 is used. -* Authentication method: Select the security mechanism to use to authenticate the user: - - - -* TD2 (Teradata Method 2): Use the Teradata security mechanism. -* LDAP: Use an LDAP security mechanism for external authentication. - - - -* Username and password -* SSL certificate (if required by the database server) - - - -For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). - -" -71A4244C07321F32F283E49CFD6D6AFA19639744_3,71A4244C07321F32F283E49CFD6D6AFA19639744," Choose the method for creating a connection based on where you are in the platform - -In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). - -In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). - -In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). - -" -71A4244C07321F32F283E49CFD6D6AFA19639744_4,71A4244C07321F32F283E49CFD6D6AFA19639744," Next step: Add data assets from the connection - - - -* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -" -71A4244C07321F32F283E49CFD6D6AFA19639744_5,71A4244C07321F32F283E49CFD6D6AFA19639744," Where you can use this connection - -You can use Teradata connections in the following workspaces and tools: - -" -71A4244C07321F32F283E49CFD6D6AFA19639744_6,71A4244C07321F32F283E49CFD6D6AFA19639744,"Projects - - - -* Decision Optimization -* SPSS Modeler -* Synthetic Data Generator - - - -" -71A4244C07321F32F283E49CFD6D6AFA19639744_7,71A4244C07321F32F283E49CFD6D6AFA19639744,"Catalogs - - - -* Platform assets catalog - - - -" -71A4244C07321F32F283E49CFD6D6AFA19639744_8,71A4244C07321F32F283E49CFD6D6AFA19639744," Running SQL statements - -To ensure that your SQL statements run correctly, refer to the [Teradata SQL documentation](https://docs.teradata.com/reader/eWpPpcMoLGQcZEoyt5AjEg/9iudpbZXGZ_rAb7c6PL54g) for the correct syntax. - -" -71A4244C07321F32F283E49CFD6D6AFA19639744_9,71A4244C07321F32F283E49CFD6D6AFA19639744," Learn more - - - -* [Teradata documentation](https://docs.teradata.com/) -* [Teradata Community](https://support.teradata.com/community) - - - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) - -Teradata JDBC Driver 17.00.00.03 Copyright (C) 2023 by Teradata. All rights reserved. IBM provides embedded usage of the Teradata JDBC Driver under license from Teradata solely for use as part of the IBM Watson service offering. -" -53EE442D78ABE20AAA100DDA3FF139E566842C2E_0,53EE442D78ABE20AAA100DDA3FF139E566842C2E," Connectors - -You can add connections to a broad array of data sources in projects. Source connections can be used to read data; target connections can be used to load (save) data. When you create a target connection, be sure to use credentials that have Write permission or you won't be able to save data to the target. - -From a project, you must [create a connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) to a data source before you can read data from it or load data to it. - - - -* [IBM services](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html?context=cdpaas&locale=enibm) -* [Third-party services](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html?context=cdpaas&locale=enthird) - - - - - -* [Supported connectors by tool](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html?context=cdpaas&locale=enst) - - - -" -53EE442D78ABE20AAA100DDA3FF139E566842C2E_1,53EE442D78ABE20AAA100DDA3FF139E566842C2E," IBM services - - - -* [IBM Cloud Data Engine](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sqlquery.html). Supports source connections only. -* [IBM Cloud Databases for DataStax](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-datastax.html) -* [IBM Cloud Databases for MongoDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mongodb.html). Supports source connections only. -* [IBM Cloud Databases for MySQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-compose-mysql.html) -* [IBM Cloud Databases for PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-dbase-postgresql.html) -* [IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) -* [IBM Cloud Object Storage (infrastructure)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html) -* [IBM Cloudant](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloudant.html) -* [IBM Cognos Analytics](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cognos.html). Supports source connections only. -* [IBM Data Virtualization Manager for z/OS](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-datavirt-z.html) -" -53EE442D78ABE20AAA100DDA3FF139E566842C2E_2,53EE442D78ABE20AAA100DDA3FF139E566842C2E,"* [IBM Db2](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2.html) -* [IBM Db2 Big SQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-bigsql.html) -* [IBM Db2 for i](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2i.html) -* [IBM Db2 for z/OS](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2zos.html) -* [IBM Db2 on Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-cloud.html) -* [IBM Db2 Warehouse](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-wh.html) -* [IBM Informix](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-informix.html) -* [IBM Netezza Performance Server](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-puredata.html) -* [IBM Planning Analytics](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-plananalytics.html) -* [IBM Watson Query](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-data-virtual.html). Supports source connections only. - - - -" -53EE442D78ABE20AAA100DDA3FF139E566842C2E_3,53EE442D78ABE20AAA100DDA3FF139E566842C2E," Third-party services - - - -* [Amazon RDS for MySQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-mysql.html) -* [Amazon RDS for Oracle](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-oracle.html) -* [Amazon RDS for PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-postresql.html) -* [Amazon Redshift](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-redshift.html) -* [Amazon S3](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-amazon-s3.html) -* [Apache Cassandra](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cassandra.html) -* [Apache Derby](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-derby.html) -* [Apache HDFS](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-hdfs.html) -* [Apache Hive](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-hive.html). Supports source connections only. -* [Box](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-box.html) -* [Cloudera Impala](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloudera.html). Supports source connections only. -" -53EE442D78ABE20AAA100DDA3FF139E566842C2E_4,53EE442D78ABE20AAA100DDA3FF139E566842C2E,"* [Dremio](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-dremio.html). Supports source connections only. -* [Dropbox](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-dropbox.html) -* [Elasticsearch](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-elastic.html) -* [FTP](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-ftp.html) -* [Generic S3](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-generics3.html) -* [Google BigQuery](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-bigquery.html) -* [Google Cloud Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloud-storage.html) -* [Greenplum](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-greenplum.html) -* [HTTP](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-http.html). Supports source connections only. -* [Looker](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-looker.html). Supports source connections only. -* [MariaDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mariadb.html) -* [Microsoft Azure Blob Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azureblob.html) -" -53EE442D78ABE20AAA100DDA3FF139E566842C2E_5,53EE442D78ABE20AAA100DDA3FF139E566842C2E,"* [Microsoft Azure Cosmos DB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cosmosdb.html) -* [Microsoft Azure Data Lake Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azuredls.html) -* [Microsoft Azure File Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azurefs.html) -* [Microsoft Azure SQL Database](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azure-sql.html) -* [Microsoft SQL Server](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sql-server.html) -* [MongoDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mongo.html). Supports source connections only. -* [MySQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mysql.html) -* [OData](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-odata.html) -* [Oracle](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-oracle.html) -* [PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-postgresql.html) -* [Presto](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-presto.html). Supports source connections only. -" -53EE442D78ABE20AAA100DDA3FF139E566842C2E_6,53EE442D78ABE20AAA100DDA3FF139E566842C2E,"* [Salesforce.com](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-salesforce.html). Supports source connections only. -* [SAP ASE](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sap-ase.html) -* [SAP IQ](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sap-iq.html). Supports source connections only. -* [SAP OData](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sapodata.html) -* [SingleStoreDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-singlestore.html) -* [Snowflake](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-snowflake.html) -* [Tableau](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-tableau.html). Supports source connections only. -* [Teradata](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-teradata.html) -Teradata JDBC Driver 17.00.00.03 Copyright (C) 2023 by Teradata. All rights reserved. IBM provides embedded usage of the Teradata JDBC Driver under license from Teradata solely for use as part of the IBM Watson service offering.. - - - -" -53EE442D78ABE20AAA100DDA3FF139E566842C2E_7,53EE442D78ABE20AAA100DDA3FF139E566842C2E," Supported connectors by tool - -The following tools support connections: - - - -* [AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) -* [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refinery-datasources.html) -* [Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DOconnections.html) -* [Notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-load-support.html) -* [SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-connections.html) -* [Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/import_data_sd.html) - - - -" -53EE442D78ABE20AAA100DDA3FF139E566842C2E_8,53EE442D78ABE20AAA100DDA3FF139E566842C2E," Learn more - - - -* [Asset previews](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html) -* [Profiles of assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html) -* [Troubleshooting connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-conn.html) - - - -Parent topic: [Preparing data](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/get-data.html) -" -2200315EA9DA921EDFF8A3322417BB211F15B4EB_0,2200315EA9DA921EDFF8A3322417BB211F15B4EB," Adding data from a connection to a project - -A connected data asset is a pointer to data that is accessed through a connection to an external data source. You create a connected data asset by specifying a connection, any intermediate structures or paths, and a relational table or view, a set of partitioned data files, or a file. When you access a connected data asset, the data is dynamically retrieved from the data source. - -You can also add a connected folder asset that is accessed through a connection in the same way. See [Add a connected folder asset to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/folder-asset.html). - -Partitioned data assets have previews and profiles like relational tables. However, you cannot yet shape and cleanse partitioned data assets with the Data Refinery tool. - -To add a data asset from a connection to a project: - - - -1. From the project page, click the Assets tab, and then click Import assets > Connected data. -2. Select an existing connection asset as the source of the data. If you don't have any connection assets, cancel and go to New asset > Connect to a data source, and [create a connection asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). -3. Select the data you want. You can select multiple connected data assets from the same connection. Click Import. For partitioned data, select the folder that contains the files. If the files are recognized as partitioned data, you see the message This folder contains a partitioned data set. -4. Type a name and description. -5. Click Create. The asset appears on the project Assets page. - - - -When you click on the asset name, you can see this information about connected assets: - - - -* The asset name and description -* The tags for the asset -* The name of the person who created the asset -* The size of the data -* The date when the asset was added to the project -* The date when the asset was last modified -" -2200315EA9DA921EDFF8A3322417BB211F15B4EB_1,2200315EA9DA921EDFF8A3322417BB211F15B4EB,"* A [preview](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html) of relational data -* A [profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html) of relational data - - - -Watch this video to see how to create a connection and add connected data to a project. - -Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. - -This video provides a visual method to learn the concepts and tasks in this documentation. - - - -* Transcript - -Synchronize transcript with video - - - - Time Transcript - - 00:00 This video shows you how to set up a connection to a data source and add connected data to a Watson Studio project. - 00:08 If you have data stored in a data source, you can set up a connection to that data source from any project. - 00:16 From here, you can add different elements to the project. - 00:20 In this case, you want to add a connection. - 00:24 You can create a new connection to an IBM service, such as IBM Db2 and Cloud Object Storage, or to a service from third parties, such as Amazon, Microsoft or Apache. - 00:39 And you can filter the list based on compatible services. - 00:45 You can also add a connection that was created at the platform level, which can be used across projects and catalogs. - 00:54 Or you can create a connection to one of your provisioned IBM Cloud services. - 00:59 In this case, select the provisioned IBM Cloud service for Db2 Warehouse on Cloud. - 01:08 If the credentials are not prepopulated, you can get the credentials for the instance from the IBM Cloud service launch page. - 01:17 First, test the connection and then create the connection. - 01:25 The new connection now displays in the list of data assets. - 01:30 Next, add connected data assets to this project. - 01:37 Select the source - in this case, it's the Db2 Warehouse on Cloud connection just created. - 01:43 Then select the schema and table. - 01:50 You can see that this will add a reference to the data within this connection and include it in the target project. - 01:58 Provide a name and a description and click ""Create"". -" -2200315EA9DA921EDFF8A3322417BB211F15B4EB_2,2200315EA9DA921EDFF8A3322417BB211F15B4EB," 02:06 The data now displays in the list of data assets. - 02:09 Open the data set to get a preview; and from here you can move directly into refining the data. - 02:17 Find more videos in the Cloud Pak for Data as a Service documentation. - - - - - -" -2200315EA9DA921EDFF8A3322417BB211F15B4EB_3,2200315EA9DA921EDFF8A3322417BB211F15B4EB," Next steps - - - -* [Refine the data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) -* [Analyze the data or build models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html) - - - -" -2200315EA9DA921EDFF8A3322417BB211F15B4EB_4,2200315EA9DA921EDFF8A3322417BB211F15B4EB," Learn more - - - -* [Connected folder assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/folder-asset.html) -* [Connection assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) - - - -" -2200315EA9DA921EDFF8A3322417BB211F15B4EB_5,2200315EA9DA921EDFF8A3322417BB211F15B4EB,"Parent topic: - -[Adding data to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html) -" -D9B02A6929162AF5F13E95C700CE0E548F6A9EE3_0,D9B02A6929162AF5F13E95C700CE0E548F6A9EE3," Controlling access to Cloud Object Storage buckets - -A bucket is a logical abstraction that provides a container for data. Buckets in Cloud Object Storage are created in IBM Cloud. Within a Cloud Object Storage instance, you can use policies to restrict users' access to buckets. - -Here's how it works: - -![A Cloud Object Storage instance with two buckets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/COSInstanceAndBuckets.svg) - -In this illustration, two credentials are associated with a Cloud Object Storage instance. Each of the credentials references an IAM service ID in which policies are defined to control which bucket that service ID can access. By using a specific credential when you add a Cloud Object Storage connection to a project, only the buckets accessible to the service ID associated with that credential are visible. - -To create connections that restrict users' access to buckets, follow these steps. - -First, in IBM Cloud: - - - -1. [Create a Cloud Object Storage instance and several buckets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/cos_buckets.html?context=cdpaas&locale=encreatebucket) -2. [Create a service credential and Service ID for each combination of buckets that you want users to be able to access](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/cos_buckets.html?context=cdpaas&locale=encredentials) -3. [Verify that the service IDs were created](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/cos_buckets.html?context=cdpaas&locale=enverify) -4. [Edit the policies of each service ID to provide access to the appropriate buckets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/cos_buckets.html?context=cdpaas&locale=enpolicy) -" -D9B02A6929162AF5F13E95C700CE0E548F6A9EE3_1,D9B02A6929162AF5F13E95C700CE0E548F6A9EE3,"5. [Copy values from each of the service credentials that you created](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/cos_buckets.html?context=cdpaas&locale=encopy) -6. [Copy the endpoint](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/cos_buckets.html?context=cdpaas&locale=enendpoint) - -Then, in your project: -7. [Add Cloud Object Storage connections that use the service credentials that you created](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/cos_buckets.html?context=cdpaas&locale=enadd) -8. [Test users' access to buckets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/cos_buckets.html?context=cdpaas&locale=entest) - - - -" -D9B02A6929162AF5F13E95C700CE0E548F6A9EE3_2,D9B02A6929162AF5F13E95C700CE0E548F6A9EE3," Step 1: Create a Cloud Object Storage instance and several buckets - - - -1. From the [IBM Cloud catalog](https://cloud.ibm.com/catalogservices), search for Object Storage, then create a Cloud Object Storage instance. -2. Select Buckets in the navigation pane. -3. Create as many buckets as you need. - -For example, create three buckets: dept1-bucket, dept2-bucket, and dept3-bucket. - -![Buckets page](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/BucketsAndObjectsPage.png) - - - -" -D9B02A6929162AF5F13E95C700CE0E548F6A9EE3_3,D9B02A6929162AF5F13E95C700CE0E548F6A9EE3," Step 2: Create a service credential and Service ID for each combination of buckets that you want users to be able to access - - - -1. Select Service credentials in the navigation pane. -2. Click New Credential. -3. In the Add new credential dialog, provide a name for the credential and select the appropriate access role. -4. Within the Select Service ID field, click Create New Service ID. -5. Enter a name for the new service ID. We recommend using the same or a similar name to that of the credential for easy identification. -6. Click Add. -7. Repeat steps 2 to 6 for each credential that you want to create. - -For example, create three credentials: cos-all-access, dept1-dept2-buckets-only, and dept2-dept3-buckets-only. - -![Service credentials page](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/ServiceCredentialsPage.png) - - - -" -D9B02A6929162AF5F13E95C700CE0E548F6A9EE3_4,D9B02A6929162AF5F13E95C700CE0E548F6A9EE3," Step 3: Verify that the service IDs were created - - - -1. In the IBM Cloud page header, click Manage > Access (IAM). -2. Select Service IDs in the navigation pane. -3. Confirm that the service IDs you created in steps 2d and 2e are visible. - -![Service IDs page](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/ServiceIDsPage.png) - - - -" -D9B02A6929162AF5F13E95C700CE0E548F6A9EE3_5,D9B02A6929162AF5F13E95C700CE0E548F6A9EE3," Step 4: Edit the policies of each service ID to provide access to the appropriate buckets - - - -1. Open each service ID in turn. -2. On the Access policies tab, select Edit from the Actions menu to view the policy. -3. If necessary, edit the policy to provide access to the appropriate buckets. -4. If needed, create one or more new policies. - - - -1. Remove the existing, default policy which provides access to all of the buckets in the Cloud Object Storage instance. -2. Click Assign access. -3. For Resource type, specify ""bucket"". -4. For Resource ID, specify a bucket name. -5. In the Select roles section, select Viewer from the ""Assign platform access roles"" list and select Writer from the ""Assign service access roles"" list. - - - - - -" -D9B02A6929162AF5F13E95C700CE0E548F6A9EE3_6,D9B02A6929162AF5F13E95C700CE0E548F6A9EE3," Example 1 - -By default, the policy for the cos-all-access service ID provides Writer access to the Cloud Object Storage instance. - -![Access policies tab for the cos-all-access service ID](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/AccessPoliciesPageForcosallaccess.png) - -Because you want this service ID and the corresponding credential to provide users with access to all of the buckets, no edits are required. - -![Edit policy page for the cos-all-access service ID](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/cosallaccessServiceIDPolicy.png) - -" -D9B02A6929162AF5F13E95C700CE0E548F6A9EE3_7,D9B02A6929162AF5F13E95C700CE0E548F6A9EE3," Example 2 - -By default, the policy for the ""dept1-dept2-buckets-only"" service ID provides Writer access to the Cloud Object Storage instance. Because you want this service ID and the corresponding credential to provide users with access only to the dept1-bucket and dept2-bucket buckets, remove the default policy and create two access policies, one for dept1-bucket and one for dept2-bucket. - -![Access policies tab for the dept1-dept2-buckets-only service ID](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/AccessPoliciesPageFordept1dept2bucketsonly.png) - -![Edit Policy page for the dept1-bucket-only service ID](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/SelectRolesSection_dept1.png) - -![Edit Policy page for the dept2-bucket-only service ID](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/SelectRolesSection_dept2.png) - -" -D9B02A6929162AF5F13E95C700CE0E548F6A9EE3_8,D9B02A6929162AF5F13E95C700CE0E548F6A9EE3," Step 5: Copy values from each of the service credentials that you created - - - -1. Return to your IBM Cloud Dashboard and select Cloud Object Storage from the Storage list. -2. Select Service credentials in the navigation pane. -3. Click the View credentials action for one of the service IDs that you created in step 2. -4. Copy the ""apikey"" value and the ""resource_instance_id"" value to a temporary location, such as a desktop note. - -![cos-all-access credential](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/ViewCredentials_apikey.png) - -![cos-all-access credential](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/ViewCredentials_resourceinstanceid.png) -5. Repeat steps 3 and 4 for each credential. - - - -" -D9B02A6929162AF5F13E95C700CE0E548F6A9EE3_9,D9B02A6929162AF5F13E95C700CE0E548F6A9EE3," Step 6: Copy the Endpoint - - - -1. Select Endpoint in the navigation pane. -2. Copy the URL of the endpoint that you want to connect to. Save the value to a temporary location, such as a desktop note. - - - -" -D9B02A6929162AF5F13E95C700CE0E548F6A9EE3_10,D9B02A6929162AF5F13E95C700CE0E548F6A9EE3," Step 7: Add Cloud Object Storage connections that use the service credentials that you created - - - -1. Return to your project on the Assets tab, and click New asset > Connect to a data source.. -2. On the New connection page, click Cloud Object Storage. -3. Name the new connection and enter the login URL (from the Endpoints page) as well as the ""apikey"" and ""resource_instance_id"" values that you copied in step 5 from one of the service credentials. -4. Repeat steps 3 to 5 for each service credential. - -The connections will be visible in the Data assets section of the project. - - - -" -D9B02A6929162AF5F13E95C700CE0E548F6A9EE3_11,D9B02A6929162AF5F13E95C700CE0E548F6A9EE3," Test users' access to buckets - -Going forward, when you add a data asset from a Cloud Object Storage connection to a project, you'll see only the buckets that the policies allow you to access. To test this: - - - -1. From a project, click New asset > Connected data. Or from a catalog, click Add to project > Connected data. -2. In the Connection source section, click Select source. - -On the Select connection source page, you can see the Cloud Object Storage connections that you created. -3. Select one of the Cloud Object Storage connections to see that only the buckets accessible to the service ID associated with that bucket's credential are visible. - - - -Parent topic:[Adding connections to projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) -" -A7F2612AD7178C8AFA4C8B7C2F210A10DD7EE5CC_0,A7F2612AD7178C8AFA4C8B7C2F210A10DD7EE5CC," Adding connections to projects - -You need to create a connection asset for a data source before you can access or load data to or from it. A connection asset contains the information necessary to establish a connection to a data source. - -Create connections to multiple types of data sources, including IBM Cloud services, other cloud services, on-prem databases, and more. - -See [Connectors](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) for the list of data sources. - -To create a new connection in a project: - - - -1. Go to the project page, and click the Assets tab. -2. Click New asset > Connect to a data source. -3. Choose the kind of connection: - - - -* Select New connection (the default) to create a new connection in the project. -* Select Platform connections to select a connection that has already been created at the platform level. -* Select Deployed services to connect to a data source from a cloud service this is integrated with IBM watsonx. - - - -4. Choose a data source. -5. Enter the connection information that is required for the data source. Typically, you need to provide information like the hostname, port number, username, and password. -6. If prompted, specify whether you want to use personal or shared credentials. You cannot change this option after you create the connection. The credentials type for the connection, either Personal or Shared, is set by the account owner on the [Account page](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html). The default setting is Shared. - - - -* Personal: With personal credentials, each user must specify their own credentials to access the connection. Each user's credentials are saved but are not shared with any other users. Use personal credentials instead of shared credentials to protect credentials. For example, if you use personal credentials and another user changes the connection properties (such as the hostname or port number), the credentials are invalidated to prevent malicious redirection. -" -A7F2612AD7178C8AFA4C8B7C2F210A10DD7EE5CC_1,A7F2612AD7178C8AFA4C8B7C2F210A10DD7EE5CC,"* Shared: With shared credentials, all users access the connection with the credentials that you provide. Shared credentials can potentially be retrieved by a user who has access to the connection asset. Because the credentials are shared, it is difficult to audit access to the connection, to identify the source of data loss, or identify the source of a security breach. - - - - - - - -1. For Private connectivity: To connect to a database that is not externalized to the internet (for example, behind a firewall), see [Securing connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). -2. If available, click Test connection. -3. Click Create. The connection appears on the Assets page. You can edit the connection by clicking the connection name on the Assets page. -4. Add tables, files, or other types of data from the connection by [creating a connected data asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). - - - -Connections with personal credentials are marked with a key icon (![the key symbol for private connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/privatekey.png)) on the Assets page and are locked. If you are authorized to access the connection, you can unlock it by entering your credentials the first time you select it. This is a one-time step that permanently unlocks the connection for you. After you unlock the connection, the key icon is no longer displayed. Connections with personal credentials are already unlocked if you created the connections yourself. - -Watch this video to see how to create a connection and add connected data to a project. - -Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. - -This video provides a visual method to learn the concepts and tasks in this documentation. - - - -* Transcript - -Synchronize transcript with video - - - - Time Transcript - - 00:00 This video shows you how to set up a connection to a data source and add connected data to a Watson Studio project. -" -A7F2612AD7178C8AFA4C8B7C2F210A10DD7EE5CC_2,A7F2612AD7178C8AFA4C8B7C2F210A10DD7EE5CC," 00:08 If you have data stored in a data source, you can set up a connection to that data source from any project. - 00:16 From here, you can add different elements to the project. - 00:20 In this case, you want to add a connection. - 00:24 You can create a new connection to an IBM service, such as IBM Db2 and Cloud Object Storage, or to a service from third parties, such as Amazon, Microsoft or Apache. - 00:39 And you can filter the list based on compatible services. - 00:45 You can also add a connection that was created at the platform level, which can be used across projects and catalogs. - 00:54 Or you can create a connection to one of your provisioned IBM Cloud services. - 00:59 In this case, select the provisioned IBM Cloud service for Db2 Warehouse on Cloud. - 01:08 If the credentials are not prepopulated, you can get the credentials for the instance from the IBM Cloud service launch page. - 01:17 First, test the connection and then create the connection. - 01:25 The new connection now displays in the list of data assets. - 01:30 Next, add connected data assets to this project. - 01:37 Select the source - in this case, it's the Db2 Warehouse on Cloud connection just created. - 01:43 Then select the schema and table. - 01:50 You can see that this will add a reference to the data within this connection and include it in the target project. - 01:58 Provide a name and a description and click ""Create"". - 02:06 The data now displays in the list of data assets. - 02:09 Open the data set to get a preview; and from here you can move directly into refining the data. - 02:17 Find more videos in the Cloud Pak for Data as a Service documentation. - - - - - -" -A7F2612AD7178C8AFA4C8B7C2F210A10DD7EE5CC_3,A7F2612AD7178C8AFA4C8B7C2F210A10DD7EE5CC," Next step - -Go to [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/asset_browser.html), and select the connection. Drill down to a schema, and table or view. - -" -A7F2612AD7178C8AFA4C8B7C2F210A10DD7EE5CC_4,A7F2612AD7178C8AFA4C8B7C2F210A10DD7EE5CC," Learn more - - - -* [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html) -* [Integrations with other cloud platforms](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-cloud.html) -* [Controlling access to Cloud Object Storage buckets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/cos_buckets.html) - - - -Parent topic: [Adding data to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html) -" -1093C3D02F71F4FBA221375302D20BC761E70AEF_0,1093C3D02F71F4FBA221375302D20BC761E70AEF," Creating jobs in Data Refinery - -You can create a job to run a Data Refinery flow directly in Data Refinery. - -To create a Data Refinery flow job: - - - -1. In Data Refinery, click the Jobs icon ![the jobs icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/Run-schedule_Blue.png) from the Data Refinery toolbar and select Save and create a job. -2. Define the job details by entering a name and a description (optional). -3. On the Configure page, select an environment runtime for the job, and optionally modify the job retention settings. -4. On the Schedule page, you can optionally add a one-time or repeating schedule. - -If you define a start day and time without selecting Repeat, the job will run exactly one time at the specified day and time. If you define a start date and time and you select Repeat, the job will run for the first time at the timestamp indicated in the Repeat section. - -You can't change the time zone; the schedule uses your web browser's time zone setting. If you exclude certain weekdays, the job might not run as you would expect. The reason might be due to a discrepancy between the time zone of the user who creates the schedule, and the time zone of the compute node where the job runs. -5. Optional: Set up notifications for the job. You can select the type of alerts to receive. -6. Review the job settings. Then, create the job and run it immediately, or create the job and run it later. - -The Data Refinery flow job is listed in the Jobs in your project. - - - -" -1093C3D02F71F4FBA221375302D20BC761E70AEF_1,1093C3D02F71F4FBA221375302D20BC761E70AEF," Learn more - - - -* [Compute resource options for Data Refinery in projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html) -* [Viewing job details](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.htmlview-job-details) -* [Refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) - - - -Parent topic: [Jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html) -" -FB9D913B400E9F00E6AA6EFF7A7C8A84F5762DC9_0,FB9D913B400E9F00E6AA6EFF7A7C8A84F5762DC9," Creating jobs in the Notebook editor - -You can create a job to run a notebook directly in the Notebook editor. - -To create a notebook job: - - - -1. In the Notebook editor, click ![the jobs icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/Run-schedule_Blue.png) from the menu bar and select Create a job. -2. Define the job details by entering a name and a description (optional). -3. On the Configure page, select: - - - -* A notebook version. The most recently saved version of the notebook is used by default. If no version of the notebook exists, you must create a version by clicking ![the versions icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/versions.png) from the notebook action bar. -* A runtime. By default, the job uses the same environment template that was selected for the notebook. -* Advanced configuration to add environment variables and select the job run retention settings. - - - -* The environment variables that are passed to the notebook when the job is started and affect the execution of the notebook. - -Each variable declaration must be made for a single variable in the following format VAR_NAME=foo and appear on its own line. - -For example, to determine which data source to access if the same notebook is used in different jobs, you can set the variable DATA_SOURCE to DATA_SOURCE=jdbc:db2//db2.server.com:1521/testdata in the notebook job that trains a model and to DATA_SOURCE=jdbc:db2//db2.server.com:1521/productiondata in the job where the model runs on real data. In another example, the variables BATCH_SIZE, NUM_CLASSES and EPOCHS that are required for a Keras model can be passed to the same notebook with different values in separate jobs. -* Select the job run result output. You can select: - - - -* Log & notebook to store the output files of specific runs, the log file, and the resulting notebook. This is the default that is set for all new jobs. Select: - - - -" -FB9D913B400E9F00E6AA6EFF7A7C8A84F5762DC9_1,FB9D913B400E9F00E6AA6EFF7A7C8A84F5762DC9,"* To compare the results of different job runs, not just by viewing the log file. By keeping the output files of specific job runs, you can compare the results of job runs to fine tune your code. For example, by configuring different environment variables when the job is started, you can change the way the code in the notebook behaves and then compare these differences (including graphics) step by step between runs. - -Note: - - - -* The job run retention value is set to 5 by default to avoid creating too many run output files. This means that the last 5 job run output files will be retained. You need to adjust this value if you want to compare more run output files. -* You cannot use the results of a specific job run to create a URL to enable ""Sharing by URL"". If you want to use a specific job result run as the source of what is shown via ""Share by URL"", you must create a new job and select Log & updated version. - - - -* To view the logs. - - - -* Log only to store the log file only. The resulting notebook is discarded. Select: - - - -* To view the logs. - - - -* Log & updated version to store the log file and update the output cells of the version you used as input to this task. Select: - - - -* To view the logs. -* To share the result of a job run via ""Share by URL"". - - - - - - - -* Retention configuration to set how long to retain finished job runs and job run artifacts like logs or notebook results. You can either select the number of days to retain the job runs or the last number of job runs to keep. The retention value is set to 5 by default (the last 5 job run output files are retained). - -Be mindful when changing the default as too many job run files can quickly use up project storage. - - - -4. On the Schedule page, you can optionally add a one-time or repeating schedule. - -If you define a start day and time without selecting Repeat, the job will run exactly one time at the specified day and time. If you define a start date and time and you select Repeat, the job will run for the first time at the timestamp indicated in the Repeat section. - -" -FB9D913B400E9F00E6AA6EFF7A7C8A84F5762DC9_2,FB9D913B400E9F00E6AA6EFF7A7C8A84F5762DC9,"You can't change the time zone; the schedule uses your web browser's time zone setting. If you exclude certain weekdays, the job might not run as you would expect. The reason might be due to a discrepancy between the time zone of the user who creates the schedule, and the time zone of the compute node where the job runs. - -An API key is generated when you create a scheduled job, and future runs will use this API key. If you didn't create a scheduled job but choose to modify one, an API key is generated for you when you modify the job and future runs will use this API key. -5. Optionally set to see notifications for the job. You can select the type of alerts to receive. -6. Review the job settings. Then create the job and run it immediately, or create the job and run it later. All notebook code cells are run and all output cells are updated. - -The notebook job is listed under Jobs in your project. To view the notebook run output, click the job and then Run result on the Job run details page. - - - -" -FB9D913B400E9F00E6AA6EFF7A7C8A84F5762DC9_3,FB9D913B400E9F00E6AA6EFF7A7C8A84F5762DC9," Learn more - - - -* [Viewing job details](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.htmlview-job-details) -* [Coding and running notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/code-run-notebooks.html) -* [Environments for the Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html) - - - -Parent topic:[Creating and managing jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html) -" -1C863B2624AB2712318442337C917143C19E7DDD_0,1C863B2624AB2712318442337C917143C19E7DDD," Creating jobs for Pipelines - -You can create jobs for Pipelines. - -To create a Pipelines job: - - - -1. Open your Pipelines asset from the project. -2. Click Run pipeline > Create a job. -3. On the Create a job page, you can choose the asset version that you'd like to run. The most recently saved version of the Pipelines is used by default. -4. Give a name and optional description for your job. Click next. -5. Define your IAM API key. The most recently used API key is used by default. If you'd like to use a new API key, click Generate new API key. Click next. -6. You can schedule your job by toggling Schedule off to Schedule to run. You can choose either or both options: - - - -* Start on: Choose a date for your scheduled job to run. The time zone is GMT-0400 (Eastern Daylight Time). If you do not choose a start date, the job will never run automatically and must be started manually. - -* Repeat: You can choose to schedule the repeated frequency (every minute to every month), exclude running the job on certain days, and choose an end date. If you do not choose to repeat the job, it runs one time if a start date is given, or does not run. - - - -7. Review your job settings and click Create. The Pipelines job is listed under Jobs in your project. - - - -" -1C863B2624AB2712318442337C917143C19E7DDD_1,1C863B2624AB2712318442337C917143C19E7DDD," Learn more - - - -* [Viewing jobs across projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/job-views-projs.html) - - - -Parent topic:[Creating and managing jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html) -" -27FCAB0041FEB8B819E329A319B12D2F4167318A_0,27FCAB0041FEB8B819E329A319B12D2F4167318A," Creating SPSS Modeler jobs - -You can create a job to run an SPSS Modeler flow. - -To create an SPSS Modeler job: - - - -1. In SPSS Modeler, click the Create a job icon ![the jobs icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/Run-schedule_Blue.png) from the toolbar and select Create a job. A wizard will appear. Click Next to proceed through each page of the wizard as described here. -2. Define the job details by entering a name and a description (optional). If desired, you can also specify retention settings for the job. Select Job run retention settings to set how long to retain finished job runs and job run artifacts such as logs. You can select one of the following retention methods. Be mindful when changing the default as too many job run files can quickly use up project storage. - - - -* By duration (days). Specify the number of days to retain job runs and job artifacts. The retention value is set to 7 days by default (the last 7 days of job runs retained). -* By amount. Specify the last number of finished job runs and job artifacts to keep. The retention value is set to 200 jobs by default. - - - -3. On the Flow parameters page, you can set values for flow parameters if any exist for the flow. They are, in effect, user-defined variables that are saved and persisted with the flow. Parameters are often used in scripting to control the behavior of the script by providing information about fields and values that don't need to be hard coded in the script. See [Setting properties for flows](https://dataplatform.cloud.ibm.com/docs/content/wsd/flow_properties.html) for more information. - -For example, your flow might contain a parameter called age_param that you choose to set to 40 here, and a parameter called bp_param you might set to HIGH. -4. On the Configuration page, you can choose whether the job will run the entire flow or one or more branches of the flow. -5. On the Schedule page, you can optionally add a one-time or repeating schedule. - -" -27FCAB0041FEB8B819E329A319B12D2F4167318A_1,27FCAB0041FEB8B819E329A319B12D2F4167318A,"If you define a start day and time without selecting Repeat, the job will run exactly one time at the specified day and time. If you define a start date and time and you select Repeat, the job will run for the first time at the timestamp indicated in the Repeat section. - -You can't change the time zone; the schedule uses your web browser's time zone setting. If you exclude certain weekdays, the job might not run as you would expect. The reason might be due to a discrepancy between the time zone of the user who creates the schedule, and the time zone of the compute node where the job runs. -6. Optionally turn on notifications for the job. You can select the type of alerts to receive. -7. Review the job settings. Click Save to create the job. - -The SPSS Modeler job is listed under Jobs in your project. - - - -" -27FCAB0041FEB8B819E329A319B12D2F4167318A_2,27FCAB0041FEB8B819E329A319B12D2F4167318A," Learn more - - - -* [Viewing job details](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.htmlview-job-details) -* [SPSS Modeler documentation](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html) - - - -Parent topic: [Jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html) -" -9370EEEF3D5414148EFC5CC390B4EFBE3020F23D_0,9370EEEF3D5414148EFC5CC390B4EFBE3020F23D," Downloading data assets from a project - -You can download data assets from a project to your local system. - -Important: Take care when you download assets. Collaborators can upload any type of file into a project, including files that have malware or other types of malicious code. - -Required permissions : You must have the Editor or Admin role in the project to download an asset. - -" -9370EEEF3D5414148EFC5CC390B4EFBE3020F23D_1,9370EEEF3D5414148EFC5CC390B4EFBE3020F23D," Download a data asset - -To download a data asset that is in the project's storage, select Download from the ACTION menu next to the asset name. - -For an alternate method of downloading data assets for a project, select Files in the Data side panel. Checkmark the data asset and select Download from the ACTION menu in the side panel. - -" -9370EEEF3D5414148EFC5CC390B4EFBE3020F23D_2,9370EEEF3D5414148EFC5CC390B4EFBE3020F23D," Download a connected data asset - -To download a connected data asset, use Data Refinery to run a job that saves the file as the output of a Data Refinery flow. The output of the Data Refinery flow is a new CSV file in the project’s storage. - - - -1. Click the asset name to open it. -2. Click Prepare data. -3. From the Jobs menu, click Save and create a job. Enter a job name and click Create and Run. -4. Go back to the Assets page. Refresh the page. By default, the downloadable asset is named table-name_shaped.csv. -5. Choose Download from the ACTION menu next to the asset name. - - - -" -9370EEEF3D5414148EFC5CC390B4EFBE3020F23D_3,9370EEEF3D5414148EFC5CC390B4EFBE3020F23D," Learn more - - - -* [Download a data asset from a catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/download.html) -* [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) - - - -Parent topic:[Projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html) -" -EEE6714A8CF20E18EA398651B41E0278071EE42B_0,EEE6714A8CF20E18EA398651B41E0278071EE42B," Exporting a project - -You can share assets in a project with others and copy a project by exporting them as a ZIP file to your desktop. The project readme file is added to the exported ZIP file by default. - -" -EEE6714A8CF20E18EA398651B41E0278071EE42B_1,EEE6714A8CF20E18EA398651B41E0278071EE42B," Requirements and restrictions - -Required role : You need Admin or Editor role in the project to export a project. - -Restrictions : - You cannot export assets larger than 500 MB : - If your project is marked as sensitive, you can't export data assets, connections or connected data from the project. : - Be mindful when selecting assets to always also include the dependencies of those assets, for example the data assets or connections for a data flow, a notebook, connected data, or jobs. There is no check for dependencies. If you don't include the dependencies, subsequent project imports do not work. : - You can only export and share assets across projects created in watsonx.ai. You can't export a project from Cloud Pak for Data as a Service and import it into watsonx.ai, or the other way around. You can however, move projects between Cloud Pak for Data as a Service and watsonx.ai. See [Switching the platform for a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html). : - Exporting a project from one region and importing the assets to a project or space in another region can result in an error creating the assets. The error message An unexpected response was returned when creating asset is a symptom of this restriction. : - Exporting a project is not available for all Watson Studio plans. See [Watson Studio plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html). - -" -EEE6714A8CF20E18EA398651B41E0278071EE42B_2,EEE6714A8CF20E18EA398651B41E0278071EE42B," Exporting a project to desktop - -Exporting a project packs the project assets that you select into a single ZIP file that can be shared like any other file. - -To export project assets to desktop: - - - -1. Open the project you want to export assets from. -2. Check whether the assets that you include in your export, for example notebooks or connections, don't contain credentials or other sensitive information that you don't want to share. You should remove this information before you begin the export. Only private connection credentials are removed. -3. Optional. Add information to the readme on the Overview page of your project about the assets that you include in the export. For example, you can give a brief description of the analytics use case of the added assets and the data analysis methods that are used. -4. Click ![the Export to desktop icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/export-proj.png) from the project toolbar. -5. Select the assets to add. You can filter by asset type or customize the project export settings by selecting preferences (the settings icon to the right of the window title) which are applied each time you export the project. -6. Optional: Change the name of the project export file. -7. Supply a password if you want to export connections that have shared credentials. Note that this password must be provided to decrypt these credentials on project import. -8. Click Export. Do not leave the page while the export is running. - -When you export to desktop, the file is saved to the Downloads folder by default. If a ZIP file with the same name already exists, the existing file isn't overwritten. - -Ensure that your browser settings download the ZIP file to the desktop as a .zip file and not as a folder. Compressing this folder to enable project import leads to an error. Note also that you cannot manually add other assets to an exported project ZIP file on your desktop. - -The status of a project export is tracked on the project's Overview page. - - - -" -EEE6714A8CF20E18EA398651B41E0278071EE42B_3,EEE6714A8CF20E18EA398651B41E0278071EE42B," Learn more - - - -* [Administering a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html) -* [Importing a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/import-project.html) - - - -Parent topic:[Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html) -" -A5C7CF086B303923D48F8AD63CF85A6BCCBBE3F5_0,A5C7CF086B303923D48F8AD63CF85A6BCCBBE3F5," Managing feature groups (beta) - -Create a feature group to preserve a set of columns of a data asset along with associated metadata for use with Machine Learning models. - -Required service : You must have these services. - -- Watson Studio (for projects) - -Required permissions : To view this page, you can have any role in a project. : To edit or update information on this page, you must have the Editor or Admin role in the project. - -Workspaces : You can view the asset feature group in these workspaces: : Projects - -Types of assets : These types of assets can have a feature group: : Tabular: CSV, TSV, Parquet, xls, xslx, avro, text, json files : [Connected data types](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) that are structured and supported in Watson Studio. - -Data size : No limit - -" -A5C7CF086B303923D48F8AD63CF85A6BCCBBE3F5_1,A5C7CF086B303923D48F8AD63CF85A6BCCBBE3F5," Feature groups (beta) - -Create a feature group to preserve a set of columns of a particular data asset along with the metadata used for Machine Learning. For example, if you have a set of features for a credit approval model, you can preserve the features used to train the model, as well as some metadata, including which column is used as the prediction target, and which columns are used for bias detection. Feature groups make it simple to preserve the metadata for the features used to train a machine learning model so other data scientists can use the same features. You can see the feature group tab when you preview a particular asset. - - - -* [Creating a feature group](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html?context=cdpaas&locale=encreate-featuregrp) -* [Editing a feature group](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html?context=cdpaas&locale=enedit-featuregrp) -* [Removing features or a feature group](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html?context=cdpaas&locale=enremove-featuregrp) -* [Using the Python API for feature groups](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html?context=cdpaas&locale=enapi-featuregrp) - - - -" -A5C7CF086B303923D48F8AD63CF85A6BCCBBE3F5_2,A5C7CF086B303923D48F8AD63CF85A6BCCBBE3F5," Creating a feature group in a project - -" -A5C7CF086B303923D48F8AD63CF85A6BCCBBE3F5_3,A5C7CF086B303923D48F8AD63CF85A6BCCBBE3F5," Before you begin - -If you create a [profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html) for the data asset before creating a feature group you can select profile metadata to add values to the feature. - -" -A5C7CF086B303923D48F8AD63CF85A6BCCBBE3F5_4,A5C7CF086B303923D48F8AD63CF85A6BCCBBE3F5," Create a feature group - -You can select particular columns of data assets to form a feature group. - - - -1. In the project Assets tab, click the name of the relevant asset to open the preview and select the Feature group tab. Here you can create a feature group or view and edit an existing one. An asset can have only one feature group. Click New feature group. - -![Create a feature group](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/create-feature-group3.png) -2. Select the columns that you want to be used in the feature group. Select the Name checkbox to include all the columns as features. - -![Select the feature group columns](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/create-feature-group-columns1.png) - - - -" -A5C7CF086B303923D48F8AD63CF85A6BCCBBE3F5_5,A5C7CF086B303923D48F8AD63CF85A6BCCBBE3F5," Editing a feature group - -When you have selected the columns of the data asset to be used in the feature group, you can then view each feature and edit it to specify the role it will have in Machine Learning models. - -![View feature group](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/feature-group-example3.png) - - - -1. Click a feature name and click Edit this feature. A window opens displaying the following tabs: - - - -* Details - provide the following information about the feature. - -![Details](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/feature-group-details.png) - -Select a Role to be assigned to the feature: - - - -* Input: the feature can be used as input for training a Machine Learning model. -* Target: the feature to be used as the prediction target when the data is used to train a Machine Learning model. -* Identifier: the primary key, such as customer ID, used to identify the input data. - -Enter a Description, Recipe (any method or formula used to create values for the feature) and any Tags. - - - -* Value descriptions - -![Value descriptions](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/feature-group-values.png) - -" -A5C7CF086B303923D48F8AD63CF85A6BCCBBE3F5_6,A5C7CF086B303923D48F8AD63CF85A6BCCBBE3F5,"Value descriptions allow you to clarify the meaning of specific values. For example, consider a column ""credit evaluation"" with the values -1, 0 and 1. You can use value descriptions to provide meaning for these values. For example, -1 might mean ""evaluation rejected"". You can enter descriptions for particular values. For numerical values, you can also specify a range. To specify a range of numerical values, enter the following text [n,m] where n is the start and m is the end of the range, surrounded by brackets, and click Add. For example, to describe all age values between 18 and 24 as ""millenials"", enter [18,24] as the value and millenials as the description. If you have a [profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html) defined, the profile values are displayed in the value descriptions list. From here you can select one value or multiple values. -* Fairness information - -![Fairness information](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/feature-group-fairness.png) - -You can define Monitor or Reference groups of values for monitoring bias. The values that are more at risk of biased outcomes can be placed in the Monitor group. These values are then compared to values in the Reference group. To specify a range of numerical values, enter the following text [n,m] where n is the start and m is the end of the range, surrounded by brackets. For example, to monitor all age values between 18 and 35, enter [18,35]. Then select Monitor or Reference and click Add. You can also specify Favorable outcomes. See [Fairness in AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html) for more information about fairness. - - - -2. When you have edited the feature, click Save. You can now see your changes in the Feature Details window. Close this window to return to the feature group. - - - -" -A5C7CF086B303923D48F8AD63CF85A6BCCBBE3F5_7,A5C7CF086B303923D48F8AD63CF85A6BCCBBE3F5," Removing features from a group - -To remove a feature from a group: - - - -1. Preview the asset in the project and select the Feature group tab. -2. In the Features table that is displayed, select the feature (or features) that you want to remove. -3. In the toolbar that appears, select Remove from group. - -![Removing features](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/feature-group-remove3.png) - - - -The feature, or feature group if you selected all the features, is removed. - -" -A5C7CF086B303923D48F8AD63CF85A6BCCBBE3F5_8,A5C7CF086B303923D48F8AD63CF85A6BCCBBE3F5," Searching for a feature group - -You can [search for assets or columns across all projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.htmlfilter). To filter your search results to find assets with a feature group, select Data to see the filter options, and select Feature group. Assets containing a feature group will then be listed in the search results. - -" -A5C7CF086B303923D48F8AD63CF85A6BCCBBE3F5_9,A5C7CF086B303923D48F8AD63CF85A6BCCBBE3F5," Using the Python API to create and use feature groups - -You can also use the [assetframe-lib Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html) in notebooks to create and edit feature groups. This library also allows you use feature metadata like fairness information when creating machine learning models. - -" -A5C7CF086B303923D48F8AD63CF85A6BCCBBE3F5_10,A5C7CF086B303923D48F8AD63CF85A6BCCBBE3F5," Learn more - -For examples on how to create and use feature groups in notebooks: - - - -* [Creating and using feature store data](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e756adfa2855bdfc20f588f9c1986382) sample project in the Samples -" -5F398F2A5F6A2E75B9376B755C3ECF4B7F18B149_0,5F398F2A5F6A2E75B9376B755C3ECF4B7F18B149," Adding a connected folder asset to a project - -You can create a connected folder asset based on a path within an IBM Cloud Object Storage system that is accessed through a connection. You can view the files and subfolders that share the path with the connected folder asset. The files that you can view within the connected folder asset are not themselves data assets. For example, you can create a connected folder asset for a path that contains news feeds that are continuously updated. - -Required permissions : You must have the Admin or Editor role in the project to add a connected folder asset. - -Watch this video to see how to add a connected folder asset in a project, then follow the steps below the video. - -Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -To add a connected folder asset from a connection to a project: - - - -1. If necessary, [create a connection asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). Include an Access Key and a Secret Key to your IBM Cloud Object Storage connection to enable the downloading of files within the connected folder asset. If you're using an existing IBM Cloud Object Storage connection asset that doesn't have an Access Key and Secret Key, edit the connection asset and add them. -2. Click Import assets > Connected data. -3. Select an existing connection asset as the source of the data. -4. Select the folder you want and click Import. -5. Type a name and description. -6. Click Create. The connected folder asset appears on the project Assets page in the Data assets category. - - - -Click the connected folder asset name to view the contents of the connected folder asset. Click the eye (![eye icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/visibility-on.svg)) icon next to a file name to view the contents of the files within the folder that have these formats: - - - -* CSV -* JSON -* Parquet - - - -" -5F398F2A5F6A2E75B9376B755C3ECF4B7F18B149_1,5F398F2A5F6A2E75B9376B755C3ECF4B7F18B149,"You can refine the files within a connected folder asset and then save the result as a data asset. While viewing the connected folder asset, select a file and then click Prepare data. - -You can view the files within the connected folder asset if the IBM Cloud Object Storage connection asset that's associated with the connected folder asset has an Access Key and a Secret Key (also known as HMAC credentials). For more information about HMAC credentials, see [IBM Cloud Object Storage Service credentials](https://console.bluemix.net/docs/services/cloud-object-storage/iam/service-credentials.htmlservice-credentials). - -" -5F398F2A5F6A2E75B9376B755C3ECF4B7F18B149_2,5F398F2A5F6A2E75B9376B755C3ECF4B7F18B149," Next steps - - - -* [Refining a file within the folder](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) - - - -Parent topic:[Adding data to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html) -" -E4ECA1E5E22F94051D1C8E115D9D874658B5697A_0,E4ECA1E5E22F94051D1C8E115D9D874658B5697A," Getting and preparing data in a project - -After you create a project, or join one, the next step is to add data to the project and prepare the data for analysis. - -Required permissions : You must have the Admin or Editor role in a project to add or prepare data. - -You can add data assets from your local system, from a catalog, from the Samples, or from connections to data sources. - -You can add these types of data assets to a project: - - - -* Data assets from files from your local system, including structured data, unstructured data, and images. The files are stored in the project's IBM Cloud Object Storage bucket. -* Connection assets that contain information for connecting to data sources. You can add connections to IBM or third-party data sources. See [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html). -* Connected data assets that specify a table, view, or file that is accessed through a connection to a data source. -* Connected folder assets that specify a path in IBM Cloud Object Storage. - - - -To get started quickly, take a tutorial. See [Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html). - -To [refine data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) by cleansing and shaping it, you can: - - - -* Select the Prepare data tile on your watsonx home page. -* Add the data to the project, then open the data asset and click Prepare data. - - - -To [manage feature groups](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html) for a data asset, open the data asset and go to its Feature group page. - -To create [synthetic data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html), you can: - - - -" -E4ECA1E5E22F94051D1C8E115D9D874658B5697A_1,E4ECA1E5E22F94051D1C8E115D9D874658B5697A,"* Select the Prepare data tile on your watsonx home page. -* Select the Generate synthetic tabular data tile. - - - -" -E4ECA1E5E22F94051D1C8E115D9D874658B5697A_2,E4ECA1E5E22F94051D1C8E115D9D874658B5697A," Learn more - - - -* [Create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) -* [Adding data to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html) -* [Refining data with Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) -* [Adding connections to the Platform assets catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html) -" -A4CEA84825E512A73C509437103B89B6EF363D5B_0,A4CEA84825E512A73C509437103B89B6EF363D5B," Importing a project - -You can create a project that is preloaded with assets by importing the project. - -" -A4CEA84825E512A73C509437103B89B6EF363D5B_1,A4CEA84825E512A73C509437103B89B6EF363D5B," Requirements - -A local file of a previously exported project : Importing a project from a local file is a method of copying a project. You can import a project from a file on your local system only if the ZIP file that you select was exported from a IBM watsonx project as a compressed file. You can import only projects that you exported from watsonx.ai. You cannot import a compressed file that was exported from a Cloud Pak for Data as a Service project. - -: If the exported file that you select to import was encrypted, you must enter the password that was used for encryption to enable decrypting sensitive connection properties. - -A sample project from Samples : You can create a project [from a project sample](https://dataplatform.cloud.ibm.com/samples?context=wx) to learn how to work with data in tools, such as notebooks to prepare data, analyze data, build and train models, and visualize analysis results. - -: The sample projects show how to accomplish goals, for example, to load and explore data, to create and train machine learning models for predictive analysis. Each project includes the required assets, such as notebooks, and all the data sets that you need to complete the example use case. - -" -A4CEA84825E512A73C509437103B89B6EF363D5B_2,A4CEA84825E512A73C509437103B89B6EF363D5B," Importing a project from a local file or sample - -To import a project: - - - -1. Click New project on the home page or on your Projects page. -2. Choose whether to create a project based on an exported project file or a sample project. -3. Upload a project file or select a sample project. -4. On the New project screen, add a name and optional description for the project. -5. If the project file that you select to import is encrypted, you must enter the password that was used for encryption to enable decrypting sensitive connection properties. If you enter an incorrect password, the project file imports successfully, but sensitive connection properties are falsely decrypted. -6. Select the Restrict who can be a collaborator checkbox to restrict collaborators to members of your organization. You can't change this setting after you create the project. -7. Choose an existing [object storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html) or create a new one. -8. Click Create. You can start adding resources if your project is empty, or begin working with the resources you imported. - - - -" -A4CEA84825E512A73C509437103B89B6EF363D5B_3,A4CEA84825E512A73C509437103B89B6EF363D5B," Learn more - - - -* [Administering a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html) -* [Exporting project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/export-project.html) - - - -Parent topic:[Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) -" -97B722619AFC616F13BEB20CD7A8FBC29CFF50D1,97B722619AFC616F13BEB20CD7A8FBC29CFF50D1," Viewing jobs across projects - -You can view the jobs that exist across projects for assets that run in tools, such as notebooks, Data Refinery flows, and SPSS Modeler flows. - -To view the status of jobs or job runs in projects: - - - -1. From the navigation menu, select Projects > Jobs. -2. Select a view scope: - - - -* Jobs with finished runs: all jobs that contain finished runs -* Finished runs: all job runs that have finished -* Jobs with active runs: all jobs that contain that contain active runs -* Active runs: all job runs that are still active - - - -3. Click ![the Filters icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/edit-filters.png) from the table toolbar to further narrow down the returned search results for the view scope you selected. The filter options vary depending the view scope selection, for example, for jobs with active runs, you can filter by run state, job type and project, whereas for finished runs by time, run state, whether the runs were started manually or by a schedule, job type, run duration and project. - - - -Parent topic:[Jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html) -" -28C4D682B46E9723F538988BB2BDB1EB65618E5E_0,28C4D682B46E9723F538988BB2BDB1EB65618E5E," Creating and managing jobs in a project - -You create jobs to run assets or files in tools, such as Data Refinery flows, SPSS Modeler flows, Notebooks, and scripts, in a project. - -When you create a job you define the properties for the job, such as the name, definition, environment runtime, schedule and notification specifications on different pages. You can run a job immediately or wait for the job to run at the next scheduled interval. - -Each time a job is started, a job run is created, which you can monitor and use to compare with the job run history of previous runs. You can view detailed information about each job run, job state changes, and job failures in the job run log. - -How you create a job depends on the asset or file. - - - -Job creation options for assets or files - - Asset or file Create job in tool Create job from the Assets page More information - - Data Refinery flow ✓ ✓ [Creating jobs in Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-dr.html) - SPSS Modeler flow ✓ ✓ [Creating jobs in SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-spss.html) - Notebook created in the Notebook editor ✓ ✓ [Creating jobs in the Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-nb-editor.html) - Pipelines ✓ [Creating jobs for Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-pipelines.html) - - - -" -28C4D682B46E9723F538988BB2BDB1EB65618E5E_1,28C4D682B46E9723F538988BB2BDB1EB65618E5E," Creating jobs from the Assets page - -You can create a job to run an asset from the project's Assets page. - -Required permissions : You must have an Editor or Admin role in the project. - -Restriction:You cannot run a job by using an API key from a [service ID](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html). - -To create jobs for a listed asset from the Assets page of a project: - - - -1. Select the asset from the section for your asset type and choose New job from the menu icon with the lists of options (![actions icon three vertical dots](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/actions.png)) at the end of the table row. -2. Define the job details by entering a name and a description (optional). -3. If you can select Setting, specify the settings that you want for the job. -4. If you can select Configure, choose an environment runtime for the job. Depending on the asset type, you can optionally configure more settings, for example environment variables or script arguments. - -To avoid accumulating too many finished job runs and job run artifacts, set how long to retain finished job runs and job run artifacts like logs or notebook results. You can either select the number of days to retain the job runs or the last number of job runs to keep. -5. On the Schedule page, you can optionally add a one-time or repeating schedule. - -If you select the Repeat option and unit of Minutes with the value of n, the job runs at the start of the hour, and then at every multiple of n. For example, if you specify a value of 11 it will run at 0, 11, 22, 33, 44 and 55 minutes of each hour. - -" -28C4D682B46E9723F538988BB2BDB1EB65618E5E_2,28C4D682B46E9723F538988BB2BDB1EB65618E5E,"If you also select the Start of Schedule option, the job starts to run at the first multiple of n of the hour that occurs after the time that you provide in the Start Time field. For example, if you enter 10:24 for the Start of Time value, and you select Repeat and set the job to repeat every 14 minutes, then your job will run at 10:42, 10:56, 11:00, 11:14. 11:28, 11:42, 11:56, and so on. - -You can't change the time zone; the schedule uses your web browser's time zone setting. If you exclude certain weekdays, the job might not run as you would expect. The reason might be due to a discrepancy between the time zone of the user who creates the schedule, and the time zone of the compute node where the job runs. - -An API key is generated when you create a scheduled job, and future runs will use this API key. If you didn't create a scheduled job but choose to modify one, an API key is generated for you when you modify the job and future runs will use this API key. -6. (Optional): Select to see notifications for the job. You can select the type of alerts to receive. -7. Review the job settings. Then, create the job and run it immediately, or create the job and run it later. - - - -" -28C4D682B46E9723F538988BB2BDB1EB65618E5E_3,28C4D682B46E9723F538988BB2BDB1EB65618E5E," Managing jobs - -You can view all of the jobs that exist for your project from the project's Jobs page. With Admin or Editor role for the project, you can view and edit the job details. You can run jobs manually and you can delete jobs. With Viewer role for the project, you can only view the job details. You can't run or delete jobs with Viewer role. - -To view the details of a specific job, click the job. From the job's details page, you can: - - - -* View the runs for that job and the status of each run. If a run failed, you can select the run and view the log tail or download the entire log file to help you troubleshoot the run. A failed run might be related to a temporary connection or environment problem. Try running the job again. If the job still fails, you can send the log to Customer Support. -* Edit job settings by clicking Edit job, for example to change schedule settings or to pick another environment template. -* Run the job manually by clicking ![the run icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/run-job.png) from the job's action bar. You can start a scheduled job based on the schedule and on demand. -* Delete the job by clicking ![the bin icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/delete-job.png) from the job's action bar. - - - -" -28C4D682B46E9723F538988BB2BDB1EB65618E5E_4,28C4D682B46E9723F538988BB2BDB1EB65618E5E," Viewing and editing jobs in a tool - -You can view and edit job settings associated with an asset directly in the following tools: - - - -* Data Refinery -* DataStage -* Match 360 -* Notebook editor or viewer -* Pipelines - - - -" -28C4D682B46E9723F538988BB2BDB1EB65618E5E_5,28C4D682B46E9723F538988BB2BDB1EB65618E5E," Viewing and editing jobs in Data Refinery, Notebooks, and Pipelines - - - -1. In the tool, click the Jobs icon ![the jobs icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/Run-schedule_Blue.png) from the toolbar and select Save and view jobs. This action lists the jobs that exist for the asset. -2. Select a job to see its details. You can change job settings by clicking Edit job. - - - -" -28C4D682B46E9723F538988BB2BDB1EB65618E5E_6,28C4D682B46E9723F538988BB2BDB1EB65618E5E," Learn more - - - -* [Creating jobs in Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-dr.html) -* [Creating jobs in the Notebook editor or Notebook viewer](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-nb-editor.html) -* [Creating jobs for Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-pipelines.html) - - - -Parent topic:[Working in projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html) -" -83198577CAC405AD1A9BF68BE7A5CAEB020D57D4_0,83198577CAC405AD1A9BF68BE7A5CAEB020D57D4," Leaving a project - -You can leave a project from within the project or from the Projects page. - -" -83198577CAC405AD1A9BF68BE7A5CAEB020D57D4_1,83198577CAC405AD1A9BF68BE7A5CAEB020D57D4," Restrictions - -If you are the only collaborator in the project with the Admin role, you must [assign the Admin role](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html) to another collaborator before you can leave the project. - -" -83198577CAC405AD1A9BF68BE7A5CAEB020D57D4_2,83198577CAC405AD1A9BF68BE7A5CAEB020D57D4," Leaving a project from within the project - -To leave a project from within the project: - - - -1. Open the project. -2. On the Manage tab, go to the General page. -3. In the Danger zone section, click Leave project. -4. Click Leave. - - - -" -83198577CAC405AD1A9BF68BE7A5CAEB020D57D4_3,83198577CAC405AD1A9BF68BE7A5CAEB020D57D4," Leaving multiple projects - -To leave one or more projects from the Projects page: - - - -1. Select View all projects from the navigation menu. -2. Select one or more projects to leave. -3. Click Leave. -4. Click Leave to confirm. - - - -Parent topic:[Projects ](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html) -" -784686DA695F28F867BC35C4416CB8D767D58B7A_0,784686DA695F28F867BC35C4416CB8D767D58B7A," Managing assets in projects - -You can manage assets in a project by adding them, editing them, or deleting them. - - - -* [Add data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html) -* You can add other types of assets by clicking New asset or Import assets on the project's Assets page. -* [Edit assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-assets.html?context=cdpaas&locale=eneditassets) -* [Download assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/download.html) -* [Delete assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-assets.html?context=cdpaas&locale=enremove-asset) -* [Search for assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html) - - - -" -784686DA695F28F867BC35C4416CB8D767D58B7A_1,784686DA695F28F867BC35C4416CB8D767D58B7A," Edit assets - -You can edit the properties of all types of assets, such as the asset name, description, and tags. See [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html). - -The role you need to edit an asset depends on the asset type. See [Project collaborator roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborator-permissions.html). - -Data assets from files, connected data assets, or imported data assets : - Click the data asset name to open the asset. For some types of data, you can see an [asset preview](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html). : - To edit the data asset properties, such as its name, tags, and description, click the corresponding edit icon (![edit icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/edit.svg)) on the information pane. : - To create or update a [profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html) of relational data, click the Profile tab. : - To cleanse and shape relational data, click Prepare data to open the data asset in Data Refinery. - -: When you change the name of data assets with file attachments that you uploaded into the project, the file attachments are also renamed. You must update any references to the data asset in code-based assets, like notebooks, to the new data asset name, otherwise, the code-based asset won't run. - -Connection assets : Click the connection asset name to edit the connection properties, such as the name, description, and connection details. - -Assets that you create with tools : Click the name of the asset on the Assets page to open it in its tool. - -" -784686DA695F28F867BC35C4416CB8D767D58B7A_2,784686DA695F28F867BC35C4416CB8D767D58B7A,"On the Assets page of a project, the lock icon (![Lock icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/lockicon-new.png)) indicates that another collaborator is editing the asset or locked the asset to prevent editing by other users. - - - -* Enabled lock: You can unlock the asset if you locked it or if you have the Admin role in the project. -* Disabled lock: You can't unlock a locked asset if you didn't lock it and you have the Editor or Viewer role in the project. - - - -When you unlock an asset that another collaborator is editing, you take control of the asset. The other collaborator is not notified and any changes made by that collaborator are overwritten by your edits. - -" -784686DA695F28F867BC35C4416CB8D767D58B7A_3,784686DA695F28F867BC35C4416CB8D767D58B7A," Delete an asset from a project - -Required permissions : You must have the Admin or Editor role to delete assets from the project. - -To delete an asset from a project, choose the Delete or the Remove option from the action menu next to the asset on the project Assets page. When you delete an asset, its associated file, if it has one, is also deleted. However, when you delete a connected data asset, the data in the associated data source is not affected. - -Depending on the type of asset, other related assets might also be deleted. - -" -784686DA695F28F867BC35C4416CB8D767D58B7A_4,784686DA695F28F867BC35C4416CB8D767D58B7A," Learn more - - - -* [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html) - - - -Parent topic:[Projects ](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html) -" -7935B285F3E7DBBDCDA940A70CF92DC0E2ED6512_0,7935B285F3E7DBBDCDA940A70CF92DC0E2ED6512," Working in projects - -A project is a collaborative workspace where you work with data and other assets to accomplish a particular goal. - -By default, your [sandbox project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/sandbox.html) is created automatically when you sign up for watsonx.ai. - -Your project can include these types of resources: - - - -* [Collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html?context=cdpaas&locale=encollaboration) are the people who you work with in your project. -* [Data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html?context=cdpaas&locale=endata) are what you work with. Data assets often consist of raw data that you work with to refine. -* [Tools and their associated assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html?context=cdpaas&locale=entools) are how you work with data. -* [Environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html?context=cdpaas&locale=enenv) are how you configure compute resources for running assets in tools. -* [Jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html?context=cdpaas&locale=enjobs) are how you manage and schedule the running of assets in tools. -* [Project documentation](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html?context=cdpaas&locale=endocs) and notifications are how you stay informed about what's happening in the project. -" -7935B285F3E7DBBDCDA940A70CF92DC0E2ED6512_1,7935B285F3E7DBBDCDA940A70CF92DC0E2ED6512,"* [Asset storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html?context=cdpaas&locale=enstorage) is where project information and files are stored. -* [Integrations](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html?context=cdpaas&locale=eninteg) are how you incorporate external tools. - - - -You can customize projects to suit your goals. You can change the contents of your project and almost all of its properties at any time. However, you must make these choices when you create the project because you can't change them later: - - - -* The instance of IBM Cloud Object Storage to use for project storage. - - - -You can view projects that you create and collaborate in by selecting Projects > View all projects in the navigation menu, or by viewing the Projects pane on the main page. - -" -7935B285F3E7DBBDCDA940A70CF92DC0E2ED6512_2,7935B285F3E7DBBDCDA940A70CF92DC0E2ED6512," Collaboration in projects - -As a project creator, you can add other collaborators and assign them roles that control which actions they can take. You automatically have the Admin role in the project, and if you give other collaborators the Admin role, they can add collaborators too. See [Adding collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html) and [Project collaborator roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborator-permissions.html). - -Tip: If appropriate, add at least one other user as a project administrator to ensure that someone is able to manage the project if you are unavailable. - -" -7935B285F3E7DBBDCDA940A70CF92DC0E2ED6512_3,7935B285F3E7DBBDCDA940A70CF92DC0E2ED6512," Collaboration on assets - -All collaborators work with the same copy of each asset. Only one collaborator can edit an asset at a time. While a collaborator is editing an asset in a tool, that asset is locked. Other collaborators can view a locked asset, but not edit it. See [Managing assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-assets.html). - -" -7935B285F3E7DBBDCDA940A70CF92DC0E2ED6512_4,7935B285F3E7DBBDCDA940A70CF92DC0E2ED6512," Data assets - -You can add these types of data assets to projects: - - - -* Data assets from local files or the Samples -* Connections to cloud and on-premises data sources -* Connected data assets from an existing connection asset that provide read-only access to a table or file in an external data source -* Folder data assets to view the files within a folder in a file system - - - -Learn more about data assets: - - - -* [Connectors](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -* [Adding data to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html) -* [Data asset types and their properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.htmldata) -* [Searching for assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html) - - - -" -7935B285F3E7DBBDCDA940A70CF92DC0E2ED6512_5,7935B285F3E7DBBDCDA940A70CF92DC0E2ED6512," Tools and their associated assets - -When you run a tool, you create an asset that contains the information for a specific goal. For example, when you run the Data Refinery tool, you create a Data Refinery flow asset that defines the set of ordered operations to run on a specific data asset. Each tool has one or more types of associated assets that run in the tool. - -For a mapping of assets to the tools that you use to create them, see [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html). - -" -7935B285F3E7DBBDCDA940A70CF92DC0E2ED6512_6,7935B285F3E7DBBDCDA940A70CF92DC0E2ED6512," Environments - -Environments control your compute resources. An environment template specifies hardware and software resources to instantiate the environment runtimes that run your assets in tools. - -Some tools have an automatically selected environment template. However, for other tools, you can choose between multiple environments. When you create an asset in a tool, you assign an environment to it. You can change the environment for an asset when you run it. - -Watson Studio includes a set of default environment templates that vary by coding language, tool, and compute engine type. You can also create custom environment templates or add services that provide environment templates. - -The compute resources that you consume in a project are tracked. Depending on your offering plan, you have a limit to your monthly compute resources or you pay for all compute resources. - -See [Environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html). - -" -7935B285F3E7DBBDCDA940A70CF92DC0E2ED6512_7,7935B285F3E7DBBDCDA940A70CF92DC0E2ED6512," Jobs - -A job is a single run of an asset in a tool with a specified environment runtime. You can schedule one or repeating jobs, monitor, edit, stop, or cancel jobs. See [Jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html). - -" -7935B285F3E7DBBDCDA940A70CF92DC0E2ED6512_8,7935B285F3E7DBBDCDA940A70CF92DC0E2ED6512," Asset storage - -Each project has a dedicated, secure storage bucket that contains: - - - -* Data assets that you upload to the project as files. -* Data assets from files that you copy from another workspace. -* Files that you save to the project with a tool. -* Files for assets that run in tools, such as notebooks. -* Saved models. -* The project readme file and internal project files. - - - -When you create a project, you must select an instance of IBM Cloud Object Storage or create a new instance. You cannot change the IBM Cloud Object Storage instance after you create the workspace. See [Object storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html). - -When you delete a project, its storage bucket is also deleted. - -" -7935B285F3E7DBBDCDA940A70CF92DC0E2ED6512_9,7935B285F3E7DBBDCDA940A70CF92DC0E2ED6512," Integrations with external tools - -Integrations provide a method to interact with tools that are external to the project. - -You can integrate with a Git repository to [publish notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/github-integration.html). - -" -7935B285F3E7DBBDCDA940A70CF92DC0E2ED6512_10,7935B285F3E7DBBDCDA940A70CF92DC0E2ED6512," Project documentation and notifications - -While you create a project, you can add a short description to document the purpose or goal of the project. You can edit the description later, on the project's Settings page. - -You can mark the project as sensitive. When users open a project that is marked as sensitive, a notification is displayed stating that no data assets can be downloaded or exported from the project. - -The Overview page of a project contains a readme file where you can document the status or results of the project. The readme file uses standard [Markdown formatting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/markd-jupyter.html). Collaborators with the Admin or Editor role can edit the readme file. - -You can view recent asset activity in the Assets pane on the Overview page, and filter the assets by selecting By you or By all using the dropdown. By you lists assets that you edited, ordered by most recent. By all lists assets that are edited by others and also by you, ordered by most recent. - -All collaborators in a project are notified when a collaborator changes an asset. - -" -7935B285F3E7DBBDCDA940A70CF92DC0E2ED6512_11,7935B285F3E7DBBDCDA940A70CF92DC0E2ED6512," Learn more - - - -* [Your sandbox project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/sandbox.html) -* [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) -* [Administering a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html) -* [Adding collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html) -* [Managing assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-assets.html) -* [Downloading data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/download.html) -* [Environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) -* [Jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html) -" -C324305E8F756140B7B96492D73D35BB32794119_0,C324305E8F756140B7B96492D73D35BB32794119," Marking a project as sensitive - -When you create a project, you can mark the project as sensitive to prevent project collaborators from moving sensitive data out of the project. - -Marking a project as sensitive prevents collaborators of a project, including administrators, from downloading or exporting data assets, connections, or connected data from a project. These sensitive assets cannot be added to a catalog or promoted to a space either. Project collaborators with Admin or Editor role can export assets like notebooks or models from the project. - -When users open a project that is marked as sensitive, a notification is displayed stating that no data assets can be downloaded or exported from the project. - -" -C324305E8F756140B7B96492D73D35BB32794119_1,C324305E8F756140B7B96492D73D35BB32794119," Restrictions - - - -* You cannot mark a project as sensitive after the project is created. -* You cannot mark projects that use Git integration as sensitive. - - - -Parent topic:[Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html) -" -A6D3281CF9382FA606CF60727452A304A5CCDFA5_0,A6D3281CF9382FA606CF60727452A304A5CCDFA5," Adding platform connections - -You can add connections to the Platform assets catalog to share them across your organization. All collaborators in the Platform assets catalog can see the connections in the catalog. However, only users with the credentials for the data source can use a platform connection in a project to create a connected data asset. - -Required permissions : To create a platform connection, you must be a collaborator in the Platform assets catalog with one of these roles: - - - -* Editor -* Admin - - - -If you're not a collaborator in the Platform assets catalog, ask someone who is a collaborator to add you or tell you who has the Admin role in the catalog. - -You create connections to these types of data sources: - - - -* IBM Cloud services -* Other cloud services -* On-premises databases - - - -See [Connectors](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) for a full list of data sources. Watch this video to see how to add platform connections. - -Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -To create a platform connection: - - - -1. From the main menu, choose Data > Platform connections. -2. Click New connection. -3. Choose a data source. -4. If necessary, enter the connection information required for your data source. Typically, you need to provide information like the host, port number, username, and password. -5. If prompted, specify whether you want to use personal or shared credentials. You cannot change this option after you create the connection. The credentials type for the connection, either Personal or Shared, is set by the account owner on the [Account page](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html). The default setting is Shared. - - - -" -A6D3281CF9382FA606CF60727452A304A5CCDFA5_1,A6D3281CF9382FA606CF60727452A304A5CCDFA5,"* Personal: With personal credentials, each user must specify their own credentials to access the connection. Each user's credentials are saved but are not shared with any other users. Use personal credentials instead of shared credentials to protect credentials. For example, if you use personal credentials and another user changes the connection properties (such as the hostname or port number), the credentials are invalidated to prevent malicious redirection. -* Shared: With shared credentials, all users access the connection with the credentials that you provide. - - - -6. To connect to a database that is not externalized to the internet (for example, behind a firewall), see [Connecting to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). -7. Click Create. The connection appears on the Connections page. You can edit the connection by clicking the connection name. - - - -Alternatively, you can create a connection in a project and then publish it to the Platform assets catalog. - -To publish a connection from a project to the Platform assets catalog: - - - -1. Locate the connection in the project's Assets tab in the Data assets section. -2. From the Actions menu (![Actions icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/actions.png)), select Publish to catalog. -3. Select Platform assets catalog and click Publish. - - - -" -A6D3281CF9382FA606CF60727452A304A5CCDFA5_2,A6D3281CF9382FA606CF60727452A304A5CCDFA5," Next step - - - -* [Add a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) - - - -" -A6D3281CF9382FA606CF60727452A304A5CCDFA5_3,A6D3281CF9382FA606CF60727452A304A5CCDFA5," Learn more - - - -* [Connectors](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -* [Creating the Platform assets catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/platform-assets.html) -* [Set the credentials for connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.htmlset-the-credentials-for-connections) - - - -Parent topic:[Preparing data](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/get-data.html) -" -E70109F320A53829D66F6E07EE0A9B79B59AEE13_0,E70109F320A53829D66F6E07EE0A9B79B59AEE13," Your sandbox project - -A project is where you work with data and models by using tools. When you sign up for watsonx.ai, your sandbox project is created automatically, and you can start working in it immediately. - -Initially, your sandbox project is empty. To start working, click a task tile on the home page or go to the Assets page in your project, click New asset, and select a task. Each task can result in an asset that is saved in the project. Many tasks include samples that you can use. You can find sample prompts, notebooks, data sets, and other assets in the Samples from the home page. You can share your work by [adding collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html) to your project. If you need to work with data, you can [add data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html) to your project. - -If your sandbox project is your only project, then any task that you select occurs in the context of your sandbox project. When you have multiple projects, you can change the default project by selecting a project from the Open in list on the home page. - -Other projects that you create have the same functionality as your sandbox project, except that your Watson Machine Learning service instance is automatically associated with your sandbox project. You must manually associate your Watson Machine Learning service instance with other projects. - -" -E70109F320A53829D66F6E07EE0A9B79B59AEE13_1,E70109F320A53829D66F6E07EE0A9B79B59AEE13," Manually creating a sandbox project - -If you switch from Cloud Pak for Data as a Service to watsonx, you can create a sandbox project from the watsonx home page when the following conditions are met: - - - -* You have one or more instances of the Watson Machine Learning service. -* You have exactly one instance of the IBM Cloud Object Storage service. - - - -To manually create a sandbox project, click Create sandbox in the Projects section. - -Otherwise, you can create a different project. See [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html). You are guided through associating a Watson Machine Learning service with the project when you open certain tools. - -You can switch an existing project from the Cloud Pak for Data as a Service to watsonx. See [Switching the platform for a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html). - -" -E70109F320A53829D66F6E07EE0A9B79B59AEE13_2,E70109F320A53829D66F6E07EE0A9B79B59AEE13," Learn more - - - -* [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html) -* [Adding collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html) -* [Add data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html) -* [Manage assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-assets.html) -* [Adding associated services to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html) -* [Object storage for workspaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html) - - - -Parent topic:[Projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html) -" -977C81385F7825613F1EDBD3C0DBF44C259BA8D7_0,977C81385F7825613F1EDBD3C0DBF44C259BA8D7," Searching for assets and artifacts across the platform - -" -977C81385F7825613F1EDBD3C0DBF44C259BA8D7_1,977C81385F7825613F1EDBD3C0DBF44C259BA8D7," Searching for assets across the platform - -You can use the global search bar to search for assets across all the projects and deployment spaces to which you have access. - - - -* [Requirements and restrictions](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html?context=cdpaas&locale=enrestrictions) -* [Searching for assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html?context=cdpaas&locale=ensearch) -* [Selecting results](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html?context=cdpaas&locale=enresult) - - - -" -977C81385F7825613F1EDBD3C0DBF44C259BA8D7_2,977C81385F7825613F1EDBD3C0DBF44C259BA8D7," Requirements and restrictions - -You can find assets and artifacts under the following circumstances. - - - -* Required permissions -You can have any role in projects or deployment spaces to find assets. - - - - - -* Workspaces - - - -* You can search for assets that are in these workspaces: - - - -* Projects -* Deployment spaces - - - - - -* Types of assets -You can search for all types of assets. -* Restrictions - - - -* Your search results include only assets in workspaces that you belong to. - - - - - -" -977C81385F7825613F1EDBD3C0DBF44C259BA8D7_3,977C81385F7825613F1EDBD3C0DBF44C259BA8D7," Searching for assets - -To search for an asset, you can enter one or more words in the global search field. The search results are matches from these properties of assets: - - - -* Name -* Description -* Tags -* Table name - - - -You can customize your searches with these techniques: - - - -* [Searching for the start of a word](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html?context=cdpaas&locale=enstart) -* [Searching for a part of a word](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html?context=cdpaas&locale=enpart) -* [Searching for a phrase](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html?context=cdpaas&locale=enphrase) -* [Searching for multiple alternative words](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html?context=cdpaas&locale=enmultiple) - - - -" -977C81385F7825613F1EDBD3C0DBF44C259BA8D7_4,977C81385F7825613F1EDBD3C0DBF44C259BA8D7," Searching for the start of a word - -To search for words starting with a letter or letters, enter the first 1-3 letters of the word. If you enter only one letter, words starting with that letter are returned. If you enter two or three letters, words starting with those letters will be prioritized over the words containing those letters. For example, if you search for i , you will get results like initial and infinite , but not definite. If you search for in you will additionally get results containing definite ranked lower in the results list. - -" -977C81385F7825613F1EDBD3C0DBF44C259BA8D7_5,977C81385F7825613F1EDBD3C0DBF44C259BA8D7," Searching for a part of a word - -To search for partial word matches, include more than 3 letters. For example, if you search for conn, you might get results like connection and disconnect. - -Only the first 12 characters in a word are used in the search. Any search terms that you enter that are longer than 12 characters are truncated to the first 12 characters. - -Searches for partial words don't work in the description fields. - -" -977C81385F7825613F1EDBD3C0DBF44C259BA8D7_6,977C81385F7825613F1EDBD3C0DBF44C259BA8D7," Searching for a phrase - -To search for a specific phrase, surround the phrase with double quotation marks. For example, if you search for ""payment plan prediction"", your results contain exactly that phrase. - -You can include a quoted phrase within a longer search string. For example, if you search for credit card ""payment plan prediction"", you might get results that contain credit card, credit, card, and payment plan prediction. - -When you search for a phrase in English, natural language analysis optimizes the search results in the following ways: - - - -* Words that are not important to the search intent are removed from the search query. -* Phrases in the search string that are common in English are automatically ranked higher than results for individual words. - - - -For example, if you search for find credit card interest in United States, you might get the following results: - - - -* Matches for credit card interest and United States are prioritized. -* Matches for credit, card, interest, United, and States are returned. -* Matches for in are not returned. - - - -" -977C81385F7825613F1EDBD3C0DBF44C259BA8D7_7,977C81385F7825613F1EDBD3C0DBF44C259BA8D7," Searching for multiple alternative words - -To find results that contain any of your search terms, enter multiple words. For example, if you search for machine learning, the results contain the word machine, the word learning, or both words. - -" -977C81385F7825613F1EDBD3C0DBF44C259BA8D7_8,977C81385F7825613F1EDBD3C0DBF44C259BA8D7," Selecting results - -To select the best result, look at which property of the asset or artifact matches your search string. The matching text is highlighted. - -The highest scoring results are for matches to the name of the asset. Multiple assets can have the same name. However, the name of the project, deployment or space, is shown underneath the asset name so you can determine which result is the one you want. - -Click an asset name to view it in its project or deployment space. - -Results are prioritized in this order: - - - -1. Matches of quoted phrases or common phrases (for English only) -2. Exact matches of complete words -3. Partial matches of complete words -4. Fuzzy matches - - - -From the search results, you can click Preview to view more information in the side panel. - -" -977C81385F7825613F1EDBD3C0DBF44C259BA8D7_9,977C81385F7825613F1EDBD3C0DBF44C259BA8D7," Filtering and sorting results - -You can filter search results by these properties: - - - -* Type of asset -* Tags -* Owners (for some types of assets) -* The user who modified the asset -* The time period when the asset was last modified -* Projects (assets only) -* Workspaces -* Schema -* Table -* Contains: Feature group - - - -You can sort results by the most relevant or the last modified date. - -Parent topic:[Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html) -" -44BA508199B214448CB22B7658127E16DD4E7ABF_0,44BA508199B214448CB22B7658127E16DD4E7ABF," Connecting to data behind a firewall - -To connect to a database that is not accessible via the internet (for example, behind a firewall), you must set up a secure communication path between your on-premises data source and IBM Cloud. Use a Satellite Connector, a Satellite location, or a Secure Gateway instance for the secure communication path. - - - -* [Set up a Satellite Connector](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=ensatctr): Satellite Connector is the replacement for Secure Gateway. Satellite Connector uses a lightweight Docker-based communication that creates secure and auditable communications from your on-prem, cloud, or Edge environment back to IBM Cloud. Your infrastructure needs only a container host, such as Docker. For more information, see [Satellite Connector overview](https://cloud.ibm.com/docs/satellite?topic=satellite-understand-connectors&interface=ui). - - - - - -* [Set up a Satellite location](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=ensl): A Satellite location provides the same secure communications to IBM Cloud as a Satellite Connector but adds high availability access by default plus the ability to communicate from IBM Cloud to your on-prem location. It supports managed cloud services on on-premises, such as Managed OpenShift and Managed Databases, supported remotely by IBM Cloud PaaS SRE resources. A Satellite location requires at least three x86 hosts in your infrastructure for the HA control plane. A Satellite location is a superset of the capabilities of the Satellite Connector. If you need only client data communication, set up a Satellite Connector. - - - - - -" -44BA508199B214448CB22B7658127E16DD4E7ABF_1,44BA508199B214448CB22B7658127E16DD4E7ABF,"* [Configure a Secure Gateway](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=engateway): Secure Gateway is IBM Cloud's former solution for communication between on-prem or third-party cloud environments. Secure Gateway is now [deprecated by IBM Cloud](https://cloud.ibm.com/docs/SecureGateway?topic=SecureGateway-dep-overview). For a new connection, set up a Satellite Connector instead. - - - -" -44BA508199B214448CB22B7658127E16DD4E7ABF_2,44BA508199B214448CB22B7658127E16DD4E7ABF," Set up a Satellite Connector - -To set up a Satellite Connector, you create the Connector in your IBM Cloud account. Next, you configure agents to run in your local Docker host platform on-premises. Finally, you create the endpoints for your data source that IBM watsonx uses to access the data source from IBM Cloud. - -" -44BA508199B214448CB22B7658127E16DD4E7ABF_3,44BA508199B214448CB22B7658127E16DD4E7ABF," Requirements for a Satellite Connector - -Required permissions : You must have Administrator access to the Satellite service in IAM access policies to do the steps in IBM Cloud. - -Required host systems : Minimum one x86 Docker host in your own infrastructure to run the Connector container. See [Minimum requirements](https://cloud.ibm.com/docs/satellite?topic=satellite-understand-connectors&interface=uimin-requirements). - -" -44BA508199B214448CB22B7658127E16DD4E7ABF_4,44BA508199B214448CB22B7658127E16DD4E7ABF," Setting up a Satellite Connector - -Note: Not all connections support Satellite. If the connection supports Satellite, the IBM Cloud Satellite tile will be available in the Private Connectivity section of the Create connection form. Alternatively, you can filter all the connections that support Satellite in the New connection page. - - - -1. Access the Create connector page in IBM Cloud from one of these places: - - - -* Log in to the [Connectors](https://cloud.ibm.com/satellite/connectors) page in IBM Cloud. -* In IBM watsonx: - - - -1. Go to the project page. Click the Assets tab. -2. Click New asset > Connect to a data source. -3. Select the IBM watsonx connector. -4. In the Create connection page, scroll down to the Private connectivity section, and click the IBM Cloud Satellite tile. -5. Click Configure Satellite and then log in to IBM Cloud. -6. Click Create connector. - - - - - -2. Follow the steps for [Creating a Connector](https://cloud.ibm.com/docs/satellite?topic=satellite-create-connector). -3. Set up the Connector agent containers in your local Docker host environment. For high availability, use three agents per connector that are deployed on separate Docker hosts. It is best to use a separate infrastructure and network connectivity for each agent. Follow the steps for [Running a Connector agent](https://cloud.ibm.com/docs/satellite?topic=satellite-run-agent-locally). -The agents will appear in the Active Agents list for the connector. -4. In IBM watsonx, go back to the Create connection page. In the Private connectivity section, click Reload, and then select the Satellite Connector that you created. - - - -In the [Satellite Connectors dashboard](https://cloud.ibm.com/satellite/connectors) in IBM Cloud, for each connection that you create, a user endpoint is added in the Satellite Connector. - -" -44BA508199B214448CB22B7658127E16DD4E7ABF_5,44BA508199B214448CB22B7658127E16DD4E7ABF," Set up a Satellite location - -Use the Satellite location feature of IBM Cloud Satellite to securely connect to a Satellite location that you configure for your IBM Cloud account. - -" -44BA508199B214448CB22B7658127E16DD4E7ABF_6,44BA508199B214448CB22B7658127E16DD4E7ABF," Requirements for a Satellite location - -Required permissions : You must be the Admin in the IBM Cloud account to do the tasks in IBM Cloud. - -Required host systems : You need at least three computers or virtual machines in your own infrastructure to act as Satellite hosts. Confirm the [host system requirements](https://cloud.ibm.com/docs/satellite?topic=satellite-host-reqs). (The IBM Cloud docs instructions for additional features such as Red Hat OpenShift clusters and Kubernetes are not required for a connection in IBM watsonx.) - -Note: Not all connections support Satellite. If the connection supports Satellite, the IBM Cloud Satellite tile will be available in the Private Connectivity section of the Create connection form. Alternatively, you can filter all the connections that support Satellite in the New connection page. - -" -44BA508199B214448CB22B7658127E16DD4E7ABF_7,44BA508199B214448CB22B7658127E16DD4E7ABF," Setting up a Satellite location - -Configure the Satellite location in IBM Cloud. - - - -* [Task 1: Create a Satellite location](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=entask1) -* [Task 2: Attach the hosts to the Satellite location](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=entask2) -* [Task 3: Assign the hosts to the control plane](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=entask3) -* [Task 4: Create the connection secured with a Satellite location](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=entask4) -* [Maintaining the Satellite location](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=enmaintain) - - - -" -44BA508199B214448CB22B7658127E16DD4E7ABF_8,44BA508199B214448CB22B7658127E16DD4E7ABF," Task 1: Create a Satellite location - -A Satellite location is a representation of an environment in your infrastructure provider, such as an on-prem data center or cloud. To connect to data sources in IBM watsonx, you need three computers or virtual machines. To create the Satellite location: - - - -1. Access the Create a Satellite location setup page in IBM Cloud from one of these places: - - - -* Log in to [IBM Cloud](https://cloud.ibm.com/satellite/overview), and select Create location. -* In IBM watsonx: - - - -1. Go to the project page. Click the Assets tab. -2. Click New asset > Connect to a data source. -3. Select the connector. -4. In the Create connection page, scroll down to the Private connectivity section, and click the IBM Cloud Satellite tile. -5. Click Configure Satellite and then log in to IBM Cloud. -6. Click Create location. - -These instructions follow the On-premises & edge template. Depending on your infrastructure, you can select a different template. Refer to the template instructions and the information at [Understanding Satellite location and hosts](https://cloud.ibm.com/docs/satellite?topic=satellite-location-host) in the IBM Cloud docs. - - - - - -2. Click Edit to modify the Satellite location information: - - - -* Name: You can use this field to differentiate between different networks such as my US East network or my Japan network. -* The Tags and Description fields are optional. -* Managed from: Select the IBM Cloud region that is closest to where your host machines physically reside. -* Resource group: is set to default by default. -* Zones: IBM automatically spreads the control plane instances across three zones within the same IBM Cloud multizone metro. For example, if you manage your location from the wdc metro in the US East region, your Satellite location control plane instances are spread across the us-east-1, us-east-2, and us-east-3 zones. This zonal spread ensures that your control plane is available, even if one zone becomes unavailable. -* Red Hat CoreOS: Do not select this option. Leave it cleared or as No. -" -44BA508199B214448CB22B7658127E16DD4E7ABF_9,44BA508199B214448CB22B7658127E16DD4E7ABF,"* Object storage: Click Edit to enter the exact name of an existing IBM Cloud Object Storage bucket that you want to use to back up Satellite location control plane data. Otherwise, a new bucket is automatically created in an Object Storage instance in your account. - - - -3. Review your order details, and then click Create location. - -A location control plane is deployed to one of the zones that are located in the IBM Cloud region that you selected. The control plane is ready for you to attach hosts to it. - - - -" -44BA508199B214448CB22B7658127E16DD4E7ABF_10,44BA508199B214448CB22B7658127E16DD4E7ABF," Task 2: Attach the hosts to the Satellite location - -Attach three hosts that conform to the [host requirements](https://cloud.ibm.com/docs/satellite?topic=satellite-host-reqs) to the Satellite location. - -" -44BA508199B214448CB22B7658127E16DD4E7ABF_11,44BA508199B214448CB22B7658127E16DD4E7ABF," Important considerations for Satellite location hosts - - - -* Satellite hosts are dedicated servers and cannot be shared with other applications. You cannot log in to a host with SSH. The root password will be changed. -* You need only three hosts for IBM watsonx connections. -* Worker nodes are not required. Only control plane hosts are needed for IBM watsonx connections. -* The Red Hat OpenShift Container Platform (OCP) is not needed for IBM watsonx connections. -* Container Linux CoreOS Linux is not needed for IBM watsonx connections. -* Hosts connect to IBM Cloud with the TLS 1.3 protocol. - - - -To attach the hosts to the Satellite location: - - - -1. From the [Satellite Locations dashboard](https://cloud.ibm.com/satellite/locations), click the name of your location. -2. Click Attach Hosts to generate and download a script. -3. Run the script on all the hosts to be attached to the Satellite location. -4. Save the attach script in case you attach more hosts to the location in the future. The token in the attach script is an API key, which must be treated and protected as sensitive information. See [Maintaining the Satellite location](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=enmaintain). - - - -" -44BA508199B214448CB22B7658127E16DD4E7ABF_12,44BA508199B214448CB22B7658127E16DD4E7ABF," Task 3: Assign the hosts to the control plane - -To assign the hosts: - - - -1. From the [Satellite Locations dashboard](https://cloud.ibm.com/satellite/locations), click the name of your location. -2. For each host, click the overflow menu (![Overflow menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/actions.png)) and then select Assign. Assign one host to each zone. - - - -" -44BA508199B214448CB22B7658127E16DD4E7ABF_13,44BA508199B214448CB22B7658127E16DD4E7ABF," Task 4: Create the connection secured with a Satellite location - -To create the secure connection: - - - -1. In IBM watsonx, go to the project page. Click the Assets tab. -2. Click New asset > Connect to a data source. -3. Select the connector. -4. In the Create connection form, complete the connection details. The hostname or IP address and the port of the data source must be available from each host that is attached to the Satellite location. -5. Click Reload, and then select the Satellite location that you created. - - - -In the [Satellite Locations dashboard](https://cloud.ibm.com/satellite/locations) in IBM Cloud, for each connection that you create, a link endpoint is created with Destination typeLocation, and Created byConnectivity in the Satellite location. - -" -44BA508199B214448CB22B7658127E16DD4E7ABF_14,44BA508199B214448CB22B7658127E16DD4E7ABF," Maintaining the Satellite location - - - -* The host attach script expires one year from the creation date. To make sure that the hosts don't have authentication problems, download a new copy of the host attach script at least once per year. -* Save the attach script in case you attach more hosts to the location in the future. If you generate a new host attach script, it detaches all the existing hosts. -* Hosts can be reclaimed by detaching them from the Satellite location and reloading the operating system in the infrastructure provider. - - - -" -44BA508199B214448CB22B7658127E16DD4E7ABF_15,44BA508199B214448CB22B7658127E16DD4E7ABF," Configure a Secure Gateway - -The IBM Cloud Secure Gateway service provides a remote client to create a secure connection to a database that is not externalized to the internet. You can provision a Secure Gateway service in one service region and use it in service instances that you provisioned in other regions. After you create an instance of the Secure Gateway service, you add a Secure Gateway. - -Important: Secure Gateway is deprecated by IBM Cloud. For information see [Secure Gateway deprecation overview and timeline](https://cloud.ibm.com/docs/SecureGateway?topic=SecureGateway-dep-overviewq082720___AMQ_SSL_ALLOW_DEFAULT_CERT__title__1). - -" -44BA508199B214448CB22B7658127E16DD4E7ABF_16,44BA508199B214448CB22B7658127E16DD4E7ABF," Prerequisite - -When you log in to IBM watsonx, select Enable Cloud Foundry access. - -Note: Not all connections support Secure Gateway. If the connection supports Secure Gateway, the IBM Cloud Secure Gateway tile will be available in the Private Connectivity section of the Create connection form. Alternatively, you can filter all the connections that support Secure Gateway in the New connection page. - -To configure a secure gateway: - - - -1. Configure a secure gateway from the Create connection screen: - - - -1. Click the IBM Cloud Secure Gateway tile. -2. Click New Secure Gateway and then Create Secure Gateway. -Otherwise, from the main menu in IBM watsonx, choose Administration > Services > Services catalog and then select Secure Gateway. - - - -2. Select a service plan and click Create. -3. On the Services instances page, find the Secure Gateway service and click its name. -4. Follow the instructions to add a gateway [Adding a gateway](https://cloud.ibm.com/docs/SecureGateway?topic=SecureGateway-add-sg-gw). To maintain security for the connection, make sure that you configure the Secure Gateway to require a security token. Make sure you copy your Gateway ID and security token. -5. From within your new gateway, on the Clients tab, click Connect Client to open the Connect Client pane. -6. Select the client download for your operating system. -7. Follow the instructions for [installing the Client](https://cloud.ibm.com/docs/SecureGateway?topic=SecureGateway-client-install). -8. Depending on the resource authentication protocol that you specify, you might need to upload a certificate. A destination is created when the connection is first established. -9. In IBM watsonx, go to the project page. Click the Assets tab. In the Private connectivity section, click Reload, and then select the secure gateway that you created. - - - -" -44BA508199B214448CB22B7658127E16DD4E7ABF_17,44BA508199B214448CB22B7658127E16DD4E7ABF," Learn more - - - -* [Getting started with IBM Cloud Satellite](https://cloud.ibm.com/docs/satellite?topic=satellite-getting-started) -* [Secure Gateway deprecation](https://cloud.ibm.com/docs/SecureGateway) - - - -Parent topic: [Adding data to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html) -" -C122739764B1EC75B64E1B740F493BAD8616A9DB_0,C122739764B1EC75B64E1B740F493BAD8616A9DB," Adding very large objects to a project's Cloud Object Storage - -The amount of data you can load to a project's Cloud Object Storage at any one time depends on where you load the data from. If you are loading the data in the product UI, the limit is 5 GB. To add larger objects to a project's Cloud Object Storage, you can use an API or an FTP client. - - - -* [The Cloud Object Storage API](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/store-large-objs-in-cos.html?context=cdpaas&locale=enapi) -* An FTP client -* [The IBM Cloud Object Storage Python SDK](https://github.com/IBM/ibm-cos-sdk-python) (in case you can't use an FTP client) - - - -" -C122739764B1EC75B64E1B740F493BAD8616A9DB_1,C122739764B1EC75B64E1B740F493BAD8616A9DB," Load data in multiple parts by using the Cloud Object Storage API - -With the Cloud Object Storage API, you can load data objects as large as 5 GB in a single PUT, and objects as large as 5 TB by loading the data into object storage as a set of parts which can be loaded independently in any order and in parallel. After all of the parts have been loaded, they are presented as a single object in Cloud Object Storage. - -You can load files with these formats and mime types in multiple parts: - - - -* application/xml -* application/pdf -* text/plain; charset=utf-8 - - - -To load a data object in multiple parts: - - - -1. Initiate a [multipart load](https://cloud.ibm.com/docs/services/cloud-object-storage/basics?topic=cloud-object-storage-store-very-large-objectsinitiate-a-multipart-upload): - - - -curl -X ""POST"" ""https://(endpoint)/(bucket-name)/(object-name)?uploads"" --H ""Authorization: bearer (token)"" - -The values for bucket-name and token are on the project's General page on the Manage tab. Click Manage in IBM Cloud on the Watson Studio for the endpoint value. - - - -1. Load the parts by specifying arbitrary sequential part numbers and an UploadId for the object: - - - -curl -X ""PUT"" ""https://(endpoint)/(bucket-name)/(object-name)?partNumber=(sequential-integer)&uploadId=(upload-id)"" --H ""Authorization: bearer (token)"" --H ""Content-Type: (content-type)"" - -Replacecontent-type with application/xml, application/pdf or text/plain; charset=utf-8. - - - -1. Complete the multipart load: - - - -curl -X ""POST"" ""https://(endpoint)/(bucket-name)/(object-name)?uploadId=(upload-id)"" --H ""Authorization: bearer (token)"" --H ""Content-Type: text/plain; charset=utf-8"" -" -C122739764B1EC75B64E1B740F493BAD8616A9DB_2,C122739764B1EC75B64E1B740F493BAD8616A9DB,"-d $' - -1 -(etag) - - -2 -(etag) - - - - -1. Add your file to the project as an asset. From the Assets page of your project, click the Upload asset to project icon. Then, from the Files pane, click the action menu and select Add as data set. - - - -" -C122739764B1EC75B64E1B740F493BAD8616A9DB_3,C122739764B1EC75B64E1B740F493BAD8616A9DB," Next steps - - - -* [Refining the data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) -* [Analyzing the data and building models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html) - - - -" -C122739764B1EC75B64E1B740F493BAD8616A9DB_4,C122739764B1EC75B64E1B740F493BAD8616A9DB," Learn more - - - -* [Storing very large objects in Cloud Object Storage](https://cloud.ibm.com/docs/services/cloud-object-storage/basics?topic=cloud-object-storage-store-very-large-objectsstore-very-large-objects) -* [Using curl to store very large objects](https://cloud.ibm.com/docs/services/cloud-object-storage/cli?topic=cloud-object-storage-using-curl-using-curl-) - - - -Parent topic:[Adding connections to projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) -" -3E791F66AD3D5FD3DA45D85F27D6A1A7621A4CD3_0,3E791F66AD3D5FD3DA45D85F27D6A1A7621A4CD3," Switching the platform for a project - -You can switch the platform for some projects between the Cloud Pak for Data as a Service and the watsonx platform. When you switch the platform for a project, you can use the tools that are specific to that platform. - -For example, you might switch an existing Cloud Pak for Data as a Service project to watsonx so that you can use the Prompt Lab tool and create prompt and prompt session assets. See [Comparison between watsonx and Cloud Pak for Data as a Service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html). - -Important:Foundation model inferencing with the Prompt Lab is available in the Dallas and Frankfurt regions. Your Watson Studio and Watson Machine Learning service instances are shared between watsonx and Cloud Pak for Data as a Service. If your Watson Studio and Watson Machine Learning service instances are provisioned in another region, you can't use foundation model inferencing or the Prompt Lab. - - - -* [Requirements](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html?context=cdpaas&locale=enrequirements) -* [Restrictions](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html?context=cdpaas&locale=enrestrictions) -* [What happens when you switch a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html?context=cdpaas&locale=enconsequences) -* [Switch the platform for a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html?context=cdpaas&locale=enmove-one) -" -3E791F66AD3D5FD3DA45D85F27D6A1A7621A4CD3_1,3E791F66AD3D5FD3DA45D85F27D6A1A7621A4CD3,"* [Switching multiple projects to watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html?context=cdpaas&locale=enmove-many) - - - -" -3E791F66AD3D5FD3DA45D85F27D6A1A7621A4CD3_2,3E791F66AD3D5FD3DA45D85F27D6A1A7621A4CD3," Requirements - -You can switch a project from one platform to the other if you have the required accounts and permissions. - -Required accounts : You must be signed up for both Cloud Pak for Data as a Service and watsonx. - -Required permissions : You must have the Admin role in the project that you want to switch. - -Required services : The current account that you are working in must have both of these services provisioned: : - Watson Studio : - Watson Machine Learning - -Project settings : The project must have the Restrict who can be a collaborator setting enabled. On Cloud Pak for Data as a Service, you can enable this setting during project creation. On watsonx, this setting is automatic. - -" -3E791F66AD3D5FD3DA45D85F27D6A1A7621A4CD3_3,3E791F66AD3D5FD3DA45D85F27D6A1A7621A4CD3," Restrictions - -To switch a project from Cloud Pak for Data as a Service to watsonx, all the assets in the project must have asset types that are supported by both platforms. - -Projects that contain any of the following asset types, but no other types of assets, are eligible to switch from Cloud Pak for Data as a Service to watsonx: - - - -* AutoAI experiment -* COBOL copybook -* Connected data asset -* Connection -* Data asset from a file -* Data Refinery flow -* Decision Optimization experiment -* Federated Learning experiment -* Folder asset -* Jupyter notebook -* Model -* Python function -* Script -* SPSS Modeler flow -* Visualization - - - -You can’t switch a project that contains assets that are specific to Cloud Pak for Data as a Service. If you add any assets that you created with services other than Watson Studio and Watson Machine Learning to a project, you can't switch that project to watsonx. Although Pipelines assets are supported in both Cloud Pak for Data as a Service and watsonx projects, you can't switch a project that contains pipeline assets because pipelines can reference unsupported assets. - -You can switch a project that contains assets from watsonx to Cloud Pak for Data as a Service. However, assets that are only supported in watsonx are not available on Cloud Pak for Data as a Service. These assets include: - - - -* Prompt Lab assets -* Synthetic data flows - - - -For more information about asset types, see [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html). - -" -3E791F66AD3D5FD3DA45D85F27D6A1A7621A4CD3_4,3E791F66AD3D5FD3DA45D85F27D6A1A7621A4CD3," What happens when you switch the platform for a project - -Switching a project between platforms has the following effects: - -Collaborators : Collaborators in the project receive notifications of the switch on the original platform. If any collaborators do not have accounts for the destination platform, those collaborators can no longer access the project. - -Jobs : Scheduled jobs are switched. Any jobs that are running at the time of the switch continue until completion on the original platform. Any jobs that are scheduled for times after the switch are run on the destination platform. Job history is not retained. - -Environments : Custom environment templates are retained. - -Project history : Recent activity and asset activities are not retained. - -Resource usage : Resource usage is cumulative because you continue to use the same service instances. - -Storage : The project's IBM Cloud Object Storage bucket remains the same. - -" -3E791F66AD3D5FD3DA45D85F27D6A1A7621A4CD3_5,3E791F66AD3D5FD3DA45D85F27D6A1A7621A4CD3," Switch the platform for a project - -You can switch the platform for a project from within the project on the original platform. You can switch between either Cloud Pak for Data as a Service and watsonx. - -To switch the platform for a project: - - - -1. On the original platform, go to the project's Manage tab, select the General page, and in the Controls section, click Switch platform. If you don't see a Switch platform button or the button is not active, you can't switch the project. -2. Select the destination platform and click Switch platform. - - - -" -3E791F66AD3D5FD3DA45D85F27D6A1A7621A4CD3_6,3E791F66AD3D5FD3DA45D85F27D6A1A7621A4CD3," Switching multiple projects to watsonx - -You can switch one or more eligible projects to watsonx from Cloud Pak for Data as a Service from the watsonx home page. - - - -1. On the watsonx home page, click the Switch projects icon (![Switch projects icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/move-projects-icon.svg)). -2. Select the projects that you want to switch. Only the projects that meet the requirements are listed. -3. Optional. You can view the projects that contain unsupported asset types and the projects for which you don't have the Admin role. -4. Click the Switch projects icon. - - - -" -3E791F66AD3D5FD3DA45D85F27D6A1A7621A4CD3_7,3E791F66AD3D5FD3DA45D85F27D6A1A7621A4CD3," Learn more - - - -* [Comparison between watsonx and Cloud Pak for Data as a Service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html) -* [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html) -* [Switching the platform for a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/switch-platform-space.html) - - - -Parent topic:[Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html) -" -6922B4D2CB89EB1EF4AA112AF8B7922327062B95_0,6922B4D2CB89EB1EF4AA112AF8B7922327062B95," Adding task credentials - -A task credential is a form of user authentication that is required by some services to perform operations in projects and spaces, for example to run certain tasks in a service or to enable the execution of long operations such as scheduled jobs without interruption. - -In IBM watsonx, IBM Cloud API keys are used as task credentials. You can either provide an existing IBM Cloud API key, or you can generate a new key. Only one task credential can be stored per user, per IBM Cloud account, and is stored securely in a vault. - -You can generate and rotate API keys in Profile and settings > User API key. - -Any user with an IBM Cloud account can create an API key. The API key can be seen as a type of user name and password, enabling access to resources in your IBM Cloud account and should never be shared. - -If your service requires a task credential to perform an operation, you are prompted to provide it in the form of an existing or newly generated API key. - -Note that service administrators are responsible for defining a strategy to revoke task credentials when these are no longer required. - -" -6922B4D2CB89EB1EF4AA112AF8B7922327062B95_1,6922B4D2CB89EB1EF4AA112AF8B7922327062B95," Learn more - - - -* [Managing the user API key](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-apikeys.html) -* [Understanding API keys](https://cloud.ibm.com/docs/account?topic=account-manapikey&interface=ui) - - - -Parent topic:[Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html) -" -92B00BEE2E48F01962BBBBAC49CF87587710F35F_0,92B00BEE2E48F01962BBBBAC49CF87587710F35F," Workload identity federation examples - -Workload identity federation for the Google BigQuery connection is supported by any identity provider that supports OpenID Connect (OIDC) or SAML 2.0. - -These examples are for [AWS with Amazon Cognito](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/wif-examples.html?context=cdpaas&locale=enaws) and for [Microsoft Azure](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/wif-examples.html?context=cdpaas&locale=enazure). - -" -92B00BEE2E48F01962BBBBAC49CF87587710F35F_1,92B00BEE2E48F01962BBBBAC49CF87587710F35F," AWS - -" -92B00BEE2E48F01962BBBBAC49CF87587710F35F_2,92B00BEE2E48F01962BBBBAC49CF87587710F35F," Configure workload identity federation in Amazon Cognito - - - -1. Create an OIDC identity provider (IdP) with Cognito by following the instructions in the Amazon documentation: - - - -* [Step 1. Create a user pool](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pool-as-user-directory.html) -* [Step 2. Add an app client and set up the hosted UI](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-configuring-app-integration.html) - - - -For more information, see [Getting started with Amazon Cognito](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-getting-started.html). -2. Create a group and user in the IdP with the AWS console. Or you can use AWS CLI: - -CLIENT_ID=YourClientId -ISSUER_URL=https://cognito-idp.YourRegion.amazonaws.com/YourPoolId -POOL_ID=YourPoolId -USERNAME=YourUsername -PASSWORD=YourPassword -GROUPNAME=YourGroupName - -aws cognito-idp admin-create-user --user-pool-id $POOL_ID --username $USERNAME --temporary-password Temp-Pass1 -aws cognito-idp admin-set-user-password --user-pool-id $POOL_ID --username $USERNAME --password $PASSWORD --permanent -aws cognito-idp create-group --group-name $GROUPNAME --user-pool-id $POOL_ID -aws cognito-idp admin-add-user-to-group --user-pool-id $POOL_ID --username $USERNAME --group-name $GROUPNAME -3. From the AWS console, click View Hosted UI and log in to the IDP UI in a browser to ensure that any new password challenge is resolved. -4. Get an IdToken with the AWS CLI: - -" -92B00BEE2E48F01962BBBBAC49CF87587710F35F_3,92B00BEE2E48F01962BBBBAC49CF87587710F35F,"aws cognito-idp admin-initiate-auth --auth-flow ADMIN_USER_PASSWORD_AUTH --client-id $CLIENT_ID --auth-parameters USERNAME=$USERNAME,PASSWORD=$PASSWORD --user-pool-id $POOL_ID - -For more information on the Amazon Cognito User Pools authentication flow, see [AdminInitiateAuth](https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_AdminInitiateAuth.html). - - - -" -92B00BEE2E48F01962BBBBAC49CF87587710F35F_4,92B00BEE2E48F01962BBBBAC49CF87587710F35F," Configure Google Cloud for Amazon Cognito - -When you create the provider in Google Cloud, use these settings: - - - -* Set Issuer (URL) to https://cognito-idp.YourRegion.amazonaws.com/YourPoolId. -* Set Allowed Audience to your client ID. -* Under Attribute Mapping, map google.subject to assertion.sub. - - - -" -92B00BEE2E48F01962BBBBAC49CF87587710F35F_5,92B00BEE2E48F01962BBBBAC49CF87587710F35F," Create the Google BigQuery connection with Amazon Cognito workload identity federation - - - -1. Choose the Workload Identity Federation with access token authentication method. -2. For the Security Token Service audience field, use this format: - -//iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/providers/PROVIDER_ID -3. For the Service account e-mail, enter the email address of the Google service account to be impersonated. For more information, see [Create a service account for the external workload](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-cloudscreate_a_service_account_for_the_external_workload). -4. (Optional) Specify a value for the Service account token lifetime in seconds. The default lifetime of a service account access token is one hour. For more information, see [URL-sourced credentials](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providersurl-sourced-credentials). -5. Set Token format to Text -6. Set Token type to ID token - - - -" -92B00BEE2E48F01962BBBBAC49CF87587710F35F_6,92B00BEE2E48F01962BBBBAC49CF87587710F35F," Azure - -" -92B00BEE2E48F01962BBBBAC49CF87587710F35F_7,92B00BEE2E48F01962BBBBAC49CF87587710F35F," Configure workload identity federation in Azure - - - -1. [Create an Azure AD application and service principal](https://learn.microsoft.com/en-au/azure/active-directory/develop/howto-create-service-principal-portalregister-an-application-with-azure-ad-and-create-a-service-principal). -2. Set an Application ID URI for the application. You can use the default Application ID URI (api://APPID) or specify a custom URI. - -You can skip the instructions on creating a managed identity. -3. Follow the instructions to [create a new application secret](https://learn.microsoft.com/en-au/azure/active-directory/develop/howto-create-service-principal-portaloption-2-create-a-new-application-secret) to get an access token with the REST API. - -For more information, see [Configure workload identity federation with Azure](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-cloudsazure). - - - -" -92B00BEE2E48F01962BBBBAC49CF87587710F35F_8,92B00BEE2E48F01962BBBBAC49CF87587710F35F," Configure Google Cloud for Azure - - - -1. Follow the instructions: [Configure workload identity federation](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-cloudsconfigure). -2. Follow the instructions: [Create the workload identity pool and provider](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-cloudscreate_the_workload_identity_pool_and_provider). When you configure the provider, use these settings: - - - -* Set Issuer (URL) to https://sts.windows.net/TENANTID/, where TENANTID is the tenant ID that you received when you set up Azure Active Directory. -* Set the Allowed audience to the client ID that you received when you set up the app registration. Or specify another Application ID URI that you used when you set up the application identity in Azure. -* Under Attribute Mapping, map google.subject to assertion.sub. - - - - - -" -92B00BEE2E48F01962BBBBAC49CF87587710F35F_9,92B00BEE2E48F01962BBBBAC49CF87587710F35F," Create the Google BigQuery connection with Azure workload identity federation - - - -1. Choose one of these authentication methods: - - - -* Workload Identity Federation with access token -* Workload Identity Federation with token URL - - - -2. For the Security Token Service audience field, use the format that is described in [Authenticate a workload that uses the REST API](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-cloudsazure_7). For example: - -//iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/providers/PROVIDER_ID -3. For the Service account e-mail, enter the email address of the Google service account to be impersonated. For more information, see [Create a service account for the external workload](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-cloudscreate_a_service_account_for_the_external_workload). -4. (Optional) Specify a value for the Service account token lifetime in seconds. The default lifetime of a service account access token is one hour. For more information, see [URL-sourced credentials](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providersurl-sourced-credentials). -5. If you specified Workload Identity Federation with token URL, use these values: - - - -* Token URL: https://login.microsoftonline.com/TENANT_ID/oauth2/v2.0/token. This URL will fetch a token from Azure. -* HTTP method: POST -* HTTP headers: ""Content-Type""=""application/x-www-form-urlencoded;charset=UTF-8"",""Accept""=""application/json"" -* Request body: grant_type=client_credentials&client_id=CLIENT_ID&client_secret=CLIENT_SECRET&scope=APPLICATION_ID_URI/.default - - - -" -92B00BEE2E48F01962BBBBAC49CF87587710F35F_10,92B00BEE2E48F01962BBBBAC49CF87587710F35F,"6. For Token type, select ID token for an identity provider that complies with the OpenID Connect (OIDC) specification. For information, see [Token types](https://cloud.google.com/docs/authentication/token-types). -7. The Token format option depends on that authentication selection: - - - -* Workload Identity Federation with access token: Select Text if you supplied the raw token value in the Access token field. -* Workload Identity Federation with token URL: For a response from the token URL in JSON format with the access token that is returned in a field named access_token, use these settings: - - - -* Token format: JSON -* Token field name: access_token - - - - - - - -" -92B00BEE2E48F01962BBBBAC49CF87587710F35F_11,92B00BEE2E48F01962BBBBAC49CF87587710F35F," Learn more - - - -* [Workload identity federation (Google Cloud)](https://cloud.google.com/iam/docs/workload-identity-federation) -* [Configure workload identity federation on the identity provider (Google Cloud)](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-clouds) -* [Generate a credentials configuration file (Google Cloud)](https://github.com/googleapis/google-auth-library-javaworkforce-identity-federation) - - - -Parent topic:[Google BigQuery connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-bigquery.html) -" -777F72F32FD20E96C4A5F0CCA461FE9A79334E96_0,777F72F32FD20E96C4A5F0CCA461FE9A79334E96," Evaluating AI models with Watson OpenScale - -IBM Watson OpenScale tracks and measures outcomes from your AI models, and helps ensure that they remain fair, explainable, and compliant no matter where your models were built or are running. Watson OpenScale also detects and helps correct the drift in accuracy when an AI model is in production. - -Required service : Watson Machine Learning - -Training data format : Relational: Tables in relational data sources : Tabular: Excel files (.xls or .xlsx), CSV files : Textual: In the supported relational tables or files - -Connected data : Cloud Object Storage (infrastructure) : Db2 - -Data size : Any - -Enterprises use model evaluation as part of an AI governance strategy to make sure that models in development and production meet established compliance standards. This approach ensures that AI models are free from bias, can be easily explained and understood by business users, and are auditable in business transactions. You can evaluate models regardless of the tools and frameworks that you use to build and run models. - -Watch this short video to learn more about Watson OpenScale: - -This video provides a visual method to learn the concepts and tasks in this documentation. - -" -777F72F32FD20E96C4A5F0CCA461FE9A79334E96_1,777F72F32FD20E96C4A5F0CCA461FE9A79334E96," Trustworthy AI in action - -To learn more about model evaluation in action, see [How AI picks the highlights from Wimbledon fairly and fast](https://www.ibm.com/blog/how-ai-picks-the-highlights-from-wimbledon-fairly-and-fast/). - -" -777F72F32FD20E96C4A5F0CCA461FE9A79334E96_2,777F72F32FD20E96C4A5F0CCA461FE9A79334E96," Components of Watson OpenScale - -Watson OpenScale has four main areas: - - - -* Insights: The Insights page displays the models that you are monitoring and provides status on the results of model evaluations. -* Explain a transaction: The Explanations page describes how the model determined a prediction. You can understand and be confident in the model by viewing some of the most important factors that led to its predictions. -* Configuration: The Configuration page can be used to select a database, set up a machine learning provider, and optionally add integrated services. -* Support: The Support page provides you with resources to get the help you need with Watson OpenScale. Access product documentation or connect with IBM Community on Stack Overflow. To create a service ticket with the IBM Support team, click Manage tickets. - - - -" -777F72F32FD20E96C4A5F0CCA461FE9A79334E96_3,777F72F32FD20E96C4A5F0CCA461FE9A79334E96," Evaluations - -Evaluations validate your deployments against specified metrics. Configure alerts that indicate when a threshold is crossed for a metric. Watson OpenScale evaluates your deployments based on three default monitors: - - - -* Quality describes the model’s ability to provide correct outcomes based on labeled test data called Feedback data. -* Fairness describes how evenly the model delivers favorable outcomes between groups. The Fairness monitor looks for biased outcomes in your model. -* Drift warns you of a drop in accuracy or data consistency. - - - -Note:You can also create Custom evaluations for your deployment. - -" -777F72F32FD20E96C4A5F0CCA461FE9A79334E96_4,777F72F32FD20E96C4A5F0CCA461FE9A79334E96," Next steps - -" -777F72F32FD20E96C4A5F0CCA461FE9A79334E96_5,777F72F32FD20E96C4A5F0CCA461FE9A79334E96," Learn more - -Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html) -" -AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC_0,AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC," Configuring drift v2 evaluations in watsonx.governance - -You can configure drift v2 evaluations with watsonx.governance to measure changes in your data over time to ensure consistent outcomes for your model. Use drift v2 evaluations to identify changes in your model output, the accuracy of your predictions, and the distribution of your input data. - -The following sections describe the steps that you must complete to configure drift v2 evaluations with watsonx.governance: - -" -AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC_1,AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC," Set sample size - -watsonx.governance uses sample sizes to understand how to process the number of transactions that are evaluated during evaluations. You must set a minimum sample size to indicate the lowest number of transactions that you want watsonx.governance to evaluate. You can also set a maximum sample size to indicate the maximum number of transactions that you want watsonx.governance to evaluate. - -" -AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC_2,AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC," Configure baseline data - -watsonx.governance uses payload records to establish the baseline for drift v2 calculations. You must configure the number of records that you want to calculate as your baseline data. - -" -AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC_3,AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC," Set drift thresholds - -You must set threshold values for each metric to enable watsonx.governance to understand how to identify issues with your evaluation results. The values that you set create alerts on the evaluation summary page that appear when metric scores violate your thresholds. You must set the values between the range of 0 to 1. The metric scores must be lower than the threshold values to avoid violations. - -" -AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC_4,AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC," Supported drift v2 metrics - -When you enable drift v2 evaluations, you can view a summary of evaluation results with metrics for the type of model that you're evaluating. - -The following drift v2 metrics are supported by watsonx.governance: - - - -* Output drift - -watsonx.governance calculates output drift by measuring the change in the model confidence distribution. - How it works: -watsonx.governance measures how much your model output changes from the time that you train the model. -To evaluate prompt templates, watsonx.governance calculates output drift by measuring the change in distribution of prediction probabilities. The prediction probability is calculated by aggregating the log probabilities of the tokens from the model output. -When you upload payload data with CSV files, you must include prediction_probability values or output drift cannot be calculated. -For regression models, watsonx.governance calculates output drift by measuring the change in distribution of predictions on the training and payload data. -For classification models, watsonx.governance calculates output drift for each class probability by measuring the change in distribution for class probabilities on the training and payload data. -For multi-classification models, watsonx.governance also aggregates output drift for each class probability by measuring a weighted average. - Do the math: -watsonx.governance uses the following formulas to calculate output drift: - [Total variation distance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=entotal-variation-distance) - [Overlap coefficient](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=enoverlap-coefficient) - Applies to prompt template evaluations: Yes - Task types: - Text summarization - Text classification - Content generation - Entity extraction - Question answering - - - - - -* Model quality drift - -" -AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC_5,AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC,"watsonx.governance calculates model quality drift by comparing the estimated runtime accuracy to the training accuracy to measure the drop in accuracy. - How it works: -watsonx.governance builds its own drift detection model that processes your payload data when you configure drift v2 evaluations to predict whether your model generates accurate predictions without the ground truth. The drift detection model uses the input features and class probabilities from your model to create its own input features. - Do the math: -watsonx.governance uses the following formula to calculate model quality drift: ![model quality score](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-model-quality-score.svg) watsonx.governance calculates the accuracy of your model as the base_accuracy by measuring the fraction of correctly predicted transactions in your training data. During evaluations, your transactions are scored against the drift detection model to measure the amount of transactions that are likely predicted correctly by your model. These transactions are compared to the total number of transactions that watsonx.governance processes to calculate the predicted_accuracy. If the predicted_accuracy is less than the base_accuracy, watsonx.governance generates a model quality drift score. - Applies to prompt template evaluations: No - - - - - -* Feature drift - -watsonx.governance calculates feature drift by measuring the change in value distribution for important features. - How it works: -" -AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC_6,AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC,"watsonx.governance calculates drift for categorical and numeric features by measuring the probability distribution of continuous and discrete values. To identify discrete values for numeric features, watsonx.governance uses a binary logarithm to compare the number of distinct values of each feature to the total number of values of each feature. watsonx.governance uses the following binary logarithm formula to identify discrete numeric features: ![Binary logarithm formula is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-feature-drift-equation.svg) If the distinct_values_count is less than the binary logarithm of the total_count, the feature is identified as discrete. - Do the math: -watsonx.governance uses the following formulas to calculate feature drift: -- [Jensen Shannon distance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=enjensen-shannon-distance) - [Total variation distance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=entotal-variation-distance) - [Overlap coefficient](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=enoverlap-coefficient) - Applies to prompt template evaluations: No - - - - - -* Prediction drift - -Prediction drift measures the change in distribution of the LLM predicted classes. - Do the math: -" -AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC_7,AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC,"watsonx.governance uses the [Jensen Shannon distance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=enjensen-shannon-distance) formula to calculate prediction drift. - Applies to prompt template evaluations: Yes - Task types: Text classification - - - - - -* Input metadata drift - -Input metadata drift measures the change in distribution of the LLM input text metadata. - How it works: -watsonx.governance calculates the following metadata with the LLM input text: -Character count: Total number of characters in the input text -Word count: Total number of words in the input text -Sentence count: Total number of sentences in the input text -Average word length: Average length of words in the input text -Average sentence length: Average length of the sentences in the input text -" -AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC_8,AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC,"watsonx.governance calculates input metadata drift by measuring the change in distribution of the metadata columns. The input token count column, if present in the payload, is also used to compute the input metadata drift. You can also choose to specify any meta fields while adding records to the payload table. These meta fields are also used to compute the input metadata drift. To identify discrete numeric input metadata columns, watsonx.governance uses the following binary logarithm formula: ![Binary logarithm formula is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-feature-drift-equation.svg) If the distinct_values_count is less than the binary logarithm of the total_count, the feature is identified as discrete. For discrete input metadata columns, watsonx.governance uses the [Jensen Shannon distance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=enjensen-shannon-distance) formula to calculate input metadata drift. For continuous input metadata columns, watsonx.governance uses the [total variation distance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=entotal-variation-distance) and [overlap coefficient](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=enoverlap-coefficient) formulas to calculate input metadata drift. - Applies to prompt template evaluations: Yes - Task types: - Text summarization - Text classification - Content generation - Entity extraction - Question answering - - - - - -* Output metadata drift - -" -AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC_9,AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC,"Output metadata drift measures the change in distribution of the LLM output text metadata. - How it works: -watsonx.governance calculates the following metadata with the LLM output text: -Character count: Total number of characters in the output text -Word count: Total number of words in the output text -Sentence count: Total number of sentences in the output text -Average word length: Average length of words in the output text -Average sentence length: Average length of the sentences in the output text -" -AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC_10,AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC,"watsonx.governance calculates output metadata drift by measuring the change in distribution of the metadata columns. The output token count column, if present in the payload, is also used to compute the output metadata drift. You can also choose to specify any meta fields while adding records to the payload table. These meta fields are also used to compute the output metadata drift. To identify discrete numeric output metadata columns, watsonx.governance uses the following binary logarithm formula: ![Binary logarithm formula is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-feature-drift-equation.svg) If the distinct_values_count is less than the binary logarithm of the total_count, the feature is identified as discrete. For discrete output metadata columns, watsonx.governance uses the [Jensen Shannon distance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=enjensen-shannon-distance) formula to calculate input metadata drift. For continuous output metadata columns, watsonx.governance uses the [total variation distance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=entotal-variation-distance) and [overlap coefficient](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=enoverlap-coefficient) formulas to calculate output metadata drift: - Applies to prompt template evaluations: Yes - Task types: - Text summarization - Text classification - Content generation - Question answering - - - -watsonx.governance uses the following formulas to calculate drift v2 evaluation metrics: - -" -AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC_11,AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC," Total variation distance - -Total variation distance measures the maximum difference between the probabilities that two probability distributions, baseline (B) and production (P), assign to the same transaction as shown in the following formula: - -![Probability distribution formula is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-total-variation-distance-0.svg) - -If the two distributions are equal, the total variation distance between them becomes 0. - -watsonx.governance uses the following formula to calculate total variation distance: - -![Total variation distance formula is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-total-variation-distance.svg) - - - -* 푥 is a series of equidistant samples that span the domain of ![circumflex f is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-f-symbol.svg) that range from the combined miniumum of the baseline and production data to the combined maximum of the baseline and production data. -* ![d(x) symbol is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-d-x.svg) is the difference between two consecutive 푥 samples. -* ![explanation of formula](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-total-variation-formula-2.svg) is the value of the density function for production data at a 푥 sample. -* ![explanation of formula](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-total-variation-formula-3.svg) is the value of the density function for baseline data for at a 푥 sample. - - - -" -AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC_12,AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC,"The ![explanation of formula](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-total-variation-formula-4.svg) denominator represents the total area under the density function plots for production and baseline data. These summations are an approximation of the integrations over the domain space and both these terms should be 1 and total should be 2. - -" -AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC_13,AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC," Overlap coefficient - -watsonx.governance calculates the overlap coefficient by measuring the total area of the intersection between two probability distributions. To measure dissimilarity between distributions, the intersection or the overlap area is subtracted from 1 to calculate the amount of drift. watsonx.governance uses the following formula to calculate the overlap coefficient: - -![Overlap coefficient formula is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-total-variation-formula-5.svg) - - - -* 푥 is a series of equidistant samples that span the domain of ![circumflex f is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-f-symbol.svg) that range from the combined miniumum of the baseline and production data to the combined maximum of the baseline and production data. -* ![d(x) symbol is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-d-x.svg) is the difference between two consecutive 푥 samples. -* ![explanation of formula](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-total-variation-formula-2.svg) is the value of the density function for production data at a 푥 sample. -* ![explanation of formula](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-total-variation-formula-3.svg) is the value of the density function for baseline data for at a 푥 sample. - - - -" -AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC_14,AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC," Jensen Shannon distance - -Jensen Shannon Distance is the normalized form of Kullback-Liebler (KL) Divergence that measures how much one probability distribution differs from the second probabillity distribution. Jensen Shannon Distance is a symmetrical score and always has a finite value. - -watsonx.governance uses the following formula to calculate the Jensen Shannon distance for two probability distributions, baseline (B) and production (P): - -![Jensen Shannon distance formula is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-jensen-shannon-distance.svg) - -![KL Divergence is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-KL-divergence.svg) is the KL Divergence. - -Parent topic:[Configuring model evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitors-overview.html) -" -F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958_0,F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958," Evaluating prompt templates in deployment spaces - -You can evaluate prompt templates in deployment spaces to measure the performance of foundation model tasks and understand how your model generates responses. - -With watsonx.governance, you can evaluate prompt templates in deployment spaces to measure how effectively your foundation models generate responses for the following task types: - - - -* [Classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlclassification) -* [Summarization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsummarization) -* [Generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlgeneration) -* [Question answering](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlqa) -* [Entity extraction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlextraction) - - - -Prompt templates are saved prompt inputs for foundation models. You can evaluate prompt template deployments in pre-production and production spaces. - -" -F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958_1,F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958," Before you begin - -You must have access to a watsonx.governance deployment space to evaluate prompt templates. For more information, see [Setting up watsonx.governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-setup-wos.html). - -To run evaluations, you must log in and [switch](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.htmlaccount) to a watsonx account that has watsonx.governance and watsonx.ai instances that are installed and open a deployment space. You must be assigned the Admin or Editor roles for the account to open deployment spaces. - -In your project, you must also [create and save a prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.htmlcreating-and-running-a-prompt) and [promote a prompt template to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/prompt-template-deploy.html). You must specify at least one variable when you create prompt templates to enable evaluations. - -The following sections describe how to evaluate prompt templates in deployment spaces and review your evaluation results: - - - -* [Evaluating prompt templates in pre-production spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt-spaces.html?context=cdpaas&locale=enprompt-eval-pre-prod) -* [Evaluating prompt templates in production spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt-spaces.html?context=cdpaas&locale=enprompt-eval-prod) - - - -" -F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958_2,F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958," Evaluating prompt templates in pre-production spaces - -" -F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958_3,F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958," Activate evaluation - -To run prompt template evaluations, you can click Activate on the Evaluations tab when you open a deployment to open the Evaluate prompt template wizard. You can run evaluations only if you are assigned the Admin or Editor roles for your deployment space. - -![Run prompt template evaluation](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-activate-prompt-eval.png) - -" -F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958_4,F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958," Select dimensions - -The Evaluate prompt template wizard displays the dimensions that are available to evaluate for the task type that is associated with your prompt. You can expand the dimensions to view the list of metrics that are used to evaluate the dimensions that you select. - -![Select dimensions to evaluate](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-select-dimension-preprod-spaces.png) - -watsonx.governance automatically configures evaluations for each dimension with default settings. To [configure evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitors-overview.html) with different settings, you can select Advanced settings to set minimum sample sizes and threshold values for each metric as shown in the following example: - -![Configure evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-config-eval-settings.png) - -" -F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958_5,F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958," Select test data - -You must upload a CSV file that contains test data with reference columns and columns for each prompt variable. When the upload completes, you must also map [prompt variables](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.htmlcreating-prompt-variables) to the associated columns from your test data. - -![Select test data to upload](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-select-test-data-preprod-spaces.png) - -" -F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958_6,F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958," Review and evaluate - -You can review the selections for the prompt task type, the uploaded test data, and the type of evaluation that runs. You must select Evaluate to run the evaluation. - -![Review and evaluate prompt template evaluation settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-review-evaluate-preprod-spaces.png) - -" -F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958_7,F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958," Reviewing evaluation results - -When your evaluation finishes, you can review a summary of your evaluation results on the Evaluations tab in watsonx.governance to gain insights about your model performance. The summary provides an overview of metric scores and violations of default score thresholds for your prompt template evaluations. - -To analyze results, you can click the arrow ![navigation arrow](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-nav-arrow.png) next to your prompt template evaluation to view data visualizations of your results over time. You can also analyze results from the model health evaluation that is run by default during prompt template evaluations to understand how efficiently your model processes your data. - -The Actions menu also provides the following options to help you analyze your results: - - - -* Evaluate now: Run evaluation with a different test data set -* All evaluations: Display a history of your evaluations to understand how your results change over time. -* Configure monitors: Configure evaluation thresholds and sample sizes. -* View model information: View details about your model to understand how your deployment environment is set up. - - - -![Analyze prompt template evaluation results](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-review-results-preprod.png) - -If you [track your prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-track-prompt-temp.html), you can review evaluation results to gain insights about your model performance throughout the AI lifecycle. - -" -F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958_8,F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958," Evaluating prompt templates in production spaces - -" -F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958_9,F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958," Activate evaluation - -To run prompt template evaluations, you can click Activate on the Evaluations tab when you open a deployment to open the Evaluate prompt template wizard. You can run evaluations only if you are assigned the Admin or Editor roles for your deployment space. - -![Run prompt template evaluation](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-activate-prompt-eval.png) - -" -F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958_10,F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958," Select dimensions - -The Evaluate prompt template wizard displays the dimensions that are available to evaluate for the task type that is associated with your prompt. You can provide a label column name for the reference output that you specify in your feedback data. You can also expand the dimensions to view the list of metrics that are used to evaluate the dimensions that you select. - -![Select dimensions to evaluate](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-select-dimensions-pre-prod-spaces.png) - -watsonx.governance automatically configures evaluations for each dimension with default settings. To [configure evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitors-overview.html) with different settings, you can select Advanced settings to set minimum sample sizes and threshold values for each metric as shown in the following example: - -![Configure evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-config-eval-settings.png) - -" -F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958_11,F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958," Review and evaluate - -You can review the selections for the prompt task type and the type of evaluation that runs. You can also select View payload schema or View feedback schema to validate that your column names match the prompt variable names in the prompt template. You must select Activate to run the evaluation. - -![Review and evaluate selections](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-review-evaluate-prod-spaces.png) - -To generate evaluation results, select Evaluate now in the Actions menu to open the Import test data window when the evaluation summary page displays. - -![Select evaluate now](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-evaluate-now-prod-space.png) - -" -F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958_12,F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958," Import test data - -In the Import test data window, you can select Upload payload data or Upload feedback data to upload a CSV file that contains labeled columns that match the columns in your payload and feedback schemas. - -![Import test data](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-import-test-data-prod-space.png) - -When your upload completes successfully, you can select Evaluate now to run your evaluation. - -" -F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958_13,F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958," Reviewing evaluation results - -When your evaluation finishes, you can review a summary of your evaluation results on the Evaluations tab in watsonx.governance to gain insights about your model performance. The summary provides an overview of metric scores and violations of default score thresholds for your prompt template evaluations. - -To analyze results, you can click the arrow ![navigation arrow](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-nav-arrow.png) next to your prompt template evaluation to view data visualizations of your results over time. You can also analyze results from the model health evaluation that is run by default during prompt template evaluations to understand how efficiently your model processes your data. - -The Actions menu also provides the following options to help you analyze your results: - - - -* Evaluate now: Run evaluation with a different test data set -* Configure monitors: Configure evaluation thresholds and sample sizes. -* View model information: View details about your model to understand how your deployment environment is set up. - - - -![Analyze prompt template evaluation results](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-eval-results-prod-spaces.png) - -If you [track your prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-track-prompt-temp.html), you can review evaluation results to gain insights about your model performance throughout the AI lifecycle. -" -B8581C38346F1FE8900D18DB8FCEF8145F5965BC_0,B8581C38346F1FE8900D18DB8FCEF8145F5965BC," Evaluating prompt templates in projects - -You can evaluate prompt templates in projects to measure the performance of foundation model tasks and understand how your model generates responses. - -With watsonx.governance, you can evaluate prompt templates in projects to measure how effectively your foundation models generate responses for the following task types: - - - -* [Classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlclassification) -* [Summarization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsummarization) -* [Generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlgeneration) -* [Question answering](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlqa) -* [Entity extraction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlextraction) - - - -" -B8581C38346F1FE8900D18DB8FCEF8145F5965BC_1,B8581C38346F1FE8900D18DB8FCEF8145F5965BC," Before you begin - -You must have access to a watsonx.governance project to evaluate prompt templates. For more information, see [Setting up Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-setup-wos.html). - -To run evaluations, you must log in and [switch](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.htmlaccount) to a watsonx account that has watsonx.governance and watsonx.ai instances that are installed and open a project. You must be assigned the Admin or Editor roles for the account to open projects. - -In your project, you must use the watsonx.ai [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) to create and save a prompt template. You must specify variables when you create prompt templates to enable evaluations. The Try section in the Prompt Lab must contain at least one variable. - -Watch this video to see how to evaluate a prompt template in a project. - -This video provides a visual method to learn the concepts and tasks in this documentation. - -The following sections describe how to evaluate prompt templates in projects and review your evaluation results. - -" -B8581C38346F1FE8900D18DB8FCEF8145F5965BC_2,B8581C38346F1FE8900D18DB8FCEF8145F5965BC," Running evaluations - -To run prompt template evaluations, you can click Evaluate when you open a saved prompt template on the Assets tab in watsonx.governance to open the Evaluate prompt template wizard. You can run evaluations only if you are assigned the Admin or Editor roles for your project. - -![Run prompt template evaluation](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-run-eval-prompt.png) - -" -B8581C38346F1FE8900D18DB8FCEF8145F5965BC_3,B8581C38346F1FE8900D18DB8FCEF8145F5965BC," Select dimensions - -The Evaluate prompt template wizard displays the dimensions that are available to evaluate for the task type that is associated with your prompt. You can expand the dimensions to view the list of metrics that are used to evaluate the dimensions that you select. - -![Select dimensions to evaluate](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-select-dimension-preprod-spaces.png) - -watsonx.governance automatically configures evaluations for each dimension with default settings. To [configure evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitors-overview.html) with different settings, you can select Advanced settings to set minimum sample sizes and threshold values for each metric as shown in the following example: - -![Configure evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-config-eval-settings.png) - -" -B8581C38346F1FE8900D18DB8FCEF8145F5965BC_4,B8581C38346F1FE8900D18DB8FCEF8145F5965BC," Select test data - -You must upload a CSV file that contains test data with reference columns and columns for each prompt variable. When the upload completes, you must also map [prompt variables](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.htmlcreating-prompt-variables) to the associated columns from your test data. - -![Select test data to upload](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-select-test-data.png) - -" -B8581C38346F1FE8900D18DB8FCEF8145F5965BC_5,B8581C38346F1FE8900D18DB8FCEF8145F5965BC," Review and evaluate - -Before you run your prompt template evaluation, you can review the selections for the prompt task type, the uploaded test data, and the type of evaluation that runs. - -![Review and evaluate prompt template evaluation settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-review-prompt-eval-select.png) - -" -B8581C38346F1FE8900D18DB8FCEF8145F5965BC_6,B8581C38346F1FE8900D18DB8FCEF8145F5965BC," Reviewing evaluation results - -When your evaluation completes, you can review a summary of your evaluation results on the Evaluate tab in watsonx.governance to gain insights about your model performance. The summary provides an overview of metric scores and violations of default score thresholds for your prompt template evaluations. - -If you are assigned the Viewer role for your project, you can select Evaluate from the asset list on the Assets tab to view evaluation results. - -![Run prompt template evaluation from asset list](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-run-eval-asset.png) - -To analyze results, you can click the arrow ![navigation arrow](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-nav-arrow.png) next to your prompt template evaluation to view data visualizations of your results over time. You can also analyze results from the model health evaluation that is run by default during prompt template evaluations to understand how efficiently your model processes your data. - -The Actions menu also provides the following options to help you analyze your results: - - - -* Evaluate now: Run evaluation with a different test data set -* All evaluations: Display a history of your evaluations to understand how your results change over time. -* Configure monitors: Configure evaluation thresholds and sample sizes. -* View model information: View details about your model to understand how your deployment environment is set up. - - - -![Analyze prompt template evaluation results](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-analyze-prompt-eval-results.png) - -" -B8581C38346F1FE8900D18DB8FCEF8145F5965BC_7,B8581C38346F1FE8900D18DB8FCEF8145F5965BC,"If you [track prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-track-prompt-temp.html), you can review evaluation results to gain insights about your model performance throughout the AI lifecycle. - -Parent topic:[Evaluating AI models with Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/getting-started.html). -" -259E5A974F6170CBFDF7B0014CC1A0A0111423DE,259E5A974F6170CBFDF7B0014CC1A0A0111423DE," Feedback logging in watsonx.governance - -You can enable feedback logging in watsonx.governance to configure model evaluations. - -To [manage feedback data](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-feedback-data.html) for configuring quality and generative AI quality evaluations, watsonx.governance must log your feedback data in the feedback logging table. - -Generative AI quality evaluations use feedback data to generate results for the following task types when you evaluate prompt templates: - - - -* Text summarization -* Content generation -* Question answering -* Entity extraction - - - -Quality evaluations use feedback data to generate results for text classification tasks. -" -DAC8A5E350D74E41C1738F4E2A02258FECF9D20D_0,DAC8A5E350D74E41C1738F4E2A02258FECF9D20D," Managing data for model evaluations in watsonx.governance - -To enable model evaluations in watsonx.governance, you must prepare your data for logging to generate insights. - -You must provide your model data to watsonx.governance in a format that it supports to enable model evaluations. watsonx.governance processes your model transactions and logs the data in the watsonx.governance data mart. The data mart is the logging database that stores the data that is used for model evaluations. The following sections describe the different types of data that watsonx.governance logs for model evaluations: - -" -DAC8A5E350D74E41C1738F4E2A02258FECF9D20D_1,DAC8A5E350D74E41C1738F4E2A02258FECF9D20D," Payload data - -Payload data contains the input and output transactions for your deployment. To configure explainability and fairness and drift evaluations, watsonx.governance must receive payload data from your model that it stores in a payload logging table. The payload logging table contains the feature and prediction columns that exist in your training data and a prediction probability column that contains the model's confidence in the prediction that it provides. The table also includes timestamp and ID columns to identify each scoring request that you send to watsonx.governance as shown in the following example: - -![Python SDK sample output of payload logging table](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-ntbok.png) - -You must send scoring requests to provide watsonx.governance with a log of your model transactions. For more information, see [Managing payload data](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-payload-logging.html). - -" -DAC8A5E350D74E41C1738F4E2A02258FECF9D20D_2,DAC8A5E350D74E41C1738F4E2A02258FECF9D20D," Feedback data - -Feedback data is labeled data that matches the structure of training data and includes known model outcomes that are compared to your model predictions to measure the accuracy of your model. watsonx.governance uses feedback data to enable you to configure quality evaluations. You must upload feedback data regularly to watsonx.governance to continuously measure the accuracy of your model predictions. For more information, see [Managing feedback data](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-feedback-data.html). - -" -DAC8A5E350D74E41C1738F4E2A02258FECF9D20D_3,DAC8A5E350D74E41C1738F4E2A02258FECF9D20D," Learn more - -[Sending model transactions](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-send-model-transactions.html) -" -FB7F7B9A220C66F7E3407CA9553D974CD4A14402_0,FB7F7B9A220C66F7E3407CA9553D974CD4A14402," Managing feedback data for watsonx.governance - -You must provide feedback data to watsonx.governance to enable you to configure quality and generative AI quality evaluations and determine any changes in your model predictions. - -When you provide feedback data to watsonx.governance, you can regularly evaluate the accuracy of your model predictions. - -" -FB7F7B9A220C66F7E3407CA9553D974CD4A14402_1,FB7F7B9A220C66F7E3407CA9553D974CD4A14402," Feedback logging - -watsonx.governance stores the feedback data that you provide as records in a feedback logging table. - -The feedback logging table contains the following columns when you evaluate prompt templates: - - - -* Required columns: - - - -* Prompt variable(s): Contains the values for the variables that are created for prompt templates -* reference_output: Contains the ground truth value - - - -* Optional columns: - - - -* _original_prediction: Contains the output that's generated by the foundation model - - - - - -" -FB7F7B9A220C66F7E3407CA9553D974CD4A14402_2,FB7F7B9A220C66F7E3407CA9553D974CD4A14402," Uploading feedback data - -You can use a feedback logging endpoint to upload data for quality evaluations. You can also upload feedback data with a CSV file. For more information, see [Sending model transactions](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-send-model-transactions.html). - -" -FB7F7B9A220C66F7E3407CA9553D974CD4A14402_3,FB7F7B9A220C66F7E3407CA9553D974CD4A14402," Learn more - -[Sending model transactions](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-send-model-transactions.html) - -Parent topic:[Managing data for model evaluations in Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-data.html) -" -D2F4F71189D7F5C92DDC2CCB38F2BCE1EFD4BC65_0,D2F4F71189D7F5C92DDC2CCB38F2BCE1EFD4BC65," Managing payload data for watsonx.governance - -You must provide payload data to configure drift v2 and generative AI quality evaluations in watsonx.governance. - -Payload data contains all of your model transactions. You can log payload data with watsonx.governance to enable evaluations. To log payload data, watsonx.governance must receive scoring requests. - -" -D2F4F71189D7F5C92DDC2CCB38F2BCE1EFD4BC65_1,D2F4F71189D7F5C92DDC2CCB38F2BCE1EFD4BC65," Logging payload data - -When you send a scoring request, watsonx.governance processes your model transactions to enable model evaluations. watsonx.governance scores the data and stores it as records in a payload logging table within the watsonx.governance data mart. - -The payload logging table contains the following columns when you evaluate prompt templates: - - - -* Required columns: - - - -* Prompt variable(s): Contains the values for the variables that are created for prompt templates -* generated_text: Contains the output that's generated by the foundation model - - - -* Optional columns: - - - -* input_token_count: Contains the number of tokens in the input text -* generated_token_count: Contains the number of tokens in the generated text -* prediction_probability: Contains the aggregate value of log probabilities of generated tokens that represent the winning output - - - - - -The table can also include timestamp and ID columns to store your data as scoring records. - -You can view your payload logging table by accessing the database that you specified for the data mart or by using the [Watson OpenScale Python SDK](https://client-docs.aiopenscale.cloud.ibm.com/html/index.html) as shown in the following example: - -![Python SDK sample output of payload logging table](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-ntbok.png) - -" -D2F4F71189D7F5C92DDC2CCB38F2BCE1EFD4BC65_2,D2F4F71189D7F5C92DDC2CCB38F2BCE1EFD4BC65," Sending payload data - -If you are using IBM Watson Machine Learning as your machine learning provider, watsonx.governance automatically logs payload data when your model is scored. - -After you configure evaluations, you can also use a payload logging endpoint to send scoring requests to run on-demand evaluations. For production models, you can also upload payload data with a CSV file to send scoring requests. For more information see, [Sending model transactions](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-send-model-transactions.html). - -Parent topic:[Managing data for model evaluations in Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-data.html) -" -2339DEEF952ECF06246F2A5DAED6925E00F52D64_0,2339DEEF952ECF06246F2A5DAED6925E00F52D64," watsonx.governance model health monitor evaluation metrics - -watsonx.governance enables model health monitor evaluations by default to help you understand your model behavior and performance. You can use model health metrics to determine how efficiently your model deployment processes your transactions. - -" -2339DEEF952ECF06246F2A5DAED6925E00F52D64_1,2339DEEF952ECF06246F2A5DAED6925E00F52D64," Supported model health metrics - -The following metric categories for model health evaluations are supported by watsonx.governance. Each category contains metrics that provide details about your model performance: - - - -* Scoring requests - -watsonx.governance calculates the number of scoring requests that your model deployment receives during model health evaluations. This metric category is supported for traditional machine learning models and foundation models. - - - - - -* Records - -watsonx.governance calculates the total, average, minimum, maximum, and median number of transaction records that are processed across scoring requests during model health evaluations. This metric category is supported for traditional machine learning models and foundation models. - - - - - -* Token count - -watsonx.governance calculates the number of tokens that are processed across scoring requests for your model deployment. This metric category is supported for foundation models only. watsonx.governance calculates the following metrics to measure token count during evaluations: - Input token count: Calculates the total, average, minimum, maximum, and median input token count across multiple scoring requests during evaluations - Output token count: Calculates the total, average, minimum, maximum, and median output token count across scoring requests during evaluations - - - - - -* Throughput and latency - -" -2339DEEF952ECF06246F2A5DAED6925E00F52D64_2,2339DEEF952ECF06246F2A5DAED6925E00F52D64,"watsonx.governance calculates latency by tracking the time that it takes to process scoring requests and transaction records per millisecond (ms). Throughput is calculated by tracking the number of scoring requests and transaction records that are processed per second. To calculate throughput and latency, watsonx.governance uses the response_time value from your scoring requests to track the time that your model deployment takes to process scoring requests. For Watson Machine Learning deployments, Watson OpenScale automatically detects the response_time value when you configure evaluations. For external and custom deployments, you must specify the response_time value when you send scoring requests to calculate throughput and latency as shown in the following example from the Watson OpenScale [Python SDK](https://client-docs.aiopenscale.cloud.ibm.com/html/index.html): python from ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord client.data_sets.store_records( data_set_id=payload_data_set_id, request_body=[ PayloadRecord( scoring_id=, request=openscale_input, response=openscale_output, response_time=, user_id=) ] ) watsonx.governance calculates the following metrics to measure thoughput and latency during evaluations: - API latency: Time taken (in ms) to process a scoring request by your model deployment. - API throughput: Number of scoring requests processed by your model deployment per second - Record latency: Time taken (in ms) to process a record by your model deployment - Record throughput: Number of records processed by your model deployment per second This metric category is supported for traditional machine learning models and foundation models. - - - - - -* Users - -" -2339DEEF952ECF06246F2A5DAED6925E00F52D64_3,2339DEEF952ECF06246F2A5DAED6925E00F52D64,"watsonx.governance calculates the number of users that send scoring requests to your model deployments. This metric category is supported for traditional machine learning models and foundation models. To calculate the number of users, watsonx.governance uses the user_id from scoring requests to identify the users that send the scoring requests that your model receives. For external and custom deployments, you must specify the user_id value when you send scoring requests to calculate the number of users as shown in the following example from the Watson OpenScale [Python SDK](https://client-docs.aiopenscale.cloud.ibm.com/html/index.html): python from ibm_watson_openscale.supporting_classes.payload_record import PayloadRecord client.data_sets.store_records( data_set_id=payload_data_set_id, request_body=[ PayloadRecord( scoring_id=, request=openscale_input, response=openscale_output, response_time=, user_id=). --> value to be supplied by user ] ) When you view a summary of the Users metric in watsonx.governance, you can use the real-time view to see the total number of users and the aggregated views to see the average number of users. - - - - - -* Payload size - -watsonx.governance calculates the total, average, minimum, maximum, and median payload size of the transaction records that your model deployment processes across scoring requests in kilobytes (KB). watsonx.governance does not support payload size metrics for image models. This metric category is supported for traditional machine learning models only. -" -CCCF5EC3E34E81E3E25FFE29317CDAC2ED1C936D_0,CCCF5EC3E34E81E3E25FFE29317CDAC2ED1C936D," Configuring quality evaluations in watsonx.governance - -watsonx.governance quality evaluations measure your foundation model's ability to provide correct outcomes - -When you [evaluate prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt.html), you can review a summary of quality evaluation results for the text classification task type. - -The summary displays scores and violations for metrics that are calculated with default settings. - -To configure quality evaluations with your own settings, you can set a minimum sample size and set threshold values for each metric. The minimum sample size indicates the minimum number of model transaction records that you want to evaluate and the threshold values create alerts when your metric scores violate your thresholds. The metric scores must be higher than the threshold values to avoid violations. Higher metric values indicate better scores. - -" -CCCF5EC3E34E81E3E25FFE29317CDAC2ED1C936D_1,CCCF5EC3E34E81E3E25FFE29317CDAC2ED1C936D," Supported quality metrics - -When you enable quality evaluations in watsonx.governance, you can generate metrics that help you determine how well your foundation model predicts outcomes. - -watsonx.governance supports the following quality metrics: - - - -* Accuracy - -- Description: The proportion of correct predictions - Default thresholds: Lower limit = 80% - Problem types: Multiclass classification - Chart values: Last value in the timeframe - Metrics details available: Confusion matrix - Understanding accuracy: Accuracy can mean different things depending on the type of algorithm: - Multi-class classification: Accuracy measures the number of times any class was predicted correctly, normalized by the number of data points. For more details, see [Multi-class classification](https://spark.apache.org/docs/2.1.0/mllib-evaluation-metrics.htmlmulticlass-classification){: external} in the Apache Spark documentation. - - - - - -* Weighted true positive rate - -- Description: Weighted mean of class TPR with weights equal to class probability - Default thresholds: Lower limit = 80% - Problem type: Multiclass classification - Chart values: Last value in the timeframe - Metrics details available: Confusion matrix - Do the math: The True positive rate is calculated by the following formula:number of true positives TPR = _________________________________________________________ number of true positives + number of false negatives - - - - - -* Weighted false positive rate - -- Description: Weighted mean of class FPR with weights equal to class probability. For more details, see [Multi-class classification](https://spark.apache.org/docs/2.1.0/mllib-evaluation-metrics.htmlmulticlass-classification){: external} in the Apache Spark documentation. - Default thresholds: Lower limit = 80% - Problem type: Multiclass classification - Chart values: Last value in the timeframe - Metrics details available: Confusion matrix - Do the math: The Weighted False Positive Rate is the application of the FPR with weighted data.number of false positives FPR = ______________________________________________________ (number of false positives + number of true negatives) - - - - - -" -CCCF5EC3E34E81E3E25FFE29317CDAC2ED1C936D_2,CCCF5EC3E34E81E3E25FFE29317CDAC2ED1C936D,"* Weighted recall - -- Description: Weighted mean of recall with weights equal to class probability - Default thresholds: Lower limit = 80% - Problem type: Multiclass classification - Chart values: Last value in the timeframe - Metrics details available: Confusion matrix - Do the math: Weighted recall (wR) is defined as the number of true positives (Tp) over the number of true positives plus the number of false negatives (Fn) used with weighted data.number of true positives Recall = ______________________________________________________ number of true positives + number of false negatives - - - - - -* Weighted precision - -- Description: Weighted mean of precision with weights equal to class probability - Default thresholds: Lower limit = 80% - Problem type: Multiclass classification - Chart values: Last value in the timeframe - Metrics details available: Confusion matrix - Do the math: Precision (P) is defined as the number of true positives (Tp) over the number of true positives plus the number of false positives (Fp).number of true positives Precision = ________________________________________________________ number of true positives + the number of false positives - - - - - -* Weighted F1-Measure - -- Description: Weighted mean of F1-measure with weights equal to class probability - Default thresholds: Lower limit = 80% - Problem type: Multiclass classification - Chart values: Last value in the timeframe - Metrics details available: Confusion matrix - Do the math: The Weighted F1-Measure is the result of using weighted data.precision * recall F1 = 2 * ____________________ precision + recall - - - - - -* Matthews correlation coefficient - -- Description: Measures the quality of binary and multiclass classifications by accounting for true and false positives and negatives. Balanced measure that can be used even if the classes are different sizes. A correlation coefficient value between -1 and +1. A coefficient of +1 represents a perfect prediction, 0 an average random prediction and -1 and inverse prediction. - Default thresholds: Lower limit = 80 - Chart values: Last value in the timeframe - Metrics details available: Confusion matrix - - - - - -" -CCCF5EC3E34E81E3E25FFE29317CDAC2ED1C936D_3,CCCF5EC3E34E81E3E25FFE29317CDAC2ED1C936D,"* Label skew - -- Description: Measures the asymmetry of label distributions. If skewness is 0, the dataset is perfectly balanced, it if is less than -1 or greater than 1, the distribution is highly skewed, anything in between is moderately skewed. - Default thresholds: -- Lower limit = -0.5 - Upper limit = 0.5 - Chart values: Last value in the timeframe - - - -Parent topic:[Configuring model evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitors-overview.html) -" -2EC85CF6AB5E5A276DA78F1129AD3F1F5C92F5BB_0,2EC85CF6AB5E5A276DA78F1129AD3F1F5C92F5BB," watsonx.governance generative AI quality evaluations - -You can use watsonx.governance generative AI quality evaluations to measure how well your foundation model performs tasks. - -When you [evaluate prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt.html), you can review a summary of generative AI quality evaluation results for the following task types: - - - -* Text summarization -* Content generation -* Entity extraction -* Question answering - - - -The summary displays scores and violations for metrics that are calculated with default settings. - -To configure generative AI quality evaluations with your own settings, you can set a minimum sample size and set threshold values for each metric as shown in the following example: - -![Configure generative AI quality evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-config-eval-settings.png) - -The minimum sample size indicates the minimum number of model transaction records that you want to evaluate and the threshold values create alerts when your metric scores violate your thresholds. The metric scores must be higher than the lower threshold values to avoid violations. Higher metric values indicate better scores. - -" -2EC85CF6AB5E5A276DA78F1129AD3F1F5C92F5BB_1,2EC85CF6AB5E5A276DA78F1129AD3F1F5C92F5BB," Supported generative AI quality metrics - -The following generative AI quality metrics are supported by watsonx.governance: - - - -* ROUGE - -[ROUGE](https://github.com/huggingface/evaluate/tree/main/metrics/rouge) is a set of metrics that assess how well a generated summary or translation compares to one or more reference summaries or translations. The generative AI quality evaluation calculates the rouge1, rouge2, and rougeLSum metrics. - Task types: - Text summarization - Content generation - Question answering - Entity extraction - Parameters: - Use stemmer: If true, users Porter stemmer to strip word suffixes. Defaults to false. - Thresholds: - Lower bound: 0.8 - Upper boud: 1.0 - - - - - -* SARI - -[SARI](https://github.com/huggingface/evaluate/tree/main/metrics/sari) compares the predicted simplified sentences against the reference and the source sentences and explicitly measures the goodness of words that are added, deleted, and kept by the system. - Task types: - Text summarization - Thresholds: - Lower bound: 0 - Upper bound: 100 - - - - - -* METEOR - -[METEOR](https://github.com/huggingface/evaluate/tree/main/metrics/meteor) is calculated with the harmonic mean of precision and recall to capture how well-ordered the matched words in machine translations are in relation to human-produced reference translations. - Task types: - Text summarization - Content generation - Parameters: - Alpha: Controls relative weights of precision and recall -- Beta: Controls shape of penalty as a function of fragmentation. - Gamma: The relative weight assigned to fragmentation penalty. -- Thresholds: - Lower bound: 0 - Upper bound: 1 - - - - - -* Text quality - -" -2EC85CF6AB5E5A276DA78F1129AD3F1F5C92F5BB_2,2EC85CF6AB5E5A276DA78F1129AD3F1F5C92F5BB,"Text quality evaluates the output of a model against [SuperGLUE](https://github.com/huggingface/evaluate/tree/af3c30561d840b83e54fc5f7150ea58046d6af69/metrics/super_glue) datasets by measuring the [F1 score](https://github.com/huggingface/evaluate/tree/main/metrics/f1), [precision](https://github.com/huggingface/evaluate/tree/main/metrics/precision), and [recall](https://github.com/huggingface/evaluate/tree/main/metrics/recall) against the model predictions and its ground truth data. It is calculated by normalizing the input strings and checking the number of similar tokens between the predictions and references. - Task types: - Text summarization - Content generation - Thresholds: - Lower bound: 0.8 - Upper bound: 1 - - - - - -* BLEU - -[BLEU](https://github.com/huggingface/evaluate/blob/main/metrics/bleu/README.md) evaluates the quality of machine-translated text when translated from one natural language to another by comparing individual translated segments to a set of reference translations. - Task types: - Text summarization - Content generation - Question answering - Parameters: - Max order: Maximum n-gram order to use when completing BLEU score - Smooth: Whether or not to apply Lin et al. 2004 smoothing - Thresholds: - Lower bound: 0.8 - Upper bound: 1 - - - - - -* Sentence similarity - -" -2EC85CF6AB5E5A276DA78F1129AD3F1F5C92F5BB_3,2EC85CF6AB5E5A276DA78F1129AD3F1F5C92F5BB,"[Sentence similarity](https://huggingface.co/tasks/sentence-similarity::text=Sentence%20Similarity%20is%20the%20task,similar%20they%20are%20between%20them) determines how similar two texts are by converting input texts into vectors that capture semantic information and calculating their similarity. It measures Jaccard similarity and Cosine similarity. - Task types: Text summarization - Thresholds: - Lower limit: 0.8 - Upper limit: 1 - - - - - -* PII - -[PII](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.htmlrule-based-pii) measures if the provided content contains any personally identifiable information in the input and output data by using the Watson Natural Language Processing Entity extraction model. - Task types: - Text summarization - Content generation - Question answering - Thresholds: - Upper limit: 0 - - - - - -* HAP - -HAP measures if there is any toxic content in the input data provided to the model, and also any toxic content in the model generated output. - Task types: - Text summarization - Content generation - Question answering - Thesholds - Upper limit: 0 - - - - - -* Readability - -The readability score determines the readability, complexity, and grade level of the model's output. - Task types: - Text summarization - Content generation - Thresholds: - Lower limit: 60 - - - - - -* Exact match - -" -2EC85CF6AB5E5A276DA78F1129AD3F1F5C92F5BB_4,2EC85CF6AB5E5A276DA78F1129AD3F1F5C92F5BB,"[Exact match](https://github.com/huggingface/evaluate/tree/main/metrics/exact_match) returns the rate at which the input predicted strings exactly match their references. - Task types: - Question answering - Entity extraction - Parameters: - Regexes to ignore: Regex expressions of characters to ignore when calculating the exact matches. - Ignore case: If True, turns everything to lowercase so that capitalization differences are ignored. - Ignore punctuation: If True, removes punctuation before comparing strings. - Ignore numbers: If True, removes all digits before comparing strings. - Thresholds: - Lower limit: 0.8 - Upper limit: 1 - - - - - -* Multi-label/class metrics - -Multi-label/class metrics measure model performance for multi-label/multi-class predictions. - Metrics: - Micro F1 score - Macro F1 score - Micro precision - Macro precision - Micro recall - Macro recall - Task types: Entity extraction - Thresholds: - Lower limit: 0.8 - Upper limit: 1 - - - -Parent topic:[Configuring model evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitors-overview.html) -" -B924D359F00DB1671F86ACA7A3EE226206DFBED1,B924D359F00DB1671F86ACA7A3EE226206DFBED1," Configuring model evaluations in watsonx.governance - -Configure watsonx.governance evaluations to generate insights about your model performance. - -You can configure the following types of evaluations in watsonx.governance: - - - -* [Quality](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitor-accuracy.html) -Evaluates how well your model predicts correct outcomes that match labeled test data. -* [Drift v2](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html) -Evaluates changes in your model output, the accuracy of your predictions, and the distribution of your input data -* [Generative AI quality](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitor-gen-quality.html) -Measures how well your foundation model performs tasks - - - -watsonx.governance also enables [model health evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-model-health-metrics.html) by default to help you determine how efficiently your model deployment processes transactions. - -Parent topic:[Evaluating AI models with Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/getting-started.html) -" -DE9CE5D0599D0D181890911721738BA3DEE01E34,DE9CE5D0599D0D181890911721738BA3DEE01E34," Payload logging in watsonx.governance - -You can enable payload logging in watsonx.governance to configure model evaluations. - -To [manage payload data](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-payload-data.html) for configuring drift v2, generative AI quality, and model health evaluations, watsonx.governance must log your payload data in the payload logging table. - -Generative AI quality evaluations use payload data to generate results for the following task types when you evaluate prompt templates: - - - -* Text summarization -* Content generation -* Question answering - - - -Drift v2 and model health evaluations use payload data to generate results for the following task types when you evaluate prompt templates: - - - -* Text classification -* Text summarization -* Content generation -* Entity extraction -* Question answering - - - -You can log your payload data with the payload logging endpoint or by uploading a CSV file. For more information, see [Sending model transactions](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-send-model-transactions.html) - -Parent topic:[Managing payload data in Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-payload-data.html) -" -E54340E1EF02D2436758A56105B3182481FF1783_0,E54340E1EF02D2436758A56105B3182481FF1783," watsonx.governance offering plan options - -The watsonx.governance service enables responsible, transparent, and explainable AI. - -The available plans depend on the region where you are provisioning the service from the IBM Cloud catalog. - - - -* In the Dallas region, provision a [watsonx.governance plan](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-plan-options.html?context=cdpaas&locale=enwos-plan-options-xgov-plans). -* In the Frankfurt region, provision an [Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-plan-options2.html) plan. - - - -" -E54340E1EF02D2436758A56105B3182481FF1783_1,E54340E1EF02D2436758A56105B3182481FF1783," watsonx.governance plans (Dallas only) - -Watsonx.governance offers a free Lite plan and a paid Essentials plan. - -With watsonx.governance you can: - - - -* Evaluate machine learning models for dimensions such as fairness, quality, or drift. -* Define AI use cases in a collaborative, open way to define a business problem and track the solution. -* Capture the details for machine learning models, in each stage of their lifecycle, and store the data in factsheets within an associated AI use case. -* Maintain collections of AI uses cases in inventories, where you can manage access. - - - -For Large Language Models in watsonx.ai, you can also: - - - -* Evaluate prompt templates across multiple dimensions such as quality, Personally Identifable Information (PII) in prompt input and outputs, and Abuse or Profanity in Prompt input and ouput. -* Monitor metrics for Large Language Model performance. -* Automatically capture metadata in a Factsheet from development to deployment, for each stage in the lifecycle. - - - -" -E54340E1EF02D2436758A56105B3182481FF1783_2,E54340E1EF02D2436758A56105B3182481FF1783," watsonx.governance Lite plan features - -Lite plan features include: - - - -* Maximum of 200 resource units -* 1 resource unit per predictive model evaluation -* 1 resource unit per foundational model evaluation -* 1 resource unit per global explanation, with a maximum of 500 local explanations -* 1 resource unit per 500 local explanations -* Maximum of 1,000 records per evaluation -* Limit of 3 rows per use case -* Limit of 3 use cases -* Limit of 1 inventory - - - -" -E54340E1EF02D2436758A56105B3182481FF1783_3,E54340E1EF02D2436758A56105B3182481FF1783," watsonx.governance Essential plan features - -Essential plan features include: - - - -* Maximum of 500 inventories -* 1 resource unit per predictive model evaluation -* 1 resource unit per foundational model evaluation -* 1 resource unit per global explanation, with a maximum of 500 local explanations -* 1 resource unit per 500 local explanations -* Maximum of 50,000 records per evaluation - - - -" -E54340E1EF02D2436758A56105B3182481FF1783_4,E54340E1EF02D2436758A56105B3182481FF1783," Next steps - -[Provisioning and launching the watsonx.governance service](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-provision-launch.html) - -Parent topic:[watsonx.governance](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/aiopenscale.html) -" -E45EEB80195E54D02A6F6CB7505F1FB73B4D4DAB_0,E45EEB80195E54D02A6F6CB7505F1FB73B4D4DAB," Watson OpenScale offering plan options - -The Watson OpenScale enables responsible, transparent, and explainable AI. - -With Watson OpenScale you can: - - - -* Evaluate machine learning models for dimensions such as fairness, quality, or drift. -* Explore transactions to gain insights about your model. - - - -" -E45EEB80195E54D02A6F6CB7505F1FB73B4D4DAB_1,E45EEB80195E54D02A6F6CB7505F1FB73B4D4DAB," Watson OpenScale legacy offering plans - -Important:The legacy offering plan for Watson OpenScale is available only in the Frankfurt region. In the Dallas region, the [watsonx.governance plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-plan-options.html) are available instead. - -" -E45EEB80195E54D02A6F6CB7505F1FB73B4D4DAB_2,E45EEB80195E54D02A6F6CB7505F1FB73B4D4DAB," Watson OpenScale Standard v2 plan - -Watson OpenScale offers a Standard v2 plan that charge users on a per model basis. - -There are no restrictions or limitations on payload data, feedback rows, or explanations under the Standard v2 instance. - -" -E45EEB80195E54D02A6F6CB7505F1FB73B4D4DAB_3,E45EEB80195E54D02A6F6CB7505F1FB73B4D4DAB," Regional limitations - -Watson OpenScale is not available in some regions. See [Regional availability for services and features](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/regional-datactr.html) for more details. - -Note:The regional availability for every service can also be found in the [IBM watsonx catalog](https://dataplatform.cloud.ibm.com/data/catalog?target=services&context=cpdaas). - -" -E45EEB80195E54D02A6F6CB7505F1FB73B4D4DAB_4,E45EEB80195E54D02A6F6CB7505F1FB73B4D4DAB," Quota limits - -To avoid performance issues and manage resources efficiently, Watson OpenScale sets the following quota limits: - - - - Asset Limit - - DataMart 100 per instance - Service providers 100 per instance - Integrated systems 100 per instance - Subscriptions 100 per service provider - Monitor instances 100 per subscription - - - -Every asset in Watson OpenScale has a hard limitation of 10000 instances of the asset per service instance. - -" -E45EEB80195E54D02A6F6CB7505F1FB73B4D4DAB_5,E45EEB80195E54D02A6F6CB7505F1FB73B4D4DAB," PostgreSQL databases for Watson OpenScale - -You can use a PostgreSQL database for your Watson OpenScale instance. PostgreSQL is a powerful, open source object-relational database that is highly customizable and compliant with many security standards. - -If your model processes personally identifiable information (PII), use a PostgreSQL database for your model. PostgreSQL is compliant with: - - - -* GDPR -* HIPAA -* PCI-DSS -* SOC 1 Type 2 -* SOC 2 Type 2 -* ISO 27001 -* ISO 27017 -* ISO 27018 -* ISO 27701 - - - -" -E45EEB80195E54D02A6F6CB7505F1FB73B4D4DAB_6,E45EEB80195E54D02A6F6CB7505F1FB73B4D4DAB," Next steps - -[Managing the Watson OpenScale service](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-provision-launch.html) - -Parent topic:[watsonx.governance](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/aiopenscale.html) -" -6199BBB097894542EA31C726D8EF4A3357EED1E2_0,6199BBB097894542EA31C726D8EF4A3357EED1E2," Provisioning and launching watsonx.governance - -You can provision and launch your watsonx.governance service instance to start monitoring your model assets. - -Prerequisite : You must be [signed up for watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html). - -Required permissions : To provision and launch a watsonx.governance service instance, you must have Administrator or Editor platform access roles in the IBM Cloud account for IBM watsonx. If you signed up for IBM watsonx with your own IBM Cloud account, you are the owner of the account. Otherwise, you can [check your IBM Cloud account roles and permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/your-roles.htmliamroles). - -" -6199BBB097894542EA31C726D8EF4A3357EED1E2_1,6199BBB097894542EA31C726D8EF4A3357EED1E2," Launching a watsonx.governance service instance - -Before you launch watsonx.governance, you must [create a service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/create-services.html) from your watsonx account. - -To launch watsonx.governance from IBM watsonx: - - - -1. From the navigation menu ![Navigation menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/navigation-menu.svg), choose Administration > Services > Service instances. -2. Click your watsonx.governance service instance. -3. From the Service Details page, click Launch watsonx.governance. - - - -" -6199BBB097894542EA31C726D8EF4A3357EED1E2_2,6199BBB097894542EA31C726D8EF4A3357EED1E2," Managing watsonx.governance - -You can manage your Watson OpenScale service instance by upgrading it or deleting it. - -You can upgrade watsonx.governance from a free Lite plan to a paid plan by using the IBM Cloud dashboard: - -Note:Upgrade to a paid plan if you are getting error messages, such as 403 Errors.AIQFM0011: 'Lite plan has exceeded the 50,000 rows limitation for Debias or Deployment creation failed. Error: 402. - - - -1. From the watsonx.governance dashboard, click your profile. -2. Click View upgrade options. -3. Select the Essential plan and click Upgrade. - - - -You can also delete the watsonx.governance service instance and related data. After 30 days of inactivity, the data mart is automatically deleted for a Lite plan. - -When the data mart is deleted, it includes the service configuration settings and tables: - - - -* All configuration tables are deleted including the following configuration tables and files: - - - -* Bindings -* Subscriptions -* Settings - - - -* All the tables that are created for model evaluation are deleted, including, but not limited to, the following tables: - - - -* Payload -* Feedback -* Manual labeling -* monitors -* Performance -* Explanation -* Annotation tables - - - - - -Lite plan services are deleted after 30 days of inactivity. Even if you don't delete your instance from IBM Cloud, your data mart is deleted after 30 days of inactivity. - -As a user of the Essential plan, your data mart is not automatically deleted. You can delete your watsonx.governance service instance from IBM Cloud and use the command-line interface to delete the data mart. - -Parent topic:[Evaluating AI models with Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/getting-started.html) -" -60CC59B176B08462143EA591DAC074060AD988C7_0,60CC59B176B08462143EA591DAC074060AD988C7," Sending model transactions in watsonx.governance - -You must send model transactions from your deployment to watsonx.governance to enable model evaluations. - -To generate accurate results for your model evaluations constantly, watsonx.governance must continue to receive new data from your deployment. watsonx.governance provides different methods that you can use to send transactions for model evaluations. - -" -60CC59B176B08462143EA591DAC074060AD988C7_1,60CC59B176B08462143EA591DAC074060AD988C7," Importing data - -When you review evaluation results in watsonx.governance, you can import data by selecting Evaluate now in the Actions menu to import payload and feedback data for your model evaluations. - -![Analyze prompt template evaluation results](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-eval-results-prod-spaces.png) - -For pre-production models, you must upload a CSV file that contains examples of input and output data. To run evaluations with imported data, you must map prompt variables to the associated columns in your CSV file and select Upload and evaluate as shown in the following example: - -![Upload CSV file](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-map-prompt-variables-preprod.png) - -For production models, you can select Upload payload data or Upload feedback data in the Import test data window to upload a CSV file as shown in the following example: - -![Import test data](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-import-test-data-prod-space.png) - -The CSV file must contain labeled columns that match the columns in your payload and feedback schemas. When your upload completes successfully, you can select Evaluate now to run your evaluations with your imported data. - -" -60CC59B176B08462143EA591DAC074060AD988C7_2,60CC59B176B08462143EA591DAC074060AD988C7," Using endpoints - -For production models, Watson OpenScale supports endpoints that you can use to provide data in formats that enable evaluations. You can use the payload logging endpoint to send scoring requests for drift evaluations and use the feedback logging endpoint to provide feedback data for quality evaluations. For more information about the data formats, see [Managing data for model evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-data.htmlit-dbo-active). - -Parent topic:[Managing data for model evaluations in Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-data.html) -" -422554C1DCEBABC93CB859B4A896908DA48A540D_0,422554C1DCEBABC93CB859B4A896908DA48A540D," Setting up watsonx.governance - -You can set up watsonx.governance to monitor model assets in your IBM watsonx projects or deployment spaces. To set up watsonx.governance, you can manage users and roles for your organization to control access to your projects or deployment spaces. - -To set up watsonx.governance, complete the following tasks: - - - -* [Creating access policies](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-setup-wos.html?context=cdpaas&locale=enwos-access-policies) -* [Managing users and roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-setup-wos.html?context=cdpaas&locale=enwos-users-wx) - - - -" -422554C1DCEBABC93CB859B4A896908DA48A540D_1,422554C1DCEBABC93CB859B4A896908DA48A540D," Creating access policies - -You can complete the following steps to invite users to an IBM Cloud account that has a watsonx.governance instance installed and assign service access. - -Required roles : Users must have have the Reader, Writer, or higher IBM Cloud IAM Platform roles for service access. Users that are assigned the Writer role or higher can access information across projects and deployment spaces in watsonx.governance. - - - -1. From the IBM Cloud homepage, click Manage > Access (IAM). -2. From the IAM dashboard, click Users and select Invite user. -3. Complete the following fields: - - - -* How do you want to assign access? : Access policy. -* Which service do you want to assign access to? : watsonx.governance and click Next. -* How do you want to scope the access : Select the scope of access for users and click Next. - - - -* If you select Specific resources, select an attribute type and specify a value for each condition that you add. -* If you select Service instance in the Attribute type list, specify your instance in the Value field. - - - - - -4. If you have multiple instances, you must find the data mart ID to specify the instance that you want to assign users access to. You can use one of the following methods to find the data mart ID: - - - -* On the Insights dashboard, click a model deployment tile and go to Actions > View model information to find the data mart ID. -* On the Insights dashboard, click the navigation menu on a model deployment tile and select Configure monitors. Then, go to the Endpoints tab and find the data mart ID in the Integration details section of the Model information tab. - - - -5. Select the Reader role in the Service access list. -6. Assign access to users. - - - -* If you are assigning access to new users, click Add, and then click Invite in the Access summary pane. -* If you are assigning access to existing users, click Add, and then click Assign in the Access summary pane. - - - - - -" -422554C1DCEBABC93CB859B4A896908DA48A540D_2,422554C1DCEBABC93CB859B4A896908DA48A540D," watsonx.governance users and roles - -You can assign roles to watsonx.governance users to collaborate on model evaluations in [projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.htmladd-collaborators) and [deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/collaborator-permissions-wml.htmladding-collaborators). - -The following table lists permissions for roles that you can assign for access to evaluations. The Operator and Viewer roles are equivalent. - - - -Table 1. Operations by role -The first row of the table describes separate roles that you can choose from when creating a user. Each column provides a checkmark in the role category for the capability associated with that role. - - Operations Admin role Editor role Viewer/Operator role - - Evaluation ✔ ✔ - View evaluation result ✔ ✔ ✔ - Configure monitoring condition ✔ ✔ - View monitoring condition ✔ ✔ ✔ - Upload training data CSV file in model risk management ✔ ✔ - - - -Parent topic:[Setting up the platform for administrators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html) -" -225192BB81696D14887CC55070A6DFA14B3315F7_0,225192BB81696D14887CC55070A6DFA14B3315F7," Adding data to Data Refinery - -After you [create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) and you [create connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) or you [add data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html) to the project, you can then add data to Data Refinery and start prepping that data for analysis. - -You can add data to Data Refinery in one of several ways: - - - -* Select Prepare data from the overflow menu (![overflow menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/actions.png)) of a data asset in the All assets list for the project -* Preview a data asset in the project and then click Prepare data -* Navigate to Data Refinery first and then add data to it - - - -" -225192BB81696D14887CC55070A6DFA14B3315F7_1,225192BB81696D14887CC55070A6DFA14B3315F7," Navigate to Data Refinery - - - -1. Access Data Refinery from within a project. Click the Assets tab. -2. Click New asset > Prepare and visualize data. -3. Select the data that you want to work with from Data assets or from Connections. - -From Data assets: - - - -* Select a data file (the selection includes data files that were already shaped with Data Refinery) -* Select a connected data asset - - - -From Connections: - - - -* Select a connection and file -* Select a connection, folder, and file -* Select a connection, schema, and table or view - - - -Data Refinery supports these file types: Avro, CSV, delimited text files, JSON, Microsoft Excel (xls and xlsx formats. First sheet only, except for connections and connected data assets.), Parquet, SAS with the ""sas7bdat"" extension (read only), TSV (read only) - -Data Refinery operates on a sample subset of rows in the data set. The sample size is 1 MB or 10,000 rows, whichever comes first. However, when you run a job for the Data Refinery flow, the entire data set is processed. If the Data Refinery flow fails with a large data asset, see workarounds in [Troubleshooting Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/ts_index.html). - -Data connections marked with a key icon (![the key symbol for private connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/privatekey.png)) are locked. If you are authorized to access the data source, you are asked to enter your personal credentials the first time you select it. This one-time step permanently unlocks the connection for you. After you have unlocked the connection, the key icon is no longer displayed. See [Adding connections to projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). -4. Click Add to load the data into Data Refinery. - - - -" -225192BB81696D14887CC55070A6DFA14B3315F7_2,225192BB81696D14887CC55070A6DFA14B3315F7," Next steps - - - -* [Refine your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) -* [Validate your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/metrics.html) -* [Use visualizations to gain insights into your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/visualizations.html) - - - -Parent topic:[Refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_0,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," Interactive code templates in Data Refinery - -Data Refinery provides interactive templates for you to code operations, functions, and logical operators. Access the templates from the command-line text box at the top of the page. The templates include interactive assistance to help you with the syntax options. - -Important: Support is for the operations and functions in the user interface. If you insert other operations or functions from an open source library, the Data Refinery flow might fail. See the command-line help and be sure to use the list of operations or functions from the templates. Use the examples in the templates to further customize the syntax as needed. - - - -* [Operations](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/code_operations.html?context=cdpaas&locale=enoperations) -* [Functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/code_operations.html?context=cdpaas&locale=enfunctions) -* [Logical operators](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/code_operations.html?context=cdpaas&locale=enlogical_operators) - - - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_1,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," Operations - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_2,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," arrange - -arrange(`) -Sort rows, in ascending order, by the specified columns. - -arrange(desc(`)) -Sort rows, in descending order, by the specified column. - -arrange(`, ) -Sort rows, in ascending order, by each specified, successive column, keeping the order from the prior sort intact. - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_3,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," count - -count() -Total the data by group. - -count(`) -Group the data by the specified column and return the number of rows with unique values (for string values) or return the total for each group (for numeric values). - -count(`, wt=) -Group the data by the specified column and return the number of rows with unique values (for string values) or return the total for each group (for numeric values) in the specified weight column. - -count(`, wt=()) -Group the data by the specified column and return the result of the function applied to the specified weight column. - -count(`, wt=(), sort = ) -Group the data by the specified column and return the result of the function applied to the specified weight column, sorted or not. - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_4,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," distinct - -distinct() -Keep distinct, unique rows based on all columns or on specified columns. - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_5,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," filter - -filter(` provide_value) -Keep rows that meet the specified condition and filter out all other rows. -For the Boolean column type, provide_value should be uppercase TRUE or FALSE. - -filter(`== ) -Keep rows that meet the specified filter conditions based on logical value TRUE or FALSE. - -filter(() provide_value) -Keep rows that meet the specified condition and filter out all other rows. The condition can apply a function to a column on the left side of the operator. - -filter(` ) -Keep rows that meet the specified condition and filter out all other rows. The condition can apply a function to a column on the right side of the operator. - -filter() -Keep rows that meet the specified condition and filter out all other rows. The condition can apply a logical function to a column. - -filter(` provide_value provide_value ) -Keep rows that meet the specified conditions and filter out all other rows. - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_6,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," group_by - -group_by(`) -Group the data based on the specified column. - -group_by(desc(`)) -Group the data, in descending order, based on the specified column. - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_7,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," mutate - -mutate(provide_new_column = `) -Add a new column and keep existing columns. - -mutate(provide_new_column = ) -Add a new column by using the specified expression, which applies a function to a column. Keep existing columns. - -mutate(provide_new_column = case_when(` provide_value_or_column_to_compare provide_value_or_column_to_replace, provide_value_or_column_to_compare provide_value_or_column_to_replace, TRUE provide_default_value_or_column)) -Add a new column by using the specified conditional expression. - -mutate(provide_new_column = ` ) -Add a new column by using the specified expression, which performs a calculation with existing columns. Keep existing columns. - -mutate(provide_new_column = coalesce(`, )) -Add a new column by using the specified expression, which replaces missing values in the new column with values from another, specified column. As an alternative to specifying another column, you can specify a value, a function on a column, or a function on a value. Keep existing columns. - -mutate(provide_new_column = if_else(` provide_value, provide_value_for_true, provide_value_for_false)) -Add a new column by using the specified conditional expression. Keep existing columns. - -mutate(provide_new_column = `, provide_new_column = ) -Add multiple new columns and keep existing columns. - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_8,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36,"mutate(provide_new_column = n()) -Count the values in the groups. Ensure grouping is done already using group_by. Keep existing columns. - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_9,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," mutate_all - -mutate_all(funs()) -Apply the specified function to all of the columns and overwrite the existing values in those columns. Specify whether to remove missing values. - -mutate_all(funs(. provide_value)) -Apply the specified operator to all of the columns and overwrite the existing values in those columns. - -mutate_all(funs(""provide_value"" = . provide_value)) -Apply the specified operator to all of the columns and create new columns to hold the results. Give the new columns names that end with the specified value. - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_10,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," mutate_at - -mutate_at(vars(`), funs()) -Apply functions to the specified columns. - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_11,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," mutate_if - -mutate_if(, ) -Apply functions to the columns that meet the specified condition. - -mutate_if(, funs( . provide_value)) -Apply the specified operator to the columns that meet the specified condition. - -mutate_if(, funs()) -Apply functions to the columns that meet the specified condition. Specify whether to remove missing values. - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_12,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," rename - -rename(provide_new_column = `) -Rename the specified column. - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_13,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," sample_frac - -sample_frac(provide_number_between_0_and_1, weight=`,replace=) -Generate a random sample based on a percentage of the data. weight is optional and is the ratio of probability the row will be chosen. Provide a numeric column. replace is optional and its Default is FALSE. - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_14,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," sample_n - -sample_n(provide_number_of_rows,weight=`,replace=) -Generate a random sample of data based on a number of rows. weight is optional and is the ratio of probability the row will be chosen. Provide a numeric column. replace is optional and its default is FALSE. - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_15,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," select - -select(`) -Keep the specified column. - -select(-`) -Remove the specified column. - -select(starts_with(""provide_text_value"")) -Keep columns with names that start with the specified value. - -select(ends_with(""provide_text_value"")) -Keep columns with names that end with the specified value. - -select(contains(""provide_text_value"")) -Keep columns with names that contain the specified value. - -select(matches (""provide_text_value"")) -Keep columns with names that match the specified value. The specified value can be text or a regular expression. - -select(`:) -Keep the columns in the specified range. Specify the range as from one column to another column. - -select(`, everything()) -Keep all of the columns, but make the specified column the first column. - -select(`, ) -Keep the specified columns. - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_16,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," select_if - -select_if() Keep columns that meet the specified condition. Supported functions include: - - - -* contains -* ends_with -* matches -* num_range -* starts_with - - - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_17,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," summarize - -summarize(provide_new_column = ()) -Apply aggregate functions to the specified columns to reduce multiple column values to a single value. Be sure to group the column data first by using the group_by operation. - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_18,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," summarize_all - -summarize_all() -Apply an aggregate function to all of the columns to reduce multiple column values to a single value. Specify whether to remove missing values. Be sure to group the column data first by using the group_by operation. - -summarize_all(funs()) -Apply multiple aggregate functions to all of the columns to reduce multiple column values to a single value. Create new columns to hold the results. Specify whether to remove missing values. Be sure to group the column data first by using the group_by operation. - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_19,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," summarize_if - -summarize_if(,...) -Apply aggregate functions to columns that meet the specified conditions to reduce multiple column values to a single value. Specify whether to remove missing values. Be sure to group the column data first by using the group_by operation. Supported functions include: - - - -* count -* max -* mean -* min -* standard deviation -* sum - - - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_20,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," tally - -tally() -Counts the number of rows (for string columns) or totals the data (for numeric values) by group. Be sure to group the column data first by using the group_by operation. - -tally(wt=`) -Counts the number of rows (for string columns) or totals the data (for numeric columns) by group for the weighted column. - -tally( wt=(), sort = ) -Applies a function to the specified weighted column and returns the result, by group, sorted or not. - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_21,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," top_n - -top_n(provide_value) -Select the top or bottom N rows (by value) in each group. Specify a positive integer to select the top N rows; specify a negative integer to select the bottom N rows. - -top_n(provide_value, `) -Select the top or bottom N rows (by value) in each group, based on the specified column. Specify a positive integer to select the top N rows; specify a negative integer to select the bottom N rows. - -If duplicate rows affect the count, use the Remove duplicates GUI operation prior to using the top_n() operation. - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_22,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," transmute - -transmute( = ) -Add a new column or overwrite an existing one by using the specified expression. Keep only columns that are mentioned in the expression. - -transmute( = ) -Add a new column or overwrite an existing one by applying a function to the specified column. Keep only columns that are mentioned in the expression. - -transmute( = ) -Add a new column or overwrite an existing one by applying an operator to the specified column. Keep only columns that are mentioned in the expression. - -transmute( = , = ) -Add multiple new columns. Keep only columns that are mentioned in the expression. - -transmute( = if_else( provide_value, provide_value_for_true, provide_value_for_false)) -Add a new column or overwrite an existing one by using the specified conditional expressions. Keep only columns that are mentioned in the expressions. - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_23,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," ungroup - -ungroup() -Ungroup the data. - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_24,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," Functions - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_25,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," Aggregate - - - -* mean -* min -* n -* sd -* sum - - - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_26,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," Logical - - - -* is.na - - - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_27,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," Numerical - - - -* abs -* coalesce -* cut -* exp -* floor - - - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_28,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," Text - - - -* c -* coalesce -* paste -* tolower -* toupper - - - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_29,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," Type - - - -* as.character -* as.double -* as.integer -* as.logical - - - -" -5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36_30,5B0B25C87C4C2D91E8376D2AFF3726E4CA355F36," Logical operators - - - -* < - -* <= - -* >= - -* > - -* between - -* != - -* == - -* %in% - - - -Parent topic:[Refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) -" -0999F59BB8E2E2AB7722D57CDBC051A0984ABE45_0,0999F59BB8E2E2AB7722D57CDBC051A0984ABE45," Managing Data Refinery flows - -A Data Refinery flow is an ordered set of steps to cleanse, shape, and enhance data. As you [refine your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.htmlrefine) by [applying operations](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/gui_operations.html) to a data set, you dynamically build a customized Data Refinery flow that you can modify in real time and save for future use. - -These are actions that you can do while you refine your data: - -" -0999F59BB8E2E2AB7722D57CDBC051A0984ABE45_1,0999F59BB8E2E2AB7722D57CDBC051A0984ABE45,"Working with the Data Refinery flow - - - -* [Save a Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=ensave) -* [Run or schedule a job for Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enjobs) -* [Rename a Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enrename) - - - -" -0999F59BB8E2E2AB7722D57CDBC051A0984ABE45_2,0999F59BB8E2E2AB7722D57CDBC051A0984ABE45,"Steps - - - -* [Undo or redo a step](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enundo) -* [Edit, duplicate, insert, or delete a step](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enedit-duplicate) -* [View the Data Refinery flow steps in a ""snapshot view""](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=ensnapshot) -* [Export the Data Refinery flow data to a CSV file](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enexport) - - - -" -0999F59BB8E2E2AB7722D57CDBC051A0984ABE45_3,0999F59BB8E2E2AB7722D57CDBC051A0984ABE45,"Working with the data sets - - - -* [Change the source of a Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enchange) -* [Edit the sample size](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=ensample) -* [Edit the source properties ](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enedit-source) -* [Change the target of a Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enoutput) -* [Edit the target properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enedit-target) -* [Change the name of the Data Refinery flow target](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enchange-name) - - - -" -0999F59BB8E2E2AB7722D57CDBC051A0984ABE45_4,0999F59BB8E2E2AB7722D57CDBC051A0984ABE45,"Actions on the project page - - - -* [Reopen a Data Refinery flow to continue working](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enreopen) -* [Duplicate a Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enclone) -* [Delete a Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enremove) -* [Promote a Data Refinery flow to a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enpromote) - - - -" -0999F59BB8E2E2AB7722D57CDBC051A0984ABE45_5,0999F59BB8E2E2AB7722D57CDBC051A0984ABE45," Working with the Data Refinery flow - -" -0999F59BB8E2E2AB7722D57CDBC051A0984ABE45_6,0999F59BB8E2E2AB7722D57CDBC051A0984ABE45," Save a Data Refinery flow - -Save a Data Refinery flow by clicking the Save Data Refinery flow icon ![Save icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/save.png) in the Data Refinery toolbar. Data Refinery flows are saved to the project that you're working in. Save a Data Refinery flow so that you can continue refining a data set later. - -The default output of the Data Refinery flow is saved as a data asset source-file-name_shaped.csv. For example, if the source file is mydata.csv, the default name and output for the Data Refinery flow is mydata_csv_shaped. You can edit the name and add an extension by [changing the target of a Data Refinery flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=enoutput). - -" -0999F59BB8E2E2AB7722D57CDBC051A0984ABE45_7,0999F59BB8E2E2AB7722D57CDBC051A0984ABE45," Run or schedule a job for a Data Refinery flow - -Data Refinery supports large data sets, which can be time-consuming and unwieldy to refine. So that you can work quickly and efficiently, Data Refinery operates on a sample subset of rows in the data set. The sample size is 1 MB or 10,000 rows, whichever comes first. When you run a job for the Data Refinery flow, the entire data set is processed. When you run the job, you select the runtime and you can add a one-time or repeating schedule. - -In Data Refinery, from the Data Refinery toolbar click the Jobs icon ![the run or schedule a job icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/Run-schedule_Blue.png), and then select Save and create a job or Save and view jobs. - -After you save a Data Refinery flow, you can also create a job for it from the Project page. Go to the Assets tab, select the Data Refinery flow, choose New job from the overflow menu (![overflow menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/actions.png)). - -You must have the Admin or Editor role to view the job details or to edit or run the job. With the Viewer role for the project, you can view only the job details. - -For more information about jobs, see [Creating jobs in Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-dr.html). - -" -0999F59BB8E2E2AB7722D57CDBC051A0984ABE45_8,0999F59BB8E2E2AB7722D57CDBC051A0984ABE45," Rename a Data Refinery flow - -On the Data Refinery toolbar, open the Info pane ![info icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/info-pane.png). Or open the Flow settings ![settings icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/settings.png) and go to the General tab. - -" -0999F59BB8E2E2AB7722D57CDBC051A0984ABE45_9,0999F59BB8E2E2AB7722D57CDBC051A0984ABE45," Steps - -" -0999F59BB8E2E2AB7722D57CDBC051A0984ABE45_10,0999F59BB8E2E2AB7722D57CDBC051A0984ABE45," Undo or redo a step - -Click the undo (![undo icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/undo.png)) icon or the redo (![redo icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/redo.png)) icon on the toolbar. - -" -0999F59BB8E2E2AB7722D57CDBC051A0984ABE45_11,0999F59BB8E2E2AB7722D57CDBC051A0984ABE45," Edit, duplicate, insert, or delete a step - -In the Steps pane, click the overflow menu (![overflow menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/actions.png)) on the step for the operation that you want to change. Select the action (Edit, Duplicate, Insert step before, Insert step after, or Delete). - - - -* If you select Edit, Data Refinery goes into edit mode and either displays the operation to be edited on the command line or in the Operation pane. Apply the edited operation. - - - - - -* If you select Duplicate, the duplicated step is inserted after the selected step. - - - -Note:The Duplicate action is not available for the Join or Union operations. - -Data Refinery updates the Data Refinery flow to reflect the changes and reruns all the operations. - -" -0999F59BB8E2E2AB7722D57CDBC051A0984ABE45_12,0999F59BB8E2E2AB7722D57CDBC051A0984ABE45," View the Data Refinery flow steps in a ""snapshot view"" - -To see what your data looked like at any point in time, click a previous step to put Data Refinery into snapshot view. For example, if you click Data source, you see what your data looked like before you started refining it. Click any operation step to see what your data looked like after that operation was applied. To leave snapshot view, click Viewing step x of y or click the same step that you selected to get into snapshot view. - -" -0999F59BB8E2E2AB7722D57CDBC051A0984ABE45_13,0999F59BB8E2E2AB7722D57CDBC051A0984ABE45," Export the Data Refinery flow data to a CSV file - -Click Export (![export icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/export.png)) on the toolbar to export the data at the current step in your Data Refinery flow to a CSV file without saving or running a Data Refinery flow job. Use this option, for example, if you want quick output of a Data Refinery flow that is in progress. When you export the data, a CSV file is created and downloaded to your computer's Downloads folder (or the user-specified download location) at the current step in the Data Refinery flow. If you are in [snapshot view](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html?context=cdpaas&locale=ensnapshot), the output of the CSV file is at the step that you clicked. If you are viewing a sample (subset) of the data, only the sample data will be in the output. - -" -0999F59BB8E2E2AB7722D57CDBC051A0984ABE45_14,0999F59BB8E2E2AB7722D57CDBC051A0984ABE45," Working with the data sets - -" -0999F59BB8E2E2AB7722D57CDBC051A0984ABE45_15,0999F59BB8E2E2AB7722D57CDBC051A0984ABE45," Change the source of a Data Refinery flow - -Change the source of a Data Refinery flow. Run the same Data Refinery flow but with a different source data set. There are two ways that you can change the source: - - - -* In the Steps pane: Click the overflow menu (![overflow menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/actions.png)) next to Data source, select Edit, and then choose a different source data set. -![Edit source](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/edit-source.png) -* In the Flow settings: You can use this method if you want to change more than one data source in the same place. For example, for a Join or a Union operation. On the toolbar, open the Flow settings ![settings icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/settings.png). Go to the Source data sets tab and click the overflow menu (![overflow menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/actions.png)) next to the data source. Select Replace data source, and then choose a different source data set. - - - -For best results, the new data set should have a schema that is compatible to the original data set (for example, column names, number of columns, and data types). If the new data set has a different schema, operations that won't work with the schema will show errors. You can edit or delete the operations, or change the source to one that has a more compatible schema. - -" -0999F59BB8E2E2AB7722D57CDBC051A0984ABE45_16,0999F59BB8E2E2AB7722D57CDBC051A0984ABE45," Edit the sample size - -When you run the job for the Data Refinery flow, the operations are performed on the full data set. However, when you apply the operations interactively in Data Refinery, depending on the size of the data set, you view only a sample of the data. - -Increase the sample size to see results that will be closer to the results of the Data Refinery flow job, but be aware that it might take longer to view the results in Data Refinery. The maximum is a top-row count of 10,000 rows or 1 MB, whichever comes first. Decrease the sample size to view faster results. Depending on the size of the data and the number and complexity of the operations, you might want to experiment with the sample size to see what works best for the data set. - -On the toolbar, open the Flow settings ![settings icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/settings.png). Go to the Source data sets tab and click the overflow menu (![overflow menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/actions.png)) next to the data source, and select Edit sample. - -" -0999F59BB8E2E2AB7722D57CDBC051A0984ABE45_17,0999F59BB8E2E2AB7722D57CDBC051A0984ABE45," Edit the source properties - -The available properties depend on the data source. Different properties are available for data assets and for data from different kinds of connections. Change the file format only if the inferred file format is incorrect. If you change the file format, the source is read with the new format, but the source file remains unchanged. Changing the format source properties might be an iterative process. Inspect your data after you apply an option. - -On the toolbar, open the Flow settings ![settings icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/settings.png). Go to the Source data sets tab and click the overflow menu (![overflow menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/actions.png)) next to the data source, and select Edit format. - -Important: Use caution if you edit the source properties. Incorrect selections might produce unexpected results when the data is read or impair the Data Refinery flow job. Inspect the results of the Data Refinery flow carefully. - -" -0999F59BB8E2E2AB7722D57CDBC051A0984ABE45_18,0999F59BB8E2E2AB7722D57CDBC051A0984ABE45," Change the target of a Data Refinery flow - -By default, the target of the Data Refinery is saved as a data asset in the project that you're working in. - -To change the target location, open Flow settings ![settings icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/settings.png) from the toolbar. Go to the Target data set tab, click Select target, and select a different target location. - -" -0999F59BB8E2E2AB7722D57CDBC051A0984ABE45_19,0999F59BB8E2E2AB7722D57CDBC051A0984ABE45," Edit the target properties - -The available properties depend on the data source. Different properties are available for data assets and for data from different kinds of connections. - -To change the target data set's properties, open the Flow settings ![settings icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/settings.png) from the toolbar. Go to the Target data set tab, and click Edit properties. - -" -0999F59BB8E2E2AB7722D57CDBC051A0984ABE45_20,0999F59BB8E2E2AB7722D57CDBC051A0984ABE45," Change the name of the Data Refinery flow target - -The name of the target data set is included in the fields that you can change when you edit the target properties. - -By default, the target of the Data Refinery is saved as a data asset source-file-name_shaped.csv in the project. For example, if the source is mydata.csv, the default name and output for the Data Refinery flow is the data asset mydata_csv_shaped. - -Different properties and naming conventions apply to a target data set from a connection. For example, if the data set is in Cloud Object Storage, the data set is identified in the Bucket and File name fields. If the data set is in a Db2 database, the data set is identified in the Schema name and Table name fields. - -Important: Use caution if you edit the target properties. Incorrect selections might produce unexpected results or impair the Data Refinery flow job. Inspect the results of the Data Refinery flow carefully. - -" -0999F59BB8E2E2AB7722D57CDBC051A0984ABE45_21,0999F59BB8E2E2AB7722D57CDBC051A0984ABE45," Actions on the project page - -" -0999F59BB8E2E2AB7722D57CDBC051A0984ABE45_22,0999F59BB8E2E2AB7722D57CDBC051A0984ABE45," Reopen a Data Refinery flow to continue working - -To reopen a Data Refinery flow and continue refining your data, go to the project’s Assets tab. Under Asset types, expand Flows, click Data Refinery flow. Click the Data Refinery flow name. - -" -0999F59BB8E2E2AB7722D57CDBC051A0984ABE45_23,0999F59BB8E2E2AB7722D57CDBC051A0984ABE45," Duplicate a Data Refinery flow - -To create a copy of a Data Refinery flow, go to the project's Assets tab, expand Flows, click Data Refinery flow. Select the Data Refinery flow, and then select Duplicate from the overflow menu (![overflow menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/actions.png)). The Data Refinery flow is added to the Data Refinery flows list as ""original-name copy 1"". - -" -0999F59BB8E2E2AB7722D57CDBC051A0984ABE45_24,0999F59BB8E2E2AB7722D57CDBC051A0984ABE45," Delete a Data Refinery flow - -To delete a Data Refinery flow, go to the project's Assets tab, expand Flows, click Data Refinery flow. Select the Data Refinery flow, and then select Delete from the overflow menu (![overflow menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/actions.png)). - -" -0999F59BB8E2E2AB7722D57CDBC051A0984ABE45_25,0999F59BB8E2E2AB7722D57CDBC051A0984ABE45," Promote a Data Refinery flow to a space - -Deployment spaces are used to manage a set of related assets in a separate environment from your projects. You use a space to prepare data for a deployment job for Watson Machine Learning. You can promote Data Refinery flows from multiple projects to a single space. Complete the steps in the Data Refinery flow before you promote it because the Data Refinery flow is not editable in a space. - -To promote a Data Refinery flow to a space, go to the project's Assets tab, expand Flows, click Data Refinery flow. Select the Data Refinery flow. Click the overflow menu (![overflow menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/actions.png)) for the Data Refinery flow, and then select Promote. The source file for the Data Refinery flow and any other dependent data will be promoted as well. - -To create or run a job for the Data Refinery flow in a space, go the space’s Assets tab, scroll down to the Data Refinery flow, and select New job (![the run or schedule a job icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/Run-schedule_Blue.png)) from the overflow menu (![overflow menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/actions.png)). If you've already created the job, go to the Jobs tab to edit the job or view the job run details. The shaped output of the Data Refinery flow job will be available on the space’s Assets tab. You must have the Admin or Editor role to view the job details or to edit or run the job. With the Viewer role for the project, you can only view the job details. You can use the shaped output as input data for a job in Watson Machine Learning. - -" -0999F59BB8E2E2AB7722D57CDBC051A0984ABE45_26,0999F59BB8E2E2AB7722D57CDBC051A0984ABE45,"Restriction:When you promote a Data Refinery flow from a project to a space and the target of the Data Refinery flow is a connected data asset, you must manually promote the connected data asset. This action ensures that the connected data asset's data is updated when you run the Data Refinery flow job in the space. Otherwise, a successful run of the Data Refinery flow job will create a new data asset in the space. - -For information about spaces, see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html). - -Parent topic:[Refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_0,9C03418999E6B01345837D9DD0F8E0410ED5CB7D," CLEANSE - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_1,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Convert column type -When you open a file in Data Refinery, the Convert column type operation is automatically applied as the first step if it detects any nonstring data types in the data. Data types are automatically converted to inferred data types. To change the automatic conversion for a selected column, click the overflow menu (![overflow menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/actions.png)) for the step and select Edit. As with any other operation, you can undo the step. The Convert column type operation is reapplied every time that you open the file in Data Refinery. Automatic conversion is applied as needed for file-based data sources only. (It does not apply to a data source from a database connection.) - -To confirm what data type each column's data was converted to, click Edit from the overflow menu (![overflow menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/actions.png)) to view the data types. The information includes the format for date or timestamp data. - -If the data is converted to an Integer or to a Decimal data type, you can specify the decimal symbol and the thousands grouping symbol for all applicable columns. Strings that are converted to the Decimal data type use a dot for the decimal symbol and a comma for the thousands grouping symbol. Alternatively, you can select comma for the decimal symbol and dot or a custom symbol for the thousands grouping symbol. The decimal symbol and the thousands grouping symbol cannot be the same. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_2,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"The source data is read from left to right until a terminator or an unrecognized character is encountered. For example, if you are converting string data 12,834 to Decimal and you do not specify what to do with the comma (,), the data will be truncated to 12. Similarly, if the source data has multiple dots (.), and you select dot for the decimal symbol, the first dot is used as the decimal separator and the digits following the second dot are truncated. A source string of 1.834.230,000 is converted to a value of 1.834. - -The Convert column type operation automatically converts these date and timestamp formats: - - - -* Date: ymd, ydm -* Timestamp: ymdHMS, ymdHM, ydmHMS, ydmHM - - - -Date and Timestamp strings must use four digits for the year. - -You can manually apply the Convert column type operation to change the data type of a column at any point in the Data Refinery flow. You can create a new column to hold the result of this operation or you can overwrite the existing column. - -Tip: A column's data type determines the operations that you can use. Changing the data type can affect which operations are relevant for that column. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_3,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Video transcript - - - -1. The Convert column type operation automatically converted the first column from String to Integer. Let's change the data types of the other three columns. -2. To change the data type of european column from string to decimal, select the column and then edit the Convert column type operation step. -3. To change the data type of european column from string to decimal, select the column and then edit the Convert column type operation step. -4. Select Decimal. -5. The column uses the comma delimiter so select Comma (,) for the decimal symbol. -6. Select the next column, DATETIME. Select Timestamp and a format. -7. Click Apply. -8. The columns are now Integer, Decimal, Date, and Timestamp data types The Convert column type step in the Steps panel is updated. - - - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_4,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Convert column value to missing -Convert values in the selected column to missing values if they match values in the specified column or they match a specified value. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_5,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Video transcript - - - -1. The Convert column value to missing operation converts the values in a selected column to missing values if they match the values in a specified column or if they match a specified value. -2. A missing value is equivalent to an SQL NULL, which is a field with no value. It is different from a zero value or a value that contains spaces. -3. You can use the Convert column value to missing operation when you think that the data would be better represented as missing values. For example, when you want to use missing values in a Replace missing values operation or in a Filter operation. -4. Let's use the Convert column value to missing operation to change values to missing based on a matched value. -5. Notice that the DESC column has many rows with the value CANCELLED ORDER. Let's convert the CANCELLED ORDER strings to missing values. -6. The Convert column value to missing operation is under the CLEANSE category. -7. Type the string to replace with missing values. -8. The values that were formerly CANCELLED ORDER are now missing values. - - - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_6,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Extract date or time value -Extract a selected portion of a date or time value from a column with a date or timestamp data type. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_7,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Video transcript - - - -1. The Extract date or time value operation extracts a selected portion of a date or time value from a column that is a date or timestamp data type. -2. The DATE column is a String data type. First, let's use the Convert column type operation to convert it to the Date data type. -3. Select the Convert column type operation from the DATE column's menu. Select Date. -4. Select a Date format. -5. The DATE column is now a date data type. -6. The ISO Date format is used when the String data type was converted to the Date data type. For example, the string 01/08/2018 was converted to the date 2018-01-08. -7. Now we can extract the year portion of the date into a new column. -8. The Extract date or time value operation is under the CLEANSE category. -9. Select Year for the portion of the date to extract, and type YEAR for the new column name. -10. The year portion of the DATE column is in the new column, YEAR. -11. The Steps panel displays the Extract date or time value operation. - - - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_8,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Filter -Filter rows by the selected columns. Keep rows with the selected column values; filter out all other rows. - -For these string Filter operators, do not enclose the value in quotation marks. If the value contains quotation marks, escape them with a slash character. For example: ""text"": - - - -* Contains -* Does not contain -* Starts with -* Does not start with -* End with -* Does not end with - - - -Folowing are the operators for numeric, string, and Boolean (logical), and date and timestamp columns: - - - - Operator Numeric String Boolean Date and timestamp - - Contains ✓ - Does not contain ✓ - Does not end with ✓ - Does not start with ✓ - Ends with ✓ - Is between two numbers ✓ - Is empty ✓ ✓ ✓ - Is equal to ✓ ✓ ✓ - Is false ✓ - Is greater than ✓ ✓ - Is greater than or equal to ✓ ✓ - Is in ✓ ✓ - Is less than ✓ ✓ - Is less than or equal to ✓ ✓ - Is not empty ✓ ✓ ✓ - Is not equal to ✓ ✓ ✓ - Is not in ✓ ✓ - Is not null ✓ - Is null ✓ ✓ - Is true ✓ - Starts with ✓ - - - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_9,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Video transcript - - - -1. Use the Filter operation to filter rows by the selected columns. You can apply multiple conditions in one Filter operation. -2. Use a regular expression to filter out all the rows except those where the string in the Emp ID column starts with 8. -3. Filter the rows by two states abbreviations. -4. Click Apply. Only the rows where Emp ID starts with 8 and State is AR or TX are in the table. -5. The rows are now filtered by AR and PA. The Filter step in the Steps panel is updated. - - - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_10,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Remove column -Remove the selected column. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_11,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Video transcript - - - -1. Use the Remove column operation to quickly remove a column from a data asset. -2. The quickest way to remove a column is from the column's menu. -3. The name of the removed column is in the Steps panel. -4. Remove another column. -5. The name of the removed column is in the Steps panel. - - - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_12,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Remove duplicates -Remove rows with duplicate column values. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_13,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Video transcript - - - -1. The Remove duplicates operation removes rows that have duplicate column values. -2. The data set has 43 rows. Many of the rows in the APPLYCODE column have duplicate values. We want to reduce the data set to the rows where each value in the APPLYCODE column occurs only once. -3. Select the Remove duplicates operation from the APPLYCODE column's menu. -4. The Remove duplicates operation removed each occurrence of a duplicate value starting from the top row. The data set is now 4 rows. - - - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_14,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Remove empty rows -Remove rows that have a blank or missing value for the selected column. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_15,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Video transcript - - - -1. The Remove empty rows operation removes rows that have a blank or missing value for the selected column. -2. A missing value is equivalent to an SQL NULL, which is a field with no value. It is different from a zero value or a value that contains spaces. -3. The data set has 43 rows. Many of the rows in the TRACK column have missing values. We want to reduce the data set to the rows that have a value in the TRACK column. -4. Select the Remove empty rows operation from the TRACK column's menu. -5. The Remove empty rows operation removed each row that had a blank or missing value in the TRACK column. The data set is now 21 rows. - - - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_16,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Replace missing values -Replace missing values in the column with a specified value or with the value from a specified column in the same row. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_17,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Video transcript - - - -1. The Replace missing values operation replaces missing values in a column with a specified value or with the value from a specified column in the same row. -2. The STATE column has many rows with empty values. We want to replace those empty values with a string. -3. The Replace missing values operation is under the CLEANSE category. -4. For the State column, replace the missing values with the string Incomplete. -5. The missing values now have the value Incomplete. -6. The Steps panel displays the Replace missing values operation. - - - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_18,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Replace substring -Replace the specified substring with the specified text. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_19,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Video transcript - - - -1. The Replace substring operation replaces a substring with text that you specify. -2. The DECLINE column has many rows that include the string BANC. We want to replace this string with BANK. -3. The Replace substring operation is under the CLEANSE category. -4. Type the string to replace and the replacement string. -5. All occurrences of the string BANC have been replaced with BANK. -6. The Steps panel displays the Replace substring operation. - - - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_20,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Substitute -Obscure sensitive information from view by substituting a random string of characters for the actual data in the selected column. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_21,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Video transcript - - - -1. The Substitute operation obscures sensitive information by substituting a random string of characters for the data in the selected column. -2. The quickest way to substitute the data in a column is to select Substitute from the column's menu. -3. The Substitute operation shows in the Steps panel. -4. Substitute values in another column. -5. The second Substitute operation shows in the Steps panel. - - - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_22,9C03418999E6B01345837D9DD0F8E0410ED5CB7D," Text - -You can apply text operations only to string columns. You can create a new column to hold the result of an operation or you can overwrite the existing column. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_23,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Text > Collapse spaces -Collapse multiple, consecutive spaces in the text to a single space. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_24,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Text > Concatenate string -Link together any string to the text. You can prepend the string to the text, append the string to the text, or both. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_25,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Text > Lowercase -Convert the text to lowercase. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_26,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Text > Number of characters -Return the number of characters in the text. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_27,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Text > Pad characters -Pad the text with the specified string. Specify whether to pad the text on the left, right, or both the left and right. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_28,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Text > Substring -Create substrings from the text that start at the specified position and have the specified length. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_29,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Text > Title case -Convert the text to title case. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_30,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Text > Trim quotes -Remove single or double quotation marks from the text. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_31,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Text > Trim spaces -Remove leading, trailing, and extra spaces from the text. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_32,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Text > Uppercase -Convert the text to uppercase. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_33,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Video transcript - - - -1. You can apply a Text operation to string columns. Create a new column for the result or overwrite the existing column. -2. First, concatenate a string to the values in the WORD column. -3. Available Text operations. -4. Concatenate the string to the right side, append with a space, and type up. -5. The values in the WORD column are appended with a space and the word up. -6. The Text operation displays in the Steps panel. -7. Next, pad the values in the ANIMAL column with a string. -8. Pad the values in the ANIMAL column with ampersand (&) symbols to the right for a minimum of 7 characters. -9. The values in the ANIMAL column are padded with the & symbol so that each string is at least seven characters. -10. Notice that the opossum, pangolin, platypus, and hedgehog values do not have a padding character because those strings were already seven or more characters long. -11. Next, use Substring to remove the t character from the ID column. -12. Select Position 2 to start the new string at that position. Select Length 4 for a four-character length string. -13. The initial t character in the ID column is removed in the NEW-ID column. - - - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_34,9C03418999E6B01345837D9DD0F8E0410ED5CB7D," COMPUTE - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_35,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Calculate -Perform a calculation with another column or with a specified value. The operators are: - - - -* Addition -* Division -* Exponentiation -* Is between two numbers -* Is equal to -* Is greater than -* Is greater than or equal to -* Is less than -* Is less than or equal to -* Is not equal to -* Modulus -* Multiplication -* Subtraction - - - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_36,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Video transcript - - - -1. The Calculate operation performs a calculation, such as addition or subtraction, with another column or with a specified value. -2. Select the column to begin. -3. Available calculations -4. Now select the second column for the Addition calculation. -5. And apply the change. -6. The id column is updated, and the Steps panel shows the completed operation. -7. You can also access the operations from the column's menu. -8. This time, select Is between two numbers. Specify the range, and create a new column for the results. -9. The new column displays in the table and the new calculate operation displays in the Steps panel. -10. This time, select Is equal to to compare two columns, and create a new column for the results. -11. The new column displays in the table and the new calculate operation displays in the Steps panel. - - - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_37,9C03418999E6B01345837D9DD0F8E0410ED5CB7D," Math - -You can apply math operations only to numeric columns. You can create a new column to hold the result of an operation or you can overwrite the existing column. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_38,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Math > Absolute value -Get the absolute value of a number. -Example: The absolute value of both 4 and -4 is 4. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_39,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Math > Arc cosine -Get the arc cosine of an angle. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_40,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Math > Ceiling -Get the nearest integer of greater value, also known as the ceiling of the number. -Examples: The ceiling of 2.31 is 3. The ceiling of -2.31 is -2. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_41,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Math > Exponent -Get a number raised to the power of the column value. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_42,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Math > Floor -Get the nearest integer of lesser value, also known as the floor of the number. -Example: The floor of 2.31 is 2. The floor of -2.31 is -3. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_43,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Math > Round -Get the whole number nearest to the column value. If the column value is a whole number, return it. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_44,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Math > Square root -Get the square root of the column value. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_45,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Video transcript - - - -1. Apply a Math operation to the values in a column. Create a new column for the results or overwrite the existing column. -2. Available Math operations -3. Apply Absolute value to the column's values. -4. Create new column for results. -5. The new column is added to the table, and the Math operation displays in the Steps panel. -6. You can also access the operation from the column's menu. -7. Apply Round to the ANGLE column's values. -8. Create a new column for results. -9. The new column is added to the table, and the new Math operation displays in the Steps panel. - - - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_46,9C03418999E6B01345837D9DD0F8E0410ED5CB7D," ORGANIZE - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_47,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Aggregate -Apply summary calculations to the values of one or more columns. Each aggregation creates a new column. Optionally, select Group by columns to group the new column by another column that defines a characteristic of the group, for example, a department or an ID. You can group by multiple columns. You can combine multiple aggregations in a single operation. - -The available aggregate operations depend on the data type. - -Numeric data: - - - -* Count unique values -* Minimum -* Maximum -* Sum -* Standard deviation -* Mean - - - -String data: - - - -* Combine row values -* Count unique values - - - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_48,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Video transcript - - - -1. The Aggregate operation applies summary calculations to the values of one or more columns. Each aggregation creates a new column. -2. Available aggregations depend on whether the data is numeric or string data. -3. The available operators depend on the column's data type. Available operators for numeric data. -4. With the UniqueCarrier text column selected, you can see the available operators for string data. -5. We will count how many unique values are in the UniqueCarrier column. This aggregation will show how many airlines are in the data set. -6. We have 22 airlines in the new Airlines column. The other columns are deleted. -7. The Aggregate operation displays in the Steps panel. -8. Let's start over to show an aggregation on numeric data. -9. Show the average (mean value) of the arrival delays. -10. The average value of all the arrival delays is in the new MeanArrDelay column. The other columns are deleted. -11. You can also group the aggregated column by another column that defines a characteristic of the group. -12. Let's edit the Aggregate step by adding a Group by selection so we can see the average of arrival delays by airline. -13. Group the results by the UniqueCarrier column. -14. The average arrival delays are now grouped by airline. -15. The Steps panel displays the Aggregate operation. - - - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_49,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Concatenate -Concatenate the values of two or more columns. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_50,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Video transcript - - - -1. The Concatenate operation concatenates the values of two or more columns. -2. The Concatenate operation is under the ORGANIZE category. -3. Select the columns to concatenate. -4. Select a separator to use between the concatenated values. -5. Type a name for the column for the concatenated values. -6. The new column can display as the right-most column in the data set, or next to the original column. -7. Keep the original columns, and apply the changes. -8. The new DATE column shows the concatenated values from the other three columns with a semicolon separator. -9. The Concatenate operation displays in the Steps panel. -10. The DATE column is a String data type. Let's use the Convert column type operation to convert it to the Date data type. -11. Select the Convert column type operation from the DATE column's menu. Select Date. -12. Select a date format and create a new column for the result. -13. Place the new column next to the original column, and apply the changes. -14. The new column displays with the converted date format. -15. The Convert column type operation displays in the Steps panel. -16. The ISO Date format is used when the String data type was converted to the Date data type. For example, the string 2004;2;3 was converted to the date 2004-02-03. - - - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_51,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Conditional replace -Replace the values in a column based on conditions. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_52,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Video transcript - - - -1. Use the Conditional replace operation to replace the values in a column based on conditions. -2. First, let's specify conditions to replace data in the CODE string column and create a new column for the results. -3. Available condition operators for string data. -4. Add the first condition - CONDITION 1: CODE Is equal to value C replace with COMPLETE. -5. Add a second condition - CONDITION 2: CODE Is equal to value I replace with INCOMPLETE. -6. Specify what to do with any values that do not meet the conditions. Here we will enter two double quotation marks to indicate an empty string. -7. Create a new column for the results. -8. The new column, STATUS, shows the conditional replacements from the CODE column. -9. The Conditional replace operation shows in the Steps panel. -10. Next, let's specify conditions to replace data in the INPUT integer column and create a new column for the results. -11. Available condition operators for numeric data. -12. Add the first condition - CONDITION 1: INPUT Is less than or equal to value 3 replace with value LOW. -13. Add a second condition - CONDITION 2: INPUT Is in values 4,5,6 replace with value MED. -14. Add a third condition - CONDITION 3: INPUT Is greater than or equal to value 7 replace with value HIGH. -15. Specify what to do with any values that do not meet the conditions. -16. Create a new column for the results. -17. The new column, RATING, shows the conditional replacements from the INPUT column. -18. The Conditional replace operation shows in the Steps panel. - - - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_53,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Join -Combine data from two data sets based on a comparison of the values in specified key columns. Specify the type of join to perform, select the columns (join keys) in both data sets that you want to compare, and select the columns that you want in the resulting data set. - -The join key columns in both data sets need to be compatible data types. If the Join operation is the first step that you add, check whether the Convert column type operation automatically converted the data type of the join key columns in the first data set when you opened the file in Data Refinery. Also, depending where the Join operation is in the Data Refinery flow, you can use the Convert column type operation to ensure that the join key columns' data types match. Click a previous step in Steps panel to see the snapshot view of the step. - -The join types include: - - - - Join type Description - - Left join Returns all rows in the original data set and return only matching rows in the joining data set. Returns one row in the original data set for each matching row in the joining data set. - Right join Returns all rows in the joining data set and return only matching rows in the original data set. Returns one row in the joining data set for each matching row in the original data set. - Inner join Returns only the rows in each data set that match rows in the other data set. Returns one row in the original data set for each matching row in the joining data set. - Full join Returns all rows in both data sets. Blends rows in the original data set with matching rows in the joining data set. - Semi join Returns only the rows in the original data set that match rows in the joining data set. Returns one row in the original data set for all matching rows in the joining data set. - Anti join Returns only the rows in the original data set that do not match rows in the joining data set. - - - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_54,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Video transcript - - - -1. The customers.csv data set contains information about your company's customers, and the sales.csv data set contains information about your company's sales representatives. -2. The data sets share the SALESREP_ID column. -3. The customers.csv data set is open in Data Refinery. -4. The Join operation can combine the data from these two data sets based on a comparison of the values in the SALESREP_ID column. -5. You want to do an inner join to return only the rows in each data set that match in the other data set. -6. You can add a custom suffix to append to columns that exist in both data sets to see the source data set for that column. -7. Select the sales.csv data set to join with the customers.csv data set. -8. For the join key, begin typing the column name to see a filtered list. The SALESREP_ID column links the two data sets. -9. Next, select the columns to include. Duplicate columns will display the suffix appended. -10. Now apply the changes. -11. The Join operation displays in the Steps panel. -12. Now, the data set is enriched with the columns from the customers.csv and sales.csv data sets. - - - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_55,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Rename column -Rename the selected column. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_56,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Video transcript - - - -1. Use the Rename column operation to quickly rename a column. -2. The fastest way to rename a column is to edit the column's name in the table. -3. Edit the name and press Enter on your keyboard. -4. The Rename column step shows the old name and the new name. -5. Now rename another column. -6. The Steps panel shows the BANKS column was renamed to DOGS. -7. Now rename the last column. -8. The Steps panel shows the RATIOS column was renamed to BIRDS. - - - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_57,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Sample -Generate a subset of your data by using one of the following methods. Sampling steps from UI operations apply only when the flow is run. - - - -* Random sample: Each data record of the subset has an equal probability of being chosen. -* Stratified sample: Divide the data into one or more subgroups called strata. Then generate one random sample that contains data from each subgroup. - - - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_58,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Video transcript - - - -1. The Sample operation generates a subset of your data. -2. Use the Sample operation when you have a large amount of data and you want to work on a representative sample for faster prototyping. -3. The Sample operation is in the ORGANIZE category. -4. Choose one of two methods to create a sample. -5. With a random sample, each row has an equal probability to be included in the sample data. -6. You can choose a random sample by number of rows or by percentage of data. -7. A stratified sample builds on a random sample. As with a random sample, you specify the amount of data in the sample (rows or percentage). -8. With a stratified sample, you divide the data into one or more subgroups called strata. Then you generate one random sample that contains customized data from each subgroup. -9. For Method, if you choose Auto, you select one column for the strata. -10. If you choose Manual, you specify one or more strata and for each strata you specify filter conditions that define the rows in each strata. -11. In this airline data example, we'll create two strata. One strata defines 50% of the output to have New York City destination airports and the second the strata defines the remaining 50% to have a specified flight distance. -12. In Specify details for this strata box, enter the percentage of the sample that will represent the conditions that you will specify in this first strata. The strata percentages must total 100%. -13. Available operators for string data. -14. 50% of the sample will have New York City area destination airports. -15. Click Save to save the first strata. -16. The first strata, identified as Strata0, has one condition. In this strata, 50% of sample must meet the condition. -17. In Specify details for this strata box, enter the percentage of the sample that will represent the conditions that you will specify in the second strata. -18. Available operators for numeric data. -19. 50% of the sample will be for flights with a distance greater than 500. -20. Click Save to save the second strata. -21. The second strata, identified as Strata1, has one condition. In this strata, 50% of the sample must meet the condition. -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_59,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"22. If you use multiple strata, the Sample operation internally applies a Filter operation with an OR condition on the strata. Depending on the data, the conditions, and the size of the sample, the results of using one strata with multiple conditions might differ from using multiple strata. -23. Unlike the other Data Refinery operations, the Sample operation changes the data set only after you create and run a job for the Data Refinery flow. -24. The Sample step shows in the Steps panel. -25. The data set is over 10000 rows. -26. Save and create a job for the Data Refinery flow. -27. The new asset file is added to the project for the output of the Data Refinery flow. -28. View the output file. -29. There are 10 rows (50% of the sample) with New York City airports in the Dest column, but 17 rows in the Distance column with values greater than 500. -30. These results are because the strata were applied with an OR condition and there was overlapping data for the conditions specified in first strata where the rows that were filtered by Dest containing New York City airports had Distance values greater than 500. -31. The output file in Data Refinery shows the reduced size. - - - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_60,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Sort ascending -Sort all the rows in the table by the selected column in ascending order. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_61,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Sort descending -Sort all the rows in the table by the selected column in descending order. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_62,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Video transcript - - - -1. Quickly sort all the rows in a data set by sorting the rows in a selected column. -2. The fastest way to sort columns is from the column's menu. -3. You can sort the rows in ascending or descending order. -4. Sort ascending. -5. The order of all the rows in the table is updated by the Sort operation of the first column. -6. The Sort operation shows in the Steps panel. -7. Sort descending. -8. The order of all the rows in the table is changed by the Sort operation of the second column. -9. The second Sort operation shows in the Steps panel. -10. Sort ascending. -11. The order of all the rows in the table is changed by the Sort operation of the third column. -12. The third Sort operation shows in the Steps panel. - - - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_63,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Split column -Split the column by non-alphanumeric characters, position, pattern, or text. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_64,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Video transcript - - - -1. The Split column operation splits one column into two or more columns based on non-alphanumeric characters, text, pattern, or position. -2. To begin, let's split the YMD column into YEAR, MONTH, and DAY columns. -3. The Split column operation is in the ORGANIZE category. -4. First, select the YMD column to split. -5. The tabs offer four choices for ways to split the column. -6. DEFAULT uses any non-alphanumeric character that's in the column values to split the column. -7. In TEXT, you select a character or enter text to split the column. -8. In PATTERN, you enter a regular expression based on R syntax to determine where to split the column. -9. In POSITION, you specify at what position to split the column. -10. We want to split the YMD column by the asterisk (*), which is a non-alphanumeric character, so we'll select the DEFAULT tab. -11. Split the YMD column into three new columns - YEAR, MONTH, and DAY. -12. The three new columns, YEAR, MONTH, and DAY, are added to the data set. -13. The Split column operation shows in the Steps panel. -14. Next split the FLIGHT column into two columns - One for the airline code and one for the flight number. Because airline codes are two characters, we can split the column by position. -15. Click the POSITION tab, and then type 2 in the Positions box. -16. Split the FLIGHT column into two new columns - AIRLINE and FLTNMBR. -17. The two new columns, AIRLINE and FLIGHTNBR, are added to the data set. -18. The Split column operation shows in the Steps panel. - - - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_65,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Union -Combine the rows from two data sets that share the same schema and filter out the duplicates. If you select Allow a different number of columns and allow duplicate values, the operation is a UNION ALL command. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_66,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Video transcript - - - -1. The Union operation combines the rows from two data sets that share the same schema. -2. This data set has four columns and six rows. The data types from left to right are String, String, Decimal, String. -3. When the data set was loaded into Data Refinery, the AUTOMATIC Convert column type operation automatically converted the PRICE column to the Decimal data type. -4. The columns in the second data set must be compatible to the data types in this data set. -5. Select the data set to combine with the current data set. -6. When you preview the new data set, you see that it also has four columns. However, the PRICE column is a String data type. -7. Before you apply the Union operation, you need to delete the AUTOMATIC Convert column type step so that the PRICE column is the same data type as the PRICE column in the new data set (String). -8. The PRICE column is now string data. -9. Now repeat the union operation. -10. The new data set is added to the current data set. The data set is increased to 12 rows. -11. The Union operation shows in the Steps panel. -12. Now add a data set that has a different number of columns. The matching columns must still be compatible data types. -13. Select the data set to combine with the current data set. -14. When you preview the new data set, you see that it has one more column than the original data set. The fifth column is TYPE. -15. Select Allow a different number of columns and allow duplicate values. -16. Apply the Union operation. -17. The new data set is added to the current data set. The data set is increased to 18 rows. -18. The additional column, TYPE, is added to the data set. -19. The Union operation shows in the Steps panel. - - - -Tip for the Union operation: If you receive an error about incompatible schemas, check if the automatic Convert column type operation changed the data types of the first data set. Delete the Convert column type step and try again. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_67,9C03418999E6B01345837D9DD0F8E0410ED5CB7D," NATURAL LANGUAGE - -Remove stop words Remove common words of the English language, such as “the” or “and.” Stop words usually have little semantic value for text analytics algorithms and models. Remove the stop words to reduce the data volume and to improve the quality of the data that you use to train machine learning models. - -Optional: To confirm which words were removed, apply the Tokenize operation (by words) on the selected column, and then view the statistics for the words in the Profile tab. You can undo the Tokenize step later in the Data Refinery flow. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_68,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Video transcript - - - -1. The Remove stop words operation removes common words of the English language from the data set. Stop words usually have little semantic value for text analytics algorithms and models. Remove the stop words to reduce the data volume and to improve the data quality. -2. The Remove stop words operation removes these words: a, an, and, are, as, at, be, but, by, for, from, if, in, into, is, it, no, not, of, on, or, such, that, the, their, then, there, these, they, this, to, was, will, with. -3. The Remove stop words operation is under the NATURAL LANGUAGE category. -4. Select the STRING column. -5. Click Apply to remove the stop words. -6. The stop words are removed from the STRING column. -7. The Remove stop words operation shows in the Steps panel. - - - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_69,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Tokenize -Break up English text into words, sentences, paragraphs, lines, characters, or by regular expression. - -" -9C03418999E6B01345837D9DD0F8E0410ED5CB7D_70,9C03418999E6B01345837D9DD0F8E0410ED5CB7D,"Video transcript - - - -1. The Tokenize operation breaks up English text into words, sentences, paragraphs, lines, characters, or by regular expression. -2. The Tokenize operation is under the NATURAL LANGUAGE category. -3. Select the STRING column. -4. Available tokenize options. -5. Create a new column with the name WORDS. -6. The Tokenize operation has taken the words from the STRING column and created a new column, WORDS, with a row for each word. -7. The Tokenize operation shows in the Steps panel. - - - -Parent topic:[Refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) -" -82B2C3E8A59998DAA1BC70938A0155EC8C9ED3A1_0,82B2C3E8A59998DAA1BC70938A0155EC8C9ED3A1," Validating your data in Data Refinery - -At any time after you've added data to Data Refinery, you can validate your data. Typically, you'll want to do this at multiple points in the refinement process. - -To validate your data: - - - -1. From Data Refinery, click the Profile tab. -2. Review the metrics for each column. -3. Take appropriate actions, as described in the following sections, depending on what you learn. - - - -" -82B2C3E8A59998DAA1BC70938A0155EC8C9ED3A1_1,82B2C3E8A59998DAA1BC70938A0155EC8C9ED3A1," Frequency - -Frequency is the number of times that a value, or a value in a specified range, occurs. Each frequency distribution (bar) shows the count of unique values in a column. - -Review the frequency distribution to find anomalies in your data. If you want to cleanse your data of those anomalies, simply remove the values. - -For Integer and Date/Time columns, you can customize the number of bins (groupings) that you want to see. In the default multi-column view, the maximum is 20. If you expand the frequency chart row, the maximum is 50. - -" -82B2C3E8A59998DAA1BC70938A0155EC8C9ED3A1_2,82B2C3E8A59998DAA1BC70938A0155EC8C9ED3A1," Statistics - -Statistics are a collection of quantitative data. The statistics for each column show the minimum, maximum, mean, and number of unique values in that column. - -Depending on a column's data type, the statistics for each column will vary slightly. For example, statistics for a column of data type integer have minimum, maximum, and mean values while statistics for a column of data type string have minimum length, maximum length, and mean length values. - -Parent topic:[Refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) -" -4B74E9409284F77897DB58B77271337A4493A410_0,4B74E9409284F77897DB58B77271337A4493A410," Supported data sources for Data Refinery - -Data Refinery supports the following data sources in connections. - -" -4B74E9409284F77897DB58B77271337A4493A410_1,4B74E9409284F77897DB58B77271337A4493A410," IBM services - - - -* [IBM Cloud Data Engine](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sqlquery.html)(Supports source connections only) -* [IBM Cloud Databases for DataStax](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-datastax.html) -* [IBM Cloud Databases for MongoDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mongodb.html)(Supports source connections only) -* [IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) -* [IBM Cloudant](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloudant.html) -* [IBM Cognos Analytics](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cognos.html)(Supports source connections only) -* [IBM Data Virtualization Manager for z/OS](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-datavirt-z.html) -* [IBM Db2](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2.html) -* [IBM Db2 Big SQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-bigsql.html) -* [IBM Db2 on Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-cloud.html) -" -4B74E9409284F77897DB58B77271337A4493A410_2,4B74E9409284F77897DB58B77271337A4493A410,"* [IBM Db2 Warehouse](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-wh.html) -* [IBM Planning Analytics](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-plananalytics.html)(Supports source connections only) -* [IBM Watson Query](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-data-virtual.html)(Supports source connections only) - - - -" -4B74E9409284F77897DB58B77271337A4493A410_3,4B74E9409284F77897DB58B77271337A4493A410," Third-party services - - - -* [Amazon RDS for MySQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-mysql.html) -* [Amazon RDS for Oracle](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-oracle.html) -* [Amazon RDS for PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-postresql.html) -* [Amazon Redshift](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-redshift.html) -* [Amazon S3](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-amazon-s3.html) -* [Apache Cassandra](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cassandra.html) -* [Apache Derby](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-derby.html) -* [Apache HDFS](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-hdfs.html) -* [Apache Hive](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-hive.html)(Supports source connections only) -* [Box](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-box.html) -* [Cloudera Impala](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloudera.html)(Supports source connections only) -" -4B74E9409284F77897DB58B77271337A4493A410_4,4B74E9409284F77897DB58B77271337A4493A410,"* [Dremio](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-dremio.html) -* [Dropbox](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-dropbox.html) -* [Elasticsearch](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-elastic.html) -* [FTP](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-ftp.html) -* [Generic S3](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-generics3.html) -* [Google BigQuery](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-bigquery.html) -* [Google Cloud Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloud-storage.html) -* [HTTP](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-http.html)(Supports source connections only) -* [MariaDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mariadb.html) -* [Microsoft Azure Blob Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azureblob.html) -* [Microsoft Azure Cosmos DB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cosmosdb.html) -* [Microsoft SQL Server](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sql-server.html) -" -4B74E9409284F77897DB58B77271337A4493A410_5,4B74E9409284F77897DB58B77271337A4493A410,"* [MongoDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mongo.html)(Supports source connections only) -* [OData](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-odata.html)(Supports source connections only) -* [Oracle](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-oracle.html) -* [PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-postgresql.html) -* [Presto](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-presto.html)(Supports source connections only) -* [SAP OData](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sapodata.html)(Supports source connections only) -* [SingleStoreDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-singlestore.html) -* [Snowflake](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-snowflake.html) - - - -Parent topic: [Refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) -" -653F494EE7F3D688FCAEB05AFF303354D718EAB5_0,653F494EE7F3D688FCAEB05AFF303354D718EAB5," Refining data - -To refine data, you take it from one location, cleanse and shape it, and then load the result into a different location. You can cleanse and shape tabular data with a graphical flow editor tool called Data Refinery. - -When you cleanse data, you fix or remove data that is incorrect, incomplete, improperly formatted, or duplicated. When you shape data, you customize it by filtering, sorting, combining or removing columns. - -You create a Data Refinery flow as a set of ordered operations on data. Data Refinery includes a graphical interface to profile your data to validate it and over 20 customizable charts that give you insights into your data. - -Data format {: #dr-format} : Avro, CSV, JSON, Microsoft Excel (xls and xlsx formats. First sheet only, except for connections and connected data assets.), Parquet, SAS with the ""sas7bdat"" extension (read only), TSV (read only), or delimited text data asset : Tables in relational data sources - -Data size : Any. Data Refinery operates on a sample subset of rows in the data set. The sample size is 1 MB or 10,000 rows, whichever comes first. However, when you run a job for the Data Refinery flow, the entire data set is processed. If the Data Refinery flow fails with a large data asset, see workarounds in [Troubleshooting Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/ts_index.html). - - - -* [Prerequisites](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html?context=cdpaas&locale=enprereqs) -* [Source file limitations](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html?context=cdpaas&locale=enlimitsource) -" -653F494EE7F3D688FCAEB05AFF303354D718EAB5_1,653F494EE7F3D688FCAEB05AFF303354D718EAB5,"* [Target file limitations](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html?context=cdpaas&locale=enlimittarget) -* [Data set previews](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html?context=cdpaas&locale=enpreviews) -* [Refine your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html?context=cdpaas&locale=enrefine) - - - -" -653F494EE7F3D688FCAEB05AFF303354D718EAB5_2,653F494EE7F3D688FCAEB05AFF303354D718EAB5," Prerequisites - -Before you can refine data, you need [a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) that uses Cloud Object Storage. You can use the sandbox project or create a new project. - - - -* Watch this video to see how to create a project - - - -If you have data in cloud or on-premises data sources, you'll need to [add connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) to those sources and you'll need to [add data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html) from each connection. If you want to be able to save refined data to cloud or on-premises data sources, create connections for this purpose as well. Source connections can be used only to read data; target connections can be used only to load (save) data. When you create a target connection, be sure to use credentials that have Write permission or you won't be able to save your Data Refinery flow output to the target. - -Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. - - - -* Watch this video to see how to create a connection and add connected data to a project - - - -" -653F494EE7F3D688FCAEB05AFF303354D718EAB5_3,653F494EE7F3D688FCAEB05AFF303354D718EAB5," Source file limitations - -" -653F494EE7F3D688FCAEB05AFF303354D718EAB5_4,653F494EE7F3D688FCAEB05AFF303354D718EAB5," CSV files - -Be sure that CSV files are correctly formatted and conform to the following rules: - - - -* Two consecutive commas in a row indicate an empty column. -* If a row ends with a comma, an additional column is created. - - - -" -653F494EE7F3D688FCAEB05AFF303354D718EAB5_5,653F494EE7F3D688FCAEB05AFF303354D718EAB5," White-space characters are considered as part of the data - -If your data includes columns that contain white space (blank) characters, Data Refinery considers those white-space characters as part of the data, even though you can't see them in the grid. Some database tools might pad character strings with white-space characters to make all the data in a column the same length and this change affects the results of Data Refinery operations that compare data. - -" -653F494EE7F3D688FCAEB05AFF303354D718EAB5_6,653F494EE7F3D688FCAEB05AFF303354D718EAB5," Column names - -Be sure that column names conform to the following rules: - - - -* Duplicate column names are not allowed. Column names must be unique within the data set. Column names are not case-sensitive. A data set that includes a column name ""Sales"" and another column name ""sales"" will not work. -* The column names are not reserved words in the R programming language. -* The column names are not numbers. A workaround is to enclose the column names in double quotation marks (""""). - - - -" -653F494EE7F3D688FCAEB05AFF303354D718EAB5_7,653F494EE7F3D688FCAEB05AFF303354D718EAB5," Data sets with columns with the ""Other"" data type are not supported in Data Refinery flows - -If your data set contains columns that have data types that are identified as ""Other"" in the Watson Studio preview, the columns will show as the String data type in Data Refinery. However, if you try to use the data in a Data Refinery flow, the job for the Data Refinery flow will fail. An example of a data type that shows as ""Other"" in the preview is the Db2 DECFLOAT data type. - -" -653F494EE7F3D688FCAEB05AFF303354D718EAB5_8,653F494EE7F3D688FCAEB05AFF303354D718EAB5," Target file limitations - -The following limitation applies if you save Data Refinery flow output (the target data set) to a file: - - - -* You can't change the file format if the file is an existing data asset. - - - -" -653F494EE7F3D688FCAEB05AFF303354D718EAB5_9,653F494EE7F3D688FCAEB05AFF303354D718EAB5," Data set previews - -Data Refinery provides support for large data sets, which can be time-consuming and unwieldy to refine. To enable you to work quickly and efficiently, it operates on a subset of rows in the data set while you interactively refine the data. When you run a job for the Data Refinery flow, it operates on the entire data set. - -" -653F494EE7F3D688FCAEB05AFF303354D718EAB5_10,653F494EE7F3D688FCAEB05AFF303354D718EAB5," Refine your data - -The following video shows you how to refine data. - -Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. - -This video provides a visual method to learn the concepts and tasks in this documentation. - - - -* Transcript - -Synchronize transcript with video - - - - Time Transcript - - 00:00 This video shows you how to shape raw data using Data Refinery. - 00:05 To get started refining data from a project, view the data asset and open it in Data Refinery. - 00:14 The ""Information"" pane contains the name for the data flow and for the data flow output, once you've finished refining the data. - 00:23 The ""Data"" tab shows you a sample set of the rows and columns in the data set. - 00:29 To improve performance, you won't see all the rows in the shaper. - 00:33 But rest assured that when you are done refining the data, the data flow will be run on the full data set. - 00:41 The ""Profile"" tab shows you frequency and summary statistics for each of your columns. - 00:49 The ""Visualizations"" tab provides data visualizations for the columns you are interested in. - 00:57 Suggested charts have a blue dot next to their icons. - 01:03 Use the different perspectives available in the charts to identify patterns, connections, and relationships within the data. - 01:12 Now, let's do some data wrangling. - 01:17 Start with a simple operation, like sorting on the specified column - in this case, the ""Year"" column. - 01:27 Say you want to focus on delays just for a specific airline so you can filter the data to show only those rows where the unique carrier is ""United Airlines"". - 01:47 It would be helpful to see the total delay. - 01:50 You can do that by creating a new column to combine the arrival and departure delays. - 01:56 Notice that the column type is inferred to be integer. - 02:00 Select the departure delay column and use the ""Calculate"" operation. -" -653F494EE7F3D688FCAEB05AFF303354D718EAB5_11,653F494EE7F3D688FCAEB05AFF303354D718EAB5," 02:09 In this case, you'll add the arrive delay column to the selected column and create a new column, called ""TotalDelay"". - 02:23 You can position the new column at the end of the list of columns or next to the original column. - 02:31 When you apply the operation, the new column displays next to the departure delay column. - 02:38 If you make a mistake, or just decide to make a change, just access the ""Steps"" panel and delete that step. - 02:46 This will undo that particular operation. - 02:50 You can also use the redo and undo buttons. - 02:56 Next, you'd like to focus on the ""TotalDelay"" column so you can use the ""select"" operation to move the column to the beginning. - 03:09 This command arranges the ""TotalDelay"" column as the first in the list, and everything else comes after that. - 03:21 Next, use the ""group_by"" operation to divide the data into groups by year, month, and day. - 03:32 So, when you select the ""TotalDelay"" column, you'll see the ""Year"", ""Month"", ""DayofMonth"", and ""TotalDelay"" columns. - 03:44 Lastly, you want to find the mean of the ""TotalDelay"" column. - 03:48 When you expand the ""Operations"" menu, in the ""Organize"" section, you'll find the ""Aggregate"" operation, which includes the ""Mean"" function. - 04:08 Now you have a new column, called ""AverageDelay"", that represents the average for the total delay. - 04:17 Now to run the data flow and save and create the job. - 04:24 Provide a name for the job and continue to the next screen. - 04:28 The ""Configure"" step allows you to review what the input and output of your job run will be. - 04:36 And select the environment used to run the job. - 04:41 Scheduling a job is optional, but you can set a date and repeat the job, if you'd like. - 04:51 And you can choose to receive notifications for this job. - 04:56 Everything looks good, so create and run the job. -" -653F494EE7F3D688FCAEB05AFF303354D718EAB5_12,653F494EE7F3D688FCAEB05AFF303354D718EAB5," 05:00 This could take several minutes, because remember that the data flow will be run on the full data set. - 05:06 In the mean time, you can view the status. - 05:12 When the run is compete, you can go back to the ""Assets"" tab in the project. - 05:20 And open the Data Refinery flow to further refine the data. - 05:28 For example, you could sort the ""AverageDelay"" column in descending order. - 05:36 Now, edit the flow settings. - 05:39 On the ""General"" panel, you can change the Data Refinery flow name. - 05:46 On the ""Source data sets"" panel, you can edit the sample or format for the source data set or replace the data source. - 05:56 And on the ""Target data set"" panel, you can specify an alternate location, such as an external data source. - 06:06 You can also edit the properties for the target, such as the write mode, the file format, and change the data set asset name. - 06:21 Now, run the data flow again; but this time, save and view the jobs. - 06:28 Select the job that you want to view from the list and run the job. - 06:41 When the run completes, go back to the project. - 06:46 And on the ""Assets"" tab, you'll see all three files: - 06:51 The original. - 06:54 The first refined data set, showing the ""AverageDelay"" unsorted. - 07:02 And the second data set, showing the ""AverageDelay"" column sorted in descending order. - 07:11 And back on the ""Assets"" tab, there's the Data Refinery flow. - 07:19 Find more videos in the Cloud Pak for Data as a Service documentation. - - - - - -1. Access Data Refinery from within a project. Click New asset > Prepare and visualize data. Then select the data that you want to work with. Alternatively, from the Assets tab of a project, open a file ([supported formats](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html?context=cdpaas&locale=endr-format)) to preview it, and then click Prepare data. - -" -653F494EE7F3D688FCAEB05AFF303354D718EAB5_13,653F494EE7F3D688FCAEB05AFF303354D718EAB5,"2. Use steps to apply operations that cleanse, shape, and enrich your data. Browse [operation categories or search for a specific operation](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/gui_operations.html), then let the UI guide you. You can [enter R code](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/code_operations.html) in the command line and let autocomplete assist you in getting the correct syntax. As you apply operations to a data set, Data Refinery keeps track of them and builds a Data Refinery flow. For each operation that you apply, Data Refinery adds a step. - -Data tab -![Data tab](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/dr-data-tab.png) - -If your data contains non-string data types, the Convert column type GUI operation is automatically applied as the first step in the Data Refinery flow when you open a file in Data Refinery. Data types are automatically converted to inferred data types, such as Integer, Date, or Boolean. You can undo or edit this step. - -3. Click the Profile tab to [validate your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/metrics.html) throughout the data refinement process. - -Profile tab -![Profile tab](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/dr-profile-tab.png) - -4. Click the Visualizations tab to [visualize the data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/visualizations.html) in charts. Uncover patterns, trends, and correlations within your data. - -Visualizations tab -![Visualizations tab](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/dr-viz-tab.png) - -" -653F494EE7F3D688FCAEB05AFF303354D718EAB5_14,653F494EE7F3D688FCAEB05AFF303354D718EAB5,"5. Refine the sample data set to suit your needs. - -6. Click Save and create a job or Save and view jobs in the toolbar to run the Data Refinery flow on the entire data set. Select the runtime and add a one-time or repeating schedule. For information about jobs, see [Creating jobs in Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-dr.html). - -For the actions that you can do as you refine your data, see [Managing Data Refinery flows](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html). - -" -653F494EE7F3D688FCAEB05AFF303354D718EAB5_15,653F494EE7F3D688FCAEB05AFF303354D718EAB5," Next step - -[Analyze your data and build models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html) - -" -653F494EE7F3D688FCAEB05AFF303354D718EAB5_16,653F494EE7F3D688FCAEB05AFF303354D718EAB5," Learn more - - - -* [Manage Data Refinery flows](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/data_flows.html) -* [Quick start: Refine data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-refine.html) - - - -Parent topic: [Preparing data](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/get-data.html) -" -751ABCAB00F67C93C253EC74D686E2CFCC0062AD_0,751ABCAB00F67C93C253EC74D686E2CFCC0062AD," Troubleshooting Data Refinery - -Use this information to resolve questions about using Data Refinery. - - - -* [Cannot refine data from an Excel data asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/ts_index.html?context=cdpaas&locale=endr-excel) -* [Data Refinery flow job fails with a large data asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/ts_index.html?context=cdpaas&locale=enbigdata-dr) - - - -" -751ABCAB00F67C93C253EC74D686E2CFCC0062AD_1,751ABCAB00F67C93C253EC74D686E2CFCC0062AD," Cannot refine data from an Excel data asset - -The Data Refinery flow might fail if it cannot read the data. Confirm the format of the Excel file. By default, the first line of the file is treated as the header. You can change this setting in the Flow settings ![settings icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/settings.png). Go to the Source data sets tab and click the overflow menu (![overflow menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/actions.png)) next to the data source, and select Edit format. You can also specify the first line property, which designates which row is the first row in the data set to be read. Changing these properties affects how the data is displayed in Data Refinery as well as the Data Refinery job run and flow output. - -" -751ABCAB00F67C93C253EC74D686E2CFCC0062AD_2,751ABCAB00F67C93C253EC74D686E2CFCC0062AD," Data Refinery flow job fails with a large data asset - -If your Data Refinery flow job fails with a large data asset, try these troubleshooting tips to fix the problem: - - - -" -B8AA7399868C0AE8DD698C9048EBD50C3F17EF12_0,B8AA7399868C0AE8DD698C9048EBD50C3F17EF12," Visualizing your data in Data Refinery - -Visualizing information in graphical ways gives you insights into your data. You can add steps to your Data Refinery flow while you visualize you data and see the changes. By exploring data from different perspectives with visualizations, you can identify patterns, connections, and relationships within that data as well as quickly understand large amounts of information. - -You can also visualize your data with these same charts in an SPSS Modeler flow. Use the Charts node, which is available under the Graphs section on the node palette. Double-click the Charts node to open the properties pane. Then click Launch Chart Builder to open the chart builder and create one or more chart definitions to associate with the node. - -![Chart examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/viz_animated.gif) - -To visualize your data: - - - -1. From Data Refinery, click the Visualizations tab. -2. Start with a chart or select columns: - - - -* Click any of the available charts. Then, add columns in the DETAILS pane that opens on the left side of the page. -* Select the columns that you want to work with. Suggested charts are indicated with a dot next to the chart name. Click a chart to visualize your data. - - - - - -Important: Available chart types are ordered from most relevant to least relevant, based on the selected columns. If there are no columns in the data set with a data type that is supported for a chart type, that chart will not be available. If a column's data type is not supported for a chart, that column is not available for selection for that chart. Dots next to the charts' names suggest the best charts for your data. - -" -B8AA7399868C0AE8DD698C9048EBD50C3F17EF12_1,B8AA7399868C0AE8DD698C9048EBD50C3F17EF12," Charts - -The following charts are included: - - - -* 3D charts display data in a 3-D coordinate system by drawing each column as a cuboid to create a 3D effect. -* Bar charts are handy for displaying and comparing categories of data side by side. The bars can be in any order. You can also arrange them from high to low or from low to high. -* Box plot charts compare distributions between many groups or data sets. They display the variation in groups of data: the spread and skew of that data and the outliers. -* Bubble charts display each category in the groups as a bubble. -* Candlestick charts are a type of financial chart that displays price movements of a security, derivative, or currency. -* Circle packing charts display hierarchical data as a set of nested areas. -* Customized charts give you the ability to render charts based on JSON input. -* Dual Y-axes charts use two Y-axis variables to show relationships between data. -* Error bars indicate the error or uncertainty in a value. They give a general idea of how precise a value is or conversely, how far a value might be from the true value. -* Evaluation charts are combination charts that measure the quality of a binary classifier. You need three columns for input: actual (target) value, predict value, and confidence (0 or 1). Move the slider in the Cutoff chart to dynamically update the other charts. The ROC and other charts are standard measurements of the classifier. -* Heat map charts display data as color to convey activity levels or density. Typically low values are displayed as cooler colors and high values are displayed as warmer colors. -* Histogram charts show the frequency distribution of data. -* Line charts show trends in data over time by calculating a summary statistic for one column for each value of another column and then drawing a line that connects the values. -* Map charts show geographic point data, so you can compare values and show categories across geographical regions. -" -B8AA7399868C0AE8DD698C9048EBD50C3F17EF12_2,B8AA7399868C0AE8DD698C9048EBD50C3F17EF12,"* Math curve charts display a group of curves based on equations that you enter. You do not use a data set with this chart. Instead, you use it to compare the results with the data set in another chart, like the scatter plot chart. -* Multi-charts display up to four combinations of Bar, Line, Pie, and Scatter plot charts. You can show the same kind of chart more than once with different data. For example, two pie charts with data from different columns. -* Multi-series charts display data from multiple data sets or multiple columns as a series of points that are connected by straight lines or bars. -* Parallel coordinate charts display and compare rows of data (called profiles) to find similarities. Each row is a line and the value in each column of the row is represented by a point on that line. -* Pie charts show proportion. Each value in a series is displayed as a proportional slice of the pie. The pie represents the total sum of the values. -* Population pyramid charts show the frequency distribution of a variable across categories. They are typically used to show changes in demographic data. -* Quantile-quantile (Q-Q) plot charts compare the expected distribution values with the observed values by plotting their quantiles. -* Radar charts integrate three or more quantitative variables that are represented on axes (radii) into a single radial figure. Data is plotted on each axis and joined to adjacent axes by connecting lines. Radar charts are useful to show correlations and compare categorized data. -* Relationship charts show how columns of data relate to one another and what the strength of that relationship is by using varying types of lines. -* Scatter matrix charts map columns against each other and display their scatter plots and correlation. Use to compare multiple columns and how strong their correlation is with one another. -* Scatter plot charts show correlation (how much one variable is affected by another) by displaying and comparing the values in two columns. -* Sunburst charts are similar to layered pie charts, in which different proportions of different categories are shown at once on multiple levels. -" -B8AA7399868C0AE8DD698C9048EBD50C3F17EF12_3,B8AA7399868C0AE8DD698C9048EBD50C3F17EF12,"* Theme river charts use a specialized flow graph that shows changes over time. -* Time plot charts illustrate data points at successive intervals of time. -* t-SNE charts help you visualize high-dimensional data sets. They're useful for embedding high-dimensional data into a space of two or three dimensions, which can then be visualized in a scatter plot. -* Tree charts display hierarchical data, categorically splitting into different branches. Use to sort different data sets under different categories. The Tree chart consists of a root node, line connections called branches that represent the relationships and connections between the members, and leaf nodes that do not have child nodes. -* Treemap charts display hierarchical data as a set of nested areas. Use to compare sizes between groups and single elements that are nested in the groups. -* Word cloud charts display how frequently words appear in text by making the size of each word proportional to its frequency. - - - -" -B8AA7399868C0AE8DD698C9048EBD50C3F17EF12_4,B8AA7399868C0AE8DD698C9048EBD50C3F17EF12," Actions - -You can take any of the following actions: - - - -* Start over: Clears the visualization and the DETAILS pane, and returns you to the starting page for visualizations -* Specify whether to display the field value or the field label. This option applies only to SPSS Modeler when you define labels. For example, if you have a ""Gender"" field and you have defined a label as female with the value 0, and then the label male for value 1. If there is no label defined, the value is displayed. -* Download visualization: - - - -* Download chart image: Download a PNG file that contains an image of the current chart. -* Download chart details: Download a JSON file that contains the details for the current chart. - - - -* Set global preferences that apply to all charts - - - -" -B8AA7399868C0AE8DD698C9048EBD50C3F17EF12_5,B8AA7399868C0AE8DD698C9048EBD50C3F17EF12," Chart actions - -Available chart actions depend on the chart. Chart actions include: - - - -* Zoom -* Restore: View the chart at normal scale -* Select data: Highlight data in the Data tab that you select in the chart -* Clear selection: Remove highlighting from the data in the Data tab - - - -" -B8AA7399868C0AE8DD698C9048EBD50C3F17EF12_6,B8AA7399868C0AE8DD698C9048EBD50C3F17EF12," Learn more - -[Data Visualization – How to Pick the Right Chart Type?](https://eazybi.com/blog/data_visualization_and_chart_types/) - -Parent topic:[Refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) -" -A4E9FAE09BE2F3C0191CBC14A56085B0773A2585_0,A4E9FAE09BE2F3C0191CBC14A56085B0773A2585," Accessing data in AWS through access points from a notebook - -In IBM watsonx you can access data stored in AWS S3 buckets through access points from a notebook. - -Run the notebook in an environment in IBM watsonx. Create an internet-enabled access point to connect to the S3 bucket. - -" -A4E9FAE09BE2F3C0191CBC14A56085B0773A2585_1,A4E9FAE09BE2F3C0191CBC14A56085B0773A2585," Connecting to AWS S3 data through an internet-enabled access point - -You can access data in an AWS S3 bucket through an internet-enabled access point in any AWS region. - -To access S3 data through an internet-enabled access point: - - - -1. Create an access point for your S3 bucket. See [Creating access points](https://docs.aws.amazon.com/AmazonS3/latest/dev/creating-access-points.html). - -Set the network origin to Internet. -2. After the access point is created, make a note of the Amazon resource name (ARN) for the access point. Example: ARN: arn:aws:s3:us-east-1:675068711478:accesspoint/cust-data-bucket-internet-ap. You will need to enter the ARN in your notebook. - - - -" -A4E9FAE09BE2F3C0191CBC14A56085B0773A2585_2,A4E9FAE09BE2F3C0191CBC14A56085B0773A2585," Accessing AWS S3 data from your notebook - -The following sample code snippet shows you how to access AWS data from your notebook by using an access point: - -import boto3 -import pandas as pd - - use an access key and a secret that has access to the bucket -access_key=""..."" -secret=""..."" - -s3_client = boto3.client('s3', aws_access_key_id=access_key, aws_secret_access_key=secret) - -the Amazon resource name (ARN) of the access point -arn = ""..."" - the file you want to retrieve -fileName=""customers.csv"" - -response = s3_client.get_object(Bucket=arn, Key=fileName) -s3FileStream = response[""Body""] -for other file types, change the line below to use the appropriate read_() method from pandas -customerDF = pd.read_csv(s3FileStream) - -Parent topic:[Loading and accessing data in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html) -" -EEF0F3C3DC121F5C389E547BD20F2AA807074028_0,EEF0F3C3DC121F5C389E547BD20F2AA807074028," Key management by application - -This topic describes how to manage column encryption keys by application. It explains how to provide master keys and how to write and read encrypted data using these master keys. - -" -EEF0F3C3DC121F5C389E547BD20F2AA807074028_1,EEF0F3C3DC121F5C389E547BD20F2AA807074028," Providing master keys - -To provide master keys: - - - -1. Pass the explicit master keys, in the following format: - -parameter name: ""encryption.key.list"" -parameter value: "": , :.."" - -For example: - -sc.hadoopConfiguration.set(""encryption.key.list"" , ""k1:iKwfmI5rDf7HwVBcqeNE6w== , k2:LjxH/aXxMduX6IQcwQgOlw== , k3:rnZHCxhUHr79Y6zvQnxSEQ=="") - -The length of master keys before base64 encoding can be 16, 24 or 32 bytes (128, 192 or 256 bits). - - - -" -EEF0F3C3DC121F5C389E547BD20F2AA807074028_2,EEF0F3C3DC121F5C389E547BD20F2AA807074028," Writing encrypted data - -To write encrypted data: - - - -1. Specify which columns to encrypt, and which master keys to use: - -parameter name: ""encryption.column.keys"" -parameter value: "":,;: .."" -2. Specify the footer key: - -parameter name: ""encryption.footer.key"" -parameter value: """" - -For example: - -dataFrame.write -.option(""encryption.footer.key"" , ""k1"") -.option(""encryption.column.keys"" , ""k2:SSN,Address;k3:CreditCard"") -.parquet("""") - -Note:"""" must contain the string .encrypted in the URL, for example /path/to/my_table.parquet.encrypted. If either the ""encryption.column.keys"" parameter or the ""encryption.footer.key"" parameter is not set, an exception will be thrown. - - - -" -EEF0F3C3DC121F5C389E547BD20F2AA807074028_3,EEF0F3C3DC121F5C389E547BD20F2AA807074028," Reading encrypted data - -The required metadata is stored in the encrypted Parquet files. - -To read the encrypted data: - - - -1. Provide the encryption keys: - -sc.hadoopConfiguration.set(""encryption.key.list"" , ""k1:iKwfmI5rDf7HwVBcqeNE6w== , k2:LjxH/aXxMduX6IQcwQgOlw== , k3:rnZHCxhUHr79Y6zvQnxSEQ=="") -2. Call the regular parquet read commands, such as: - -val dataFrame = spark.read.parquet("""") - -Note:"""" must contain the string .encrypted in the URL, for example /path/to/my_table.parquet.encrypted. - - - -Parent topic:[Parquet encryption](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/parquet-encryption.html) -" -E778331BF398F2DB0F6477EF689D0DD6A2AAA81E_0,E778331BF398F2DB0F6477EF689D0DD6A2AAA81E," Key management by KMS - -Parquet modular encryption can work with arbitrary Key Management Service (KMS) servers. A custom KMS client class, able to communicate with the chosen KMS server, has to be provided to the Analytics Engine powered by Apache Spark instance. This class needs to implement the KmsClient interface (part of the Parquet modular encryption API). Analytics Engine powered by Apache Spark includes the VaultClient KmsClient, that can be used out of the box if you use Hashicorp Vault as the KMS server for the master keys. If you use or plan to use a different KMS system, you can develop a custom KmsClient class (taking the VaultClient code as an example). - -" -E778331BF398F2DB0F6477EF689D0DD6A2AAA81E_1,E778331BF398F2DB0F6477EF689D0DD6A2AAA81E," Custom KmsClient class - -Parquet modular encryption provides a simple interface called org.apache.parquet.crypto.keytools.KmsClient with the following two main functions that you must implement: - -// Wraps a key - encrypts it with the master key, encodes the result and -// potentially adds KMS-specific metadata. -public String wrapKey(byte[] keyBytes, String masterKeyIdentifier) -// Decrypts (unwraps) a key with the master key. -public byte[] unwrapKey(String wrappedKey, String masterKeyIdentifier) - -In addition, the interface provides the following initialization function that passes KMS parameters and other configuration: - -public void initialize(Configuration configuration, String kmsInstanceID, String kmsInstanceURL, String accessToken) - -See [Example of KmsClient implementation](https://github.com/apache/parquet-mr/blob/master/parquet-hadoop/src/test/java/org/apache/parquet/crypto/keytools/samples/VaultClient.java) to learn how to implement a KmsClient. - -After you have developed the custom KmsClient class, add it to a jar supplied to Analytics Engine powered by Apache Spark, and pass its full name in the Spark Hadoop configuration, for example: - -sc.hadoopConfiguration.set(""parquet.ecnryption.kms.client.class"", ""full.name.of.YourKmsClient"" - -" -E778331BF398F2DB0F6477EF689D0DD6A2AAA81E_2,E778331BF398F2DB0F6477EF689D0DD6A2AAA81E," Key management by Hashicorp Vault - -If you decide to use Hashicorp Vault as the KMS server, you can use the pre-packaged VaultClient: - -sc.hadoopConfiguration.set(""parquet.ecnryption.kms.client.class"", ""com.ibm.parquet.key.management.VaultClient"") - -" -E778331BF398F2DB0F6477EF689D0DD6A2AAA81E_3,E778331BF398F2DB0F6477EF689D0DD6A2AAA81E," Creating master keys - -Consult the Hashicorp Vault documentation for the specifics about actions on Vault. See: - - - -* [Transit Secrets Engine](https://www.vaultproject.io/docs/secrets/transit) -* [Encryption as a Service: Transit Secrets Engine](https://learn.hashicorp.com/tutorials/vault/eaas-transit) -* Enable the Transit Engine either at the default path or providing a custom path. -* Create named encryption keys. -* Configure access policies with which a user or machine is allowed to access these named keys. - - - -" -E778331BF398F2DB0F6477EF689D0DD6A2AAA81E_4,E778331BF398F2DB0F6477EF689D0DD6A2AAA81E," Writing encrypted data - - - -1. Pass the following parameters: - - - -* Set ""parquet.encryption.kms.client.class"" to ""com.ibm.parquet.key.management.VaultClient"": - -sc.hadoopConfiguration.set(""parquet.ecnryption.kms.client.class"", ""com.ibm.parquet.key.management.VaultClient"") -* Optional: Set the custom path ""parquet.encryption.kms.instance.id"" to your transit engine: - -sc.hadoopConfiguration.set(""parquet.encryption.kms.instance.id"" , ""north/transit1"") -* Set ""parquet.encryption.kms.instance.url"" to the URL of your Vault instance: - -sc.hadoopConfiguration.set(""parquet.encryption.kms.instance.url"" , ""https://:8200"") -* Set ""parquet.encryption.key.access.token"" to a valid access token with the access policy attached, which provides access rights to the required keys in your Vault instance: - -sc.hadoopConfiguration.set(""parquet.encryption.key.access.token"" , """") -* If the token is located in a local file, load it: - -val token = scala.io.Source.fromFile("""").mkStringsc.hadoopConfiguration.set(""parquet.encryption.key.access.token"" , token) - - - -2. Specify which columns need to be encrypted, and with which master keys. You must also specify the footer key. For example: - -val k1 = ""key1"" -val k2 = ""key2"" -val k3 = ""key3"" -dataFrame.write -.option(""parquet.encryption.footer.key"" , k1) -.option(""parquet.encryption.column.keys"" , k2+"":SSN,Address;""+k3+"":CreditCard"") -" -E778331BF398F2DB0F6477EF689D0DD6A2AAA81E_5,E778331BF398F2DB0F6477EF689D0DD6A2AAA81E,".parquet("""") - -Note: If either the ""parquet.encryption.column.keys"" or the ""parquet.encryption.footer.key"" parameter is not set, an exception will be thrown. - - - -" -E778331BF398F2DB0F6477EF689D0DD6A2AAA81E_6,E778331BF398F2DB0F6477EF689D0DD6A2AAA81E," Reading encrypted data - -The required metadata, including the ID and URL of the Hashicorp Vault instance, is stored in the encrypted Parquet files. - -To read the encrypted metadata: - - - -1. Set KMS client to the Vault client implementation: - -sc.hadoopConfiguration.set(""parquet.ecnryption.kms.client.class"", ""com.ibm.parquet.key.management.VaultClient"") -2. Provide the access token with policy attached that grants access to the relevant keys: - -sc.hadoopConfiguration.set(""parquet.encryption.key.access.token"" , """") -3. Call the regular Parquet read commands, such as: - -val dataFrame = spark.read.parquet("""") - - - -" -E778331BF398F2DB0F6477EF689D0DD6A2AAA81E_7,E778331BF398F2DB0F6477EF689D0DD6A2AAA81E," Key rotation - -If key rotation is required, an administrator with access rights to the KMS key rotation actions must rotate master keys in Hashicorp Vault using the procedure described in the Hashicorp Vault documentation. Thereafter the administrator can trigger Parquet key rotation by calling: - -public static void KeyToolkit.rotateMasterKeys(String folderPath, Configuration hadoopConfig) - -To enable Parquet key rotation, the following Hadoop configuration properties must be set: - - - -* The parameters ""parquet.encryption.key.access.token"" and ""parquet.encryption.kms.instance.url"" must set set, and optionally ""parquet.encryption.kms.instance.id"" -* The parameter ""parquet.encryption.key.material.store.internally"" must be set to ""false"". -* The parameter ""parquet.encryption.kms.client.class"" must be set to ""com.ibm.parquet.key.management.VaultClient"" - - - -For example: - -sc.hadoopConfiguration.set(""parquet.encryption.kms.instance.url"" , ""https://:8200"")sc.hadoopConfiguration.set(""parquet.encryption.key.access.token"" , """") -sc.hadoopConfiguration.set(""parquet.encryption.kms.client.class"",""com.ibm.parquet.key.management.VaultClient"") -sc.hadoopConfiguration.set(""parquet.encryption.key.material.store.internally"", ""false"") -KeyToolkit.rotateMasterKeys("""", sc.hadoopConfiguration) - -Parent topic:[Parquet encryption](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/parquet-encryption.html) -" -339F2EBDAD7CD0A3445BDF69C69AB7B28B4353C4_0,339F2EBDAD7CD0A3445BDF69C69AB7B28B4353C4," Parquet modular encryption - -If your data is stored in columnar format, you can use Parquet modular encryption to encrypt sensitive columns when writing Parquet files, and decrypt these columns when reading the encrypted files. Encrypting data at the column level, enables you to decide which columns to encrypt and how to control the column access. - -Besides ensuring privacy, Parquet modular encryption also protects the integrity of stored data. Any tampering with file contents is detected and triggers a reader-side exception. - -Key features include: - - - -1. Parquet modular encryption and decryption is performed on the Spark cluster. Therefore, sensitive data and the encryption keys are not visible to the storage. -2. Standard Parquet features, such as encoding, compression, columnar projection and predicate push-down, continue to work as usual on files with Parquet modular encryption format. -3. You can choose one of two encryption algorithms that are defined in the Parquet specification. Both algorithms support column encryption, however: - - - -* The default algorithm AES-GCM provides full protection against tampering with data and metadata parts in Parquet files. -* The alternative algorithm AES-GCM-CTR supports partial integrity protection of Parquet files. Only metadata parts are protected against tampering, not data parts. An advantage of this algorithm is that it has a lower throughput overhead compared to the AES-GCM algorithm. - - - -4. You can choose which columns to encrypt. Other columns won't be encrypted, reducing the throughput overhead. -5. Different columns can be encrypted with different keys. -6. By default, the main Parquet metadata module (the file footer) is encrypted to hide the file schema and list of sensitive columns. However, you can choose not to encrypt the file footers in order to enable legacy readers (such as other Spark distributions that don't yet support Parquet modular encryption) to read the unencrypted columns in the encrypted files. -7. Encryption keys can be managed in one of two ways: - - - -" -339F2EBDAD7CD0A3445BDF69C69AB7B28B4353C4_1,339F2EBDAD7CD0A3445BDF69C69AB7B28B4353C4,"* Directly by your application. See [Key management by application](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/key-management-by-application.html). -* By a key management system (KMS) that generates, stores and destroys encryption keys used by the Spark service. These keys never leave the KMS server, and therefore are invisible to other components, including the Spark service. See [Key management by KMS](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/key-management-by-kms.html). - -Note: Only master encryption keys (MEKs) need to be managed by your application or by a KMS. - -For each sensitive column, you must specify which master key to use for encryption. Also, a master key must be specified for the footer of each encrypted file (data frame). By default, the footer key will be used for footer encryption. However, if you choose a plain text footer mode, the footer won’t be encrypted, and the key will be used only for integrity verification of the footer. - -The encryption parameters can be passed via the standard Spark Hadoop configuration, for example by setting configuration values in the Hadoop configuration of the application's SparkContext: - -sc.hadoopConfiguration.set("""" , """") - -Alternatively, you can pass parameter values through write options: - -.write -.option("""" , """") -.parquet("""") - - - - - -" -339F2EBDAD7CD0A3445BDF69C69AB7B28B4353C4_2,339F2EBDAD7CD0A3445BDF69C69AB7B28B4353C4," Running with Parquet modular encryption - -Parquet modular encryption is available only in Spark notebooks that are run in an IBM Analytics Engine service instance. Parquet modular encryption is not supported in notebooks that run in a Spark environment. - -To enable Parquet modular encryption, set the following Spark classpath properties to point to the Parquet jar files that implement Parquet modular encryption, and to the key management jar file: - - - -1. Navigate to Ambari > Spark > Config -> Custom spark2-default. -2. Add the following two parameters to point explicitly to the location of the JAR files. Make sure that you edit the paths to use the actual version of jar files on the cluster. - -spark.driver.extraClassPath=/home/common/lib/parquetEncryption/ibm-parquet-kms--jar-with-dependencies.jar:/home/common/lib/parquetEncryption/parquet-format-.jar:/home/common/lib/parquetEncryption/parquet-hadoop-.jar - -spark.executor.extraClassPath=/home/common/lib/parquetEncryption/ibm-parquet--jar-with-dependencies.jar:/home/common/lib/parquetEncryption/parquet-format-.jar:/home/common/lib/parquetEncryption/parquet-hadoop-.jar - - - -" -339F2EBDAD7CD0A3445BDF69C69AB7B28B4353C4_3,339F2EBDAD7CD0A3445BDF69C69AB7B28B4353C4," Mandatory parameters - -The following parameters are required for writing encrypted data: - - - -* List of columns to encrypt, with the master encryption keys: - -parameter name: ""encryption.column.keys"" -parameter value: "":,;:,.."" -* The footer key: - -parameter name: ""encryption.footer.key"" -parameter value: """" - -For example: - -dataFrame.write -.option(""encryption.footer.key"" , ""k1"") -.option(""encryption.column.keys"" , ""k2:SSN,Address;k3:CreditCard"") -.parquet("""") - -Important:If neither the encryption.column.keys parameter nor the encryption.footer.key parameter is set, the file will not be encrypted. If only one of these parameters is set, an exception is thrown, because these parameters are mandatory for encrypted files. - - - -" -339F2EBDAD7CD0A3445BDF69C69AB7B28B4353C4_4,339F2EBDAD7CD0A3445BDF69C69AB7B28B4353C4," Optional parameters - -The following optional parameters can be used when writing encrypted data: - - - -* The encryption algorithm AES-GCM-CTR - -By default, Parquet modular encryption uses the AES-GCM algorithm that provides full protection against tampering with data and metadata in Parquet files. However, as Spark 2.3.0 runs on Java 8, which doesn’t support AES acceleration in CPU hardware (this was only added in Java 9), the overhead of data integrity verification can affect workload throughput in certain situations. - -To compensate this, you can switch off the data integrity verification support and write the encrypted files with the alternative algorithm AES-GCM-CTR, which verifies the integrity of the metadata parts only and not that of the data parts, and has a lower throughput overhead compared to the AES-GCM algorithm. - -parameter name: ""encryption.algorithm"" -parameter value: ""AES_GCM_CTR_V1"" -* Plain text footer mode for legacy readers - -By default, the main Parquet metadata module (the file footer) is encrypted to hide the file schema and list of sensitive columns. However, you can decide not to encrypt the file footers in order to enable other Spark and Parquet readers (that don't yet support Parquet modular encryption) to read the unencrypted columns in the encrypted files. To switch off footer encryption, set the following parameter: - -parameter name: ""encryption.plaintext.footer"" -parameter value: ""true"" - -Important:The encryption.footer.key parameter must also be specified in the plain text footer mode. Although the footer is not encrypted, the key is used to sign the footer content, which means that new readers could verify its integrity. Legacy readers are not affected by the addition of the footer signature. - - - -" -339F2EBDAD7CD0A3445BDF69C69AB7B28B4353C4_5,339F2EBDAD7CD0A3445BDF69C69AB7B28B4353C4," Usage examples - -The following sample code snippets for Python show how to create data frames, written to encrypted parquet files, and read from encrypted parquet files. - - - -* Python: Writing encrypted data: - -from pyspark.sql import Row - -squaresDF = spark.createDataFrame( -sc.parallelize(range(1, 6)) -.map(lambda i: Row(int_column=i, square_int_column=i 2))) - -sc._jsc.hadoopConfiguration().set(""encryption.key.list"", -""key1: AAECAwQFBgcICQoLDA0ODw==, key2: AAECAAECAAECAAECAAECAA=="") -sc._jsc.hadoopConfiguration().set(""encryption.column.keys"", -""key1:square_int_column"") -sc._jsc.hadoopConfiguration().set(""encryption.footer.key"", ""key2"") - -encryptedParquetPath = ""squares.parquet.encrypted"" -squaresDF.write.parquet(encryptedParquetPath) -* Python: Reading encrypted data: - -sc._jsc.hadoopConfiguration().set(""encryption.key.list"", -""key1: AAECAwQFBgcICQoLDA0ODw==, key2: AAECAAECAAECAAECAAECAA=="") - -encryptedParquetPath = ""squares.parquet.encrypted"" -parquetFile = spark.read.parquet(encryptedParquetPath) -parquetFile.show() - - - -The contents of the Python job file InMemoryKMS.py is as follows: - -from pyspark.sql import SparkSession -from pyspark import SparkContext -from pyspark.sql import Row - -if __name__ == ""__main__"": -spark = SparkSession -.builder -.appName(""InMemoryKMS"") -.getOrCreate() -sc = spark.sparkContext -KMS operation -print(""Setup InMemoryKMS"") -" -339F2EBDAD7CD0A3445BDF69C69AB7B28B4353C4_6,339F2EBDAD7CD0A3445BDF69C69AB7B28B4353C4,"hconf = sc._jsc.hadoopConfiguration() -encryptedParquetFullName = ""testparquet.encrypted"" -print(""Write Encrypted Parquet file"") -hconf.set(""encryption.key.list"", ""key1: AAECAwQFBgcICQoLDA0ODw==, key2: AAECAAECAAECAAECAAECAA=="") -btDF = spark.createDataFrame(sc.parallelize(range(1, 6)).map(lambda i: Row(ssn=i, value=i 2))) -btDF.write.mode(""overwrite"").option(""encryption.column.keys"", ""key1:ssn"").option(""encryption.footer.key"", ""key2"").parquet(encryptedParquetFullName) -print(""Read Encrypted Parquet file"") -encrDataDF = spark.read.parquet(encryptedParquetFullName) -encrDataDF.createOrReplaceTempView(""bloodtests"") -queryResult = spark.sql(""SELECT ssn, value FROM bloodtests"") -queryResult.show(10) -sc.stop() -spark.stop() - -" -339F2EBDAD7CD0A3445BDF69C69AB7B28B4353C4_7,339F2EBDAD7CD0A3445BDF69C69AB7B28B4353C4," Internals of encryption key handling - -When writing a Parquet file, a random data encryption key (DEK) is generated for each encrypted column and for the footer. These keys are used to encrypt the data and the metadata modules in the Parquet file. - -The data encryption key is then encrypted with a key encryption key (KEK), also generated inside Spark/Parquet for each master key. The key encryption key is encrypted with a master encryption key (MEK) locally. - -Encrypted data encryption keys and key encryption keys are stored in the Parquet file metadata, along with the master key identity. Each key encryption key has a unique identity (generated locally as a secure random 16-byte value), also stored in the file metadata. - -When reading a Parquet file, the identifier of the master encryption key (MEK) and the encrypted key encryption key (KEK) with its identifier, and the encrypted data encryption key (DEK) are extracted from the file metadata. - -The key encryption key is decrypted with the master encryption key locally. Then the data encryption key (DEK) is decrypted locally, using the key encryption key (KEK). - -" -339F2EBDAD7CD0A3445BDF69C69AB7B28B4353C4_8,339F2EBDAD7CD0A3445BDF69C69AB7B28B4353C4," Learn more - - - -* [Parquet modular encryption](https://github.com/apache/parquet-format/blob/apache-parquet-format-2.7.0/Encryption.md) -" -2D08EDD168FBEE078290F386F7EC3EB1998ADF02_0,2D08EDD168FBEE078290F386F7EC3EB1998ADF02," Time reference system - -Time reference system (TRS) is a local, regional or global system used to identify time. - -A time reference system defines a specific projection for forward and reverse mapping between a timestamp and its numeric representation. A common example that most users are familiar with is UTC time, which maps a timestamp, for example, (1 Jan 2019, 12 midnight (GMT) into a 64-bit integer value (1546300800000), which captures the number of milliseconds that have elapsed since 1 Jan 1970, 12 midnight (GMT). Generally speaking, the timestamp value is better suited for human readability, while the numeric representation is better suited for machine processing. - -In the time series library, a time series can be associated with a TRS. A TRS is composed of a: - - - -* Time tick that captures time granularity, for example 1 minute -* Zoned date time that captures a start time, for example 1 Jan 2019, 12 midnight US Eastern Daylight Savings time (EDT). A timestamp is mapped into a numeric representation by computing the number of elapsed time ticks since the start time. A numeric representation is scaled by the granularity and shifted by the start time when it is mapped back to a timestamp. - - - -" -2D08EDD168FBEE078290F386F7EC3EB1998ADF02_1,2D08EDD168FBEE078290F386F7EC3EB1998ADF02,"Note that this forward + reverse projection might lead to time loss. For instance, if the true time granularity of a time series is in seconds, then forward and reverse mapping of the time stamps 09:00:01 and 09:00:02 (to be read as hh:mm:ss) to a granularity of one minute would result in the time stamps 09:00:00 and 09:00:00 respectively. In this example, a time series, whose granularity is in seconds, is being mapped to minutes and thus the reverse mapping looses information. However, if the mapped granularity is higher than the granularity of the input time series (more specifically, if the time series granularity is an integral multiple of the mapped granularity) then the forward + reverse projection is guaranteed to be lossless. For example, mapping a time series, whose granularity is in minutes, to seconds and reverse projecting it to minutes would result in lossless reconstruction of the timestamps. - -" -2D08EDD168FBEE078290F386F7EC3EB1998ADF02_2,2D08EDD168FBEE078290F386F7EC3EB1998ADF02," Setting TRS - -When a time series is created, it is associated with a TRS (or None if no TRS is specified). If the TRS is None, then the numeric values cannot be mapped to timestamps. Note that TRS can only be set on a time series at construction time. The reason is that a time series by design is an immutable object. Immutability comes in handy when the library is used in multi-threaded environments or in distributed computing environments such as Apache Spark. While a TRS can be set only at construction time, it can be changed using the with_trs method as described in the next section. with_trs produces a new time series and thus has no impact on immutability. - -Let us consider a simple time series created from an in-memory list: - -values = [1.0, 2.0, 4.0] -x = tspy.time_series(values) -x - -This returns: - -TimeStamp: 0 Value: 1.0 -TimeStamp: 1 Value: 2.0 -TimeStamp: 2 Value: 4.0 - -At construction time, the time series can be associated with a TRS. Associating a TRS with a time series allows its numeric timestamps to be as per the time tick and offset/timezone. The following example shows 1 minute and 1 Jan 2019, 12 midnight (GMT): - -zdt = datetime.datetime(2019,1,1,0,0,0,0,tzinfo=datetime.timezone.utc) -x_trs = tspy.time_series(data, granularity=datetime.timedelta(minutes=1), start_time=zdt) -x_trs - -This returns: - -TimeStamp: 2019-01-01T00:00Z Value: 1.0 -TimeStamp: 2019-01-01T00:01Z Value: 2.0 -TimeStamp: 2019-01-01T00:02Z Value: 4.0 - -Here is another example where the numeric timestamps are reinterpreted with a time tick of one hour and offset/timezone as 1 Jan 2019, 12 midnight US Eastern Daylight Savings time (EDT). - -tz_edt = datetime.timezone.edt -zdt = datetime.datetime(2019,1,1,0,0,0,0,tzinfo=tz_edt) -" -2D08EDD168FBEE078290F386F7EC3EB1998ADF02_3,2D08EDD168FBEE078290F386F7EC3EB1998ADF02,"x_trs = tspy.time_series(data, granularity=datetime.timedelta(hours=1), start_time=zdt) -x_trs - -This returns: - -TimeStamp: 2019-01-01T00:00-04:00 Value: 1.0 -TimeStamp: 2019-01-01T00:01-04:00 Value: 2.0 -TimeStamp: 2019-01-01T00:02-04:00 Value: 4.0 - -Note that the timestamps now indicate an offset of -4 hours from GMT (EDT timezone) and captures the time tick of one hour. Also note that setting a TRS does NOT change the numeric timestamps - it only specifies a way of interpreting numeric timestamps. - -x_trs.print(human_readable=False) - -This returns: - -TimeStamp: 0 Value: 1.0 -TimeStamp: 1 Value: 2.0 -TimeStamp: 2 Value: 4.0 - -" -2D08EDD168FBEE078290F386F7EC3EB1998ADF02_4,2D08EDD168FBEE078290F386F7EC3EB1998ADF02," Changing TRS - -You can change the TRS associated with a time series using the with_trs function. Note that this function will throw an exception if the input time series is not associated with a TRS (if TRS is None). Using with_trs changes the numeric timestamps. - -The following code sample shows TRS set at contructions time without using with_trs: - - 1546300800 is the epoch time in seconds for 1 Jan 2019, 12 midnight GMT -zdt1 = datetime.datetime(1970,1,1,0,0,0,0,tzinfo=datetime.timezone.utc) -y = tspy.observations.of(tspy.observation(1546300800, 1.0),tspy.observation(1546300860, 2.0), tspy.observation(1546300920, -4.0)).to_time_series(granularity=datetime.timedelta(seconds=1), start_time=zdt1) -y.print() -y.print(human_readable=False) - -This returns: - -TimeStamp: 2019-01-01T00:00Z Value: 1.0 -TimeStamp: 2019-01-01T00:01Z Value: 2.0 -TimeStamp: 2019-01-01T00:02Z Value: 4.0 - - TRS has been set during construction time - no changes to numeric timestamps -TimeStamp: 1546300800 Value: 1.0 -TimeStamp: 1546300860 Value: 2.0 -TimeStamp: 1546300920 Value: 4.0 - -The following example shows how to apply with_trs to change granularity to one minute and retain the original time offset (1 Jan 1970, 12 midnight GMT): - -y_minutely_1970 = y.with_trs(granularity=datetime.timedelta(minutes=1), start_time=zdt1) -y_minutely_1970.print() -y_minutely_1970.print(human_readable=False) - -This returns: - -TimeStamp: 2019-01-01T00:00Z Value: 1.0 -TimeStamp: 2019-01-01T00:01Z Value: 2.0 -TimeStamp: 2019-01-01T00:02Z Value: 4.0 - - numeric timestamps have changed to number of elapsed minutes since 1 Jan 1970, 12 midnight GMT -TimeStamp: 25771680 Value: 1.0 -" -2D08EDD168FBEE078290F386F7EC3EB1998ADF02_5,2D08EDD168FBEE078290F386F7EC3EB1998ADF02,"TimeStamp: 25771681 Value: 2.0 -TimeStamp: 25771682 Value: 4.0 - -Now apply with_trs to change granularity to one minute and the offset to 1 Jan 2019, 12 midnight GMT: - -zdt2 = datetime.datetime(2019,1,1,0,0,0,0,tzinfo=datetime.timezone.utc) -y_minutely = y.with_trs(granularity=datetime.timedelta(minutes=1), start_time=zdt2) -y_minutely.print() -y_minutely.print(human_readable=False) - -This returns: - -TimeStamp: 2019-01-01T00:00Z Value: 1.0 -TimeStamp: 2019-01-01T00:01Z Value: 2.0 -TimeStamp: 2019-01-01T00:02Z Value: 4.0 - - numeric timestamps are now minutes elapsed since 1 Jan 2019, 12 midnight GMT -TimeStamp: 0 Value: 1.0 -TimeStamp: 1 Value: 2.0 -TimeStamp: 2 Value: 4.0 - -To better understand how it impacts post processing, let's examine the following. Note that materialize on numeric timestamps operates on the underlying numeric timestamps associated with the time series. - -print(y.materialize(0,2)) -print(y_minutely_1970.materialize(0,2)) -print(y_minutely.materialize(0,2)) - -This returns: - - numeric timestamps in y are in the range 1546300800, 1546300920 and thus y.materialize(0,2) is empty -[] - numeric timestamps in y_minutely_1970 are in the range 25771680, 25771682 and thus y_minutely_1970.materialize(0,2) is empty -[] - numeric timestamps in y_minutely are in the range 0, 2 -[(0,1.0),(1,2.0),(2,4.0)] - -The method materialize can also be applied to datetime objects. This results in an exception if the underlying time series is not associated with a TRS (if TRS is None). Assuming the underlying time series has a TRS, the datetime objects are mapped to a numeric range using the TRS. - - Jan 1 2019, 12 midnight GMT -" -2D08EDD168FBEE078290F386F7EC3EB1998ADF02_6,2D08EDD168FBEE078290F386F7EC3EB1998ADF02,"dt_beg = datetime.datetime(2019,1,1,0,0,0,0,tzinfo=datetime.timezone.utc) - Jan 1 2019, 12:02 AM GMT -dt_end = datetime.datetime(2019,1,1,0,2,0,0,tzinfo=datetime.timezone.utc) - -print(y.materialize(dt_beg, dt_end)) -print(y_minutely_1970.materialize(dt_beg, dt_end)) -print(y_minutely.materialize(dt_beg, dt_end)) - - materialize on y in UTC millis -[(1546300800,1.0),(1546300860,2.0), (1546300920,4.0)] - materialize on y_minutely_1970 in UTC minutes -[(25771680,1.0),(25771681,2.0),(25771682,4.0)] - materialize on y_minutely in minutes offset by 1 Jan 2019, 12 midnight -[(0,1.0),(1,2.0),(2,4.0)] - -" -2D08EDD168FBEE078290F386F7EC3EB1998ADF02_7,2D08EDD168FBEE078290F386F7EC3EB1998ADF02," Duplicate timestamps - -Changing the TRS can result in duplicate timestamps. The following example changes the granularity to one hour which results in duplicate timestamps. The time series library handles duplicate timestamps seamlessly and provides convenience combiners to reduce values associated with duplicate timestamps into a single value, for example by calculating an average of the values grouped by duplicate timestamps. - -y_hourly = y_minutely.with_trs(granularity=datetime.timedelta(hours=1), start_time=zdt2) -print(y_minutely) -print(y_minutely.materialize(0,2)) - -print(y_hourly) -print(y_hourly.materialize(0,0)) - -This returns: - - y_minutely - minutely time series -TimeStamp: 2019-01-01T00:00Z Value: 1.0 -TimeStamp: 2019-01-01T00:01Z Value: 2.0 -TimeStamp: 2019-01-01T00:02Z Value: 4.0 - - y_minutely has numeric timestamps 0, 1 and 2 -[(0,1.0),(1,2.0),(2,4.0)] - - y_hourly - hourly time series has duplicate timestamps -TimeStamp: 2019-01-01T00:00Z Value: 1.0 -TimeStamp: 2019-01-01T00:00Z Value: 2.0 -TimeStamp: 2019-01-01T00:00Z Value: 4.0 - - y_hourly has numeric timestamps of all 0 -[(0,1.0),(0,2.0),(0,4.0)] - -Duplicate timestamps can be optionally combined as follows: - -y_hourly_averaged = y_hourly.transform(transformers.combine_duplicate_granularity(lambda x: sum(x)/len(x)) -print(y_hourly_averaged.materialize(0,0)) - -This returns: - - values corresponding to the duplicate numeric timestamp 0 have been combined using average - average = (1+2+4)/3 = 2.33 -[(0,2.33)] - -" -2D08EDD168FBEE078290F386F7EC3EB1998ADF02_8,2D08EDD168FBEE078290F386F7EC3EB1998ADF02," Learn more - -To use the tspy Python SDK, see the [tspy Python SDK documentation](https://ibm-cloud.github.io/tspy-docs/). - -Parent topic:[Time series analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib.html) -" -0108F00736882AC35E3C56CD3CE0D91BCB5798A8_0,0108F00736882AC35E3C56CD3CE0D91BCB5798A8," Time series functions - -Time series functions are aggregate functions that operate on sequences of data values measured at points in time. - -The following sections describe some of the time series functions available in different time series packages. - -" -0108F00736882AC35E3C56CD3CE0D91BCB5798A8_1,0108F00736882AC35E3C56CD3CE0D91BCB5798A8," Transforms - -Transforms are functions that are applied on a time series resulting in another time series. The time series library supports various types of transforms, including provided transforms (by using from tspy.functions import transformers) as well as user defined transforms. - -The following sample shows some provided transforms: - -Interpolation ->>> ts = tspy.time_series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0]) ->>> periodicity = 2 ->>> interp = interpolators.nearest(0.0) ->>> interp_ts = ts.resample(periodicity, interp) ->>> interp_ts.print() -TimeStamp: 0 Value: 1.0 -TimeStamp: 2 Value: 3.0 -TimeStamp: 4 Value: 5.0 - -Fillna ->>> shift_ts = ts.shift(2) -print(""shifted ts to add nulls"") -print(shift_ts) -print(""nfilled ts to make nulls 0s"") -null_filled_ts = shift_ts.fillna(interpolators.fill(0.0)) -print(null_filled_ts) - -shifted ts to add nulls -TimeStamp: 0 Value: null -TimeStamp: 1 Value: null -TimeStamp: 2 Value: 1.0 -TimeStamp: 3 Value: 2.0 -TimeStamp: 4 Value: 3.0 -TimeStamp: 5 Value: 4.0 - -filled ts to make nulls 0s -TimeStamp: 0 Value: 0.0 -TimeStamp: 1 Value: 0.0 -TimeStamp: 2 Value: 1.0 -TimeStamp: 3 Value: 2.0 -TimeStamp: 4 Value: 3.0 -TimeStamp: 5 Value: 4.0 - - Additive White Gaussian Noise (AWGN) ->>> noise_ts = ts.transform(transformers.awgn(mean=0.0,sd=.03)) ->>> print(noise_ts) -TimeStamp: 0 Value: 0.9962378841388397 -TimeStamp: 1 Value: 1.9681980879378596 -TimeStamp: 2 Value: 3.0289374962174405 -TimeStamp: 3 Value: 3.990728648807705 -TimeStamp: 4 Value: 4.935338359740761 - -TimeStamp: 5 Value: 6.03395072999318 - -" -0108F00736882AC35E3C56CD3CE0D91BCB5798A8_2,0108F00736882AC35E3C56CD3CE0D91BCB5798A8," Segmentation - -Segmentation or windowing is the process of splitting a time series into multiple segments. The time series library supports various forms of segmentation and allows creating user-defined segments as well. - - - -* Window based segmentation - -This type of segmentation of a time series is based on user specified segment sizes. The segments can be record based or time based. There are options that allow for creating tumbling as well as sliding window based segments. - ->>> import tspy ->>> ts_orig = tspy.builder() -.add(tspy.observation(1,1.0)) -.add(tspy.observation(2,2.0)) -.add(tspy.observation(6,6.0)) -.result().to_time_series() ->>> ts_orig -timestamp: 1 Value: 1.0 -timestamp: 2 Value: 2.0 -timestamp: 6 Value: 6.0 - ->>> ts = ts_orig.segment_by_time(3,1) ->>> ts -timestamp: 1 Value: original bounds: (1,3) actual bounds: (1,2) observations: [(1,1.0),(2,2.0)] -timestamp: 2 Value: original bounds: (2,4) actual bounds: (2,2) observations: [(2,2.0)] -timestamp: 3 Value: this segment is empty -timestamp: 4 Value: original bounds: (4,6) actual bounds: (6,6) observations: [(6,6.0)] -* Anchor based segmentation - -Anchor based segmentation is a very important type of segmentation that creates a segment by anchoring on a specific lambda, which can be a simple value. An example is looking at events that preceded a 500 error or examining values after observing an anomaly. Variants of anchor based segmentation include providing a range with multiple markers. - ->>> import tspy ->>> ts_orig = tspy.time_series([1.0, 2.0, 3.0, 4.0, 5.0]) ->>> ts_orig -timestamp: 0 Value: 1.0 -timestamp: 1 Value: 2.0 -timestamp: 2 Value: 3.0 -timestamp: 3 Value: 4.0 -timestamp: 4 Value: 5.0 - -" -0108F00736882AC35E3C56CD3CE0D91BCB5798A8_3,0108F00736882AC35E3C56CD3CE0D91BCB5798A8,">>> ts = ts_orig.segment_by_anchor(lambda x: x % 2 == 0, 1, 2) ->>> ts -timestamp: 1 Value: original bounds: (0,3) actual bounds: (0,3) observations: [(0,1.0),(1,2.0),(2,3.0),(3,4.0)] -timestamp: 3 Value: original bounds: (2,5) actual bounds: (2,4) observations: [(2,3.0),(3,4.0),(4,5.0)] -* Segmenters - -There are several specialized segmenters provided out of the box by importing the segmenters package (using from tspy.functions import segmenters). An example segmenter is one that uses regression to segment a time series: - ->>> ts = tspy.time_series([1.0,2.0,3.0,4.0,5.0,2.0,1.0,-1.0,50.0,53.0,56.0]) ->>> max_error = .5 ->>> skip = 1 ->>> reg_sts = ts.to_segments(segmenters.regression(max_error,skip,use_relative=True)) ->>> reg_sts - -timestamp: 0 Value: range: (0, 4) outliers: {} -timestamp: 5 Value: range: (5, 7) outliers: {} -timestamp: 8 Value: range: (8, 10) outliers: {} - - - -" -0108F00736882AC35E3C56CD3CE0D91BCB5798A8_4,0108F00736882AC35E3C56CD3CE0D91BCB5798A8," Reducers - -A reducer is a function that is applied to the values across a set of time series to produce a single value. The time series reducer functions are similar to the reducer concept used by Hadoop/Spark. This single value can be a collection, but more generally is a single object. An example of a reducer function is averaging the values in a time series. - -Several reducer functions are supported, including: - - - -* Distance reducers - -Distance reducers are a class of reducers that compute the distance between two time series. The library supports numeric as well as categorical distance functions on sequences. These include time warping distance measurements such as Itakura Parallelogram, Sakoe-Chiba Band, DTW non-constrained and DTW non-time warped contraints. Distribution distances such as Hungarian distance and Earth-Movers distance are also available. - -For categorical time series distance measurements, you can use Damerau Levenshtein and Jaro-Winkler distance measures. - ->>> from tspy.functions import ->>> ts = tspy.time_series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0]) ->>> ts2 = ts.transform(transformers.awgn(sd=.3)) ->>> dtw_distance = ts.reduce(ts2,reducers.dtw(lambda obs1, obs2: abs(obs1.value - obs2.value))) ->>> print(dtw_distance) -1.8557981638880405 -* Math reducers - -Several convenient math reducers for numeric time series are provided. These include basic ones such as average, sum, standard deviation, and moments. Entropy, kurtosis, FFT and variants of it, various correlations, and histogram are also included. A convenient basic summarization reducer is the describe function that provides basic information about the time series. - ->>> from tspy.functions import ->>> ts = tspy.time_series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0]) ->>> ts2 = ts.transform(transformers.awgn(sd=.3)) ->>> corr = ts.reduce(ts2, reducers.correlation()) ->>> print(corr) -" -0108F00736882AC35E3C56CD3CE0D91BCB5798A8_5,0108F00736882AC35E3C56CD3CE0D91BCB5798A8,"0.9938941942380525 - ->>> adf = ts.reduce(reducers.adf()) ->>> print(adf) -pValue: -3.45 -satisfies test: false - ->>> ts2 = ts.transform(transformers.awgn(sd=.3)) ->>> granger = ts.reduce(ts2, reducers.granger(1)) ->>> print(granger) f_stat, p_value, R2 --1.7123613937876463,-3.874412217575385,1.0 -* Another basic reducer that is very useful for getting a first order understanding of the time series is the describe reducer. The following illustrates this reducer: - ->>> desc = ts.describe() ->>> print(desc) -min inter-arrival-time: 1 -max inter-arrival-time: 1 -mean inter-arrival-time: 1.0 -top: null -unique: 6 -frequency: 1 -first: TimeStamp: 0 Value: 1.0 -last: TimeStamp: 5 Value: 6.0 -count: 6 -mean:3.5 -std:1.707825127659933 -min:1.0 -max:6.0 -25%:1.75 -50%:3.5 -75%:5.25 - - - -" -0108F00736882AC35E3C56CD3CE0D91BCB5798A8_6,0108F00736882AC35E3C56CD3CE0D91BCB5798A8," Temporal joins - -The library includes functions for temporal joins or joining time series based on their timestamps. The join functions are similar to those in a database, including left, right, outer, inner, left outer, right outer joins, and so on. The following sample codes shows some of these join functions: - - Create a collection of observations (materialized TimeSeries) -observations_left = tspy.observations(tspy.observation(1, 0.0), tspy.observation(3, 1.0), tspy.observation(8, 3.0), tspy.observation(9, 2.5)) -observations_right = tspy.observations(tspy.observation(2, 2.0), tspy.observation(3, 1.5), tspy.observation(7, 4.0), tspy.observation(9, 5.5), tspy.observation(10, 4.5)) - - Build TimeSeries from Observations -ts_left = observations_left.to_time_series() -ts_right = observations_right.to_time_series() - - Perform full join -ts_full = ts_left.full_join(ts_right) -print(ts_full) - -TimeStamp: 1 Value: [0.0, null] -TimeStamp: 2 Value: [null, 2.0] -TimeStamp: 3 Value: [1.0, 1.5] -TimeStamp: 7 Value: [null, 4.0] -TimeStamp: 8 Value: [3.0, null] -TimeStamp: 9 Value: [2.5, 5.5] -TimeStamp: 10 Value: [null, 4.5] - - Perform left align with interpolation -ts_left_aligned, ts_right_aligned = ts_left.left_align(ts_right, interpolators.nearest(0.0)) - -print(""left ts result"") -print(ts_left_aligned) -print(""right ts result"") -print(ts_right_aligned) - -left ts result -TimeStamp: 1 Value: 0.0 -TimeStamp: 3 Value: 1.0 -TimeStamp: 8 Value: 3.0 -TimeStamp: 9 Value: 2.5 -right ts result -TimeStamp: 1 Value: 0.0 -TimeStamp: 3 Value: 1.5 -TimeStamp: 8 Value: 4.0 -TimeStamp: 9 Value: 5.5 - -" -0108F00736882AC35E3C56CD3CE0D91BCB5798A8_7,0108F00736882AC35E3C56CD3CE0D91BCB5798A8," Forecasting - -A key functionality provided by the time series library is forecasting. The library includes functions for simple as well as complex forecasting models, including ARIMA, Exponential, Holt-Winters, and BATS. The following example shows the function to create a Holt-Winters: - -import random - -model = tspy.forecasters.hws(samples_per_season=samples_per_season, initial_training_seasons=initial_training_seasons) - -for i in range(100): -timestamp = i -value = random.randint(1,10)* 1.0 -model.update_model(timestamp, value) - -print(model) - -Forecasting Model -Algorithm: HWSAdditive=5 (aLevel=0.001, bSlope=0.001, gSeas=0.001) level=6.087789839896166, slope=0.018901997884893912, seasonal(amp,per,avg)=(1.411203455586738,5, 0,-0.0037471500727535465) - -Is model init-ed -if model.is_initialized(): -print(model.forecast_at(120)) - -6.334135728495107 - -ts = tspy.time_series([float(i) for i in range(10)]) - -print(ts) - -TimeStamp: 0 Value: 0.0 -TimeStamp: 1 Value: 1.0 -TimeStamp: 2 Value: 2.0 -TimeStamp: 3 Value: 3.0 -TimeStamp: 4 Value: 4.0 -TimeStamp: 5 Value: 5.0 -TimeStamp: 6 Value: 6.0 -TimeStamp: 7 Value: 7.0 -TimeStamp: 8 Value: 8.0 -TimeStamp: 9 Value: 9.0 - -num_predictions = 5 -model = tspy.forecasters.auto(8) -confidence = .99 - -predictions = ts.forecast(num_predictions, model, confidence=confidence) - -print(predictions.to_time_series()) - -TimeStamp: 10 Value: {value=10.0, lower_bound=10.0, upper_bound=10.0, error=0.0} -" -0108F00736882AC35E3C56CD3CE0D91BCB5798A8_8,0108F00736882AC35E3C56CD3CE0D91BCB5798A8,"TimeStamp: 11 Value: {value=10.997862810553725, lower_bound=9.934621260488143, upper_bound=12.061104360619307, error=0.41277640121597475} -TimeStamp: 12 Value: {value=11.996821082897318, lower_bound=10.704895525154571, upper_bound=13.288746640640065, error=0.5015571318964149} -TimeStamp: 13 Value: {value=12.995779355240911, lower_bound=11.50957896664928, upper_bound=14.481979743832543, error=0.5769793776877866} -TimeStamp: 14 Value: {value=13.994737627584504, lower_bound=12.33653268707341, upper_bound=15.652942568095598, error=0.6437557559526337} - -print(predictions.to_time_series().to_df()) - -timestamp value lower_bound upper_bound error -0 10 10.000000 10.000000 10.000000 0.000000 -1 11 10.997863 9.934621 12.061104 0.412776 -2 12 11.996821 10.704896 13.288747 0.501557 -3 13 12.995779 11.509579 14.481980 0.576979 -4 14 13.994738 12.336533 15.652943 0.643756 - -" -0108F00736882AC35E3C56CD3CE0D91BCB5798A8_9,0108F00736882AC35E3C56CD3CE0D91BCB5798A8," Time series SQL - -The time series library is tightly integrated with Apache Spark. By using new data types in Spark Catalyst, you are able to perform time series SQL operations that scale out horizontally using Apache Spark. This enables you to easily use time series extensions in IBM Analytics Engine or in solutions that include IBM Analytics Engine functionality like the Watson Studio Spark environments. - -SQL extensions cover most aspects of the time series functions, including segmentation, transformations, reducers, forecasting, and I/O. See [Analyzing time series data](https://cloud.ibm.com/docs/sql-query?topic=sql-query-ts_intro). - -" -0108F00736882AC35E3C56CD3CE0D91BCB5798A8_10,0108F00736882AC35E3C56CD3CE0D91BCB5798A8," Learn more - -To use the tspy Python SDK, see the [tspy Python SDK documentation](https://ibm-cloud.github.io/tspy-docs/). - -Parent topic:[Time series analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib.html) -" -A6587CBE69B6227CE1D087CC141CCF13669F2060_0,A6587CBE69B6227CE1D087CC141CCF13669F2060," Time series key functionality - -The time series library provides various functions on univariate, multivariate, multi-key time series as well as numeric and categorical types. - -The functionality provided by the library can be broadly categorized into: - - - -* Time series I/O, for creating and saving time series data -* Time series functions, transforms, windowing or segmentation, and reducers -* Time series SQL and SQL extensions to Spark to enable executing scalable time series functions - - - -Some of the key functionality is shown in the following sections using examples. - -" -A6587CBE69B6227CE1D087CC141CCF13669F2060_1,A6587CBE69B6227CE1D087CC141CCF13669F2060," Time series I/O - -The primary input and output (I/O) functionality for a time series is through a pandas DataFrame or a Python list. The following code sample shows constructing a time series from a DataFrame: - ->>> import numpy as np ->>> import pandas as pd ->>> data = np.array(['', 'key', 'timestamp', ""value""],'', ""a"", 1, 27], '', ""b"", 3, 4], '', ""a"", 5, 17], '', ""a"", 3, 7], '', ""b"", 2, 45]]) ->>> df = pd.DataFrame(data=data[1:, 1:], index=data[1:, 0], columns=data[0, 1:]).astype(dtype={'key': 'object', 'timestamp': 'int64', 'value': 'float64'}) ->>> df -key timestamp value -a 1 27.0 -b 3 4.0 -a 5 17.0 -a 3 7.0 -b 2 45.0 - -Create a timeseries from a dataframe, providing a timestamp and a value column ->>> ts = tspy.time_series(df, ts_column=""timestamp"", value_column=""value"") ->>> ts -TimeStamp: 1 Value: 27.0 -TimeStamp: 2 Value: 45.0 -TimeStamp: 3 Value: 4.0 -TimeStamp: 3 Value: 7.0 -TimeStamp: 5 Value: 17.0 - -To revert from a time series back to a pandas DataFrame, use the to_df function: - ->>> import tspy ->>> ts_orig = tspy.time_series([1.0, 2.0, 3.0]) ->>> ts_orig -TimeStamp: 0 Value: 1 -TimeStamp: 1 Value: 2 -TimeStamp: 2 Value: 3 - ->>> df = ts_orig.to_df() ->>> df -timestamp value -0 0 1 -1 1 2 -2 2 3 - -" -A6587CBE69B6227CE1D087CC141CCF13669F2060_2,A6587CBE69B6227CE1D087CC141CCF13669F2060," Data model - -Time series data does not have any standards for the model and data types, unlike some data types such as spatial, which are governed by a standard such as Open Geospatial Consortium (OGC). The challenge with time series data is the wide variety of functions that need to be supported, similar to that of Spark Resilient Distributed Datasets (RDD). - -The data model allows for a wide variety of operations ranging across different forms of segmentation or windowing of time series, transformations or conversions of one time series to another, reducers that compute a static value from a time series, joins that join multiple time series, and collectors of time series from different time zones. The time series library enables the plug-and-play of new functions while keeping the core data structure unchangeable. The library also support numeric and categorical typed timeseries. - -With time zones and various human readable time formats, a key aspect of the data model is support for Time Reference System (TRS). Every time series is associated with a TRS (system default), which can be remapped to any specific choice of the user at any time, enabling easy transformation of a specific time series or a segment of a time series. See [Using time reference system](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-reference-system.html). - -Further, with the need for handling large scale time series, the library offers a lazy evaluation construct by providing a mechanism for identifying the maximal narrow temporal dependency. This construct is very similar to that of a Spark computation graph, which also loads data into memory on as needed basis and realizes the computations only when needed. - -" -A6587CBE69B6227CE1D087CC141CCF13669F2060_3,A6587CBE69B6227CE1D087CC141CCF13669F2060," Time series data types - -You can use multiple data types as an element of a time series, spanning numeric, categorical, array, and dictionary data structures. - -The following data types are supported in a time series: - - - - Data type Description - - numeric Time series with univariate observations of numeric type including double and integer. For example:[(1, 7.2), (3, 4.5), (5, 4.5), (5, 4.6), (5, 7.1), (7, 3.9), (9, 1.1)] - numeric array Time series with multivariate observations of numeric type, including double array and integer array. For example: [(1, 7.2, 8.74]), (3, 4.5, 9.44]), (5, 4.5, 10.12]), (5, 4.6, 12.91]), (5, 7.1, 9.90]), (7, 3.9, 3.76])] - string Time series with univariate observations of type string, for example: [(1, ""a""), (3, ""b""), (5, ""c""), (5, ""d""), (5, ""e""), (7, ""f""), (9, ""g"")] - string array Time series with multivariate observations of type string array, for example: [(1, ""a"", ""xq""]), (3, ""b"", ""zr""]), (5, ""c"", ""ms""]), (5, ""d"", ""rt""]), (5, ""e"", ""wu""]), (7, ""f"", ""vv""]), (9, ""g"", ""zw""])] - segment Time series of segments. The output of the segmentBy function, can be any type, including numeric, string, numeric array, and string array. For example: [(1,(1, 7.2), (3, 4.5)]), (5,(5, 4.5), (5, 4.6), (5, 7.1)]), (7,(7, 3.9), (9, 1.1)])] - dictionary Time series of dictionaries. A dictionary can have arbitrary types inside it - - - -" -A6587CBE69B6227CE1D087CC141CCF13669F2060_4,A6587CBE69B6227CE1D087CC141CCF13669F2060," Time series functions - -You can use different functions in the provided time series packages to analyze time series data to extract meaningful information with which to create models that can be used to predict new values based on previously observed values. See [Time series functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-functions.html). - -" -A6587CBE69B6227CE1D087CC141CCF13669F2060_5,A6587CBE69B6227CE1D087CC141CCF13669F2060," Learn more - -To use the tspy Python SDK, see the [tspy Python SDK documentation](https://ibm-cloud.github.io/tspy-docs/). - -Parent topic:[Time series analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib.html) -" -F3C0AD81BBF56463510440F7F81EB146A6C0015C_0,F3C0AD81BBF56463510440F7F81EB146A6C0015C," Time series lazy evaluation - -Lazy evaluation is an evaluation strategy that delays the evaluation of an expression until its value is needed. When combined with memoization, lazy evaluation strategy avoids repeated evaluations and can reduce the running time of certain functions by a significant factor. - -The time series library uses lazy evaluation to process data. Notionally an execution graph is constructed on time series data whose evaluation is triggered only when its output is materialized. Assuming an object is moving in a one dimensional space, whose location is captured by x(t). You can determine the harsh acceleration/braking (h(t)) of this object by using its velocity (v(t)) and acceleration (a(t)) time series as follows: - - 1d location timeseries -x(t) = input location timeseries - - velocity - first derivative of x(t) -v(t) = x(t) - x(t-1) - - acceleration - second derivative of x(t) -a(t) = v(t) - v(t-1) - - harsh acceleration/braking using thresholds on acceleration -h(t) = +1 if a(t) > threshold_acceleration -= -1 if a(t) < threshold_deceleration -= 0 otherwise - -This results in a simple execution graph of the form: - -x(t) --> v(t) --> a(t) --> h(t) - -Evaluations are triggered only when an action is performed, such as compute h(5...10), i.e. compute h(5), ..., h(10). The library captures narrow temporal dependencies between time series. In this example, h(5...10) requires a(5...10), which in turn requires v(4...10), which then requires x(3...10). Only the relevant portions of a(t), v(t) and x(t) are evaluated. - -h(5...10) <-- a(5...10) <-- v(4...10) <-- x(3...10) - -" -F3C0AD81BBF56463510440F7F81EB146A6C0015C_1,F3C0AD81BBF56463510440F7F81EB146A6C0015C,"Furthermore, evaluations are memoized and can thus be reused in subsequent actions on h. For example, when a request for h(7...12) follows a request for h(5...10), the memoized values h(7...10) would be leveraged; further, h(11...12) would be evaluated using a(11...12), v(10...12) and x(9...12), which would in turn leverage v(10) and x(9...10) memoized from the prior computation. - -In a more general example, you could define a smoothened velocity timeseries as follows: - - 1d location timeseries -x(t) = input location timeseries - - velocity - first derivative of x(t) -v(t) = x(t) - x(t-1) - - smoothened velocity - alpha is the smoothing factor - n is a smoothing history -v_smooth(t) = (v(t)1.0 + v(t-1)alpha + ... + v(t-n)alpha^n) / (1 + alpha + ... + alpha^n) - - acceleration - second derivative of x(t) -a(t) = v_smooth(t) - v_smooth(t-1) - -In this example h(l...u) has the following temporal dependency. Evaluation of h(l...u) would strictly adhere to this temporal dependency with memoization. - -h(l...u) <-- a(l...u) <-- v_smooth(l-1...u) <-- v(l-n-1...u) <-- x(l-n-2...u) - -" -F3C0AD81BBF56463510440F7F81EB146A6C0015C_2,F3C0AD81BBF56463510440F7F81EB146A6C0015C," An Example - -The following example shows a python code snippet that implements harsh acceleration on a simple in-memory time series. The library includes several built-in transforms. In this example the difference transform is applied twice to the location time series to compute acceleration time series. A map operation is applied to the acceleration time series using a harsh lambda function, which is defined after the code sample, that maps acceleration to either +1 (harsh acceleration), -1 (harsh braking) and 0 (otherwise). The filter operation selects only instances wherein either harsh acceleration or harsh braking is observed. Prior to calling get_values, an execution graph is created, but no computations are performed. On calling get_values(5, 10), the evaluation is performed with memoization on the narrowest possible temporal dependency in the execution graph. - -import tspy -from tspy.builders.functions import transformers - -x = tspy.time_series([1.0, 2.0, 4.0, 7.0, 11.0, 16.0, 22.0, 29.0, 28.0, 30.0, 29.0, 30.0, 30.0]) -v = x.transform(transformers.difference()) -a = v.transform(transformers.difference()) -h = a.map(harsh).filter(lambda h: h != 0) - -print(h[5, 10]) - -The harsh lambda is defined as follows: - -def harsh(a): -threshold_acceleration = 2.0 -threshold_braking = -2.0 - -if (a > threshold_acceleration): -return +1 -elif (a < threshold_braking): -return -1 -else: -return 0 - -" -F3C0AD81BBF56463510440F7F81EB146A6C0015C_3,F3C0AD81BBF56463510440F7F81EB146A6C0015C," Learn more - -To use the tspy Python SDK, see the [tspy Python SDK documentation](https://ibm-cloud.github.io/tspy-docs/). - -Parent topic:[Time series analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib.html) -" -22D15F386DC333BC069EEA8671E895C97956E754_0,22D15F386DC333BC069EEA8671E895C97956E754," Using the time series library - -To get started working with the time series library, import the library to your Python notebook or application. - -Use this command to import the time series library: - - Import the package -import tspy - -" -22D15F386DC333BC069EEA8671E895C97956E754_1,22D15F386DC333BC069EEA8671E895C97956E754," Creating a time series - -To create a time series and use the library functions, you must decide on the data source. Supported data sources include: - - - -* In-memory lists -* pandas DataFrames -* In-memory collections of observations (using the ObservationCollection construct) -* User-defined readers (using the TimeSeriesReader construct) - - - -The following example shows ingesting data from an in-memory list: - -ts = tspy.time_series([5.0, 2.0, 4.0, 6.0, 6.0, 7.0]) -ts - -The output is as follows: - -TimeStamp: 0 Value: 5.0 -TimeStamp: 1 Value: 2.0 -TimeStamp: 2 Value: 4.0 -TimeStamp: 3 Value: 6.0 -TimeStamp: 4 Value: 6.0 -TimeStamp: 5 Value: 7.0 - -You can also operate on many time-series at the same time by using the MultiTimeSeries construct. A MultiTimeSeries is essentially a dictionary of time series, where each time series has its own unique key. The time series are not aligned in time. - -The MultiTimeSeries construct provides similar methods for transforming and ingesting as the single time series construct: - -mts = tspy.multi_time_series({ -""ts1"": tspy.time_series([1.0, 2.0, 3.0]), -""ts2"": tspy.time_series([5.0, 2.0, 4.0, 5.0]) -}) - -The output is the following: - -ts2 time series ------------------------------- -TimeStamp: 0 Value: 5.0 -TimeStamp: 1 Value: 2.0 -TimeStamp: 2 Value: 4.0 -TimeStamp: 3 Value: 5.0 -ts1 time series ------------------------------- -TimeStamp: 0 Value: 1.0 -TimeStamp: 1 Value: 2.0 -TimeStamp: 2 Value: 3.0 - -" -22D15F386DC333BC069EEA8671E895C97956E754_2,22D15F386DC333BC069EEA8671E895C97956E754," Interpreting time - -By default, a time series uses a long data type to denote when a given observation was created, which is referred to as a time tick. A time reference system is used for time series with timestamps that are human interpretable. See [Using time reference system](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-reference-system.html). - -The following example shows how to create a simple time series where each index denotes a day after the start time of 1990-01-01: - -import datetime -granularity = datetime.timedelta(days=1) -start_time = datetime.datetime(1990, 1, 1, 0, 0, 0, 0, tzinfo=datetime.timezone.utc) - -ts = tspy.time_series([5.0, 2.0, 4.0, 6.0, 6.0, 7.0], granularity=granularity, start_time=start_time) -ts - -The output is as follows: - -TimeStamp: 1990-01-01T00:00Z Value: 5.0 -TimeStamp: 1990-01-02T00:00Z Value: 2.0 -TimeStamp: 1990-01-03T00:00Z Value: 4.0 -TimeStamp: 1990-01-04T00:00Z Value: 6.0 -TimeStamp: 1990-01-05T00:00Z Value: 6.0 -TimeStamp: 1990-01-06T00:00Z Value: 7.0 - -" -22D15F386DC333BC069EEA8671E895C97956E754_3,22D15F386DC333BC069EEA8671E895C97956E754," Performing simple transformations - -Transformations are functions which, when given one or more time series, return a new time series. - -For example, to segment a time series into windows where each window is of size=3, sliding by 2 records, you can use the following method: - -window_ts = ts.segment(3, 2) -window_ts - -The output is as follows: - -TimeStamp: 0 Value: original bounds: (0,2) actual bounds: (0,2) observations: [(0,5.0),(1,2.0),(2,4.0)] -TimeStamp: 2 Value: original bounds: (2,4) actual bounds: (2,4) observations: [(2,4.0),(3,6.0),(4,6.0)] - -This example shows adding 1 to each value in a time series: - -add_one_ts = ts.map(lambda x: x + 1) -add_one_ts - -The output is as follows: - -TimeStamp: 0 Value: 6.0 -TimeStamp: 1 Value: 3.0 -TimeStamp: 2 Value: 5.0 -TimeStamp: 3 Value: 7.0 -TimeStamp: 4 Value: 7.0 -TimeStamp: 5 Value: 8.0 - -Or you can temporally left join a time series, for example ts with another time series ts2: - -ts2 = tspy.time_series([1.0, 2.0, 3.0]) -joined_ts = ts.left_join(ts2) -joined_ts - -The output is as follows: - -TimeStamp: 0 Value: [5.0, 1.0] -TimeStamp: 1 Value: [2.0, 2.0] -TimeStamp: 2 Value: [4.0, 3.0] -TimeStamp: 3 Value: [6.0, null] -TimeStamp: 4 Value: [6.0, null] -TimeStamp: 5 Value: [7.0, null] - -" -22D15F386DC333BC069EEA8671E895C97956E754_4,22D15F386DC333BC069EEA8671E895C97956E754," Using transformers - -A rich suite of built-in transformers is provided in the transformers package. Import the package to use the provided transformer functions: - -from tspy.builders.functions import transformers - -After you have added the package, you can transform data in a time series be using the transform method. - -For example, to perform a difference on a time-series: - -ts_diff = ts.transform(transformers.difference()) - -Here the output is: - -TimeStamp: 1 Value: -3.0 -TimeStamp: 2 Value: 2.0 -TimeStamp: 3 Value: 2.0 -TimeStamp: 4 Value: 0.0 -TimeStamp: 5 Value: 1.0 - -" -22D15F386DC333BC069EEA8671E895C97956E754_5,22D15F386DC333BC069EEA8671E895C97956E754," Using reducers - -Similar to the transformers package, you can reduce a time series by using methods provided by the reducers package. You can import the reducers package as follows: - -from tspy.builders.functions import reducers - -After you have imported the package, use the reduce method to get the average over a time-series for example: - -avg = ts.reduce(reducers.average()) -avg - -This outputs: - -5.0 - -Reducers have a special property that enables them to be used alongside segmentation transformations (hourly sum, avg in the window prior to an error occurring, and others). Because the output of a segmentation + reducer is a time series, the transform method is used. - -For example, to segment into windows of size 3 and get the average across each window, use: - -avg_windows_ts = ts.segment(3).transform(reducers.average()) - -This results in: - -imeStamp: 0 Value: 3.6666666666666665 -TimeStamp: 1 Value: 4.0 -TimeStamp: 2 Value: 5.333333333333333 -TimeStamp: 3 Value: 6.333333333333333 - -" -22D15F386DC333BC069EEA8671E895C97956E754_6,22D15F386DC333BC069EEA8671E895C97956E754," Graphing time series - -Lazy evaluation is used when graphing a time series. When you graph a time series, you can do one of the following: - - - -* Collect the observations of the time series, which returns an BoundTimeSeries -* Reduce the time series to a value or collection of values -* Perform save or print operations - - - -For example, to collect and return all of the values of a timeseries: - -observations = ts.materialize() -observations - -This results in: - -[(0,5.0),(1,2.0),(2,4.0),(3,6.0),(4,6.0),(5,7.0)] - -To collect a range from a time series, use: - -observations = ts[1:3] same as ts.materialize(1, 3) -observations - -Here the output is: - -[(1,2.0),(2,4.0),(3,6.0)] - -Note that a time series is optimized for range queries if the time series is periodic in nature. - -Using the describe on a current time series, also graphs the time series: - -describe_obj = ts.describe() -describe_obj - -The output is: - -min inter-arrival-time: 1 -max inter-arrival-time: 1 -mean inter-arrival-time: 1.0 -top: 6.0 -unique: 5 -frequency: 2 -first: TimeStamp: 0 Value: 5.0 -last: TimeStamp: 5 Value: 7.0 -count: 6 -mean:5.0 -std:1.632993161855452 -min:2.0 -max:7.0 -25%:3.5 -50%:5.5 -75%:6.25 - -" -22D15F386DC333BC069EEA8671E895C97956E754_7,22D15F386DC333BC069EEA8671E895C97956E754," Learn more - - - -* [Time series key functionality](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-key-functionality.html) -* [Time series functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-functions.html) -* [Time series lazy evaluation](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lazy-evaluation.html) -* [Using time reference system](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-reference-system.html) -* [tspy Python SDK documentation](https://ibm-cloud.github.io/tspy-docs/) - - - -Parent topic:[Time series analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib.html) -" -D5521B8EC8CED84A2E383B3B6D5BC20795EF87B7_0,D5521B8EC8CED84A2E383B3B6D5BC20795EF87B7," Time series analysis - -A time series is a sequence of data values measured at successive, though not necessarily regular, points in time. The time series library allows you to perform various key operations on time series data, including segmentation, forecasting, joins, transforms, and reducers. - -The library supports various time series types, including numeric, categorical, and arrays. Examples of time series data include: - - - -* Stock share prices and trading volumes -* Clickstream data -* Electrocardiogram (ECG) data -* Temperature or seismographic data -* Network performance measurements -* Network logs -* Electricity usage as recorded by a smart meter and reported via an Internet of Things data feed - - - -An entry in a time series is called an observation. Each observation comprises a time tick, a 64-bit integer that indicates when the observation was made, and the data that was recorded for that observation. The recorded data can be either numerical, for example, a temperature or a stock share price, or categorical, for example, a geographic area. A time series can but must not necessarily be associated with a time reference system (TRS), which defines the granularity of each time tick and the start time. - -The time series library is Python only. - -" -D5521B8EC8CED84A2E383B3B6D5BC20795EF87B7_1,D5521B8EC8CED84A2E383B3B6D5BC20795EF87B7," Next step - - - -* [Using the time series library](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib-using.html) - - - -" -D5521B8EC8CED84A2E383B3B6D5BC20795EF87B7_2,D5521B8EC8CED84A2E383B3B6D5BC20795EF87B7," Learn more - - - -* [Time series key functionality](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-key-functionality.html) -" -7E8D1F67FD96A81FF6D9459C1310919908000CBF_0,7E8D1F67FD96A81FF6D9459C1310919908000CBF," Exporting synthetic data - -Using Synthetic Data Generator, you can export synthetic data to remote data sources using connections or write data to a project (Delimited or SAV). - -Double-click the node to open its properties. Various options are available, described as follows. After running the node, you can find the data at the export location you specified. - -" -7E8D1F67FD96A81FF6D9459C1310919908000CBF_1,7E8D1F67FD96A81FF6D9459C1310919908000CBF," Exporting to a project - -Under Export to, select This project and then select the project path. For File type, select either Delimited or SAV. - -" -7E8D1F67FD96A81FF6D9459C1310919908000CBF_2,7E8D1F67FD96A81FF6D9459C1310919908000CBF," Exporting to a connection - -Under Export to, select Save to a connection to open the Asset Browser and then select the connection to export to. For a list of supported data sources, see [Creating synthetic data from imported data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/import_data_sd.html). - -" -7E8D1F67FD96A81FF6D9459C1310919908000CBF_3,7E8D1F67FD96A81FF6D9459C1310919908000CBF," Setting the field delimiter, quote character, and decimal symbol - -Different countries use different symbols to separate the integer part from the fractional part of a number and to separate fields in data. For example, you might use a comma instead of a period to separate the integer part from the fractional part of numbers. And, rather than using commas to separate fields in your data, you might use colons or tabs. With a Data Asset import or export node, you can specify these symbols and other options. Double-click the node to open its properties and specify data formats as desired. ![Export data field delimiters](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/images/sd-export-data-field-delimiters.png) -" -C52D7D525C33EB8FA5B5ACC8B16243223D78AC68_0,C52D7D525C33EB8FA5B5ACC8B16243223D78AC68," Creating synthetic data from a custom data schema - -Using the Synthetic Data Generator graphical editor flow tool, you can generate a structured synthetic data set based on meta data, automatically or with user-specified statistical distributions. You can define the data within each table column, their distributions, and any correlations. You can then export and review your synthetic data. - -Before you can use generate to create synthetic data, you need [to create a task](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.htmlcreate-synthetic). - -1. The Generate synthetic tabular data flow window opens. Select use case Create from custom data schema. Click Next. ![Generate synthetic tabular data flow window](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/images/sd-generate-flow.png) - -2. Select Generate options. You can use the Synthetic Data Generator graphical editor flow tool to specify the number of rows and add columns. You can define properties and specify fields, storage types, statistical distributions, and distribution parameters. Click Next. ![Generate options](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/images/sd-generate-options.png) - -3. Select Export data to select the export file name and type. For more information, see [Exporting data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/export_data_sd.html). Click Next. ![Export data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/images/sd-generate-export.png) - -" -C52D7D525C33EB8FA5B5ACC8B16243223D78AC68_1,C52D7D525C33EB8FA5B5ACC8B16243223D78AC68,"4. Select Review to check your selection and make any updates before generating your synthetic data. Click Save and run. ![Review data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/images/sd-generate-review.png) - -" -C52D7D525C33EB8FA5B5ACC8B16243223D78AC68_2,C52D7D525C33EB8FA5B5ACC8B16243223D78AC68," Learn more - -[Creating synthetic data from production data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/mask_mimic_data_sd.html) -" -204F36069EE071B185A1BCE8370946A50BDDCDD5_0,204F36069EE071B185A1BCE8370946A50BDDCDD5," Creating synthetic data from imported data - -Supported data sources for Synthetic Data Generator. - -Using Synthetic Data Generator, you can connect to your data no matter where it lives, using either connectors or data files. - -" -204F36069EE071B185A1BCE8370946A50BDDCDD5_1,204F36069EE071B185A1BCE8370946A50BDDCDD5," Data size - -The Synthetic Data Generator environment can import up to 2.5GB of data. - -" -204F36069EE071B185A1BCE8370946A50BDDCDD5_2,204F36069EE071B185A1BCE8370946A50BDDCDD5," Connectors - -The following table lists the data sources that you can connect to using Synthetic Data Generator. - - - - Connector Read Only Read & Write Notes - - [Amazon RDS for MySQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-mysql.html) ✓ Replace the data set option isn't supported for this connection. - [Amazon RDS for PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-postresql.html) ✓ Replace the data set option isn't supported for this connection. - [Amazon Redshift](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-redshift.html) ✓ - [Amazon S3](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-amazon-s3.html) ✓ - [Apache Cassandra](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cassandra.html) ✓ - [Apache Derby](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-derby.html) ✓ - [Apache HDFS (formerly known as ""Hortonworks HDFS"")](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-hdfs.html) ✓ - [Apache Hive](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-hive.html) ✓ - [Box](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-box.html) ✓ ✓ - [Cloud Object-Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) ✓ -" -204F36069EE071B185A1BCE8370946A50BDDCDD5_3,204F36069EE071B185A1BCE8370946A50BDDCDD5," [Cloud Object-Storage (infrastructure)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html) ✓ - [Cloudant](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloudant.html) ✓ - [Cloudera Impala](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloudera.html) ✓ - [Cognos-Analytics](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cognos.html) ✓ - [Data Virtualization Manager for z/OS](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-datavirt-z.html) ✓ - [Db2](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2.html) ✓ - [Db2 Big SQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-bigsql.html) ✓ - [Db2 for i](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2i.html) ✓ - [Db2 for z/OS](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2zos.html) ✓ - [Db2 on Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-cloud.html) ✓ - [Db2 Warehouse](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-wh.html) ✓ -" -204F36069EE071B185A1BCE8370946A50BDDCDD5_4,204F36069EE071B185A1BCE8370946A50BDDCDD5," [Dropbox](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-dropbox.html) ✓ - [FTP (remote file system transfer)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-ftp.html) ✓ - [Google BigQuery](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-bigquery.html) ✓ - [Google Cloud Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloud-storage.html) ✓ - [Greenplum](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-greenplum.html) ✓ - [HTTP](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-http.html) ✓ - [IBM Cloud Databases for MySQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-compose-mysql.html) ✓ - [IBM Cloud Data Engine](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sqlquery.html) ✓ - [IBM Cloud Databases for DataStax](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-datastax.html) ✓ - [IBM Cloud Databases for MongoDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mongodb.html) ✓ - [IBM Cloud Databases for PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-dbase-postgresql.html) ✓ -" -204F36069EE071B185A1BCE8370946A50BDDCDD5_5,204F36069EE071B185A1BCE8370946A50BDDCDD5," [IBM Watson Query](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-data-virtual.html) ✓ - [Informix](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-informix.html) ✓ - [Looker](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-looker.html) ✓ - [MariaDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mariadb.html) - [Microsoft Azure Blob Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azureblob.html) ✓ - [Microsoft Azure Cosmos DB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cosmosdb.html) ✓ - [Microsoft Azure Data Lake Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azuredls.html) ✓ - [Microsoft Azure File Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azurefs.html) ✓ - [Microsoft Azure SQL Database](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azure-sql.html) ✓ - [Microsoft SQL Server](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sql-server.html) ✓ SQL pushback isn't supported when Active Directory is enabled. - [MongoDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mongodb.html) ✓ -" -204F36069EE071B185A1BCE8370946A50BDDCDD5_6,204F36069EE071B185A1BCE8370946A50BDDCDD5," [MySQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mysql.html) ✓ - [Netezza Performance Server](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-puredata.html) ✓ - [OData](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-odata.html) ✓ - [Oracle](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-oracle.html) ✓ - [Planning Analytics (formerly known as ""IBM TM1"")](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-plananalytics.html) ✓ Only the Replace the data set option is supported. - [PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-postgresql.html) ✓ - [Presto](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-presto.html) ✓ - [Salesforce.com](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-salesforce.html) ✓ - [SAP ASE](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sap-ase.html) ✓ - [SAP IQ](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sap-iq.html) ✓ - [SAP OData](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sapodata.html) ✓ -" -204F36069EE071B185A1BCE8370946A50BDDCDD5_7,204F36069EE071B185A1BCE8370946A50BDDCDD5," [Snowflake](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-snowflake.html) ✓ - [Tableau](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-tableau.html) ✓ - [Teradata](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-teradata.html) ✓ - - - -" -204F36069EE071B185A1BCE8370946A50BDDCDD5_8,204F36069EE071B185A1BCE8370946A50BDDCDD5," Data files - -In addition to using data from remote data sources or integrated databases, you can use data from files. You can work with data from the following types of files using Synthetic Data Generator. - - - - Connector Read Only Read & Write - - AVRO ✓ - CSV/delimited ✓ - Excel (XLS, XLSX) ✓ - JSON ✓ - ORC - Parquet - SAS ✓ - SAV ✓ -" -971AE69D7D2A527C25F31A6C8D8D64EE68B48519_0,971AE69D7D2A527C25F31A6C8D8D64EE68B48519," Creating synthetic data from production data - -Using the Synthetic Data Generator graphical editor flow tool, you can generate a structured synthetic data set based on your production data. You can import data, anonymize, mimic (to generate synthetic data), export, and review your data. - -Before you can use mimic and mask to create synthetic data, you need [to create a task](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.htmlcreate-synthetic). - -1. The Generate synthetic tabular data flow window opens. Select use case Leverage your existing data. Click Next. ![Generate synthetic tabular data flow window](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/images/sd-mimic-mask-flow.png) - -2. Select Import data. You can also drag-and-drop a data file into your project. You can also select data from a project. For more information, see [Importing data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/import_data_sd.html). ![Import data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/images/sd-import-data.png) - -3. Once you have imported your data, you can use the Synthetic Data Generator graphical flow editor tool to anonymize your production data, masking the data. You can disguise column names, column values, or both, when working with data that is to be included in a model downstream of the node. For example, you can use bank customer data and hide marital status. ![Anonymize data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/images/sd-anonymize.png) - -" -971AE69D7D2A527C25F31A6C8D8D64EE68B48519_1,971AE69D7D2A527C25F31A6C8D8D64EE68B48519,"4. You can then use the Synthetic Data Generator tool to mimic your production data. This will generate synthetic data, based on your production data, using a set of candidate statistical distributions to modify each column in your data. ![Mimic data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/images/sd-mimic-options.png) - -5. You can export your synthetic data and review it. For more information, see [Exporting synthetic data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/export_data_sd.html). ![Export data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/images/sd-export.png) - -" -971AE69D7D2A527C25F31A6C8D8D64EE68B48519_2,971AE69D7D2A527C25F31A6C8D8D64EE68B48519," Learn more - -[Creating synthetic data from a custom data schema](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/generate_data_sd.html) -" -30A8256A4972314DA32827A081B7541138B454A9_0,30A8256A4972314DA32827A081B7541138B454A9," Creating Synthetic data - -Use the graphical flow editor tool Synthetic Data Generator to generate synthetic tabular data based on production data or a custom data schema using visual flows and modeling algorithms. - -To create synthetic data, the first option is to use the Synthetic Data Generator graphical flow editor tool to mask and mimic production data, and then to load the result into a different location. - -The second option is to use the Synthetic Data Generator graphical flow editor to generate synthetic data from a custom data schema using visual flows and modeling algorithms. - -This image shows an overview of the Synthetic Data Generator graphical flow editor. ![Synthetic Data Generator overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/images/sd-overview.png) - -Data format Learn more about [Creating synthetic data from imported data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/import_data_sd.html). - -Data size : The Synthetic Data Generator environment can import up to 2.5GB of data. - -" -30A8256A4972314DA32827A081B7541138B454A9_1,30A8256A4972314DA32827A081B7541138B454A9," Prerequisites - -Before you can create synthetic data, you need [to create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html). - -" -30A8256A4972314DA32827A081B7541138B454A9_2,30A8256A4972314DA32827A081B7541138B454A9," Create synthetic data - -1. Access the Synthetic Data Generator tool from within a project. To select a new asset, open a tool, and create an asset, click New asset. - -2. Select All > Prepare Data > Generate synthetic tabular data from the What do you want to do? window. ![What do you want to do window](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/images/sd-what-do-you-want.png) - -3. The Generate synthetic tabular data window opens. Add a name for the asset and a description (optional). Click Create. The flow will open and it might take a minute to create a new session for the flow. ![Generate synthetic tabular data flow asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/images/sd-generate-synthetic-tabular-data-flow-asset.png) - -4. The Welcome to Synthetic Data Generator wizard opens. You can choose to get started as a first time or experienced user. -![Synthetic Data Generator Get started wizard](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/images/sd-wizard.png) - -5. If you choose to get started as a first time user, the Generate synthetic tabular data flow window opens. ![Generate synthetic tabular data flow window](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/images/sd-mimic-mask-flow.png) - -" -30A8256A4972314DA32827A081B7541138B454A9_3,30A8256A4972314DA32827A081B7541138B454A9," Learn more - - - -* [Creating synthetic data from production data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/mask_mimic_data_sd.html) -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_0,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Troubleshoot Watson Machine Learning - -Here are the answers to common troubleshooting questions about using IBM Watson Machine Learning. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_1,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Getting help and support for Watson Machine Learning - -If you have problems or questions when using Watson Machine Learning, you can get help by searching for information or by asking questions through a forum. You can also open a support ticket. - -When using the forums to ask a question, tag your question so that it is seen by the Watson Machine Learning development teams. - -If you have technical questions about Watson Machine Learning, post your question on [Stack Overflow ![External link icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/launch-glyph.png)](http://stackoverflow.com/search?q=machine-learning+ibm-bluemix) and tag your question with ""ibm-bluemix"" and ""machine-learning"". - -For questions about the service and getting started instructions, use the [IBM developerWorks dW Answers ![External link icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/launch-glyph.png)](https://developer.ibm.com/answers/topics/machine-learning/?smartspace=bluemix) forum. Include the ""machine-learning"" and ""bluemix"" tags. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_2,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Contents - - - -* [Authorization token has not been provided](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_missing_authorization_token) -* [Invalid authorization token](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_invalid_authorization_token) -* [Authorization token and instance_id which was used in the request are not the same](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_not_matching_authorization_token) -* [Authorization token is expired](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_expired_authorization_token) -* [Public key needed for authentication is not available](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_missing_public_key) -* [Operation timed out after {{timeout}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_operation_timeout) -* [Unhandled exception of type {{type}} with {{status}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_unhandled_exception_with_status) -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_3,448502B5D06CD5BCAA58F569AA43AA2E0394A794,"* [Unhandled exception of type {{type}} with {{response}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_unhandled_exception_with_response) -* [Unhandled exception of type {{type}} with {{json}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_unhandled_exception_with_json) -* [Unhandled exception of type {{type}} with {{message}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_unhandled_exception_with_message) -* [Requested object could not be found](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_not_found) -* [Underlying database reported too many requests](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_too_many_cloudant_requests) -* [The definition of the evaluation is not defined neither in the artifactModelVersion nor in the deployment. It needs to be specified \"" +\n \""at least in one of the places](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_missing_evaluation_definition) -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_4,448502B5D06CD5BCAA58F569AA43AA2E0394A794,"* [Data module not found in IBM Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=enfl_data_module_missing) -* [Evaluation requires learning configuration specified for the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_missing_learning_configuration) -* [Evaluation requires spark instance to be provided in X-Spark-Service-Instance header](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_missing_spark_definition_for_evaluation) -* [Model does not contain any version](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_missing_latest_model_version) -* [Patch operation can only modify existing learning configuration](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_patch_non_existing_learning_configuration) -* [Patch operation expects exactly one replace operation](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_patch_multiple_ops) -* [The given payload is missing required fields: FIELD or the values of the fields are corrupted](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_invalid_request_payload) -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_5,448502B5D06CD5BCAA58F569AA43AA2E0394A794,"* [Provided evaluation method: METHOD is not supported. Supported values: VALUE](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_evaluation_method_not_supported) -* [There can be only one active evaluation per model. Request could not be completed because of existing active evaluation: {{url}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_active_evaluation_conflict) -* [The deployment type {{type}} is not supported](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_not_supported_deployment_type) -* [Incorrect input: ({{message}})](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_deserialization_error) -* [Insufficient data - metric {{name}} could not be calculated](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_missing_metric) -* [For type {{type}} spark instance must be provided in X-Spark-Service-Instance header](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_missing_prediction_spark_definition) -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_6,448502B5D06CD5BCAA58F569AA43AA2E0394A794,"* [Action {{action}} has failed with message {{message}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_http_client_error) -* [Path {{path}} is not allowed. Only allowed path for patch stream is /status](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_wrong_stream_patch_path) -* [Patch operation is not allowed for instance of type {{$type}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_patch_not_supported) -* [Data connection {{data}} is invalid for feedback_data_ref](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_invalid_feedback_data_connection) -* [Path {{path}} is not allowed. Only allowed path for patch model is /deployed_version/url or /deployed_version/href for V2](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_patch_model_path_not_allowed) -* [Parsing failure: {{msg}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_parsing_error) -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_7,448502B5D06CD5BCAA58F569AA43AA2E0394A794,"* [Runtime environment for selected model: {{env}} is not supported for learning configuration. Supported environments: - [{{supported_envs}}]](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_runtime_env_not_supported) -* [Current plan \'{{plan}}\' only allows {{limit}} deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_deployments_plan_limit_reached) -* [Database connection definition is not valid ({{code}})](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_sql_error) -* [There were problems while connecting underlying {{system}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_stream_tcp_error) -* [Error extracting X-Spark-Service-Instance header: ({{message}})](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_spark_header_deserialization_error) -* [This functionality is forbidden for non beta users](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_not_beta_user) -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_8,448502B5D06CD5BCAA58F569AA43AA2E0394A794,"* [{{code}} {{message}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_underlying_api_error) -* [Rate limit exceeded](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_rate_limit_exceeded) -* [Invalid query parameter {{paramName}} value: {{value}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_invalid_query_parameter_value) -* [Invalid token type: {{type}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_invalid_token_type) -* [Invalid token format. Bearer token format should be used](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=ents_invalid_token_format) -* [Input JSON file is missing or invalid: 400](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=enos_invalid_input) -* [Authorization token has expired: 401](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=enos_expired_authorization_token) -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_9,448502B5D06CD5BCAA58F569AA43AA2E0394A794,"* [Unknown deployment identification:404](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=enos_unkown_depid) -* [Internal server error:500](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=enos_internal_error) -* [Invalid type for ml_artifact: Pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=enos_invalid_type_artifact) -* [ValueError: Training_data_ref name and connection cannot be None, if Pipeline Artifact is not given.](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html?context=cdpaas&locale=enpipeline_error) - - - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_10,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Authorization token has not been provided. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_11,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -The REST API cannot be invoked successfully. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_12,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -Authorization token has not been provided in the Authorization header. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_13,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Pass authorization token in the Authorization header. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_14,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Invalid authorization token. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_15,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -The REST API cannot be invoked successfully. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_16,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -Authorization token which has been provided cannot be decoded or parsed. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_17,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Pass correct authorization token in the Authorization header. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_18,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Authorization token and instance_id which was used in the request are not the same. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_19,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -The REST API cannot be invoked successfully. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_20,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -The Authorization token which has been used is not generated for the service instance against which it was used. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_21,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Pass authorization token in the Authorization header which corresponds to the service instance which is being used. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_22,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Authorization token is expired. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_23,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -The REST API cannot be invoked successfully. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_24,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -Authorization token is expired. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_25,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Pass not expired authorization token in the Authorization header. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_26,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Public key needed for authentication is not available. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_27,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -The REST API cannot be invoked successfully. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_28,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -This is internal service issue. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_29,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -The issue needs to be fixed by support team. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_30,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Operation timed out after {{timeout}} - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_31,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -The REST API cannot be invoked successfully. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_32,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -The timeout occurred during performing requested operation. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_33,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Try to invoke desired operation again. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_34,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Unhandled exception of type {{type}} with {{status}} - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_35,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -The REST API cannot be invoked successfully. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_36,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -This is internal service issue. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_37,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Try to invoke desired operation again. If it occurs more times than it needs to be fixed by support team. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_38,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Unhandled exception of type {{type}} with {{response}} - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_39,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -The REST API cannot be invoked successfully. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_40,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -This is internal service issue. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_41,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Try to invoke desired operation again. If it occurs more times than it needs to be fixed by support team. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_42,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Unhandled exception of type {{type}} with {{json}} - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_43,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -The REST API cannot be invoked successfully. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_44,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -This is internal service issue. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_45,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Try to invoke desired operation again. If it occurs more times than it needs to be fixed by support team. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_46,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Unhandled exception of type {{type}} with {{message}} - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_47,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -The REST API cannot be invoked successfully. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_48,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -This is internal service issue. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_49,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Try to invoke desired operation again. If it occurs more times than it needs to be fixed by support team. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_50,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Requested object could not be found. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_51,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -The REST API cannot be invoked successfully. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_52,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -The request resource could not be found. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_53,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Ensure that you are referring to the existing resource. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_54,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Underlying database reported too many requests. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_55,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -The REST API cannot be invoked successfully. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_56,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -The user has sent too many requests in a given amount of time. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_57,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Try to invoke desired operation again. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_58,448502B5D06CD5BCAA58F569AA43AA2E0394A794," The definition of the evaluation is not defined neither in the artifactModelVersion nor in the deployment. It needs to be specified \"" +\n \""at least in one of the places. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_59,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -The REST API cannot be invoked successfully. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_60,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -Learning Configuration does not contain all required information - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_61,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Provide definition in learning configuration - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_62,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Evaluation requires learning configuration specified for the model. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_63,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -There is no possibility to create learning iteration. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_64,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -There is no learning configuration defined for the model. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_65,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Create learning configuration and try to create learning iteration again. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_66,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Evaluation requires spark instance to be provided in X-Spark-Service-Instance header - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_67,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -The REST API cannot be invoked successfully. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_68,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -There is no all required information in learning configuration - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_69,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Provide spark_service in Learning Configuration or in X-Spark-Service-Instance header - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_70,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Model does not contain any version. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_71,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -There is no possibility to create neither deployment nor set learning configuration. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_72,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -There is inconsistency related to the persistence of the model. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_73,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Try to persist the model again and try perform the action again. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_74,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Data module not found in IBM Federated Learning. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_75,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -The data handler for IBM Federated Learning is trying to extract a data module from the FL library but is unable to find it. You might see the following error message: - -ModuleNotFoundError: No module named 'ibmfl.util.datasets' - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_76,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -Possibly an outdated DataHandler. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_77,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Please review and update your DataHandler to conform to the latest spec. Here is the link to the most recent [MNIST data handler](https://github.com/IBMDataScience/sample-notebooks/blob/master/Files/mnist_keras_data_handler.py) or ensure your sample versions are up-to-date. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_78,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Patch operation can only modify existing learning configuration. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_79,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -There is no possibility to invoke patch REST API method to patch learning configuration. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_80,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -There is no learning configuration set for this model or model does not exist. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_81,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Endure that model exists and has already learning configuration set. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_82,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Patch operation expects exactly one replace operation. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_83,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -The deployment cannot be patched. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_84,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -The patch payload contains more than one operation or the patch operation is different than replace. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_85,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Use only one operation in the patch payload which is replace operation - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_86,448502B5D06CD5BCAA58F569AA43AA2E0394A794," The given payload is missing required fields: FIELD or the values of the fields are corrupted. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_87,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -There is no possibility to process action which is related to access to the underlying data set. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_88,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -The access to the data set is not properly defined. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_89,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Correct the access definition for the data set. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_90,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Provided evaluation method: METHOD is not supported. Supported values: VALUE. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_91,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -There is no possibility to create learning configuration. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_92,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -The wrong evaluation method was used to create learning configuration. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_93,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Use supported evaluation method which is one of: regression, binary, multiclass. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_94,448502B5D06CD5BCAA58F569AA43AA2E0394A794," There can be only one active evaluation per model. Request could not be completed because of existing active evaluation: {{url}} - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_95,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -There is no possibility to create another learning iteration - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_96,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -There can be only one running evaluation for the model. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_97,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -See the already running evaluation or wait till it ends and start the new one. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_98,448502B5D06CD5BCAA58F569AA43AA2E0394A794," The deployment type {{type}} is not supported. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_99,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -There is no possibility to create deployment. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_100,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -Not supported deployment type was used. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_101,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Supported deployment type should be used. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_102,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Incorrect input: ({{message}}) - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_103,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -The REST API cannot be invoked successfully. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_104,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -There is an issue with parsing json. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_105,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Ensure that the correct json is passed in the request. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_106,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Insufficient data - metric {{name}} could not be calculated - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_107,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -Learning iteration has failed - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_108,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -Value for metric with defined threshold could not be calculated because of insufficient feedback data - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_109,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Review and improve data in data source feedback_data_ref in learning configuration - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_110,448502B5D06CD5BCAA58F569AA43AA2E0394A794," For type {{type}} spark instance must be provided in X-Spark-Service-Instance header - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_111,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -Deployment cannot be created - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_112,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -batch and streaming deployments require spark instance to be provided - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_113,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Provide spark instance in X-Spark-Service-Instance header - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_114,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Action {{action}} has failed with message {{message}} - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_115,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -The REST API cannot be invoked successfully. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_116,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -There was an issue with invoking underlying service. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_117,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -If there is suggestion how to fix the issue than follow it. Contact the support team if there is no suggestion in the message or the suggestion does not solve the issue. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_118,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Path {{path}} is not allowed. Only allowed path for patch stream is /status - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_119,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -There is no possibility to patch the stream deployment. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_120,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -The wrong path was used to patch the stream deployment. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_121,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Patch the stream deployment with supported path option which is /status (it allows to start/stop stream processing. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_122,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Patch operation is not allowed for instance of type {{$type}} - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_123,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -There is no possibility to patch the deployment. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_124,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -The wrong deployment type is being patched. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_125,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Patch the stream deployment type. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_126,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Data connection {{data}} is invalid for feedback_data_ref - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_127,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -There is no possibility to create learning configuration for the model. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_128,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -Not supported data source was used when defining feedback_data_ref. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_129,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Use only supported data source type which is dashdb - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_130,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Path {{path}} is not allowed. Only allowed path for patch model is /deployed_version/url or /deployed_version/href for V2 - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_131,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -There is no option to patch model. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_132,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -The wrong path was used during patching of the model. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_133,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Patch model with supported path which allows to update the version of deployed model. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_134,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Parsing failure: {{msg}} - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_135,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -The REST API cannot be invoked successfully. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_136,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -The requested payload could not be parsed successfully. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_137,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Ensure that your request payload is correct and can be parsed correctly. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_138,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Runtime environment for selected model: {{env}} is not supported for learning configuration. Supported environments: [{{supported_envs}}]. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_139,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -There is no option to create learning configuration - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_140,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -The model for which the learning_configuration was tried to be created is not supported. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_141,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Create learning configuration for model which has supported runtime. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_142,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Current plan \'{{plan}}\' only allows {{limit}} deployments - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_143,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -There is no possibility to create deployment. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_144,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -The limit for number of deployments was reached for the current plan. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_145,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Upgrade to the plan which does not have such limitation. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_146,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Database connection definition is not valid ({{code}}) - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_147,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -There is no possibility utilize the learning configuration functionality. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_148,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -Database connection definition is not valid. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_149,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Try to fix the issue which is described by code returned by underlying database. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_150,448502B5D06CD5BCAA58F569AA43AA2E0394A794," There were problems while connecting underlying {{system}} - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_151,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -The REST API cannot be invoked successfully. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_152,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -There was an issue during connection to the underlying system. It might be temporary network issue. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_153,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Try to invoke desired operation again. If it occurs more times than contact support team. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_154,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -There is no possibility to invoke REST API which requires Spark credentials - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_155,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -There is an issue with base-64 decoding or parsing Spark credentials. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_156,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Ensure that the correct Spark credentials were correctly base-64 encoded. For more information, see the documentation. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_157,448502B5D06CD5BCAA58F569AA43AA2E0394A794," This functionality is forbidden for non beta users. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_158,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -The desired REST API cannot be invoked successfully. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_159,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -REST API which was invoked is currently in beta. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_160,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -If you are interested in participating, add yourself to the wait list. The details can be found in documentation. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_161,448502B5D06CD5BCAA58F569AA43AA2E0394A794," {{code}} {{message}} - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_162,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -The REST API cannot be invoked successfully. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_163,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -There was an issue with invoking underlying service. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_164,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -If there is suggestion how to fix the issue then follow it. Contact the support team if there is no suggestion in the message or the suggestion does not solve the issue. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_165,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Rate limit exceeded. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_166,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -Rate limit exceeded. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_167,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -Rate limit for current plan has been exceeded. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_168,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -To solve this problem, acquire another plan with a greater rate limit - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_169,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Invalid query parameter {{paramName}} value: {{value}} - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_170,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -Validation error as passed incorrect value for query parameter. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_171,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -Error in getting result for query. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_172,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Correct query parameter value. The details can be found in documentation. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_173,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Invalid token type: {{type}} - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_174,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -Error regarding token type. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_175,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -Error in authorization. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_176,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Token should be started with Bearer prefix - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_177,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Invalid token format. Bearer token format should be used. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_178,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -Error regarding token format. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_179,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -Error in authorization. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_180,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Token should be bearer token and should start with Bearer prefix - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_181,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Input JSON file is missing or invalid: 400 - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_182,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -The following message displays when you try to score online: Input JSON file is missing or invalid. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_183,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -This message displays when the scoring input payload doesn't match the expected input type that is required for scoring the model. Specifically, the following reasons may apply: - - - -* The input payload is empty. -* The input payload schema is not valid. -* The input datatypes does not match the expected datatypes. - - - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_184,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Correct the input payload. Make sure that the payload has correct syntax, a valid schema, and proper data types. After you make corrections, try to score online again. For syntax issues, verify the JSON file by using the jsonlint command. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_185,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Authorization token has expired: 401 - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_186,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -The following message displays when you try to score online: Authorization failed. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_187,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -This message displays when the token that is used for scoring has expired. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_188,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Re-generate the token for this IBM Watson Machine Learning instance and then retry. If you still see this issue contact IBM Support. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_189,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Unknown deployment identification:404 - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_190,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -The following message displays when you try to score online Unknown deployment identification. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_191,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -This message displays when the deployment ID that is used for scoring does not exists. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_192,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Make sure you are providing the correct deployment ID. If not, deploy the model with the deployment ID and then try scoring it again. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_193,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Internal server error:500 - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_194,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -The following message displays when you try to score online: Internal server error - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_195,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -This message displays if the downstream data flow on which the online scoring depends fails. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_196,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -After waiting for a period of time, try to score online again. If it fails again then contact IBM Support. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_197,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Invalid type for ml_artifact: Pipeline - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_198,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -The following message displays when you try to publish Spark model using Common API client library on your workstation. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_199,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -This message displays if you have invalid pyspark setup in operating system. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_200,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -Set up system environment paths according to the instruction: - -SPARK_HOME={installed_spark_path} -JAVA_HOME={installed_java_path} -PYTHONPATH=$SPARK_HOME/python/ - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_201,448502B5D06CD5BCAA58F569AA43AA2E0394A794," ValueError: Training_data_ref name and connection cannot be None, if Pipeline Artifact is not given. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_202,448502B5D06CD5BCAA58F569AA43AA2E0394A794," What's happening - -The training data set is missing or has not been properly referenced. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_203,448502B5D06CD5BCAA58F569AA43AA2E0394A794," Why it's happening - -The Pipeline Artifact is a training data set in this instance. - -" -448502B5D06CD5BCAA58F569AA43AA2E0394A794_204,448502B5D06CD5BCAA58F569AA43AA2E0394A794," How to fix it - -When persisting a Spark PipelineModel you MUST supply a training data set, if you don't the client says it doesn't support PipelineModels, rather than saying a PipelineModel must be accompanied by the training set. -" -4A7F60F563F15CC32060C5F17CB44699A221AD5E_0,4A7F60F563F15CC32060C5F17CB44699A221AD5E," IBM Cloud services status - -If you're having a problem with one of your services, go to the IBM Cloud Status page. The Status page shows unplanned incidents, planned maintenance, announcements, and security bulletin notifications about key events that affect the IBM Cloud platform, infrastructure, and major services. - -You can find the Status page by logging in to the IBM Cloud console. Click Support from the menu bar, and then click View cloud status from the Support Center. Or, you can access the page directly at [IBM Cloud - Status](https://cloud.ibm.com/status?type=incident&component=ibm-cloud-platform&selected=status). Search for the service to view its status. - -" -4A7F60F563F15CC32060C5F17CB44699A221AD5E_1,4A7F60F563F15CC32060C5F17CB44699A221AD5E," Learn more - -[Viewing cloud status](https://cloud.ibm.com/docs/get-support?topic=get-support-viewing-cloud-status) -" -5A6081124D93ACD0A12843F64984257A02BB3871_0,5A6081124D93ACD0A12843F64984257A02BB3871," Troubleshooting connections - -Use these solutions to resolve problems that you might encounter with connections. - -" -5A6081124D93ACD0A12843F64984257A02BB3871_1,5A6081124D93ACD0A12843F64984257A02BB3871," IBM Db2 for z/OS: Error retrieving the schema list when you try to connect to a Db2 for z/OS server - -When you test the connection to a Db2 for z/OS server and the connection cannot retrieve the schema list, you might receive the following error: - -CDICC7002E: The assets request failed: CDICO2064E: The metadata for the column TABLE_SCHEM could not -be obtained: Sql error: [jcc] [10300] Invalid parameter: Unknown column name -TABLE_SCHEM. ERRORCODE=-4460, SQLSTATE=null - -Workaround: On the Db2 for z/OS server, set the DESCSTAT subsystem parameter to No. For more information, see [DESCRIBE FOR STATIC field (DESCSTAT subsystem parameter)](https://www.ibm.com/docs/SSEPEK_13.0.0/inst/src/tpc/db2z_ipf_descstat.html). - -Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) -" -5D1BCA52E974C3F4DE54366A242DF751E73ACBD2_0,5D1BCA52E974C3F4DE54366A242DF751E73ACBD2," Troubleshooting Cloud Object Storage for projects - -Use these solutions to resolve issues you might experience when using Cloud Object Storage with projects in IBM watsonx. Many errors that occur when creating projects can be resolved by correctly configuring Cloud Object Storage. For instructions, see [Setting up Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html). - -Possible error messages: - - - -* [Error retrieving Administrator API key token for your Cloud Object Storage instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-cos.html?context=cdpaas&locale=enkey-token) -* [Unable to configure credentials for your project in the selected Cloud Object Storage instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-cos.html?context=cdpaas&locale=encredentials) -* [User login from given IP address is not permitted](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-cos.html?context=cdpaas&locale=enrestricted-ip) -* [Project cannot be created](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-cos.html?context=cdpaas&locale=enproject-failed) - - - -" -5D1BCA52E974C3F4DE54366A242DF751E73ACBD2_1,5D1BCA52E974C3F4DE54366A242DF751E73ACBD2," Cannot retrieve API key - -" -5D1BCA52E974C3F4DE54366A242DF751E73ACBD2_2,5D1BCA52E974C3F4DE54366A242DF751E73ACBD2," Symptoms - -When you create a project, the following error occurs: - -Error retrieving Administrator API key token for your Cloud Object Storage instance - -" -5D1BCA52E974C3F4DE54366A242DF751E73ACBD2_3,5D1BCA52E974C3F4DE54366A242DF751E73ACBD2," Possible Causes - - - -* You have not been assigned the Editor role in the IBM Cloud account. - - - -" -5D1BCA52E974C3F4DE54366A242DF751E73ACBD2_4,5D1BCA52E974C3F4DE54366A242DF751E73ACBD2," Possible Resolutions - -The account administrator must complete the following tasks: - - - -* Invite users to the IBM Cloud account and assign the Editor role. See [Add non-administrative users to your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-addl-users.htmlusers). - - - -" -5D1BCA52E974C3F4DE54366A242DF751E73ACBD2_5,5D1BCA52E974C3F4DE54366A242DF751E73ACBD2," Unable to configure credentials - -" -5D1BCA52E974C3F4DE54366A242DF751E73ACBD2_6,5D1BCA52E974C3F4DE54366A242DF751E73ACBD2," Symptoms - -When you create a project and associate it to a Cloud Object Storage instance, the following error occurs: - -Unable to configure credentials for your project in the selected Cloud Object Storage instance. - -" -5D1BCA52E974C3F4DE54366A242DF751E73ACBD2_7,5D1BCA52E974C3F4DE54366A242DF751E73ACBD2," Possible Causes - - - -* You have exceeded the access policy limit for the account. -* For a Lite account, you have exceeded the 25 GB limit for the Cloud Object Storage instance. - - - -" -5D1BCA52E974C3F4DE54366A242DF751E73ACBD2_8,5D1BCA52E974C3F4DE54366A242DF751E73ACBD2," Possible Resolutions - -" -5D1BCA52E974C3F4DE54366A242DF751E73ACBD2_9,5D1BCA52E974C3F4DE54366A242DF751E73ACBD2,"For exceeding access policies: - - - -1. Verify that you are the owner of the Cloud Object Storage instance or that the owner has granted you Administrator and Manager roles for this service instance. Otherwise, ask your IBM Cloud administrator to fix this problem. -2. Check the total number of access policies to determine whether you have reached a limit. See [IBM Cloud IAM limits](https://cloud.ibm.com/docs/account?topic=account-known-issuesiam_limits) for the limit information. -3. Delete at least 4 or more unused access policies for the service ID. - - - -See [Reducing time and effort managing access](https://cloud.ibm.com/docs/account?topic=account-account_setuplimit-policies) for strategies that you can use to ensure that you don't reach the limit. - -For exceeding 25 GB limit for a Lite account: - -For a Lite account, you have exceeded the 25 GB limit for the Cloud Object Storage instance. Possible resolutions are to upgrade to a billable account, delete stored assets for the current account, or wait until the first of the month when the limit resets. See [Set up a billable account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-account.htmlpaid-account). - -" -5D1BCA52E974C3F4DE54366A242DF751E73ACBD2_10,5D1BCA52E974C3F4DE54366A242DF751E73ACBD2," Login not permitted from IP address - -" -5D1BCA52E974C3F4DE54366A242DF751E73ACBD2_11,5D1BCA52E974C3F4DE54366A242DF751E73ACBD2," Symptoms - -When you create or work with a project, the following error occurs: - -User login from given IP address is not permitted. The user has configured IP address restriction for login. The given IP address 'XX.XXX.XXX.XX' is not contained in the list of allowed IP addresses. - -" -5D1BCA52E974C3F4DE54366A242DF751E73ACBD2_12,5D1BCA52E974C3F4DE54366A242DF751E73ACBD2," Possible Causes - -Restrict IP address access has been configured to allow specific IP addresses access to Watson Studio. The IP address of the computer you are using is not allowed. - -" -5D1BCA52E974C3F4DE54366A242DF751E73ACBD2_13,5D1BCA52E974C3F4DE54366A242DF751E73ACBD2," Possible Resolutions - -Add the IP address to the allowed IP addresses, if your security qualifications allow it. See [Allow specific IP addresses](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-network.htmlallow-specific-ip-addresses). - -" -5D1BCA52E974C3F4DE54366A242DF751E73ACBD2_14,5D1BCA52E974C3F4DE54366A242DF751E73ACBD2," Project cannot be created - -" -5D1BCA52E974C3F4DE54366A242DF751E73ACBD2_15,5D1BCA52E974C3F4DE54366A242DF751E73ACBD2," Symptoms - -When you create a project, the following error occurs: - -Project cannot be created. - -" -5D1BCA52E974C3F4DE54366A242DF751E73ACBD2_16,5D1BCA52E974C3F4DE54366A242DF751E73ACBD2," Possible Causes - -The Cloud Object Storage instance is not available, due to the Global location is not enabled for your services. Cloud Object Storage requires the Global location. - -" -5D1BCA52E974C3F4DE54366A242DF751E73ACBD2_17,5D1BCA52E974C3F4DE54366A242DF751E73ACBD2," Possible Resolutions - -Enable the Global location in your account profile. From your account, click your avatar and select Profile and settings to open your IBM watsonx profile. Under Service Filters > Locations, check the Global location as well as other locations where services are present. See [Manage your profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.htmlprofile). -" -3E24051D290E000441A4FDB326D73BB81505BD05,3E24051D290E000441A4FDB326D73BB81505BD05," Troubleshooting - -If you encounter an issue in IBM watsonx, use the following resources to resolve the problem. - - - -* [View IBM Cloud service status](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/service-status.html) -* [Troubleshoot connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-conn.html) -* [Troubleshoot Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/ts_index.html) -* [Troubleshoot Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ts_sd.html) -* [Troubleshoot IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-cos.html) -* [Troubleshoot Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/ml_troubleshooting.html) -* [Troubleshoot Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html) -* [Troubleshoot Watson Studio on IBM Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wscloud-troubleshoot.html) -" -D0907278CA0EA55B0E0ED9E834810D502A817AF0_0,D0907278CA0EA55B0E0ED9E834810D502A817AF0," Troubleshooting Synthetic Data Generator - -Use this information to resolve questions about using Synthetic Data Generator. - -" -D0907278CA0EA55B0E0ED9E834810D502A817AF0_1,D0907278CA0EA55B0E0ED9E834810D502A817AF0," Typeless columns ignored for an Import node - -When you use an Import node that contains Typeless columns, these columns will be ignored when you use the Mimic node. After pressing the Read Values button, the Typeless columns will be automatically set to Pass and will not be present in the final dataset. - -Suggested workaround: - -Add a new column in the Generate node for the missing column(s). - -" -D0907278CA0EA55B0E0ED9E834810D502A817AF0_2,D0907278CA0EA55B0E0ED9E834810D502A817AF0," Size limit notice - -The Synthetic Data Generator environment can import up to 2.5GB of data. - -Suggested workaround: - -If you receive a related error message or your data fails to import, please reduce the amount of data and try again. - -" -D0907278CA0EA55B0E0ED9E834810D502A817AF0_3,D0907278CA0EA55B0E0ED9E834810D502A817AF0," Internal error occurred: SCAPI error: The value on row 1,029 is not a valid string - -For example, preview of data asset using Import node gives the following error: - -Node: -Import -WDP Connector Error: CDICO9999E: Internal error occurred: SCAPI error: The value on row 1,029 is not a valid string of the Bit data type for the SecurityDelay column. - -This is expected behavior. In this particular case, the 1st 1000 rows were binary, 0's or 1's. The value at row 1,029 was 3. For most flat files, Synthetic Data Generator reads the 1st 1000 records to infer the data type. In this case, Synthetic Data Generator inferred binary values (0 or 1). When Synthetic Data Generator read a value of 3 at row 1,029, it threw an error, as 3 is not a binary value. - -Suggested workarounds: - - - -1. Users can adjust their Infer_record_count parameter to include more data, choosing 2000 rows instead (or more). -2. Users can update the value in the first 1000 rows that is causing the error, if this is an error in the data. - - - -" -D0907278CA0EA55B0E0ED9E834810D502A817AF0_4,D0907278CA0EA55B0E0ED9E834810D502A817AF0," Error Mimic Data set no available input record. - -The Mimic node requires the input dataset to have at least one valid record (a record without any missing values). If your dataset is empty, or if the dataset does not contain at least one valid record, clicking Run selection gives the following error message: - -Node: -Mimic -Mimic Data set no available input record. - -Suggested workarounds: - - - -1. Fix your dataset so that there is at least one record (row) that contains a value for every column and then try again. -2. Click Read values from the Import node and run your flow again. ![Read values](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/images/td-read-values-sd.png) - - - -" -D0907278CA0EA55B0E0ED9E834810D502A817AF0_5,D0907278CA0EA55B0E0ED9E834810D502A817AF0," Error: Incorrect number of fields detected in the server data model. or WDP Connector Execution Error - -Creating a new flow using a .synth file, then doing a migration of the Import node with a newly uploaded file to the project, and then running the flow, gives one or both of the following errors: - -Error: Incorrect number of fields detected in the server data model. - -or - -WDP Connector Execution Error - -This error is caused by using different data sets (data models) for the create flow and for the migration data. - -Suggested workaround: - -Run the Mimic node that creates the Generate node a second time. - -" -D0907278CA0EA55B0E0ED9E834810D502A817AF0_6,D0907278CA0EA55B0E0ED9E834810D502A817AF0," Error: Valid variable does not exist in metadata - -Doing a migration of the Import node and then running the flow fails and gives the error: - -Error: Valid variable does not exist in metadata - -Suggested workaround: - -Make sure that in your Import node you have at least one field that is not Typeless. For example, in the screen capture below, the only field in the Import node is Typeless. At least one field that is not Typeless should be added to the Import node to avoid this error. ![Typeless field in Import node](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/images/td-import-typeless-sd.png) -" -0B35E778B109957EE1CC48FA8E46ED7A1633E380_0,0B35E778B109957EE1CC48FA8E46ED7A1633E380," Troubleshooting Watson OpenScale - -You can use the following techniques to work around problems with IBM Watson OpenScale. - - - -* [When I use AutoAI, why am I getting an error about mismatched data?](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=ents-trouble-common-autoai-binary) -* [Why am I getting errors during model configuration?](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=ents-trouble-common-xgboost-wml-model-details) -* [Why are my class labels missing when I use XGBoost?](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=ents-trouble-common-xgboost-multiclass) -* [Why are the payload analytics not displaying properly?](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=ents-trouble-common-payloadfileformat) -* [Error: An error occurred while computing feature importance](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=ents-trouble-wos-equals-sign-explainability) -* [Why are some of my active debias records missing?](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=ents-trouble-common-payloadlogging-1000k-limit) -" -0B35E778B109957EE1CC48FA8E46ED7A1633E380_1,0B35E778B109957EE1CC48FA8E46ED7A1633E380,"* [Watson OpenScale does not show any available schemas](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=ents-available-schemas) -* [A monitor run fails with an OutOfResources exception error message](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/wos-troubleshoot.html?context=cdpaas&locale=ents-resources-exception) - - - -" -0B35E778B109957EE1CC48FA8E46ED7A1633E380_2,0B35E778B109957EE1CC48FA8E46ED7A1633E380," When I use AutoAI, why am I getting an error about mismatched data? - -You receive an error message about mismatched data when using AutoAI for binary classification. Note that AutoAI is only supported in IBM Watson OpenScale for IBM Cloud Pak for Data. - -For binary classification type, AutoAI automatically sets the data type of the prediction column to boolean. - -To fix this, implement one of the following solutions: - - - -* Change the label column values in the training data to integer values, such as 0 or 1 depending on the outcome. -* Change the label column values in the training data to string value, such as A and B. - - - -" -0B35E778B109957EE1CC48FA8E46ED7A1633E380_3,0B35E778B109957EE1CC48FA8E46ED7A1633E380," Why am I getting errors during model configuration? - -The following error messages appear when you are configuring model details: Field feature_fields references column , which is missing in input_schema of the model. Feature not found in input schema. - -The preceding messages while completing the Model details section during configuration indicate a mismatch between the model input schema and the model training data schema: - -To fix the issue, you must determine which of the following conditions is causing the error and take corrective action: If you use IBM Watson Machine Learning as your machine learning provider and the model type is XGBoost/scikit-learn refer to the Machine Learning [Python SDK documentation](https://ibm.github.io/watson-machine-learning-sdk/repository) for important information about how to store the model. To generate the drift detection model, you must use scikit-learn version 0.20.2 in notebooks. For all other cases, you must ensure that the training data column names match with the input schema column names. - -" -0B35E778B109957EE1CC48FA8E46ED7A1633E380_4,0B35E778B109957EE1CC48FA8E46ED7A1633E380," Why are my class labels missing when I use XGBoost? - -Native XGBoost multiclass classification does not return class labels. - -By default, for binary and multiple class models, the XGBoost framework does not return class labels. - -For XGBoost binary and multiple class models, you must update the model to return class labels. - -" -0B35E778B109957EE1CC48FA8E46ED7A1633E380_5,0B35E778B109957EE1CC48FA8E46ED7A1633E380," Why are the payload analytics not displaying properly? - -Payload analytics does not display properly and the following error message displays: AIQDT0044E Forbidden character "" in column name - -For proper processing of payload analytics, Watson OpenScale does not support column names with double quotation marks ("") in the payload. This affects both scoring payload and feedback data in CSV and JSON formats. - -Remove double quotation marks ("") from the column names of the payload file. - -" -0B35E778B109957EE1CC48FA8E46ED7A1633E380_6,0B35E778B109957EE1CC48FA8E46ED7A1633E380," Error: An error occurred while computing feature importance - -You receive the following error message during processing: Error: An error occurred while computing feature importance. - -Having an equals sign (=) in the column name of a dataset causes an issue with explainability. - -Remove the equals sign (=) from the column name and send the dataset through processing again. - -" -0B35E778B109957EE1CC48FA8E46ED7A1633E380_7,0B35E778B109957EE1CC48FA8E46ED7A1633E380," Why are some of my active debias records missing? - -Active debias records do not reach the payload logging table. - -When you use the active debias API, there is a limit of 1000 records that can be sent at one time for payload logging. - -To avoid loss of data, you must use the active debias API to score in chunks of 1000 records or fewer. - -For more information, see [Reviewing debiased transactions](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-insight-timechart.html). - -" -0B35E778B109957EE1CC48FA8E46ED7A1633E380_8,0B35E778B109957EE1CC48FA8E46ED7A1633E380," Watson OpenScale does not show any available schemas - -When a user attempts to retrieve schema information for Watson OpenScale, none are available. After attempting directly in DB2, without reference to Watson OpenScale, checking what schemas are available for the database userid also returns none. - -Insufficient permissions for the database userid is causing database connection issues for Watson OpenScale. - -Make sure the database user has the correct permissions needed for Watson OpenScale. - -" -0B35E778B109957EE1CC48FA8E46ED7A1633E380_9,0B35E778B109957EE1CC48FA8E46ED7A1633E380," A monitor run fails with an OutOfResources exception error message - -You receive an OutOfResources exception error message. - -Although there's no longer a limit on the number of rows you can have in the feedback payload, scoring payload, or business payload tables. The 50,000 limit now applies to the number of records you can run through the quality and bias monitors each billing period. - -After you reach your limit, you must either upgrade to a Standard plan or wait for the next billing period. - -" -0B35E778B109957EE1CC48FA8E46ED7A1633E380_10,0B35E778B109957EE1CC48FA8E46ED7A1633E380," Missing deployments - -A deployed model does not show up as a deployment that can be selected to create a subscription. - -There are different reasons that a deployment does not show up in the list of available deployed models. If the model is not a supported type of model because it uses an unsupported algorithm or framework, it won't appear. Your machine learning provider might not be configured properly. It could also be that there are issues with permissions. - -Use the following steps to resolve this issue: - - - -1. Check that the model is a supported type. Not sure? For more information, see [Supported machine learning engines, frameworks, and models](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-frameworks-ovr.html). -2. Check that a machine learning provider exists in the Watson OpenScale configuration for the specific deployment space. For more information, see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html). -3. Check that the CP4D admin user has permission to access the deployment space. - - - -" -0B35E778B109957EE1CC48FA8E46ED7A1633E380_11,0B35E778B109957EE1CC48FA8E46ED7A1633E380," Watson OpenScale evaluation might fail due to large number of subscriptions - -If a Watson OpenScale instance contains too many subscriptions, such as 100 subscriptions, your quality evaluations might fail. You can view the details of the failure in the log for the data mart service pod that displays the following error message: - -""Failure converting response to expected model EntityStreamSizeException: actual entity size (Some(8644836)) exceeded content length limit (8388608 bytes)! You can configure this by setting akka.http.[server|client].parsing.max-content-length or calling HttpEntity.withSizeLimit before materializing the dataBytes stream"". - -You can use the oc get pod -l component=aios-datamart command to find the name of the pod. You can also use the oc logs command to the log for the pod. - -To fix this error, you can use the following command to increase the maximum request body size by editing the ""ADDITIONAL_JVM_OPTIONS"" environment variable: - -oc patch woservice -p '{""spec"": {""datamart"": {""additional_jvm_options"":""-Dakka.http.client.parsing.max-content-length=100m""} }}' --type=merge - -The release name is ""aiopenscale"" if you don't customize the release name when you install Watson OpenScale. - -" -0B35E778B109957EE1CC48FA8E46ED7A1633E380_12,0B35E778B109957EE1CC48FA8E46ED7A1633E380," Microsoft Azure ML Studio - - - -* Of the two types of Azure Machine Learning web services, only the New type is supported by Watson OpenScale. The Classic type is not supported. -* Default input name must be used: In the Azure web service, the default input name is ""input1"". Currently, this field is mandated for Watson OpenScale and, if it is missing, Watson OpenScale will not work. - -If your Azure web service does not use the default name, change the input field name to ""input1"", then redeploy your web service and reconfigure your OpenScale machine learning provider settings. -* If calls to Microsoft Azure ML Studio to list the machine learning models causes the response to time out, for example when you have many web services, you must increase timeout values. You may need to work around this issue by changing the /etc/haproxy/haproxy.cfg configuration setting: - - - -* Log in to the load balancer node and update /etc/haproxy/haproxy.cfg to set the client and server timeout from 1m to 5m: - -timeout client 5m -timeout server 5m -* Run systemctl restart haproxy to restart the HAProxy load balancer. - - - - - -If you are using a different load balancer, other than HAProxy, you may need to adjust timeout values in a similar fashion. - - - -* Of the two types of Azure Machine Learning web services, only the New type is supported by Watson OpenScale. The Classic type is not supported. - - - -" -0B35E778B109957EE1CC48FA8E46ED7A1633E380_13,0B35E778B109957EE1CC48FA8E46ED7A1633E380," Uploading feedback data fails in production subscription after importing settings - -After importing the settings from your pre-production space to your production space you might have problems uploading feedback data. This happens when the datatypes do not match precisely. When you import settings, the feedback table references the payload table for its column types. You can avoid this issue by making sure that the payload data has the most precise value type first. For example, you must prioritize a double datatype over an integer datatype. - -" -0B35E778B109957EE1CC48FA8E46ED7A1633E380_14,0B35E778B109957EE1CC48FA8E46ED7A1633E380," Microsoft Azure Machine Learning Service - -When performing model evaluation, you may encounter issues where Watson OpenScale is not able to communicate with Azure Machine Learning Service, when it needs to invoke deployment scoring endpoints. Security tools that enforce your enterprise security policies, such as Symantec Blue Coat may prevent such access. - -" -0B35E778B109957EE1CC48FA8E46ED7A1633E380_15,0B35E778B109957EE1CC48FA8E46ED7A1633E380," Watson OpenScale fails to create a new Hive table for the batch deployment subscription - -When you choose to create a new Apache Hive table with the Parquet format during your Watson OpenScale batch deployment configuration, the following error might occur: - -Attribute name ""table name"" contains invalid character(s) among "" ,;{}()\n\t="". Please use alias to rename it.; - -This error occurs if Watson OpenScale fails to run the CREATE TABLE SQL operation due to white space in a column name. To avoid this error, you can remove any white space from your column names or change the Apache Hive format to csv. - -" -0B35E778B109957EE1CC48FA8E46ED7A1633E380_16,0B35E778B109957EE1CC48FA8E46ED7A1633E380," Watson OpenScale setup might fail with default Db2 database - -When you set up Watson OpenScale and specify the default Db2 database, the setup might fail to complete. - -To fix this issue, you must run the following command in Cloud Pak for Data to update Db2: - -db2 update db cfg using DFT_EXTENT_SZ 32 - -After you run the command, you must create a new Db2 database to set up Watson OpenScale. - -Parent topic:[Troubleshooting](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot.html) -" -93A3A5E1A633EB2AB616759DFB76DC433ABD4D38_0,93A3A5E1A633EB2AB616759DFB76DC433ABD4D38," Troubleshooting Watson Studio on IBM Cloud - -You can use the following techniques to work around problems you might encounter with Watson Studio on IBM Cloud. - -" -93A3A5E1A633EB2AB616759DFB76DC433ABD4D38_1,93A3A5E1A633EB2AB616759DFB76DC433ABD4D38," Project limit exceeded - -" -93A3A5E1A633EB2AB616759DFB76DC433ABD4D38_2,93A3A5E1A633EB2AB616759DFB76DC433ABD4D38," Symptoms - -When you create a project, the following error occurs: - -The number of projects created by the authenticated user exceeds the designated limit. - -" -93A3A5E1A633EB2AB616759DFB76DC433ABD4D38_3,93A3A5E1A633EB2AB616759DFB76DC433ABD4D38," Possible Causes - -The number of projects an authenticated user can create per data center (region) is 100. The limit applies only to projects that a user creates. Projects for which the user is listed as a collaborator are not included in this limit. - -" -93A3A5E1A633EB2AB616759DFB76DC433ABD4D38_4,93A3A5E1A633EB2AB616759DFB76DC433ABD4D38," Possible Resolutions - -Although most customers do not reach this limit, possible resolutions include: - - - -* Delete projects. -* Any authenticated user can request a project limit increase by contacting [IBM Cloud Support](https://www.ibm.com/cloud/support), provided that an adequate justification is specified. - - - -" -93A3A5E1A633EB2AB616759DFB76DC433ABD4D38_5,93A3A5E1A633EB2AB616759DFB76DC433ABD4D38," Blank screen when loading - -" -93A3A5E1A633EB2AB616759DFB76DC433ABD4D38_6,93A3A5E1A633EB2AB616759DFB76DC433ABD4D38," Symptoms - -A blank screen appears when you open Watson Studio. - -" -93A3A5E1A633EB2AB616759DFB76DC433ABD4D38_7,93A3A5E1A633EB2AB616759DFB76DC433ABD4D38," Possible Causes - -A cached version is loading. - -" -93A3A5E1A633EB2AB616759DFB76DC433ABD4D38_8,93A3A5E1A633EB2AB616759DFB76DC433ABD4D38," Possible Resolutions - - - -1. Clear the browser cache and cookies and re-open Watson Studio. -" -F7B2DD759B6FC618D53AD49053C24EF8D35105C5_0,F7B2DD759B6FC618D53AD49053C24EF8D35105C5," Deploying and managing assets - -Use Watson Machine Learning to deploy models and solutions so that you can put them into productive use, then monitor the deployed assets for fairness and explainability. You can also automate the AI lifecycle to keep your deployed assets current. - -" -F7B2DD759B6FC618D53AD49053C24EF8D35105C5_1,F7B2DD759B6FC618D53AD49053C24EF8D35105C5," Completing the AI lifecycle - -After you prepare your data and build then train models or solutions, you complete the AI lifecycle by deploying and monitoring your assets. - -![Overview of model workflow](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ml-engineer-overview-wx.svg) - -Deployment is the final stage of the lifecycle of a model or script, where you run your models and code. Watson Machine Learning provides the tools that you need to deploy an asset, such as a predictive model, Python function. You can also deploy foundation model assets, such as prompt templates, to put them into production. - -Following deployment, you can use model management tools to evaluate your models. IBM Watson OpenScale tracks and measures outcomes from your AI models, and helps ensure they remain fair, explainable, and compliant. Watson OpenScale also detects and helps correct the drift in accuracy when an AI model is in production. - -Finally, you can use IBM Watson Pipelines to manage your ModelOps processes. Create a pipeline that automates parts of the AI lifecycle, such as training and deploying a machine learning model. - -" -F7B2DD759B6FC618D53AD49053C24EF8D35105C5_2,F7B2DD759B6FC618D53AD49053C24EF8D35105C5," Next steps - - - -* To learn more about how to manage assets in a deployment space, see [Manage assets in a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html). -" -F003581774D3028EF53E61A002C20A6D36BA8E00_0,F003581774D3028EF53E61A002C20A6D36BA8E00," Glossary - -This glossary provides terms and definitions for watsonx.ai and watsonx.governance. - -The following cross-references are used in this glossary: - - - -* See refers you from a nonpreferred term to the preferred term or from an abbreviation to the spelled-out form. -* See also refers you to a related or contrasting term. - - - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_1,F003581774D3028EF53E61A002C20A6D36BA8E00,[A](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossa)[B](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossb)[C](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossc)[D](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossd)[E](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englosse)[F](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossf)[G](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossg)[H](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossh)[I](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossi)[J](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englos -F003581774D3028EF53E61A002C20A6D36BA8E00_2,F003581774D3028EF53E61A002C20A6D36BA8E00,sj)[K](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossk)[L](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossl)[M](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossm)[N](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossn)[O](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englosso)[P](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossp)[R](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossr)[S](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englosss)[T](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englosst)[U](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale -F003581774D3028EF53E61A002C20A6D36BA8E00_3,F003581774D3028EF53E61A002C20A6D36BA8E00,=englossu)[V](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossv)[W](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossw)[Z](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=englossz) -F003581774D3028EF53E61A002C20A6D36BA8E00_4,F003581774D3028EF53E61A002C20A6D36BA8E00," A - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_5,F003581774D3028EF53E61A002C20A6D36BA8E00," accelerator - -In high-performance computing, a specialized circuit that is used to take some of the computational load from the CPU, increasing the efficiency of the system. For example, in deep learning, GPU-accelerated computing is often employed to offload part of the compute workload to a GPU while the main application runs off the CPU. See also [graphics processing unit](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx8987320). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_6,F003581774D3028EF53E61A002C20A6D36BA8E00," accountability - -The expectation that organizations or individuals will ensure the proper functioning, throughout their lifecycle, of the AI systems that they design, develop, operate or deploy, in accordance with their roles and applicable regulatory frameworks. This includes determining who is responsible for an AI mistake which may require legal experts to determine liability on a case-by-case basis. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_7,F003581774D3028EF53E61A002C20A6D36BA8E00," activation function - -A function defining a neural unit's output given a set of incoming activations from other neurons - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_8,F003581774D3028EF53E61A002C20A6D36BA8E00," active learning - -A model for machine learning in which the system requests more labeled data only when it needs it. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_9,F003581774D3028EF53E61A002C20A6D36BA8E00," active metadata - -Metadata that is automatically updated based on analysis by machine learning processes. For example, profiling and data quality analysis automatically update metadata for data assets. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_10,F003581774D3028EF53E61A002C20A6D36BA8E00," active runtime - -An instance of an environment that is running to provide compute resources to analytical assets. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_11,F003581774D3028EF53E61A002C20A6D36BA8E00," agent - -An algorithm or a program that interacts with an environment to learn optimal actions or decisions, typically using reinforcement learning, to achieve a specific goal. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_12,F003581774D3028EF53E61A002C20A6D36BA8E00," AI - -See [artificial intelligence](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx3448902). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_13,F003581774D3028EF53E61A002C20A6D36BA8E00," AI accelerator - -Specialized silicon hardware designed to efficiently execute AI-related tasks like deep learning, machine learning, and neural networks for faster, energy-efficient computing. It can be a dedicated unit in a core, a separate chiplet on a multi-module chip or a separate card. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_14,F003581774D3028EF53E61A002C20A6D36BA8E00," AI ethics - -A multidisciplinary field that studies how to optimize AI's beneficial impact while reducing risks and adverse outcomes. Examples of AI ethics issues are data responsibility and privacy, fairness, explainability, robustness, transparency, environmental sustainability, inclusion, moral agency, value alignment, accountability, trust, and technology misuse. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_15,F003581774D3028EF53E61A002C20A6D36BA8E00," AI governance - -An organization's act of governing, through its corporate instructions, staff, processes and systems to direct, evaluate, monitor, and take corrective action throughout the AI lifecycle, to provide assurance that the AI system is operating as the organization intends, as its stakeholders expect, and as required by relevant regulation. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_16,F003581774D3028EF53E61A002C20A6D36BA8E00," AI safety - -The field of research aiming to ensure artificial intelligence systems operate in a manner that is beneficial to humanity and don't inadvertently cause harm, addressing issues like reliability, fairness, transparency, and alignment of AI systems with human values. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_17,F003581774D3028EF53E61A002C20A6D36BA8E00," AI system - -See [artificial intelligence system](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10065431). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_18,F003581774D3028EF53E61A002C20A6D36BA8E00," algorithm - -A formula applied to data to determine optimal ways to solve analytical problems. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_19,F003581774D3028EF53E61A002C20A6D36BA8E00," analytics - -The science of studying data in order to find meaningful patterns in the data and draw conclusions based on those patterns. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_20,F003581774D3028EF53E61A002C20A6D36BA8E00," appropriate trust - -In an AI system, an amount of trust that is calibrated to its accuracy, reliability, and credibility. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_21,F003581774D3028EF53E61A002C20A6D36BA8E00," artificial intelligence (AI) - -The capability to acquire, process, create and apply knowledge in the form of a model to make predictions, recommendations or decisions. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_22,F003581774D3028EF53E61A002C20A6D36BA8E00," artificial intelligence system (AI system) - -A system that can make predictions, recommendations or decisions that influence physical or virtual environments, and whose outputs or behaviors are not necessarily pre-determined by its developer or user. AI systems are typically trained with large quantities of structured or unstructured data, and might be designed to operate with varying levels of autonomy or none, to achieve human-defined objectives. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_23,F003581774D3028EF53E61A002C20A6D36BA8E00," asset - -An item that contains information about data, other valuable information, or code that works with data. See also [data asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx6094928). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_24,F003581774D3028EF53E61A002C20A6D36BA8E00," attention mechanism - -A mechanism in deep learning models that determines which parts of the input a model focuses on when producing output. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_25,F003581774D3028EF53E61A002C20A6D36BA8E00," AutoAI experiment - -An automated training process that considers a series of training definitions and parameters to create a set of ranked pipelines as model candidates. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_26,F003581774D3028EF53E61A002C20A6D36BA8E00," B - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_27,F003581774D3028EF53E61A002C20A6D36BA8E00," batch deployment - -A method to deploy models that processes input data from a file, data connection, or connected data in a storage bucket, then writes the output to a selected destination. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_28,F003581774D3028EF53E61A002C20A6D36BA8E00," bias - -Systematic error in an AI system that has been designed, intentionally or not, in a way that may generate unfair decisions. Bias can be present both in the AI system and in the data used to train and test it. AI bias can emerge in an AI system as a result of cultural expectations; technical limitations; or unanticipated deployment contexts. See also [fairness](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx3565572). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_29,F003581774D3028EF53E61A002C20A6D36BA8E00," bias detection - -The process of calculating fairness to metrics to detect when AI models are delivering unfair outcomes based on certain attributes. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_30,F003581774D3028EF53E61A002C20A6D36BA8E00," bias mitigation - -Reducing biases in AI models by curating training data and applying fairness techniques. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_31,F003581774D3028EF53E61A002C20A6D36BA8E00," binary classification - -A classification model with two classes. Predictions are a binary choice of one of the two classes. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_32,F003581774D3028EF53E61A002C20A6D36BA8E00," C - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_33,F003581774D3028EF53E61A002C20A6D36BA8E00," classification model - -A predictive model that predicts data in distinct categories. Classifications can be binary, with two classes of data, or multi-class when there are more than 2 categories. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_34,F003581774D3028EF53E61A002C20A6D36BA8E00," cleanse - -To ensure that all values in a data set are consistent and correctly recorded. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_35,F003581774D3028EF53E61A002C20A6D36BA8E00," CNN - -See [convolutional neural network](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10297974). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_36,F003581774D3028EF53E61A002C20A6D36BA8E00," computational linguistics - -Interdisciplinary field that explores approaches for computationally modeling natural languages. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_37,F003581774D3028EF53E61A002C20A6D36BA8E00," compute resource - -The hardware and software resources that are defined by an environment template to run assets in tools. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_38,F003581774D3028EF53E61A002C20A6D36BA8E00," confusion matrix - -A performance measurement that determines the accuracy between a model's positive and negative predicted outcomes compared to positive and negative actual outcomes. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_39,F003581774D3028EF53E61A002C20A6D36BA8E00," connected data asset - -A pointer to data that is accessed through a connection to an external data source. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_40,F003581774D3028EF53E61A002C20A6D36BA8E00," connected folder asset - -A pointer to a folder in IBM Cloud Object Storage. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_41,F003581774D3028EF53E61A002C20A6D36BA8E00," connection - -The information required to connect to a database. The actual information that is required varies according to the DBMS and connection method. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_42,F003581774D3028EF53E61A002C20A6D36BA8E00," connection asset - -An asset that contains information that enables connecting to a data source. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_43,F003581774D3028EF53E61A002C20A6D36BA8E00," constraint - - - -* In databases, a relationship between tables. -* In Decision Optimization, a condition that must be satisfied by the solution of a problem. - - - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_44,F003581774D3028EF53E61A002C20A6D36BA8E00," continuous learning - -Automating the tasks of monitoring model performance, retraining with new data, and redeploying to ensure prediction quality. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_45,F003581774D3028EF53E61A002C20A6D36BA8E00," convolutional neural network (CNN) - -A class of neural network commonly used in computer vision tasks that uses convolutional layers to process image data. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_46,F003581774D3028EF53E61A002C20A6D36BA8E00," Core ML deployment - -The process of downloading a deployment in Core ML format for use in iOS apps. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_47,F003581774D3028EF53E61A002C20A6D36BA8E00," corpus - -A collection of source documents that are used to train a machine learning model. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_48,F003581774D3028EF53E61A002C20A6D36BA8E00," cross-validation - -A technique for testing how well a model generalizes in the absence of a hold-out test sample. Cross-validation divides the training data into a number of subsets, and then builds the same number of models, with each subset held out in turn. Each of those models is tested on the holdout sample, and the average accuracy of the models on those holdout samples is used to estimate the accuracy of the model when applied to new data. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_49,F003581774D3028EF53E61A002C20A6D36BA8E00," curate - -To select, collect, preserve, and maintain content relevant to a specific topic. Curation establishes, maintains, and adds value to data; it transforms data into trusted information and knowledge. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_50,F003581774D3028EF53E61A002C20A6D36BA8E00," D - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_51,F003581774D3028EF53E61A002C20A6D36BA8E00," data asset - -An asset that points to data, for example, to an uploaded file. Connections and connected data assets are also considered data assets. See also [asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx2172042). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_52,F003581774D3028EF53E61A002C20A6D36BA8E00," data imputation - -The substitution of missing values in a data set with estimated or explicit values. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_53,F003581774D3028EF53E61A002C20A6D36BA8E00," data lake - -A large-scale data storage repository that stores raw data in any format in a flat architecture. Data lakes hold structured and unstructured data as well as binary data for the purpose of processing and analysis. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_54,F003581774D3028EF53E61A002C20A6D36BA8E00," data lakehouse - -A unified data storage and processing architecture that combines the flexibility of a data lake with the structured querying and performance optimizations of a data warehouse, enabling scalable and efficient data analysis for AI and analytics applications. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_55,F003581774D3028EF53E61A002C20A6D36BA8E00," data mining - -The process of collecting critical business information from a data source, correlating the information, and uncovering associations, patterns, and trends. See also [predictive analytics](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx5067245). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_56,F003581774D3028EF53E61A002C20A6D36BA8E00," Data Refinery flow - -A set of steps that cleanse and shape data to produce a new data asset. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_57,F003581774D3028EF53E61A002C20A6D36BA8E00," data science - -The analysis and visualization of structured and unstructured data to discover insights and knowledge. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_58,F003581774D3028EF53E61A002C20A6D36BA8E00," data set - -A collection of data, usually in the form of rows (records) and columns (fields) and contained in a file or database table. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_59,F003581774D3028EF53E61A002C20A6D36BA8E00," data source - -A repository, queue, or feed for reading data, such as a Db2 database. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_60,F003581774D3028EF53E61A002C20A6D36BA8E00," data table - -A collection of data, usually in the form of rows (records) and columns (fields) and contained in a table. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_61,F003581774D3028EF53E61A002C20A6D36BA8E00," data warehouse - -A large, centralized repository of data collected from various sources that is used for reporting and data analysis. It primarily stores structured and semi-structured data, enabling businesses to make informed decisions. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_62,F003581774D3028EF53E61A002C20A6D36BA8E00," DDL - -See [distributed deep learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx9443383). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_63,F003581774D3028EF53E61A002C20A6D36BA8E00," decision boundary - -A division of data points in a space into distinct groups or classifications. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_64,F003581774D3028EF53E61A002C20A6D36BA8E00," decoder-only model - -A model that generates output text word by word by inference from the input sequence. Decoder-only models are used for tasks such as generating text and answering questions. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_65,F003581774D3028EF53E61A002C20A6D36BA8E00," deep learning - -A computational model that uses multiple layers of interconnected nodes, which are organized into hierarchical layers, to transform input data (first layer) through a series of computations to produce an output (final layer). Deep learning is inspired by the structure and function of the human brain. See also [distributed deep learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx9443383). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_66,F003581774D3028EF53E61A002C20A6D36BA8E00," deep neural network - -A neural network with multiple hidden layers, allowing for more complex representations of the data. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_67,F003581774D3028EF53E61A002C20A6D36BA8E00," deployment - -A model or application package that is available for use. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_68,F003581774D3028EF53E61A002C20A6D36BA8E00," deployment space - -A workspace where models are deployed and deployments are managed. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_69,F003581774D3028EF53E61A002C20A6D36BA8E00," deterministic - -Describes a characteristic of computing systems when their outputs are completely determined by their inputs. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_70,F003581774D3028EF53E61A002C20A6D36BA8E00," DevOps - -A software methodology that integrates application development and IT operations so that teams can deliver code faster to production and iterate continuously based on market feedback. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_71,F003581774D3028EF53E61A002C20A6D36BA8E00," discriminative AI - -A class of algorithm that focuses on finding a boundary that separates different classes in the data. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_72,F003581774D3028EF53E61A002C20A6D36BA8E00," distributed deep learning (DDL) - -An approach to deep learning training that leverages the methods of distributed computing. In a DDL environment, compute workload is distributed between the central processing unit and graphics processing unit. See also [deep learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx9443378). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_73,F003581774D3028EF53E61A002C20A6D36BA8E00," DOcplex - -A Python API for modeling and solving Decision Optimization problems. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_74,F003581774D3028EF53E61A002C20A6D36BA8E00," E - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_75,F003581774D3028EF53E61A002C20A6D36BA8E00," embedding - -A numerical representation of a unit of information, such as a word or a sentence, as a vector of real-valued numbers. Embeddings are learned, low-dimensional representations of higher-dimensional data. See also [encoding](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx2426645), [representation](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx6075962). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_76,F003581774D3028EF53E61A002C20A6D36BA8E00," emergence - -A property of foundation models in which the model exhibits behaviors that were not explicitly trained. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_77,F003581774D3028EF53E61A002C20A6D36BA8E00," emergent behavior - -A behavior exhibited by a foundation model that was not explicitly constructed. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_78,F003581774D3028EF53E61A002C20A6D36BA8E00," encoder-decoder model - -A model for both understanding input text and for generating output text based on the input text. Encoder-decoder models are used for tasks such as summarization or translation. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_79,F003581774D3028EF53E61A002C20A6D36BA8E00," encoder-only model - -A model that understands input text at the sentence level by transforming input sequences into representational vectors called embeddings. Encoder-only models are used for tasks such as classifying customer feedback and extracting information from large documents. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_80,F003581774D3028EF53E61A002C20A6D36BA8E00," encoding - -The representation of a unit of information, such as a character or a word, as a set of numbers. See also [embedding](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10298004), [positional encoding](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10298071). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_81,F003581774D3028EF53E61A002C20A6D36BA8E00," endpoint URL - -A network destination address that identifies resources, such as services and objects. For example, an endpoint URL is used to identify the location of a model or function deployment when a user sends payload data to the deployment. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_82,F003581774D3028EF53E61A002C20A6D36BA8E00," environment - -The compute resources for running jobs. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_83,F003581774D3028EF53E61A002C20A6D36BA8E00," environment runtime - -An instantiation of the environment template to run analytical assets. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_84,F003581774D3028EF53E61A002C20A6D36BA8E00," environment template - -A definition that specifies hardware and software resources to instantiate environment runtimes. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_85,F003581774D3028EF53E61A002C20A6D36BA8E00," exogenous feature - -A feature that can influence the predictive model but cannot be influenced in return. For example, temperatures can affect predicted ice cream sales, but ice cream sales cannot influence temperatures. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_86,F003581774D3028EF53E61A002C20A6D36BA8E00," experiment - -A model training process that considers a series of training definitions and parameters to determine the most accurate model configuration. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_87,F003581774D3028EF53E61A002C20A6D36BA8E00," explainability - - - -* The ability of human users to trace, audit, and understand predictions that are made in applications that use AI systems. -* The ability of an AI system to provide insights that humans can use to understand the causes of the system's predictions. - - - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_88,F003581774D3028EF53E61A002C20A6D36BA8E00," F - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_89,F003581774D3028EF53E61A002C20A6D36BA8E00," fairness - -In an AI system, the equitable treatment of individuals or groups of individuals. The choice of a specific notion of equity for an AI system depends on the context in which it is used. See also [bias](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx2803778). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_90,F003581774D3028EF53E61A002C20A6D36BA8E00," feature - -A property or characteristic of an item within a data set, for example, a column in a spreadsheet. In some cases, features are engineered as combinations of other features in the data set. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_91,F003581774D3028EF53E61A002C20A6D36BA8E00," feature engineering - -The process of selecting, transforming, and creating new features from raw data to improve the performance and predictive power of machine learning models. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_92,F003581774D3028EF53E61A002C20A6D36BA8E00," feature group - -A set of columns of a particular data asset along with the metadata that is used for machine learning. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_93,F003581774D3028EF53E61A002C20A6D36BA8E00," feature selection - -Identifying the columns of data that best support an accurate prediction or score in a machine learning model. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_94,F003581774D3028EF53E61A002C20A6D36BA8E00," feature store - -A centralized repository or system that manages and organizes features, providing a scalable and efficient way to store, retrieve, and share feature data across machine learning pipelines and applications. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_95,F003581774D3028EF53E61A002C20A6D36BA8E00," feature transformation - -In AutoAI, a phase of pipeline creation that applies algorithms to transform and optimize the training data to achieve the best outcome for the model type. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_96,F003581774D3028EF53E61A002C20A6D36BA8E00," federated learning - -The training of a common machine learning model that uses multiple data sources that are not moved, joined, or shared. The result is a better-trained model without compromising data security. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_97,F003581774D3028EF53E61A002C20A6D36BA8E00," few-shot prompting - -A prompting technique in which a small number of examples are provided to the model to demonstrate how to complete the task. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_98,F003581774D3028EF53E61A002C20A6D36BA8E00," fine tuning - -The process of adapting a pre-trained model to perform a specific task by conducting additional training. Fine tuning may involve (1) updating the model’s existing parameters, known as full fine tuning, or (2) updating a subset of the model’s existing parameters or adding new parameters to the model and training them while freezing the model’s existing parameters, known as parameter-efficient fine tuning. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_99,F003581774D3028EF53E61A002C20A6D36BA8E00," flow - -A collection of nodes that define a set of steps for processing data or training a model. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_100,F003581774D3028EF53E61A002C20A6D36BA8E00," foundation model - -An AI model that can be adapted to a wide range of downstream tasks. Foundation models are typically large-scale generative models that are trained on unlabeled data using self-supervision. As large scale models, foundation models can include billions of parameters. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_101,F003581774D3028EF53E61A002C20A6D36BA8E00," G - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_102,F003581774D3028EF53E61A002C20A6D36BA8E00," Gantt chart - -A graphical representation of a project timeline and duration in which schedule data is displayed as horizontal bars along a time scale. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_103,F003581774D3028EF53E61A002C20A6D36BA8E00," gen AI - -See [generative AI](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10298036). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_104,F003581774D3028EF53E61A002C20A6D36BA8E00," generative AI (gen AI) - -A class of AI algorithms that can produce various types of content including text, source code, imagery, audio, and synthetic data. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_105,F003581774D3028EF53E61A002C20A6D36BA8E00," generative variability - -The characteristic of generative models to produce varied outputs, even when the input to the model is held constant. See also [probabilistic](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10298081). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_106,F003581774D3028EF53E61A002C20A6D36BA8E00," GPU - -See [graphics processing unit](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx8987320). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_107,F003581774D3028EF53E61A002C20A6D36BA8E00," graphical builder - -A tool for creating analytical assets by visually coding. A canvas is an area on which to place objects or nodes that can be connected to create a flow. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_108,F003581774D3028EF53E61A002C20A6D36BA8E00," graphics processing unit (GPU) - -A specialized processor designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display. GPUs are heavily utilized in machine learning due to their parallel processing capabilities. See also [accelerator](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx2048370). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_109,F003581774D3028EF53E61A002C20A6D36BA8E00," H - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_110,F003581774D3028EF53E61A002C20A6D36BA8E00," hallucination - -A response from a foundation model that includes off-topic, repetitive, incorrect, or fabricated content. Hallucinations involving fabricating details can happen when a model is prompted to generate text, but the model doesn't have enough related text to draw upon to generate a result that contains the correct details. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_111,F003581774D3028EF53E61A002C20A6D36BA8E00," hold-out set - -A set of labeled data that is intentionally withheld from both the training and validation sets, serving as an unbiased assessment of the final model's performance on unseen data. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_112,F003581774D3028EF53E61A002C20A6D36BA8E00," homogenization - -The trend in machine learning research in which a small number of deep neural net architectures, such as the transformer, are achieving state-of-the-art results across a wide variety of tasks. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_113,F003581774D3028EF53E61A002C20A6D36BA8E00," HPO - -See [hyperparameter optimization](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx9895660). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_114,F003581774D3028EF53E61A002C20A6D36BA8E00," human oversight - -Human involvement in reviewing decisions rendered by an AI system, enabling human autonomy and accountability of decision. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_115,F003581774D3028EF53E61A002C20A6D36BA8E00," hyperparameter - -In machine learning, a parameter whose value is set before training as a way to increase model accuracy. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_116,F003581774D3028EF53E61A002C20A6D36BA8E00," hyperparameter optimization (HPO) - -The process for setting hyperparameter values to the settings that provide the most accurate model. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_117,F003581774D3028EF53E61A002C20A6D36BA8E00," I - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_118,F003581774D3028EF53E61A002C20A6D36BA8E00," image - -A software package that contains a set of libraries. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_119,F003581774D3028EF53E61A002C20A6D36BA8E00," incremental learning - -The process of training a model using data that is continually updated without forgetting data obtained from the preceding tasks. This technique is used to train a model with batches of data from a large training data source. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_120,F003581774D3028EF53E61A002C20A6D36BA8E00," inferencing - -The process of running live data through a trained AI model to make a prediction or solve a task. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_121,F003581774D3028EF53E61A002C20A6D36BA8E00," ingest - - - -* To feed data into a system for the purpose of creating a base of knowledge. -* To continuously add a high-volume of real-time data to a database. - - - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_122,F003581774D3028EF53E61A002C20A6D36BA8E00," insight - -An accurate or deep understanding of something. Insights are derived using cognitive analytics to provide current snapshots and predictions of customer behaviors and attitudes. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_123,F003581774D3028EF53E61A002C20A6D36BA8E00," intelligent AI - -Artificial intelligence systems that can understand, learn, adapt, and implement knowledge, demonstrating abilities like decision-making, problem-solving, and understanding complex concepts, much like human intelligence. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_124,F003581774D3028EF53E61A002C20A6D36BA8E00," intent - -A purpose or goal expressed by customer input to a chatbot, such as answering a question or processing a bill payment. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_125,F003581774D3028EF53E61A002C20A6D36BA8E00," J - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_126,F003581774D3028EF53E61A002C20A6D36BA8E00," job - -A separately executable unit of work. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_127,F003581774D3028EF53E61A002C20A6D36BA8E00," K - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_128,F003581774D3028EF53E61A002C20A6D36BA8E00," knowledge base - -See [corpus](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx3954167). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_129,F003581774D3028EF53E61A002C20A6D36BA8E00," L - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_130,F003581774D3028EF53E61A002C20A6D36BA8E00," label - -A class or category assigned to a data point in supervised learning.Labels can be derived from data but are often applied by human labelers or annotators. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_131,F003581774D3028EF53E61A002C20A6D36BA8E00," labeled data - -Raw data that is assigned labels to add context or meaning so that it can be used to train machine learning models. For example, numeric values might be labeled as zip codes or ages to provide context for model inputs and outputs. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_132,F003581774D3028EF53E61A002C20A6D36BA8E00," large language model (LLM) - -A language model with a large number of parameters, trained on a large quantity of text. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_133,F003581774D3028EF53E61A002C20A6D36BA8E00," LLM - -See [large language model](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10298052). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_134,F003581774D3028EF53E61A002C20A6D36BA8E00," M - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_135,F003581774D3028EF53E61A002C20A6D36BA8E00," machine learning (ML) - -A branch of artificial intelligence (AI) and computer science that focuses on the use of data and algorithms to imitate the way that humans learn, gradually improving the accuracy of AI models. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_136,F003581774D3028EF53E61A002C20A6D36BA8E00," machine learning framework - -The libraries and runtime for training and deploying a model. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_137,F003581774D3028EF53E61A002C20A6D36BA8E00," machine learning model - -An AI model that is trained on a a set of data to develop algorithms that it can use to analyze and learn from new data. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_138,F003581774D3028EF53E61A002C20A6D36BA8E00," mental model - -An individual’s understanding of how a system works and how their actions affect system outcomes. When these expectations do not match the actual capabilities of a system, it can lead to frustration, abandonment, or misuse. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_139,F003581774D3028EF53E61A002C20A6D36BA8E00," misalignment - -A discrepancy between the goals or behaviors that an AI system is optimized to achieve and the true, often complex, objectives of its human users or designers - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_140,F003581774D3028EF53E61A002C20A6D36BA8E00," ML - -See [machine learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx8397498). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_141,F003581774D3028EF53E61A002C20A6D36BA8E00," MLOps - - - -* The practice for collaboration between data scientists and operations professionals to help manage production machine learning (or deep learning) lifecycle. MLOps looks to increase automation and improve the quality of production ML while also focusing on business and regulatory requirements. It involves model development, training, validation, deployment, monitoring, and management and uses methods like CI/CD. -* A methodology that takes a machine learning model from development to production. - - - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_142,F003581774D3028EF53E61A002C20A6D36BA8E00," model - - - -* In a machine learning context, a set of functions and algorithms that have been trained and tested on a data set to provide predictions or decisions. -* In Decision Optimization, a mathematical formulation of a problem that can be solved with CPLEX optimization engines using different data sets. - - - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_143,F003581774D3028EF53E61A002C20A6D36BA8E00," ModelOps - -A methodology for managing the full lifecycle of an AI model, including training, deployment, scoring, evaluation, retraining, and updating. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_144,F003581774D3028EF53E61A002C20A6D36BA8E00," monitored group - -A class of data that is monitored to determine if the results from a predictive model differ significantly from the results of the reference group. Groups are commonly monitored based on characteristics that include race, gender, or age. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_145,F003581774D3028EF53E61A002C20A6D36BA8E00," multiclass classification model - -A classification task with more than two classes. For example, where a binary classification model predicts yes or no values, a multi-class model predicts yes, no, maybe, or not applicable. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_146,F003581774D3028EF53E61A002C20A6D36BA8E00," multivariate time series - -Time series experiment that contains two or more changing variables. For example, a time series model forecasting the electricity usage of three clients. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_147,F003581774D3028EF53E61A002C20A6D36BA8E00," N - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_148,F003581774D3028EF53E61A002C20A6D36BA8E00," natural language processing (NLP) - -A field of artificial intelligence and linguistics that studies the problems inherent in the processing and manipulation of natural language, with an aim to increase the ability of computers to understand human languages. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_149,F003581774D3028EF53E61A002C20A6D36BA8E00," natural language processing library - -A library that provides basic natural language processing functions for syntax analysis and out-of-the-box pre-trained models for a wide variety of text processing tasks. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_150,F003581774D3028EF53E61A002C20A6D36BA8E00," neural network - -A mathematical model for predicting or classifying cases by using a complex mathematical scheme that simulates an abstract version of brain cells. A neural network is trained by presenting it with a large number of observed cases, one at a time, and allowing it to update itself repeatedly until it learns the task. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_151,F003581774D3028EF53E61A002C20A6D36BA8E00," NLP - -See [natural language processing](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx2031058). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_152,F003581774D3028EF53E61A002C20A6D36BA8E00," node - -In an SPSS Modeler flow, the graphical representation of a data operation. - - notebook - -An interactive document that contains executable code, descriptive text for that code, and the results of any code that is run. - - notebook kernel - -The part of the notebook editor that executes code and returns the computational results. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_153,F003581774D3028EF53E61A002C20A6D36BA8E00," O - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_154,F003581774D3028EF53E61A002C20A6D36BA8E00," object storage - -A method of storing data, typically used in the cloud, in which data is stored as discrete units, or objects, in a storage pool or repository that does not use a file hierarchy but that stores all objects at the same level. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_155,F003581774D3028EF53E61A002C20A6D36BA8E00," one-shot learning - -A model for deep learning that is based on the premise that most human learning takes place upon receiving just one or two examples. This model is similar to unsupervised learning. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_156,F003581774D3028EF53E61A002C20A6D36BA8E00," one-shot prompting - -A prompting technique in which a single example is provided to the model to demonstrate how to complete the task. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_157,F003581774D3028EF53E61A002C20A6D36BA8E00," online deployment - -Method of accessing a model or Python code deployment through an API endpoint as a web service to generate predictions online, in real time. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_158,F003581774D3028EF53E61A002C20A6D36BA8E00," ontology - -An explicit formal specification of the representation of the objects, concepts, and other entities that can exist in some area of interest and the relationships among them. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_159,F003581774D3028EF53E61A002C20A6D36BA8E00," operational asset - -An asset that runs code in a tool or a job. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_160,F003581774D3028EF53E61A002C20A6D36BA8E00," optimization - -The process of finding the most appropriate solution to a precisely defined problem while respecting the imposed constraints and limitations. For example, determining how to allocate resources or how to find the best elements or combinations from a large set of alternatives. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_161,F003581774D3028EF53E61A002C20A6D36BA8E00," Optimization Programming Language - -A modeling language for expressing model formulations of optimization problems in a format that can be solved by CPLEX optimization engines such as IBM CPLEX. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_162,F003581774D3028EF53E61A002C20A6D36BA8E00," optimized metric - -A metric used to measure the performance of the model. For example, accuracy is the typical metric used to measure the performance of a binary classification model. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_163,F003581774D3028EF53E61A002C20A6D36BA8E00," orchestration - -The process of creating an end-to-end flow that can train, run, deploy, test, and evaluate a machine learning model, and uses automation to coordinate the system, often using microservices. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_164,F003581774D3028EF53E61A002C20A6D36BA8E00," overreliance - -A user's acceptance of an incorrect recommendation made by an AI model. See also [reliance](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10299283), [underreliance](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10299288). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_165,F003581774D3028EF53E61A002C20A6D36BA8E00," P - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_166,F003581774D3028EF53E61A002C20A6D36BA8E00," parameter - - - -* A configurable part of the model that is internal to a model and whose values are estimated or learned from data. Parameters are aspects of the model that are adjusted during the training process to help the model accurately predict the output. The model's performance and predictive power largely depend on the values of these parameters. -* A real-valued weight between 0.0 and 1.0 indicating the strength of connection between two neurons in a neural network. - - - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_167,F003581774D3028EF53E61A002C20A6D36BA8E00," party - -In Federated Learning, an entity that contributes data for training a common model. The data is not moved or combined but each party gets the benefit of the federated training. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_168,F003581774D3028EF53E61A002C20A6D36BA8E00," payload - -The data that is passed to a deployment to get back a score, prediction, or solution. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_169,F003581774D3028EF53E61A002C20A6D36BA8E00," payload logging - -The capture of payload data and deployment output to monitor ongoing health of AI in business applications. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_170,F003581774D3028EF53E61A002C20A6D36BA8E00," pipeline - - - -* In Watson Pipelines, an end-to-end flow of assets from creation through deployment. -* In AutoAI, a candidate model. - - - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_171,F003581774D3028EF53E61A002C20A6D36BA8E00," pipeline leaderboard - -In AutoAI, a table that shows the list of automatically generated candidate models, as pipelines, ranked according to the specified criteria. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_172,F003581774D3028EF53E61A002C20A6D36BA8E00," policy - -A strategy or rule that an agent follows to determine the next action based on the current state. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_173,F003581774D3028EF53E61A002C20A6D36BA8E00," positional encoding - -An encoding of an ordered sequence of data that includes positional information, such as encoding of words in a sentence that includes each word's position within the sentence. See also [encoding](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx2426645). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_174,F003581774D3028EF53E61A002C20A6D36BA8E00," predictive analytics - -A business process and a set of related technologies that are concerned with the prediction of future possibilities and trends. Predictive analytics applies such diverse disciplines as probability, statistics, machine learning, and artificial intelligence to business problems to find the best action for a specific situation. See also [data mining](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx2114083). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_175,F003581774D3028EF53E61A002C20A6D36BA8E00," pretrained model - -An AI model that was previously trained on a large data set to accomplish a specific task. Pretrained models are used instead of building a model from scratch. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_176,F003581774D3028EF53E61A002C20A6D36BA8E00," pretraining - -The process of training a machine learning model on a large dataset before fine-tuning it for a specific task. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_177,F003581774D3028EF53E61A002C20A6D36BA8E00," privacy - -Assurance that information about an individual is protected from unauthorized access and inappropriate use. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_178,F003581774D3028EF53E61A002C20A6D36BA8E00," probabilistic - -The characteristic of being subject to randomness; non-deterministic. Probabilistic models do not produce the same outputs given the same inputs. See also [generative variability](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10298041). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_179,F003581774D3028EF53E61A002C20A6D36BA8E00," project - -A collaborative workspace for working with data and other assets. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_180,F003581774D3028EF53E61A002C20A6D36BA8E00," prompt - - - -* Data, such as text or an image, that prepares, instructs, or conditions a foundation model's output. -* A component of an action that indicates that user input is required for a field before making a transition to an output screen. - - - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_181,F003581774D3028EF53E61A002C20A6D36BA8E00," prompt engineering - -The process of designing natural language prompts for a language model to perform a specific task. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_182,F003581774D3028EF53E61A002C20A6D36BA8E00," prompting - -The process of providing input to a foundation model to induce it to produce output. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_183,F003581774D3028EF53E61A002C20A6D36BA8E00," prompt tuning - -An efficient, low-cost way of adapting a pre-trained model to new tasks without retraining the model or updating its weights. Prompt tuning involves learning a small number of new parameters that are appended to a model’s prompt, while freezing the model’s existing parameters. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_184,F003581774D3028EF53E61A002C20A6D36BA8E00," pruning - -The process of simplifying, shrinking, or trimming a decision tree or neural network. This is done by removing less important nodes or layers, reducing complexity to prevent overfitting and improve model generalization while maintaining its predictive power. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_185,F003581774D3028EF53E61A002C20A6D36BA8E00," Python - -A programming language that is used in data science and AI. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_186,F003581774D3028EF53E61A002C20A6D36BA8E00," Python function - -A function that contains Python code to support a model in production. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_187,F003581774D3028EF53E61A002C20A6D36BA8E00," R - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_188,F003581774D3028EF53E61A002C20A6D36BA8E00," R - -An extensible scripting language that is used in data science and AI that offers a wide variety of analytic, statistical, and graphical functions and techniques. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_189,F003581774D3028EF53E61A002C20A6D36BA8E00," RAG - -See [retrieval augmented generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10299275). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_190,F003581774D3028EF53E61A002C20A6D36BA8E00," random seed - -A number used to initialize a pseudorandom number generator. Random seeds enable reproducibility for processes that rely on random number generation. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_191,F003581774D3028EF53E61A002C20A6D36BA8E00," reference group - -A group that is identified as most likely to receive a positive result in a predictive model. The results can be compared to a monitored group to look for potential bias in outcomes. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_192,F003581774D3028EF53E61A002C20A6D36BA8E00," refine - -To cleanse and shape data. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_193,F003581774D3028EF53E61A002C20A6D36BA8E00," regression model - -A model that relates a dependent variable to one or more independent variables. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_194,F003581774D3028EF53E61A002C20A6D36BA8E00," reinforcement learning - -A machine learning technique in which an agent learns to make sequential decisions in an environment to maximize a reward signal. Inspired by trial and error learning, agents interact with the environment, receive feedback, and adjust their actions to achieve optimal policies. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_195,F003581774D3028EF53E61A002C20A6D36BA8E00," reinforcement learning on human feedback (RLHF) - -A method of aligning a language learning model's responses to the instructions given in a prompt. RLHF requires human annotators rank multiple outputs from the model. These rankings are then used to train a reward model using reinforcement learning. The reward model is then used to fine-tune the large language model's output. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_196,F003581774D3028EF53E61A002C20A6D36BA8E00," reliance - -In AI systems, a user’s acceptance of a recommendation made by, or the output generated by, an AI model. See also [overreliance](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10299271), [underreliance](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10299288). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_197,F003581774D3028EF53E61A002C20A6D36BA8E00," representation - -An encoding of a unit of information, often as a vector of real-valued numbers. See also [embedding](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10298004). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_198,F003581774D3028EF53E61A002C20A6D36BA8E00," retrieval augmented generation (RAG) - -A technique in which a large language model is augmented with knowledge from external sources to generate text. In the retrieval step, relevant documents from an external source are identified from the user’s query. In the generation step, portions of those documents are included in the LLM prompt to generate a response grounded in the retrieved documents. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_199,F003581774D3028EF53E61A002C20A6D36BA8E00," reward - -A signal used to guide an agent, typically a reinforcement learning agent, that provides feedback on the goodness of a decision - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_200,F003581774D3028EF53E61A002C20A6D36BA8E00," RLHF - -See [reinforcement learning on human feedback](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10298109). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_201,F003581774D3028EF53E61A002C20A6D36BA8E00," runtime environment - -The predefined or custom hardware and software configuration that is used to run tools or jobs, such as notebooks. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_202,F003581774D3028EF53E61A002C20A6D36BA8E00," S - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_203,F003581774D3028EF53E61A002C20A6D36BA8E00," scoring - - - -* In machine learning, the process of measuring the confidence of a predicted outcome. -* The process of computing how closely the attributes for an incoming identity match the attributes of an existing entity. - - - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_204,F003581774D3028EF53E61A002C20A6D36BA8E00," script - -A file that contains Python or R scripts to support a model in production. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_205,F003581774D3028EF53E61A002C20A6D36BA8E00," self-attention - -An attention mechanism that uses information from the input data itself to determine what parts of the input to focus on when generating output. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_206,F003581774D3028EF53E61A002C20A6D36BA8E00," self-supervised learning - -A machine learning training method in which a model learns from unlabeled data by masking tokens in an input sequence and then trying to predict them. An example is ""I like __ sprouts"". - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_207,F003581774D3028EF53E61A002C20A6D36BA8E00," sentience - -The capacity to have subjective experiences and feelings, or consciousness. It involves the ability to perceive, reason, and experience sensations such as pain and pleasure. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_208,F003581774D3028EF53E61A002C20A6D36BA8E00," sentiment analysis - -Examination of the sentiment or emotion expressed in text, such as determining if a movie review is positive or negative. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_209,F003581774D3028EF53E61A002C20A6D36BA8E00," shape - -To customize data by filtering, sorting, removing columns; joining tables; performing operations that include calculations, data groupings, hierarchies and more. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_210,F003581774D3028EF53E61A002C20A6D36BA8E00," small data - -Data that is accessible and comprehensible by humans. See also [structured data](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx2490040). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_211,F003581774D3028EF53E61A002C20A6D36BA8E00," SQL pushback - -In SPSS Modeler, the process of performing many data preparation and mining operations directly in the database through SQL code. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_212,F003581774D3028EF53E61A002C20A6D36BA8E00," structured data - -Data that resides in fixed fields within a record or file. Relational databases and spreadsheets are examples of structured data. See also [unstructured data](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx2490044), [small data](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx8317275). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_213,F003581774D3028EF53E61A002C20A6D36BA8E00," structured information - -Items stored in structured resources, such as search engine indices, databases, or knowledge bases. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_214,F003581774D3028EF53E61A002C20A6D36BA8E00," supervised learning - -A machine learning training method in which a model is trained on a labeled dataset to make predictions on new data. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_215,F003581774D3028EF53E61A002C20A6D36BA8E00," T - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_216,F003581774D3028EF53E61A002C20A6D36BA8E00," temperature - -A parameter in a generative model that specifies the amount of variation in the generation process. Higher temperatures result in greater variability in the model's output. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_217,F003581774D3028EF53E61A002C20A6D36BA8E00," text classification - -A model that automatically identifies and classifies text into specified categories. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_218,F003581774D3028EF53E61A002C20A6D36BA8E00," time series - -A set of values of a variable at periodic points in time. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_219,F003581774D3028EF53E61A002C20A6D36BA8E00," time series model - -A model that tracks and predicts data over time. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_220,F003581774D3028EF53E61A002C20A6D36BA8E00," token - -A discrete unit of meaning or analysis in a text, such as a word or subword. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_221,F003581774D3028EF53E61A002C20A6D36BA8E00," tokenization - -The process used in natural language processing to split a string of text into smaller units, such as words or subwords. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_222,F003581774D3028EF53E61A002C20A6D36BA8E00," trained model - -A model that is trained with actual data and is ready to be deployed to predict outcomes when presented with new data. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_223,F003581774D3028EF53E61A002C20A6D36BA8E00," training - -The initial stage of model building, involving a subset of the source data. The model learns by example from the known data. The model can then be tested against a further, different subset for which the outcome is already known. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_224,F003581774D3028EF53E61A002C20A6D36BA8E00," training data - -A set of annotated documents that can be used to train machine learning models. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_225,F003581774D3028EF53E61A002C20A6D36BA8E00," training set - -A set of labeled data that is used to train a machine learning model by exposing it to examples and their corresponding labels, enabling the model to learn patterns and make predictions. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_226,F003581774D3028EF53E61A002C20A6D36BA8E00," transfer learning - -A machine learning strategy in which a trained model is applied to a completely new problem. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_227,F003581774D3028EF53E61A002C20A6D36BA8E00," transformer - -A neural network architecture that uses positional encodings and the self-attention mechanism to predict the next token in a sequence of tokens. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_228,F003581774D3028EF53E61A002C20A6D36BA8E00," transparency - -Sharing appropriate information with stakeholders on how an AI system has been designed and developed. Examples of this information are what data is collected, how it will be used and stored, and who has access to it; and test results for accuracy, robustness and bias. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_229,F003581774D3028EF53E61A002C20A6D36BA8E00," trust calibration - -The process of evaluating and adjusting one’s trust in an AI system based on factors such as its accuracy, reliability, and credibility. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_230,F003581774D3028EF53E61A002C20A6D36BA8E00," Turing test - -Proposed by Alan Turing in 1950, a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_231,F003581774D3028EF53E61A002C20A6D36BA8E00," U - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_232,F003581774D3028EF53E61A002C20A6D36BA8E00," underreliance - -A user's rejection of a correct recommendations made by an AI model. See also [overreliance](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10299271), [reliance](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx10299283). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_233,F003581774D3028EF53E61A002C20A6D36BA8E00," univariate time series - -Time series experiment that contains only one changing variable. For example, a time series model forecasting the temperature has a single prediction column of the temperature. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_234,F003581774D3028EF53E61A002C20A6D36BA8E00," unstructured data - -Any data that is stored in an unstructured format rather than in fixed fields. Data in a word processing document is an example of unstructured data. See also [structured data](https://dataplatform.cloud.ibm.com/docs/content/wsj/wscommon/glossary-wx.html?context=cdpaas&locale=enx2490040). - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_235,F003581774D3028EF53E61A002C20A6D36BA8E00," unstructured information - -Data that is not contained in a fixed location, such as the natural language text document. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_236,F003581774D3028EF53E61A002C20A6D36BA8E00," unsupervised learning - - - -* A machine learning training method in which a model is not provided with labeled data and must find patterns or structure in the data on its own. -* A model for deep learning that allows raw, unlabeled data to be used to train a system with little to no human effort. - - - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_237,F003581774D3028EF53E61A002C20A6D36BA8E00," V - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_238,F003581774D3028EF53E61A002C20A6D36BA8E00," validation set - -A separate set of labeled data that is used to evaluate the performance and generalization ability of a machine learning model during the training process, assisting in hyperparameter tuning and model selection. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_239,F003581774D3028EF53E61A002C20A6D36BA8E00," vector - -A one-dimensional, ordered list of numbers, such as [1, 2, 5] or [0.7, 0.2, -1.0]. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_240,F003581774D3028EF53E61A002C20A6D36BA8E00," virtual agent - -A pretrained chat bot that can process natural language to respond and complete simple business transactions, or route more complicated requests to a human with subject matter expertise. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_241,F003581774D3028EF53E61A002C20A6D36BA8E00," visualization - -A graph, chart, plot, table, map, or any other visual representation of data. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_242,F003581774D3028EF53E61A002C20A6D36BA8E00," W - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_243,F003581774D3028EF53E61A002C20A6D36BA8E00," weight - -A coefficient for a node that transforms input data within the network's layer. Weight is a parameter that an AI model learns through training, adjusting its value to reduce errors in the model's predictions. - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_244,F003581774D3028EF53E61A002C20A6D36BA8E00," Z - -" -F003581774D3028EF53E61A002C20A6D36BA8E00_245,F003581774D3028EF53E61A002C20A6D36BA8E00," zero-shot prompt - -A prompting technique in which the model completes a task without being given a specific example of how. -"