81D740CEF3967C20721612B7866072EF240484E9
https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DOJava.html?context=cdpaas&locale=en
Decision Optimization Java models
# Decision Optimization Java models #

You can create and run Decision Optimization models in Java by using the Watson Machine Learning REST API. You can build your Decision Optimization models in Java, or you can use the Java worker to package CPLEX, CPO, and OPL models. For more information about these models, see the following reference manuals:

* [Java CPLEX reference documentation](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.cplex.help/refjavacplex/html/overview-summary.html)
* [Java CPO reference documentation](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.cpo.help/refjavacpoptimizer/html/overview-summary.html)
* [Java OPL reference documentation](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.ide.help/refjavaopl/html/overview-summary.html)

To package and deploy Java models in Watson Machine Learning, see [Deploying Java models for Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployJava.html) and the boilerplate provided in the [Java worker GitHub](https://github.com/IBMDecisionOptimization/cplex-java-worker/blob/master/README.md). A rough sketch of running such a deployment through the REST API is shown below.
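For orientation only, the following Python sketch shows the general shape of submitting a job to a deployed model through the Watson Machine Learning REST API. Treat it as a sketch under assumptions: the region URL, `version` value, and payload layout reflect the WML v4 API as commonly documented, and the deployment ID, space ID, and input table are invented placeholders. Verify the exact contract against the deployment documentation linked above.

```python
# Sketch: submit a job to a deployed Decision Optimization model via the
# WML REST API. Endpoint, version parameter, and payload layout are
# assumptions -- check the current WML v4 API reference before use.
import requests

WML_URL = "https://us-south.ml.cloud.ibm.com"  # assumed region endpoint
TOKEN = "<bearer token from IBM Cloud IAM>"
SPACE_ID = "<deployment space id>"
DEPLOYMENT_ID = "<id of the deployed model>"

payload = {
    "deployment": {"id": DEPLOYMENT_ID},
    # Input tables are passed inline; "diet_food" is a made-up table name.
    "decision_optimization": {
        "input_data": [
            {"id": "diet_food.csv",
             "fields": ["name", "cost"],
             "values": [["bread", 2.0], ["milk", 1.5]]}
        ]
    },
}

resp = requests.post(
    f"{WML_URL}/ml/v4/deployment_jobs",
    params={"version": "2020-09-01", "space_id": SPACE_ID},
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
    timeout=60,
)
print(resp.status_code, resp.json())
```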
6DBD14399B24F78CAFEC6225B77DAFAE357DDEE5
https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DONotebooks.html?context=cdpaas&locale=en
Decision Optimization notebooks
# Decision Optimization notebooks #

You can create and run Decision Optimization models in Python notebooks by using DOcplex, a native Python API for Decision Optimization. Several Decision Optimization notebooks are already available for you to use.

The Decision Optimization environment currently supports `Python 3.10`. The following Python environments give you access to the Community Edition of the CPLEX engines. The Community Edition is limited to solving problems with up to 1000 constraints and 1000 variables, or with a search space of 1000 × 1000 for Constraint Programming problems.

* `Runtime 23.1 on Python 3.10 S/XS/XXS`
* `Runtime 22.2 on Python 3.10 S/XS/XXS`

To run larger problems, select a runtime that includes the full CPLEX commercial edition. The Decision Optimization environment (DOcplex) is available in the following runtimes (full CPLEX commercial edition):

* `NLP + DO runtime 23.1 on Python 3.10` with `CPLEX 22.1.1.0`
* `DO + NLP runtime 22.2 on Python 3.10` with `CPLEX 20.1.0.1`

You can easily change environments (runtimes and Python version) inside a notebook by using the Environment tab (see [Changing the environment of a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html#change-env)). Thus, you can formulate optimization models and test them with small data sets in one environment, and then switch to a different environment to solve models with bigger data sets, without having to rewrite or copy the notebook code.

Multiple examples of Decision Optimization notebooks are available in the Samples, including:

* The Sudoku example, a Constraint Programming example in which the objective is to solve a 9x9 Sudoku grid.
* The Pasta Production Problem example, a Linear Programming example in which the objective is to minimize the production cost for some pasta products and to ensure that the customers' demand for the products is satisfied.

These and more examples are also available in the **jupyter** folder of the **[DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples)** repository. All Decision Optimization notebooks use DOcplex.
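For illustration, here is a minimal DOcplex model of the kind these notebooks contain. The product names, coefficients, and constraint names are invented for this sketch (it is not one of the samples); a problem of this size solves comfortably within the Community Edition limits.

```python
# A minimal DOcplex linear program, well within the Community Edition limits.
from docplex.mp.model import Model

m = Model(name="production")

# Decision variables: quantities of two made-up products (default lower bound 0).
x = m.continuous_var(name="x")
y = m.continuous_var(name="y")

# Resource constraints.
m.add_constraint(2 * x + y <= 100, ctname="machine_hours")
m.add_constraint(x + 3 * y <= 90, ctname="labor_hours")

# Objective: maximize profit.
m.maximize(30 * x + 40 * y)

solution = m.solve()
if solution:
    solution.display()
else:
    print("No solution found")
```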
277C8CB678CAF766466EDE03C506EB0A822FD400
https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DOconnections.html?context=cdpaas&locale=en
Supported data sources in Decision Optimization
# Supported data sources in Decision Optimization #

Decision Optimization supports the following relational and nonrelational data sources on watsonx.ai:

* [IBM data sources](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DOconnections.html?context=cdpaas&locale=en#DOConnections__ibm-data-src)
* [Third-party data sources](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DOconnections.html?context=cdpaas&locale=en#DOConnections__third-party-data-src)
E990E009903E315FA6752E7E82C2634AF4A425B9
https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DOintro.html?context=cdpaas&locale=en
Ways to use Decision Optimization
# Ways to use Decision Optimization #

To build Decision Optimization models, you can create Python notebooks with DOcplex, a native Python API for Decision Optimization, or use the Decision Optimization experiment UI, which offers more benefits and features.
8892A757ECB2C4A02806A7B262712FF2E30CE044
https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/OPLmodels.html?context=cdpaas&locale=en
OPL models
# OPL models #

You can build OPL models in the Decision Optimization experiment UI in watsonx.ai.

In this section:

* [Inputs and Outputs](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/OPLmodels.html?context=cdpaas&locale=en#topic_oplmodels__section_oplIO)
* [Engine settings](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/OPLmodels.html?context=cdpaas&locale=en#topic_oplmodels__engsettings)

To create an OPL model in the experiment UI, select OPL in the model selection window. You can also import OPL models from a file, or import a scenario .zip file that contains the OPL model and the data. If you import from a file or a scenario .zip file, the data must be in .csv format. However, you can import other file formats that you have as project assets into the experiment UI. You can also import data sets, including connected data, into your project from the model builder in the [Prepare data view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__section_preparedata).

For more information about the OPL language and engine parameters, see:

* [OPL language reference manual](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.ide.help/OPL_Studio/opllangref/topics/opl_langref_modeling_language.html)
* [OPL keywords](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.ide.help/OPL_Studio/opllang_quickref/topics/opl_keywords_top.html)
* [A list of CPLEX parameters](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.cplex.help/CPLEX/Parameters/topics/introListTopical.html)
* [A list of CPO parameters](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.cpo.help/CP_Optimizer/Parameters/topics/paramcpoptimizer.html)
8E56F0EFD08FF4A97E439EA3B8DE2B7AF1A302C9
https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=en
Decision Optimization Visualization view
# Visualization view #

With the Decision Optimization experiment Visualization view, you can configure the graphical representation of input data and solutions for one or several scenarios.

Quick links:

* [Visualization view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=en#topic_visualization__section-dashboard)
* [Table search and filtering](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=en#topic_visualization__section_tablefilter)
* [Visualization widgets syntax](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=en#topic_visualization__section_widgetssyntax)
* [Visualization Editor](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=en#topic_visualization__viseditor)
* [Visualization pages](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=en#topic_visualization__vispages)

The Visualization view is common to all scenarios in a Decision Optimization experiment. For example, the following image shows the default bar chart that appears in the solution tab for the example that is used in the tutorial [Solving and analyzing a model: the diet problem](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/solveModel.html#task_mtg_n3q_m1b).

![Visualization panel showing solution in table and bar chart](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/Cloudvisualization.jpg)

The Visualization view helps you compare different scenarios to validate models and business decisions. For example, to show the two scenarios solved in this diet example tutorial, you can add another bar chart as follows:

1. Click the chart widget and configure it by clicking the pencil icon.
2. In the Chart widget editor, select Add scenario and choose scenario 1 (assuming that your current scenario is scenario 2) so that you have both scenario 1 and scenario 2 listed.
3. In the Table field, select the Solution data option and select solution from the drop-down list.
4. In the bar chart pane, select Descending for the Category order and Y-axis for the Bar type, then click OK to close the Chart widget editor. A second bar chart is then displayed, showing you the solution results for scenario 2.
5. Re-edit the chart and select @Scenario in the Split by field of the Bar chart pane. You then obtain both scenarios in the same bar chart:

![Chart with two scenarios displayed in one chart.](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/ChartVisu2Scen.png)

You can select many different types of charts in the Chart widget editor.

Alternatively, you can display the same data by using the Vega Chart widget: choose Solution data > solution, and select value and name in the x and y fields in the Chart section of the Vega Chart widget editor. Then, in the Mark section, select @Scenario for the color field. This selection gives you the following bar chart with the two scenarios on the same y-axis, distinguished by different colors.

![Vega chart showing 2 scenarios](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/VegaChart2Scen.jpg)

If you re-edit the chart and select @Scenario for the column facet, you obtain the two scenarios in separate charts side by side, as follows:

![Vega charts showing 2 scenarios side by side.](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/VegaChart2Scen2.jpg)

You can use many different types of charts that are available in the Mark field of the Vega Chart widget editor. You can also select the JSON tab in all the widget editors and configure your charts by using JSON code (a rough sketch is shown at the end of this article). A more advanced example of JSON code is provided in the [Vega Chart widget specifications](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=en#topic_visualization__section_hdc_5mm_33b) section.

The following widgets are available:

* [**Notes widget**](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=en#topic_visualization__section_edc_5mm_33b) Add simple text notes to the Visualization view.
* [**Table widget**](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=en#topic_visualization__section_fdc_5mm_33b) Present input data and solutions in tables, with a search and filtering feature. See [Table search and filtering](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=en#topic_visualization__section_tablefilter).
* [**Charts widgets**](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=en#topic_visualization__section_alh_lfn_l2b) Present input data and solutions in charts.
* [**Gantt chart widget**](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=en#topic_visualization__section_idc_5mm_33b) Display the solution to a scheduling problem (or any other suitable type of problem) in a Gantt chart. This widget is used automatically for scheduling problems that are modeled with the Modeling Assistant. You can edit this Gantt chart, or create and configure new Gantt charts for any problem, even for models that don't use the Modeling Assistant.
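As a rough sketch of the kind of JSON configuration mentioned above, a bar chart split by scenario might resemble the following Vega-style specification. This is a hedged illustration only: the exact schema that the widget accepts, and the `name`, `value`, and `@Scenario` field bindings, are assumptions taken from the example above, so copy the real specification from your own widget's JSON tab.

```json
{
  "mark": "bar",
  "encoding": {
    "x": {"field": "name", "type": "nominal"},
    "y": {"field": "value", "type": "quantitative"},
    "color": {"field": "@Scenario", "type": "nominal"}
  }
}
```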
33923FE20855D3EA3850294C0FB447EC3F1B7BDF
https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/buildingmodels.html?context=cdpaas&locale=en
Decision Optimization experiments
# Decision Optimization experiments #

If you use the Decision Optimization experiment UI, you can take advantage of its many features in this user-friendly environment. For example, you can create and solve models, produce reports, compare scenarios, and save models ready for deployment with Watson Machine Learning.

The Decision Optimization experiment UI facilitates the workflow. Here you can:

* Select and edit the data relevant for your optimization problem; see [Prepare data view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__section_preparedata)
* Create, import, edit, and solve Python models in the Decision Optimization experiment UI; see the [Decision Optimization notebook tutorial](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/solveModel.html#task_mtg_n3q_m1b)
* Create, import, edit, and solve models expressed in natural language with the Modeling Assistant; see the [Modeling Assistant tutorial](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.html#cogusercase)
* Create, import, edit, and solve OPL models in the Decision Optimization experiment UI; see [OPL models](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/OPLmodels.html#topic_oplmodels)
* Generate a notebook from your model, work with it as a notebook, then reload it as a model; see [Generating a notebook from a scenario](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__generateNB) and [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__section_overview)
* Visualize data and solutions; see [Explore solution view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__solution)
* Investigate and compare solutions for multiple scenarios; see [Scenario pane](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__scenariopanel) and [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__section_overview)
* Easily create and share reports with tables, charts, and notes by using the widgets provided in the [Visualization Editor](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html#topic_visualization)
* Save models that are ready for deployment in Watson Machine Learning; see [Scenario pane](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__scenariopanel) and [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__section_overview)

See the [Decision Optimization experiment UI comparison table](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DOintro.html#DOIntro__comparisontable) for a list of features available with and without the Decision Optimization experiment UI. See [Views and scenarios](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface) for a description of the user interface and scenario management.
497007D0D0ABAC3202BBF912A15BFC389066EBDA
https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/configureEnvironments.html?context=cdpaas&locale=en
Decision Optimization experiment Python and CPLEX runtime versions and Python extensions
# Configuring environments and adding Python extensions #

You can change your default environment for Python and CPLEX in the experiment Overview.

## Procedure ##

To change the default environment for DOcplex and Modeling Assistant models:

1. Open the Overview, click ![information icon](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/infoicon.jpg) to open the Information pane, and select the Environments tab. ![Environment tab of information pane](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/overviewinfoenvirons.png)
2. Expand the environment section according to your model type. For Python and Modeling Assistant models, expand Python environment. You can see the default Python environment (if one exists). To change the default environment for OPL, CPLEX, or CPO models, expand the appropriate environment section according to your model type and follow this same procedure.
3. Expand the name of your environment, and select a different Python environment.
4. Optional: **To create a new environment**:
    1. Select New environment for Python. A new window opens for you to define your new environment. ![New environment window showing empty fields](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/overviewinfonewenv1.png)
    2. Enter a name, and select a CPLEX version, hardware specification, copies (number of nodes), and Python version. Optionally, you can set Associate a Python extension to On to include any Python libraries that you want to add.
    3. Click New Python extension.
    4. Enter a name for your extension in the new Create a Python extension window that opens, and click Create.
    5. In the new Configure Python extension window that opens, you can set YAML code to On and enter or edit the provided YAML code. For example, use the provided template to add custom libraries (a filled-in example of this template is shown at the end of this topic):

        ```yaml
        # Modify the following content to add a software customization to an environment.
        # To remove an existing customization, delete the entire content and click Apply.

        # Add conda channels on a new line after defaults, indented by two spaces and a hyphen.
        channels:
          - defaults

        # To add packages through conda or pip, remove the comment on the following line.
        # dependencies:

        # Add conda packages here, indented by two spaces and a hyphen.
        # Remove the comment on the following line and replace sample package name with your package name:
        # - a_conda_package=1.0

        # Add pip packages here, indented by four spaces and a hyphen.
        # Remove the comments on the following lines and replace sample package name with your package name.
        # - pip:
        #   - a_pip_package==1.0
        ```

        You can also click Browse to add any Python libraries. For example, this image shows a dynamic programming Python library that is imported and YAML code set to On. ![Configure Python extension window showing YAML code and a Dynamic Programming library included](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/PythonExtension.png) Click Done.
    6. Click Create in the New environment window.

    Your chosen (or newly created) environment appears as ticked in the Python environments drop-down list in the Environments tab. The tick indicates that this is the default Python environment for all scenarios in your experiment.
5. Select Manage experiment environments to see a detailed list of all existing environments for your experiment in the Environments tab. ![Manage experiment environment with two environments and drop-down menu.](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/manageenvextn.png) You can use the options provided by clicking the three vertical dots next to an environment to Edit, Set as default, Update in a deployment space, or Delete the environment. You can also create a New environment from the Manage experiment environments window, but creating a new environment from this window does not make it the default unless you explicitly set it as the default.

    Updating your environment for Python or CPLEX versions: Python versions are regularly updated. If, however, you have explicitly specified an older Python version in your model, you must update this version specification or your models will not work. You can either create a new Python environment, as described earlier, or edit one from Manage experiment environments. This is also useful if you want to select a different version of CPLEX for your default environment.
6. Click the Python extensions tab. ![Python extensions tab showing created extension](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/manageenvpyextn.png) Here you can view your Python extensions and see which environment each one is used in. You can also create a New Python extension, or use the options to Edit, Download, and Delete existing ones. If you edit a Python extension that is used by an experiment environment, the environment is re-created.

You can also view your Python environments in your deployment space assets, and any Python extensions that you have added appear in the software specification.

## Selecting a different run environment for a particular scenario ##

You can choose different environments for individual scenarios on the Environment tab of the Run configuration pane.

### Procedure ###

1. Open the Scenario pane and select your scenario in the Build model view.
2. Click the Configure run icon next to the Run button to open the Run configuration pane, and select the Environment tab.
3. Choose Select run environment for this scenario, choose an environment from the drop-down menu, and click Run.
4. Open the Overview information pane. You can now see that your scenario has your chosen environment, while other scenarios are not affected by this modification.
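As a filled-in example of the template shown in the procedure above, a customization that adds one conda package and one pip package might look like the following. The package names and versions are placeholders for illustration, not recommendations.

```yaml
# A filled-in version of the extension template; package names are examples only.
channels:
  - defaults

dependencies:
  - numpy=1.23          # a conda package, two-space indent and a hyphen
  - pip:
    - tabulate==0.9.0   # a pip package, four-space indent and a hyphen
```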
5788D38721AEAE446CFAD7D9288B6BAB33FA1EF9
https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/docExamples.html?context=cdpaas&locale=en
Decision Optimization sample models and notebooks
# Sample models and notebooks for Decision Optimization #

Several examples are presented in this documentation as tutorials. You can also use many other examples that are provided in the Decision Optimization GitHub and in the Samples.

Quick links:

* [Examples used in this documentation](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/docExamples.html?context=cdpaas&locale=en#Examples__docexamples)
* [Decision Optimization experiment samples (Modeling Assistant, Python, OPL)](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/docExamples.html?context=cdpaas&locale=en#Examples__section_modelbuildersamples)
* [Jupyter notebook samples](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/docExamples.html?context=cdpaas&locale=en#Examples__section_xrg_fdj_cgb)
* [Python notebooks in the Samples](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/docExamples.html?context=cdpaas&locale=en#Examples__section_pythoncommunity)
167D5677958594BA275E34B8748F7E8091782560
https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en
Decision Optimization experiment UI views and scenarios
# Decision Optimization experiment views and scenarios #

The Decision Optimization experiment UI has different views in which you can select data, create models, solve different scenarios, and visualize the results.

Quick links to sections:

* [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__section_overview)
* [Hardware and software configuration](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__section_environment)
* [Prepare data view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__section_preparedata)
* [Build model view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__ModelView)
* [Multiple model files](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__section_g21_p5n_plb)
* [Run models](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__runmodel)
* [Run configuration](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__section_runconfig)
* [Run environment tab](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__envtabConfigRun)
* [Explore solution view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__solution)
* [Scenario pane](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__scenariopanel)
* [Generating notebooks from scenarios](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__generateNB)
* [Importing scenarios](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__p_Importingscenarios)
* [Exporting scenarios](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__p_Exportingscenarios)

Note: To create and run optimization models, you must have both a Machine Learning service added to your project and a deployment space that is associated with your experiment:

1. Add a [**Machine Learning** service](https://cloud.ibm.com/catalog/services/machine-learning) to your project. You can either add this service at the project level (see [Creating a Watson Machine Learning Service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-instance.html)), or you can add it when you first create a new Decision Optimization experiment: click Add a Machine Learning service, select or create a New service, click Associate, then close the window.
2. Associate a [**deployment space**](https://dataplatform.cloud.ibm.com/ml-runtime/spaces) with your Decision Optimization experiment (see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html#create)). A deployment space can be created or selected when you first create a new Decision Optimization experiment: click Create a deployment space, enter a name for your deployment space, and click Create. For existing models, you can also create or select a space in the [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__section_overview) information pane.

When you add a **Decision Optimization experiment** as an asset in your project, you open the **Decision Optimization experiment UI**. With the Decision Optimization experiment UI, you can create and solve prescriptive optimization models that focus on the specific business problem that you want to solve. To edit and solve models, you must have the Admin or Editor role in the project. Viewers of shared projects can only see experiments, but cannot modify or run them.

You can create a Decision Optimization model from scratch by entering a name, or by choosing a `.zip` file, and then selecting Create. Scenario 1 opens. With the Decision Optimization experiment UI, you can create several scenarios with different data sets and optimization models. Thus, you can create and compare different scenarios and see what impact changes can have on a problem.

For a step-by-step guide to build, solve, and deploy a Decision Optimization model by using the user interface, see the [Quick start tutorial with video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html).

For each of the following views, you can organize your screen as full-screen or as a **split-screen**. To do so, hover over one of the view tabs (Prepare data, Build model, Explore solution) for a second or two. A menu then appears where you can select Full Screen, Left, or Right. For example, if you choose Left for the Prepare data view, and then choose Right for the Explore solution view, you can see both of these views on the same screen.
1C20BD9F24D670DD18B6BC28E020FBB23C742682
https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/CustomRules.html?context=cdpaas&locale=en
Creating advanced custom constraints with Python in the Decision Optimization Modeling Assistant
Creating advanced custom constraints with Python This Decision Optimization Modeling Assistant example shows you how to create advanced custom constraints that use Python. Procedure To create a new advanced custom constraint: 1. In the Build model view of your open Modeling Assistant model, look at the Suggestions pane. If you have Display by category selected, expand the Others section to locate New custom constraint, and click it to add it to your model. Alternatively, without categories displayed, you can enter, for example, custom in the search field to find the same suggestion and click it to add it to your model. A new custom constraint is added to your model. ![New custom constraint in model, with elements highlighted to be completed by user.](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/newcustomconstraint.jpg) 2. Click Enter your constraint. Use [brackets] for data, concepts, variables, or parameters and enter the constraint you want to specify. For example, type No [employees] has [onCallDuties] for more than [2] consecutive days and press enter. The specification is displayed with default parameters (parameter1, parameter2, parameter3) for you to customize. These parameters will be passed to the Python function that implements this custom rule. ![Custom constraint expanded to show default parameters and function name.](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/customconstraintFillParameters.jpg) 3. Edit the default parameters in the specification to give them more meaningful names. For example, change the parameters to employees, on_call_duties, and limit and click enter. 4. Click function name and enter a name for the function. For example, type limitConsecutiveAssignments and click enter. Your function name is added and an Edit Python button appears. ![Custom rule showing customized parameters and Edit Python button.](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/customconstraintParameters.jpg) 5. Click the Edit Python button. A new window opens showing you Python code that you can edit to implement your custom rule. You can see your customized parameters in the code as follows: ![Python code showing block to be customized](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/CustomRulePythoncode.jpg) Notice that the code is documented with corresponding data frames and table column names as you have defined in the custom rule. The limit is not documented as this is a numerical value. 6. Optional: You can edit the Python code directly in this window, but you might find it useful to edit and debug your code in a notebook before using it here. In this case, close this window for now and in the Scenario pane, expand the three vertical dots and select Generate a notebook for this scenario that contains the custom rule. Enter a name for this notebook. The notebook is created in your project assets ready for you to edit and debug. Once you have edited, run, and debugged it, you can copy the code for your custom function back into this Edit Python window in the Modeling Assistant. 7. Edit the Python code in the Modeling Assistant custom rule Edit Python window. 
For example, you can define the rule for consecutive days in Python as follows:

    def limitConsecutiveAssignments(self, mdl, employees, on_call_duties, limit):
        global helper_add_labeled_cplex_constraint, helper_get_index_names_for_type, helper_get_column_name_for_property
        print('Adding constraints for the custom rule')
        for employee, duties in employees.associated(on_call_duties):
            duties_day_idx = duties.join(Day)  # Retrieve Day index from Day label
            for d in Day['index']:
                end = d + limit + 1
                # One must enforce that there is no occurrence of (limit + 1) consecutive working days
                duties_in_win = duties_day_idx[((duties_day_idx['index'] >= d) & (duties_day_idx['index'] <= end)) | (duties_day_idx['index'] <= end - 7)]
                mdl.add_constraint(mdl.sum(duties_in_win.onCallDutyVar) <= limit)

8. Click the Run button to run your model with your custom constraint. When the run is completed, you can see the results in the Explore solution view.
# Creating advanced custom constraints with Python # This Decision Optimization Modeling Assistant example shows you how to create advanced custom constraints that use Python\. ## Procedure ## To create a new advanced custom constraint: <!-- <ol> --> 
1. In the Build model view of your open Modeling Assistant model, look at the Suggestions pane\. If you have Display by category selected, expand the Others section to locate New custom constraint, and click it to add it to your model\. Alternatively, without categories displayed, you can enter, for example, custom in the search field to find the same suggestion and click it to add it to your model\. A new custom constraint is added to your model\. ![New custom constraint in model, with elements highlighted to be completed by user.](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/newcustomconstraint.jpg) 
2. Click Enter your constraint\. Use \[brackets\] for data, concepts, variables, or parameters and enter the constraint you want to specify\. For example, type No \[employees\] has \[onCallDuties\] for more than \[2\] consecutive days and press enter\. The specification is displayed with default parameters (`parameter1, parameter2, parameter3`) for you to customize\. These parameters will be passed to the Python function that implements this custom rule\. ![Custom constraint expanded to show default parameters and function name.](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/customconstraintFillParameters.jpg) 
3. Edit the default parameters in the specification to give them more meaningful names\. For example, change the parameters to `employees, on_call_duties`, and `limit` and click enter\. 
4. Click function name and enter a name for the function\. For example, type limitConsecutiveAssignments and click enter\. Your function name is added and an Edit Python button appears\. ![Custom rule showing customized parameters and Edit Python button.](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/customconstraintParameters.jpg) 
5. Click the Edit Python button\. A new window opens showing you Python code that you can edit to implement your custom rule\. You can see your customized parameters in the code as follows: ![Python code showing block to be customized](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/CustomRulePythoncode.jpg) Notice that the code is documented with corresponding data frames and table column names as you have defined in the custom rule. The limit is not documented as this is a numerical value. 
6. Optional: You can edit the Python code directly in this window, but you might find it useful to edit and debug your code in a notebook before using it here\. In this case, close this window for now and in the Scenario pane, expand the three vertical dots and select Generate a notebook for this scenario that contains the custom rule\. Enter a name for this notebook\. The notebook is created in your project assets ready for you to edit and debug\. Once you have edited, run, and debugged it, you can copy the code for your custom function back into this Edit Python window in the Modeling Assistant\. 
7. Edit the Python code in the Modeling Assistant custom rule Edit Python window\. 
For example, you can define the rule for consecutive days in Python as follows:

    def limitConsecutiveAssignments(self, mdl, employees, on_call_duties, limit):
        global helper_add_labeled_cplex_constraint, helper_get_index_names_for_type, helper_get_column_name_for_property
        print('Adding constraints for the custom rule')
        for employee, duties in employees.associated(on_call_duties):
            duties_day_idx = duties.join(Day)  # Retrieve Day index from Day label
            for d in Day['index']:
                end = d + limit + 1
                # One must enforce that there is no occurrence of (limit + 1) consecutive working days
                duties_in_win = duties_day_idx[((duties_day_idx['index'] >= d) & (duties_day_idx['index'] <= end)) | (duties_day_idx['index'] <= end - 7)]
                mdl.add_constraint(mdl.sum(duties_in_win.onCallDutyVar) <= limit)

8. Click the Run button to run your model with your custom constraint\. When the run is completed, you can see the results in the **Explore solution** view\. <!-- </ol> --> <!-- </article "role="article" "> -->
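If you choose to debug the rule in a generated notebook (step 6), you can first exercise the sliding-window expression on a toy data set before pasting the function back into the Edit Python window. The following sketch is illustrative only: it uses plain pandas with hypothetical sample values, where `Day` and `duties_day_idx` stand in for the frames that the Modeling Assistant passes to your function, and it simply counts rows per window rather than summing decision variables as the real constraint does.

    import pandas as pd

    # Hypothetical stand-ins: a 7-day horizon and one employee's on-call days.
    Day = pd.DataFrame({'index': range(7)})
    duties_day_idx = pd.DataFrame({'index': [0, 1, 2, 3]})  # 4 consecutive on-call days

    limit = 2
    for d in Day['index']:
        end = d + limit + 1
        # Same window filter as in limitConsecutiveAssignments, including the weekly wraparound term.
        duties_in_win = duties_day_idx[((duties_day_idx['index'] >= d) & (duties_day_idx['index'] <= end))
                                       | (duties_day_idx['index'] <= end - 7)]
        print('window starting day', d, ':', len(duties_in_win), 'duties (limit', limit, ')')

Any window that reports more duties than the limit would be cut off by the corresponding CPLEX constraint in the experiment.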
C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0
https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/advancedMA.html?context=cdpaas&locale=en
Creating constraints and custom decisions with the Decision Optimization Modeling Assistant
Adding multi-concept constraints and custom decisions: shift assignment This Decision Optimization Modeling Assistant example shows you how to use multi-concept iterations, the associated keyword in constraints, how to define your own custom decisions, and define logical constraints. For illustration, a resource assignment problem, ShiftAssignment, is used and its completed model with data is provided in the DO-samples. Procedure To download and open the sample: 1. Download the ShiftAssignment.zip file from the Model_Builder subfolder in the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples). Select the relevant product and version subfolder. 2. Open your project or create an empty project. 3. On the Manage tab of your project, select the Services and integrations section and click Associate service. Then select an existing Machine Learning service instance (or create a new one) and click Associate. When the service is associated, a success message is displayed, and you can then close the Associate service window. 4. Select the Assets tab. 5. Select New asset > Solve optimization problems in the Work with models section. 6. Click Local file in the Solve optimization problems window that opens. 7. Browse locally to find and choose the ShiftAssignment.zip archive that you downloaded. Click Open. Alternatively use drag and drop. 8. If you haven't already associated a Machine Learning service with your project, you must first select Add a Machine Learning service to select or create one before you choose a deployment space for your experiment. 9. Click Create. A Decision Optimization model is created with the same name as the sample. 10. Open the scenario pane and select the AssignmentWithOnCallDuties scenario. Using multi-concept iteration Procedure To use multi-concept iteration, follow these steps. 1. Click Build model in the sidebar to view your model formulation. The model formulation shows the intent as being to assign employees to shifts, with its objectives and constraints. 2. Expand the constraint For each Employee-Day combination, number of associated Employee-Shift assignments is less than or equal to 1. Defining custom decisions Procedure To define custom decisions, follow these steps. 1. Click Build model to see the model formulation of the AssignmentWithOnCallDuties Scenario. ![Build model view showing Shift Assignment formulation](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/CloudStaffAssignRunModel.png) The custom decision OnCallDuties is used in the second objective. This objective ensures that the number of on-call duties is balanced over Employees. The constraint ![On call duty constraint](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/StaffAssignOncallDuty.jpg) ensures that the on-call duty requirements that are listed in the Day table are satisfied. The following steps show you how this custom decision OnCallDuties was defined. 2. Open the Settings pane and notice that Visualize and edit decisions is set to true (or set it to true if it is set to the default false). This setting adds a Decisions tab to your Add to model window. ![Decisions tab of the Add to Model pane showing two intents](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/DecisionsTab.jpg) Here you can see OnCallDuty is specified as an assignment decision (to assign employees to on-call duties). 
Its two dimensions are defined with reference to the data tables Day and Employee. This means that your model will also assign on-call duties to employees. The Employee-Shift assignment decision is specified from the original intent. 3. Optional: Enter your own text to describe the OnCallDuty in the [to be documented] field. 4. Optional: To create your own decision in the Decisions tab, click enter name, type in a name, and click enter. A new decision (intent) is created with that name with some highlighted fields to be completed by using the drop-down menus. If you, for example, select assignment as the decision type, two dimensions are created. As assignment involves assigning at least one thing to another, at least two dimensions must be defined. Use the select a table fields to define the dimensions. Using logical constraints Procedure To use logical constraints: 1. Look at the constraint ![Logical constraint suggestion](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/impliedconstraint.jpg) This constraint ensures that, for each employee and day combination, when no associated assignments exist (for example, the employee is on vacation on that day), no on-call duties are assigned to that employee on that day. Note the use of the if...then keywords to define this logical constraint. 2. Optional: Add other logical constraints to your model by searching in the suggestions.
# Adding multi\-concept constraints and custom decisions: shift assignment # This Decision Optimization Modeling Assistant example shows you how to use multi\-concept iterations, the `associated` keyword in constraints, how to define your own custom decisions, and define logical constraints\. For illustration, a resource assignment problem, `ShiftAssignment`, is used and its completed model with data is provided in the **DO\-samples**\. ## Procedure ## To download and open the sample: <!-- <ol> --> 
1. Download the ShiftAssignment\.zip file from the Model\_Builder subfolder in the **[DO\-samples](https://github.com/IBMDecisionOptimization/DO-Samples)**\. Select the relevant product and version subfolder\. 
2. Open your project or create an empty project\. 
3. On the Manage tab of your project, select the Services and integrations section and click Associate service\. Then select an existing Machine Learning service instance (or create a new one) and click Associate\. When the service is associated, a success message is displayed, and you can then close the Associate service window\. 
4. Select the Assets tab\. 
5. Select New asset > Solve optimization problems in the Work with models section\. 
6. Click Local file in the Solve optimization problems window that opens\. 
7. Browse locally to find and choose the ShiftAssignment\.zip archive that you downloaded\. Click Open\. Alternatively use drag and drop\. 
8. If you haven't already associated a Machine Learning service with your project, you must first select Add a Machine Learning service to select or create one before you choose a deployment space for your experiment\. 
9. Click **Create**\. A Decision Optimization model is created with the same name as the sample\. 
10. Open the scenario pane and select the `AssignmentWithOnCallDuties` scenario\. 
<!-- </ol> --> <!-- <article "class="topic task nested1" role="article" id="task_multiconceptiterations" "> --> ## Using multi\-concept iteration ## ### Procedure ### To use multi\-concept iteration, follow these steps\. <!-- <ol> --> 
1. Click Build model in the sidebar to view your model formulation\. The model formulation shows the intent as being to assign employees to shifts, with its objectives and constraints\. 
2. Expand the constraint `For each Employee-Day combination, number of associated Employee-Shift assignments is less than or equal to 1`\. 
<!-- </ol> --> <!-- </article "class="topic task nested1" role="article" id="task_multiconceptiterations" "> --> <!-- <article "class="topic task nested1" role="article" id="task_customdecision" "> --> ## Defining custom decisions ## ### Procedure ### To define custom decisions, follow these steps\. <!-- <ol> --> 
1. Click Build model to see the model formulation of the `AssignmentWithOnCallDuties` Scenario\. ![Build model view showing Shift Assignment formulation](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/CloudStaffAssignRunModel.png) The custom decision `OnCallDuties` is used in the second objective. This objective ensures that the number of on-call duties is balanced over Employees. The constraint ![On call duty constraint](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/StaffAssignOncallDuty.jpg) ensures that the on-call duty requirements that are listed in the Day table are satisfied. The following steps show you how this custom decision `OnCallDuties` was defined. 2. 
Open the Settings pane and notice that Visualize and edit decisions is set to `true` (or set it to `true` if it is set to the default `false`)\. This setting adds a Decisions tab to your Add to model window. ![Decisions tab of the Add to Model pane showing two intents](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/DecisionsTab.jpg) Here you can see `OnCallDuty` is specified as an assignment decision (to assign employees to on-call duties). Its two dimensions are defined with reference to the data tables `Day` and `Employee`. This means that your model will also assign on-call duties to employees. The Employee-Shift assignment decision is specified from the original intent. 
3. Optional: Enter your own text to describe the `OnCallDuty` in the \[to be documented\] field\. 
4. Optional: To create your own decision in the Decisions tab, click enter name, type in a name, and click enter\. A new decision (intent) is created with that name with some highlighted fields to be completed by using the drop\-down menus\. If you, for example, select assignment as the decision type, two dimensions are created\. As assignment involves assigning at least one thing to another, at least two dimensions must be defined\. Use the select a table fields to define the dimensions\. 
<!-- </ol> --> <!-- </article "class="topic task nested1" role="article" id="task_customdecision" "> --> <!-- <article "class="topic task nested1" role="article" id="task_impliedconstraints" "> --> ## Using logical constraints ## ### Procedure ### To use logical constraints: <!-- <ol> --> 
1. Look at the constraint ![Logical constraint suggestion](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/impliedconstraint.jpg) This constraint ensures that, for each employee and day combination, when no associated assignments exist (for example, the employee is on vacation on that day), no on\-call duties are assigned to that employee on that day\. Note the use of the `if...then` keywords to define this logical constraint\. 
2. Optional: Add other logical constraints to your model by searching in the suggestions\. 
<!-- </ol> --> <!-- </article "class="topic task nested1" role="article" id="task_impliedconstraints" "> --> <!-- </article "role="article" "> -->
0EFC1AA12637C84918CEF9FA5DE5DA424822330C
https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.html?context=cdpaas&locale=en
Decision Optimization Modeling Assistant scheduling tutorial
Formulating and running a model: house construction scheduling This tutorial shows you how to use the Modeling Assistant to define, formulate and run a model for a house construction scheduling problem. The completed model with data is also provided in the DO-samples, see [Importing Model Builder samples](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/docExamples.html#Examples__section_modelbuildersamples). In this section: * [Modeling Assistant House construction scheduling tutorial](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.html?context=cdpaas&locale=en#cogusercase__section_The_problem) * [More about the model view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.html?context=cdpaas&locale=en#cogusercase__section_tbl_kdj_t1b) * [Generating a Python notebook from your scenario](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.html?context=cdpaas&locale=en#cogusercase__section_j2m_xnh_4bb)
# Formulating and running a model: house construction scheduling # This tutorial shows you how to use the Modeling Assistant to define, formulate and run a model for a house construction scheduling problem\. The completed model with data is also provided in the **DO\-samples**, see [Importing Model Builder samples](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/docExamples.html#Examples__section_modelbuildersamples)\. In this section: <!-- <ul> --> * [Modeling Assistant House construction scheduling tutorial](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.html?context=cdpaas&locale=en#cogusercase__section_The_problem) * [More about the model view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.html?context=cdpaas&locale=en#cogusercase__section_tbl_kdj_t1b) * [Generating a Python notebook from your scenario](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.html?context=cdpaas&locale=en#cogusercase__section_j2m_xnh_4bb) <!-- </ul> --> <!-- </article "role="article" "> -->
312E91752782553D39C335D0DAAF189025739BB4
https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuildintro.html?context=cdpaas&locale=en
Decision Optimization Modeling Assistant models
Modeling Assistant models You can model and solve Decision Optimization problems using the Modeling Assistant (which enables you to formulate models in natural language). This requires little to no knowledge of Operational Research (OR) and does not require you to write Python code. The Modeling Assistant is only available in English and is not globalized. The basic workflow to create a model with the Modeling Assistant and examine it under different scenarios is as follows: 1. Create a project. 2. Add a Decision Optimization experiment (a scenario is created by default in the experiment UI). 3. Add and import your data into the scenario. 4. Create a natural language model in the scenario, by first selecting your decision domain and then using the Modeling Assistant to guide you. 5. Run the model to solve it and explore the solution. 6. Create visualizations of solution and data. 7. Copy the scenario and edit the model and/or the data. 8. Solve the new scenario to see the impact of these changes. ![Workflow showing the previously mentioned steps](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/new_overviewcognitive-3.jpg) This is demonstrated with a simple [planning and scheduling example](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.html#cogusercase). For more information about deployment, see [Deployment steps](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployIntro.html).
# Modeling Assistant models # You can model and solve Decision Optimization problems using the Modeling Assistant (which enables you to formulate models in natural language)\. This requires little to no knowledge of Operational Research (OR) and does not require you to write Python code\. The Modeling Assistant is **only available in English** and is not globalized\. The basic workflow to create a model with the Modeling Assistant and examine it under different scenarios is as follows: <!-- <ol> --> 
1. Create a project\. 
2. Add a Decision Optimization experiment (a scenario is created by default in the experiment UI)\. 
3. Add and import your data into the scenario\. 
4. Create a natural language model in the scenario, by first selecting your decision domain and then using the Modeling Assistant to guide you\. 
5. Run the model to solve it and explore the solution\. 
6. Create visualizations of solution and data\. 
7. Copy the scenario and edit the model and/or the data\. 
8. Solve the new scenario to see the impact of these changes\. 
<!-- </ol> --> ![Workflow showing the previously mentioned steps](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/new_overviewcognitive-3.jpg) This is demonstrated with a simple [planning and scheduling example](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.html#cogusercase)\. For more information about deployment, see [Deployment steps](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployIntro.html)\. <!-- </article "role="article" "> -->
2746F2E53D41F5810D92D843AF8C0AB2B36A0D47
https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/mdl_asst_domains.html?context=cdpaas&locale=en
Selecting a Decision domain in the Modeling Assistant
Selecting a Decision domain in the Modeling Assistant There are different decision domains currently available in the Modeling Assistant and you can be guided to choose the right domain for your problem. Once you have added and imported your data into your model, the Modeling Assistant helps you to formulate your optimization model by offering you suggestions in natural language that you can edit. In order to make intelligent suggestions using your data, and to ensure that the proposed model formulation is well suited to your problem, you are asked to start by selecting a decision domain for your model. If you need a decision domain that is not currently supported by the Modeling Assistant, you can still formulate your model as a Python notebook or as an OPL model in the experiment UI editor.
# Selecting a Decision domain in the Modeling Assistant # There are different decision domains currently available in the Modeling Assistant and you can be guided to choose the right domain for your problem\. Once you have added and imported your data into your model, the Modeling Assistant helps you to formulate your optimization model by offering you suggestions in natural language that you can edit\. In order to make intelligent suggestions using your data, and to ensure that the proposed model formulation is well suited to your problem, you are asked to start by selecting a decision domain for your model\. If you need a decision domain that is not currently supported by the Modeling Assistant, you can still formulate your model as a Python notebook or as an OPL model in the experiment UI editor\. <!-- </article "role="article" "> -->
F37BD72C28F0DAC8D9478ECEABA4F077ABCDE0C9
https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/createScenario.html?context=cdpaas&locale=en
Decision Optimization notebook tutorial create new scenario
Create new scenario To solve with different versions of your model or data, you can create new scenarios in the Decision Optimization experiment UI. Procedure To create a new scenario: 1. Click the Open scenario pane icon ![Open scenario pane button](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/CPDscenariomanage.jpg) to open the Scenario panel. 2. Use the Create Scenario drop-down menu to create a new scenario from the current one. 3. Add a name for the duplicate scenario and click Create. 4. Working in your new scenario, in the Prepare data view, open the diet_food data table in full mode. 5. Locate the entry for Hotdog at row 9, and set the qmax value to 0 to exclude hot dog from possible solutions. 6. Switch to the Build model view and run the model again. 7. You can see the impact of your changes on the solution by switching from one scenario to the other.
# Create new scenario # To solve with different versions of your model or data, you can create new scenarios in the Decision Optimization experiment UI\. ## Procedure ## To create a new scenario: <!-- <ol> --> 
1. Click the **Open scenario pane** icon ![Open scenario pane button](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/CPDscenariomanage.jpg) to open the **Scenario** panel\. 
2. Use the Create Scenario drop\-down menu to create a new scenario from the current one\. 
3. Add a name for the duplicate scenario and click **Create**\. 
4. Working in your new scenario, in the Prepare data view, open the `diet_food` data table in full mode\. 
5. Locate the entry for *Hotdog* at row 9, and set the `qmax` value to 0 to exclude hot dog from possible solutions\. 
6. Switch to the **Build model** view and run the model again\. 
7. You can see the impact of your changes on the solution by switching from one scenario to the other\. 
<!-- </ol> --> <!-- </article "role="article" "> -->
056E37762231E9E32F0F443987C32ACF7BF1AED4
https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/multiIntro.html?context=cdpaas&locale=en
Decision Optimization notebook multiple scenarios
Working with multiple scenarios You can generate multiple scenarios to test your model against a wide range of data and understand how robust the model is. This example steps you through the process to generate multiple scenarios with a model. This makes it possible to test the performance of the model against multiple randomly generated data sets. It's important in practice to check the robustness of a model against a wide range of data. This helps ensure that the model performs well in potentially stochastic real-world conditions. The example is the StaffPlanning model in the DO-samples. The example is structured as follows: * The model StaffPlanning contains a default scenario based on two default data sets, along with five additional scenarios based on randomized data sets. * The Python notebook CopyAndSolveScenarios contains the random generator to create the new scenarios in the StaffPlanning model. For general information about scenario management and configuration, see [Scenario pane](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__scenariopanel) and [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__section_overview). For information about writing methods and classes for scenarios, see the [Decision Optimization Client Python API documentation](https://ibmdecisionoptimization.github.io/decision-optimization-client-doc/).
# Working with multiple scenarios # You can generate multiple scenarios to test your model against a wide range of data and understand how robust the model is\. This example steps you through the process to generate multiple scenarios with a model\. This makes it possible to test the performance of the model against multiple randomly generated data sets\. It's important in practice to check the robustness of a model against a wide range of data\. This helps ensure that the model performs well in potentially stochastic real\-world conditions\. The example is the `StaffPlanning` model in the **DO\-samples**\. The example is structured as follows: <!-- <ul> --> 
* The model `StaffPlanning` contains a default scenario based on two default data sets, along with five additional scenarios based on randomized data sets\. 
* The Python notebook `CopyAndSolveScenarios` contains the random generator to create the new scenarios in the `StaffPlanning` model\. 
<!-- </ul> --> For general information about scenario management and configuration, see [Scenario pane](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__scenariopanel) and [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__section_overview)\. For information about writing methods and classes for scenarios, see the [Decision Optimization Client Python API documentation](https://ibmdecisionoptimization.github.io/decision-optimization-client-doc/)\. <!-- </article "role="article" "> -->
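For example, a notebook in the same project can connect to the experiment and duplicate a scenario before changing its data. The following sketch assumes the `decision_optimization_client` package from the API documentation above; depending on your environment, the client might also need your project context (for example, `Client(pc=pc)` with a `project_lib` context):

    from decision_optimization_client import Client

    # Connect to the Decision Optimization experiments visible from this project.
    client = Client()  # in some environments: Client(pc=pc) with your project context
    experiment = client.get_experiment(name="StaffPlanning")

    # Retrieve the default scenario and duplicate it under a new name.
    scenario = experiment.get_scenario(name="Scenario 1")
    copy = scenario.copy("Scenario 2")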
3BEB81A5A5953CD570FA673B2496F8AF98725438
https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/multiScenario.html?context=cdpaas&locale=en
Decision Optimization notebook generating multiple scenarios
Generating multiple scenarios This tutorial shows you how to generate multiple scenarios from a notebook using randomized data. Generating multiple scenarios lets you test a model by exposing it to a wide range of data. Procedure To create and solve a scenario using a sample: 1. Download and extract all the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples) on to your machine. You can also download just the StaffPlanning.zip file from the Model_Builder subfolder for your product and version, but in this case do not extract it. 2. Open your project or create an empty project. 3. On the Manage tab of your project, select the Services and integrations section and click Associate service. Then select an existing Machine Learning service instance (or create a new one) and click Associate. When the service is associated, a success message is displayed, and you can then close the Associate service window. 4. Select the Assets tab. 5. Select New asset > Solve optimization problems in the Work with models section. 6. Click Local file in the Solve optimization problems window that opens. 7. Browse to choose the StaffPlanning.zip file in the Model_Builder folder. Select the relevant product and version subfolder in your downloaded DO-samples. 8. If you haven't already associated a Machine Learning service with your project, you must first select Add a Machine Learning service to select or create one before you choose a deployment space for your experiment. 9. Click Create. A Decision Optimization model is created with the same name as the sample. 10. Working in Scenario 1 of the StaffPlanning model, you can see that the solution contains tables to identify which resources work which days to meet expected demand. If there is no solution displayed, or to rerun the model, click Build model in the sidebar, then click Run to solve the model.
# Generating multiple scenarios # This tutorial shows you how to generate multiple scenarios from a notebook using randomized data\. Generating multiple scenarios lets you test a model by exposing it to a wide range of data\. ## Procedure ## To create and solve a scenario using a sample: <!-- <ol> --> 
1. Download and extract all the **[DO\-samples](https://github.com/IBMDecisionOptimization/DO-Samples)** on to your machine\. You can also download just the StaffPlanning\.zip file from the Model\_Builder subfolder for your product and version, but in this case do not extract it\. 
2. Open your project or create an empty project\. 
3. On the Manage tab of your project, select the Services and integrations section and click Associate service\. Then select an existing Machine Learning service instance (or create a new one) and click Associate\. When the service is associated, a success message is displayed, and you can then close the Associate service window\. 
4. Select the Assets tab\. 
5. Select New asset > Solve optimization problems in the Work with models section\. 
6. Click Local file in the Solve optimization problems window that opens\. 
7. Browse to choose the StaffPlanning\.zip file in the **Model\_Builder** folder\. Select the relevant product and version subfolder in your downloaded DO\-samples\. 
8. If you haven't already associated a Machine Learning service with your project, you must first select Add a Machine Learning service to select or create one before you choose a deployment space for your experiment\. 
9. Click **Create**\. A Decision Optimization model is created with the same name as the sample\. 
10. Working in Scenario 1 of the `StaffPlanning` model, you can see that the solution contains tables to identify which resources work which days to meet expected demand\. If there is no solution displayed, or to rerun the model, click **Build model** in the sidebar, then click **Run** to solve the model\. 
<!-- </ol> --> <!-- </article "role="article" "> -->
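The core of such a generator can be small. The following sketch follows the pattern of the CopyAndSolveScenarios notebook but is only an illustration: it assumes the `decision_optimization_client` package, and the `df_demand` table and `demand` column names are hypothetical stand-ins for the actual StaffPlanning inputs.

    import random
    from decision_optimization_client import Client

    client = Client()  # in some environments: Client(pc=pc) with your project context
    experiment = client.get_experiment(name="StaffPlanning")
    reference = experiment.get_scenario(name="Scenario 1")

    for i in range(1, 6):
        scenario = reference.copy("Random scenario {}".format(i))
        # Randomize the demand input table around its reference values.
        demand = scenario.get_table_data("df_demand")
        demand["demand"] = [round(v * random.uniform(0.8, 1.2)) for v in demand["demand"]]
        scenario.add_table_data("df_demand", demand, category="input")
        scenario.solve()
        print("Random scenario {} solved".format(i))

Each copy then appears in the Scenario pane of the experiment UI, where its solution can be compared with the others.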
DECCA51BACC7BE33F484D36177B24C4BD0FE4CFD
https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/preparedataIO.html?context=cdpaas&locale=en
Decision Optimization input and output data
Input and output data You can access the input and output data you defined in the experiment UI by using the following dictionaries. The data that you imported in the Prepare data view in the experiment UI is accessible from the input dictionary. You must define each table by using the syntax inputs['tablename']. For example, here food is an entity that is defined from the table called diet_food: food = inputs['diet_food'] Similarly, to show tables in the Explore solution view of the experiment UI you must specify them using the syntax outputs['tablename']. For example, outputs['solution'] = solution_df defines an output table that is called solution. The entity solution_df in the Python model defines this table. You can find this Diet example in the Model_Builder folder of the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples). To import and run (solve) it in the experiment UI, see [Solving and analyzing a model: the diet problem](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/solveModel.html#task_mtg_n3q_m1b).
# Input and output data # You can access the input and output data you defined in the experiment UI by using the following dictionaries\. The data that you imported in the **Prepare data view** in the experiment UI is accessible from the input dictionary\. You must define each table by using the syntax `inputs['tablename']`\. For example, here food is an entity that is defined from the table called `diet_food`: food = inputs['diet_food'] Similarly, to show tables in the Explore solution view of the experiment UI you must specify them using the syntax `outputs['tablename']`\. For example, outputs['solution'] = solution_df defines an output table that is called `solution`\. The entity `solution_df` in the Python model defines this table\. You can find this Diet example in the Model\_Builder folder of the [DO\-samples](https://github.com/IBMDecisionOptimization/DO-Samples)\. To import and run (solve) it in the experiment UI, see [Solving and analyzing a model: the diet problem](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/solveModel.html#task_mtg_n3q_m1b)\. <!-- </article "role="article" "> -->
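For example, a Python model in the experiment UI can read one of these input tables at the start and publish an output table at the end; everything in between is ordinary Python. This is a minimal sketch in which only the inputs and outputs dictionaries are supplied by the experiment runtime, and the column name is assumed from the sample data:

    import pandas as pd

    # Read an input table defined in the Prepare data view.
    food = inputs['diet_food']            # a pandas DataFrame
    print(food.columns.tolist())          # inspect the available columns

    # ... build and solve an optimization model here ...

    # Publish a DataFrame as an output table for the Explore solution view.
    solution_df = pd.DataFrame({'name': food['name'], 'value': 0.0})
    outputs['solution'] = solution_df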
726175290D457B10A02C27F08ECA1F6546E64680
https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/solveIntro.html?context=cdpaas&locale=en
Python DOcplex models
Python DOcplex models You can solve Python DOcplex models in a Decision Optimization experiment. The Decision Optimization environment currently supports Python 3.10, which is the default version. You can modify this default version on the Environment tab of the [Run configuration pane](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__section_runconfig) or from the [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__section_overview) information pane. The basic workflow to create a Python DOcplex model in Decision Optimization, and examine it under different scenarios, is as follows: 1. Create a project. 2. Add data to the project. 3. Add a Decision Optimization experiment (a scenario is created by default in the experiment UI). 4. Select and import your data into the scenario. 5. Create or import your Python model. 6. Run the model to solve it and explore the solution. 7. Copy the scenario and edit the data in the context of the new scenario. 8. Solve the new scenario to see the impact of the changes to data. ![Workflow showing previously mentioned steps](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/new_overviewcognitive-3.jpg)
# Python DOcplex models # You can solve Python DOcplex models in a Decision Optimization experiment\. The Decision Optimization environment currently supports Python 3\.10, which is the default version\. You can modify this default version on the Environment tab of the [Run configuration pane](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__section_runconfig) or from the [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__section_overview) information pane\. The basic workflow to create a Python DOcplex model in Decision Optimization, and examine it under different scenarios, is as follows: <!-- <ol> --> 
1. Create a project\. 
2. Add data to the project\. 
3. Add a Decision Optimization experiment (a scenario is created by default in the experiment UI)\. 
4. Select and import your data into the scenario\. 
5. Create or import your Python model\. 
6. Run the model to solve it and explore the solution\. 
7. Copy the scenario and edit the data in the context of the new scenario\. 
8. Solve the new scenario to see the impact of the changes to data\. 
<!-- </ol> --> ![Workflow showing previously mentioned steps](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/images/new_overviewcognitive-3.jpg) <!-- </article "role="article" "> -->
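For reference, the DOcplex API itself needs only a few lines for a complete model. This minimal, self-contained sketch is not tied to any sample data:

    from docplex.mp.model import Model

    mdl = Model(name='production')
    x = mdl.continuous_var(name='x', ub=100)
    y = mdl.continuous_var(name='y', ub=100)
    mdl.add_constraint(x + 2 * y <= 140, 'capacity')
    mdl.maximize(3 * x + 2 * y)

    solution = mdl.solve()
    if solution:
        print('x =', x.solution_value, 'y =', y.solution_value)

In a Decision Optimization experiment, the same model would typically read its data from the inputs dictionary and publish results through the outputs dictionary, as described in the input and output data documentation.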
2E1F6D5703CE75AF284903C20E5DBDFA1AE706B4
https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/solveModel.html?context=cdpaas&locale=en
Decision Optimization notebook tutorial
Solving and analyzing a model: the diet problem This example shows you how to create and solve a Python-based model by using a sample. Procedure To create and solve a Python-based model by using a sample: 1. Download and extract all the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples) on to your computer. You can also download just the diet.zip file from the Model_Builder subfolder for your product and version, but in this case, do not extract it. 2. Open your project or create an empty project. 3. On the Manage tab of your project, select the Services and integrations section and click Associate service. Then select an existing Machine Learning service instance (or create a new one) and click Associate. When the service is associated, a success message is displayed, and you can then close the Associate service window. 4. Select the Assets tab. 5. Select New asset > Solve optimization problems in the Work with models section. 6. Click Local file in the Solve optimization problems window that opens. 7. Browse to find the Model_Builder folder in your downloaded DO-samples. Select the relevant product and version subfolder. Choose the Diet.zip file and click Open. Alternatively use drag and drop. 8. If you haven't already associated a Machine Learning service with your project, you must first select Add a Machine Learning service to select or create one before you choose a deployment space for your experiment. 9. Click New deployment space, enter a name, and click Create (or select an existing space from the drop-down menu). 10. Click Create. A Decision Optimization model is created with the same name as the sample. 11. In the Prepare data view, you can see the data assets imported. These tables represent the min and max values for nutrients in the diet (diet_nutrients), the nutrients in different foods (diet_food_nutrients), and the price and quantity of specific foods (diet_food). ![Tables of input data in Prepare data view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/Cloudpreparedata2.png) 12. Click Build model in the sidebar to view your model. The Python model minimizes the cost of the food in the diet while satisfying minimum nutrient and calorie requirements. ![Python model for diet problem displayed in Run model view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/newrunmodel3.png) Note also how the inputs (tables in the Prepare data view) and the outputs (in this case the solution table to be displayed in the Explore solution view) are specified in this model. 13. Run the model by clicking the Run button in the Build model view.
# Solving and analyzing a model: the diet problem # This example shows you how to create and solve a Python\-based model by using a sample\. ## Procedure ## To create and solve a Python\-based model by using a sample: <!-- <ol> --> 
1. Download and extract all the [DO\-samples](https://github.com/IBMDecisionOptimization/DO-Samples) on to your computer\. You can also download just the diet\.zip file from the Model\_Builder subfolder for your product and version, but in this case, do not extract it\. 
2. Open your project or create an empty project\. 
3. On the Manage tab of your project, select the Services and integrations section and click Associate service\. Then select an existing Machine Learning service instance (or create a new one) and click Associate\. When the service is associated, a success message is displayed, and you can then close the Associate service window\. 
4. Select the Assets tab\. 
5. Select New asset > Solve optimization problems in the Work with models section\. 
6. Click Local file in the Solve optimization problems window that opens\. 
7. Browse to find the Model\_Builder folder in your downloaded DO\-samples\. Select the relevant product and version subfolder\. Choose the Diet\.zip file and click Open\. Alternatively use drag and drop\. 
8. If you haven't already associated a Machine Learning service with your project, you must first select Add a Machine Learning service to select or create one before you choose a deployment space for your experiment\. 
9. Click New deployment space, enter a name, and click Create (or select an existing space from the drop\-down menu)\. 
10. Click **Create**\. A Decision Optimization model is created with the same name as the sample\. 
11. In the Prepare data view, you can see the data assets imported\. These tables represent the min and max values for nutrients in the diet (`diet_nutrients`), the nutrients in different foods (`diet_food_nutrients`), and the price and quantity of specific foods (`diet_food`)\. ![Tables of input data in Prepare data view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/Cloudpreparedata2.png) 
12. Click Build model in the sidebar to view your model\. The Python model minimizes the cost of the food in the diet while satisfying minimum nutrient and calorie requirements\. ![Python model for diet problem displayed in Run model view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/images/newrunmodel3.png) Note also how the **inputs** (tables in the Prepare data view) and the **outputs** (in this case the solution table to be displayed in the Explore solution view) are specified in this model. 
13. Run the model by clicking the **Run** button in the Build model view\. 
<!-- </ol> --> <!-- </article "role="article" "> -->
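The structure of that Python model can be summarized as follows. This condensed sketch assumes the three sample tables with the column names used in the Diet sample (name, unit_cost, qmin, qmax, and one column per nutrient in diet_food_nutrients, keyed by Food); the model shipped in Diet.zip is more complete:

    from docplex.mp.model import Model

    food = inputs['diet_food']
    nutrients = inputs['diet_nutrients']
    food_nutrients = inputs['diet_food_nutrients'].set_index('Food')

    mdl = Model(name='diet')
    # One variable per food: the quantity to buy, within [qmin, qmax].
    qty = {row['name']: mdl.continuous_var(lb=row['qmin'], ub=row['qmax'], name=row['name'])
           for _, row in food.iterrows()}

    # For each nutrient, keep the total intake between the table's min and max values.
    for _, n in nutrients.iterrows():
        intake = mdl.sum(qty[f] * food_nutrients.loc[f, n['name']] for f in qty)
        mdl.add_range(n['qmin'], intake, n['qmax'])

    # Minimize the total cost of the selected foods.
    mdl.minimize(mdl.sum(qty[row['name']] * row['unit_cost'] for _, row in food.iterrows()))
    mdl.solve()

    solution_df = food[['name']].copy()
    solution_df['value'] = [qty[f].solution_value for f in food['name']]
    outputs['solution'] = solution_df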
D51AD51E5407BF4EFAE5C97FE7E031DB56CF8733
https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_RunParameters/runparams.html?context=cdpaas&locale=en
Decision Optimization run parameters
Run parameters and Environment You can select various run parameters for the optimization solve in the Decision Optimization experiment UI. Quick links to sections: * [CPLEX runtime version](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_RunParameters/runparams.html?context=cdpaas&locale=en#RunConfig__cplexruntime) * [Python version](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_RunParameters/runparams.html?context=cdpaas&locale=en#RunConfig__pyversion) * [Run configuration parameters](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_RunParameters/runparams.html?context=cdpaas&locale=en#RunConfig__section_runconfig) * [Environment for scenario](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_RunParameters/runparams.html?context=cdpaas&locale=en#RunConfig__section_runparamenv)
# Run parameters and Environment # You can select various run parameters for the optimization solve in the Decision Optimization experiment UI\. Quick links to sections: <!-- <ul> --> * [CPLEX runtime version](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_RunParameters/runparams.html?context=cdpaas&locale=en#RunConfig__cplexruntime) * [Python version](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_RunParameters/runparams.html?context=cdpaas&locale=en#RunConfig__pyversion) * [Run configuration parameters](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_RunParameters/runparams.html?context=cdpaas&locale=en#RunConfig__section_runconfig) * [Environment for scenario](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_RunParameters/runparams.html?context=cdpaas&locale=en#RunConfig__section_runparamenv) <!-- </ul> --> <!-- </article "role="article" "> -->
C6EE4CACFC1E29BAFBB8ED5D98521EA68388D0CB
https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html?context=cdpaas&locale=en
Decision Optimization
Decision Optimization IBM® Decision Optimization gives you access to IBM's industry-leading solution engines for mathematical programming and constraint programming. You can build Decision Optimization models either with notebooks or by using the powerful Decision Optimization experiment UI (Beta version). Here you can import, or create and edit models in Python, in OPL or with natural language expressions provided by the intelligent Modeling Assistant (Beta version). You can also deploy models with Watson Machine Learning. Data format : Tabular: .csv, .xls, .json files. See [Prepare data view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__section_preparedata) Data from [Connected data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html) For deployment see [Model input and output data file formats](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIOFileFormats.html) Data size : Any
# Decision Optimization # IBM® Decision Optimization gives you access to IBM's industry\-leading solution engines for mathematical programming and constraint programming\. You can build Decision Optimization models either with notebooks or by using the powerful Decision Optimization experiment UI (Beta version)\. Here you can import, or create and edit models in Python, in OPL or with natural language expressions provided by the intelligent Modeling Assistant (Beta version)\. You can also deploy models with Watson Machine Learning\. Data format : Tabular: `.csv`, `.xls`, `.json` files\. See [Prepare data view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__section_preparedata) Data from [Connected data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html) For deployment see [Model input and output data file formats](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIOFileFormats.html) Data size : Any <!-- </article "role="article" "> -->
E45F37BDDB38D6656992642FBEA2707FE34E942A
https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/CPLEXSolveWML.html?context=cdpaas&locale=en
Delegating CPLEX solve to Watson Machine Learning
Delegating the Decision Optimization solve to run on Watson Machine Learning from Java or .NET CPLEX or CPO models You can delegate the Decision Optimization solve to run on Watson Machine Learning from your Java or .NET (CPLEX or CPO) models. Delegating the solve is only useful if you are building and generating your models locally. You cannot deploy models and run jobs on Watson Machine Learning with this method. For full use of Java models on Watson Machine Learning, use the Java™ worker. Important: To deploy and test models on Watson Machine Learning, use the Java worker. For more information about deploying Java models, see the [Java worker GitHub](https://github.com/IBMDecisionOptimization/cplex-java-worker/blob/master/README.md). For the library and documentation for: * Java CPLEX or CPO models. See [Decision Optimization GitHub DOforWMLwithJava](https://github.com/IBMDecisionOptimization/DOforWMLwithJava). * .NET CPLEX or CPO models. See [Decision Optimization GitHub DOforWMLWith.NET](https://github.com/IBMDecisionOptimization/DOForWMLWith.NET).
# Delegating the Decision Optimization solve to run on Watson Machine Learning from Java or \.NET CPLEX or CPO models # You can delegate the Decision Optimization solve to run on Watson Machine Learning from your Java or \.NET (CPLEX or CPO) models\. Delegating the solve is only useful if you are building and generating your models locally\. You cannot deploy models and run jobs on Watson Machine Learning with this method\. For full use of Java models on Watson Machine Learning, use the Java™ worker\. Important: To deploy and test models on Watson Machine Learning, use the Java worker\. For more information about deploying Java models, see the [Java worker GitHub](https://github.com/IBMDecisionOptimization/cplex-java-worker/blob/master/README.md)\. For the library and documentation for: <!-- <ul> --> 
* Java CPLEX or CPO models\. See [Decision Optimization GitHub DOforWMLwithJava](https://github.com/IBMDecisionOptimization/DOforWMLwithJava)\. 
* \.NET CPLEX or CPO models\. See [Decision Optimization GitHub DOforWMLWith\.NET](https://github.com/IBMDecisionOptimization/DOForWMLWith.NET)\. 
<!-- </ul> --> <!-- </article "role="article" "> -->
5BC48AB9A35E2E8BAEA5204C4406835154E2B836
https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployIntro.html?context=cdpaas&locale=en
Decision Optimization deployment steps
Deployment steps With IBM Watson Machine Learning you can deploy your Decision Optimization prescriptive model and associated common data once and then submit job requests to this deployment with only the related transactional data. This deployment can be achieved by using the Watson Machine Learning REST API or by using the Watson Machine Learning Python client. See [REST API example](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployModelRest.html#task_deploymodelREST) for a full code example. See [Python client examples](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployPythonClient.html#topic_wmlpythonclient) for a link to a Python notebook available from the Samples.
# Deployment steps # With IBM Watson Machine Learning you can deploy your Decision Optimization prescriptive model and associated common data once and then submit job requests to this deployment with only the related transactional data\. This deployment can be achieved by using the Watson Machine Learning REST API or by using the Watson Machine Learning Python client\. See [REST API example](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployModelRest.html#task_deploymodelREST) for a full code example\. See [Python client examples](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployPythonClient.html#topic_wmlpythonclient) for a link to a Python notebook available from the Samples\. <!-- </article "role="article" "> -->
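As an illustration of the Python client route, the following sketch stores a model archive and creates a batch deployment. It is a sketch under stated assumptions rather than a definitive recipe: it assumes the ibm_watson_machine_learning package, a local diet.zip model archive, and the do_22.1 software specification with the do-docplex_22.1 model type; check the Python client examples for the exact names for your release.

    from ibm_watson_machine_learning import APIClient

    client = APIClient({"apikey": "YOUR_API_KEY", "url": "https://us-south.ml.cloud.ibm.com"})
    client.set.default_space("YOUR_SPACE_ID")

    # Store the Decision Optimization model archive in the deployment space.
    sw_spec_id = client.software_specifications.get_uid_by_name("do_22.1")
    model_meta = {
        client.repository.ModelMetaNames.NAME: "Diet",
        client.repository.ModelMetaNames.TYPE: "do-docplex_22.1",
        client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_id,
    }
    model_details = client.repository.store_model(model="diet.zip", meta_props=model_meta)
    model_id = client.repository.get_model_id(model_details)

    # Create a batch deployment that jobs can then be submitted to.
    deploy_meta = {
        client.deployments.ConfigurationMetaNames.NAME: "Diet deployment",
        client.deployments.ConfigurationMetaNames.BATCH: {},
        client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: {"name": "S", "num_nodes": 1},
    }
    deployment = client.deployments.create(model_id, meta_props=deploy_meta)

Each job then passes only the transactional data, for example through client.deployments.create_job with the deployment ID and the input tables for that run.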
134EB5D79038B55A3A6AC019016A21EC2B6A1917
https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployJava.html?context=cdpaas&locale=en
Deploying Java models
Deploying Java models for Decision Optimization You can deploy Decision Optimization Java models in Watson Machine Learning by using the Watson Machine Learning REST API. With the Java worker API, you can create optimization models with OPL, CPLEX, and CP Optimizer Java APIs. Therefore, you can easily create your models locally, package them and deploy them on Watson Machine Learning by using the boilerplate that is provided in the public [Java worker GitHub](https://github.com/IBMDecisionOptimization/cplex-java-worker/blob/master/README.md). The Decision Optimization [Java worker GitHub](https://github.com/IBMDecisionOptimization/cplex-java-worker/blob/master/README.md) contains a boilerplate with everything that you need to run, deploy, and verify your Java models in Watson Machine Learning, including an example. You can use the code in this repository to package your Decision Optimization Java model in a .jar file that can be used as a Watson Machine Learning model. For more information about Java worker parameters, see the [Java documentation](https://github.com/IBMDecisionOptimization/do-maven-repo/blob/master/com/ibm/analytics/optim/api_java_client/1.0.0/api_java_client-1.0.0-javadoc.jar). You can build your Decision Optimization models in Java or you can use Java worker to package CPLEX, CPO, and OPL models. For more information about these models, see the following reference manuals. * [Java CPLEX reference documentation](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.cplex.help/refjavacplex/html/overview-summary.html) * [Java CPO reference documentation](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.cpo.help/refjavacpoptimizer/html/overview-summary.html) * [Java OPL reference documentation](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.ide.help/refjavaopl/html/overview-summary.html)
# Deploying Java models for Decision Optimization # You can deploy Decision Optimization Java models in Watson Machine Learning by using the Watson Machine Learning REST API\. With the Java worker API, you can create optimization models with OPL, CPLEX, and CP Optimizer Java APIs\. Therefore, you can easily create your models locally, package them and deploy them on Watson Machine Learning by using the boilerplate that is provided in the public [Java worker GitHub](https://github.com/IBMDecisionOptimization/cplex-java-worker/blob/master/README.md)\. The Decision Optimization [Java worker GitHub](https://github.com/IBMDecisionOptimization/cplex-java-worker/blob/master/README.md) contains a boilerplate with everything that you need to run, deploy, and verify your Java models in Watson Machine Learning, including an example\. You can use the code in this repository to package your Decision Optimization Java model in a `.jar` file that can be used as a Watson Machine Learning model\. For more information about Java worker parameters, see the [Java documentation](https://github.com/IBMDecisionOptimization/do-maven-repo/blob/master/com/ibm/analytics/optim/api_java_client/1.0.0/api_java_client-1.0.0-javadoc.jar)\. You can build your Decision Optimization models in Java or you can use Java worker to package CPLEX, CPO, and OPL models\. For more information about these models, see the following reference manuals\. <!-- <ul> --> * [Java CPLEX reference documentation](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.cplex.help/refjavacplex/html/overview-summary.html) * [Java CPO reference documentation](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.cpo.help/refjavacpoptimizer/html/overview-summary.html) * [Java OPL reference documentation](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.ide.help/refjavaopl/html/overview-summary.html) <!-- </ul> --> <!-- </article "role="article" "> -->
B92F42609B54B82BFE38A69B781052E876258C2C
https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployModelRest.html?context=cdpaas&locale=en
Decision Optimization REST API deployment
REST API example You can deploy a Decision Optimization model, create and monitor jobs and get solutions using the Watson Machine Learning REST API. Procedure 1. Generate an IAM token using your [IBM Cloud API key](https://cloud.ibm.com/iam/apikeys) as follows. curl "https://iam.bluemix.net/identity/token" -d "apikey=YOUR_API_KEY_HERE&grant_type=urn%3Aibm%3Aparams%3Aoauth%3Agrant-type%3Aapikey" -H "Content-Type: application/x-www-form-urlencoded" -H "Authorization: Basic Yng6Yng=" Output example: { "access_token": "****** obtained IAM token ******************************", "refresh_token": "**************************************", "token_type": "Bearer", "expires_in": 3600, "expiration": 1554117649, "scope": "ibm openid" } Use the obtained token (access_token value) prepended by the word Bearer in the Authorization header, and the Machine Learning service GUID in the ML-Instance-ID header, in all API calls. 2. Optional: If you have not obtained your SPACE-ID from the user interface as described previously, you can create a space using the REST API as follows. Use the previously obtained token prepended by the word bearer in the Authorization header in all API calls. curl --location --request POST "https://api.dataplatform.cloud.ibm.com/v2/spaces" -H "Authorization: Bearer TOKEN-HERE" -H "ML-Instance-ID: MACHINE-LEARNING-SERVICE-GUID-HERE" -H "Content-Type: application/json" --data-raw "{ "name": "SPACE-NAME-HERE", "description": "optional description here", "storage": { "resource_crn": "COS-CRN-ID-HERE" }, "compute": [{ "name": "MACHINE-LEARNING-SERVICE-NAME-HERE", "crn": "MACHINE-LEARNING-SERVICE-CRN-ID-HERE" }] }" For Windows users, put the --data-raw command on one line and replace all " with \" inside this command as follows: curl --location --request POST ^ "https://api.dataplatform.cloud.ibm.com/v2/spaces" ^ -H "Authorization: Bearer TOKEN-HERE" ^ -H "ML-Instance-ID: MACHINE-LEARNING-SERVICE-GUID-HERE" ^ -H "Content-Type: application/json" ^ --data-raw "{\"name\": \"SPACE-NAME-HERE\",\"description\": \"optional description here\",\"storage\": {\"resource_crn\": \"COS-CRN-ID-HERE\" },\"compute\": [{\"name\": \"MACHINE-LEARNING-SERVICE-NAME-HERE\",\"crn\": \"MACHINE-LEARNING-SERVICE-CRN-ID-HERE\" }]}" Alternatively put the data in a separate file. A SPACE-ID is returned in the id field of the metadata section. Output example: { "entity": { "compute": [ { "crn": "MACHINE-LEARNING-SERVICE-CRN", "guid": "MACHINE-LEARNING-SERVICE-GUID", "name": "MACHINE-LEARNING-SERVICE-NAME", "type": "machine_learning" } ], "description": "string", "members": [ { "id": "XXXXXXX", "role": "admin", "state": "active", "type": "user" } ], "name": "name", "scope": { "bss_account_id": "account_id" }, "status": { "state": "active" } }, "metadata": { "created_at": "2020-07-17T08:36:57.611Z", "creator_id": "XXXXXXX", "id": "SPACE-ID", "url": "/v2/spaces/SPACE-ID" } } You must wait until your deployment space status is "active" before continuing. You can poll to check for this as follows. curl --location --request GET "https://api.dataplatform.cloud.ibm.com/v2/spaces/SPACE-ID-HERE" -H "Authorization: bearer TOKEN-HERE" -H "Content-Type: application/json" 3. Create a new Decision Optimization model All API requests require a version parameter that takes a date in the format version=YYYY-MM-DD. This code example posts a model that uses the file create_model.json. The URL will vary according to the chosen region/location for your machine learning service. See [Endpoint URLs](https://cloud.ibm.com/apidocs/machine-learning#endpoint-url). 
curl --location --request POST "https://us-south.ml.cloud.ibm.com/ml/v4/models?version=2020-08-01" -H "Authorization: bearer TOKEN-HERE" -H "Content-Type: application/json" -d @create_model.json The create_model.json file contains the following code: { "name": "ModelName", "description": "ModelDescription", "type": "do-docplex_22.1", "software_spec": { "name": "do_22.1" }, "custom": { "decision_optimization": { "oaas.docplex.python": "3.10" } }, "space_id": "SPACE-ID-HERE" } The Python version is stated explicitly here in a custom block. This is optional. Without it your model will use the default version, which is currently Python 3.10. As the default version will evolve over time, stating the Python version explicitly enables you to easily change it later or to keep using an older supported version when the default version is updated. The currently supported version is 3.10. If you want to be able to run jobs for this model from the user interface, instead of only using the REST API, you must define the schema for the input and output data. If you do not define the schema when you create the model, you can only run jobs using the REST API and not from the user interface. You can also use the schema specified for input and output in your optimization model: { "name": "Diet-Model-schema", "description": "Diet", "type": "do-docplex_22.1", "schemas": { "input": [ { "id": "diet_food_nutrients", "fields": [ { "name": "Food", "type": "string" }, { "name": "Calories", "type": "double" }, { "name": "Calcium", "type": "double" }, { "name": "Iron", "type": "double" }, { "name": "Vit_A", "type": "double" }, { "name": "Dietary_Fiber", "type": "double" }, { "name": "Carbohydrates", "type": "double" }, { "name": "Protein", "type": "double" } ] }, { "id": "diet_food", "fields": [ { "name": "name", "type": "string" }, { "name": "unit_cost", "type": "double" }, { "name": "qmin", "type": "double" }, { "name": "qmax", "type": "double" } ] }, { "id": "diet_nutrients", "fields": [ { "name": "name", "type": "string" }, { "name": "qmin", "type": "double" }, { "name": "qmax", "type": "double" } ] } ], "output": [ { "id": "solution", "fields": [ { "name": "name", "type": "string" }, { "name": "value", "type": "double" } ] } ] }, "software_spec": { "name": "do_22.1" }, "space_id": "SPACE-ID-HERE" } When you post a model you provide information about its model type and the software specification to be used. Model types can be, for example: * do-opl_22.1 for OPL models * do-cplex_22.1 for CPLEX models * do-cpo_22.1 for CP models * do-docplex_22.1 for Python models Version 20.1 can also be used for these model types. For the software specification, you can use the default specifications using their names do_22.1 or do_20.1. See also [Extend software specification notebook](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployPythonClient.html#topic_wmlpythonclient__extendWML) which shows you how to extend the Decision Optimization software specification (runtimes with additional Python libraries for docplex models). A MODEL-ID is returned in the id field in the metadata. Output example: { "entity": { "software_spec": { "id": "SOFTWARE-SPEC-ID" }, "type": "do-docplex_20.1" }, "metadata": { "created_at": "2020-07-17T08:37:22.992Z", "description": "ModelDescription", "id": "MODEL-ID", "modified_at": "2020-07-17T08:37:22.992Z", "name": "ModelName", "owner": "***********", "space_id": "SPACE-ID" } } 4. 
Upload a Decision Optimization model formulation ready for deployment. First compress your model into a (tar.gz, .zip or .jar) file and upload it to be deployed by the Watson Machine Learning service. This code example uploads a model called diet.zip that contains a Python model and no common data: curl --location --request PUT "https://us-south.ml.cloud.ibm.com/ml/v4/models/MODEL-ID-HERE/content?version=2020-08-01&space_id=SPACE-ID-HERE&content_format=native" -H "Authorization: bearer TOKEN-HERE" -H "Content-Type: application/gzip" --data-binary "@diet.zip" You can download this example and other models from the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples). Select the relevant product and version subfolder. 5. Deploy your model Create a reference to your model. Use the SPACE-ID, the MODEL-ID obtained when you created your model ready for deployment and the hardware specification. For example: curl --location --request POST "https://us-south.ml.cloud.ibm.com/ml/v4/deployments?version=2020-08-01" -H "Authorization: bearer TOKEN-HERE" -H "Content-Type: application/json" -d @deploy_model.json The deploy_model.json file contains the following code: { "name": "Test-Diet-deploy", "space_id": "SPACE-ID-HERE", "asset": { "id": "MODEL-ID-HERE" }, "hardware_spec": { "name": "S" }, "batch": {} } The DEPLOYMENT-ID is returned in the id field in the metadata. Output example: { "entity": { "asset": { "id": "MODEL-ID" }, "custom": {}, "description": "", "hardware_spec": { "id": "HARDWARE-SPEC-ID", "name": "S", "num_nodes": 1 }, "name": "Test-Diet-deploy", "space_id": "SPACE-ID", "status": { "state": "ready" } }, "metadata": { "created_at": "2020-07-17T09:10:50.661Z", "description": "", "id": "DEPLOYMENT-ID", "modified_at": "2020-07-17T09:10:50.661Z", "name": "test-Diet-deploy", "owner": "**************", "space_id": "SPACE-ID" } } 6. Once deployed, you can monitor your model's deployment state. Use the DEPLOYMENT-ID. For example: curl --location --request GET "https://us-south.ml.cloud.ibm.com/ml/v4/deployments/DEPLOYMENT-ID-HERE?version=2020-08-01&space_id=SPACE-ID-HERE" -H "Authorization: bearer TOKEN-HERE" -H "Content-Type: application/json" Output example: 7. You can then Submit jobs for your deployed model defining the input data and the output (results of the optimization solve) and the log file. For example, the following shows the contents of a file called myjob.json. It contains (inline) input data, some solve parameters, and specifies that the output will be a .csv file. For examples of other types of input data references, see [Model input and output data adaptation](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIODataDefn.html#topic_modelIOAdapt). 
{ "name":"test-job-diet", "space_id": "SPACE-ID-HERE", "deployment": { "id": "DEPLOYMENT-ID-HERE" }, "decision_optimization" : { "solve_parameters" : { "oaas.logAttachmentName":"log.txt", "oaas.logTailEnabled":"true" }, "input_data": [ { "id":"diet_food.csv", "fields" : ["name","unit_cost","qmin","qmax"], "values" : [ ["Roasted Chicken", 0.84, 0, 10], ["Spaghetti W/ Sauce", 0.78, 0, 10], ["Tomato,Red,Ripe,Raw", 0.27, 0, 10], ["Apple,Raw,W/Skin", 0.24, 0, 10], ["Grapes", 0.32, 0, 10], ["Chocolate Chip Cookies", 0.03, 0, 10], ["Lowfat Milk", 0.23, 0, 10], ["Raisin Brn", 0.34, 0, 10], ["Hotdog", 0.31, 0, 10] ] }, { "id":"diet_food_nutrients.csv", "fields" : ["Food","Calories","Calcium","Iron","Vit_A","Dietary_Fiber","Carbohydrates","Protein"], "values" : [ ["Spaghetti W/ Sauce", 358.2, 80.2, 2.3, 3055.2, 11.6, 58.3, 8.2], ["Roasted Chicken", 277.4, 21.9, 1.8, 77.4, 0, 0, 42.2], ["Tomato,Red,Ripe,Raw", 25.8, 6.2, 0.6, 766.3, 1.4, 5.7, 1], ["Apple,Raw,W/Skin", 81.4, 9.7, 0.2, 73.1, 3.7, 21, 0.3], ["Grapes", 15.1, 3.4, 0.1, 24, 0.2, 4.1, 0.2], ["Chocolate Chip Cookies", 78.1, 6.2, 0.4, 101.8, 0, 9.3, 0.9], ["Lowfat Milk", 121.2, 296.7, 0.1, 500.2, 0, 11.7, 8.1], ["Raisin Brn", 115.1, 12.9, 16.8, 1250.2, 4, 27.9, 4], ["Hotdog", 242.1, 23.5, 2.3, 0, 0, 18, 10.4] ] }, { "id":"diet_nutrients.csv", "fields" : ["name","qmin","qmax"], "values" : [ ["Calories", 2000, 2500], ["Calcium", 800, 1600], ["Iron", 10, 30], ["Vit_A", 5000, 50000], ["Dietary_Fiber", 25, 100], ["Carbohydrates", 0, 300], ["Protein", 50, 100] ] } ], "output_data": [ { "id":".*\.csv" } ] } } This code example posts a job that uses this file myjob.json. curl --location --request POST "https://us-south.ml.cloud.ibm.com/ml/v4/deployment_jobs?version=2020-08-01&space_id=SPACE-ID-HERE" -H "Authorization: bearer TOKEN-HERE" -H "Content-Type: application/json" -H "cache-control: no-cache" -d @myjob.json A JOB-ID is returned. 
Output example: (the job is queued) { "entity": { "decision_optimization": { "input_data": [{ "id": "diet_food.csv", "fields": ["name", "unit_cost", "qmin", "qmax"], "values": [["Roasted Chicken", 0.84, 0, 10], ["Spaghetti W/ Sauce", 0.78, 0, 10], ["Tomato,Red,Ripe,Raw", 0.27, 0, 10], ["Apple,Raw,W/Skin", 0.24, 0, 10], ["Grapes", 0.32, 0, 10], ["Chocolate Chip Cookies", 0.03, 0, 10], ["Lowfat Milk", 0.23, 0, 10], ["Raisin Brn", 0.34, 0, 10], ["Hotdog", 0.31, 0, 10]] }, { "id": "diet_food_nutrients.csv", "fields": ["Food", "Calories", "Calcium", "Iron", "Vit_A", "Dietary_Fiber", "Carbohydrates", "Protein"], "values": [["Spaghetti W/ Sauce", 358.2, 80.2, 2.3, 3055.2, 11.6, 58.3, 8.2], ["Roasted Chicken", 277.4, 21.9, 1.8, 77.4, 0, 0, 42.2], ["Tomato,Red,Ripe,Raw", 25.8, 6.2, 0.6, 766.3, 1.4, 5.7, 1], ["Apple,Raw,W/Skin", 81.4, 9.7, 0.2, 73.1, 3.7, 21, 0.3], ["Grapes", 15.1, 3.4, 0.1, 24, 0.2, 4.1, 0.2], ["Chocolate Chip Cookies", 78.1, 6.2, 0.4, 101.8, 0, 9.3, 0.9], ["Lowfat Milk", 121.2, 296.7, 0.1, 500.2, 0, 11.7, 8.1], ["Raisin Brn", 115.1, 12.9, 16.8, 1250.2, 4, 27.9, 4], ["Hotdog", 242.1, 23.5, 2.3, 0, 0, 18, 10.4]] }, { "id": "diet_nutrients.csv", "fields": ["name", "qmin", "qmax"], "values": [["Calories", 2000, 2500], ["Calcium", 800, 1600], ["Iron", 10, 30], ["Vit_A", 5000, 50000], ["Dietary_Fiber", 25, 100], ["Carbohydrates", 0, 300], ["Protein", 50, 100]] }], "output_data": [ { "id": ".*\.csv" } ], "solve_parameters": { "oaas.logAttachmentName": "log.txt", "oaas.logTailEnabled": "true" }, "status": { "state": "queued" } }, "deployment": { "id": "DEPLOYMENT-ID" }, "platform_job": { "job_id": "", "run_id": "" } }, "metadata": { "created_at": "2020-07-17T10:42:42.783Z", "id": "JOB-ID", "name": "test-job-diet", "space_id": "SPACE-ID" } } 8. You can also monitor job states. Use the JOB-ID. For example: curl --location --request GET "https://us-south.ml.cloud.ibm.com/ml/v4/deployment_jobs/JOB-ID-HERE?version=2020-08-01&space_id=SPACE-ID-HERE" -H "Authorization: bearer TOKEN-HERE" -H "Content-Type: application/json" Output example: (job has completed) { "entity": { "decision_optimization": { "input_data": [{ "id": "diet_food.csv", "fields": ["name", "unit_cost", "qmin", "qmax"], "values": [["Roasted Chicken", 0.84, 0, 10], ["Spaghetti W/ Sauce", 0.78, 0, 10], ["Tomato,Red,Ripe,Raw", 0.27, 0, 10], ["Apple,Raw,W/Skin", 0.24, 0, 10], ["Grapes", 0.32, 0, 10], ["Chocolate Chip Cookies", 0.03, 0, 10], ["Lowfat Milk", 0.23, 0, 10], ["Raisin Brn", 0.34, 0, 10], ["Hotdog", 0.31, 0, 10]] }, { "id": "diet_food_nutrients.csv", "fields": ["Food", "Calories", "Calcium", "Iron", "Vit_A", "Dietary_Fiber", "Carbohydrates", "Protein"], "values": [["Spaghetti W/ Sauce", 358.2, 80.2, 2.3, 3055.2, 11.6, 58.3, 8.2], ["Roasted Chicken", 277.4, 21.9, 1.8, 77.4, 0, 0, 42.2], ["Tomato,Red,Ripe,Raw", 25.8, 6.2, 0.6, 766.3, 1.4, 5.7, 1], ["Apple,Raw,W/Skin", 81.4, 9.7, 0.2, 73.1, 3.7, 21, 0.3], ["Grapes", 15.1, 3.4, 0.1, 24, 0.2, 4.1, 0.2], ["Chocolate Chip Cookies", 78.1, 6.2, 0.4, 101.8, 0, 9.3, 0.9], ["Lowfat Milk", 121.2, 296.7, 0.1, 500.2, 0, 11.7, 8.1], ["Raisin Brn", 115.1, 12.9, 16.8, 1250.2, 4, 27.9, 4], ["Hotdog", 242.1, 23.5, 2.3, 0, 0, 18, 10.4]] }, { "id": "diet_nutrients.csv", "fields": ["name", "qmin", "qmax"], "values": [["Calories", 2000, 2500], ["Calcium", 800, 1600], ["Iron", 10, 30], ["Vit_A", 5000, 50000], ["Dietary_Fiber", 25, 100], ["Carbohydrates", 0, 300], ["Protein", 50, 100]] }], "output_data": [{ "fields": ["Name", "Value"], "id": "kpis.csv", "values": [["Total Calories", 2000], ["Total Calcium", 800.0000000000001], ["Total Iron", 11.278317739831891], ["Total Vit_A", 8518.432542485823], ["Total Dietary_Fiber", 25], ["Total Carbohydrates", 256.80576358904455], ["Total Protein", 51.17372234135308], ["Minimal cost", 2.690409171696264]] }, { "fields": ["name", "value"], "id": "solution.csv", "values": [["Spaghetti W/ Sauce", 2.1551724137931036], ["Chocolate Chip Cookies", 10], ["Lowfat Milk", 1.8311671008899097], ["Hotdog", 0.9296975991385925]] }], "output_data_references": [], "solve_parameters": { "oaas.logAttachmentName": "log.txt", "oaas.logTailEnabled": "true" }, "solve_state": { "details": { "KPI.Minimal cost": "2.690409171696264", "KPI.Total Calcium": "800.0000000000001", "KPI.Total Calories": "2000.0", "KPI.Total Carbohydrates": "256.80576358904455", "KPI.Total Dietary_Fiber": "25.0", "KPI.Total Iron": "11.278317739831891", "KPI.Total Protein": "51.17372234135308", "KPI.Total Vit_A": "8518.432542485823", "MODEL_DETAIL_BOOLEAN_VARS": "0", "MODEL_DETAIL_CONSTRAINTS": "7", "MODEL_DETAIL_CONTINUOUS_VARS": "9", "MODEL_DETAIL_INTEGER_VARS": "0", "MODEL_DETAIL_KPIS": "[\"Total Calories\", \"Total Calcium\", \"Total Iron\", \"Total Vit_A\", \"Total Dietary_Fiber\", \"Total Carbohydrates\", \"Total Protein\", \"Minimal cost\"]", "MODEL_DETAIL_NONZEROS": "57", "MODEL_DETAIL_TYPE": "LP", "PROGRESS_CURRENT_OBJECTIVE": "2.6904091716962637" }, "latest_engine_activity": [ "[2020-07-21T16:37:36Z, INFO] Model: diet", "[2020-07-21T16:37:36Z, INFO] - number of variables: 9", "[2020-07-21T16:37:36Z, INFO] - binary=0, integer=0, continuous=9", "[2020-07-21T16:37:36Z, INFO] - number of constraints: 7", "[2020-07-21T16:37:36Z, INFO] - linear=7", "[2020-07-21T16:37:36Z, INFO] - parameters: defaults", "[2020-07-21T16:37:36Z, INFO] - problem type is: LP", "[2020-07-21T16:37:36Z, INFO] Warning: Model: \"diet\" is not a MIP problem, progress listeners are disabled", "[2020-07-21T16:37:36Z, INFO] objective: 2.690", "[2020-07-21T16:37:36Z, INFO] \"Spaghetti W/ Sauce\"=2.155", "[2020-07-21T16:37:36Z, INFO] \"Chocolate Chip Cookies\"=10.000", "[2020-07-21T16:37:36Z, INFO] \"Lowfat Milk\"=1.831", "[2020-07-21T16:37:36Z, INFO] \"Hotdog\"=0.930", "[2020-07-21T16:37:36Z, INFO] solution.csv" ], "solve_status": "optimal_solution" }, "status": { "completed_at": "2020-07-21T16:37:36.989Z", "running_at": "2020-07-21T16:37:35.622Z", "state": "completed" } }, "deployment": { "id": "DEPLOYMENT-ID" } }, "metadata": { "created_at": "2020-07-21T16:37:09.130Z", "id": "JOB-ID", "modified_at": "2020-07-21T16:37:37.268Z", "name": "test-job-diet", "space_id": "SPACE-ID" } } 9. Optional: You can delete jobs as follows: curl --location --request DELETE "https://us-south.ml.cloud.ibm.com/ml/v4/deployment_jobs/JOB-ID-HERE?version=2020-08-01&space_id=SPACE-ID-HERE&hard_delete=true" -H "Authorization: bearer TOKEN-HERE" If you delete a job using the API, it will still be displayed in the user interface. 10. Optional: You can delete deployments as follows: If you delete a deployment that contains jobs using the API, the jobs will still be displayed in the deployment space in the user interface.
# REST API example # You can deploy a Decision Optimization model, create and monitor jobs and get solutions using the Watson Machine Learning REST API\. ## Procedure ## <!-- <ol> --> 1. **Generate an IAM token** using your [IBM Cloud API key](https://cloud.ibm.com/iam/apikeys) as follows\. curl "https://iam.bluemix.net/identity/token" \ -d "apikey=YOUR_API_KEY_HERE&grant_type=urn%3Aibm%3Aparams%3Aoauth%3Agrant-type%3Aapikey" \ -H "Content-Type: application/x-www-form-urlencoded" \ -H "Authorization: Basic Yng6Yng=" Output example: { "access_token": "****** obtained IAM token ******************************", "refresh_token": "**************************************", "token_type": "Bearer", "expires_in": 3600, "expiration": 1554117649, "scope": "ibm openid" } Use the obtained token (access\_token value) prepended by the word `Bearer` in the `Authorization` header, and the `Machine Learning service GUID` in the `ML-Instance-ID` header, in all API calls. 2. **Optional:** If you have not obtained your **SPACE\-ID** from the user interface as described previously, you can create a space using the REST API as follows\. Use the previously obtained token prepended by the word `bearer` in the `Authorization` header in all API calls\. curl --location --request POST \ "https://api.dataplatform.cloud.ibm.com/v2/spaces" \ -H "Authorization: Bearer TOKEN-HERE" \ -H "ML-Instance-ID: MACHINE-LEARNING-SERVICE-GUID-HERE" \ -H "Content-Type: application/json" \ --data-raw "{ "name": "SPACE-NAME-HERE", "description": "optional description here", "storage": { "resource_crn": "COS-CRN-ID-HERE" }, "compute": [{ "name": "MACHINE-LEARNING-SERVICE-NAME-HERE", "crn": "MACHINE-LEARNING-SERVICE-CRN-ID-HERE" }] }" For **Windows** users, put the `--data-raw` command on one line and replace all `"` with `\"` inside this command as follows: curl --location --request POST ^ "https://api.dataplatform.cloud.ibm.com/v2/spaces" ^ -H "Authorization: Bearer TOKEN-HERE" ^ -H "ML-Instance-ID: MACHINE-LEARNING-SERVICE-GUID-HERE" ^ -H "Content-Type: application/json" ^ --data-raw "{\"name\": \"SPACE-NAME-HERE\",\"description\": \"optional description here\",\"storage\": {\"resource_crn\": \"COS-CRN-ID-HERE\" },\"compute\": [{\"name\": \"MACHINE-LEARNING-SERVICE-NAME-HERE\",\"crn\": \"MACHINE-LEARNING-SERVICE-CRN-ID-HERE\" }]}" Alternatively put the data in a separate file\. A **SPACE-ID** is returned in the `id` field of the `metadata` section. Output example: { "entity": { "compute": [ { "crn": "MACHINE-LEARNING-SERVICE-CRN", "guid": "MACHINE-LEARNING-SERVICE-GUID", "name": "MACHINE-LEARNING-SERVICE-NAME", "type": "machine_learning" } ], "description": "string", "members": [ { "id": "XXXXXXX", "role": "admin", "state": "active", "type": "user" } ], "name": "name", "scope": { "bss_account_id": "account_id" }, "status": { "state": "active" } }, "metadata": { "created_at": "2020-07-17T08:36:57.611Z", "creator_id": "XXXXXXX", "id": "SPACE-ID", "url": "/v2/spaces/SPACE-ID" } } You must wait until your deployment space status is `"active"` before continuing. You can poll to check for this as follows. curl --location --request GET "https://api.dataplatform.cloud.ibm.com/v2/spaces/SPACE-ID-HERE" \ -H "Authorization: bearer TOKEN-HERE" \ -H "Content-Type: application/json" 3. Create a **new Decision Optimization model** All API requests require a version parameter that takes a date in the format `version=YYYY-MM-DD`. This code example posts a model that uses the file `create_model.json`. 
The URL will vary according to the chosen region/location for your machine learning service. See [Endpoint URLs](https://cloud.ibm.com/apidocs/machine-learning#endpoint-url). curl --location --request POST \ "https://us-south.ml.cloud.ibm.com/ml/v4/models?version=2020-08-01" \ -H "Authorization: bearer TOKEN-HERE" \ -H "Content-Type: application/json" \ -d @create_model.json The create\_model.json file contains the following code: { "name": "ModelName", "description": "ModelDescription", "type": "do-docplex_22.1", "software_spec": { "name": "do_22.1" }, "custom": { "decision_optimization": { "oaas.docplex.python": "3.10" } }, "space_id": "SPACE-ID-HERE" } The *Python version* is stated explicitly here in a `custom` block. This is optional. Without it your model will use the default version, which is currently Python 3.10. As the default version will evolve over time, stating the Python version explicitly enables you to easily change it later or to keep using an older supported version when the default version is updated. The currently supported version is 3.10. If you want to be able to run jobs for this model *from the user interface*, instead of only using the REST API, you must define the **schema** for the input and output data. If you do not define the schema when you create the model, you can only run jobs using the REST API and not from the user interface. You can also use the schema specified for input and output in your optimization model: { "name": "Diet-Model-schema", "description": "Diet", "type": "do-docplex_22.1", "schemas": { "input": [ { "id": "diet_food_nutrients", "fields": [ { "name": "Food", "type": "string" }, { "name": "Calories", "type": "double" }, { "name": "Calcium", "type": "double" }, { "name": "Iron", "type": "double" }, { "name": "Vit_A", "type": "double" }, { "name": "Dietary_Fiber", "type": "double" }, { "name": "Carbohydrates", "type": "double" }, { "name": "Protein", "type": "double" } ] }, { "id": "diet_food", "fields": [ { "name": "name", "type": "string" }, { "name": "unit_cost", "type": "double" }, { "name": "qmin", "type": "double" }, { "name": "qmax", "type": "double" } ] }, { "id": "diet_nutrients", "fields": [ { "name": "name", "type": "string" }, { "name": "qmin", "type": "double" }, { "name": "qmax", "type": "double" } ] } ], "output": [ { "id": "solution", "fields": [ { "name": "name", "type": "string" }, { "name": "value", "type": "double" } ] } ] }, "software_spec": { "name": "do_22.1" }, "space_id": "SPACE-ID-HERE" } When you post a model you provide information about its **model type** and the **software specification** to be used\. **Model types** can be, for example: <!-- <ul> --> * `do-opl_22.1` for OPL models * `do-cplex_22.1` for CPLEX models * `do-cpo_22.1` for CP models * `do-docplex_22.1` for Python models <!-- </ul> --> Version 20.1 can also be used for these model types. For the **software specification**, you can use the default specifications using their names `do_22.1` or `do_20.1`. See also [Extend software specification notebook](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployPythonClient.html#topic_wmlpythonclient__extendWML) which shows you how to extend the Decision Optimization software specification (runtimes with additional Python libraries for docplex models). A **MODEL-ID** is returned in the `id` field in the `metadata`. 
Output example: { "entity": { "software_spec": { "id": "SOFTWARE-SPEC-ID" }, "type": "do-docplex_20.1" }, "metadata": { "created_at": "2020-07-17T08:37:22.992Z", "description": "ModelDescription", "id": "MODEL-ID", "modified_at": "2020-07-17T08:37:22.992Z", "name": "ModelName", "owner": "***********", "space_id": "SPACE-ID" } } 4. **Upload a Decision Optimization model formulation** ready for deployment\. First **compress your model** into a (`tar.gz, .zip or .jar`) file and upload it to be deployed by the Watson Machine Learning service\. This code example uploads a model called diet\.zip that contains a Python model and no common data: curl --location --request PUT \ "https://us-south.ml.cloud.ibm.com/ml/v4/models/MODEL-ID-HERE/content?version=2020-08-01&space_id=SPACE-ID-HERE&content_format=native" \ -H "Authorization: bearer TOKEN-HERE" \ -H "Content-Type: application/gzip" \ --data-binary "@diet.zip" You can download this example and other models from the **[DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples)**. Select the relevant product and version subfolder. 5. **Deploy your model** Create a reference to your model\. Use the **SPACE\-ID**, the **MODEL\-ID** obtained when you created your model ready for deployment and the **hardware specification**\. For example: curl --location --request POST "https://us-south.ml.cloud.ibm.com/ml/v4/deployments?version=2020-08-01" \ -H "Authorization: bearer TOKEN-HERE" \ -H "Content-Type: application/json" \ -d @deploy_model.json The deploy\_model.json file contains the following code: { "name": "Test-Diet-deploy", "space_id": "SPACE-ID-HERE", "asset": { "id": "MODEL-ID-HERE" }, "hardware_spec": { "name": "S" }, "batch": {} } The **DEPLOYMENT-ID** is returned in the `id` field in the `metadata`. Output example: { "entity": { "asset": { "id": "MODEL-ID" }, "custom": {}, "description": "", "hardware_spec": { "id": "HARDWARE-SPEC-ID", "name": "S", "num_nodes": 1 }, "name": "Test-Diet-deploy", "space_id": "SPACE-ID", "status": { "state": "ready" } }, "metadata": { "created_at": "2020-07-17T09:10:50.661Z", "description": "", "id": "DEPLOYMENT-ID", "modified_at": "2020-07-17T09:10:50.661Z", "name": "test-Diet-deploy", "owner": "**************", "space_id": "SPACE-ID" } } 6. Once deployed, you can **monitor your model's deployment state\.** Use the **DEPLOYMENT\-ID**\. For example: curl --location --request GET "https://us-south.ml.cloud.ibm.com/ml/v4/deployments/DEPLOYMENT-ID-HERE?version=2020-08-01&space_id=SPACE-ID-HERE" \ -H "Authorization: bearer TOKEN-HERE" \ -H "Content-Type: application/json" Output example: 7. You can then **Submit jobs** for your deployed model defining the input data and the output (results of the optimization solve) and the log file\. For example, the following shows the contents of a file called `myjob.json`\. It contains (**inline**) input data, some solve parameters, and specifies that the output will be a \.csv file\. For examples of other types of input data references, see [Model input and output data adaptation](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIODataDefn.html#topic_modelIOAdapt)\. 
{ "name":"test-job-diet", "space_id": "SPACE-ID-HERE", "deployment": { "id": "DEPLOYMENT-ID-HERE" }, "decision_optimization" : { "solve_parameters" : { "oaas.logAttachmentName":"log.txt", "oaas.logTailEnabled":"true" }, "input_data": [ { "id":"diet_food.csv", "fields" : ["name","unit_cost","qmin","qmax"], "values" : [ ["Roasted Chicken", 0.84, 0, 10], ["Spaghetti W/ Sauce", 0.78, 0, 10], ["Tomato,Red,Ripe,Raw", 0.27, 0, 10], ["Apple,Raw,W/Skin", 0.24, 0, 10], ["Grapes", 0.32, 0, 10], ["Chocolate Chip Cookies", 0.03, 0, 10], ["Lowfat Milk", 0.23, 0, 10], ["Raisin Brn", 0.34, 0, 10], ["Hotdog", 0.31, 0, 10] ] }, { "id":"diet_food_nutrients.csv", "fields" : ["Food","Calories","Calcium","Iron","Vit_A","Dietary_Fiber","Carbohydrates","Protein"], "values" : [ ["Spaghetti W/ Sauce", 358.2, 80.2, 2.3, 3055.2, 11.6, 58.3, 8.2], ["Roasted Chicken", 277.4, 21.9, 1.8, 77.4, 0, 0, 42.2], ["Tomato,Red,Ripe,Raw", 25.8, 6.2, 0.6, 766.3, 1.4, 5.7, 1], ["Apple,Raw,W/Skin", 81.4, 9.7, 0.2, 73.1, 3.7, 21, 0.3], ["Grapes", 15.1, 3.4, 0.1, 24, 0.2, 4.1, 0.2], ["Chocolate Chip Cookies", 78.1, 6.2, 0.4, 101.8, 0, 9.3, 0.9], ["Lowfat Milk", 121.2, 296.7, 0.1, 500.2, 0, 11.7, 8.1], ["Raisin Brn", 115.1, 12.9, 16.8, 1250.2, 4, 27.9, 4], ["Hotdog", 242.1, 23.5, 2.3, 0, 0, 18, 10.4] ] }, { "id":"diet_nutrients.csv", "fields" : ["name","qmin","qmax"], "values" : [ ["Calories", 2000, 2500], ["Calcium", 800, 1600], ["Iron", 10, 30], ["Vit_A", 5000, 50000], ["Dietary_Fiber", 25, 100], ["Carbohydrates", 0, 300], ["Protein", 50, 100] ] } ], "output_data": [ { "id":".*\.csv" } ] } } This code example posts a job that uses this file `myjob.json`. curl --location --request POST "https://us-south.ml.cloud.ibm.com/ml/v4/deployment_jobs?version=2020-08-01&space_id=SPACE-ID-HERE" \ -H "Authorization: bearer TOKEN-HERE" \ -H "Content-Type: application/json" \ -H "cache-control: no-cache" \ -d @myjob.json A **JOB-ID** is returned. 
Output example: (the job is queued) { "entity": { "decision_optimization": { "input_data": [{ "id": "diet_food.csv", "fields": ["name", "unit_cost", "qmin", "qmax"], "values": [["Roasted Chicken", 0.84, 0, 10], ["Spaghetti W/ Sauce", 0.78, 0, 10], ["Tomato,Red,Ripe,Raw", 0.27, 0, 10], ["Apple,Raw,W/Skin", 0.24, 0, 10], ["Grapes", 0.32, 0, 10], ["Chocolate Chip Cookies", 0.03, 0, 10], ["Lowfat Milk", 0.23, 0, 10], ["Raisin Brn", 0.34, 0, 10], ["Hotdog", 0.31, 0, 10]] }, { "id": "diet_food_nutrients.csv", "fields": ["Food", "Calories", "Calcium", "Iron", "Vit_A", "Dietary_Fiber", "Carbohydrates", "Protein"], "values": [["Spaghetti W/ Sauce", 358.2, 80.2, 2.3, 3055.2, 11.6, 58.3, 8.2], ["Roasted Chicken", 277.4, 21.9, 1.8, 77.4, 0, 0, 42.2], ["Tomato,Red,Ripe,Raw", 25.8, 6.2, 0.6, 766.3, 1.4, 5.7, 1], ["Apple,Raw,W/Skin", 81.4, 9.7, 0.2, 73.1, 3.7, 21, 0.3], ["Grapes", 15.1, 3.4, 0.1, 24, 0.2, 4.1, 0.2], ["Chocolate Chip Cookies", 78.1, 6.2, 0.4, 101.8, 0, 9.3, 0.9], ["Lowfat Milk", 121.2, 296.7, 0.1, 500.2, 0, 11.7, 8.1], ["Raisin Brn", 115.1, 12.9, 16.8, 1250.2, 4, 27.9, 4], ["Hotdog", 242.1, 23.5, 2.3, 0, 0, 18, 10.4]] }, { "id": "diet_nutrients.csv", "fields": ["name", "qmin", "qmax"], "values": [["Calories", 2000, 2500], ["Calcium", 800, 1600], ["Iron", 10, 30], ["Vit_A", 5000, 50000], ["Dietary_Fiber", 25, 100], ["Carbohydrates", 0, 300], ["Protein", 50, 100]] }], "output_data": [ { "id": ".*\.csv" } ], "solve_parameters": { "oaas.logAttachmentName": "log.txt", "oaas.logTailEnabled": "true" }, "status": { "state": "queued" } }, "deployment": { "id": "DEPLOYMENT-ID" }, "platform_job": { "job_id": "", "run_id": "" } }, "metadata": { "created_at": "2020-07-17T10:42:42.783Z", "id": "JOB-ID", "name": "test-job-diet", "space_id": "SPACE-ID" } } 8. You can also **monitor job states**\. 
Use the **JOB\-ID**\. For example: curl --location --request GET \ "https://us-south.ml.cloud.ibm.com/ml/v4/deployment_jobs/JOB-ID-HERE?version=2020-08-01&space_id=SPACE-ID-HERE" \ -H "Authorization: bearer TOKEN-HERE" \ -H "Content-Type: application/json" Output example: (job has completed) { "entity": { "decision_optimization": { "input_data": [{ "id": "diet_food.csv", "fields": ["name", "unit_cost", "qmin", "qmax"], "values": [["Roasted Chicken", 0.84, 0, 10], ["Spaghetti W/ Sauce", 0.78, 0, 10], ["Tomato,Red,Ripe,Raw", 0.27, 0, 10], ["Apple,Raw,W/Skin", 0.24, 0, 10], ["Grapes", 0.32, 0, 10], ["Chocolate Chip Cookies", 0.03, 0, 10], ["Lowfat Milk", 0.23, 0, 10], ["Raisin Brn", 0.34, 0, 10], ["Hotdog", 0.31, 0, 10]] }, { "id": "diet_food_nutrients.csv", "fields": ["Food", "Calories", "Calcium", "Iron", "Vit_A", "Dietary_Fiber", "Carbohydrates", "Protein"], "values": [["Spaghetti W/ Sauce", 358.2, 80.2, 2.3, 3055.2, 11.6, 58.3, 8.2], ["Roasted Chicken", 277.4, 21.9, 1.8, 77.4, 0, 0, 42.2], ["Tomato,Red,Ripe,Raw", 25.8, 6.2, 0.6, 766.3, 1.4, 5.7, 1], ["Apple,Raw,W/Skin", 81.4, 9.7, 0.2, 73.1, 3.7, 21, 0.3], ["Grapes", 15.1, 3.4, 0.1, 24, 0.2, 4.1, 0.2], ["Chocolate Chip Cookies", 78.1, 6.2, 0.4, 101.8, 0, 9.3, 0.9], ["Lowfat Milk", 121.2, 296.7, 0.1, 500.2, 0, 11.7, 8.1], ["Raisin Brn", 115.1, 12.9, 16.8, 1250.2, 4, 27.9, 4], ["Hotdog", 242.1, 23.5, 2.3, 0, 0, 18, 10.4]] }, { "id": "diet_nutrients.csv", "fields": ["name", "qmin", "qmax"], "values": [["Calories", 2000, 2500], ["Calcium", 800, 1600], ["Iron", 10, 30], ["Vit_A", 5000, 50000], ["Dietary_Fiber", 25, 100], ["Carbohydrates", 0, 300], ["Protein", 50, 100]] }], "output_data": [{ "fields": ["Name", "Value"], "id": "kpis.csv", "values": [["Total Calories", 2000], ["Total Calcium", 800.0000000000001], ["Total Iron", 11.278317739831891], ["Total Vit_A", 8518.432542485823], ["Total Dietary_Fiber", 25], ["Total Carbohydrates", 256.80576358904455], ["Total Protein", 51.17372234135308], ["Minimal cost", 2.690409171696264]] }, { "fields": ["name", "value"], "id": "solution.csv", "values": [["Spaghetti W/ Sauce", 2.1551724137931036], ["Chocolate Chip Cookies", 10], ["Lowfat Milk", 1.8311671008899097], ["Hotdog", 0.9296975991385925]] }], "output_data_references": [], "solve_parameters": { "oaas.logAttachmentName": "log.txt", "oaas.logTailEnabled": "true" }, "solve_state": { "details": { "KPI.Minimal cost": "2.690409171696264", "KPI.Total Calcium": "800.0000000000001", "KPI.Total Calories": "2000.0", "KPI.Total Carbohydrates": "256.80576358904455", "KPI.Total Dietary_Fiber": "25.0", "KPI.Total Iron": "11.278317739831891", "KPI.Total Protein": "51.17372234135308", "KPI.Total Vit_A": "8518.432542485823", "MODEL_DETAIL_BOOLEAN_VARS": "0", "MODEL_DETAIL_CONSTRAINTS": "7", "MODEL_DETAIL_CONTINUOUS_VARS": "9", "MODEL_DETAIL_INTEGER_VARS": "0", "MODEL_DETAIL_KPIS": "[\"Total Calories\", \"Total Calcium\", \"Total Iron\", \"Total Vit_A\", \"Total Dietary_Fiber\", \"Total Carbohydrates\", \"Total Protein\", \"Minimal cost\"]", "MODEL_DETAIL_NONZEROS": "57", "MODEL_DETAIL_TYPE": "LP", "PROGRESS_CURRENT_OBJECTIVE": "2.6904091716962637" }, "latest_engine_activity": [ "[2020-07-21T16:37:36Z, INFO] Model: diet", "[2020-07-21T16:37:36Z, INFO] - number of variables: 9", "[2020-07-21T16:37:36Z, INFO] - binary=0, integer=0, continuous=9", "[2020-07-21T16:37:36Z, INFO] - number of constraints: 7", "[2020-07-21T16:37:36Z, INFO] - linear=7", "[2020-07-21T16:37:36Z, INFO] - parameters: defaults", "[2020-07-21T16:37:36Z, INFO] - problem type is: LP", "[2020-07-21T16:37:36Z, INFO] Warning: Model: \"diet\" is not a MIP problem, progress listeners are disabled", "[2020-07-21T16:37:36Z, INFO] objective: 2.690", "[2020-07-21T16:37:36Z, INFO] \"Spaghetti W/ Sauce\"=2.155", "[2020-07-21T16:37:36Z, INFO] \"Chocolate Chip Cookies\"=10.000", "[2020-07-21T16:37:36Z, INFO] \"Lowfat Milk\"=1.831", "[2020-07-21T16:37:36Z, INFO] \"Hotdog\"=0.930", "[2020-07-21T16:37:36Z, INFO] solution.csv" ], "solve_status": "optimal_solution" }, "status": { "completed_at": "2020-07-21T16:37:36.989Z", "running_at": "2020-07-21T16:37:35.622Z", "state": "completed" } }, "deployment": { "id": "DEPLOYMENT-ID" } }, "metadata": { "created_at": "2020-07-21T16:37:09.130Z", "id": "JOB-ID", "modified_at": "2020-07-21T16:37:37.268Z", "name": "test-job-diet", "space_id": "SPACE-ID" } } 9. Optional: You can **delete jobs** as follows: curl --location --request DELETE "https://us-south.ml.cloud.ibm.com/ml/v4/deployment_jobs/JOB-ID-HERE?version=2020-08-01&space_id=SPACE-ID-HERE&hard_delete=true" \ -H "Authorization: bearer TOKEN-HERE" If you delete a job using the API, it will still be displayed in the user interface. 10. Optional: You can **delete deployments** as follows: If you delete a deployment that contains jobs using the API, the jobs will still be displayed in the deployment space in the user interface\. <!-- </ol> --> <!-- </article "role="article" "> -->
DEB599F49C3E459A08E8BF25304B063B50CAA294
https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployModelUI-WML.html?context=cdpaas&locale=en
Deploying a Decision Optimization model by using the user interface
Deploying a Decision Optimization model by using the user interface You can save a model for deployment in the Decision Optimization experiment UI and promote it to your Watson Machine Learning deployment space. Procedure To save your model for deployment: 1. In the Decision Optimization experiment UI, either from the Scenario or from the Overview pane, click the menu icon ![Scenario menu icon](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/images/scenariomenu.jpg) for the scenario that you want to deploy, and select Save for deployment. 2. Specify a name for your model and add a description, if needed, then click Next. 1. Review the Input and Output schema and select the tables you want to include in the schema. 2. Review the Run parameters and add, modify or delete any parameters as necessary. 3. Review the Environment and Model files that are listed in the Review and save window. 4. Click Save. The model is then available in the Models section of your project. To promote your model to your deployment space: 3. View your model in the Models section of your project. You can see a summary with input and output schema. Click Promote to deployment space. 4. In the Promote to space window that opens, check that the Target space field displays the name of your deployment space and click Promote. 5. Click the link deployment space in the message that you receive that confirms successful promotion. Your promoted model is displayed in the Assets tab of your Deployment space. The information pane shows you the Type, Software specification, description and any defined tags such as the Python version used. To create a new deployment: 6. From the Assets tab of your deployment space, open your model and click New Deployment. 7. In the Create a deployment window that opens, specify a name for your deployment and select a Hardware specification. Click Create to create the deployment. Your deployment window opens from which you can later create jobs. Creating and running Decision Optimization jobs You can create and run jobs for your deployed model. Procedure 1. Return to your deployment space by using the navigation path and (if the data pane isn't already open) click the data icon to open the data pane. Upload your input data tables, and solution and kpi output tables here. (You must have output tables defined in your model to be able to see the solution and kpi values.) 2. Open your deployment model, by selecting it in the Deployments tab of your deployment space and click New job. 3. Define the details of your job by entering a name, and an optional description for your job and click Next. 4. Configure your job by selecting a hardware specification and clicking Next. You can choose to schedule your job here, or leave the default schedule option off and click Next. You can also optionally turn on notifications, then click Next. 5. Choose the data that you want to use in your job by clicking Select the source for each of your input and output tables. Click Next. 6. You can now review and create your job by clicking Create. When you receive a successful job creation message, you can then view it by opening it from your deployment space. There you can see the run status of your job. 7. Open the run for your job. Your job log opens and you can also view and copy the payload information.
# Deploying a Decision Optimization model by using the user interface # You can save a model for deployment in the Decision Optimization experiment UI and promote it to your Watson Machine Learning deployment space\. ## Procedure ## To save your model for deployment: <!-- <ol> --> 1. In the Decision Optimization experiment UI, either from the Scenario or from the Overview pane, click the menu icon ![Scenario menu icon](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/images/scenariomenu.jpg) for the scenario that you want to deploy, and select **Save for deployment**\. 2. Specify a name for your model and add a description, if needed, then click **Next**\. <!-- <ol> --> 1. Review the Input and Output schema and select the tables you want to include in the schema. 2. Review the Run parameters and add, modify or delete any parameters as necessary. 3. Review the Environment and Model files that are listed in the Review and save window. 4. Click Save. <!-- </ol> --> The model is then available in the **Models** section of your project. <!-- </ol> --> To promote your model to your deployment space: <!-- <ol> --> 3. View your model in the Models section of your project\. You can see a summary with input and output schema\. Click **Promote to deployment space**\. 4. In the Promote to space window that opens, check that the Target space field displays the name of your deployment space and click **Promote**\. 5. Click the link **deployment space** in the message that you receive that confirms successful promotion\. Your promoted model is displayed in the Assets tab of your **Deployment space**\. The information pane shows you the Type, Software specification, description and any defined tags such as the Python version used\. <!-- </ol> --> To create a new deployment: <!-- <ol> --> 6. From the **Assets tab** of your deployment space, open your model and click **New Deployment**\. 7. In the Create a deployment window that opens, specify a name for your deployment and select a **Hardware specification**\. Click **Create** to create the deployment\. Your deployment window opens from which you can later create jobs\. <!-- </ol> --> <!-- <article "class="topic task nested1" role="article" id="task_ktn_fkv_5mb" "> --> ## Creating and running Decision Optimization jobs ## You can create and run jobs for your deployed model\. ### Procedure ### <!-- <ol> --> 1. Return to your deployment space by using the navigation path and (if the data pane isn't already open) click the data icon to open the data pane\. Upload your input data tables, and solution and kpi output tables here\. (You must have output tables defined in your model to be able to see the solution and kpi values\.) 2. Open your deployment model, by selecting it in the Deployments tab of your deployment space and click **New job**\. 3. Define the details of your job by entering a name, and an optional description for your job and click **Next**\. 4. Configure your job by selecting a hardware specification and clicking **Next**\. You can choose to schedule your job here, or leave the default schedule option off and click **Next**\. You can also optionally turn on notifications, then click **Next**\. 5. Choose the data that you want to use in your job by clicking Select the source for each of your input and output tables\. Click **Next**\. 6. You can now review and create your job by clicking **Create**\. When you receive a successful job creation message, you can then view it by opening it from your deployment space\. 
There you can see the run status of your job\. 7. Open the run for your job\. Your job log opens and you can also view and copy the payload information\. <!-- </ol> --> <!-- </article "class="topic task nested1" role="article" id="task_ktn_fkv_5mb" "> --> <!-- </article "class="nested0" role="article" id="task_deployUIWML" "> -->
95689297B729A4186914E81A59FFB3A09289F8D8
https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployPythonClient.html?context=cdpaas&locale=en
Decision Optimization Python client examples
Python client examples You can deploy a Decision Optimization model, create and monitor jobs, and get solutions by using the Watson Machine Learning Python client. To deploy your model, see [Model deployment](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelDeploymentTaskCloud.html). For more information, see [Watson Machine Learning Python client documentation](https://ibm.github.io/watson-machine-learning-sdk/core_api.html#deployments). See also the following sample notebooks located in the jupyter folder of the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples). Select the relevant product and version subfolder. * Deploying a DO model with WML * RunDeployedModel * ExtendWMLSoftwareSpec The Deploying a DO model with WML sample shows you how to deploy a Decision Optimization model, create and monitor jobs, and get solutions by using the Watson Machine Learning Python client. This notebook uses the diet sample for the Decision Optimization model and takes you through the whole procedure without using the Decision Optimization experiment UI. The RunDeployedModel notebook shows you how to run jobs and get solutions from an existing deployed model. This notebook uses a model that is saved for deployment from a Decision Optimization experiment UI scenario. The ExtendWMLSoftwareSpec notebook shows you how to extend the Decision Optimization software specification within Watson Machine Learning. By extending the software specification, you can use your own pip package to add custom code and deploy it in your model and send jobs to it. You can also find in the samples several notebooks for deploying various models, for example CPLEX, DOcplex and OPL models with different types of data.
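If you prefer to run these notebooks outside Watson Studio, you first need the Python client itself. A minimal sketch (assuming the ibm-watson-machine-learning package name on PyPI):
pip install ibm-watson-machine-learning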
# Python client examples # You can deploy a Decision Optimization model, create and monitor jobs, and get solutions by using the Watson Machine Learning Python client\. To deploy your model, see [Model deployment](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelDeploymentTaskCloud.html)\. For more information, see [Watson Machine Learning Python client documentation](https://ibm.github.io/watson-machine-learning-sdk/core_api.html#deployments)\. See also the following sample notebooks located in the jupyter folder of the **[DO\-samples](https://github.com/IBMDecisionOptimization/DO-Samples)**\. Select the relevant product and version subfolder\. <!-- <ul> --> * Deploying a DO model with WML * RunDeployedModel * ExtendWMLSoftwareSpec <!-- </ul> --> The Deploying a DO model with WML sample shows you how to deploy a Decision Optimization model, create and monitor jobs, and get solutions by using the Watson Machine Learning Python client\. This notebook uses the diet sample for the Decision Optimization model and takes you through the whole procedure without using the Decision Optimization experiment UI\. The RunDeployedModel notebook shows you how to run jobs and get solutions from an existing deployed model\. This notebook uses a model that is saved for deployment from a Decision Optimization experiment UI scenario\. The ExtendWMLSoftwareSpec notebook shows you how to extend the Decision Optimization software specification within Watson Machine Learning\. By extending the software specification, you can use your own pip package to add custom code and deploy it in your model and send jobs to it\. You can also find in the samples several notebooks for deploying various models, for example CPLEX, DOcplex and OPL models with different types of data\. <!-- </article "role="article" "> -->
135AD82FAAA11FD4FEC7CE7A31516E98EE3D0EA5
https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeploySolveParams.html?context=cdpaas&locale=en
Decision Optimization solve parameters
Solve parameters To control solve behavior, you can specify Decision Optimization solve parameters in your request as named value pairs. For example: "solve_parameters" : { "oaas.logAttachmentName":"log.txt", "oaas.logTailEnabled":"true" } You can use this code to collect the engine log tail during the solve and the whole engine log as output at the end of the solve. You can use these parameters in your request.
* oaas.timeLimit (Number): You can use this parameter to set a time limit in milliseconds.
* oaas.resultsFormat (Enum: JSON, CSV, XML, TEXT, XLSX): Specifies the format for returned results. The default formats are as follows: CPLEX - .xml, CPO - .json, OPL - .csv, DOcplex - .json. Other formats might or might not be supported depending on the application type.
* oaas.oplRunConfig (String): Specifies the name of the OPL run configuration to be executed.
* oaas.docplex.python (3.10): You can use this parameter to set the Python version for the run in your deployed model. If not specified, 3.10 is used by default.
* oaas.logTailEnabled (Boolean): Use this parameter to include the log tail in the solve status.
* oaas.logAttachmentName (String): If defined, engine logs will be defined as a job output attachment.
* oaas.engineLogLevel (Enum: OFF, SEVERE, WARNING, INFO, CONFIG, FINE, FINER, FINEST): You can use this parameter to define the level of detail that is provided by the engine log. The default value is INFO.
* oaas.logLimit (Number): Maximum log-size limit in number of characters.
* oaas.dumpZipName (can be viewed as Boolean, see description): If defined, a job dump (inputs and outputs) .zip file is provided with this name as a job output attachment. The name can contain a placeholder ${job_id}. If defined with no value, dump_${job_id}.zip attachmentName is used. If not defined, by default, no job dump .zip file is attached.
* oaas.dumpZipRules (String): If defined, a .zip file is generated according to specific job rules (RFC 1960-based filter). It must be used in conjunction with the {@link DUMP_ZIP_NAME} parameter. Filters can be defined on the duration and the following {@link com.ibm.optim.executionservice.model.solve.SolveState} properties: duration, solveState.executionStatus, solveState.interruptionStatus, solveState.solveStatus, solveState.failureInfo.type. Example: (duration>=1000) or (&(duration<1000)(!(solveState.solveStatus=OPTIMAL_SOLUTION))) or (|(solveState.interruptionStatus=OUT_OF_MEMORY) (solveState.failureInfo.type=INFRASTRUCTURE))
* oaas.outputUploadPeriod (Number): Intermediate output period in minutes. This parameter can be used to set up intermediate output publication (if any).
* oaas.outputUploadFiles (String, RegExp): RegExp filter for files to be included in the output upload. If nothing is defined, all outputs are added. Example: job_${job_id}_log_${update_time}.txt
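For illustration, several of these parameters can be combined in one request. A minimal sketch (the values shown, such as the three-minute time limit, are illustrative assumptions; values are passed as strings, following the example above):
"solve_parameters" : { "oaas.timeLimit": "180000", "oaas.resultsFormat": "CSV", "oaas.logAttachmentName": "log.txt", "oaas.dumpZipName": "dump_${job_id}.zip" }
This would stop the solve after 180000 milliseconds (3 minutes), return results in CSV format, and attach both the engine log and a job dump .zip file to the job output.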
# Solve parameters # To control solve behavior, you can specify Decision Optimization solve parameters in your request as named value pairs\. For example: "solve_parameters" : { "oaas.logAttachmentName":"log.txt", "oaas.logTailEnabled":"true" } You can use this code to collect the engine log tail during the solve and the whole engine log as output at the end of the solve\. You can use these parameters in your request\. <!-- <table "summary="" id="topic_deploysolveparams__simpletable_kw4_n1y_h2b" class="defaultstyle" "> --> 
| Name | Type | Description | 
| --- | --- | --- | 
| `oaas.timeLimit` | Number | You can use this parameter to set a time limit in milliseconds\. | 
| `oaas.resultsFormat` | Enum<br><br><!-- <ul> --><br><br> * `JSON`<br> * `CSV`<br> * `XML`<br> * `TEXT`<br> * `XLSX`<br><br><!-- </ul> --><br> | Specifies the format for returned results\. The default formats are as follows:<br><br><!-- <ul> --><br><br> * CPLEX \- `.xml`<br> * CPO \- `.json`<br> * OPL \- `.csv`<br> * DOcplex \- `.json`<br><br><!-- </ul> --><br><br>Other formats might or might not be supported depending on the application type\. | 
| `oaas.oplRunConfig` | String | Specifies the name of the OPL run configuration to be executed\. | 
| `oaas.docplex.python` | `3.10` | You can use this parameter to set the Python version for the run in your deployed model\. If not specified, 3\.10 is used by default\. | 
| `oaas.logTailEnabled` | Boolean | Use this parameter to include the log tail in the solve status\. | 
| `oaas.logAttachmentName` | String | If defined, engine logs will be defined as a job output attachment\. | 
| `oaas.engineLogLevel` | Enum<br><br><!-- <ul> --><br><br> * `OFF`<br> * `SEVERE`<br> * `WARNING`<br> * `INFO`<br> * `CONFIG`<br> * `FINE`<br> * `FINER`<br> * `FINEST`<br><br><!-- </ul> --><br> | You can use this parameter to define the level of detail that is provided by the engine log\. The default value is `INFO`\. | 
| `oaas.logLimit` | Number | Maximum log\-size limit in number of characters\. | 
| `oaas.dumpZipName` | Can be viewed as Boolean (see Description) | If defined, a job dump (inputs and outputs) `.zip` file is provided with this name as a job output attachment\. The name can contain a placeholder `${job_id}`\. If defined with no value, `dump_${job_id}.zip attachmentName` is used\. If not defined, by default, no job dump `.zip` file is attached\. | 
| `oaas.dumpZipRules` | String | If defined, a `.zip` file is generated according to specific job rules (RFC 1960\-based Filter)\. It must be used in conjunction with the `{@link DUMP_ZIP_NAME}` parameter\. Filters can be defined on the duration and the following `{@link com.ibm.optim.executionservice.model.solve.SolveState}` properties:<br><br><!-- <ul> --><br><br> * `duration`<br> * `solveState.executionStatus`<br> * `solveState.interruptionStatus`<br> * `solveState.solveStatus`<br> * `solveState.failureInfo.type`<br><br><!-- </ul> --><br><br>Example:<br><br>`(duration>=1000) or (&(duration<1000)(!(solveState.solveStatus=OPTIMAL_SOLUTION))) or (|(solveState.interruptionStatus=OUT_OF_MEMORY) (solveState.failureInfo.type=INFRASTRUCTURE))` | 
| `oaas.outputUploadPeriod` | Number | Intermediate output period in minutes\. This parameter can be used to set up intermediate output publication (if any)\. | 
| `oaas.outputUploadFiles` | String (RegExp) | RegExp filter for files to be included in the output upload\. If nothing is defined, all outputs are added\.<br><br>Example:<br><br>`job_${job_id}_log_${update_time}.txt` | 
<!-- </table "summary="" id="topic_deploysolveparams__simpletable_kw4_n1y_h2b" class="defaultstyle" "> --> <!-- </article "role="article" "> -->
939233F807850AE8D28246ADE7FDCCDA66E9DF03
https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelDeploymentTaskCloud.html?context=cdpaas&locale=en
Decision Optimization model deployment
Model deployment To deploy a Decision Optimization model, create a model ready for deployment in your deployment space and then upload your model as an archive. When deployed, you can submit jobs to your model and monitor job states. Procedure To deploy a Decision Optimization model: 1. Package your Decision Optimization model formulation with your common data (optional) ready for deployment as a tar.gz, .zip, or .jar file. Your archive can include the following optional files: 1. Your model files 2. Settings (For more information, see [Solve parameters](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeploySolveParams.html#topic_deploysolveparams)) 3. Common data Note: For Python models with multiple .py files, put all files in the same folder in your archive. The same folder must contain a main file called main.py. Do not use subfolders. 2. Create a model ready for deployment in Watson Machine Learning providing the following information: * Machine Learning service instance * Deployment space instance * Software specification (Decision Optimization runtime version): * do_22.1 runtime is based on CPLEX 22.1.1.0 * do_20.1 runtime is based on CPLEX 20.1.0.1 You can extend the software specification provided by Watson Machine Learning. See the [ExtendWMLSoftwareSpec](https://github.com/IBMDecisionOptimization/DO-Samples/blob/watson_studio_cloud/jupyter/watsonx.ai%20and%20Cloud%20Pak%20for%20Data%20as%20a%20Service/ExtendWMLSoftwareSpec.ipynb) notebook in the jupyter folder of the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples). Updating CPLEX runtimes: If you previously deployed your model with a CPLEX runtime that is no longer supported, you can update your existing deployed model by using either the [REST API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html#update-soft-specs-api) or the [UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html#discont-soft-spec). * The model type: * opl (do-opl_<runtime version>) * cplex (do-cplex_<runtime version>) * cpo (do-cpo_<runtime version>) * docplex (do-docplex_<runtime version>) using Python 3.10 (The Runtime version can be one of the available runtimes so, for example, an opl model with runtime 22.1 would have the model type do-opl_22.1.) You obtain a MODEL-ID. Your Watson Machine Learning model can then be used in one or multiple deployments. 3. Upload your model archive (tar.gz, .zip, or .jar file) on Watson Machine Learning. See [Model input and output data file formats](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIOFileFormats.html#topic_modelIOFileFormats) for information about input file types. 4. Deploy your model by using the MODEL-ID, SPACE-ID, and the hardware specification for the available configuration sizes (small S, medium M, large L, extra large XL). See [configurations](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/Paralleljobs.html#topic_paralleljobs__34c6). You obtain a DEPLOYMENT-ID. 5. Monitor the deployment by using the DEPLOYMENT-ID. Deployment states can be: initializing, updating, ready, or failed. 6. Submit jobs to your deployment. You obtain a JOB-ID. 7. Monitor your jobs by using the JOB-ID.
# Model deployment # To deploy a Decision Optimization model, create a model ready for deployment in your deployment space and then upload your model as an archive\. When deployed, you can submit jobs to your model and monitor job states\. ## Procedure ## To deploy a Decision Optimization model: <!-- <ol> --> 1. Package your Decision Optimization model formulation with your common data (optional) ready for deployment as a `tar.gz`, `.zip`, or `.jar` file\. Your archive can include the following optional files: <!-- <ol> --> 1. Your model files 2. Settings (For more information, see [Solve parameters](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeploySolveParams.html#topic_deploysolveparams)) 3. Common data <!-- </ol> --> Note: For Python models with multiple .py files, put all files in the same folder in your archive. The same folder must contain a main file called main.py. Do not use subfolders. 2. Create a model ready for deployment in Watson Machine Learning providing the following information: <!-- <ul> --> * **Machine Learning** service instance * **Deployment space** instance * **Software specification** (Decision Optimization **runtime version**): <!-- <ul> --> * do\_22.1 runtime is based on CPLEX 22.1.1.0 * do\_20.1 runtime is based on CPLEX 20.1.0.1 <!-- </ul> --> You can extend the software specification provided by Watson Machine Learning. See the [ExtendWMLSoftwareSpec](https://github.com/IBMDecisionOptimization/DO-Samples/blob/watson_studio_cloud/jupyter/watsonx.ai%20and%20Cloud%20Pak%20for%20Data%20as%20a%20Service/ExtendWMLSoftwareSpec.ipynb) notebook in the **jupyter** folder of the **[DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples)**. Updating CPLEX runtimes: If you previously deployed your model with a CPLEX runtime that is no longer supported, you can update your existing deployed model by using either the [REST API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html#update-soft-specs-api) or the [UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html#discont-soft-spec). * The **model type**: <!-- <ul> --> * opl (do-opl\_<*runtime version*>) * cplex (do-cplex\_<*runtime version*>) * cpo (do-cpo\_<*runtime version*>) * docplex (do-docplex\_<*runtime version*>) using Python 3.10 <!-- </ul> --> (The *Runtime version* can be one of the available runtimes so, for example, an opl model with runtime 22.1 would have the model type *do-opl\_22.1*.) <!-- </ul> --> You obtain a *MODEL-ID*. Your Watson Machine Learning model can then be used in one or multiple deployments. 3. Upload your model archive (`tar.gz`, `.zip`, or `.jar` file) on Watson Machine Learning\. See [Model input and output data file formats](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIOFileFormats.html#topic_modelIOFileFormats) for information about input file types\. 4. Deploy your model by using the *MODEL\-ID*, *SPACE\-ID*, and the **hardware specification** for the available configuration sizes (small S, medium M, large L, extra large XL)\. See [configurations](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/Paralleljobs.html#topic_paralleljobs__34c6)\. You obtain a *DEPLOYMENT\-ID*\. 5. Monitor the deployment by using the *DEPLOYMENT\-ID*\. **Deployment states** can be: `initializing`, `updating`, `ready`, or `failed`\. 6. Submit jobs to your deployment\. You obtain a *JOB\-ID*\. 7. Monitor your jobs by using the JOB\-ID\. 
<!-- </ol> --> <!-- </article "role="article" "> -->
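The procedure above maps onto two REST calls: one to create the model and one to deploy it. The following curl sketch is illustrative only: the endpoint paths match those used in the curl examples later in this documentation, but the payload field names (name, type, software_spec, asset, batch, hardware_spec) are assumptions to verify against the Watson Machine Learning REST API reference. It is shown here for a docplex model with the do_22.1 runtime and a small configuration:

    curl --location --request POST "https://us-south.ml.cloud.ibm.com/ml/v4/models?version=2021-12-01" \
    -H "Authorization: bearer TOKEN-HERE" \
    -H "Content-Type: application/json" \
    -d '{"name": "my-do-model", "space_id": "SPACE-ID-HERE", "type": "do-docplex_22.1", "software_spec": {"name": "do_22.1"}}'

    curl --location --request POST "https://us-south.ml.cloud.ibm.com/ml/v4/deployments?version=2021-12-01" \
    -H "Authorization: bearer TOKEN-HERE" \
    -H "Content-Type: application/json" \
    -d '{"name": "my-do-deployment", "space_id": "SPACE-ID-HERE", "asset": {"id": "MODEL-ID-HERE"}, "batch": {}, "hardware_spec": {"name": "S"}}'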
02C5718919D676E7EA14D16AC226407CC675C95E
https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelExecution.html?context=cdpaas&locale=en
Decision Optimization model execution
Model execution Once your model is deployed, you can submit Decision Optimization jobs to this deployment. You can submit jobs specifying the: * Input data: the transaction data used as input by the model. This can be inline or referenced. * Output data: to define how the output data is generated by the model. This is returned as inline or referenced data. * Solve parameters: to customize the behavior of the solution engine. For more information, see [Model input and output data adaptation](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIODataDefn.html#topic_modelIOAdapt). After submitting a job, you can use the job-id to poll the job status to collect the: * Job execution status or error message * Solve execution status, progress, and log tail * Inline or referenced output data Job states can be: queued, running, completed, failed, canceled.
# Model execution # Once your model is deployed, you can submit Decision Optimization jobs to this deployment\. You can submit jobs specifying the: <!-- <ul> --> * **Input data**: the transaction data used as input by the model\. This can be inline or referenced\. * **Output data**: to define how the output data is generated by the model\. This is returned as inline or referenced data\. * **Solve parameters**: to customize the behavior of the solution engine\. <!-- </ul> --> For more information, see [Model input and output data adaptation](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIODataDefn.html#topic_modelIOAdapt)\. After submitting a job, you can use the job\-id to poll the job status to collect the: <!-- <ul> --> * Job execution status or error message * Solve execution status, progress, and log tail * Inline or referenced output data <!-- </ul> --> **Job states** can be: `queued`, `running`, `completed`, `failed`, `canceled`\. <!-- </article "role="article" "> -->
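As an illustration, submitting a job and then polling it might look like the following sketch (the deployment_jobs endpoint follows the Watson Machine Learning v4 REST API, but the exact payload schema is an assumption to check against the API reference; placeholders follow the conventions used elsewhere in this documentation):

    curl --location --request POST "https://us-south.ml.cloud.ibm.com/ml/v4/deployment_jobs?version=2021-12-01&space_id=SPACE-ID-HERE" \
    -H "Authorization: bearer TOKEN-HERE" \
    -H "Content-Type: application/json" \
    -d '{"deployment": {"id": "DEPLOYMENT-ID-HERE"}, "decision_optimization": {"solve_parameters": {"oaas.logTailEnabled": "true"}, "input_data": [{"id": "diet_food.csv", "fields": ["name","unit_cost","qmin","qmax"], "values": [["Roasted Chicken", 0.84, 0, 10]]}], "output_data": [{"id": "solution.csv"}]}}'

    curl --location --request GET "https://us-south.ml.cloud.ibm.com/ml/v4/deployment_jobs/JOB-ID-HERE?version=2021-12-01&space_id=SPACE-ID-HERE" \
    -H "Authorization: bearer TOKEN-HERE"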
E9E9556CA0C7B258D910BB31222A78BEABB46A48
https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIODataDefn.html?context=cdpaas&locale=en
Decision Optimization model input and output data
Model input and output data adaptation When submitting your job you can include your data inline or reference your data in your request. This data will be mapped to a file named with the data identifier and used by the model. The data identifier extension will define the format of the file used. The following adaptations are supported: * Tabular inline data to embed your data in your request. For example: "input_data": [{ "id":"diet_food.csv", "fields" : ["name","unit_cost","qmin","qmax"], "values" : [ ["Roasted Chicken", 0.84, 0, 10] ] }] This will generate the corresponding diet_food.csv file that is used as the model input file. Only csv adaptation is currently supported. * Inline data, that is, non-tabular data (such as an OPL .dat file or an .lp file) to embed data in your request. For example: "input_data": [{ "id":"diet_food.csv", "content":"Input data as a base64 encoded string" }] * URL referenced data allowing you to reference files stored at a particular URL or REST data service. For example: "input_data_references": { "type": "url", "id": "diet_food.csv", "connection": { "verb": "GET", "url": "https://myserver.com/diet_food.csv", "headers": { "Content-Type": "application/x-www-form-urlencoded" } }, "location": {} } This will copy the corresponding diet_food.csv file that is used as the model input file. * Data assets allowing you to reference any data asset or connected data asset present in your space and benefit from the data connector integration capabilities. For example: "input_data_references": [{ "name": "test_ref_input", "type": "data_asset", "connection": {}, "location": { "href": "/v2/assets/ASSET-ID?space_id=SPACE-ID" } }], "output_data_references": [{ "type": "data_asset", "connection": {}, "location": { "href": "/v2/assets/ASSET-ID?space_id=SPACE-ID" } }] With this data asset type there are many different connections available. For more information, see [Batch deployment details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html#do). * Connection assets allowing you to reference any data and then refer to the connection, without having to specify credentials each time. For more information, see [Supported data sources in Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DOconnections.html). Referencing a secure connection without having to use inline credentials in the payload also offers you better security. For more information, see [Example connection_asset payload](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html#connection_asset_payload). For example, to connect to a COS/S3 via a Connection asset: { "type" : "connection_asset", "id" : "diet_food.csv", "connection" : { "id" : <connection_guid> }, "location" : { "file_name" : "FILENAME.csv", "bucket" : "BUCKET-NAME" } } For information about the parameters used in these examples, see [Deployment job definitions](https://cloud.ibm.com/apidocs/machine-learning-cp#deployment-job-definitions-create). Another example showing you how to connect to a DB2 asset via a connection asset: { "type" : "connection_asset", "id" : "diet_food.csv", "connection" : { "id" : <connection_guid> }, "location" : { "table_name" : "TABLE-NAME", "schema_name" : "SCHEMA-NAME" } } With this connection asset type there are many different connections available. For more information, see [Batch deployment details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html#do). 
You can combine different adaptations in the same request. For more information about data definitions, see [Adding data to an analytics project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html).
# Model input and output data adaptation # When submitting your job you can include your data inline or reference your data in your request\. This data will be mapped to a file named with the data identifier and used by the model\. The data identifier extension will define the format of the file used\. The following adaptations are supported: <!-- <ul> --> * **Tabular inline data** to embed your data in your request\. For example: "input_data": [{ "id":"diet_food.csv", "fields" : ["name","unit_cost","qmin","qmax"], "values" : [ ["Roasted Chicken", 0.84, 0, 10] ] }] This will generate the corresponding `diet_food.csv` file that is used as the model input file. Only csv adaptation is currently supported. * **Inline data**, that is, non\-tabular data (such as an OPL `.dat` file or an `.lp` file) to embed data in your request\. For example: "input_data": [{ "id":"diet_food.csv", "content":"Input data as a base64 encoded string" }] * **URL** referenced data allowing you to reference files stored at a particular URL or REST data service\. For example: "input_data_references": { "type": "url", "id": "diet_food.csv", "connection": { "verb": "GET", "url": "https://myserver.com/diet_food.csv", "headers": { "Content-Type": "application/x-www-form-urlencoded" } }, "location": {} } This will copy the corresponding `diet_food.csv` file that is used as the model input file. * **Data assets** allowing you to reference any data asset or connected data asset present in your space and benefit from the data connector integration capabilities\. For example: "input_data_references": [{ "name": "test_ref_input", "type": "data_asset", "connection": {}, "location": { "href": "/v2/assets/ASSET-ID?space_id=SPACE-ID" } }], "output_data_references": [{ "type": "data_asset", "connection": {}, "location": { "href": "/v2/assets/ASSET-ID?space_id=SPACE-ID" } }] With this data asset type there are many different connections available. For more information, see [Batch deployment details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html#do). * **Connection assets** allowing you to reference any data and then refer to the connection, without having to specify credentials each time\. For more information, see [Supported data sources in Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DOconnections.html)\. Referencing a secure connection without having to use inline credentials in the payload also offers you better security\. For more information, see [Example connection\_asset payload](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html#connection_asset_payload)\. For example, to connect to a COS/S3 via a Connection asset: { "type" : "connection_asset", "id" : "diet_food.csv", "connection" : { "id" : <connection_guid> }, "location" : { "file_name" : "FILENAME.csv", "bucket" : "BUCKET-NAME" } } For information about the parameters used in these examples, see [Deployment job definitions](https://cloud.ibm.com/apidocs/machine-learning-cp#deployment-job-definitions-create). Another example showing you how to connect to a DB2 asset via a connection asset: { "type" : "connection_asset", "id" : "diet_food.csv", "connection" : { "id" : <connection_guid> }, "location" : { "table_name" : "TABLE-NAME", "schema_name" : "SCHEMA-NAME" } } <!-- </ul> --> With this connection asset type there are many different connections available\. 
For more information, see [Batch deployment details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html#do)\. You can combine different adaptations in the same request\. For more information about data definitions, see [Adding data to an analytics project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html)\. <!-- </article "role="article" "> -->
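For instance, a single request can mix inline tabular input with a referenced output (a sketch that reuses the field shapes from the examples above; the identifiers are illustrative):

    "input_data": [{
        "id": "diet_food.csv",
        "fields": ["name","unit_cost","qmin","qmax"],
        "values": [["Roasted Chicken", 0.84, 0, 10]]
    }],
    "output_data_references": [{
        "type": "data_asset",
        "connection": {},
        "location": { "href": "/v2/assets/ASSET-ID?space_id=SPACE-ID" }
    }]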
977988398EFBDCD10DB4ACED047D8D864883614A
https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIOFileFormats.html?context=cdpaas&locale=en
Decision Optimization model input and output data file formats
Model input and output data file formats With your Decision Optimization model, you can use the following input and output data identifiers and extension combinations. This table shows the supported file type combinations for Decision Optimization in Watson Machine Learning: Model type Input file type Output file type Comments cplex .lp <br>.mps <br>.sav <br>.feasibility <br>.prm<br><br>.jar for Java™ <br>models .xml <br>.json <br><br>The name of the output file must be solution The output format can be specified by using the API.<br><br>Files of type .lp, .mps, and .sav can be compressed by using gzip or bzip2, and uploaded as, for example, .lp.gz or .sav.bz2.<br><br>The schemas for the CPLEX formats for solutions, conflicts, and feasibility files are available for you to download in the cplex_xsds.zip archive from the [Decision Optimization github](https://github.com/IBMDecisionOptimization/DO-Samples/blob/watson_studio_cloud/resources/cplex_xsds.zip). cpo .cpo<br><br>.jar for Java <br>models .xml <br>.json <br><br>The name of the output file must be solution The output format can be specified by using the solve parameter.<br><br>For the native file format for CPO models, see: [CP Optimizer file format syntax](https://www.ibm.com/docs/en/icos/20.1.0?topic=manual-cp-optimizer-file-format-syntax). opl .mod <br>.dat <br>.oplproject <br>.xls <br>.json <br>.csv<br><br>.jar for Java <br>models .xml <br>.json <br>.txt <br>.csv <br>.xls The output format is consistent with the input type but can be specified by using the solve parameter if needed. To take advantage of data connectors, use the .csv format.<br><br>Only models that are defined with tuple sets can be deployed; other OPL structures are not supported.<br><br>To read and write input and output in OPL, see [OPL models](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/OPLmodels.html#topic_oplmodels). docplex .py <br>*.* (input data) Any output file type that is specified in the model. Any format can be used in your Python code, but to take advantage of data connectors, use the .csv format.<br><br>To read and write input and output in Python, use the commands get_input_stream("filename") and get_output_stream("filename"). See [DOcplex API sum example](https://ibmdecisionoptimization.github.io/docplex-doc/2.23.222/mp/docplex.util.environment.html) Data identifier restrictions : A file name has the following restrictions: * Is limited to 255 characters * Can include only ASCII characters * Cannot include the characters /\?%*:|"<>, the space character, or the null character * Cannot include _ as the first character
# Model input and output data file formats # With your Decision Optimization model, you can use the following input and output data identifiers and extension combinations\. This table shows the supported file type combinations for Decision Optimization in Watson Machine Learning: <!-- <table "summary="" id="topic_modelIOFileFormats__simpletable_iys_hnq_fhb" class="defaultstyle" "> --> | Model type | Input file type | Output file type | Comments | | ------------- | -------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | **`cplex`** | `.lp` <br>`.mps` <br>`.sav` <br>`.feasibility` <br>`.prm`<br><br>`.jar` for Java™ <br>models | `.xml` <br>`.json` <br><br>The name of the output file must be **solution** | The output format can be specified by using the API\.<br><br>Files of type `.lp`, `.mps`, and `.sav` can be compressed by using `gzip` or `bzip2`, and uploaded as, for example, `.lp.gz` or `.sav.bz2`\.<br><br>The schemas for the CPLEX formats for solutions, conflicts, and feasibility files are available for you to download in the cplex\_xsds\.zip archive from the [Decision Optimization github](https://github.com/IBMDecisionOptimization/DO-Samples/blob/watson_studio_cloud/resources/cplex_xsds.zip)\. | | **`cpo`** | `.cpo`<br><br>`.jar` for Java <br>models | `.xml` <br>`.json` <br><br>The name of the output file must be **solution** | The output format can be specified by using the solve parameter\.<br><br>For the native file format for CPO models, see: [CP Optimizer file format syntax](https://www.ibm.com/docs/en/icos/20.1.0?topic=manual-cp-optimizer-file-format-syntax)\. | | **`opl`** | `.mod` <br>`.dat` <br>`.oplproject` <br>`.xls` <br>`.json` <br>`.csv`<br><br>`.jar` for Java <br>models | `.xml` <br>`.json` <br>`.txt` <br>`.csv` <br>`.xls` | The output format is consistent with the input type but can be specified by using the solve parameter if needed\. To take advantage of data connectors, use the `.csv` format\.<br><br>Only models that are defined with tuple sets can be deployed; other OPL structures are not supported\.<br><br>To read and write input and output in OPL, see [OPL models](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/OPLmodels.html#topic_oplmodels)\. | | **`docplex`** | `.py` <br>`*.*` (input data) | Any output file type that is specified in the model\. | Any format can be used in your Python code, but to take advantage of data connectors, use the `.csv` format\.<br><br>To read and write input and output in Python, use the commands `get_input_stream("filename")` and `get_output_stream("filename")`\. 
See [DOcplex API sum example](https://ibmdecisionoptimization.github.io/docplex-doc/2.23.222/mp/docplex.util.environment.html) | <!-- </table "summary="" id="topic_modelIOFileFormats__simpletable_iys_hnq_fhb" class="defaultstyle" "> --> Data identifier restrictions : A file name has the following restrictions: <!-- <ul> --> * Is limited to 255 characters * Can include only ASCII characters * Cannot include the characters `/\?%*:|"<>`, the space character, or the null character * Cannot include \_ as the first character <!-- </ul> --> <!-- </article "role="article" "> -->
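To illustrate the docplex row of the table, here is a minimal main.py sketch (the file names are illustrative and the model-building step is elided) that reads a .csv input and writes a solution.csv output through the environment streams mentioned above:

    import pandas as pd
    from docplex.util.environment import get_environment

    env = get_environment()

    # Read the input attachment named diet_food.csv into a DataFrame
    with env.get_input_stream("diet_food.csv") as in_stream:
        food = pd.read_csv(in_stream)

    # ... build and solve your docplex model here ...

    # Write the result table back as the solution.csv output attachment
    with env.get_output_stream("solution.csv") as out_stream:
        out_stream.write(food.to_csv(index=False).encode("utf-8"))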
D476F3E93D23F52EF1D5079343D92DB793E3AD5E
https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/OutputDataDefn.html?context=cdpaas&locale=en
Decision Optimization output data definition
Output data definition When submitting your job you can define what output data you want and how you collect it (as either inline or referenced data). For more information about output file types and names, see [Model input and output data file formats](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIOFileFormats.html#topic_modelIOFileFormats). Some output data definition examples: * To collect solution.csv output as inline data: "output_data": [{ "id":"solution.csv" }] * A regexp can also be used as an identifier. For example, to collect all csv output files as inline data: "output_data": [{ "id":".*\.csv" }] * Similarly for reference data, to collect all csv files in COS/S3 in a job-specific folder, you can combine a regexp with the ${job_id} and ${attachment_name} placeholders: "output_data_references": [{ "id":".*\.csv", "type": "connection_asset", "connection": { "id" : <connection_guid> }, "location": { "bucket": "XXXXXXXXX", "path": "${job_id}/${attachment_name}" } }] For example, if a job with identifier <XXXXXXXXX> generates a solution.csv file, your COS/S3 bucket will contain a XXXXXXXXX/solution.csv file.
# Output data definition # When submitting your job you can define what output data you want and how you collect it (as either inline or referenced data)\. For more information about output file types and names, see [Model input and output data file formats](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIOFileFormats.html#topic_modelIOFileFormats)\. Some output data definition examples: <!-- <ul> --> * To collect solution\.csv output as inline data: "output_data": [{ "id":"solution.csv" }] * A regexp can also be used as an identifier\. For example, to collect all csv output files as inline data: "output_data": [{ "id":".*\.csv" }] * Similarly for reference data, to collect all csv files in COS/S3 in a job\-specific folder, you can combine a regexp with the $\{job\_id\} and $\{attachment\_name\} placeholders: "output_data_references": [{ "id":".*\.csv", "type": "connection_asset", "connection": { "id" : <connection_guid> }, "location": { "bucket": "XXXXXXXXX", "path": "${job_id}/${attachment_name}" } }] For example, if a job with identifier <XXXXXXXXX> generates a solution.csv file, your COS/S3 bucket will contain a XXXXXXXXX/solution.csv file. <!-- </ul> --> <!-- </article "role="article" "> -->
693BC91EAADEAE664982AA88A372590A6758F294
https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/Paralleljobs.html?context=cdpaas&locale=en
Decision Optimization running jobs
Running jobs Decision Optimization uses Watson Machine Learning asynchronous APIs to enable jobs to be run in parallel. To solve a problem, you can create a new job from the model deployment and associate data to it. See [Deployment steps](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployIntro.html#topic_wmldeployintro) and the [REST API example](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployModelRest.html#task_deploymodelREST). You are not charged for deploying a model. Only the solving of a model with some data is charged, based on the running time. To solve more than one job at a time, specify more than one node when you create your deployment. For example, in this [REST API example](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployModelRest.html#task_deploymodelREST__createdeploy), increase the number of nodes by changing the value of the nodes property: "nodes" : 1. 1. The new job is sent to the queue. 2. If a POD is started but idle (not running a job), it immediately begins processing this job. 3. Otherwise, if the maximum number of nodes is not reached, a new POD is started. (Starting a POD can take a few seconds.) The job is then assigned to this new POD for processing. 4. Otherwise, the job waits in the queue until one of the running PODs has finished and can pick up the waiting job. The configuration of PODs of each size is as follows: Table 1. T-shirt sizes for Decision Optimization Definition Name Description 2 vCPU and 8 GB S Small 4 vCPU and 16 GB M Medium 8 vCPU and 32 GB L Large 16 vCPU and 64 GB XL Extra Large For all configurations, 1 vCPU and 512 MB are reserved for internal use. In addition to the solve time, the pricing depends on the selected size through a multiplier. In the deployment configuration, you can also set the maximum number of nodes to be used. Idle PODs are automatically stopped after some timeout. If a new job is submitted when no PODs are up, it takes some time (approximately 30 seconds) for the POD to restart.
# Running jobs # Decision Optimization uses Watson Machine Learning asynchronous APIs to enable jobs to be run in parallel\. To solve a problem, you can create a new job from the model deployment and associate data to it\. See [Deployment steps](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployIntro.html#topic_wmldeployintro) and the [REST API example](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployModelRest.html#task_deploymodelREST)\. You are not charged for deploying a model\. Only the solving of a model with some data is charged, based on the running time\. To solve **more than one job** at a time, specify more than one node when you create your deployment\. For example, in this [REST API example](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployModelRest.html#task_deploymodelREST__createdeploy), increase the **number of nodes** by changing the value of the nodes property: `"nodes" : 1`\. <!-- <ol> --> 1. The new job is sent to the queue\. 2. If a POD is started but idle (not running a job), it immediately begins processing this job\. 3. Otherwise, if the maximum number of nodes is not reached, a new POD is started\. (Starting a POD can take a few seconds\.) The job is then assigned to this new POD for processing\. 4. Otherwise, the job waits in the queue until one of the running PODs has finished and can pick up the waiting job\. <!-- </ol> --> The configuration of PODs of each size is as follows: <!-- <table "summary="" id="topic_paralleljobs__table_etc_n5v_f5b" class="defaultstyle" "> --> Table 1\. T\-shirt sizes for Decision Optimization | Definition | Name | Description | | ----------------- | ---- | ----------- | | 2 vCPU and 8 GB | S | Small | | 4 vCPU and 16 GB | M | Medium | | 8 vCPU and 32 GB | L | Large | | 16 vCPU and 64 GB | XL | Extra Large | <!-- </table "summary="" id="topic_paralleljobs__table_etc_n5v_f5b" class="defaultstyle" "> --> For all configurations, 1 vCPU and 512 MB are reserved for internal use\. In addition to the solve time, the pricing depends on the selected size through a multiplier\. In the deployment configuration, you can also set the maximum number of nodes to be used\. Idle PODs are automatically stopped after some timeout\. If a new job is submitted when no PODs are up, it takes some time (approximately 30 seconds) for the POD to restart\. <!-- </article "role="article" "> -->
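As an illustration, the deployment fragment below selects the small configuration and allows two jobs to run in parallel. This is a sketch only: the nodes property and its placement next to the hardware specification follow the REST API example linked above, and the rest of the deployment payload is omitted.

    "hardware_spec" : { "name" : "S" },
    "nodes" : 2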
73DEFA42948BBE878834CA4B7C9B0395F44B9B90
https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/UpdateDeployModelRest.html?context=cdpaas&locale=en
Decision Optimization REST API changing Python version in deployed model
Changing Python version for an existing deployed model with the REST API You can update an existing Decision Optimization model using the Watson Machine Learning REST API. This can be useful, for example, if in your model you have explicitly specified a Python version that has now become deprecated. Procedure To change Python version for an existing deployed model: 1. Create a revision of your Decision Optimization model. All API requests require a version parameter that takes a date in the format version=YYYY-MM-DD. This code example creates a model revision by using the file revise_model.json. The URL will vary according to the chosen region/location for your machine learning service. curl --location --request POST "https://us-south.ml.cloud.ibm.com/ml/v4/models/MODEL-ID-HERE/revisions?version=2021-12-01" -H "Authorization: bearer TOKEN-HERE" -H "Content-Type: application/json" -d @revise_model.json The revise_model.json file contains the following code: { "commit_message": "Save current model", "space_id": "SPACE-ID-HERE" } Note the model revision number "rev" that is provided in the output for use in the next step. 2. Update an existing deployment so that current jobs will not be impacted: curl --location --request PATCH "https://us-south.ml.cloud.ibm.com/ml/v4/deployments/DEPLOYMENT-ID-HERE?version=2021-12-01&space_id=SPACE-ID-HERE" -H "Authorization: bearer TOKEN-HERE" -H "Content-Type: application/json" -d @revise_deploy.json The revise_deploy.json file contains the following code: [ { "op": "add", "path": "/asset", "value": { "id":"MODEL-ID-HERE", "rev":"MODEL-REVISION-NUMBER-HERE" } } ] 3. Patch an existing model to explicitly specify Python version 3.10: curl --location --request PATCH "https://us-south.ml.cloud.ibm.com/ml/v4/models/MODEL-ID-HERE?rev=MODEL-REVISION-NUMBER-HERE&version=2021-12-01&space_id=SPACE-ID-HERE" -H "Authorization: bearer TOKEN-HERE" -H "Content-Type: application/json" -d @update_model.json The update_model.json file, with the default Python version stated explicitly, contains the following code: [ { "op": "add", "path": "/custom", "value": { "decision_optimization":{ "oaas.docplex.python": "3.10" } } } ] Alternatively, to remove any explicit mention of a Python version so that the default version will always be used: [ { "op": "remove", "path": "/custom/decision_optimization" } ] 4. Patch the deployment to use the model that was updated to use Python version 3.10: curl --location --request PATCH "https://us-south.ml.cloud.ibm.com/ml/v4/deployments/DEPLOYMENT-ID-HERE?version=2021-12-01&space_id=SPACE-ID-HERE" -H "Authorization: bearer TOKEN-HERE" -H "Content-Type: application/json" -d @update_deploy.json The update_deploy.json file contains the following code: [ { "op": "add", "path": "/asset", "value": { "id":"MODEL-ID-HERE"} } ]
# Changing Python version for an existing deployed model with the REST API # You can update an existing Decision Optimization model using the Watson Machine Learning REST API\. This can be useful, for example, if in your model you have explicitly specified a Python version that has now become deprecated\. ## Procedure ## To change Python version for an existing deployed model: <!-- <ol> --> 1. Create a **revision of your Decision Optimization model**\. All API requests require a version parameter that takes a date in the format `version=YYYY-MM-DD`. This code example creates a model revision by using the file `revise_model.json`. The URL will vary according to the chosen region/location for your machine learning service. curl --location --request POST \ "https://us-south.ml.cloud.ibm.com/ml/v4/models/MODEL-ID-HERE/revisions?version=2021-12-01" \ -H "Authorization: bearer TOKEN-HERE" \ -H "Content-Type: application/json" \ -d @revise_model.json The revise\_model.json file contains the following code: { "commit_message": "Save current model", "space_id": "SPACE-ID-HERE" } Note the model revision number "`rev`" that is provided in the output for use in the next step. 2. Update an existing deployment so that current jobs will not be impacted: curl --location --request PATCH \ "https://us-south.ml.cloud.ibm.com/ml/v4/deployments/DEPLOYMENT-ID-HERE?version=2021-12-01&space_id=SPACE-ID-HERE" \ -H "Authorization: bearer TOKEN-HERE" \ -H "Content-Type: application/json" \ -d @revise_deploy.json The revise\_deploy.json file contains the following code: [ { "op": "add", "path": "/asset", "value": { "id":"MODEL-ID-HERE", "rev":"MODEL-REVISION-NUMBER-HERE" } } ] 3. Patch an existing model to explicitly specify Python version 3\.10: curl --location --request PATCH \ "https://us-south.ml.cloud.ibm.com/ml/v4/models/MODEL-ID-HERE?rev=MODEL-REVISION-NUMBER-HERE&version=2021-12-01&space_id=SPACE-ID-HERE" \ -H "Authorization: bearer TOKEN-HERE" \ -H "Content-Type: application/json" \ -d @update_model.json The update\_model.json file, with the default *Python version* stated explicitly, contains the following code: [ { "op": "add", "path": "/custom", "value": { "decision_optimization":{ "oaas.docplex.python": "3.10" } } } ] Alternatively, to remove any explicit mention of a Python version so that the default version will always be used: [ { "op": "remove", "path": "/custom/decision_optimization" } ] 4. Patch the deployment to use the model that was updated to use Python version 3\.10: curl --location --request PATCH \ "https://us-south.ml.cloud.ibm.com/ml/v4/deployments/DEPLOYMENT-ID-HERE?version=2021-12-01&space_id=SPACE-ID-HERE" \ -H "Authorization: bearer TOKEN-HERE" \ -H "Content-Type: application/json" \ -d @update_deploy.json The update\_deploy.json file contains the following code: [ { "op": "add", "path": "/asset", "value": { "id":"MODEL-ID-HERE"} } ] <!-- </ol> --> <!-- </article "role="article" "> -->
1BB1684259F93D91580690D898140D98F12611ED
https://dataplatform.cloud.ibm.com/docs/content/DO/wml_cpd_home.html?context=cdpaas&locale=en
Deploying Decision Optimization models
Decision Optimization When you have created and solved your Decision Optimization models, you can deploy them using Watson Machine Learning. See the [Decision Optimization experiment UI](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/buildingmodels.html#topic_buildingmodels) for building and solving models. The following sections describe how you can deploy your models.
# Decision Optimization # When you have created and solved your Decision Optimization models, you can deploy them using Watson Machine Learning\. See the [Decision Optimization experiment UI](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/buildingmodels.html#topic_buildingmodels) for building and solving models\. The following sections describe how you can deploy your models\. <!-- </article "role="article" "> -->
A255BB890CA287C5A91765B71832DAA45BA4132B
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_appearance_tab.html?context=cdpaas&locale=en
Global visualization preferences
Global visualization preferences You can override the default settings for titles, range slider, grid lines, and mouse tracking. You can also specify a different color scheme template. 1. In Visualizations, click the Global visualization preferences control in the Actions section. The Global visualization preferences dialog provides the following settings. Titles : Provides global chart title settings. Global titles : Enables or disables the global titles for all charts. Global primary title : Enables or disables the display of global, primary chart titles. When enabled, the top-level chart title that you enter here is applied to all charts, effectively overriding each chart's individual Primary title setting. Global subtitle : Enables or disables the display of global chart subtitles. When enabled, the chart subtitle that you enter here is applied to all charts, effectively overriding each chart's individual Subtitle setting. Default titles : Enables or disables the default titles for all charts. Title alignment : Provides the title alignment options Left, Center (the default setting), and Right. Tools : Provides options that control chart behavior. Range slider : Enables or disables the range slider for each chart. When enabled, you can control the amount of chart data that displays with a range slider that is provided for each chart. Grid lines : Controls the display of X axis (vertical) and Y axis (horizontal) grid lines. Mouse tracker : When enabled, the mouse cursor location, in relation to the chart data, is tracked and displayed when the cursor is placed anywhere over the chart. Toolbox : Enables or disables the toolbox for each chart. Depending on the chart type, the toolbox on the right of the screen provides tools such as zoom, save as image, restore, select data, and clear selection. ARIA : When enabled, web content and web applications are more accessible to users with disabilities. Filter out null : Enables or disables the filtering of null chart data. X axis on zero : When enabled, the X axis lies on the other axis's origin position. When not enabled, the X axis always starts at 0. Y axis on zero : When enabled, the Y axis lies on the other axis's origin position. When not enabled, the Y axis always starts at 0. Show xAxis Label : Enables or disables the xAxis label. Show yAxis Label : Enables or disables the yAxis label. Show xAxis Line : Enables or disables the xAxis line. Show yAxis Line : Enables or disables the yAxis line. Show xAxis Name : Enables or disables the xAxis name. Show yAxis Name : Enables or disables the yAxis name. yAxis Name Location : The drop-down list provides options for specifying the yAxis name location. Options include Start, Middle, and End. Truncation length : The specified value sets the string length. Strings that are longer than the specified length are truncated. The default value is 10. When 0 is specified, truncation is turned off. xAxis tick label decimal : Sets the tick label decimal value for the xAxis. The default value is 3. yAxis tick label decimal : Sets the tick label decimal value for the yAxis. The default value is 3. xAxis tick label rotate : Sets the xAxis tick label rotation value. The default value is 0 (no rotation). You can specify a value in the range -90 to 90 degrees. Theme : Select a template to change the colors that are used in charts that have a grouping or stacking variable. Any element attributes defined in the selected template file override the default template settings for those element attributes. 2. 
Click Apply to save your settings or Cancel to disregard the changes.
# Global visualization preferences # You can override the default settings for titles, range slider, grid lines, and mouse tracking\. You can also specify a different color scheme template\. <!-- <ol> --> 1. In Visualizations, click the Global visualization preferences control in the Actions section\. The Global visualization preferences dialog provides the following settings. Titles : Provides global chart title settings. Global titles : Enables or disables the global titles for all charts. Global primary title : Enables or disables the display of global, primary chart titles. When enabled, the top-level chart title that you enter here is applied to all charts, effectively overriding each chart's individual Primary title setting. Global subtitle : Enables or disables the display of global chart subtitles. When enabled, the chart subtitle that you enter here is applied to all charts, effectively overriding each chart's individual Subtitle setting. Default titles : Enables or disables the default titles for all charts. Title alignment : Provides the title alignment options Left, Center (the default setting), and Right. Tools : Provides options that control chart behavior. Range slider : Enables or disables the range slider for each chart. When enabled, you can control the amount of chart data that displays with a range slider that is provided for each chart. Grid lines : Controls the display of X axis (vertical) and Y axis (horizontal) grid lines. Mouse tracker : When enabled, the mouse cursor location, in relation to the chart data, is tracked and displayed when the cursor is placed anywhere over the chart. Toolbox : Enables or disables the toolbox for each chart. Depending on the chart type, the toolbox on the right of the screen provides tools such as zoom, save as image, restore, select data, and clear selection. ARIA : When enabled, web content and web applications are more accessible to users with disabilities. Filter out null : Enables or disables the filtering of null chart data. X axis on zero : When enabled, the X axis lies on the other axis's origin position. When not enabled, the X axis always starts at 0. Y axis on zero : When enabled, the Y axis lies on the other axis's origin position. When not enabled, the Y axis always starts at 0. Show xAxis Label : Enables or disables the xAxis label. Show yAxis Label : Enables or disables the yAxis label. Show xAxis Line : Enables or disables the xAxis line. Show yAxis Line : Enables or disables the yAxis line. Show xAxis Name : Enables or disables the xAxis name. Show yAxis Name : Enables or disables the yAxis name. yAxis Name Location : The drop-down list provides options for specifying the yAxis name location. Options include Start, Middle, and End. Truncation length : The specified value sets the string length. Strings that are longer than the specified length are truncated. The default value is 10. When 0 is specified, truncation is turned off. xAxis tick label decimal : Sets the tick label decimal value for the xAxis. The default value is 3. yAxis tick label decimal : Sets the tick label decimal value for the yAxis. The default value is 3. xAxis tick label rotate : Sets the xAxis tick label rotation value. The default value is 0 (no rotation). You can specify a value in the range -90 to 90 degrees. Theme : Select a template to change the colors that are used in charts that have a grouping or stacking variable. Any element attributes defined in the selected template file override the default template settings for those element attributes. 2. 
Click Apply to save your settings or Cancel to disregard the changes\. <!-- </ol> --> <!-- </article "role="article" "> -->
5D043091B2F2398611A819743FC83688D7658B22
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_create_layout.html?context=cdpaas&locale=en
Visualizations layout and terms
Visualizations layout and terms Canvas : The canvas is the area of the Visualizations dialog where you build the chart. Chart type : Lists the available chart types. The graphic elements are the items in the chart that represent data (bars, points, lines, and so on). Details pane : The Details pane provides the basic chart building blocks. Chart settings : Provides options for selecting which variables are used to build the chart, distribution method, title and subtitle fields, and so on. Depending on the selected chart type, the Details pane options might vary. For more information, see [Chart types](https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_charttypes.html). Actions : Provides options for downloading chart configuration files, downloading charts as image files, resetting charts, and setting the global chart preferences.
# Visualizations layout and terms # Canvas : The canvas is the area of the Visualizations dialog where you build the chart\. Chart type : Lists the available chart types\. The graphic elements are the items in the chart that represent data (bars, points, lines, and so on)\. Details pane : The Details pane provides the basic chart building blocks\. Chart settings : Provides options for selecting which variables are used to build the chart, distribution method, title and subtitle fields, and so on. Depending on the selected chart type, the Details pane options might vary. For more information, see [Chart types](https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_charttypes.html). Actions : Provides options for downloading chart configuration files, downloading charts as image files, resetting charts, and setting the global chart preferences\. <!-- </article "role="article" "> -->
9F5D44B3A96F8418BE317AD258E4932E468551BE
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_3d.html?context=cdpaas&locale=en
3D charts
3D charts 3D charts are commonly used to represent multiple-variable functions and include a z-axis variable that is a function of both the x and y-axis variables.
# 3D charts # 3D charts are commonly used to represent multiple\-variable functions and include a z\-axis variable that is a function of both the x and y\-axis variables\. <!-- </article "role="article" "> -->
823EB607207DFD62D80671AF48451CCE1C44153F
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_barcharts.html?context=cdpaas&locale=en
Bar charts
Bar charts Bar charts are useful for summarizing categorical variables. For example, you can use a bar chart to show the number of men and the number of women who participated in a survey. You can also use a bar chart to show the mean salary for men and the mean salary for women.
# Bar charts # Bar charts are useful for summarizing categorical variables\. For example, you can use a bar chart to show the number of men and the number of women who participated in a survey\. You can also use a bar chart to show the mean salary for men and the mean salary for women\. <!-- </article "role="article" "> -->
BECCA4C839A0BCF01ADCB6A5CE31A3B1168D3548
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_boxplots.html?context=cdpaas&locale=en
Box plots
Box plots A box plot chart shows the five statistics (minimum, first quartile, median, third quartile, and maximum). It is useful for displaying the distribution of a scale variable and pinpointing outliers.
# Box plots # A box plot chart shows the five statistics (minimum, first quartile, median, third quartile, and maximum)\. It is useful for displaying the distribution of a scale variable and pinpointing outliers\. <!-- </article "role="article" "> -->
5466D9A71E87BB01000DC957683E9CD3C10AD8BC
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_bubble.html?context=cdpaas&locale=en
Bubble charts
Bubble charts Bubble charts display categories in your groups as nonhierarchical packed circles. The size of each circle (bubble) is proportional to its value. Bubble charts are useful for comparing relationships in your data.
# Bubble charts # Bubble charts display categories in your groups as nonhierarchical packed circles\. The size of each circle (bubble) is proportional to its value\. Bubble charts are useful for comparing relationships in your data\. <!-- </article "role="article" "> -->
F7D94E6CD13F36EB9B1FE7653C436DC5745250B1
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_candlestick.html?context=cdpaas&locale=en
Candlestick charts
Candlestick charts Candlestick charts are a style of financial charts that are used to describe price movements of a security, derivative, or currency. Each candlestick element typically shows one day. A one-month chart might show the 20 trading days as 20 candlestick elements. Candlestick charts are most often used in the analysis of equity and currency price patterns and are similar to box plots. The data set that is used to create a candlestick chart must contain open, high, low, and close values for each time period you want to display.
# Candlestick charts # Candlestick charts are a style of financial charts that are used to describe price movements of a security, derivative, or currency\. Each candlestick element typically shows one day\. A one\-month chart might show the 20 trading days as 20 candlestick elements\. Candlestick charts are most often used in the analysis of equity and currency price patterns and are similar to box plots\. The data set that is used to create a candlestick chart must contain open, high, low, and close values for each time period you want to display\. <!-- </article "role="article" "> -->
2C9D0D0309E01FF2EE0D298A16011857DE068038
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_charttypes.html?context=cdpaas&locale=en
Chart types
Chart types The gallery contains a collection of the most commonly used charts.
# Chart types # The gallery contains a collection of the most commonly used charts\. <!-- </article "role="article" "> -->
035430AFAC1E73483636073C5BF48BCF8B4F5E1D
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_circlepacking.html?context=cdpaas&locale=en
Circle packing charts
Circle packing charts Circle packing charts display hierarchical data as a set of nested areas to visualize a large amount of hierarchically structured data. It's similar to a treemap, but uses circles instead of rectangles. Circle packing charts use containment (nesting) to display hierarchy data.
# Circle packing charts # Circle packing charts display hierarchical data as a set of nested areas to visualize a large amount of hierarchically structured data\. It's similar to a treemap, but uses circles instead of rectangles\. Circle packing charts use containment (nesting) to display hierarchy data\. <!-- </article "role="article" "> -->
49724D4B7690D4B215FE6F1C0A49C8B347F0C9A1
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_customize.html?context=cdpaas&locale=en
Custom charts
Custom charts The custom charts option provides options for pasting or editing JSON code to create the chart that you want.
# Custom charts # The custom charts option provides options for pasting or editing JSON code to create the chart that you want\. <!-- </article "role="article" "> -->
91B834E69C2153740973C59CF6B4D66260640342
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_dendrogram.html?context=cdpaas&locale=en
Dendrogram charts
Dendrogram charts Dendrogram charts are similar to tree charts and are typically used to illustrate a network structure (for example, a hierarchical structure). Dendrogram charts consist of a root node that is connected to subordinate nodes through edges or branches. The last nodes in the hierarchy are called leaves.
# Dendrogram charts # Dendrogram charts are similar to tree charts and are typically used to illustrate a network structure (for example, a hierarchical structure)\. Dendrogram charts consist of a root node that is connected to subordinate nodes through edges or branches\. The last nodes in the hierarchy are called leaves\. <!-- </article "role="article" "> -->
2910B7C4CD65F8E4ADD1607791DD22BED468B61D
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_dualy.html?context=cdpaas&locale=en
Dual Y-axes charts
Dual Y-axes charts A dual Y-axes chart summarizes or plots two Y-axes variables that have different domains. For example, you can plot the number of cases on one axis and the mean salary on another. This chart can also be a mix of different graphic elements so that the dual Y-axes chart encompasses several of the different chart types. Dual Y-axes charts can display the counts as a line and the mean of each category as a bar.
# Dual Y\-axes charts # A dual Y\-axes chart summarizes or plots two Y\-axes variables that have different domains\. For example, you can plot the number of cases on one axis and the mean salary on another\. This chart can also be a mix of different graphic elements so that the dual Y\-axes chart encompasses several of the different chart types\. Dual Y\-axes charts can display the counts as a line and the mean of each category as a bar\. <!-- </article "role="article" "> -->
97492A97F355A95D56BCF768A62CA7FD75718086
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_errorbar.html?context=cdpaas&locale=en
Error bar charts
Error bar charts Error bar charts represent the variability of data and indicate the error (or uncertainty) in a reported measurement. Error bars help determine whether differences are statistically significant. Error bars can also suggest goodness of fit for a specific function.
# Error bar charts # Error bar charts represent the variability of data and indicate the error (or uncertainty) in a reported measurement\. Error bars help determine whether differences are statistically significant\. Error bars can also suggest goodness of fit for a specific function\. <!-- </article "role="article" "> -->
41167E3AD363B416D508B03A300E5ACFAF83F042
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_evaluation.html?context=cdpaas&locale=en
Evaluation charts
Evaluation charts Evaluation charts are similar to histograms or collection graphs. Evaluation charts show how accurate models are in predicting particular outcomes. They work by sorting records based on the predicted value and confidence of the prediction, splitting the records into groups of equal size (quantiles), and then plotting the value of the criterion for each quantile, from highest to lowest. Multiple models are shown as separate lines in the plot. Outcomes are handled by defining a specific value or range of values as a "hit". Hits usually indicate success of some sort (such as a sale to a customer) or an event of interest (such as a specific medical diagnosis). Flag : Output fields are straightforward; hits correspond to true values. Nominal : For nominal output fields, the first value in the set defines a hit. Continuous : For continuous output fields, hits equal values greater than the midpoint of the field's range. Evaluation charts can also be cumulative so that each point equals the value for the corresponding quantile plus all higher quantiles. Cumulative charts usually convey the overall performance of models better, whereas noncumulative charts often excel at indicating particular problem areas for models.
# Evaluation charts # Evaluation charts are similar to histograms or collection graphs\. Evaluation charts show how accurate models are in predicting particular outcomes\. They work by sorting records based on the predicted value and confidence of the prediction, splitting the records into groups of equal size (quantiles), and then plotting the value of the criterion for each quantile, from highest to lowest\. Multiple models are shown as separate lines in the plot\. Outcomes are handled by defining a specific value or range of values as a "hit"\. Hits usually indicate success of some sort (such as a sale to a customer) or an event of interest (such as a specific medical diagnosis)\. Flag : Output fields are straightforward; hits correspond to `true` values\. Nominal : For nominal output fields, the first value in the set defines a hit\. Continuous : For continuous output fields, hits equal values greater than the midpoint of the field's range\. Evaluation charts can also be cumulative so that each point equals the value for the corresponding quantile plus all higher quantiles\. Cumulative charts usually convey the overall performance of models better, whereas noncumulative charts often excel at indicating particular problem areas for models\. <!-- </article "role="article" "> -->
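The quantile construction that evaluation charts are based on can be sketched in a few lines of Python (an illustration of the description above, not the implementation that the product uses):

    import numpy as np

    def quantile_hit_rates(scores, hits, n_quantiles=10, cumulative=True):
        # Sort records by predicted score, highest predictions first
        order = np.argsort(scores)[::-1]
        # Split the sorted hit flags into groups of (nearly) equal size
        groups = np.array_split(np.asarray(hits)[order], n_quantiles)
        rates = np.array([group.mean() for group in groups])
        if cumulative:
            # Each point covers its own quantile plus all higher quantiles
            sizes = np.array([len(group) for group in groups])
            rates = np.cumsum(rates * sizes) / np.cumsum(sizes)
        return rates

    # Example with 0/1 hit flags and noisy prediction scores
    rng = np.random.default_rng(0)
    hits = rng.integers(0, 2, size=1000)
    scores = hits * 0.5 + rng.random(1000) * 0.5
    print(quantile_hit_rates(scores, hits))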
57AB3726FA10435D26878C626F61988F7305B9E8
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_fromgallery.html?context=cdpaas&locale=en
Building a chart from the chart type gallery
Building a chart from the chart type gallery Use the chart type gallery to build charts. The following are the general steps for building a chart from the gallery. 1. In the Chart Type section, select a chart category. A preview version of the selected chart type is shown on the chart canvas. If the canvas already displays a chart, the new chart replaces the chart's axis set and graphic elements. 2. Depending on the selected chart type, the available variables are presented under a number of different headings in the Details pane (for example, Category for bar charts, X-axis and Y-axis for line charts). Select the appropriate variables for the selected chart type. 3. Click the Save visualization to project control to save the visualization to the project. You can also select Create a new asset from the visualization and provide a visualization asset name, description, and chart name. 4. Click Apply to save the visualization to the project. The new visualization asset is now available under the Assets tab.
# Building a chart from the chart type gallery # Use the chart type gallery to build charts\. The following are the general steps for building a chart from the gallery\. <!-- <ol> --> 1. In the Chart Type section, select a chart category\. A preview version of the selected chart type is shown on the chart canvas\. If the canvas already displays a chart, the new chart replaces the chart's axis set and graphic elements\. 2. Depending on the selected chart type, the available variables are presented under a number of different headings in the Details pane (for example, Category for bar charts, X\-axis and Y\-axis for line charts)\. Select the appropriate variables for the selected chart type\. 3. Click the Save visualization to project control to save the visualization to the project\. You can also select Create a new asset from the visualization and provide a visualization asset name, description, and chart name\. 4. Click Apply to save the visualization to the project\. The new visualization asset is now available under the Assets tab\. <!-- </ol> --> <!-- </article "role="article" "> -->
CC0ADF041F1628221CAC49A1BAEC1D497D762DC4
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_heatmap.html?context=cdpaas&locale=en
Heat map charts
Heat map charts Heat map charts present data where the individual values that are contained in a matrix are represented as colors.
# Heat map charts # Heat map charts present data where the individual values that are contained in a matrix are represented as colors\. <!-- </article "role="article" "> -->
1453D1CAD565842EEA24C8D92963BD73338EF0F1
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_histograms.html?context=cdpaas&locale=en
Histogram charts
Histogram charts A histogram is similar in appearance to a bar chart, but instead of comparing categories or looking for trends over time, each bar represents how data is distributed in a single category. Each bar represents a continuous range of data or the number of frequencies for a specific data point. Histograms are useful for showing the distribution of a single scale variable. Data are binned and summarized by using a count or percentage statistic. A variation of a histogram is a frequency polygon, which is like a typical histogram except that the area graphic element is used instead of the bar graphic element. Another variation of the histogram is the population pyramid. Its name is derived from its most common use: summarizing population data. When used with population data, it is split by gender to provide two back-to-back, horizontal histograms of age data. In countries with a young population, the shape of the resulting graph resembles a pyramid. Footnote : The chart footnote, which is placed beneath the chart. XAxis label : The x-axis label, which is placed beneath the x-axis. YAxis label : The y-axis label, which is placed above the y-axis.
# Histogram charts # A histogram is similar in appearance to a bar chart, but instead of comparing categories or looking for trends over time, each bar represents how data is distributed in a single category\. Each bar represents a continuous range of data or the number of frequencies for a specific data point\. Histograms are useful for showing the distribution of a single scale variable\. Data are binned and summarized by using a count or percentage statistic\. A variation of a histogram is a frequency polygon, which is like a typical histogram except that the area graphic element is used instead of the bar graphic element\. Another variation of the histogram is the population pyramid\. Its name is derived from its most common use: summarizing population data\. When used with population data, it is split by gender to provide two back\-to\-back, horizontal histograms of age data\. In countries with a young population, the shape of the resulting graph resembles a pyramid\. Footnote : The chart footnote, which is placed beneath the chart\. XAxis label : The x\-axis label, which is placed beneath the x\-axis\. YAxis label : The y\-axis label, which is placed above the y\-axis\. <!-- </article "role="article" "> -->
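The binning and summarizing that a histogram performs can be sketched in a few lines of Python. This is a minimal illustration assuming NumPy; the sample data and bin count are hypothetical.

```python
import numpy as np

# Hypothetical scale variable: 500 values centered on 50.
values = np.random.default_rng(1).normal(loc=50, scale=10, size=500)

# Bin the data into 20 continuous ranges and count the values in each bin.
counts, edges = np.histogram(values, bins=20)
percentages = counts / counts.sum() * 100     # the percentage statistic

for left, right, n, pct in zip(edges[:-1], edges[1:], counts, percentages):
    print(f"[{left:5.1f}, {right:5.1f}): {n:3d} values ({pct:4.1f}%)")
```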
9DF72C2325CE5BACA0CC7D2A884695D115557C40
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_linecharts.html?context=cdpaas&locale=en
Line charts
Line charts A line chart plots a series of data points on a graph and connects them with lines. A line chart is useful for showing trend lines with subtle differences, or with data lines that cross one another. You can use a line chart to summarize categorical variables, in which case it is similar to a bar chart (see [Bar charts](https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_barcharts.html#chart_creation_barcharts)). Line charts are also useful for time-series data.
# Line charts # A line chart plots a series of data points on a graph and connects them with lines\. A line chart is useful for showing trend lines with subtle differences, or with data lines that cross one another\. You can use a line chart to summarize categorical variables, in which case it is similar to a bar chart (see [Bar charts](https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_barcharts.html#chart_creation_barcharts))\. Line charts are also useful for time\-series data\. <!-- </article "role="article" "> -->
F5AF4BCC2D0168D2698BEB2A858C24F81A476610
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_map.html?context=cdpaas&locale=en
Map charts
Map charts Map charts are commonly used to compare values and show categories across geographical regions. Map charts are most beneficial when the data contains geographic information (countries, regions, states, counties, postal codes, and so on).
# Map charts # Map charts are commonly used to compare values and show categories across geographical regions\. Map charts are most beneficial when the data contains geographic information (countries, regions, states, counties, postal codes, and so on)\. <!-- </article "role="article" "> -->
0C836867DD758509B908532F35CFC5E160D81A19
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_mathcurve.html?context=cdpaas&locale=en
Math curve charts
Math curve charts A math curve chart plots mathematical equation curves that are based on user-entered expressions.
# Math curve charts # A math curve chart plots mathematical equation curves that are based on user\-entered expressions\. <!-- </article "role="article" "> -->
66E7B1F986535FCE165F0CB5C553A6305339204E
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_matrixscatter.html?context=cdpaas&locale=en
Scatter matrix charts
Scatter matrix charts Scatter plot matrices are a good way to determine whether linear correlations exist between multiple variables.
# Scatter matrix charts # Scatter plot matrices are a good way to determine whether linear correlations exist between multiple variables\. <!-- </article "role="article" "> -->
3094E343D06DA6AE0D0D5D4865C7B0D806DC61A1
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_multichart.html?context=cdpaas&locale=en
Multi-chart charts
Multi-chart charts Multi-chart charts provide options for creating multiple charts. The charts can be of the same or different types, and can include different variables from the same data set.
# Multi\-chart charts # Multi\-chart charts provide options for creating multiple charts\. The charts can be of the same or different types, and can include different variables from the same data set\. <!-- </article "role="article" "> -->
E777A9C7D0450D572431F168374224179C1AE7C4
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_multiseries.html?context=cdpaas&locale=en
Multiple series charts
Multiple series charts Multiple series charts are similar to line charts, with the exception that you can chart multiple variables on the Y-axis.
# Multiple series charts # Multiple series charts are similar to line charts, with the exception that you can chart multiple variables on the Y\-axis\. <!-- </article "role="article" "> -->
DE359E77F61C11B6F759E8DFE8EA69AAC3D0514A
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_parallel.html?context=cdpaas&locale=en
Parallel charts
Parallel charts Parallel charts are useful for visualizing high dimensional geometry and for analyzing multivariate data. Parallel charts resemble line charts for time-series data, but the axes do not correspond to points in time (a natural order is not present).
# Parallel charts # Parallel charts are useful for visualizing high dimensional geometry and for analyzing multivariate data\. Parallel charts resemble line charts for time\-series data, but the axes do not correspond to points in time (a natural order is not present)\. <!-- </article "role="article" "> -->
6B4213FC5352021865E77592EBC27242E746B5AA
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_pareto.html?context=cdpaas&locale=en
Pareto charts
Pareto charts Pareto charts contain both bars and a line graph. The bars represent individual variable categories and the line graph represents the cumulative total.
# Pareto charts # Pareto charts contain both bars and a line graph\. The bars represent individual variable categories and the line graph represents the cumulative total\. <!-- </article "role="article" "> -->
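As a rough sketch of how the two series in a Pareto chart are derived, the following Python example computes the descending bars and the cumulative-total line. It assumes NumPy; the categories and counts are hypothetical.

```python
import numpy as np

categories = ["A", "B", "C", "D", "E"]
counts = np.array([48, 30, 12, 7, 3])                     # bars, sorted descending
cumulative_pct = np.cumsum(counts) / counts.sum() * 100   # the line graph

for name, count, pct in zip(categories, counts, cumulative_pct):
    print(f"{name}: {count:3d}  cumulative {pct:5.1f}%")
```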
A2B0DB014389285D9ABCA9FE0D4035F85DE6D102
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_piecharts.html?context=cdpaas&locale=en
Pie charts
Pie charts A pie chart is useful for comparing proportions. For example, you can use a pie chart to demonstrate that a greater proportion of Europeans is enrolled in a certain class.
# Pie charts # A pie chart is useful for comparing proportions\. For example, you can use a pie chart to demonstrate that a greater proportion of Europeans is enrolled in a certain class\. <!-- </article "role="article" "> -->
81F297B28D1978EB0D0B1985D6F44B45DFE53542
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_pyramid.html?context=cdpaas&locale=en
Population pyramid charts
Population pyramid charts Population pyramid charts (also known as "age-sex pyramids") are commonly used to present and analyze population information based on age and gender.
# Population pyramid charts # Population pyramid charts (also known as "age\-sex pyramids") are commonly used to present and analyze population information based on age and gender\. <!-- </article "role="article" "> -->
BA8A6820B3DBFAA703679B19BE070F7BD0CCA3D1
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_qqplot.html?context=cdpaas&locale=en
Q-Q plots
Q-Q plots Q-Q (quantile-quantile) plots compare two probability distributions by plotting their quantiles against each other. A Q-Q plot is used to compare the shapes of distributions, providing a graphical view of how properties such as location, scale, and skewness are similar or different in the two distributions.
# Q\-Q plots # Q\-Q (quantile\-quantile) plots compare two probability distributions by plotting their quantiles against each other\. A Q\-Q plot is used to compare the shapes of distributions, providing a graphical view of how properties such as location, scale, and skewness are similar or different in the two distributions\. <!-- </article "role="article" "> -->
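The pairing of quantiles that a Q-Q plot performs can be sketched directly. The following minimal Python example assumes NumPy; the two samples are hypothetical, chosen so that their shapes differ.

```python
import numpy as np

rng = np.random.default_rng(2)
sample_a = rng.normal(size=300)        # symmetric distribution
sample_b = rng.exponential(size=300)   # skewed distribution

probs = np.linspace(0.01, 0.99, 50)
q_a = np.quantile(sample_a, probs)     # x coordinates of the Q-Q points
q_b = np.quantile(sample_b, probs)     # y coordinates of the Q-Q points

# Points far from a straight line indicate differences in location,
# scale, or skewness between the two distributions.
for x, y in zip(q_a[:5], q_b[:5]):
    print(f"{x:7.3f}  {y:7.3f}")
```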
61F714F5629AD260B0D9776FC53CDA2EAA10DF24
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_radar.html?context=cdpaas&locale=en
Radar charts
Radar charts Radar charts compare multiple quantitative variables and are useful for visualizing which variables have similar values, or if outliers exist among the variables. Radar charts consist of a sequence of spokes, with each spoke representing a single variable. Radar charts are also useful for determining which variables score high or low within a data set.
# Radar charts # Radar charts compare multiple quantitative variables and are useful for visualizing which variables have similar values, or if outliers exist among the variables\. Radar charts consist of a sequence of spokes, with each spoke representing a single variable\. Radar charts are also useful for determining which variables score high or low within a data set\. <!-- </article "role="article" "> -->
5A812008B8370853F0C151FDE4DFEDA4A39193CB
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_relation.html?context=cdpaas&locale=en
Relationship charts
Relationship charts A relationship chart is useful for determining how variables relate to each other.
# Relationship charts # A relationship chart is useful for determining how variables relate to each other\. <!-- </article "role="article" "> -->
67C56AAC7DA2232E4DA2B8AEDEC41B9D8755E22A
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_scatterdot.html?context=cdpaas&locale=en
Scatter plots and dot plots
Scatter plots and dot plots Several broad categories of charts are created with the point graphic element. Scatter plots : Scatter plots are useful for plotting multivariate data. They can help you determine potential relationships among scale variables. A simple scatter plot uses a 2-D coordinate system to plot two variables. A 3-D scatter plot uses a 3-D coordinate system to plot three variables. When you need to plot more variables, you can try overlay scatter plots and scatter plot matrices (SPLOMs). An overlay scatter plot displays overlaid pairs of X-Y variables, with each pair distinguished by color or shape. A SPLOM creates a matrix of 2-D scatter plots, with each variable plotted against every other variable in the SPLOM. Dot plots : Like histograms, dot plots are useful for showing the distribution of a single scale variable. The data are binned, but, instead of one value for each bin (like a count), all of the points in each bin are displayed and stacked. These graphs are sometimes called density plots. Summary point plots : Summary point plots are similar to bar charts, except that points are drawn in place of the top of the bars. For more information, see [Bar charts](https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_barcharts.html#chart_creation_barcharts). Drop-line charts : Drop-line charts are a special type of summary point plot. The points are grouped and a line is drawn through the points in each category. The drop-line chart is useful for comparing a statistic across categorical variables.
# Scatter plots and dot plots # Several broad categories of charts are created with the point graphic element\. Scatter plots : Scatter plots are useful for plotting multivariate data\. They can help you determine potential relationships among scale variables\. A simple scatter plot uses a 2\-D coordinate system to plot two variables\. A 3\-D scatter plot uses a 3\-D coordinate system to plot three variables\. When you need to plot more variables, you can try overlay scatter plots and scatter plot matrices (SPLOMs)\. An overlay scatter plot displays overlaid pairs of X\-Y variables, with each pair distinguished by color or shape\. A SPLOM creates a matrix of 2\-D scatter plots, with each variable plotted against every other variable in the SPLOM\. Dot plots : Like histograms, dot plots are useful for showing the distribution of a single scale variable\. The data are binned, but, instead of one value for each bin (like a count), all of the points in each bin are displayed and stacked\. These graphs are sometimes called density plots\. Summary point plots : Summary point plots are similar to bar charts, except that points are drawn in place of the top of the bars\. For more information, see [Bar charts](https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_barcharts.html#chart_creation_barcharts)\. Drop\-line charts : Drop\-line charts are a special type of summary point plot\. The points are grouped and a line is drawn through the points in each category\. The drop\-line chart is useful for comparing a statistic across categorical variables\. <!-- </article "role="article" "> -->
7B3616D29E7AC720B73EF3E24C9C807DA05C4DA3
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_seriesarray.html?context=cdpaas&locale=en
Series array charts
Series array charts Series array charts include individual sub charts and display the Y-axis for all sub charts in the legend.
# Series array charts # Series array charts include individual sub charts and display the Y\-axis for all sub charts in the legend\. <!-- </article "role="article" "> -->
5CF2FE478862FCAA1745D5B0770CE6486B3B71F8
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_sunburst.html?context=cdpaas&locale=en
Sunburst charts
Sunburst charts A sunburst chart is useful for visualizing hierarchical data structures. A sunburst chart consists of an inner circle that is surrounded by rings of deeper hierarchy levels. The angle of each segment is either proportional to a value or divided equally under its inner segment. The chart segments are colored based on the category or hierarchical level to which they belong.
# Sunburst charts # A sunburst chart is useful for visualizing hierarchical data structures\. A sunburst chart consists of an inner circle that is surrounded by rings of deeper hierarchy levels\. The angle of each segment is either proportional to a value or divided equally under its inner segment\. The chart segments are colored based on the category or hierarchical level to which they belong\. <!-- </article "role="article" "> -->
BAE3302FC87E1BBFA604BAA2D003069E4233A517
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_themeriver.html?context=cdpaas&locale=en
Theme River charts
Theme River charts A theme river is a specialized flow graph that shows changes over time.
# Theme River charts # A theme river is a specialized flow graph that shows changes over time\. <!-- </article "role="article" "> -->
B49F37BD511123A94FCAD3C6E826E60FC61DB446
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_timeplot.html?context=cdpaas&locale=en
Time plots
Time plots Time plots illustrate data points at successive intervals of time. The time series you plot must contain numeric values and are assumed to occur over a range of time in which the periods are uniform. Time plots provide a preliminary analysis of the characteristics of time series data on basic statistics and tests, and thus generate useful insights about your data before modeling. Time plots include analysis methods such as decomposition, augmented Dickey-Fuller test (ADF), correlations (ACF/PACF), and spectral analysis.
# Time plots # Time plots illustrate data points at successive intervals of time\. The time series you plot must contain numeric values and are assumed to occur over a range of time in which the periods are uniform\. Time plots provide a preliminary analysis of the characteristics of time series data on basic statistics and tests, and thus generate useful insights about your data before modeling\. Time plots include analysis methods such as decomposition, augmented Dickey\-Fuller test (ADF), correlations (ACF/PACF), and spectral analysis\. <!-- </article "role="article" "> -->
D872C74770B5729E037E841679F741CF3D8C20AD
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_tree.html?context=cdpaas&locale=en
Tree charts
Tree charts Tree charts represent hierarchy in a tree-like structure. The structure of a tree chart consists of a root node (which has no parent node), line connections (called branches), and leaf nodes (which have no child nodes). Line connections represent the relationships between the members.
# Tree charts # Tree charts represent hierarchy in a tree\-like structure\. The structure of a tree chart consists of a root node (which has no parent node), line connections (called branches), and leaf nodes (which have no child nodes)\. Line connections represent the relationships between the members\. <!-- </article "role="article" "> -->
9B6386C6C291665ACA0892481681A94A70185E9D
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_treemap.html?context=cdpaas&locale=en
Treemap charts
Treemap charts Treemap charts are an alternative method for visualizing the hierarchical structure of tree diagrams while also displaying quantities for each category. Treemap charts are useful for identifying patterns in data. Tree branches are represented by rectangles, with each sub-branch represented by a smaller rectangle.
# Treemap charts # Treemap charts are an alternative method for visualizing the hierarchical structure of tree diagrams while also displaying quantities for each category\. Treemap charts are useful for identifying patterns in data\. Tree branches are represented by rectangles, with each sub\-branch represented by a smaller rectangle\. <!-- </article "role="article" "> -->
99B0C1C962E0642E5B877747ED37E9BB27238664
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_tsne.html?context=cdpaas&locale=en
t-SNE charts
t-SNE charts T-distributed Stochastic Neighbor Embedding (t-SNE) is a machine learning algorithm for visualization. t-SNE charts model each high-dimensional object by a two- or three-dimensional point in such a way that similar objects are modeled by nearby points and dissimilar objects are modeled by distant points with high probability.
# t\-SNE charts # T\-distributed Stochastic Neighbor Embedding (t\-SNE) is a machine learning algorithm for visualization\. t\-SNE charts model each high\-dimensional object by a two\- or three\-dimensional point in such a way that similar objects are modeled by nearby points and dissimilar objects are modeled by distant points with high probability\. <!-- </article "role="article" "> -->
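Outside the chart builder, the same kind of embedding can be sketched with scikit-learn's TSNE estimator, assuming that library is available; the input matrix here is hypothetical, and the parameters are illustrative rather than the chart's defaults.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 50))   # 200 high-dimensional objects, 50 features each

# Map each 50-dimensional row to a 2-D point; similar rows land near each other.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
print(embedding.shape)           # (200, 2)
```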
3873A285DCB38EF4B4ED663BFA0DF4047AB7692D
https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_wordcloud.html?context=cdpaas&locale=en
Word cloud charts
Word cloud charts Word cloud charts present data as words, where the size and placement of any individual word is determined by how it is weighted.
# Word cloud charts # Word cloud charts present data as words, where the size and placement of any individual word is determined by how it is weighted\. <!-- </article "role="article" "> -->
3BB91EBACC556700F955C3E6E01D90E5256207CF
https://dataplatform.cloud.ibm.com/docs/content/dataview/idh_idc_cg_help_main.html?context=cdpaas&locale=en
Visualizing your data
Visualizing your data You can discover insights from your data by creating visualizations. By exploring data from different perspectives with visualizations, you can identify patterns, connections, and relationships within that data and quickly understand large amounts of information. Data format : Tabular: Avro, CSV, JSON, Parquet, TSV, SAV, Microsoft Excel .xls and .xlsx files, SAS, delimited text files, and connected data. For more information about supported data sources, see [Connectors](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html). Data size : No limit You can create graphics similar to the following example that shows how humidity values change over time. ![Example visualization](https://dataplatform.cloud.ibm.com/docs/content/dataview/viz_main.png)
# Visualizing your data # You can discover insights from your data by creating visualizations\. By exploring data from different perspectives with visualizations, you can identify patterns, connections, and relationships within that data and quickly understand large amounts of information\. Data format : Tabular: Avro, CSV, JSON, Parquet, TSV, SAV, Microsoft Excel \.xls and \.xlsx files, SAS, delimited text files, and connected data\. For more information about supported data sources, see [Connectors](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html). Data size : No limit You can create graphics similar to the following example that shows how humidity values change over time\. ![Example visualization](https://dataplatform.cloud.ibm.com/docs/content/dataview/viz_main.png) <!-- </article "role="article" "> -->
9D9188E6383DB5F7038B98A688CB2DC9CF5A336C
https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/aiopenscale.html?context=cdpaas&locale=en
watsonx.governance on IBM watsonx
watsonx.governance on IBM® watsonx
# watsonx\.governance on IBM® watsonx # <!-- </article "role="article" "> -->
CF88BCC09A32B2D6D65F2C2A831E2960ACA1E347
https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-object-storage.html?context=cdpaas&locale=en
Cloud Object Storage on IBM watsonx
Cloud Object Storage on IBM® watsonx
# Cloud Object Storage on IBM® watsonx # <!-- </article "role="article" "> -->
59DF73D502B5F62E3837464E81AC6BC9FDF07014
https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-services.html?context=cdpaas&locale=en
IBM Cloud services in the IBM watsonx services catalog
IBM Cloud services in the IBM watsonx services catalog You can provision IBM® Cloud service instances for the watsonx platform. The IBM watsonx.ai component includes the following services, which provide key functionality, including tools and compute resources: * [Watson™ Studio](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/wsl.html) * [Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/wml.html) If you signed up for watsonx.ai, you already have these services. Otherwise, you can create instances of these services from the Services catalog. If you signed up for watsonx.governance, you already have this service. Otherwise, you can create an instance of this service from the Services catalog. The [IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-object-storage.html) service provides storage for projects and deployment spaces on the IBM watsonx platform. The [Secure Gateway](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/secure-gateway.html) service provides secure connections to on-premises data sources. These services provide databases that you can access in IBM watsonx by creating connections: * [IBM Analytics Engine](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/spark.html) * [Cloudant](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloudant.html) * [Databases for Elasticsearch](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/elasticsearch.html) * [Databases for EDB](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/edb.html) * [Databases for MongoDB](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/mongodb.html) * [Databases for PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/postgresql.html) * [Db2®](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/db2oltp.html) * [Db2 Warehouse](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/db2wh.html)
# IBM Cloud services in the IBM watsonx services catalog # You can provision IBM® Cloud service instances for the watsonx platform\. The IBM watsonx\.ai component includes the following services, which provide key functionality, including tools and compute resources: <!-- <ul> --> * [Watson™ Studio](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/wsl.html) * [Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/wml.html) <!-- </ul> --> If you signed up for watsonx\.ai, you already have these services\. Otherwise, you can create instances of these services from the Services catalog\. If you signed up for watsonx\.governance, you already have this service\. Otherwise, you can create an instance of this service from the Services catalog\. The [IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-object-storage.html) service provides storage for projects and deployment spaces on the IBM watsonx platform\. The [Secure Gateway](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/secure-gateway.html) service provides secure connections to on\-premises data sources\. These services provide databases that you can access in IBM watsonx by creating connections: <!-- <ul> --> * [IBM Analytics Engine](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/spark.html) * [Cloudant](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloudant.html) * [Databases for Elasticsearch](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/elasticsearch.html) * [Databases for EDB](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/edb.html) * [Databases for MongoDB](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/mongodb.html) * [Databases for PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/postgresql.html) * [Db2®](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/db2oltp.html) * [Db2 Warehouse](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/db2wh.html) <!-- </ul> --> <!-- </article "role="article" "> -->
A56686454E771E5FDDA0315DD38313F9FCB31AAC
https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloudant.html?context=cdpaas&locale=en
Cloudant on IBM watsonx
Cloudant on IBM® watsonx
# Cloudant on IBM® watsonx # <!-- </article "role="article" "> -->
F3BA8CCB1E55BB6535944CB5ACDB19EFAEB1C3F9
https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/db2oltp.html?context=cdpaas&locale=en
Db2 on IBM watsonx
Db2 on IBM watsonx
# Db2 on IBM watsonx # <!-- </article "role="article" "> -->
E81F1FD08E472AF1516E6C6B0C936A2DCA55CC20
https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/db2wh.html?context=cdpaas&locale=en
Db2 Warehouse on IBM watsonx
Db2 Warehouse on IBM watsonx
# Db2 Warehouse on IBM watsonx # <!-- </article "role="article" "> -->
32217F5F0DEE4A95C64B2BD92C25366706CC7E0C
https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/edb.html?context=cdpaas&locale=en
Databases for EDB on IBM watsonx
Databases for EDB on IBM watsonx
# Databases for EDB on IBM watsonx # <!-- </article "role="article" "> -->
868801EC73691D31B90C8611E934AA5DD3B17EA7
https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/elasticsearch.html?context=cdpaas&locale=en
Databases for Elasticsearch on IBM watsonx
Databases for Elasticsearch on IBM® watsonx
# Databases for Elasticsearch on IBM® watsonx # <!-- </article "role="article" "> -->
408FDAB4F452AB2C207EE3416332D315598E3456
https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/mongodb.html?context=cdpaas&locale=en
Databases for MongoDB on IBM watsonx
Databases for MongoDB on IBM watsonx
# Databases for MongoDB on IBM watsonx # <!-- </article "role="article" "> -->
649119A6EF3F5AA2B1B0C63E0973532D4C950F48
https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/postgresql.html?context=cdpaas&locale=en
Databases for PostgreSQL on IBM watsonx
Databases for PostgreSQL on IBM® watsonx
# Databases for PostgreSQL on IBM® watsonx # <!-- </article "role="article" "> -->
B9D44BBCF205103BF01619D31CFEBE31A725BA5A
https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/secure-gateway.html?context=cdpaas&locale=en
Secure Gateway on IBM watsonx
Secure Gateway on IBM® watsonx
# Secure Gateway on IBM® watsonx # <!-- </article "role="article" "> -->
6AC4A29FEBF419002BDBA62D99D997CF55E9FCF2
https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/spark.html?context=cdpaas&locale=en
IBM Analytics Engine on IBM watsonx
IBM Analytics Engine on IBM® watsonx
# IBM Analytics Engine on IBM® watsonx # <!-- </article "role="article" "> -->
40DEFBE604B3629CAF8855A6D00EC14A0A6C92F3
https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/wml.html?context=cdpaas&locale=en
Watson Machine Learning on IBM watsonx
Watson Machine Learning on IBM watsonx Watson Machine Learning is part of IBM® watsonx.ai. Watson Machine Learning provides a full range of tools for your team to build, train, and deploy Machine Learning models. You can choose the tool with the level of automation or autonomy that matches your needs. Watson Machine Learning provides the following tools: * AutoAI experiment builder for automatically processing structured data to generate model-candidate pipelines. The best-performing pipelines can be saved as a machine learning model and deployed for scoring. * Deployment spaces give you the tools to view and manage model deployments.
# Watson Machine Learning on IBM watsonx # Watson Machine Learning is part of IBM® watsonx\.ai\. Watson Machine Learning provides a full range of tools for your team to build, train, and deploy Machine Learning models\. You can choose the tool with the level of automation or autonomy that matches your needs\. Watson Machine Learning provides the following tools: <!-- <ul> --> * AutoAI experiment builder for automatically processing structured data to generate model\-candidate pipelines\. The best\-performing pipelines can be saved as a machine learning model and deployed for scoring\. * Deployment spaces give you the tools to view and manage model deployments\. <!-- </ul> --> <!-- </article "role="article" "> -->
C4BB814768F5D91D2C6AA90B34FDDD944AA1EB91
https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/wsl.html?context=cdpaas&locale=en
Watson Studio on IBM watsonx
Watson Studio on IBM watsonx
# Watson Studio on IBM watsonx # <!-- </article "role="article" "> -->
189F970CF3B162E67B98B2A928B36193169E3CAF
https://dataplatform.cloud.ibm.com/docs/content/wsd/dataview.html?context=cdpaas&locale=en
Working with your data (SPSS Modeler)
Working with your data To see a quick sample of a flow's data, right-click a node and select Preview. To more thoroughly examine your data, use a Charts node to launch the chart builder. With the chart builder, you can use advanced visualizations to explore your data from different perspectives and identify patterns, connections, and relationships within your data. You can also visualize your data with these same charts in a Data Refinery flow. Figure 1. Sample visualizations available for a flow ![Shows four example charts available in Visualizations](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/charts_thumbnail4.png) For more information, see [Visualizing your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/visualizations.html).
# Working with your data # To see a quick sample of a flow's data, right\-click a node and select Preview\. To more thoroughly examine your data, use a Charts node to launch the chart builder\. With the chart builder, you can use advanced visualizations to explore your data from different perspectives and identify patterns, connections, and relationships within your data\. You can also visualize your data with these same charts in a Data Refinery flow\. Figure 1\. Sample visualizations available for a flow ![Shows four example charts available in Visualizations](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/charts_thumbnail4.png) For more information, see [Visualizing your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/visualizations.html)\. <!-- </article "role="article" "> -->
6A32659DF809F04F9A670634129FC75CC9140729
https://dataplatform.cloud.ibm.com/docs/content/wsd/flow_properties.html?context=cdpaas&locale=en
Setting properties for SPSS Modeler flows
Setting properties for flows You can specify properties to apply to the current flow. To set flow properties, click the Flow Properties icon:![Flow properties icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/flow_properties.png) The following properties are available.
# Setting properties for flows # You can specify properties to apply to the current flow\. To set flow properties, click the Flow Properties icon:![Flow properties icon](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/flow_properties.png) The following properties are available\. <!-- </article "role="article" "> -->
81045ED1B34827B3BD74D2546185C3BD3163B37E
https://dataplatform.cloud.ibm.com/docs/content/wsd/flow_scripting.html?context=cdpaas&locale=en
Flow scripting (SPSS Modeler)
Flow scripting You can use scripts to customize operations within a particular flow, and they're saved with that flow. For example, you might use a script to specify a particular run order for terminal nodes. You use the flow properties page to edit the script that's saved with the current flow. To access scripting in a flow's properties: 1. Right-click your flow's canvas and select Flow properties. 2. Open the Scripting section to work with scripts for the current flow. Tips: * By default, the Python scripting language is used. If you'd rather use a scripting language unique to old versions of SPSS Modeler desktop, select Legacy. * For complete details about scripting, see the [Scripting and automation](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/scripting_overview.html) guide. You can specify whether or not the script runs when the flow runs. To run the script each time the flow runs, respecting the run order of the script, select Run the script. This setting provides automation at the flow level for quicker model building. Or, to ignore the script, you can select the option to only Run all terminal nodes when the flow runs. The script editor includes the following features that help with script authoring: * Syntax highlighting; keywords, literal values (such as strings and numbers), and comments are highlighted * Line numbering * Block matching; when the cursor is placed by the start of a program block, the corresponding end block is also highlighted * Suggested auto-completion A list of suggested syntax completions can be accessed by selecting Auto-Suggest from the context menu, or pressing Ctrl + Space. Use the cursor keys to move up and down the list, then press Enter to insert the selected text. To exit from auto-suggest mode without modifying the existing text, press Esc.
# Flow scripting # You can use scripts to customize operations within a particular flow, and they're saved with that flow\. For example, you might use a script to specify a particular run order for terminal nodes\. You use the flow properties page to edit the script that's saved with the current flow\. To access scripting in a flow's properties: <!-- <ol> --> 1. Right\-click your flow's canvas and select Flow properties\. 2. Open the Scripting section to work with scripts for the current flow\. <!-- </ol> --> Tips: <!-- <ul> --> * By default, the Python scripting language is used\. If you'd rather use a scripting language unique to old versions of SPSS Modeler desktop, select Legacy\. * For complete details about scripting, see the [Scripting and automation](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/scripting_overview.html) guide\. <!-- </ul> --> You can specify whether or not the script runs when the flow runs\. To run the script each time the flow runs, respecting the run order of the script, select Run the script\. This setting provides automation at the flow level for quicker model building\. Or, to ignore the script, you can select the option to only Run all terminal nodes when the flow runs\. The script editor includes the following features that help with script authoring: <!-- <ul> --> * Syntax highlighting; keywords, literal values (such as strings and numbers), and comments are highlighted * Line numbering * Block matching; when the cursor is placed by the start of a program block, the corresponding end block is also highlighted * Suggested auto\-completion <!-- </ul> --> A list of suggested syntax completions can be accessed by selecting Auto\-Suggest from the context menu, or pressing Ctrl \+ Space\. Use the cursor keys to move up and down the list, then press Enter to insert the selected text\. To exit from auto\-suggest mode without modifying the existing text, press Esc\. <!-- </article "role="article" "> -->
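As a minimal sketch of the run-order use case mentioned above, the following flow script runs two terminal nodes in an explicit order. It uses only calls that appear in the scripting example later in this documentation (modeler.script.stream(), findByType(), and run()); the assumption that the flow contains one Table node and one Analysis node is hypothetical.

```python
# The modeler object is provided by the SPSS Modeler scripting environment.
stream = modeler.script.stream()   # the current flow

tablenode = stream.findByType("table", None)        # find nodes by type
analysisnode = stream.findByType("analysis", None)

# Terminal nodes run in the order in which the script calls them.
tablenode.run([])
analysisnode.run([])
```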
D3084BFB07D425EBACE9F538D800E08DAEA97594
https://dataplatform.cloud.ibm.com/docs/content/wsd/flow_scripting_example.html?context=cdpaas&locale=en
SPSS Modeler flow scripting example
Flow scripting example You can use a flow to train a model when it runs. Normally, to test the model, you might run the modeling node to add the model to the flow, make the appropriate connections, and run an Analysis node. Using a script, you can automate the process of testing the model nugget after you create it. For example, you might use a script such as the following to train a neural network model:

stream = modeler.script.stream()
neuralnetnode = stream.findByType("neuralnetwork", None)
results = []
neuralnetnode.run(results)
appliernode = stream.createModelApplierAt(results[0], "Drug", 594, 187)
analysisnode = stream.createAt("analysis", "Drug", 688, 187)
typenode = stream.findByType("type", None)
stream.linkBetween(appliernode, typenode, analysisnode)
analysisnode.run([])

The following bullets describe each line in this script example. * The first line defines a variable that points to the current flow. * In line 2, the script finds the Neural Net builder node. * In line 3, the script creates a list where the execution results can be stored. * In line 4, the Neural Net model nugget is created. This is stored in the list defined on line 3. * In line 5, a model apply node is created for the model nugget and placed on the flow canvas. * In line 6, an analysis node called Drug is created. * In line 7, the script finds the Type node. * In line 8, the script connects the model apply node created in line 5 between the Type node and the Analysis node. * Finally, the Analysis node runs to produce the Analysis report. Tips: * It's possible to use a script to build and run a flow from scratch, starting with a blank canvas. * For complete details about scripting, see the [Scripting and automation](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/scripting_overview.html) guide.
# Flow scripting example # You can use a flow to train a model when it runs\. Normally, to test the model, you might run the modeling node to add the model to the flow, make the appropriate connections, and run an Analysis node\. Using a script, you can automate the process of testing the model nugget after you create it\. For example, you might use a script such as the following to train a neural network model:

    stream = modeler.script.stream()
    neuralnetnode = stream.findByType("neuralnetwork", None)
    results = []
    neuralnetnode.run(results)
    appliernode = stream.createModelApplierAt(results[0], "Drug", 594, 187)
    analysisnode = stream.createAt("analysis", "Drug", 688, 187)
    typenode = stream.findByType("type", None)
    stream.linkBetween(appliernode, typenode, analysisnode)
    analysisnode.run([])

The following bullets describe each line in this script example\. <!-- <ul> --> * The first line defines a variable that points to the current flow\. * In line 2, the script finds the Neural Net builder node\. * In line 3, the script creates a list where the execution results can be stored\. * In line 4, the Neural Net model nugget is created\. This is stored in the list defined on line 3\. * In line 5, a model apply node is created for the model nugget and placed on the flow canvas\. * In line 6, an analysis node called `Drug` is created\. * In line 7, the script finds the Type node\. * In line 8, the script connects the model apply node created in line 5 between the Type node and the Analysis node\. * Finally, the Analysis node runs to produce the Analysis report\. <!-- </ul> --> Tips: <!-- <ul> --> * It's possible to use a script to build and run a flow from scratch, starting with a blank canvas\. * For complete details about scripting, see the [Scripting and automation](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/scripting_overview.html) guide\. <!-- </ul> --> <!-- </article "role="article" "> -->
C8B4A993CB8642BC87432FCB305EEE744C16A154
https://dataplatform.cloud.ibm.com/docs/content/wsd/migration.html?context=cdpaas&locale=en
Importing a stream (SPSS Modeler)
Importing an SPSS Modeler stream You can import a stream (.str) that was created in SPSS Modeler Subscription or SPSS Modeler client. 1. From your project's Assets tab, click . 2. Select Local file, select the .str file you want to import, and click Create. If the imported stream contains one or more source (import) or export nodes, you'll be prompted to convert the nodes. Watsonx.ai will walk you through the migration process. Watch the following video for an example of this easy process: This video provides a visual method to learn the concepts and tasks in this documentation. Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. [https://www.ustream.tv/embed/recorded/127732173](https://www.ustream.tv/embed/recorded/127732173) If the stream contains multiple import nodes that use the same data file, then you must first add that file to your project as a data asset before migrating because the conversion can't upload the same file to more than one import node. After adding the data asset to your project, reopen the flow and proceed with the migration using the new data asset. Nodes with the same name will be automatically mapped to project assets. Configure export nodes to export to your project or to a connection. The following export nodes are supported: Table 1. Export nodes that can be migrated: Analytic Server, Database, Flat File, Statistics Export, Data Collection, Excel, IBM Cognos Analytics Export, TM1 Export, SAS, and XML Export. Notes: Keep the following information in mind when migrating nodes. * When migrating export nodes, you're converting node types that don't exist in watsonx.ai. The nodes are converted to Data Asset export nodes or a connection. Due to a current limitation for automatically migrating nodes, only existing project assets or connections can be selected as export targets. These assets will be overwritten during export when the flow runs. * To preserve any type or filter information, when an import node is replaced with Data Asset nodes, they're converted to a SuperNode. * After migration, you can go back later and use the Convert button if you want to migrate a node that you skipped previously. * If the stream you imported uses scripting, you may encounter an error when you run the flow even after completing a migration. This could be due to the flow script containing a reference to an unsupported import or export node. To avoid such errors, you must remove the scripting code that references the unsupported node. * If the stream you're importing contains unsupported data file types, you need to convert them to a supported type (CSV, Excel, or SPSS Statistics .sav). * In some cases, some settings from your original stream may not be restored during migration. For example, if the field delimiter in your original stream was tabs, it may be changed to commas after migration. Settings such as custom SQL also aren't migrated currently. Compare the new migrated flow to your original stream and make adjustments as needed.
# Importing an SPSS Modeler stream # You can import a stream (\.str) that was created in SPSS Modeler Subscription or SPSS Modeler client\. <!-- <ol> --> 1. From your project's Assets tab, click \. 2. Select Local file, select the \.str file you want to import, and click Create\. <!-- </ol> --> If the imported stream contains one or more source (import) or export nodes, you'll be prompted to convert the nodes\. Watsonx\.ai will walk you through the migration process\. Watch the following video for an example of this easy process: This video provides a visual method to learn the concepts and tasks in this documentation\. Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform\. [https://www\.ustream\.tv/embed/recorded/127732173](https://www.ustream.tv/embed/recorded/127732173) If the stream contains multiple import nodes that use the same data file, then you must first add that file to your project as a data asset before migrating because the conversion can't upload the same file to more than one import node\. After adding the data asset to your project, reopen the flow and proceed with the migration using the new data asset\. Nodes with the same name will be automatically mapped to project assets\. Configure export nodes to export to your project or to a connection\. The following export nodes are supported: <!-- <table "summary="" id="migration__table_hlc_ngk_thb" class="defaultstyle" "> --> Table 1\. Export nodes that can be migrated | Supported SPSS Modeler export nodes | | ----------------------------------- | | Analytic Server | | Database | | Flat File | | Statistics Export | | Data Collection | | Excel | | IBM Cognos Analytics Export | | TM1 Export | | SAS | | XML Export | <!-- </table "summary="" id="migration__table_hlc_ngk_thb" class="defaultstyle" "> --> Notes: Keep the following information in mind when migrating nodes\. <!-- <ul> --> * When migrating export nodes, you're converting node types that don't exist in watsonx\.ai\. The nodes are converted to Data Asset export nodes or a connection\. Due to a current limitation for automatically migrating nodes, only existing project assets or connections can be selected as export targets\. These assets will be overwritten during export when the flow runs\. * To preserve any type or filter information, when an import node is replaced with Data Asset nodes, they're converted to a SuperNode\. * After migration, you can go back later and use the Convert button if you want to migrate a node that you skipped previously\. * If the stream you imported uses scripting, you may encounter an error when you run the flow even after completing a migration\. This could be due to the flow script containing a reference to an unsupported import or export node\. To avoid such errors, you must remove the scripting code that references the unsupported node\. * If the stream you're importing contains unsupported data file types, you need to convert them to a supported type (CSV, Excel, or SPSS Statistics \.sav)\. * In some cases, some settings from your original stream may not be restored during migration\. For example, if the field delimiter in your original stream was tabs, it may be changed to commas after migration\. Settings such as custom SQL also aren't migrated currently\. Compare the new migrated flow to your original stream and make adjustments as needed\. <!-- </ul> --> <!-- </article "role="article" "> -->
B851271C134A1B282412BD7A667C1C9813B4E8B2
https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/TMWBModelApplier.html?context=cdpaas&locale=en
Text Mining model nuggets (SPSS Modeler)
Text Mining model nuggets You can run a Text Mining node to automatically generate a concept model nugget using the Generate directly option in the node settings. Or you can use a more hands-on, exploratory approach using the Build interactively mode to generate category model nuggets from within the Text Analytics Workbench.
# Text Mining model nuggets # You can run a Text Mining node to automatically generate a concept model nugget using the Generate directly option in the node settings\. Or you can use a more hands\-on, exploratory approach using the Build interactively mode to generate category model nuggets from within the Text Analytics Workbench\. <!-- </article "role="article" "> -->
BBD1F022A8393101199ABB731534C10BE99CF1E4
https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/TextMiningWorkbench.html?context=cdpaas&locale=en
Mining for concepts and categories (SPSS Modeler)
Mining for concepts and categories The Text Mining node uses linguistic and frequency techniques to extract key concepts from the text and create categories with these concepts and other data. Use the node to explore the text data contents or to produce either a concept model nugget or category model nugget. ![Text Mining node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/ta_textmining.png)When you run this node, an internal linguistic extraction engine extracts and organizes the concepts, patterns, and categories by using natural language processing methods. Two build modes are available in the Text Mining node's properties: * The Generate directly (concept model nugget) mode automatically produces a concept or category model nugget when you run the node. * The Build interactively (category model nugget) mode is a more hands-on, exploratory approach. You can use this mode to not only extract concepts, create categories, and refine your linguistic resources, but also run text link analysis and explore clusters. This build mode launches the Text Analytics Workbench. You can use the Text Mining node to generate one of two text mining model nuggets: * Concept model nuggets uncover and extract important concepts from your structured or unstructured text data. * Category model nuggets score and assign documents and records to categories, which are made up of the extracted concepts (and patterns). The extracted concepts and patterns and the categories from your model nuggets can all be combined with existing structured data, such as demographics, to yield better and more-focused decisions. For example, if customers frequently list login issues as the primary impediment to completing online account management tasks, you might want to incorporate "login issues" into your models.
# Mining for concepts and categories # The Text Mining node uses linguistic and frequency techniques to extract key concepts from the text and create categories with these concepts and other data\. Use the node to explore the text data contents or to produce either a concept model nugget or category model nugget\. ![Text Mining node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/ta_textmining.png)When you run this node, an internal linguistic extraction engine extracts and organizes the concepts, patterns, and categories by using natural language processing methods\. Two build modes are available in the Text Mining node's properties: <!-- <ul> --> * The Generate directly (concept model nugget) mode automatically produces a concept or category model nugget when you run the node\. * The Build interactively (category model nugget) mode is a more hands\-on, exploratory approach\. You can use this mode to not only extract concepts, create categories, and refine your linguistic resources, but also run text link analysis and explore clusters\. This build mode launches the Text Analytics Workbench\. <!-- </ul> --> You can use the Text Mining node to generate one of two text mining model nuggets: <!-- <ul> --> * Concept model nuggets uncover and extract important concepts from your structured or unstructured text data\. * Category model nuggets score and assign documents and records to categories, which are made up of the extracted concepts (and patterns)\. <!-- </ul> --> The extracted concepts and patterns and the categories from your model nuggets can all be combined with existing structured data, such as demographics, to yield better and more\-focused decisions\. For example, if customers frequently list login issues as the primary impediment to completing online account management tasks, you might want to incorporate "login issues" into your models\. <!-- </article "role="article" "> -->
D73C52B16EC33CAA6D1F51EFFA5A6E37052D6110
https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/_nodes.html?context=cdpaas&locale=en
Nodes palette (SPSS Modeler)
Nodes palette The following sections describe all the nodes available on the palette in SPSS Modeler. Drag and drop or double-click a node in the list to add it to your flow canvas. You can then double-click any node icon in your flow to set its properties. Hover over a property to see information about it, or click the information icon to see Help. When first creating a flow, you select which runtime to use. By default, the flow will use the IBM SPSS Modeler runtime. If you want to use native Spark algorithms instead of SPSS algorithms, select the Spark runtime. Properties for some nodes will vary depending on which runtime option you choose.
# Nodes palette # The following sections describe all the nodes available on the palette in SPSS Modeler\. Drag and drop or double\-click a node in the list to add it to your flow canvas\. You can then double\-click any node icon in your flow to set its properties\. Hover over a property to see information about it, or click the information icon to see Help\. When first creating a flow, you select which runtime to use\. By default, the flow will use the IBM SPSS Modeler runtime\. If you want to use native Spark algorithms instead of SPSS algorithms, select the Spark runtime\. Properties for some nodes will vary depending on which runtime option you choose\. <!-- </article "role="article" "> -->