https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=en
Supported foundation models available with watsonx.ai
# Supported foundation models available with watsonx\.ai #

A collection of open source and IBM foundation models is deployed in IBM watsonx\.ai\. The following models are available in watsonx\.ai:

* flan\-t5\-xl\-3b
* flan\-t5\-xxl\-11b
* flan\-ul2\-20b
* gpt\-neox\-20b
* granite\-13b\-chat\-v2
* granite\-13b\-chat\-v1
* granite\-13b\-instruct\-v2
* granite\-13b\-instruct\-v1
* llama\-2\-13b\-chat
* llama\-2\-70b\-chat
* mpt\-7b\-instruct2
* mt0\-xxl\-13b
* starcoder\-15\.5b

You can prompt these models in the Prompt Lab or programmatically by using the Python library (a brief example follows the summary tables below)\.

## Summary of models ##

To understand how the model provider, instruction tuning, token limits, and other factors can affect which model you choose, see [Choosing a model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-model-choose.html)\.

The following table lists the supported foundation models that IBM provides\.

Table 1\. IBM foundation models in watsonx\.ai

| Model name | Provider | Instruction\-tuned | Billing class | Maximum tokens <br>Context (input \+ output) | More information |
| --- | --- | --- | --- | --- | --- |
| [granite\-13b\-chat\-v2](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=en#granite-13b-chat) | IBM | Yes | Class 2 | 8192 | • [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-chat-v2?context=wx) <br>• [Website](https://www.ibm.com/blog/watsonx-tailored-generative-ai/) <br>• [Research paper](https://www.ibm.com/downloads/cas/X9W4O6BM) |
| [granite\-13b\-chat\-v1](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=en#granite-13b-chat-v1) | IBM | Yes | Class 2 | 8192 | • [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-chat-v1?context=wx) <br>• [Website](https://www.ibm.com/blog/watsonx-tailored-generative-ai/) <br>• [Research paper](https://www.ibm.com/downloads/cas/X9W4O6BM) |
| [granite\-13b\-instruct\-v2](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=en#granite-13b-instruct) | IBM | Yes | Class 2 | 8192 | • [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-instruct-v2?context=wx) <br>• [Website](https://www.ibm.com/blog/watsonx-tailored-generative-ai/) <br>• [Research paper](https://www.ibm.com/downloads/cas/X9W4O6BM) |
| [granite\-13b\-instruct\-v1](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=en#granite-13b-instruct-v1) | IBM | Yes | Class 2 | 8192 | • [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-instruct-v1?context=wx) <br>• [Website](https://www.ibm.com/blog/watsonx-tailored-generative-ai/) <br>• [Research paper](https://www.ibm.com/downloads/cas/X9W4O6BM) |

The following table lists the supported foundation models that third parties provide through Hugging Face\.
Table 2\. Supported third party foundation models in watsonx\.ai

| Model name | Provider | Instruction\-tuned | Billing class | Maximum tokens <br>Context (input \+ output) | More information |
| --- | --- | --- | --- | --- | --- |
| [flan\-t5\-xl\-3b](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=en#flan-t5-xl-3b) | Google | Yes | Class 1 | 4096 | • [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/google/flan-t5-xl?context=wx) <br>• [Research paper](https://arxiv.org/abs/2210.11416) |
| [flan\-t5\-xxl\-11b](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=en#flan-t5-xxl-11b) | Google | Yes | Class 2 | 4096 | • [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/google/flan-t5-xxl?context=wx) <br>• [Research paper](https://arxiv.org/abs/2210.11416) |
| [flan\-ul2\-20b](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=en#flan-ul2-20b) | Google | Yes | Class 3 | 4096 | • [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/google/flan-ul2?context=wx) <br>• [UL2 research paper](https://arxiv.org/abs/2205.05131v1) <br>• [Flan research paper](https://arxiv.org/abs/2210.11416) |
| [gpt\-neox\-20b](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=en#gpt-neox-20b) | EleutherAI | No | Class 3 | 8192 | • [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/eleutherai/gpt-neox-20b?context=wx) <br>• [Research paper](https://arxiv.org/abs/2204.06745) |
| [llama\-2\-13b\-chat](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=en#llama-2) | Meta | Yes | Class 1 | 4096 | • [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/meta-llama/llama-2-13b-chat?context=wx) <br>• [Research paper](https://arxiv.org/abs/2307.09288) |
| [llama\-2\-70b\-chat](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=en#llama-2) | Meta | Yes | Class 2 | 4096 | • [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/meta-llama/llama-2-70b-chat?context=wx) <br>• [Research paper](https://arxiv.org/abs/2307.09288) |
| [mpt\-7b\-instruct2](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=en#mpt-7b-instruct2) | MosaicML | Yes | Class 1 | 2048 | • [Model card](https://huggingface.co/ibm/mpt-7b-instruct2) <br>• [Website](https://www.mosaicml.com/blog/mpt-7b) |
| [mt0\-xxl\-13b](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=en#mt0-xxl-13b) | BigScience | Yes | Class 2 | 4096 | • [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/bigscience/mt0-xxl?context=wx) <br>• [Research paper](https://arxiv.org/abs/2211.01786) |
| [starcoder\-15\.5b](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=en#starcoder-15.5b) | BigCode | No | Class 2 | 8192 | • [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/bigcode/starcoder?context=wx) <br>• [Research paper](https://arxiv.org/abs/2305.06161) |

* For a list of which models are provided in each regional data center, see [Regional availability of foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/regional-datactr.html#data-centers)\.
* For information about the billing classes and rate limiting, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html#ru-metering)\.
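To complement the tables, the following is a minimal, illustrative sketch of prompting one of these models programmatically. It assumes the ibm-watson-machine-learning Python SDK; the endpoint URL, API key, project ID, and generation parameters are placeholders, and parameter names can vary between SDK releases, so treat this as a sketch rather than a definitive reference.

```python
# Minimal sketch: prompt a watsonx.ai foundation model from Python.
# Assumes the ibm-watson-machine-learning SDK (pip install ibm-watson-machine-learning).
# The API key, project ID, and endpoint URL below are placeholders.
from ibm_watson_machine_learning.foundation_models import Model
from ibm_watson_machine_learning.metanames import GenTextParamsMetaNames as GenParams

credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",  # regional watsonx.ai endpoint (placeholder)
    "apikey": "YOUR_IBM_CLOUD_API_KEY",          # placeholder
}

generate_params = {
    GenParams.DECODING_METHOD: "greedy",  # deterministic output
    GenParams.MAX_NEW_TOKENS: 200,        # output share of the context window
    GenParams.STOP_SEQUENCES: ["\n\n"],   # optional: stop generation at a chosen keyword
}

model = Model(
    model_id="google/flan-ul2",           # any model ID from the tables above
    params=generate_params,
    credentials=credentials,
    project_id="YOUR_PROJECT_ID",         # placeholder
)

prompt = "Summarize the following customer review in one sentence:\n..."
print(model.generate_text(prompt=prompt))
```

The same request can be reproduced interactively in the Prompt Lab, which also exposes the decoding method, token limits, and stop sequences.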
## Foundation model details ##

The available foundation models support a range of use cases for both natural languages and programming languages\. To see the types of tasks that these models can do, review and try the [sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html)\.

### flan\-t5\-xl\-3b ###

The flan\-t5\-xl\-3b model is provided by Google on Hugging Face\. This model is based on the pretrained text\-to\-text transfer transformer (T5) model and uses instruction fine\-tuning methods to achieve better zero\- and few\-shot performance\. The model is also fine\-tuned with chain\-of\-thought data to improve its ability to perform reasoning tasks\. **Note**: This foundation model can be tuned by using the Tuning Studio\. **Usage** : General use with zero\- or few\-shot prompts\. **Cost** : Class 1\. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)\. **Try it out** : [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html) **Size** : 3 billion parameters **Token limits** : Context window length (input \+ output): 4096 : **Note**: Lite plan output is limited to 700 **Supported natural languages** : English, German, French **Instruction tuning information** : The model was fine\-tuned on tasks that involve multiple\-step reasoning from chain\-of\-thought data in addition to traditional natural language processing tasks\. Details about the training data sets used are published\. **Model architecture** : Encoder\-decoder **License** : [Apache 2\.0 license](https://www.apache.org/licenses/LICENSE-2.0.txt) **Learn more** : [Research paper](https://arxiv.org/abs/2210.11416) : [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/google/flan-t5-xl?context=wx) : [Sample notebook: Tune a model to classify CFPB documents in watsonx](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/bf57e8896f3e50c638b5a378780f7502)

### flan\-t5\-xxl\-11b ###

The flan\-t5\-xxl\-11b model is provided by Google on Hugging Face\. This model is based on the pretrained text\-to\-text transfer transformer (T5) model and uses instruction fine\-tuning methods to achieve better zero\- and few\-shot performance\. The model is also fine\-tuned with chain\-of\-thought data to improve its ability to perform reasoning tasks\. **Usage** : General use with zero\- or few\-shot prompts\. **Cost** : Class 2\. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)\.
**Try it out** : [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html) : [Sample notebook: Use watsonx and Google flan\-t5\-xxl to generate advertising copy](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/73243d67b49a6e05f4cdf351b4b35e21?context=wx) : [Sample notebook: Use watsonx and LangChain to make a series of calls to a language model](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/c3dbf23a-9a56-4c4b-8ce5-5707828fc981?context=wx) **Size** : 11 billion parameters **Token limits** : Context window length (input \+ output): 4096 : **Note**: Lite plan output is limited to 700 **Supported natural languages** : English, German, French **Instruction tuning information** : The model was fine\-tuned on tasks that involve multiple\-step reasoning from chain\-of\-thought data in addition to traditional natural language processing tasks\. Details about the training data sets used are published\. **Model architecture** : Encoder\-decoder **License** : [Apache 2\.0 license](https://www.apache.org/licenses/LICENSE-2.0.txt) **Learn more** : [Research paper](https://arxiv.org/abs/2210.11416) : [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/google/flan-t5-xxl?context=wx)

### flan\-ul2\-20b ###

The flan\-ul2\-20b model is provided by Google on Hugging Face\. This model was trained by using the Unifying Language Learning Paradigms (UL2)\. The model is optimized for language generation, language understanding, text classification, question answering, common sense reasoning, long text reasoning, structured\-knowledge grounding, information retrieval, in\-context learning, zero\-shot prompting, and one\-shot prompting\. **Usage** : General use with zero\- or few\-shot prompts\. **Cost** : Class 3\. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)\. **Try it out** : [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html) : [Sample notebook: Use watsonx to summarize cybersecurity documents](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/1cb62d6a5847b8ed5cdb6531a08e9104?context=wx) : [Sample notebook: Use watsonx and LangChain to answer questions by using retrieval\-augmented generation (RAG)](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/d3a5f957-a93b-46cd-82c1-c8d37d4f62c6?context=wx&audience=wdp) : [Sample notebook: Use watsonx, Elasticsearch, and LangChain to answer questions (RAG)](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/ebeb9fc0-9844-4838-aff8-1fa1997d0c13?context=wx&audience=wdp) : [Sample notebook: Use watsonx, and Elasticsearch Python SDK to answer questions (RAG)](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/bdbc8ad4-9c1f-460f-99ee-5c3a1f374fa7?context=wx&audience=wdp) **Size** : 20 billion parameters **Token limits** : Context window length (input \+ output): 4096 : **Note**: Lite plan output is limited to 700 **Supported natural languages** : English **Instruction tuning information** : The flan\-ul2\-20b model is pretrained on the colossal, cleaned version of Common Crawl's web crawl corpus\. The model is fine\-tuned with multiple pretraining objectives to optimize it for various natural language processing tasks\. Details about the training data sets used are published\.
**Model architecture** : Encoder\-decoder **License** : [Apache 2\.0 license](https://www.apache.org/licenses/LICENSE-2.0.txt) **Learn more** : [Unifying Language Learning (UL2) research paper](https://arxiv.org/abs/2205.05131v1) : [Fine\-tuned Language Model (Flan) research paper](https://arxiv.org/abs/2210.11416) : [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/google/flan-ul2?context=wx) ### gpt\-neox\-20b ### The gpt\-neox\-20b model is provided by EleutherAI on Hugging Face\. This model is an autoregressive language model that is trained on diverse English\-language texts to support general\-purpose use cases\. GPT\-NeoX\-20B has not been fine\-tuned for downstream tasks\. **Usage** : Works best with few\-shot prompts\. Accepts special characters, which can be used for generating structured output\. : The data set used for training contains profanity and offensive text\. Be sure to curate any output from the model before using it in an application\. **Cost** : Class 3\. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)\. **Try it out** : [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html) **Size** : 20 billion parameters **Token limits** : Context window length (input \+ output): 8192 : **Note**: Lite plan output is limited to 700 **Supported natural languages** : English **Data used during training** : The gpt\-neox\-20b model was trained on the Pile\. For more information about the Pile, see [The Pile: An 800GB Dataset of Diverse Text for Language Modeling](https://arxiv.org/abs/2101.00027)\. The Pile was not deduplicated before being used for training\. **Model architecture** : Decoder **License** : [Apache 2\.0 license](https://www.apache.org/licenses/LICENSE-2.0.txt) **Learn more** : [Research paper](https://arxiv.org/abs/2204.06745) : [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/eleutherai/gpt-neox-20b?context=wx) ### granite\-13b\-chat\-v2 ### The granite\-13b\-chat\-v2 model is provided by IBM\. This model is optimized for dialogue use cases and works well with virtual agent and chat applications\. **Usage** : Generates dialogue output like a chatbot\. Uses a model\-specific prompt format\. Includes a keyword in its output that can be used as a stop sequence to produce succinct answers\. **Cost** : Class 2\. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)\. **Try it out** : [Sample prompt](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#sample7a) **Size** : 13 billion parameters **Token limits** : Context window length (input \+ output): 8192 **Supported natural languages** : English **Instruction tuning information** : The Granite family of models is trained on enterprise\-relevant data sets from five domains: internet, academic, code, legal, and finance\. Data used to train the models first undergoes IBM data governance reviews and is filtered of text that is flagged for hate, abuse, or profanity by the IBM\-developed HAP filter\. IBM shares information about the training methods and data sets used\. 
**Model architecture** : Decoder **License** : [Terms of use](https://www.ibm.com/support/customer/csol/terms/?id=i126-6883) : For more information about contractual protections related to IBM watsonx\.ai, see the [IBM watsonx\.ai service description](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747)\. **Learn more** : [Model information](https://www.ibm.com/blog/watsonx-tailored-generative-ai/) : [Research paper](https://www.ibm.com/downloads/cas/X9W4O6BM) : [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-chat-v2?context=wx) ### granite\-13b\-chat\-v1 ### The granite\-13b\-chat\-v1 model is provided by IBM\. This model is optimized for dialogue use cases and works well with virtual agent and chat applications\. **Usage** : Generates dialogue output like a chatbot\. Uses a model\-specific prompt format\. Includes a keyword in its output that can be used as a stop sequence to produce succinct answers\. **Cost** : Class 2\. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)\. **Try it out** : [Sample prompt](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#sample7a) **Size** : 13 billion parameters **Token limits** : Context window length (input \+ output): 8192 **Supported natural languages** : English **Instruction tuning information** : The Granite family of models is trained on enterprise\-relevant data sets from five domains: internet, academic, code, legal, and finance\. Data used to train the models first undergoes IBM data governance reviews and is filtered of text that is flagged for hate, abuse, or profanity by the IBM\-developed HAP filter\. IBM shares information about the training methods and data sets used\. **Model architecture** : Decoder **License** : [Terms of use](https://www.ibm.com/support/customer/csol/terms/?id=i126-6883) : For more information about contractual protections related to IBM watsonx\.ai, see the [IBM watsonx\.ai service description](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747)\. **Learn more** : [Model information](https://www.ibm.com/blog/watsonx-tailored-generative-ai/) : [Research paper](https://www.ibm.com/downloads/cas/X9W4O6BM) : [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-chat-v1?context=wx) ### granite\-13b\-instruct\-v2 ### The granite\-13b\-instruct\-v2 model is provided by IBM\. This model was trained with high\-quality finance data, and is a top\-performing model on finance tasks\. Financial tasks evaluated include: providing sentiment scores for stock and earnings call transcripts, classifying news headlines, extracting credit risk assessments, summarizing financial long\-form text, and answering financial or insurance\-related questions\. **Usage** : Supports extraction, summarization, and classification tasks\. Generates useful output for finance\-related tasks\. Uses a model\-specific prompt format\. Accepts special characters, which can be used for generating structured output\. **Cost** : Class 2\. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)\. 
**Try it out** : [Sample 3b: Generate a numbered list on a particular theme](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#sample3b) : [Sample 4c: Answer a question based on a document](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#sample4c) : [Sample 4d: Answer general knowledge questions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#sample4d) : [Sample notebook: Use watsonx and ibm/granite\-13b\-instruct to analyze car rental customer satisfaction from text](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/61c1e967-8d10-44bb-a846-cc1f27e9e69a?context=wx) **Size** : 13 billion parameters **Token limits** : Context window length (input \+ output): 8192 **Supported natural languages** : English **Instruction tuning information** : The Granite family of models is trained on enterprise\-relevant data sets from five domains: internet, academic, code, legal, and finance\. Data used to train the models first undergoes IBM data governance reviews and is filtered of text that is flagged for hate, abuse, or profanity by the IBM\-developed HAP filter\. IBM shares information about the training methods and data sets used\. **Model architecture** : Decoder **License** : [Terms of use](https://www.ibm.com/support/customer/csol/terms/?id=i126-6883) : For more information about contractual protections related to IBM watsonx\.ai, see the [IBM watsonx\.ai service description](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747)\. **Learn more** : [Model information](https://www.ibm.com/blog/watsonx-tailored-generative-ai/) : [Research paper](https://www.ibm.com/downloads/cas/X9W4O6BM) : [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-instruct-v2?context=wx) ### granite\-13b\-instruct\-v1 ### The granite\-13b\-instruct\-v1 model is provided by IBM\. This model was trained with high\-quality finance data, and is a top\-performing model on finance tasks\. Financial tasks evaluated include: providing sentiment scores for stock and earnings call transcripts, classifying news headlines, extracting credit risk assessments, summarizing financial long\-form text, and answering financial or insurance\-related questions\. **Usage** : Supports extraction, summarization, and classification tasks\. Generates useful output for finance\-related tasks\. Uses a model\-specific prompt format\. Accepts special characters, which can be used for generating structured output\. **Cost** : Class 2\. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)\. 
**Try it out** : [Sample 3b: Generate a numbered list on a particular theme](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#sample3b) : [Sample 4d: Answer general knowledge questions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#sample4d) : [Sample notebook: Use watsonx and ibm/granite\-13b\-instruct to analyze car rental customer satisfaction from text](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/61c1e967-8d10-44bb-a846-cc1f27e9e69a?context=wx) **Size** : 13 billion parameters **Token limits** : Context window length (input \+ output): 8192 **Supported natural languages** : English **Instruction tuning information** : The Granite family of models is trained on enterprise\-relevant data sets from five domains: internet, academic, code, legal, and finance\. Data used to train the models first undergoes IBM data governance reviews and is filtered of text that is flagged for hate, abuse, or profanity by the IBM\-developed HAP filter\. IBM shares information about the training methods and data sets used\. **Model architecture** : Decoder **License** : [Terms of use](https://www.ibm.com/support/customer/csol/terms/?id=i126-6883) : For more information about contractual protections related to IBM watsonx\.ai, see the [IBM watsonx\.ai service description](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747)\. **Learn more** : [Model information](https://www.ibm.com/blog/watsonx-tailored-generative-ai/) : [Research paper](https://www.ibm.com/downloads/cas/X9W4O6BM) : [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-instruct-v1?context=wx)

### Llama\-2 Chat ###

The Llama\-2 Chat model is provided by Meta on Hugging Face\. The fine\-tuned model is useful for chat generation\. The model is pretrained with publicly available online data and fine\-tuned using reinforcement learning from human feedback\. You can choose to use the 13 billion parameter or 70 billion parameter version of the model\. **Usage** : Generates dialogue output like a chatbot\. Uses a model\-specific prompt format (see the prompt template sketch after this section)\. **Cost** : 13b: Class 1 : 70b: Class 2 : For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)\. **Try it out** : [Sample prompt](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#sample7b) : [Sample notebook: Use watsonx and Meta llama\-2\-70b\-chat to answer questions about an article](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/b59922d8-678f-44e4-b5ef-18138890b444?context=wx) **Available sizes** : 13 billion parameters : 70 billion parameters **Token limits** : Context window length (input \+ output): 4096 : Lite plan output is limited as follows: : \- 70b version: 900 : \- 13b version: 2048 **Supported natural languages** : English **Instruction tuning information** : Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources\. The fine\-tuning data includes publicly available instruction data sets and more than one million new examples that were annotated by humans\. **Model architecture** : Llama 2 is an auto\-regressive decoder\-only language model that uses an optimized transformer architecture\.
The tuned versions use supervised fine\-tuning and reinforcement learning with human feedback\. **License** : [License](https://ai.meta.com/llama/license/) **Learn more** : [Research paper](https://arxiv.org/abs/2307.09288) : [13b Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/meta-llama/llama-2-13b-chat?context=wx) : [70b Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/meta-llama/llama-2-70b-chat?context=wx)
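Because the Llama\-2 Chat entry notes that the model uses a model\-specific prompt format, here is a small illustrative sketch of the chat template that Meta publishes for the Llama 2 chat models. The system and user messages are placeholders; confirm the exact format that the watsonx\.ai deployment expects in the model card before relying on it.

```python
# Illustrative sketch of the Llama 2 chat prompt template published by Meta.
# The system and user messages are placeholders; check the model card for the exact format.
system_message = "You are a helpful, concise assistant."
user_message = "What is a foundation model?"

llama2_prompt = (
    "[INST] <<SYS>>\n"
    f"{system_message}\n"
    "<</SYS>>\n\n"
    f"{user_message} [/INST]"
)
# Pass llama2_prompt as the prompt text when calling llama-2-13b-chat or llama-2-70b-chat.
```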
**Model architecture** : Encoder\-decoder **License** : [Apache 2\.0 license](https://www.apache.org/licenses/LICENSE-2.0.txt) **Learn more** : [Research paper](https://arxiv.org/abs/2211.01786) : [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/bigscience/mt0-xxl?context=wx) ### starcoder\-15\.5b ### The starcoder\-15\.5b model is provided by BigCode on Hugging Face\. This model can generate code and convert code from one programming language to another\. The model is meant to be used by developers to boost their productivity\. **Usage** : Code generation and code conversion : Note: The model output might include code that is taken directly from its training data, which can be licensed code that requires attribution\. **Cost** : Class 2\. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)\. **Try it out** : [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#code) : [Sample notebook: Use watsonx and BigCode starcoder\-15\.5b to generate code based on instruction](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/b5792ad4-555b-4b68-8b6f-ce368093fac6?context=wx) **Size** : 15\.5 billion parameters **Token limits** : Context window length (input \+ output): 8192 **Supported programming languages** : Over 80 programming languages, with an emphasis on Python\. **Data used during training** : This model was trained on over 80 programming languages from GitHub\. A filter was applied to exclude from the training data any licensed code or code that is marked with opt\-out requests\. Nevertheless, the model's output might include code from its training data that requires attribution\. The model was not instruction\-tuned\. Submitting input with only an instruction and no examples might result in poor model output\. **Model architecture** : Decoder **License** : [License](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement) **Learn more** : [Research paper](https://arxiv.org/abs/2305.06161) : [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/bigcode/starcoder?context=wx) **Parent topic:**[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html) <!-- </article "role="article" "> -->
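Every model listed above can also be inferenced programmatically instead of through the Prompt Lab. The following minimal sketch assumes the ibm-watson-machine-learning Python SDK; the `Model` class usage, the credential fields, and the parameter key names shown here are assumptions to verify against the Python library documentation, and the model ID follows the provider/name pattern that the model card links above use.

```python
# Minimal sketch: inference a deployed foundation model with the Python library.
# Assumptions: pip install ibm-watson-machine-learning, and the class, credential fields,
# and parameter key names below match the Python library documentation.
from ibm_watson_machine_learning.foundation_models import Model

credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",  # regional Watson Machine Learning endpoint
    "apikey": "<your IBM Cloud API key>",
}

# Decoding parameters mirror the settings that you would pick in the Prompt Lab.
generate_params = {
    "decoding_method": "greedy",
    "max_new_tokens": 100,
}

model = Model(
    model_id="ibm/granite-13b-instruct-v2",  # ID pattern assumed from the model card path
    credentials=credentials,
    params=generate_params,
    project_id="<your project ID>",
)

print(model.generate_text(prompt="Summarize the benefits of instruction-tuned foundation models."))
```

The other settings that the Prompt Lab exposes, such as stop sequences and temperature, can typically be passed in the same parameters dictionary.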
58C6D0A1C6DAD01E3F0F1748DC472C3DDCC07E43
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html?context=cdpaas&locale=en
Foundation models
Foundation models Build generative AI solutions with foundation models in IBM watsonx.ai. Foundation models are large AI models that have billions of parameters and are trained on terabytes of data. Foundation models can do various tasks, including text, code, or image generation, classification, conversation, and more. Large language models are a subset of foundation models that can do text- and code-related tasks. Watsonx.ai has a range of deployed large language models for you to try. For details, see [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html). Foundation model architecture Foundation models represent a fundamentally different model architecture and purpose for AI systems. The following diagram illustrates the difference between traditional AI models and foundation models. ![Comparison of traditional AI models to foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fm-overview-diagram.png) As shown in the diagram, traditional AI models specialize in specific tasks. Most traditional AI models are built by using machine learning, which requires a large, structured, well-labeled data set that encompasses a specific task that you want to tackle. Often these data sets must be sourced, curated, and labeled by hand, a job that requires people with domain knowledge and takes time. After it is trained, a traditional AI model can do a single task well. The traditional AI model uses what it learns from patterns in the training data to predict outcomes in unknown data. You can create machine learning models for your specific use cases with tools like AutoAI and Jupyter notebooks, and then deploy them. In contrast, foundation models are trained on large, diverse, unlabeled data sets and can be used for many different tasks. Foundation models were first used to generate text by calculating the most-probable next word in natural language translation tasks. However, model providers are learning that, when prompted with the right input, foundation models can do various other tasks well. Instead of creating your own foundation models, you use existing deployed models and engineer prompts to generate the results that you need. Methods of working with foundation models The possibilities and applications of foundation models are just starting to be discovered. Explore and validate use cases with foundation models in watsonx.ai to automate, simplify, and speed up existing processes or provide value in a new way. 
You can interact with foundation models in the following ways: * Engineer prompts and inference deployed foundation models directly by using the Prompt Lab * Inference deployed foundation models programmatically by using the Python library * Tune foundation models to return output in a certain style or format by using the Tuning Studio Learn more * [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) * [Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html) * [Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html) * [Security and privacy](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html) * [Model terms of use](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-disclaimer.html) * [Tokens](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html) * [Retrieval-augmented generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-rag.html) * [AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) Parent topic:[Analyzing data and working with models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html)
# Foundation models # Build generative AI solutions with foundation models in IBM watsonx\.ai\. Foundation models are large AI models that have billions of parameters and are trained on terabytes of data\. Foundation models can do various tasks, including text, code, or image generation, classification, conversation, and more\. Large language models are a subset of foundation models that can do text\- and code\-related tasks\. Watsonx\.ai has a range of deployed large language models for you to try\. For details, see [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html)\. ## Foundation model architecture ## Foundation models represent a fundamentally different model architecture and purpose for AI systems\. The following diagram illustrates the difference between traditional AI models and foundation models\. ![Comparison of traditional AI models to foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fm-overview-diagram.png) As shown in the diagram, traditional AI models specialize in specific tasks\. Most traditional AI models are built by using machine learning, which requires a large, structured, well\-labeled data set that encompasses a specific task that you want to tackle\. Often these data sets must be sourced, curated, and labeled by hand, a job that requires people with domain knowledge and takes time\. After it is trained, a traditional AI model can do a single task well\. The traditional AI model uses what it learns from patterns in the training data to predict outcomes in unknown data\. You can create machine learning models for your specific use cases with tools like AutoAI and Jupyter notebooks, and then deploy them\. In contrast, foundation models are trained on large, diverse, unlabeled data sets and can be used for many different tasks\. Foundation models were first used to generate text by calculating the most\-probable next word in natural language translation tasks\. However, model providers are learning that, when prompted with the right input, foundation models can do various other tasks well\. Instead of creating your own foundation models, you use existing deployed models and engineer prompts to generate the results that you need\. ## Methods of working with foundation models ## The possibilities and applications of foundation models are just starting to be discovered\. Explore and validate use cases with foundation models in watsonx\.ai to automate, simplify, and speed up existing processes or provide value in a new way\. 
You can interact with foundation models in the following ways: <!-- <ul> --> * Engineer prompts and inference deployed foundation models directly by using the Prompt Lab * Inference deployed foundation models programmatically by using the Python library * Tune foundation models to return output in a certain style or format by using the Tuning Studio <!-- </ul> --> ## Learn more ## <!-- <ul> --> * [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) * [Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html) * [Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html) * [Security and privacy](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html) * [Model terms of use](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-disclaimer.html) * [Tokens](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html) * [Retrieval\-augmented generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-rag.html) * [AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) <!-- </ul> --> **Parent topic:**[Analyzing data and working with models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html) <!-- </article "role="article" "> -->
78A8C07B83DF1B01276353D098E84F12304636E2
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html?context=cdpaas&locale=en
Prompt Lab
Prompt Lab In the Prompt Lab in IBM watsonx.ai, you can experiment with prompting different foundation models, explore sample prompts, and save and share your best prompts. You use the Prompt Lab to engineer effective prompts that you submit to deployed foundation models for inferencing. You do not use the Prompt Lab to create new foundation models. This video provides a visual method to learn the concepts and tasks in this documentation. Requirements If you signed up for watsonx.ai and you have a sandbox project, all requirements are met and you're ready to use the Prompt Lab. You must meet these requirements to use the Prompt Lab: * You must have a project. * You must have the Editor or Admin role in the project. * The project must have an associated Watson Machine Learning service instance. Otherwise, you are prompted to associate the service when you open the Prompt Lab. Creating and running a prompt To create and run a new prompt, complete the following steps: 1. From the [watsonx.ai home page](https://dataplatform.cloud.ibm.com/wx/home?context=wx), choose a project, and then click Experiment with foundation models and build prompts. 2. Select a model. 3. Enter a prompt. 4. If necessary, update model parameters or add prompt variables. 5. Click Generate. 6. To preserve your work, so you can reuse or share a prompt with collaborators in the current project, save your work as a project asset. For more information, see [Saving prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-save.html). To run a sample prompt, complete the following steps: 1. From the Sample prompts menu in the Prompt Lab, select a sample prompt. The prompt is opened in the editor and an appropriate model is selected. 2. Click Generate. Prompt editing options You type your prompt in the prompt editor. The prompt editor has the following modes: Freeform : You add your prompt in plain text. Your prompt text is sent to the model exactly as you typed it. : Quotation marks in your text are escaped with a backslash (\"). Newline characters are represented by \n. Apostrophes are escaped (it'\''s) so that they can be handled properly in the cURL command. Structured : You add parts of your prompt into the appropriate fields: : - Instruction: Add an instruction if it makes sense for your use case. An instruction is an imperative statement, such as Summarize the following article. : - Examples: Add one or more pairs of examples that contain the input and the corresponding output that you want. Providing a few example input-and-output pairs in your prompt is called few-shot prompting. If you need a specific prefix to the input or the output, you can replace the default labels, "Input:" or "Output:", with the labels you want to use. A space is added between the example label and the example text. : - Test your input: In the Try area, enter the final input of your prompt. : Structured mode is designed to help new users create effective prompts. Text from the fields is sent to the model in a template format. Model and prompt configuration options You must specify which model to prompt and can optionally set parameters that control the generated result. Model choices In the Prompt Lab, you can submit your prompt to any of the models that are supported by watsonx.ai. You can choose recently-used models from the drop-down list. Or you can click View all foundation models to view all the supported models, filter them by task, and read high-level information about the models.
If you tuned a foundation model by using the Tuning Studio and deployed the tuned model, your tuned model is also available for prompting from the Prompt Lab. Model parameters To control how the model generates output in response to your prompt, you can specify decoding parameters and stopping criteria. For more information, see [Model parameters for prompting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-model-parameters.html). Prompt variables To add flexibility to your prompts, you can define prompt variables. A prompt variable is a placeholder keyword that you include in the static text of your prompt at creation time and replace with text dynamically at run time. For more information, see [Building reusable prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html). AI guardrails When you set the AI guardrails switcher to On, harmful language is automatically removed from the input prompt text and from the output that is generated by the model. Specifically, any sentence in the input or output that contains harmful language is replaced with a message that says that potentially harmful text was removed. Prompt code If you want to run the prompt programmatically, you can view and copy the prompt code or use the Python library. View code When you click the View code icon (![</>](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/code.svg)), a cURL command is displayed that you can call from outside the Prompt Lab to submit the current prompt and parameters to the selected model and get a generated response. In the command, there is a placeholder for an IBM Cloud IAM token. For information about generating the access token, see [Generating an IBM Cloud IAM token](https://cloud.ibm.com/docs/account?topic=account-iamtoken_from_apikey). Programmatic alternative to the Prompt Lab The Prompt Lab graphical interface is a great place to experiment and iterate with your prompts. However, you can also prompt foundation models in watsonx.ai programmatically by using the Python library. For details, see [Foundation models Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html). Available prompts In the side panel, you can access sample prompts, your session history, and saved prompts. Samples A collection of sample prompts are available in the Prompt Lab. The samples demonstrate effective prompt text and model parameters for different tasks, including classification, extraction, content generation, question answering, and summarization. When you click a sample, the prompt text loads in the editor, an appropriate model is selected, and optimal parameters are configured automatically. History As you experiment with different prompt text, model choices, and parameters, the details are captured in the session history each time you submit your prompt. To load a previous prompt, click the entry in the history and then click Restore. Saved From the Saved prompt templates menu, you can load any prompts that you saved to the current project as a prompt template asset. 
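The View code option described earlier produces a cURL command. If you prefer to issue the same request from Python, a rough equivalent is sketched below; the endpoint path, version query string, payload field names, and response shape are assumptions based on the REST request that View code generates, so copy the exact values from the View code output rather than relying on this example.

```python
# Rough Python equivalent of the cURL command shown by View code.
# The endpoint, payload fields, and response keys are assumptions; copy the exact
# request from the View code output in the Prompt Lab.
import requests

IAM_TOKEN = "<IBM Cloud IAM access token>"  # same placeholder that the cURL command contains

response = requests.post(
    "https://us-south.ml.cloud.ibm.com/ml/v1-beta/generation/text?version=2023-05-29",
    headers={
        "Authorization": f"Bearer {IAM_TOKEN}",
        "Content-Type": "application/json",
        "Accept": "application/json",
    },
    json={
        "model_id": "google/flan-ul2",
        "input": "Summarize the following transcript.\n\nTranscript: ...\n\nSummary:",
        "parameters": {"decoding_method": "greedy", "max_new_tokens": 60},
        "project_id": "<your project ID>",
    },
    timeout=60,
)
response.raise_for_status()
# Response shape assumed; inspect response.json() if the keys differ.
print(response.json()["results"][0]["generated_text"])
```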
Learn more * [Security and privacy for foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html) * [Building reusable prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html) * [Saving prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-save.html) * [Model parameters for prompting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-model-parameters.html) * [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html) * [Prompt tips](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-tips.html) * Try these tutorials: * [Prompt a foundation model using Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html) * [Prompt a foundation model with the retrieval-augmented generation pattern](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html) * Watch these other prompt lab videos Parent topic:[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html)
# Prompt Lab # In the Prompt Lab in IBM watsonx\.ai, you can experiment with prompting different foundation models, explore sample prompts, and save and share your best prompts\. You use the Prompt Lab to engineer effective prompts that you submit to deployed foundation models for inferencing\. You do not use the Prompt Lab to create new foundation models\. This video provides a visual method to learn the concepts and tasks in this documentation\. ## Requirements ## If you signed up for watsonx\.ai and you have a sandbox project, all requirements are met and you're ready to use the Prompt Lab\. You must meet these requirements to use the Prompt Lab: <!-- <ul> --> * You must have a project\. * You must have the **Editor** or **Admin** role in the project\. * The project must have an associated Watson Machine Learning service instance\. Otherwise, you are prompted to associate the service when you open the Prompt Lab\. <!-- </ul> --> ## Creating and running a prompt ## To create and run a new prompt, complete the following steps: <!-- <ol> --> 1. From the [watsonx\.ai home page](https://dataplatform.cloud.ibm.com/wx/home?context=wx), choose a project, and then click **Experiment with foundation models and build prompts**\. <!-- </ol> --> <!-- <ol> --> 1. Select a model\. 2. Enter a prompt\. 3. If necessary, update model parameters or add prompt variables\. 4. Click **Generate**\. 5. To preserve your work, so you can reuse or share a prompt with collaborators in the current project, save your work as a project asset\. For more information, see [Saving prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-save.html)\. <!-- </ol> --> To run a sample prompt, complete the following steps: <!-- <ol> --> 1. From the *Sample prompts* menu in the Prompt Lab, select a sample prompt\. The prompt is opened in the editor and an appropriate model is selected. 2. Click **Generate**\. <!-- </ol> --> ## Prompt editing options ## You type your prompt in the prompt editor\. The prompt editor has the following modes: **Freeform** : You add your prompt in plain text\. Your prompt text is sent to the model exactly as you typed it\. : Quotation marks in your text are escaped with a backslash (`\"`)\. Newline characters are represented by `\n`\. Apostrophes are escaped (`it'\''s`) so that they can be handled properly in the cURL command\. **Structured** : You add parts of your prompt into the appropriate fields: : \- **Instruction**: Add an instruction if it makes sense for your use case\. An instruction is an imperative statement, such as *Summarize the following article*\. : \- **Examples**: Add one or more pairs of examples that contain the input and the corresponding output that you want\. Providing a few example input\-and\-output pairs in your prompt is called *few\-shot prompting*\. If you need a specific prefix to the input or the output, you can replace the default labels, "Input:" or "Output:", with the labels you want to use\. A space is added between the example label and the example text\. : \- **Test your input**: In the *Try* area, enter the final input of your prompt\. : Structured mode is designed to help new users create effective prompts\. Text from the fields is sent to the model in a template format\. ## Model and prompt configuration options ## You must specify which model to prompt and can optionally set parameters that control the generated result\. 
### Model choices ### In the Prompt Lab, you can submit your prompt to any of the models that are supported by watsonx\.ai\. You can choose recently\-used models from the drop\-down list\. Or you can click **View all foundation models** to view all the supported models, filter them by task, and read high\-level information about the models\. If you tuned a foundation model by using the Tuning Studio and deployed the tuned model, your tuned model is also available for prompting from the Prompt Lab\. ### Model parameters ### To control how the model generates output in response to your prompt, you can specify decoding parameters and stopping criteria\. For more information, see [Model parameters for prompting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-model-parameters.html)\. ### Prompt variables ### To add flexibility to your prompts, you can define prompt variables\. A prompt variable is a placeholder keyword that you include in the static text of your prompt at creation time and replace with text dynamically at run time\. For more information, see [Building reusable prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html)\. ### AI guardrails ### When you set the **AI guardrails** switcher to **On**, harmful language is automatically removed from the input prompt text and from the output that is generated by the model\. Specifically, any sentence in the input or output that contains harmful language is replaced with a message that says that potentially harmful text was removed\. ## Prompt code ## If you want to run the prompt programmatically, you can view and copy the prompt code or use the Python library\. ### View code ### When you click the **View code** icon (![</>](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/code.svg)), a cURL command is displayed that you can call from outside the Prompt Lab to submit the current prompt and parameters to the selected model and get a generated response\. In the command, there is a placeholder for an IBM Cloud IAM token\. For information about generating the access token, see [Generating an IBM Cloud IAM token](https://cloud.ibm.com/docs/account?topic=account-iamtoken_from_apikey)\. ### Programmatic alternative to the Prompt Lab ### The Prompt Lab graphical interface is a great place to experiment and iterate with your prompts\. However, you can also prompt foundation models in watsonx\.ai programmatically by using the Python library\. For details, see [Foundation models Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html)\. ## Available prompts ## In the side panel, you can access sample prompts, your session history, and saved prompts\. ### Samples ### A collection of sample prompts are available in the Prompt Lab\. The samples demonstrate effective prompt text and model parameters for different tasks, including classification, extraction, content generation, question answering, and summarization\. When you click a sample, the prompt text loads in the editor, an appropriate model is selected, and optimal parameters are configured automatically\. ### History ### As you experiment with different prompt text, model choices, and parameters, the details are captured in the session history each time you submit your prompt\. To load a previous prompt, click the entry in the history and then click **Restore**\. 
### Saved ### From the *Saved prompt templates* menu, you can load any prompts that you saved to the current project as a prompt template asset\. ## Learn more ## <!-- <ul> --> * [Security and privacy for foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html) * [Building reusable prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html) * [Saving prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-save.html) * [Model parameters for prompting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-model-parameters.html) * [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html) * [Prompt tips](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-tips.html) * Try these tutorials: <!-- <ul> --> * [Prompt a foundation model using Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html) * [Prompt a foundation model with the retrieval-augmented generation pattern](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html) <!-- </ul> --> <!-- </ul> --> <!-- <ul> --> * Watch these other prompt lab videos <!-- </ul> --> **Parent topic:**[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html) <!-- </article "role="article" "> -->
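The View code section above notes that the generated cURL command contains a placeholder for an IBM Cloud IAM token. One common way to obtain such a token from an API key is the IBM Cloud IAM token endpoint; the sketch below assumes that flow, so verify the details against the linked IBM Cloud documentation.

```python
# Exchange an IBM Cloud API key for an IAM access token
# (verify the endpoint and grant type against the linked IAM documentation).
import requests

def get_iam_token(api_key: str) -> str:
    response = requests.post(
        "https://iam.cloud.ibm.com/identity/token",
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        data={
            "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
            "apikey": api_key,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["access_token"]

# token = get_iam_token("<your IBM Cloud API key>")  # paste into the cURL command from View code
```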
E5D702E67E93752155510B56A3B2F464E190EBA2
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=en
Sample foundation model prompts for common tasks
Sample foundation model prompts for common tasks Try these samples to learn how different prompts can guide foundation models to do common tasks. How to use this topic Explore the sample prompts in this topic: * Copy and paste the prompt text and input parameter values into the Prompt Lab in IBM watsonx.ai * See what text is generated. * See how different models generate different output. * Change the prompt text and parameters to see how results vary. There is no one right way to prompt foundation models. But patterns have been found, in academia and industry, that work fairly reliably. Use the samples in this topic to build your skills and your intuition about prompt engineering through experimentation. This video provides a visual method to learn the concepts and tasks in this documentation. Video chapters [ 0:11 ] Introduction to prompts and Prompt Lab [ 0:33 ] Key concept: Everything is text completion [ 1:34 ] Useful prompt pattern: Few-shot prompt [ 1:58 ] Stopping criteria: Max tokens, stop sequences [ 3:32 ] Key concept: Fine-tuning [ 4:32 ] Useful prompt pattern: Zero-shot prompt [ 5:32 ] Key concept: Be flexible, try different prompts [ 6:14 ] Next steps: Experiment with sample prompts Samples overview You can find samples that prompt foundation models to generate output that supports the following tasks: * [Classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=enclassification) * [Extraction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=enextraction) * [Generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=engeneration) * [Question answering (QA)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=enqa) * [Summarization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensummarization) * [Code generation and conversion](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=encode) * [Dialogue](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=endialogue) The following table shows the foundation models that are used in task-specific samples. A checkmark indicates that the model is used in a sample for the associated task. Table 1. Models used in samples for certain tasks Model Classification Extraction Generation QA Summarization Coding Dialogue flan-t5-xxl-11b ✓ ✓ flan-ul2-20b ✓ ✓ ✓ gpt-neox-20b ✓ ✓ ✓ granite-13b-chat-v1 ✓ granite-13b-instruct-v1 ✓ ✓ granite-13b-instruct-v2 ✓ ✓ ✓ llama-2 chat ✓ mpt-7b-instruct2 ✓ ✓ mt0-xxl-13b ✓ ✓ starcoder-15.5b ✓ The following table summarizes the available sample prompts. Table 2. 
List of sample prompts Scenario Prompt editor Prompt format Model Decoding Notes [Sample 1a: Classify a message](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample1a) Freeform Zero-shot * mt0-xxl-13b <br>* flan-t5-xxl-11b <br>* flan-ul2-20b Greedy * Uses the class names as stop sequences to stop the model after it prints the class name [Sample 1b: Classify a message](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample1b) Freeform Few-shot * gpt-neox-20b <br>* mpt-7b-instruct Greedy * Uses the class names as stop sequences [Sample 1c: Classify a message](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample1c) Structured Few-shot * gpt-neox-20b <br>* mpt-7b-instruct Greedy * Uses the class names as stop sequences [Sample 2a: Extract details from a complaint](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample2a) Freeform Zero-shot * flan-ul2-20b <br>* granite-13b-instruct-v2 Greedy [Sample 3a: Generate a numbered list on a particular theme](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample3a) Freeform Few-shot * gpt-neox-20b Sampling * Generates formatted output <br>* Uses two newline characters as a stop sequence to stop the model after one list [Sample 3b: Generate a numbered list on a particular theme](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample3b) Structured Few-shot * gpt-neox-20b Sampling * Generates formatted output. <br>* Uses two newline characters as a stop sequence [Sample 3c: Generate a numbered list on a particular theme](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample3c) Freeform Zero-shot * granite-13b-instruct-v1 <br>* granite-13b-instruct-v2 Greedy * Generates formatted output [Sample 4a: Answer a question based on an article](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample4a) Freeform Zero-shot * mt0-xxl-13b <br>* flan-t5-xxl-11b <br>* flan-ul2-20b Greedy * Uses a period "." as a stop sequence to cause the model to return only a single sentence [Sample 4b: Answer a question based on an article](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample4b) Structured Zero-shot * mt0-xxl-13b <br>* flan-t5-xxl-11b <br>* flan-ul2-20b Greedy * Uses a period "." 
as a stop sequence <br>* Generates results for multiple inputs at once [Sample 4c: Answer a question based on a document](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample4c) Freeform Zero-shot * granite-13b-instruct-v2 Greedy [Sample 4d: Answer general knowledge questions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample4d) Freeform Zero-shot * granite-13b-instruct-v1 Greedy [Sample 5a: Summarize a meeting transcript](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample5a) Freeform Zero-shot * flan-t5-xxl-11b <br>* flan-ul2-20b <br>* mpt-7b-instruct2 Greedy [Sample 5b: Summarize a meeting transcript](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample5b) Freeform Few-shot * gpt-neox-20b Greedy [Sample 5c: Summarize a meeting transcript](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample5c) Structured Few-shot * gpt-neox-20b Greedy * Generates formatted output <br>* Uses two newline characters as a stop sequence to stop the model after one list [Sample 6a: Generate programmatic code from instructions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample6a) Freeform Few-shot * starcoder-15.5b Greedy * Generates programmatic code as output <br>* Uses <end of code> as a stop sequence [Sample 6b: Convert code from one programming language to another](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample6b) Freeform Few-shot * starcoder-15.5b Greedy * Generates programmatic code as output <br>* Uses <end of code> as a stop sequence [Sample 7a: Converse in a dialogue](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample7a) Freeform Custom structure * granite-13b-chat-v1 Greedy * Generates dialogue output like a chatbot <br>* Uses a special token that is named END_KEY as a stop sequence [Sample 7b: Converse in a dialogue](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample7b) Freeform Custom structure * llama-2 chat Greedy * Generates dialogue output like a chatbot <br>* Uses a model-specific prompt format Classification Classification is useful for predicting data in distinct categories. Classifications can be binary, with two classes of data, or multi-class. A classification task is useful for categorizing information, such as customer feedback, so that you can manage or act on the information more efficiently. Sample 1a: Classify a message Scenario: Given a message that is submitted to a customer-support chatbot for a cloud software company, classify the customer's message as either a question or a problem. Depending on the class assignment, the chat is routed to the correct support team for the issue type. Model choice Models that are instruction-tuned can generally complete this task with this sample prompt. Suggestions: mt0-xxl-13b, flan-t5-xxl-11b, or flan-ul2-20b Decoding Greedy. The model must return one of the specified class names; it cannot be creative and make up new classes. Stopping criteria * Specify two stop sequences: "Question" and "Problem". 
After the model generates either of those words, it should stop. * With such short output, the Max tokens parameter can be set to 5. Prompt text Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. Classify this customer message into one of two classes: Question, Problem. Class name: Question Description: The customer is asking a technical question or a how-to question about our products or services. Class name: Problem Description: The customer is describing a problem they are having. They might say they are trying something, but it's not working. They might say they are getting an error or unexpected results. Message: I'm having trouble registering for a new account. Class name: Sample 1b: Classify a message Scenario: Given a message that is submitted to a customer-support chatbot for a cloud software company, classify the customer's message as either a question or a problem description. Based on the class type, the chat can be routed to the correct support team. Model choice With few-shot examples of both classes, most models can complete this task well, including: gpt-neox-20b and mpt-7b-instruct. Decoding Greedy. The model must return one of the specified class names; it cannot be creative and make up new classes. Stopping criteria * Specify two stop sequences: "Question" and "Problem". After the model generates either of those words, it should stop. * With such short output, the Max tokens parameter can be set to 5. Prompt text Paste this few-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. Message: When I try to log in, I get an error. Class name: Problem Message: Where can I find the plan prices? Class name: Question Message: What is the difference between trial and paygo? Class name: Question Message: The registration page crashed, and now I can't create a new account. Class name: Problem Message: What regions are supported? Class name: Question Message: I can't remember my password. Class name: Problem Message: I'm having trouble registering for a new account. Sample 1c: Classify a message Scenario: Given a message that is submitted to a customer-support chatbot for a cloud software company, classify the customer's message as either a question or a problem description. Based on the class type, the chat can be routed to the correct support team. Model choice With few-shot examples of both classes, most models can complete this task well, including: gpt-neox-20b and mpt-7b-instruct. Decoding Greedy. The model must return one of the specified class names, not be creative and make up new classes. Stopping criteria * Specify two stop sequences: "Question" and "Problem". After the model generates either of those words, it should stop. * With such short output, the Max tokens parameter can be set to 5. Set up section Paste these headers and examples into the Examples area of the Set up section: Table 2. Classification few-shot examples Message: Class name: When I try to log in, I get an error. Problem Where can I find the plan prices? Question What is the difference between trial and paygo? Question The registration page crashed, and now I can't create a new account. Problem What regions are supported? Question I can't remember my password. Problem Try section Paste this message in the Try section: I'm having trouble registering for a new account. 
Select the model and set parameters, then click Generate to see the result. Extracting details Extraction tasks can help you to find key terms or mentions in data based on the semantic meaning of words rather than simple text matches. Sample 2a: Extract details from a complaint Scenario: Given a complaint from a customer who had trouble booking a flight on a reservation website, identify the factors that contributed to this customer's unsatisfactory experience. Model choices flan-ul2-20b, granite-13b-instruct-v2 Decoding Greedy. We need the model to return words that are in the input; the model cannot be creative and make up new words. Stopping criteria The list of extracted factors will not be long, so set the Max tokens parameter to 50. Prompt text Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. From the following customer complaint, extract all the factors that caused the customer to be unhappy. Customer complaint: I just tried to book a flight on your incredibly slow website. All the times and prices were confusing. I liked being able to compare the amenities in economy with business class side by side. But I never got to reserve a seat because I didn't understand the seat map. Next time, I'll use a travel agent! Numbered list of all the factors that caused the customer to be unhappy: Generating natural language Generation tasks are what large language models do best. Your prompts can help guide the model to generate useful language. Sample 3a: Generate a numbered list on a particular theme Scenario: Generate a numbered list on a particular theme. Model choice gpt-neox-20b was trained to recognize and handle special characters, such as the newline character, well. This model is a good choice when you want your generated text to be formatted a specific way with special characters. Decoding Sampling. This is a creative task. Set the following parameters: * Temperature: 0.7 * Top P: 1 * Top K: 50 * Random seed: 9045 (To get different output each time you click Generate, specify a different value for the Random seed parameter or clear the parameter.) Stopping criteria * To make sure the model stops generating text after one list, specify a stop sequence of two newline characters. To do that, click the Stop sequence text box, press the Enter key twice, then click Add sequence. * The list will not be very long, so set the Max tokens parameter to 50. Prompt text Paste this few-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. What are 4 types of dog breed? 1. Poodle 2. Dalmatian 3. Golden retriever 4. Bulldog What are 3 ways to incorporate exercise into your day? 1. Go for a walk at lunch 2. Take the stairs instead of the elevator 3. Park farther away from your destination What are 4 kinds of vegetable? 1. Spinach 2. Carrots 3. Broccoli 4. Cauliflower What are the 3 primary colors? 1. Red 2. Green 3. Blue What are 3 ingredients that are good on pizza? Sample 3b: Generate a numbered list on a particular theme Scenario: Generate a numbered list on a particular theme. Model choice gpt-neox-20b was trained to recognize and handle special characters, such as the newline character, well. This model is a good choice when you want your generated text to be formatted in a specific way with special characters. Decoding Sampling. This scenario is a creative one. 
Set the following parameters: * Temperature: 0.7 * Top P: 1 * Top K: 50 * Random seed: 9045 (To generate different results, specify a different value for the Random seed parameter or clear the parameter.) Stopping criteria * To make sure that the model stops generating text after one list, specify a stop sequence of two newline characters. To do that, click in the Stop sequence text box, press the Enter key twice, then click Add sequence. * The list will not be long, so set the Max tokens parameter to 50. Set up section Paste these headers and examples into the Examples area of the Set up section: Table 3. Generation few-shot examples Input: Output: What are 4 types of dog breed? 1. Poodle 2. Dalmatian 3. Golden retriever 4. Bulldog What are 3 ways to incorporate exercise into your day? 1. Go for a walk at lunch 2. Take the stairs instead of the elevator 3. Park farther away from your destination What are 4 kinds of vegetable? 1. Spinach 2. Carrots 3. Broccoli 4. Cauliflower What are the 3 primary colors? 1. Red 2. Green 3. Blue Try section Paste this input in the Try section: What are 3 ingredients that are good on pizza? Select the model and set parameters, then click Generate to see the result. Sample 3c: Generate a numbered list on a particular theme Scenario: Ask the model to play devil's advocate. Describe a potential action and ask the model to list possible downsides or risks that are associated with the action. Model choice Similar to gpt-neox-20b, the granite-13b-instruct model was trained to recognize and handle special characters, such as the newline character, well. The granite-13b-instruct-v2 or granite-13b-instruct-v1 model is a good choice when you want your generated text to be formatted in a specific way with special characters. Decoding Greedy. The model must return the most predictable content based on what's in the prompt; the model cannot be too creative. Stopping criteria The list of reasons might run several sentences, so set the Max tokens parameter to 60. Prompt text Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. You are playing the role of devil's advocate. Argue against the proposed plans. List 3 detailed, unique, compelling reasons why moving forward with the plan would be a bad choice. Consider all types of risks. Plan we are considering: Extend our store hours. Three problems with this plan are: 1. We'll have to pay more for staffing. 2. Risk of theft increases late at night. 3. Clerks might not want to work later hours. Plan we are considering: Open a second location for our business. Three problems with this plan are: 1. Managing two locations will be more than twice as time-consuming than managed just one. 2. Creating a new location doesn't guarantee twice as many customers. 3. A new location means added real estate, utility, and personnel expenses. Plan we are considering: Refreshing our brand image by creating a new logo. Three problems with this plan are: Question answering Question-answering tasks are useful in help systems and other scenarios where frequently asked or more nuanced questions can be answered from existing content. To help the model return factual answers, implement the retrieval-augmented generation pattern. For more information, see [Retrieval-augmented generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-rag.html).
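The sampling settings in samples 3a and 3b map directly onto generation parameters when you move a prompt from the Prompt Lab to code. A minimal sketch follows; the key names assume the watsonx.ai generation parameters, so verify the exact spelling against the Model parameters documentation.

```python
# Decoding settings from samples 3a and 3b expressed as generation parameters
# (key names are assumptions; check the Model parameters documentation).
sampling_params = {
    "decoding_method": "sample",
    "temperature": 0.7,
    "top_p": 1,
    "top_k": 50,
    "random_seed": 9045,         # omit for different output on every call
    "stop_sequences": ["\n\n"],  # two newline characters: stop after one list
    "max_new_tokens": 50,
}
```

Pass a dictionary like this as the parameters value in your API call or as the params argument in the Python library when you inference gpt-neox-20b programmatically.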
Sample 4a: Answer a question based on an article Scenario: The website for an online seed catalog has many articles to help customers plan their garden and ultimately select which seeds to purchase. A new widget is being added to the website to answer customer questions based on the contents of the article the customer is viewing. Given a question that is related to an article, answer the question based on the article. Model choice Models that are instruction-tuned, such as mt0-xxl-13b, flan-t5-xxl-11b, or flan-ul2-20b, can generally complete this task with this sample prompt. Decoding Greedy. The answers must be grounded in the facts in the article, and if there is no good answer in the article, the model should not be creative and make up an answer. Stopping criteria To cause the model to return a one-sentence answer, specify a period "." as a stop sequence. The Max tokens parameter can be set to 50. Prompt text Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. Article: Tomatoes are one of the most popular plants for vegetable gardens. Tip for success: If you select varieties that are resistant to disease and pests, growing tomatoes can be quite easy. For experienced gardeners looking for a challenge, there are endless heirloom and specialty varieties to cultivate. Tomato plants come in a range of sizes. There are varieties that stay very small, less than 12 inches, and grow well in a pot or hanging basket on a balcony or patio. Some grow into bushes that are a few feet high and wide, and can be grown is larger containers. Other varieties grow into huge bushes that are several feet wide and high in a planter or garden bed. Still other varieties grow as long vines, six feet or more, and love to climb trellises. Tomato plants do best in full sun. You need to water tomatoes deeply and often. Using mulch prevents soil-borne disease from splashing up onto the fruit when you water. Pruning suckers and even pinching the tips will encourage the plant to put all its energy into producing fruit. Answer the following question using only information from the article. Answer in a complete sentence, with proper capitalization and punctuation. If there is no good answer in the article, say "I don't know". Question: Why should you use mulch when growing tomatoes? Answer: You can experiment with asking other questions too, such as: * How large do tomato plants get? * Do tomato plants prefer shade or sun? * Is it easy to grow tomatoes? Try out-of-scope questions too, such as: * How do you grow cucumbers? Sample 4b: Answer a question based on an article Scenario: The website for an online seed catalog has many articles to help customers plan their garden and ultimately select which seeds to purchase. A new widget is being added to the website to answer customer questions based on the contents of the article the customer is viewing. Given a question related to a particular article, answer the question based on the article. Model choice Models that are instruction-tuned, such as mt0-xxl-13b, flan-t5-xxl-11b, or flan-ul2-20b, can generally complete this task with this sample prompt. Decoding Greedy. The answers must be grounded in the facts in the article, and if there is no good answer in the article, the model should not be creative and make up an answer. Stopping criteria To cause the model to return a one-sentence answer, specify a period "." as a stop sequence. 
The Max tokens parameter can be set to 50. Set up section Paste this text into the Instruction area of the Set up section: Article: Tomatoes are one of the most popular plants for vegetable gardens. Tip for success: If you select varieties that are resistant to disease and pests, growing tomatoes can be quite easy. For experienced gardeners looking for a challenge, there are endless heirloom and specialty varieties to cultivate. Tomato plants come in a range of sizes. There are varieties that stay very small, less than 12 inches, and grow well in a pot or hanging basket on a balcony or patio. Some grow into bushes that are a few feet high and wide, and can be grown is larger containers. Other varieties grow into huge bushes that are several feet wide and high in a planter or garden bed. Still other varieties grow as long vines, six feet or more, and love to climb trellises. Tomato plants do best in full sun. You need to water tomatoes deeply and often. Using mulch prevents soil-borne disease from splashing up onto the fruit when you water. Pruning suckers and even pinching the tips will encourage the plant to put all its energy into producing fruit. Answer the following question using only information from the article. Answer in a complete sentence, with proper capitalization and punctuation. If there is no good answer in the article, say "I don't know". Try section In the Try section, add an extra test row so you can paste each of these two questions in a separate row: Why should you use mulch when growing tomatoes? How do you grow cucumbers? Select the model and set parameters, then click Generate to see two results. Sample 4c: Answer a question based on a document Scenario: You are creating a chatbot that can answer user questions. When a user asks a question, you want the agent to answer the question with information from a specific document. Model choice Models that are instruction-tuned, such as granite-13b-instruct-v2, can complete the task with this sample prompt. Decoding Greedy. The answers must be grounded in the facts in the document, and if there is no good answer in the article, the model should not be creative and make up an answer. Stopping criteria Use a Max tokens parameter of 50. Prompt text Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. Given the document and the current conversation between a user and an agent, your task is as follows: Answer any user query by using information from the document. The response should be detailed. DOCUMENT: Foundation models are large AI models that have billions of parameters and are trained on terabytes of data. Foundation models can do various tasks, including text, code, or image generation, classification, conversation, and more. Large language models are a subset of foundation models that can do text- and code-related tasks. DIALOG: USER: What are foundation models? Sample 4d: Answer general knowledge questions Scenario: Answer general questions about finance. Model choice The granite-13b-instruct-v1 model can be used for multiple tasks, including text generation, summarization, question and answering, classification, and extraction. Decoding Greedy. This sample is answering questions, so we don't want creative output. Stopping criteria Set the Max tokens parameter to 200 so the model can return a complete answer. 
Prompt text The model was tuned for question-answering with examples in the following format: <|user|> content of the question <|assistant|> new line for the model's answer. You can use the exact syntax <|user|> and <|assistant|> in the lines before and after the question or you can replace the values with equivalent terms, such as User and Assistant. If you're using version 1, do not include any trailing white spaces after the <|assistant|> label, and be sure to add a new line. Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. <|user|> Tell me about interest rates <|assistant|> After the model generates an answer, you can ask a follow-up question. The model uses information from the previous question when it generates a response. <|user|> Who sets it? <|assistant|> The model retains information from a previous question when it answers a follow-up question, but it is not optimized to support an extended dialogue. Note: When you ask a follow-up question, the previous question is submitted again, which adds to the number of tokens that are used. Summarization Summarization tasks save you time by condensing large amounts of text into a few key pieces of information. Sample 5a: Summarize a meeting transcript Scenario: Given a meeting transcript, summarize the main points as meeting notes so those notes can be shared with teammates who did not attend the meeting. Model choice Models that are instruction-tuned can generally complete this task with this sample prompt. Suggestions: flan-t5-xxl-11b, flan-ul2-20b, or mpt-7b-instruct2. Decoding Greedy. The model must return the most predictable content based on what's in the prompt; the model cannot be too creative. Stopping criteria The summary might run several sentences, so set the Max tokens parameter to 60. Prompt text Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. Summarize the following transcript. Transcript: 00:00 [alex] Let's plan the team party! 00:10 [ali] How about we go out for lunch at the restaurant? 00:21 [sam] Good idea. 00:47 [sam] Can we go to a movie too? 01:04 [alex] Maybe golf? 01:15 [sam] We could give people an option to do one or the other. 01:29 [alex] I like this plan. Let's have a party! Summary: Sample 5b: Summarize a meeting transcript Scenario: Given a meeting transcript, summarize the main points as meeting notes so those notes can be shared with teammates who did not attend the meeting. Model choice With few-shot examples, most models can complete this task well. Try: gpt-neox-20b. Decoding Greedy. The model must return the most predictable content based on what's in the prompt, not be too creative. Stopping criteria * To make sure that the model stops generating text after the summary, specify a stop sequence of two newline characters. To do that, click in the Stop sequence text box, press the Enter key twice, then click Add sequence. * Set the Max tokens parameter to 60. Prompt text Paste this few-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. Transcript: 00:00 [sam] I wanted to share an update on project X today. 00:15 [sam] Project X will be completed at the end of the week. 00:30 [erin] That's great! 00:35 [erin] I heard from customer Y today, and they agreed to buy our product. 
00:45 [alex] Customer Z said they will too. 01:05 [sam] Great news, all around. Summary: Sam shared an update that project X will be complete at the end of the week. Erin said customer Y will buy our product. And Alex said customer Z will buy our product too. Transcript: 00:00 [ali] The goal today is to agree on a design solution. 00:12 [alex] I think we should consider choice 1. 00:25 [ali] I agree 00:40 [erin] Choice 2 has the advantage that it will take less time. 01:03 [alex] Actually, that's a good point. 01:30 [ali] So, what should we do? 01:55 [alex] I'm good with choice 2. 02:20 [erin] Me too. 02:45 [ali] Done! Summary: Alex suggested considering choice 1. Erin pointed out choice two will take less time. The team agreed with choice 2 for the design solution. Transcript: 00:00 [alex] Let's plan the team party! 00:10 [ali] How about we go out for lunch at the restaurant? 00:21 [sam] Good idea. 00:47 [sam] Can we go to a movie too? 01:04 [alex] Maybe golf? 01:15 [sam] We could give people an option to do one or the other. 01:29 [alex] I like this plan. Let's have a party! Summary: Sample 5c: Summarize a meeting transcript Scenario: Given a meeting transcript, summarize the main points in a bulleted list so that the list can be shared with teammates who did not attend the meeting. Model choice gpt-neox-20b was trained to recognize and handle special characters, such as the newline character, well. This model is a good choice when you want your generated text to be formatted in a specific way with special characters. Decoding Greedy. The model must return the most predictable content based on what's in the prompt; the model cannot be too creative. Stopping criteria * To make sure that the model stops generating text after one list, specify a stop sequence of two newline characters. To do that, click in the Stop sequence text box, press the Enter key twice, then click Add sequence. * Set the Max tokens parameter to 60. Set up section Paste these headers and examples into the Examples area of the Set up section: Table 4. Summarization few-shot examples Transcript: Summary: 00:00 [sam] I wanted to share an update on project X today. 00:15 [sam] Project X will be completed at the end of the week. 00:30 [erin] That's great! 00:35 [erin] I heard from customer Y today, and they agreed to buy our product. 00:45 [alex] Customer Z said they will too. 01:05 [sam] Great news, all around. - Sam shared an update that project X will be complete at the end of the week - Erin said customer Y will buy our product - And Alex said customer Z will buy our product too 00:00 [ali] The goal today is to agree on a design solution. 00:12 [alex] I think we should consider choice 1. 00:25 [ali] I agree 00:40 [erin] Choice 2 has the advantage that it will take less time. 01:03 [alex] Actually, that's a good point. 01:30 [ali] So, what should we do? 01:55 [alex] I'm good with choice 2. 02:20 [erin] Me too. 02:45 [ali] Done! - Alex suggested considering choice 1 - Erin pointed out choice two will take less time - The team agreed with choice 2 for the design solution Try section Paste this message in the Try section: 00:00 [alex] Let's plan the team party! 00:10 [ali] How about we go out for lunch at the restaurant? 00:21 [sam] Good idea. 00:47 [sam] Can we go to a movie too? 01:04 [alex] Maybe golf? 01:15 [sam] We could give people an option to do one or the other. 01:29 [alex] I like this plan. Let's have a party! Select the model and set parameters, then click Generate to see the result. 
Code generation and conversion Foundation models that can generate and convert programmatic code are great resources for developers. They can help developers to brainstorm and troubleshoot programming tasks. Sample 6a: Generate programmatic code from instructions Scenario: You want to generate code from instructions. Namely, you want to write a function in the Python programming language that returns a sequence of prime numbers that are lower than the number that is passed to the function as a variable. Model choice Models that can generate code, such as starcoder-15.5b, can generally complete this task when a sample prompt is provided. Decoding Greedy. The answer must be a valid code snippet. The model cannot be creative and make up an answer. Stopping criteria To stop the model after it returns a single code snippet, specify <end of code> as the stop sequence. The Max tokens parameter can be set to 1,000. Prompt text Paste this code snippet into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. Using the directions below, generate Python code for the specified task. Input: Write a Python function that prints 'Hello World!' string 'n' times. Output: def print_n_times(n): for i in range(n): print("Hello World!") <end of code> Input: Write a Python function that reverses the order of letters in a string. The function named 'reversed' takes the argument 'my_string', which is a string. It returns the string in reverse order. Output: The output contains Python code similar to the following snippet: def reversed(my_string): return my_string[::-1] Be sure to test the generated code to verify that it works as you expect. For example, if you run reversed("good morning"), the result is 'gninrom doog'. Note: The StarCoder model might generate code that is taken directly from its training data. As a result, generated code might require attribution. You are responsible for ensuring that any generated code that you use is properly attributed, if necessary. Sample 6b: Convert code from one programming language to another Scenario: You want to convert code from one programming language to another. Namely, you want to convert a code snippet from C++ to Python. Model choice Models that can generate code, such as starcoder-15.5b, can generally complete this task when a sample prompt is provided. Decoding Greedy. The answer must be a valid code snippet. The model cannot be creative and make up an answer. Stopping criteria To stop the model after it returns a single code snippet, specify <end of code> as the stop sequence. The Max tokens parameter can be set to 300. Prompt text Paste this code snippet into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. This prompt includes an example input and output pair. The input is C++ code and the output is the equivalent function in Python code. The C++ code snippet to be converted is included next. It is a function that counts the number of arithmetic progressions with the sum S and common difference of D, where S and D are integer values that are passed as parameters. The final part of the prompt identifies the language that you want the C++ code snippet to be converted into. Translate the following code from C++ to Python. 
C++: include "bits/stdc++.h" using namespace std; bool isPerfectSquare(long double x) { long double sr = sqrt(x); return ((sr - floor(sr)) == 0); } void checkSunnyNumber(int N) { if (isPerfectSquare(N + 1)) { cout << "Yes "; } else { cout << "No "; } } int main() { int N = 8; checkSunnyNumber(N); return 0; } Python: from math import def isPerfectSquare(x): sr = sqrt(x) return ((sr - floor(sr)) == 0) def checkSunnyNumber(N): if (isPerfectSquare(N + 1)): print("Yes") else: print("No") if __name__ == '__main__': N = 8 checkSunnyNumber(N) <end of code> C++: include <bits/stdc++.h> using namespace std; int countAPs(int S, int D) { S = S * 2; int answer = 0; for (int i = 1; i <= sqrt(S); i++) { if (S % i == 0) { if (((S / i) - D * i + D) % 2 == 0) answer++; if ((D * i - (S / i) + D) % 2 == 0) answer++; } } return answer; } int main() { int S = 12, D = 1; cout << countAPs(S, D); return 0; } Python: The output contains Python code similar to the following snippet: from math import def countAPs(S, D): S = S * 2 answer = 0 for i in range(1, int(sqrt(S)) + 1): if (S % i == 0): if (((S / i) - D * i + D) % 2 == 0): answer += 1 if ((D * i - (S / i) + D) % 2 == 0): answer += 1 return answer if __name__ == '__main__': S = 12 D = 1 print(countAPs(S, D)) The generated Python code functions the same as the C++ function included in the prompt. Test the generated Python code to verify that it works as you expect. Remember, the StarCoder model might generate code that is taken directly from its training data. As a result, generated code might require attribution. You are responsible for ensuring that any generated code that you use is properly attributed, if necessary. Dialogue Dialogue tasks are helpful in customer service scenarios, especially when a chatbot is used to guide customers through a workflow to reach a goal. Sample 7a: Converse in a dialogue Scenario: Generate dialogue output like a chatbot. Model choice Like other foundation models, granite-13b-chat can be used for multiple tasks. However, it is optimized for carrying on a dialogue. Decoding Greedy. This sample is answering general knowledge, factual questions, so we don't want creative output. Stopping criteria * A helpful feature of the model is the inclusion of a special token that is named END_KEY at the end of each response. When some generative models return a response to the input in fewer tokens than the maximum number allowed, they can repeat patterns from the input. This model prevents such repetition by incorporating a reliable stop sequence for the prompt. Add END_KEY as the stop sequence. * Set the Max tokens parameter to 200 so the model can return a complete answer. Prompt text The model expects the input to follow a specific pattern. Start the input with an instruction. For example, the instruction might read as follows: Participate in a dialogue with various people as an AI assistant. As the Assistant, you are upbeat, professional, and polite. You do your best to understand exactly what the human needs and help them to achieve their goal as best you can. You do not give false or misleading information. If you don't know an answer, you state that you don't know or aren't sure about the right answer. You prioritize caution over usefulness. You do not answer questions that are unsafe, immoral, unethical, or dangerous. 
Next, add lines to capture the question and answer pattern with the following syntax: Human: content of the question Assistant: new line for the model's answer You can replace the terms Human and Assistant with other terms. If you're using version 1, do not include any trailing white spaces after the Assistant: label, and be sure to add a new line. Paste the following prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. Participate in a dialogue with various people as an AI assistant. As the Assistant, you are upbeat, professional, and polite. You do your best to understand exactly what the human needs and help them to achieve their goal as best you can. You do not give false or misleading information. You prioritize caution over usefulness. You do not answer questions that are unsafe, immoral, unethical, or dangerous. Human: How does a bill become a law? Assistant: After the initial output is generated, continue the dialogue by asking a follow-up question. For example, if the output describes how a bill becomes a law in the United States, you can ask about how laws are made in other countries. Human: What about in Canada? Assistant: A few notes about using this sample with the model: * The prompt input outlines the chatbot scenario and describes the personality of the AI assistant. The description explains that the assistant should indicate when it doesn't know an answer. It also directs the assistant to avoid discussing unethical topics. * The assistant is able to respond to a follow-up question that relies on information from an earlier exchange in the same dialogue. * The model expects the input to follow a specific pattern. * The generated response from the model is clearly indicated by the keyword END_KEY. You can use this keyword as a stop sequence to help the model generate succinct responses. Sample 7b: Converse in a dialogue Scenario: Generate dialogue output like a chatbot. Model choice Like other foundation models, Llama 2 (in both the 70 billion and 13 billion sizes) can be used for multiple tasks. But both Llama 2 models are optimized for dialogue use cases. The llama-2-70b-chat and llama-2-13b-chat are the only models in watsonx.ai that are fine-tuned for the [INST]<<SYS>><</SYS>>[/INST] prompt format. For more information about this prompt format, see [How to prompt Llama 2](https://huggingface.co/blog/llama2how-to-prompt-llama-2). Decoding Greedy. This sample is answering general knowledge, factual questions, so we don't want creative output. Stopping criteria Set the Max tokens parameter to 100. The template for Llama 2 uses an end-of-string (EOS) token (</s>) to delimit the end of the agent's response. The model output stops generating text when the </s> EOS token is encountered. Prompt text Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result. [INST] <<SYS>> You are a helpful assistant. Begin each response with the phrase "Dear user, ". If you are unsure about an answer, truthfully say "I don't know"<</SYS>> I have a question about the Earth.[/INST] Dear user, go on. [INST] How many moons are there? [/INST] After the initial output is generated, continue the dialogue to ask about more planets, wrapping your user messages in the same [INST] structure. You don't need to add an <s> token to the start of follow-up questions. 
The output looks something like this: [INST] <<SYS>> You are a helpful assistant. Begin each response with the phrase "Dear user, ". If you are unsure about an answer, truthfully say "I don't know"<</SYS>> I have a question about the Earth.[/INST] Dear user, go on. [INST] How many moons are there? [/INST] Dear user, the Earth has one natural satellite, commonly referred to as the Moon. [INST] What about Saturn? [/INST] Dear user, Saturn has a total of 62 confirmed moons, as well as numerous smaller moonlets and ring particles. The largest moon of Saturn is Titan, which is the second-largest moon in the solar system, after Jupiter's moon Ganymede. [INST] And Mars? [/INST] Dear user, Mars has two small natural satellites, Phobos and Deimos. Both moons are thought to be captured asteroids, and are irregularly shaped and very small, with diameters of about 22 kilometers and 12 kilometers respectively. Notice a few important points about the output that the model generates as you go: * The so-called system instruction, in the <<SYS>><</SYS>> tags, continues to influence the output at each dialogue turn without having to repeat the system instruction. In other words, the assistant responses continue to be prepended with "Dear user, ". * In true dialogue fashion, the assistant response to the user input "How many moons are there?" takes into account the previous user input "I have a question about the Earth." and returns the number of moons orbiting Earth. * Also in proper dialogue form, the assistant responses continue to follow the topic of the conversation, which is the number of moons. (Otherwise, the generated output to the vague user message "And Mars?" could wander off in any direction.) * Caution: Newline (carriage-return) characters especially, and spaces to a lesser extent, in the prompt text can have a dramatic impact on the output generated. * When you use Llama 2 for chat use cases, follow the recommended prompt template format as closely as possible. Do not use the [INST]<<SYS>><</SYS>>[/INST] prompt format when you use Llama 2 for any other tasks besides chat. Parent topic:[Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html)
# Sample foundation model prompts for common tasks # Try these samples to learn how different prompts can guide foundation models to do common tasks\. ## How to use this topic ## Explore the sample prompts in this topic: <!-- <ul> --> * Copy and paste the prompt text and input parameter values into the Prompt Lab in IBM watsonx\.ai * See what text is generated\. * See how different models generate different output\. * Change the prompt text and parameters to see how results vary\. <!-- </ul> --> There is no one right way to prompt foundation models\. But patterns have been found, in academia and industry, that work fairly reliably\. Use the samples in this topic to build your skills and your intuition about prompt engineering through experimentation\. This video provides a visual method to learn the concepts and tasks in this documentation\. **Video chapters** \[ 0:11 \] Introduction to prompts and Prompt Lab \[ 0:33 \] Key concept: Everything is text completion \[ 1:34 \] Useful prompt pattern: Few\-shot prompt \[ 1:58 \] Stopping criteria: Max tokens, stop sequences \[ 3:32 \] Key concept: Fine\-tuning \[ 4:32 \] Useful prompt pattern: Zero\-shot prompt \[ 5:32 \] Key concept: Be flexible, try different prompts \[ 6:14 \] Next steps: Experiment with sample prompts ## Samples overview ## You can find samples that prompt foundation models to generate output that supports the following tasks: <!-- <ul> --> * [Classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=en#classification) * [Extraction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=en#extraction) * [Generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=en#generation) * [Question answering (QA)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=en#qa) * [Summarization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=en#summarization) * [Code generation and conversion](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=en#code) * [Dialogue](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=en#dialogue) <!-- </ul> --> The following table shows the foundation models that are used in task\-specific samples\. A checkmark indicates that the model is used in a sample for the associated task\. <!-- <table> --> Table 1\. Models used in samples for certain tasks | Model | Classification | Extraction | Generation | QA | Summarization | Coding | Dialogue | | -------------------------- | -------------- | ---------- | ---------- | -- | ------------- | ------ | -------- | | flan\-t5\-xxl\-11b | ✓ | | | ✓ | | | | | flan\-ul2\-20b | ✓ | ✓ | | ✓ | | | | | gpt\-neox\-20b | ✓ | | ✓ | | ✓ | | | | granite\-13b\-chat\-v1 | | | | | | | ✓ | | granite\-13b\-instruct\-v1 | | | ✓ | ✓ | | | | | granite\-13b\-instruct\-v2 | | ✓ | ✓ | ✓ | | | | | llama\-2 chat | | | | | | | ✓ | | mpt\-7b\-instruct2 | ✓ | | | | ✓ | | | | mt0\-xxl\-13b | ✓ | | | ✓ | | | | | starcoder\-15\.5b | | | | | | ✓ | | <!-- </table ""> --> The following table summarizes the available sample prompts\. <!-- <table> --> Table 2\. 
List of sample prompts | Scenario | Prompt editor | Prompt format | Model | Decoding | Notes | | ------------------------------------------------------------------------------------------------------------------------------------ | ------------- | ---------------- | -------------------------------------------------------------------- | -------- | ------------------------------------------------------------------------------------------------------------------- | | [Sample 1a: Classify a message](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=en#sample1a) | Freeform | Zero\-shot | • mt0\-xxl\-13b <br>• flan\-t5\-xxl\-11b <br>• flan\-ul2\-20b | Greedy | • Uses the class names as stop sequences to stop the model after it prints the class name | | [Sample 1b: Classify a message](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=en#sample1b) | Freeform | Few\-shot | • gpt\-neox\-20b <br>• mpt\-7b\-instruct | Greedy | • Uses the class names as stop sequences | | [Sample 1c: Classify a message](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=en#sample1c) | Structured | Few\-shot | • gpt\-neox\-20b <br>• mpt\-7b\-instruct | Greedy | • Uses the class names as stop sequences | | [Sample 2a: Extract details from a complaint](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=en#sample2a) | Freeform | Zero\-shot | • flan\-ul2\-20b <br>• granite\-13b\-instruct\-v2 | Greedy | | | [Sample 3a: Generate a numbered list on a particular theme](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=en#sample3a) | Freeform | Few\-shot | • gpt\-neox\-20b | Sampling | • Generates formatted output <br>• Uses two newline characters as a stop sequence to stop the model after one list | | [Sample 3b: Generate a numbered list on a particular theme](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=en#sample3b) | Structured | Few\-shot | • gpt\-neox\-20b | Sampling | • Generates formatted output\. <br>• Uses two newline characters as a stop sequence | | [Sample 3c: Generate a numbered list on a particular theme](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=en#sample3c) | Freeform | Zero\-shot | • granite\-13b\-instruct\-v1 <br>• granite\-13b\-instruct\-v2 | Greedy | • Generates formatted output | | [Sample 4a: Answer a question based on an article](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=en#sample4a) | Freeform | Zero\-shot | • mt0\-xxl\-13b <br>• flan\-t5\-xxl\-11b <br>• flan\-ul2\-20b | Greedy | • Uses a period "\." as a stop sequence to cause the model to return only a single sentence | | [Sample 4b: Answer a question based on an article](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=en#sample4b) | Structured | Zero\-shot | • mt0\-xxl\-13b <br>• flan\-t5\-xxl\-11b <br>• flan\-ul2\-20b | Greedy | • Uses a period "\." 
as a stop sequence <br>• Generates results for multiple inputs at once | | [Sample 4c: Answer a question based on a document](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=en#sample4c) | Freeform | Zero\-shot | • granite\-13b\-instruct\-v2 | Greedy | | | [Sample 4d: Answer general knowledge questions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=en#sample4d) | Freeform | Zero\-shot | • granite\-13b\-instruct\-v1 | Greedy | | | [Sample 5a: Summarize a meeting transcript](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=en#sample5a) | Freeform | Zero\-shot | • flan\-t5\-xxl\-11b <br>• flan\-ul2\-20b <br>• mpt\-7b\-instruct2 | Greedy | | | [Sample 5b: Summarize a meeting transcript](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=en#sample5b) | Freeform | Few\-shot | • gpt\-neox\-20b | Greedy | | | [Sample 5c: Summarize a meeting transcript](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=en#sample5c) | Structured | Few\-shot | • gpt\-neox\-20b | Greedy | • Generates formatted output <br>• Uses two newline characters as a stop sequence to stop the model after one list | | [Sample 6a: Generate programmatic code from instructions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=en#sample6a) | Freeform | Few\-shot | • starcoder\-15\.5b | Greedy | • Generates programmatic code as output <br>• Uses <end of code> as a stop sequence | | [Sample 6b: Convert code from one programming language to another](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=en#sample6b) | Freeform | Few\-shot | • starcoder\-15\.5b | Greedy | • Generates programmatic code as output <br>• Uses <end of code> as a stop sequence | | [Sample 7a: Converse in a dialogue](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=en#sample7a) | Freeform | Custom structure | • granite\-13b\-chat\-v1 | Greedy | • Generates dialogue output like a chatbot <br>• Uses a special token that is named END\_KEY as a stop sequence | | [Sample 7b: Converse in a dialogue](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=en#sample7b) | Freeform | Custom structure | • llama\-2 chat | Greedy | • Generates dialogue output like a chatbot <br>• Uses a model\-specific prompt format | <!-- </table ""> --> ## Classification ## Classification is useful for predicting data in distinct categories\. Classifications can be binary, with two classes of data, or multi\-class\. A classification task is useful for categorizing information, such as customer feedback, so that you can manage or act on the information more efficiently\. ### Sample 1a: Classify a message ### Scenario: Given a message that is submitted to a customer\-support chatbot for a cloud software company, classify the customer's message as either a question or a problem\. Depending on the class assignment, the chat is routed to the correct support team for the issue type\. **Model choice** Models that are instruction\-tuned can generally complete this task with this sample prompt\. 
Suggestions: mt0\-xxl\-13b, flan\-t5\-xxl\-11b, or flan\-ul2\-20b **Decoding** Greedy\. The model must return one of the specified class names; it cannot be creative and make up new classes\. **Stopping criteria** <!-- <ul> --> * Specify two stop sequences: "Question" and "Problem"\. After the model generates either of those words, it should stop\. * With such short output, the Max tokens parameter can be set to 5\. <!-- </ul> --> **Prompt text** Paste this zero\-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click **Generate** to see the result\. Classify this customer message into one of two classes: Question, Problem. Class name: Question Description: The customer is asking a technical question or a how-to question about our products or services. Class name: Problem Description: The customer is describing a problem they are having. They might say they are trying something, but it's not working. They might say they are getting an error or unexpected results. Message: I'm having trouble registering for a new account. Class name: ### Sample 1b: Classify a message ### Scenario: Given a message that is submitted to a customer\-support chatbot for a cloud software company, classify the customer's message as either a question or a problem description\. Based on the class type, the chat can be routed to the correct support team\. **Model choice** With few\-shot examples of both classes, most models can complete this task well, including: gpt\-neox\-20b and mpt\-7b\-instruct\. **Decoding** Greedy\. The model must return one of the specified class names; it cannot be creative and make up new classes\. **Stopping criteria** <!-- <ul> --> * Specify two stop sequences: "Question" and "Problem"\. After the model generates either of those words, it should stop\. * With such short output, the Max tokens parameter can be set to 5\. <!-- </ul> --> **Prompt text** Paste this few\-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click **Generate** to see the result\. Message: When I try to log in, I get an error. Class name: Problem Message: Where can I find the plan prices? Class name: Question Message: What is the difference between trial and paygo? Class name: Question Message: The registration page crashed, and now I can't create a new account. Class name: Problem Message: What regions are supported? Class name: Question Message: I can't remember my password. Class name: Problem Message: I'm having trouble registering for a new account. ### Sample 1c: Classify a message ### Scenario: Given a message that is submitted to a customer\-support chatbot for a cloud software company, classify the customer's message as either a question or a problem description\. Based on the class type, the chat can be routed to the correct support team\. **Model choice** With few\-shot examples of both classes, most models can complete this task well, including: gpt\-neox\-20b and mpt\-7b\-instruct\. **Decoding** Greedy\. The model must return one of the specified class names, not be creative and make up new classes\. **Stopping criteria** <!-- <ul> --> * Specify two stop sequences: "Question" and "Problem"\. After the model generates either of those words, it should stop\. * With such short output, the Max tokens parameter can be set to 5\. <!-- </ul> --> **Set up section** Paste these headers and examples into the **Examples** area of the **Set up** section: <!-- <table> --> Table 2\. 
Classification few\-shot examples | **Message:** | **Class name:** | | ---------------------------------------------------------------------- | --------------- | | `When I try to log in, I get an error.` | `Problem` | | `Where can I find the plan prices?` | `Question` | | `What is the difference between trial and paygo?` | `Question` | | `The registration page crashed, and now I can't create a new account.` | `Problem` | | `What regions are supported?` | `Question` | | `I can't remember my password.` | `Problem` | <!-- </table ""> --> **Try section** Paste this message in the **Try** section: I'm having trouble registering for a new account. Select the model and set parameters, then click **Generate** to see the result\. ## Extracting details ## Extraction tasks can help you to find key terms or mentions in data based on the semantic meaning of words rather than simple text matches\. ### Sample 2a: Extract details from a complaint ### Scenario: Given a complaint from a customer who had trouble booking a flight on a reservation website, identify the factors that contributed to this customer's unsatisfactory experience\. **Model choices** flan\-ul2\-20b, granite\-13b\-instruct\-v2 **Decoding** Greedy\. We need the model to return words that are in the input; the model cannot be creative and make up new words\. **Stopping criteria** The list of extracted factors will not be long, so set the Max tokens parameter to 50\. **Prompt text** Paste this zero\-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click **Generate** to see the result\. From the following customer complaint, extract all the factors that caused the customer to be unhappy. Customer complaint: I just tried to book a flight on your incredibly slow website. All the times and prices were confusing. I liked being able to compare the amenities in economy with business class side by side. But I never got to reserve a seat because I didn't understand the seat map. Next time, I'll use a travel agent! Numbered list of all the factors that caused the customer to be unhappy: ## Generating natural language ## Generation tasks are what large language models do best\. Your prompts can help guide the model to generate useful language\. ### Sample 3a: Generate a numbered list on a particular theme ### Scenario: Generate a numbered list on a particular theme\. **Model choice** gpt\-neox\-20b was trained to recognize and handle special characters, such as the newline character, well\. This model is a good choice when you want your generated text to be formatted a specific way with special characters\. **Decoding** Sampling\. This is a creative task\. Set the following parameters: <!-- <ul> --> * Temperature: 0\.7 * Top P: 1 * Top K: 50 * Random seed: 9045 (To get different output each time you click **Generate**, specify a different value for the Random seed parameter or clear the parameter\.) <!-- </ul> --> **Stopping criteria** <!-- <ul> --> * To make sure the model stops generating text after one list, specify a stop sequence of two newline characters\. To do that, click the **Stop sequence** text box, press the Enter key twice, then click **Add sequence**\. * The list will not be very long, so set the Max tokens parameter to 50\. <!-- </ul> --> **Prompt text** Paste this few\-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click **Generate** to see the result\. What are 4 types of dog breed? 1. Poodle 2. Dalmatian 3. 
Golden retriever 4. Bulldog What are 3 ways to incorporate exercise into your day? 1. Go for a walk at lunch 2. Take the stairs instead of the elevator 3. Park farther away from your destination What are 4 kinds of vegetable? 1. Spinach 2. Carrots 3. Broccoli 4. Cauliflower What are the 3 primary colors? 1. Red 2. Green 3. Blue What are 3 ingredients that are good on pizza? ### Sample 3b: Generate a numbered list on a particular theme ### Scenario: Generate a numbered list on a particular theme\. **Model choice** gpt\-neox\-20b was trained to recognize and handle special characters, such as the newline character, well\. This model is a good choice when you want your generated text to be formatted in a specific way with special characters\. **Decoding** Sampling\. This scenario is a creative one\. Set the following parameters: <!-- <ul> --> * Temperature: 0\.7 * Top P: 1 * Top K: 50 * Random seed: 9045 (To generate different results, specify a different value for the Random seed parameter or clear the parameter\.) <!-- </ul> --> **Stopping criteria** <!-- <ul> --> * To make sure that the model stops generating text after one list, specify a stop sequence of two newline characters\. To do that, click in the **Stop sequence** text box, press the Enter key twice, then click **Add sequence**\. * The list will not be long, so set the Max tokens parameter to 50\. <!-- </ul> --> **Set up section** Paste these headers and examples into the **Examples** area of the **Set up** section: <!-- <table> --> Table 3\. Generation few\-shot examples | **`Input:`** | **`Output:`** | | --- | --- | | `What are 4 types of dog breed?` | `1. Poodle 2. Dalmatian 3. Golden retriever 4. Bulldog` | | `What are 3 ways to incorporate exercise into your day?` | `1. Go for a walk at lunch 2. Take the stairs instead of the elevator 3. Park farther away from your destination` | | `What are 4 kinds of vegetable?` | `1. Spinach 2. Carrots 3. Broccoli 4. Cauliflower` | | `What are the 3 primary colors?` | `1. Red 2. Green 3. Blue` | <!-- </table ""> --> **Try section** Paste this input in the **Try** section: What are 3 ingredients that are good on pizza? Select the model and set parameters, then click **Generate** to see the result\. ### Sample 3c: Generate a numbered list on a particular theme ### Scenario: Ask the model to play devil's advocate\. Describe a potential action and ask the model to list possible downsides or risks that are associated with the action\. **Model choice** Similar to gpt\-neox\-20b, the granite\-13b\-instruct model was trained to recognize and handle special characters, such as the newline character, well\. The granite\-13b\-instruct\-v2 or granite\-13b\-instruct\-v1 model is a good choice when you want your generated text to be formatted in a specific way with special characters\. **Decoding** Greedy\. The model must return the most predictable content based on what's in the prompt; the model cannot be too creative\. **Stopping criteria** The list of reasons might run several sentences, so set the Max tokens parameter to 60\. **Prompt text** Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click **Generate** to see the result\. You are playing the role of devil's advocate. Argue against the proposed plans. 
List 3 detailed, unique, compelling reasons why moving forward with the plan would be a bad choice. Consider all types of risks. Plan we are considering: Extend our store hours. Three problems with this plan are: 1. We'll have to pay more for staffing. 2. Risk of theft increases late at night. 3. Clerks might not want to work later hours. Plan we are considering: Open a second location for our business. Three problems with this plan are: 1. Managing two locations will be more than twice as time-consuming as managing just one. 2. Creating a new location doesn't guarantee twice as many customers. 3. A new location means added real estate, utility, and personnel expenses. Plan we are considering: Refreshing our brand image by creating a new logo. Three problems with this plan are: ## Question answering ## Question\-answering tasks are useful in help systems and other scenarios where frequently asked or more nuanced questions can be answered from existing content\. To help the model return factual answers, implement the retrieval\-augmented generation pattern\. For more information, see [Retrieval\-augmented generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-rag.html)\. ### Sample 4a: Answer a question based on an article ### Scenario: The website for an online seed catalog has many articles to help customers plan their garden and ultimately select which seeds to purchase\. A new widget is being added to the website to answer customer questions based on the contents of the article the customer is viewing\. Given a question that is related to an article, answer the question based on the article\. **Model choice** Models that are instruction\-tuned, such as mt0\-xxl\-13b, flan\-t5\-xxl\-11b, or flan\-ul2\-20b, can generally complete this task with this sample prompt\. **Decoding** Greedy\. The answers must be grounded in the facts in the article, and if there is no good answer in the article, the model should not be creative and make up an answer\. **Stopping criteria** To cause the model to return a one\-sentence answer, specify a period "\." as a stop sequence\. The Max tokens parameter can be set to 50\. **Prompt text** Paste this zero\-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click **Generate** to see the result\. Article: ### Tomatoes are one of the most popular plants for vegetable gardens. Tip for success: If you select varieties that are resistant to disease and pests, growing tomatoes can be quite easy. For experienced gardeners looking for a challenge, there are endless heirloom and specialty varieties to cultivate. Tomato plants come in a range of sizes. There are varieties that stay very small, less than 12 inches, and grow well in a pot or hanging basket on a balcony or patio. Some grow into bushes that are a few feet high and wide, and can be grown in larger containers. Other varieties grow into huge bushes that are several feet wide and high in a planter or garden bed. Still other varieties grow as long vines, six feet or more, and love to climb trellises. Tomato plants do best in full sun. You need to water tomatoes deeply and often. Using mulch prevents soil-borne disease from splashing up onto the fruit when you water. Pruning suckers and even pinching the tips will encourage the plant to put all its energy into producing fruit. ### Answer the following question using only information from the article. Answer in a complete sentence, with proper capitalization and punctuation. 
If there is no good answer in the article, say "I don't know". Question: Why should you use mulch when growing tomatoes? Answer: You can experiment with asking other questions too, such as: <!-- <ul> --> * How large do tomato plants get? * Do tomato plants prefer shade or sun? * Is it easy to grow tomatoes? <!-- </ul> --> Try out\-of\-scope questions too, such as: <!-- <ul> --> * How do you grow cucumbers? <!-- </ul> --> ### Sample 4b: Answer a question based on an article ### Scenario: The website for an online seed catalog has many articles to help customers plan their garden and ultimately select which seeds to purchase\. A new widget is being added to the website to answer customer questions based on the contents of the article the customer is viewing\. Given a question related to a particular article, answer the question based on the article\. **Model choice** Models that are instruction\-tuned, such as mt0\-xxl\-13b, flan\-t5\-xxl\-11b, or flan\-ul2\-20b, can generally complete this task with this sample prompt\. **Decoding** Greedy\. The answers must be grounded in the facts in the article, and if there is no good answer in the article, the model should not be creative and make up an answer\. **Stopping criteria** To cause the model to return a one\-sentence answer, specify a period "\." as a stop sequence\. The Max tokens parameter can be set to 50\. **Set up section** Paste this text into the **Instruction** area of the **Set up** section: Article: ### Tomatoes are one of the most popular plants for vegetable gardens. Tip for success: If you select varieties that are resistant to disease and pests, growing tomatoes can be quite easy. For experienced gardeners looking for a challenge, there are endless heirloom and specialty varieties to cultivate. Tomato plants come in a range of sizes. There are varieties that stay very small, less than 12 inches, and grow well in a pot or hanging basket on a balcony or patio. Some grow into bushes that are a few feet high and wide, and can be grown in larger containers. Other varieties grow into huge bushes that are several feet wide and high in a planter or garden bed. Still other varieties grow as long vines, six feet or more, and love to climb trellises. Tomato plants do best in full sun. You need to water tomatoes deeply and often. Using mulch prevents soil-borne disease from splashing up onto the fruit when you water. Pruning suckers and even pinching the tips will encourage the plant to put all its energy into producing fruit. ### Answer the following question using only information from the article. Answer in a complete sentence, with proper capitalization and punctuation. If there is no good answer in the article, say "I don't know". **Try section** In the **Try** section, add an extra test row so you can paste each of these two questions in a separate row: Why should you use mulch when growing tomatoes? How do you grow cucumbers? Select the model and set parameters, then click **Generate** to see two results\. ### Sample 4c: Answer a question based on a document ### Scenario: You are creating a chatbot that can answer user questions\. When a user asks a question, you want the agent to answer the question with information from a specific document\. **Model choice** Models that are instruction\-tuned, such as granite\-13b\-instruct\-v2, can complete the task with this sample prompt\. **Decoding** Greedy\. 
The answers must be grounded in the facts in the document, and if there is no good answer in the document, the model should not be creative and make up an answer\. **Stopping criteria** Use a Max tokens parameter of 50\. **Prompt text** Paste this zero\-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click **Generate** to see the result\. Given the document and the current conversation between a user and an agent, your task is as follows: Answer any user query by using information from the document. The response should be detailed. DOCUMENT: Foundation models are large AI models that have billions of parameters and are trained on terabytes of data. Foundation models can do various tasks, including text, code, or image generation, classification, conversation, and more. Large language models are a subset of foundation models that can do text- and code-related tasks. DIALOG: USER: What are foundation models? ### Sample 4d: Answer general knowledge questions ### Scenario: Answer general questions about finance\. **Model choice** The granite\-13b\-instruct\-v1 model can be used for multiple tasks, including text generation, summarization, question answering, classification, and extraction\. **Decoding** Greedy\. This sample is answering questions, so we don't want creative output\. **Stopping criteria** Set the **Max tokens parameter** to 200 so the model can return a complete answer\. **Prompt text** The model was tuned for question\-answering with examples in the following format: `<|user|>` *content of the question* `<|assistant|>` *new line for the model's answer* You can use the exact syntax `<|user|>` and `<|assistant|>` in the lines before and after the question or you can replace the values with equivalent terms, such as `User` and `Assistant`\. If you're using version 1, do not include any trailing white spaces after the `<|assistant|>` label, and be sure to add a new line\. Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click **Generate** to see the result\. <|user|> Tell me about interest rates <|assistant|> After the model generates an answer, you can ask a follow\-up question\. The model uses information from the previous question when it generates a response\. <|user|> Who sets it? <|assistant|> The model retains information from a previous question when it answers a follow\-up question, but it is not optimized to support an extended dialogue\. Note: When you ask a follow\-up question, the previous question is submitted again, which adds to the number of tokens that are used\. ## Summarization ## Summarization tasks save you time by condensing large amounts of text into a few key pieces of information\. ### Sample 5a: Summarize a meeting transcript ### Scenario: Given a meeting transcript, summarize the main points as meeting notes so those notes can be shared with teammates who did not attend the meeting\. **Model choice** Models that are instruction\-tuned can generally complete this task with this sample prompt\. Suggestions: flan\-t5\-xxl\-11b, flan\-ul2\-20b, or mpt\-7b\-instruct2\. **Decoding** Greedy\. The model must return the most predictable content based on what's in the prompt; the model cannot be too creative\. **Stopping criteria** The summary might run several sentences, so set the Max tokens parameter to 60\. 
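If you prefer to run this sample programmatically instead of in Prompt Lab, the same decoding method, Max tokens value, and any stop sequences can be passed as generation parameters through the Python library. The following is a minimal sketch that assumes the ibm-watson-machine-learning foundation models API; the credentials, project ID, and prompt placeholder are illustrative only, and the exact class and parameter names can vary by library version.

```python
# Sketch only: parameter and class names follow the ibm-watson-machine-learning
# foundation models API and might differ in your library version.
from ibm_watson_machine_learning.foundation_models import Model
from ibm_watson_machine_learning.metanames import GenTextParamsMetaNames as GenParams

params = {
    GenParams.DECODING_METHOD: "greedy",  # same as choosing Greedy in Prompt Lab
    GenParams.MAX_NEW_TOKENS: 60,         # the Max tokens setting for this sample
    GenParams.STOP_SEQUENCES: ["\n\n"],   # optional; several samples in this topic use stop sequences
}

model = Model(
    model_id="google/flan-ul2",           # one of the models suggested for this sample
    params=params,
    credentials={"apikey": "YOUR_API_KEY", "url": "https://us-south.ml.cloud.ibm.com"},  # placeholders
    project_id="YOUR_PROJECT_ID",         # placeholder
)

prompt = "Summarize the following transcript.\n\nTranscript:\n..."  # paste the prompt text from the next section
print(model.generate_text(prompt=prompt))
```

The same pattern applies to the other samples in this topic; only the model ID, parameter values, and prompt text change.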
**Prompt text** Paste this zero\-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click **Generate** to see the result\. Summarize the following transcript. Transcript: 00:00 [alex] Let's plan the team party! 00:10 [ali] How about we go out for lunch at the restaurant? 00:21 [sam] Good idea. 00:47 [sam] Can we go to a movie too? 01:04 [alex] Maybe golf? 01:15 [sam] We could give people an option to do one or the other. 01:29 [alex] I like this plan. Let's have a party! Summary: ### Sample 5b: Summarize a meeting transcript ### Scenario: Given a meeting transcript, summarize the main points as meeting notes so those notes can be shared with teammates who did not attend the meeting\. **Model choice** With few\-shot examples, most models can complete this task well\. Try: gpt\-neox\-20b\. **Decoding** Greedy\. The model must return the most predictable content based on what's in the prompt, not be too creative\. **Stopping criteria** <!-- <ul> --> * To make sure that the model stops generating text after the summary, specify a stop sequence of two newline characters\. To do that, click in the **Stop sequence** text box, press the Enter key twice, then click **Add sequence**\. * Set the Max tokens parameter to 60\. <!-- </ul> --> **Prompt text** Paste this few\-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click **Generate** to see the result\. Transcript: 00:00 [sam] I wanted to share an update on project X today. 00:15 [sam] Project X will be completed at the end of the week. 00:30 [erin] That's great! 00:35 [erin] I heard from customer Y today, and they agreed to buy our product. 00:45 [alex] Customer Z said they will too. 01:05 [sam] Great news, all around. Summary: Sam shared an update that project X will be complete at the end of the week. Erin said customer Y will buy our product. And Alex said customer Z will buy our product too. Transcript: 00:00 [ali] The goal today is to agree on a design solution. 00:12 [alex] I think we should consider choice 1. 00:25 [ali] I agree 00:40 [erin] Choice 2 has the advantage that it will take less time. 01:03 [alex] Actually, that's a good point. 01:30 [ali] So, what should we do? 01:55 [alex] I'm good with choice 2. 02:20 [erin] Me too. 02:45 [ali] Done! Summary: Alex suggested considering choice 1. Erin pointed out choice two will take less time. The team agreed with choice 2 for the design solution. Transcript: 00:00 [alex] Let's plan the team party! 00:10 [ali] How about we go out for lunch at the restaurant? 00:21 [sam] Good idea. 00:47 [sam] Can we go to a movie too? 01:04 [alex] Maybe golf? 01:15 [sam] We could give people an option to do one or the other. 01:29 [alex] I like this plan. Let's have a party! Summary: ### Sample 5c: Summarize a meeting transcript ### Scenario: Given a meeting transcript, summarize the main points in a bulleted list so that the list can be shared with teammates who did not attend the meeting\. **Model choice** gpt\-neox\-20b was trained to recognize and handle special characters, such as the newline character, well\. This model is a good choice when you want your generated text to be formatted in a specific way with special characters\. **Decoding** Greedy\. The model must return the most predictable content based on what's in the prompt; the model cannot be too creative\. 
**Stopping criteria** <!-- <ul> --> * To make sure that the model stops generating text after one list, specify a stop sequence of two newline characters\. To do that, click in the **Stop sequence** text box, press the Enter key twice, then click **Add sequence**\. * Set the Max tokens parameter to 60\. <!-- </ul> --> **Set up section** Paste these headers and examples into the **Examples** area of the **Set up** section: <!-- <table> --> Table 4\. Summarization few\-shot examples | **`Transcript:`** | **`Summary:`** | | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `00:00 [sam] I wanted to share an update on project X today. 00:15 [sam] Project X will be completed at the end of the week. 00:30 [erin] That's great! 00:35 [erin] I heard from customer Y today, and they agreed to buy our product. 00:45 [alex] Customer Z said they will too. 01:05 [sam] Great news, all around.` | `- Sam shared an update that project X will be complete at the end of the week - Erin said customer Y will buy our product - And Alex said customer Z will buy our product too` | | `00:00 [ali] The goal today is to agree on a design solution. 00:12 [alex] I think we should consider choice 1. 00:25 [ali] I agree 00:40 [erin] Choice 2 has the advantage that it will take less time. 01:03 [alex] Actually, that's a good point. 01:30 [ali] So, what should we do? 01:55 [alex] I'm good with choice 2. 02:20 [erin] Me too. 02:45 [ali] Done!` | `- Alex suggested considering choice 1 - Erin pointed out choice two will take less time - The team agreed with choice 2 for the design solution` | <!-- </table ""> --> **Try section** Paste this message in the **Try** section: 00:00 [alex] Let's plan the team party! 00:10 [ali] How about we go out for lunch at the restaurant? 00:21 [sam] Good idea. 00:47 [sam] Can we go to a movie too? 01:04 [alex] Maybe golf? 01:15 [sam] We could give people an option to do one or the other. 01:29 [alex] I like this plan. Let's have a party! Select the model and set parameters, then click **Generate** to see the result\. ## Code generation and conversion ## Foundation models that can generate and convert programmatic code are great resources for developers\. They can help developers to brainstorm and troubleshoot programming tasks\. ### Sample 6a: Generate programmatic code from instructions ### Scenario: You want to generate code from instructions\. Namely, you want to write a function in the Python programming language that returns a sequence of prime numbers that are lower than the number that is passed to the function as a variable\. **Model choice** Models that can generate code, such as starcoder\-15\.5b, can generally complete this task when a sample prompt is provided\. **Decoding** Greedy\. The answer must be a valid code snippet\. The model cannot be creative and make up an answer\. **Stopping criteria** To stop the model after it returns a single code snippet, specify `<end of code>` as the stop sequence\. The Max tokens parameter can be set to 1,000\. 
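Prompt Lab applies the stop sequence for you. If you collect raw generated text in your own code instead, you might want to trim the output at the same marker. The following helper is an illustrative sketch, not part of any watsonx.ai library; the function name and sample output are made up for this example.

```python
def truncate_at_stop_sequences(generated_text, stop_sequences):
    """Return the text that precedes the first occurrence of any stop sequence."""
    cut = len(generated_text)
    for stop in stop_sequences:
        index = generated_text.find(stop)
        if index != -1:
            cut = min(cut, index)
    return generated_text[:cut].rstrip()

# Keep only the first code snippet that the model returns.
raw_output = 'def print_n_times(n):\n    for i in range(n):\n        print("Hello World!")\n<end of code>\nInput: ...'
print(truncate_at_stop_sequences(raw_output, ["<end of code>"]))
```

Trimming on the client side also keeps the `<end of code>` marker out of any text that you pass to downstream steps.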
**Prompt text** Paste this code snippet into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click **Generate** to see the result\. Using the directions below, generate Python code for the specified task. Input: # Write a Python function that prints 'Hello World!' string 'n' times. Output: def print_n_times(n): for i in range(n): print("Hello World!") <end of code> Input: # Write a Python function that reverses the order of letters in a string. # The function named 'reversed' takes the argument 'my_string', which is a string. It returns the string in reverse order. Output: The output contains Python code similar to the following snippet: def reversed(my_string): return my_string[::-1] Be sure to test the generated code to verify that it works as you expect\. For example, if you run `reversed("good morning")`, the result is `'gninrom doog'`\. Note: The StarCoder model might generate code that is taken directly from its training data\. As a result, generated code might require attribution\. You are responsible for ensuring that any generated code that you use is properly attributed, if necessary\. ### Sample 6b: Convert code from one programming language to another ### Scenario: You want to convert code from one programming language to another\. Namely, you want to convert a code snippet from C\+\+ to Python\. **Model choice** Models that can generate code, such as starcoder\-15\.5b, can generally complete this task when a sample prompt is provided\. **Decoding** Greedy\. The answer must be a valid code snippet\. The model cannot be creative and make up an answer\. **Stopping criteria** To stop the model after it returns a single code snippet, specify `<end of code>` as the stop sequence\. The Max tokens parameter can be set to 300\. **Prompt text** Paste this code snippet into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click **Generate** to see the result\. This prompt includes an example input and output pair\. The input is C\+\+ code and the output is the equivalent function in Python code\. The C\+\+ code snippet to be converted is included next\. It is a function that counts the number of arithmetic progressions with the sum `S` and common difference of `D`, where `S` and `D` are integer values that are passed as parameters\. The final part of the prompt identifies the language that you want the C\+\+ code snippet to be converted into\. Translate the following code from C++ to Python. 
C++: #include "bits/stdc++.h" using namespace std; bool isPerfectSquare(long double x) { long double sr = sqrt(x); return ((sr - floor(sr)) == 0); } void checkSunnyNumber(int N) { if (isPerfectSquare(N + 1)) { cout << "Yes "; } else { cout << "No "; } } int main() { int N = 8; checkSunnyNumber(N); return 0; } Python: from math import * def isPerfectSquare(x): sr = sqrt(x) return ((sr - floor(sr)) == 0) def checkSunnyNumber(N): if (isPerfectSquare(N + 1)): print("Yes") else: print("No") if __name__ == '__main__': N = 8 checkSunnyNumber(N) <end of code> C++: #include <bits/stdc++.h> using namespace std; int countAPs(int S, int D) { S = S * 2; int answer = 0; for (int i = 1; i <= sqrt(S); i++) { if (S % i == 0) { if (((S / i) - D * i + D) % 2 == 0) answer++; if ((D * i - (S / i) + D) % 2 == 0) answer++; } } return answer; } int main() { int S = 12, D = 1; cout << countAPs(S, D); return 0; } Python: The output contains Python code similar to the following snippet: from math import * def countAPs(S, D): S = S * 2 answer = 0 for i in range(1, int(sqrt(S)) + 1): if (S % i == 0): if (((S / i) - D * i + D) % 2 == 0): answer += 1 if ((D * i - (S / i) + D) % 2 == 0): answer += 1 return answer if __name__ == '__main__': S = 12 D = 1 print(countAPs(S, D)) The generated Python code functions the same as the C\+\+ function included in the prompt\. Test the generated Python code to verify that it works as you expect\. Remember, the StarCoder model might generate code that is taken directly from its training data\. As a result, generated code might require attribution\. You are responsible for ensuring that any generated code that you use is properly attributed, if necessary\. ## Dialogue ## Dialogue tasks are helpful in customer service scenarios, especially when a chatbot is used to guide customers through a workflow to reach a goal\. ### Sample 7a: Converse in a dialogue ### Scenario: Generate dialogue output like a chatbot\. **Model choice** Like other foundation models, granite\-13b\-chat can be used for multiple tasks\. However, it is optimized for carrying on a dialogue\. **Decoding** Greedy\. This sample is answering general knowledge, factual questions, so we don't want creative output\. **Stopping criteria** <!-- <ul> --> * A helpful feature of the model is the inclusion of a special token that is named `END_KEY` at the end of each response\. When some generative models return a response to the input in fewer tokens than the maximum number allowed, they can repeat patterns from the input\. This model prevents such repetition by incorporating a reliable stop sequence for the prompt\. Add `END_KEY` as the stop sequence\. * Set the **Max tokens parameter** to 200 so the model can return a complete answer\. <!-- </ul> --> **Prompt text** The model expects the input to follow a specific pattern\. Start the input with an instruction\. For example, the instruction might read as follows: Participate in a dialogue with various people as an AI assistant\. As the Assistant, you are upbeat, professional, and polite\. You do your best to understand exactly what the human needs and help them to achieve their goal as best you can\. You do not give false or misleading information\. If you don't know an answer, you state that you don't know or aren't sure about the right answer\. You prioritize caution over usefulness\. You do not answer questions that are unsafe, immoral, unethical, or dangerous\. 
Next, add lines to capture the question and answer pattern with the following syntax: Human: *content of the question* Assistant: *new line for the model's answer* You can replace the terms *Human* and *Assistant* with other terms\. If you're using version 1, do not include any trailing white spaces after the *Assistant:* label, and be sure to add a new line\. Paste the following prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click **Generate** to see the result\. Participate in a dialogue with various people as an AI assistant. As the Assistant, you are upbeat, professional, and polite. You do your best to understand exactly what the human needs and help them to achieve their goal as best you can. You do not give false or misleading information. You prioritize caution over usefulness. You do not answer questions that are unsafe, immoral, unethical, or dangerous. Human: How does a bill become a law? Assistant: After the initial output is generated, continue the dialogue by asking a follow\-up question\. For example, if the output describes how a bill becomes a law in the United States, you can ask about how laws are made in other countries\. Human: What about in Canada? Assistant: A few notes about using this sample with the model: <!-- <ul> --> * The prompt input outlines the chatbot scenario and describes the personality of the AI assistant\. The description explains that the assistant should indicate when it doesn't know an answer\. It also directs the assistant to avoid discussing unethical topics\. * The assistant is able to respond to a follow\-up question that relies on information from an earlier exchange in the same dialogue\. * The model expects the input to follow a specific pattern\. * The generated response from the model is clearly indicated by the keyword `END_KEY`\. You can use this keyword as a stop sequence to help the model generate succinct responses\. <!-- </ul> --> ### Sample 7b: Converse in a dialogue ### Scenario: Generate dialogue output like a chatbot\. **Model choice** Like other foundation models, Llama 2 (in both the 70 billion and 13 billion sizes) can be used for multiple tasks\. But both Llama 2 models are optimized for dialogue use cases\. The llama\-2\-70b\-chat and llama\-2\-13b\-chat are the only models in watsonx\.ai that are fine\-tuned for the `[INST]<<SYS>><</SYS>>[/INST]` prompt format\. For more information about this prompt format, see [How to prompt Llama 2](https://huggingface.co/blog/llama2#how-to-prompt-llama-2)\. **Decoding** Greedy\. This sample is answering general knowledge, factual questions, so we don't want creative output\. **Stopping criteria** Set the Max tokens parameter to 100\. The template for Llama 2 uses an end\-of\-string (EOS) token (`</s>`) to delimit the end of the agent's response\. The model output stops generating text when the `</s>` EOS token is encountered\. **Prompt text** Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click **Generate** to see the result\. [INST] <<SYS>> You are a helpful assistant. Begin each response with the phrase "Dear user, ". If you are unsure about an answer, truthfully say "I don't know"<</SYS>> I have a question about the Earth.[/INST] Dear user, go on. [INST] How many moons are there? [/INST] After the initial output is generated, continue the dialogue to ask about more planets, wrapping your user messages in the same `[INST]` structure\. 
You don't need to add an `<s>` token to the start of follow\-up questions\. The output looks something like this: [INST] <<SYS>> You are a helpful assistant. Begin each response with the phrase "Dear user, ". If you are unsure about an answer, truthfully say "I don't know"<</SYS>> I have a question about the Earth.[/INST] Dear user, go on. [INST] How many moons are there? [/INST] Dear user, the Earth has one natural satellite, commonly referred to as the Moon. [INST] What about Saturn? [/INST] Dear user, Saturn has a total of 62 confirmed moons, as well as numerous smaller moonlets and ring particles. The largest moon of Saturn is Titan, which is the second-largest moon in the solar system, after Jupiter's moon Ganymede. [INST] And Mars? [/INST] Dear user, Mars has two small natural satellites, Phobos and Deimos. Both moons are thought to be captured asteroids, and are irregularly shaped and very small, with diameters of about 22 kilometers and 12 kilometers respectively. Notice a few important points about the output that the model generates as you go: <!-- <ul> --> * The so\-called *system instruction*, in the `<<SYS>><</SYS>>` tags, continues to influence the output at each dialogue turn without having to repeat the system instruction\. In other words, the *assistant responses* continue to be prepended with "Dear user, "\. * In true dialogue fashion, the assistant response to the *user input* "How many moons are there?" takes into account the previous user input "I have a question about the Earth\." and returns the number of moons orbiting Earth\. * Also in proper dialogue form, the assistant responses continue to follow the topic of the conversation, which is the number of moons\. (Otherwise, the generated output to the vague user message "And Mars?" could wander off in any direction\.) * Caution: Newline (carriage\-return) characters especially, and spaces to a lesser extent, in the prompt text can have a dramatic impact on the output generated\. * When you use Llama 2 for chat use cases, follow the recommended prompt template format as closely as possible\. Do not use the `[INST]<<SYS>><</SYS>>[/INST]` prompt format when you use Llama 2 for any other tasks besides chat\. <!-- </ul> --> **Parent topic:**[Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) <!-- </article "role="article" "> -->
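The Llama 2 dialogue in Sample 7b can also be assembled programmatically before you send it to the model. The sketch below is illustrative only: the helper and variable names are hypothetical (not part of any watsonx.ai library), and it simply builds the `[INST] <<SYS>> ... <</SYS>> ... [/INST]` text shown above from a system instruction and a list of dialogue turns.

```python
# Minimal sketch: assemble a Llama 2 chat prompt in the
# [INST] <<SYS>> ... <</SYS>> ... [/INST] format described above.
# The function and variable names are illustrative, not a library API.

def build_llama2_prompt(system_instruction, turns):
    """Build prompt text from a system instruction and a list of
    (user_message, assistant_reply) pairs; leave the last reply empty
    so the prompt stays open for the model to complete."""
    first_user, first_reply = turns[0]
    prompt = (
        f"[INST] <<SYS>>\n{system_instruction}\n<</SYS>>\n\n"
        f"{first_user} [/INST] {first_reply}"
    )
    for user_message, assistant_reply in turns[1:]:
        prompt += f" [INST] {user_message} [/INST]"
        if assistant_reply:
            prompt += f" {assistant_reply}"
    return prompt

system = (
    'You are a helpful assistant. Begin each response with the phrase '
    '"Dear user, ". If you are unsure about an answer, truthfully say '
    '"I don\'t know"'
)
turns = [
    ("I have a question about the Earth.", "Dear user, go on."),
    ("How many moons are there?", ""),  # empty reply: the model completes this turn
]
print(build_llama2_prompt(system, turns))
```

As the sample cautions, newline and whitespace placement can noticeably change the output, so compare the assembled text against the prompt format that works for you in Prompt Lab before you rely on it.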
38DBE0E16434502696281563802B76F3E38B25D2
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-save.html?context=cdpaas&locale=en
Saving your work
Saving your work Prompt engineering involves trial and error. Keep track of your experimentation and save model-and-prompt combinations that generate the output you want. When you save your work, you can choose to save it as different asset types. Saving your work as an asset makes it possible to share your work with collaborators in the current project. Table 1: Asset types Asset type When to use this asset type What is saved How to retrieve the asset Prompt template asset When you find a combination of prompt static text, prompt variables, and prompt engineering parameters that generate the results you want from a specific model and want to reuse it. Prompt text, model, prompt engineering parameters, and prompt variables. <br>Note: The output that is generated by the model is not saved. From the Saved prompt templates tab Prompt session asset When you want to keep track of the steps involved with your experimentation so you know what you've tried and what you haven't. Prompt text, model, prompt engineering parameters, and model output for up to 500 prompts that are submitted during a prompt engineering session. From the History tab Notebook asset When you want to work with models programmatically, but want to start from the Prompt Lab interface for a better prompt engineering experience. Prompt text, model, prompt engineering parameters, and prompt variable names and default values are formatted as Python code and stored as a notebook. From the Assets page of the project Each of these asset types is available from the project's Assets page. Project collaborators with the Admin or Editor role can open and work with them. Your prompt template and prompt session assets are locked automatically, but you can unlock them by clicking the lock icon (![Lock icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/lockicon-new.png)). Saving your work To save your prompt engineering work, complete the following steps: 1. From the header of the prompt editor, click Save work, and then click Save as. 2. Choose an asset type. 3. Name the asset, and then optionally add a description. 4. Choose the task type that best matches your goal. 5. If you save the prompt as a notebook asset only: Select View in project after saving. 6. Click Save. Working with prompts saved in a notebook When you save your work as a notebook asset, a Python notebook is built. To work with a prompt notebook asset, complete the following steps: 1. Open the notebook asset from the Assets tab of your project. 2. Click the Edit icon (![edit notebook icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/edit.svg)) to instantiate the notebook so you can step through the code. The notebook contains runnable code that manages the following steps for you: * Authenticates with the service. * Defines a Python class. * Defines the input text for the model and declares any prompt variables. You can edit the static prompt text and assign values to prompt variables. * Uses the defined class to call the watsonx.ai inferencing API and pass your input to the foundation model. * Shows the output that is generated by the foundation model. 3. Use the notebook as is, or change it to meet the needs of your use case. The Python code that is generated by using the Prompt Lab executes successfully. You must test and validate any changes that you make to the code. Working with saved prompt templates To continue working with a saved prompt, open it from the Saved prompt templates tab of the Prompt Lab. 
When you open a saved prompt template, Autosave is on, which means that any changes that you make to the prompt are reflected in the saved prompt template asset. If you want the prompt template that you saved to remain unchanged, click New prompt to start a new prompt. For more information, see [Building reusable prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html). Working with saved prompt sessions To continue working with a saved prompt session, open it from the History tab of the Prompt Lab. To review previous prompt submissions, you can click a prompt entry from the history to open it in the prompt editor. If you prefer the results from the earlier prompt, you can make it your current prompt again by clicking Restore. When you restore an earlier prompt, your current prompt session is replaced by the earlier version of the prompt session. Learn more * [Security and privacy for foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html) Parent topic:[Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html)
# Saving your work # Prompt engineering involves trial and error\. Keep track of your experimentation and save model\-and\-prompt combinations that generate the output you want\. When you save your work, you can choose to save it as different asset types\. Saving your work as an asset makes it possible to share your work with collaborators in the current project\. <!-- <table> --> Table 1: Asset types | Asset type | When to use this asset type | What is saved | How to retrieve the asset | | --------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------- | | Prompt template asset | When you find a combination of prompt static text, prompt variables, and prompt engineering parameters that generate the results you want from a specific model and want to reuse it\. | Prompt text, model, prompt engineering parameters, and prompt variables\. <br>**Note**: The output that is generated by the model is not saved\. | From the **Saved prompt templates** tab | | Prompt session asset | When you want to keep track of the steps involved with your experimentation so you know what you've tried and what you haven't\. | Prompt text, model, prompt engineering parameters, and model output for up to 500 prompts that are submitted during a prompt engineering session\. | From the **History** tab | | Notebook asset | When you want to work with models programmatically, but want to start from the Prompt Lab interface for a better prompt engineering experience\. | Prompt text, model, prompt engineering parameters, and prompt variable names and default values are formatted as Python code and stored as a notebook\. | From the **Assets** page of the project | <!-- </table ""> --> Each of these asset types is available from the project's **Assets** page\. Project collaborators with the **Admin** or **Editor** role can open and work with them\. Your prompt template and prompt session assets are locked automatically, but you can unlock them by clicking the lock icon (![Lock icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/lockicon-new.png))\. ## Saving your work ## To save your prompt engineering work, complete the following steps: <!-- <ol> --> 1. From the header of the prompt editor, click **Save work**, and then click **Save as**\. 2. Choose an asset type\. 3. Name the asset, and then optionally add a description\. 4. Choose the task type that best matches your goal\. 5. *If you save the prompt as a notebook asset only*: Select **View in project after saving**\. 6. Click **Save**\. <!-- </ol> --> ## Working with prompts saved in a notebook ## When you save your work as a notebook asset, a Python notebook is built\. To work with a prompt notebook asset, complete the following steps: <!-- <ol> --> 1. Open the notebook asset from the **Assets** tab of your project\. 2. Click the Edit icon (![edit notebook icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/edit.svg)) to instantiate the notebook so you can step through the code\. The notebook contains runnable code that manages the following steps for you: <!-- <ul> --> * Authenticates with the service. * Defines a Python class. 
* Defines the input text for the model and declares any prompt variables. You can edit the static prompt text and assign values to prompt variables. * Uses the defined class to call the watsonx.ai inferencing API and pass your input to the foundation model. * Shows the output that is generated by the foundation model. <!-- </ul> --> 3. Use the notebook as is, or change it to meet the needs of your use case\. The Python code that is generated by using the Prompt Lab executes successfully. You must test and validate any changes that you make to the code. <!-- </ol> --> ## Working with saved prompt templates ## To continue working with a saved prompt, open it from the **Saved prompt templates** tab of the Prompt Lab\. When you open a saved prompt template, **Autosave** is on, which means that any changes that you make to the prompt are reflected in the saved prompt template asset\. If you want the prompt template that you saved to remain unchanged, click **New prompt** to start a new prompt\. For more information, see [Building reusable prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html)\. ## Working with saved prompt sessions ## To continue working with a saved prompt session, open it from the **History** tab of the Prompt Lab\. To review previous prompt submissions, you can click a prompt entry from the history to open it in the prompt editor\. If you prefer the results from the earlier prompt, you can make it your current prompt again by clicking **Restore**\. When you restore an earlier prompt, your current prompt session is replaced by the earlier version of the prompt session\. ## Learn more ## <!-- <ul> --> * [Security and privacy for foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html) <!-- </ul> --> **Parent topic:**[Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) <!-- </article "role="article" "> -->
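If you are curious about the general shape of the code in a prompt notebook asset, the following sketch is a rough approximation under stated assumptions: it uses the `Model` class and `generate()` method from the `ibm-watson-machine-learning` library that is described elsewhere in this documentation, and it simplifies credential handling and class structure. The notebook that Prompt Lab actually generates defines its own helper class and can differ in detail.

```python
# Minimal sketch of the kind of code a prompt notebook contains.
# This is NOT the exact code that Prompt Lab generates; credentials,
# class structure, and parameter handling are simplified assumptions.
from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes
from ibm_watson_machine_learning.foundation_models import Model

my_credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",
    "apikey": "<my-IBM-Cloud-API-key>",   # replace with your IBM Cloud API key
}
project_id = "<my-project-ID>"            # replace with your project ID
gen_parms = None
space_id = None
verify = False

# Static prompt text and a prompt variable, as saved from Prompt Lab
prompt_template = "Write a thank you note for attending a {event}."
prompt_variables = {"event": "workshop"}  # edit the value before running

model = Model(ModelTypes.FLAN_UL2, my_credentials, gen_parms, project_id, space_id, verify)

prompt_txt = prompt_template.format(**prompt_variables)
generated_response = model.generate(prompt_txt)
print(generated_response["results"][0]["generated_text"])
```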
F839CD35991DF790F17239C9C63BFCAE701F3D65
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-tips.html?context=cdpaas&locale=en
Tips for writing foundation model prompts: prompt engineering
Tips for writing foundation model prompts: prompt engineering Part art, part science, prompt engineering is the process of crafting prompt text to best effect for a given model and parameters. When it comes to prompting foundation models, there isn't just one right answer. There are usually multiple ways to prompt a foundation model for a successful result. Use the Prompt Lab to experiment with crafting prompts. * For help using the prompt editor, see [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html). * Try the samples that are available from the Sample prompts tab. * Learn from documented samples. See [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html). As you experiment, remember these tips. The tips in this topic will help you successfully prompt most text-generating foundation models. Tip 1: Always remember that everything is text completion Your prompt is the text you submit for processing by a foundation model. The Prompt Lab in IBM watsonx.ai is not a chatbot interface. For most models, simply asking a question or typing an instruction usually won't yield the best results. That's because the model isn't answering your prompt, the model is appending text to it. This image demonstrates prompt text and generated output: * Prompt text: "I took my dog " * Generated output: "to the park." ![Text completion in Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fm-prompt-lab-text-completion.png) Tip 2: Include all the needed prompt components Effective prompts usually have one or more of the following components: instruction, context, examples, and cue. Instruction An instruction is an imperative statement that tells the model what to do. For example, if you want the model to list ideas for a dog-walking business, your instruction could be: "List ideas for starting a dog-walking business:" Context Including background or contextual information in your prompt can nudge the model output in a desired direction. Specifically, (tokenized) words that appear in your prompt text are more likely to be included in the generated output. Examples To indicate the format or shape that you want the model response to be, include one or more pairs of example input and corresponding desired output showing the pattern you want the generated text to follow. (Including one example in your prompt is called one-shot prompting, including two or more examples in your prompt is called few-shot prompting, and when your prompt has no examples, that's called zero-shot prompting.) Note that when you are prompting models that have been fine-tuned, you might not need examples. Cue A cue is text at the end of the prompt that is likely to start the generated output on a desired path. (Remember, as much as it seems like the model is responding to your prompt, the model is really appending text to your prompt or continuing your prompt.) Tip 3: Include descriptive details The more guidance, the better. Experiment with including descriptive phrases related to aspects of your ideal result: content, style, and length. Including these details in your prompt can cause a more creative or more complete result to be generated. 
For example, you could improve upon the sample instruction given previously: * Original: "List ideas for starting a dog-walking business" * Improved: "List ideas for starting a large, wildly successful dog-walking business" Example Before In this image, you can see a prompt with the original, simple instruction. This prompt doesn't produce great results. ![Example prompt text with just a simple instruction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fm-prompt-lab-prompt-too-simple.png) After In this image, you can see all the prompt components: instruction (complete with descriptive details), context, example, and cue. This prompt produces a much better result. ![Example prompt text with an instruction, context, an example, and a cue](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fm-prompt-lab-prompt-components.png) You can experiment with this prompt in the Prompt Lab yourself: Model: gpt-neox-20b Decoding: Sampling * Temperature: 0.7 * Top P: 1 * Top K: 50 * Repetition penalty: 1.02 Stopping criteria: * Stop sequence: Two newline characters * Min tokens: 0 * Max tokens: 80 Prompt text: Copy this prompt text and paste it into the freeform prompt editor in Prompt Lab, then click Generate to see a result. With no random seed specified, results will vary each time you submit the prompt. Based on the following industry research, suggest ideas for starting a large, wildly successful dog-walking business. Industry research: The most successful dog-walking businesses cater to owners' needs and desires while also providing great care to the dogs. For example, owners want flexible hours, a shuttle to pick up and drop off dogs at home, and personalized services, such as custom meal and exercise plans. Consider too how social media has permeated our lives. Web-enabled interaction provide images and video that owners will love to share online, which is great advertising for the business. Ideas for starting a lemonade business: - Set up a lemonade stand - Partner with a restaurant - Get a celebrity to endorse the lemonade Ideas for starting a large, wildly successful dog-walking business: Learn more * [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html) * [Avoiding hallucinations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-hallucinations.html) * [Generating accurate output](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-factual-accuracy.html) Parent topic:[Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html)
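The component structure in Tip 2 also translates naturally to code if you build prompts programmatically. The sketch below is a simple illustration, not a watsonx.ai API: the `build_prompt` helper is hypothetical and just concatenates an instruction, optional context, example input/output pairs, and a cue into one prompt string that you can then submit from Prompt Lab or the Python library.

```python
# Illustrative helper (not part of any watsonx.ai library): assemble a prompt
# from the four components described in Tip 2.

def build_prompt(instruction, context=None, examples=None, cue=None):
    """Concatenate instruction, optional context, optional (input, output)
    example pairs, and an optional cue into a single prompt string."""
    parts = [instruction]
    if context:
        parts.append(context)
    for example_input, example_output in examples or []:
        parts.append(f"{example_input}\n{example_output}")
    if cue:
        parts.append(cue)
    return "\n\n".join(parts)

prompt = build_prompt(
    instruction="List ideas for starting a large, wildly successful dog-walking business.",
    context="Owners want flexible hours, pickup and drop-off at home, and personalized services.",
    examples=[("Ideas for starting a lemonade business:", "- Set up a lemonade stand")],
    cue="Ideas for starting a large, wildly successful dog-walking business:",
)
print(prompt)
```

Keeping the components separate like this makes it easy to experiment with adding or removing context and examples while the instruction and cue stay fixed.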
# Tips for writing foundation model prompts: prompt engineering # Part art, part science, *prompt engineering* is the process of crafting prompt text to best effect for a given model and parameters\. When it comes to prompting foundation models, there isn't just one right answer\. There are usually multiple ways to prompt a foundation model for a successful result\. Use the Prompt Lab to experiment with crafting prompts\. <!-- <ul> --> * For help using the prompt editor, see [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html)\. * Try the samples that are available from the **Sample prompts** tab\. * Learn from documented samples\. See [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html)\. <!-- </ul> --> As you experiment, remember these tips\. The tips in this topic will help you successfully prompt most text\-generating foundation models\. ## Tip 1: Always remember that everything is text completion ## Your *prompt* is the text you submit for processing by a foundation model\. The Prompt Lab in IBM watsonx\.ai is not a chatbot interface\. For most models, simply asking a question or typing an instruction usually won't yield the best results\. That's because the model isn't *answering* your prompt, the model is *appending text to it*\. This image demonstrates prompt text and generated output: <!-- <ul> --> * Prompt text: "I took my dog " * Generated output: "to the park\." <!-- </ul> --> ![Text completion in Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fm-prompt-lab-text-completion.png) ## Tip 2: Include all the needed prompt components ## Effective prompts usually have one or more of the following components: instruction, context, examples, and cue\. ### Instruction ### An instruction is an imperative statement that tells the model what to do\. For example, if you want the model to list ideas for a dog\-walking business, your instruction could be: "List ideas for starting a dog\-walking business:" ### Context ### Including background or contextual information in your prompt can nudge the model output in a desired direction\. Specifically, (tokenized) words that appear in your prompt text are more likely to be included in the generated output\. ### Examples ### To indicate the format or shape that you want the model response to be, include one or more pairs of example input and corresponding desired output showing the pattern you want the generated text to follow\. (Including one example in your prompt is called *one\-shot prompting*, including two or more examples in your prompt is called *few\-shot* prompting, and when your prompt has no examples, that's called *zero\-shot* prompting\.) Note that when you are prompting models that have been fine\-tuned, you might not need examples\. ### Cue ### A cue is text at the end of the prompt that is likely to start the generated output on a desired path\. (Remember, as much as it seems like the model is *responding to your prompt*, the model is really *appending text to your prompt* or *continuing your prompt*\.) ## Tip 3: Include descriptive details ## The more guidance, the better\. Experiment with including descriptive phrases related to aspects of your ideal result: content, style, and length\. Including these details in your prompt can cause a more creative or more complete result to be generated\. 
For example, you could improve upon the sample instruction given previously: <!-- <ul> --> * Original: "List ideas for starting a dog\-walking business" * Improved: "List ideas for starting a large, wildly successful dog\-walking business" <!-- </ul> --> ## Example ## ### Before ### In this image, you can see a prompt with the original, simple instruction\. This prompt doesn't produce great results\. ![Example prompt text with just a simple instruction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fm-prompt-lab-prompt-too-simple.png) ### After ### In this image, you can see all the prompt components: instruction (complete with descriptive details), context, example, and cue\. This prompt produces a much better result\. ![Example prompt text with an instruction, context, an example, and a cue](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fm-prompt-lab-prompt-components.png) You can experiment with this prompt in the Prompt Lab yourself: **Model:** gpt\-neox\-20b **Decoding:** Sampling <!-- <ul> --> * Temperature: 0\.7 * Top P: 1 * Top K: 50 * Repetition penalty: 1\.02 <!-- </ul> --> **Stopping criteria:** <!-- <ul> --> * Stop sequence: Two newline characters * Min tokens: 0 * Max tokens: 80 <!-- </ul> --> **Prompt text:** Copy this prompt text and paste it into the freeform prompt editor in Prompt Lab, then click **Generate** to see a result\. With no random seed specified, results will vary each time you submit the prompt\. Based on the following industry research, suggest ideas for starting a large, wildly successful dog-walking business. Industry research: *** The most successful dog-walking businesses cater to owners' needs and desires while also providing great care to the dogs. For example, owners want flexible hours, a shuttle to pick up and drop off dogs at home, and personalized services, such as custom meal and exercise plans. Consider too how social media has permeated our lives. Web-enabled interaction provide images and video that owners will love to share online, which is great advertising for the business. *** Ideas for starting a lemonade business: - Set up a lemonade stand - Partner with a restaurant - Get a celebrity to endorse the lemonade Ideas for starting a large, wildly successful dog-walking business: ## Learn more ## <!-- <ul> --> * [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html) * [Avoiding hallucinations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-hallucinations.html) * [Generating accurate output](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-factual-accuracy.html) <!-- </ul> --> **Parent topic:**[Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) <!-- </article "role="article" "> -->
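To reproduce this experiment outside the Prompt Lab, you can pass the same decoding options programmatically. The following sketch assumes the `ibm-watson-machine-learning` library that is described in the Foundation models Python library topic; the parameter dictionary keys (`decoding_method`, `temperature`, and so on) follow the watsonx.ai generation parameters and the model identifier follows the pattern shown on the model card, so verify both against the library documentation for your version.

```python
# Sketch: submit the dog-walking prompt with the sampling parameters listed above.
# Replace the placeholder credentials and project ID with your own values.
from ibm_watson_machine_learning.foundation_models import Model

my_credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",
    "apikey": "<my-IBM-Cloud-API-key>",
}
project_id = "<my-project-ID>"

# Sampling parameters from the example above; confirm the key names
# against the library documentation for your version.
gen_parms = {
    "decoding_method": "sample",
    "temperature": 0.7,
    "top_p": 1,
    "top_k": 50,
    "repetition_penalty": 1.02,
    "min_new_tokens": 0,
    "max_new_tokens": 80,
    "stop_sequences": ["\n\n"],   # stop at two newline characters
}

# Model identifier assumed from the model card path for gpt-neox-20b
model = Model("eleutherai/gpt-neox-20b", my_credentials, gen_parms, project_id)

prompt_txt = "..."  # paste the full dog-walking prompt text from above

generated_response = model.generate(prompt_txt)
print(generated_response["results"][0]["generated_text"])
```

Because sampling is used and no random seed is set, repeated calls return different results, just as in the Prompt Lab.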
6049D5AA5DE41309E6281534A464ABD6898A758C
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html?context=cdpaas&locale=en
Building reusable prompts
Building reusable prompts Prompt engineering to find effective prompts for a model takes time and effort. Stretch the benefits of your work by building prompts that you can reuse and share with others. A great way to add flexibility to a prompt is to add prompt variables. A prompt variable is a placeholder keyword that you include in the static text of your prompt at creation time and replace with text dynamically at run time. Using variables to change prompt text dynamically Variables help you to generalize a prompt so that it can be reused more easily. For example, a prompt for a generative task might contain the following static text: Write a story about a dog. If you replace the text dog with a variable that is named {animal}, you add support for dynamic content to the prompt. Write a story about a {animal}. With the variable {animal}, the text can still be used to prompt the model for a story about a dog. But now it can be reused to ask for a story about a cat, a mouse, or another animal, simply by swapping the value that is specified for the {animal} variable. Creating prompt variables To create a prompt variable, complete the following steps: 1. From the Prompt Lab, review the text in your prompt for words or phrases that, when converted to a variable, will make the prompt easier to reuse. 2. Click the Prompt variables icon (![{#}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/parameter.svg)) at the start of the page. The Prompt variables panel is displayed where you can add variable name-and-value pairs. 3. Click New variable. 4. Click to add a variable name, tab to the next field, and then add a default value. The variable name can contain alphanumeric characters or an underscore (_), but cannot begin with a number. The default value for the variable is a fallback value; it is used every time that the prompt is submitted, unless someone overwrites the default value by specifying a new value for the variable. 5. Repeat the previous step to add more variables. The following table shows some examples of the types of variables that you might want to add. | Variable name | Default value | |---------------|---------------| | country | Ireland | | city | Boston | | project | Project X | | company | IBM | 6. Replace static text in the prompt with your variables. Select the word or phrase in the prompt that you want to replace, and then click the Prompt variables icon (![{#}}](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/parameter.svg)) within the text box to see a list of available variables. Click the variable that you want to use from the list. The variable replaces the selected text. It is formatted with the syntax {variable name}, where the variable name is surrounded by braces. If your static text already contains variables that are formatted with braces, they are ignored unless prompt variables of the same name exist. 7. To specify a value for a variable at run time, open the Prompt variables panel, click Preview, and then add a value for the variable. You can also change the variable value from the edit view of the Prompt variables panel, but the value you specify will become the new default value. When you find a set of prompt static text, prompt variables, and prompt engineering parameters that generates the results you want from a model, save the prompt as a prompt template asset. After you save the prompt template asset, you can reuse the prompt or share it with collaborators in the current project. 
For more information, see [Saving prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-save.html). Examples of reusing prompts The following examples help illustrate ways that using prompt variables can add versatility to your prompts. * [Thank you note example](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html?context=cdpaas&locale=enthank-you-example) * [Devil's advocate example](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html?context=cdpaas&locale=endevil-example) Thank you note example Replace static text in the Thank you note generation built-in sample prompt with variables to make the prompt reusable. To add versatility to a built-in prompt, complete the following steps: 1. From the Prompt Lab, click Sample prompts to list the built-in sample prompts. From the Generation section, click Thank you note generation. The input for the built-in sample prompt is added to the prompt editor and the flan-ul2-20b model is selected. Write a thank you note for attending a workshop. Attendees: interns Topic: codefest, AI Tone: energetic 2. Review the text for words or phrases that make good variable candidates. In this example, if the following words are replaced, the prompt meaning will change: * workshop * interns * codefest * AI * energetic 3. Create a variable to represent each word in the list. Add the current value as the default value for the variable. | Variable name | Value | |---------------|---------------| | event | workshop | | attendees | interns | | topic1 | codefest | | topic2 | AI | | tone | energetic | 4. Click Preview to review the variables that you added. 5. Update the static prompt text to use variables in place of words. Write a thank you note for attending a {event}. Attendees: {attendees} Topic: {topic1}, {topic2} Tone: {tone} ![Screenshot that shows static text in the prompt editor being replaced with variables.](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fm-prompt-var-replacement.png) The original meaning of the prompt is maintained. 6. Now, change the values of the variables to change the meaning of the prompt. From the Fill in prompt variables view of the prompt variables panel, add values for the variables. | Variable name | Value | |---------------|---------------| | event | human resources presentation | | attendees | expecting parents | | topic1 | resources for new parents | | topic2 | parental leave | | tone | supportive | You effectively converted the original prompt into the following prompt: Write a thank you note for attending a human resources presentation. Attendees: expecting parents Topic: resources for new parents, parental leave Tone: supportive Click Generate to see how the model responds. 7. Swap the values for the variables to reuse the same prompt again to generate thank you notes for usability test attendees. | Variable name | Value | |---------------|-------| | event | usability test | | attendees | user volunteers | | topic1 | testing out new features | | topic2 | sharing early feedback | | tone | appreciative | Click Generate to see how the model responds. Devil's advocate example Use prompt variables to reuse effective examples that you devise for a prompt. You can guide a foundation model to answer in an expected way by adding a few examples that establish a pattern for the model to follow. This kind of prompt is called a few-shot prompt. 
Inventing good examples for a prompt requires imagination and testing and can be time-consuming. If you successfully create a few-shot prompt that proves to be effective, you can make it reusable by adding prompt variables. Maybe you want to use the granite-13b-instruct-v1 model to help you consider risks or problems that might arise from an action or plan under consideration. For example, the prompt might have the following instruction and examples: You are playing the role of devil's advocate. Argue against the proposed plans. List 3 detailed, unique, compelling reasons why moving forward with the plan would be a bad choice. Consider all types of risks. Plan we are considering: Extend our store hours. Three problems with this plan are: 1. We'll have to pay more for staffing. 2. Risk of theft increases late at night. 3. Clerks might not want to work later hours. Plan we are considering: Open a second location for our business. Three problems with this plan are: 1. Managing two locations will be more than twice as time-consuming than managed just one. 2. Creating a new location doesn't guarantee twice as many customers. 3. A new location means added real estate, utility, and personnel expenses. Plan we are considering: Refreshing our brand image by creating a new logo. Three problems with this plan are: You can reuse the prompt by completing the following steps: 1. Replace the text that describes the action that you are considering with a variable. For example, you can add the following variable: | Variable name | Default value | |---------------|---------------| | plan | Refreshing our brand image by creating a new logo. | 2. Replace the static text that defines the plan with the {plan} variable. You are playing the role of devil's advocate. Argue against the proposed plans. List 3 detailed, unique, compelling reasons why moving forward with the plan would be a bad choice. Consider all types of risks. Plan we are considering: Extend our store hours. Three problems with this plan are: 1. We'll have to pay more for staffing. 2. Risk of theft increases late at night. 3. Clerks might not want to work later hours. Plan we are considering: Open a second location for our business. Three problems with this plan are: 1. Managing two locations will be more than twice as time-consuming than managed just one. 2. Creating a new location doesn't guarantee twice as many customers. 3. A new location means added real estate, utility, and personnel expenses. Plan we are considering: {plan} Three problems with this plan are: Now you can use the same prompt to prompt the model to brainstorm about other actions. 3. Change the text in the {plan} variable to describe a different plan, and then click Generate to send the new input to the model. Parent topic:[Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html)
# Building reusable prompts # Prompt engineering to find effective prompts for a model takes time and effort\. Stretch the benefits of your work by building prompts that you can reuse and share with others\. A great way to add flexibility to a prompt is to add *prompt variables*\. A prompt variable is a placeholder keyword that you include in the static text of your prompt at creation time and replace with text dynamically at run time\. ## Using variables to change prompt text dynamically ## Variables help you to generalize a prompt so that it can be reused more easily\. For example, a prompt for a generative task might contain the following static text: `Write a story about a dog.` If you replace the text *dog* with a variable that is named `{animal}`, you add support for dynamic content to the prompt\. `Write a story about a {animal}.` With the variable `{animal}`, the text can still be used to prompt the model for a story about a dog\. But now it can be reused to ask for a story about a cat, a mouse, or another animal, simply by swapping the value that is specified for the `{animal}` variable\. ## Creating prompt variables ## To create a prompt variable, complete the following steps: <!-- <ol> --> 1. From the Prompt Lab, review the text in your prompt for words or phrases that, when converted to a variable, will make the prompt easier to reuse\. 2. Click the **Prompt variables** icon (![\{\#\}\}](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/parameter.svg)) at the start of the page\. The *Prompt variables* panel is displayed where you can add variable name-and-value pairs. 3. Click **New variable**\. 4. Click to add a variable name, tab to the next field, and then add a default value\. The variable name can contain alphanumeric characters or an underscore (\_), but cannot begin with a number. The default value for the variable is a fallback value; it is used every time that the prompt is submitted, unless someone overwrites the default value by specifying a new value for the variable. 5. Repeat the previous step to add more variables\. The following table shows some examples of the types of variables that you might want to add. \| Variable name \| Default value \| \|---------------\|---------------\| \| country \| Ireland \| \| city \| Boston \| \| project \| Project X \| \| company \| IBM \| 6. Replace static text in the prompt with your variables\. Select the word or phrase in the prompt that you want to replace, and then click the **Prompt variables** icon (![\{\#\}\}](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/parameter.svg)) within the text box to see a list of available variables. Click the variable that you want to use from the list. The variable replaces the selected text. It is formatted with the syntax `{variable name}`, where the variable name is surrounded by braces. If your static text already contains variables that are formatted with braces, they are ignored unless prompt variables of the same name exist. 7. To specify a value for a variable at run time, open the *Prompt variables* panel, click **Preview**, and then add a value for the variable\. You can also change the variable value from the edit view of the *Prompt variables* panel, but the value you specify will become the new default value. <!-- </ol> --> When you find a set of prompt static text, prompt variables, and prompt engineering parameters that generates the results you want from a model, save the prompt as a prompt template asset\. 
After you save the prompt template asset, you can reuse the prompt or share it with collaborators in the current project\. For more information, see [Saving prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-save.html)\. ## Examples of reusing prompts ## The following examples help illustrate ways that using prompt variables can add versatility to your prompts\. <!-- <ul> --> * [Thank you note example](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html?context=cdpaas&locale=en#thank-you-example) * [Devil's advocate example](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html?context=cdpaas&locale=en#devil-example) <!-- </ul> --> ### Thank you note example ### Replace static text in the *Thank you note generation* built\-in sample prompt with variables to make the prompt reusable\. To add versatility to a built\-in prompt, complete the following steps: <!-- <ol> --> 1. From the Prompt Lab, click **Sample prompts** to list the built\-in sample prompts\. From the *Generation* section, click **Thank you note generation**\. The input for the built-in sample prompt is added to the prompt editor and the flan-ul2-20b model is selected. Write a thank you note for attending a workshop. Attendees: interns Topic: codefest, AI Tone: energetic 2. Review the text for words or phrases that make good variable candidates\. In this example, if the following words are replaced, the prompt meaning will change: <!-- <ul> --> * workshop * interns * codefest * AI * energetic <!-- </ul> --> 3. Create a variable to represent each word in the list\. Add the current value as the default value for the variable\. \| Variable name \| Value \| \|---------------\|---------------\| \| event \| workshop \| \| attendees \| interns \| \| topic1 \| codefest \| \| topic2 \| AI \| \| tone \| energetic \| 4. Click **Preview** to review the variables that you added\. 5. Update the static prompt text to use variables in place of words\. Write a thank you note for attending a {event}. Attendees: {attendees} Topic: {topic1}, {topic2} Tone: {tone} ![Screenshot that shows static text in the prompt editor being replaced with variables.](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fm-prompt-var-replacement.png) The original meaning of the prompt is maintained. 6. Now, change the values of the variables to change the meaning of the prompt\. From the *Fill in prompt variables* view of the prompt variables panel, add values for the variables. \| Variable name \| Value \| \|---------------\|---------------\| \| event \| human resources presentation \| \| attendees \| expecting parents \| \| topic1 \| resources for new parents \| \| topic2 \| parental leave \| \| tone \| supportive \| You effectively converted the original prompt into the following prompt: Write a thank you note for attending a human resources presentation. Attendees: expecting parents Topic: resources for new parents, parental leave Tone: supportive Click **Generate** to see how the model responds. 7. Swap the values for the variables to reuse the same prompt again to generate thank you notes for usability test attendees\. \| Variable name \| Value \| \|---------------\|-------\| \| event \| usability test \| \| attendees \| user volunteers \| \| topic1 \| testing out new features \| \| topic2 \| sharing early feedback \| \| tone \| appreciative \| Click **Generate** to see how the model responds. 
<!-- </ol> --> ### Devil's advocate example ### Use prompt variables to reuse effective examples that you devise for a prompt\. You can guide a foundation model to answer in an expected way by adding a few examples that establish a pattern for the model to follow\. This kind of prompt is called a *few\-shot prompt*\. Inventing good examples for a prompt requires imagination and testing and can be time\-consuming\. If you successfully create a few\-shot prompt that proves to be effective, you can make it reusable by adding prompt variables\. Maybe you want to use the granite\-13b\-instruct\-v1 model to help you consider risks or problems that might arise from an action or plan under consideration\. For example, the prompt might have the following instruction and examples: You are playing the role of devil's advocate. Argue against the proposed plans. List 3 detailed, unique, compelling reasons why moving forward with the plan would be a bad choice. Consider all types of risks. Plan we are considering: Extend our store hours. Three problems with this plan are: 1. We'll have to pay more for staffing. 2. Risk of theft increases late at night. 3. Clerks might not want to work later hours. Plan we are considering: Open a second location for our business. Three problems with this plan are: 1. Managing two locations will be more than twice as time-consuming than managed just one. 2. Creating a new location doesn't guarantee twice as many customers. 3. A new location means added real estate, utility, and personnel expenses. Plan we are considering: Refreshing our brand image by creating a new logo. Three problems with this plan are: You can reuse the prompt by completing the following steps: <!-- <ol> --> 1. Replace the text that describes the action that you are considering with a variable\. For example, you can add the following variable: \| Variable name \| Default value \| \|---------------\|---------------\| \| plan \| Refreshing our brand image by creating a new logo. \| 2. Replace the static text that defines the plan with the `{plan}` variable\. You are playing the role of devil's advocate. Argue against the proposed plans. List 3 detailed, unique, compelling reasons why moving forward with the plan would be a bad choice. Consider all types of risks. Plan we are considering: Extend our store hours. Three problems with this plan are: 1. We'll have to pay more for staffing. 2. Risk of theft increases late at night. 3. Clerks might not want to work later hours. Plan we are considering: Open a second location for our business. Three problems with this plan are: 1. Managing two locations will be more than twice as time-consuming than managed just one. 2. Creating a new location doesn't guarantee twice as many customers. 3. A new location means added real estate, utility, and personnel expenses. Plan we are considering: {plan} Three problems with this plan are: Now you can use the same prompt to prompt the model to brainstorm about other actions. 3. Change the text in the `{plan}` variable to describe a different plan, and then click **Generate** to send the new input to the model\. <!-- </ol> --> **Parent topic:**[Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) <!-- </article "role="article" "> -->
B1AF301F18E6444DA2842CC71F9AC38505EE5E1F
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html?context=cdpaas&locale=en
Foundation models Python library
Foundation models Python library You can prompt foundation models in IBM watsonx.ai programmatically by using the Python library. The Watson Machine Learning Python library is a publicly available library that you can use to work with Watson Machine Learning services. The Watson Machine Learning service hosts the watsonx.ai foundation models. Using the Python library After you create a prompt in the Prompt Lab, you can save the prompt as a notebook, and then edit the notebook. Using the generated notebook as a starting point is useful because it handles the initial setup steps, such as getting credentials and the project ID information for you. If you want to work with the models directly from a notebook, you can do so by using the Watson Machine Learning Python library. The ibm-watson-machine-learning Python library is publicly available on PyPI from the url: [https://pypi.org/project/ibm-watson-machine-learning/](https://pypi.org/project/ibm-watson-machine-learning/). However, you can install it in your development environment by using the following command: pip install ibm-watson-machine-learning If you installed the library before, include the -U parameter to ensure that you have the latest version. pip install -U ibm-watson-machine-learning For more information about the available methods for working with foundation models, see [Foundation models Python library](https://ibm.github.io/watson-machine-learning-sdk/foundation_models.html). You need to take some steps before you can use the Python library: * [Setting up credentials](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-credentials.html) * [Looking up your project ID](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html?context=cdpaas&locale=enproject-id) Looking up your project ID To prompt foundation models in IBM watsonx.ai programmatically, you need to pass the identifier (ID) of a project that has an instance of IBM Watson Machine Learning associated with it. To get the ID of a project, complete the following steps: 1. Navigate to the project in the watsonx web console, open the project, and then click the Manage tab. 2. Copy the project ID from the Details section of the General page. Examples The following examples show you how to use the library to perform a few basic tasks in a notebook. Example 1: List available foundation models You can view [ModelTypes](https://ibm.github.io/watson-machine-learning-sdk/foundation_models.htmlibm_watson_machine_learning.foundation_models.utils.enums.ModelTypes) to see available foundation models. Python code from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes import json print( json.dumps( ModelTypes._member_names_, indent=2 ) ) Sample output [ "FLAN_T5_XXL", "FLAN_UL2", "MT0_XXL", ... ] Example: View details of a foundation model You can view details, such as a short description and foundation model limits, by using [get_details()](https://ibm.github.io/watson-machine-learning-sdk/foundation_models.htmlibm_watson_machine_learning.foundation_models.Model.get_details). 
Python code from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes from ibm_watson_machine_learning.foundation_models import Model import json my_credentials = { "url" : "https://us-south.ml.cloud.ibm.com", "apikey" : {my-IBM-Cloud-API-key} } model_id = ModelTypes.MPT_7B_INSTRUCT2 gen_parms = None project_id = {my-project-ID} space_id = None verify = False model = Model( model_id, my_credentials, gen_parms, project_id, space_id, verify ) model_details = model.get_details() print( json.dumps( model_details, indent=2 ) ) Note:Replace {my-IBM-Cloud-API-key} and {my-project-ID} with your API key and project ID. Sample output { "model_id": "ibm/mpt-7b-instruct2", "label": "mpt-7b-instruct2", "provider": "IBM", "source": "Hugging Face", "short_description": "MPT-7B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code. This model was trained by IBM.", ... } Example: Prompt a foundation model with default parameters Prompt a foundation model to generate a response. Python code from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes from ibm_watson_machine_learning.foundation_models import Model import json my_credentials = { "url" : "https://us-south.ml.cloud.ibm.com", "apikey" : {my-IBM-Cloud-API-key} } model_id = ModelTypes.FLAN_T5_XXL gen_parms = None project_id = {my-project-ID} space_id = None verify = False model = Model( model_id, my_credentials, gen_parms, project_id, space_id, verify ) prompt_txt = "In today's sales meeting, we " gen_parms_override = None generated_response = model.generate( prompt_txt, gen_parms_override ) print( json.dumps( generated_response, indent=2 ) ) Note:Replace {my-IBM-Cloud-API-key} and {my-project-ID} with your API key and project ID. Sample output { "model_id": "google/flan-t5-xxl", "created_at": "2023-07-27T03:40:17.575Z", "results": [ { "generated_text": "will discuss the new product line.", "generated_token_count": 8, "input_token_count": 10, "stop_reason": "EOS_TOKEN" } ], ... } Learn more * [Credentials for prompting foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-credentials.html) Parent topic:[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html)
# Foundation models Python library # You can prompt foundation models in IBM watsonx\.ai programmatically by using the Python library\. The Watson Machine Learning Python library is a publicly available library that you can use to work with Watson Machine Learning services\. The Watson Machine Learning service hosts the watsonx\.ai foundation models\. ## Using the Python library ## After you create a prompt in the Prompt Lab, you can save the prompt as a notebook, and then edit the notebook\. Using the generated notebook as a starting point is useful because it handles the initial setup steps, such as getting credentials and the project ID information for you\. If you want to work with the models directly from a notebook, you can do so by using the Watson Machine Learning Python library\. The `ibm-watson-machine-learning` Python library is publicly available on PyPI from the url: [https://pypi\.org/project/ibm\-watson\-machine\-learning/](https://pypi.org/project/ibm-watson-machine-learning/)\. However, you can install it in your development environment by using the following command: pip install ibm-watson-machine-learning If you installed the library before, include the `-U` parameter to ensure that you have the latest version\. pip install -U ibm-watson-machine-learning For more information about the available methods for working with foundation models, see [Foundation models Python library](https://ibm.github.io/watson-machine-learning-sdk/foundation_models.html)\. You need to take some steps before you can use the Python library: <!-- <ul> --> * [Setting up credentials](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-credentials.html) * [Looking up your project ID](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html?context=cdpaas&locale=en#project-id) <!-- </ul> --> ### Looking up your project ID ### To prompt foundation models in IBM watsonx\.ai programmatically, you need to pass the identifier (ID) of a project that has an instance of IBM Watson Machine Learning associated with it\. To get the ID of a project, complete the following steps: <!-- <ol> --> 1. Navigate to the project in the watsonx web console, open the project, and then click the **Manage** tab\. 2. Copy the project ID from the *Details* section of the *General* page\. <!-- </ol> --> ## Examples ## The following examples show you how to use the library to perform a few basic tasks in a notebook\. ### Example 1: List available foundation models ### You can view [`ModelTypes`](https://ibm.github.io/watson-machine-learning-sdk/foundation_models.html#ibm_watson_machine_learning.foundation_models.utils.enums.ModelTypes) to see available foundation models\. **Python code** from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes import json print( json.dumps( ModelTypes._member_names_, indent=2 ) ) **Sample output** [ "FLAN_T5_XXL", "FLAN_UL2", "MT0_XXL", ... ] ### Example: View details of a foundation model ### You can view details, such as a short description and foundation model limits, by using [`get_details()`](https://ibm.github.io/watson-machine-learning-sdk/foundation_models.html#ibm_watson_machine_learning.foundation_models.Model.get_details)\. 
**Python code** from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes from ibm_watson_machine_learning.foundation_models import Model import json my_credentials = { "url" : "https://us-south.ml.cloud.ibm.com", "apikey" : {my-IBM-Cloud-API-key} } model_id = ModelTypes.MPT_7B_INSTRUCT2 gen_parms = None project_id = {my-project-ID} space_id = None verify = False model = Model( model_id, my_credentials, gen_parms, project_id, space_id, verify ) model_details = model.get_details() print( json.dumps( model_details, indent=2 ) ) Note:Replace `{my-IBM-Cloud-API-key}` and `{my-project-ID}` with your API key and project ID\. **Sample output** { "model_id": "ibm/mpt-7b-instruct2", "label": "mpt-7b-instruct2", "provider": "IBM", "source": "Hugging Face", "short_description": "MPT-7B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code. This model was trained by IBM.", ... } ### Example: Prompt a foundation model with default parameters ### Prompt a foundation model to generate a response\. **Python code** from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes from ibm_watson_machine_learning.foundation_models import Model import json my_credentials = { "url" : "https://us-south.ml.cloud.ibm.com", "apikey" : {my-IBM-Cloud-API-key} } model_id = ModelTypes.FLAN_T5_XXL gen_parms = None project_id = {my-project-ID} space_id = None verify = False model = Model( model_id, my_credentials, gen_parms, project_id, space_id, verify ) prompt_txt = "In today's sales meeting, we " gen_parms_override = None generated_response = model.generate( prompt_txt, gen_parms_override ) print( json.dumps( generated_response, indent=2 ) ) Note:Replace `{my-IBM-Cloud-API-key}` and `{my-project-ID}` with your API key and project ID\. **Sample output** { "model_id": "google/flan-t5-xxl", "created_at": "2023-07-27T03:40:17.575Z", "results": [ { "generated_text": "will discuss the new product line.", "generated_token_count": 8, "input_token_count": 10, "stop_reason": "EOS_TOKEN" } ], ... } ## Learn more ## <!-- <ul> --> * [Credentials for prompting foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-credentials.html) <!-- </ul> --> **Parent topic:**[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html) <!-- </article "role="article" "> -->
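### Example: Prompt a foundation model with your own parameters ###

The previous examples pass `gen_parms = None`, so default decoding settings are used. The following sketch shows one way you might supply generation parameters instead. It is an illustrative sketch, not a copy of a documented sample: the parameter names are taken from the library's `GenTextParamsMetaNames` helper, and `{my-IBM-Cloud-API-key}` and `{my-project-ID}` are placeholders that you must replace. Check the [Foundation models Python library](https://ibm.github.io/watson-machine-learning-sdk/foundation_models.html) reference for the full list of supported parameters.

```python
from ibm_watson_machine_learning.metanames import GenTextParamsMetaNames as GenParams
from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes
from ibm_watson_machine_learning.foundation_models import Model
import json

my_credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",
    "apikey": "{my-IBM-Cloud-API-key}"   # placeholder: replace with your API key
}

# Generation parameters that apply to every prompt sent through this Model object.
# They can also be overridden per call through the second argument of generate().
gen_parms = {
    GenParams.DECODING_METHOD: "greedy",
    GenParams.MIN_NEW_TOKENS: 1,
    GenParams.MAX_NEW_TOKENS: 30
}

model = Model(
    ModelTypes.FLAN_T5_XXL,   # model_id
    my_credentials,
    gen_parms,
    "{my-project-ID}",        # placeholder: replace with your project ID
    None,                     # space_id
    False                     # verify
)

generated_response = model.generate("In today's sales meeting, we ")
print(json.dumps(generated_response, indent=2))
```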
752D982C2F694FFEE2A312CEA6ADF22C2384D4B2
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-rag.html?context=cdpaas&locale=en
Retrieval-augmented generation
Retrieval-augmented generation You can use foundation models in IBM watsonx.ai to generate factually accurate output that is grounded in information in a knowledge base by applying the retrieval-augmented generation pattern. This video provides a visual method to learn the concepts and tasks in this documentation. Video chapters [ 0:08 ] Scenario description [ 0:27 ] Overview of pattern [ 1:03 ] Knowledge base [ 1:22 ] Search component [ 1:41 ] Prompt augmented with context [ 2:13 ] Generating output [ 2:31 ] Full solution [ 2:55 ] Considerations for search [ 3:58 ] Considerations for prompt text [ 5:01 ] Considerations for explainability Providing context in your prompt improves accuracy Foundation models can generate output that is factually inaccurate for various reasons. One way to improve the accuracy of generated output is to provide the needed facts as context in your prompt text. Example The following prompt includes context to establish some facts: Aisha recently painted the kitchen yellow, which is her favorite color. Aisha's favorite color is Unless Aisha is a famous person whose favorite color was mentioned in many online articles that are included in common pretraining data sets, without the context at the beginning of the prompt, no foundation model could reliably generate the correct completion of the sentence at the end of the prompt. If you prompt a model with text that includes fact-filled context, then the output the model generates is more likely to be accurate. For more details, see [Generating factually accurate output](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-factual-accuracy.html). The retrieval-augmented generation pattern You can scale out the technique of including context in your prompts by using information in a knowledge base. The following diagram illustrates the retrieval-augmented generation pattern. Although the diagram shows a question-answering example, the same workflow supports other use cases. ![Diagram that shows adding search results to the input for retrieval-augmented generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fm-rag.png) The retrieval-augmented generation pattern involves the following steps: 1. Search in your knowledge base for content that is related to the user's input. 2. Pull the most relevant search results into your prompt as context and add an instruction, such as “Answer the following question by using only information from the following passages.” 3. Only if the foundation model that you're using is not instruction-tuned: Add a few examples that demonstrate the expected input and output format. 4. Send the combined prompt text to the model to generate output. The origin of retrieval-augmented generation The term retrieval-augmented generation (RAG) was introduced in this paper: [Retrieval-augmented generation for knowledge-intensive NLP tasks](https://arxiv.org/abs/2005.11401). > We build RAG models where the parametric memory is a pre-trained seq2seq transformer, and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. In that paper, the term "RAG models" refers to a specific implementation of a retriever (a specific query encoder and vector-based document search index) and a generator (a specific pre-trained, generative language model). However, the basic search-and-generate approach can be generalized to use different retriever components and foundation models. 
Knowledge base The knowledge base can be any collection of information-containing artifacts, such as: * Process information in internal company wiki pages * Files in GitHub (in any format: Markdown, plain text, JSON, code) * Messages in a collaboration tool * Topics in product documentation * Text passages in a database like Db2 * A collection of legal contracts in PDF files * Customer support tickets in a content management system Retriever The retriever can be any combination of search and content tools that reliably returns relevant content from the knowledge base: * Search tools like IBM Watson Discovery * Search and content APIs (GitHub has APIs like this, for example) * Vector databases (such as chromadb) Generator The generator component can use any model in watsonx.ai, whichever one suits your use case, prompt format, and content you are pulling in for context. Examples The following examples demonstrate how to apply the retrieval-augmented generation pattern. Retrieval-augmented generation examples Example Description Link Simple introduction Uses a small knowledge base and a simple search component to demonstrate the basic pattern. [Introduction to retrieval-augmented generation](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/fed7cf6b-1c48-4d71-8c04-0fce0e000d43) Introduction to RAG with Discovery Contains the steps and code to demonstrate the retrieval-augmented generation pattern in IBM watsonx.ai by using IBM Watson Discovery as the search component. [Simple introduction to retrieval-augmented generation with watsonx.ai and Discovery](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/ba4a9e35-2091-49d3-9364-a1284afab7ec) Real-world example The watsonx.ai documentation has a search-and-answer feature that can answer basic what-is questions by using the topics in the documentation as a knowledge base. [Answering watsonx.ai questions using a foundation model](https://ibm.biz/watsonx-llm-search) Example with LangChain Contains the steps and code to demonstrate support of retrieval-augmented generation with LangChain in watsonx.ai. It introduces commands for data retrieval, knowledge base building and querying, and model testing. [Use watsonx and LangChain to answer questions by using RAG](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/d3a5f957-a93b-46cd-82c1-c8d37d4f62c6) Example with LangChain and an Elasticsearch vector database Demonstrates how to use LangChain to apply an embedding model to documents in an Elasticsearch vector database. The notebook then indexes and uses the data store to generate answers to incoming questions. [Use watsonx, Elasticsearch, and LangChain to answer questions (RAG)](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/ebeb9fc0-9844-4838-aff8-1fa1997d0c13?context=wx&audience=wdp) Example with the Elasticsearch Python SDK Demonstrates how to use the Elasticsearch Python SDK to apply an embedding model to documents in an Elasticsearch vector database. The notebook then indexes and uses the data store to generate answers to incoming questions. [Use watsonx, and Elasticsearch Python SDK to answer questions (RAG)](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/bdbc8ad4-9c1f-460f-99ee-5c3a1f374fa7?context=wx&audience=wdp) Example with LangChain and a SingleStore database Shows you how to apply retrieval-augmented generation to large language models in watsonx by using the SingleStore database. 
[RAG with SingleStore and watsonx](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/daf645b2-281d-4969-9292-5012f3b18215) Learn more Try these tutorials: * [Prompt a foundation model by using Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html) * [Prompt a foundation model with the retrieval-augmented generation pattern](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html) Parent topic:[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html)
# Retrieval\-augmented generation # You can use foundation models in IBM watsonx\.ai to generate factually accurate output that is grounded in information in a knowledge base by applying the retrieval\-augmented generation pattern\. This video provides a visual method to learn the concepts and tasks in this documentation\. **Video chapters** \[ 0:08 \] Scenario description \[ 0:27 \] Overview of pattern \[ 1:03 \] Knowledge base \[ 1:22 \] Search component \[ 1:41 \] Prompt augmented with context \[ 2:13 \] Generating output \[ 2:31 \] Full solution \[ 2:55 \] Considerations for search \[ 3:58 \] Considerations for prompt text \[ 5:01 \] Considerations for explainability ## Providing context in your prompt improves accuracy ## Foundation models can generate output that is factually inaccurate for various reasons\. One way to improve the accuracy of generated output is to provide the needed facts as context in your prompt text\. ### Example ### The following prompt includes context to establish some facts: Aisha recently painted the kitchen yellow, which is her favorite color. Aisha's favorite color is Unless Aisha is a famous person whose favorite color was mentioned in many online articles that are included in common pretraining data sets, without the context at the beginning of the prompt, no foundation model could reliably generate the correct completion of the sentence at the end of the prompt\. If you prompt a model with text that includes fact\-filled context, then the output the model generates is more likely to be accurate\. For more details, see [Generating factually accurate output](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-factual-accuracy.html)\. ## The retrieval\-augmented generation pattern ## You can scale out the technique of including context in your prompts by using information in a knowledge base\. The following diagram illustrates the retrieval\-augmented generation pattern\. Although the diagram shows a question\-answering example, the same workflow supports other use cases\. ![Diagram that shows adding search results to the input for retrieval\-augmented generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fm-rag.png) The retrieval\-augmented generation pattern involves the following steps: <!-- <ol> --> 1. Search in your knowledge base for content that is related to the user's input\. 2. Pull the most relevant search results into your prompt as context and add an instruction, such as “Answer the following question by using only information from the following passages\.” 3. *Only if the foundation model that you're using is not instruction\-tuned*: Add a few examples that demonstrate the expected input and output format\. 4. Send the combined prompt text to the model to generate output\. <!-- </ol> --> ### The origin of retrieval\-augmented generation ### The term *retrieval\-augmented generation* (RAG) was introduced in this paper: [Retrieval\-augmented generation for knowledge\-intensive NLP tasks](https://arxiv.org/abs/2005.11401)\. > We build RAG models where the parametric memory is a pre\-trained seq2seq transformer, and the non\-parametric memory is a dense vector index of Wikipedia, accessed with a pre\-trained neural retriever\. In that paper, the term "RAG models" refers to a specific implementation of a *retriever* (a specific query encoder and vector\-based document search index) and a *generator* (a specific pre\-trained, generative language model)\. 
However, the basic search\-and\-generate approach can be generalized to use different retriever components and foundation models\. ### Knowledge base ### The knowledge base can be any collection of information\-containing artifacts, such as: <!-- <ul> --> * Process information in internal company wiki pages * Files in GitHub (in any format: Markdown, plain text, JSON, code) * Messages in a collaboration tool * Topics in product documentation * Text passages in a database like Db2 * A collection of legal contracts in PDF files * Customer support tickets in a content management system <!-- </ul> --> ### Retriever ### The retriever can be any combination of search and content tools that reliably returns relevant content from the knowledge base: <!-- <ul> --> * Search tools like IBM Watson Discovery * Search and content APIs (GitHub has APIs like this, for example) * Vector databases (such as chromadb) <!-- </ul> --> ### Generator ### The generator component can use any model in watsonx\.ai, whichever one suits your use case, prompt format, and content you are pulling in for context\. ## Examples ## The following examples demonstrate how to apply the retrieval\-augmented generation pattern\. <!-- <table> --> Retrieval\-augmented generation examples
| Example | Description | Link |
| --- | --- | --- |
| Simple introduction | Uses a small knowledge base and a simple search component to demonstrate the basic pattern\. | [Introduction to retrieval\-augmented generation](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/fed7cf6b-1c48-4d71-8c04-0fce0e000d43) |
| Introduction to RAG with Discovery | Contains the steps and code to demonstrate the retrieval\-augmented generation pattern in IBM watsonx\.ai by using IBM Watson Discovery as the search component\. | [Simple introduction to retrieval\-augmented generation with watsonx\.ai and Discovery](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/ba4a9e35-2091-49d3-9364-a1284afab7ec) |
| Real\-world example | The watsonx\.ai documentation has a search\-and\-answer feature that can answer basic what\-is questions by using the topics in the documentation as a knowledge base\. | [Answering watsonx\.ai questions using a foundation model](https://ibm.biz/watsonx-llm-search) |
| Example with LangChain | Contains the steps and code to demonstrate support of retrieval\-augmented generation with LangChain in watsonx\.ai\. It introduces commands for data retrieval, knowledge base building and querying, and model testing\. | [Use watsonx and LangChain to answer questions by using RAG](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/d3a5f957-a93b-46cd-82c1-c8d37d4f62c6) |
| Example with LangChain and an Elasticsearch vector database | Demonstrates how to use LangChain to apply an embedding model to documents in an Elasticsearch vector database\. The notebook then indexes and uses the data store to generate answers to incoming questions\. | [Use watsonx, Elasticsearch, and LangChain to answer questions (RAG)](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/ebeb9fc0-9844-4838-aff8-1fa1997d0c13?context=wx&audience=wdp) |
| Example with the Elasticsearch Python SDK | Demonstrates how to use the Elasticsearch Python SDK to apply an embedding model to documents in an Elasticsearch vector database\. The notebook then indexes and uses the data store to generate answers to incoming questions\. | [Use watsonx, and Elasticsearch Python SDK to answer questions (RAG)](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/bdbc8ad4-9c1f-460f-99ee-5c3a1f374fa7?context=wx&audience=wdp) |
| Example with LangChain and a SingleStore database | Shows you how to apply retrieval\-augmented generation to large language models in watsonx by using the SingleStore database\. | [RAG with SingleStore and watsonx](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/daf645b2-281d-4969-9292-5012f3b18215) |
<!-- </table ""> --> ## Learn more ## Try these tutorials: <!-- <ul> --> * [Prompt a foundation model by using Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html) * [Prompt a foundation model with the retrieval\-augmented generation pattern](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html) <!-- </ul> --> **Parent topic:**[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html) <!-- </article "role="article" "> -->
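### Example: A minimal retrieval\-augmented generation sketch ###

The following sketch only illustrates the numbered steps of the pattern in a few lines of Python. It uses a toy in\-memory knowledge base and a keyword retriever as stand\-ins for a real search component, and the `Model` class usage follows the foundation models Python library; `{my-IBM-Cloud-API-key}` and `{my-project-ID}` are placeholders that you must replace.

```python
from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes
from ibm_watson_machine_learning.foundation_models import Model

# Toy knowledge base; in practice this is a search tool, content API, or vector database.
knowledge_base = [
    "Prompt Lab is a tool for experimenting with prompting foundation models in watsonx.ai.",
    "You can save work from Prompt Lab as a prompt template, prompt session, or notebook asset.",
    "Tuning Studio is used to prompt-tune foundation models with labeled training data.",
]

def retrieve(question, documents, top_k=2):
    """Step 1: search the knowledge base. Here, rank documents by simple word overlap."""
    question_words = set(question.lower().split())
    ranked = sorted(
        documents,
        key=lambda doc: len(question_words & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

question = "What can I save from Prompt Lab?"
passages = retrieve(question, knowledge_base)

# Step 2: pull the most relevant results into the prompt as context, with an instruction.
# Step 3 (few-shot examples) is skipped here because flan-ul2 is instruction-tuned.
prompt = (
    "Answer the following question by using only information from the following passages.\n\n"
    + "\n\n".join(passages)
    + f"\n\nQuestion: {question}\nAnswer:"
)

# Step 4: send the combined prompt text to a foundation model to generate output.
my_credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",
    "apikey": "{my-IBM-Cloud-API-key}",   # placeholder
}
model = Model(ModelTypes.FLAN_UL2, my_credentials, None, "{my-project-ID}", None, False)
response = model.generate(prompt)
print(response["results"][0]["generated_text"])
```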
38FB0908B90954D96CEFF54BA975DE832286A0A7
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html?context=cdpaas&locale=en
Security and privacy for foundation models
Security and privacy for foundation models Your work with foundation models is secure and private, in the same way that all your work on watsonx is secure and private. Foundation models that you interact with through watsonx are hosted in IBM Cloud. Your data is not sent to any third-party or open source platforms. The foundation model prompts that you create and engineer in the Prompt Lab or send by using the API are accessible only by you. Your prompts are used only by you and are submitted only to models you choose. Your prompt text is not accessible or used by IBM or any other person or organization. You control whether prompts, model choices, and prompt engineering parameter settings are saved. When saved, your data is stored in a dedicated IBM Cloud Object Storage bucket that is associated with your project. Data that is stored in your project storage bucket is encrypted at rest and in motion. You can delete your stored data at any time. Privacy of text in Prompt Lab during a session Text that you submit by clicking Generate from the prompt editor in Prompt Lab is reformatted as tokens, and then submitted to the foundation model you choose. The submitted message is encrypted in transit. Your prompt text is not saved unless you choose to save your work. Unsaved prompt text is kept in the web page until the page is refreshed, at which time the prompt text is deleted. Privacy and security of saved work How saved work is managed differs based on the asset type that you choose to save: * Prompt template asset: The current prompt text, model, prompt engineering parameters, and any prompt variables are saved as a prompt template asset and stored in the IBM Cloud Object Storage bucket that is associated with your project. Prompt template assets are retained until they are deleted or changed by you. When autosave is on, if you open a saved prompt and change the text, the text in the saved prompt template asset is replaced. * Prompt session asset: A prompt session asset includes the prompt input text, model, prompt engineering parameters, and model output. After you create the prompt session asset, prompt information for up to 500 submitted prompts is stored in the project storage bucket where it is retained for 30 days. * Notebook asset: Your prompt, model, prompt engineering parameters, and any prompt variables are formatted as Python code and stored as a notebook asset in the project storage bucket. Only people with Admin or Editor role access to the project or the project storage bucket can view saved assets. You control who can access your project and its associated Cloud Object Storage bucket. * For more information about asset security, see [Data security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html). * For more information about managing project access, see [Project collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html) Logging and text in the Prompt Lab Nothing that you add to the prompt editor or submit to a model from the Prompt Lab or by using the API is logged by IBM. Messages that are generated by foundation models and returned to the Prompt Lab also are not logged. Ownership of your content and foundation model output Content that you upload into watsonx is yours. IBM does not use the content that you upload to watsonx or the output generated by a foundation model to further train or improve any IBM developed models. 
IBM does not claim to have any ownership rights to any foundation model outputs. You remain solely responsible for your content and the output of any foundation model. Learn more * [Watsonx terms](https://www.ibm.com/support/customer/csol/terms/?id=i126-9640&lc=en#detail-document) * [IBM Watson Machine Learning terms](http://www.ibm.com/support/customer/csol/terms/?id=i126-6883) * [IBM Watson Studio terms](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747) Parent topic:[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html)
# Security and privacy for foundation models # Your work with foundation models is secure and private, in the same way that all your work on watsonx is secure and private\. Foundation models that you interact with through watsonx are hosted in IBM Cloud\. Your data is not sent to any third\-party or open source platforms\. The foundation model prompts that you create and engineer in the Prompt Lab or send by using the API are accessible only by you\. Your prompts are used only by you and are submitted only to models you choose\. Your prompt text is not accessible or used by IBM or any other person or organization\. You control whether prompts, model choices, and prompt engineering parameter settings are saved\. When saved, your data is stored in a dedicated IBM Cloud Object Storage bucket that is associated with your project\. Data that is stored in your project storage bucket is encrypted at rest and in motion\. You can delete your stored data at any time\. ## Privacy of text in Prompt Lab during a session ## Text that you submit by clicking **Generate** from the prompt editor in Prompt Lab is reformatted as tokens, and then submitted to the foundation model you choose\. The submitted message is encrypted in transit\. Your prompt text is not saved unless you choose to save your work\. Unsaved prompt text is kept in the web page until the page is refreshed, at which time the prompt text is deleted\. ## Privacy and security of saved work ## How saved work is managed differs based on the asset type that you choose to save: <!-- <ul> --> * **Prompt template asset**: The current prompt text, model, prompt engineering parameters, and any prompt variables are saved as a prompt template asset and stored in the IBM Cloud Object Storage bucket that is associated with your project\. Prompt template assets are retained until they are deleted or changed by you\. When autosave is on, if you open a saved prompt and change the text, the text in the saved prompt template asset is replaced\. * **Prompt session asset**: A prompt session asset includes the prompt input text, model, prompt engineering parameters, and model output\. After you create the prompt session asset, prompt information for up to 500 submitted prompts is stored in the project storage bucket where it is retained for 30 days\. * **Notebook asset**: Your prompt, model, prompt engineering parameters, and any prompt variables are formatted as Python code and stored as a notebook asset in the project storage bucket\. <!-- </ul> --> Only people with **Admin** or **Editor** role access to the project or the project storage bucket can view saved assets\. You control who can access your project and its associated Cloud Object Storage bucket\. <!-- <ul> --> * For more information about asset security, see [Data security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html)\. * For more information about managing project access, see [Project collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html) <!-- </ul> --> ## Logging and text in the Prompt Lab ## Nothing that you add to the prompt editor or submit to a model from the Prompt Lab or by using the API is logged by IBM\. Messages that are generated by foundation models and returned to the Prompt Lab also are not logged\. ## Ownership of your content and foundation model output ## Content that you upload into watsonx is yours\. 
IBM does not use the content that you upload to watsonx or the output generated by a foundation model to further train or improve any IBM developed models\. IBM does not claim to have any ownership rights to any foundation model outputs\. You remain solely responsible for your content and the output of any foundation model\. ## Learn more ## <!-- <ul> --> * [Watsonx terms](https://www.ibm.com/support/customer/csol/terms/?id=i126-9640&lc=en#detail-document) * [IBM Watson Machine Learning terms](http://www.ibm.com/support/customer/csol/terms/?id=i126-6883) * [IBM Watson Studio terms](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747) <!-- </ul> --> **Parent topic:**[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html) <!-- </article "role="article" "> -->
B193A2795BDEF17A5D204CDD18188A767E2FE7B7
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html?context=cdpaas&locale=en
Tokens and tokenization
Tokens and tokenization A token is a collection of characters that has semantic meaning for a model. Tokenization is the process of converting the words in your prompt into tokens. You can monitor foundation model token usage in a project on the Environments page on the Resource usage tab. Converting words to tokens and back again Prompt text is converted to tokens before being processed by foundation models. The correlation between words and tokens is complex: * Sometimes a single word is broken into multiple tokens * The same word might be broken into a different number of tokens, depending on context (such as: where the word appears, or surrounding words) * Spaces, newline characters, and punctuation are sometimes included in tokens and sometimes not * The way words are broken into tokens varies from language to language * The way words are broken into tokens varies from model to model For a rough idea, a sentence that has 10 words could be 15 to 20 tokens. The raw output from a model is also tokens. In the Prompt Lab in IBM watsonx.ai, the output tokens from the model are converted to words to be displayed in the prompt editor. Example The following image shows how this sample input might be tokenized: > Tomatoes are one of the most popular plants for vegetable gardens. Tip for success: If you select varieties that are resistant to disease and pests, growing tomatoes can be quite easy. For experienced gardeners looking for a challenge, there are endless heirloom and specialty varieties to cultivate. Tomato plants come in a range of sizes. ![Visualization of tokenization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fm-tokenization.png) Notice a few interesting points: * Some words are broken into multiple tokens and some are not * The word "Tomatoes" is broken into multiple tokens at the beginning, but later "tomatoes" is all one token * Spaces are sometimes included at the beginning of a word-token and sometimes spaces are a token all by themselves * Punctuation marks are tokens Token limits Every model has an upper limit to the number of tokens in the input prompt plus the number of tokens in the generated output from the model (sometimes called context window length, context window, context length, or maximum sequence length.) In the Prompt Lab, an informational message shows how many tokens are used in a given prompt submission and the resulting generated output. In the Prompt Lab, you use the Max tokens parameter to specify an upper limit on the number of output tokens for the model to generate. The maximum number of tokens that are allowed in the output differs by model. For more information, see the Maximum tokens information in [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html). Parent topic:[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html)
# Tokens and tokenization # A *token* is a collection of characters that has semantic meaning for a model\. Tokenization is the process of converting the words in your prompt into tokens\. You can monitor foundation model token usage in a project on the **Environments** page on the **Resource usage** tab\. ## Converting words to tokens and back again ## Prompt text is converted to tokens before being processed by foundation models\. The correlation between words and tokens is complex: <!-- <ul> --> * Sometimes a single word is broken into multiple tokens * The same word might be broken into a different number of tokens, depending on context (such as: where the word appears, or surrounding words) * Spaces, newline characters, and punctuation are sometimes included in tokens and sometimes not * The way words are broken into tokens varies from language to language * The way words are broken into tokens varies from model to model <!-- </ul> --> For a rough idea, a sentence that has 10 words could be 15 to 20 tokens\. The raw output from a model is also tokens\. In the Prompt Lab in IBM watsonx\.ai, the output tokens from the model are converted to words to be displayed in the prompt editor\. ### Example ### The following image shows how this sample input might be tokenized: > Tomatoes are one of the most popular plants for vegetable gardens\. Tip for success: If you select varieties that are resistant to disease and pests, growing tomatoes can be quite easy\. For experienced gardeners looking for a challenge, there are endless heirloom and specialty varieties to cultivate\. Tomato plants come in a range of sizes\. ![Visualization of tokenization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fm-tokenization.png) Notice a few interesting points: <!-- <ul> --> * Some words are broken into multiple tokens and some are not * The word "Tomatoes" is broken into multiple tokens at the beginning, but later "tomatoes" is all one token * Spaces are sometimes included at the beginning of a word\-token and sometimes spaces are a token all by themselves * Punctuation marks are tokens <!-- </ul> --> ## Token limits ## Every model has an upper limit to the number of tokens in the input prompt plus the number of tokens in the generated output from the model (sometimes called *context window length*, *context window*, *context length*, or *maximum sequence length*\.) In the Prompt Lab, an informational message shows how many tokens are used in a given prompt submission and the resulting generated output\. In the Prompt Lab, you use the *Max tokens* parameter to specify an upper limit on the number of output tokens for the model to generate\. The maximum number of tokens that are allowed in the output differs by model\. For more information, see the *Maximum tokens* information in [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html)\. **Parent topic:**[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html) <!-- </article "role="article" "> -->
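### Example: Checking token counts programmatically ###

Because tokenization varies by model, the most reliable way to see how many tokens a prompt consumes is to check the counts that the service returns. The following sketch reads the `input_token_count` and `generated_token_count` fields that appear in the generation response of the foundation models Python library; the credential and project placeholders are assumptions that you must replace.

```python
from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes
from ibm_watson_machine_learning.foundation_models import Model

my_credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",
    "apikey": "{my-IBM-Cloud-API-key}",   # placeholder
}
model = Model(ModelTypes.FLAN_T5_XXL, my_credentials, None, "{my-project-ID}", None, False)

prompt = "Tomatoes are one of the most popular plants for vegetable gardens."
result = model.generate(prompt)["results"][0]

# The same sentence can produce a different number of tokens with a different model.
print("Input tokens:    ", result["input_token_count"])
print("Generated tokens:", result["generated_token_count"])
```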
96597F608C26E68BFC4BDCA45061400D63793523
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-data.html?context=cdpaas&locale=en
Data formats for tuning foundation models
Data formats for tuning foundation models Prepare a set of prompt examples to use to tune the model. The examples must contain the type of input that the model will need to process at run time and the appropriate output for the model to generate in response. You can add one file as training data. The maximum file size that is allowed is 200 MB. Prompt input-and-output example pairs are sometimes also referred to as samples or records. Follow these guidelines when you create your training data: * Add 100 to 1,000 labeled prompt examples to a file. Between 50 and 10,000 examples are allowed. * Use one of the following formats: * JavaScript Object Notation (JSON) * JSON Lines (JSONL) format * Each example must include one input and output pair. * The language of the training data must be English. * If the input or output text includes quotation marks, escape each quotation mark with a backslash (\). For example, He said, \"Yes.\". * To represent a carriage return or line break, you can use a backslash followed by n (\n) to represent the new line. For example, ...end of paragraph.\nStart of new paragraph. You can control the number of tokens from the input and output that are used during training. If an input or output example from the training data is longer than the specified limit, it will be truncated. Only the allowed maximum number of tokens will be used by the experiment. For more information, see [Controlling the number of tokens used](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.html#tuning-tokens). How tokens are counted differs by model, which makes the number of tokens difficult to estimate. For language-based foundation models, you can think of 256 tokens as about 130—170 words and 128 tokens as about 65—85 words. To learn more about tokens, see [Tokens and tokenization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html). If you are using the model to classify data, follow these extra guidelines: * Try to limit the number of class labels to 10 or fewer. * Include an equal number of examples of each class type. You can use the Prompt Lab to craft examples for the training data. For more information, see [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html). JSON example The following example shows an excerpt from a training data file with labeled prompts for a classification task in JSON format. { [ { "input":"Message: When I try to log in, I get an error.", "output":"Class name: Problem" } { "input":"Message: Where can I find the plan prices?", "output":"Class name: Question" } { "input":"Message: What is the difference between trial and paygo?", "output":"Class name: Question" } { "input":"Message: The registration page crashed, and now I can't create a new account.", "output":"Class name: Problem" } { "input":"Message: What regions are supported?", "output":"Class name: Question" } { "input":"Message: I can't remember my password.", "output":"Class name: Problem" } { "input":"Message: I'm having trouble registering for a new account.", "output":"Classname: Problem" } { "input":"Message: A teammate shared a service instance with me, but I can't access it. 
What's wrong?", "output":"Class name: Problem" } { "input":"Message: What extra privileges does an administrator have?", "output":"Class name: Question" } { "input":"Message: Can I create a service instance for data in a language other than English?", "output":"Class name: Question" } ] } JSONL example The following example shows an excerpt from a training data file with labeled prompts for a classification task in JSONL format. {"input":"Message: When I try to log in, I get an error.","output":"Class name: Problem"} {"input":"Message: Where can I find the plan prices?","output":"Class name: Question"} {"input":"Message: What is the difference between trial and paygo?","output":"Class name: Question"} {"input":"Message: The registration page crashed, and now I can't create a new account.","output":"Class name: Problem"} {"input":"Message: What regions are supported?","output":"Class name: Question"} {"input":"Message: I can't remember my password.","output":"Class name: Problem"} {"input":"Message: I'm having trouble registering for a new account.","output":"Classname: Problem"} {"input":"Message: A teammate shared a service instance with me, but I can't access it. What's wrong?","output":"Class name: Problem"} {"input":"Message: What extra privileges does an administrator have?","output":"Class name: Question"} {"input":"Message: Can I create a service instance for data in a language other than English?","output":"Class name: Question"} Parent topic:[Tuning a model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.html)
# Data formats for tuning foundation models # Prepare a set of prompt examples to use to tune the model\. The examples must contain the type of input that the model will need to process at run time and the appropriate output for the model to generate in response\. You can add one file as training data\. The maximum file size that is allowed is 200 MB\. Prompt input\-and\-output example pairs are sometimes also referred to as *samples* or *records*\. Follow these guidelines when you create your training data: <!-- <ul> --> * Add 100 to 1,000 labeled prompt examples to a file\. Between 50 to 10,000 examples are allowed\. * Use one of the following formats: <!-- <ul> --> * JavaScript Object Notation (JSON) * JSON Lines (JSONL) format <!-- </ul> --> * Each example must include one `input` and `output` pair\. * The language of the training data must be English\. * If the input or output text includes quotation marks, escape each quotation mark with a backslash(`\`)\. For example, `He said, \"Yes.\"`\. * To represent a carriage return or line break, you can use a backslash followed by `n` (`\n`) to represent the new line\. For example, `...end of paragraph.\nStart of new paragraph`\. <!-- </ul> --> You can control the number of tokens from the input and output that are used during training\. If an input or output example from the training data is longer than the specified limit, it will be truncated\. Only the allowed maximum number of tokens will be used by the experiment\. For more information, see [Controlling the number of tokens used](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.html#tuning-tokens)\. How tokens are counted differs by model, which makes the number of tokens difficult to estimate\. For language\-based foundation models, you can think of 256 tokens as about 130—170 words and 128 tokens as about 65—85 words\. To learn more about tokens, see [Tokens and tokenization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html)\. If you are using the model to classify data, follow these extra guidelines: <!-- <ul> --> * Try to limit the number of class labels to 10 or fewer\. * Include an equal number of examples of each class type\. <!-- </ul> --> You can use the Prompt Lab to craft examples for the training data\. For more information, see [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html)\. ## JSON example ## The following example shows an excerpt from a training data file with labeled prompts for a classification task in JSON format\. { [ { "input":"Message: When I try to log in, I get an error.", "output":"Class name: Problem" } { "input":"Message: Where can I find the plan prices?", "output":"Class name: Question" } { "input":"Message: What is the difference between trial and paygo?", "output":"Class name: Question" } { "input":"Message: The registration page crashed, and now I can't create a new account.", "output":"Class name: Problem" } { "input":"Message: What regions are supported?", "output":"Class name: Question" } { "input":"Message: I can't remember my password.", "output":"Class name: Problem" } { "input":"Message: I'm having trouble registering for a new account.", "output":"Classname: Problem" } { "input":"Message: A teammate shared a service instance with me, but I can't access it. 
What's wrong?", "output":"Class name: Problem" } { "input":"Message: What extra privileges does an administrator have?", "output":"Class name: Question" } { "input":"Message: Can I create a service instance for data in a language other than English?", "output":"Class name: Question" } ] } ## JSONL example ## The following example shows an excerpt from a training data file with labeled prompts for a classification task in JSONL format\. {"input":"Message: When I try to log in, I get an error.","output":"Class name: Problem"} {"input":"Message: Where can I find the plan prices?","output":"Class name: Question"} {"input":"Message: What is the difference between trial and paygo?","output":"Class name: Question"} {"input":"Message: The registration page crashed, and now I can't create a new account.","output":"Class name: Problem"} {"input":"Message: What regions are supported?","output":"Class name: Question"} {"input":"Message: I can't remember my password.","output":"Class name: Problem"} {"input":"Message: I'm having trouble registering for a new account.","output":"Classname: Problem"} {"input":"Message: A teammate shared a service instance with me, but I can't access it. What's wrong?","output":"Class name: Problem"} {"input":"Message: What extra privileges does an administrator have?","output":"Class name: Question"} {"input":"Message: Can I create a service instance for data in a language other than English?","output":"Class name: Question"} **Parent topic:**[Tuning a model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.html) <!-- </article "role="article" "> -->
FC8DBF139A485E98914CBB73B8BA684B283AE983
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-deploy.html?context=cdpaas&locale=en
Deploying a tuned foundation model
Deploying a tuned foundation model Deploy a tuned model so you can add it to a business workflow and start to use foundation models in a meaningful way. Before you begin The tuning experiment that you used to tune the foundation model must be finished. For more information, see [Tuning a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.html). Deploy a tuned model To deploy a tuned model, complete the following steps: 1. From the navigation menu, expand Projects, and then click All projects. 2. Click to open your project. 3. From the Assets tab, click the Experiments asset type. 4. Click to open the tuning experiment for the model you want to deploy. 5. From the Tuned models list, find the completed tuning experiment, and then click New deployment. 6. Name the tuned model. The name of the tuning experiment is used as the tuned model name if you don't change it. The name has a number after it in parentheses, which counts the deployments. The number starts at one and is incremented by one each time you deploy this tuning experiment. 7. Optional: Add a description and tags. 8. In the Target deployment space field, choose a deployment space. The deployment space must be associated with a machine learning instance that is in the same account as the project where the tuned model was created. If you don't have a deployment space, choose Create a new deployment space, and then follow the steps in [Creating deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-create.html). For more information, see [What is a deployment space?](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-deploy.html?context=cdpaas&locale=en#deployment-space) 9. In the Deployment serving name field, add a label for the deployment. The serving name is used in the URL for the API endpoint that identifies your deployment. Adding a name is helpful because the human-readable name that you add replaces a long, system-generated ID that is assigned otherwise. The serving name also abstracts the deployment from its service instance details. Applications refer to this name, which allows the underlying service instance to be changed without impacting users. The name can have up to 36 characters. The supported characters are [a-z,0-9,_]. The name must be unique across the IBM Cloud region. You might be prompted to change the serving name if the name you choose is already in use. 10. Tip: Select View deployment in deployment space after creating. Otherwise, you need to take more steps to find your deployed model. 11. Click Deploy. After the tuned model is promoted to the deployment space and deployed, a copy of the tuned model is stored in your project as a model asset. What is a deployment space? When you create a new deployment, a tuned model is promoted to a deployment space, and then deployed. A deployment space is separate from the project where you create the asset. A deployment space is associated with the following services that it uses to deploy assets: * Watson Machine Learning: A product with tools and services you can use to build, train, and deploy machine learning models. This service hosts your tuned model. * IBM Cloud Object Storage: A secure platform for storing structured and unstructured data. Your deployed model asset is stored in a Cloud Object Storage bucket that is associated with your project. 
For more information, see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html). Testing the deployed model The true test of your tuned model is how it responds to input that follows tuned-for patterns. You can test the tuned model from one of the following pages: * Prompt Lab: A tool with an intuitive user interface for prompting foundation models. You can customize the prompt parameters for each input. You can also save the prompt as a notebook so you can interact with it programmatically. * Deployment space: Useful when you want to test your model programmatically. From the API Reference tab, you can find information about the available endpoints and code examples. You can also submit input as text and choose whether to return the output all at once or as a stream while the output is generated. However, you cannot change the prompt parameters for the input text. To test your tuned model, complete the following steps: 1. From the navigation menu, select Deployments. 2. Click the name of the deployment space where you deployed the tuned model. 3. Click the name of your deployed model. 4. Follow the appropriate steps based on where you want to test the tuned model: * From Prompt Lab: 1. Click Open in Prompt Lab, and then choose the project where you want to work with the model. Prompt Lab opens and the tuned model that you deployed is selected from the Model field. 2. In the Try section, add a prompt to the Input field that follows the prompt pattern that your tuned model is trained to recognize, and then click Generate. For more information about how to use the prompt editor, see [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html). * From the deployment space: 1. Click the Test tab. 2. In the Input data field, add a prompt that follows the prompt pattern that your tuned model is trained to recognize, and then click Generate. You can click View parameter settings to see the prompt parameters that are applied to the model by default. To change the prompt parameters, you must go to the Prompt Lab. Learn more * [Tuning a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.html) * [Security and privacy for foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html) Parent topic:[Deploying foundation model assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-found-assets.html)
# Deploying a tuned foundation model # Deploy a tuned model so you can add it to a business workflow and start to use foundation models in a meaningful way\. ## Before you begin ## The tuning experiment that you used to tune the foundation model must be finished\. For more information, see [Tuning a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.html)\. ## Deploy a tuned model ## To deploy a tuned model, complete the following steps: <!-- <ol> --> 1. From the navigation menu, expand **Projects**, and then click **All projects**\. 2. Click to open your project\. 3. From the *Assets* tab, click the **Experiments** asset type\. 4. Click to open the tuning experiment for the model you want to deploy\. 5. From the *Tuned models* list, find the completed tuning experiment, and then click **New deployment**\. 6. Name the tuned model\. The name of the tuning experiment is used as the tuned model name if you don't change it. The name has a number after it in parentheses, which counts the deployments. The number starts at one and is incremented by one each time you deploy this tuning experiment. 7. **Optional**: Add a description and tags\. 8. In the *Target deployment space* field, choose a deployment space\. The deployment space must be associated with a machine learning instance that is in the same account as the project where the tuned model was created. If you don't have a deployment space, choose **Create a new deployment space**, and then follow the steps in [Creating deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-create.html). For more information, see [What is a deployment space?](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-deploy.html?context=cdpaas&locale=en#deployment-space) 9. In the *Deployment serving name* field, add a label for the deployment\. The serving name is used in the URL for the API endpoint that identifies your deployment. Adding a name is helpful because the human-readable name that you add replaces a long, system-generated ID that is assigned otherwise. The serving name also abstracts the deployment from its service instance details. Applications refer to this name, which allows the underlying service instance to be changed without impacting users. The name can have up to 36 characters. The supported characters are \[a-z,0-9,\_\]. The name must be unique across the IBM Cloud region. You might be prompted to change the serving name if the name you choose is already in use. 10. **Tip**: Select **View deployment in deployment space after creating**\. Otherwise, you need to take more steps to find your deployed model\. 11. Click **Deploy**\. <!-- </ol> --> After the tuned model is promoted to the deployment space and deployed, a copy of the tuned model is stored in your project as a model asset\. ### What is a deployment space? ### When you create a new deployment, a tuned model is promoted to a deployment space, and then deployed\. A deployment space is separate from the project where you create the asset\. A deployment space is associated with the following services that it uses to deploy assets: <!-- <ul> --> * Watson Machine Learning: A product with tools and services you can use to build, train, and deploy machine learning models\. This service hosts your tuned model\. * IBM Cloud Object Storage: A secure platform for storing structured and unstructured data\. 
Your deployed model asset is stored in a Cloud Object Storage bucket that is associated with your project\. <!-- </ul> --> For more information, see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html)\. ## Testing the deployed model ## The true test of your tuned model is how it responds to input that follows tuned\-for patterns\. You can test the tuned model from one of the following pages: <!-- <ul> --> * **Prompt Lab**: A tool with an intuitive user interface for prompting foundation models\. You can customize the prompt parameters for each input\. You can also save the prompt as a notebook so you can interact with it programmatically\. * **Deployment space**: Useful when you want to test your model programmatically\. From the *API Reference* tab, you can find information about the available endpoints and code examples\. You can also submit input as text and choose whether to return the output all at once or as a stream while the output is generated\. However, you cannot change the prompt parameters for the input text\. <!-- </ul> --> To test your tuned model, complete the following steps: <!-- <ol> --> 1. From the navigation menu, select **Deployments**\. 2. Click the name of the deployment space where you deployed the tuned model\. 3. Click the name of your deployed model\. 4. Follow the appropriate steps based on where you want to test the tuned model: <!-- <ul> --> * From Prompt Lab: <!-- <ol> --> 1. Click **Open in Prompt Lab**, and then choose the project where you want to work with the model. Prompt Lab opens and the tuned model that you deployed is selected from the Model field. 2. In the *Try* section, add a prompt to the **Input** field that follows the prompt pattern that your tuned model is trained to recognize, and then click **Generate**. <!-- </ol> --> For more information about how to use the prompt editor, see [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html). * From the deployment space: <!-- <ol> --> 1. Click the *Test* tab. 2. In the *Input data* field, add a prompt that follows the prompt pattern that your tuned model is trained to recognize, and then click **Generate**. You can click **View parameter settings** to see the prompt parameters that are applied to the model by default. To change the prompt parameters, you must go to the Prompt Lab. <!-- </ol> --> <!-- </ul> --> <!-- </ol> --> ## Learn more ## <!-- <ul> --> * [Tuning a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.html) * [Security and privacy for foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html) <!-- </ul> --> **Parent topic:**[Deploying foundation model assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-found-assets.html) <!-- </article "role="article" "> -->
15A014C514B00FF78C689585F393E21BAE922DB2
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-methods.html?context=cdpaas&locale=en
Methods for tuning foundation models
Methods for tuning foundation models Learn more about different tuning methods and how they work. Models can be tuned in the following ways: * Fine-tuning: Changes the parameters of the underlying foundation model to guide the model to generate output that is optimized for a task. Note: You currently cannot fine-tune models in Tuning Studio. * Prompt-tuning: Adjusts the content of the prompt that is passed to the model to guide the model to generate output that matches a pattern you specify. The underlying foundation model and its parameters are not edited. Only the prompt input is altered. When you prompt-tune a model, the underlying foundation model can be used to address different business needs without being retrained each time. As a result, you reduce computational needs and inference costs. How prompt-tuning works Foundation models are sensitive to the input that you give them. Your input, or how you prompt the model, can introduce context that the model will use to tailor its generated output. Prompt engineering to find the right prompt often works well. However, it can be time-consuming, error-prone, and its effectiveness can be restricted by the context window length that is allowed by the underlying model. Prompt-tuning a model in the Tuning Studio applies machine learning to the task of prompt engineering. Instead of adding words to the input itself, prompt-tuning is a method for finding a sequence of values that, when added as a prefix to the input text, improve the model's ability to generate the output you want. This sequence of values is called a prompt vector. Normally, words in the prompt are vectorized by the model. Vectorization is the process of converting text to tokens, and then to numbers defined by the model's tokenizer to identify the tokens. Lastly, the token IDs are encoded, meaning they are converted into a vector representation, which is the input format that is expected by the embedding layer of the model. Prompt-tuning bypasses the model's text-vectorization process and instead crafts a prompt vector directly. This changeable prompt vector is concatenated to the vectorized input text and the two are passed as one input to the embedding layer of the model. Values from this crafted prompt vector affect the word embedding weights that are set by the model and influence the words that the model chooses to add to the output. To find the best values for the prompt vector, you run a tuning experiment. You demonstrate the type of output that you want for a corresponding input by providing the model with input and output example pairs in training data. With each training run of the experiment, the generated output is compared to the training data output. Based on what it learns from differences between the two, the experiment adjusts the values in the prompt vector. After many runs through the training data, the model finds the prompt vector that works best. You can choose to start the training process by providing text that is vectorized by the experiment. Or you can let the experiment use random values in the prompt vector. Either way, unless the initial values are exactly right, they will be changed repeatedly as part of the training process. Providing your own initialization text can help the experiment reach a good result more quickly. The result of the experiment is a tuned version of the underlying model. You submit input to the tuned model for inferencing and the model generates output that follows the tuned-for pattern. 
For more information about this tuning method, read the research paper named [The Power of Scale for Parameter-Efficient Prompt Tuning](https://arxiv.org/abs/2104.08691). Learn more * [Tuning parameters](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-parameters.html) Parent topic:[Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html)
51747F17F413F1F34CFD73D170DE392D874D03DD
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-parameters.html?context=cdpaas&locale=en
Parameters for tuning foundation models
Parameters for tuning foundation models Tuning parameters configure the tuning experiments that you use to tune the model. During the experiment, the tuning model repeatedly adjusts the structure of the prompt so that its predictions can get better over time. The following diagram illustrates the steps that occur during a tuning training experiment run. The parts of the experiment flow that you can configure are highlighted. These decision points correspond with experiment tuning parameters that you control. ![Tuning experiment run process](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fm-tuning-training-experiment.png) The diagram shows the following steps of the experiment: 1. Starts from the initialization method that you choose to use to initialize the prompt. If the initialization method parameter is set to text, then you must add the initialization text. 2. If specified, tokenizes the initialization text and converts it into a prompt vector. 3. Reads the training data, tokenizes it, and converts it into batches. The size of the batches is determined by the batch size parameter. 4. Sends input from the examples in the batch to the foundation model for the model to process and generate output. 5. Compares the model's output to the output from the training data that corresponds to the training data input that was submitted. Then, computes the loss gradient, which is the difference between the predicted output and the actual output from the training data. At some point, the experiment adjusts the prompt vector that is added to the input based on the performance of the model. When this adjustment occurs depends on how the Accumulation steps parameter is configured. 6. Adjustments are applied to the prompt vector that was initialized in Step 2. The degree to which the vector is changed is controlled by the Learning rate parameter. The edited prompt vector is added as a prefix to the input from the next example in the training data, and is submitted to the model as input. 7. The process repeats until all of the examples in all of the batches are processed. 8. The entire set of batches are processed again as many times as is specified in the Number of epochs parameter. Note: No layer of the base foundation model is changed during this process. Parameter details The parameters that you change when you tune a model are related to the tuning experiment, not to the underlying foundation model. 
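The following toy loop is illustrative pseudocode only, not the Tuning Studio implementation; it shows where each of these parameters acts in the experiment steps above. All model behavior is faked with random numbers.

```python
# Toy sketch of the experiment loop: initialization (steps 1-2), batching (step 3),
# loss gradients (steps 4-5), accumulation steps and learning rate (steps 5-6),
# and epochs (step 8). Leftover gradients at the end of an epoch are ignored here.
import random

def run_tuning_experiment(examples, batch_size=16, accumulation_steps=16,
                          learning_rate=0.3, num_epochs=20, prompt_len=8):
    prompt_vector = [random.random() for _ in range(prompt_len)]  # Random initialization
    for _ in range(num_epochs):                                   # Number of epochs
        batches = [examples[i:i + batch_size]
                   for i in range(0, len(examples), batch_size)]  # Batch size
        pending_gradients = []
        for count, batch in enumerate(batches, start=1):
            pending_gradients.append(random.uniform(-1, 1))       # fake loss gradient for the batch
            if count % accumulation_steps == 0:                   # Accumulation steps
                avg_gradient = sum(pending_gradients) / len(pending_gradients)
                prompt_vector = [value - learning_rate * avg_gradient
                                 for value in prompt_vector]      # Learning rate
                pending_gradients = []
    return prompt_vector

tuned_vector = run_tuning_experiment([{"input": "x", "output": "y"}] * 1000,
                                     batch_size=10, accumulation_steps=10, num_epochs=20)
print(len(tuned_vector))  # 8
```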
Table 1: Tuning parameters

| Parameter name | Value options | Default value | Learn more |
| --- | --- | --- | --- |
| Initialization method | Random, Text | Random | [Initializing prompt tuning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-parameters.html?context=cdpaas&locale=en#initialize) |
| Initialization text | None | None | [Initializing prompt tuning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-parameters.html?context=cdpaas&locale=en#initialize) |
| Batch size | 1 - 16 | 16 | [Segmenting the training data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-parameters.html?context=cdpaas&locale=en#segment) |
| Accumulation steps | 1 - 128 | 16 | [Segmenting the training data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-parameters.html?context=cdpaas&locale=en#segment) |
| Learning rate | 0.01 - 0.5 | 0.3 | [Managing the learning rate](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-parameters.html?context=cdpaas&locale=en#learning-rate) |
| Number of epochs (training cycles) | 1 - 50 | 20 | [Choosing the number of training runs to complete](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-parameters.html?context=cdpaas&locale=en#runs) |

Segmenting the training data When an experiment runs, the experiment first breaks the training data into smaller batches, and then trains on one batch at a time. Each batch must fit in GPU memory to be processed. To reduce the amount of GPU memory that is needed, you can configure the tuning experiment to postpone making adjustments until more than one batch is processed. Tuning runs on a batch and its performance metrics are calculated, but the prompt vector isn't changed. Instead, the performance information is collected over some number of batches before the cumulative performance metrics are evaluated. Use the following parameters to control how the training data is segmented: Batch size: Number of labeled examples (also known as samples) to process at one time. For example, for a data set with 1,000 examples and a batch size of 10, the data set is divided into 100 batches of 10 examples each. If the training data set is small, specify a smaller batch size to ensure that each batch has enough examples in it. Accumulation steps: Number of batches to process before the prompt vector is adjusted. For example, if the data set is divided into 100 batches and you set the accumulation steps value to 10, then the prompt vector is adjusted 10 times instead of 100 times. Initializing prompt tuning When you create an experiment, you can choose whether to specify your own text to serve as the initial prompt vector or let the experiment generate it for you. These new tokens start the training process either in random positions, or based on the embedding of a vocabulary or instruction that you specify in text. Studies show that as the size of the underlying model grows beyond 10 billion parameters, the initialization method that is used becomes less important. The choice that you make when you create the tuning experiment customizes how the prompt is initialized. Initialization method: Choose a method from the following options: * Text: The prompt tuning method is used with initialization text that you specify yourself. * Random: The prompt tuning method is used with values that the experiment chooses at random to include with the prompt. Initialization text: The text that you want to add.
Specify a task description or instructions similar to what you use for zero-shot prompting. Managing the learning rate The learning rate parameter determines how much to change the prompt vector when it is adjusted. The higher the number, the greater the change to the vector. Choosing the number of training runs to complete The Number of epochs parameter specifies the number of times to cycle through the training data. For example, with a batch size of 10 and a data set with 1,000 examples, one epoch must process 100 batches and update the prompt vector 100 times. If you set the number of epochs to 20, the model is passed through the data set 20 times, which means it processes a total of 2,000 batches during the tuning process. The higher the number of epochs and the bigger your training data, the longer it takes to tune a model. A short worked example of this arithmetic follows the links below. Learn more * [Data formats](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-data.html) Parent topic:[Tuning a model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.html)
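The following plain Python function (not part of the product) works through the batch size, accumulation steps, and epoch arithmetic described in this topic:

```python
# Counts how many batches are processed and how often the prompt vector is
# adjusted, using the example figures from this topic.
def tuning_schedule(num_examples: int, batch_size: int,
                    accumulation_steps: int, num_epochs: int) -> dict:
    batches_per_epoch = num_examples // batch_size                 # 1,000 / 10 = 100
    adjustments_per_epoch = batches_per_epoch // accumulation_steps
    return {
        "batches_per_epoch": batches_per_epoch,
        "adjustments_per_epoch": adjustments_per_epoch,
        "total_batches": batches_per_epoch * num_epochs,
        "total_adjustments": adjustments_per_epoch * num_epochs,
    }

print(tuning_schedule(num_examples=1000, batch_size=10,
                      accumulation_steps=10, num_epochs=20))
# {'batches_per_epoch': 100, 'adjustments_per_epoch': 10,
#  'total_batches': 2000, 'total_adjustments': 200}
```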
8745FB7BF19F0E2B0A78C3CD43AA4BF79A25DBCE
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html?context=cdpaas&locale=en
Tuning Studio
Tuning Studio Tune a foundation model with the Tuning Studio to guide an AI foundation model to return useful output. Required permissions : To run training experiments, you must have the Admin or Editor role in a project. : The Tuning Studio is not available with all plans or in all data centers. See [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html) and [Regional availability for services and features](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/regional-datactr.html). Data format : Tabular: JSON, JSONL. For details, see [Data formats](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-data.html). Note: You can use the same training data file with one or more tuning experiments. Data size : 50 to 10,000 input and output example pairs. The maximum file size is 200 MB. You use the Tuning Studio to create a tuned version of an existing foundation model. This video provides a visual method to learn the concepts and tasks in this documentation. Foundation models are AI models that are pretrained on terabytes of data from across the internet and other public resources. They are unrivaled in their ability to predict the next best word and generate language. While language-generation can be useful for brainstorming and spurring creativity, it is less useful for achieving concrete tasks. Model tuning, and other techniques, such as retrieval-augmented generation, help you to use foundation models in meaningful ways for your business. With the Tuning Studio, you can tune a smaller foundation model to improve its performance on natural language processing tasks such as classification, summarization, and generation. Tuning can help a smaller foundation model achieve results comparable to larger models in the same model family. By tuning and deploying the smaller model, you can reduce long-term inference costs. Much like prompt engineering, tuning a foundation model helps you to influence the content and format of the foundation model output. Knowing what to expect from a foundation model is essential if you want to plug the step of inferencing a foundation model into a business workflow. The following diagram illustrates how tuning a foundation model can help you guide the model to generate useful output. You provide labeled data that illustrates the format and type of output that you want the model to return, which helps the foundation model to follow the established pattern. ![How a tuned model relates to a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/fm-tune-overview.png) You can tune a foundation model to optimize the model's ability to do many things, including: * Generate new text in a specific style * Generate text that summarizes or extracts information in a certain way * Classify text To learn more about when tuning a model is the right approach, see [When to tune a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-when.html). Workflow Tuning a model involves the following tasks: 1. Engineer prompts that work well with the model you want to use. * Find the largest foundation model that works best for the task. * Experiment until you understand which prompt formats show the most potential for getting good results from the model. Tuning doesn't mean you can skip prompt engineering altogether. Experimentation is necessary to find the right foundation model for your use case. 
Tuning means you can do the work of prompt engineering once and benefit from it again and again. You can use the Prompt Lab to experiment with prompt engineering. For help, see [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html). 2. Create training data to use for model tuning. 3. Create a tuning experiment to tune the model. 4. Evaluate the tuned model. If necessary, change the training data or the experiment parameters and run more experiments until you're satisfied with the results. 5. Deploy the tuned model. Learn more * [When to tune](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-when.html) * [Methods for tuning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-methods.html) * [Tuning a model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.html) * [Quick start: Tune a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html) * [Sample notebook: Tune a model to classify CFPB documents in watsonx](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/bf57e8896f3e50c638b5a378780f7502) Parent topic:[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html)
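As a concrete illustration of step 2 of this workflow, prompt-tuning training data pairs each example input with the output that you want the model to return. The snippet below writes a few hypothetical classification examples to a JSONL file; the input and output field names follow the common pattern for this kind of training data, but check the [Data formats](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-data.html) topic for the exact schema that the Tuning Studio expects.

```python
# Hypothetical training examples for a classification tuning task. The field
# names and file name are illustrative; verify them against the Data formats topic.
import json

examples = [
    {"input": "Comment: The app crashes every time I upload a receipt.", "output": "Problem"},
    {"input": "Comment: Can I change the email address on my account?", "output": "Question"},
    {"input": "Comment: I was charged twice for the same order.", "output": "Problem"},
]

with open("tuning_training_data.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```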
2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.html?context=cdpaas&locale=en
Tuning a foundation model
Tuning a foundation model To tune a foundation model, create a tuning experiment that guides the foundation model to return the output you want in the format you want. Requirements If you signed up for watsonx.ai and specified the Dallas region, all requirements are met and you're ready to use the Tuning Studio. The Tuning Studio is available from a project that is created for you automatically when you sign up for watsonx.ai. The project is named sandbox and you can use it to get started with testing and customizing foundation models. Before you begin Experiment with the Prompt Lab to determine the best model to use for your task. Craft and try prompts until you find the input and output patterns that generate the best results from the model. For more information, see [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html). Create a set of example prompts that follow the patterns that generate the best results based on your prompt engineering work. For more information, see [Data formats](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-data.html). Tune a model 1. Click the Tune a foundation model with labeled data task. 2. Name the tuning experiment. 3. Optional: Add a description and tags. Add a description as a reminder to yourself and to help collaborators understand the goal of the tuned model. Assigning a tag gives you a way to filter your tuning assets later to show only the assets associated with a tag. 4. Click Create. 5. The flan-t5-xl foundation model is selected for you to tune. To read more about the model, click the Preview icon (![Preview icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/autoai-preview-icon.png)) that is displayed in the drop-down list. For more information, see the [model card](https://huggingface.co/google/flan-t5-xl). 6. Choose how to initialize the prompt from the following options: Text : Uses text that you specify. Random : Uses values that are generated for you as part of the tuning experiment. These options are related to the prompt tuning method for tuning models. For more information about how each option affects the tuning experiment, see [How prompt-tuning works](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-methods.html#how-prompt-tuning-works). 7. Required for the Text initialization method only: Add the initialization text that you want to include with the prompt. * For a classification task, give an instruction that describes what you want to classify and lists the class labels to be used. For example, Classify whether the sentiment of each comment is Positive or Negative. * For a generative task, describe what you want the model to provide in the output. For example, Make the case for allowing employees to work from home a few days a week. * For a summarization task, give an instruction such as, Summarize the main points from a meeting transcript. 8. Choose a task type. Choose the task type that most closely matches what you want the model to do: Classification : Predicts categorical labels from features. For example, given a set of customer comments, you might want to label each statement as a question or a problem. By separating out customer problems, you can find and address them more quickly. Generation : Generates text. For example, writes a promotional email. Summarization : Generates text that describes the main ideas that are expressed in a body of text. For example, summarizes a research paper.
Whichever task you choose, the input is submitted to the underlying foundation model as a generative request type during the experiment. For classification tasks, class names are taken into account in the prompts that are used to tune the model. As models and tuning methods evolve, task-specific enhancements are likely to be added that you can leverage if tasks are represented accurately. 9. Required for classification tasks only: In the Classification output (verbalizer) field, add the class labels that you want the model to use, one label at a time. Important: Specify the same labels that are used in your training data. During the tuning experiment, class label information is submitted along with the input examples from the training data. 10. Add the training data that will be used to tune the model. You can upload a file or use an asset from your project. To see examples of how to format your file, expand What should your data look like?, and then click Preview template. For more information, see [Data formats](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-data.html). 11. Optional: If you want to limit the size of the input or output examples that are used during training, adjust the maximum number of tokens that are allowed. Expand What should your data look like?, and then drag the sliders to change the values. Limiting the size can reduce the time that it takes to run the tuning experiment. For more information, see [Controlling the number of tokens used](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.html?context=cdpaas&locale=en#tuning-tokens). 12. Optional: Click Configure parameters to edit the parameters that are used by the tuning experiment. The tuning run is configured with parameter values that represent a good starting point for tuning a model. You can adjust them if you want. For more information about the available parameters and what they do, see [Tuning parameters](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-parameters.html). After you change parameter values, click Save. 13. Click Start tuning. The tuning experiment begins. It might take a few minutes to a few hours depending on the size of your training data and the availability of compute resources. When the experiment is finished, the status shows as completed. A tuned model asset is not created until after you create a deployment from a completed tuning experiment. For more information, see [Deploying a tuned model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-deploy.html). Controlling the number of tokens used You can change the number of tokens that are allowed in the model input and output during a tuning experiment.

Table 1: Token number parameters

| Parameter name | Value options | Default value |
| --- | --- | --- |
| Maximum input tokens | 1 - 256 | 256 |
| Maximum output tokens | 1 - 128 | 128 |

You already have some control over the input size. The input text that is used during a tuning experiment comes from your training data. So, you can manage the input size by keeping your example inputs to a set length. However, you might get training data from another team or process that isn't curated. In that case, you can use the Maximum input tokens slider to manage the input size. If you set the parameter to 200 and the training data has an example input with 1,000 tokens, for example, the example is truncated. Only the first 200 tokens of the example input are used.
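The following sketch is illustrative only; the real tokenization is model-specific and happens inside the tuning experiment, but it shows the effect of a Maximum input tokens limit of 200 on a 1,000-token example:

```python
# Illustrative truncation: only the first max_input_tokens tokens of an
# over-long example input are used. Token IDs here are stand-in values.
def truncate_example(token_ids, max_input_tokens=200):
    return token_ids[:max_input_tokens]

long_example = list(range(1000))            # stands in for a 1,000-token input
print(len(truncate_example(long_example)))  # 200
```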
The Maximum output tokens value is important because it controls the number of tokens that the model is allowed to generate as output at training time. You can use the slider to limit the output size, which helps the model to generate concise output. For classification tasks, minimizing the size of the output is a good way to force a generative model to return the class label only, without repeating the classification pattern in the output. For natural language models, words are converted to tokens. 256 tokens are equal to approximately 130 to 170 words, and 128 tokens are equal to approximately 65 to 85 words. However, token numbers are difficult to estimate and can differ by model. For more information, see [Tokens and tokenization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html). Evaluating the tuning experiment When the experiment is finished, a loss function graph is displayed that illustrates the improvement in the model output over time. The epochs are shown on the x-axis and a measure of the difference between predicted and actual results per epoch is shown on the y-axis. The value that is shown per epoch is calculated from the average gradient value from all of the accumulation steps in the epoch. The best experiment outcome is represented by a downward-sloping curve. A decreasing curve means that the model gets better at generating the expected outputs in the expected format over time. If the gradient value for the last epoch remains too high, you can run another experiment. To help improve the results, try one of the following approaches: * Augment or edit the training data that you're using. * Adjust the experiment parameters. When you're satisfied with the results from the tuning experiment, deploy the tuned foundation model. For more information, see [Deploying a tuned model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-deploy.html). Learn more * [Data formats](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-data.html) * [Tuning parameters](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-parameters.html) * [Quick start: Tune a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html) * [Sample notebook: Tune a model to classify CFPB documents in watsonx](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/bf57e8896f3e50c638b5a378780f7502) Parent topic:[Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html)
FBC3C5F81D060CD996489B772ABAC886F12130A3
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-when.html?context=cdpaas&locale=en
When to tune a foundation model
When to tune a foundation model Find out when tuning a model can help you use a foundation model to achieve your goals. Tune a foundation model when you want to do the following things: * Reduce the cost of inferencing at scale Larger foundation models typically generate better results. However, they are also more expensive to use. By tuning a model, you can get similar, sometimes even better results from a smaller model that costs less to use. * Get the model's output to use a certain style or format * Improve the model's performance by teaching the model a specialized task * Generate output in a reliable form in response to zero-shot prompts When not to tune a model Tuning a model is not always the right approach for improving the output of a model. For example, tuning a model cannot help you do the following things: * Improve the accuracy of answers in model output If you're using a foundation model for factual recall in a question-answering scenario, tuning will only marginally improve answer accuracy. To get factual answers, you must provide factual information as part of your input to the model. Tuning can be used to help the generated factual answers conform to a format that can be more easily used by a downstream process in a workflow. To learn about methods for returning factual answers, see [Retrieval-augmented generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-rag.html). * Get the model to use a specific vocabulary in its output consistently Large language models that are trained on large amounts of data formulate a vocabulary based on that initial set of data. You can introduce significant terms to the model from training data that you use to tune the model. However, the model might not use these preferred terms reliably in its output. * Teach a foundation model to perform an entirely new task Experimenting with prompt engineering is an important first step because it helps you understand the type of output that a foundation model is and is not capable of generating. You can use tuning to tweak, tailor, and shape the output that a foundation model is able to return. Learn more * [Retrieval-augmented generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-rag.html) * [Tuning methods](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-methods.html) Parent topic:[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html)
E3B9F33C36E5636808B137CFA4745E39F3B48D62
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/forecasting-guides.html?context=cdpaas&locale=en
SPSS predictive analytics forecasting using data preparation for time series data in notebooks
SPSS predictive analytics forecasting using data preparation for time series data in notebooks Data preparation for time series data (TSDP) provides the functionality to convert raw time data (in Flattened multi-dimensional format, which includes transactional (event) based and column-based data) into regular time series data (in compact row-based format) which is required by the subsequent time series analysis methods. The main job of TSDP is to generate time series in terms of the combination of each unique value in the dimension fields with metric fields. In addition, it sorts the data based on the timestamp, extracts metadata of time variables, transforms time series with another time granularity (interval) by applying an aggregation or distribution function, checks the data quality, and handles missing values if needed. Python example code: from spss.ml.forecasting.timeseriesdatapreparation import TimeSeriesDataPreparation tsdp = TimeSeriesDataPreparation(). setMetricFieldList(["Demand"]). setDateTimeField("Date"). setEncodeSeriesID(True). setInputTimeInterval("MONTH"). setOutTimeInterval("MONTH"). setQualityScoreThreshold(0.0). setConstSeriesThreshold(0.0) tsdpOut = tsdp.transform(data) TimeSeriesDataPreparationConvertor This is the date/time convertor API that's used to provide some functionalities of the date/time convertor inside TSDP for applications to use. There are two use cases for this component: * Compute the time points between a specified start and end time. In this case, the start and end time both occur after the first observation in the previous TSDP\'s output. * Compute the time points between a start index and end index referring to the last observation in the previous TSDP\'s output. Temporal causal modeling Temporal causal modeling (TCM) refers to a suite of methods that attempt to discover key temporal relationships in time series data by using a combination of Granger causality and regression algorithms for variable selection. Python example code: from spss.ml.forecasting.timeseriesdatapreparation import TimeSeriesDataPreparation from spss.ml.common.wrapper import LocalContainerManager from spss.ml.forecasting.temporalcausal import TemporalCausal from spss.ml.forecasting.params.predictor import MaxLag, MaxNumberOfPredictor, Predictor from spss.ml.forecasting.params.temporal import FieldNameList, FieldSettings, Forecast, Fit from spss.ml.forecasting.reversetimeseriesdatapreparation import ReverseTimeSeriesDataPreparation tsdp = TimeSeriesDataPreparation().setDimFieldList(["Demension1", "Demension2"]). setMetricFieldList(["m1", "m2", "m3", "m4"]). setDateTimeField("date"). setEncodeSeriesID(True). setInputTimeInterval("MONTH"). setOutTimeInterval("MONTH") tsdpOutput = tsdp.transform(changedDF) lcm = LocalContainerManager() lcm.exportContainers("TSDP", tsdp.containers) estimator = TemporalCausal(lcm). setInputContainerKeys(["TSDP"]). setTargetPredictorList([Predictor( targetList="", "", ""]], predictorCandidateList="", "", ""]])]). setMaxNumPredictor(MaxNumberOfPredictor(False, 4)). setMaxLag(MaxLag("SETTING", 5)). setTolerance(1e-6) tcmModel = estimator.fit(tsdpOutput) transformer = tcmModel.setDataEncoded(True). setCILevel(0.95). setOutTargetValues(False). setTargets(FieldSettings(fieldNameList=FieldNameList(seriesIDList=["da1", "db1", "m1"]]))). setReestimate(False). setForecast(Forecast(outForecast=True, forecastSpan=5, outCI=True)). 
setFit(Fit(outFit=True, outCI=True, outResidual=True)) predictions = transformer.transform(tsdpOutput) rtsdp = ReverseTimeSeriesDataPreparation(lcm). setInputContainerKeys(["TSDP"]). setDeriveFutureIndicatorField(True) rtsdpOutput = rtsdp.transform(predictions) rtsdpOutput.show() Temporal Causal Auto Regressive Model Autoregressive (AR) models are built to compute out-of-sample forecasts for predictor series that aren't target series. These predictor forecasts are then used to compute out-of-sample forecasts for the target series. Model produced by TemporalCausal TemporalCausal exports two outputs: * a JSON file that contains the TemporalCausal model information * an XML file that contains the multi-series model Python example code: from spss.ml.common.wrapper import LocalContainerManager from spss.ml.forecasting.temporalcausal import TemporalCausal, TemporalCausalAutoRegressiveModel from spss.ml.forecasting.params.predictor import MaxLag, MaxNumberOfPredictor, Predictor from spss.ml.forecasting.params.temporal import FieldNameList, FieldSettingsAr, ForecastAr lcm = LocalContainerManager() arEstimator = TemporalCausal(lcm). setInputContainerKeys([tsdp.uid]). setTargetPredictorList([Predictor( targetList = [["da1", "db1", "m2"]], predictorCandidateList = [["da1", "db1", "m1"], ["da1", "db2", "m1"], ["da1", "db2", "m2"], ["da1", "db3", "m1"], ["da1", "db3", "m2"], ["da1", "db3", "m3"]])]). setMaxNumPredictor(MaxNumberOfPredictor(False, 5)). setMaxLag(MaxLag("SETTING", 5)) arEstimator.fit(df) tcmAr = TemporalCausalAutoRegressiveModel(lcm). setInputContainerKeys([arEstimator.uid]). setDataEncoded(True). setOutTargetValues(True). setTargets(FieldSettingsAr(FieldNameList( seriesIDList=[["da1", "db1", "m1"], ["da1", "db2", "m2"], ["da1", "db3", "m3"]]))). setForecast(ForecastAr(forecastSpan = 5)) scored = tcmAr.transform(df) scored.show() Temporal Causal Outlier Detection One of the advantages of building TCM models is the ability to detect model-based outliers. Outlier detection refers to a capability to identify the time points in the target series with values that stray too far from their expected (fitted) values based on the TCM models. Temporal Causal Root Cause Analysis Root cause analysis refers to a capability to explore the Granger causal graph in order to analyze the key/root values that resulted in the outlier in question. Temporal Causal Scenario Analysis Scenario analysis refers to a capability of the TCM models to "play out" the repercussions of artificially setting the value of a time series. A scenario is the set of forecasts that are performed by substituting the values of a root time series by a vector of substitute values. Temporal Causal Summary TCM Summary selects the top N models based on one model quality measure. There are five model quality measures: Root Mean Squared Error (RMSE), Root Mean Squared Percentage Error (RMSPE), Bayesian Information Criterion (BIC), Akaike Information Criterion (AIC), and R squared (RSQUARE). Both N and the model quality measure can be set by the user. Time Series Exploration Time Series Exploration explores the characteristics of time series data based on some statistics and tests to generate preliminary insights about the time series before modeling. It covers not only analytic methods for expert users (including time series clustering, unit root test, and correlations), but also provides an automatic exploration process based on a simple time series decomposition method for business users. 
Python example code: from spss.ml.forecasting.timeseriesexploration import TimeSeriesExploration tse = TimeSeriesExploration(). setAutoExploration(True). setClustering(True) tseModel = tse.fit(data) predictions = tseModel.transform(data) predictions.show() Reverse Data preparation for time series data Reverse Data preparation for time series data (RTSDP) provides functionality that converts the compact row-based (CRB) format that's generated by TimeSeriesDataPreparation (TSDP) or TemporalCausalModel (TCM Score) back to the flattened multidimensional (FMD) format. Python example code: from spss.ml.common.wrapper import LocalContainerManager from spss.ml.forecasting.params.temporal import GroupType from spss.ml.forecasting.reversetimeseriesdatapreparation import ReverseTimeSeriesDataPreparation from spss.ml.forecasting.timeseriesdatapreparation import TimeSeriesDataPreparation manager = LocalContainerManager() tsdp = TimeSeriesDataPreparation(manager). setDimFieldList(["Dimension1", "Dimension2", "Dimension3"]). setMetricFieldList( ["Metric1", "Metric2", "Metric3", "Metric4", "Metric5", "Metric6", "Metric7", "Metric8", "Metric9", "Metric10"]). setDateTimeField("TimeStamp"). setEncodeSeriesID(False). setInputTimeInterval("WEEK"). setOutTimeInterval("WEEK"). setMissingImputeType("LINEAR_INTERP"). setQualityScoreThreshold(0.0). setConstSeriesThreshold(0.0). setGroupType( GroupType([("Metric1", "MEAN"), ("Metric2", "SUM"), ("Metric3", "MODE"), ("Metric4", "MIN"), ("Metric5", "MAX")])) tsdpOut = tsdp.transform(changedDF) rtsdp = ReverseTimeSeriesDataPreparation(manager). setInputContainerKeys([tsdp.uid]). setDeriveFutureIndicatorField(True) rtsdpOut = rtsdp.transform(tsdpOut) Scala example code: import com.ibm.spss.ml.forecasting.traditional.TimeSeriesForecastingModelReEstimate val tsdp = TimeSeriesDataPreparation(). setDimFieldList(Array("da", "db")). setMetricFieldList(Array("metric")). setDateTimeField("date"). setEncodeSeriesID(false). setInputTimeInterval("MONTH"). setOutTimeInterval("MONTH") val lcm = LocalContainerManager() lcm.exportContainers("k", tsdp.containers) val reestimate = TimeSeriesForecastingModelReEstimate(lcm). setForecast(ForecastEs(outForecast = true, forecastSpan = 4, outCI = true)). setFitSettings(Fit(outFit = true, outCI = true, outResidual = true)). setOutInputData(true). setInputContainerKeys(Seq("k")) val rtsdp = ReverseTimeSeriesDataPreparation(tsdp.manager). setInputContainerKeys(List(tsdp.uid)). setDeriveFutureIndicatorField(true) val pipeline = new Pipeline().setStages(Array(tsdp, reestimate, rtsdp)) val scored = pipeline.fit(data).transform(data) scored.show() Parent topic:[SPSS predictive analytics algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-algorithms.html)
# SPSS predictive analytics forecasting using data preparation for time series data in notebooks # Data preparation for time series data (TSDP) provides the functionality to convert raw time data (in flattened multi\-dimensional format, which includes transactional (event) based and column\-based data) into regular time series data (in compact row\-based format), which is required by the subsequent time series analysis methods\. The main job of TSDP is to generate time series in terms of the combination of each unique value in the dimension fields with metric fields\. In addition, it sorts the data based on the timestamp, extracts metadata of time variables, transforms time series with another time granularity (interval) by applying an aggregation or distribution function, checks the data quality, and handles missing values if needed\. **Python example code:** from spss.ml.forecasting.timeseriesdatapreparation import TimeSeriesDataPreparation tsdp = TimeSeriesDataPreparation(). \ setMetricFieldList(["Demand"]). \ setDateTimeField("Date"). \ setEncodeSeriesID(True). \ setInputTimeInterval("MONTH"). \ setOutTimeInterval("MONTH"). \ setQualityScoreThreshold(0.0). \ setConstSeriesThreshold(0.0) tsdpOut = tsdp.transform(data) ## TimeSeriesDataPreparationConvertor ## This is the date/time convertor API that exposes some of the date/time convertor functionality inside TSDP for applications to use\. There are two use cases for this component: <!-- <ul> --> * Compute the time points between a specified start and end time\. In this case, the start and end time both occur after the first observation in the previous TSDP's output\. * Compute the time points between a start index and end index referring to the last observation in the previous TSDP's output\. <!-- </ul> --> ## Temporal causal modeling ## Temporal causal modeling (TCM) refers to a suite of methods that attempt to discover key temporal relationships in time series data by using a combination of Granger causality and regression algorithms for variable selection\. **Python example code:** from spss.ml.forecasting.timeseriesdatapreparation import TimeSeriesDataPreparation from spss.ml.common.wrapper import LocalContainerManager from spss.ml.forecasting.temporalcausal import TemporalCausal from spss.ml.forecasting.params.predictor import MaxLag, MaxNumberOfPredictor, Predictor from spss.ml.forecasting.params.temporal import FieldNameList, FieldSettings, Forecast, Fit from spss.ml.forecasting.reversetimeseriesdatapreparation import ReverseTimeSeriesDataPreparation tsdp = TimeSeriesDataPreparation().setDimFieldList(["Dimension1", "Dimension2"]). \ setMetricFieldList(["m1", "m2", "m3", "m4"]). \ setDateTimeField("date"). \ setEncodeSeriesID(True). \ setInputTimeInterval("MONTH"). \ setOutTimeInterval("MONTH") tsdpOutput = tsdp.transform(changedDF) lcm = LocalContainerManager() lcm.exportContainers("TSDP", tsdp.containers) estimator = TemporalCausal(lcm). \ setInputContainerKeys(["TSDP"]). \ setTargetPredictorList([Predictor( targetList=[["", "", ""]], predictorCandidateList=[["", "", ""]])]). \ setMaxNumPredictor(MaxNumberOfPredictor(False, 4)). \ setMaxLag(MaxLag("SETTING", 5)). \ setTolerance(1e-6) tcmModel = estimator.fit(tsdpOutput) transformer = tcmModel.setDataEncoded(True). \ setCILevel(0.95). \ setOutTargetValues(False). \ setTargets(FieldSettings(fieldNameList=FieldNameList(seriesIDList=[["da1", "db1", "m1"]]))). \ setReestimate(False). \ setForecast(Forecast(outForecast=True, forecastSpan=5, outCI=True)). 
\ setFit(Fit(outFit=True, outCI=True, outResidual=True)) predictions = transformer.transform(tsdpOutput) rtsdp = ReverseTimeSeriesDataPreparation(lcm). \ setInputContainerKeys(["TSDP"]). \ setDeriveFutureIndicatorField(True) rtsdpOutput = rtsdp.transform(predictions) rtsdpOutput.show() ## Temporal Causal Auto Regressive Model ## Autoregressive (AR) models are built to compute out\-of\-sample forecasts for predictor series that aren't target series\. These predictor forecasts are then used to compute out\-of\-sample forecasts for the target series\. **Model produced by TemporalCausal** TemporalCausal exports two outputs: <!-- <ul> --> * a JSON file that contains the TemporalCausal model information * an XML file that contains the multi\-series model <!-- </ul> --> **Python example code:** from spss.ml.common.wrapper import LocalContainerManager from spss.ml.forecasting.temporalcausal import TemporalCausal, TemporalCausalAutoRegressiveModel from spss.ml.forecasting.params.predictor import MaxLag, MaxNumberOfPredictor, Predictor from spss.ml.forecasting.params.temporal import FieldNameList, FieldSettingsAr, ForecastAr lcm = LocalContainerManager() arEstimator = TemporalCausal(lcm). \ setInputContainerKeys([tsdp.uid]). \ setTargetPredictorList([Predictor( targetList = [["da1", "db1", "m2"]], predictorCandidateList = [["da1", "db1", "m1"], ["da1", "db2", "m1"], ["da1", "db2", "m2"], ["da1", "db3", "m1"], ["da1", "db3", "m2"], ["da1", "db3", "m3"]])]). \ setMaxNumPredictor(MaxNumberOfPredictor(False, 5)). \ setMaxLag(MaxLag("SETTING", 5)) arEstimator.fit(df) tcmAr = TemporalCausalAutoRegressiveModel(lcm).\ setInputContainerKeys([arEstimator.uid]).\ setDataEncoded(True).\ setOutTargetValues(True). \ setTargets(FieldSettingsAr(FieldNameList( seriesIDList=[["da1", "db1", "m1"], ["da1", "db2", "m2"], ["da1", "db3", "m3"]]))).\ setForecast(ForecastAr(forecastSpan = 5)) scored = tcmAr.transform(df) scored.show() ## Temporal Causal Outlier Detection ## One of the advantages of building TCM models is the ability to detect model\-based outliers\. Outlier detection refers to a capability to identify the time points in the target series with values that stray too far from their expected (fitted) values based on the TCM models\. ## Temporal Causal Root Cause Analysis ## Root cause analysis refers to a capability to explore the Granger causal graph in order to analyze the key/root values that resulted in the outlier in question\. ## Temporal Causal Scenario Analysis ## Scenario analysis refers to a capability of the TCM models to "play out" the repercussions of artificially setting the value of a time series\. A scenario is the set of forecasts that are performed by substituting the values of a root time series by a vector of substitute values\. ## Temporal Causal Summary ## TCM Summary selects the top N models based on one model quality measure\. There are five model quality measures: Root Mean Squared Error (RMSE), Root Mean Squared Percentage Error (RMSPE), Bayesian Information Criterion (BIC), Akaike Information Criterion (AIC), and R squared (RSQUARE)\. Both N and the model quality measure can be set by the user\. ## Time Series Exploration ## Time Series Exploration explores the characteristics of time series data based on some statistics and tests to generate preliminary insights about the time series before modeling\. 
It covers not only analytic methods for expert users (including time series clustering, unit root test, and correlations), but also provides an automatic exploration process based on a simple time series decomposition method for business users\. **Python example code:** from spss.ml.forecasting.timeseriesexploration import TimeSeriesExploration tse = TimeSeriesExploration(). \ setAutoExploration(True). \ setClustering(True) tseModel = tse.fit(data) predictions = tseModel.transform(data) predictions.show() ## Reverse Data preparation for time series data ## Reverse Data preparation for time series data (RTSDP) provides functionality that converts the compact row\-based (CRB) format that's generated by TimeSeriesDataPreparation (TSDP) or TemporalCausalModel (TCM Score) back to the flattened multidimensional (FMD) format\. **Python example code:** from spss.ml.common.wrapper import LocalContainerManager from spss.ml.forecasting.params.temporal import GroupType from spss.ml.forecasting.reversetimeseriesdatapreparation import ReverseTimeSeriesDataPreparation from spss.ml.forecasting.timeseriesdatapreparation import TimeSeriesDataPreparation manager = LocalContainerManager() tsdp = TimeSeriesDataPreparation(manager). \ setDimFieldList(["Dimension1", "Dimension2", "Dimension3"]). \ setMetricFieldList( ["Metric1", "Metric2", "Metric3", "Metric4", "Metric5", "Metric6", "Metric7", "Metric8", "Metric9", "Metric10"]). \ setDateTimeField("TimeStamp"). \ setEncodeSeriesID(False). \ setInputTimeInterval("WEEK"). \ setOutTimeInterval("WEEK"). \ setMissingImputeType("LINEAR_INTERP"). \ setQualityScoreThreshold(0.0). \ setConstSeriesThreshold(0.0). \ setGroupType( GroupType([("Metric1", "MEAN"), ("Metric2", "SUM"), ("Metric3", "MODE"), ("Metric4", "MIN"), ("Metric5", "MAX")])) tsdpOut = tsdp.transform(changedDF) rtsdp = ReverseTimeSeriesDataPreparation(manager). \ setInputContainerKeys([tsdp.uid]). \ setDeriveFutureIndicatorField(True) rtsdpOut = rtsdp.transform(tsdpOut) **Scala example code:** import com.ibm.spss.ml.forecasting.traditional.TimeSeriesForecastingModelReEstimate val tsdp = TimeSeriesDataPreparation(). setDimFieldList(Array("da", "db")). setMetricFieldList(Array("metric")). setDateTimeField("date"). setEncodeSeriesID(false). setInputTimeInterval("MONTH"). setOutTimeInterval("MONTH") val lcm = LocalContainerManager() lcm.exportContainers("k", tsdp.containers) val reestimate = TimeSeriesForecastingModelReEstimate(lcm). setForecast(ForecastEs(outForecast = true, forecastSpan = 4, outCI = true)). setFitSettings(Fit(outFit = true, outCI = true, outResidual = true)). setOutInputData(true). setInputContainerKeys(Seq("k")) val rtsdp = ReverseTimeSeriesDataPreparation(tsdp.manager). setInputContainerKeys(List(tsdp.uid)). setDeriveFutureIndicatorField(true) val pipeline = new Pipeline().setStages(Array(tsdp, reestimate, rtsdp)) val scored = pipeline.fit(data).transform(data) scored.show() **Parent topic:**[SPSS predictive analytics algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-algorithms.html) <!-- </article "role="article" "> -->
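The TCM examples above assume that `data` and `changedDF` are existing Spark DataFrames whose columns match the configured date/time, dimension, and metric fields. As a minimal sketch (not part of the original examples), an input DataFrame for the first TSDP snippet could be built like this, assuming a `SparkSession` named `spark` is already available in the notebook:

```python
# Hypothetical input for the first TSDP example above: one monthly series with a
# "Date" column (setDateTimeField) and a "Demand" metric (setMetricFieldList).
import datetime

rows = [
    (datetime.date(2023, 1, 1), 120.0),
    (datetime.date(2023, 2, 1), 135.5),
    (datetime.date(2023, 3, 1), 128.25),
]
data = spark.createDataFrame(rows, ["Date", "Demand"])
data.show()
```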
3508F0DDA4CCBDBB07BD583218F4E4260DC01C0D
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/geo-spatial-lib.html?context=cdpaas&locale=en
Geospatial data analysis
Geospatial data analysis You can use the geospatio-temporal library to expand your data science analysis in Python notebooks to include location analytics by gathering, manipulating, and displaying imagery, GPS, satellite photography, and historical data. The geospatio-temporal library is available in all IBM Watson Studio Spark with Python runtime environments. Key functions The geospatio-temporal library includes functions to read and write data, as well as topological, geohashing, indexing, ellipsoidal, and routing functions. Key aspects of the library include: * All calculated geometries are accurate without the need for projections. * The geospatial functions take advantage of the distributed processing capabilities provided by Spark. * The library includes native geohashing support for geometries used in simple aggregations and in indexing, thereby improving storage retrieval considerably. * The library supports extensions of Spark distributed joins. * The library supports the SQL/MM extensions to Spark SQL. Getting started with the library Before you can start using the library in a notebook, you must register STContext in your notebook to access the st functions. To register STContext: from pyst import STContext stc = STContext(spark.sparkContext._gateway) Next steps After you have registered STContext in your notebook, you can begin exploring the spatio-temporal library for: * Functions to read and write data * Topological functions * Geohashing functions * Geospatial indexing functions * Ellipsoidal functions * Routing functions Check out the following sample Python notebooks to learn how to use these different functions in Python notebooks: * [Use the spatio-temporal library for location analytics](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/92c6ab6ea922d1da6a2cc9496a277005) * [Use spatial indexing to query spatial data](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/a7432f0c29c5bda2fb42749f3628d981) * [Spatial queries in PySpark](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/27ecffa80bd3a386fffca1d8d1256ba7) Parent topic:[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html)
# Geospatial data analysis # You can use the geospatio\-temporal library to expand your data science analysis in Python notebooks to include location analytics by gathering, manipulating, and displaying imagery, GPS, satellite photography, and historical data\. The geospatio\-temporal library is available in all IBM Watson Studio Spark with Python runtime environments\. ## Key functions ## The geospatio\-temporal library includes functions to read and write data, as well as topological, geohashing, indexing, ellipsoidal, and routing functions\. Key aspects of the library include: <!-- <ul> --> * All calculated geometries are accurate without the need for projections\. * The geospatial functions take advantage of the distributed processing capabilities provided by Spark\. * The library includes native geohashing support for geometries used in simple aggregations and in indexing, thereby improving storage retrieval considerably\. * The library supports extensions of Spark distributed joins\. * The library supports the SQL/MM extensions to Spark SQL\. <!-- </ul> --> ## Getting started with the library ## Before you can start using the library in a notebook, you must register `STContext` in your notebook to access the `st` functions\. To register `STContext`: from pyst import STContext stc = STContext(spark.sparkContext._gateway) ## Next steps ## After you have registered `STContext` in your notebook, you can begin exploring the spatio\-temporal library for: <!-- <ul> --> * Functions to read and write data * Topological functions * Geohashing functions * Geospatial indexing functions * Ellipsoidal functions * Routing functions <!-- </ul> --> Check out the following sample Python notebooks to learn how to use these different functions in Python notebooks: <!-- <ul> --> * [Use the spatio\-temporal library for location analytics](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/92c6ab6ea922d1da6a2cc9496a277005) * [Use spatial indexing to query spatial data](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/a7432f0c29c5bda2fb42749f3628d981) * [Spatial queries in PySpark](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/27ecffa80bd3a386fffca1d8d1256ba7) <!-- </ul> --> **Parent topic:**[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html) <!-- </article "role="article" "> -->
B61649DF5425DEA0C1F16942BDE0EEC79B3E4F80
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/github-integration.html?context=cdpaas&locale=en
Publishing notebooks on GitHub
Publishing notebooks on GitHub To collaborate with stakeholders and other data scientists, you can publish your notebooks in GitHub repositories. You can also use GitHub to back up notebooks for source code management. Watch this video to see how to enable GitHub integration. Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. This video provides a visual method to learn the concepts and tasks in this documentation. * Transcript Synchronize transcript with video Time Transcript 00:00 This video shows you how to publish notebooks from your Watson Studio project to your GitHub account. 00:07 Navigate to your profile and settings. 00:11 On the "Integrations" tab, visit the link to generate a GitHub personal access token. 00:17 Provide a descriptive name for the token and select the repo and gist scopes, then generate the token. 00:29 Copy the token, return to the GitHub integration settings, and paste the token. 00:36 The token is validated when you save it to your profile settings. 00:42 Now, navigate to your projects. 00:44 You enable GitHub integration at the project level on the "Settings" tab. 00:50 Simply scroll to the bottom and paste the existing GitHub repository URL. 00:56 You'll find that on the "Code" tab in the repo. 01:01 Click "Update" to make the connection. 01:05 Now, go to the "Assets" tab and open the notebook you want to publish. 01:14 Notice that this notebook has the credentials replaced with X's. 01:19 It's a best practice to remove or replace credentials before publishing to GitHub. 01:24 So, this notebook is ready for publishing. 01:27 You can provide the target path along with a commit message. 01:31 You also have the option to publish content without hidden code, which means that any cells in the notebook that began with the hidden cell comment will not be published. 01:42 When you're, ready click "Publish". 01:45 The message tells you that the notebook was published successfully and provides links to the notebook, the repository, and the commit. 01:54 Let's take a look at the commit. 01:57 So, there's the commit, and you can navigate to the repository to see the published notebook. 02:04 Lastly, you can publish as a gist. 02:07 Gists are another way to share your work on GitHub. 02:10 Every gist is a git repository, so it can be forked and cloned. 02:15 There are two types of gists: public and secret. 02:19 If you start out with a secret gist, you can convert it to a public gist later. 02:24 And again, you have the option to remove hidden cells. 02:29 Follow the link to see the published gist. 02:32 So that's the basics of Watson Studio's GitHub integration. 02:37 Find more videos in the Cloud Pak for Data as a Service documentation. Enabling access to GitHub from your account Before you can publish notebooks on GitHub, you must enable your IBM watsonx account to access GitHub. You enable access by creating a personal access token with the required access scope in GitHub and linking the token to your IBM watsonx account. Follow these steps to create a personal access token: 1. Click your avatar in the header, and then click Profile and settings. 2. Go to the Integrations tab and click the GitHub personal access tokens link on the dialog and generate a new token. 3. On the New personal access token page, select repo scope and then click to generate a token. 4. Copy the generated access token and paste it in the GitHub integration dialog window in IBM watsonx. 
Linking a project to a GitHub repository After you have saved the access token, your project must be connected to an existing GitHub repository. You can only link to one existing GitHub repository from a project. Private repositories are supported. To link a project to an existing GitHub repository, you must have administrator permission to the project. All project collaborators who have administrator or editor permission can publish files to this GitHub repository. However, these users must have permission to access the repository. Granting user permissions to repositories must be done in GitHub. To connect a project to an existing GitHub repository: 1. Select the Manage tab and go to the Services and Integrations page. 2. Click the Third-party integrations tab. 3. Click Connect integration. 4. Enter your generated access token from GitHub. Now you can begin publishing notebooks on GitHub. Note: For information on how to change your Git integration, refer to [Managing your integrations](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html#integrations). Publishing a notebook on GitHub To publish a notebook on GitHub: 1. Open the notebook in edit mode. 2. Click the GitHub integration icon (![Shows the upload icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/upload.png)) and select Publish on GitHub from the opened notebook's action bar. When you enter the name of the file you want to publish on GitHub, you can specify a folder path in the GitHub repository. Note that notebook files are always pushed to the master branch. If you get this error: An error occurred while publishing the notebook. Invalid access token permissions or repository does not exist. make sure that: * You generated your personal access token, as described in [Enabling access to GitHub from your account](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/enabling-access.html) and the token was not deleted. * The repository that you want to publish your notebook to still exists. Parent topic:[Managing the lifecycle of notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-nb-lifecycle.html)
# Publishing notebooks on GitHub # To collaborate with stakeholders and other data scientists, you can publish your notebooks in GitHub repositories\. You can also use GitHub to back up notebooks for source code management\. Watch this video to see how to enable GitHub integration\. Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform\. This video provides a visual method to learn the concepts and tasks in this documentation\. <!-- <ul> --> * Transcript Synchronize transcript with video <!-- <table "class="bx--data-table bx--data-table--zebra" style="border-collapse: collapse; border: none;" "> --> | Time | Transcript | | ----- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | 00:00 | This video shows you how to publish notebooks from your Watson Studio project to your GitHub account. | | 00:07 | Navigate to your profile and settings. | | 00:11 | On the "Integrations" tab, visit the link to generate a GitHub personal access token. | | 00:17 | Provide a descriptive name for the token and select the repo and gist scopes, then generate the token. | | 00:29 | Copy the token, return to the GitHub integration settings, and paste the token. | | 00:36 | The token is validated when you save it to your profile settings. | | 00:42 | Now, navigate to your projects. | | 00:44 | You enable GitHub integration at the project level on the "Settings" tab. | | 00:50 | Simply scroll to the bottom and paste the existing GitHub repository URL. | | 00:56 | You'll find that on the "Code" tab in the repo. | | 01:01 | Click "Update" to make the connection. | | 01:05 | Now, go to the "Assets" tab and open the notebook you want to publish. | | 01:14 | Notice that this notebook has the credentials replaced with X's. | | 01:19 | It's a best practice to remove or replace credentials before publishing to GitHub. | | 01:24 | So, this notebook is ready for publishing. | | 01:27 | You can provide the target path along with a commit message. | | 01:31 | You also have the option to publish content without hidden code, which means that any cells in the notebook that began with the hidden cell comment will not be published. | | 01:42 | When you're, ready click "Publish". | | 01:45 | The message tells you that the notebook was published successfully and provides links to the notebook, the repository, and the commit. | | 01:54 | Let's take a look at the commit. | | 01:57 | So, there's the commit, and you can navigate to the repository to see the published notebook. | | 02:04 | Lastly, you can publish as a gist. | | 02:07 | Gists are another way to share your work on GitHub. | | 02:10 | Every gist is a git repository, so it can be forked and cloned. | | 02:15 | There are two types of gists: public and secret. | | 02:19 | If you start out with a secret gist, you can convert it to a public gist later. | | 02:24 | And again, you have the option to remove hidden cells. | | 02:29 | Follow the link to see the published gist. | | 02:32 | So that's the basics of Watson Studio's GitHub integration. | | 02:37 | Find more videos in the Cloud Pak for Data as a Service documentation. | <!-- </table "class="bx--data-table bx--data-table--zebra" style="border-collapse: collapse; border: none;" "> --> <!-- </ul> --> ## Enabling access to GitHub from your account ## Before you can publish notebooks on GitHub, you must enable your IBM watsonx account to access GitHub\. 
You enable access by creating a personal access token with the required access scope in GitHub and linking the token to your IBM watsonx account\. Follow these steps to create a personal access token: <!-- <ol> --> 1. Click your avatar in the header, and then click **Profile and settings**\. 2. Go to the **Integrations** tab, click the GitHub personal access tokens link on the dialog, and generate a new token\. 3. On the New personal access token page, select repo scope and then click to generate a token\. 4. Copy the generated access token and paste it in the GitHub integration dialog window in IBM watsonx\. <!-- </ol> --> ## Linking a project to a GitHub repository ## After you have saved the access token, your project must be connected to an existing GitHub repository\. You can only link to one existing GitHub repository from a project\. Private repositories are supported\. To link a project to an existing GitHub repository, you must have administrator permission to the project\. All project collaborators who have administrator or editor permission can publish files to this GitHub repository\. However, these users must have permission to access the repository\. Granting user permissions to repositories must be done in GitHub\. To connect a project to an existing GitHub repository: <!-- <ol> --> 1. Select the **Manage** tab and go to the **Services and Integrations** page\. 2. Click the **Third\-party integrations** tab\. 3. Click **Connect integration**\. 4. Enter your generated access token from GitHub\. <!-- </ol> --> Now you can begin publishing notebooks on GitHub\. Note: For information on how to change your Git integration, refer to [Managing your integrations](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html#integrations)\. ## Publishing a notebook on GitHub ## To publish a notebook on GitHub: <!-- <ol> --> 1. Open the notebook in edit mode\. 2. Click the GitHub integration icon (![Shows the upload icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/upload.png)) and select **Publish on GitHub** from the opened notebook's action bar\. <!-- </ol> --> When you enter the name of the file you want to publish on GitHub, you can specify a folder path in the GitHub repository\. Note that notebook files are always pushed to the master branch\. If you get this error: `An error occurred while publishing the notebook. Invalid access token permissions or repository does not exist.` make sure that: <!-- <ul> --> * You generated your personal access token, as described in [Enabling access to GitHub from your account](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/enabling-access.html) and the token was not deleted\. * The repository that you want to publish your notebook to still exists\. <!-- </ul> --> **Parent topic:**[Managing the lifecycle of notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-nb-lifecycle.html) <!-- </article "role="article" "> -->
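Before you paste the token into IBM watsonx, you can optionally confirm that it is valid and carries the scopes you selected. The following is a hypothetical check against the GitHub REST API, not part of the documented procedure; replace `<YOUR_TOKEN>` with the token you generated:

```python
# Optional sanity check of a GitHub personal access token (placeholder value).
# Assumes the requests package is available in your Python environment.
import requests

response = requests.get(
    "https://api.github.com/user",
    headers={"Authorization": "token <YOUR_TOKEN>"},
)
print(response.status_code)                    # 200 indicates a valid token
print(response.headers.get("X-OAuth-Scopes"))  # scopes granted to the token
```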
3C307031346D4FD7DD1A66E2A2F919713582B075
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/hide_code.html?context=cdpaas&locale=en
Hiding sensitive code cells in a notebook
Hiding sensitive code cells in a notebook If your notebook includes code cells with sensitive data, such as credentials for data sources, you can hide those code cells from anyone you share your notebook with. Any collaborators in the same project can see the cells, but when you share a notebook with a link, those cells will be hidden from anyone who uses the link. To hide code cells: 1. Open the notebook and select the code cell to hide. 2. Insert a comment with the hide tag on the first line of the code cell. For the Python and R languages, enter the following syntax: # @hidden_cell ![Syntax for hiding code cells](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/hide_tag.png) Parent topic:[Sharing notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/share-notebooks.html)
# Hiding sensitive code cells in a notebook # If your notebook includes code cells with sensitive data, such as credentials for data sources, you can hide those code cells from anyone you share your notebook with\. Any collaborators in the same project can see the cells, but when you share a notebook with a link, those cells will be hidden from anyone who uses the link\. To hide code cells: <!-- <ol> --> 1. Open the notebook and select the code cell to hide\. 2. Insert a comment with the hide tag on the first line of the code cell\. For the Python and R languages, enter the following syntax: `# @hidden_cell` ![Syntax for hiding code cells](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/hide_tag.png) <!-- </ol> --> **Parent topic:**[Sharing notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/share-notebooks.html) <!-- </article "role="article" "> -->
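For example, a cell that stores data source credentials and should stay hidden when the notebook is shared by link might look like the following sketch; the tag must be the first line of the cell, and the values shown are placeholders, not real credentials:

```python
# @hidden_cell
# This cell is excluded when the notebook is shared with a link.
db_credentials = {
    "host": "example-host.databases.appdomain.cloud",  # placeholder
    "username": "my_user",                              # placeholder
    "password": "my_password",                          # placeholder
}
```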
AF2AC67B66D3A2DB0D4F2AF2D6743F903F1385D7
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/install-cust-lib.html?context=cdpaas&locale=en
Installing custom libraries through notebooks
Installing custom libraries through notebooks The preferred way of installing additional Python libraries to use in a notebook is to customize the software configuration of the environment runtime associated with the notebook. You can add the conda or PyPI packages through a customization template when you customize the environment template. See [Customizing environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html). However, if you want to install packages from somewhere else or packages you created on your local machine, for example, you can install and import the packages through the notebook. To install packages other than conda or PyPI packages through your notebook: 1. Add the package to your project storage by clicking the Upload asset to project icon (![Shows the Upload asset to project icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/find_data_icon.png)), and then browsing to the package file or dragging it into your notebook sidebar. 2. Add a project token to the notebook by clicking More > Insert project token from the notebook action bar. The code that is generated by this action initializes the variable project, which is required to access the library you uploaded to object storage. Example of an inserted project token: # @hidden_cell # The project token is an authorization token that is used to access project resources like data sources and connections, and is used by platform APIs. from project_lib import Project project = Project(project_id='7c7a9455-1916-4677-a2a9-a61a75942f58', project_access_token='p-9a4c487075063e610471d6816e286e8d0d222141') pc = project.project_context If you don't have a token, you need to create one. See [Adding a project token](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/token.html). 3. Install the library: # Fetch the library file, for example, the tar.gz or other installable distribution that you created with open("xxx-0.1.tar.gz","wb") as f: f.write(project.get_file("xxx-0.1.tar.gz").read()) # Install the library !pip install xxx-0.1.tar.gz 4. Now you can import the library: import xxx Parent topic:[Libraries and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/libraries.html)
# Installing custom libraries through notebooks # The preferred way of installing additional Python libraries to use in a notebook is to customize the software configuration of the environment runtime associated with the notebook\. You can add the conda or PyPI packages through a customization template when you customize the environment template\. See [Customizing environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html)\. However, if you want to install packages from somewhere else or packages you created on your local machine, for example, you can install and import the packages through the notebook\. To install packages other than conda or PyPI packages through your notebook: <!-- <ol> --> 1. Add the package to your project storage by clicking the **Upload asset to project** icon (![Shows the Upload asset to project icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/find_data_icon.png)), and then browsing to the package file or dragging it into your notebook sidebar\. 2. Add a project token to the notebook by clicking **More > Insert project token** from the notebook action bar\. The code that is generated by this action initializes the variable `project`, which is required to access the library you uploaded to object storage\. Example of an inserted project token: # @hidden_cell # The project token is an authorization token that is used to access project resources like data sources and connections, and is used by platform APIs. from project_lib import Project project = Project(project_id='7c7a9455-1916-4677-a2a9-a61a75942f58', project_access_token='p-9a4c487075063e610471d6816e286e8d0d222141') pc = project.project_context If you don't have a token, you need to create one. See [Adding a project token](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/token.html). 3. Install the library: <!-- </ol> --> ```python # Fetch the library file, for example, the tar.gz or other installable distribution that you created with open("xxx-0.1.tar.gz","wb") as f: f.write(project.get_file("xxx-0.1.tar.gz").read()) # Install the library !pip install xxx-0.1.tar.gz ``` <!-- <ol> --> 4. Now you can import the library: import xxx <!-- </ol> --> **Parent topic:**[Libraries and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/libraries.html) <!-- </article "role="article" "> -->
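If the package you want to install lives in a Git repository instead of your project storage, you can usually install it directly from a notebook cell with pip. This is a minimal sketch; the repository URL and package name are placeholders for your own package:

```python
# Install a package directly from a Git repository (placeholder URL),
# then import it (placeholder module name).
!pip install git+https://github.com/your-org/your-package.git
import your_package
```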
7623F4FA0F93DB33077A8B64F7A7B27FBC84E9E4
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/jupyter-spark.html?context=cdpaas&locale=en
Jupyter kernels and notebook environments
Jupyter kernels and notebook environments Jupyter notebooks run in kernels in Jupyter notebook environments or, if the notebooks use Spark APIs, those kernels run in a Spark environment. The number of notebook Juypter kernels started in an environment depends on the environment type: * CPU or GPU environments When you open a notebook in edit mode, exactly one interactive session connects to a Jupyter kernel for the notebook language and the environment runtime that you select. The runtime is started per user and not per notebook. This means that if you open a second notebook with the same environment template, a second kernel is started in that runtime. Resources are shared. If you want to avoid sharing runtime resources, you must associate each notebook with its own environment template. Important: Stopping a notebook kernel doesn't stop the environment runtime in which the kernel is started because other notebook kernels could still be active in that runtime. Only stop an environment runtime if you are sure that no kernels are active. * Spark environments When you open a notebook in edit mode in a Spark environment, a dedicated Spark cluster is started, even if another notebook was opened in the same Spark environment template. Each notebook kernel has its own Spark driver and set of Spark executors. No resources are shared. If necessary, you can restart or reconnect to a kernel. When you restart a kernel, the kernel is stopped and then started in the same session, but all execution results are lost. When you reconnect to a kernel after losing a connection, the notebook is connected to the same kernel session, and all previous execution results which were saved are available. The kernel remains active even if you leave the notebook or close the web browser window. When you reopen the same notebook, the notebook is connected to the same kernel. Only the output cells that were saved (auto-save happens every 2 minutes) before you left the notebook or closed the web browser window will be visible. You will not see the output for any cells which ran in the background after you left the notebook or closed the window. To see all of the output cells, you need to rerun the notebook. Learn more * [Notebook environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) * [Associated Spark services](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html) * [Runtime scope in notebook environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.htmlruntime-scope) Parent topic:[Creating notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/creating-notebooks.html)
# Jupyter kernels and notebook environments # Jupyter notebooks run in kernels in Jupyter notebook environments or, if the notebooks use Spark APIs, those kernels run in a Spark environment\. The number of notebook Juypter kernels started in an environment depends on the environment type: <!-- <ul> --> * CPU or GPU environments When you open a notebook in edit mode, exactly one interactive session connects to a Jupyter kernel for the notebook language and the environment runtime that you select. The runtime is started per user and not per notebook. This means that if you open a second notebook with the same environment template, a second kernel is started in that runtime. Resources are shared. If you want to avoid sharing runtime resources, you must associate each notebook with its own environment template. Important: Stopping a notebook kernel doesn't stop the environment runtime in which the kernel is started because other notebook kernels could still be active in that runtime. Only stop an environment runtime if you are sure that no kernels are active. * Spark environments When you open a notebook in edit mode in a Spark environment, a dedicated Spark cluster is started, even if another notebook was opened in the same Spark environment template. Each notebook kernel has its own Spark driver and set of Spark executors. No resources are shared. <!-- </ul> --> If necessary, you can restart or reconnect to a kernel\. When you restart a kernel, the kernel is stopped and then started in the same session, but all execution results are lost\. When you reconnect to a kernel after losing a connection, the notebook is connected to the same kernel session, and all previous execution results which were saved are available\. The kernel remains active even if you leave the notebook or close the web browser window\. When you reopen the same notebook, the notebook is connected to the same kernel\. Only the output cells that were saved (auto\-save happens every 2 minutes) before you left the notebook or closed the web browser window will be visible\. You will not see the output for any cells which ran in the background after you left the notebook or closed the window\. To see all of the output cells, you need to rerun the notebook\. ## Learn more ## <!-- <ul> --> * [Notebook environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) <!-- </ul> --> <!-- <ul> --> * [Associated Spark services](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html) <!-- </ul> --> <!-- <ul> --> * [Runtime scope in notebook environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html#runtime-scope) <!-- </ul> --> **Parent topic:**[Creating notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/creating-notebooks.html) <!-- </article "role="article" "> -->
A99D0A49CDC1C3C38EFF43A6B1B51B0A177E5573
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/libraries.html?context=cdpaas&locale=en
Libraries and scripts for notebooks
Libraries and scripts for notebooks Watson Studio includes a large selection of preinstalled open source libraries for Python and R in its runtime environments. You can also use preinstalled IBM libraries or install custom libraries. Watson Studio includes the following libraries and the appropriate runtime environments with which you can expand your data analysis: * The Watson Natural Language Processing library in Python and Python with GPU runtime environments. * The geospatio-temporal library in Spark with Python runtime environments * The open source Xskipper library for data skipping in Spark with Python runtime environments * Parquet encryption in Spark with Python runtime environments * The tspy library for time series analysis in Spark with Python runtime environments Listing installed libraries Many of your favorite open source libraries are pre-installed on runtime environments. All you have to do is import them. See [Import preinstalled libraries and packages](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/libraries.html?context=cdpaas&locale=en#import-lib). If a library is not preinstalled, you can add it: * Through the notebook Some libraries require a kernel restart after a version change. If you need to work with a library version that isn't pre-installed in the environment in which you start the notebook, and you install this library version through the notebook, the notebook only runs successfully after you restart the kernel. Note that when you run the notebook non-interactively, for example as a notebook job, it fails because the kernel can't be restarted. * By adding a customization to the environment in which the notebook runs If you add a library with a particular version to the software customization, the library is preinstalled at the time the environment is started and no kernel restart is required. Also, if the notebook is run in a scheduled job, it won't fail. The advantage of adding an environment customization is that the library is preinstalled each time the environment runtime is started. Libraries that you add through a notebook are persisted for the lifetime of the runtime only. If the runtime is stopped and later restarted, those libraries are not installed. To see the list of installed libraries in your environment runtime: 1. From the Manage tab, on the project's Environments page, select the environment template. 2. From a notebook, run the appropriate command from a notebook cell: * Python: !pip list --isolated * R: installed.packages() 3. Optional: Add custom libraries and packages to the environment. See [customizing an environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html). Importing an installed library To import an installed library into your notebook, run the appropriate command from a notebook cell with the library name: * Python: import library_name * R: library(library_name) Alternatively, you can write a script that includes multiple classes and methods and then [import the script into your notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/add-script-to-notebook.html). 
Learn more * [Installing custom libraries](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/install-cust-lib.html) * [Importing scripts into a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/add-script-to-notebook.html) * [Watson Natural Language Processing library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html) * [geospatio-temporal library for location analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/geo-spatial-lib.html) * [Xskipper library for data skipping](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-skipping-spark-sql.html) * [Parquet encryption](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/parquet-encryption.html) * [tspy library for time series analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib.html) Parent topic:[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html)
# Libraries and scripts for notebooks # Watson Studio includes a large selection of preinstalled open source libraries for Python and R in its runtime environments\. You can also use preinstalled IBM libraries or install custom libraries\. Watson Studio includes the following libraries and the appropriate runtime environments with which you can expand your data analysis: <!-- <ul> --> * The Watson Natural Language Processing library in Python and Python with GPU runtime environments\. * The geospatio\-temporal library in Spark with Python runtime environments * The open source Xskipper library for data skipping in Spark with Python runtime environments * Parquet encryption in Spark with Python runtime environments * The tspy library for time series analysis in Spark with Python runtime environments <!-- </ul> --> ## Listing installed libraries ## Many of your favorite open source libraries are pre\-installed on runtime environments\. All you have to do is import them\. See [Import preinstalled libraries and packages](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/libraries.html?context=cdpaas&locale=en#import-lib)\. If a library is not preinstalled, you can add it: <!-- <ul> --> * Through the notebook Some libraries require a kernel restart after a version change. If you need to work with a library version that isn't pre-installed in the environment in which you start the notebook, and you install this library version through the notebook, the notebook only runs successfully after you restart the kernel. Note that when you run the notebook non-interactively, for example as a notebook job, it fails because the kernel can't be restarted. * By adding a customization to the environment in which the notebook runs If you add a library with a particular version to the software customization, the library is preinstalled at the time the environment is started and no kernel restart is required. Also, if the notebook is run in a scheduled job, it won't fail. The advantage of adding an environment customization is that the library is preinstalled each time the environment runtime is started. Libraries that you add through a notebook are persisted for the lifetime of the runtime only. If the runtime is stopped and later restarted, those libraries are not installed. <!-- </ul> --> To see the list of installed libraries in your environment runtime: <!-- <ol> --> 1. From the **Manage** tab, on the project's **Environments** page, select the environment template\. 2. From a notebook, run the appropriate command from a notebook cell: <!-- <ul> --> * Python: `!pip list --isolated` * R: `installed.packages()` <!-- </ul> --> 3. Optional: Add custom libraries and packages to the environment\. See [customizing an environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html)\. <!-- </ol> --> ## Importing an installed library ## To import an installed library into your notebook, run the appropriate command from a notebook cell with the library name: <!-- <ul> --> * Python: `import library_name` * R: `library(library_name)` <!-- </ul> --> Alternatively, you can write a script that includes multiple classes and methods and then [import the script into your notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/add-script-to-notebook.html)\. 
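For example, to check what is already installed in a Python runtime and then import one of the preinstalled libraries (pandas is used here purely as an illustration, assuming it is part of your environment):

```python
# List the packages installed in the current runtime
!pip list --isolated

# Import a preinstalled library and confirm its version
import pandas as pd
print(pd.__version__)
```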
## Learn more ## <!-- <ul> --> * [Installing custom libraries](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/install-cust-lib.html) * [Importing scripts into a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/add-script-to-notebook.html) * [Watson Natural Language Processing library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html) * [geospatio\-temporal library for location analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/geo-spatial-lib.html) * [Xskipper library for data skipping](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-skipping-spark-sql.html) * [Parquet encryption](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/parquet-encryption.html) * [tspy library for time series analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib.html) <!-- </ul> --> **Parent topic:**[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html) <!-- </article "role="article" "> -->
773FA6558F9FD3115F36AF9E4B11F67C1F501432
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=en
Loading and accessing data in a notebook
Loading and accessing data in a notebook You can integrate data into notebooks by accessing the data from a local file, from free data sets, or from a data source connection. You load that data into a data structure or container in the notebook, for example, a pandas.DataFrame, numpy.array, Spark RDD, or Spark DataFrame. To work with data in a notebook, you can choose between the following options: Recommended methods for adding data to your notebook Option Recommended method Requirements Details Add data from a file on your local system Add a Code snippet that loads your data The file must exist as an asset in your project [Add a file from your local system](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enadd-file-local) and then [Use a code snippet to load the data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enfiles) Add data from a free data set from the Samples Add a Code snippet that loads your data The data set (file) must exist as an asset in your project [Add a free data set from the Samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enloadcomm) and then [Use a code snippet to load the data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enfiles) Load data from data source connections Add a Code snippet that loads your data The connection must exist as an asset in your project [Add a connection to your project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) and then [Add a code snippet that loads the data from your data source connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enconns) Access project assets and metadata programmatically Use ibm-watson-studio-lib The data asset must exist in your project [Use the ibm-watson-studio-lib library to interact with data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/using-ibm-ws-lib.html) Create and use feature store data Use assetframe-lib library functions The data asset must exist in your project [Use the assetframe-lib library for Python to create and use feature store data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html) Access data using an API function or an operating system command For example, use wget N/A [Access data using an API function or an operating system command](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enapi-function) Important: Make sure that the environment in which the notebook is started has enough memory to store the data that you load to the notebook. The environment must have significantly more memory than the total size of the data that is loaded to the notebook. Some data frameworks, like pandas, can hold multiple copies of the data in memory. Adding a file from your local system To add a file from your local system to your project by using the Jupyterlab notebook editor: 1. Open your notebook in edit mode. 2. From the toolbar, click the Upload asset to project icon (![Shows the Upload asset to project icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/find_data_icon.png)) and add your file. Tip: You can also drag the file into your notebook sidebar. 
Load data sets from the Samples The data sets on the Samples contain open data. Watch this short video to see how to work with public data sets in the Samples. This video provides a visual method to learn the concepts and tasks in this documentation. To add a data set from the Samples to your project: 1. From the IBM watsonx navigation menu, select Samples. 2. Find the card for the data set that you want to add. ![A view of data sets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/datasets.png) 3. Click Add to project, select the project, and click Add. Clicking View project takes you to the project Overview page. The data asset is added to the list of data assets on the project's Assets page. Loading data from files Prerequisites The file must exist as an asset in your project. For details, see [Adding a file from your local system](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enadd-file-local) or [Loading a data set from the Samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enloadcomm). To load data from a project file to your notebook: 1. Open your notebook in edit mode. 2. Click the Code snippets icon (![the Code snippets icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/code-snippets-icon.png)), click Read data, and then select the data file from your project. If you want to change your selection, use Edit icon. 3. From the Load as drop-down list, select the load option that you prefer. If you select Credentials, only file access credentials will be generated. For details, see [Adding credentials](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enadding-creds). 4. Click in an empty code cell in your notebook and then click Insert code to cell to insert the generated code. Alternatively, click to copy the generated code to the clipboard and then paste the code into your notebook. The generated code serves as a quick start to begin working with a data set. For production systems, carefully review the inserted code to determine whether to write your own code that better meets your needs. To learn which data structures are generated for which notebook language and data format, see [Data load support](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-load-support.htmlfile-types). Loading data from data source connections Prerequisites Before you can load data from an IBM data service or from an external data source, you must create or add a connection to your project. See [Adding connections to projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). To load data from an existing data source connection into a data structure in your notebook: 1. Open your notebook in edit mode. 2. Click the Code snippets icon (![the Code snippets icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/code-snippets-icon.png)), click Read data, and then select the data source connection from your project. 3. Select the schema and choose a table. If you want to change your selection, use Edit icon. 4. Select the load option. If you select Credentials, only metadata will be generated. For details, see [Adding credentials](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enadding-creds). 5. 
Click in an empty code cell in your notebook and then insert code to the cell. Alternatively, click to copy the generated code to the clipboard and then paste the code into your notebook. 6. If necessary, enter your personal credentials for locked data connections that are marked with a key icon (![the key symbol for connections with personal credentials](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/privatekey.png)). This is a one-time step that permanently unlocks the connection for you. After you unlock the connection, the key icon is no longer displayed. For more information, see [Adding connections to projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). The generated code serves as a quick start to begin working with a connection. For production systems, carefully review the inserted code to determine whether to write your own code that better meets your needs. To learn which data structures are generated for which notebook language and data format, see [Data load support](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-load-support.htmlfile-types). Adding credentials You can generate your own code to access the file located in your IBM Cloud Object Storage or a file accessible through a connection. This is useful when, for example, your file format is not supported by the snippet generation tool. With the credentials, you can write your own code to load the data into a data structure in a notebook cell. To add the credentials: 1. Click the Code snippets icon (![the Code snippets icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/code-snippets-icon.png)) and then click Read data. 2. Click in an empty code cell in your notebook, select Credentials as the load option, and then load the credentials to the cell. You can also click to copy the credentials to the clipboard and then paste them into your notebook. 3. Insert your credentials into the code in your notebook to access the data. For example, see this code in a [blog for Python](https://medium.com/ibm-data-science-experience/working-with-ibm-cloud-object-storage-in-python-fe0ba8667d5f). Use an API function or an operating system command to access the data You can use API functions or operating system commands in your notebook to access data, for example, the wget command to access data by using the HTTP, HTTPS or FTP protocols. When you use these types of API functions and commands, you must include code that sets the project access token. See [Manually add the project access token](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/token.html). For reference information about the API, see [Watson Data API](https://cloud.ibm.com/apidocs/watson-data-api). Parent topic:[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html)
# Loading and accessing data in a notebook # You can integrate data into notebooks by accessing the data from a local file, from free data sets, or from a data source connection\. You load that data into a data structure or container in the notebook, for example, a pandas\.DataFrame, numpy\.array, Spark RDD, or Spark DataFrame\. To work with data in a notebook, you can choose between the following options: <!-- <table> --> Recommended methods for adding data to your notebook | Option | Recommended method | Requirements | Details | | ---------------------------------------------------------------- | ------------------------------------------- | ---------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | Add data from a file on your local system | Add a **Code snippet** that loads your data | The file must exist as an asset in your project | [Add a file from your local system](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=en#add-file-local) and then [Use a code snippet to load the data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=en#files) | | Add data from a free data set from the Samples | Add a **Code snippet** that loads your data | The data set (file) must exist as an asset in your project | [Add a free data set from the Samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=en#loadcomm) and then [Use a code snippet to load the data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=en#files) | | Load data from data source connections | Add a **Code snippet** that loads your data | The connection must exist as an asset in your project | [Add a connection to your project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) and then [Add a code snippet that loads the data from your data source connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=en#conns) | | Access project assets and metadata programmatically | Use `ibm-watson-studio-lib` | The data asset must exist in your project | [Use the `ibm-watson-studio-lib` library to interact with data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/using-ibm-ws-lib.html) | | Create and use feature store data | Use `assetframe-lib` library functions | The data asset must exist in your project | [Use the `assetframe-lib` library for Python to create and use feature store data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html) | | Access data using an API function or an operating system command | For example, use `wget` | N/A | [Access data using an API function or an operating system command](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=en#api-function) | <!-- </table ""> --> Important: Make sure that the environment in which the notebook is started has enough memory to store the data that you load to the notebook\. The environment must have significantly more memory than the total size of the data that is loaded to the notebook\. 
Some data frameworks, like pandas, can hold multiple copies of the data in memory\. ## Adding a file from your local system ## To add a file from your local system to your project by using the JupyterLab notebook editor: <!-- <ol> --> 1. Open your notebook in edit mode\. 2. From the toolbar, click the **Upload asset to project** icon (![Shows the Upload asset to project icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/find_data_icon.png)) and add your file\. <!-- </ol> --> Tip: You can also drag the file into your notebook sidebar\. ## Load data sets from the Samples ## The data sets on the Samples contain open data\. Watch this short video to see how to work with public data sets in the Samples\. This video provides a visual method to learn the concepts and tasks in this documentation\. To add a data set from the Samples to your project: <!-- <ol> --> 1. From the IBM watsonx navigation menu, select Samples\. 2. Find the card for the data set that you want to add\. ![A view of data sets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/datasets.png) 3. Click **Add to project**, select the project, and click **Add**\. Clicking **View project** takes you to the project Overview page\. The data asset is added to the list of data assets on the project's Assets page\. <!-- </ol> --> ## Loading data from files ## **Prerequisites** The file must exist as an asset in your project\. For details, see [Adding a file from your local system](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=en#add-file-local) or [Loading a data set from the Samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=en#loadcomm)\. To load data from a project file to your notebook: <!-- <ol> --> 1. Open your notebook in edit mode\. 2. Click the **Code snippets** icon (![the Code snippets icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/code-snippets-icon.png)), click **Read data**, and then select the data file from your project\. If you want to change your selection, use the **Edit** icon\. 3. From the **Load as** drop\-down list, select the load option that you prefer\. If you select **Credentials**, only file access credentials will be generated\. For details, see [Adding credentials](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=en#adding-creds)\. 4. Click in an empty code cell in your notebook and then click **Insert code to cell** to insert the generated code\. Alternatively, click to copy the generated code to the clipboard and then paste the code into your notebook\. <!-- </ol> --> The generated code serves as a quick start to begin working with a data set\. For production systems, carefully review the inserted code to determine whether to write your own code that better meets your needs\. To learn which data structures are generated for which notebook language and data format, see [Data load support](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-load-support.html#file-types)\. ## Loading data from data source connections ## **Prerequisites** Before you can load data from an IBM data service or from an external data source, you must create or add a connection to your project\. See [Adding connections to projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)\. 
To load data from an existing data source connection into a data structure in your notebook: <!-- <ol> --> 1. Open your notebook in edit mode\. 2. Click the **Code snippets** icon (![the Code snippets icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/code-snippets-icon.png)), click **Read data**, and then select the data source connection from your project\. 3. Select the schema and choose a table\. If you want to change your selection, use the **Edit** icon\. 4. Select the load option\. If you select **Credentials**, only metadata will be generated\. For details, see [Adding credentials](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=en#adding-creds)\. 5. Click in an empty code cell in your notebook and then insert the generated code into the cell\. Alternatively, click to copy the generated code to the clipboard and then paste the code into your notebook\. 6. If necessary, enter your personal credentials for locked data connections that are marked with a key icon (![the key symbol for connections with personal credentials](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/privatekey.png))\. This is a one\-time step that permanently unlocks the connection for you\. After you unlock the connection, the key icon is no longer displayed\. For more information, see [Adding connections to projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)\. <!-- </ol> --> The generated code serves as a quick start to begin working with a connection\. For production systems, carefully review the inserted code to determine whether to write your own code that better meets your needs\. To learn which data structures are generated for which notebook language and data format, see [Data load support](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-load-support.html#file-types)\. ## Adding credentials ## You can generate your own code to access a file that is located in your IBM Cloud Object Storage or a file that is accessible through a connection\. This is useful when, for example, your file format is not supported by the snippet generation tool\. With the credentials, you can write your own code to load the data into a data structure in a notebook cell\. To add the credentials: <!-- <ol> --> 1. Click the **Code snippets** icon (![the Code snippets icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/code-snippets-icon.png)) and then click **Read data**\. 2. Click in an empty code cell in your notebook, select **Credentials** as the load option, and then load the credentials to the cell\. You can also click to copy the credentials to the clipboard and then paste them into your notebook\. 3. Insert your credentials into the code in your notebook to access the data\. For example, see this code in a [blog for Python](https://medium.com/ibm-data-science-experience/working-with-ibm-cloud-object-storage-in-python-fe0ba8667d5f)\. <!-- </ol> --> ## Use an API function or an operating system command to access the data ## You can use API functions or operating system commands in your notebook to access data, for example, the `wget` command to access data by using the HTTP, HTTPS, or FTP protocols\. When you use these types of API functions and commands, you must include code that sets the project access token\. See [Manually add the project access token](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/token.html)\. 
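For example, the following minimal sketch shows how a notebook cell might use the `wget` command, or pandas directly, to read a CSV file over HTTPS into a DataFrame\. The URL is a placeholder that you replace with the address of your own file, and for files that are not publicly accessible you must first set the project access token as described above\.

    import subprocess
    import pandas as pd

    # Placeholder URL; replace it with the address of the file that you want to retrieve.
    data_url = "https://example.com/data/sales.csv"

    # Option 1: download the file with the wget operating system command, then read the local copy.
    subprocess.run(["wget", "-q", "-O", "sales.csv", data_url], check=True)
    df = pd.read_csv("sales.csv")

    # Option 2: let pandas fetch the file over HTTPS directly.
    df = pd.read_csv(data_url)

    df.head()
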
For reference information about the API, see [Watson Data API](https://cloud.ibm.com/apidocs/watson-data-api)\. **Parent topic:**[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html) <!-- </article "role="article" "> -->
7BAB40E15D18920009E4168C32265A950A8AFE38
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html?context=cdpaas&locale=en
Managing compute resources
Managing compute resources If you have the Admin role or Editor in a project, you can perform management tasks for environments. * [Create an environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html) * [Customize an environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html) * [Stop active runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html?context=cdpaas&locale=enstop-active-runtimes) * [Promote an environment template to a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/promote-envs.html) * [Track capacity unit consumption of runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/track-runtime-usage.html) Stop active runtimes You should stop all active runtimes when you no longer need them to prevent consuming extra capacity unit hours (CUHs). Jupyter notebook runtimes are started per user and not per notebook. Stopping a notebook kernel doesn't stop the environment runtime in which the kernel is started because you could have started other notebooks in the same environment. You should only stop a notebook runtime if you are sure that no other notebook kernels are active. Only runtimes that are started for jobs are automatically shut down after the scheduled job has completed. For example, if you schedule to run a notebook once a day for 2 months, the runtime instance will be activated every day for the duration of the scheduled job and deactivated again after the job has finished. Project users with Admin role can stop all runtimes in the project. Users added to the project with Editor role can stop the runtimes they started, but can't stop other project users' runtimes. Users added to the project with the viewer role can't see the runtimes in the project. You can stop runtimes from: * The Environment Runtimes page, which lists all active runtimes across all projects for your account, by clicking Administration > Environment runtimes from the Watson Studio navigation menu. * Under Tool runtimes on the Environments page on the Manage tab of your project, which lists the active runtimes for a specific project. * The Environments page when you click the Notebook Info icon (![Notebook Info icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/get-information_32.png)) from the notebook toolbar in the notebook editor. You can stop the runtime under Runtime status. Idle timeouts for: * [Jupyter notebook runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html?context=cdpaas&locale=encpu) * [Spark runtimes for notebooks and Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html?context=cdpaas&locale=enspark) * [Notebook with GPU runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html?context=cdpaas&locale=engpu) * [RStudio runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html?context=cdpaas&locale=enrstudio) Jupyter notebook idle timeout Runtime idle times differ for the Jupyter notebook runtimes depending on your Watson Studio plan. 
Idle timeout for default CPU runtimes Plan Idle timeout Lite - Idle stop time: 1 hour <br>- CUH limit: 10 CUHs Professional - Idle stop time: 1 hour <br>- CUH limit: no limit Standard (Legacy) - Idle stop time: 1 hour <br>- CUH limit: no limit Enterprise (Legacy) - Idle stop time: 3 hours <br>- CUH limit: no limit All plans <br>Free runtime - Idle stop time: 1 hour <br>- Maximum lifetime: 12 hours Important: A runtime is started per user and not per notebook. Stopping a notebook kernel doesn't stop the environment runtime in which the kernel is started because you could have started other notebooks in the same environment. Only stop a runtime if you are sure that no kernels are active. Spark idle timeout All Spark runtimes, for example for notebook and Data Refinery, are stopped after 3 hours of inactivity. The Default Data Refinery XS runtime that is used when you refine data in Data Refinery is stopped after an idle time of 1 hour. Spark runtimes that are started when a job is started, for example to run a Data Refinery flow or a notebook, are stopped when the job finishes. GPU idle timeout All GPU runtimes are automatically stopped after 3 hours of inactivity for Enterprise plan users and after 1 hour of inactivity for other paid plan users. RStudio idle timeout An RStudio is stopped for you after an idle time of 2 hour. During this idle time, you will continue to consume CUHs for which you are billed. Long compute-intensive jobs are hard stopped after 24 hours. Parent topic:[Projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html)
# Managing compute resources # If you have the **Admin** or **Editor** role in a project, you can perform management tasks for environments\. <!-- <ul> --> * [Create an environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html) * [Customize an environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html) * [Stop active runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html?context=cdpaas&locale=en#stop-active-runtimes) * [Promote an environment template to a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/promote-envs.html) * [Track capacity unit consumption of runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/track-runtime-usage.html) <!-- </ul> --> ## Stop active runtimes ## You should stop all active runtimes when you no longer need them to prevent consuming extra capacity unit hours (CUHs)\. Jupyter notebook runtimes are started per user and not per notebook\. Stopping a notebook kernel doesn't stop the environment runtime in which the kernel is started because you could have started other notebooks in the same environment\. You should only stop a notebook runtime if you are sure that no other notebook kernels are active\. Only runtimes that are started for jobs are automatically shut down after the scheduled job has completed\. For example, if you schedule a notebook to run once a day for 2 months, the runtime instance will be activated every day for the duration of the scheduled job and deactivated again after the job has finished\. Project users with the **Admin** role can stop all runtimes in the project\. Users added to the project with the **Editor** role can stop the runtimes they started, but can't stop other project users' runtimes\. Users added to the project with the **Viewer** role can't see the runtimes in the project\. You can stop runtimes from: <!-- <ul> --> * The **Environment Runtimes** page, which lists all active runtimes across all projects for your account, by clicking **Administration > Environment runtimes** from the Watson Studio navigation menu\. * Under **Tool runtimes** on the **Environments** page on the **Manage** tab of your project, which lists the active runtimes for a specific project\. * The **Environments** page when you click the Notebook Info icon (![Notebook Info icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/get-information_32.png)) from the notebook toolbar in the notebook editor\. You can stop the runtime under **Runtime status**\. <!-- </ul> --> Idle timeouts for: <!-- <ul> --> * [Jupyter notebook runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html?context=cdpaas&locale=en#cpu) * [Spark runtimes for notebooks and Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html?context=cdpaas&locale=en#spark) * [Notebook with GPU runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html?context=cdpaas&locale=en#gpu) * [RStudio runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html?context=cdpaas&locale=en#rstudio) <!-- </ul> --> ### Jupyter notebook idle timeout ### Runtime idle times differ for the Jupyter notebook runtimes depending on your Watson Studio plan\. 
<!-- <table> --> Idle timeout for default CPU runtimes | Plan | Idle timeout | | --------------------------- | ------------------------------------------------------------ | | Lite | \- Idle stop time: 1 hour <br>\- CUH limit: 10 CUHs | | Professional | \- Idle stop time: 1 hour <br>\- CUH limit: no limit | | Standard (Legacy) | \- Idle stop time: 1 hour <br>\- CUH limit: no limit | | Enterprise (Legacy) | \- Idle stop time: 3 hours <br>\- CUH limit: no limit | | All plans <br>Free runtime | \- Idle stop time: 1 hour <br>\- Maximum lifetime: 12 hours | <!-- </table ""> --> Important: A runtime is started per user and not per notebook\. Stopping a notebook kernel doesn't stop the environment runtime in which the kernel is started because you could have started other notebooks in the same environment\. Only stop a runtime if you are sure that no kernels are active\. ### Spark idle timeout ### All Spark runtimes, for example, for notebooks and Data Refinery, are stopped after 3 hours of inactivity\. The `Default Data Refinery XS runtime` that is used when you refine data in Data Refinery is stopped after an idle time of 1 hour\. Spark runtimes that are started when a job is started, for example to run a Data Refinery flow or a notebook, are stopped when the job finishes\. ### GPU idle timeout ### All GPU runtimes are automatically stopped after 3 hours of inactivity for Enterprise plan users and after 1 hour of inactivity for other paid plan users\. ### RStudio idle timeout ### An RStudio runtime is stopped for you after an idle time of 2 hours\. During this idle time, you will continue to consume CUHs for which you are billed\. Long compute\-intensive jobs are hard stopped after 24 hours\. **Parent topic:**[Projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html) <!-- </article "role="article" "> -->
6349E43EA9B4AC5775DB122E0F6C365D5DB810BF
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-nb-lifecycle.html?context=cdpaas&locale=en
Managing the lifecycle of notebooks and scripts
Managing the lifecycle of notebooks and scripts After you have created and tested your notebooks, you can add them to pipelines, publish them to a catalog so that other catalog members can use the notebook in their projects, or share read-only copies outside of Watson Studio so that people who aren't collaborators in your Watson Studio projects can see and use them. R scripts and Shiny apps can't be published or shared using functionality in a project at this time. You can use any of these methods for notebooks: * [Add notebooks to a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.html) * [Share a URL on social media](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/share-notebooks.html) * [Publish on GitHub](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/github-integration.html) * [Publish as a gist](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-gist.html) * [Publish your notebook to a catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/publish-asset-project.html) Make sure that before you share or publish a notebook, you hide any sensitive code, like credentials, that you don't want others to see! See [Hide sensitive cells in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/hide_code.html). Parent topic:[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html)
# Managing the lifecycle of notebooks and scripts # After you have created and tested your notebooks, you can add them to pipelines, publish them to a catalog so that other catalog members can use the notebook in their projects, or share read\-only copies outside of Watson Studio so that people who aren't collaborators in your Watson Studio projects can see and use them\. R scripts and Shiny apps can't be published or shared using functionality in a project at this time\. You can use any of these methods for notebooks: <!-- <ul> --> * [Add notebooks to a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.html) * [Share a URL on social media](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/share-notebooks.html) * [Publish on GitHub](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/github-integration.html) * [Publish as a gist](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-gist.html) * [Publish your notebook to a catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/publish-asset-project.html) <!-- </ul> --> Make sure that before you share or publish a notebook, you hide any sensitive code, like credentials, that you don't want others to see\! See [Hide sensitive cells in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/hide_code.html)\. **Parent topic:**[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html) <!-- </article "role="article" "> -->
FF69E780BD8FECEAF7A0ADD24C159679F7359F81
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/markd-jupyter.html?context=cdpaas&locale=en
Markdown cheatsheet
Markdown cheatsheet You can use Markdown tagging to improve the readability of a project readme or the Markdown cells in Jupyter notebooks. The differences between Markdown in the readme files and in notebooks are noted. Headings: Use #s followed by a blank space for notebook titles and section headings: # title ## major headings ### subheadings #### 4th level subheadings Emphasis: Use this code: Bold: __string__ or **string**, Italic: _string_ or *string*, Strikethrough: ~~string~~ Mathematical symbols: Use this code: $ mathematical symbols $ Monospace font: Surround text with a back single quotation mark (`). Use monospace for file path and file names and for text users enter or message text users see. Line breaks: Sometimes Markdown doesn’t make line breaks when you want them. Put two spaces at the end of the line, or use this code for a manual line break: <br> Indented quoting: Use a greater-than sign (>) and then a space, then type the text. The text is indented and has a gray horizontal line to the left of it until the next carriage return. Bullets: Use the dash sign (-) with a space after it or a space, a dash, and a space (-), to create a circular bullet. To create a sub bullet, use a tab followed by a dash and a space. You can also use an asterisk instead of a dash, and it works the same. Numbered lists: Start with 1. followed by a space, then your text. Hit return and numbering is automatic. Start each line with some number and a period, then a space. Tab to indent to get subnumbering. Checkboxes in readme files: Use this code for an unchecked box: - [ ] Use this code for a checked box: - [x] Tables in readme files: Use this code: | Heading | Heading | | ----| ----| | text | text | | text | text | Graphics in notebooks: Drag and drop images to the Markdown cell to attach it to the notebook. To add images to other cell types, use graphics that are hosted on the web with this code, substituting url/name with the full URL and name of the image: <img src="url/filename.gif" alt="Alt text" title="Title text" /> Graphics in readme files: Use this code: ![Alt text](image-url) Geometric shapes: Use this code with a decimal or hex reference number from here: [UTF-8 Geometric shapes](https://www.w3schools.com/charsets/ref_utf_geometric.asp)&#reference_number; Horizontal lines: Use three asterisks: *** Internal links: To link to a section, add an anchor above the section title and then create a link. Use this code to create an anchor: <a id="section_ID"></a> Use this code to create the link: [section title](#section-ID) Make sure that the section_ID is unique within the notebook or readme. Alternatively, for notebooks you can skip creating anchors and use this code: [section title](#section-title) For the text in the parentheses, replace spaces and special characters with a hyphen and make all characters lowercase. Test all links! External links: Use this code: [link text](http://url) To create a link that opens in a new window or tab, use this code: <a href="http://url" target="_blank" rel="noopener noreferrer">link text</a> Test all links! Parent topic:[Projects ](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html)
# Markdown cheatsheet # You can use Markdown tagging to improve the readability of a project readme or the Markdown cells in Jupyter notebooks\. The differences between Markdown in the readme files and in notebooks are noted\. **Headings:** Use \#s followed by a blank space for notebook titles and section headings: `#` title `##` major headings `###` subheadings `####` 4th level subheadings **Emphasis:** Use this code: Bold: `__string__` or `**string**`, Italic: `_string_` or `*string*`, Strikethrough: `~~string~~` **Mathematical symbols:** Use this code: `$ mathematical symbols $` **Monospace font:** Surround text with a back single quotation mark (\`)\. Use monospace for file path and file names and for text users enter or message text users see\. **Line breaks:** Sometimes Markdown doesn’t make line breaks when you want them\. Put two spaces at the end of the line, or use this code for a manual line break: `<br>` **Indented quoting:** Use a greater\-than sign (`>`) and then a space, then type the text\. The text is indented and has a gray horizontal line to the left of it until the next carriage return\. **Bullets:** Use the dash sign (`-`) with a space after it or a space, a dash, and a space (`-`), to create a circular bullet\. To create a sub bullet, use a tab followed by a dash and a space\. You can also use an asterisk instead of a dash, and it works the same\. **Numbered lists:** Start with `1.` followed by a space, then your text\. Hit return and numbering is automatic\. Start each line with some number and a period, then a space\. Tab to indent to get subnumbering\. **Checkboxes in readme files:** Use this code for an unchecked box: `- [ ]` Use this code for a checked box: `- [x]` **Tables in readme files:** Use this code: | Heading | Heading | | ----| ----| | text | text | | text | text | **Graphics in notebooks:** Drag and drop images to the Markdown cell to attach it to the notebook\. To add images to other cell types, use graphics that are hosted on the web with this code, substituting *url/name* with the full URL and name of the image: `<img src="url/filename.gif" alt="Alt text" title="Title text" />` **Graphics in readme files:** Use this code: `![Alt text](image-url)` **Geometric shapes:** Use this code with a decimal or hex reference number from here: [UTF\-8 Geometric shapes](https://www.w3schools.com/charsets/ref_utf_geometric.asp) `&#reference_number;` **Horizontal lines:** Use three asterisks: `***` **Internal links:** To link to a section, add an anchor above the section title and then create a link\. Use this code to create an anchor: `<a id="section_ID"></a>` Use this code to create the link: `[section title](#section-ID)` Make sure that the section\_ID is unique within the notebook or readme\. Alternatively, for notebooks you can skip creating anchors and use this code: `[section title](#section-title)` For the text in the parentheses, replace spaces and special characters with a hyphen and make all characters lowercase\. Test all links\! **External links:** Use this code: `[link text](http://url)` To create a link that opens in a new window or tab, use this code: `<a href="http://url" target="_blank" rel="noopener noreferrer">link text</a>` Test all links\! **Parent topic:**[Projects ](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html) <!-- </article "role="article" "> -->
FD32E17FF88251CDFC3FA01A1AD8EEBDA98EDA06
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-access-detailed-info.html?context=cdpaas&locale=en
Accessing asset details
Accessing asset details Display details about an asset and preview data assets in a deployment space. To display details about the asset, click the asset name. For example, click a model name to view details such as the associated software and hardware specifications, the model creation date, and more. Some details, such as the model name, description, and tags, are editable. For data assets, you can also preview the data. Previewing data assets To preview a data asset, click the data asset name. * User's access to the data is based on the API layer. This means that if user's bearer token allows for viewing data, the data preview is displayed. * For tabular data, only a subset of the data is displayed. Also, column names are displayed but their data types are not inferred. * For data in XLS files, only the first worksheet is displayed for preview. * All data from Cloud Object Storage connectors is assumed to be tabular data. MIME types supported for preview: Format Mime types Image image/bmp, image/cmu-raster, image/fif, image/florian, image/g3fax, image/gif, image/ief, image/jpeg, image/jutvision, image/naplps, image/pict, image/png, image/svg+xml, image/vnd.net-fpx, image/vnd.rn-realflash, image/vnd.rn-realpix, image/vnd.wap.wbmp, image/vnd.xiff, image/x-cmu-raster, image/x-dwg, image/x-icon, image/x-jg, image/x-jps, image/x-niff, image/x-pcx, image/x-pict, image/x-portable-anymap, image/x-portable-bitmap, image/x-portable-greymap, image/x-portable-pixmap, image/x-quicktime, image/x-rgb, image/x-tiff, image/x-windows-bmp, image/x-xwindowdump, image/xbm, image/xpm Text application/json, text/asp, text/css, text/csv, text/html, text/mcf, text/pascal, text/plain, text/richtext, text/scriplet, text/tab-separated-values, text/tab-separated-values, text/uri-list, text/vnd.abc, text/vnd.fmi.flexstor, text/vnd.rn-realtext, text/vnd.wap.wml, text/vnd.wap.wmlscript, text/webviewhtml, text/x-asm, text/x-audiosoft-intra, text/x-c, text/x-component, text/x-fortran, text/x-h, text/x-java-source, text/x-la-asf, text/x-m, text/x-pascal, text/x-script, text/x-script.csh, text/x-script.elisp, text/x-script.ksh, text/x-script.lisp, text/x-script.perl, text/x-script.perl-module, text/x-script.python, text/x-script.rexx, text/x-script.tcl, text/x-script.tcsh, text/x-script.zsh, text/x-server-parsed-html, text/x-setext, text/x-sgml, text/x-speech, text/x-uil, text/x-uuencode, text/x-vcalendar, text/xml Tabular data text/csv, application/excel, application/vnd.ms-excel, application/vnd.openxmlformats-officedocument.spreadsheetml.sheet, data from connections Parent topic:[Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html)
# Accessing asset details # Display details about an asset and preview data assets in a deployment space\. To display details about the asset, click the asset name\. For example, click a model name to view details such as the associated software and hardware specifications, the model creation date, and more\. Some details, such as the model name, description, and tags, are editable\. For data assets, you can also preview the data\. ## Previewing data assets ## To preview a data asset, click the data asset name\. <!-- <ul> --> * User's access to the data is based on the API layer\. This means that if user's bearer token allows for viewing data, the data preview is displayed\. * For tabular data, only a subset of the data is displayed\. Also, column names are displayed but their data types are not inferred\. * For data in XLS files, only the first worksheet is displayed for preview\. * All data from Cloud Object Storage connectors is assumed to be tabular data\. <!-- </ul> --> MIME types supported for preview: <!-- <table> --> | Format | Mime types | | ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | Image | image/bmp, image/cmu\-raster, image/fif, image/florian, image/g3fax, image/gif, image/ief, image/jpeg, image/jutvision, image/naplps, image/pict, image/png, image/svg\+xml, image/vnd\.net\-fpx, image/vnd\.rn\-realflash, image/vnd\.rn\-realpix, image/vnd\.wap\.wbmp, image/vnd\.xiff, image/x\-cmu\-raster, image/x\-dwg, image/x\-icon, image/x\-jg, image/x\-jps, image/x\-niff, image/x\-pcx, image/x\-pict, image/x\-portable\-anymap, image/x\-portable\-bitmap, image/x\-portable\-greymap, image/x\-portable\-pixmap, image/x\-quicktime, image/x\-rgb, image/x\-tiff, image/x\-windows\-bmp, image/x\-xwindowdump, image/xbm, image/xpm | | Text | application/json, text/asp, text/css, text/csv, text/html, text/mcf, text/pascal, text/plain, text/richtext, text/scriplet, text/tab\-separated\-values, text/tab\-separated\-values, text/uri\-list, text/vnd\.abc, text/vnd\.fmi\.flexstor, text/vnd\.rn\-realtext, text/vnd\.wap\.wml, text/vnd\.wap\.wmlscript, text/webviewhtml, text/x\-asm, text/x\-audiosoft\-intra, text/x\-c, text/x\-component, text/x\-fortran, text/x\-h, text/x\-java\-source, text/x\-la\-asf, text/x\-m, text/x\-pascal, text/x\-script, text/x\-script\.csh, text/x\-script\.elisp, text/x\-script\.ksh, text/x\-script\.lisp, text/x\-script\.perl, text/x\-script\.perl\-module, text/x\-script\.python, text/x\-script\.rexx, text/x\-script\.tcl, text/x\-script\.tcsh, text/x\-script\.zsh, text/x\-server\-parsed\-html, text/x\-setext, text/x\-sgml, text/x\-speech, text/x\-uil, text/x\-uuencode, text/x\-vcalendar, text/xml | | Tabular data | text/csv, 
application/excel, application/vnd\.ms\-excel, application/vnd\.openxmlformats\-officedocument\.spreadsheetml\.sheet, data from connections | <!-- </table ""> --> **Parent topic:**[Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html) <!-- </article "role="article" "> -->
B518A7A2D4AA3B05564C965889116F6A6151A34B
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html?context=cdpaas&locale=en
Authenticating for programmatic access
Authenticating for programmatic access To use Watson Machine Learning with the Python client library or the REST API, you must authenticate to secure your work. Learn about the different ways to authenticate and how to apply them to the service of your choosing. You use IBM Cloud® Identity and Access Management (IAM) to make authenticated requests to public IBM Watson™ services. With IAM access policies, you can assign access to more than one resource from a single key. In addition, a user, service ID, and service instance can hold multiple API keys. Security overview Refer to the section that describes your security needs. * [Authentication credentials](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html?context=cdpaas&locale=enterminology) * [Python client](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html?context=cdpaas&locale=enpython-client) * [Rest API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html?context=cdpaas&locale=enrest-api) Authentication credentials These terms relate to the security requirements described in this topic. * API keys allow you to easily authenticate when you are using the Python client or APIs and can be used across multiple services. API Keys are considered confidential because they are used to grant access. Treat all API keys as you would a password because anyone with your API key can access your service. * An IAM token is an authentication token that is required to access IBM Cloud services. You can generate a token by using your API key in the token request. For details on using IAM tokens, refer to [Authenticating to Watson Machine Learning API](https://cloud.ibm.com/apidocs/machine-learningauthentication). To authenticate to a service through its API, pass your credentials to the API. You can pass either a bearer token in an authorization header or an API key. Generating an API key To generate an API key from your IBM Cloud user account, go to [Manage access and users - API Keys](https://cloud.ibm.com/iam/apikeys) and create or select an API key for your user account. You can also generate and rotate API keys from Profile and settings > User API key. For more information, see [Managing the user API key](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-apikeys.html). Authenticate with an IAM token IAM tokens are temporary security credentials that are valid for 60 minutes. When a token expires, you generate a new one. Tokens can be useful for temporary access to resources. For more information, see [Generating an IBM Cloud IAM token by using an API key](https://cloud.ibm.com/docs/account?topic=account-iamtoken_from_apikey). Getting a service-level token You can also authenticate with a service-level token. To generate a service-level token: 1. Refer to the IBM Cloud instructions for [creating a Service ID](https://cloud.ibm.com/iam/serviceids). 2. Generate an API key for that Service ID. 3. Open the space where you plan to keep your deployable assets. 4. On the Access control tab, add the Service ID and assign an access role of Admin or Editor. You can use the service-level token with your API scoring requests. 
Interfaces * [Python client](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html?context=cdpaas&locale=enpython) * [REST API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html?context=cdpaas&locale=enrest-api) Python client Refer to: [Watson Machine Learning Python client ![external link](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/launch-glyph.png)](https://ibm.github.io/watson-machine-learning-sdk/) To create an instance of the Watson Machine Learning Python client object, you need to pass your credentials to Watson Machine Learning API client. wml_credentials = { "apikey":"123456789", "url": " https://HIJKL" } from ibm_watson_machine_learning import APIClient wml_client = APIClient(wml_credentials) Note:Even though you do not explicitly provide an instance_id, it will be picked up from the associated space or project for billing purposes. For details on plans and billing for Watson Machine Learning services, refer to [Watson Machine Learning plans and runtime usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html). Refer to [sample notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html) for examples of how to authenticate and then score a model by using the Python client. REST API Refer to: [Watson Machine Learning REST API ![external link](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/launch-glyph.png)](https://cloud.ibm.com/apidocs/machine-learning) To use the Watson Machine Learning REST API, you must obtain an IBM Cloud Identity and Access Management (IAM) token. In this example, you would supply your API key in place of the example key. cURL example curl -k -X POST --header "Content-Type: application/x-www-form-urlencoded" --header "Accept: application/json" --data-urlencode "grant_type=urn:ibm:params:oauth:grant-type:apikey" --data-urlencode "apikey=123456789" "https://iam.cloud.ibm.com/identity/token" The obtained IAM token needs to be prefixed with the word Bearer, and passed in the Authorization header for API calls. Python example import requests # Paste your Watson Machine Learning service apikey here apikey = "123456789" # Get an IAM token from IBM Cloud url = "https://iam.cloud.ibm.com/identity/token" headers = { "Content-Type" : "application/x-www-form-urlencoded" } data = "apikey=" + apikey + "&grant_type=urn:ibm:params:oauth:grant-type:apikey" response = requests.post( url, headers=headers, data=data ) iam_token = response.json()["access_token"] Node.js example var btoa = require( "btoa" ); var request = require( 'request' ); // Paste your Watson Machine Learning service apikey here var apikey = "123456789"; // Use this code as written to get an access token from IBM Cloud REST API // var IBM_Cloud_IAM_uid = "bx"; var IBM_Cloud_IAM_pwd = "bx"; var options = { url : "https://iam.cloud.ibm.com/identity/token", headers : { "Content-Type" : "application/x-www-form-urlencoded", "Authorization" : "Basic " + btoa( IBM_Cloud_IAM_uid + ":" + IBM_Cloud_IAM_pwd ) }, body : "apikey=" + apikey + "&grant_type=urn:ibm:params:oauth:grant-type:apikey" }; request.post( options, function( error, response, body ) { var iam_token = JSON.parse( body )["access_token"]; } ); Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
# Authenticating for programmatic access # To use Watson Machine Learning with the Python client library or the REST API, you must authenticate to secure your work\. Learn about the different ways to authenticate and how to apply them to the service of your choosing\. You use IBM Cloud® Identity and Access Management (IAM) to make authenticated requests to public IBM Watson™ services\. With IAM access policies, you can assign access to more than one resource from a single key\. In addition, a user, service ID, and service instance can hold multiple API keys\. ## Security overview ## Refer to the section that describes your security needs\. <!-- <ul> --> * [Authentication credentials](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html?context=cdpaas&locale=en#terminology) * [Python client](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html?context=cdpaas&locale=en#python-client) * [Rest API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html?context=cdpaas&locale=en#rest-api) <!-- </ul> --> ## Authentication credentials ## These terms relate to the security requirements described in this topic\. <!-- <ul> --> * **API keys** allow you to easily authenticate when you are using the Python client or APIs and can be used across multiple services\. API Keys are considered confidential because they are used to grant access\. Treat all API keys as you would a password because anyone with your API key can access your service\. * An **IAM token** is an authentication token that is required to access IBM Cloud services\. You can generate a token by using your API key in the token request\. For details on using IAM tokens, refer to [Authenticating to Watson Machine Learning API](https://cloud.ibm.com/apidocs/machine-learning#authentication)\. <!-- </ul> --> To authenticate to a service through its API, pass your credentials to the API\. You can pass either a bearer token in an authorization header or an API key\. ### Generating an API key ### To generate an API key from your IBM Cloud user account, go to [Manage access and users \- API Keys](https://cloud.ibm.com/iam/apikeys) and create or select an API key for your user account\. You can also generate and rotate API keys from **Profile and settings > User API key**\. For more information, see [Managing the user API key](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-apikeys.html)\. ### Authenticate with an IAM token ### IAM tokens are temporary security credentials that are valid for 60 minutes\. When a token expires, you generate a new one\. Tokens can be useful for temporary access to resources\. For more information, see [Generating an IBM Cloud IAM token by using an API key](https://cloud.ibm.com/docs/account?topic=account-iamtoken_from_apikey)\. ### Getting a service\-level token ### You can also authenticate with a service\-level token\. To generate a service\-level token: <!-- <ol> --> 1. Refer to the IBM Cloud instructions for [creating a Service ID](https://cloud.ibm.com/iam/serviceids)\. 2. Generate an API key for that Service ID\. 3. Open the space where you plan to keep your deployable assets\. 4. On the **Access control** tab, add the Service ID and assign an access role of **Admin** or **Editor**\. <!-- </ol> --> You can use the service\-level token with your API scoring requests\. 
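For example, the following minimal, illustrative sketch uses a service\-level token in a scoring request\. The Service ID API key, the scoring URL, the version date, and the payload fields are placeholders that you replace with values from your own deployment; check the Watson Machine Learning REST API reference for the exact request format for your asset type\.

    import requests

    # Placeholders: paste the API key that you generated for your Service ID, and copy the
    # scoring endpoint from the deployment details page in your deployment space.
    service_id_apikey = "<Service ID API key>"
    scoring_url = "<scoring endpoint of your online deployment>?version=2023-11-01"

    # Exchange the Service ID API key for an IAM token.
    token_response = requests.post(
        "https://iam.cloud.ibm.com/identity/token",
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        data={"apikey": service_id_apikey, "grant_type": "urn:ibm:params:oauth:grant-type:apikey"},
    )
    iam_token = token_response.json()["access_token"]

    # Pass the token as a Bearer token in the Authorization header of the scoring request.
    # The field names and values are placeholders for your model's input schema.
    payload = {"input_data": [{"fields": ["age", "income"], "values": [[35, 42000]]}]}
    scoring_response = requests.post(
        scoring_url,
        headers={"Authorization": "Bearer " + iam_token, "Content-Type": "application/json"},
        json=payload,
    )
    print(scoring_response.json())
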
## Interfaces ## <!-- <ul> --> * [Python client](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html?context=cdpaas&locale=en#python) * [REST API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html?context=cdpaas&locale=en#rest-api) <!-- </ul> --> ### Python client ### Refer to: [Watson Machine Learning Python client ![external link](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/launch-glyph.png)](https://ibm.github.io/watson-machine-learning-sdk/) To create an instance of the Watson Machine Learning Python client object, you need to pass your credentials to Watson Machine Learning API client\. wml_credentials = { "apikey":"123456789", "url": " https://HIJKL" } from ibm_watson_machine_learning import APIClient wml_client = APIClient(wml_credentials) Note:Even though you do not explicitly provide an `instance_id`, it will be picked up from the associated space or project for billing purposes\. For details on plans and billing for Watson Machine Learning services, refer to [Watson Machine Learning plans and runtime usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)\. Refer to [sample notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html) for examples of how to authenticate and then score a model by using the Python client\. ### REST API ### Refer to: [Watson Machine Learning REST API ![external link](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/launch-glyph.png)](https://cloud.ibm.com/apidocs/machine-learning) To use the Watson Machine Learning REST API, you must obtain an IBM Cloud Identity and Access Management (IAM) token\. In this example, you would supply your API key in place of the example key\. #### cURL example #### curl -k -X POST \ --header "Content-Type: application/x-www-form-urlencoded" \ --header "Accept: application/json" \ --data-urlencode "grant_type=urn:ibm:params:oauth:grant-type:apikey" \ --data-urlencode "apikey=123456789" \ "https://iam.cloud.ibm.com/identity/token" The obtained IAM token needs to be prefixed with the word `Bearer`, and passed in the Authorization header for API calls\. 
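For example, a minimal sketch of passing the token in Python, where the endpoint URL is a placeholder that you copy from the API reference:

    import requests

    # Placeholders: obtain iam_token as shown in the examples that follow, and copy the
    # endpoint that you want to call from the Watson Machine Learning API reference.
    iam_token = "<IAM token returned by https://iam.cloud.ibm.com/identity/token>"
    api_url = "<Watson Machine Learning API endpoint>"

    # The token is prefixed with "Bearer" and passed in the Authorization header.
    headers = {"Authorization": "Bearer " + iam_token, "Accept": "application/json"}
    response = requests.get(api_url, headers=headers)
    print(response.status_code)
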
#### Python example #### import requests # Paste your Watson Machine Learning service apikey here apikey = "123456789" # Get an IAM token from IBM Cloud url = "https://iam.cloud.ibm.com/identity/token" headers = { "Content-Type" : "application/x-www-form-urlencoded" } data = "apikey=" + apikey + "&grant_type=urn:ibm:params:oauth:grant-type:apikey" response = requests.post( url, headers=headers, data=data ) iam_token = response.json()["access_token"] #### Node\.js example #### var btoa = require( "btoa" ); var request = require( 'request' ); // Paste your Watson Machine Learning service apikey here var apikey = "123456789"; // Use this code as written to get an access token from IBM Cloud REST API // var IBM_Cloud_IAM_uid = "bx"; var IBM_Cloud_IAM_pwd = "bx"; var options = { url : "https://iam.cloud.ibm.com/identity/token", headers : { "Content-Type" : "application/x-www-form-urlencoded", "Authorization" : "Basic " + btoa( IBM_Cloud_IAM_uid + ":" + IBM_Cloud_IAM_pwd ) }, body : "apikey=" + apikey + "&grant_type=urn:ibm:params:oauth:grant-type:apikey" }; request.post( options, function( error, response, body ) { var iam_token = JSON.parse( body )["access_token"]; } ); **Parent topic:**[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) <!-- </article "role="article" "> -->
CD27E36E95AE5324468C33CF3A112DC1611CA74C
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-create-custom-software-spec.html?context=cdpaas&locale=en
Customizing with third-party and private Python libraries
Customizing with third-party and private Python libraries If your model requires custom components such as user-defined transformers, estimators, or user-defined tensors, you can create a custom software specification that is derived from a base, or a predefined specification. Python functions and Python scripts also support custom software specifications. You can use custom software specification to reference any third-party libraries, user-created Python packages, or both. Third-party libraries or user-created Python packages must be specified as package extensions so that they can be referenced in a custom software specification. You can customize deployment runtimes in these ways: * [Define customizations in a Watson Studio project and then promote them to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-create-custom-software-spec.html?context=cdpaas&locale=encustom-ws) * [Create package extensions and custom software specifications in a deployment space by using the Watson Machine Learning Python client](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-create-custom-software-spec.html?context=cdpaas&locale=encustom-wml) For more information, see [Troubleshooting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-create-custom-software-spec.html?context=cdpaas&locale=ents). Defining customizations in a Watson Studio project and then promoting them to a deployment space Environments in Watson Studio projects can be customized to include third-party libraries that can be installed from Anaconda or from the PyPI repository. For more information, see [Environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html). As part of custom environment creation, these steps are performed internally (visible to the user): * A package extension that contains the details of third-party libraries is created in conda YAML format. * A custom software specification with the same name as the custom environment is created and the package extension that is created is associated with this custom software specification. The models or Python functions/scripts created with the custom environment must reference the custom software specification when they are saved in Watson Machine Learning repository in the project scope. Propagating software specifications and package extensions from projects to deployment spaces To export custom software specifications and package extensions that were created in a Watson Studio project to a deployment space: 1. From your project interface, click the Manage tab. 2. Select Environments. 3. Click the Templates tab. 4. From your custom environment's Options menu, select Promote to space. ![Selecting "Promote to space" for a custom environment in Watson Studio interface](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/promote-custom-env-from-ws.png) Alternatively, when you promote any model or Python function that is associated with a custom environment from a Watson Studio project to a deployment space, the associated custom software specification and package extension is also promoted to the deployment space. If you want to update software specifications and package extensions after you promote them to deployment space, follow these steps: 1. In the deployment space, delete the software specifications, package extensions, and associated models (optional) by using the Watson Machine Learning Python client. 2. 
In a project, promote the model, function, or script that is associated with the changed custom software specification and package extension to the space. Software specifications are also included when you import a project or space that includes one. Creating package extensions and custom software specifications in a deployment space by using the Watson Machine Learning Python client You can use the Watson Machine Learning APIs or Python client to define a custom software specification that is derived from a base specification. High-level steps to create a custom software specification that uses third-party libraries or user-created Python packages: 1. Optional: [Save a conda YAML file that contains a list of third-party libraries](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-create-custom-software-spec.html?context=cdpaas&locale=ensave-conda-yaml) or [save a user-created Python library and create a package extension](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-create-custom-software-spec.html?context=cdpaas&locale=ensave-user-created). Note: This step is not required if the model does not have any dependency on a third-party library or a user-created Python library. 2. Create a custom software specification 3. Add a reference of the package extensions to the custom software specification that you created. Saving a conda YAML file that contains a list of third-party libraries To save a conda YAML file that contains a list of third-party libraries as a package extension and create a custom software specification that is linked to the package extension: 1. Authenticate and create the client. Refer to [Authentication](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html). 2. Create and set the default deployment space, then list available software specifications. metadata = { wml_client.spaces.ConfigurationMetaNames.NAME: 'examples-create-software-spec', wml_client.spaces.ConfigurationMetaNames.DESCRIPTION: 'For my models' } space_details = wml_client.spaces.store(meta_props=metadata) space_uid = wml_client.spaces.get_id(space_details) set the default space wml_client.set.default_space(space_uid) see available meta names for software specs print('Available software specs configuration:', wml_client.software_specifications.ConfigurationMetaNames.get()) wml_client.software_specifications.list() asset_id = 'undefined' pe_asset_id = 'undefined' 3. Create the metadata for package extensions to add to the base specification. pe_metadata = { wml_client.package_extensions.ConfigurationMetaNames.NAME: 'My custom library', optional: wml_client.software_specifications.ConfigurationMetaNames.DESCRIPTION: wml_client.package_extensions.ConfigurationMetaNames.TYPE: 'conda_yml' } 4. Create a yaml file that contains the list of packages and then save it as customlibrary.yaml. Example yaml file: name: add-regex-package dependencies: - regex For more information, see [Examples of customizations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html). 5. Store package extension information. pe_asset_details = wml_client.package_extensions.store( meta_props=pe_metadata, file_path='customlibrary.yaml' ) pe_asset_id = wml_client.package_extensions.get_id(pe_asset_details) 6. Create the metadata for the software specification and store the software specification. 
Get the id of the base software specification base_id = wml_client.software_specifications.get_id_by_name('default_py3.9') create the metadata for software specs ss_metadata = { wml_client.software_specifications.ConfigurationMetaNames.NAME: 'Python 3.9 with pre-installed ML package', wml_client.software_specifications.ConfigurationMetaNames.DESCRIPTION: 'Adding some custom libraries like regex', optional wml_client.software_specifications.ConfigurationMetaNames.BASE_SOFTWARE_SPECIFICATION: {'guid': base_id}, wml_client.software_specifications.ConfigurationMetaNames.PACKAGE_EXTENSIONS: [{'guid': pe_asset_id}] } store the software spec ss_asset_details = wml_client.software_specifications.store(meta_props=ss_metadata) get the id of the new asset asset_id = wml_client.software_specifications.get_id(ss_asset_details) view new software specification details import pprint as pp ss_asset_details = wml_client.software_specifications.get_details(asset_id) print('Package extensions', pp.pformat( ss_asset_details['entity']['package_extensions'] )) Saving a user-created Python library and creating a package extension For more information, see [Requirements for using custom components in models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-custom_libs_overview.html). To save a user-created Python package as a package extension and create a custom software specification that is linked to the package extension: 1. Authenticate and create the client. Refer to [Authentication](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html). 2. Create and set the default deployment space, then list available software specifications. metadata = { wml_client.spaces.ConfigurationMetaNames.NAME: 'examples-create-software-spec', wml_client.spaces.ConfigurationMetaNames.DESCRIPTION: 'For my models' } space_details = wml_client.spaces.store(meta_props=metadata) space_uid = wml_client.spaces.get_id(space_details) set the default space wml_client.set.default_space(space_uid) see available meta names for software specs print('Available software specs configuration:', wml_client.software_specifications.ConfigurationMetaNames.get()) wml_client.software_specifications.list() asset_id = 'undefined' pe_asset_id = 'undefined' 3. Create the metadata for package extensions to add to the base specification. Note:You can specify pip_zip only as a value for the wml_client.package_extensions.ConfigurationMetaNames.TYPE metadata property. pe_metadata = { wml_client.package_extensions.ConfigurationMetaNames.NAME: 'My Python library', optional: wml_client.software_specifications.ConfigurationMetaNames.DESCRIPTION: wml_client.package_extensions.ConfigurationMetaNames.TYPE: 'pip.zip' } 4. Specify the path of the user-created Python library. python_lib_file_path="my-python-library-0.1.zip" For more information, see [Requirements for using custom components in models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-custom_libs_overview.html). 5. Store package extension information. pe_asset_details = wml_client.package_extensions.store( meta_props=pe_metadata, file_path=python_lib_file_path ) pe_asset_id = wml_client.package_extensions.get_id(pe_asset_details) 6. Create the metadata for the software specification and store the software specification. 
Get the id of the base software specification base_id = wml_client.software_specifications.get_id_by_name('default_py3.9') create the metadata for software specs ss_metadata = { wml_client.software_specifications.ConfigurationMetaNames.NAME: 'Python 3.9 with pre-installed ML package', wml_client.software_specifications.ConfigurationMetaNames.DESCRIPTION: 'Adding some custom libraries like regex', optional wml_client.software_specifications.ConfigurationMetaNames.BASE_SOFTWARE_SPECIFICATION: {'guid': base_id}, wml_client.software_specifications.ConfigurationMetaNames.PACKAGE_EXTENSIONS: [{'guid': pe_asset_id}] } store the software spec ss_asset_details = wml_client.software_specifications.store(meta_props=ss_metadata) get the id of the new asset asset_id = wml_client.software_specifications.get_id(ss_asset_details) view new software specification details import pprint as pp ss_asset_details = wml_client.software_specifications.get_details(asset_id) print('Package extensions', pp.pformat( ss_asset_details['entity']['package_extensions'] )) Troubleshooting When a conda yml based custom library installation fails with this error: Encountered error while installing custom library, try these alternatives: * Use a different version of the same package that is available in Anaconda for the concerned Python version. * Install the library from the pypi repository, by using pip. Edit the conda yml installation file contents: name: <conda yml file name> dependencies: - numpy - pip: - pandas==1.2.5 Parent topic:[Customizing deployment runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-customize.html)
# Customizing with third\-party and private Python libraries # If your model requires custom components such as user\-defined transformers, estimators, or user\-defined tensors, you can create a custom software specification that is derived from a base, or a predefined specification\. Python functions and Python scripts also support custom software specifications\. You can use custom software specification to reference any third\-party libraries, user\-created Python packages, or both\. Third\-party libraries or user\-created Python packages must be specified as package extensions so that they can be referenced in a custom software specification\. You can customize deployment runtimes in these ways: <!-- <ul> --> * [Define customizations in a Watson Studio project and then promote them to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-create-custom-software-spec.html?context=cdpaas&locale=en#custom-ws) * [Create package extensions and custom software specifications in a deployment space by using the Watson Machine Learning Python client](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-create-custom-software-spec.html?context=cdpaas&locale=en#custom-wml) <!-- </ul> --> For more information, see [Troubleshooting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-create-custom-software-spec.html?context=cdpaas&locale=en#ts)\. ## Defining customizations in a Watson Studio project and then promoting them to a deployment space ## Environments in Watson Studio projects can be customized to include third\-party libraries that can be installed from Anaconda or from the PyPI repository\. For more information, see [Environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html)\. As part of custom environment creation, these steps are performed internally (visible to the user): <!-- <ul> --> * A package extension that contains the details of third\-party libraries is created in `conda YAML format`\. * A custom software specification with the same name as the custom environment is created and the package extension that is created is associated with this custom software specification\. <!-- </ul> --> The models or Python functions/scripts created with the custom environment must reference the custom software specification when they are saved in Watson Machine Learning repository in the project scope\. ### Propagating software specifications and package extensions from projects to deployment spaces ### To export custom software specifications and package extensions that were created in a Watson Studio project to a deployment space: <!-- <ol> --> 1. From your project interface, click the **Manage** tab\. 2. Select **Environments**\. 3. Click the **Templates** tab\. 4. From your custom environment's **Options** menu, select **Promote to space**\. <!-- </ol> --> ![Selecting "Promote to space" for a custom environment in Watson Studio interface](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/promote-custom-env-from-ws.png) Alternatively, when you promote any model or Python function that is associated with a custom environment from a Watson Studio project to a deployment space, the associated custom software specification and package extension is also promoted to the deployment space\. If you want to update software specifications and package extensions after you promote them to deployment space, follow these steps: <!-- <ol> --> 1. 
In the deployment space, delete the software specifications, package extensions, and associated models (optional) by using the Watson Machine Learning Python client\. 2. In a project, promote the model, function, or script that is associated with the changed custom software specification and package extension to the space\. <!-- </ol> --> Software specifications are also included when you import a project or space that includes one\. ## Creating package extensions and custom software specifications in a deployment space by using the Watson Machine Learning Python client ## You can use the Watson Machine Learning APIs or Python client to define a custom software specification that is derived from a base specification\. High\-level steps to create a custom software specification that uses third\-party libraries or user\-created Python packages: <!-- <ol> --> 1. Optional: [Save a conda YAML file that contains a list of third\-party libraries](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-create-custom-software-spec.html?context=cdpaas&locale=en#save-conda-yaml) or [save a user\-created Python library and create a package extension](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-create-custom-software-spec.html?context=cdpaas&locale=en#save-user-created)\. Note: This step is not required if the model does not have any dependency on a third-party library or a user-created Python library. 2. Create a custom software specification 3. Add a reference of the package extensions to the custom software specification that you created\. <!-- </ol> --> ### Saving a conda YAML file that contains a list of third\-party libraries ### To save a conda YAML file that contains a list of third\-party libraries as a package extension and create a custom software specification that is linked to the package extension: <!-- <ol> --> 1. Authenticate and create the client\. Refer to [Authentication](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html). 2. Create and set the default deployment space, then list available software specifications\. metadata = { wml_client.spaces.ConfigurationMetaNames.NAME: 'examples-create-software-spec', wml_client.spaces.ConfigurationMetaNames.DESCRIPTION: 'For my models' } space_details = wml_client.spaces.store(meta_props=metadata) space_uid = wml_client.spaces.get_id(space_details) # set the default space wml_client.set.default_space(space_uid) # see available meta names for software specs print('Available software specs configuration:', wml_client.software_specifications.ConfigurationMetaNames.get()) wml_client.software_specifications.list() asset_id = 'undefined' pe_asset_id = 'undefined' 3. Create the metadata for package extensions to add to the base specification\. pe_metadata = { wml_client.package_extensions.ConfigurationMetaNames.NAME: 'My custom library', # optional: # wml_client.software_specifications.ConfigurationMetaNames.DESCRIPTION: wml_client.package_extensions.ConfigurationMetaNames.TYPE: 'conda_yml' } 4. Create a yaml file that contains the list of packages and then save it as `customlibrary.yaml`\. Example yaml file: name: add-regex-package dependencies: - regex For more information, see [Examples of customizations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html). 5. Store package extension information\. 
pe_asset_details = wml_client.package_extensions.store( meta_props=pe_metadata, file_path='customlibrary.yaml' ) pe_asset_id = wml_client.package_extensions.get_id(pe_asset_details) 6. Create the metadata for the software specification and store the software specification\. # Get the id of the base software specification base_id = wml_client.software_specifications.get_id_by_name('default_py3.9') # create the metadata for software specs ss_metadata = { wml_client.software_specifications.ConfigurationMetaNames.NAME: 'Python 3.9 with pre-installed ML package', wml_client.software_specifications.ConfigurationMetaNames.DESCRIPTION: 'Adding some custom libraries like regex', # optional wml_client.software_specifications.ConfigurationMetaNames.BASE_SOFTWARE_SPECIFICATION: {'guid': base_id}, wml_client.software_specifications.ConfigurationMetaNames.PACKAGE_EXTENSIONS: [{'guid': pe_asset_id}] } # store the software spec ss_asset_details = wml_client.software_specifications.store(meta_props=ss_metadata) # get the id of the new asset asset_id = wml_client.software_specifications.get_id(ss_asset_details) # view new software specification details import pprint as pp ss_asset_details = wml_client.software_specifications.get_details(asset_id) print('Package extensions', pp.pformat( ss_asset_details['entity']['package_extensions'] )) <!-- </ol> --> ### Saving a user\-created Python library and creating a package extension ### For more information, see [Requirements for using custom components in models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-custom_libs_overview.html)\. To save a user\-created Python package as a package extension and create a custom software specification that is linked to the package extension: <!-- <ol> --> 1. Authenticate and create the client\. Refer to [Authentication](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html). 2. Create and set the default deployment space, then list available software specifications\. metadata = { wml_client.spaces.ConfigurationMetaNames.NAME: 'examples-create-software-spec', wml_client.spaces.ConfigurationMetaNames.DESCRIPTION: 'For my models' } space_details = wml_client.spaces.store(meta_props=metadata) space_uid = wml_client.spaces.get_id(space_details) # set the default space wml_client.set.default_space(space_uid) # see available meta names for software specs print('Available software specs configuration:', wml_client.software_specifications.ConfigurationMetaNames.get()) wml_client.software_specifications.list() asset_id = 'undefined' pe_asset_id = 'undefined' 3. Create the metadata for package extensions to add to the base specification\. Note:You can specify `pip_zip` only as a value for the `wml_client.package_extensions.ConfigurationMetaNames.TYPE` metadata property. pe_metadata = { wml_client.package_extensions.ConfigurationMetaNames.NAME: 'My Python library', # optional: # wml_client.package_extensions.ConfigurationMetaNames.DESCRIPTION: wml_client.package_extensions.ConfigurationMetaNames.TYPE: 'pip_zip' } 4. Specify the path of the user\-created Python library\. python_lib_file_path="my-python-library-0.1.zip" For more information, see [Requirements for using custom components in models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-custom_libs_overview.html). 5. Store package extension information\. 
pe_asset_details = wml_client.package_extensions.store( meta_props=pe_metadata, file_path=python_lib_file_path ) pe_asset_id = wml_client.package_extensions.get_id(pe_asset_details) 6. Create the metadata for the software specification and store the software specification\. # Get the id of the base software specification base_id = wml_client.software_specifications.get_id_by_name('default_py3.9') # create the metadata for software specs ss_metadata = { wml_client.software_specifications.ConfigurationMetaNames.NAME: 'Python 3.9 with pre-installed ML package', wml_client.software_specifications.ConfigurationMetaNames.DESCRIPTION: 'Adding some custom libraries like regex', # optional wml_client.software_specifications.ConfigurationMetaNames.BASE_SOFTWARE_SPECIFICATION: {'guid': base_id}, wml_client.software_specifications.ConfigurationMetaNames.PACKAGE_EXTENSIONS: [{'guid': pe_asset_id}] } # store the software spec ss_asset_details = wml_client.software_specifications.store(meta_props=ss_metadata) # get the id of the new asset asset_id = wml_client.software_specifications.get_id(ss_asset_details) # view new software specification details import pprint as pp ss_asset_details = wml_client.software_specifications.get_details(asset_id) print('Package extensions', pp.pformat( ss_asset_details['entity']['package_extensions'] )) <!-- </ol> --> ## Troubleshooting ## When a conda yml based custom library installation fails with this error: `Encountered error while installing custom library`, try these alternatives: <!-- <ul> --> * Use a different version of the same package that is available in Anaconda for the concerned Python version\. * Install the library from the pypi repository, by using pip\. Edit the conda yml installation file contents: name: <conda yml file name> dependencies: - numpy - pip: - pandas==1.2.5 <!-- </ul> --> **Parent topic:**[Customizing deployment runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-customize.html) <!-- </article "role="article" "> -->
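After the custom software specification is stored, you typically reference it when you save an asset to the Watson Machine Learning repository. The following sketch is illustrative only: it reuses the `asset_id` that is returned by the steps above and assumes a trained scikit-learn model object named `model` and a placeholder model type string; check the model type and software specification names that apply to your runtime.

    # Reference the custom software specification when storing a model (illustrative sketch)
    model_metadata = {
        wml_client.repository.ModelMetaNames.NAME: "My model with custom libraries",
        wml_client.repository.ModelMetaNames.TYPE: "scikit-learn_1.1",  # placeholder model type
        wml_client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: asset_id  # ID of the custom software specification
    }

    model_details = wml_client.repository.store_model(model=model, meta_props=model_metadata)
    model_id = wml_client.repository.get_model_id(model_details)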
9DFF39B0FB5FE6195AA75E50040B6D669FCE2BB6
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-custom_libs_overview.html?context=cdpaas&locale=en
Requirements for using custom components in ML models
Requirements for using custom components in ML models You can define your own transformers, estimators, functions, classes, and tensor operations in models that you deploy in IBM Watson Machine Learning as online deployments. Defining and using custom components To use custom components in your models, you need to package your custom components in a [Python distribution package](https://packaging.python.org/glossary/term-distribution-package). Package requirements * The package type must be: [source distribution](https://packaging.python.org/glossary/term-source-distribution-or-sdis) (distributions of type Wheel and Egg are not supported) * The package file format must be: .zip * Any third-party dependencies for your custom components must be installable by pip and must be passed to the install_requires argument of the setup function of the setuptools library. Refer to: [Creating a source distribution](https://docs.python.org/2/distutils/sourcedist.html) Storing your custom package You must take extra steps when you store your trained model in the Watson Machine Learning repository: * Store your custom package in the [Watson Machine Learning repository](https://ibm.github.io/watson-machine-learning-sdk/core_api.htmlibm_watson_machine_learning.runtimes.Runtimes.store_library) (use the runtimes.store_library function from the Watson Machine Learning Python client, or the store libraries Watson Machine Learning CLI command.) * Create a runtime resource object that references your stored custom package, and then [store the runtime resource object](https://ibm.github.io/watson-machine-learning-sdk/core_api.htmlibm_watson_machine_learning.runtimes.Runtimes.store) in the Watson Machine Learning repository (use the runtimes.store function, or the store runtimes command.) * When you store your trained model in the Watson Machine Learning repository, reference your stored runtime resource in the [metadata](https://ibm.github.io/watson-machine-learning-sdk/core_api.htmlclient.Repository.store_model) that is passed to the store_model function (or the store command.) Supported frameworks These frameworks support custom components: * Scikit-learn * XGBoost * Tensorflow * Python Functions * Python Scripts * Decision Optimization For more information, see [Supported frameworks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html) Parent topic:[Customizing deployment runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-customize.html)
# Requirements for using custom components in ML models # You can define your own transformers, estimators, functions, classes, and tensor operations in models that you deploy in IBM Watson Machine Learning as online deployments\. ## Defining and using custom components ## To use custom components in your models, you need to package your custom components in a [Python distribution package](https://packaging.python.org/glossary/#term-distribution-package)\. ### Package requirements ### <!-- <ul> --> * The package type must be: [source distribution](https://packaging.python.org/glossary/#term-source-distribution-or-sdis) (distributions of type Wheel and Egg are not supported) * The package file format must be: `.zip` * Any third\-party dependencies for your custom components must be installable by `pip` and must be passed to the `install_requires` argument of the `setup` function of the `setuptools` library\. <!-- </ul> --> Refer to: [Creating a source distribution](https://docs.python.org/2/distutils/sourcedist.html) ### Storing your custom package ### You must take extra steps when you store your trained model in the Watson Machine Learning repository: <!-- <ul> --> * Store your custom package in the [Watson Machine Learning repository](https://ibm.github.io/watson-machine-learning-sdk/core_api.html#ibm_watson_machine_learning.runtimes.Runtimes.store_library) (use the `runtimes.store_library` function from the Watson Machine Learning Python client, or the `store libraries` Watson Machine Learning CLI command\.) * Create a runtime resource object that references your stored custom package, and then [store the runtime resource object](https://ibm.github.io/watson-machine-learning-sdk/core_api.html#ibm_watson_machine_learning.runtimes.Runtimes.store) in the Watson Machine Learning repository (use the `runtimes.store` function, or the `store runtimes` command\.) * When you store your trained model in the Watson Machine Learning repository, reference your stored runtime resource in the [metadata](https://ibm.github.io/watson-machine-learning-sdk/core_api.html#client.Repository.store_model) that is passed to the `store_model` function (or the `store` command\.) <!-- </ul> --> ## Supported frameworks ## These frameworks support custom components: <!-- <ul> --> * Scikit\-learn * XGBoost * Tensorflow * Python Functions * Python Scripts * Decision Optimization <!-- </ul> --> For more information, see [Supported frameworks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html) **Parent topic:**[Customizing deployment runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-customize.html) <!-- </article "role="article" "> -->
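To illustrate the packaging requirements described above, here is a minimal, hypothetical `setup.py` for a source distribution that bundles custom components; the package name and the dependency are placeholders.

    # setup.py -- minimal sketch of a source distribution that bundles custom components
    from setuptools import setup, find_packages

    setup(
        name="my_custom_components",   # hypothetical package name
        version="0.1",
        packages=find_packages(),
        # Third-party dependencies must be declared here so that pip can install them
        install_requires=["numpy"],
    )

Running `python setup.py sdist --formats=zip` in the package directory produces a `.zip` source distribution (for example, `dist/my_custom_components-0.1.zip`) that you can store as described in the previous section.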
F8E12F246225210B8C984D447B3E15867D2E8869
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-customize.html?context=cdpaas&locale=en
Customizing Watson Machine Learning deployment runtimes
Customizing Watson Machine Learning deployment runtimes Create custom Watson Machine Learning deployment runtimes with libraries and packages that are required for your deployments. You can build custom images based on deployment runtime images available in IBM Watson Machine Learning. The images contain preselected open source libraries and selected IBM libraries. For a list of requirements for creating private Python packages, refer to [Requirements for using custom components in ML models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-custom_libs_overview.html). You can customize your deployment runtimes by [customizing Python runtimes with third-party libraries and user-created Python packages](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-create-custom-software-spec.html) Parent topic:[Deploying and managing assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html)
# Customizing Watson Machine Learning deployment runtimes # Create custom Watson Machine Learning deployment runtimes with libraries and packages that are required for your deployments\. You can build custom images based on deployment runtime images available in IBM Watson Machine Learning\. The images contain preselected open source libraries and selected IBM libraries\. For a list of requirements for creating private Python packages, refer to [Requirements for using custom components in ML models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-custom_libs_overview.html)\. You can customize your deployment runtimes by [customizing Python runtimes with third\-party libraries and user\-created Python packages](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-create-custom-software-spec.html) **Parent topic:**[Deploying and managing assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html) <!-- </article "role="article" "> -->
82512A3915BF43DF08D9106027A67D5E059B2719
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-SPSS-multiple-input.html?context=cdpaas&locale=en
Creating an SPSS Modeler batch job with multiple data sources
Creating an SPSS Modeler batch job with multiple data sources In an SPSS Modeler flow, it's common to have multiple import and export nodes, where multiple import nodes can be fetching data from one or more relational databases. Learn how to use Watson Machine Learning to create an SPSS Modeler batch job with multiple data sources from relational databases. Note:The examples use IBM Db2 and IBM Db2 Warehouse, referred to in examples as dashdb. Connecting to multiple relational databases as input to a batch job The number of import nodes in an SPSS Modeler flow can vary. You might use as many as 60 or 70. However, the number of distinct connections to databases in these cases are just a few, though the table names that are accessed through the connections vary. Rather than specifying the details for every table connection, the approach that is described here focuses on the database connections. Therefore, the batch jobs accept a list of data connections or references by node name that are mapped to connection names in the SPSS Modeler flow's import nodes. For example, assume that if a flow has 30 nodes, only three database connections are used to connect to 30 different tables. In this case, you submit three connections (C1, C2, and C3) to the batch job. C1, C2, and C3 are connection names in the import node of the flow and the node name in the input of the batch job. When a batch job runs, the data reference for a node is provided by mapping the node name with the connection name in the import node. This example illustrates the steps for creating the mapping. The following diagram shows the flow from model creation to job submission: ![SPSS Modeler job with multiple inputs](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/word_SPSS-multiple-input-job.svg) Limitation: The connection reference for a node in a flow is overridden by the reference that is received from the batch job. However, the table name in the import or export node is not overridden. Deployment scenario with example In this example, an SPSS model is built by using 40 import nodes and a single output. The model has the following configuration: * Connections to three databases: 1 Db2 Warehouse (dashDB) and 2 Db2. * The import nodes are read from 40 tables (30 from Db2 Warehouse and 5 each from the Db2 databases). * A single output table is written to a Db2 database. ![SPSS Modeler flow with multiple inputs](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/word_SPSS-multiple-input-job2.svg) Example These steps demonstrate how to create the connections and identify the tables. 1. Create a connection in your project. To run the SPSS Modeler flow, you start in your project and create a connection for each of the three databases your model connects to. You then configure each import node in the flow to point to a table in one of the connected databases. For this example, the database connections in the project are named dashdb_conn, db2_conn1, and db2_conn2. 2. Configure Data Asset to import nodes in your SPSS Modeler flow with connections. Configure each node in the flow to reference one of the three connections you created (dashdb_conn, db2_conn1, and db2_conn2), then specify a table for each node. Note: You can change the name of the connection at the time of the job run. The table names that you select in the flow are referenced when the job runs. You can't overwrite or change them. 3. Save the SPSS model to the Watson Machine Learning repository. 
For this example, it's helpful to provide the input and output schema when you are saving the model. It simplifies the process of identifying each input when you create and submit the batch job in the Watson Studio user interface. Connections that are referenced in the Data Asset nodes of the SPSS Modeler flow must be provided in the node name field of the input schema. To find the node name, double-click the Data Asset import node in your flow to open its properties: ![Data Asset import node name](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/spss-node-name.png) Note:SPSS models that are saved without schemas are still supported for jobs, but you must enter node name fields manually and provide the data asset when you submit the job. This code sample shows how to save the input schema when you save the model (Endpoint: POST /v4/models). { "name": "SPSS Drug Model", "label_column": "label", "type": "spss-modeler_18.1", "runtime": { "href": "/v4/runtimes/spss-modeler_18.1" }, "space": { "href": "/v4/spaces/<space_id>" }, "schemas": { "input": [ { "id": "dashdb_conn", "fields": ] }, { "id": "db2_conn1 ", "fields": ] } , { "id": "db2_conn2", "fields": ] } ], "output": [{ "id": "db2_conn2 ","fields": ] }] } } Note: The number of fields in each of these connections doesn't matter. They’re not validated or used. What's important is the number of connections that are used. 4. Create the batch deployment for the SPSS model. For SPSS models, the creation process of the batch deployment job is the same. You can submit the deployment request with the model that was created in the previous step. 5. Submit SPSS batch jobs. You can submit a batch job from the Watson Studio user interface or by using the REST API. If the schema is saved with the model, the Watson Studio user interface makes it simple to accept input from the connections specified in the schema. Because you already created the data connections, you can select a connected data asset for each node name field that displays in the Watson Studio user interface as you define the job. The name of the connection that is created at the time of job submission can be different from the one used at the time of model creation. However, it must be assigned to the node name field. Submitting a job when schema is not provided If the schema isn't provided in the model metadata at the time the model is saved, you must enter the import node name manually. Further, you must select the data asset in the Watson Studio user interface for each connection. Connections that are referenced in the Data Asset import nodes of the SPSS Modeler flow must be provided in the node name field of the import/export data references. Specifying the connections for a job with data asset This code sample demonstrates how to specify the connections for a job that is submitted by using the REST API (Endpoint: /v4/deployment_jobs). 
{ "deployment": { "href": "/v4/deployments/<deploymentID>" }, "scoring": { "input_data_references": [ { "id": "dashdb_conn", "name": "dashdb_conn", "type": "data_asset", "connection": {}, "location": { "href": "/v2/assets/<asset_id>?space_id=<space_id>" }, "schema": {} }, { "id": "db2_conn1 ", "name": "db2_conn1 ", "type": "data_asset", "connection": {}, "location": { "href": "/v2/assets/<asset_id>?space_id=<space_id>" }, "schema": {} }, { "id": "db2_conn2 ", "name": "db2_conn2", "type": "data_asset", "connection": {}, "location": { "href": "/v2/assets/<asset_id>?space_id=<space_id>" }, "schema": {} }], "output_data_reference": { "id": "db2_conn2" "name": "db2_conn2", "type": "data_asset ", "connection": {}, "location": { "href": "/v2/assets/<asset_id>?space_id=<space_id>" }, "schema": {} } } Parent topic:[Creating a batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html)
# Creating an SPSS Modeler batch job with multiple data sources # In an SPSS Modeler flow, it's common to have multiple import and export nodes, where multiple import nodes can be fetching data from one or more relational databases\. Learn how to use Watson Machine Learning to create an SPSS Modeler batch job with multiple data sources from relational databases\. Note:The examples use IBM Db2 and IBM Db2 Warehouse, referred to in examples as *dashdb*\. ## Connecting to multiple relational databases as input to a batch job ## The number of import nodes in an SPSS Modeler flow can vary\. You might use as many as 60 or 70\. However, the number of distinct connections to databases in these cases are just a few, though the table names that are accessed through the connections vary\. Rather than specifying the details for every table connection, the approach that is described here focuses on the database connections\. Therefore, the batch jobs accept a list of data connections or references by *node name* that are mapped to connection names in the SPSS Modeler flow's import nodes\. For example, assume that if a flow has 30 nodes, only three database connections are used to connect to 30 different tables\. In this case, you submit three connections (C1, C2, and C3) to the batch job\. C1, C2, and C3 are connection names in the import node of the flow and the *node name* in the input of the batch job\. When a batch job runs, the data reference for a node is provided by mapping the *node name* with the *connection name* in the import node\. This example illustrates the steps for creating the mapping\. The following diagram shows the flow from model creation to job submission: ![SPSS Modeler job with multiple inputs](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/word_SPSS-multiple-input-job.svg) **Limitation:** The connection reference for a node in a flow is overridden by the reference that is received from the batch job\. However, the table name in the import or export node is not overridden\. ## Deployment scenario with example ## In this example, an SPSS model is built by using 40 import nodes and a single output\. The model has the following configuration: <!-- <ul> --> * Connections to three databases: 1 Db2 Warehouse (dashDB) and 2 Db2\. * The import nodes are read from 40 tables (30 from Db2 Warehouse and 5 each from the Db2 databases)\. * A single output table is written to a Db2 database\. <!-- </ul> --> ![SPSS Modeler flow with multiple inputs](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/word_SPSS-multiple-input-job2.svg) ### Example ### These steps demonstrate how to create the connections and identify the tables\. <!-- <ol> --> 1. Create a connection in your project\. To run the SPSS Modeler flow, you start in your project and create a connection for each of the three databases your model connects to. You then configure each import node in the flow to point to a table in one of the connected databases. For this example, the database connections in the project are named `dashdb_conn`, `db2_conn1`, and `db2_conn2`. 2. Configure Data Asset to import nodes in your SPSS Modeler flow with connections\. Configure each node in the flow to reference one of the three connections you created (`dashdb_conn`, `db2_conn1`, and `db2_conn2`), then specify a table for each node. Note: You can change the name of the connection at the time of the job run. The table names that you select in the flow are referenced when the job runs. 
You can't overwrite or change them. 3. Save the SPSS model to the Watson Machine Learning repository\. For this example, it's helpful to provide the input and output schema when you are saving the model. It simplifies the process of identifying each input when you create and submit the batch job in the Watson Studio user interface. Connections that are referenced in the Data Asset nodes of the SPSS Modeler flow must be provided in the *node name* field of the input schema. To find the *node name*, double-click the Data Asset import node in your flow to open its properties: ![Data Asset import node name](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/spss-node-name.png) Note:SPSS models that are saved without schemas are still supported for jobs, but you must enter *node name* fields manually and provide the data asset when you submit the job. This code sample shows how to save the input schema when you save the model (Endpoint: `POST /v4/models`). { "name": "SPSS Drug Model", "label_column": "label", "type": "spss-modeler_18.1", "runtime": { "href": "/v4/runtimes/spss-modeler_18.1" }, "space": { "href": "/v4/spaces/<space_id>" }, "schemas": { "input": [ { "id": "dashdb_conn", "fields": [] }, { "id": "db2_conn1", "fields": [] }, { "id": "db2_conn2", "fields": [] } ], "output": [{ "id": "db2_conn2", "fields": [] }] } } Note: The number of fields in each of these connections doesn't matter. They’re not validated or used. What's important is the number of connections that are used. 4. Create the batch deployment for the SPSS model\. For SPSS models, the creation process of the batch deployment job is the same. You can submit the deployment request with the model that was created in the previous step. 5. Submit SPSS batch jobs\. You can submit a batch job from the Watson Studio user interface or by using the REST API. If the schema is saved with the model, the Watson Studio user interface makes it simple to accept input from the connections specified in the schema. Because you already created the data connections, you can select a connected data asset for each *node name* field that displays in the Watson Studio user interface as you define the job. The name of the connection that is created at the time of job submission can be different from the one used at the time of model creation. However, it must be assigned to the *node name* field. <!-- </ol> --> ### Submitting a job when schema is not provided ### If the schema isn't provided in the model metadata at the time the model is saved, you must enter the *import node name* manually\. Further, you must select the data asset in the Watson Studio user interface for each connection\. Connections that are referenced in the Data Asset import nodes of the SPSS Modeler flow must be provided in the *node name* field of the import/export data references\. #### Specifying the connections for a job with data asset #### This code sample demonstrates how to specify the connections for a job that is submitted by using the REST API (Endpoint: `/v4/deployment_jobs`)\. 
{ "deployment": { "href": "/v4/deployments/<deploymentID>" }, "scoring": { "input_data_references": [ { "id": "dashdb_conn", "name": "dashdb_conn", "type": "data_asset", "connection": {}, "location": { "href": "/v2/assets/<asset_id>?space_id=<space_id>" }, "schema": {} }, { "id": "db2_conn1 ", "name": "db2_conn1 ", "type": "data_asset", "connection": {}, "location": { "href": "/v2/assets/<asset_id>?space_id=<space_id>" }, "schema": {} }, { "id": "db2_conn2 ", "name": "db2_conn2", "type": "data_asset", "connection": {}, "location": { "href": "/v2/assets/<asset_id>?space_id=<space_id>" }, "schema": {} }], "output_data_reference": { "id": "db2_conn2" "name": "db2_conn2", "type": "data_asset ", "connection": {}, "location": { "href": "/v2/assets/<asset_id>?space_id=<space_id>" }, "schema": {} } } **Parent topic:**[Creating a batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html) <!-- </article "role="article" "> -->
315971AE6C6A4EEDE13E9E1449B2A36F548B928F
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-delete.html?context=cdpaas&locale=en
Deleting a deployment
Deleting a deployment Delete your deployment when you no longer need it to free up resources. You can delete a deployment from a deployment space, or programmatically, by using the Python client or Watson Machine Learning APIs. Deleting a deployment from a space To remove a deployment: 1. Open the Deployments page of your deployment space. 2. Choose Delete from the action menu for the deployment name. ![Deleting a deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/deploy-delete.png) Deleting a deployment by using the Python client Use the following method to delete the deployment. client.deployments.delete(deployment_uid) Returns a SUCCESS message. To check that the deployment was removed, you can list deployments and make sure that the deleted deployment is no longer listed. client.deployments.list() Returns: ---- ---- ----- ------- ------------- GUID NAME STATE CREATED ARTIFACT_TYPE ---- ---- ----- ------- ------------- Deleting a deployment by using the REST API Use the DELETE method for deleting a deployment. DELETE /ml/v4/deployments/{deployment_id} For more information, see [Delete](https://cloud.ibm.com/apidocs/machine-learningdeployments-delete). For example, see the following code snippet: curl --location --request DELETE 'https://us-south.ml.cloud.ibm.com/ml/v4/deployments/:deployment_id?space_id=<string>&version=2020-09-01' Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
# Deleting a deployment # Delete your deployment when you no longer need it to free up resources\. You can delete a deployment from a deployment space, or programmatically, by using the Python client or Watson Machine Learning APIs\. ## Deleting a deployment from a space ## To remove a deployment: <!-- <ol> --> 1. Open the **Deployments** page of your deployment space\. 2. Choose **Delete** from the action menu for the deployment name\. ![Deleting a deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/deploy-delete.png) <!-- </ol> --> ## Deleting a deployment by using the Python client ## Use the following method to delete the deployment\. client.deployments.delete(deployment_uid) Returns a `SUCCESS` message\. To check that the deployment was removed, you can list deployments and make sure that the deleted deployment is no longer listed\. client.deployments.list() Returns: ---- ---- ----- ------- ------------- GUID NAME STATE CREATED ARTIFACT_TYPE ---- ---- ----- ------- ------------- ## Deleting a deployment by using the REST API ## Use the `DELETE` method for deleting a deployment\. DELETE /ml/v4/deployments/{deployment_id} For more information, see [Delete](https://cloud.ibm.com/apidocs/machine-learning#deployments-delete)\. For example, see the following code snippet: curl --location --request DELETE 'https://us-south.ml.cloud.ibm.com/ml/v4/deployments/:deployment_id?space_id=<string>&version=2020-09-01' **Parent topic:**[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) <!-- </article "role="article" "> -->
53019DD52EDB5790460DFF9A02363856B83CAFB7
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html?context=cdpaas&locale=en
Managing predictive deployments
Managing predictive deployments For proper deployment, you must set up a deployment space and then select and configure a specific deployment type. After you deploy assets, you can manage and update them to make sure they perform well and to monitor their accuracy. To be able to deploy assets from a space, you must have a machine learning service instance that is provisioned and associated with that space. For more information, see [Associating a service instance with a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-instance.htmlassociating-instance-with-space). Online and batch deployments provide simple ways to create an online scoring endpoint or do batch scoring with your models. If you want to implement a custom logic: * Create a Python function to use for creating your online endpoint * Write a notebook or script for batch scoring Note: If you create a notebook or a script to perform batch scoring such an asset runs as a platform job, not as a batch deployment. Deployable assets Following is the list of assets that you can deploy from a Watson Machine Learning space, with information on applicable deployment types: List of assets that you can deploy Asset type Batch deployment Online deployment Functions Yes Yes Models Yes Yes Scripts Yes No An R Shiny app is the only asset type that is supported for web app deployments. Notes: * A deployment job is a way of running a batch deployment, or a self-contained asset like a flow in Watson Machine Learning. You can select the input and output for your job and choose to run it manually or on a schedule. For more information, see [Creating a deployment job](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-jobs.html). * Notebooks and flows use notebook environments. You can run them in a deployment space, but they are not deployable. For more information, see: * [Creating online deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html) * [Creating batch deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html) * [Deploying Python functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function.html) * [Deploying scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-script.html) After you deploy assets, you can manage and update them to make sure they perform well and to monitor their accuracy. Some ways to manage or update a deployment are as follows: * [Manage deployment jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-jobs.html). After you create one or more jobs, you can view and manage them from the Jobs tab of your deployment space. * [Update a deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html). For example, you can replace a model with a better-performing version without having to create a new deployment. * [Scale a deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-scaling.html) to increase availability and throughput by creating replicas of the deployment. * [Delete a deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-delete.html) to remove a deployment and free up resources. 
Learn more * [Full list of asset types that can be added to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html) Parent topic:[Deploying and managing models](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html)
# Managing predictive deployments # For proper deployment, you must set up a deployment space and then select and configure a specific deployment type\. After you deploy assets, you can manage and update them to make sure they perform well and to monitor their accuracy\. To be able to deploy assets from a space, you must have a machine learning service instance that is provisioned and associated with that space\. For more information, see [Associating a service instance with a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-instance.html#associating-instance-with-space)\. Online and batch deployments provide simple ways to create an online scoring endpoint or do batch scoring with your models\. If you want to implement a custom logic: <!-- <ul> --> * Create a Python function to use for creating your online endpoint * Write a notebook or script for batch scoring <!-- </ul> --> Note: If you create a notebook or a script to perform batch scoring such an asset runs as a platform job, not as a batch deployment\. ### Deployable assets ### Following is the list of assets that you can deploy from a Watson Machine Learning space, with information on applicable deployment types: <!-- <table> --> List of assets that you can deploy | Asset type | Batch deployment | Online deployment | | ---------- | ---------------- | ----------------- | | Functions | Yes | Yes | | Models | Yes | Yes | | Scripts | Yes | No | <!-- </table ""> --> An R Shiny app is the only asset type that is supported for web app deployments\. **Notes:** <!-- <ul> --> * A deployment job is a way of running a batch deployment, or a self\-contained asset like a flow in Watson Machine Learning\. You can select the input and output for your job and choose to run it manually or on a schedule\. For more information, see [Creating a deployment job](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-jobs.html)\. * Notebooks and flows use notebook environments\. You can run them in a deployment space, but they are not deployable\. <!-- </ul> --> For more information, see: <!-- <ul> --> * [Creating online deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html) * [Creating batch deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html) * [Deploying Python functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function.html) * [Deploying scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-script.html) <!-- </ul> --> After you deploy assets, you can manage and update them to make sure they perform well and to monitor their accuracy\. Some ways to manage or update a deployment are as follows: <!-- <ul> --> * [Manage deployment jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-jobs.html)\. After you create one or more jobs, you can view and manage them from the **Jobs** tab of your deployment space\. * [Update a deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html)\. For example, you can replace a model with a better\-performing version without having to create a new deployment\. * [Scale a deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-scaling.html) to increase availability and throughput by creating replicas of the deployment\. 
* [Delete a deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-delete.html) to remove a deployment and free up resources\. <!-- </ul> --> ## Learn more ## <!-- <ul> --> * [Full list of asset types that can be added to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html) <!-- </ul> --> **Parent topic:**[Deploying and managing models](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html) <!-- </article "role="article" "> -->
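As a hedged illustration of the online path described above, the following sketch creates an online deployment for a model that is already stored in a deployment space and sends it a scoring request with the ibm_watson_machine_learning Python client. The endpoint URL, API key, space ID, model ID, and field names are placeholders, not values from this documentation.

    from ibm_watson_machine_learning import APIClient

    # Placeholder credentials; substitute your own regional endpoint and API key
    client = APIClient({"url": "https://us-south.ml.cloud.ibm.com", "apikey": "<your_api_key>"})
    client.set.default_space("<space_id>")

    # Create an online deployment for a model asset that is already in the space
    deployment_meta = {
        client.deployments.ConfigurationMetaNames.NAME: "my online deployment",
        client.deployments.ConfigurationMetaNames.ONLINE: {}
    }
    deployment_details = client.deployments.create("<model_id>", meta_props=deployment_meta)
    deployment_id = deployment_details["metadata"]["id"]

    # Send a scoring request to the online endpoint
    payload = {"input_data": [{"fields": ["feature_1", "feature_2"], "values": [[1.0, 2.0]]}]}
    print(client.deployments.score(deployment_id, payload))

Batch deployments follow the same pattern, except that you create a deployment job for the deployment instead of calling the scoring endpoint directly.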
45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en
Writing deployable Python functions
Writing deployable Python functions Learn how to write a Python function and then store it as an asset that allows for deploying models. For a list of general requirements for deployable functions, refer to [General requirements for deployable functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#reqs). For information on what happens during a function deployment, refer to [Function deployment process](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#fundepro). General requirements for deployable functions To be deployed successfully, a function must meet these requirements: * The Python function file, on import, must have the score function object as part of its scope. Refer to [Score function requirements](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#score) * The scoring input payload must meet the requirements that are listed in [Scoring input requirements](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#scoinreq) * The output payload that score returns must follow the schema of the score_response variable for status code 200. Note that the predictions parameter, with an array of JSON objects as its value, is mandatory in the score output. * When you use the Python client to save a Python function that contains a reference to an outer function, only the code in the scope of the outer function (including its nested functions) is saved. Therefore, the code outside the outer function's scope is not saved and is not available when you deploy the function. Score function requirements * You can add the score function object in two ways: * explicitly, by the user * implicitly, by the method that is used to save the Python function as an asset in the Watson Machine Learning repository * The score function must accept a single JSON input parameter. * The score function must return a JSON-serializable object (for example, a dictionary or a list). Scoring input requirements * The scoring input payload must include an array with the name values, as shown in this example schema.

    {"input_data": [{
        "values": [["Hello world"]]
    }]}

Note: - The input_data parameter is mandatory in the payload. - The input_data parameter can also include additional name-value pairs. * The scoring input payload must be passed as the input parameter value for score. This way you can ensure that the value of the score input parameter is handled correctly inside score. * The scoring input payload must match the input requirements for the Python function that you are deploying. * The scoring input payload must include an array that matches the [Example input data schema](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#exschema). 
Example input data schema

    {"input_data": [{
        "values": [["Hello world"]]
    }]}

Example Python code

    #wml_python_function
    def my_deployable_function():

        def score(payload):
            message_from_input_payload = payload.get("input_data")[0].get("values")[0]
            response_message = "Received message - {0}".format(message_from_input_payload)

            # Score using the pre-defined model
            score_response = {
                'predictions': [{'fields': ['Response_message_field'],
                                 'values': [[response_message]]
                                }]
            }
            return score_response

        return score

    score = my_deployable_function()

You can test your function like this:

    input_data = {"input_data": [{
        "fields": ["message"],
        "values": [["Hello world"]]
    }]}
    function_result = score(input_data)
    print(function_result)

It returns the received message in the score response, for example: Received message - ['Hello world']. Function deployment process The Python code of your Function asset gets loaded as a Python module by the Watson Machine Learning engine by using an import statement. This means that the code is executed exactly once per process: when the function is deployed, and again whenever the corresponding pod is restarted. The score function that is defined by the Function asset is then called in every prediction request. Handling deployable functions Use one of these methods to create a deployable Python function: * [Creating deployable functions through REST API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#rest) * [Creating deployable functions through the Python client](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#py) Creating deployable functions through REST API For REST APIs, because the Python function is uploaded directly through a file, the file must already contain the score function. Any one-time imports that the score function needs later can be done in the global scope of the file. When this file is deployed as a Python function, the one-time imports available in the global scope are executed during the deployment and then simply reused with every prediction request. Important: The function archive must be a .gz file. Sample score function file:

    # Score function.py
    def score(input_data):
        return {'predictions': [{'values': [['Just a test']]}]}

Sample score function with one-time imports:

    import subprocess
    subprocess.check_output('pip install gensim --user', shell=True)
    import gensim

    def score(input_data):
        return {'predictions': [{'fields': ['gensim_version'], 'values': [[gensim.__version__]]}]}

Creating deployable functions through the Python client To persist a Python function as an asset, the Python client uses the wml_client.repository.store_function method. You can do that in two ways: * [Persisting a function through a file that contains the Python function](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#persfufile) * [Persisting a function through the function object](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#persfunob) Persisting a function through a file that contains the Python function This method is the same as persisting the Python function file through REST APIs (score must be defined in the scope of the Python source file). 
For details, refer to [Creating deployable functions through REST API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#rest). Important: When you are calling the wml_client.repository.store_function method, pass the file name as the first argument. Persisting a function through the function object You can persist Python function objects by creating Python closures with a nested function named score. When called, the outer function that is stored as the function object returns the score function. This score function must meet the requirements that are listed in [General requirements for deployable functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#reqs). In this case, any one-time imports and initial setup logic must be added in the outer function so that they are executed during deployment and can be used within the score function. Any recurring logic that is needed during the prediction request must be added within the nested score function. Sample Python function saved by using the Python client:

    def my_deployable_function():

        import subprocess
        subprocess.check_output('pip install gensim', shell=True)
        import gensim

        def score(input_data):
            message_from_input_payload = input_data.get("input_data")[0].get("values")[0]
            response_message = "Received message - {0}".format(message_from_input_payload)

            # Score using the pre-defined model
            score_response = {
                'predictions': [{'fields': ['Response_message_field', 'installed_lib_version'],
                                 'values': [[response_message, gensim.__version__]]
                                }]
            }
            return score_response

        return score

    function_meta = {
        client.repository.FunctionMetaNames.NAME: "test_function",
        client.repository.FunctionMetaNames.SOFTWARE_SPEC_ID: sw_spec_id
    }
    func_details = client.repository.store_function(my_deployable_function, function_meta)

In this scenario, the Python client takes up the job of creating a Python file that contains the score function and persisting that file as an asset in the Watson Machine Learning repository. At scoring time, the deployment obtains the score function by calling the outer function:

    score = my_deployable_function()

Learn more * [Python Closures](https://www.programiz.com/python-programming/closure) * [Closures](https://www.learnpython.org/en/Closures) * [Nested function, Scope of variable & closures in Python](https://www.codesdope.com/blog/article/nested-function-scope-of-variable-closures-in-pyth/) Parent topic:[Deploying Python functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function.html)
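Because the REST API path described above expects the function archive to be a .gz file, here is a minimal sketch of compressing a score function source file with the Python standard library; the file names are examples only.

    import gzip
    import shutil

    # Compress the file that defines score() into the .gz archive expected by the REST API
    with open("score_function.py", "rb") as source, gzip.open("score_function.py.gz", "wb") as target:
        shutil.copyfileobj(source, target)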
# Writing deployable Python functions # Learn how to write a Python function and then store it as an asset that allows for deploying models\. For a list of general requirements for deployable functions, refer to [General requirements for deployable functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#reqs)\. For information on what happens during a function deployment, refer to [Function deployment process](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#fundepro)\. ## General requirements for deployable functions ## To be deployed successfully, a function must meet these requirements: <!-- <ul> --> * The Python function file, on import, must have the `score` function object as part of its scope\. Refer to [Score function requirements](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#score) * The scoring input payload must meet the requirements that are listed in [Scoring input requirements](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#scoinreq) * The output payload that `score` returns must follow the schema of the `score_response` variable for status code 200\. Note that the `predictions` parameter, with an array of JSON objects as its value, is mandatory in the `score` output\. * When you use the Python client to save a Python function that contains a reference to an outer function, only the code in the scope of the outer function (including its nested functions) is saved\. Therefore, the code outside the outer function's scope is not saved and is not available when you deploy the function\. <!-- </ul> --> ### Score function requirements ### <!-- <ul> --> * You can add the `score` function object in two ways: <!-- <ul> --> * explicitly, by the user * implicitly, by the method that is used to save the Python function as an asset in the Watson Machine Learning repository <!-- </ul> --> * The `score` function must accept a single JSON input parameter\. * The `score` function must return a JSON\-serializable object (for example, a dictionary or a list)\. <!-- </ul> --> ### Scoring input requirements ### <!-- <ul> --> * The scoring input payload must include an array with the name `values`, as shown in this example schema\.

    {"input_data": [{
        "values": [["Hello world"]]
    }]}

Note: - The `input_data` parameter is mandatory in the payload. - The `input_data` parameter can also include additional name-value pairs. * The scoring input payload must be passed as the input parameter value for `score`\. This way you can ensure that the value of the `score` input parameter is handled correctly inside `score`\. * The scoring input payload must match the input requirements for the Python function that you are deploying\. * The scoring input payload must include an array that matches the [Example input data schema](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#exschema)\. 
<!-- </ul> --> #### Example input data schema ####

    {"input_data": [{
        "values": [["Hello world"]]
    }]}

### Example Python code ###

    #wml_python_function
    def my_deployable_function():

        def score(payload):
            message_from_input_payload = payload.get("input_data")[0].get("values")[0]
            response_message = "Received message - {0}".format(message_from_input_payload)

            # Score using the pre-defined model
            score_response = {
                'predictions': [{'fields': ['Response_message_field'],
                                 'values': [[response_message]]
                                }]
            }
            return score_response

        return score

    score = my_deployable_function()

You can test your function like this:

    input_data = {"input_data": [{
        "fields": ["message"],
        "values": [["Hello world"]]
    }]}
    function_result = score(input_data)
    print(function_result)

It returns the received message in the score response, for example: `Received message - ['Hello world']`\. ## Function deployment process ## The Python code of your Function asset gets loaded as a Python module by the Watson Machine Learning engine by using an `import` statement\. This means that the code is executed exactly once per process: when the function is deployed, and again whenever the corresponding pod is restarted\. The `score` function that is defined by the Function asset is then called in every prediction request\. ## Handling deployable functions ## Use one of these methods to create a deployable Python function: <!-- <ul> --> * [Creating deployable functions through REST API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#rest) * [Creating deployable functions through the Python client](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#py) <!-- </ul> --> ### Creating deployable functions through REST API ### For REST APIs, because the Python function is uploaded directly through a file, the file must already contain the `score` function\. Any one\-time imports that the `score` function needs later can be done in the global scope of the file\. When this file is deployed as a Python function, the one\-time imports available in the global scope are executed during the deployment and then simply reused with every prediction request\. Important: The function archive must be a `.gz` file\. Sample `score` function file:

    # Score function.py
    def score(input_data):
        return {'predictions': [{'values': [['Just a test']]}]}

Sample `score` function with one\-time imports:

    import subprocess
    subprocess.check_output('pip install gensim --user', shell=True)
    import gensim

    def score(input_data):
        return {'predictions': [{'fields': ['gensim_version'], 'values': [[gensim.__version__]]}]}

### Creating deployable functions through the Python client ### To persist a Python function as an asset, the Python client uses the `wml_client.repository.store_function` method\. 
You can do that in two ways: <!-- <ul> --> * [Persisting a function through a file that contains the Python function](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#persfufile) * [Persisting a function through the function object](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#persfunob) <!-- </ul> --> #### Persisting a function through a file that contains the Python function #### This method is the same as persisting the Python function file through REST APIs (`score` must be defined in the scope of the Python source file)\. For details, refer to [Creating deployable functions through REST API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#rest)\. Important: When you are calling the `wml_client.repository.store_function` method, pass the file name as the first argument\. #### Persisting a function through the function object #### You can persist Python function objects by creating Python closures with a nested function named `score`\. When called, the outer function that is stored as the function object returns the `score` function\. This `score` function must meet the requirements that are listed in [General requirements for deployable functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=en#reqs)\. In this case, any one\-time imports and initial setup logic must be added in the outer function so that they are executed during deployment and can be used within the `score` function\. Any recurring logic that is needed during the prediction request must be added within the nested `score` function\. Sample Python function saved by using the Python client:

    def my_deployable_function():

        import subprocess
        subprocess.check_output('pip install gensim', shell=True)
        import gensim

        def score(input_data):
            message_from_input_payload = input_data.get("input_data")[0].get("values")[0]
            response_message = "Received message - {0}".format(message_from_input_payload)

            # Score using the pre-defined model
            score_response = {
                'predictions': [{'fields': ['Response_message_field', 'installed_lib_version'],
                                 'values': [[response_message, gensim.__version__]]
                                }]
            }
            return score_response

        return score

    function_meta = {
        client.repository.FunctionMetaNames.NAME: "test_function",
        client.repository.FunctionMetaNames.SOFTWARE_SPEC_ID: sw_spec_id
    }
    func_details = client.repository.store_function(my_deployable_function, function_meta)

In this scenario, the Python client takes up the job of creating a Python file that contains the `score` function and persisting that file as an asset in the Watson Machine Learning repository\. At scoring time, the deployment obtains the `score` function by calling the outer function:

    score = my_deployable_function()

## Learn more ## <!-- <ul> --> * [Python Closures](https://www.programiz.com/python-programming/closure) * [Closures](https://www.learnpython.org/en/Closures) * [Nested function, Scope of variable & closures in Python](https://www.codesdope.com/blog/article/nested-function-scope-of-variable-closures-in-pyth/) <!-- </ul> --> **Parent topic:**[Deploying Python functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function.html) <!-- </article "role="article" "> -->
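To complete the workflow sketched above, the following hedged example deploys the stored function and scores it with the Python client. It assumes that `client` is an authenticated APIClient with a default space set and that `func_details` was returned by `client.repository.store_function`; the deployment name is arbitrary.

    # Assumes `client` is authenticated, a default space is set, and
    # `func_details` was returned by client.repository.store_function(...)
    function_id = func_details["metadata"]["id"]

    deployment_meta = {
        client.deployments.ConfigurationMetaNames.NAME: "test_function deployment",
        client.deployments.ConfigurationMetaNames.ONLINE: {}
    }
    deployment_details = client.deployments.create(function_id, meta_props=deployment_meta)
    deployment_id = deployment_details["metadata"]["id"]

    # The payload follows the scoring input requirements described earlier
    scoring_payload = {"input_data": [{"fields": ["message"], "values": [["Hello world"]]}]}
    print(client.deployments.score(deployment_id, scoring_payload))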
03FF997603B065D2DF1FBB49934CA8C348765ACF
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function.html?context=cdpaas&locale=en
Deploying Python functions in Watson Machine Learning
Deploying Python functions in Watson Machine Learning You can deploy Python functions in Watson Machine Learning the same way that you can deploy models. Your tools and apps can use the Watson Machine Learning Python client or REST API to send data to your deployed functions the same way that they send data to deployed models. Deploying Python functions gives you the ability to hide details (such as credentials). You can also preprocess data before you pass it to models. Additionally, you can handle errors and include calls to multiple models, all within the deployed function instead of in your application. Sample notebooks for creating and deploying Python functions For examples of how to create and deploy Python functions by using the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/), refer to these sample notebooks: Sample name Framework Techniques demonstrated [Use Python function to recognize hand-written digits](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/1eddc77b3a4340d68f762625d40b64f9) Python Use a function to store a sample model and deploy it. [Predict business for cars](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/61a8b600f1bb183e2c471e7a64299f0e) Hybrid(Tensorflow) Set up an AI definition <br>Prepare the data <br>Create a Keras model by using Tensorflow <br>Deploy and score the model <br>Define, store, and deploy a Python function [Deploy Python function for software specification](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/56825df5322b91daffd39426038808e9) Core Create a Python function <br>Create a web service <br>Score the model The notebooks demonstrate the six steps for creating and deploying a function: 1. Define the function. 2. Authenticate and define a space. 3. Store the function in the repository. 4. Get the software specification. 5. Deploy the stored function. 6. Send data to the function for processing. For links to other sample notebooks that use the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/), refer to [Using Watson Machine Learning in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html). Increasing scalability for a function When you deploy a function from a deployment space or programmatically, a single copy of the function is deployed by default. To increase scalability, you can increase the number of replicas by editing the configuration of the deployment. More replicas allow for a larger volume of scoring requests. The following example uses the Python client API to set the number of replicas to 3. change_meta = { client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: { "name":"S", "num_nodes":3} } client.deployments.update(<deployment_id>, change_meta) Learn more * To learn more about defining a deployable Python function, see General requirements for deployable functions section in [Writing and storing deployable Python functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html). * You can deploy a function from a deployment space through the user interface. For more information, see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html). Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
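As a hedged sketch of the first steps listed above (authenticate, define a space, and look up a software specification), the following snippet uses the Python client library; the endpoint, API key, and space ID are placeholders.

    from ibm_watson_machine_learning import APIClient

    # Placeholder credentials; use your own IBM Cloud API key and regional endpoint
    client = APIClient({"url": "https://us-south.ml.cloud.ibm.com", "apikey": "<your_api_key>"})

    # Work against the deployment space that will hold the function
    client.set.default_space("<space_id>")

    # List the available software specifications to pick one for the function
    client.software_specifications.list()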
# Deploying Python functions in Watson Machine Learning # You can deploy Python functions in Watson Machine Learning the same way that you can deploy models\. Your tools and apps can use the Watson Machine Learning Python client or REST API to send data to your deployed functions the same way that they send data to deployed models\. Deploying Python functions gives you the ability to hide details (such as credentials)\. You can also preprocess data before you pass it to models\. Additionally, you can handle errors and include calls to multiple models, all within the deployed function instead of in your application\. ## Sample notebooks for creating and deploying Python functions ## For examples of how to create and deploy Python functions by using the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/), refer to these sample notebooks: <!-- <table> --> | Sample name | Framework | Techniques demonstrated | | -------------------------------------------------------------------------------------------------------------- | ------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | [Use Python function to recognize hand\-written digits](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/1eddc77b3a4340d68f762625d40b64f9) | Python | Use a function to store a sample model and deploy it\. | | [Predict business for cars](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/61a8b600f1bb183e2c471e7a64299f0e) | Hybrid(Tensorflow) | Set up an AI definition <br>Prepare the data <br>Create a Keras model by using Tensorflow <br>Deploy and score the model <br>Define, store, and deploy a Python function | | [Deploy Python function for software specification](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/56825df5322b91daffd39426038808e9) | Core | Create a Python function <br>Create a web service <br>Score the model | <!-- </table ""> --> The notebooks demonstrate the six steps for creating and deploying a function: <!-- <ol> --> 1. Define the function\. 2. Authenticate and define a space\. 3. Store the function in the repository\. 4. Get the software specification\. 5. Deploy the stored function\. 6. Send data to the function for processing\. <!-- </ol> --> For links to other sample notebooks that use the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/), refer to [Using Watson Machine Learning in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html)\. ## Increasing scalability for a function ## When you deploy a function from a deployment space or programmatically, a single copy of the function is deployed by default\. To increase scalability, you can increase the number of replicas by editing the configuration of the deployment\. More replicas allow for a larger volume of scoring requests\. The following example uses the Python client API to set the number of replicas to 3\. 
change_meta = { client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: { "name":"S", "num_nodes":3} } client.deployments.update(<deployment_id>, change_meta) ## Learn more ## <!-- <ul> --> * To learn more about defining a deployable Python function, see **General requirements for deployable functions** section in [Writing and storing deployable Python functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html)\. * You can deploy a function from a deployment space through the user interface\. For more information, see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html)\. <!-- </ul> --> **Parent topic:**[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) <!-- </article "role="article" "> -->
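To illustrate the point above about hiding details such as credentials inside a deployed function, here is a hedged sketch of a deployable function that wraps a call to another deployment. The credentials, space ID, and deployment ID are placeholders that stay on the server side rather than in the calling application.

    def credential_hiding_function():
        # One-time setup: the credentials live inside the deployed function,
        # not in the application that calls it (placeholder values shown)
        from ibm_watson_machine_learning import APIClient
        client = APIClient({"url": "https://us-south.ml.cloud.ibm.com", "apikey": "<server_side_api_key>"})
        client.set.default_space("<space_id>")
        underlying_deployment_id = "<model_deployment_id>"

        def score(payload):
            # Pre-process or validate the payload here, then delegate to the model
            return client.deployments.score(underlying_deployment_id, payload)

        return score

Error handling and calls to additional models could be added inside score in the same way.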
8279C6C73A8DB1A593945E5EA339F9EFDE96A61E
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-scaling.html?context=cdpaas&locale=en
Scaling a deployment
Scaling a deployment When you create an online deployment for a model or function from a deployment space or programmatically, a single copy of the asset is deployed by default. To increase scalability and availability, you can increase the number of copies (replicas) by editing the configuration of the deployment. More copies allow for a larger volume of scoring requests. Deployments can be scaled in the following ways: * Update the configuration for a deployment in a deployment space. * Programmatically, using the Watson Machine Learning Python client library, or the Watson Machine Learning REST APIs. Changing the number of copies of an online deployment from a space 1. Click the Deployment tab of your deployment space. 2. From the action menu for your deployment name, click Edit. 3. In the Edit deployment dialog box, change the number of copies and click Save. Increasing the number of replicas of a deployment programmatically To view or run a working sample of scaling a deployment programmatically, you can increase the number of replicas in the metadata for a deployment. Python example This example uses the Python client to set the number of replicas to 3. change_meta = { client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: { "name":"S", "num_nodes":3} } client.deployments.update(<deployment_id>, change_meta) The HARDWARE_SPEC value includes a name because the API requires a name or an ID to be provided. REST API example curl -k -X PATCH -d '[ { "op": "replace", "path": "/hardware_spec", "value": { "name": "S", "num_nodes": 2 } } ]' <Deployment end-point URL> You must specify a name for the hardware_spec value, but the argument is not applied for scaling. Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
# Scaling a deployment # When you create an online deployment for a model or function from a deployment space or programmatically, a single copy of the asset is deployed by default\. To increase scalability and availability, you can increase the number of copies (replicas) by editing the configuration of the deployment\. More copies allow for a larger volume of scoring requests\. Deployments can be scaled in the following ways: <!-- <ul> --> * Update the configuration for a deployment in a deployment space\. * Programmatically, using the Watson Machine Learning Python client library, or the Watson Machine Learning REST APIs\. <!-- </ul> --> ## Changing the number of copies of an online deployment from a space ## <!-- <ol> --> 1. Click the **Deployment** tab of your deployment space\. 2. From the action menu for your deployment name, click **Edit**\. 3. In the **Edit deployment** dialog box, change the number of copies and click **Save**\. <!-- </ol> --> ## Increasing the number of replicas of a deployment programmatically ## To view or run a working sample of scaling a deployment programmatically, you can increase the number of replicas in the metadata for a deployment\. ### Python example ### This example uses the Python client to set the number of replicas to 3\. change_meta = { client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: { "name":"S", "num_nodes":3} } client.deployments.update(<deployment_id>, change_meta) The HARDWARE\_SPEC value includes a name because the API requires a name or an ID to be provided\. ### REST API example ### curl -k -X PATCH -d '[ { "op": "replace", "path": "/hardware_spec", "value": { "name": "S", "num_nodes": 2 } } ]' <Deployment end-point URL> You must specify a name for the `hardware_spec` value, but the argument is not applied for scaling\. **Parent topic:**[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) <!-- </article "role="article" "> -->
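After you apply the update, you can check whether the new copy count was picked up by reading the deployment details back. This is a hedged sketch that assumes `client` is an authenticated Python client with the default space set and `<deployment_id>` is your deployment.

    # Read the deployment back and inspect its hardware specification
    details = client.deployments.get_details("<deployment_id>")

    # When a hardware specification is set, the requested number of copies
    # appears in the deployment entity as num_nodes
    print(details["entity"].get("hardware_spec"))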
462A5BA596AADF9C38762611CA2578398F234BD4
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html?context=cdpaas&locale=en
Updating a deployment
Updating a deployment After you create an online or a batch deployment, you can still update your deployment details and update the assets that are associated with your deployment. For more information, see: * [Update deployment details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html?context=cdpaas&locale=enupd-general) * [Update assets associated with a deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html?context=cdpaas&locale=enupd-assets) Updating deployment details You can update general deployment details, such as deployment name, description, metadata, and tags by using one of these methods: * [Update deployment details from the UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html?context=cdpaas&locale=enupdate-details-ui). * [Update deployment details by using the Patch API command](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html?context=cdpaas&locale=enupdate-details-api). Updating deployment details from the UI 1. From the Deployments tab of your deployment space, click the action menu for the deployment and choose Edit settings. 2. Update the details and then click Save. Tip: You can also update a deployment from the information sheet for the deployment. Updating deployment details by using the Patch API command Use the [Watson Machine Learning API Patch](https://cloud.ibm.com/apidocs/machine-learning-cpmodels-update) command to update deployment details. curl -X PATCH '<deployment endpoint url>/ml/v4/deployments/<deployment id>?space_id=<space id>&version=<YYYY-MM-DD>' n--data-raw '[ { "op": "<operation type>", "path": "<path>", "value": "<new value>" }, { "op": "<operation type>", "path": "<path>", "value": "<new value>" } ]' For example, to update a description for deployment: curl -X PATCH '<deployment endpoint url>/ml/v4/deployments/<deployment_id>?space_id=<space_id>&version=<YYYY-MM-DD>' n--data-raw '[ { "op": "replace", "path": "/description", "value": "<new_description>" }, ]' Notes: * For <operation type>, use "add", "remove", or "replace". Updating assets associated with a deployment After you create an online or batch deployment, you can update the deployed asset from the same endpoint. For example, if you have a better performing model, you can replace the deployed model with the improved version. When the update is complete, the new model is available from the REST API endpoint. Before you update an asset, make sure that these conditions are true: * The framework of the new model is compatible with the existing deployed model. * The input schema exists and matches for the new and deployed model. Caution: Failure to follow these conditions can result in a failed deployment. * For more information, see [Updating an asset from the deployment space UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html?context=cdpaas&locale=enupdate-asset-ui). * For more information, see [Updating an asset by using the Patch API command](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html?context=cdpaas&locale=enupdate-asset-api). Updating an asset from the deployment space UI 1. From the Deployments tab of your deployment space, click the action menu for the deployment and choose Edit. 2. Click Replace asset. From the Select an asset dialog box, select the asset that you want to replace the current asset with and click Select asset. 3. Click Save. 
Important: Make sure that the new asset is compatible with the deployment. ![Replacing a deployed asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/deploy-update.png) Updating an asset by using the Patch API command Use the Watson Machine Learning [API](https://cloud.ibm.com/apidocs/machine-learning)Patch command to update any supported asset. Use this method to patch a model for an online deployment. curl -X PATCH '<deployment endpoint url>/ml/v4/models/<model_id>?space_id=<space_id>&project_id=<project_id>&version=<YYYY-MM-DD>' n--data-raw '[ { "op": "<operation type>", "path": "<path>", "value": "<new value>" }, { "op": "<operation type>", "path": "<path>", "value": "<new value>" } ]' For example, patch a model with ID 6f01d512-fe0f-41cd-9a52-1e200c525c84 in space ID f2ddb8ce-7b10-4846-9ab0-62454a449802: curl -X PATCH '<deployment endpoint url>/ml/v4/models/6f01d512-fe0f-41cd-9a52-1e200c525c84?space_id=f2ddb8ce-7b10-4846-9ab0-62454a449802&project_id=<project_id>&version=<YYYY-MM-DD>' n--data-raw '[ { "op":"replace", "path":"/asset", "value":{ "id":"6f01d512-fe0f-41cd-9a52-1e200c525c84", "rev":"1" } } ]' A successful output response looks like this: { "entity": { "asset": { "href": "/v4/models/6f01d512-fe0f-41cd-9a52-1e200c525c84?space_id=f2ddb8ce-7b10-4846-9ab0-62454a449802", "id": "6f01d512-fe0f-41cd-9a52-1e200c525c84" }, "custom": { }, "description": "Test V4 deployments", "name": "test_v4_dep_online_space_hardware_spec", "online": { }, "space": { "href": "/v4/spaces/f2ddb8ce-7b10-4846-9ab0-62454a449802", "id": "f2ddb8ce-7b10-4846-9ab0-62454a449802" }, "space_id": "f2ddb8ce-7b10-4846-9ab0-62454a449802", "status": { "online_url": { "url": "https://example.com/v4/deployments/349dc1f7-9452-491b-8aa4-0777f784bd83/predictions" }, "state": "updating" } }, "metadata": { "created_at": "2020-06-08T16:51:08.315Z", "description": "Test V4 deployments", "guid": "349dc1f7-9452-491b-8aa4-0777f784bd83", "href": "/v4/deployments/349dc1f7-9452-491b-8aa4-0777f784bd83", "id": "349dc1f7-9452-491b-8aa4-0777f784bd83", "modified_at": "2020-06-08T16:55:28.348Z", "name": "test_v4_dep_online_space_hardware_spec", "parent": { "href": "" }, "space_id": "f2ddb8ce-7b10-4846-9ab0-62454a449802" } } Notes: * For <operation type>, use "add", "remove", or "replace". * The initial state for the PATCH API output is "updating". Keep polling the status until it changes to "ready", then retrieve the deployment meta. * Only the ASSET attribute can be specified for the asset patch. Changing any other attribute results in an error. * The schema of the current model and the model being patched is compared to the deployed asset. A warning message is returned in the output of the Patch request API if the two don't match. For example, if a mismatch is detected, you can find this information in the output response. "status": { "message": { "text": "The input schema of the asset being patched does not match with the currently deployed asset. Please ensure that the score payloads are up to date as per the asset being patched." }, * For more information, see [Updating software specifications by using the API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.htmlupdate-soft-specs-api). Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
# Updating a deployment # After you create an online or a batch deployment, you can still update your deployment details and update the assets that are associated with your deployment\. For more information, see: <!-- <ul> --> * [Update deployment details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html?context=cdpaas&locale=en#upd-general) * [Update assets associated with a deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html?context=cdpaas&locale=en#upd-assets) <!-- </ul> --> ## Updating deployment details ## You can update general deployment details, such as deployment name, description, metadata, and tags by using one of these methods: <!-- <ul> --> * [Update deployment details from the UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html?context=cdpaas&locale=en#update-details-ui)\. * [Update deployment details by using the Patch API command](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html?context=cdpaas&locale=en#update-details-api)\. <!-- </ul> --> ### Updating deployment details from the UI ### <!-- <ol> --> 1. From the **Deployments** tab of your deployment space, click the action menu for the deployment and choose **Edit settings**\. 2. Update the details and then click **Save**\. Tip: You can also update a deployment from the information sheet for the deployment. <!-- </ol> --> ### Updating deployment details by using the Patch API command ### Use the [Watson Machine Learning API Patch](https://cloud.ibm.com/apidocs/machine-learning-cp#models-update) command to update deployment details\. curl -X PATCH '<deployment endpoint url>/ml/v4/deployments/<deployment id>?space_id=<space id>&version=<YYYY-MM-DD>' \n--data-raw '[ { "op": "<operation type>", "path": "<path>", "value": "<new value>" }, { "op": "<operation type>", "path": "<path>", "value": "<new value>" } ]' For example, to update a description for deployment: curl -X PATCH '<deployment endpoint url>/ml/v4/deployments/<deployment_id>?space_id=<space_id>&version=<YYYY-MM-DD>' \n--data-raw '[ { "op": "replace", "path": "/description", "value": "<new_description>" }, ]' **Notes**: <!-- <ul> --> * For `<operation type>`, use `"add"`, `"remove"`, or `"replace"`\. <!-- </ul> --> ## Updating assets associated with a deployment ## After you create an online or batch deployment, you can update the deployed asset from the same endpoint\. For example, if you have a better performing model, you can replace the deployed model with the improved version\. When the update is complete, the new model is available from the REST API endpoint\. Before you update an asset, make sure that these conditions are true: <!-- <ul> --> * The framework of the new model is compatible with the existing deployed model\. * The input schema exists and matches for the new and deployed model\. Caution: Failure to follow these conditions can result in a failed deployment. * For more information, see [Updating an asset from the deployment space UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html?context=cdpaas&locale=en#update-asset-ui)\. * For more information, see [Updating an asset by using the Patch API command](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html?context=cdpaas&locale=en#update-asset-api)\. <!-- </ul> --> ### Updating an asset from the deployment space UI ### <!-- <ol> --> 1. 
From the **Deployments** tab of your deployment space, click the action menu for the deployment and choose **Edit**\. 2. Click **Replace asset**\. From the *Select an asset* dialog box, select the asset that you want to replace the current asset with and click **Select asset**\. 3. Click **Save**\. <!-- </ol> --> Important: Make sure that the new asset is compatible with the deployment\. ![Replacing a deployed asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/deploy-update.png) ### Updating an asset by using the Patch API command ### Use the Watson Machine Learning [API](https://cloud.ibm.com/apidocs/machine-learning)`Patch` command to update any supported asset\. Use this method to patch a model for an online deployment\. curl -X PATCH '<deployment endpoint url>/ml/v4/models/<model_id>?space_id=<space_id>&project_id=<project_id>&version=<YYYY-MM-DD>' \n--data-raw '[ { "op": "<operation type>", "path": "<path>", "value": "<new value>" }, { "op": "<operation type>", "path": "<path>", "value": "<new value>" } ]' For example, patch a model with ID `6f01d512-fe0f-41cd-9a52-1e200c525c84` in space ID `f2ddb8ce-7b10-4846-9ab0-62454a449802`: curl -X PATCH '<deployment endpoint url>/ml/v4/models/6f01d512-fe0f-41cd-9a52-1e200c525c84?space_id=f2ddb8ce-7b10-4846-9ab0-62454a449802&project_id=<project_id>&version=<YYYY-MM-DD>' \n--data-raw '[ { "op":"replace", "path":"/asset", "value":{ "id":"6f01d512-fe0f-41cd-9a52-1e200c525c84", "rev":"1" } } ]' A successful output response looks like this: { "entity": { "asset": { "href": "/v4/models/6f01d512-fe0f-41cd-9a52-1e200c525c84?space_id=f2ddb8ce-7b10-4846-9ab0-62454a449802", "id": "6f01d512-fe0f-41cd-9a52-1e200c525c84" }, "custom": { }, "description": "Test V4 deployments", "name": "test_v4_dep_online_space_hardware_spec", "online": { }, "space": { "href": "/v4/spaces/f2ddb8ce-7b10-4846-9ab0-62454a449802", "id": "f2ddb8ce-7b10-4846-9ab0-62454a449802" }, "space_id": "f2ddb8ce-7b10-4846-9ab0-62454a449802", "status": { "online_url": { "url": "https://example.com/v4/deployments/349dc1f7-9452-491b-8aa4-0777f784bd83/predictions" }, "state": "updating" } }, "metadata": { "created_at": "2020-06-08T16:51:08.315Z", "description": "Test V4 deployments", "guid": "349dc1f7-9452-491b-8aa4-0777f784bd83", "href": "/v4/deployments/349dc1f7-9452-491b-8aa4-0777f784bd83", "id": "349dc1f7-9452-491b-8aa4-0777f784bd83", "modified_at": "2020-06-08T16:55:28.348Z", "name": "test_v4_dep_online_space_hardware_spec", "parent": { "href": "" }, "space_id": "f2ddb8ce-7b10-4846-9ab0-62454a449802" } } **Notes:** <!-- <ul> --> * For `<operation type>`, use `"add"`, `"remove"`, or `"replace"`\. * The initial state for the PATCH API output is "updating"\. Keep polling the status until it changes to "ready", then retrieve the deployment meta\. * Only the `ASSET` attribute can be specified for the asset patch\. Changing any other attribute results in an error\. * The schema of the current model and the model being patched is compared to the deployed asset\. A warning message is returned in the output of the Patch request API if the two don't match\. For example, if a mismatch is detected, you can find this information in the output response\. "status": { "message": { "text": "The input schema of the asset being patched does not match with the currently deployed asset. Please ensure that the score payloads are up to date as per the asset being patched." 
}, * For more information, see [Updating software specifications by using the API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html#update-soft-specs-api)\. <!-- </ul> --> **Parent topic:**[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) <!-- </article "role="article" "> -->
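The notes above say to keep polling the deployment status until it changes from updating to ready. A minimal polling sketch with the Python client, assuming `client` is authenticated and the default space is set, might look like this:

    import time

    deployment_id = "<deployment_id>"

    while True:
        details = client.deployments.get_details(deployment_id)
        state = details["entity"]["status"]["state"]
        print("Deployment state:", state)
        if state != "updating":
            break
        time.sleep(10)  # wait before polling again

    # When the patch succeeds, `state` reaches "ready" and `details` holds the deployment metadata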
0310B7FB9072E7F7E5D73F5AF90EDE62FAA81286
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-hardware-configs.html?context=cdpaas&locale=en
Managing hardware configurations
Managing hardware configurations When you deploy certain assets in Watson Machine Learning, you can choose the type, size, and power of the hardware configuration that matches your computing needs. Deployment types that require hardware specifications Selecting a hardware specification is available for all [batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html) types. For [online deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html), you can select a specific hardware specification if you're deploying: * Python Functions * Tensorflow models * Models with custom software specifications Hardware configurations available for deploying assets * XS: 1x4 = 1 vCPU and 4 GB RAM * S: 2x8 = 2 vCPU and 8 GB RAM * M: 4x16 = 4 vCPU and 16 GB RAM * L: 8x32 = 8 vCPU and 32 GB RAM * XL: 16x64 = 16 vCPU and 64 GB RAM You can use the XS configuration to deploy: * Python functions * Python scripts * R scripts * Models based on custom libraries and custom images For Decision Optimization deployments, you can use these hardware specifications: * S * M * L * XL Learn more * [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
# Managing hardware configurations # When you deploy certain assets in Watson Machine Learning, you can choose the type, size, and power of the hardware configuration that matches your computing needs\. ## Deployment types that require hardware specifications ## Selecting a hardware specification is available for all [batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html) types\. For [online deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html), you can select a specific hardware specification if you're deploying: <!-- <ul> --> * Python Functions * Tensorflow models * Models with custom software specifications <!-- </ul> --> ## Hardware configurations available for deploying assets ## <!-- <ul> --> * `XS`: 1x4 = 1 vCPU and 4 GB RAM * `S`: 2x8 = 2 vCPU and 8 GB RAM * `M`: 4x16 = 4 vCPU and 16 GB RAM * `L`: 8x32 = 8 vCPU and 32 GB RAM * `XL`: 16x64 = 16 vCPU and 64 GB RAM <!-- </ul> --> You can use the `XS` configuration to deploy: <!-- <ul> --> * Python functions * Python scripts * R scripts * Models based on custom libraries and custom images <!-- </ul> --> For Decision Optimization deployments, you can use these hardware specifications: <!-- <ul> --> * `S` * `M` * `L` * `XL` <!-- </ul> --> ## Learn more ## <!-- <ul> --> * [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) <!-- </ul> --> **Parent topic:**[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) <!-- </article "role="article" "> -->
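For example, when you create a batch deployment programmatically, you can request one of these configurations by name in the deployment metadata. This is a hedged sketch; `client` is assumed to be an authenticated Python client with a default space set, and the model ID is a placeholder.

    # Request the M (4 vCPU, 16 GB RAM) configuration for a batch deployment
    deployment_meta = {
        client.deployments.ConfigurationMetaNames.NAME: "batch deployment on M hardware",
        client.deployments.ConfigurationMetaNames.BATCH: {},
        client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: {"name": "M", "num_nodes": 1}
    }
    deployment_details = client.deployments.create("<model_id>", meta_props=deployment_meta)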
19BA0BFC40B6212B42F38487F1533BB65647850E
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=en
Importing models to a deployment space
Importing models to a deployment space Import machine learning models trained outside of IBM Watson Machine Learning so that you can deploy and test the models. Review the model frameworks that are available for importing models. Here, to import a trained model means: 1. Store the trained model in your Watson Machine Learning repository 2. Optional: Deploy the stored model in your Watson Machine Learning service and repository means a Cloud Object Storage bucket. For more information, see [Creating deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-create.html). You can import a model in these ways: * [Directly through the UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enui-import) * [By using a path to a file](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enpath-file-import) * [By using a path to a directory](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enpath-dir-import) * [Import a model object](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enobject-import) For more information, see [Importing models by ML framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=ensupported-formats). For more information, see [Things to consider when you import models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enmodel-import-considerations). For an example of how to add a model programmatically by using the Python client, refer to this notebook: * [Use PMML to predict iris species.](https://github.com/IBM/watson-machine-learning-samples/blob/df8e5122a521638cb37245254fe35d3a18cd3f59/cloud/notebooks/python_sdk/deployments/pmml/Use%20PMML%20to%20predict%20iris%20species.ipynb) For an example of how to add a model programmatically by using the REST API, refer to this notebook: * [Use scikit-learn to predict diabetes progression](https://github.com/IBM/watson-machine-learning-samples/blob/be84bcd25d17211f41fb34ec262b418f6cd6c87b/cloud/notebooks/rest_api/curl/deployments/scikit/Use%20scikit-learn%20to%20predict%20diabetes%20progression.ipynb) Available ways to import models, per framework type This table lists the available ways to import models to Watson Machine Learning, per framework type. Import options for models, per framework type Import option Spark MLlib Scikit-learn XGBoost TensorFlow PyTorch [Importing a model object](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enobject-import) ✓ ✓ ✓ [Importing a model by using a path to a file](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enpath-file-import) ✓ ✓ ✓ ✓ [Importing a model by using a path to a directory](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enpath-dir-import) ✓ ✓ ✓ ✓ Adding a model by using UI Note:If you want to import a model in the PMML format, you can directly import the model .xml file. To import a model by using UI: 1. From the Assets tab of your space in Watson Machine Learning, click Import assets. 2. Select Local file and then select Model. 3. Select the model file that you want to import and click Import. 
The importing mechanism automatically selects a matching model type and software specification based on the version string in the .xml file. Importing a model object Note:This import method is supported by a limited number of ML frameworks. For more information, see [Available ways to import models, per framework type](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=ensupported-formats). To import a model object: 1. If your model is located in a remote location, follow [Downloading a model that is stored in a remote location](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enmodel-download). 2. Store the model object in your Watson Machine Learning repository. For more information, see [Storing model in Watson Machine Learning repository](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enstore-in-repo). Importing a model by using a path to a file Note:This import method is supported by a limited number of ML frameworks. For more information, see [Available ways to import models, per framework type](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=ensupported-formats). To import a model by using a path to a file: 1. If your model is located in a remote location, follow [Downloading a model that is stored in a remote location](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enmodel-download) to download it. 2. If your model is located locally, place it in a specific directory: !cp <saved model> <target directory> !cd <target directory> 3. For Scikit-learn, XGBoost, Tensorflow, and PyTorch models, if the downloaded file is not a .tar.gz archive, make an archive: !tar -zcvf <saved model>.tar.gz <saved model> The model file must be at the top-level folder of the directory, for example: assets/ <saved model> variables/ variables/variables.data-00000-of-00001 variables/variables.index 4. Use the path to the saved file to store the model file in your Watson Machine Learning repository. For more information, see [Storing model in Watson Machine Learning repository](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enstore-in-repo). Importing a model by using a path to a directory Note:This import method is supported by a limited number of ML frameworks. For more information, see [Available ways to import models, per framework type](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=ensupported-formats). To import a model by using a path to a directory: 1. If your model is located in a remote location, refer to [Downloading a model stored in a remote location](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enmodel-download). 2. If your model is located locally, place it in a specific directory: !cp <saved model> <target directory> !cd <target directory> For scikit-learn, XGBoost, Tensorflow, and PyTorch models, the model file must be at the top-level folder of the directory, for example: assets/ <saved model> variables/ variables/variables.data-00000-of-00001 variables/variables.index 3. Use the directory path to store the model file in your Watson Machine Learning repository. 
For more information, see [Storing model in Watson Machine Learning repository](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enstore-in-repo). Downloading a model stored in a remote location Follow this sample code to download your model from a remote location: import os from wget import download target_dir = '<target directory name>' if not os.path.isdir(target_dir): os.mkdir(target_dir) filename = os.path.join(target_dir, '<model name>') if not os.path.isfile(filename): filename = download('<url to model>', out = target_dir) Things to consider when you import models To learn more about importing a specific model type, see: * [Models saved in PMML format](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enpmml-import) * [Spark MLlib models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enspark-ml-lib-import) * [Scikit-learn models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enscikit-learn-import) * [XGBoost models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enxgboost-import) * [TensorFlow models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=entf-import) * [PyTorch models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enpt-import) To learn more about frameworks that you can use with Watson Machine Learning, see [Supported frameworks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html). Models saved in PMML format * The only available deployment type for models that are imported from PMML is online deployment. * The PMML file must have the .xml file extension. * PMML models cannot be used in an SPSS stream flow. * The PMML file must not contain a prolog. Depending on the library that you are using when you save your model, a prolog might be added to the beginning of the file by default. For example, if your file contains a prolog string such as spark-mllib-lr-model-pmml.xml, remove the string before you import the PMML file to the deployment space. Depending on the library that you are using when you save your model, a prolog might be added to the beginning of the file by default, like in this example: :::::::::::::: spark-mllib-lr-model-pmml.xml :::::::::::::: You must remove that prolog before you can import the PMML file to Watson Machine Learning. Spark MLlib models * Only classification and regression models are available. * Custom transformers, user-defined functions, and classes are not available. Scikit-learn models * .pkl and .pickle are the available import formats. * To serialize or pickle the model, use the joblib package. * Only classification and regression models are available. * Pandas Dataframe input type for predict() API is not available. * The only available deployment type for scikit-learn models is online deployment. XGBoost models * .pkl and .pickle are the available import formats. * To serialize or pickle the model, use the joblib package. * Only classification and regression models are available. * Pandas Dataframe input type for predict() API is not available. * The only available deployment type for XGBoost models is online deployment. 
TensorFlow models * .pb, .h5, and .hdf5 are the available import formats. * To save or serialize a TensorFlow model, use the tf.saved_model.save() method. * tf.estimator is not available. * The only available deployment types for TensorFlow models are: online deployment and batch deployment. PyTorch models * The only available deployment type for PyTorch models is online deployment. * For a Pytorch model to be importable to Watson Machine Learning, it must be previously exported to .onnx format. Refer to this code. torch.onnx.export(<model object>, <prediction/training input data>, "<serialized model>.onnx", verbose=True, input_names=<input tensor names>, output_names=<output tensor names>) Storing a model in your Watson Machine Learning repository Use this code to store your model in your Watson Machine Learning repository: from ibm_watson_machine_learning import APIClient client = APIClient(<your credentials>) sw_spec_uid = client.software_specifications.get_uid_by_name("<software specification name>") meta_props = { client.repository.ModelMetaNames.NAME: "<your model name>", client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_uid, client.repository.ModelMetaNames.TYPE: "<model type>"} client.repository.store_model(model=<your model>, meta_props=meta_props) Notes: * Depending on the model framework used, <your model> can be the actual model object, a full path to a saved model file, or a path to a directory where the model file is located. For more information, see [Available ways to import models, per framework type](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=ensupported-formats). * For a list of available software specifications to use as <software specification name>, use the client.software_specifications.list() method. * For a list of available model types to use as model_type, refer to [Software specifications and hardware specifications for deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html). * When you export a Pytorch model to the .onnx format, specify the keep_initializers_as_inputs=True flag and set opset_version to 9 (Watson Machine Learning deployments use the caffe2 ONNX runtime that doesn't support opset versions higher than 9). torch.onnx.export(net, x, 'lin_reg1.onnx', verbose=True, keep_initializers_as_inputs=True, opset_version=9) * To learn more about how to create the <your credentials> dictionary, refer to [Watson Machine Learning authentication](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html). Parent topic:[Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html)
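To tie the pieces of this topic together, here is a hedged end-to-end sketch of the file-path import flow for a TensorFlow model: save with tf.saved_model.save(), build the .tar.gz archive with the model files at the top level, and store the archive by path. The directory name, credentials, model type, and software specification name are placeholders to replace with values that are valid for your service instance.

```python
# Hedged sketch of the "path to a file" flow for a TensorFlow model.
# Directory names, credentials, model type, and software specification are placeholders.
import tarfile
import tensorflow as tf
from ibm_watson_machine_learning import APIClient

# Save a toy model in the SavedModel format, as this topic recommends
model = tf.keras.Sequential([tf.keras.Input(shape=(3,)), tf.keras.layers.Dense(1)])
tf.saved_model.save(model, "my_tf_model")

# Archive the saved model so that its files sit at the top level of the archive
with tarfile.open("my_tf_model.tar.gz", "w:gz") as tar:
    tar.add("my_tf_model", arcname=".")

# Store the archive by path in the Watson Machine Learning repository
wml_credentials = {"url": "<service url>", "apikey": "<api key>"}   # placeholder credentials
client = APIClient(wml_credentials)
sw_spec_uid = client.software_specifications.get_uid_by_name("<software specification name>")
meta_props = {
    client.repository.ModelMetaNames.NAME: "tensorflow example model",
    client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_uid,
    client.repository.ModelMetaNames.TYPE: "<model type>",
}
client.repository.store_model(model="my_tf_model.tar.gz", meta_props=meta_props)
```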
# Importing models to a deployment space # Import machine learning models trained outside of IBM Watson Machine Learning so that you can deploy and test the models\. Review the model frameworks that are available for importing models\. Here, *to import a trained model* means: <!-- <ol> --> 1. Store the trained model in your Watson Machine Learning repository 2. Optional: Deploy the stored model in your Watson Machine Learning service <!-- </ol> --> and *repository* means a Cloud Object Storage bucket\. For more information, see [Creating deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-create.html)\. You can import a model in these ways: <!-- <ul> --> * [Directly through the UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=en#ui-import) * [By using a path to a file](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=en#path-file-import) * [By using a path to a directory](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=en#path-dir-import) * [Import a model object](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=en#object-import) <!-- </ul> --> For more information, see [Importing models by ML framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=en#supported-formats)\. For more information, see [Things to consider when you import models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=en#model-import-considerations)\. For an example of how to add a model programmatically by using the Python client, refer to this notebook: <!-- <ul> --> * [Use PMML to predict iris species\.](https://github.com/IBM/watson-machine-learning-samples/blob/df8e5122a521638cb37245254fe35d3a18cd3f59/cloud/notebooks/python_sdk/deployments/pmml/Use%20PMML%20to%20predict%20iris%20species.ipynb) <!-- </ul> --> For an example of how to add a model programmatically by using the REST API, refer to this notebook: <!-- <ul> --> * [Use scikit\-learn to predict diabetes progression](https://github.com/IBM/watson-machine-learning-samples/blob/be84bcd25d17211f41fb34ec262b418f6cd6c87b/cloud/notebooks/rest_api/curl/deployments/scikit/Use%20scikit-learn%20to%20predict%20diabetes%20progression.ipynb) <!-- </ul> --> ## Available ways to import models, per framework type ## This table lists the available ways to import models to Watson Machine Learning, per framework type\. 
<!-- <table> --> Import options for models, per framework type | Import option | Spark MLlib | Scikit\-learn | XGBoost | TensorFlow | PyTorch | | ---------------------------------------------------------------------------------- | ----------- | ------------- | ------- | ---------- | ------- | | [Importing a model object](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=en#object-import) | ✓ | ✓ | ✓ | | | | [Importing a model by using a path to a file](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=en#path-file-import) | | ✓ | ✓ | ✓ | ✓ | | [Importing a model by using a path to a directory](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=en#path-dir-import) | | ✓ | ✓ | ✓ | ✓ | <!-- </table ""> --> ### Adding a model by using UI ### Note:If you want to import a model in the PMML format, you can directly import the model `.xml` file\. To import a model by using UI: <!-- <ol> --> 1. From the **Assets** tab of your space in Watson Machine Learning, click **Import assets**\. 2. Select `Local file` and then select **Model**\. 3. Select the model file that you want to import and click **Import**\. <!-- </ol> --> The importing mechanism automatically selects a matching model type and software specification based on the version string in the `.xml` file\. ### Importing a model object ### Note:This import method is supported by a limited number of ML frameworks\. For more information, see [Available ways to import models, per framework type](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=en#supported-formats)\. To import a model object: <!-- <ol> --> 1. If your model is located in a remote location, follow [Downloading a model that is stored in a remote location](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=en#model-download)\. 2. Store the model object in your Watson Machine Learning repository\. For more information, see [Storing model in Watson Machine Learning repository](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=en#store-in-repo)\. <!-- </ol> --> ### Importing a model by using a path to a file ### Note:This import method is supported by a limited number of ML frameworks\. For more information, see [Available ways to import models, per framework type](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=en#supported-formats)\. To import a model by using a path to a file: <!-- <ol> --> 1. If your model is located in a remote location, follow [Downloading a model that is stored in a remote location](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=en#model-download) to download it\. 2. If your model is located locally, place it in a specific directory: !cp <saved model> <target directory> !cd <target directory> 3. For **Scikit\-learn**, **XGBoost**, **Tensorflow**, and **PyTorch** models, if the downloaded file is not a `.tar.gz` archive, make an archive: !tar -zcvf <saved model>.tar.gz <saved model> The model file must be at the top-level folder of the directory, for example: assets/ <saved model> variables/ variables/variables.data-00000-of-00001 variables/variables.index 4. 
Use the path to the saved file to store the model file in your Watson Machine Learning repository\. For more information, see [Storing model in Watson Machine Learning repository](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=en#store-in-repo)\. <!-- </ol> --> ### Importing a model by using a path to a directory ### Note:This import method is supported by a limited number of ML frameworks\. For more information, see [Available ways to import models, per framework type](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=en#supported-formats)\. To import a model by using a path to a directory: <!-- <ol> --> 1. If your model is located in a remote location, refer to [Downloading a model stored in a remote location](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=en#model-download)\. 2. If your model is located locally, place it in a specific directory: !cp <saved model> <target directory> !cd <target directory> For **scikit-learn**, **XGBoost**, **Tensorflow**, and **PyTorch** models, the model file must be at the top-level folder of the directory, for example: assets/ <saved model> variables/ variables/variables.data-00000-of-00001 variables/variables.index 3. Use the directory path to store the model file in your Watson Machine Learning repository\. For more information, see [Storing model in Watson Machine Learning repository](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=en#store-in-repo)\. <!-- </ol> --> ### Downloading a model stored in a remote location ### Follow this sample code to download your model from a remote location: import os from wget import download target_dir = '<target directory name>' if not os.path.isdir(target_dir): os.mkdir(target_dir) filename = os.path.join(target_dir, '<model name>') if not os.path.isfile(filename): filename = download('<url to model>', out = target_dir) ## Things to consider when you import models ## To learn more about importing a specific model type, see: <!-- <ul> --> * [Models saved in PMML format](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=en#pmml-import) * [Spark MLlib models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=en#spark-ml-lib-import) * [Scikit\-learn models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=en#scikit-learn-import) * [XGBoost models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=en#xgboost-import) * [TensorFlow models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=en#tf-import) * [PyTorch models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=en#pt-import) <!-- </ul> --> To learn more about frameworks that you can use with Watson Machine Learning, see [Supported frameworks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html)\. ### Models saved in PMML format ### <!-- <ul> --> * The only available deployment type for models that are imported from PMML is online deployment\. * The PMML file must have the `.xml` file extension\. 
* PMML models cannot be used in an SPSS stream flow\. * The PMML file must not contain a prolog\. Depending on the library that you are using when you save your model, a prolog might be added to the beginning of the file by default\. For example, if your file contains a prolog string such as `spark-mllib-lr-model-pmml.xml`, remove the string before you import the PMML file to the deployment space\. <!-- </ul> --> Depending on the library that you are using when you save your model, a prolog might be added to the beginning of the file by default, like in this example: :::::::::::::: spark-mllib-lr-model-pmml.xml :::::::::::::: You must remove that prolog before you can import the PMML file to Watson Machine Learning\. ### Spark MLlib models ### <!-- <ul> --> * Only classification and regression models are available\. * Custom transformers, user\-defined functions, and classes are not available\. <!-- </ul> --> ### Scikit\-learn models ### <!-- <ul> --> * `.pkl` and `.pickle` are the available import formats\. * To serialize or pickle the model, use the `joblib` package\. * Only classification and regression models are available\. * Pandas Dataframe input type for `predict()` API is not available\. * The only available deployment type for scikit\-learn models is online deployment\. <!-- </ul> --> ### XGBoost models ### <!-- <ul> --> * `.pkl` and `.pickle` are the available import formats\. * To serialize or pickle the model, use the `joblib` package\. * Only classification and regression models are available\. * Pandas Dataframe input type for `predict()` API is not available\. * The only available deployment type for XGBoost models is online deployment\. <!-- </ul> --> ### TensorFlow models ### <!-- <ul> --> * `.pb`, `.h5`, and `.hdf5` are the available import formats\. * To save or serialize a TensorFlow model, use the `tf.saved_model.save()` method\. * `tf.estimator` is not available\. * The only available deployment types for TensorFlow models are: online deployment and batch deployment\. <!-- </ul> --> ### PyTorch models ### <!-- <ul> --> * The only available deployment type for PyTorch models is online deployment\. * For a Pytorch model to be importable to Watson Machine Learning, it must be previously exported to `.onnx` format\. Refer to this code\. torch.onnx.export(<model object>, <prediction/training input data>, "<serialized model>.onnx", verbose=True, input_names=<input tensor names>, output_names=<output tensor names>) <!-- </ul> --> ## Storing a model in your Watson Machine Learning repository ## Use this code to store your model in your Watson Machine Learning repository: from ibm_watson_machine_learning import APIClient client = APIClient(<your credentials>) sw_spec_uid = client.software_specifications.get_uid_by_name("<software specification name>") meta_props = { client.repository.ModelMetaNames.NAME: "<your model name>", client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_uid, client.repository.ModelMetaNames.TYPE: "<model type>"} client.repository.store_model(model=<your model>, meta_props=meta_props) **Notes**: <!-- <ul> --> * Depending on the model framework used, `<your model>` can be the actual model object, a full path to a saved model file, or a path to a directory where the model file is located\. For more information, see [Available ways to import models, per framework type](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=en#supported-formats)\. 
* For a list of available software specifications to use as `<software specification name>`, use the `client.software_specifications.list()` method\. * For a list of available model types to use as `model_type`, refer to [Software specifications and hardware specifications for deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html)\. * When you export a Pytorch model to the `.onnx` format, specify the `keep_initializers_as_inputs=True` flag and set `opset_version` to 9 (Watson Machine Learning deployments use the `caffe2` ONNX runtime that doesn't support opset versions higher than 9)\. torch.onnx.export(net, x, 'lin_reg1.onnx', verbose=True, keep_initializers_as_inputs=True, opset_version=9) * To learn more about how to create the `<your credentials>` dictionary, refer to [Watson Machine Learning authentication](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html)\. <!-- </ul> --> **Parent topic:**[Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html) <!-- </article "role="article" "> -->
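Because the notes above require `keep_initializers_as_inputs=True` and `opset_version=9` for PyTorch models, a concrete, runnable sketch of the export step can be useful. The toy linear model, the tensor names, and the file name below are illustrative only.

```python
# Minimal sketch: export a PyTorch model to ONNX with the flags required by this topic.
# The model, input tensor, tensor names, and file name are illustrative placeholders.
import torch

net = torch.nn.Linear(3, 1)        # toy model standing in for your trained network
x = torch.randn(1, 3)              # example input with the shape the model expects

torch.onnx.export(
    net, x, "lin_reg1.onnx",
    verbose=True,
    input_names=["features"],
    output_names=["prediction"],
    keep_initializers_as_inputs=True,
    opset_version=9,
)
# The resulting .onnx file can then be stored in the repository with
# client.repository.store_model(), as shown in the preceding section.
```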
E008266C010ADFEF841C513AE7BCB91436F9AE9C
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-frame-and-specs.html?context=cdpaas&locale=en
Frameworks and software specifications in Watson Machine Learning
Frameworks and software specifications in Watson Machine Learning You can use popular tools, libraries, and frameworks to train and deploy your machine learning models and functions. Overview of software specifications Software specifications define the programming language and version that you use for building a model or a function. You can use software specifications to configure the software that is used for running your models and functions. You can also define the software version to be used and include your own extensions. For example, you can use conda .yml files or custom libraries. Supported frameworks and software specifications You can use predefined tools, libraries, and frameworks to train and deploy your machine learning models and functions. Examples of supported frameworks include Scikit-learn, TensorFlow, and more. For more information, see [Supported deployment frameworks and software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html). ![Frameworks and software specifications for model deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/frameworks-software-specs.png) Managing outdated frameworks and software specifications Update software specifications and frameworks in your models when they become outdated. Sometimes, you can seamlessly update your assets. In other cases, you must retrain or redeploy your assets. For more information, see [Managing outdated software specifications or frameworks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html). Parent topic:[Deploying assets with Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-overview.html)
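As a quick way to see which predefined software specifications your service instance offers, you can query them from the ibm_watson_machine_learning Python client. The credentials below are placeholders, and the tensorflow_rt22.1-py3.9 name is taken from the API example used elsewhere in this documentation.

```python
# Minimal sketch: inspect available software specifications with the Python client.
# Credentials are placeholders; the specification name is an example value.
from ibm_watson_machine_learning import APIClient

wml_credentials = {"url": "<service url>", "apikey": "<api key>"}   # placeholder credentials
client = APIClient(wml_credentials)

# List the predefined software specifications available to your service instance
client.software_specifications.list()

# Resolve the ID of a specification by name so that it can be attached to a model
sw_spec_uid = client.software_specifications.get_uid_by_name("tensorflow_rt22.1-py3.9")
print(sw_spec_uid)
```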
# Frameworks and software specifications in Watson Machine Learning # You can use popular tools, libraries, and frameworks to train and deploy your machine learning models and functions\. ## Overview of software specifications ## Software specifications define the programming language and version that you use for building a model or a function\. You can use software specifications to configure the software that is used for running your models and functions\. You can also define the software version to be used and include your own extensions\. For example, you can use conda \.yml files or custom libraries\. ## Supported frameworks and software specifications ## You can use predefined tools, libraries, and frameworks to train and deploy your machine learning models and functions\. Examples of supported frameworks include Scikit\-learn, TensorFlow, and more\. For more information, see [Supported deployment frameworks and software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html)\. ![Frameworks and software specifications for model deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/frameworks-software-specs.png) ## Managing outdated frameworks and software specifications ## Update software specifications and frameworks in your models when they become outdated\. Sometimes, you can seamlessly update your assets\. In other cases, you must retrain or redeploy your assets\. For more information, see [Managing outdated software specifications or frameworks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html)\. **Parent topic:**[Deploying assets with Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-overview.html) <!-- </article "role="article" "> -->
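Because software specifications can be extended with conda .yml files, a custom specification is typically built from a base specification plus a package extension. The sketch below follows that pattern with the ibm_watson_machine_learning Python client; the metadata property names, the base specification name, and the environment.yml file are assumptions to verify against the Python client documentation rather than values defined in this topic.

```python
# Hedged sketch: create a custom software specification that layers a conda .yml
# package extension on top of a base specification. All names, IDs, and the YAML
# file path are placeholders; verify the metadata property names against the
# Python client documentation.
from ibm_watson_machine_learning import APIClient

wml_credentials = {"url": "<service url>", "apikey": "<api key>"}   # placeholder credentials
client = APIClient(wml_credentials)

# 1. Store the conda .yml file as a package extension
pkg_meta = {
    client.package_extensions.ConfigurationMetaNames.NAME: "my-conda-extension",
    client.package_extensions.ConfigurationMetaNames.TYPE: "conda_yml",
}
pkg_details = client.package_extensions.store(meta_props=pkg_meta, file_path="environment.yml")
pkg_uid = client.package_extensions.get_uid(pkg_details)

# 2. Create a custom specification from a base specification and attach the extension
base_uid = client.software_specifications.get_uid_by_name("runtime-22.1-py3.9")   # example base
spec_meta = {
    client.software_specifications.ConfigurationMetaNames.NAME: "my-custom-spec",
    client.software_specifications.ConfigurationMetaNames.BASE_SOFTWARE_SPECIFICATION: {"guid": base_uid},
}
spec_details = client.software_specifications.store(meta_props=spec_meta)
spec_uid = client.software_specifications.get_uid(spec_details)
client.software_specifications.add_package_extension(spec_uid, pkg_uid)
```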
29A9834843B2D6E7417C09A5385B83BCB13D814C
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=en
Managing outdated software specifications or frameworks
Managing outdated software specifications or frameworks Use these guidelines when you are updating assets that refer to outdated software specifications or frameworks. In some cases, asset update is seamless. In other cases, you must retrain or redeploy the assets. For general guidelines, refer to [Migrating assets that refer to discontinued software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=endiscont-soft-spec) or [Migrating assets that refer to discontinued framework versions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=endiscont-framewrk). For more information, see the following sections: * [Updating software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupdate-soft-specs) * [Updating a machine learning model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupgrade-model) * [Updating a Python function](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupgr-function) * [Retraining an SPSS Modeler flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enretrain-spss) Managing assets that refer to discontinued software specifications * During migration, assets that refer to the discontinued software specification are mapped to a comparable-supported default software specification (only in cases where the model type is still supported). * When you create new deployments of the migrated assets, the updated software specification in the asset metadata is used. * Existing deployments of the migrated assets are updated to use the new software specification. If deployment or scoring fails due to framework or library version incompatibilities, follow the instructions in [Updating software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupdate-soft-specs). If the problem persists, follow the steps that are listed in [Updating a machine learning model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupgrade-model). Migrating assets that refer to discontinued framework versions * During migration, model types are not be updated. You must manually update this data. For more information, see [Updating a machine learning model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupgrade-model). * After migration, the existing deployments are removed and new deployments for the deprecated framework are not allowed. Updating software specifications You can update software specifications from the UI or by using the API. For more information, see the following sections: * [Updating software specifications from the UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupdate-soft-specs-ui) * [Updating software specifications by using the API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupdate-soft-specs-api) Updating software specifications from the UI 1. From the deployment space, click the model (make sure it does not have any active deployments.) 2. 
Click the i symbol to check model details. 3. Use the dropdown list to update the software specification. Refer to the example image: ![Updating software specifications through the UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/update-software-spec-via-ui.png) Updating software specifications by using the API You can update a software specification by using the API Patch command: For software_spec field, type /software_spec. For value field, use either the ID or the name of the new software specification. Refer to this example: curl -X PATCH '<deployment endpoint url>/ml/v4/models/6f01d512-fe0f-41cd-9a52-1e200c525c84?space_id=f2ddb8ce-7b10-4846-9ab0-62454a449802&project_id=<project_id>&version=<YYYY-MM-DD>' n--data-raw '[ { "op":"replace", "path":"/software_spec", "value":{ "id":"6f01d512-fe0f-41cd-9a52-1e200c525c84" // or "name":"tensorflow_rt22.1-py3.9" } } ]' For more information, see [Updating an asset by using the Patch API command](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.htmlupdate-asset-api). Updating a machine learning model Follow these steps to update a model built with a deprecated framework. Option 1: Save the model with a compatible framework 1. Download the model by using either the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) or the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/). The following example shows how to download your model: client.repository.download(<model-id>, filename="xyz.tar.gz") 2. Edit model metadata with the model type and version that is supported in the current release. For more information, see [Software specifications and hardware specifications for deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html). The following example shows how to edit model metadata: model_metadata = { client.repository.ModelMetaNames.NAME: "example model", client.repository.ModelMetaNames.DESCRIPTION: "example description", client.repository.ModelMetaNames.TYPE: "<new model type>", client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: client.software_specifications.get_uid_by_name("<new software specification name>") } 3. Save the model to the Watson Machine Learning repository. The following example shows how to save the model to the repository: model_details = client.repository.store_model(model="xyz.tar.gz", meta_props=model_metadata) 4. Deploy the model. 5. Score the model to generate predictions. If deployment or scoring fails, the model is not compatible with the new version that was used for saving the model. In this case, use [Option 2](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enretrain-option2). Option 2: Retrain the model with a compatible framework 1. Retrain the model with a model type and version that is supported in the current version. 2. Save the model with the supported model type and version. 3. Deploy and score the model. It is also possible to update a model by using the API. For more information, see [Updating an asset by using the Patch API command](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.htmlupdate-asset-api). Updating a Python function Follow these steps to update a Python function built with a deprecated framework. Option 1: Save the Python function with a compatible runtime or software specification 1. 
Download the Python function by using either the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) or the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/). 2. Save the Python function with a supported runtime or software specification version. For more information, see [Software specifications and hardware specifications for deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html). 3. Deploy the Python function. 4. Score the Python function to generate predictions. If your Python function fails during scoring, the function is not compatible with the new runtime or software specification version that was used for saving the Python function. In this case, use [Option 2](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enmodify-option2). Option 2: Modify the function code and save it with a compatible runtime or software specification 1. Modify the Python function code to make it compatible with the new runtime or software specification version. In some cases, you must update dependent libraries that are installed within the Python function code. 2. Save the Python function with the new runtime or software specification version. 3. Deploy and score the Python function. It is also possible to update a function by using the API. For more information, see [Updating an asset by using the Patch API command](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.htmlupdate-asset-api). Retraining an SPSS Modeler flow Some models that were built with SPSS Modeler in IBM Watson Studio Cloud before 1 September 2020 can no longer be deployed by using Watson Machine Learning. This problem is caused by an upgrade of the Python version in supported SPSS Modeler runtimes. If you're using one of the following six nodes in your SPSS Modeler flow, you must rebuild and redeploy your models with SPSS Modeler and Watson Machine Learning: * XGBoost Tree * XGBoost Linear * One-Class SVM * HDBSCAN * KDE Modeling * Gaussian Mixture To retrain your SPSS Modeler flow, follow these steps: * If you're using the Watson Studio user interface, open the SPSS Modeler flow in Watson Studio, retrain, and save the model to Watson Machine Learning. After you save the model to the project, you can promote it to a deployment space and create a new deployment. * If you're using [REST API](https://cloud.ibm.com/apidocs/machine-learning) or [Python client](https://ibm.github.io/watson-machine-learning-sdk/), retrain the model by using SPSS Modeler and save the model to the Watson Machine Learning repository with the model type spss-modeler-18.2. Parent topic:[Frameworks and software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-frame-and-specs.html)
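The Updating software specifications by using the API section above shows the patch as a curl command. The following hedged sketch issues the same request from Python with the requests library; the endpoint URL, model and space IDs, bearer token, and version date are placeholders, and the payload mirrors the curl example.

```python
# Hedged sketch: patch the software_spec of a stored model over the REST API.
# URL, IDs, token, and version date are placeholders; the payload mirrors the
# curl example in this topic.
import requests

url = (
    "<deployment endpoint url>/ml/v4/models/<model id>"
    "?space_id=<space id>&version=<YYYY-MM-DD>"
)
headers = {
    "Authorization": "Bearer <IAM token>",   # obtain a token as described in Watson Machine Learning authentication
    "Content-Type": "application/json",
}
payload = [
    {
        "op": "replace",
        "path": "/software_spec",
        "value": {"name": "tensorflow_rt22.1-py3.9"},   # or {"id": "<software specification id>"}
    }
]
response = requests.patch(url, headers=headers, json=payload)
print(response.status_code, response.json())
```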
# Managing outdated software specifications or frameworks # Use these guidelines when you are updating assets that refer to outdated software specifications or frameworks\. In some cases, asset update is seamless\. In other cases, you must retrain or redeploy the assets\. For general guidelines, refer to [Migrating assets that refer to discontinued software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=en#discont-soft-spec) or [Migrating assets that refer to discontinued framework versions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=en#discont-framewrk)\. For more information, see the following sections: <!-- <ul> --> * [Updating software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=en#update-soft-specs) * [Updating a machine learning model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=en#upgrade-model) * [Updating a Python function](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=en#upgr-function) * [Retraining an SPSS Modeler flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=en#retrain-spss) <!-- </ul> --> ## Managing assets that refer to discontinued software specifications ## <!-- <ul> --> * During migration, assets that refer to the discontinued software specification are mapped to a comparable\-supported default software specification (only in cases where the model type is still supported)\. * When you create new deployments of the migrated assets, the updated software specification in the asset metadata is used\. * Existing deployments of the migrated assets are updated to use the new software specification\. If deployment or scoring fails due to framework or library version incompatibilities, follow the instructions in [Updating software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=en#update-soft-specs)\. If the problem persists, follow the steps that are listed in [Updating a machine learning model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=en#upgrade-model)\. <!-- </ul> --> ## Migrating assets that refer to discontinued framework versions ## <!-- <ul> --> * During migration, model types are not be updated\. You must manually update this data\. For more information, see [Updating a machine learning model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=en#upgrade-model)\. * After migration, the existing deployments are removed and new deployments for the deprecated framework are not allowed\. <!-- </ul> --> ## Updating software specifications ## You can update software specifications from the UI or by using the API\. 
For more information, see the following sections: <!-- <ul> --> * [Updating software specifications from the UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=en#update-soft-specs-ui) * [Updating software specifications by using the API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=en#update-soft-specs-api) <!-- </ul> --> ### Updating software specifications from the UI ### <!-- <ol> --> 1. From the deployment space, click the model (make sure it does not have any active deployments\.) 2. Click the `i` symbol to check model details\. 3. Use the dropdown list to update the software specification\. <!-- </ol> --> Refer to the example image: ![Updating software specifications through the UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/update-software-spec-via-ui.png) ### Updating software specifications by using the API ### You can update a software specification by using the API Patch command: For `software_spec` field, type `/software_spec`\. For `value` field, use either the ID or the name of the new software specification\. Refer to this example: curl -X PATCH '<deployment endpoint url>/ml/v4/models/6f01d512-fe0f-41cd-9a52-1e200c525c84?space_id=f2ddb8ce-7b10-4846-9ab0-62454a449802&project_id=<project_id>&version=<YYYY-MM-DD>' \n--data-raw '[ { "op":"replace", "path":"/software_spec", "value":{ "id":"6f01d512-fe0f-41cd-9a52-1e200c525c84" // or "name":"tensorflow_rt22.1-py3.9" } } ]' For more information, see [Updating an asset by using the Patch API command](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html#update-asset-api)\. ## Updating a machine learning model ## Follow these steps to update a model built with a deprecated framework\. ### Option 1: Save the model with a compatible framework ### <!-- <ol> --> 1. Download the model by using either the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) or the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/)\. The following example shows how to download your model: client.repository.download(<model-id>, filename="xyz.tar.gz") 2. Edit model metadata with the model type and version that is supported in the current release\. For more information, see [Software specifications and hardware specifications for deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html)\. The following example shows how to edit model metadata: model_metadata = { client.repository.ModelMetaNames.NAME: "example model", client.repository.ModelMetaNames.DESCRIPTION: "example description", client.repository.ModelMetaNames.TYPE: "<new model type>", client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: client.software_specifications.get_uid_by_name("<new software specification name>") } 3. Save the model to the Watson Machine Learning repository\. The following example shows how to save the model to the repository: model_details = client.repository.store_model(model="xyz.tar.gz", meta_props=model_metadata) 4. Deploy the model\. 5. Score the model to generate predictions\. <!-- </ol> --> If deployment or scoring fails, the model is not compatible with the new version that was used for saving the model\. 
In this case, use [Option 2](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=en#retrain-option2)\. ### Option 2: Retrain the model with a compatible framework ### <!-- <ol> --> 1. Retrain the model with a model type and version that is supported in the current version\. 2. Save the model with the supported model type and version\. 3. Deploy and score the model\. <!-- </ol> --> It is also possible to update a model by using the API\. For more information, see [Updating an asset by using the Patch API command](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html#update-asset-api)\. ## Updating a Python function ## Follow these steps to update a Python function built with a deprecated framework\. ### Option 1: Save the Python function with a compatible runtime or software specification ### <!-- <ol> --> 1. Download the Python function by using either the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) or the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/)\. 2. Save the Python function with a supported runtime or software specification version\. For more information, see [Software specifications and hardware specifications for deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html)\. 3. Deploy the Python function\. 4. Score the Python function to generate predictions\. <!-- </ol> --> If your Python function fails during scoring, the function is not compatible with the new runtime or software specification version that was used for saving the Python function\. In this case, use [Option 2](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=en#modify-option2)\. ### Option 2: Modify the function code and save it with a compatible runtime or software specification ### <!-- <ol> --> 1. Modify the Python function code to make it compatible with the new runtime or software specification version\. In some cases, you must update dependent libraries that are installed within the Python function code\. 2. Save the Python function with the new runtime or software specification version\. 3. Deploy and score the Python function\. <!-- </ol> --> It is also possible to update a function by using the API\. For more information, see [Updating an asset by using the Patch API command](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html#update-asset-api)\. ## Retraining an SPSS Modeler flow ## Some models that were built with SPSS Modeler in IBM Watson Studio Cloud before 1 September 2020 can no longer be deployed by using Watson Machine Learning\. This problem is caused by an upgrade of the Python version in supported SPSS Modeler runtimes\. If you're using one of the following six nodes in your SPSS Modeler flow, you must rebuild and redeploy your models with SPSS Modeler and Watson Machine Learning: <!-- <ul> --> * XGBoost Tree * XGBoost Linear * One\-Class SVM * HDBSCAN * KDE Modeling * Gaussian Mixture <!-- </ul> --> To retrain your SPSS Modeler flow, follow these steps: <!-- <ul> --> * If you're using the Watson Studio user interface, open the SPSS Modeler flow in Watson Studio, retrain, and save the model to Watson Machine Learning\. After you save the model to the project, you can promote it to a deployment space and create a new deployment\. 
* If you're using [REST API](https://cloud.ibm.com/apidocs/machine-learning) or [Python client](https://ibm.github.io/watson-machine-learning-sdk/), retrain the model by using SPSS Modeler and save the model to the Watson Machine Learning repository with the model type `spss-modeler-18.2`\. <!-- </ul> --> **Parent topic:**[Frameworks and software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-frame-and-specs.html) <!-- </article "role="article" "> -->
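Option 1 for updating a model ends with deploying and scoring the re-stored model. The following hedged sketch shows those two steps with the ibm_watson_machine_learning Python client; the space ID, model ID, deployment name, and scoring payload fields and values are illustrative placeholders.

```python
# Hedged sketch: deploy a re-stored model online and score it once.
# Space ID, model ID, deployment name, and the scoring payload are placeholders.
from ibm_watson_machine_learning import APIClient

wml_credentials = {"url": "<service url>", "apikey": "<api key>"}   # placeholder credentials
client = APIClient(wml_credentials)
client.set.default_space("<space id>")                              # work against the deployment space

deployment = client.deployments.create(
    artifact_uid="<model id>",                                      # ID returned when you re-stored the model
    meta_props={
        client.deployments.ConfigurationMetaNames.NAME: "migrated model deployment",
        client.deployments.ConfigurationMetaNames.ONLINE: {},
    },
)
deployment_uid = client.deployments.get_uid(deployment)

scoring_payload = {
    client.deployments.ScoringMetaNames.INPUT_DATA: [
        {"fields": ["<feature 1>", "<feature 2>"], "values": [[1.0, 2.0]]}
    ]
}
print(client.deployments.score(deployment_uid, scoring_payload))
```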
6F51A9033343574AEE2D292CB23F09D542456389
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-model-tracking.html?context=cdpaas&locale=en
Enabling model tracking with AI factsheets
Enabling model tracking with AI factsheets If your organization is using AI Factsheets as part of an AI governance strategy, you can track models after adding them to a space. Tracking a model populates a factsheet in an associated model use case. The model use cases are maintained in a model inventory in a catalog, providing a way for all stakeholders to view the lifecycle details for a machine learning model. From the inventory, collaborators can view the details for a model as it moves through the model lifecycle, including the request, development, deployment, and evaluation of the model. To enable model tracking by using AI Factsheets: 1. From the asset list in your space, click a model name and then click the Model details tab. 2. Click Track this model. 3. Associate the model with an existing model use case in the inventory or create a new use case. 4. Specify the details for the new use case, including specifying a catalog if you have access to more than one, and save to register the model. A link to the model inventory is added to the model details page. 5. Click the link to open the model use case in the inventory. 6. Optional: update the model use case. For example, add tags, supporting documentation, or other details.
# Enabling model tracking with AI factsheets # If your organization is using AI Factsheets as part of an AI governance strategy, you can track models after adding them to a space\. Tracking a model populates a factsheet in an associated model use case\. The model use cases are maintained in a model inventory in a catalog, providing a way for all stakeholders to view the lifecycle details for a machine learning model\. From the inventory, collaborators can view the details for a model as it moves through the model lifecycle, including the request, development, deployment, and evaluation of the model\. To enable model tracking by using AI Factsheets: <!-- <ol> --> 1. From the asset list in your space, click a model name and then click the **Model details** tab\. 2. Click **Track this model**\. 3. Associate the model with an existing model use case in the inventory or create a new use case\. 4. Specify the details for the new use case, including specifying a catalog if you have access to more than one, and save to register the model\. A link to the model inventory is added to the model details page\. 5. Click the link to open the model use case in the inventory\. 6. Optional: update the model use case\. For example, add tags, supporting documentation, or other details\. <!-- </ol> --> <!-- </article "role="article" "> -->
035EF4A1D7C465E8A72ACC1C5C98198B4E95068B
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-conditions.html?context=cdpaas&locale=en
Adding conditions to the pipeline
Adding conditions to the pipeline Add conditions to a pipeline to handle various scenarios. Configuring conditions for the pipeline As you create a pipeline, you can specify conditions that must be met before you run the pipeline. For example, you can set a condition that the output from a node must satisfy a particular condition before you proceed with the pipeline execution. To define a condition: 1. Hover over the link between two nodes. 2. Click Add condition. 3. Choose the type of condition: * [Condition Response](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-conditions.html?context=cdpaas&locale=ennode) checks a condition on the status of the previous node. * [Simple condition](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-conditions.html?context=cdpaas&locale=ensimple) is a no-code condition in the form of an if-then statement. * [Advanced condition](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-conditions.html?context=cdpaas&locale=enadvanced) Advanced condition uses expression code, providing the most features and flexibility. 4. Define and save your expression. ![Defining a condition](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipelines_adding_condition.gif) When you define your expression, a summary captures the condition and the expected result. For example: If Run AutoAI is Successful, then Create deployment node. When you return to the flow, you see an indicator that you defined a condition. Hover over the icon to edit or delete the condition. ![Viewing a successful condition](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/oflow-condition1.png) Configuring a condition based on node status If you select Condition Response as your condition type, the previous node status must satisfy at least one of these conditions to continue with the flow: * Completed - the node activity is completed without error. * Completed with warnings - the node activity is completed but with warnings. * Completed with errors - the node activity is completed, but with errors. * Failed - the node activity failed to complete. * Cancelled - the previous action or activity was canceled. Configuring a simple condition To configure a simple condition, choose the condition that must be satisfied to continue with the flow. 1. Optional: edit the default name. 2. Depending on the node, choose a variable from the drop-down options. For example, if you are creating a condition based on a Run AutoAI node, you can choose Model metric as the variable to base your condition on. 3. Based on the variable, choose an operator from: Equal to, Not equal to, Greater than, Less than, Greater than or equal to, Less than or equal to. 4. Specify the required value. For example, if you are basing a condition on an AutoAI metric, specify a list of values that consists of the available metrics. 5. Optional: click the plus icon to add an And (all conditions must be met) or an Or (either condition must be met) to the expression to build a compound conditional statement. 6. Review the summary and save the condition. Configuring an advanced condition Use coding constructs to build a more complex condition. The next node runs when the condition is met. You build the advanced condition by using the expression builder. 1. Optional: edit the default name. 2. Add items from the Expression elements panel to the Expression canvas to build your condition. 
You can also type your conditions and the elements autocomplete. 3. When your expression is complete, review the summary and save the condition. Learn more For more information on using the code editor to build an expression, see: * [Functions used in pipelines Expression Builder](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-expr-builder.html) Parent topic:[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html)
# Adding conditions to the pipeline # Add conditions to a pipeline to handle various scenarios\. ## Configuring conditions for the pipeline ## As you create a pipeline, you can specify conditions that must be met before you run the pipeline\. For example, you can set a condition that the output from a node must satisfy a particular condition before you proceed with the pipeline execution\. To define a condition: <!-- <ol> --> 1. Hover over the link between two nodes\. 2. Click **Add condition**\. 3. Choose the type of condition: <!-- <ul> --> * [Condition Response](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-conditions.html?context=cdpaas&locale=en#node) checks a condition on the status of the previous node. * [Simple condition](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-conditions.html?context=cdpaas&locale=en#simple) is a no-code condition in the form of an if-then statement. * [Advanced condition](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-conditions.html?context=cdpaas&locale=en#advanced) Advanced condition uses expression code, providing the most features and flexibility. <!-- </ul> --> 4. Define and save your expression\. <!-- </ol> --> ![Defining a condition](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipelines_adding_condition.gif) When you define your expression, a summary captures the condition and the expected result\. For example: If **Run AutoAI** is **Successful**, then **Create deployment node**\. When you return to the flow, you see an indicator that you defined a condition\. Hover over the icon to edit or delete the condition\. ![Viewing a successful condition](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/oflow-condition1.png) ## Configuring a condition based on node status ## If you select **Condition Response** as your condition type, the previous node status must satisfy at least one of these conditions to continue with the flow: <!-- <ul> --> * Completed \- the node activity is completed without error\. * Completed with warnings \- the node activity is completed but with warnings\. * Completed with errors \- the node activity is completed, but with errors\. * Failed \- the node activity failed to complete\. * Cancelled \- the previous action or activity was canceled\. <!-- </ul> --> ## Configuring a simple condition ## To configure a simple condition, choose the condition that must be satisfied to continue with the flow\. <!-- <ol> --> 1. *Optional:* edit the default name\. 2. Depending on the node, choose a variable from the drop\-down options\. For example, if you are creating a condition based on a Run AutoAI node, you can choose Model metric as the variable to base your condition on\. 3. Based on the variable, choose an operator from: Equal to, Not equal to, Greater than, Less than, Greater than or equal to, Less than or equal to\. 4. Specify the required value\. For example, if you are basing a condition on an AutoAI metric, specify a list of values that consists of the available metrics\. 5. *Optional:* click the plus icon to add an **And** (all conditions must be met) or an **Or** (either condition must be met) to the expression to build a compound conditional statement\. 6. Review the summary and save the condition\. <!-- </ol> --> ## Configuring an advanced condition ## Use coding constructs to build a more complex condition\. The next node runs when the condition is met\. 
You build the advanced condition by using the expression builder\. <!-- <ol> --> 1. *Optional:* edit the default name\. 2. Add items from the **Expression elements** panel to the **Expression** canvas to build your condition\. You can also type your conditions and the elements autocomplete\. 3. When your expression is complete, review the summary and save the condition\. <!-- </ol> --> ### Learn more ### For more information on using the code editor to build an expression, see: <!-- <ul> --> * [Functions used in pipelines Expression Builder](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-expr-builder.html) <!-- </ul> --> **Parent topic:**[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html) <!-- </article "role="article" "> -->
8CF8260D0474AD73D9878CCD361C83102B724733
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.html?context=cdpaas&locale=en
Configuring pipeline nodes
Configuring pipeline nodes Configure the nodes of your pipeline to specify inputs and to create outputs as part of your pipeline. Specifying the workspace scope By default, the scope for a pipeline is the project that contains the pipeline. You can explicitly specify a scope other than the default, to locate an asset used in the pipeline. The scope is the project, catalog, or space that contains the asset. From the user interface, you can browse for the scope. Changing the input mode When you are configuring a node, you can specify any resources that include data and notebooks in various ways. Such as directly entering a name or ID, browsing for an asset, or by using the output from a prior node in the pipeline to populate a field. To see what options are available for a field, click the input icon for the field. Depending on the context, options can include: * Select resource: use the asset browser to find an asset such as a data file. * Assign pipeline parameter: assign a value by using a variable configured with a pipeline parameter. For more information, see [Configuring global objects](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-flow-param.html). * Select from another node: use the output from a node earlier in the pipeline as the value for this field. * Enter the expression: enter code to assign values or identify resources. For more information, see [Coding elements](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-expr-builder.html). Pipeline nodes and parameters Configure the following types of pipeline nodes: Copy nodes Use Copy nodes to add assets to your pipeline or to export pipeline assets. * Copy assets Copy selected assets from a project or space to a nonempty space. You can copy these assets to a space: - AutoAI experiment - Code package job - Connection - Data Refinery flow - Data Refinery job - Data asset - Deployment job - Environment - Function - Job - Model - Notebook - Notebook job - Pipelines job - Script - Script job - SPSS Modeler job #### Input parameters |Parameter|Description| |---|---| |Source assets |Browse or search for the source asset to add to the list. You can also specify an asset with a pipeline parameter, with the output of another node, or by entering the asset ID| |Target|Browse or search for the target space| |Copy mode|Choose how to handle a case where the flow tries to copy an asset and one of the same name exists. One of: ignore, fail, overwrite| #### Output parameters |Parameter|Description| |---|---| |Output assets |List of copied assets| * Export assets Export selected assets from the scope, for example, a project or deployment space. The operation exports all the assets by default. You can limit asset selection by building a list of resources to export. #### Input parameters |Parameter|Description| |---|---| |Assets |Choose Scope to export all exportable items or choose List to create a list of specific items to export| |Source project or space |Name of project or space that contains the assets to export| |Exported file |File location for storing the export file| |Creation mode (optional)|Choose how to handle a case where the flow tries to create an asset and one of the same name exists. One of: ignore, fail, overwrite| #### Output parameters |Parameter|Description| |---|---| |Exported file|Path to exported file| Notes: - If you export a project that contains a notebook, the latest version of the notebook is included in the export file. 
If the Pipeline with the Run notebook job node was configured to use a notebook version other than the latest version, the exported Pipeline is automatically reconfigured to use the latest version when imported. This might produce unexpected results or require some reconfiguration after the import. - If assets are self-contained in the exported project, they are retained when you import them into a new project. Otherwise, some configuration might be required following an import of exported assets. * Import assets Import assets from a ZIP file that contains exported assets. #### Input parameters |Parameter|Description| |---|---| |Path to import target |Browse or search for the assets to import| |Archive file to import |Specify the path to a ZIP file or archive| Notes: After you import a file, paths and references to the imported assets are updated, following these rules: - References to assets from the exported project or space are updated in the new project or space after the import. - If assets from the exported project refer to external assets (included in a different project), the reference to the external asset persists after the import. - If the external asset no longer exists, the parameter is replaced with an empty value and you must reconfigure the field to point to a valid asset. Create nodes Configure the nodes for creating assets in your pipeline. * Create AutoAI experiment Use this node to train an [AutoAI classification or regression experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) and generate model-candidate pipelines. #### Input parameters |Parameter|Description| |---|---| |AutoAI experiment name|Name of the new experiment| |Scope|The project or space where the experiment is created| |Prediction type|The type of model to build: binary classification, multiclass classification, or regression| |Prediction column (label)|The prediction column name| |Positive class (optional)|Specify a positive class for a binary classification experiment| |Training data split ratio (optional)|The percentage of data to hold back from training and use to test the pipelines (float: 0.0 - 1.0)| |Algorithms to include (optional)|Limit the list of estimators to be used (the list depends on the learning type)| |Algorithms to use|Specify the list of estimators to be used (the list depends on the learning type)| |Optimize metric (optional)| The metric used for model ranking| |Hardware specification (optional)|Specify a hardware specification for the experiment| |AutoAI experiment description|Description of the experiment| |AutoAI experiment tags (optional)|Tags to identify the experiment| |Creation mode (optional)|Choose how to handle a case where the pipeline tries to create an experiment and one of the same name exists. One of: ignore, fail, overwrite| #### Output parameters |Parameter|Description| |---|---| |AutoAI experiment|Path to the saved model| * Create AutoAI time series experiment Use this node to train an [AutoAI time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) and generate model-candidate pipelines.
#### Input parameters |Parameter|Description| |---|---| |AutoAI time series experiment name|Name of the new experiment| |Scope|The project or space where the experiment is created| |Prediction columns (label)|The name of one or more prediction columns| |Date/time column (optional)|Name of the date/time column| |Leverage future values of supporting features|Choose "True" to consider supporting (exogenous) features to improve the prediction. For example, include a temperature feature for predicting ice cream sales.| |Supporting features (optional)|Choose supporting features and add them to the list| |Imputation method (optional)|Choose a technique for imputing missing values in a data set| |Imputation threshold (optional)|Specify an upper threshold for the percentage of missing values that can be supplied with the specified imputation method. If the threshold is exceeded, the experiment fails. For example, if you specify that 10% of values can be imputed, and the data set is missing 15% of values, the experiment fails.| |Fill type|Specify how the imputation method fills null values. Choose to supply a mean of all values, a median of all values, or a fill value that you specify.| |Fill value (optional)|If you selected to specify a value for replacing null values, enter the value in this field.| |Final training data set|Choose whether to train final pipelines with just the training data or with training data and holdout data. If you choose training data, the generated notebook includes a cell for retrieving holdout data| |Holdout size (optional)|If you are splitting training data into training and holdout data, specify a percentage of the training data to reserve as holdout data for validating the pipelines. Holdout data does not exceed a third of the data.| |Number of backtests (optional)|Customize the backtests to cross-validate your time series experiment| |Gap length (optional)|Adjust the number of time points between the training data set and validation data set for each backtest. When the parameter value is non-zero, the time series values in the gap are not used to train the experiment or evaluate the current backtest.| |Lookback window (optional)|A parameter that indicates how many previous time series values are used to predict the current time point.| |Forecast window (optional)|The range that you want to predict based on the data in the lookback window.| |Algorithms to include (optional)|Limit the list of estimators to be used (the list depends on the learning type)| |Pipelines to complete|Optionally adjust the number of pipelines to create. More pipelines increase training time and resources.| |Hardware specification (optional)|Specify a hardware specification for the experiment| |AutoAI time series experiment description (optional)|Description of the experiment| |AutoAI experiment tags (optional)|Tags to identify the experiment| |Creation mode (optional)|Choose how to handle a case where the pipeline tries to create an experiment and one of the same name exists. One of: ignore, fail, overwrite| #### Output parameters |Parameter|Description| |---|---| |AutoAI time series experiment|Path to the saved model| * Create batch deployment Use this node to create a batch deployment for a machine learning model.
#### Input parameters |Parameter|Description| |---|---| |ML asset|Name or ID of the machine learning asset to deploy| |New deployment name (optional)|Name of the new job, with optional description and tags| |Creation mode (optional)|How to handle a case where the pipeline tries to create a job and one of the same name exists. One of: ignore, fail, overwrite| |New deployment description (optional)| Description of the deployment| |New deployment tags (optional)| Tags to identify the deployment| |Hardware specification (optional)|Specify a hardware specification for the job| #### Output parameters |Parameter|Description| |---|---| |New deployment| Path of the newly created deployment| * Create data asset Use this node to create a data asset. #### Input parameters |Parameter|Description| |---|---| |File |Path to file in a file storage| |Target scope| Path to the target space or project| |Name (optional)|Name of the data source with optional description, country of origin, and tags| |Description (optional)| Description for the asset| |Origin country (optional)|Origin country for data regulations| |Tags (optional)| Tags to identify assets| |Creation mode|How to handle a case where the pipeline tries to create a job and one of the same name exists. One of: ignore, fail, overwrite| #### Output parameters |Parameter|Description| |---|---| |Data asset|The newly created data asset| * Create deployment space Use this node to create and configure a space that you can use to organize and create deployments. #### Input parameters |Parameter|Description| |---|---| |New space name|Name of the new space with optional description and tags| |New space tags (optional)| Tags to identify the space| |New space COS instance CRN |CRN of the COS service instance| |New space WML instance CRN (optional)|CRN of the Watson Machine Learning service instance| |Creation mode (optional)|How to handle a case where the pipeline tries to create a space and one of the same name exists. One of: ignore, fail, overwrite| |Space description (optional)|Description of the space| #### Output parameters |Parameter|Description| |---|---| |Space|Path of the newly created space| * Create online deployment Use this node to create an online deployment where you can submit test data directly to a web service REST API endpoint. #### Input parameters |Parameter|Description| |---|---| |ML asset|Name or ID of the machine learning asset to deploy| |New deployment name (optional)|Name of the new job, with optional description and tags| |Creation mode (optional)|How to handle a case where the pipeline tries to create a job and one of the same name exists. One of: ignore, fail, overwrite| |New deployment description (optional)| Description of the deployment| |New deployment tags (optional)| Tags to identify the deployment| |Hardware specification (optional)|Specify a hardware specification for the job| #### Output parameters |Parameter|Description| |---|---| |New deployment| Path of the newly created deployment| Wait Use nodes to pause a pipeline until an asset is available in the location that is specified in the path. * Wait for all results Use this node to wait until all results from the previous nodes in the pipeline are available so the pipeline can continue. This node takes no inputs and produces no output. When the results are all available, the pipeline continues automatically. * Wait for any result Use this node to wait until any result from the previous nodes in the pipeline is available so the pipeline can continue. 
Run the downstream nodes as soon as any of the upstream conditions are met. This node takes no inputs and produces no output. When any results are available, the pipeline continues automatically. * Wait for file Wait for an asset to be created or updated in the location that is specified in the path from a job or process earlier in the pipeline. Specify a timeout length to wait for the condition to be met. If 00:00:00 is the specified timeout length, the flow waits indefinitely. #### Input parameters |Parameter|Description| |---|---| |File location|Specify the location in the asset browser where the asset resides. Use the format data_asset/filename where the path is relative to the root. The file must exist and be in the location you specify or the node fails with an error. | |Wait mode| By default, the mode is to wait for the file to appear. You can change the mode to wait for the file to disappear| |Timeout length (optional)|Specify the length of time to wait before you proceed with the pipeline. Use the format hh:mm:ss| |Error policy (optional)| See [Handling errors](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-errors.html)| #### Output parameters |Parameter|Description| |---|---| |Return value|Return value from the node| |Execution status| Returns a value of: Completed, Completed with warnings, Completed with errors, Failed, or Canceled| |Status message| Message associated with the status| Control nodes Control the pipeline by adding error handling and logic. * Loops A Loop is a node in a pipeline that operates like a coded loop. The two types of loops are parallel and sequential. You can use loops when the number of iterations for an operation is dynamic. For example, if you don't know the number of notebooks to process, or you want to choose the number of notebooks at run time, you can use a loop to iterate through the list of notebooks. You can also use a loop to iterate through the output of a node or through elements in a data array. ### Loops in parallel Add a parallel looping construct to the pipeline. A parallel loop runs the iterating nodes independently and possibly simultaneously. For example, to train a machine learning model with a set of hyperparameters to find the best performer, you can use a loop to iterate over a list of hyperparameters to train the notebook variations in parallel. The results can be compared later in the flow to find the best notebook. To see limits on the number of loops you can run simultaneously, see [Limitations](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html#pipeline-issues). #### Input parameters when iterating List types |Parameter|Description| |---|---| |List input| The List input parameter contains two fields: the data type of the list, and the list content that the loop iterates over or a standard link to pipeline input or pipeline output.| |Parallelism |Maximum number of tasks to be run simultaneously. Must be greater than zero| #### Input parameters when iterating String types |Parameter|Description| |---|---| |Text input| Text data that the loop reads from| |Separator| A character used to split the text | |Parallelism (optional)| Maximum number of tasks to be run simultaneously. Must be greater than zero| If the input array element type is JSON or any type that is represented as such, this field might decompose it as a dictionary. Keys are the original element keys and values are the aliases for output names. ### Loops in sequence Add a sequential loop construct to the pipeline.
Loops can iterate over a numeric range, a list, or text with a delimiter. A use case for sequential loops is to try an operation 3 times before you determine whether the operation failed (a comparable Bash retry sketch appears later in this topic). #### Input parameters |Parameter|Description| |---|---| |List input| The List input parameter contains two fields: the data type of the list, and the list content that the loop iterates over or a standard link to pipeline input or pipeline output.| |Text input| Text data that the loop reads from. Specify a character to split the text.| |Range| Specify the start, end, and optional step for a range to iterate over. The default step is 1.| After you configure the loop iterative range, define a subpipeline flow inside the loop to run until the loop is complete. For example, it can invoke a notebook, script, or other flow per iteration. ### Terminate loop In a parallel or sequential loop process flow, you can add a Terminate pipeline node to end the loop process at any time. You must customize the conditions for terminating. Attention: If you use the Terminate loop node, your loop cancels any ongoing tasks and terminates without completing its iteration. * Set user variables Configure a user variable with a key/value pair, then add the list of dynamic variables for this node. For more information on how to create a user variable, see [Configuring global objects](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-flow-param.html). #### Input parameters |Parameter|Description| |---|---| |Name| Enter the name, or key, for the variable| |Input type|Choose Expression or Pipeline parameter as the input type. - For expressions, use the built-in Expression Builder to create a variable that results from a custom expression. - For pipeline parameters, assign a pipeline parameter and use the parameter value as input for the user variable. * Terminate pipeline You can initiate and control the termination of a pipeline with a Terminate pipeline node from the Control category. When the error flow runs, you can optionally specify how to handle notebook or training jobs that were initiated by nodes in the pipeline. You must specify whether to wait for jobs to finish, cancel the jobs and then stop the pipeline, or stop everything without canceling. Specify the options for the Terminate pipeline node. #### Input parameters |Parameter|Description| |---|---| |Terminator mode (optional)| Choose the behavior for the error flow| Terminator mode can be: - Terminate pipeline run and all running jobs stops all jobs and stops the pipeline. - Cancel all running jobs then terminate pipeline cancels any running jobs before stopping the pipeline. - Terminate pipeline run after running jobs finish waits for running jobs to finish, then stops the pipeline. - Terminate pipeline run without stopping jobs stops the pipeline but allows running jobs to continue. Update nodes Use update nodes to replace or update assets to improve performance. For example, if you want to standardize your tags, you can update to replace a tag with a new tag. * Update AutoAI experiment Update the training details for an [AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html).
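For comparison with the sequential-loop retry use case described under Control nodes, the same retry pattern can also be written inside a single Run Bash script node. The following sketch is only illustrative; the command that is retried (./run_step.sh) is a placeholder, not part of the product.

```bash
#!/bin/bash
# Illustrative retry pattern: attempt an operation up to 3 times
# before treating it as failed. ./run_step.sh is a placeholder command.
max_attempts=3

for attempt in $(seq 1 "${max_attempts}"); do
  if ./run_step.sh; then
    echo "Succeeded on attempt ${attempt}"
    exit 0
  fi
  echo "Attempt ${attempt} failed" >&2
done

echo "Operation failed after ${max_attempts} attempts" >&2
exit 1
```

A pipeline-level sequential loop remains the better fit when each attempt is itself a pipeline node, such as a notebook or script job.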
#### Input parameters |Parameter|Description| |---|---| |AutoAI experiment|Path to the project or space where the experiment resides| |AutoAI experiment name (optional)| Name of the experiment to be updated, with optional description and tags| |AutoAI experiment description (optional)|Description of the experiment| |AutoAI experiment tags (optional)|Tags to identify the experiment| #### Output parameters |Parameter|Description| |---|---| |AutoAI experiment|Path of the updated experiment| * Update batch deployment Use these parameters to update a batch deployment. #### Input parameters |Parameter|Description| |---|---| |Deployment| Path to the deployment to be updated| |New name for the deployment (optional)|Name or ID of the deployment to be updated | |New description for the deployment (optional)|Description of the deployment| |New tags for the deployment (optional)| Tags to identify the deployment| |ML asset|Name or ID of the machine learning asset to deploy| |Hardware specification|Update the hardware specification for the job| #### Output parameters |Parameter|Description| |---|---| |Deployment|Path of the updated deployment| * Update deployment space Update the details for a space. #### Input parameters |Parameter|Description| |---|---| |Space|Path of the existing space| |Space name (optional)|Update the space name| |Space description (optional)|Description of the space| |Space tags (optional)|Tags to identify the space| |WML instance (optional)| Specify a new Machine Learning instance. Note: Even if you assign a different name for an instance in the UI, the system name is Machine Learning instance. Differentiate between different instances by using the instance CRN| #### Output parameters |Parameter|Description| |---|---| |Space|Path of the updated space| * Update online deployment Use these parameters to update an online deployment (web service). #### Input parameters |Parameter|Description| |---|---| |Deployment|Path of the existing deployment| |Deployment name (optional)|Update the deployment name| |Deployment description (optional)|Description of the deployment| |Deployment tags (optional)|Tags to identify the deployment| |Asset (optional)|Machine learning asset (or version) to be redeployed| #### Output parameters |Parameter|Description| |---|---| |Deployment|Path of the updated deployment| Delete nodes Configure parameters for delete operations. * Delete You can delete: - AutoAI experiment - Batch deployment - Deployment space - Online deployment For each item, choose the asset for deletion. Run nodes Use these nodes to train an experiment, execute a script, or run a data flow. * Run AutoAI experiment Trains and stores [AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) pipelines and models. #### Input parameters |Parameter|Description| |---|---| |AutoAI experiment|Browse for the ML Pipeline asset or get the experiment from a pipeline parameter or the output from a previous node. | |Training data asset|Browse or search for the data to train the experiment. Note that you can supply data at runtime by using a pipeline parameter| |Holdout data asset (optional)|Optionally choose a separate file to use for holdout data for testing model performance| |Models count (optional)| Specify how many models to save from the best-performing pipelines. The limit is 3 models| |Run name (optional)|Name of the experiment and optional description and tags| |Model name prefix (optional)| Prefix used to name trained models. Defaults to <(experiment name)> | |Run description (optional)| Description of the new training run| |Run tags (optional)| Tags for the new training run| |Creation mode (optional)| Choose how to handle a case where the pipeline flow tries to create an asset and one of the same name exists. One of: ignore, fail, overwrite| |Error policy (optional)| Optionally, override the default error policy for the node| #### Output parameters |Parameter|Description| |---|---| |Models | List of paths of the highest N trained and persisted models (ordered by the selected evaluation metric)| |Best model | Path of the winning model (based on the selected evaluation metric)| |Model metrics | A list of trained model metrics (each item is a nested object with metrics like: holdout_accuracy, holdout_average_precision, ...)| |Winning model metric |Selected evaluation metric of the winning model| |Optimized metric| Metric used to tune the model| |Execution status| Information on the state of the job: pending, starting, running, completed, canceled, or failed with errors| |Status message|Information about the state of the job| * Run Bash script Run an inline Bash script to automate a function or process for the pipeline. You can enter the Bash script code manually, or you can import the Bash script from a resource, pipeline parameter, or the output of another node. You can also use a Bash script to process large output files. For example, you can generate a large, comma-separated list that you can then iterate over using a loop. In the following example, the user entered the inline script code manually. The script uses the cpdctl tool to search all notebooks with a set variable tag and aggregates the results in a JSON list. The list can then be used in another node, such as running the notebooks returned from the search. ![Example of a Bash script node](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipeline-config-4.png){: height="50%" width="50%"} #### Input parameters |Parameter|Description| |---|---| |Inline script code|Enter a Bash script in the inline code editor. Optional: Alternatively, you can select a resource, assign a pipeline parameter, or select from another node. | |Environment variables (optional)| Specify a variable name (the key) and a data type, and add it to the list of variables to use in the script.| |Runtime type (optional)| Select either a standalone runtime (default) or a shared runtime. Use a shared runtime for tasks that require running in shared pods. | |Error policy (optional)| Optionally, override the default error policy for the node| #### Output parameters |Parameter|Description| |---|---| |Output variables |Configure a key/value pair for each custom variable, then click the Add button to populate the list of dynamic variables for the node| |Return value|Return value from the node| |Standard output|Standard output from the script| |Execution status|Information on the state of the job: pending, starting, running, completed, canceled, or failed with errors| |Status message| Message associated with the status| #### Rules for Bash script output The output for a Bash script is often the result of a computed expression and can be large. When you are reviewing the properties for a script with valid large output, you can preview or download the output in a viewer. These rules govern what type of large output is valid.
- The output of a list_expression is a calculated expression, so it is valid as large output. - String output is treated as a literal value rather than a calculated expression, so it must follow the size limits that govern inline expressions. For example, you are warned when a literal value exceeds 1 KB, and values of 2 KB and higher result in an error. #### Referencing a variable in a Bash script The way that you reference a variable in a script depends on whether the variable was created as an input variable or as an output variable. Output variables are created as a file and require a file path in the reference. Specifically: - Input variables are available by using the assigned name. - Output variable names require that _PATH be appended to the variable name, to indicate that values must be written to the output file pointed to by the {output_name}_PATH variable. (A short Bash sketch that shows both rules appears later in this section.) #### Using SSH in Bash scripts The following steps describe how to use ssh to run your remote Bash script. 1. Create a private key and public key. bash ssh-keygen -t rsa -C "XXX" 2. Copy the public key to the remote host. bash ssh-copy-id USER@REMOTE_HOST 3. On the remote host, check whether the public key contents are added into /root/.ssh/authorized_keys. 4. Copy the public and private keys to a new directory in the Run Bash script node. bash mkdir -p $HOME/.ssh # copy private key content echo "-----BEGIN OPENSSH PRIVATE KEY----- ... ... -----END OPENSSH PRIVATE KEY-----" > $HOME/.ssh/id_rsa # copy public key content echo "ssh-rsa ...... " > $HOME/.ssh/id_rsa.pub chmod 400 $HOME/.ssh/id_rsa.pub chmod 400 $HOME/.ssh/id_rsa ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o GlobalKnownHostsFile=/dev/null -i $HOME/.ssh/id_rsa USER@REMOTE_HOST "cd /opt/scripts; ls -l; sh 1.sh" #### Using SSH utilities in Bash scripts The following steps describe how to use sshpass to run your remote Bash script. 1. Put your SSH password file in your system path, such as the mounted storage volume path. 2. Use the SSH password directly in the Run Bash script node: bash cd /mnts/orchestration ls -l sshpass chmod 777 sshpass ./sshpass -p PASSWORD ssh -o StrictHostKeyChecking=no USER@REMOTE_HOST "cd /opt/scripts; ls -l; sh 1.sh" * Run batch deployment Configure this node to run selected deployment jobs. #### Input parameters |Parameter|Description| |---|---| |Deployment|Browse or search for the deployment job | |Input data assets|Specify the data used for the batch job. Restriction: Input for batch deployment jobs is limited to data assets. Deployments that require JSON input or multiple files as input are not supported. For example, SPSS models and Decision Optimization solutions that require multiple files as input are not supported.| |Output asset|Name of the output file for the results of the batch job. You can either select Filename and enter a custom file name, or Data asset and select an existing asset in a space.| |Hardware specification (optional)|Browse for a hardware specification to apply to the job| |Error policy (optional)| Optionally, override the default error policy for the node| #### Output parameters |Parameter|Description| |---|---| |Job|Path to the file with results from the deployment job| |Job run|ID for the job| |Execution status|Information on the state of the job: pending, starting, running, completed, canceled, or failed with errors| |Status message| Information about the state of the job| * Run DataStage job * Run Data Refinery job This node runs a specified Data Refinery job.
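Returning to the Run Bash script node: the following minimal sketch shows the variable-referencing rules described earlier, that is, reading an input variable by its assigned name and writing an output variable to the file that its _PATH variable points to. The variable names SOURCE_DIR and FILE_LIST are hypothetical examples, not predefined names.

```bash
#!/bin/bash
# Minimal sketch for a Run Bash script node (assumed variable names).
set -euo pipefail

# Input variables are available under their assigned names;
# SOURCE_DIR is assumed to be configured as an input variable on the node.
echo "Scanning ${SOURCE_DIR}"

# Build a comma-separated list that a downstream loop can iterate over.
file_list=$(ls "${SOURCE_DIR}"/*.csv | paste -sd "," -)

# Output variables are written to the file that <name>_PATH points to;
# FILE_LIST is assumed to be configured as an output variable on the node.
printf '%s' "${file_list}" > "${FILE_LIST_PATH}"
```

A downstream loop that iterates String types could then consume this output as its Text input, with the Separator set to a comma.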
#### Input parameters |Parameter|Description| |---|---| |Data Refinery job |Path to the Data Refinery job.| |Environment | Path of the environment used to run the job Attention: Leave the environments field as is to use the default runtime. If you choose to override, specify an alternate environment for running the job. Be sure any environment that you specify is compatible with the component language and hardware configuration to avoid a runtime error.| |Error policy (optional)| Optionally, override the default error policy for the node| #### Output parameters |Parameter|Description| |---|---| |Job |Path to the results from the Data Refinery job| |Job run|Information about the job run| |Job name |Name of the job | |Execution status|Information on the state of the flow: pending, starting, running, completed, canceled, or failed with errors| |Status message| Information about the state of the flow| * Run notebook job Use these configuration options to specify how to run a Jupyter Notebook in a pipeline. #### Input parameters |Parameter|Description| |---|---| |Notebook job|Path to the notebook job. | |Environment |Path of the environment used to run the notebook. Attention: Leave the environments field as is to use the default environment. If you choose to override, specify an alternate environment for running the job. Be sure any environment that you specify is compatible with the notebook language and hardware configuration to avoid a runtime error.| |Environment variables (optional)|List of environment variables used to run the notebook job| |Error policy (optional)| Optionally, override the default error policy for the node| Notes: - Environment variables that you define in a pipeline cannot be used for notebook jobs you run outside of Watson Pipelines. - You can run a notebook from a code package in a regular package. #### Output parameters |Parameter|Description| |---|---| |Job |Path to the results from the notebook job| |Job run|Information about the job run| |Job name |Name of the job | |Output variables |Configure a key/value pair for each custom variable, then click Add to populate the list of dynamic variables for the node| |Execution status|Information on the state of the run: pending, starting, running, completed, canceled, or failed with errors| |Status message|Information about the state of the notebook run| * Run Pipelines component Run a reusable pipeline component that is created by using a Python script. For more information, see [Creating a custom component](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-custom-comp.html). - If a pipeline component is available, configuring the node presents a list of available components. - The component that you choose specifies the input and output for the node. - Once you assign a component to a node, you cannot delete or change the component. You must delete the node and create a new one. * Run Pipelines job Add a pipeline to run a nested pipeline job as part of a containing pipeline. This is a way of adding reusable processes to multiple pipelines. You can use the output from a nested pipeline that is run as input for a node in the containing pipeline. #### Input parameters |Parameter|Description| |---|---| |Pipelines job|Select or enter a path to an existing Pipelines job.| |Environment (optional)| Select the environment to run the Pipelines job in, and assign environment resources. Attention: Leave the environments field as is to use the default runtime. 
If you choose to override, specify an alternate environment for running the job. Be sure any environment that you specify is compatible with the component language and hardware configuration to avoid a runtime error.| |Job Run Name (optional) |A default job name is used unless you override it by specifying a custom job name. You can see the job name in the Job Details dashboard.| |Values for local parameters (optional) | Edit the default job parameters. This option is available only if you have local parameters in the job. | |Values from parameter sets (optional) |Edit the parameter sets used by this job. You can choose to use the parameters as defined by default, or use value sets from other pipelines' parameters. | |Error policy (optional)| Optionally, override the default error policy for the node| #### Output parameters |Parameter|Description| |---|---| |Job |Path to the results from the pipeline job| |Job run|Information about the job run| |Job name |Name of the job | |Execution status| Returns a value of: Completed, Completed with warnings, Completed with errors, Failed, or Canceled| |Status message| Message associated with the status| #### Notes for running nested pipeline jobs If you create a pipeline with nested pipelines and run a pipeline job from the top-level, the pipelines are named and saved as project assets that use this convention: - The top-level pipeline job is named "Trial job - pipeline guid". - All subsequent jobs are named "pipeline_ pipeline guid". * Run SPSS Modeler job Use these configuration options to specify how to run an SPSS Modeler in a pipeline. #### Input parameters |Parameter|Description| |---|---| |SPSS Modeler job|Select or enter a path to an existing SPSS Modeler job.| |Environment (optional)| Select the environment to run the SPSS Modeler job in, and assign environment resources. Attention: Leave the environments field as is to use the default SPSS Modeler runtime. If you choose to override, specify an alternate environment for running the job. Be sure any environment that you specify is compatible with the hardware configuration to avoid a runtime error.| |Values for local parameters | Edit the default job parameters. This option is available only if you have local parameters in the job. | |Error policy (optional)| Optionally, override the default error policy for the node| #### Output parameters |Parameter|Description| |---|---| |Job |Path to the results from the pipeline job| |Job run|Information about the job run| |Job name |Name of the job | |Execution status| Returns a value of: Completed, Completed with warnings, Completed with errors, Failed, or Canceled| |Status message| Message associated with the status| Learn more Parent topic:[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html)
# Configuring pipeline nodes # Configure the nodes of your pipeline to specify inputs and to create outputs as part of your pipeline\. ## Specifying the workspace scope ## By default, the scope for a pipeline is the project that contains the pipeline\. You can explicitly specify a scope other than the default, to locate an asset used in the pipeline\. The scope is the project, catalog, or space that contains the asset\. From the user interface, you can browse for the scope\. ## Changing the input mode ## When you are configuring a node, you can specify any resources that include data and notebooks in various ways\. Such as directly entering a name or ID, browsing for an asset, or by using the output from a prior node in the pipeline to populate a field\. To see what options are available for a field, click the input icon for the field\. Depending on the context, options can include: <!-- <ul> --> * Select resource: use the asset browser to find an asset such as a data file\. * Assign pipeline parameter: assign a value by using a variable configured with a pipeline parameter\. For more information, see [Configuring global objects](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-flow-param.html)\. * Select from another node: use the output from a node earlier in the pipeline as the value for this field\. * Enter the expression: enter code to assign values or identify resources\. For more information, see [Coding elements](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-expr-builder.html)\. <!-- </ul> --> ## Pipeline nodes and parameters ## Configure the following types of pipeline nodes: ## Copy nodes ## Use Copy nodes to add assets to your pipeline or to export pipeline assets\. <!-- <ul> --> * Copy assets Copy selected assets from a project or space to a nonempty space. You can copy these assets to a space: - AutoAI experiment - Code package job - Connection - Data Refinery flow - Data Refinery job - Data asset - Deployment job - Environment - Function - Job - Model - Notebook - Notebook job - Pipelines job - Script - Script job - SPSS Modeler job \#\#\#\# Input parameters \|Parameter\|Description\| \|---\|---\| \|Source assets \|Browse or search for the source asset to add to the list. You can also specify an asset with a pipeline parameter, with the output of another node, or by entering the asset ID\| \|Target\|Browse or search for the target space\| \|Copy mode\|Choose how to handle a case where the flow tries to copy an asset and one of the same name exists. One of: `ignore`, `fail`, `overwrite`\| \#\#\#\# Output parameters \|Parameter\|Description\| \|---\|---\| \|Output assets \|List of copied assets\| <!-- </ul> --> <!-- <ul> --> * Export assets Export selected assets from the scope, for example, a project or deployment space. The operation exports all the assets by default. You can limit asset selection by building a list of resources to export. \#\#\#\# Input parameters \|Parameter\|Description\| \|---\|---\| \|Assets \|Choose **Scope** to export all exportable items or choose **List** to create a list of specific items to export\| \|Source project or space \|Name of project or space that contains the assets to export\| \|Exported file \|File location for storing the export file\| \|Creation mode (optional)\|Choose how to handle a case where the flow tries to create an asset and one of the same name exists. 
One of: `ignore`, `fail`, `overwrite`\| \#\#\#\# Output parameters \|Parameter\|Description\| \|---\|---\| \|Exported file\|Path to exported file\| **Notes:** - If you export a project that contains a notebook, the latest version of the notebook is included in the export file. If the Pipeline with the **Run notebook job** node was configured to use a different notebook version other than the latest version, the exported Pipeline is automatically reconfigured to use the latest version when imported. This might produce unexpected results or require some reconfiguration after the import. - If assets are self-contained in the exported project, they are retained when you import a new project. Otherwise, some configuration might be required following an import of exported assets. <!-- </ul> --> <!-- <ul> --> * Import assets Import assets from a ZIP file that contains exported assets. \#\#\#\# Input parameters \|Parameter\|Description\| \|---\|---\| \|Path to import target \|Browse or search for the assets to import\| \|Archive file to import \|Specify the path to a ZIP file or archive\| **Notes:** After you import a file, paths and references to the imported assets are updated, following these rules: - References to assets from the exported project or space are updated in the new project or space after the import. - If assets from the exported project refer to external assets (included in a different project), the reference to the external asset will persist after the import. - If the external asset no longer exists, the parameter is replaced with an empty value and you must reconfigure the field to point to a valid asset. <!-- </ul> --> ## Create nodes ## Configure the nodes for creating assets in your pipeline\. <!-- <ul> --> * Create AutoAI experiment Use this node to train an [AutoAI classification or regression experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) and generate model-candidate pipelines. \#\#\#\# Input parameters \|Parameter\|Description\| \|---\|---\| \|AutoAI experiment name\|Name of the new experiment\| \|Scope\|A project or a space, where the experiment is going to be created\| \|Prediction type\|The type of model for the following data: binary, classification, or regression\| \|Prediction column (label)\|The prediction column name\| \|Positive class (optional)\|Specify a positive class for a binary classification experiment\| \|Training data split ratio (optional)\|The percentage of data to hold back from training and use to test the pipelines(float: 0.0 - 1.0)\| \|Algorithms to include (optional)\|Limit the list of estimators to be used (the list depends on the learning type)\| \|Algorithms to use\|Specify the list of estimators to be used (the list depends on the learning type)\| \|Optimize metric (optional)\| The metric used for model ranking\| \|Hardware specification (optional)\|Specify a hardware specification for the experiment\| \|AutoAI experiment description\|Description of the experiment\| \|AutoAI experiment tags (optional)\|Tags to identify the experiment\| \|Creation mode (optional)\|Choose how to handle a case where the pipeline tries to create an experiment and one of the same name exists. 
One of: `ignore`, `fail`, `overwrite`\| \#\#\#\# Output parameters \|Parameter\|Description\| \|---\|---\| \|AutoAI experiment\|Path to the saved model\| <!-- </ul> --> <!-- <ul> --> * Create AutoAI time series experiment Use this node to train an [AutoAI time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) and generate model-candidate pipelines. \#\#\#\# Input parameters \|Parameter\|Description\| \|---\|---\| \|AutoAI time series experiment name\|Name of the new experiment\| \|Scope\|A project or a space, where the pipeline is going to be created\| \|Prediction columns (label)\|The name of one or more prediction columns\| \|Date/time column (optional)\|Name of date/time column\| \|Leverage future values of supporting features\|Choose "True" to enable the consideration for supporting (exogenous) features to improve the prediction. For example, include a temperature feature for predicting ice cream sales.\| \|Supporting features (optional)\|Choose supporting features and add to list\| \|Imputation method (optional)\|Choose a technique for imputing missing values in a data set\| \|Imputation threshold (optional)\|Specify an higher threshold for percentage of missing values to supply with the specified imputation method. If the threshold is exceeded, the experiment fails. For example, if you specify that 10% of values can be imputed, and the data set is missing 15% of values, the experiment fails.\| \|Fill type\|Specify how the specified imputation method fill null values. Choose to supply a mean of all values, and median of all values, or specify a fill value.\| \|Fill value (optional)\|If you selected to sepcify a value for replacing null values, enter the value in this field.\| \|Final training data set\|Choose whether to train final pipelines with just the training data or with training data and holdout data. If you choose training data, the generated notebook includes a cell for retrieving holdout data\| \|Holdout size (optional)\|If you are splitting training data into training and holdout data, specify a percentage of the training data to reserve as holdout data for validating the pipelines. Holdout data does not exceed a third of the data.\| \|Number of backtests (optional)\|Customize the backtests to cross-validate your time series experiment\| \|Gap length (optional)\|Adjust the number of time points between the training data set and validation data set for each backtest. When the parameter value is non-zero, the time series values in the gap is not used to train the experiment or evaluate the current backtest.\| \|Lookback window (optional)\|A parameter that indicates how many previous time series values are used to predict the current time point.\| \|Forecast window (optional)\|The range that you want to predict based on the data in the lookback window.\| \|Algorithms to include (optional)\|Limit the list of estimators to be used (the list depends on the learning type)\| \|Pipelines to complete\|Optionally adjust the number of pipelines to create. More pipelines increase training time and resources.\| \|Hardware specification (optional)\|Specify a hardware specification for the experiment\| \|AutoAI time series experiment description (optional)\|Description of the experiment\| \|AutoAI experiment tags (optional)\|Tags to identify the experiment\| \|Creation mode (optional)\|Choose how to handle a case where the pipeline tries to create an experiment and one of the same name exists. 
One of: `ignore`, `fail`, `overwrite`\| \#\#\#\# Output parameters \|Parameter\|Description\| \|---\|---\| \|AutoAI time series experiment\|Path to the saved model\| <!-- </ul> --> <!-- <ul> --> * Create batch deployment Use this node to create a batch deployment for a machine learning model. \#\#\#\# Input parameters \|Parameter\|Description\| \|---\|---\| \|ML asset\|Name or ID of the machine learning asset to deploy\| \|New deployment name (optional)\|Name of the new job, with optional description and tags\| \|Creation mode (optional)\|How to handle a case where the pipeline tries to create a job and one of the same name exists. One of: `ignore`, `fail`, `overwrite`\| \|New deployment description (optional)\| Description of the deployment\| \|New deployment tags (optional)\| Tags to identify the deployment\| \|Hardware specification (optional)\|Specify a hardware specification for the job\| \#\#\#\# Output parameters \|Parameter\|Description\| \|---\|---\| \|New deployment\| Path of the newly created deployment\| <!-- </ul> --> <!-- <ul> --> * Create data asset Use this node to create a data asset. \#\#\#\# Input parameters \|Parameter\|Description\| \|---\|---\| \|File \|Path to file in a file storage\| \|Target scope\| Path to the target space or project\| \|Name (optional)\|Name of the data source with optional description, country of origin, and tags\| \|Description (optional)\| Description for the asset\| \|Origin country (optional)\|Origin country for data regulations\| \|Tags (optional)\| Tags to identify assets\| \|Creation mode\|How to handle a case where the pipeline tries to create a job and one of the same name exists. One of: `ignore`, `fail`, `overwrite`\| \#\#\#\# Output parameters \|Parameter\|Description\| \|---\|---\| \|Data asset\|The newly created data asset\| <!-- </ul> --> <!-- <ul> --> * Create deployment space Use this node to create and configure a space that you can use to organize and create deployments. \#\#\#\# Input parameters \|Parameter\|Description\| \|---\|---\| \|New space name\|Name of the new space with optional description and tags\| \|New space tags (optional)\| Tags to identify the space\| \|New space COS instance CRN \|CRN of the COS service instance\| \|New space WML instance CRN (optional)\|CRN of the Watson Machine Learning service instance\| \|Creation mode (optional)\|How to handle a case where the pipeline tries to create a space and one of the same name exists. One of: `ignore`, `fail`, `overwrite`\| \|Space description (optional)\|Description of the space\| \#\#\#\# Output parameters \|Parameter\|Description\| \|---\|---\| \|Space\|Path of the newly created space\| <!-- </ul> --> <!-- <ul> --> * Create online deployment Use this node to create an online deployment where you can submit test data directly to a web service REST API endpoint. \#\#\#\# Input parameters \|Parameter\|Description\| \|---\|---\| \|ML asset\|Name or ID of the machine learning asset to deploy\| \|New deployment name (optional)\|Name of the new job, with optional description and tags\| \|Creation mode (optional)\|How to handle a case where the pipeline tries to create a job and one of the same name exists. 
One of: `ignore`, `fail`, `overwrite`\| \|New deployment description (optional)\| Description of the deployment\| \|New deployment tags (optional)\| Tags to identify the deployment\| \|Hardware specification (optional)\|Specify a hardware specification for the job\| \#\#\#\# Output parameters \|Parameter\|Description\| \|---\|---\| \|New deployment\| Path of the newly created deployment\| <!-- </ul> --> ## Wait ## Use nodes to pause a pipeline until an asset is available in the location that is specified in the path\. <!-- <ul> --> * Wait for all results Use this node to wait until all results from the previous nodes in the pipeline are available so the pipeline can continue. This node takes no inputs and produces no output. When the results are all available, the pipeline continues automatically. <!-- </ul> --> <!-- <ul> --> * Wait for any result Use this node to wait until any result from the previous nodes in the pipeline is available so the pipeline can continue. Run the downstream nodes as soon as any of the upstream conditions are met. This node takes no inputs and produces no output. When any results are available, the pipeline continues automatically. <!-- </ul> --> <!-- <ul> --> * Wait for file Wait for an asset to be created or updated in the location that is specified in the path from a job or process earlier in the pipeline. Specify a timeout length to wait for the condition to be met. If 00:00:00 is the specified timeout length, the flow waits indefinitely. \#\#\#\# Input parameters \|Parameter\|Description\| \|---\|---\| \|File location\|Specify the location in the asset browser where the asset resides. Use the format `data_asset/filename` where the path is relative to the root. The file must exist and be in the location you specify or the node fails with an error. \| \|Wait mode\| By default the mode is for the file to appear. You can change to waiting for the file to disappear\| \|Timeout length (optional)\|Specify the length of time to wait before you proceed with the pipeline. Use the format `hh:mm:ss`\| \|Error policy (optional)\| See [Handling errors](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-errors.html)\| \#\#\#\# Output parameters \|Parameter\|Description\| \|---\|---\| \|Return value\|Return value from the node\| \|Execution status\| Returns a value of: Completed, Completed with warnings, Completed with errors, Failed, or Canceled\| \|Status message\| Message associated with the status\| <!-- </ul> --> ## Control nodes ## Control the pipeline by adding error handling and logic\. <!-- <ul> --> * Loops Loops are a node in a Pipeline that operates like a coded loop. The two types of loops are parallel and sequential. You can use loops when the number of iterations for an operation is dynamic. For example, if you don't know the number of notebooks to process, or you want to choose the number of notebooks at run time, you can use a loop to iterate through the list of notebooks. You can also use a loop to iterate through the output of a node or through elements in a data array. \#\#\# Loops in parallel Add a parallel looping construct to the pipeline. A parallel loop runs the iterating nodes independently and possibly simultaneously. For example, to train a machine learning model with a set of hyperparameters to find the best performer, you can use a loop to iterate over a list of hyperparameters to train the notebook variations in parallel. The results can be compared later in the flow to find the best notebook. 
To see limits on the number of loops you can run simultaneously, see [Limitations](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html#pipeline-issues). \#\#\#\# Input parameters when iterating List types \|Parameter\|Description\| \|---\|---\| \|List input\| The *List input* parameter contains two fields, the data type of the list and the list content that the loop iterates over or a standard link to pipeline input or pipeline output.\| \|Parallelism \|Maximum number of tasks to be run simultaneously. Must be greater than zero\| \#\#\#\# Input parameters when iterating String types \|Parameter\|Description\| \|---\|---\| \|Text input\| Text data that the loop reads from\| \|Separator\| A char used to split the text \| \|Parallelism (optional)\| Maximum number of tasks to be run simultaneously. Must be greater than zero\| If the input array element type is JSON or any type that is represented as such, this field might decompose it as dictionary. Keys are the original element keys and values are the aliases for output names. \#\#\# Loops in sequence Add a sequential loop construct to the pipeline. Loops can iterate over a numeric range, a list, or text with a delimiter. A use case for sequential loops is if you want to try an operation 3 time before you determine whether an operation failed. \#\#\#\# Input parameters \|Parameter\|Description\| \|---\|---\| \|List input\| The *List input* parameter contains two fields, the data type of the list and the list content that the loop iterates over or a standard link to pipeline input or pipeline output.\| \|Text input\| Text data that the loop reads from. Specify a character to split the text.\| \|Range\| Specify the start, end, and optional step for a range to iterate over. The default step is 1.\| After you configure the loop iterative range, define a subpipeline flow inside the loop to run until the loop is complete. For example, it can invoke notebook, script, or other flow per iteration. \#\#\# Terminate loop In a parallel or sequential loop process flow, you can add a **Terminate pipeline** node to end the loop process anytime. You must customize the conditions for terminating. Attention: If you use the Terminate loop node, your loop cancels any ongoing tasks and terminates without completing its iteration. <!-- </ul> --> <!-- <ul> --> * Set user variables Configure a user variable with a key/value pair, then add the list of dynamic variables for this node. For more information on how to create a user variable, see [Configuring global objects](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-flow-param.html). \#\#\#\# Input parameters x \|Parameter\|Description\| \|---\|---\| \|Name\| Enter the name, or key, for the variable\| \|Input type\|Choose Expression or Pipeline parameter as the input type. - For expressions, use the built-in Expression Builder to create a variable that results from a custom expression. - For pipeline parameters, assign a pipeline parameter and use the parameter value as input for the user variable. <!-- </ul> --> <!-- <ul> --> * Terminate pipeline You can initiate and control the termination of a pipeline with a Terminate pipeline node from the Control category. When the error flow runs, you can optionally specify how to handle notebook or training jobs that were initiated by nodes in the pipeline. You must specify whether to wait for jobs to finish, cancel the jobs then stop the pipeline, or stop everything without canceling. 
Specify the options for the Terminate pipeline node. \#\#\#\# Input parameters \|Parameter\|Description\| \|---\|---\| \|Terminator mode (optional)\| Choose the behavior for the error flow\| Terminator mode can be: - **Terminate pipeline run and all running jobs** stops all jobs and stops the pipeline. - **Cancel all running jobs then terminate pipeline** cancels any running jobs before stopping the pipeline. - **Terminate pipeline run after running jobs finish** waits for running jobs to finish, then stops the pipeline. - **Terminate pipeline that is run without stopping jobs** stops the pipeline but allows running jobs to continue. <!-- </ul> --> ## Update nodes ## Use update nodes to replace or update assets to improve performance\. For example, if you want to standardize your tags, you can update to replace a tag with a new tag\. <!-- <ul> --> * Update AutoAI experiment Update the training details for an [AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html). \#\#\#\# Input parameters \|Parameter\|Description\| \|---\|---\| \|AutoAI experiment\|Path to a project or a space, where the experiment resides\| \|AutoAI experiment name (optional)\| Name of the experiment to be updated, with optional description and tags\| \|AutoAI experiment description (optional)\|Description of the experiment\| \|AutoAI experiment tags (optional)\|Tags to identify the experiment\| \#\#\#\# Output parameters \|Parameter\|Description\| \|---\|---\| \|AutoAI experiment\|Path of the updated experiment\| <!-- </ul> --> <!-- <ul> --> * Update batch deployment Use these parameters to update a batch deployment. \#\#\#\# Input parameters \|Parameter\|Description\| \|---\|---\| \|Deployment\| Path to the deployment to be updated\| \|New name for the deployment (optional)\|Name or ID of the deployment to be updated \| \|New description for the deployment (optional)\|Description of the deployment\| \|New tags for the deployment (optional)\| Tags to identify the deployment\| \|ML asset\|Name or ID of the machine learning asset to deploy\| \|Hardware specification\|Update the hardware specification for the job\| \#\#\#\# Output parameters \|Parameter\|Description\| \|---\|---\| \|Deployment\|Path of the updated deployment\| <!-- </ul> --> <!-- <ul> --> * Update deployment space Update the details for a space. \#\#\#\# Input parameters \|Parameter\|Description\| \|---\|---\| \|Space\|Path of the existing space\| \|Space name (optional)\|Update the space name\| \|Space description (optional)\|Description of the space\| \|Space tags (optional)\|Tags to identify the space\| \|WML Instance (optional)\| Specify a new Machine Learning instance\| \|WML instance\| Specify a new Machine Learning instance. **Note:** Even if you assign a different name for an instance in the UI, the system name is **Machine Learning instance**. Differentiate between different instances by using the instance CRN\| \#\#\#\# Output parameters \|Parameter\|Description\| \|---\|---\| \|Space\|Path of the updated space\| <!-- </ul> --> <!-- <ul> --> * Update online deployment Use these parameters to update an online deployment (web service). 
\#\#\#\# Input parameters \|Parameter\|Description\| \|---\|---\| \|Deployment\|Path of the existing deployment\| \|Deployment name (optional)\|Update the deployment name\| \|Deployment description (optional)\|Description of the deployment\| \|Deployment tags (optional)\|Tags to identify the deployment\| \|Asset (optional)\|Machine learning asset (or version) to be redeployed\| \#\#\#\# Output parameters \|Parameter\|Description\| \|---\|---\| \|Deployment\|Path of the updated deployment\| <!-- </ul> --> ## Delete nodes ## Configure parameters for delete operations\. <!-- <ul> --> * Delete You can delete: - AutoAI experiment - Batch deployment - Deployment space - Online deployment For each item, choose the asset for deletion. <!-- </ul> --> ## Run nodes ## Use these nodes to train an experiment, execute a script, or run a data flow\. <!-- <ul> --> * Run AutoAI experiment Trains and stores [AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) pipelines and models. \#\#\#\# Input parameters \|Parameter\|Description\| \|---\|---\| \|AutoAI experiment\|Browse for the ML Pipeline asset or get the experiment from a pipeline parameter or the output from a previous node. \| \|Training data asset\|Browse or search for the data to train the experiment. Note that you can supply data at runtime by using a pipeline parameter\| \|Holdout data asset (optional)\|Optionally choose a separate file to use for holdout data for testing model performance\| \|Models count (optional)\| Specify how many models to save from best performing pipelines. The limit is 3 models\| \|Run name (optional)\|Name of the experiment and optional description and tags\| \|Model name prefix (optional)\| Prefix used to name trained models. Defaults to <(experiment name)> \| \|Run description (optional)\| Description of the new training run\| \|Run tags (optional)\| Tags for new training run\| \|Creation mode (optional)\| Choose how to handle a case where the pipeline flow tries to create an asset and one of the same name exists. One of: `ignore`, `fail`, `overwrite`\| \|Error policy (optional)\| Optionally, override the default error policy for the node\| \#\#\#\# Output parameters \|Parameter\|Description\| \|---\|---\| \|Models \| List of paths of the highest *N* trained and persisted models (ordered by selected evaluation metric)\| \|Best model \| Path of the winning model (based on selected evaluation metric)\| \|Model metrics \| A list of trained model metrics (each item is a nested object with metrics like: holdout\_accuracy, holdout\_average\_precision, ...)\| \|Winning model metric \|Selected evaluation metric of the winning model\| \|Optimized metric\| Metric used to tune the model\| \|Execution status\| Information on the state of the job: pending, starting, running, completed, canceled, or failed with errors\| \|Status message\|Information about the state of the job\| <!-- </ul> --> <!-- <ul> --> * Run Bash script Run an inline Bash script to automate a function or process for the pipeline. You can enter the Bash script code manually, or you can import the Bash script from a resource, pipeline parameter, or the output of another node. You can also use a Bash script to process large output files. For example, you can generate a large, comma-separated list that you can then iterate over using a loop. In the following example, the user entered the inline script code manually. 
The script uses the `cpdctl` tool to search all notebooks with a set variable tag and aggregates the results in a JSON list. The list can then be used in another node, such as running the notebooks returned from the search. ![Example of a bash script node](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipeline-config-4.png)\{: height="50%" width="50%"\} \#\#\#\# Input parameters \|Parameter\|Description\| \|---\|---\| \|Inline script code\|Enter a Bash script in the inline code editor. *Optional:* Alternatively, you can select a resource, assign a pipeline parameter, or select from another node. \| \|Environment variables (optional)\| Specify a variable name (the key) and a data type and add to the list of variables to use in the script.\| \|Runtime type (optional)\| Select either a standalone runtime (default) or a shared runtime. Use a shared runtime for tasks that require running in shared pods. \| \|Error policy (optional)\| Optionally, override the default error policy for the node\| \#\#\#\# Output parameters \|Parameter\|Description\| \|---\|---\| \|Output variables \|Configure a key/value pair for each custom variable, then click the Add button to populate the list of dynamic variables for the node\| \|Return value\|Return value from the node\| \|Standard output\|Standard output from the script\| \|Execution status\|Information on the state of the job: pending, starting, running, completed, canceled, or failed with errors\| \|Status message\| Message associated with the status\| \#\#\#\# Rules for Bash script output The output for a Bash script is often the result of a computed expression and can be large. When you are reviewing the properties for a script with valid large output, you can preview or download the output in a viewer. These rules govern what type of large output is valid. - The output of a `list_expression` is a calculated expression, so it is valid as large output. - String output is treated as a literal value rather than a calculated expression, so it must follow the size limits that govern inline expressions. For example, you are warned when a literal value exceeds 1 KB and values of 2 KB and higher result in an error. \#\#\#\# Referencing a variable in a Bash script The way that you reference a variable in a script depends on whether the variable was created as an input variable or as an output variable. Output variables are created as a file and require a file path in the reference. Specifically: - Input variables are available using the assigned name - Output variable names require that `_PATH` be appended to the variable name to indicate that values have to be written to the output file pointed to by the `{output_name}_PATH` variable. \#\#\#\# Using SSH in Bash scripts The following steps describe how to use `ssh` to run your remote Bash script. 1. Create a private key and public key. `bash ssh-keygen -t rsa -C "XXX"` 2. Copy the public key to the remote host. `bash ssh-copy-id USER@REMOTE_HOST` 3. On the remote host, check whether the public key contents are added into `/root/.ssh/authorized_keys`. 4. Copy the public and private keys to a new directory in the **Run Bash script** node. `bash mkdir -p $HOME/.ssh #copy private key content echo "-----BEGIN OPENSSH PRIVATE KEY----- ... ... -----END OPENSSH PRIVATE KEY-----" > $HOME/.ssh/id_rsa #copy public key content echo "ssh-rsa ...... 
" > $HOME/.ssh/id_rsa.pub chmod 400 $HOME/.ssh/id_rsa.pub chmod 400 $HOME/.ssh/id_rsa ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o GlobalKnownHostsFile=/dev/null -i $HOME/.ssh/id_rsa USER@REMOTE_HOST "cd /opt/scripts; ls -l; sh 1.sh"` \#\#\#\# Using SSH utilities in Bash scripts The following steps describe how to use `sshpass` to run your remote Bash script. 1. Put your SSH password file in your system path, such as the mounted storage volume path. 2. Use the SSH password directly in the **Run Bash script** node: `bash cd /mnts/orchestration ls -l sshpass chmod 777 sshpass ./sshpass -p PASSWORD ssh -o StrictHostKeyChecking=no USER@REMOTE_HOST "cd /opt/scripts; ls -l; sh 1.sh"` <!-- </ul> --> <!-- <ul> --> * Run batch deployment Configure this node to run selected deployment jobs. \#\#\#\# Input parameters \|Parameter\|Description\| \|---\|---\| \|Deployment\|Browse or search for the deployment job \| \|Input data assets\|Specify the data used for the batch job Restriction: Input for batch deployment jobs is limited to data assets. Deployments that require JSON input or multiple files as input, are not supported. For example, SPSS models and Decision Optimization solutions that require multiple files as input are not supported.\| \|Output asset\|Name of the output file for the results of the batch job. You can either select *Filename* and enter a custom file name, or *Data asset* and select an existing asset in a space.\| \|Hardware specification (optional)\|Browse for a hardware specification to apply for the job\| \|Error policy (optional)\| Optionally, override the default error policy for the node\| \#\#\#\# Output parameters \|Parameter\|Description\| \|---\|---\| \|Job\|Path to the file with results from the deployment job\| \|Job run\|ID for the job\| \|Execution status\|Information on the state of the job: pending, starting, running, completed, canceled, or failed with errors\| \|Status message\| Information about the state of the job\| <!-- </ul> --> <!-- <ul> --> * Run DataStage job <!-- </ul> --> <!-- <ul> --> * Run Data Refinery job This node runs a specified Data Refinery job. \#\#\#\# Input parameters \|Parameter\|Description\| \|---\|---\| \|Data Refinery job \|Path to the Data Refinery job.\| \|Environment \| Path of the environment used to run the job Attention: Leave the environments field as is to use the default runtime. If you choose to override, specify an alternate environment for running the job. Be sure any environment that you specify is compatible with the component language and hardware configuration to avoid a runtime error.\| \|Error policy (optional)\| Optionally, override the default error policy for the node\| \#\#\#\# Output parameters \|Parameter\|Description\| \|---\|---\| \|Job \|Path to the results from the Data Refinery job\| \|Job run\|Information about the job run\| \|Job name \|Name of the job \| \|Execution status\|Information on the state of the flow: pending, starting, running, completed, canceled, or failed with errors\| \|Status message\| Information about the state of the flow\| <!-- </ul> --> <!-- <ul> --> * Run notebook job Use these configuration options to specify how to run a Jupyter Notebook in a pipeline. \#\#\#\# Input parameters \|Parameter\|Description\| \|---\|---\| \|Notebook job\|Path to the notebook job. \| \|Environment \|Path of the environment used to run the notebook. Attention: Leave the environments field as is to use the default environment. 
If you choose to override, specify an alternate environment for running the job. Be sure any environment that you specify is compatible with the notebook language and hardware configuration to avoid a runtime error.\| \|Environment variables (optional)\|List of environment variables used to run the notebook job\| \|Error policy (optional)\| Optionally, override the default error policy for the node\| **Notes:** - Environment variables that you define in a pipeline cannot be used for notebook jobs you run outside of Watson Pipelines. - You can run a notebook from a code package in a regular package. \#\#\#\# Output parameters \|Parameter\|Description\| \|---\|---\| \|Job \|Path to the results from the notebook job\| \|Job run\|Information about the job run\| \|Job name \|Name of the job \| \|Output variables \|Configure a key/value pair for each custom variable, then click **Add** to populate the list of dynamic variables for the node\| \|Execution status\|Information on the state of the run: pending, starting, running, completed, canceled, or failed with errors\| \|Status message\|Information about the state of the notebook run\| <!-- </ul> --> <!-- <ul> --> * Run Pipelines component Run a reusable pipeline component that is created by using a Python script. For more information, see [Creating a custom component](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-custom-comp.html). - If a pipeline component is available, configuring the node presents a list of available components. - The component that you choose specifies the input and output for the node. - Once you assign a component to a node, you cannot delete or change the component. You must delete the node and create a new one. <!-- </ul> --> <!-- <ul> --> * Run Pipelines job Add a pipeline to run a nested pipeline job as part of a containing pipeline. This is a way of adding reusable processes to multiple pipelines. You can use the output from a nested pipeline that is run as input for a node in the containing pipeline. \#\#\#\# Input parameters \|Parameter\|Description\| \|---\|---\| \|Pipelines job\|Select or enter a path to an existing Pipelines job.\| \|Environment (optional)\| Select the environment to run the Pipelines job in, and assign environment resources. Attention: Leave the environments field as is to use the default runtime. If you choose to override, specify an alternate environment for running the job. Be sure any environment that you specify is compatible with the component language and hardware configuration to avoid a runtime error.\| \|Job Run Name (optional) \|A default job name is used unless you override it by specifying a custom job name. You can see the job name in the **Job Details** dashboard.\| \|Values for local parameters (optional) \| Edit the default job parameters. This option is available only if you have local parameters in the job. \| \|Values from parameter sets (optional) \|Edit the parameter sets used by this job. You can choose to use the parameters as defined by default, or use value sets from other pipelines' parameters. 
\| \|Error policy (optional)\| Optionally, override the default error policy for the node\| \#\#\#\# Output parameters \|Parameter\|Description\| \|---\|---\| \|Job \|Path to the results from the pipeline job\| \|Job run\|Information about the job run\| \|Job name \|Name of the job \| \|Execution status\| Returns a value of: Completed, Completed with warnings, Completed with errors, Failed, or Canceled\| \|Status message\| Message associated with the status\| \#\#\#\# Notes for running nested pipeline jobs If you create a pipeline with nested pipelines and run a pipeline job from the top-level, the pipelines are named and saved as project assets that use this convention: - The top-level pipeline job is named "Trial job - *pipeline guid*". - All subsequent jobs are named "pipeline\_ *pipeline guid*". <!-- </ul> --> <!-- <ul> --> * Run SPSS Modeler job Use these configuration options to specify how to run an SPSS Modeler in a pipeline. \#\#\#\# Input parameters \|Parameter\|Description\| \|---\|---\| \|SPSS Modeler job\|Select or enter a path to an existing SPSS Modeler job.\| \|Environment (optional)\| Select the environment to run the SPSS Modeler job in, and assign environment resources. Attention: Leave the environments field as is to use the default SPSS Modeler runtime. If you choose to override, specify an alternate environment for running the job. Be sure any environment that you specify is compatible with the hardware configuration to avoid a runtime error.\| \|Values for local parameters \| Edit the default job parameters. This option is available only if you have local parameters in the job. \| \|Error policy (optional)\| Optionally, override the default error policy for the node\| \#\#\#\# Output parameters \|Parameter\|Description\| \|---\|---\| \|Job \|Path to the results from the pipeline job\| \|Job run\|Information about the job run\| \|Job name \|Name of the job \| \|Execution status\| Returns a value of: Completed, Completed with warnings, Completed with errors, Failed, or Canceled\| \|Status message\| Message associated with the status\| <!-- </ul> --> ## Learn more ## **Parent topic:**[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html) <!-- </article "role="article" "> -->
536EF493AB96990DE8E237EDB8A97DB989EF15C8
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html?context=cdpaas&locale=en
Creating a pipeline
Creating a pipeline Create a pipeline to run an end-to-end scenario to automate all or part of the AI lifecycle. For example, create a pipeline that creates and trains an asset, promotes it to a space, creates a deployment, then scores the model. Watch this video to see how to create and run a sample pipeline. This video provides a visual method to learn the concepts and tasks in this documentation. Overview: Adding a pipeline to a project Follow these steps to add a pipeline to a project: 1. Open a project. 2. Click New task > Automate model lifecycle. 3. Enter a name and an optional description. 4. Click Create to open the canvas. Pipeline access When you use a pipeline to automate a flow, you must have access to all of the elements in the pipeline. Make sure that you create and run pipelines with the proper access to all assets, projects, and spaces used in the pipeline. Related services In addition to access to all elements in a pipeline, you must have the services available to run all assets you add to a pipeline. For example, if you automate a pipeline that trains and deploys a model, you must have the Watson Studio and Watson Machine Learning services. If a required service is missing, the pipeline will not run. This table lists assets that require services in addition to Watson Studio: Asset Required service AutoAI experiment Watson Machine Learning Batch deployment job Watson Machine Learning Online deployment (web service) Watson Machine Learning Overview: Building a pipeline Follow these high-level steps to build and run a pipeline. 1. Drag any node objects onto the canvas. For example, drag a Run notebook job node onto the canvas. 2. Use the action menu for each node to view and select options. 3. Configure a node as required. You are prompted to supply the required input options. For some nodes, you can view or configure output options as well. For examples of configuring nodes, see [Configuring pipeline components](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.html). 4. Drag from one node to another to connect and order the pipeline. 5. Optional: Click the Global objects icon ![global objects icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/global-objects-icon.png) in the toolbar to configure runtime options for the pipeline. 6. When the pipeline is complete, click the Run icon on the toolbar to run the pipeline. You can run a trial to test the pipeline, or you can schedule a job when you are confident in the pipeline. Configuring nodes As you add nodes to a pipeline, you must configure them to provide all of the required details. For example, if you add a node to run an AutoAI experiment, you must configure the node to specify the experiment, load the training data, and specify the output file: ![AutoAI node parameters](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/OE-run-autoai-node.png) Connecting nodes When you build a complete pipeline, the nodes must be connected in the order in which they run in the pipeline. To connect nodes, hover over a node and drag a connection to the target node. Disconnected nodes are run in parallel. ![Connecting nodes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipelines_conecting_nodes_gif.gif) Defining pipeline parameters A pipeline parameter defines a global variable for the whole pipeline. 
Use pipeline parameters to specify data from one of these categories: Parameter type Can specify Basic JSON types such as string, integer, or a JSON object CPDPath Resources available within the platform, such as assets, asset containers, connections, notebooks, hardware specs, projects, spaces, or jobs InstanceCRN Storage, machine learning instances, and other services. Other Various configuration types, such as status, timeout length, estimator, error policies and other various configuration types. To specify a pipeline parameter: 1. Click the global objects icon ![global objects icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/global-objects-icon.png) in the toolbar to open the Manage global objects window. 2. Select the Pipeline parameters tab to configure parameters. 3. Click Add pipeline parameter. 4. Specify a name and an optional description. 5. Select a type and provide any required information. 6. Click Add when the definition is complete, and repeat the previous steps until you finish defining the parameters. 7. Close the Manage global objects dialog. The parameters are now available to the pipeline. Next steps [Configure pipeline components](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.html) Parent topic:[IBM Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html)
# Creating a pipeline # Create a pipeline to run an end\-to\-end scenario to automate all or part of the AI lifecycle\. For example, create a pipeline that creates and trains an asset, promotes it to a space, creates a deployment, then scores the model\. Watch this video to see how to create and run a sample pipeline\. This video provides a visual method to learn the concepts and tasks in this documentation\. ## Overview: Adding a pipeline to a project ## Follow these steps to add a pipeline to a project: <!-- <ol> --> 1. Open a project\. 2. Click **New task > Automate model lifecycle**\. 3. Enter a name and an optional description\. 4. Click **Create** to open the canvas\. <!-- </ol> --> ### Pipeline access ### When you use a pipeline to automate a flow, you must have access to all of the elements in the pipeline\. Make sure that you create and run pipelines with the proper access to all assets, projects, and spaces used in the pipeline\. ### Related services ### In addition to access to all elements in a pipeline, you must have the services available to run all assets you add to a pipeline\. For example, if you automate a pipeline that trains and deploys a model, you must have the Watson Studio and Watson Machine Learning services\. If a required service is missing, the pipeline will not run\. This table lists assets that require services in addition to Watson Studio: <!-- <table> --> | Asset | Required service | | ------------------------------- | ----------------------- | | AutoAI experiment | Watson Machine Learning | | Batch deployment job | Watson Machine Learning | | Online deployment (web service) | Watson Machine Learning | <!-- </table ""> --> ## Overview: Building a pipeline ## Follow these high\-level steps to build and run a pipeline\. <!-- <ol> --> 1. Drag any node objects onto the canvas\. For example, drag a **Run notebook job** node onto the canvas\. 2. Use the action menu for each node to view and select options\. 3. Configure a node as required\. You are prompted to supply the required input options\. For some nodes, you can view or configure output options as well\. For examples of configuring nodes, see [Configuring pipeline components](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.html)\. 4. Drag from one node to another to connect and order the pipeline\. 5. Optional: Click the **Global objects** icon ![global objects icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/global-objects-icon.png) in the toolbar to configure runtime options for the pipeline\. 6. When the pipeline is complete, click the **Run** icon on the toolbar to run the pipeline\. You can run a trial to test the pipeline, or you can schedule a job when you are confident in the pipeline\. <!-- </ol> --> ### Configuring nodes ### As you add nodes to a pipeline, you must configure them to provide all of the required details\. For example, if you add a node to run an AutoAI experiment, you must configure the node to specify the experiment, load the training data, and specify the output file: ![AutoAI node parameters](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/OE-run-autoai-node.png) ### Connecting nodes ### When you build a complete pipeline, the nodes must be connected in the order in which they run in the pipeline\. To connect nodes, hover over a node and drag a connection to the target node\. Disconnected nodes are run in parallel\. 
![Connecting nodes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipelines_conecting_nodes_gif.gif) ### Defining pipeline parameters ### A pipeline parameter defines a global variable for the whole pipeline\. Use pipeline parameters to specify data from one of these categories: <!-- <table> --> | Parameter type | Can specify | | -------------- | -------------------------------------------------------------------------------------------------------------------------------------------- | | Basic | JSON types such as string, integer, or a JSON object | | CPDPath | Resources available within the platform, such as assets, asset containers, connections, notebooks, hardware specs, projects, spaces, or jobs | | InstanceCRN | Storage, machine learning instances, and other services\. | | Other | Various configuration types, such as status, timeout length, estimator, error policies and other various configuration types\. | <!-- </table ""> --> To specify a pipeline parameter: <!-- <ol> --> 1. Click the global objects icon ![global objects icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/global-objects-icon.png) in the toolbar to open the **Manage global objects** window\. 2. Select the **Pipeline parameters** tab to configure parameters\. 3. Click **Add pipeline parameter**\. 4. Specify a name and an optional description\. 5. Select a type and provide any required information\. 6. Click **Add** when the definition is complete, and repeat the previous steps until you finish defining the parameters\. 7. Close the **Manage global objects** dialog\. <!-- </ol> --> The parameters are now available to the pipeline\. ## Next steps ## [Configure pipeline components](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.html) **Parent topic:**[IBM Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) <!-- </article "role="article" "> -->
7F2731C1EBB3F492687A336E1369CD6232512118
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-custom-comp.html?context=cdpaas&locale=en
Creating a custom component for use in the pipeline
Creating a custom component for use in the pipeline A custom pipeline component runs a script that you write. You can use custom components to share reusable scripts between pipelines. You create custom components as project assets. You can then use the components in pipelines you create in that project. You can create as many custom components for pipelines as needed. Currently, you must create a custom component programmatically by using a Python function. Creating a component as a project asset To create a custom component, use the Python client to authenticate with IBM Watson Pipelines, code the component, then publish the component to the specified project. After it is available in the project, you can assign it to a node in a pipeline and run it as part of a pipeline flow. This example demonstrates the process of publishing a component that adds two numbers together, then assigning the component to a pipeline node. 1. Publish a function as a component with the latest Python client. Run the following code in a Jupyter Notebook in a project of IBM watsonx. Install libraries ! pip install ibm-watson-pipelines Authentication from ibm_watson_pipelines import WatsonPipelines apikey = '' project_id = 'your_project_id' client = WatsonPipelines.from_apikey(apikey) Define the function of the component If you define the input parameters, users are required to input them in the UI def add_two_numbers(a: int, b: int) -> int: print('Adding numbers: {} + {}.'.format(a, b)) return a + b Other possible functions might be sending a Slack message, or listing directories in a storage volume, and so on. Publish the component client.publish_component( name='Add numbers', Appears in UI as component name func=add_two_numbers, description='Custom component adding numbers', Appears in UI as component description project_id=project_id, overwrite=True, Overwrites an existing component with the same name ) To generate a new API key: 1. Go to the [IBM Cloud home page](https://cloud.ibm.com/) 2. Click Manage > Access (IAM) 3. Click API keys 4. Click Create 1. Drag the node called Run Pipelines component under Run to the canvas. ![Retrieving the custom component node](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ml-orch-custom-comp-1.png) 2. Choose the name of the component that you want to use. ![Choosing the actual component function](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ml-orch-custom-comp-2.png) 3. Connect and run the node as part of a pipeline job. ![Connecting the component](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ml-orch-custom-comp-3.png) Manage pipeline components To manage your components, use the Python client. Table 1. Manage pipeline components Method Function client.get_components(project_id=project_id) List components from a project client.get_component(project_id=project_id, component_id=component_id) Get a component by ID client.get_component(project_id=project_id, name=component_name) Get a component by name client.publish_component(component name) Publish a new component client.delete_component(project_id=project_id, component_id=component_id) Delete a component by ID Import and export IBM Watson Pipelines can be imported and exported with pipelines only. Parent topic:[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html)
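After you publish a component, you can confirm from the same notebook that it is available in the project before you assign it to a node. The following lines are a minimal sketch, not part of the documented example: they reuse the client, project_id, and component name from the earlier snippet and call only the get_component method that is listed in Table 1.

    # Minimal sketch (assumes `client` and `project_id` from the snippet above).
    # Look up the published component by its display name to confirm that it exists.
    component = client.get_component(project_id=project_id, name='Add numbers')
    print(component)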
# Creating a custom component for use in the pipeline # A custom pipeline component runs a script that you write\. You can use custom components to share reusable scripts between pipelines\. You create custom components as project assets\. You can then use the components in pipelines you create in that project\. You can create as many custom components for pipelines as needed\. Currently, you must create a custom component programmatically by using a Python function\. ## Creating a component as a project asset ## To create a custom component, use the Python client to authenticate with IBM Watson Pipelines, code the component, then publish the component to the specified project\. After it is available in the project, you can assign it to a node in a pipeline and run it as part of a pipeline flow\. This example demonstrates the process of publishing a component that adds two numbers together, then assigning the component to a pipeline node\. <!-- <ol> --> 1. Publish a function as a component with the latest Python client\. Run the following code in a Jupyter Notebook in a project of IBM watsonx\. # Install libraries ! pip install ibm-watson-pipelines # Authentication from ibm_watson_pipelines import WatsonPipelines apikey = '' project_id = 'your_project_id' client = WatsonPipelines.from_apikey(apikey) # Define the function of the component # If you define the input parameters, users are required to # input them in the UI def add_two_numbers(a: int, b: int) -> int: print('Adding numbers: {} + {}.'.format(a, b)) return a + b # Other possible functions might be sending a Slack message, # or listing directories in a storage volume, and so on. # Publish the component client.publish_component( name='Add numbers', # Appears in UI as component name func=add_two_numbers, description='Custom component adding numbers', # Appears in UI as component description project_id=project_id, overwrite=True, # Overwrites an existing component with the same name ) To generate a new API key: <!-- <ol> --> 1. Go to the [IBM Cloud home page](https://cloud.ibm.com/) 2. Click Manage > Access (IAM) 3. Click API keys 4. Click Create <!-- </ol> --> <!-- </ol> --> <!-- <ol> --> 1. Drag the node called *Run Pipelines component* under **Run** to the canvas\. ![Retrieving the custom component node](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ml-orch-custom-comp-1.png) 2. Choose the name of the component that you want to use\. ![Choosing the actual component function](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ml-orch-custom-comp-2.png) 3. Connect and run the node as part of a pipeline job\. ![Connecting the component](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ml-orch-custom-comp-3.png) <!-- </ol> --> ## Manage pipeline components ## To manage your components, use the Python client\. <!-- <table> --> Table 1\. 
Manage pipeline components | Method | Function | | --------------------------------------------------------------------------- | ------------------------------ | | `client.get_components(project_id=project_id)` | List components from a project | | `client.get_component(project_id=project_id, component_id=component_id)` | Get a component by ID | | `client.get_component(project_id=project_id, name=component_name)` | Get a component by name | | `client.publish_component(component name)` | Publish a new component | | `client.delete_component(project_id=project_id, component_id=component_id)` | Delete a component by ID | <!-- </table ""> --> ### Import and export ### IBM Watson Pipelines can be imported and exported with pipelines only\. **Parent topic:**[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html) <!-- </article "role="article" "> -->
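For housekeeping, the management methods in Table 1 can also be combined in a notebook to review and remove components. The following sketch is illustrative only and is not part of the documented example; it assumes the `client` and `project_id` from the earlier snippet, and the component ID placeholder must be replaced with a value taken from the listing.

    # Minimal sketch (assumes `client` and `project_id` from the earlier snippet).
    # List every component that is published to the project.
    components = client.get_components(project_id=project_id)
    print(components)

    # Remove a component that is no longer needed.
    # Replace the placeholder with an ID taken from the listing above.
    # client.delete_component(project_id=project_id, component_id='<component_id>')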
05D687FC92FD17804374E20E7F330EDAE142F725
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-errors.html?context=cdpaas&locale=en
Handling Pipeline errors
Handling Pipeline errors You can specify how to respond to errors in a pipeline globally, with an error policy, and locally, by overriding the policy on the node level. You can also create a custom error-handling response. Setting global error policy The error policy sets the default behavior for errors in a pipeline. You can override this behavior for any node in the pipeline. To set the global error policy: 1. Click the Manage default settings icon on the toolbar. 2. Choose the default response to an error under the Error policy: * Fail pipeline on error stops the flow and initiates an error-handling flow. * Continue pipeline on error tries to continue running the pipeline. Note: Continue pipeline on error affects nodes that use the default error policy and does not affect node-specific error policies. 3. You can optionally create a custom error-handling response for a flow failure. Specifying an error response If you opt for Fail pipeline on error for either the global error policy or for a node-specific policy, you can further specify what happens on failure. For example, if you check the Show icon on nodes that are linked to an error-handling pipeline, an icon flags a node with an error to help debug the flow. Specifying a node-specific error policy You can override the default error policy for any node in the pipeline. 1. Click a node to open the configuration pane. 2. Check the option to Override default error policy with: * Fail pipeline on error * Continue pipeline on error Viewing all node policies To view all node-specific error handling for a pipeline: 1. Click Manage default settings on the toolbar. 2. Click the view all node policies link under Error policy. A list of all nodes in the pipeline show which nodes use the default policy, and which override the default policy. Click a node name to see the policy details. Use the view filter to show: * All error policies: all nodes * Default policy: all nodes that use the default policy * Override default policy: all nodes that override the default policy * Fail pipeline on error: all nodes that stop the flow on error * Continue pipeline on error: all nodes that try to continue the flow on error Running the Fail on error flow If you specify that the flow fails on error, a secondary error handling flow starts when an error is encountered. Adding a custom error response If Create custom error handling response is checked on default settings for error policy, you can add an error handling node to the canvas so you can configure a custom error response. The response applies to all nodes configured to fail when an error occurs. Parent topic:[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html)
# Handling Pipeline errors # You can specify how to respond to errors in a pipeline globally, with an error policy, and locally, by overriding the policy on the node level\. You can also create a custom error\-handling response\. ## Setting global error policy ## The error policy sets the default behavior for errors in a pipeline\. You can override this behavior for any node in the pipeline\. To set the global error policy: <!-- <ol> --> 1. Click the **Manage default settings** icon on the toolbar\. 2. Choose the default response to an error under the **Error policy**: <!-- <ul> --> * **Fail pipeline on error** stops the flow and initiates an error-handling flow. * **Continue pipeline on error** tries to continue running the pipeline. Note: **Continue pipeline on error** affects nodes that use the default error policy and does not affect node-specific error policies. <!-- </ul> --> 3. You can optionally create a custom error\-handling response for a flow failure\. <!-- </ol> --> ## Specifying an error response ## If you opt for **Fail pipeline on error** for either the global error policy or for a node\-specific policy, you can further specify what happens on failure\. For example, if you check the **Show icon on nodes that are linked to an error\-handling pipeline**, an icon flags a node with an error to help debug the flow\. ## Specifying a node\-specific error policy ## You can override the default error policy for any node in the pipeline\. <!-- <ol> --> 1. Click a node to open the configuration pane\. 2. Check the option to **Override default error policy with:** <!-- <ul> --> * **Fail pipeline on error** * **Continue pipeline on error** <!-- </ul> --> <!-- </ol> --> ## Viewing all node policies ## To view all node\-specific error handling for a pipeline: <!-- <ol> --> 1. Click **Manage default settings** on the toolbar\. 2. Click the **view all node policies** link under **Error policy**\. <!-- </ol> --> A list of all nodes in the pipeline show which nodes use the default policy, and which override the default policy\. Click a node name to see the policy details\. Use the view filter to show: <!-- <ul> --> * **All error policies**: all nodes * **Default policy**: all nodes that use the default policy * **Override default policy**: all nodes that override the default policy * **Fail pipeline on error**: all nodes that stop the flow on error * **Continue pipeline on error**: all nodes that try to continue the flow on error <!-- </ul> --> ## Running the Fail on error flow ## If you specify that the flow fails on error, a secondary error handling flow starts when an error is encountered\. ## Adding a custom error response ## If **Create custom error handling response** is checked on default settings for error policy, you can add an error handling node to the canvas so you can configure a custom error response\. The response applies to all nodes configured to fail when an error occurs\. **Parent topic:**[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html) <!-- </article "role="article" "> -->
E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-expr-builder.html?context=cdpaas&locale=en
Functions used in Watson Pipelines's Expression Builder
Functions used in Watson Pipelines's Expression Builder Use these functions in Pipelines code editors, for example, to define a user variable or build an advanced condition. The Expression Builder uses these categories for coding functions: * [Conversion functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-expr-builder.html?context=cdpaas&locale=enconversion) * [Standard functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-expr-builder.html?context=cdpaas&locale=enofext) * [Accessing advanced global objects](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-expr-builder.html?context=cdpaas&locale=enadvanced) Conversion functions Converts a single data element format to another. Table for basic data type conversion Type Accepts Returns Syntax double int, uint, string double double(val) duration string duration duration(string) <br>Duration must end with "s", which stands for seconds. int int, uint, double, string, timestamp int int(val) timestamp string timestamp timestamp(string) <br>Converts strings to timestamps according to RFC3339, that is "1972-01-01T10:00:20.021-05:00". uint int, double, string uint uint(val) Example For example, to cast a value to type double: double(%val%) When you cast double to int | uint, result rounds toward zero and errors if result is out of range. Standard functions Functions that are unique to IBM Watson Pipelines. sub Replaces substrings of a string that matches the given regular expression that starts at position offset. Syntax (string).sub(substring (string), replacement (string), [occurrence (int), offset (int)]) returns: the string with substrings updated. Examples 'aaabbbcccbbb'.sub('[b]+','RE') Returns 'aaaREcccRE'. format Formats a string or timestamp according to a format specifier and returns the resulting string. Syntax format as a method of strings (string).format(parameter 1 (string or bool or number)... parameter 10 (string or bool or number)) returns: the string that contains the formatted input values. format as a method of timestamps (timestamp).format(layout(string)) returns: the formatted timestamp in string format. Examples 'number=%d, text=%s'.format(1, 'str') Returns the string 'number=1, text=str'. timestamp('2020-07-24T09:07:29.000-00:00').format('%Y/%m/%d') Returns the string '2020/07/24'. now Returns the current timestamp. Syntax now() returns: the current timestamp. parseTimestamp Parses a timestamp string into a timestamp. Syntax parseTimestamp([timestamp_string(string)] [layout(string)]) returns: the timestamp that is parsed from the input string. Examples parseTimestamp('2020-07-24T09:07:29Z') Returns '2020-07-24T09:07:29.000-00:00'. min Returns minimum value in list. Syntax (list).min() returns: the minimum value of the list. Examples [1,2,3].min() Returns the integer 1. max Returns maximum value in list. Syntax (list).max() returns: the maximum value of the list. Examples [1,2,3].max() Returns the integer 3. argmin Returns index of minimum value in list. Syntax (list).argmin() returns: the index of the minimum value of the list. Examples [1,2,3].argmin() Returns the integer 0. argmax Returns index of maximum value in list. Syntax (list).argmax() returns: the index of the maximum value of the list. Examples [1,2,3].argmax() Returns the integer 2. sum Returns the sum of values in list. Syntax (list).sum() returns: the sum of the values of the list. Examples [1,2,3].sum() Returns the integer 6. 
base64.decode Decodes base64-encoded string to bytes. This function returns an error if the string input is not base64-encoded. Syntax base64.decode(base64_encoded_string(string)) returns: the decoded base64-encoded string in byte format. Examples base64.decode('aGVsbG8=') Returns 'hello' in bytes. base64.encode Encodes bytes to a base64-encoded string. Syntax base64.encode(bytes_to_encode (bytes)) returns: the encoded base64-encoded string of the original byte value. Examples base64.encode(b'hello') Returns the string 'aGVsbG8='. charAt Returns the character at the given position. If the position is negative, or greater than the length of the string, the function produces an error. Syntax (string).charAt(index (int)) returns: the character at the specified position. Examples 'hello'.charAt(4) Returns the character 'o'. indexOf Returns the integer index of the first occurrence of the search string. If the search string is not found the function returns -1. Syntax (string).indexOf(search_string (string), [offset (int)]) returns: the index of the first character occurrence after the offset. Examples 'hello mellow'.indexOf('ello', 2) Returns the integer 7. lowerAscii Returns a new string with ASCII characters turned to lowercase. Syntax (string).lowerAscii() returns: the new lowercase string. Examples 'TacoCat'.lowerAscii() Returns the string 'tacocat'. replace Returns a new string based on the target, which replaces the occurrences of a search string with a replacement string if present. The function accepts an optional limit on the number of substring replacements to be made. Syntax (string).replace(search_string (string), replacement (string), [offset (int)]) returns: the new string with occurrences of a search string replaced. Examples 'hello hello'.replace('he', 'we') Returns the string 'wello wello'. split Returns a list of strings that are split from the input by the separator. The function accepts an optional argument that specifies a limit on the number of substrings that are produced by the split. Syntax (string).split(separator (string), [limit (int)]) returns: the split string as a string list. Examples 'hello hello hello'.split(' ') Returns the string list ['hello', 'hello', 'hello']. substring Returns the substring given a numeric range corresponding to character positions. Optionally you might omit the trailing range for a substring from a character position until the end of a string. Syntax (string).substring(start (int), [end (int)]) returns: the substring at the specified index of the string. Examples 'tacocat'.substring(4) Returns the string 'cat'. trim Returns a new string, which removes the leading and trailing white space in the target string. The trim function uses the Unicode definition of white space, which does not include the zero-width spaces. Syntax (string).trim() returns: the new string with white spaces removed. Examples ' \ttrim\n '.trim() Returns the string 'trim'. upperAscii Returns a new string where all ASCII characters are upper-cased. Syntax (string).upperAscii() returns: the new string with all characters turned to uppercase. Examples 'TacoCat'.upperAscii() Returns the string 'TACOCAT'. size Returns the length of the string, bytes, list, or map. Syntax (string | bytes | list | map).size() returns: the length of the string, bytes, list, or map array. Examples 'hello'.size() Returns the integer 5. ['a','b','c'].size() Returns the integer 3. {'key': 'value'}.size() Returns the integer 1. 
contains Tests whether the string operand contains the substring. Syntax (string).contains(substring (string)) returns: a Boolean value of whether the substring exists in the string operand. Examples 'hello'.contains('ll') Returns true. endsWith Tests whether the string operand ends with the specified suffix. Syntax (string).endsWith(suffix (string)) returns: a Boolean value of whether the string ends with specified suffix in the string operand. Examples 'hello'.endsWith('llo') Returns true. startsWith Tests whether the string operand starts with the prefix argument. Syntax (string).startsWith(prefix (string)) returns: a Boolean value of whether the string begins with specified prefix in the string operand. Examples 'hello'.startsWith('he') Returns true. matches Tests whether the string operand matches regular expression. Syntax (string).matches(prefix (string)) returns: a Boolean value of whether the string matches the specified regular expression. Examples 'Hello'.matches('[Hh]ello') Returns true. getDate Get the day of the month from the date with time zone (default Coordinated Universal Time), one-based indexing. Syntax (timestamp).getDate([time_zone (string)]) returns: the day of the month with one-based indexing. Examples timestamp('2020-07-24T09:07:29.000-00:00').getDate() Returns 24. getDayOfMonth Get the day of the month from the date with time zone (default Coordinated Universal Time), zero-based indexing. Syntax (timestamp).getDayOfMonth([time_zone (string)]) returns: the day of the month with zero-based indexing. Examples timestamp('2020-07-24T09:07:29.000-00:00').getDayOfMonth() Returns 23. getDayOfWeek Get day of the week from the date with time zone (default Coordinated Universal Time), zero-based indexing, zero for Sunday. Syntax (timestamp).getDayOfWeek([time_zone (string)]) returns: the day of the week with zero-based indexing. Examples timestamp('2020-07-24T09:07:29.000-00:00').getDayOfWeek() Returns 5. getDayOfYear Get the day of the year from the date with time zone (default Coordinated Universal Time), zero-based indexing. Syntax (timestamp).getDayOfYear([time_zone (string)]) returns: the day of the year with zero-based indexing. Examples timestamp('2020-07-24T09:07:29.000-00:00').getDayOfYear() Returns 205. getFullYear Get the year from the date with time zone (default Coordinated Universal Time). Syntax (timestamp).getFullYear([time_zone (string)]) returns: the year from the date. Examples timestamp('2020-07-24T09:07:29.000-00:00').getFullYear() Returns 2020. getMonth Get the month from the date with time zone, 0-11. Syntax (timestamp).getMonth([time_zone (string)]) returns: the month from the date. Examples timestamp('2020-07-24T09:07:29.000-00:00').getMonth() Returns 6. getHours Get hours from the date with time zone, 0-23. Syntax (timestamp).getHours([time_zone (string)]) returns: the hour from the date. Examples timestamp('2020-07-24T09:07:29.000-00:00').getHours() Returns 9. getMinutes Get minutes from the date with time zone, 0-59. Syntax (timestamp).getMinutes([time_zone (string)]) returns: the minute from the date. Examples timestamp('2020-07-24T09:07:29.000-00:00').getMinutes() Returns 7. getSeconds Get seconds from the date with time zone, 0-59. Syntax (timestamp).getSeconds([time_zone (string)]) returns: the second from the date. Examples timestamp('2020-07-24T09:07:29.000-00:00').getSeconds() Returns 29. getMilliseconds Get milliseconds from the date with time zone, 0-999. 
Syntax (timestamp).getMilliseconds([time_zone (string)]) returns: the millisecond from the date. Examples timestamp('2020-07-24T09:07:29.021-00:00').getMilliseconds() Returns 21. Access to advanced global objects Get node outputs, user variables, and pipeline parameters by using the following Pipelines code. Get user variable Gets the most up-to-date value of a user variable. Syntax vars.<var name> Examples Example Output vars.my_user_var Gets the value of the user variable my_user_var Get parameters Gets the flow parameters. Syntax params.<param name> Examples Example Output params.a Gets the value of the parameter a Get parameter sets Gets the flow parameter sets. Syntax param_set.<param_set_name>.<param_name> Examples Example Output param_set.ps.a Gets the value of the parameter a from a parameter set ps param_sets.config Gets the pipeline configuration values param_sets.config.deadline Gets a date object from the configurations parameter set param_sets.ps["$PARAM"] Gets the value of the parameter $PARAM from a parameter set ps Get task results Get a pipeline task's resulting output and other metrics from a pipeline task after it completes its run. Syntax tasks.<node id>.<output name> Examples Example Output tasks.run_datastage_job Gets the results dictionary of job output tasks.run_datastage_job.results.score Gets the value score of job output tasks.run_datastage_job.results.timestamp Gets the end timestamp of job run tasks.run_datastage_job.results.error Gets the number of errors from job run tasks.loop_task.loop.counter Gets the current loop iterative counter of job run tasks.loop_task.loop.item Gets the current loop iterative item of job run tasks.run_datastage_job.results.status Gets either success or fail status of job run tasks.run_datastage_job.results.status_message Gets the status message of job run tasks.run_datastage_job.results.job_name Gets the job name tasks.run_datastage_job.results.job Gets the Cloud Pak for Data path of job tasks.run_datastage_job.results.job_run Gets the Cloud Pak for Data run path of job run Get pipeline context objects Gets values that are evaluated in the context of a pipeline that is run in a scope (project, space, catalog). Examples Example Output ctx.scope.id Gets scope ID ctx.scope.type Returns either "project", "space", or "catalog" ctx.scope.name Gets scope name ctx.pipeline.id Gets pipeline ID ctx.pipeline.name Gets pipeline name ctx.job.id Gets job ID ctx.run_datastage_job.id Gets job run ID ctx.run_datastage_job.started_at Gets job run start time ctx.user.id Gets the user ID Get error status If the exception handler is triggered, an error object is created and becomes accessible only within the exception handler. Examples Example Output error.status Gets either success or fail status of job run, usually failed error.status_message Gets the error status message error.job Gets the Cloud Pak for Data path of job error.run_datastage_job Gets the Cloud Pak for Data run path of job Parent topic:[Adding conditions to a Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-conditions.html)
# Functions used in Watson Pipelines's Expression Builder # Use these functions in Pipelines code editors, for example, to define a user variable or build an advanced condition\. The Experssion Builder uses the categories for coding functions: <!-- <ul> --> * [Conversion functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-expr-builder.html?context=cdpaas&locale=en#conversion) * [Standard functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-expr-builder.html?context=cdpaas&locale=en#ofext) * [Accessing advanced global objects](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-expr-builder.html?context=cdpaas&locale=en#advanced) <!-- </ul> --> ## Conversion functions ## Converts a single data element format to another\. ### Table for basic data type conversion ### <!-- <table> --> | Type | Accepts | Returns | Syntax | | ----------- | -------------------------------------- | ----------- | --------------------------------------------------------------------------------------------------------------------------- | | `double` | `int, uint, string` | `double` | `double(val)` | | `duration` | `string` | `duration` | `duration(string)` <br>Duration must end with "s", which stands for seconds\. | | `int` | `int, uint, double, string, timestamp` | `int` | `int(val)` | | `timestamp` | `string` | `timestamp` | `timestamp(string)` <br>Converts strings to timestamps according to RFC3339, that is "1972\-01\-01T10:00:20\.021\-05:00"\. | | `uint` | `int, double, string` | `uint` | `uint(val)` | <!-- </table ""> --> #### Example #### For example, to cast a value to type `double`: double(%val%) When you cast double to `int | uint`, result rounds toward zero and errors if result is out of range\. ## Standard functions ## Functions that are unique to IBM Watson Pipelines\. ### sub ### Replaces substrings of a string that matches the given regular expression that starts at position offset\. #### Syntax #### (string).sub(substring (string), replacement (string), [occurrence (int), offset (int)]]) returns: the string with substrings updated\. #### Examples #### 'aaabbbcccbbb'.sub('[b]+','RE') Returns 'aaaREcccRE'\. ### format ### Formats a string or timestamp according to a format specifier and returns the resulting string\. #### Syntax #### ***format* as a method of strings** (string).format(parameter 1 (string or bool or number)... parameter 10 (string or bool or number)) returns: the string that contains the formatted input values\. ***format* as a method of timestamps** (timestamp).format(layout(string)) returns: the formatted timestamp in string format\. #### Examples #### 'number=%d, text=%s'.format(1, 'str') Returns the string 'number=1, text=str'\. timestamp('2020-07-24T09:07:29.000-00:00').format('%Y/%m/%d') Returns the string '2020/07/24'\. ### now ### Returns the current timestamp\. #### Syntax #### now() returns: the current timestamp\. ### parseTimestamp ### Returns the current timestamp in string format\. #### Syntax #### parseTimestamp([timestamp_string(string)] [layout(string)]) returns: the current timestamp to a string of type string\. #### Examples #### parseTimestamp('2020-07-24T09:07:29Z') Returns '2020\-07\-24T09:07:29\.000\-00:00'\. ### min ### Returns minimum value in list\. #### Syntax #### (list).min() returns: the minimum value of the list\. #### Examples #### [1,2,3].min() Returns the integer 1\. ### max ### Returns maximum value in list\. 
#### Syntax #### (list).max() returns: the maximum value of the list\. #### Examples #### [1,2,3].max() Returns the integer 3\. ### argmin ### Returns the index of the minimum value in a list\. #### Syntax #### (list).argmin() returns: the index of the minimum value of the list\. #### Examples #### [1,2,3].argmin() Returns the integer 0\. ### argmax ### Returns the index of the maximum value in a list\. #### Syntax #### (list).argmax() returns: the index of the maximum value of the list\. #### Examples #### [1,2,3].argmax() Returns the integer 2\. ### sum ### Returns the sum of the values in a list\. #### Syntax #### (list).sum() returns: the sum of the values of the list\. #### Examples #### [1,2,3].sum() Returns the integer 6\. ### base64\.decode ### Decodes a base64\-encoded string to bytes\. This function returns an error if the string input is not base64\-encoded\. #### Syntax #### base64.decode(base64_encoded_string(string)) returns: the decoded base64\-encoded string in byte format\. #### Examples #### base64.decode('aGVsbG8=') Returns 'hello' in bytes\. ### base64\.encode ### Encodes bytes to a base64\-encoded string\. #### Syntax #### base64.encode(bytes_to_encode (bytes)) returns: the base64\-encoded string of the original byte value\. #### Examples #### base64.encode(b'hello') Returns the string 'aGVsbG8='\. ### charAt ### Returns the character at the given position\. If the position is negative, or greater than the length of the string, the function produces an error\. #### Syntax #### (string).charAt(index (int)) returns: the character at the specified integer position\. #### Examples #### 'hello'.charAt(4) Returns the character 'o'\. ### indexOf ### Returns the integer index of the first occurrence of the search string\. If the search string is not found, the function returns \-1\. #### Syntax #### (string).indexOf(search_string (string), [offset (int)]) returns: the index of the first character occurrence after the offset\. #### Examples #### 'hello mellow'.indexOf('ello', 2) Returns the integer 7\. ### lowerAscii ### Returns a new string with ASCII characters turned to lowercase\. #### Syntax #### (string).lowerAscii() returns: the new lowercase string\. #### Examples #### 'TacoCat'.lowerAscii() Returns the string 'tacocat'\. ### replace ### Returns a new string based on the target, in which occurrences of a search string are replaced with a replacement string if present\. The function accepts an optional limit on the number of substring replacements to be made\. #### Syntax #### (string).replace(search_string (string), replacement (string), [limit (int)]) returns: the new string with occurrences of a search string replaced\. #### Examples #### 'hello hello'.replace('he', 'we') Returns the string 'wello wello'\. ### split ### Returns a list of strings that are split from the input by the separator\. The function accepts an optional argument that specifies a limit on the number of substrings that are produced by the split\. #### Syntax #### (string).split(separator (string), [limit (int)]) returns: the split string as a string list\. #### Examples #### 'hello hello hello'.split(' ') Returns the string list \['hello', 'hello', 'hello'\]\. ### substring ### Returns the substring given a numeric range corresponding to character positions\. Optionally, you can omit the end position to get the substring from the start position to the end of the string\. #### Syntax #### (string).substring(start (int), [end (int)]) returns: the substring at the specified index of the string\.
#### Examples #### 'tacocat'.substring(4) Returns the string 'cat'\. ### trim ### Returns a new string with the leading and trailing white space of the target string removed\. The trim function uses the Unicode definition of white space, which does not include the zero\-width spaces\. #### Syntax #### (string).trim() returns: the new string with white spaces removed\. #### Examples #### ' \ttrim\n '.trim() Returns the string 'trim'\. ### upperAscii ### Returns a new string where all ASCII characters are upper\-cased\. #### Syntax #### (string).upperAscii() returns: the new string with all characters turned to uppercase\. #### Examples #### 'TacoCat'.upperAscii() Returns the string 'TACOCAT'\. ### size ### Returns the length of the string, bytes, list, or map\. #### Syntax #### (string | bytes | list | map).size() returns: the length of the string, bytes, list, or map\. #### Examples #### 'hello'.size() Returns the integer 5\. b'hello'.size() Returns the integer 5\. ['a','b','c'].size() Returns the integer 3\. {'key': 'value'}.size() Returns the integer 1\. ### contains ### Tests whether the string operand contains the substring\. #### Syntax #### (string).contains(substring (string)) returns: a Boolean value of whether the substring exists in the string operand\. #### Examples #### 'hello'.contains('ll') Returns true\. ### endsWith ### Tests whether the string operand ends with the specified suffix\. #### Syntax #### (string).endsWith(suffix (string)) returns: a Boolean value of whether the string operand ends with the specified suffix\. #### Examples #### 'hello'.endsWith('llo') Returns true\. ### startsWith ### Tests whether the string operand starts with the prefix argument\. #### Syntax #### (string).startsWith(prefix (string)) returns: a Boolean value of whether the string operand begins with the specified prefix\. #### Examples #### 'hello'.startsWith('he') Returns true\. ### matches ### Tests whether the string operand matches the given regular expression\. #### Syntax #### (string).matches(regex (string)) returns: a Boolean value of whether the string matches the specified regular expression\. #### Examples #### 'Hello'.matches('[Hh]ello') Returns true\. ### getDate ### Get the day of the month from the date with time zone (default Coordinated Universal Time), one\-based indexing\. #### Syntax #### (timestamp).getDate([time_zone (string)]) returns: the day of the month with one\-based indexing\. #### Examples #### timestamp('2020-07-24T09:07:29.000-00:00').getDate() Returns 24\. ### getDayOfMonth ### Get the day of the month from the date with time zone (default Coordinated Universal Time), zero\-based indexing\. #### Syntax #### (timestamp).getDayOfMonth([time_zone (string)]) returns: the day of the month with zero\-based indexing\. #### Examples #### timestamp('2020-07-24T09:07:29.000-00:00').getDayOfMonth() Returns 23\. ### getDayOfWeek ### Get the day of the week from the date with time zone (default Coordinated Universal Time), zero\-based indexing, zero for Sunday\. #### Syntax #### (timestamp).getDayOfWeek([time_zone (string)]) returns: the day of the week with zero\-based indexing\. #### Examples #### timestamp('2020-07-24T09:07:29.000-00:00').getDayOfWeek() Returns 5\. ### getDayOfYear ### Get the day of the year from the date with time zone (default Coordinated Universal Time), zero\-based indexing\. #### Syntax #### (timestamp).getDayOfYear([time_zone (string)]) returns: the day of the year with zero\-based indexing\.
#### Examples #### timestamp('2020-07-24T09:07:29.000-00:00').getDayOfYear() Returns 205\. ### getFullYear ### Get the year from the date with time zone (default Coordinated Universal Time)\. #### Syntax #### (timestamp).getFullYear([time_zone (string)]) returns: the year from the date\. #### Examples #### timestamp('2020-07-24T09:07:29.000-00:00').getFullYear() Returns 2020\. ### getMonth ### Get the month from the date with time zone, 0\-11\. #### Syntax #### (timestamp).getMonth([time_zone (string)]) returns: the month from the date\. #### Examples #### timestamp('2020-07-24T09:07:29.000-00:00').getMonth() Returns 6\. ### getHours ### Get hours from the date with time zone, 0\-23\. #### Syntax #### (timestamp).getHours([time_zone (string)]) returns: the hour from the date\. #### Examples #### timestamp('2020-07-24T09:07:29.000-00:00').getHours() Returns 9\. ### getMinutes ### Get minutes from the date with time zone, 0\-59\. #### Syntax #### (timestamp).getMinutes([time_zone (string)]) returns: the minute from the date\. #### Examples #### timestamp('2020-07-24T09:07:29.000-00:00').getMinutes() Returns 7\. ### getSeconds ### Get seconds from the date with time zone, 0\-59\. #### Syntax #### (timestamp).getSeconds([time_zone (string)]) returns: the second from the date\. #### Examples #### timestamp('2020-07-24T09:07:29.000-00:00').getSeconds() Returns 29\. ### getMilliseconds ### Get milliseconds from the date with time zone, 0\-999\. #### Syntax #### (timestamp).getMilliseconds([time_zone (string)]) returns: the millisecond from the date\. #### Examples #### timestamp('2020-07-24T09:07:29.021-00:00').getMilliseconds() Returns 21\. ## Access to advanced global objects ## Get node outputs, user variables, and pipeline parameters by using the following Pipelines code\. ### Get user variable ### Gets the most up\-to\-date value of a user variable\. #### Syntax #### vars.<var name> #### Examples #### <!-- <table> --> | Example | Output | | ------------------ | ------------------------------------------------- | | `vars.my_user_var` | Gets the value of the user variable `my_user_var` | <!-- </table ""> --> ### Get parameters ### Gets the flow parameters\. #### Syntax #### params.<param name> #### Examples #### <!-- <table> --> | Example | Output | | ---------- | ----------------------------------- | | `params.a` | Gets the value of the parameter `a` | <!-- </table ""> --> ### Get parameter sets ### Gets the flow parameter sets\. #### Syntax #### param_set.<param_set_name>.<param_name> #### Examples #### <!-- <table> --> | Example | Output | | ---------------------------- | ------------------------------------------------------------------ | | `param_set.ps.a` | Gets the value of the parameter `a` from a parameter set `ps` | | `param_sets.config` | Gets the pipeline configuration values | | `param_sets.config.deadline` | Gets a date object from the configurations parameter set | | `param_sets.ps["$PARAM"]` | Gets the value of the parameter `$PARAM` from a parameter set `ps` | <!-- </table ""> --> ### Get task results ### Get a pipeline task's resulting output and other metrics from a pipeline task after it completes its run\. 
#### Syntax #### tasks.<node id>.<output name> #### Examples #### <!-- <table> --> | Example | Output | | ------------------------------------------------ | -------------------------------------------------- | | `tasks.run_datastage_job` | Gets the results dictionary of job output | | `tasks.run_datastage_job.results.score` | Gets the value `score` of job output | | `tasks.run_datastage_job.results.timestamp` | Gets the end timestamp of job run | | `tasks.run_datastage_job.results.error` | Gets the number of errors from job run | | `tasks.loop_task.loop.counter` | Gets the current loop iterative counter of job run | | `tasks.loop_task.loop.item` | Gets the current loop iterative item of job run | | `tasks.run_datastage_job.results.status` | Gets either success or fail status of job run | | `tasks.run_datastage_job.results.status_message` | Gets the status message of job run | | `tasks.run_datastage_job.results.job_name` | Gets the job name | | `tasks.run_datastage_job.results.job` | Gets the Cloud Pak for Data path of job | | `tasks.run_datastage_job.results.job_run` | Gets the Cloud Pak for Data run path of job run | <!-- </table ""> --> ### Get pipeline context objects ### Gets values that are evaluated in the context of a pipeline that is run in a scope (project, space, catalog)\. #### Examples #### <!-- <table> --> | Example | Output | | ---------------------------------- | ----------------------------------------------- | | `ctx.scope.id` | Gets scope ID | | `ctx.scope.type` | Returns either "project", "space", or "catalog" | | `ctx.scope.name` | Gets scope name | | `ctx.pipeline.id` | Gets pipeline ID | | `ctx.pipeline.name` | Gets pipeline name | | `ctx.job.id` | Gets job ID | | `ctx.run_datastage_job.id` | Gets job run ID | | `ctx.run_datastage_job.started_at` | Gets job run start time | | `ctx.user.id` | Gets the user ID | <!-- </table ""> --> ### Get error status ### If the exception handler is triggered, an error object is created and becomes accessible only within the exception handler\. #### Examples #### <!-- <table> --> | Example | Output | | ------------------------- | ------------------------------------------------------------- | | `error.status` | Gets either success or fail status of job run, usually failed | | `error.status_message` | Gets the error status message | | `error.job` | Gets the Cloud Pak for Data path of job | | `error.run_datastage_job` | Gets the Cloud Pak for Data run path of job | <!-- </table ""> --> **Parent topic:**[Adding conditions to a Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-conditions.html) <!-- </article "role="article" "> -->
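The functions and global objects in this topic can be combined into a single advanced condition. The following sketch is illustrative only: `run_datastage_job` is a hypothetical node ID, `deadline` is a hypothetical pipeline parameter, `environment` is a hypothetical user variable, and the sketch assumes that the Expression Builder accepts the usual Boolean (`&&`) and comparison operators in conditions.

    tasks.run_datastage_job.results.error == 0
      && vars.environment.lowerAscii().startsWith('prod')
      && now().getFullYear() <= timestamp(params.deadline).getFullYear()

Every function in the sketch (`lowerAscii`, `startsWith`, `now`, `getFullYear`, `timestamp`) and every object prefix (`tasks`, `vars`, `params`) is documented in the sections above.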
445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-flow-param.html?context=cdpaas&locale=en
Configuring global objects for Watson Pipelines
Configuring global objects for Watson Pipelines Use global objects to create configurable constants that configure your pipeline at run time. Use parameters or user variables in pipelines to specify values at run time, rather than hardcoding the values. Unlike pipeline parameters, user variables can be dynamically set during the flow. Learn about creating: * [Pipeline parameters](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-flow-param.html?context=cdpaas&locale=enflow) * [Parameter sets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-flow-param.html?context=cdpaas&locale=enparam-set) * [User variables](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-flow-param.html?context=cdpaas&locale=enuser) Pipeline parameters Use pipeline parameters to specify a value at pipeline runtime. For example, if you want a user to enter a deployment space for pipeline output, use a parameter to prompt for the space name to use when the pipeline runs. Specifying the value of the parameter each time that you run the job helps you use the correct resources. About pipeline parameters: * can be assigned as a node value or assigned for the pipeline job. * can be assigned to any node, and a status indicator alerts you. * can be used for multiple nodes. Defining a pipeline parameter 1. Create a pipeline parameter from the node configuration panel or from the toolbar. 2. Enter a name and an optional description. The name must be lower snake case with lowercase letters, numbers, and underscores. For example, lower_snake_case_with_numbers_123 is a valid name. The name must begin with a letter. If the name does not comply, you get a 404 error when you try to run the pipeline. 3. Assign a parameter type. Depending on the parameter type, you might need to provide more details or assign a default value. 4. Click Add to list to save the pipeline parameter. Parameter types Parameter types are categorized as: * Basic: including data types to structure input to a pipeline or options for handling the creation of a duplicate space or asset. * Resource: for selecting a project, catalog, space, or asset. * Instance: for selecting a machine learning instance or a Cloud Object Storage instance. * Other: for specifying details, such as creation mode or error policy. Example of using parameter types To create a parameter of the type Path: 1. Create a parameter set called MASTER_PARAMETER_SET. 2. Create a parameter called file_path and set the type to Path. 3. Set the value of file_path to mnts/workspace/masterdir. 4. Drag the node Wait for file onto the canvas and set the File location value to MASTER_PARAMETER_SET.file_path. 5. Connect the Wait for file with the Run Bash script node so that the latter node runs after the former. 6. Optional: Test your parameter variable: 1. Add the environment variable parameter to your MASTER_PARAMETER_SET parameter set, for example FILE_PATH. 2. Paste the following commands into the Script code of the Run Bash script node: echo File: $FILE_PATH cat $FILE_PATH 7. Run the pipeline. The path mnts/workspace/masterdir appears in both nodes' execution logs, which confirms that the value was passed successfully. Configuring a node with a pipeline parameter When you configure a node with a pipeline parameter, you can choose an existing pipeline parameter or create a new one as part of configuring a node. For example: 1. Create a pipeline parameter called creationmode and save it to the parameter list. 2.
Configure a Create deployment space node and click to open the configuration panel. 3. Choose the Pipeline parameter as the input for the Creation mode option. 4. Choose the creationmode pipeline parameter and save the configuration. When you run the flow, the pipeline parameter is assigned when the space is created. Parameter sets A parameter set is a group of related parameters to use in a pipeline. For example, you might create one set of parameters to use in a test environment and another for use in a production environment. Parameter sets can be created as a project asset. Parameter sets created in the project are then available for use in pipelines in that project. Creating a parameter set as a project asset You can create a parameter set as a reusable project asset to use in pipelines. 1. Open an existing project or create a project. 2. Click New task > Collect multiple job parameters with specified values to reuse in jobs from the available tasks. 3. Assign a name for the set, and specify the details for each parameter in the set, including: * Name for the parameter * Data type * Prompt * Default value 4. Optionally create value sets for the parameters in the parameter set. Value sets can provide different values for different contexts. For example, you can create a Test value set with values for a test environment, and a production set for production values. 5. Save the parameter set after you create all the parameters. It becomes available for use in pipelines that are created in that project. Adding a parameter set for use in a pipeline To add a parameter set from a project: 1. Click the global objects icon and switch to the Parameter sets tab. 2. Click Add parameter set to add parameter sets from your project that you want to use in your pipeline. 3. You can add or remove parameter sets from the list. The parameter sets that you specify for use in your pipeline become available when you assign parameters as input in the pipeline. Creating a parameter set from the parameters list in your pipeline You can create a parameter set from the parameters list for your pipeline: 1. Click the global objects icon and open the Pipeline Parameters. 2. Select the parameters that you want in the set, then click the Save as parameter set icon. 3. Enter a name and optional description for the set. 4. Save to add the parameter set for use in your pipeline. Using a parameter set in a pipeline To use a parameter set: 1. Choose Assign pipeline parameter as an input type from a node property sheet. 2. Choose the parameter to assign. A list displays all available parameters of the type for that input. Available parameters can be individual parameters or parameters that are defined as part of a set. The parameter set name precedes the name of the parameter. For example, Parameter_set_name.Parameter_name. 3. Run the pipeline and select a value set for the corresponding value (if available), assign a value for the parameter, or accept the default value. Note: You can use a parameter set in the expression builder by using the format param_sets.<MyParamSet.BRParam>. If a parameter set value contains an environment variable, you must use this syntax in the expression builder: param_sets.MyParamSet["$ICU_DATA"]. Attention: If you delete a parameter, make sure that you remove the references to the parameter from your job design. If you do not remove the references, your job might fail.
Editing a parameter set in a job If you use a parameter set when you define a job, you can choose a value set to populate variables with the values in that set. If you change and save the values, then edit the job and save changes, the parameter set values reset to the defaults. User variables Create user variables to assign values when the flow runs. Unlike pipeline parameters, user variables can be modified during processing. Defining a user variable You can create user variables for use in your pipeline. User variables, like parameters, are defined on the global level and are not specific to any node. The initial value for a user variable must be set when you define it and cannot be set dynamically as the result of any node output. When you define a user variable, you can use the Set user variables node to update it with node output. To create a user variable: 1. Create a variable from the Update variable node configuration panel or from the toolbar. 2. Enter a name and an optional description. The name must be lower snake case with lowercase letters, numbers, and underscores. For example, lower_snake_case_with_numbers_123 is a valid name. The name must begin with a letter. If the name does not comply, you get a 404 error when you try to run the pipeline. 3. Complete the definition of the variable, including choosing a variable type and input type. 4. Click Add to add the variable to the list. It is now available for use in a node. Parent topic:[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html)
# Configuring global objects for Watson Pipelines # Use global objects to create configurable constants that configure your pipeline at run time\. Use parameters or user variables in pipelines to specify values at run time, rather than hardcoding the values\. Unlike pipeline parameters, user variables can be dynamically set during the flow\. Learn about creating: <!-- <ul> --> * [Pipeline parameters](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-flow-param.html?context=cdpaas&locale=en#flow) * [Parameter sets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-flow-param.html?context=cdpaas&locale=en#param-set) * [User variables](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-flow-param.html?context=cdpaas&locale=en#user) <!-- </ul> --> ## Pipeline parameters ## Use pipeline parameters to specify a value at pipeline runtime\. For example, if you want a user to enter a deployment space for pipeline output, use a parameter to prompt for the space name to use when the pipeline runs\. Specifying the value of the parameter each time that you run the job helps you use the correct resources\. About pipeline parameters: <!-- <ul> --> * can be assigned as a node value or assigned for the pipeline job\. * can be assigned to any node, and a status indicator alerts you\. * can be used for multiple nodes\. <!-- </ul> --> ### Defining a pipeline parameter ### <!-- <ol> --> 1. Create a pipeline parameter from the node configuration panel or from the toolbar\. 2. Enter a name and an optional description\. The name must be lower *snake* case with lowercase letters, numbers, and underscores\. For example, `lower_snake_case_with_numbers_123` is a valid name\. The name must begin with a letter\. If the name does not comply, you get a 404 error when you try to run the pipeline\. 3. Assign a parameter type\. Depending on the parameter type, you might need to provide more details or assign a default value\. 4. Click **Add to list** to save the pipeline parameter\. <!-- </ol> --> ### Parameter types ### Parameter types are categorized as: <!-- <ul> --> * **Basic:** including data types to structure input to a pipeline or options for handling the creation of a duplicate space or asset\. * **Resource:** for selecting a project, catalog, space, or asset\. * **Instance:** for selecting a machine learning instance or a Cloud Object Storage instance\. * **Other:** for specifying details, such as creation mode or error policy\. <!-- </ul> --> #### Example of using parameter types #### To create a parameter of the type **Path**: <!-- <ol> --> 1. Create a parameter set called **MASTER\_PARAMETER\_SET**\. 2. Create a parameter called `file_path` and set the type to **Path**\. 3. Set the value of `file_path` to `mnts/workspace/masterdir`\. 4. Drag the node **Wait for file** onto the canvas and set the *File location* value to `MASTER_PARAMETER_SET.file_path`\. 5. Connect the **Wait for file** with the **Run Bash script** node so that the latter node runs after the former\. 6. *Optional:* Test your parameter variable: <!-- <ol> --> 1. Add the environment variable parameter to your **MASTER\_PARAMETER\_SET** parameter set, for example `FILE_PATH`. 2. Paste the following commands into the *Script code* of the **Run Bash script** node: echo File: $FILE_PATH cat $FILE_PATH <!-- </ol> --> 7. Run the pipeline\. The path `mnts/workspace/masterdir` appears in both nodes' execution logs, which confirms that the value was passed successfully\.
<!-- </ol> --> ## Configuring a node with a pipeline parameter ## When you configure a node with a pipeline parameter, you can choose an existing pipeline parameter or create a new one as part of configuring a node\. For example: <!-- <ol> --> 1. Create a pipeline parameter called *creationmode* and save it to the parameter list\. 2. Configure a *Create deployment space* node and click to open the configuration panel\. 3. Choose the **Pipeline parameter** as the input for the **Creation mode** option\. 4. Choose the *creationmode* pipeline parameter and save the configuration\. <!-- </ol> --> When you run the flow, the pipeline parameter is assigned when the space is created\. ## Parameter sets ## A parameter set is a group of related parameters to use in a pipeline\. For example, you might create one set of parameters to use in a test environment and another for use in a production environment\. Parameter sets can be created as a project asset\. Parameter sets created in the project are then available for use in pipelines in that project\. ### Creating a parameter set as a project asset ### You can create a parameter set as a reusable project asset to use in pipelines\. <!-- <ol> --> 1. Open an existing project or create a project\. 2. Click **New task > Collect multiple job parameters with specified values to reuse in jobs** from the available tasks\. 3. Assign a name for the set, and specify the details for each parameter in the set, including: <!-- <ul> --> * Name for the parameter * Data type * Prompt * Default value <!-- </ul> --> 4. Optionally create value sets for the parameters in the parameter set\. Value sets can provide different values for different contexts\. For example, you can create a Test value set with values for a test environment, and a production set for production values\. 5. Save the parameter set after you create all the parameters\. It becomes available for use in pipelines that are created in that project\. <!-- </ol> --> ### Adding a parameter set for use in a pipeline ### To add a parameter set from a project: <!-- <ol> --> 1. Click the global objects icon and switch to the **Parameter sets tab**\. 2. Click **Add parameter set** to add parameter sets from your project that you want to use in your pipeline\. 3. You can add or remove parameter sets from the list\. The parameter sets that you specify for use in your pipeline become available when you assign parameters as input in the pipeline\. <!-- </ol> --> ### Creating a parameter set from the parameters list in your pipeline ### You can create a parameter set from the parameters list for your pipeline: <!-- <ol> --> 1. Click the global objects icon and open the Pipeline Parameters\. 2. Select the parameters that you want in the set, then click the **Save as parameter set** icon\. 3. Enter a name and optional description for the set\. 4. Save to add the parameter set for use in your pipeline\. <!-- </ol> --> ### Using a parameter set in a pipeline ### To use a parameter set: <!-- <ol> --> 1. Choose **Assign pipeline parameter** as an input type from a node property sheet\. 2. Choose the parameter to assign\. A list displays all available parameters of the type for that input\. Available parameters can be individual parameters or parameters that are defined as part of a set\. The parameter set name precedes the name of the parameter\. For example, *Parameter\_set\_name\.Parameter\_name*\. 3.
Run the pipeline and select a value set for the corresponding value (if available), assign a value for the parameter, or accept the default value\. <!-- </ol> --> Note:You can use a parameter set in the expression builder by using the format `param_sets.<MyParamSet.BRParam>`\. If a parameter set value contains an environment variable, you must use this syntax in the expression builder: `param_sets.MyParamSet["$ICU_DATA"]`\. Attention: If you delete a parameter, make sure that you remove the references to the parameter from your job design\. If you do not remove the references, your job might fail\. ### Editing a parameter set in a job ### If you use a parameter set when you define a job, you can choose a value set to populate variables with the values in that set\. If you change and save the values, then edit the job and save changes, the parameter set values reset to the defaults\. ## User variables ## Create user variables to assign values when the flow runs\. Unlike pipeline parameters, user variables can be modified during processing\. ### Defining a user variable ### You can create user variables for use in your pipeline\. User variables, like parameters, are defined on the global level and are not specific to any node\. The initial value for a user variable must be set when you define it and cannot be set dynamically as the result of any node output\. When you define a user variable, you can use the **Set user variables** node to update it with node output\. To create a user variable: <!-- <ol> --> 1. Create a variable from the **Update variable** node configuration panel or from the toolbar\. 2. Enter a name and an optional description\. The name must be lower *snake* case with lowercase letters, numbers, and underscores\. For example, lower\_snake\_case\_with\_numbers\_123 is a valid name\. The name must begin with a letter\. If the name does not comply, you get a 404 error when you try to run the pipeline\. 3. Complete the definition of the variable, including choosing a variable type and input type\. 4. Click **Add** to add the variable to the list\. It is now available for use in a node\. <!-- </ol> --> **Parent topic:**[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html) <!-- </article "role="article" "> -->
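To connect parameters, parameter sets, and user variables with the Expression Builder syntax noted above, the following hedged example shows a condition that compares a user variable with a parameter-set value. `retry_count` and `max_retries` are hypothetical names used only for illustration, `MASTER_PARAMETER_SET` reuses the parameter set from the Path example, `int()` is the conversion function from the Expression Builder topic, and the comparison operator is assumed to be available in conditions.

    vars.retry_count < int(param_sets.MASTER_PARAMETER_SET.max_retries)

If the value of a parameter in the set contains an environment variable, use the bracket syntax instead, for example `param_sets.MASTER_PARAMETER_SET["$MAX_RETRIES"]` (again, a hypothetical name).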
484AF9BAF43AC6BCFDFAF7B0D353CCDF119033DF
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-get-started.html?context=cdpaas&locale=en
Getting started with the Watson Pipelines editor
Getting started with the Watson Pipelines editor The Watson Pipelines editor is a graphical canvas where you can drag and drop nodes that you connect together into a pipeline for automating machine learning model operations. You can open the Pipelines editor by creating a new Pipelines asset or editing an existing Pipelines asset. To create a new asset in your project from the Assets tab, click New asset > Automate model lifecycle. To edit an existing asset, click the pipeline asset name on the Assets tab. The canvas opens with a set of annotated tools for you to use to create a pipeline. The canvas includes the following components: ![Pipeline canvas components](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/Pipeline-canvas.svg) * The node palette provides nodes that represent various actions for manipulating assets and altering the flow of control in a pipeline. For example, you can add nodes to create assets such as data files, AutoAI experiments, or deployment spaces. You can configure node actions that are based on conditions, such as feeding data into a notebook only if files import successfully. You can also use nodes to run and update assets. As you build your pipeline, you connect the nodes, then configure operations on the nodes to create the pipeline. These pipelines create a dynamic flow that addresses specific stages of the machine learning lifecycle. * The toolbar includes shortcuts to options related to running, editing, and viewing the pipeline. * The parameters pane provides context-sensitive options for configuring the elements of your pipeline. The toolbar ![Pipeline toolbar](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/Pipeline-toolbar.png) Use the Pipeline editor toolbar to: * Run the pipeline as a trial run or a scheduled job * View the history of pipeline runs * Cut, copy, or paste canvas objects * Delete a selected node * Drop a comment onto the canvas * Configure global objects, such as pipeline parameters or user variables * Manage default settings * Arrange nodes vertically * View last saved timestamp * Zoom in or out * Fit the pipeline to the view * Show or hide global messages Hover over an icon on the toolbar to view the shortcut text. The node palette The node palette provides the objects that you need to create an end-to-end pipeline. Click a top-level node in the palette to see the related nodes. Node category Description Node type Copy Use nodes to copy an asset or file, import assets, or export assets Copy assets <br>Export assets <br>Import assets Create Create assets or containers for assets Create AutoAI experiment <br>Create AutoAI time series experiment <br>Create batch deployment <br>Create data asset <br>Create deployment space <br>Create online deployment Wait Specify node-level conditions for advancing the pipeline run Wait for all results <br>Wait for any result <br>Wait for file Control Specify error handling Loop in parallel <br>Loop in sequence <br>Set user variables <br>Terminate pipeline Update Update the configuration settings for a space, asset, or job. Update AutoAI experiment <br>Update batch deployment <br>Update deployment space <br>Update online deployment Delete Remove a specified asset, job, or space. Delete AutoAI experiment <br>Delete batch deployment <br>Delete deployment space <br>Delete online deployment Run Run an existing or ad hoc job.
Run AutoAI experiment <br>Run Bash script <br>Run batch deployment <br>Run Data Refinery job <br>Run notebook job <br>Run pipeline job <br>Run Pipelines component job <br>Run SPSS Modeler job The parameters pane Double-click a node to edit its configuration options. Depending on the type, a node can define various input and output options or even allow the user to add inputs or outputs dynamically. You can define the source of values in various ways. For example, you can specify that the source of value for "ML asset" input for a batch deployment must be the output from a run notebook node. For more information on parameters, see [Configuring pipeline components](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.html). Next steps * [Planning a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-planning.html) * [Explore the sample pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html) * [Create a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html) Parent topic:[Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html)
# Getting started with the Watson Pipelines editor # The Watson Pipelines editor is a graphical canvas where you can drag and drop nodes that you connect together into a pipeline for automating machine learning model operations\. You can open the Pipelines editor by creating a new Pipelines asset or editing an existing Pipelines asset\. To create a new asset in your project from the *Assets* tab, click **New asset > Automate model lifecycle**\. To edit an existing asset, click the pipeline asset name on the *Assets* tab\. The canvas opens with a set of annotated tools for you to use to create a pipeline\. The canvas includes the following components: ![Pipeline canvas components](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/Pipeline-canvas.svg) <!-- <ul> --> * The **node palette** provides nodes that represent various actions for manipulating assets and altering the flow of control in a pipeline\. For example, you can add nodes to create assets such as data files, AutoAI experiments, or deployment spaces\. You can configure node actions that are based on conditions, such as feeding data into a notebook only if files import successfully\. You can also use nodes to run and update assets\. As you build your pipeline, you connect the nodes, then configure operations on the nodes to create the pipeline\. These pipelines create a dynamic flow that addresses specific stages of the machine learning lifecycle\. * The **toolbar** includes shortcuts to options related to running, editing, and viewing the pipeline\. * The **parameters pane** provides context\-sensitive options for configuring the elements of your pipeline\. <!-- </ul> --> ### The toolbar ### ![Pipeline toolbar](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/Pipeline-toolbar.png) Use the Pipeline editor toolbar to: <!-- <ul> --> * Run the pipeline as a trial run or a scheduled job * View the history of pipeline runs * Cut, copy, or paste canvas objects * Delete a selected node * Drop a comment onto the canvas * Configure global objects, such as pipeline parameters or user variables * Manage default settings * Arrange nodes vertically * View last saved timestamp * Zoom in or out * Fit the pipeline to the view * Show or hide global messages <!-- </ul> --> Hover over an icon on the toolbar to view the shortcut text\. ### The node palette ### The node palette provides the objects that you need to create an end\-to\-end pipeline\. Click a top\-level node in the palette to see the related nodes\.
<!-- <table> --> | Node category | Description | Node type | | ------------- | ------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | Copy | Use nodes to copy an asset or file, import assets, or export assets | Copy assets <br>Export assets <br>Import assets | | Create | Create assets or containers for assets | Create AutoAI experiment <br>Create AutoAI time series experiment <br>Create batch deployment <br>Create data asset <br>Create deployment space <br>Create online deployment | | Wait | Specify node\-level conditions for advancing the pipeline run | Wait for all results <br>Wait for any result <br>Wait for file | | Control | Specify error handling | Loop in parallel <br>Loop in sequence <br>Set user variables <br>Terminate pipeline | | Update | Update the configuration settings for a space, asset, or job\. | Update AutoAI experiment <br>Update batch deployment <br>Update deployment space <br>Update online deployment | | Delete | Remove a specified asset, job, or space\. | Delete AutoAI experiment <br>Delete batch deployment <br>Delete deployment space <br>Delete online deployment | | Run | Run an existing or ad hoc job\. | Run AutoAI experiment <br>Run Bash script <br>Run batch deployment <br>Run Data Refinery job <br>Run notebook job <br>Run pipeline job <br>Run Pipelines component job <br>Run SPSS Modeler job | <!-- </table ""> --> ### The parameters pane ### Double\-click a node to edit its configuration options\. Depending on the type, a node can define various input and output options or even allow the user to add inputs or outputs dynamically\. You can define the source of values in various ways\. For example, you can specify that the source of value for "ML asset" input for a batch deployment must be the output from a run notebook node\. For more information on parameters, see [Configuring pipeline components](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.html)\. ## Next steps ## <!-- <ul> --> * [Planning a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-planning.html) * [Explore the sample pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html) * [Create a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html) <!-- </ul> --> **Parent topic:**[Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) <!-- </article "role="article" "> -->
28243D1C0B8BCF04FE3556990D40D1A31F4CB58D
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-global-settings.html?context=cdpaas&locale=en
Manage default settings
Manage default settings You can manage the global settings of your IBM Watson Pipelines, such as a default error policy and default rules for node caching. Global settings apply to all nodes in the pipeline unless local node settings overwrite them. To update global settings, click the Manage default settings icon ![gear icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ml-setting-icon.png) on the toolbar. You can configure: * [Error policy](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-global-settings.html?context=cdpaas&locale=enerr-pol) * [Node caching](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-global-settings.html?context=cdpaas&locale=ennode-cache) Setting global error policy You can define the behavior of Pipelines when an error occurs. * Fail pipeline on error stops the flow and initiates an error-handling flow. * Continue pipeline on error tries to continue running the pipeline. Error handling You can configure the behavior of Pipelines for error handling. * Create custom-error handling response: Customize an error-handling response. Add an error handling node to the canvas so you can configure a custom error response. The response applies to all nodes that are configured to fail when an error occurs. * Show icon on nodes linked to error handling pipeline: An icon flags a node with an error to help debug the flow. To learn more about error handling, see [Managing pipeline errors](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-errors.html). Setting node caches Manual caching for nodes sets the default for how the pipeline caches and stores information. You can override these settings for individual nodes. Default cache usage frequency You can change the following cache settings: Caching method Choose whether to enable automatic caching for all nodes or choose to manually set cache conditions for specific nodes. * Enable automatic caching for all nodes (recommended) All nodes that support caching enable it by default. Setting Creation Mode or Copy Mode in your node's settings to Overwrite automatically disables the cache if the node supports these setting parameters. * Enable caching for specific nodes in the node properties panel. In individual nodes, you can select Create data cache at this node in Output to allow caching for individual nodes. A save icon appears on nodes that use this feature. Cache usage Choose the conditions for using cached data. * Do not use cache * Always use cache * Use cache when all selected conditions are met * Retrying from a previous failed run * Input values for the current pipeline are unchanged from previous run * Pipeline version is unchanged from previous run To view and download your cache data, see Run tracker in your flow. You can download the results by opening a preview of the node's cache and clicking the download icon. Resetting the cache If your cache was enabled, you can choose to reset your cache when you run a Pipelines job. When you click Run again, you can select Clear pipeline cache in Define run settings. By choosing this option, you are overriding the default cache settings to reset the cache for the current run. However, the pipeline still creates cache for subsequent runs while cache is enabled. Managing your Pipelines settings Configure other global settings for your Pipelines asset. Autosave Choose to automatically save your current Pipelines canvas at a selected frequency.
Only changes that impact core pipeline flow are saved. Parent topic:[IBM Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html)
# Manage default settings # You can manage the global settings of your IBM Watson Pipelines, such as a default error policy and default rules for node caching\. Global settings apply to all nodes in the pipeline unless local node settings overwrite them\. To update global settings, click the **Manage default settings** icon ![gear icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ml-setting-icon.png) on the toolbar\. You can configure: <!-- <ul> --> * [Error policy](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-global-settings.html?context=cdpaas&locale=en#err-pol) * [Node caching](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-global-settings.html?context=cdpaas&locale=en#node-cache) <!-- </ul> --> ## Setting global error policy ## You can define the behavior of Pipelines when an error occurs\. <!-- <ul> --> * **Fail pipeline on error** stops the flow and initiates an error\-handling flow\. * **Continue pipeline on error** tries to continue running the pipeline\. <!-- </ul> --> ### Error handling ### You can configure the behavior of Pipelines for error handling\. <!-- <ul> --> * **Create custom\-error handling response**: Customize an error\-handling response\. Add an error handling node to the canvas so you can configure a custom error response\. The response applies to all nodes that are configured to fail when an error occurs\. <!-- <ul> --> * **Show icon on nodes linked to error handling pipeline**: An icon flags a node with an error to help debug the flow. <!-- </ul> --> <!-- </ul> --> To learn more about error handling, see [Managing pipeline errors](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-errors.html)\. ## Setting node caches ## Manual caching for nodes sets the default for how the pipeline caches and stores information\. You can override these settings for individual nodes\. ### Default cache usage frequency ### You can change the following cache settings: #### Caching method #### Choose whether to enable automatic caching for all nodes or choose to manually set cache conditions for specific nodes\. <!-- <ul> --> * Enable automatic caching for all nodes (recommended) All nodes that support caching enable it by default. Setting *Creation Mode* or *Copy Mode* in your node's settings to `Overwrite` automatically disables the cache if the node supports these setting parameters. * Enable caching for specific nodes in the node properties panel\. In individual nodes, you can select **Create data cache at this node** in **Output** to allow caching for individual nodes. A save icon appears on nodes that use this feature. <!-- </ul> --> #### Cache usage #### Choose the conditions for using cached data\. <!-- <ul> --> * Do not use cache * Always use cache * Use cache when all selected conditions are met <!-- <ul> --> * Retrying from a previous failed run * Input values for the current pipeline are unchanged from previous run * Pipeline version is unchanged from previous run <!-- </ul> --> <!-- </ul> --> To view and download your cache data, see **Run tracker** in your flow\. You can download the results by opening a preview of the node's cache and clicking the download icon\. ### Resetting the cache ### If your cache was enabled, you can choose to reset your cache when you run a Pipelines job\. When you click **Run again**, you can select *Clear pipeline cache* in **Define run settings**\.
By choosing this option, you are overriding the default cache settings to reset the cache for the current run\. However, the pipeline still creates cache for subsequent runs while cache is enabled\. ## Managing your Pipelines settings ## Configure other global settings for your Pipelines asset\. ### Autosave ### Choose to automatically save your current Pipelines canvas at a selected frequency\. Only changes that impact core pipeline flow are saved\. **Parent topic:**[IBM Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) <!-- </article "role="article" "> -->
606EF22CF35AF0EDC961776FB893B07A880F11D4
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html?context=cdpaas&locale=en
IBM Watson Pipelines
IBM Watson Pipelines The Watson Pipelines editor provides a graphical interface for orchestrating an end-to-end flow of assets from creation through deployment. Assemble and configure a pipeline to create, train, deploy, and update machine learning models and Python scripts. To design a pipeline, you drag nodes onto the canvas, specify objects and parameters, and then run and monitor the pipeline. Automating the path to production Putting a model into production is a multi-step process. Data must be loaded and processed, and models must be trained and tuned before they are deployed and tested. Machine learning models require ongoing observation, evaluation, and updating over time to avoid bias or drift. ![Automating the AI lifecycle](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/oflow-cycle-3.svg) Automating the pipeline makes it simpler to build, run, and evaluate a model in a cohesive way, to shorten the time from conception to production. You can assemble the pipeline, then rapidly update and test modifications. The Pipelines canvas provides tools to visualize the pipeline, customize it at run time with pipeline parameter variables, and then run it as a trial job or on a schedule. The Pipelines editor also allows for more cohesive collaboration between a data scientist and a ModelOps engineer. A data scientist can create and train a model. A ModelOps engineer can then automate the process of training, deploying, and evaluating the model after it is published to a production environment. Next steps [Add a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-get-started.html) to your project and get to know the canvas tools. Additional resources For more information, see this blog post about [automating the AI lifecycle with a pipeline flow](https://yairschiff.medium.com/automating-the-ai-lifecycle-with-ibm-watson-studio-orchestration-flow-4450f1d725d6).
# IBM Watson Pipelines # The Watson Pipelines editor provides a graphical interface for orchestrating an end\-to\-end flow of assets from creation through deployment\. Assemble and configure a pipeline to create, train, deploy, and update machine learning models and Python scripts\. To design a pipeline, you drag nodes onto the canvas, specify objects and parameters, and then run and monitor the pipeline\. ## Automating the path to production ## Putting a model into production is a multi\-step process\. Data must be loaded and processed, and models must be trained and tuned before they are deployed and tested\. Machine learning models require ongoing observation, evaluation, and updating over time to avoid bias or drift\. ![Automating the AI lifecycle](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/oflow-cycle-3.svg) Automating the pipeline makes it simpler to build, run, and evaluate a model in a cohesive way, to shorten the time from conception to production\. You can assemble the pipeline, then rapidly update and test modifications\. The Pipelines canvas provides tools to visualize the pipeline, customize it at run time with pipeline parameter variables, and then run it as a trial job or on a schedule\. The Pipelines editor also allows for more cohesive collaboration between a data scientist and a ModelOps engineer\. A data scientist can create and train a model\. A ModelOps engineer can then automate the process of training, deploying, and evaluating the model after it is published to a production environment\. ## Next steps ## [Add a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-get-started.html) to your project and get to know the canvas tools\. ## Additional resources ## For more information, see this blog post about [automating the AI lifecycle with a pipeline flow](https://yairschiff.medium.com/automating-the-ai-lifecycle-with-ibm-watson-studio-orchestration-flow-4450f1d725d6)\. <!-- </article "role="article" "> -->
1BD28F052373C2E70130C7539D399D76F9D2AAFE
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-planning.html?context=cdpaas&locale=en
Accessing the components in your pipeline
Accessing the components in your pipeline When you use a pipeline to automate a flow, you must have access to all of the elements in the pipeline. Make sure that you create and run pipelines with the proper access to all assets, projects, and spaces used in the pipeline. Collaborators who run the pipeline must also be able to access the pipeline components. Managing pipeline credentials To run a job, the pipeline must have access to IBM Cloud credentials. Typically, a pipeline uses your personal IBM Cloud API key to execute long-running operations in the pipeline without disruption. If credentials are not available when you create the job, you are prompted to supply an API key or create a new one. To generate an API key from your IBM Cloud user account, go to [Manage access and users - API Keys](https://cloud.ibm.com/iam/apikeys) and create or select an API key for your user account. You can also generate and rotate API keys from Profile and settings > User API key. For more information, see [Managing the user API key](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-apikeys.html). Alternatively, you can request that a key is generated for the pipeline. In either scenario, name and copy the key, protecting it as you would a password. Adding assets to a pipeline When you create a pipeline, you add assets, such as data, notebooks, deployment jobs, or Data Refinery jobs to the pipeline to orchestrate a sequential process. The strongly recommended method for adding assets to a pipeline is to collect the assets in the project containing the pipeline and use the asset browser to select project assets for the pipeline. Attention: Although you can include assets from other projects, doing so can introduce complexities and potential problems in your pipeline and could be prohibited in a future release. The recommended practice is to use assets from the current project. Parent topic:[Getting started with Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-get-started.html)
## Accessing the components in your pipeline ## When you use a pipeline to automate a flow, you must have access to all of the elements in the pipeline\. Make sure that you create and run pipelines with the proper access to all assets, projects, and spaces used in the pipeline\. Collaborators who run the pipeline must also be able to access the pipeline components\. ### Managing pipeline credentials ### To run a job, the pipeline must have access to IBM Cloud credentials\. Typically, a pipeline uses your personal IBM Cloud API key to execute long\-running operations in the pipeline without disruption\. If credentials are not available when you create the job, you are prompted to supply an API key or create a new one\. To generate an API key from your IBM Cloud user account, go to [Manage access and users \- API Keys](https://cloud.ibm.com/iam/apikeys) and create or select an API key for your user account\. You can also generate and rotate API keys from **Profile and settings > User API key**\. For more information, see [Managing the user API key](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-apikeys.html)\. Alternatively, you can request that a key is generated for the pipeline\. In either scenario, name and copy the key, protecting it as you would a password\. ## Adding assets to a pipeline ## When you create a pipeline, you add assets, such as data, notebooks, deployment jobs, or Data Refinery jobs to the pipeline to orchestrate a sequential process\. The ***strongly recommended*** method for adding assets to a pipeline is to collect the assets in the project containing the pipeline and use the asset browser to select project assets for the pipeline\. Attention: Although you can include assets from other projects, doing so can introduce complexities and potential problems in your pipeline and could be prohibited in a future release\. The recommended practice is to use assets from the current project\. **Parent topic:**[Getting started with Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-get-started.html) <!-- </article "role="article" "> -->
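Watson Pipelines performs this credential exchange for you when it runs jobs on your behalf. If you also script against IBM Cloud APIs yourself, the following minimal sketch, which is not part of the pipeline tooling, shows how an IBM Cloud API key is typically exchanged for a short-lived IAM bearer token by using the standard IBM Cloud IAM endpoint and the Python `requests` package:

    import requests

    def iam_token_from_apikey(api_key: str) -> str:
        """Exchange an IBM Cloud API key for a short-lived IAM access token."""
        response = requests.post(
            "https://iam.cloud.ibm.com/identity/token",
            headers={"Content-Type": "application/x-www-form-urlencoded"},
            data={
                "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
                "apikey": api_key,
            },
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["access_token"]

    # Example usage; never hard-code or commit the key itself.
    # token = iam_token_from_apikey("YOUR_IBM_CLOUD_API_KEY")

Treat the API key like a password: store it in an environment variable or a secrets manager rather than in pipeline or notebook source.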
F5086D0B6258FEF503CB3219F427FFBFF73135E1
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-program.html?context=cdpaas&locale=en
Programming IBM Watson Pipelines
Programming IBM Watson Pipelines You can program in a pipeline by using a notebook or by running Bash scripts in a pipeline. Programming with Bash scripts [Run Bash scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.htmlrun-bash) in a pipeline to compute or process data as part of the flow. Programming with notebooks You can use a notebook to run an end-to-end pipeline or to run parts of a pipeline, such as model training. * For details on creating notebooks and for links to sample notebooks, see [Notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-editor.html). * For details on running a notebook as a pipeline job, see [Run notebook job](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.htmlrun-notebook). Using the Python client Use the [Watson Pipelines Python client](https://pypi.org/project/ibm-watson-pipelines/) to work with pipelines in a notebook. To install the library, use pip to install the latest package of ibm-watson-pipelines in your coding environment. For example, run the following code in your notebook environment or console. !pip install ibm-watson-pipelines Refer to the client documentation for the syntax and descriptions of the commands that access pipeline components. Go further To learn more about Tekton, a key continuous delivery framework that is used for Pipelines, see the blog post [Tekton and friends: how to orchestrate external tasks efficiently](https://medium.com/@rafal.bigaj/tekton-and-friends-how-to-orchestrate-external-tasks-efficiently-3fcacf882f6d). Parent topic:[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html)
# Programming IBM Watson Pipelines # You can program in a pipeline by using a notebook or by running Bash scripts in a pipeline\. ## Programming with Bash scripts ## [Run Bash scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.html#run-bash) in a pipeline to compute or process data as part of the flow\. ## Programming with notebooks ## You can use a notebook to run an end\-to\-end pipeline or to run parts of a pipeline, such as model training\. <!-- <ul> --> * For details on creating notebooks and for links to sample notebooks, see [Notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-editor.html)\. * For details on running a notebook as a pipeline job, see [Run notebook job](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.html#run-notebook)\. <!-- </ul> --> ### Using the Python client ### Use the [Watson Pipelines Python client](https://pypi.org/project/ibm-watson-pipelines/) to work with pipelines in a notebook\. To install the library, use `pip` to install the latest package of `ibm-watson-pipelines` in your coding environment\. For example, run the following code in your notebook environment or console\. !pip install ibm-watson-pipelines Refer to the client documentation for the syntax and descriptions of the commands that access pipeline components\. ### Go further ### To learn more about Tekton, a key continuous delivery framework that is used for Pipelines, see the blog post [Tekton and friends: how to orchestrate external tasks efficiently](https://medium.com/@rafal.bigaj/tekton-and-friends-how-to-orchestrate-external-tasks-efficiently-3fcacf882f6d)\. **Parent topic:**[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html) <!-- </article "role="article" "> -->
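For example, a notebook that runs as a node in a pipeline typically creates a client object from your IBM Cloud API key and then exchanges values with the rest of the flow. The following is a minimal sketch only: the class and method names (`WatsonPipelines.from_apikey`, `get_params`, and `store_results`) are assumptions based on a common client layout, so verify them against the documentation for your installed version of `ibm-watson-pipelines`.

```python
# Minimal sketch of using the Watson Pipelines Python client in a notebook node.
# The names WatsonPipelines.from_apikey, get_params, and store_results are
# assumptions; check the ibm-watson-pipelines documentation for the exact API.
from ibm_watson_pipelines import WatsonPipelines

API_KEY = "YOUR_IBM_CLOUD_API_KEY"  # treat the key like a password; do not hard-code it in shared notebooks

client = WatsonPipelines.from_apikey(API_KEY)

# Read the parameters that the pipeline passes to this notebook node.
params = client.get_params()
print("Pipeline parameters:", params)

# Publish a result so that downstream nodes in the flow can consume it.
client.store_results({"model_name": "my-trained-model"})
```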
AE57C56703B39C9097516D1466B70A3DE57AA1C4
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-run-save.html?context=cdpaas&locale=en
Running a pipeline
Running a pipeline You can run a pipeline in real time to test a flow as you work. When you are satisfied with a pipeline, you can then define a job to run a pipeline with parameters or to run on a schedule. To run a pipeline: 1. Click Run pipeline on the toolbar. 2. Choose an option: * Trial run runs the pipeline without creating a job. Use this to test a pipeline. * Create a job presents you with an interface for configuring and scheduling a job to run the pipeline. You can save and reuse run details, such as pipeline parameters, for a version of your pipeline. * View history compares all of your runs over time. You must make sure requirements are met when you run a pipeline. For example, you might need a deployment space or an API key to run some of your nodes before you can begin. Using a job run name You can optionally specify a job run name when running a pipeline flow or a pipeline job and see the different jobs in the Job details dashboard. Otherwise, you can also assign a local parameter DSJobInvocationId to either a Run pipeline job node or Run DataStage job node. If both the parameter DSJobInvocationId and job name of the node are set, DSJobInvocationId will be used. If neither are set, the default value "job run" is used. Notes on running a pipeline * When you run a pipeline from a trial run or a job, click the node output to view the results of a successful run. If the run fails, error messages and logs are provided to help you correct issues. * Errors in the pipeline are flagged with an error badge. Open the node or condition with an error to change or complete the configuration. * View the consolidated logs to review operations or identify issues with the pipeline. Creating a pipeline job The following are all the configuration options for defining a job to run the pipeline. 1. Name your pipeline job and choose a version. 2. Input your IBM API key. 3. (Optional) Schedule your job by toggling the Schedule button. 1. Choose the start date and fine tune your schedule to repeat by any minute, hour, day, week, month. 2. Add exception days to prevent the job from running on certain days. 3. Add a time for terminating the job. 4. (Optional) Enter the pipeline parameters needed for your job, for example assigning a space to a deployment node. To see how to create a pipeline parameter, see Defining pipeline parameters in [Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html). 5. (Optional) Choose if you want to be notified of pipeline job status after running. Saving a version of a pipeline You can save a version of a pipeline and revert to it at a later time. For example, if you want to preserve a particular configuration before you make changes, save a version. You can revert the pipeline to a previous version. When you share a pipeline, the latest version is used. To save a version: 1. Click the Versions icon on the toolbar. 2. In the Versions pane, click Save version to create a new version with a version number incremented by 1. When you run the pipeline, you can choose from available saved versions. Note: You cannot delete a saved version. Exporting pipeline assets When you export project or space assets to import them into a deployment space, you can include pipelines in the list of assets you export to a zip file and then import into a project or space. 
Importing a pipeline into a space extends your MLOps capabilities to run jobs for various assets from a space, or to move all jobs from a pre-production to a production space. Note these considerations for working with pipelines in a space: * Pipelines in a space are read-only. You cannot edit the pipeline. You must edit the pipeline from the project, then export the updated pipeline and import it into the space. * Although you cannot edit the pipeline in a space, you can create new jobs to run the pipeline. You can also use parameters to assign values for jobs so you can have different values for each job you configure. * If there is already a pipeline in the space with the same name, the pipeline import will fail. * If there is no pipeline in the space with the same name, a pipeline with version 1 is created in the space. * Any supporting assets or references required to run a pipeline job must also be part of the import package or the job will fail. * If your pipeline contains assets or tools not supported in a space, such as an SPSS modeler job, the pipeline job will fail. Parent topic:[IBM Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html)
## Running a pipeline ## You can run a pipeline in real time to test a flow as you work\. When you are satisfied with a pipeline, you can then define a job to run a pipeline with parameters or to run on a schedule\. To run a pipeline: <!-- <ol> --> 1. Click **Run pipeline** on the toolbar\. 2. Choose an option: <!-- <ul> --> * **Trial run** runs the pipeline without creating a job. Use this to test a pipeline. * **Create a job** presents you with an interface for configuring and scheduling a job to run the pipeline. You can save and reuse run details, such as pipeline parameters, for a version of your pipeline. * **View history** compares all of your runs over time. <!-- </ul> --> <!-- </ol> --> You must make sure requirements are met when you run a pipeline\. For example, you might need a deployment space or an API key to run some of your nodes before you can begin\. ### Using a job run name ### You can optionally specify a job run name when running a pipeline flow or a pipeline job and see the different jobs in the **Job details** dashboard\. Otherwise, you can also assign a local parameter `DSJobInvocationId` to either a **Run pipeline job** node or **Run DataStage job** node\. If both the parameter `DSJobInvocationId` and job name of the node are set, `DSJobInvocationId` will be used\. If neither are set, the default value "job run" is used\. ### Notes on running a pipeline ### <!-- <ul> --> * When you run a pipeline from a trial run or a job, click the node output to view the results of a successful run\. If the run fails, error messages and logs are provided to help you correct issues\. * Errors in the pipeline are flagged with an error badge\. Open the node or condition with an error to change or complete the configuration\. * View the consolidated logs to review operations or identify issues with the pipeline\. <!-- </ul> --> ## Creating a pipeline job ## The following are all the configuration options for defining a job to run the pipeline\. <!-- <ol> --> 1. Name your pipeline job and choose a version\. 2. Input your IBM API key\. 3. **(Optional)** Schedule your job by toggling the **Schedule** button\. <!-- <ol> --> 1. Choose the start date and fine tune your schedule to repeat by any minute, hour, day, week, month. 2. Add exception days to prevent the job from running on certain days. 3. Add a time for terminating the job. <!-- </ol> --> 4. **(Optional)** Enter the pipeline parameters needed for your job, for example assigning a space to a deployment node\. To see how to create a pipeline parameter, see **Defining pipeline parameters** in [Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html)\. 5. **(Optional)** Choose if you want to be notified of pipeline job status after running\. <!-- </ol> --> ## Saving a version of a pipeline ## You can save a version of a pipeline and revert to it at a later time\. For example, if you want to preserve a particular configuration before you make changes, save a version\. You can revert the pipeline to a previous version\. When you share a pipeline, the latest version is used\. To save a version: <!-- <ol> --> 1. Click the Versions icon on the toolbar\. 2. In the Versions pane, click **Save version** to create a new version with a version number incremented by 1\. <!-- </ol> --> When you run the pipeline, you can choose from available saved versions\. Note: You cannot delete a saved version\. 
## Exporting pipeline assets ## When you export project or space assets to import them into a deployment space, you can include pipelines in the list of assets you export to a zip file and then import into a project or space\. Importing a pipeline into a space extends your MLOps capabilities to run jobs for various assets from a space, or to move all jobs from a pre\-production to a production space\. Note these considerations for working with pipelines in a space: <!-- <ul> --> * Pipelines in a space are read\-only\. You cannot edit the pipeline\. You must edit the pipeline from the project, then export the updated pipeline and import it into the space\. * Although you cannot edit the pipeline in a space, you can create new jobs to run the pipeline\. You can also use parameters to assign values for jobs so you can have different values for each job you configure\. * If there is already a pipeline in the space with the same name, the pipeline import will fail\. * If there is no pipeline in the space with the same name, a pipeline with version 1 is created in the space\. * Any supporting assets or references required to run a pipeline job must also be part of the import package or the job will fail\. * If your pipeline contains assets or tools not supported in a space, such as an SPSS modeler job, the pipeline job will fail\. <!-- </ul> --> **Parent topic:**[IBM Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) <!-- </article "role="article" "> -->
2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html?context=cdpaas&locale=en
Run the built-in sample pipeline
Run the built-in sample pipeline You can view and run a built-in sample pipeline that uses sample data to learn how to automate machine learning flows in Watson Pipelines. What's happening in the sample pipeline? The sample pipeline gets training data, trains a machine learning model by using the AutoAI tool, and selects the best pipeline to save as a model. The model is then copied to a deployment space where it is deployed. The sample illustrates how you can automate an end-to-end flow to make the lifecycle easier to run and monitor. The sample pipeline looks like this: ![Sample orchestration pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/oflow-tutorial1.png) The tutorial steps you through this process: 1. [Prerequisites](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html?context=cdpaas&locale=enset-up) 2. [Preview creating and running the sample pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html?context=cdpaas&locale=enpreview) 3. [Creating the sample pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html?context=cdpaas&locale=encreate-sample) 4. [Running the sample pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html?context=cdpaas&locale=enrun-flow) 5. [Reviewing the results](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html?context=cdpaas&locale=enreview-results) 6. [Exploring the sample nodes and configuration](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html?context=cdpaas&locale=enexplore-sample) Prerequisites To run this sample, you must first create: * A [project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html), where you can run the sample pipeline. * A [deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-create.html), where you can view and test the results. The deployment space is required to run the sample pipeline. Preview creating and running the sample pipeline Watch this video to see how to create and run a sample pipeline. This video provides a visual method to learn the concepts and tasks in this documentation. Creating the sample pipeline Create the sample pipeline in the Pipelines editor. 1. Open the project where you want to create the pipeline. 2. From the Assets tab, click New asset > Automate model lifecycle. 3. Click the Samples tab, and select the Orchestrate an AutoAI experiment. 4. Enter a name for the pipeline. For example, enter Bank marketing sample. 5. Click Create to open the canvas. Running the sample pipeline To run the sample pipeline: 1. Click Run pipeline on the canvas toolbar, then choose Trial run. 2. Select a deployment space when prompted to provide a value for the deployment_space pipeline parameter. 1. Click Select Space. 2. Expand the Spaces section. 3. Select your deployment space. 4. Click Choose. 3. Provide an API key if it is your first time to run a pipeline. Pipeline assets use your personal IBM Cloud API key to run operations securely without disruption. * If you have an existing API key, click Use existing API key, paste the API key, and click Save. * If you don't have an existing API key, click Generate new API key, provide a name, and click Save. Copy the API key, and then save the API key for future use. When you're done, click Close. 4. 
Click Run to start the pipeline. Reviewing the results When the pipeline run completes, you can view the output to see the results. ![Sample pipeline run output](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/oflow-results1.png) Open the deployment space that you specified as part of the pipeline. You see the new deployment in the space: ![Sample pipeline deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/oflow-results-space.png) If you want to test the deployment, use the deployment space Test page to submit payload data in JSON format and get a score back. For example, click the JSON tab and enter this input data: {"input_data": [{"fields": ["age","job","marital","education","default","balance","housing","loan","contact","day","month","duration","campaign","pdays","previous","poutcome"],"values": [["30","unemployed","married","primary","no","1787","no","no","cellular","19","oct","79","1","-1","0","unknown"]]}]} When you click Predict, the model generates output with a confidence score for the prediction of whether a customer subscribes to a term deposit promotion. ![Prediction score for the sample model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/oflow-gall-sample-output.png) In this case, the prediction of "no" is accompanied by a confidence score of close to 95%, predicting that the client will most likely not subscribe to a term deposit. Exploring the sample nodes and configuration Get a deeper understanding of how the sample nodes were configured to work in concert in the pipeline sample. Viewing the pipeline parameter A pipeline parameter specifies a setting for the entire pipeline. In the sample pipeline, a pipeline parameter is used to specify a deployment space where the model that is saved from the AutoAI experiment is stored and deployed. You are prompted to select the deployment space the pipeline parameter links to. Click the Global objects icon ![global objects icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/global-objects-icon.png) on the canvas toolbar to view or create pipeline parameters. In the sample pipeline, the pipeline parameter is named deployment_space and is of type Space. Click the name of the pipeline parameter to view the details. In the sample, the pipeline parameter is used with the Create data file node and the Create AutoAI experiment node. ![Flow parameter to specify deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/oflow-flow-param3.png) Loading the training data for the AutoAI experiment In this step, a Create data file node is configured to access the data set for the experiment. Click the node to view the configuration. The data file is bank-marketing-data.csv, which provides sample data to predict whether a bank customer signs up for a term deposit. The data rests in a Cloud Object Storage bucket and can be refreshed to keep the model training up to date. Option Value File The location of the data asset for training the AutoAI experiment. In this case, the data file is in a project. File path The name of the asset, bank-marketing-data.csv. Target scope For this sample, the target is a deployment space. 
Prediction type binary Prediction column (label) y Positive class yes Training data split ratio 0.9 Algorithms to include GradientBoostingClassifierEstimator <br>XGBClassifierEstimator Algorithms to use 1 Metric to optimize ROC AUC Optimize metric (optional) default Hardware specification (optional) default AutoAI experiment description This experiment uses a sample file, which contains text data that is collected from phone calls to a Portuguese bank in response to a marketing campaign. The classification goal is to predict whether a client subscribes to a term deposit, represented by variable y. AutoAI experiment tags (optional) none Creation mode (optional) default Those options define an experiment that uses the bank marketing data to predict whether a customer is likely to enroll in a promotion. Running the AutoAI experiment In this step, the Run AutoAI experiment node runs the AutoAI experiment onboarding-bank-marketing-prediction, trains the pipelines, then saves the best model. Option Value AutoAI experiment Takes the output from the Create AutoAI node as the input to run the experiment. Training data assets Takes the output from the Create Data File node as the training data input for the experiment. Model count 1 Holdout data asset (optional) none Models count (optional) 3 Run name (optional) none Model name prefix (optional) none Run description (optional) none Run tags (optional) none Creation mode (optional) default Error policy (optional) default Deploying the model to a web service The Create Web deployment node creates an online deployment that is named onboarding-bank-marketing-prediction-deployment so you can deliver data and get predictions back in real time from the REST API endpoint. Option Value ML asset Takes the best model output from the Run AutoAI node as the input to create the deployment. Deployment name onboarding-bank-marketing-prediction-deployment Parent topic:[IBM Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html)
# Run the built\-in sample pipeline # You can view and run a built\-in sample pipeline that uses sample data to learn how to automate machine learning flows in Watson Pipelines\. ## What's happening in the sample pipeline? ## The sample pipeline gets training data, trains a machine learning model by using the AutoAI tool, and selects the best pipeline to save as a model\. The model is then copied to a deployment space where it is deployed\. The sample illustrates how you can automate an end\-to\-end flow to make the lifecycle easier to run and monitor\. The sample pipeline looks like this: ![Sample orchestration pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/oflow-tutorial1.png) The tutorial steps you through this process: <!-- <ol> --> 1. [Prerequisites](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html?context=cdpaas&locale=en#set-up) 2. [Preview creating and running the sample pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html?context=cdpaas&locale=en#preview) 3. [Creating the sample pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html?context=cdpaas&locale=en#create-sample) 4. [Running the sample pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html?context=cdpaas&locale=en#run-flow) 5. [Reviewing the results](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html?context=cdpaas&locale=en#review-results) 6. [Exploring the sample nodes and configuration](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html?context=cdpaas&locale=en#explore-sample) <!-- </ol> --> ## Prerequisites ## To run this sample, you must first create: <!-- <ul> --> * A [project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html), where you can run the sample pipeline\. * A [deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-create.html), where you can view and test the results\. The deployment space is required to run the sample pipeline\. <!-- </ul> --> ## Preview creating and running the sample pipeline ## Watch this video to see how to create and run a sample pipeline\. This video provides a visual method to learn the concepts and tasks in this documentation\. ## Creating the sample pipeline ## Create the sample pipeline in the Pipelines editor\. <!-- <ol> --> 1. Open the project where you want to create the pipeline\. 2. From the *Assets* tab, click **New asset > Automate model lifecycle**\. 3. Click the **Samples** tab, and select the **Orchestrate an AutoAI experiment**\. 4. Enter a name for the pipeline\. For example, enter *Bank marketing sample*\. 5. Click **Create** to open the canvas\. <!-- </ol> --> ## Running the sample pipeline ## To run the sample pipeline: <!-- <ol> --> 1. Click **Run pipeline** on the canvas toolbar, then choose **Trial run**\. 2. Select a deployment space when prompted to provide a value for the *deployment\_space* pipeline parameter\. <!-- <ol> --> 1. Click **Select Space**. 2. Expand the **Spaces** section. 3. Select your deployment space. 4. Click **Choose**. <!-- </ol> --> 3. Provide an API key if it is your first time to run a pipeline\. Pipeline assets use your personal IBM Cloud API key to run operations securely without disruption\. 
<!-- <ul> --> * If you have an existing API key, click **Use existing API key**, paste the API key, and click **Save**. * If you don't have an existing API key, click **Generate new API key**, provide a name, and click **Save**. Copy the API key, and then save the API key for future use. When you're done, click **Close**. <!-- </ul> --> 4. Click **Run** to start the pipeline\. <!-- </ol> --> ## Reviewing the results ## When the pipeline run completes, you can view the output to see the results\. ![Sample pipeline run output](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/oflow-results1.png) Open the deployment space that you specified as part of the pipeline\. You see the new deployment in the space: ![Sample pipeline deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/oflow-results-space.png) If you want to test the deployment, use the deployment space **Test** page to submit payload data in JSON format and get a score back\. For example, click the **JSON** tab and enter this input data: {"input_data": [{"fields": ["age","job","marital","education","default","balance","housing","loan","contact","day","month","duration","campaign","pdays","previous","poutcome"],"values": [["30","unemployed","married","primary","no","1787","no","no","cellular","19","oct","79","1","-1","0","unknown"]]}]} When you click **Predict**, the model generates output with a confidence score for the prediction of whether a customer subscribes to a term deposit promotion\. ![Prediction score for the sample model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/oflow-gall-sample-output.png) In this case, the prediction of "no" is accompanied by a confidence score of close to 95%, predicting that the client will most likely not subscribe to a term deposit\. ## Exploring the sample nodes and configuration ## Get a deeper understanding of how the sample nodes were configured to work in concert in the pipeline sample\. ### Viewing the pipeline parameter ### A pipeline parameter specifies a setting for the entire pipeline\. In the sample pipeline, a pipeline parameter is used to specify a deployment space where the model that is saved from the AutoAI experiment is stored and deployed\. You are prompted to select the deployment space the pipeline parameter links to\. Click the Global objects icon ![global objects icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/global-objects-icon.png) on the canvas toolbar to view or create pipeline parameters\. In the sample pipeline, the pipeline parameter is named *deployment\_space* and is of type *Space*\. Click the name of the pipeline parameter to view the details\. In the sample, the pipeline parameter is used with the **Create data file** node and the **Create AutoAI experiment** node\. ![Flow parameter to specify deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/oflow-flow-param3.png) ### Loading the training data for the AutoAI experiment ### In this step, a **Create data file** node is configured to access the data set for the experiment\. Click the node to view the configuration\. The data file is `bank-marketing-data.csv`, which provides sample data to predict whether a bank customer signs up for a term deposit\. The data rests in a Cloud Object Storage bucket and can be refreshed to keep the model training up to date\. 
<!-- <table> --> | Option | Value | | ------ | ----- | | File | The location of the data asset for training the AutoAI experiment\. In this case, the data file is in a project\. | | File path | The name of the asset, `bank-marketing-data.csv`\. | | Target scope | For this sample, the target is a deployment space\. | <!-- </table ""> --> ### Creating the AutoAI experiment ### The node to **Create AutoAI experiment** is configured with these values: <!-- <table> --> | Option | Value | | ------ | ----- | | AutoAI experiment name | onboarding\-bank\-marketing\-prediction | | Scope | For this sample, the target is a deployment space\. | | Prediction type | binary | | Prediction column (label) | y | | Positive class | yes | | Training data split ratio | 0\.9 | | Algorithms to include | GradientBoostingClassifierEstimator <br>XGBClassifierEstimator | | Algorithms to use | 1 | | Metric to optimize | ROC AUC | | Optimize metric (optional) | *default* | | Hardware specification (optional) | *default* | | AutoAI experiment description | This experiment uses a sample file, which contains text data that is collected from phone calls to a Portuguese bank in response to a marketing campaign\. The classification goal is to predict whether a client subscribes to a term deposit, represented by variable y\. | | AutoAI experiment tags (optional) | *none* | | Creation mode (optional) | *default* | <!-- </table ""> --> Those options define an experiment that uses the bank marketing data to predict whether a customer is likely to enroll in a promotion\. ### Running the AutoAI experiment ### In this step, the **Run AutoAI experiment** node runs the AutoAI experiment *onboarding\-bank\-marketing\-prediction*, trains the pipelines, then saves the best model\. <!-- <table> --> | Option | Value | | ------ | ----- | | AutoAI experiment | Takes the output from the **Create AutoAI** node as the input to run the experiment\. | | Training data assets | Takes the output from the **Create Data File** node as the training data input for the experiment\. | | Model count | 1 | | Holdout data asset (optional) | *none* | | Models count (optional) | 3 | | Run name (optional) | *none* | | Model name prefix (optional) | *none* | | Run description (optional) | *none* | | Run tags (optional) | *none* | | Creation mode (optional) | *default* | | Error policy (optional) | *default* | <!-- </table ""> --> ### Deploying the model to a web service ### The **Create Web deployment** node creates an online deployment that is named *onboarding\-bank\-marketing\-prediction\-deployment* so you can deliver data and get predictions back in real time from the REST API endpoint\. <!-- <table> --> | Option | Value | | ------ | ----- | | ML asset | Takes the best model output from the **Run AutoAI** node as the input to create the deployment\. | | Deployment name | onboarding\-bank\-marketing\-prediction\-deployment | <!-- </table ""> --> **Parent topic:**[IBM Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) <!-- </article "role="article" "> -->
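If you prefer to score the sample deployment programmatically instead of through the Test page, you can send the same bank marketing payload with the Watson Machine Learning Python client. The following is a minimal sketch under assumptions: the region URL, API key, space ID, and deployment ID are placeholders that you replace with values from your own deployment space.

```python
# Sketch: score the sample deployment with the Watson Machine Learning Python client.
# Replace the placeholder credentials, space ID, and deployment ID with your own values.
from ibm_watson_machine_learning import APIClient

wml_credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",  # use the endpoint for your region
    "apikey": "YOUR_IBM_CLOUD_API_KEY",
}
client = APIClient(wml_credentials)
client.set.default_space("YOUR_SPACE_ID")  # the deployment space that the pipeline used

payload = {
    client.deployments.ScoringMetaNames.INPUT_DATA: [{
        "fields": ["age", "job", "marital", "education", "default", "balance",
                   "housing", "loan", "contact", "day", "month", "duration",
                   "campaign", "pdays", "previous", "poutcome"],
        "values": [["30", "unemployed", "married", "primary", "no", "1787", "no", "no",
                    "cellular", "19", "oct", "79", "1", "-1", "0", "unknown"]],
    }]
}

# Returns the prediction and its confidence scores as JSON.
result = client.deployments.score("YOUR_DEPLOYMENT_ID", payload)
print(result)
```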
BB961AB67F88B50475329FCD1EE2F64137480426
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample2.html?context=cdpaas&locale=en
Run a sample pipeline to compare models
Run a sample pipeline to compare models Download a pre-populated project with the assets you need to run a sample pipeline. The pipeline trains two models, compares the output, selects the best model, and deploys it as a web service. The Train AutoAI and reference model sample creates a pre-populated project with the assets you need to run a pre-built pipeline that trains models using a sample data set. After performing some setup and configuration tasks, you can run the sample pipeline to automate the following sequence: * Copy sample assets into a space. * Run a notebook and an AutoAI experiment simultaneously, on a common training data set. * Run another notebook to compare the results from the previous nodes and select the best model, ranked for accuracy. * Copy the winning model to a space and create a web service deployment for the selected model. After the run completes, you can inspect the output in the pipeline editor and then switch to the associated deployment space to [view and test the resulting deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample2.html?context=cdpaas&locale=enview-deploy). Learning goals After running this sample you will know how to: * Configure a Watson Pipeline * Run a Watson Pipeline Downloading the sample Follow these steps to create the sample project from the Samples so you can test the capabilities of IBM Watson Pipelines: 1. Open the [Train AutoAI and reference model sample](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/496c1220779cbe5cccc063534600789f) from the Samples. 2. Click Create project to create the project. 3. Open the project and follow the instructions on the Readme page to set up the pipeline assets. The sample pipeline components The sample project includes: * Pre-built sample Watson Pipeline * Data set called german_credit_data_biased_training.csv used for training a model to predict credit risk * Data set called german_credit_test_data.csv used to test the deployed model * Notebook called reference-model-training-notebook that trains an AutoAI experiment and saves the best pipeline as a model * Notebook called select-winning-model that compares the models and chooses the best to save to the designated deployment space Getting started with the sample To run the sample pipeline, you will need to perform some setup tasks: 1. Create a deployment space, for example, dev-space, which you'll need when you run the notebooks. From the navigation menu, select Deployments > View All Spaces > New deployment space. Fill in the required fields. Note: Make sure you associate a Watson Machine Learning instance with the space or the pipeline run will fail. 2. From the Assets page of the sample project, open the reference-model-training-notebook and follow the steps in the Set up the environment section to acquire and insert an api_key variable as your credentials. 3. After inserting your credentials, click File > Save as version to save the updated notebook to your project. 4. Do the same for the select-winning-model notebook to add credentials and save the updated version of the notebook. Exploring the pipeline After you complete the setup tasks, open the sample pipeline On-boarding - Train AutoAI and reference model and select the best from the Assets page of the sample project. 
You will see the sample pipeline: ![Sample pipeline from Samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipeline-sample1.png) Viewing node configuration As you explore the sample pipeline, double-click on the various nodes to view their configuration. For example, if you click on the first node for copying an asset, you will see this configuration: ![Creating assets configuration](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipeline-sample1-config.png) Note that the node that will copy the data asset to a deployment space is configured using a pipeline parameter. The pipeline parameter creates a placeholder for the space you created to use for this pipeline. When you run the pipeline, you are prompted to choose the space. Running the pipeline When you are ready to run the pipeline, click the Run icon and choose Trial job. You are prompted to choose the deployment space for the pipeline and create or supply an API key for the pipeline if one is not already available. As the pipeline runs, you will see status notifications about the progress of the run. Nodes that are processed successfully are marked with a checkmark. ![Running the pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipeline-sample1-run.png) Viewing the output When the job completes, click Pipeline output for the run to see a summary of pipeline processes. You can click to expand each section and view the details for each operation. ![Viewing the pipeline output](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipeline-sample1-output.png) Viewing the deployment in your space After you are done exploring the pipeline and its output, you can view the assets that were created in the space you designated for the pipeline. Open the space. You can see that the models and training data were copied to the space. The winning model is tagged as selected_model. ![Viewing the associated space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipeline-sample1-space.png) Viewing the deployment The last step of the pipeline created a web service deployment for the selected model. Click the Deployments tab to view the deployment. ![Viewing the deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipeline-sample1-deploy1.png) Testing the deployment You can test the deployment to see the predictions the model will generate. 1. Click the deployment name to view the details. 2. Click the Test tab. 3. Enter this JSON data into the Input form. The payload (input) must match the schema for the model but should not include the prediction column. {"input_data":[{ "fields": ["CheckingStatus","LoanDuration","CreditHistory","LoanPurpose","LoanAmount","ExistingSavings","EmploymentDuration","InstallmentPercent","Sex","OthersOnLoan","CurrentResidenceDuration","OwnsProperty","Age","InstallmentPlans","Housing","ExistingCreditsCount","Job","Dependents","Telephone","ForeignWorker"], "values": [["no_checking",28,"outstanding_credit","appliances",5990,"500_to_1000","greater_7",5,"male","co-applicant",3,"car_other",55,"none","free",2,"skilled",2,"yes","yes"]] }]} Clicking Predict returns this prediction, indicating a low credit risk for this customer. 
![Viewing the prediction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipeline-sample1-predict.png) Next steps [Create a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html) using your own assets. Parent topic:[Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html)
# Run a sample pipeline to compare models # Download a pre\-populated project with the assets you need to run a sample pipeline\. The pipeline trains two models, compares the output, selects the best model, and deploys it as a web service\. The *Train AutoAI and reference model* sample creates a pre\-populated project with the assets you need to run a pre\-built pipeline that trains models using a sample data set\. After performing some setup and configuration tasks, you can run the sample pipeline to automate the following sequence: <!-- <ul> --> * Copy sample assets into a space\. * Run a notebook and an AutoAI experiment simultaneously, on a common training data set\. * Run another notebook to compare the results from the previous nodes and select the best model, ranked for accuracy\. * Copy the winning model to a space and create a web service deployment for the selected model\. <!-- </ul> --> After the run completes, you can inspect the output in the pipeline editor and then switch to the associated deployment space to [view and test the resulting deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample2.html?context=cdpaas&locale=en#view-deploy)\. ## Learning goals ## After running this sample you will know how to: <!-- <ul> --> * Configure a Watson Pipeline * Run a Watson Pipeline <!-- </ul> --> ## Downloading the sample ## Follow these steps to create the sample project from the Samples so you can test the capabilities of IBM Watson Pipelines: <!-- <ol> --> 1. Open the [Train AutoAI and reference model sample](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/496c1220779cbe5cccc063534600789f) from the Samples\. 2. Click **Create project** to create the project\. 3. Open the project and follow the instructions on the Readme page to set up the pipeline assets\. <!-- </ol> --> ## The sample pipeline components ## The sample project includes: <!-- <ul> --> * Pre\-built sample Watson Pipeline * Data set called german\_credit\_data\_biased\_training\.csv used for training a model to predict credit risk * Data set called german\_credit\_test\_data\.csv used to test the deployed model * Notebook called reference\-model\-training\-notebook that trains an AutoAI experiment and saves the best pipeline as a model * Notebook called select\-winning\-model that compares the models and chooses the best to save to the designated deployment space <!-- </ul> --> ## Getting started with the sample ## To run the sample pipeline, you will need to perform some setup tasks: <!-- <ol> --> 1. Create a deployment space, for example, *dev\-space*, which you'll need when you run the notebooks\. From the navigation menu, select **Deployments > View All Spaces > New deployment space**\. Fill in the required fields\. Note: Make sure you associate a Watson Machine Learning instance with the space or the pipeline run will fail. 2. From the Assets page of the sample project, open the reference\-model\-training\-notebook and follow the steps in the *Set up the environment* section to acquire and insert an api\_key variable as your credentials\. 3. After inserting your credentials, click **File > Save as version** to save the updated notebook to your project\. 4. Do the same for the select\-winning\-model notebook to add credentials and save the updated version of the notebook\. 
<!-- </ol> --> ## Exploring the pipeline ## After you complete the set up tasks, open the sample pipeline *On\-boarding \- Train AutoAI and reference model and select the best* from the Assets page of the sample project\. You will see the sample pipeline: ![Sample pipeline from Samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipeline-sample1.png) ### Viewing node configuration ### As you explore the sample pipeline, double\-click on the various nodes to view their configuration\. For example, if you click on the first node for copying an asset, you will see this configuration: ![Creating assets configuration](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipeline-sample1-config.png) Note that the node that will copy the data asset to a deployment space is configured using a pipeline parameter\. The pipeline parameter creates a placeholder for the space you created to use for this pipeline\. When you run the pipeline, you are prompted to choose the space\. ### Running the pipeline ### When you are ready to run the pipeline, click the Run icon and choose **Trial job**\. You are prompted to choose the deployment space for the pipeline and create or supply an API key for the pipeline if one is not already available\. As the pipeline runs, you will see status notifications about the progress of the run\. Nodes that are processed successfully are marked with a checkmark\. ![Running the pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipeline-sample1-run.png) ### Viewing the output ### When the job completes, click **Pipeline output** for the run to see a summary of pipeline processes\. You can click to expand each section and view the details for each operation\. ![Viewing the pipeline output](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipeline-sample1-output.png) ## Viewing the deployment in your space ## After you are done exploring the pipeline and its output, you can view the assets that were created in the space you designated for the pipeline\. Open the space\. You can see that the models and training data were copied to the space\. The winning model is tagged as *selected\_model*\. ![Viewing the associated space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipeline-sample1-space.png) ### Viewing the deployment ### The last step of the pipeline created a web service deployment for the selected model\. Click the **Deployments** tab to view the deployment\. ![Viewing the deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipeline-sample1-deploy1.png) ### Testing the deployment ### You can test the deployment to see the predictions the model will generate\. <!-- <ol> --> 1. Click the deployment name to view the details\. 2. Click the **Test** tab\. 3. Enter this JSON data into the Input form\. The payload (input) must match the schema for the model but should not include the prediction column\. 
<!-- </ol> -->     {"input_data":[{ "fields": ["CheckingStatus","LoanDuration","CreditHistory","LoanPurpose","LoanAmount","ExistingSavings","EmploymentDuration","InstallmentPercent","Sex","OthersOnLoan","CurrentResidenceDuration","OwnsProperty","Age","InstallmentPlans","Housing","ExistingCreditsCount","Job","Dependents","Telephone","ForeignWorker"], "values": [["no_checking",28,"outstanding_credit","appliances",5990,"500_to_1000","greater_7",5,"male","co-applicant",3,"car_other",55,"none","free",2,"skilled",2,"yes","yes"]] }]} Clicking **Predict** returns this prediction, indicating a low credit risk for this customer\. ![Viewing the prediction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/pipeline-sample1-predict.png) ## Next steps ## [Create a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html) using your own assets\. **Parent topic:**[Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) <!-- </article "role="article" "> -->
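You can also submit the same payload outside of the UI by calling the deployment's REST scoring endpoint. The following sketch assumes placeholder values for the region URL, the deployment ID, and the `version` date query parameter; take the exact scoring endpoint from the deployment details page in your space.

```python
# Sketch: call the deployment's REST scoring endpoint directly with an IAM token.
# The region URL, deployment ID, and version date are placeholders; copy the exact
# scoring endpoint from the deployment details in your deployment space.
import requests

API_KEY = "YOUR_IBM_CLOUD_API_KEY"

# Exchange the API key for a short-lived IAM access token.
token_response = requests.post(
    "https://iam.cloud.ibm.com/identity/token",
    data={"grant_type": "urn:ibm:params:oauth:grant-type:apikey", "apikey": API_KEY},
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
access_token = token_response.json()["access_token"]

scoring_url = (
    "https://us-south.ml.cloud.ibm.com/ml/v4/deployments/"
    "YOUR_DEPLOYMENT_ID/predictions?version=2020-09-01"
)

payload = {
    "input_data": [{
        "fields": ["CheckingStatus", "LoanDuration", "CreditHistory", "LoanPurpose",
                   "LoanAmount", "ExistingSavings", "EmploymentDuration",
                   "InstallmentPercent", "Sex", "OthersOnLoan",
                   "CurrentResidenceDuration", "OwnsProperty", "Age",
                   "InstallmentPlans", "Housing", "ExistingCreditsCount", "Job",
                   "Dependents", "Telephone", "ForeignWorker"],
        "values": [["no_checking", 28, "outstanding_credit", "appliances", 5990,
                    "500_to_1000", "greater_7", 5, "male", "co-applicant", 3,
                    "car_other", 55, "none", "free", 2, "skilled", 2, "yes", "yes"]],
    }]
}

# The response contains the prediction and probability for the credit risk model.
response = requests.post(
    scoring_url,
    json=payload,
    headers={"Authorization": f"Bearer {access_token}"},
)
print(response.json())
```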
D1DB4F3B084CB401795C925F280207CBCB3D94AA
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-storage-data-access.html?context=cdpaas&locale=en
Storage and data access for IBM Watson Pipelines
Storage and data access for IBM Watson Pipelines Learn where files and data are stored outside of IBM Watson Pipelines and how to use them in a pipeline. Access data on Cloud Object Storage File storage refers to the repository where you store assets to use with the pipeline. It is a Cloud Object Storage bucket that is used as storage for a particular scope, such as a project or deployment space. A storage location is referenced by a Cloud Object Storage data connection in its scope. Refer to a file by pointing to a location such as an object key in a dedicated, self-managed bucket. Parent topic:[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html)
# Storage and data access for IBM Watson Pipelines # Learn where files and data are stored outside of IBM Watson Pipelines and how to use them in a pipeline\. ## Access data on Cloud Object Storage ## File storage refers to the repository where you store assets to use with the pipeline\. It is a Cloud Object Storage bucket that is used as storage for a particular scope, such as a project or deployment space\. A storage location is referenced by a Cloud Object Storage data connection in its scope\. Refer to a file by pointing to a location such as an object key in a dedicated, self\-managed bucket\. **Parent topic:**[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html) <!-- </article "role="article" "> -->
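As an illustration of referencing a file by bucket and object key, the following sketch reads an object directly from Cloud Object Storage with the IBM COS SDK for Python (`ibm-cos-sdk`). The endpoint URL, bucket name, object key, and credentials are placeholders for illustration, not values that Watson Pipelines defines for you.

```python
# Sketch: read a file from a Cloud Object Storage bucket by bucket name and object key.
# Endpoint, bucket, key, and credential values are placeholders; take the real values
# from your Cloud Object Storage service credentials and bucket configuration.
import ibm_boto3
from ibm_botocore.client import Config

cos = ibm_boto3.client(
    "s3",
    ibm_api_key_id="YOUR_IBM_CLOUD_API_KEY",
    ibm_service_instance_id="YOUR_COS_SERVICE_INSTANCE_CRN",
    config=Config(signature_version="oauth"),
    endpoint_url="https://s3.us-south.cloud-object-storage.appdomain.cloud",
)

# A pipeline references a file by its bucket and object key, for example:
obj = cos.get_object(Bucket="my-pipeline-bucket", Key="data/bank-marketing-data.csv")
print(obj["Body"].read()[:200])  # preview the first 200 bytes of the file
```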
CE13AE6812F1E2CA6AD429D4B01AF25F9F398148
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-overview.html?context=cdpaas&locale=en
Deploying models with Watson Machine Learning
Deploying models with Watson Machine Learning Using IBM Watson Machine Learning, you can deploy models, scripts, and functions, manage your deployments, and prepare your assets to put into production to generate predictions and insights. This graphic illustrates a typical process for a machine learning model. After you build and train a machine learning model, use Watson Machine Learning to deploy the model, manage the input data, and put your machine learning assets to use. ![Building a machine learning model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ml_overview.svg) IBM Watson Machine Learning architecture and services Watson Machine Learning is a service on IBM Cloud with features for training and deploying machine learning models and neural networks. Built on a scalable, open source platform based on Kubernetes and Docker components, Watson Machine Learning enables you to build, train, deploy, and manage machine learning and deep learning models. Deploying and managing models with Watson Machine Learning Watson Machine Learning supports popular frameworks, including TensorFlow, scikit-learn, and PyTorch, for building and deploying models. For a list of supported frameworks, refer to [Supported frameworks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html). To build and train a model: * Use one of the tools that are listed in [Analyzing data and building models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html). * [Import a model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html) that you built and trained outside of Watson Studio. Deployment infrastructure * [Deploy trained models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) as a web service or for batch processing. * [Deploy Python functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function.html) to simplify AI solutions. Programming Interfaces * Use the [Python client library](https://ibm.github.io/watson-machine-learning-sdk/) to work with all of your Watson Machine Learning assets in a notebook. * Use the [REST API](https://cloud.ibm.com/apidocs/machine-learning) to call methods from the base URLs for the Watson Machine Learning API endpoints. * When you call the API, use the URL and add the path for each method to form the complete API endpoint for your requests. For details on checking endpoints, refer to [Looking up a deployment endpoint](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html). Parent topic:[Deploying and managing models](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html)
# Deploying models with Watson Machine Learning # Using IBM Watson Machine Learning, you can deploy models, scripts, and functions, manage your deployments, and prepare your assets to put into production to generate predictions and insights\. This graphic illustrates a typical process for a machine learning model\. After you build and train a machine learning model, use Watson Machine Learning to deploy the model, manage the input data, and put your machine learning assets to use\. ![Building a machine learning model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ml_overview.svg) ## IBM Watson Machine Learning architecture and services ## Watson Machine Learning is a service on IBM Cloud with features for training and deploying machine learning models and neural networks\. Built on a scalable, open source platform based on Kubernetes and Docker components, Watson Machine Learning enables you to build, train, deploy, and manage machine learning and deep learning models\. ## Deploying and managing models with Watson Machine Learning ## Watson Machine Learning supports popular frameworks, including TensorFlow, scikit\-learn, and PyTorch, for building and deploying models\. For a list of supported frameworks, refer to [Supported frameworks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html)\. To build and train a model: <!-- <ul> --> * Use one of the tools that are listed in [Analyzing data and building models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html)\. * [Import a model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html) that you built and trained outside of Watson Studio\. <!-- </ul> --> ### Deployment infrastructure ### <!-- <ul> --> * [Deploy trained models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) as a web service or for batch processing\. * [Deploy Python functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function.html) to simplify AI solutions\. <!-- </ul> --> ### Programming Interfaces ### <!-- <ul> --> * Use the [Python client library](https://ibm.github.io/watson-machine-learning-sdk/) to work with all of your Watson Machine Learning assets in a notebook\. * Use the [REST API](https://cloud.ibm.com/apidocs/machine-learning) to call methods from the base URLs for the Watson Machine Learning API endpoints\. * When you call the API, use the URL and add the path for each method to form the complete API endpoint for your requests\. For details on checking endpoints, refer to [Looking up a deployment endpoint](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html)\. <!-- </ul> --> **Parent topic:**[Deploying and managing models](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html) <!-- </article "role="article" "> -->
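As a quick illustration of the Python client, the following minimal sketch connects to Watson Machine Learning, sets a default deployment space, and lists the deployments and software specifications in it. The endpoint URL, API key, and space ID are placeholders to replace with your own values.

```python
# Sketch: connect with the Watson Machine Learning Python client and list assets.
# The endpoint URL, API key, and space ID are placeholder values.
from ibm_watson_machine_learning import APIClient

wml_credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",  # base URL for your region
    "apikey": "YOUR_IBM_CLOUD_API_KEY",
}

client = APIClient(wml_credentials)

# Work against a deployment space; models and deployments are scoped to it.
client.set.default_space("YOUR_SPACE_ID")

# Print a table of the deployments in the space.
client.deployments.list()

# Print the software specifications that are available for deployments.
client.software_specifications.list()
```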
577964B0C132F5EA793054C3FF67417DDA6511D3
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html?context=cdpaas&locale=en
Watson Machine Learning Python client samples and examples
Watson Machine Learning Python client samples and examples Review and use sample Jupyter Notebooks that use the Watson Machine Learning Python library to demonstrate machine learning features and techniques. Each notebook lists learning goals so you can find the one that best meets your goals. Training and deploying models from notebooks If you choose to build a machine learning model in a notebook, you must be comfortable with coding in a Jupyter Notebook. A Jupyter Notebook is a web-based environment for interactive computing. You can run small pieces of code that process your data, and then immediately view the results of your computation. Using this tool, you can assemble, test, and run all of the building blocks you need to work with data, save the data to Watson Machine Learning, and deploy the model. Learn from sample notebooks Many ways exist to build and train models and then deploy them. Therefore, the best way to learn is to look at annotated samples that step you through the process by using different frameworks. Review representative samples that demonstrate key features. The samples are built by using the V4 version of the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/). Video disclaimer: Some minor steps and graphical elements in the videos might differ from your deployment. Watch this video to learn how to train, deploy, and test a machine learning model in a Jupyter Notebook. This video mirrors the Use scikit-learn to recognize hand-written digits sample that is found in the Deployment samples table. This video provides a visual method to learn the concepts and tasks in this documentation. Watch this video to learn how to test a model that was created with AutoAI by using the Watson Machine Learning APIs in Jupyter Notebook. This video provides a visual method to learn the concepts and tasks in this documentation. Helpful variables Use the pre-defined PROJECT_ID environment variable to call the Watson Machine Learning Python client APIs. PROJECT_ID is the GUID of the project where your environment is running. Deployment samples View or run these Jupyter Notebooks to see how techniques are implemented by using various frameworks. Some of the samples rely on trained models, which are also available for you to download from the public repository. Sample name Framework Techniques demonstrated [Use scikit-learn and custom library to predict temperature](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/9365d34eeacef267026a2b75b92bfa2f) Scikit-learn Train a model with custom defined transformer <br>Persist the custom-defined transformer and the model in Watson Machine Learning repository <br>Deploy the model by using Watson Machine Learning Service <br>Perform predictions that use the deployed model [Use PMML to predict iris species](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/8bddf7f7e5d004a009c643750b16f5b4) PMML Deploy and score a PMML model [Use Python function to recognize hand-written digits](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/1eddc77b3a4340d68f762625d40b64f9) Python Use a function to store a sample model, then deploy the sample model. 
[Use scikit-learn to recognize hand-written digits](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e20607d75c8473daaade1e77c21717d4) Scikit-learn Train sklearn model <br>Persist trained model in Watson Machine Learning repository <br>Deploy model for online scoring by using client library <br>Score sample records by using client library [Use Spark and batch deployment to predict customer churn](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e20607d75c8473daaade1e77c21719c1) Spark Load a CSV file into an Apache Spark DataFrame <br>Explore data <br>Prepare data for training and evaluation <br>Create an Apache Spark machine learning pipeline <br>Train and evaluate a model <br>Persist a pipeline and model in Watson Machine Learning repository <br>Explore and visualize prediction result by using the plotly package <br>Deploy a model for batch scoring by using Watson Machine Learning API [Use Spark and Python to predict Credit Risk](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e20607d75c8473daaade1e77c2173364) Spark Load a CSV file into an Apache® Spark DataFrame <br>Explore data <br>Prepare data for training and evaluation <br>Persist a pipeline and model in Watson Machine Learning repository from tar.gz files <br>Deploy a model for online scoring by using Watson Machine Learning API <br>Score sample data by using the Watson Machine Learning API <br>Explore and visualize prediction results by using the plotly package [Use SPSS to predict customer churn](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e20607d75c8473daaade1e77c2175eb9) SPSS Work with the instance <br>Perform an online deployment of the SPSS model <br>Score data by using deployed model [Use XGBoost to classify tumors](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/ac820b22cc976f5cf6487260f4c8d9c8) XGBoost Load a CSV file into numpy array <br>Explore data <br>Prepare data for training and evaluation <br>Create an XGBoost machine learning model <br>Train and evaluate a model <br>Use cross-validation to optimize the model's hyperparameters <br>Persist a model in Watson Machine Learning repository <br>Deploy a model for online scoring <br>Score sample data [Predict business for cars](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/61a8b600f1bb183e2c471e7a64299f0e) Spark Download an externally trained Keras model with dataset. <br>Persist an external model in the Watson Machine Learning repository. <br>Deploy a model for online scoring by using client library. <br>Score sample records by using client library. 
[Deploy Python function for software specification](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/56825df5322b91daffd39426038808e9) Core Create a Python function <br>Create a web service <br>Score the model [Machine Learning artifact management](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/55ef73c276cd1bf2bae266613d08c0f3) Core Export and import artifacts <br>Load, deploy, and score externally created models [Use Decision Optimization to plan your diet](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/5502accad754a3c5dcb3a08f531cea5a) Core Create a diet planning model by using Decision Optimization [Use SPSS and batch deployment with Db2 to predict customer churn](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e5e78be14e2260ccb4bcf8181d0955ef) SPSS Load a CSV file into an Apache Spark DataFrame <br>Explore data <br>Prepare data for training and evaluation <br>Persist a pipeline and model in Watson Machine Learning repository from tar.gz files <br>Deploy a model for online scoring by using Watson Machine Learning API <br>Score sample data by using the Watson Machine Learning API <br>Explore and visualize prediction results by using the plotly package [Use scikit-learn and AI lifecycle capabilities to predict Boston house prices](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/8bddf7f7e5d004a009c643750b1c7b47) Scikit-learn Load a sample data set from scikit-learn <br>Explore data <br>Prepare data for training and evaluation <br>Create a scikit-learn pipeline <br>Train and evaluate a model <br>Store a model in the Watson Machine Learning repository <br>Deploy a model with AutoAI lifecycle capabilities [German credit risk prediction with Scikit-learn for model monitoring](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/f63c83c7368d2487c943c91a9a28ad67) Scikit-learn Train, create, and deploy a credit risk prediction model with monitoring [Monitor German credit risk model](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/48e9f342365736c7bb7a8dfc481bca6e) Scikit-learn Train, create, and deploy a credit risk prediction model with IBM Watson OpenScale capabilities AutoAI samples View or run these Jupyter Notebooks to see how AutoAI model techniques are implemented. Sample name Framework Techniques demonstrated [Use AutoAI and Lale to predict credit risk](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/8bddf7f7e5d004a009c643750b16d0c0) Hybrid (AutoAI) with Lale Work with Watson Machine Learning experiments to train AutoAI models <br>Compare trained models quality and select the best one for further refinement <br>Refine the best model and test new variations <br>Deploy and score the trained model [Use AutoAI to predict credit risk](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/029d77a73d72a4134c81383d6f103330) Hybrid (AutoAI) Work with Watson Machine Learning experiments to train AutoAI models <br>Compare trained models quality and select the best one for further refinement <br>Refine the best model and test new variations <br>Deploy and score the trained model Next steps * To learn more about using notebook editors, see [Notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html). * To learn more about working with notebooks, see [Coding and running notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/code-run-notebooks.html). 
* To learn more about authenticating in a notebook, see [Authentication](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html). Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
# Watson Machine Learning Python client samples and examples # Review and use sample Jupyter Notebooks that use the Watson Machine Learning Python library to demonstrate machine learning features and techniques\. Each notebook lists learning goals so you can find the one that best meets your goals\. ## Training and deploying models from notebooks ## If you choose to build a machine learning model in a notebook, you must be comfortable with coding in a Jupyter Notebook\. A Jupyter Notebook is a web\-based environment for interactive computing\. You can run small pieces of code that process your data, and then immediately view the results of your computation\. Using this tool, you can assemble, test, and run all of the building blocks you need to work with data, save the data to Watson Machine Learning, and deploy the model\. ## Learn from sample notebooks ## Many ways exist to build and train models and then deploy them\. Therefore, the best way to learn is to look at annotated samples that step you through the process by using different frameworks\. Review representative samples that demonstrate key features\. The samples are built by using the V4 version of the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/)\. Video disclaimer: Some minor steps and graphical elements in the videos might differ from your deployment\. Watch this video to learn how to train, deploy, and test a machine learning model in a Jupyter Notebook\. This video mirrors the **Use scikit\-learn to recognize hand\-written digits** sample found in the *Deployment samples* table\. This video provides a visual method to learn the concepts and tasks in this documentation\. Watch this video to learn how to test a model that was created with AutoAI by using the Watson Machine Learning APIs in a Jupyter Notebook\. This video provides a visual method to learn the concepts and tasks in this documentation\. ### Helpful variables ### Use the pre\-defined `PROJECT_ID` environment variable to call the Watson Machine Learning Python client APIs\. `PROJECT_ID` is the GUID of the project where your environment is running\. ## Deployment samples ## View or run these Jupyter Notebooks to see how techniques are implemented by using various frameworks\. Some of the samples rely on trained models, which are also available for you to download from the public repository\. 
<!-- <table> --> | Sample name | Framework | Techniques demonstrated | | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | [Use scikit\-learn and custom library to predict temperature](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/9365d34eeacef267026a2b75b92bfa2f) | Scikit\-learn | Train a model with custom defined transformer <br>Persist the custom\-defined transformer and the model in Watson Machine Learning repository <br>Deploy the model by using Watson Machine Learning Service <br>Perform predictions that use the deployed model | | [Use PMML to predict iris species](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/8bddf7f7e5d004a009c643750b16f5b4) | PMML | Deploy and score a PMML model | | [Use Python function to recognize hand\-written digits](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/1eddc77b3a4340d68f762625d40b64f9) | Python | Use a function to store a sample model, then deploy the sample model\. | | [Use scikit\-learn to recognize hand\-written digits](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e20607d75c8473daaade1e77c21717d4) | Scikit\-learn | Train sklearn model <br>Persist trained model in Watson Machine Learning repository <br>Deploy model for online scoring by using client library <br>Score sample records by using client library | | [Use Spark and batch deployment to predict customer churn](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e20607d75c8473daaade1e77c21719c1) | Spark | Load a CSV file into an Apache Spark DataFrame <br>Explore data <br>Prepare data for training and evaluation <br>Create an Apache Spark machine learning pipeline <br>Train and evaluate a model <br>Persist a pipeline and model in Watson Machine Learning repository <br>Explore and visualize prediction result by using the plotly package <br>Deploy a model for batch scoring by using Watson Machine Learning API | | [Use Spark and Python to predict Credit Risk](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e20607d75c8473daaade1e77c2173364) | Spark | Load a CSV file into an Apache® Spark DataFrame <br>Explore data <br>Prepare data for training and evaluation <br>Persist a pipeline and model in Watson Machine Learning repository from tar\.gz files <br>Deploy a model for online scoring by using Watson Machine Learning API <br>Score sample data by using the Watson Machine Learning API <br>Explore and visualize prediction results by using the plotly package | | [Use SPSS to predict customer churn](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e20607d75c8473daaade1e77c2175eb9) | SPSS | Work with the instance <br>Perform an online deployment of the SPSS model <br>Score data by using deployed model | | [Use XGBoost to classify tumors](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/ac820b22cc976f5cf6487260f4c8d9c8) | XGBoost | Load a CSV file into numpy array <br>Explore data <br>Prepare data for training and evaluation <br>Create an XGBoost 
machine learning model <br>Train and evaluate a model <br>Use cross\-validation to optimize the model's hyperparameters <br>Persist a model in Watson Machine Learning repository <br>Deploy a model for online scoring <br>Score sample data | | [Predict business for cars](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/61a8b600f1bb183e2c471e7a64299f0e) | Spark | Download an externally trained Keras model with dataset\. <br>Persist an external model in the Watson Machine Learning repository\. <br>Deploy a model for online scoring by using client library\. <br>Score sample records by using client library\. | | [Deploy Python function for software specification](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/56825df5322b91daffd39426038808e9) | Core | Create a Python function <br>Create a web service <br>Score the model | | [Machine Learning artifact management](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/55ef73c276cd1bf2bae266613d08c0f3) | Core | Export and import artifacts <br>Load, deploy, and score externally created models | | [Use Decision Optimization to plan your diet](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/5502accad754a3c5dcb3a08f531cea5a) | Core | Create a diet planning model by using Decision Optimization | | [Use SPSS and batch deployment with Db2 to predict customer churn](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e5e78be14e2260ccb4bcf8181d0955ef) | SPSS | Load a CSV file into an Apache Spark DataFrame <br>Explore data <br>Prepare data for training and evaluation <br>Persist a pipeline and model in Watson Machine Learning repository from tar\.gz files <br>Deploy a model for online scoring by using Watson Machine Learning API <br>Score sample data by using the Watson Machine Learning API <br>Explore and visualize prediction results by using the plotly package | | [Use scikit\-learn and AI lifecycle capabilities to predict Boston house prices](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/8bddf7f7e5d004a009c643750b1c7b47) | Scikit\-learn | Load a sample data set from scikit\-learn <br>Explore data <br>Prepare data for training and evaluation <br>Create a scikit\-learn pipeline <br>Train and evaluate a model <br>Store a model in the Watson Machine Learning repository <br>Deploy a model with AutoAI lifecycle capabilities | | [German credit risk prediction with Scikit\-learn for model monitoring](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/f63c83c7368d2487c943c91a9a28ad67) | Scikit\-learn | Train, create, and deploy a credit risk prediction model with monitoring | | [Monitor German credit risk model](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/48e9f342365736c7bb7a8dfc481bca6e) | Scikit\-learn | Train, create, and deploy a credit risk prediction model with IBM Watson OpenScale capabilities | <!-- </table ""> --> ## AutoAI samples ## View or run these Jupyter Notebooks to see how AutoAI model techniques are implemented\. 
<!-- <table> --> | Sample name | Framework | Techniques demonstrated | | ---------------------------------------------- | ------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | [Use AutoAI and Lale to predict credit risk](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/8bddf7f7e5d004a009c643750b16d0c0) | Hybrid (AutoAI) with Lale | Work with Watson Machine Learning experiments to train AutoAI models <br>Compare trained models quality and select the best one for further refinement <br>Refine the best model and test new variations <br>Deploy and score the trained model | | [Use AutoAI to predict credit risk](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/029d77a73d72a4134c81383d6f103330) | Hybrid (AutoAI) | Work with Watson Machine Learning experiments to train AutoAI models <br>Compare trained models quality and select the best one for further refinement <br>Refine the best model and test new variations <br>Deploy and score the trained model | <!-- </table ""> --> ## Next steps ## <!-- <ul> --> * To learn more about using notebook editors, see [Notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html)\. * To learn more about working with notebooks, see [Coding and running notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/code-run-notebooks.html)\. <!-- </ul> --> <!-- <ul> --> * To learn more about authenticating in a notebook, see [Authentication](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html)\. <!-- </ul> --> **Parent topic:**[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) <!-- </article "role="article" "> -->
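The Helpful variables section refers to the pre-defined `PROJECT_ID` environment variable. The following is a minimal sketch, assuming the `ibm_watson_machine_learning` package and an API key that you supply, of reading `PROJECT_ID` from the notebook environment and setting it as the default project for the Python client.

    import os
    from ibm_watson_machine_learning import APIClient

    # A minimal sketch; substitute your own API key and regional URL.
    client = APIClient({
        "url": "https://us-south.ml.cloud.ibm.com",
        "apikey": "<your IBM Cloud API key>",
    })

    # PROJECT_ID is pre-defined in notebook runtimes and holds the GUID of the
    # project where your environment is running.
    project_id = os.environ["PROJECT_ID"]
    client.set.default_project(project_id)

    # The client now reads and writes assets in that project, for example:
    client.repository.list_models()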
67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-endpoint.html?context=cdpaas&locale=en
Managing the Watson Machine Learning service endpoint
Managing the Watson Machine Learning service endpoint You can use IBM Cloud connectivity options for accessing cloud services securely by using service endpoints. When you provision a Watson Machine Learning service instance, you can choose whether to access your service through the public internet, which is the default setting, or over the IBM Cloud private network. For more information, refer to [IBM Cloud service endpoints](https://cloud.ibm.com/docs/account?topic=account-vrf-service-endpoint). You can use the Service provisioning page to choose a default endpoint from the following options: * [Public network](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-endpoint.html?context=cdpaas&locale=enpublic_net) * [Private network](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-endpoint.html?context=cdpaas&locale=enprivate_net) * Both public and private networks Public network You can use public network endpoints to connect to your Watson Machine Learning service instance over the public network. Your environment needs to have internet access to connect. Private network You can use private network endpoints to connect to your IBM Watson Machine Learning service instance over the IBM Cloud private network. After you configure your Watson Machine Learning service to use private endpoints, the service is not accessible from the public internet. Private URLs for Watson Machine Learning Private URLs for Watson Machine Learning for each region are as follows: * Dallas - [https://private.us-south.ml.cloud.ibm.com](https://private.us-south.ml.cloud.ibm.com) * London - [https://private.eu-gb.ml.cloud.ibm.com](https://private.eu-gb.ml.cloud.ibm.com) * Frankfurt - [https://private.eu-de.ml.cloud.ibm.com](https://private.eu-de.ml.cloud.ibm.com) * Tokyo - [https://private.jp-tok.ml.cloud.ibm.com](https://private.jp-tok.ml.cloud.ibm.com) Using IBM Cloud service endpoints Follow these steps to enable private network endpoints on your clusters: 1. Use the [IBM Cloud CLI](https://cloud.ibm.com/docs/cli?topic=cli-getting-started) to enable your account to use IBM Cloud service endpoints. 2. Provision a Watson Machine Learning service instance with private endpoints. Provisioning with service endpoints You can provision a Watson Machine Learning service instance with a service endpoint by using the IBM Cloud UI or the IBM Cloud CLI. Provisioning a service endpoint with IBM Cloud UI To configure the endpoints of your IBM Watson Machine Learning service instance, you can use the Endpoints field on the IBM Cloud catalog page. You can configure a public, private, or a mixed network. ![Configure endpoint from the service catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ml-endpoints.png) IBM Cloud CLI If you provision an IBM Watson Machine Learning service instance by using the IBM Cloud CLI, use the command-line option service-endpoints to configure the Watson Machine Learning endpoints. 
You can specify the value public (the default value), private, or public-and-private: ibmcloud resource service-instance-create <service instance name> pm-20 <plan_name> <region> --service-endpoints <private/public/public-and-private> For example: ibmcloud resource service-instance-create wml-instance pm-20 standard us-south -p --service-endpoints private or ibmcloud resource service-instance-create wml-instance pm-20 standard us-south --service-endpoints public-and-private Parent topic:[First steps](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-overview.html)
# Managing the Watson Machine Learning service endpoint # You can use IBM Cloud connectivity options for accessing cloud services securely by using service endpoints\. When you provision a Watson Machine Learning service instance, you can choose whether to access your service through the public internet, which is the default setting, or over the IBM Cloud private network\. For more information, refer to [IBM Cloud service endpoints](https://cloud.ibm.com/docs/account?topic=account-vrf-service-endpoint)\. You can use the Service provisioning page to choose a default endpoint from the following options: <!-- <ul> --> * [Public network](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-endpoint.html?context=cdpaas&locale=en#public_net) * [Private network](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-endpoint.html?context=cdpaas&locale=en#private_net) * Both public and private networks <!-- </ul> --> ## Public network ## You can use public network endpoints to connect to your Watson Machine Learning service instance over the public network\. Your environment needs to have internet access to connect\. ## Private network ## You can use private network endpoints to connect to your IBM Watson Machine Learning service instance over the IBM Cloud private network\. After you configure your Watson Machine Learning service to use private endpoints, the service is not accessible from the public internet\. ### Private URLs for Watson Machine Learning ### Private URLs for Watson Machine Learning for each region are as follows: <!-- <ul> --> * Dallas \- [https://private\.us\-south\.ml\.cloud\.ibm\.com](https://private.us-south.ml.cloud.ibm.com) * London \- [https://private\.eu\-gb\.ml\.cloud\.ibm\.com](https://private.eu-gb.ml.cloud.ibm.com) * Frankfurt \- [https://private\.eu\-de\.ml\.cloud\.ibm\.com](https://private.eu-de.ml.cloud.ibm.com) * Tokyo \- [https://private\.jp\-tok\.ml\.cloud\.ibm\.com](https://private.jp-tok.ml.cloud.ibm.com) <!-- </ul> --> ## Using IBM Cloud service endpoints ## Follow these steps to enable private network endpoints on your clusters: <!-- <ol> --> 1. Use the [IBM Cloud CLI](https://cloud.ibm.com/docs/cli?topic=cli-getting-started) to enable your account to use IBM Cloud service endpoints\. 2. Provision a Watson Machine Learning service instance with private endpoints\. <!-- </ol> --> ## Provisioning with service endpoints ## You can provision a Watson Machine Learning service instance with a service endpoint by using the IBM Cloud UI or the IBM Cloud CLI\. ### Provisioning a service endpoint with IBM Cloud UI ### To configure the endpoints of your IBM Watson Machine Learning service instance, you can use the **Endpoints** field on the IBM Cloud catalog page\. You can configure a public, private, or a mixed network\. ![Configure endpoint from the service catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ml-endpoints.png) ### IBM Cloud CLI ### If you provision an IBM Watson Machine Learning service instance by using the IBM Cloud CLI, use the command\-line option service\-endpoints to configure the Watson Machine Learning endpoints\. 
You can specify the value `public` (the default value), `private`, or `public-and-private`: ibmcloud resource service-instance-create <service instance name> pm-20 <plan_name> <region> --service-endpoints <private/public/public-and-private> For example: ibmcloud resource service-instance-create wml-instance pm-20 standard us-south -p --service-endpoints private or ibmcloud resource service-instance-create wml-instance pm-20 standard us-south --service-endpoints public-and-private **Parent topic:**[First steps](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-overview.html) <!-- </article "role="article" "> -->
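When your account and environment are set up for private connectivity, client code that calls Watson Machine Learning can target the private regional URL instead of the public one. The following is a minimal sketch, assuming the `ibm_watson_machine_learning` Python package and an API key that you supply; only the `url` value differs from a public-endpoint setup.

    from ibm_watson_machine_learning import APIClient

    # A minimal sketch; the credentials are the same as for the public endpoint,
    # except that the URL is the private Dallas endpoint from the list above.
    wml_credentials = {
        "url": "https://private.us-south.ml.cloud.ibm.com",
        "apikey": "<your IBM Cloud API key>",   # placeholder
    }
    client = APIClient(wml_credentials)

    # Subsequent client calls are routed over the IBM Cloud private network.
    client.set.default_space("<your space ID>")
    client.deployments.list()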
80AE02DC6E3E4FF10C8FD97E1C3F5A5E87270D57
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html?context=cdpaas&locale=en
Assets in deployment spaces
Assets in deployment spaces Learn about various ways of adding and promoting assets to a space. Find the list of asset types that you can add to a space. Note these considerations for importing assets into a space: * Upon import, some assets are automatically assigned a version number, starting with version 1. This version numbering prevents overwriting existing assets if you import their updated versions later. * Assets or references that are required to run jobs in the space must be part of the import package, or must be added separately. If you don't add these supporting assets or references, jobs fail. The way to add an asset to a space depends on the asset type. You can add some assets directly to a space (for example, a model that was created outside of watsonx). Other asset types originate in a project and must be transferred from a project to a space. The third class includes asset types that you can add to a space only as a dependency of another asset. These asset types do not display in the Assets tab in the UI. For more information, see: * [Asset types that you can directly add to a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html?context=cdpaas&locale=enadd_directly) * [Asset types that are created in projects and can be transferred into a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html?context=cdpaas&locale=enadd_transfer) * [Asset types that can be added to a space only as a dependency](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html?context=cdpaas&locale=enadd_dependency) For more information about working with space assets, see: * [Accessing asset details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-access-detailed-info.html) Asset types that you can directly add to a space * Connection * Data asset (from a connection or an uploaded file) * Model For more information, see: * For data assets and connections: [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html) * For models: [Importing models into a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html) Asset types that are created in projects and can be transferred into a space * Connection * Data Refinery flow * Environment * Function * Job * Model * Script If your asset is located in a standard Watson Studio project, you can transfer the asset to the deployment space by promoting it. For more information, see [Promoting assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-promote-assets.html). Alternatively, you can export the project and then import it into the deployment space. For more information, see: * [Exporting a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/export-project.html) * [Importing spaces and projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-import-to-space.html) If you export the whole project, any matching custom environments are exported as well. 
Asset types that can be added to a space only as a dependency * Hardware Specification * Package Extension * Software Specification * Watson Machine Learning Experiment * Watson Machine Learning Model Definition Learn more * [Deploying assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) * [Training and deploying machine learning models in notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html) Parent topic:[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html)
# Assets in deployment spaces # Learn about various ways of adding and promoting assets to a space\. Find the list of asset types that you can add to a space\. Note these considerations for importing assets into a space: <!-- <ul> --> * Upon import, some assets are automatically assigned a version number, starting with version 1\. This version numbering prevents overwriting existing assets if you import their updated versions later\. * Assets or references that are required to run jobs in the space must be part of the import package, or must be added separately\. If you don't add these supporting assets or references, jobs fail\. <!-- </ul> --> The way to add an asset to a space depends on the asset type\. You can add some assets directly to a space (for example, a model that was created outside of watsonx)\. Other asset types originate in a project and must be transferred from a project to a space\. The third class includes asset types that you can add to a space only as a dependency of another asset\. These asset types do not display in the **Assets** tab in the UI\. For more information, see: <!-- <ul> --> * [Asset types that you can directly add to a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html?context=cdpaas&locale=en#add_directly) * [Asset types that are created in projects and can be transferred into a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html?context=cdpaas&locale=en#add_transfer) * [Asset types that can be added to a space only as a dependency](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html?context=cdpaas&locale=en#add_dependency) <!-- </ul> --> For more information about working with space assets, see: <!-- <ul> --> * [Accessing asset details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-access-detailed-info.html) <!-- </ul> --> ## Asset types that you can directly add to a space ## <!-- <ul> --> * Connection * Data asset (from a connection or an uploaded file) * Model <!-- </ul> --> For more information, see: <!-- <ul> --> * For data assets and connections: [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html) * For models: [Importing models into a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html) <!-- </ul> --> ## Asset types that are created in projects and can be transferred into a space ## <!-- <ul> --> * Connection * Data Refinery flow * Environment * Function * Job * Model * Script <!-- </ul> --> If your asset is located in a standard Watson Studio project, you can transfer the asset to the deployment space by promoting it\. For more information, see [Promoting assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-promote-assets.html)\. Alternatively, you can export the project and then import it into the deployment space\. For more information, see: <!-- <ul> --> * [Exporting a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/export-project.html) * [Importing spaces and projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-import-to-space.html) <!-- </ul> --> If you export the whole project, any matching custom environments are exported as well\. 
## Asset types that can be added to a space only as a dependency ## <!-- <ul> --> * Hardware Specification * Package Extension * Software Specification * Watson Machine Learning Experiment * Watson Machine Learning Model Definition <!-- </ul> --> ## Learn more ## <!-- <ul> --> * [Deploying assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) * [Training and deploying machine learning models in notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html) <!-- </ul> --> **Parent topic:**[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html) <!-- </article "role="article" "> -->
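For models that were created outside of watsonx, one way to add them to a space programmatically is with the Watson Machine Learning Python client. The following is a minimal sketch, assuming the `ibm_watson_machine_learning` and `scikit-learn` packages, credentials and a space ID that you supply, and an assumed software specification name and model type string that you would replace with values that match your framework (list the available ones with `client.software_specifications.list()`).

    from ibm_watson_machine_learning import APIClient
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    # Train a small example model so that the sketch is self-contained.
    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=200).fit(X, y)

    # A minimal sketch; substitute your own credentials and space ID.
    client = APIClient({
        "url": "https://us-south.ml.cloud.ibm.com",
        "apikey": "<your IBM Cloud API key>",
    })
    client.set.default_space("<your space ID>")

    # The software specification name and model type are assumptions; pick ones
    # that match your framework from client.software_specifications.list().
    sw_spec_id = client.software_specifications.get_id_by_name("runtime-23.1-py3.10")
    meta_props = {
        client.repository.ModelMetaNames.NAME: "My external model",
        client.repository.ModelMetaNames.TYPE: "scikit-learn_1.1",
        client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_id,
    }
    stored_model = client.repository.store_model(model=model, meta_props=meta_props)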
D8BD7C30F776F7218860187F535C6B72D1A8DC74
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html?context=cdpaas&locale=en
Adding data assets to a deployment space
Adding data assets to a deployment space Learn about various ways of adding and promoting data assets to a space and data types that are used in deployments. Data can be: * A data file such as a .csv file * A connection to data that is located in a repository such as a database. * Connected data that is located in a storage bucket. For more information, see [Using data from the Cloud Object Storage service](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html?context=cdpaas&locale=encos-data). Notes: * For definitions of data-related terms, refer to [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html). You can add data to a space in one of these ways: * [Add data and connections to space by using UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html?context=cdpaas&locale=enadd-directly) * [Promote a data source, such as a file or a connection from an associated project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-promote-assets.html) * [Save a data asset to a space programmatically](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html?context=cdpaas&locale=enadd-programmatically) * [Import a space or a project, including data assets, into an existing space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-import-to-space.html). Data added to a space is managed in a similar way to data added to a Watson Studio project. For example: * Adding data to a space creates a new copy of the asset and its attachments within the space, maintaining a reference back to the project asset. If an asset such as a data connection requires access credentials, they persist and are the same whether you are accessing the data from a project or from a space. * Just like with data connection in a project, you can edit data connection details from the space. * Data assets are stored in a space in the same way that they are stored in a project. They use the same file structure for the space as the structure used for the project. Adding data and connections to space by using UI To add data or connections to space by using UI: 1. From the Assets tab of your deployment space, click Import assets. 2. Choose between adding a connected data asset, local file, or connection to a data source: * If you want to add a connected data asset, select Connected data. Choose a connection and click Import. * If you want to add a local file, select Local file > Data asset. Upload your file and click Done. * If you want to add a connection to a data source, select Data access > Connection. Choose a connection and click Import. The data asset displays in the space and is available for use as an input data source in a deployment job. Note:Some types of connections allow for using your personal platform credentials. If you add a connection or connected data that uses your personal platform credentials, tick the Use my platform login credentials checkbox. Adding data to space programmatically If you are using APIs to create, update, or delete Watson Machine Learning assets, make sure that you are using only Watson Machine Learning [API calls](https://cloud.ibm.com/apidocs/machine-learning). 
For an example of how to add assets programmatically, refer to this sample notebook: [Use SPSS and batch deployment with Db2 to predict customer churn](https://github.com/IBM/watson-machine-learning-samples/blob/df8e5122a521638cb37245254fe35d3a18cd3f59/cloud/notebooks/python_sdk/deployments/spss/Use%20SPSS%20and%20batch%20deployment%20with%20DB2%20to%20predict%20customer%20churn.ipynb) Data source reference types in Watson Machine Learning Data source reference types are used in Watson Machine Learning requests to represent input data and results locations. Use data_asset and connection_asset for these types of data sources: * Cloud Object Storage * Db2 * Database data Notes: * For Decision Optimization, the reference type is url. Example data_asset payload {"input_data_references": [{ "type": "data_asset", "connection": { }, "location": { "href": "/v2/assets/<asset_id>?space_id=<space_id>" } }] } Example connection_asset payload "input_data_references": [{ "type": "connection_asset", "connection": { "id": "<connection_guid>" }, "location": { "bucket": "<bucket_name>", "file_name": "<directory_name>/<file_name>" } <other wdp-properties supported by runtimes> }] For more information, see: * Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) Using data from the Cloud Object Storage service The Cloud Object Storage service can be used with deployment jobs through a connected data asset or a connection asset. To use data from the Cloud Object Storage service: 1. Create a connection to IBM Cloud Object Storage by adding a Connection to your project or space and selecting Cloud Object Storage (infrastructure) or Cloud Object Storage as the connector. Provide the secret key, access key, and login URL. Note:When you are creating a connection to Cloud Object Storage or Cloud Object Storage (Infrastructure), you must specify both access_key and secret_key. If access_key and secret_key are not specified, downloading the data from that connection doesn't work in a batch deployment job. For reference, see [IBM Cloud Object Storage connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) and [IBM Cloud Object Storage (infrastructure) connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html). 2. Add input and output files to the deployment space as connected data by using the Cloud Object Storage connection that you created. Parent topic:[Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html)
# Adding data assets to a deployment space # Learn about various ways of adding and promoting data assets to a space and data types that are used in deployments\. Data can be: <!-- <ul> --> * A data file such as a \.csv file * A connection to data that is located in a repository such as a database\. * Connected data that is located in a storage bucket\. For more information, see [Using data from the Cloud Object Storage service](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html?context=cdpaas&locale=en#cos-data)\. <!-- </ul> --> **Notes:** <!-- <ul> --> * For definitions of data\-related terms, refer to [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html)\. <!-- </ul> --> You can add data to a space in one of these ways: <!-- <ul> --> * [Add data and connections to space by using UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html?context=cdpaas&locale=en#add-directly) * [Promote a data source, such as a file or a connection from an associated project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-promote-assets.html) * [Save a data asset to a space programmatically](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html?context=cdpaas&locale=en#add-programmatically) * [Import a space or a project, including data assets, into an existing space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-import-to-space.html)\. <!-- </ul> --> Data added to a space is managed in a similar way to data added to a Watson Studio project\. For example: <!-- <ul> --> * Adding data to a space creates a new copy of the asset and its attachments within the space, maintaining a reference back to the project asset\. If an asset such as a data connection requires access credentials, they persist and are the same whether you are accessing the data from a project or from a space\. * Just like with data connection in a project, you can edit data connection details from the space\. * Data assets are stored in a space in the same way that they are stored in a project\. They use the same file structure for the space as the structure used for the project\. <!-- </ul> --> ## Adding data and connections to space by using UI ## To add data or connections to space by using UI: <!-- <ol> --> 1. From the **Assets** tab of your deployment space, click **Import assets**\. 2. Choose between adding a connected data asset, local file, or connection to a data source: <!-- <ul> --> * If you want to add a connected data asset, select **Connected data**. Choose a connection and click **Import**. * If you want to add a local file, select **Local file > Data asset**. Upload your file and click **Done**. * If you want to add a connection to a data source, select **Data access > Connection**. Choose a connection and click **Import**. <!-- </ul> --> <!-- </ol> --> The data asset displays in the space and is available for use as an input data source in a deployment job\. Note:Some types of connections allow for using your personal platform credentials\. If you add a connection or connected data that uses your personal platform credentials, tick the **Use my platform login credentials** checkbox\. 
## Adding data to space programmatically ## If you are using APIs to create, update, or delete Watson Machine Learning assets, make sure that you are using only Watson Machine Learning [API calls](https://cloud.ibm.com/apidocs/machine-learning)\. For an example of how to add assets programmatically, refer to this sample notebook: [Use SPSS and batch deployment with Db2 to predict customer churn](https://github.com/IBM/watson-machine-learning-samples/blob/df8e5122a521638cb37245254fe35d3a18cd3f59/cloud/notebooks/python_sdk/deployments/spss/Use%20SPSS%20and%20batch%20deployment%20with%20DB2%20to%20predict%20customer%20churn.ipynb) ### Data source reference types in Watson Machine Learning ### Data source reference types are used in Watson Machine Learning requests to represent input data and results locations\. Use `data_asset` and `connection_asset` for these types of data sources: <!-- <ul> --> * Cloud Object Storage * Db2 * Database data <!-- </ul> --> **Notes:** <!-- <ul> --> * For Decision Optimization, the reference type is `url`\. <!-- </ul> --> #### Example data\_asset payload #### {"input_data_references": [{ "type": "data_asset", "connection": { }, "location": { "href": "/v2/assets/<asset_id>?space_id=<space_id>" } }] } #### Example connection\_asset payload #### "input_data_references": [{ "type": "connection_asset", "connection": { "id": "<connection_guid>" }, "location": { "bucket": "<bucket_name>", "file_name": "<directory_name>/<file_name>" } <other wdp-properties supported by runtimes> }] For more information, see: <!-- <ul> --> * Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) <!-- </ul> --> ## Using data from the Cloud Object Storage service ## The Cloud Object Storage service can be used with deployment jobs through a connected data asset or a connection asset\. To use data from the Cloud Object Storage service: <!-- <ol> --> 1. Create a connection to IBM Cloud Object Storage by adding a **Connection** to your project or space and selecting Cloud Object Storage (infrastructure) or Cloud Object Storage as the connector\. Provide the secret key, access key, and login URL\. Note:When you are creating a connection to Cloud Object Storage or Cloud Object Storage (Infrastructure), you must specify both `access_key` and `secret_key`. If `access_key` and `secret_key` are not specified, downloading the data from that connection doesn't work in a batch deployment job. For reference, see [IBM Cloud Object Storage connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) and [IBM Cloud Object Storage (infrastructure) connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html). 2. Add input and output files to the deployment space as connected data by using the Cloud Object Storage connection that you created\. <!-- </ol> --> **Parent topic:**[Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html) <!-- </article "role="article" "> -->
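To complement the raw payloads above, the following is a minimal sketch of submitting a batch deployment job whose input and output are `connection_asset` references, by using the Python client. It assumes the `ibm_watson_machine_learning` package and that you supply your own credentials, space ID, batch deployment ID, Cloud Object Storage connection ID, and bucket and file names; `ScoringMetaNames.INPUT_DATA_REFERENCES` and `ScoringMetaNames.OUTPUT_DATA_REFERENCE` are how the client exposes these references.

    from ibm_watson_machine_learning import APIClient

    # A minimal sketch; substitute your own credentials, IDs, and locations.
    client = APIClient({
        "url": "https://us-south.ml.cloud.ibm.com",
        "apikey": "<your IBM Cloud API key>",
    })
    client.set.default_space("<your space ID>")

    job_meta = {
        client.deployments.ScoringMetaNames.INPUT_DATA_REFERENCES: [{
            "type": "connection_asset",
            "connection": {"id": "<connection_guid>"},
            "location": {
                "bucket": "<bucket_name>",
                "file_name": "<directory_name>/<file_name>",
            },
        }],
        client.deployments.ScoringMetaNames.OUTPUT_DATA_REFERENCE: {
            "type": "connection_asset",
            "connection": {"id": "<connection_guid>"},
            "location": {
                "bucket": "<bucket_name>",
                "file_name": "results/output.csv",
            },
        },
    }

    job = client.deployments.create_job("<deployment_id>", meta_props=job_meta)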
451244D4E0CD8A3E96CD15FFAF0F3BDA526CCED2
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-create.html?context=cdpaas&locale=en
Creating deployment spaces
Creating deployment spaces Create a deployment space to store your assets, deploy assets, and manage your deployments. Required permissions: All users in your IBM Cloud account with the Editor IAM platform access role for all IAM-enabled services or for Cloud Pak for Data can create deployment spaces. For more information, see [IAM Platform access roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.htmlplatform). A deployment space is not associated with a project. You can publish assets from multiple projects to a space. For example, you might have a test space for evaluating deployments, and a production space for deployments that you use in business applications. Follow these steps to create a deployment space: 1. From the navigation menu, select Deployments > New deployment space. Enter a name for your deployment space. 2. Optional: Add a description and tags. 3. Select a storage service to store your space assets. * If you have a [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html) repository that is associated with your IBM Cloud account, choose a repository from the list to store your space assets. * If you do not have a [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html) repository that is associated with your IBM Cloud account, you are prompted to create one. 4. Optional: If you want to deploy assets from your space, select a machine learning service instance to associate with your deployment space. To associate a machine learning instance with a space, you must: * Be a space administrator. * Have admin access to the machine learning service instance that you want to associate with the space. For more information, see [Creating a Watson Machine Learning service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-instance.html). Tip: If you want to evaluate assets in the space, switch to the Manage tab and associate a Watson OpenScale instance. 5. Optional: Assign the space to a deployment stage. Deployment stages are used for [MLOps](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/modelops-overview.html), to manage access for assets in various stages of the AI lifecycle. They are also used in [governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-overview.html), for tracking assets. Choose from: * Development for assets under development. Assets that are tracked for governance are displayed in the Develop stage of their associated use case. * Testing for assets that are being validated. Assets that are tracked for governance are displayed in the Validate stage of their associated use case. * Production for assets in production. Assets that are tracked for governance are displayed in the Operate stage of their associated use case. 6. Optional: Upload space assets, such as an [exported project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/export-project.html) or an [exported space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-export.html). If the imported space is encrypted, you must enter the password. Tip: If you get an import error, clear your browser cookies and then try again. 7. Click Create. Viewing and managing deployment spaces * To view all deployment spaces that you can access, click Deployments on the navigation menu. 
* To view any of the details about the space after you create it, such as the associated service instance or storage ID, open your deployment space and then click the Manage tab. * Your space assets are stored in a Cloud Object Storage repository. You can access this repository from IBM Cloud. To find the bucket ID, open your deployment space, and click the Manage tab. Learn more To learn more about adding assets to a space and managing them, see [Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html). To learn more about creating a space and accessing its details programmatically, see [Notebook on managing spaces](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e5e78be14e2260ccb4bcf8181d0967e3). To learn more about handling spaces programmatically, see [Python client](https://ibm.github.io/watson-machine-learning-sdk/) or [REST API](https://cloud.ibm.com/apidocs/machine-learning). Parent topic:[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html)
# Creating deployment spaces # Create a deployment space to store your assets, deploy assets, and manage your deployments\. **Required permissions:** All users in your IBM Cloud account with the Editor IAM platform access role for all IAM\-enabled services or for Cloud Pak for Data can create deployment spaces\. For more information, see [IAM Platform access roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html#platform)\. A deployment space is not associated with a project\. You can publish assets from multiple projects to a space\. For example, you might have a test space for evaluating deployments, and a production space for deployments that you use in business applications\. Follow these steps to create a deployment space: <!-- <ol> --> 1. From the navigation menu, select **Deployments** > **New deployment space**\. Enter a name for your deployment space\. 2. Optional: Add a description and tags\. 3. Select a storage service to store your space assets\. <!-- <ul> --> * If you have a [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html) repository that is associated with your IBM Cloud account, choose a repository from the list to store your space assets. * If you do not have a [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html) repository that is associated with your IBM Cloud account, you are prompted to create one. <!-- </ul> --> 4. Optional: If you want to deploy assets from your space, select a machine learning service instance to associate with your deployment space\. To associate a machine learning instance with a space, you must: <!-- <ul> --> * Be a space administrator. * Have admin access to the machine learning service instance that you want to associate with the space. For more information, see [Creating a Watson Machine Learning service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-instance.html). Tip: If you want to evaluate assets in the space, switch to the **Manage** tab and associate a Watson OpenScale instance. <!-- </ul> --> 5. Optional: Assign the space to a deployment stage\. Deployment stages are used for [MLOps](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/modelops-overview.html), to manage access for assets in various stages of the AI lifecycle\. They are also used in [governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-overview.html), for tracking assets\. Choose from: <!-- <ul> --> * **Development** for assets under development. Assets that are tracked for governance are displayed in the *Develop* stage of their associated use case. * **Testing** for assets that are being validated. Assets that are tracked for governance are displayed in the *Validate* stage of their associated use case. * **Production** for assets in production. Assets that are tracked for governance are displayed in the *Operate* stage of their associated use case. <!-- </ul> --> 6. Optional: Upload space assets, such as an [exported project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/export-project.html) or an [exported space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-export.html)\. If the imported space is encrypted, you must enter the password\. Tip: If you get an import error, clear your browser cookies and then try again. 7. Click **Create**\. 
<!-- </ol> --> ## Viewing and managing deployment spaces ## <!-- <ul> --> * To view all deployment spaces that you can access, click **Deployments** on the navigation menu\. * To view any of the details about the space after you create it, such as the associated service instance or storage ID, open your deployment space and then click the **Manage** tab\. * Your space assets are stored in a Cloud Object Storage repository\. You can access this repository from IBM Cloud\. To find the bucket ID, open your deployment space, and click the **Manage** tab\. <!-- </ul> --> ## Learn more ## To learn more about adding assets to a space and managing them, see [Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html)\. To learn more about creating a space and accessing its details programmatically, see [Notebook on managing spaces](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e5e78be14e2260ccb4bcf8181d0967e3)\. To learn more about handling spaces programmatically, see [Python client](https://ibm.github.io/watson-machine-learning-sdk/) or [REST API](https://cloud.ibm.com/apidocs/machine-learning)\. **Parent topic:**[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html) <!-- </article "role="article" "> -->
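As a minimal sketch of the programmatic route that the Learn more links describe, the Python client can create a space with `client.spaces.store`. This assumes the `ibm_watson_machine_learning` package, an API key, and the CRNs of your Cloud Object Storage and Watson Machine Learning service instances, which you supply.

    from ibm_watson_machine_learning import APIClient

    # A minimal sketch; substitute your own API key and service instance CRNs.
    client = APIClient({
        "url": "https://us-south.ml.cloud.ibm.com",
        "apikey": "<your IBM Cloud API key>",
    })

    space_meta = {
        client.spaces.ConfigurationMetaNames.NAME: "my-test-space",
        client.spaces.ConfigurationMetaNames.DESCRIPTION: "Space for evaluating deployments",
        client.spaces.ConfigurationMetaNames.STORAGE: {
            "type": "bmcos_object_storage",
            "resource_crn": "<Cloud Object Storage instance CRN>",
        },
        client.spaces.ConfigurationMetaNames.COMPUTE: {
            "name": "<Watson Machine Learning instance name>",
            "crn": "<Watson Machine Learning instance CRN>",
        },
    }

    space_details = client.spaces.store(meta_props=space_meta)
    client.spaces.list()   # the new space appears in the list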
C11E8DEEDBABE64F4789061D10E55AEA415FD51E
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-delete.html?context=cdpaas&locale=en
Deleting deployment spaces
Deleting deployment spaces Delete existing deployment spaces that you don't require anymore. Important:Before you delete a deployment space, you must delete all the deployments that are associated with it. Only a project admin can delete a deployment space. For more information, see [Deployment space collaborator roles and permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/collaborator-permissions-wml.html). To remove a deployment space, follow these steps: 1. From the navigation menu, click Deployments. 2. In the deployments list, click the Spaces tab and find the deployment space that you want to delete. 3. Hover over the deployment space, select the menu (![Menu icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/open-close-icon.png)) icon, and click Delete. 4. In the confirmation dialog box, click Delete. Learn more To learn more about how to clean up a deployment space and delete it programmatically, refer to: * [Notebook on managing machine learning artifacts](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e5e78be14e2260ccb4bcf8181d093d7b) * [Notebook on managing spaces](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e5e78be14e2260ccb4bcf8181d0967e3) Parent topic:[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html)
# Deleting deployment spaces # Delete existing deployment spaces that you don't require anymore\. Important:Before you delete a deployment space, you must delete all the deployments that are associated with it\. Only a project admin can delete a deployment space\. For more information, see [Deployment space collaborator roles and permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/collaborator-permissions-wml.html)\. To remove a deployment space, follow these steps: <!-- <ol> --> 1. From the navigation menu, click **Deployments**\. 2. In the deployments list, click the **Spaces** tab and find the deployment space that you want to delete\. 3. Hover over the deployment space, select the menu (![Menu icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/open-close-icon.png)) icon, and click **Delete**\. 4. In the confirmation dialog box, click **Delete**\. <!-- </ol> --> ## Learn more ## To learn more about how to clean up a deployment space and delete it programmatically, refer to: <!-- <ul> --> * [Notebook on managing machine learning artifacts](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e5e78be14e2260ccb4bcf8181d093d7b) * [Notebook on managing spaces](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e5e78be14e2260ccb4bcf8181d0967e3) <!-- </ul> --> **Parent topic:**[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html) <!-- </article "role="article" "> -->
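As a rough sketch of the programmatic cleanup that the notebooks above walk through, the Python client can delete the deployments that are associated with a space and then delete the space itself. It assumes the `ibm_watson_machine_learning` package, plus credentials and a space ID that you supply.

    from ibm_watson_machine_learning import APIClient

    # A minimal sketch; substitute your own credentials and the ID of the space
    # that you want to remove.
    client = APIClient({
        "url": "https://us-south.ml.cloud.ibm.com",
        "apikey": "<your IBM Cloud API key>",
    })
    space_id = "<your space ID>"
    client.set.default_space(space_id)

    # Delete every deployment in the space first.
    for deployment in client.deployments.get_details()["resources"]:
        client.deployments.delete(deployment["metadata"]["id"])

    # Then delete the space itself.
    client.spaces.delete(space_id)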
85E9CAC1F581E61092CFF1F6BE38570EE734C115
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-export.html?context=cdpaas&locale=en
Exporting space assets from deployment spaces
Exporting space assets from deployment spaces You can export assets from a deployment space so that you can share the space with others or reuse the assets in another space. For a list of assets that you can export from space, refer to [Assets in a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html). Exporting space assets from the UI Important:To avoid problems with importing the space, export all dependencies together with the space. For more information, see [Exporting a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/export-project.html). To export space assets from the UI: 1. From your deployment space, click the import and export space (![Import or Export space icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/import-export-icon.png)) icon. From the list, select Export space. 2. Click New export file. Specify a file name and an optional description. Tip: To encrypt sensitive data in the exported archive, type the password in the Password field. 3. Select the assets that you want to export with the space. 4. Click Create to create the export file. 5. After the space is exported, click the download (![Download icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/download-icon.png)) to save the file. You can reuse this space by choosing Create a space from a file when you create a new space. Learn more * [Importing spaces and projects into existing deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-import-to-space.html). Parent topic:[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html)
# Exporting space assets from deployment spaces # You can export assets from a deployment space so that you can share the space with others or reuse the assets in another space\. For a list of assets that you can export from space, refer to [Assets in a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html)\. ## Exporting space assets from the UI ## Important:To avoid problems with importing the space, export all dependencies together with the space\. For more information, see [Exporting a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/export-project.html)\. To export space assets from the UI: <!-- <ol> --> 1. From your deployment space, click the import and export space (![Import or Export space icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/import-export-icon.png)) icon\. From the list, select **Export space**\. 2. Click **New export file**\. Specify a file name and an optional description\. **Tip:** To encrypt sensitive data in the exported archive, type the password in the **Password** field. 3. Select the assets that you want to export with the space\. 4. Click **Create** to create the export file\. 5. After the space is exported, click the download (![Download icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/download-icon.png)) to save the file\. <!-- </ol> --> You can reuse this space by choosing **Create a space from a file** when you create a new space\. ## Learn more ## <!-- <ul> --> * [Importing spaces and projects into existing deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-import-to-space.html)\. <!-- </ul> --> **Parent topic:**[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html) <!-- </article "role="article" "> -->
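Because the export is downloaded as a compressed archive, you might want to check its contents locally before you share it or re-import it. The following sketch uses only the Python standard library and assumes the archive is a standard .zip file with a placeholder file name.

```python
# Minimal sketch: list the contents of a downloaded space export archive.
# "my-space-export.zip" is a placeholder name for the file saved from the UI.
import zipfile

with zipfile.ZipFile("my-space-export.zip") as archive:
    for name in archive.namelist():
        print(name)  # asset content and metadata files that are packaged in the export
```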
A11374B50B49477362FA00BBB32A277776F7E8E2
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-import-to-space.html?context=cdpaas&locale=en
Importing space and project assets into deployment spaces
Importing space and project assets into deployment spaces You can import assets that you export from a deployment space or a project (either a project export or a Git archive) into a new or existing deployment space. This way, you can add assets or update existing assets (for example, replacing a model with its newer version) to use for your deployments. You can import a space or a project export file to [a new deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-import-to-space.html?context=cdpaas&locale=en#import-to-new) or an [existing deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-import-to-space.html?context=cdpaas&locale=en#import-to-existing) to populate the space with assets. Tip: The export file can come from a Git-enabled project or a Watson Studio project. To create the file to export, create a compressed file for the project that contains the assets to import. Then, follow the steps for importing the compressed file into a new or existing space. Importing a space or a project to a new deployment space To import a space or a project when you are creating a new deployment space: 1. Click New deployment space. 2. Enter the details for the space. For more information, see [Creating deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-create.html). 3. In the Upload space assets section, upload the exported compressed file that contains data assets and click Create. The assets from the exported file are added as space assets. Importing a space or a project to an existing deployment space To import a space or a project into an existing space: 1. From your deployment space, click the import and export space (![Import or Export space icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/import-export-icon.png)) icon. From the list, select Import space. 2. Add your compressed file that contains assets from a Watson Studio project or deployment space. Tip: If the space that you are importing is encrypted, enter the password in the Password field. 3. After your asset is imported, click Done. The assets from the exported file are added as space assets. Resolving issues with asset duplication The importing mechanism compares assets that exist in your space with the assets that are being imported. If it encounters an asset with the same name and of the same type: * If the asset type supports revisions, the importing mechanism creates a new revision of the existing asset and fixes the new revision. * If the asset type does not support revisions, the importing mechanism fixes the existing asset. This table describes how import works to resolve cases where assets are duplicated between the import file and the existing space. Scenarios for importing duplicated assets Your space File being imported Result No assets with matching name or type One or more assets with matching name or type All assets are imported. If multiple assets in the import file have the same name, they are imported as duplicate assets in the target space. One asset with matching name or type One asset with matching name or type Matching asset is updated with new version. Other assets are imported normally. One asset with matching name or type More than one asset with matching name or type The first matching asset that is processed is imported as a new version for the existing asset in the space, extra assets with matching name are created as duplicates in the space. 
Other assets are imported normally. Multiple assets with matching name or type One or more assets with matching name or type Assets with matching names fail to import. Other assets are imported normally. Warning: Multiple assets of the same name in an existing space or multiple assets of the same name in an import file are not fully supported scenarios. The import works as described for the scenarios in the table, but you cannot use versioning capabilities specific to the import. Existing deployments get updated differently, depending on deployment type: * If a batch deployment was created by using the previous version of the asset, the next invocation of the batch deployment job will refer to the updated state of the asset. * If an online deployment was created by using the previous version of the asset, the next "restart" of the deployment refers to the updated state of the asset. Learn more * To learn about adding other types of assets to a space, refer to [Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html). * To learn about exporting assets from a deployment space, refer to [Exporting space assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-export.html). Parent topic:[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html)
# Importing space and project assets into deployment spaces # You can import assets that you export from a deployment space or a project (either a project export or a Git archive) into a new or existing deployment space\. This way, you can add assets or update existing assets (for example, replacing a model with its newer version) to use for your deployments\. You can import a space or a project export file to [a new deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-import-to-space.html?context=cdpaas&locale=en#import-to-new) or an [existing deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-import-to-space.html?context=cdpaas&locale=en#import-to-existing) to populate the space with assets\. Tip: The export file can come from a Git\-enabled project or a Watson Studio project\. To create the file to export, create a compressed file for the project that contains the assets to import\. Then, follow the steps for importing the compressed file into a new or existing space\. ## Importing a space or a project to a new deployment space ## To import a space or a project when you are creating a new deployment space: <!-- <ol> --> 1. Click **New deployment space**\. 2. Enter the details for the space\. For more information, see [Creating deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-create.html)\. 3. In the *Upload space assets* section, upload the exported compressed file that contains data assets and click **Create**\. <!-- </ol> --> The assets from the exported file are added as space assets\. ## Importing a space or a project to an existing deployment space ## To import a space or a project into an existing space: <!-- <ol> --> 1. From your deployment space, click the import and export space (![Import or Export space icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/import-export-icon.png)) icon\. From the list, select **Import space**\. 2. Add your compressed file that contains assets from a Watson Studio project or deployment space\. **Tip:** If the space that you are importing is encrypted, enter the password in the **Password** field. 3. After your asset is imported, click **Done**\. <!-- </ol> --> The assets from the exported file are added as space assets\. ## Resolving issues with asset duplication ## The importing mechanism compares assets that exist in your space with the assets that are being imported\. If it encounters an asset with the same name and of the same type: <!-- <ul> --> * If the asset type supports revisions, the importing mechanism creates a new revision of the existing asset and fixes the new revision\. * If the asset type does not support revisions, the importing mechanism fixes the existing asset\. <!-- </ul> --> This table describes how import works to resolve cases where assets are duplicated between the import file and the existing space\. <!-- <table> --> Scenarios for importing duplicated assets | Your space | File being imported | Result | | ------------------------------------------ | ---------------------------------------------- | -------------------------------------------------------------------------------- | | No assets with matching name or type | One or more assets with matching name or type | All assets are imported\. 
If multiple assets in the import file have the same name, they are imported as duplicate assets in the target space\. | | One asset with matching name or type | One asset with matching name or type | Matching asset is updated with new version\. Other assets are imported normally\. | | One asset with matching name or type | More than one asset with matching name or type | The first matching asset that is processed is imported as a new version for the existing asset in the space, extra assets with matching name are created as duplicates in the space\. Other assets are imported normally\. | | Multiple assets with matching name or type | One or more assets with matching name or type | Assets with matching names fail to import\. Other assets are imported normally\. | <!-- </table ""> --> Warning: Multiple assets of the same name in an existing space or multiple assets of the same name in an import file are not fully supported scenarios\. The import works as described for the scenarios in the table, but you cannot use versioning capabilities specific to the import\. Existing deployments get updated differently, depending on deployment type: <!-- <ul> --> * If a batch deployment was created by using the previous version of the asset, the next invocation of the batch deployment job will refer to the updated state of the asset\. * If an online deployment was created by using the previous version of the asset, the next "restart" of the deployment refers to the updated state of the asset\. <!-- </ul> --> ## Learn more ## <!-- <ul> --> * To learn about adding other types of assets to a space, refer to [Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html)\. * To learn about exporting assets from a deployment space, refer to [Exporting space assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-export.html)\. <!-- </ul> --> **Parent topic:**[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html) <!-- </article "role="article" "> -->
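The resolution rules in the table can be summarized as a simple decision per incoming asset name and type. The following sketch is illustrative only and is not the product's implementation; it just restates the table as code.

```python
# Illustrative restatement of the duplicate-resolution table for one asset name/type.
def resolve_import(matches_in_space: int, matches_in_file: int) -> str:
    if matches_in_space == 0:
        return "import all; same-name assets in the file become duplicates in the space"
    if matches_in_space == 1:
        if matches_in_file == 1:
            return "update the existing asset with a new version"
        return "first match becomes a new version; remaining matches become duplicates"
    return "assets with matching names fail to import"

print(resolve_import(matches_in_space=1, matches_in_file=3))
```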
4DD17198B8E7413469C1837FFDBAF109B307078C
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-promote-assets.html?context=cdpaas&locale=en
Promoting assets to a deployment space
Promoting assets to a deployment space Learn about how to promote assets from a project to a deployment space and the requirements for promoting specific asset types. Promoting assets to your deployment space You can promote assets from your project to a deployment space. For a list of assets that can be promoted from a project to a deployment space, refer to [Adding assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html). When you are promoting assets, you can: * Choose an existing space or create a new one. * Add tags to help identify the promoted asset. * Choose dependent assets to promote them at the same time. Follow these steps to promote your assets to your deployment space: 1. From your project, go to the Assets tab. 2. Select the Options (![Options icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/actions.png)) icon and click Promote to space. Tip: If the asset that you want to promote is a model, you can also click the model name to open the model details page, and then click Promote to deployment space. Notes: * Promoting assets and their dependencies from a project to a space by using the Watson Studio user interface is the recommended method to guarantee that the promotion flow results in a complete asset definition. For example, relying on the [Watson Data API](https://cloud.ibm.com/apidocs/watson-data-api-cpd) to manage the promotion flow of an asset, together with its dependencies, can result in the promoted asset from being inaccessible from the space. * Promoting assets from default Git-based projects is not supported. * Depending on your configuration and the type of asset that you promote, large asset attachments, typically more than 2 GB, can cause the promotion action to time out. For more information, see: * [Promoting connections and connected data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-promote-assets.html?context=cdpaas&locale=enpromo-conn) * [Promoting models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-promote-assets.html?context=cdpaas&locale=enpromo-model) * [Promoting notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-promote-assets.html?context=cdpaas&locale=enpromo-nbs) Promoting connections and connected data When you promote a connection that uses personal credentials or Cloud Pak for Data authentication to a deployment space, the credentials are not promoted. You must provide the credentials information again or allow Cloud Pak for Data authentication. Because Storage Volume connections support only personal credentials, to be able to use this type of asset after it is promoted to a space, you must provide the credentials again. Some types of connections allow for using your personal platform credentials. If you promote a connection or connected data that uses your personal platform credentials, tick the Use my platform login credentials checkbox. Although you can promote any kind of data connection to a space, where you can use the connection is governed by factors such as model and deployment type. For example, you can access any of the connected data by using a script. However, in batch deployments you are limited to particular types of data, as listed in [Creating a batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html). 
Promoting models When you promote a model to a space: * Components that are required for a successful deployment, such as a custom software specification, model definition, or pipeline definition are automatically promoted as well. * The data assets that were used to train the model are not promoted with it. Information on data assets used to train the model is included in model metadata. Promoting notebooks and scripts Tip: If you are using the Notebook editor, you must save a version of the notebook before you can promote it. * If you created a job for a notebook and you selected Log and updated version as the job run result output, the notebook cannot be promoted to a deployment space. * If you are working in a notebook that you created before IBM Cloud Pak for Data 4.0, and you want to promote this notebook to a deployment space, follow these steps to enable promoting it: 1. Save a new version of the notebook. 2. Select the newly created version. 3. Select either Log and notebook or Log only as the job run result output under Advanced configuration. 4. Run your job again. Now you can promote it manually from the project Assets page or programmatically by using CPDCTL commands. * If you want to promote a notebook programmatically, use CPDCTL commands to move the notebook or script to a deployment space. To learn how to use CPDCTL to move notebooks or scripts to spaces, refer to [CPDCTL code samples](https://github.com/IBM/cpdctl/tree/master/samples). For the reference guide, refer to [CPDCTL command reference](https://github.com/IBM/cpdctl/blob/master/README_command_reference.mdnotebook_promote). Parent topic:[Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html)
# Promoting assets to a deployment space # Learn about how to promote assets from a project to a deployment space and the requirements for promoting specific asset types\. ## Promoting assets to your deployment space ## You can promote assets from your project to a deployment space\. For a list of assets that can be promoted from a project to a deployment space, refer to [Adding assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html)\. When you are promoting assets, you can: <!-- <ul> --> * Choose an existing space or create a new one\. * Add tags to help identify the promoted asset\. * Choose dependent assets to promote them at the same time\. <!-- </ul> --> Follow these steps to promote your assets to your deployment space: <!-- <ol> --> 1. From your project, go to the **Assets** tab\. 2. Select the **Options** (![Options icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/actions.png)) icon and click **Promote to space**\. <!-- </ol> --> Tip: If the asset that you want to promote is a model, you can also click the model name to open the model details page, and then click Promote to deployment space\. **Notes:** <!-- <ul> --> * Promoting assets and their dependencies from a project to a space by using the Watson Studio user interface is the recommended method to guarantee that the promotion flow results in a complete asset definition\. For example, relying on the [Watson Data API](https://cloud.ibm.com/apidocs/watson-data-api-cpd) to manage the promotion flow of an asset, together with its dependencies, can result in the promoted asset from being inaccessible from the space\. * Promoting assets from default Git\-based projects is not supported\. * Depending on your configuration and the type of asset that you promote, large asset attachments, typically more than 2 GB, can cause the promotion action to time out\. <!-- </ul> --> For more information, see: <!-- <ul> --> * [Promoting connections and connected data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-promote-assets.html?context=cdpaas&locale=en#promo-conn) * [Promoting models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-promote-assets.html?context=cdpaas&locale=en#promo-model) * [Promoting notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-promote-assets.html?context=cdpaas&locale=en#promo-nbs) <!-- </ul> --> ## Promoting connections and connected data ## When you promote a connection that uses personal credentials or Cloud Pak for Data authentication to a deployment space, the credentials are not promoted\. You must provide the credentials information again or allow Cloud Pak for Data authentication\. Because Storage Volume connections support only personal credentials, to be able to use this type of asset after it is promoted to a space, you must provide the credentials again\. Some types of connections allow for using your personal platform credentials\. If you promote a connection or connected data that uses your personal platform credentials, tick the *Use my platform login credentials* checkbox\. Although you can promote any kind of data connection to a space, where you can use the connection is governed by factors such as model and deployment type\. For example, you can access any of the connected data by using a script\. 
However, in batch deployments you are limited to particular types of data, as listed in [Creating a batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html)\. ## Promoting models ## When you promote a model to a space: <!-- <ul> --> * Components that are required for a successful deployment, such as a custom software specification, model definition, or pipeline definition are automatically promoted as well\. * The data assets that were used to train the model are not promoted with it\. Information on data assets used to train the model is included in model metadata\. <!-- </ul> --> ## Promoting notebooks and scripts ## Tip: If you are using the Notebook editor, you must save a version of the notebook before you can promote it\. <!-- <ul> --> * If you created a job for a notebook and you selected **Log and updated version** as the job run result output, the notebook cannot be promoted to a deployment space\. * If you are working in a notebook that you created before IBM Cloud Pak for Data 4\.0, and you want to promote this notebook to a deployment space, follow these steps to enable promoting it: <!-- <ol> --> 1. Save a new version of the notebook. 2. Select the newly created version. 3. Select either **Log and notebook** or **Log only** as the job run result output under **Advanced configuration**. 4. Run your job again. Now you can promote it manually from the project **Assets** page or programmatically by using CPDCTL commands. <!-- </ol> --> <!-- </ul> --> <!-- <ul> --> * If you want to promote a notebook programmatically, use CPDCTL commands to move the notebook or script to a deployment space\. To learn how to use CPDCTL to move notebooks or scripts to spaces, refer to [CPDCTL code samples](https://github.com/IBM/cpdctl/tree/master/samples)\. For the reference guide, refer to [CPDCTL command reference](https://github.com/IBM/cpdctl/blob/master/README_command_reference.md#notebook_promote)\. <!-- </ul> --> **Parent topic:**[Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html) <!-- </article "role="article" "> -->
47CC4851C049D805F02BD2058CD5C2FFA157981C
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html?context=cdpaas&locale=en
Deployment spaces
Deployment spaces Deployment spaces contain deployable assets, deployments, deployment jobs, associated input and output data, and the associated environments. You can use spaces to deploy various assets and manage your deployments. Deployment spaces are not associated with projects. You can publish assets from multiple projects to a space, and you can deploy assets to more than one space. For example, you might have a test space for evaluating deployments, and a production space for deployments that you want to deploy in business applications. The deployments dashboard is an aggregate view of deployment activity available to you, across spaces. For details, refer to [Deployments dashboard](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/operator-view.html). When you open a space from the UI, you see these elements: ![Detailed information about a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/DeploymentSpace.svg) You can share a space with other people. When you add collaborators to a deployment space, you can specify which actions they can do by assigning them access levels. For details on space collaborator permissions, refer to [Deployment space collaborator roles and permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/collaborator-permissions-wml.html). Learn more * [Creating deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-create.html) * [Managing assets in a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html) * [Creating deployments from a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) * [Exporting space assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-export.html) * [Deleting deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-delete.html)
# Deployment spaces # Deployment spaces contain deployable assets, deployments, deployment jobs, associated input and output data, and the associated environments\. You can use spaces to deploy various assets and manage your deployments\. Deployment spaces are not associated with projects\. You can publish assets from multiple projects to a space, and you can deploy assets to more than one space\. For example, you might have a test space for evaluating deployments, and a production space for deployments that you want to deploy in business applications\. The deployments dashboard is an aggregate view of deployment activity available to you, across spaces\. For details, refer to [Deployments dashboard](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/operator-view.html)\. When you open a space from the UI, you see these elements: ![Detailed information about a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/DeploymentSpace.svg) You can share a space with other people\. When you add collaborators to a deployment space, you can specify which actions they can do by assigning them access levels\. For details on space collaborator permissions, refer to [Deployment space collaborator roles and permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/collaborator-permissions-wml.html)\. ## Learn more ## <!-- <ul> --> * [Creating deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-create.html) * [Managing assets in a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html) * [Creating deployments from a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) * [Exporting space assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-export.html) * [Deleting deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-delete.html) <!-- </ul> --> <!-- </article "role="article" "> -->
88A9F08917918D1D74C1C2CA702E999747EEB422
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-editor.html?context=cdpaas&locale=en
Jupyter Notebook editor
Jupyter Notebook editor The Jupyter Notebook editor is largely used for interactive, exploratory data analysis programming and data visualization. Only one person can edit a notebook at a time. All other users can access opened notebooks in view mode only, while they are locked. You can use the preinstalled open source libraries that come with the notebook runtime environments, add your own libraries, and benefit from the IBM libraries provided at no extra cost. When your notebooks are ready, you can create jobs to run the notebooks directly from the Jupyter Notebook editor. Your job configurations can use environment variables that are passed to the notebooks with different values when the notebooks run. Learn more * [Quick start: Analyze data in a Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html) * [Create notebooks in the Jupyter Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/creating-notebooks.html) * [Runtime environments for notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) * [Libraries and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/libraries.html) * [Code and run notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/code-run-notebooks.html) * [Schedule a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-nb-editor.html) * [Share and publish notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/share-notebooks.html) Parent topic:[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html)
# Jupyter Notebook editor # The Jupyter Notebook editor is largely used for interactive, exploratory data analysis programming and data visualization\. Only one person can edit a notebook at a time\. All other users can access opened notebooks in view mode only, while they are locked\. You can use the preinstalled open source libraries that come with the notebook runtime environments, add your own libraries, and benefit from the IBM libraries provided at no extra cost\. When your notebooks are ready, you can create jobs to run the notebooks directly from the Jupyter Notebook editor\. Your job configurations can use environment variables that are passed to the notebooks with different values when the notebooks run\. ## Learn more ## <!-- <ul> --> * [Quick start: Analyze data in a Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html) * [Create notebooks in the Jupyter Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/creating-notebooks.html) * [Runtime environments for notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) * [Libraries and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/libraries.html) * [Code and run notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/code-run-notebooks.html) * [Schedule a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-nb-editor.html) * [Share and publish notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/share-notebooks.html) <!-- </ul> --> **Parent topic:**[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html) <!-- </article "role="article" "> -->
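For example, a job configuration might pass an environment variable to a notebook, which the notebook then reads at run time. The following sketch uses only the Python standard library; the variable name is hypothetical and used for illustration only.

```python
# Minimal sketch: read an environment variable that a job passes to a notebook.
# "BATCH_DATE" is a hypothetical variable name; replace it with the variable in your job configuration.
import os

batch_date = os.environ.get("BATCH_DATE", "2023-01-01")  # default keeps interactive runs working
print(f"Processing data for {batch_date}")
```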
CF4254CE9E6D890CCAA2564DA3E9B57071ADE342
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=en
Compute resource options for the notebook editor in projects
Compute resource options for the notebook editor in projects When you run a notebook in the notebook editor in a project, you choose an environment template, which defines the compute resources for the runtime environment. The environment template specifies the type, size, and power of the hardware configuration, plus the software configuration. For notebooks, environment templates include a supported language of Python and R. * [Types of environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=entypes) * [Runtime releases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=enruntime-releases) * [CPU environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=endefault-cpu) * [Spark environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=endefault-spark) * [GPU environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=endefault-gpu) * [Default hardware specifications for scoring models with Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=enwml) * [Data files in notebook environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=endata-files) * [Compute usage by service](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=encompute) * [Runtime scope](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=enscope) * [Changing environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=enchange-env) Types of environments You can use these types of environments for running notebook: * [Anaconda CPU environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=endefault-cpu) for standard workloads. * [Spark environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=endefault-spark) for parallel processing that is provided by the platform or by other services. * [GPU environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=endefault-gpu) for compute-intensive machine learning models. Most environment types for notebooks have default environment templates so you can get started quickly. Otherwise, you can [create custom environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html). Environment types for notebooks Environment type Default templates Custom templates Anaconda CPU ✓ ✓ Spark clusters ✓ ✓ GPU ✓ ✓ Runtime releases The default environments for notebooks are added as an affiliate of a runtime release and prefixed with Runtime followed by the release year and release version. A runtime release specifies a list of key data science libraries and a language version, for example Python 3.10. 
All environments of a runtime release are built based on the library versions defined in the release, thus ensuring the consistent use of data science libraries across all data science applications. The Runtime 22.2 and Runtime 23.1 releases are available for Python 3.10 and R 4.2. While a runtime release is supported, IBM will update the library versions to address security requirements. Note that these updates will not change the <Major>.<Minor> versions of the libraries, but only the <Patch> versions. This ensures that your notebook assets will continue to run. Library packages included in Runtimes For specific versions of popular data science library packages included in Watson Studio runtimes refer to these tables: Table 3. Packages and their versions in the various Runtime releases for Python Library Runtime 22.2 on Python 3.10 Runtime 23.1 on Python 3.10 Keras 2.9 2.12 Lale 0.7 0.7 LightGBM 3.3 3.3 NumPy 1.23 1.23 ONNX 1.12 1.13 ONNX Runtime 1.12 1.13 OpenCV 4.6 4.7 pandas 1.4 1.5 PyArrow 8.0 11.0 PyTorch 1.12 2.0 scikit-learn 1.1 1.1 SciPy 1.8 1.10 SnapML 1.8 1.13 TensorFlow 2.9 2.12 XGBoost 1.6 1.6 Table 4. Packages and their versions in the various Runtime releases for R Library Runtime 22.2 on R 4.2 Runtime 23.1 on R 4.2 arrow 8.0 11.0 car 3.0 3.0 caret 6.0 6.0 catools 1.18 1.18 forecast 8.16 8.16 ggplot2 3.3 3.3 glmnet 4.1 4.1 hmisc 4.7 4.7 keras 2.9 2.12 lme4 1.1 1.1 mvtnorm 1.1 1.1 pandoc 2.12 2.12 psych 2.2 2.2 python 3.10 3.10 randomforest 4.7 4.7 reticulate 1.25 1.25 sandwich 3.0 3.0 scikit-learn 1.1 1.1 spatial 7.3 7.3 tensorflow 2.9 2.12 tidyr 1.2 1.2 xgboost 1.6 1.6 In addition to the libraries listed in the tables, runtimes include many other useful libraries. To see the full list, select the Manage tab in your project, then click Templates, select the Environments tab, and then click on one of the listed environments. CPU environment templates You can select any of the following default CPU environment templates for notebooks. The default environment templates are listed under Templates on the Environments page on the Manage tab of your project. DO Indicates that the environment templates includes the CPLEX and the DOcplex libraries to model and solve decision optimization problems that exceed the complexity that is supported by the Community Edition of the libraries in the other default Python environments. See [Decision Optimization notebooks](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DONotebooks.html). NLP Indicates that the environment templates includes the Watson Natural Language Processing library with pre-trained models for language processing tasks that you can run on unstructured data. See [Using the Watson Natural Language Processing library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html). This default environment should be large enough to run the pre-trained models. 
Default CPU environment templates for notebooks Name Hardware configuration CUH rate per hour Runtime 22.2 on Python 3.10 XXS 1 vCPU and 4 GB RAM 0.5 Runtime 22.2 on Python 3.10 XS 2 vCPU and 8 GB RAM 1 Runtime 22.2 on Python 3.10 S 4 vCPU and 16 GB RAM 2 Runtime 23.1 on Python 3.10 XXS 1 vCPU and 4 GB RAM 0.5 Runtime 23.1 on Python 3.10 XS 2 vCPU and 8 GB RAM 1 Runtime 23.1 on Python 3.10 S 4 vCPU and 16 GB RAM 2 DO + NLP Runtime 22.2 on Python 3.10 XS 2 vCPU and 8 GB RAM 6 NLP + DO Runtime 23.1 on Python 3.10 XS 2 vCPU and 8 GB RAM 6 Runtime 22.2 on R 4.2 S 4 vCPU and 16 GB RAM 2 Runtime 23.1 on R 4.2 S 4 vCPU and 16 GB RAM 2 You should stop all active CPU runtimes when you no longer need them to prevent consuming extra capacity unit hours (CUHs). See [CPU idle timeout](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.htmlstop-active-runtimes). Notebooks and CPU environments When you open a notebook in edit mode in a CPU runtime environment, exactly one interactive session connects to a Jupyter kernel for the notebook language and the environment runtime that you select. The runtime is started per single user and not per notebook. This means that if you open a second notebook with the same environment template in the same project, a second kernel is started in the same runtime. Runtime resources are shared by the Jupyter kernels that you start in the runtime. Runtime resources are also shared if the CPU has GPU. If you want to avoid sharing runtimes but want to use the same environment template for multiple notebooks in a project, you should create custom environment templates with the same specifications and associate each notebook with its own template. If necessary, you can restart or reconnect to the kernel. When you restart a kernel, the kernel is stopped and then started in the same session again, but all execution results are lost. When you reconnect to a kernel after losing a connection, the notebook is connected to the same kernel session, and all previous execution results which were saved are available. Spark environment templates You can select any of the following default Spark environment templates for notebooks. The default environment templates are listed under Templates on the Environments page on the Manage tab of your project. Default Spark environment templates for notebooks Name Hardware configuration CUH rate per hour Default Spark 3.3 & R 4.2 2 Executors each: 1 vCPU and 4 GB RAM; <br>Driver: 1 vCPU and 4 GB RAM 1 Default Spark 3.4 & R 4.2 2 Executors each: 1 vCPU and 4 GB RAM; <br>Driver: 1 vCPU and 4 GB RAM 1 You should stop all active Spark runtimes when you no longer need them to prevent consuming extra capacity unit hours (CUHs). See [Spark idle timeout](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.htmlstop-active-runtimes). Large Spark environments If you have the Watson Studio Professional plan, you can create custom environment templates for larger Spark environments. Professional plan users can have up to 35 executors and can choose from the following options for both driver and executor: Hardware configurations for Spark environments Hardware configuration 1 vCPU and 4 GB RAM 1 vCPU and 8 GB RAM 1 vCPU and 12 GB RAM The CUH rate per hour increases by 0.5 for every vCPU that is added. For example, 1x Driver: 3vCPU with 12GB of RAM and 4x Executors: 2vCPU with 8GB of RAM amounts to (3 + (4 * 2)) = 11 vCPUs and 5.5 CUH. 
Notebooks and Spark environments You can select the same Spark environment template for more than one notebook. Every notebook associated with that environment has its own dedicated Spark cluster and no resources are shared. When you start a Spark environment, extra resources are needed for the Jupyter Enterprise Gateway, Spark Master, and the Spark worker daemons. These extra resources amount to 1 vCPU and 2 GB of RAM for the driver and 1 GB RAM for each executor. You need to take these extra resources into account when selecting the hardware size of a Spark environment. For example: if you create a notebook and select Default Spark 3.3 & Python 3.10, the Spark cluster consumes 3 vCPU and 12 GB RAM but, as 1 vCPU and 4 GB RAM are required for the extra resources, the resources remaining for the notebook are 2 vCPU and 8 GB RAM. File system on a Spark cluster If you want to share files across executors and the driver or kernel of a Spark cluster, you can use the shared file system at /home/spark/shared. If you want to use your own custom libraries, you can store them under /home/spark/shared/user-libs/. Subdirectories under /home/spark/shared/user-libs/ are pre-configured to be made available to the Python, R, and Java runtimes. The following table lists the pre-configured subdirectories where you can add your custom libraries. Table 5. Pre-configured subdirectories for custom libraries Directory Type of library /home/spark/shared/user-libs/python3/ Python 3 libraries /home/spark/shared/user-libs/R/ R packages /home/spark/shared/user-libs/spark2/ Java JAR files To share libraries across a Spark driver and executors: 1. Download your custom libraries or JAR files to the appropriate pre-configured directory. 2. Restart the kernel from the notebook menu by clicking Kernel > Restart Kernel. This loads your custom libraries or JAR files in Spark. Note that these libraries are not persisted. When you stop the environment runtime and restart it again later, you need to load the libraries again. GPU environment templates You can select any of the following GPU environment templates for notebooks. The environment templates are listed under Templates on the Environments page on the Manage tab of your project. The GPU environment template names indicate the accelerator power. The GPU environment templates include the Watson Natural Language Processing library with pre-trained models for language processing tasks that you can run on unstructured data. See [Using the Watson Natural Language Processing library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html). Indicates that the environment template requires the Watson Studio Professional plan. See [Offering plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html). Default GPU environment templates for notebooks Name Hardware configuration CUH rate per hour GPU V100 Runtime 22.2 on Python 3.10 40 vCPU + 172 GB RAM + 1 NVIDIA TESLA V100 (1 GPU) 68 GPU V100 Runtime 23.1 on Python 3.10 40 vCPU + 172 GB RAM + 1 NVIDIA TESLA V100 (1 GPU) 68 GPU 2xV100 Runtime 22.2 on Python 3.10 80 vCPU and 344 GB RAM + 2 NVIDIA TESLA V100 (2 GPU) 136 GPU 2xV100 Runtime 23.1 on Python 3.10 80 vCPU and 344 GB RAM + 2 NVIDIA TESLA V100 (2 GPU) 136 You should stop all active GPU runtimes when you no longer need them to prevent consuming extra capacity unit hours (CUHs). See [GPU idle timeout](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html#stop-active-runtimes). 
Notebooks and GPU environments GPU environments for notebooks are available only in the Dallas IBM Cloud service region. You can select the same Python and GPU environment template for more than one notebook in a project. In this case, every notebook kernel runs in the same runtime instance and the resources are shared. To avoid sharing runtime resources, create multiple custom environment templates with the same specifications and associate each notebook with its own template. Default hardware specifications for scoring models with Watson Machine Learning When you invoke the Watson Machine Learning API within a notebook, you consume compute resources from the Watson Machine Learning service as well as the compute resources for the notebook kernel. You can select any of the following hardware specifications when you connect to Watson Machine Learning and create a deployment. Hardware specifications available when invoking the Watson Machine Learning service in a notebook Capacity size Hardware configuration CUH rate per hour Extra small 1x4 = 1 vCPU and 4 GB RAM 0.5 Small 2x8 = 2 vCPU and 8 GB RAM 1 Medium 4x16 = 4 vCPU and 16 GB RAM 2 Large 8x32 = 8 vCPU and 32 GB RAM 4 Data files in notebook environments If you are working with large data sets, you should store the data sets in smaller chunks in the IBM Cloud Object Storage associated with your project and process the data in chunks in the notebook. Alternatively, you should run the notebook in a Spark environment. Be aware that the file system of each runtime is non-persistent and cannot be shared across environments. To persist files in Watson Studio, you should use IBM Cloud Object Storage. The easiest way to use IBM Cloud Object Storage in notebooks in projects is to leverage the [project-lib package for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/project-lib-python.html) or the [project-lib package for R](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/project-lib-r.html). Compute usage by service Notebook runtimes consume compute resources as CUH from Watson Studio while running default or custom environments. You can monitor the Watson Studio CUH consumption in the project on the Resource usage page on the Manage tab of the project. Notebooks can also consume CUH from the Watson Machine Learning service when the notebook invokes Watson Machine Learning to score a model. You can monitor the total monthly amount of CUH consumption for the Watson Machine Learning service on the Resource usage page on the Manage tab of the project. Track CUH consumption for Watson Machine Learning in a notebook To calculate capacity unit hours consumed by a notebook, run this code in the notebook: CP = client.service_instance.get_details() CUH = CP["entity"]["capacity_units"]["current"]/(3600*1000) print(CUH) For example, if the service instance details contain 'capacity_units': {'current': 19773430}, then 19773430/(3600*1000) returns 5.49 CUH. For details, see the Service Instances section of the [IBM Watson Machine Learning API](https://cloud.ibm.com/apidocs/machine-learning) documentation. Runtime scope Environment runtimes are always scoped to an environment template and a user within a project. If different users in a project work with the same environment, each user will get a separate runtime. If you choose to run a version of a notebook as a scheduled job, each scheduled job will always start in a dedicated runtime. The runtime is stopped when the job finishes. 
Changing the environment of a notebook You can switch environments for different reasons, for example, you can: * Select an environment with more processing power or more RAM * Change from using an environment without Spark to a Spark environment You can only change the environment of a notebook if the notebook is unlocked. You can change the environment: * From the notebook opened in edit mode: 1. Save your notebook changes. 2. Click the Notebook Info icon (![Notebook Info icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/get-information_32.png)) from the notebook toolbar and then click Environment. 3. Select another template with the compute power and memory capacity from the list. 4. Select Change environment. This stops the active runtime and starts the newly selected environment. * From the Assets page of your project: 1. Select the notebook in the Notebooks section, click Actions > Change Environment and select another environment. The kernel must be stopped before you can change the environment. This new runtime environment will be instantiated the next time the notebook is opened for editing. * In the notebook job by editing the job template. See [Editing job settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.htmlview-job-details). Next steps * [Creating a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/creating-notebooks.html) * [Creating your own environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html) * [Customizing an environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html) * [Stopping active notebook runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.htmlstop-active-runtimes) Learn more * [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) Parent topic:[Compute resources for tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html)
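As a quick check of the Spark sizing rule described earlier (the CUH rate per hour is 0.5 for every vCPU across the driver and executors), the following sketch reproduces the documented example. The helper function is illustrative only.

```python
# Illustrative sketch of the Spark CUH rule of thumb: 0.5 CUH per hour for each vCPU.
def spark_cuh_rate(driver_vcpu: int, executor_vcpu: int, executors: int) -> float:
    total_vcpu = driver_vcpu + executor_vcpu * executors
    return total_vcpu * 0.5

# Documented example: 1 driver with 3 vCPU and 4 executors with 2 vCPU each
# -> (3 + (4 * 2)) = 11 vCPUs -> 5.5 CUH per hour.
print(spark_cuh_rate(driver_vcpu=3, executor_vcpu=2, executors=4))  # 5.5
```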
# Compute resource options for the notebook editor in projects # When you run a notebook in the notebook editor in a project, you choose an environment template, which defines the compute resources for the runtime environment\. The environment template specifies the type, size, and power of the hardware configuration, plus the software configuration\. For notebooks, environment templates include a supported language of Python and R\. <!-- <ul> --> * [Types of environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=en#types) * [Runtime releases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=en#runtime-releases) * [CPU environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=en#default-cpu) * [Spark environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=en#default-spark) * [GPU environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=en#default-gpu) * [Default hardware specifications for scoring models with Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=en#wml) * [Data files in notebook environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=en#data-files) * [Compute usage by service](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=en#compute) * [Runtime scope](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=en#scope) * [Changing environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=en#change-env) <!-- </ul> --> ## Types of environments ## You can use these types of environments for running notebook: <!-- <ul> --> * [Anaconda CPU environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=en#default-cpu) for standard workloads\. * [Spark environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=en#default-spark) for parallel processing that is provided by the platform or by other services\. * [GPU environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=en#default-gpu) for compute\-intensive machine learning models\. <!-- </ul> --> Most environment types for notebooks have default environment templates so you can get started quickly\. Otherwise, you can [create custom environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html)\. <!-- <table> --> Environment types for notebooks | Environment type | Default templates | Custom templates | | ---------------- | ----------------- | ---------------- | | Anaconda CPU | ✓ | ✓ | | Spark clusters | ✓ | ✓ | | GPU | ✓ | ✓ | <!-- </table ""> --> ## Runtime releases ## The default environments for notebooks are added as an affiliate of a runtime release and prefixed with `Runtime` followed by the release year and release version\. 
A runtime release specifies a list of key data science libraries and a language version, for example Python 3\.10\. All environments of a runtime release are built based on the library versions defined in the release, thus ensuring the consistent use of data science libraries across all data science applications\. The `Runtime 22.2` and `Runtime 23.1` releases are available for Python 3\.10 and R 4\.2\. While a runtime release is supported, IBM will update the library versions to address security requirements\. Note that these updates will not change the `<Major>.<Minor>` versions of the libraries, but only the `<Patch>` versions\. This ensures that your notebook assets will continue to run\. ### Library packages included in Runtimes ### For specific versions of popular data science library packages included in Watson Studio runtimes refer to these tables: <!-- <table> --> Table 3\. Packages and their versions in the various Runtime releases for Python | Library | Runtime 22\.2 on Python 3\.10 | Runtime 23\.1 on Python 3\.10 | | ------------- | ----------------------------- | ----------------------------- | | Keras | 2\.9 | 2\.12 | | Lale | 0\.7 | 0\.7 | | LightGBM | 3\.3 | 3\.3 | | NumPy | 1\.23 | 1\.23 | | ONNX | 1\.12 | 1\.13 | | ONNX Runtime | 1\.12 | 1\.13 | | OpenCV | 4\.6 | 4\.7 | | pandas | 1\.4 | 1\.5 | | PyArrow | 8\.0 | 11\.0 | | PyTorch | 1\.12 | 2\.0 | | scikit\-learn | 1\.1 | 1\.1 | | SciPy | 1\.8 | 1\.10 | | SnapML | 1\.8 | 1\.13 | | TensorFlow | 2\.9 | 2\.12 | | XGBoost | 1\.6 | 1\.6 | <!-- </table ""> --> <!-- <table> --> Table 4\. Packages and their versions in the various Runtime releases for R | Library | Runtime 22\.2 on R 4\.2 | Runtime 23\.1 on R 4\.2 | | ------------- | ----------------------- | ----------------------- | | arrow | 8\.0 | 11\.0 | | car | 3\.0 | 3\.0 | | caret | 6\.0 | 6\.0 | | catools | 1\.18 | 1\.18 | | forecast | 8\.16 | 8\.16 | | ggplot2 | 3\.3 | 3\.3 | | glmnet | 4\.1 | 4\.1 | | hmisc | 4\.7 | 4\.7 | | keras | 2\.9 | 2\.12 | | lme4 | 1\.1 | 1\.1 | | mvtnorm | 1\.1 | 1\.1 | | pandoc | 2\.12 | 2\.12 | | psych | 2\.2 | 2\.2 | | python | 3\.10 | 3\.10 | | randomforest | 4\.7 | 4\.7 | | reticulate | 1\.25 | 1\.25 | | sandwich | 3\.0 | 3\.0 | | scikit\-learn | 1\.1 | 1\.1 | | spatial | 7\.3 | 7\.3 | | tensorflow | 2\.9 | 2\.12 | | tidyr | 1\.2 | 1\.2 | | xgboost | 1\.6 | 1\.6 | <!-- </table ""> --> In addition to the libraries listed in the tables, runtimes include many other useful libraries\. To see the full list, select the **Manage** tab in your project, then click **Templates**, select the **Environments** tab, and then click on one of the listed environments\. ## CPU environment templates ## You can select any of the following default CPU environment templates for notebooks\. The default environment templates are listed under **Templates** on the **Environments** page on the **Manage** tab of your project\. `DO` Indicates that the environment templates includes the CPLEX and the DOcplex libraries to model and solve decision optimization problems that exceed the complexity that is supported by the Community Edition of the libraries in the other default Python environments\. See [Decision Optimization notebooks](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DONotebooks.html)\. `NLP` Indicates that the environment templates includes the Watson Natural Language Processing library with pre\-trained models for language processing tasks that you can run on unstructured data\. 
See [Using the Watson Natural Language Processing library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html)\. This default environment should be large enough to run the pre\-trained models\. <!-- <table> --> Default CPU environment templates for notebooks | Name | Hardware configuration | CUH rate per hour | | ------------------------------------------ | ---------------------- | ----------------- | | Runtime 22\.2 on Python 3\.10 XXS | 1 vCPU and 4 GB RAM | 0\.5 | | Runtime 22\.2 on Python 3\.10 XS | 2 vCPU and 8 GB RAM | 1 | | Runtime 22\.2 on Python 3\.10 S | 4 vCPU and 16 GB RAM | 2 | | Runtime 23\.1 on Python 3\.10 XXS | 1 vCPU and 4 GB RAM | 0\.5 | | Runtime 23\.1 on Python 3\.10 XS | 2 vCPU and 8 GB RAM | 1 | | Runtime 23\.1 on Python 3\.10 S | 4 vCPU and 16 GB RAM | 2 | | DO \+ NLP Runtime 22\.2 on Python 3\.10 XS | 2 vCPU and 8 GB RAM | 6 | | NLP \+ DO Runtime 23\.1 on Python 3\.10 XS | 2 vCPU and 8 GB RAM | 6 | | Runtime 22\.2 on R 4\.2 S | 4 vCPU and 16 GB RAM | 2 | | Runtime 23\.1 on R 4\.2 S | 4 vCPU and 16 GB RAM | 2 | <!-- </table ""> --> You should stop all active CPU runtimes when you no longer need them to prevent consuming extra capacity unit hours (CUHs)\. See [CPU idle timeout](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html#stop-active-runtimes)\. ### Notebooks and CPU environments ### When you open a notebook in edit mode in a CPU runtime environment, exactly one interactive session connects to a Jupyter kernel for the notebook language and the environment runtime that you select\. The runtime is started per single user and not per notebook\. This means that if you open a second notebook with the same environment template in the same project, a second kernel is started in the same runtime\. Runtime resources are shared by the Jupyter kernels that you start in the runtime\. Runtime resources are also shared in GPU environments\. If you want to avoid sharing runtimes but want to use the same environment template for multiple notebooks in a project, you should create custom environment templates with the same specifications and associate each notebook with its own template\. If necessary, you can restart or reconnect to the kernel\. When you restart a kernel, the kernel is stopped and then started in the same session again, but all execution results are lost\. When you reconnect to a kernel after losing a connection, the notebook is connected to the same kernel session, and all previous execution results that were saved are still available\. ## Spark environment templates ## You can select any of the following default Spark environment templates for notebooks\. The default environment templates are listed under **Templates** on the **Environments** page on the **Manage** tab of your project\. <!-- <table> --> Default Spark environment templates for notebooks | Name | Hardware configuration | CUH rate per hour | | --------------------------- | ----------------------------------------------------------------------- | ----------------- | | Default Spark 3\.3 & R 4\.2 | 2 Executors each: 1 vCPU and 4 GB RAM; <br>Driver: 1 vCPU and 4 GB RAM | 1 | | Default Spark 3\.4 & R 4\.2 | 2 Executors each: 1 vCPU and 4 GB RAM; <br>Driver: 1 vCPU and 4 GB RAM | 1 | <!-- </table ""> --> You should stop all active Spark runtimes when you no longer need them to prevent consuming extra capacity unit hours (CUHs)\. 
See [Spark idle timeout](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html#stop-active-runtimes)\. ### Large Spark environments ### If you have the Watson Studio Professional plan, you can create custom environment templates for larger Spark environments\. Professional plan users can have up to 35 executors and can choose from the following options for both driver and executor: <!-- <table> --> Hardware configurations for Spark environments | Hardware configuration | | ---------------------- | | 1 vCPU and 4 GB RAM | | 1 vCPU and 8 GB RAM | | 1 vCPU and 12 GB RAM | <!-- </table ""> --> The CUH rate per hour increases by 0\.5 for every vCPU that is added\. For example, `1x Driver: 3vCPU with 12GB of RAM` and `4x Executors: 2vCPU with 8GB of RAM` amounts to `(3 + (4 * 2)) = 11 vCPUs` and `5.5 CUH`\. ### Notebooks and Spark environments ### You can select the same Spark environment template for more than one notebook\. Every notebook associated with that environment has its own dedicated Spark cluster and no resources are shared\. When you start a Spark environment, extra resources are needed for the Jupyter Enterprise Gateway, Spark Master, and the Spark worker daemons\. These extra resources amount to 1 vCPU and 2 GB of RAM for the driver and 1 GB RAM for each executor\. You need to take these extra resources into account when selecting the hardware size of a Spark environment\. For example: if you create a notebook and select `Default Spark 3.3 & Python 3.10`, the Spark cluster consumes 3 vCPU and 12 GB RAM but, as 1 vCPU and 4 GB RAM are required for the extra resources, the resources remaining for the notebook are 2 vCPU and 8 GB RAM\. ### File system on a Spark cluster ### If you want to share files across executors and the driver or kernel of a Spark cluster, you can use the shared file system at `/home/spark/shared`\. If you want to use your own custom libraries, you can store them under `/home/spark/shared/user-libs/`\. The subdirectories under `/home/spark/shared/user-libs/` are pre\-configured to be made available to the Python, R, and Java runtimes\. The following table lists the pre\-configured subdirectories where you can add your custom libraries\. <!-- <table> --> Table 5\. Pre\-configured subdirectories for custom libraries | Directory | Type of library | | --------------------------------------- | ------------------ | | `/home/spark/shared/user-libs/python3/` | Python 3 libraries | | `/home/spark/shared/user-libs/R/` | R packages | | `/home/spark/shared/user-libs/spark2/` | Java JAR files | <!-- </table ""> --> To share libraries across a Spark driver and executors: <!-- <ol> --> 1. Download your custom libraries or JAR files to the appropriate pre\-configured directory\. 2. Restart the kernel from the notebook menu by clicking **Kernel > Restart Kernel**\. This loads your custom libraries or JAR files in Spark\. <!-- </ol> --> Note that these libraries are not persisted\. When you stop the environment runtime and restart it again later, you need to load the libraries again\. ## GPU environment templates ## You can select any of the following GPU environment templates for notebooks\. The environment templates are listed under **Templates** on the **Environments** page on the **Manage** tab of your project\. The GPU environment template names indicate the accelerator power\. 
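After your notebook is running in one of the GPU environments listed in the following table, you can check that the accelerator is visible to your deep learning framework. This is a minimal sketch that assumes you use the PyTorch and TensorFlow versions that ship with the runtime:

```python
# Confirm that the GPU of the environment is visible to the frameworks.
import torch
import tensorflow as tf

print("PyTorch sees CUDA:", torch.cuda.is_available())
print("GPU count (PyTorch):", torch.cuda.device_count())
print("GPUs (TensorFlow):", tf.config.list_physical_devices("GPU"))
```

In a `GPU V100` environment you can expect one device to be reported, and in a `GPU 2xV100` environment two devices.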
The GPU environment templates include the Watson Natural Language Processing library with pre\-trained models for language processing tasks that you can run on unstructured data\. See [Using the Watson Natural Language Processing library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html)\. **~** Indicates that the environment template requires the Watson Studio Professional plan\. See [Offering plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html)\. <!-- <table> --> Default GPU environment templates for notebooks | Name | Hardware configuration | CUH rate per hour | | ---------------------------------------------- | ----------------------------------------------------- | ----------------- | | GPU V100 Runtime 22\.2 on Python 3\.10 **~** | 40 vCPU \+ 172 GB RAM \+ 1 NVIDIA TESLA V100 (1 GPU) | 68 | | GPU V100 Runtime 23\.1 on Python 3\.10 **~** | 40 vCPU \+ 172 GB RAM \+ 1 NVIDIA TESLA V100 (1 GPU) | 68 | | GPU 2xV100 Runtime 22\.2 on Python 3\.10 **~** | 80 vCPU and 344 GB RAM \+ 2 NVIDIA TESLA V100 (2 GPU) | 136 | | GPU 2xV100 Runtime 23\.1 on Python 3\.10 **~** | 80 vCPU and 344 GB RAM \+ 2 NVIDIA TESLA V100 (2 GPU) | 136 | <!-- </table ""> --> You should stop all active GPU runtimes when you no longer need them to prevent consuming extra capacity unit hours (CUHs)\. See [GPU idle timeout](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html#stop-active-runtimes)\. ### Notebooks and GPU environments ### GPU environments for notebooks are available only in the Dallas IBM Cloud service region\. You can select the same Python and GPU environment template for more than one notebook in a project\. In this case, every notebook kernel runs in the same runtime instance and the resources are shared\. To avoid sharing runtime resources, create multiple custom environment templates with the same specifications and associate each notebook with its own template\. ## Default hardware specifications for scoring models with Watson Machine Learning ## When you invoke the Watson Machine Learning API within a notebook, you consume compute resources from the Watson Machine Learning service as well as the compute resources for the notebook kernel\. You can select any of the following hardware specifications when you connect to Watson Machine Learning and create a deployment\. <!-- <table> --> Hardware specifications available when invoking the Watson Machine Learning service in a notebook | Capacity size | Hardware configuration | CUH rate per hour | | ------------- | --------------------------- | ----------------- | | Extra small | 1x4 = 1 vCPU and 4 GB RAM | 0\.5 | | Small | 2x8 = 2 vCPU and 8 GB RAM | 1 | | Medium | 4x16 = 4 vCPU and 16 GB RAM | 2 | | Large | 8x32 = 8 vCPU and 32 GB RAM | 4 | <!-- </table ""> --> ## Data files in notebook environments ## If you are working with large data sets, you should store the data sets in smaller chunks in the IBM Cloud Object Storage associated with your project and process the data in chunks in the notebook\. Alternatively, you should run the notebook in a Spark environment\. Be aware that the file system of each runtime is non\-persistent and cannot be shared across environments\. To persist files in Watson Studio, you should use IBM Cloud Object Storage\. 
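Processing in chunks means that the full data set is never held in memory at once. The following is a minimal sketch with pandas; the file name, chunk size, and column name are assumptions that you would replace with values that match your own data:

```python
# Aggregate a large CSV file in fixed-size chunks instead of loading it all at once.
import pandas as pd

total_rows = 0
running_sum = 0.0

# "large_data.csv" and the "amount" column are placeholders for your own data.
for chunk in pd.read_csv("large_data.csv", chunksize=100_000):
    total_rows += len(chunk)
    running_sum += chunk["amount"].sum()

print(f"rows: {total_rows}, mean amount: {running_sum / total_rows:.2f}")
```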
The easiest way to use IBM Cloud Object Storage in notebooks in projects is to leverage the [`project-lib` package for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/project-lib-python.html) or the [`project-lib` package for R](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/project-lib-r.html)\. ## Compute usage by service ## Notebook runtimes consume compute resources as CUH from Watson Studio while running default or custom environments\. You can monitor the Watson Studio CUH consumption in the project on the **Resource usage** page on the **Manage** tab of the project\. Notebooks can also consume CUH from the Watson Machine Learning service when the notebook invokes the Watson Machine Learning service to score a model\. You can monitor the total monthly amount of CUH consumption for the Watson Machine Learning service on the **Resource usage** page on the **Manage** tab of the project\. ### Track CUH consumption for Watson Machine Learning in a notebook ### To calculate the capacity unit hours consumed by a notebook, run this code in the notebook, where `client` is an authenticated Watson Machine Learning Python client: CP = client.service_instance.get_details() CUH = CP["entity"]["capacity_units"]["current"]/(3600*1000) print(CUH) For example: 'capacity_units': {'current': 19773430} 19773430/(3600*1000) returns 5\.49 CUH For details, see the Service Instances section of the [IBM Watson Machine Learning API](https://cloud.ibm.com/apidocs/machine-learning) documentation\. ## Runtime scope ## Environment runtimes are always scoped to an environment template and a user within a project\. If different users in a project work with the same environment, each user will get a separate runtime\. If you choose to run a version of a notebook as a scheduled job, each scheduled job always starts in a dedicated runtime\. The runtime is stopped when the job finishes\. ## Changing the environment of a notebook ## You can switch environments for different reasons, for example, you can: <!-- <ul> --> * Select an environment with more processing power or more RAM * Change from using an environment without Spark to a Spark environment <!-- </ul> --> You can only change the environment of a notebook if the notebook is unlocked\. You can change the environment: <!-- <ul> --> * From the notebook opened in edit mode: <!-- <ol> --> 1. Save your notebook changes. 2. Click the Notebook Info icon (![Notebook Info icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/get-information_32.png)) from the notebook toolbar and then click **Environment**. 3. Select another template with the required compute power and memory capacity from the list. 4. Select **Change environment**. This stops the active runtime and starts the newly selected environment. <!-- </ol> --> * From the **Assets** page of your project: <!-- <ol> --> 1. Select the notebook in the Notebooks section, click **Actions > Change Environment** and select another environment. The kernel must be stopped before you can change the environment. This new runtime environment will be instantiated the next time the notebook is opened for editing. <!-- </ol> --> * In the notebook job by editing the job template\. See [Editing job settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html#view-job-details)\. 
<!-- </ul> --> ## Next steps ## <!-- <ul> --> * [Creating a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/creating-notebooks.html) * [Creating your own environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html) * [Customizing an environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html) * [Stopping active notebook runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html#stop-active-runtimes) <!-- </ul> --> ## Learn more ## <!-- <ul> --> * [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) <!-- </ul> --> **Parent topic:**[Compute resources for tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) <!-- </article "role="article" "> -->
C6B0055426C9E91760F4923ED42BE91D64FCA6C8
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html?context=cdpaas&locale=en
Notebooks and scripts
Notebooks and scripts You can create, edit and execute Python and R code using Jupyter notebooks and scripts in code editors, for example the notebook editor or an integrated development environment (IDE), like RStudio. Notebooks : A Jupyter notebook is a web-based environment for interactive computing. You can use notebooks to run small pieces of code that process your data, and you can immediately view the results of your computation. Notebooks include all of the building blocks you need to work with data, namely the data, the code computations that process the data, the visualizations of the results, and text and rich media to enhance understanding. Scripts : A script is a file containing a set of commands and comments. The script can be saved and used later to re-execute the saved commands. Unlike in a notebook, the commands in a script can only be executed in a linear fashion. Notebooks Required permissions : Editor or Admin role in a project Tools : Notebook editor Programming languages : Python and R Data format : All types Code support is available for loading and accessing data from project assets for: : Data assets, such as CSV, JSON and .xlsx and .xls files : Database connections and connected data assets See [Data load support](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-load-support.html). for the supported file and database types. Data size : 5 GB. If your files are larger, you must load the data in multiple parts. Scripts Required permissions : Editor or Admin role in a project Tools : RStudio Programming languages : R Data format : All types Code support is available for loading and accessing data from project assets for: : Data assets, such as CSV, JSON and .xlsx and .xls files : Database connections and connected data assets See [Data load support](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-load-support.html). for the supported file and database types. Data size : 5 GB. If your files are larger, you must load the data in multiple parts. Working in the notebook editor The notebook editor is largely used for interactive, exploratory data analysis programming and data visualization. Only one person can edit a notebook at a time. All other users can access opened notebooks in view mode only, while they are locked. You can use the preinstalled open source libraries that come with the notebook runtime environments, add your own libraries, and benefit from the IBM libraries provided at no extra cost. When your notebooks are ready, you can create jobs to run the notebooks directly from the notebook editor. Your job configurations can use environment variables that are passed to the notebooks with different values when the notebooks run. Working in RStudio RStudio is an integrated development environment for working with R scripts or Shiny apps. Although the RStudio IDE cannot be started in a Spark with R environment runtime, you can use Spark in your R scripts and Shiny apps by accessing Spark kernels programmatically. R scripts and Shiny apps can only be created and used in the RStudio IDE. You can't create jobs for R scripts or R Shiny deployments. 
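As described in the notebook editor section above, a notebook that runs as a job can receive configuration values through environment variables. The following is a minimal sketch of reading such a value in Python; the variable name `TRAINING_RUNS` is an assumption, not a predefined name:

```python
# Read a job configuration value from an environment variable, with a default
# so that the notebook also runs interactively in the notebook editor.
import os

training_runs = int(os.environ.get("TRAINING_RUNS", "1"))
print(f"Running {training_runs} training run(s)")
```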
Learn more * [Quick start: Analyze data in a Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html) * [RStudio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-overview.html) * [Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-editor.html) Parent topic:[Analyzing data and building models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html)
# Notebooks and scripts # You can create, edit and execute Python and R code using Jupyter notebooks and scripts in code editors, for example the notebook editor or an integrated development environment (IDE), like RStudio\. **Notebooks** : A Jupyter notebook is a web\-based environment for interactive computing\. You can use notebooks to run small pieces of code that process your data, and you can immediately view the results of your computation\. Notebooks include all of the building blocks you need to work with data, namely the data, the code computations that process the data, the visualizations of the results, and text and rich media to enhance understanding\. **Scripts** : A script is a file containing a set of commands and comments\. The script can be saved and used later to re\-execute the saved commands\. Unlike in a notebook, the commands in a script can only be executed in a linear fashion\. ## Notebooks ## **Required permissions** : **Editor** or **Admin** role in a project **Tools** : Notebook editor **Programming languages** : Python and R **Data format** : All types Code support is available for loading and accessing data from project assets for: : Data assets, such as CSV, JSON and \.xlsx and \.xls files : Database connections and connected data assets See [Data load support](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-load-support.html)\. for the supported file and database types\. **Data size** : 5 GB\. If your files are larger, you must load the data in multiple parts\. ## Scripts ## **Required permissions** : **Editor** or **Admin** role in a project **Tools** : RStudio **Programming languages** : R **Data format** : All types Code support is available for loading and accessing data from project assets for: : Data assets, such as CSV, JSON and \.xlsx and \.xls files : Database connections and connected data assets See [Data load support](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-load-support.html)\. for the supported file and database types\. **Data size** : 5 GB\. If your files are larger, you must load the data in multiple parts\. ## Working in the notebook editor ## The notebook editor is largely used for interactive, exploratory data analysis programming and data visualization\. Only one person can edit a notebook at a time\. All other users can access opened notebooks in view mode only, while they are locked\. You can use the preinstalled open source libraries that come with the notebook runtime environments, add your own libraries, and benefit from the IBM libraries provided at no extra cost\. When your notebooks are ready, you can create jobs to run the notebooks directly from the notebook editor\. Your job configurations can use environment variables that are passed to the notebooks with different values when the notebooks run\. ## Working in RStudio ## RStudio is an integrated development environment for working with R scripts or Shiny apps\. Although the RStudio IDE cannot be started in a Spark with R environment runtime, you can use Spark in your R scripts and Shiny apps by accessing Spark kernels programmatically\. R scripts and Shiny apps can only be created and used in the RStudio IDE\. You can't create jobs for R scripts or R Shiny deployments\. 
## Learn more ## <!-- <ul> --> * [Quick start: Analyze data in a Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html) * [RStudio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-overview.html) * [Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-editor.html) <!-- </ul> --> **Parent topic:**[Analyzing data and building models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html) <!-- </article "role="article" "> -->
A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/operator-view.html?context=cdpaas&locale=en
Deployments dashboard
Deployments dashboard The deployments dashboard provides an aggregate view of deployment activity available to you, across spaces. You can get a broad view of deployment activity such as the status of job runs or a list of online deployments. You can also use filters and views to focus on specific job runs or category of runs such as failed runs. ModelOps or DevOps users can review and monitor the activity for an organization. Accessing the Deployments dashboard From the navigation menu, click Deployments. If you don't have any deployment spaces, you are prompted to create a space. This following illustration shows an example of the Deployments dashboard: ![Deployments dashboard](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/deployment-dashboard.png) The dashboard view has two tabs: * [Activity](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/operator-view.html?context=cdpaas&locale=enactivity): Use the Activity tab to review all of the deployment activity across spaces. You can sort and filter this view to focus on a particular type of activity, such as failed deployments, or jobs with active runs. You can also review metrics such as the number of deployment spaces with active deployments. * [Spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/operator-view.html?context=cdpaas&locale=enspaces): Use the Spaces tab to list all the spaces that you can access. You can read the overview information, such as the number of deployments and job runs in a space, or click a space name to view details and create deployments or jobs. Viewing activity View the overview information for finished runs, active runs, or online deployments, or drill down to view details. Finished runs The Finished runs section shows activity in jobs over a specified time interval. The default is to view finished jobs for the last 8 hours. It shows jobs that are completed, canceled, or failed across all of your deployment spaces within the specified time frame. Click View finished runs to view a list of runs. The view provides more detail on the finished runs and a visualization that shows run times. ![Viewing detail for finished jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/operator-view3.png) Filter the view to focus on a particular type of activity: ![Filtering job detail](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/operator-view4.png) * Jobs with active runs - Shows jobs that have active runs (running, started, or queued) across all spaces you can access. * Active runs - Shows runs that are in the running, started, or queued state across all jobs you can access. * Jobs with finished runs - Shows jobs with runs that are completed, canceled, or failed. * Finished runs - Shows runs that are completed, canceled, or failed. Active runs The Active runs section displays runs that are currently running or are in the starting or queued state. Click View active runs to view a list of the runs. Online deployments The Deployments section shows all online and R-Shiny deployments, which are sorted into categories for by status. Click View deployments to view the list of deployments that you can access. From any view, you can start from the overview and drill down to see the details for a particular job or run. You can also filter the view to focus on a particular type of deployment. Viewing spaces View a list of spaces that you can access, with overview information such as number of deployments and collaborators. 
Click the name of a space to view details or add assets, and to create new deployments or jobs. Use filters to modify the view from the default list of all spaces to show Active spaces, with deployments or jobs, or Inactive spaces, with no deployments or jobs. Next steps [Use spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html) to organize your deployment activity. Parent topic:[Deploying and managing models and functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html)
# Deployments dashboard # The deployments dashboard provides an aggregate view of deployment activity available to you, across spaces\. You can get a broad view of deployment activity such as the status of job runs or a list of online deployments\. You can also use filters and views to focus on specific job runs or category of runs such as failed runs\. ModelOps or DevOps users can review and monitor the activity for an organization\. ## Accessing the Deployments dashboard ## From the navigation menu, click **Deployments**\. If you don't have any deployment spaces, you are prompted to create a space\. This following illustration shows an example of the Deployments dashboard: ![Deployments dashboard](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/deployment-dashboard.png) The dashboard view has two tabs: <!-- <ul> --> * [**Activity**](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/operator-view.html?context=cdpaas&locale=en#activity): Use the **Activity** tab to review all of the deployment activity across spaces\. You can sort and filter this view to focus on a particular type of activity, such as failed deployments, or jobs with active runs\. You can also review metrics such as the number of deployment spaces with active deployments\. * [**Spaces**](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/operator-view.html?context=cdpaas&locale=en#spaces): Use the **Spaces** tab to list all the spaces that you can access\. You can read the overview information, such as the number of deployments and job runs in a space, or click a space name to view details and create deployments or jobs\. <!-- </ul> --> ## Viewing activity ## View the overview information for finished runs, active runs, or online deployments, or drill down to view details\. ### Finished runs ### The **Finished runs** section shows activity in jobs over a specified time interval\. The default is to view finished jobs for the last 8 hours\. It shows jobs that are completed, canceled, or failed across all of your deployment spaces within the specified time frame\. Click **View finished runs** to view a list of runs\. The view provides more detail on the finished runs and a visualization that shows run times\. ![Viewing detail for finished jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/operator-view3.png) Filter the view to focus on a particular type of activity: ![Filtering job detail](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/operator-view4.png) <!-- <ul> --> * **Jobs with active runs** \- Shows jobs that have active runs (running, started, or queued) across all spaces you can access\. * **Active runs** \- Shows runs that are in the running, started, or queued state across all jobs you can access\. * **Jobs with finished runs** \- Shows jobs with runs that are completed, canceled, or failed\. * **Finished runs** \- Shows runs that are completed, canceled, or failed\. <!-- </ul> --> ### Active runs ### The **Active runs** section displays runs that are currently running or are in the starting or queued state\. Click **View active runs** to view a list of the runs\. ### Online deployments ### The **Deployments** section shows all online and R\-Shiny deployments, which are sorted into categories for by status\. Click **View deployments** to view the list of deployments that you can access\. From any view, you can start from the overview and drill down to see the details for a particular job or run\. 
You can also filter the view to focus on a particular type of deployment\. ## Viewing spaces ## View a list of spaces that you can access, with overview information such as number of deployments and collaborators\. Click the name of a space to view details or add assets, and to create new deployments or jobs\. Use *filters* to modify the view from the default list of all spaces to show **Active spaces**, with deployments or jobs, or **Inactive spaces**, with no deployments or jobs\. ## Next steps ## [Use spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html) to organize your deployment activity\. **Parent topic:**[Deploying and managing models and functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html) <!-- </article "role="article" "> -->
BD995B62F35EC624DA9E86F9A3383B73B54D9ED7
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html?context=cdpaas&locale=en
The parts of a notebook
The parts of a notebook You can see some information about a notebook before you open it on the Assets page of a project. When you open a notebook in edit mode, you can do much more with the notebook by using multiple menu options, toolbars, an information pane, and by editing and running the notebook cells. You can view the following information about a notebook by clicking the Notebooks asset type in the Assets page of your project: * The name of the notebook * The date when the notebook was last modified and the person who made the change * The programming language of the notebook * Whether the notebook is currently locked When you open a notebook in edit mode, the notebook editor includes the following features: * [Menu bar and toolbar](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html?context=cdpaas&locale=enmenu-bar-and-toolbar) * [Notebook action bar](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html?context=cdpaas&locale=ennotebook-action-bar) * [The cells in a Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html?context=cdpaas&locale=enthe-cells-in-a-jupyter-notebook) * [Jupyter Code cells](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html?context=cdpaas&locale=enjupyter-code-cells) * [Jupyter markdown cells](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html?context=cdpaas&locale=enjupyter-markdown-cells) * [Raw Jupyter NBConvert cells](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html?context=cdpaas&locale=enraw-jupyter-nbconvert-cells) * [Spark job progress bar](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html?context=cdpaas&locale=enspark-job-progress-bar) * [Project token for authorization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/token.html) ![menu and toolbar](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/toolbar.png) You can select notebook features that affect the way the notebook functions and perform the most-used operations within the notebook by clicking an icon. Notebook action bar ![Notebook action bar](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/action-bar-Blue.png) You can select features that enhance notebook collaboration. From the action bar, you can: * Publish your notebook as a gist or on GitHub. * Create a permanent URL so that anyone with the link can view your notebook. * Create jobs in which to run your notebook. See [Schedule a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-nb-editor.html). * Download your notebook. * Add a project token so that code can access the project resources. See [Add code to set the project token](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/token.html). * Generate code snippets to add data from a data asset or a connection to a notebook cell. * View your notebook information. You can: * Change the name of your notebook by editing it in the Name field. * Edit the description of your notebook in the Description field. * View the date when the notebook was created. * View the environment details and runtime status; you can change the notebook runtime from here. See [Notebook environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html). 
* Save versions of your notebook. * Upload assets to the project. The cells in a Jupyter notebook A Jupyter notebook consists of a sequence of cells. The flow of a notebook is sequential. You enter code into an input cell, and when you run the cell, the notebook executes the code and prints the output of the computation to an output cell. You can change the code in an input cell and re-run the cell as often as you like. In this way, the notebook follows a read-evaluate-print loop paradigm. You can choose to use tags to describe cells in a notebook. The behavior of a cell is determined by a cell’s type. The different types of cells include: Jupyter code cells Where you can edit and write new code. ![code cells](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/code_cells_notebook_bigger.png) Jupyter markdown cells Where you can document the computational process. You can input headings to structure your notebook hierarchically. You can also add and edit image files as attachments to the notebook. The markdown code and images are rendered when the cell is run. ![markdown cells](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/markdownCells_notebook.png) See [Markdown for Jupyter notebooks cheatsheet](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/markd-jupyter.html). Raw Jupyter NBConvert cells Where you can write output directly or put code that you don’t want to run. Raw cells are not evaluated by the notebook. ![raw convert cells](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/rawconvert_cells_notebook_bigger.png) Spark job progress bar When you run code in a notebook that triggers Spark jobs, it is often challenging to determine why your code is not running efficiently. To help you better understand what your code is doing and assist you in code debugging, you can monitor the execution of the Spark jobs for a code cell. To enable Spark monitoring for a cell in a notebook: * Select the code cell you want to monitor. * Click the Enable Spark Monitoring icon (![Shows the enable Spark monitoring icon.](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ProgressBars-Active.png)) on the notebook toolbar. The progress bars you see display the real time runtime progress of your jobs on the Spark cluster. Each Spark job runs on the cluster in one or more stages, where each stage is a list of tasks that can be run in parallel. The monitoring pane can become very large is the Spark job has many stages. The job monitoring pane also displays the duration of each job and the status of the job stages. A stage can have one of the following statuses: * Running: Stage active and started. * Completed: Stage completed. * Skipped: The results of this stage were cached from a earlier operation and so the task doesn't have to run again. * Pending: Stage hasn't started yet. Click the icon again to disable monitoring in a cell. Note: Spark monitoring is currently only supported in notebooks that run on Python. Parent topic:[Creating notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/creating-notebooks.html)
# The parts of a notebook # You can see some information about a notebook before you open it on the Assets page of a project\. When you open a notebook in edit mode, you can do much more with the notebook by using multiple menu options, toolbars, an information pane, and by editing and running the notebook cells\. You can view the following information about a notebook by clicking the **Notebooks** asset type in the **Assets** page of your project: <!-- <ul> --> * The name of the notebook * The date when the notebook was last modified and the person who made the change * The programming language of the notebook * Whether the notebook is currently locked <!-- </ul> --> When you open a notebook in edit mode, the notebook editor includes the following features: <!-- <ul> --> * [Menu bar and toolbar](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html?context=cdpaas&locale=en#menu-bar-and-toolbar) * [Notebook action bar](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html?context=cdpaas&locale=en#notebook-action-bar) * [The cells in a Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html?context=cdpaas&locale=en#the-cells-in-a-jupyter-notebook) <!-- <ul> --> * [Jupyter Code cells](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html?context=cdpaas&locale=en#jupyter-code-cells) * [Jupyter markdown cells](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html?context=cdpaas&locale=en#jupyter-markdown-cells) * [Raw Jupyter NBConvert cells](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html?context=cdpaas&locale=en#raw-jupyter-nbconvert-cells) <!-- </ul> --> * [Spark job progress bar](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html?context=cdpaas&locale=en#spark-job-progress-bar) * [Project token for authorization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/token.html) <!-- </ul> --> ![menu and toolbar](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/toolbar.png) You can select notebook features that affect the way the notebook functions and perform the most\-used operations within the notebook by clicking an icon\. ## Notebook action bar ## ![Notebook action bar](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/action-bar-Blue.png) You can select features that enhance notebook collaboration\. From the action bar, you can: <!-- <ul> --> * Publish your notebook as a gist or on GitHub\. * Create a permanent URL so that anyone with the link can view your notebook\. * Create jobs in which to run your notebook\. See [Schedule a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-nb-editor.html)\. * Download your notebook\. * Add a project token so that code can access the project resources\. See [Add code to set the project token](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/token.html)\. * Generate code snippets to add data from a data asset or a connection to a notebook cell\. * View your notebook information\. You can: <!-- <ul> --> * Change the name of your notebook by editing it in the **Name** field. * Edit the description of your notebook in the **Description** field. * View the date when the notebook was created. 
* View the environment details and runtime status; you can change the notebook runtime from here. See [Notebook environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html). <!-- </ul> --> * Save versions of your notebook\. * Upload assets to the project\. <!-- </ul> --> ## The cells in a Jupyter notebook ## A Jupyter notebook consists of a sequence of cells\. The flow of a notebook is sequential\. You enter code into an input cell, and when you run the cell, the notebook executes the code and prints the output of the computation to an output cell\. You can change the code in an input cell and re\-run the cell as often as you like\. In this way, the notebook follows a read\-evaluate\-print loop paradigm\. You can choose to use tags to describe cells in a notebook\. The behavior of a cell is determined by a cell’s type\. The different types of cells include: ### Jupyter code cells ### Where you can edit and write new code\. ![code cells](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/code_cells_notebook_bigger.png) ### Jupyter markdown cells ### Where you can document the computational process\. You can input headings to structure your notebook hierarchically\. You can also add and edit image files as attachments to the notebook\. The markdown code and images are rendered when the cell is run\. ![markdown cells](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/markdownCells_notebook.png) See [Markdown for Jupyter notebooks cheatsheet](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/markd-jupyter.html)\. ### Raw Jupyter NBConvert cells ### Where you can write output directly or put code that you don’t want to run\. Raw cells are not evaluated by the notebook\. ![raw convert cells](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/rawconvert_cells_notebook_bigger.png) ## Spark job progress bar ## When you run code in a notebook that triggers Spark jobs, it is often challenging to determine why your code is not running efficiently\. To help you better understand what your code is doing and assist you in code debugging, you can monitor the execution of the Spark jobs for a code cell\. To enable Spark monitoring for a cell in a notebook: <!-- <ul> --> * Select the code cell you want to monitor\. * Click the **Enable Spark Monitoring** icon (![Shows the enable Spark monitoring icon\.](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/ProgressBars-Active.png)) on the notebook toolbar\. <!-- </ul> --> The progress bars display the real\-time progress of your jobs on the Spark cluster\. Each Spark job runs on the cluster in one or more stages, where each stage is a list of tasks that can be run in parallel\. The monitoring pane can become very large if the Spark job has many stages\. The job monitoring pane also displays the duration of each job and the status of the job stages\. A stage can have one of the following statuses: <!-- <ul> --> * `Running`: Stage active and started\. * `Completed`: Stage completed\. * `Skipped`: The results of this stage were cached from an earlier operation and so the task doesn't have to run again\. * `Pending`: Stage hasn't started yet\. <!-- </ul> --> Click the icon again to disable monitoring in a cell\. Note: Spark monitoring is currently only supported in notebooks that run on Python\. 
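For example, with monitoring enabled for the cell, a small aggregation like the following triggers a Spark job that includes a shuffle, so the pane shows more than one stage. This is a minimal sketch that assumes the notebook runs in a Spark environment where a `SparkSession` is available:

```python
# A small aggregation that produces a multi-stage Spark job, so that the
# monitoring pane shows several stages and their statuses.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # reuses the session of the Spark environment

df = spark.range(0, 1_000_000).withColumn("bucket", F.col("id") % 10)
counts = df.groupBy("bucket").count()  # the groupBy forces a shuffle, which adds a stage
counts.show()
```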
**Parent topic:**[Creating notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/creating-notebooks.html) <!-- </article "role="article" "> -->
B3F8FB433FC6730284E636B068A5DE98C002DABD
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=en
Planning your notebooks and scripts experience
Planning your notebooks and scripts experience To make a plan for using Jupyter notebooks and scripts, first understand the choices that you have, the implications of those choices, and how those choices affect the order of implementation tasks. You can perform most notebook and script related tasks with Editor or Admin role in an analytics project. Before you start working with notebooks and scripts, you should consider the following questions as most tasks need to be completed in a particular order: * Which programming language do you want to work in? * What will your notebooks be doing? * What libraries do you want to work with? * How can you use the notebook or script in IBM watsonx? To create a plan for using Jupyter notebooks or scripts, determine which of the following tasks you must complete. Task Mandatory? Timing [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=enproject) Yes This must be your very first task [Adding data assets to the project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=endata-assets) Yes Before you begin creating notebooks [Picking a programming language](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=enprogramming-lang) Yes Before you select the tool [Selecting a tool](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=enselect-tool) Yes After you've picked the language [Checking the library packages](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=enprogramming-libs) Yes Before you select a runtime environment [Choosing an appropriate runtime environment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=enruntime-env) Yes Before you open the development environment [Managing the notebooks and scripts lifecycle](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=enmanage-lifecycle) No When the notebook or script is ready [Uses for notebooks and scripts after creation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=enuse-options) No When the notebook is ready Creating a project You need to create a project before you can start working in notebooks. Projects You can create an empty project, one from file, or from URL. In this project: * You can use the Juypter Notebook and RStudio. * Notebooks are assets in the project. * Notebook collaboration is based on locking by user at the project level. * R scripts and Shiny apps are not assets in the project. * There is no collaboration on R scripts or Shiny apps. Picking a programming language You can choose to work in the following languages: Notebooks : Python and R Scripts : R scripts and R Shiny apps Selecting a tool In IBM watsonx, you can work with notebook and scripts in the following tool: Juypter Notebook editor : In the Juypter Notebook editor, you can create Python or R notebooks. Notebooks are assets in a project. Collaboration is only at the project level. The notebook is locked by a user when opened and can only be unlocked by the same user or a project admin. RStudio : In RStudio, you can create R scripts and Shiny apps. 
R scripts are not assets in a project, which means that there is no collaboration at the project level. Checking the library packages When you open a notebook in a runtime environment, you have access to a large selection of preinstalled data science library packages. Many environments also include libraries provided by IBM at no extra charge, such as the Watson Natural Language Processing library in Python environments, libraries to help you access project assets, or libraries for time series or geo-spatial analysis in Spark environments. For a list of the library packages and the versions included in an environment template, select the template on the Templates page from the Manage tab on the project's Environments page. If libraries are missing in a template, you can add them: Through the notebook or script : You can use familiar package install commands for your environment. For example, in Python notebooks, you can use mamba, conda or pip. By creating a custom environment template : When you create a custom template, you can create a software customization and add the libraries you want to include. For details, see [Customizing environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html). Choosing a runtime environment Choosing the compute environment for your notebook depends on the amount of data you want to process and the complexity of the data analysis processes. Watson Studio offers many default environment templates with different hardware sizes and software configurations to help you quickly get started, without having to create your own templates. These included templates are listed on the Templates page from the Manage tab on the project's Environments page. For more information about the included environments, see [Environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html). If the available templates don't suit your needs, you can create custom templates and determine the hardware size and software configuration. For details, see [Customizing environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html). Important: Make sure that the environment has enough memory to store the data that you load to the notebook. Oftentimes this means that the environment must have significantly more memory than the total size of the data loaded to the notebook because some data frameworks, like pandas, can hold multiple copies of the data in memory. Working with data To work with data in a notebook, you need to: * Add the data to your project, which turns the data into a project asset. See [Adding data to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj//manage-data/add-data-project.html) for the different methods for adding data to a project. * Use generated code that loads data from the asset to a data structure in your notebook. For a list of the supported data types, see [Data load support](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-load-support.html). * Write your own code to load data if the data source isn't added as a project asset or support for adding generated code isn't available for the project asset. Managing the notebooks and scripts lifecycle After you have created and tested a notebook in your tool, you can: * Publish it to a catalog so that other catalog members can use the notebook in their projects. 
See [Publishing assets from a project into a catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/publish-asset-project.html). * Share a read-only copy outside of Watson Studio so that people who aren't collaborators in your projects can see and use it. See [Sharing notebooks with a URL](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/share-notebooks.html). * Publish it to a GitHub repository. See [Publishing notebooks on GitHub](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/github-integration.html). * Publish it as a gist. See [Publishing a notebook as a gist](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-gist.html). R scripts and Shiny apps can't be published or shared using functionality in a project. Uses for notebooks and scripts after creation The options for a notebook that is created and ready to use in IBM watsonx include: * Running it as a job in a project. See [Creating and managing jobs in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html). * Running it as part of a Watson Pipeline. See [Configuring pipeline nodes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.html). To ensure that a notebook can be run as a job or in a pipeline: * Ensure that no cells require interactive input by a user. * Ensure that the notebook logs enough detailed information to enable understanding the progress and any failures by looking at the log. * Use environment variables in the code to access configurations if a notebook or script requires them, for example the input data file or the number of training runs. * Using the Watson Machine Learning Python client to build, train and then deploy your models. See [Watson Machine Learning Python client samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html). * Using the Watson Machine Learning REST API to build, train and then deploy your models. R scripts and Shiny apps can only be created and used in the RStudio IDE in IBM watsonx. You can't create jobs for R scripts or R Shiny deployments. Parent topic:[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html)
# Planning your notebooks and scripts experience # To make a plan for using Jupyter notebooks and scripts, first understand the choices that you have, the implications of those choices, and how those choices affect the order of implementation tasks\. You can perform most notebook and script related tasks with Editor or Admin role in an analytics project\. Before you start working with notebooks and scripts, you should consider the following questions as most tasks need to be completed in a particular order: <!-- <ul> --> * Which programming language do you want to work in? * What will your notebooks be doing? * What libraries do you want to work with? * How can you use the notebook or script in IBM watsonx? <!-- </ul> --> To create a plan for using Jupyter notebooks or scripts, determine which of the following tasks you must complete\. <!-- <table> --> | Task | Mandatory? | Timing | | ------------------------------------------------- | ---------- | ------------------------------------------- | | [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=en#project) | Yes | This must be your very first task | | [Adding data assets to the project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=en#data-assets) | Yes | Before you begin creating notebooks | | [Picking a programming language](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=en#programming-lang) | Yes | Before you select the tool | | [Selecting a tool](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=en#select-tool) | Yes | After you've picked the language | | [Checking the library packages](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=en#programming-libs) | Yes | Before you select a runtime environment | | [Choosing an appropriate runtime environment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=en#runtime-env) | Yes | Before you open the development environment | | [Managing the notebooks and scripts lifecycle](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=en#manage-lifecycle) | No | When the notebook or script is ready | | [Uses for notebooks and scripts after creation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=en#use-options) | No | When the notebook is ready | <!-- </table ""> --> ## Creating a project ## You need to create a project before you can start working in notebooks\. **Projects** You can create an empty project, one from file, or from URL\. In this project: <!-- <ul> --> * You can use the Juypter Notebook and RStudio\. * Notebooks are assets in the project\. * Notebook collaboration is based on locking by user at the project level\. * R scripts and Shiny apps are not assets in the project\. * There is no collaboration on R scripts or Shiny apps\. 
<!-- </ul> --> ## Picking a programming language ## You can choose to work in the following languages: **Notebooks** : Python and R **Scripts** : R scripts and R Shiny apps ## Selecting a tool ## In IBM watsonx, you can work with notebooks and scripts in the following tools: **Jupyter Notebook editor** : In the Jupyter Notebook editor, you can create Python or R notebooks\. Notebooks are assets in a project\. Collaboration is only at the project level\. The notebook is locked by a user when opened and can only be unlocked by the same user or a project admin\. **RStudio** : In RStudio, you can create R scripts and Shiny apps\. R scripts are not assets in a project, which means that there is no collaboration at the project level\. ## Checking the library packages ## When you open a notebook in a runtime environment, you have access to a large selection of preinstalled data science library packages\. Many environments also include libraries provided by IBM at no extra charge, such as the Watson Natural Language Processing library in Python environments, libraries to help you access project assets, or libraries for time series or geo\-spatial analysis in Spark environments\. For a list of the library packages and the versions included in an environment template, select the template on the **Templates** page from the **Manage** tab on the project's **Environments** page\. If libraries are missing in a template, you can add them: **Through the notebook or script** : You can use familiar package install commands for your environment\. For example, in Python notebooks, you can use `mamba`, `conda` or `pip`\. **By creating a custom environment template** : When you create a custom template, you can create a software customization and add the libraries you want to include\. For details, see [Customizing environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html)\. ## Choosing a runtime environment ## Choosing the compute environment for your notebook depends on the amount of data you want to process and the complexity of the data analysis processes\. Watson Studio offers many default environment templates with different hardware sizes and software configurations to help you quickly get started, without having to create your own templates\. These included templates are listed on the **Templates** page from the **Manage** tab on the project's **Environments** page\. For more information about the included environments, see [Environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html)\. If the available templates don't suit your needs, you can create custom templates and determine the hardware size and software configuration\. For details, see [Customizing environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html)\. Important: Make sure that the environment has enough memory to store the data that you load to the notebook\. Oftentimes this means that the environment must have significantly more memory than the total size of the data loaded to the notebook because some data frameworks, like pandas, can hold multiple copies of the data in memory\. ## Working with data ## To work with data in a notebook, you need to: <!-- <ul> --> * Add the data to your project, which turns the data into a project asset\. 
See [Adding data to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj//manage-data/add-data-project.html) for the different methods for adding data to a project\. * Use generated code that loads data from the asset to a data structure in your notebook\. For a list of the supported data types, see [Data load support](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-load-support.html)\. * Write your own code to load data if the data source isn't added as a project asset or support for adding generated code isn't available for the project asset\. <!-- </ul> --> ## Managing the notebooks and scripts lifecycle ## After you have created and tested a notebook in your tool, you can: <!-- <ul> --> * Publish it to a catalog so that other catalog members can use the notebook in their projects\. See [Publishing assets from a project into a catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/publish-asset-project.html)\. * Share a read\-only copy outside of Watson Studio so that people who aren't collaborators in your projects can see and use it\. See [Sharing notebooks with a URL](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/share-notebooks.html)\. * Publish it to a GitHub repository\. See [Publishing notebooks on GitHub](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/github-integration.html)\. * Publish it as a gist\. See [Publishing a notebook as a gist](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-gist.html)\. <!-- </ul> --> R scripts and Shiny apps can't be published or shared using functionality in a project\. ## Uses for notebooks and scripts after creation ## The options for a notebook that is created and ready to use in IBM watsonx include: <!-- <ul> --> * Running it as a job in a project\. See [Creating and managing jobs in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html)\. * Running it as part of a Watson Pipeline\. See [Configuring pipeline nodes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.html)\. To ensure that a notebook can be run as a job or in a pipeline: <!-- <ul> --> * Ensure that no cells require interactive input by a user. * Ensure that the notebook logs enough detailed information to enable understanding the progress and any failures by looking at the log. * Use environment variables in the code to access configurations if a notebook or script requires them, for example the input data file or the number of training runs. <!-- </ul> --> * Using the Watson Machine Learning Python client to build, train and then deploy your models\. See [Watson Machine Learning Python client samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html)\. * Using the Watson Machine Learning REST API to build, train and then deploy your models\. <!-- </ul> --> R scripts and Shiny apps can only be created and used in the RStudio IDE in IBM watsonx\. You can't create jobs for R scripts or R Shiny deployments\. **Parent topic:**[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html) <!-- </article "role="article" "> -->
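To complement the "Working with data" guidance in this topic, the following sketch loads a project data asset into a pandas DataFrame with the `ibm-watson-studio-lib` library. It assumes that a project token is available and that a CSV asset exists in the project; the token placeholder and the asset name `credit_risk.csv` are illustrative, and the generated code in your own notebook may use a different pattern.

    import pandas as pd
    from ibm_watson_studio_lib import access_project_or_space

    # "<ProjectToken>" is a placeholder for an inserted project token.
    wslib = access_project_or_space({"token": "<ProjectToken>"})

    # Load a CSV data asset into a pandas DataFrame; "credit_risk.csv" is an example asset name.
    buffer = wslib.load_data("credit_risk.csv")
    df = pd.read_csv(buffer)
    df.head()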
1483016BE71021F31B8193239D319F34D8E01C9C
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html?context=cdpaas&locale=en
Supported machine learning tools, libraries, frameworks, and software specifications
Supported machine learning tools, libraries, frameworks, and software specifications In IBM Watson Machine Learning, you can use popular tools, libraries, and frameworks to train and deploy machine learning models and functions. The environment for these models and functions is made up of specific hardware and software specifications. Software specifications define the language and version that you use for a model or function. You can use software specifications to configure the software that is used for running your models and functions. By using software specifications, you can precisely define the software version to be used and include your own extensions (for example, by using conda .yml files or custom libraries). You can get a list of available software and hardware specifications and then use their names and IDs for use with your deployment. For more information, see [Python client](https://ibm.github.io/watson-machine-learning-sdk/) or [REST API](https://cloud.ibm.com/apidocs/machine-learning). Predefined software specifications You can use popular tools, libraries, and frameworks to train and deploy machine learning models and functions. This table lists the predefined (base) model types and software specifications. List of predefined (base) model types and software specifications Framework** Versions Model Type Default software specification AutoAI 0.1 NA autoai-kb_rt22.2-py3.10 <br>autoai-ts_rt22.2-py3.10 <br>hybrid_0.1 <br>autoai-kb_rt23.1-py3.10 <br>autoai-ts_rt23.1-py3.10 <br>autoai-tsad_rt23.1-py3.10 <br>autoai-tsad_rt22.2-py3.10 Decision Optimization 20.1 do-docplex_20.1 <br>do-opl_20.1 <br>do-cplex_20.1 <br>do-cpo_20.1 do_20.1 Decision Optimization 22.1 do-docplex_22.1 <br>do-opl_22.1 <br>do-cplex_22.1 <br>do-cpo_22.1 do_22.1 Hybrid/AutoML 0.1 wml-hybrid_0.1 hybrid_0.1 PMML 3.0 to 4.3 pmml. (or) pmml..*3.0 - 4.3 pmml-3.0_4.3 PyTorch 1.12 pytorch-onnx_1.12 <br>pytorch-onnx_rt22.2 runtime-22.2-py3.10 <br>pytorch-onnx_rt22.2-py3.10 <br>pytorch-onnx_rt22.2-py3.10-edt PyTorch 2.0 pytorch-onnx_2.0 <br>pytorch-onnx_rt23.1 runtime-23.1-py3.10 <br>pytorch-onnx_rt23.1-py3.10 <br>pytorch-onnx_rt23.1-py3.10-edt <br>pytorch-onnx_rt23.1-py3.10-dist Python Functions 0.1 NA runtime-22.2-py3.10 <br>runtime-23.1-py3.10 Python Scripts 0.1 NA runtime-22.2-py3.10 <br>runtime-23.1-py3.10 Scikit-learn 1.1 scikit-learn_1.1 runtime-22.2-py3.10 <br>runtime-23.1-py3.10 Spark 3.3 mllib_3.3 spark-mllib_3.3 SPSS 17.1 spss-modeler_17.1 spss-modeler_17.1 SPSS 18.1 spss-modeler_18.1 spss-modeler_18.1 SPSS 18.2 spss-modeler_18.2 spss-modeler_18.2 Tensorflow 2.9 tensorflow_2.9 <br>tensorflow_rt22.2 runtime-22.2-py3.10 <br>tensorflow_rt22.2-py3.10 Tensorflow 2.12 tensorflow_2.12 <br>tensorflow_rt23.1 runtime-23.1-py3.10 <br>tensorflow_rt23.1-py3.10-dist <br>tensorflow_rt23.1-py3.10-edt <br>tensorflow_rt23.1-py3.10 XGBoost 1.6 xgboost_1.6 or scikit-learn_1.1 (see notes) runtime-22.2-py3.10 <br>runtime-23.1-py3.10 When you have assets that rely on discontinued software specifications or frameworks, in some cases the migration is seamless. In other cases, your action is required to retrain or redeploy assets. * Existing deployments of models that are built with discontinued framework versions or software specifications are removed on the date of discontinuation. * No new deployments of models that are built with discontinued framework versions or software specifications are allowed. 
Learn more * To learn more about how to customize software specifications, see [Customizing with third-party and private Python libraries](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-create-custom-software-spec.html). * To learn more about how to use and customize environments, see [Environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html). * To learn more about how to use software specifications for deployments, see the following Jupyter notebooks: * [Using REST API and cURL](https://github.com/IBM/watson-machine-learning-samples/tree/master/cloud/notebooks/rest_api/curl/deployments) * [Using the Python client](https://github.com/IBM/watson-machine-learning-samples/tree/master/cloud/notebooks/python_sdk/deployments) Parent topic:[Frameworks and software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-frame-and-specs.html)
# Supported machine learning tools, libraries, frameworks, and software specifications # In IBM Watson Machine Learning, you can use popular tools, libraries, and frameworks to train and deploy machine learning models and functions\. The environment for these models and functions is made up of specific hardware and software specifications\. Software specifications define the language and version that you use for a model or function\. You can use software specifications to configure the software that is used for running your models and functions\. By using software specifications, you can precisely define the software version to be used and include your own extensions (for example, by using conda \.yml files or custom libraries)\. You can get a list of available software and hardware specifications and then use their names and IDs for use with your deployment\. For more information, see [Python client](https://ibm.github.io/watson-machine-learning-sdk/) or [REST API](https://cloud.ibm.com/apidocs/machine-learning)\. ## Predefined software specifications ## You can use popular tools, libraries, and frameworks to train and deploy machine learning models and functions\. This table lists the predefined (base) model types and software specifications\. <!-- <table> --> List of predefined (base) model types and software specifications | Framework\*\* | Versions | Model Type | Default software specification | | --------------------- | ------------ | -------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | AutoAI | 0\.1 | NA | autoai\-kb\_rt22\.2\-py3\.10 <br>autoai\-ts\_rt22\.2\-py3\.10 <br>hybrid\_0\.1 <br>autoai\-kb\_rt23\.1\-py3\.10 <br>autoai\-ts\_rt23\.1\-py3\.10 <br>autoai\-tsad\_rt23\.1\-py3\.10 <br>autoai\-tsad\_rt22\.2\-py3\.10 | | Decision Optimization | 20\.1 | do\-docplex\_20\.1 <br>do\-opl\_20\.1 <br>do\-cplex\_20\.1 <br>do\-cpo\_20\.1 | do\_20\.1 | | Decision Optimization | 22\.1 | do\-docplex\_22\.1 <br>do\-opl\_22\.1 <br>do\-cplex\_22\.1 <br>do\-cpo\_22\.1 | do\_22\.1 | | Hybrid/AutoML | 0\.1 | wml\-hybrid\_0\.1 | hybrid\_0\.1 | | PMML | 3\.0 to 4\.3 | pmml*\. 
(or) pmml*\.\.\*3\.0 \- 4\.3 | pmml\-3\.0\_4\.3 | | PyTorch | 1\.12 | pytorch\-onnx\_1\.12 <br>pytorch\-onnx\_rt22\.2 | runtime\-22\.2\-py3\.10 <br>pytorch\-onnx\_rt22\.2\-py3\.10 <br>pytorch\-onnx\_rt22\.2\-py3\.10\-edt | | PyTorch | 2\.0 | pytorch\-onnx\_2\.0 <br>pytorch\-onnx\_rt23\.1 | runtime\-23\.1\-py3\.10 <br>pytorch\-onnx\_rt23\.1\-py3\.10 <br>pytorch\-onnx\_rt23\.1\-py3\.10\-edt <br>pytorch\-onnx\_rt23\.1\-py3\.10\-dist | | Python Functions | 0\.1 | NA | runtime\-22\.2\-py3\.10 <br>runtime\-23\.1\-py3\.10 | | Python Scripts | 0\.1 | NA | runtime\-22\.2\-py3\.10 <br>runtime\-23\.1\-py3\.10 | | Scikit\-learn | 1\.1 | scikit\-learn\_1\.1 | runtime\-22\.2\-py3\.10 <br>runtime\-23\.1\-py3\.10 | | Spark | 3\.3 | mllib\_3\.3 | spark\-mllib\_3\.3 | | SPSS | 17\.1 | spss\-modeler\_17\.1 | spss\-modeler\_17\.1 | | SPSS | 18\.1 | spss\-modeler\_18\.1 | spss\-modeler\_18\.1 | | SPSS | 18\.2 | spss\-modeler\_18\.2 | spss\-modeler\_18\.2 | | Tensorflow | 2\.9 | tensorflow\_2\.9 <br>tensorflow\_rt22\.2 | runtime\-22\.2\-py3\.10 <br>tensorflow\_rt22\.2\-py3\.10 | | Tensorflow | 2\.12 | tensorflow\_2\.12 <br>tensorflow\_rt23\.1 | runtime\-23\.1\-py3\.10 <br>tensorflow\_rt23\.1\-py3\.10\-dist <br>tensorflow\_rt23\.1\-py3\.10\-edt <br>tensorflow\_rt23\.1\-py3\.10 | | XGBoost | 1\.6 | xgboost\_1\.6 or scikit\-learn\_1\.1 (see notes) | runtime\-22\.2\-py3\.10 <br>runtime\-23\.1\-py3\.10 | <!-- </table ""> --> When you have assets that rely on discontinued software specifications or frameworks, in some cases the migration is seamless\. In other cases, your action is required to retrain or redeploy assets\. <!-- <ul> --> * Existing deployments of models that are built with discontinued framework versions or software specifications are removed on the date of discontinuation\. * No new deployments of models that are built with discontinued framework versions or software specifications are allowed\. <!-- </ul> --> ## Learn more ## <!-- <ul> --> * To learn more about how to customize software specifications, see [Customizing with third\-party and private Python libraries](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-create-custom-software-spec.html)\. * To learn more about how to use and customize environments, see [Environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html)\. * To learn more about how to use software specifications for deployments, see the following Jupyter notebooks: <!-- <ul> --> * [Using REST API and cURL](https://github.com/IBM/watson-machine-learning-samples/tree/master/cloud/notebooks/rest_api/curl/deployments) * [Using the Python client](https://github.com/IBM/watson-machine-learning-samples/tree/master/cloud/notebooks/python_sdk/deployments) <!-- </ul> --> <!-- </ul> --> **Parent topic:**[Frameworks and software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-frame-and-specs.html) <!-- </article "role="article" "> -->
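As a starting point for working with these specifications programmatically, the following sketch lists them with the `ibm-watson-machine-learning` Python client and looks up an ID by name. The credentials and space ID are placeholders, and method names can differ slightly between client releases, so confirm them against the Python client documentation linked above.

    from ibm_watson_machine_learning import APIClient

    # Placeholder credentials; use your own IBM Cloud API key and regional endpoint.
    wml_credentials = {
        "apikey": "<your-api-key>",
        "url": "https://us-south.ml.cloud.ibm.com",
    }
    client = APIClient(wml_credentials)
    client.set.default_space("<your-deployment-space-id>")

    # Print the available software and hardware specifications.
    client.software_specifications.list()
    client.hardware_specifications.list()

    # Resolve a specification name from the table above to an ID for deployment metadata.
    sw_spec_id = client.software_specifications.get_id_by_name("runtime-23.1-py3.10")
    print(sw_spec_id)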
6406A3BCB4E9210A9FB00AF248F11F392AF5C205
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/promote-envs.html?context=cdpaas&locale=en
Promoting an environment template to a space
Promoting an environment template to a space If you created an environment template and associated it with an asset that you promoted to a deployment space, you can also promote the environment template to the same space. Promoting the environment template to the same space enables running the asset in the same environment that was used in the project. You can only promote environment templates that you created. To promote an environment template associated with an asset that you promoted to a deployment space: 1. From the Manage tab of your project on the Environments page under Templates, select the custom environment template and click Actions > Promote. 2. Select the space that you promoted your asset to as the target deployment space and optionally provide a description and tags. Parent topic:[Environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html)
# Promoting an environment template to a space # If you created an environment template and associated it with an asset that you promoted to a deployment space, you can also promote the environment template to the same space\. Promoting the environment template to the same space enables running the asset in the same environment that was used in the project\. You can only promote environment templates that you created\. To promote an environment template associated with an asset that you promoted to a deployment space: <!-- <ol> --> 1. From the **Manage** tab of your project on the **Environments** page under **Templates**, select the custom environment template and click **Actions > Promote**\. 2. Select the space that you promoted your asset to as the target deployment space and optionally provide a description and tags\. <!-- </ol> --> **Parent topic:**[Environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) <!-- </article "role="article" "> -->
B2117B2CD0FEA469149B23FACB6A9F7F32905AFD
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/prompt-template-deploy.html?context=cdpaas&locale=en
Deploying a prompt template
Deploying a prompt template Deploy a prompt template so you can add it to a business workflow or so you can evaluate the prompt template to measure performance. Before you begin Save a prompt template that contains at least one variable as a project asset. See [Building reusable prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html). Promote a prompt template to a deployment space To deploy a prompt template, complete the following steps: 1. Open the project containing the prompt template. 2. Click Promote to space for the template. ![Promoting a prompt template to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/xgov-deploy-prompt1.png) 3. In the Target deployment space field, choose a deployment space or create a new space. Note the following: The deployment space must be associated with a machine learning instance that is in the same account as the project where the prompt template was created. If you don't have a deployment space, choose Create a new deployment space, and then follow the steps in [Creating deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-create.html). If you plan to evaluate the prompt template in the space, the recommended Deployment stage type for the space is Production. For more information on evaluating, see [Evaluating a prompt template in a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt-spaces.html). Note: The deployment space stage cannot be changed after the space is created. 4. Tip: Select View deployment in deployment space after creating. Otherwise, you need to take more steps to find your deployed asset. 5. From the Assets tab of the deployment space, click Deploy. You create an online deployment, which means you can send data to the endpoint and receive a response in real-time. ![Deploying a prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/xgov-deploy-prompt2.png) 6. Optional: In the Deployment serving name field, add a unique label for the deployment. The serving name is used in the URL for the API endpoint that identifies your deployment. Adding a name is helpful because the human-readable name that you add replaces a long, system-generated unique ID that is assigned otherwise. The serving name also abstracts the deployment from its service instance details. Applications refer to this name, which allows for the underlying service instance to be changed without impacting users. The name can have up to 36 characters. The supported characters are [a-z,0-9,_]. The name must be unique across the IBM Cloud region. You might be prompted to change the serving name if the name you choose is already in use. Testing the deployed prompt template After the deployment successfully completes, click the deployment name to view the deployment. ![Deploying a prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/xgov-deploy-prompt3.png) * API reference tab includes the API endpoints and code snippets that you need to add this prompt template to an application. * Test tab supports testing the prompt template. Enter test data as text, streamed text, or in a JSON file. If the watsonx.governance service is enabled, you also see these tabs: * Evaluate provides the tools for evaluating the prompt template in the space. Click Activate to choose the dimensions to evaluate. 
For details, see [Evaluating prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt.html). * AI Factsheets displays all of the metadata that is collected for the prompt template. Use these details for tracking the prompt template for governance and compliance goals. See [Tracking prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-track-prompt-temp.html). For more information, see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html). Learn more * [Tracking prompt templates ](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-track-prompt-temp.html) * [Evaluating a prompt template in a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt-spaces.html) * [Security and privacy for foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html) Parent topic:[Deploying and managing assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html)
# Deploying a prompt template # Deploy a prompt template so you can add it to a business workflow or so you can evaluate the prompt template to measure performance\. ## Before you begin ## Save a prompt template that contains at least one variable as a project asset\. See [Building reusable prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html)\. ## Promote a prompt template to a deployment space ## To deploy a prompt template, complete the following steps: <!-- <ol> --> 1. Open the project containing the prompt template\. 2. Click **Promote to space** for the template\. ![Promoting a prompt template to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/xgov-deploy-prompt1.png) 3. In the *Target deployment space* field, choose a deployment space or create a new space\. Note the following: The deployment space must be associated with a machine learning instance that is in the same account as the project where the prompt template was created. If you don't have a deployment space, choose **Create a new deployment space**, and then follow the steps in [Creating deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-create.html). If you plan to evaluate the prompt template in the space, the recommended **Deployment stage** type for the space is *Production*. For more information on evaluating, see [Evaluating a prompt template in a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt-spaces.html). Note: The deployment space stage cannot be changed after the space is created. <!-- </ol> --> <!-- <ol> --> 4. **Tip**: Select **View deployment in deployment space after creating**\. Otherwise, you need to take more steps to find your deployed asset\. 5. From the **Assets** tab of the deployment space, click **Deploy**\. You create an online deployment, which means you can send data to the endpoint and receive a response in real\-time\. ![Deploying a prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/xgov-deploy-prompt2.png) 6. Optional: In the *Deployment serving name* field, add a unique label for the deployment\. The serving name is used in the URL for the API endpoint that identifies your deployment. Adding a name is helpful because the human-readable name that you add replaces a long, system-generated unique ID that is assigned otherwise. The serving name also abstracts the deployment from its service instance details. Applications refer to this name, which allows for the underlying service instance to be changed without impacting users. The name can have up to 36 characters. The supported characters are \[a-z,0-9,\_\]. The name must be unique across the IBM Cloud region. You might be prompted to change the serving name if the name you choose is already in use. <!-- </ol> --> ## Testing the deployed prompt template ## After the deployment successfully completes, click the deployment name to view the deployment\. ![Deploying a prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/xgov-deploy-prompt3.png) <!-- <ul> --> * **API reference** tab includes the API endpoints and code snippets that you need to add this prompt template to an application\. * **Test** tab supports testing the prompt template\. Enter test data as text, streamed text, or in a JSON file\. 
<!-- </ul> --> If the watsonx\.governance service is enabled, you also see these tabs: <!-- <ul> --> * **Evaluate** provides the tools for evaluating the prompt template in the space\. Click **Activate** to choose the dimensions to evaluate\. For details, see [Evaluating prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt.html)\. * **AI Factsheets** displays all of the metadata that is collected for the prompt template\. Use these details for tracking the prompt template for governance and compliance goals\. See [Tracking prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-track-prompt-temp.html)\. <!-- </ul> --> For more information, see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html)\. ## Learn more ## <!-- <ul> --> * [Tracking prompt templates ](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-track-prompt-temp.html) * [Evaluating a prompt template in a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt-spaces.html) * [Security and privacy for foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html) <!-- </ul> --> **Parent topic:**[Deploying and managing assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html) <!-- </article "role="article" "> -->
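To illustrate what the API reference tab provides, here is a sketch of calling an online prompt template deployment from Python over REST. The region URL, deployment ID, IAM token, version date, and the prompt variable name `country` are all placeholders; copy the exact endpoint and payload from the API reference tab of your own deployment, because the path and `version` parameter can differ.

    import requests

    # Placeholders: take the real endpoint and token from the deployment's API reference tab.
    url = (
        "https://us-south.ml.cloud.ibm.com/ml/v1/deployments/"
        "<deployment-id>/text/generation?version=2023-05-29"
    )
    headers = {
        "Authorization": "Bearer <IAM-access-token>",
        "Content-Type": "application/json",
    }
    # Prompt variables must match the variables that are defined in the prompt template.
    payload = {"parameters": {"prompt_variables": {"country": "Canada"}}}

    response = requests.post(url, headers=headers, json=payload)
    print(response.json())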
0A507FF5262BAD7A3FB3F3C478388CFF78949941
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=en
Managing feature groups with assetframe-lib for Python (beta)
Managing feature groups with assetframe-lib for Python (beta) You can use the assetframe-lib to create, view and edit feature group information for data assets in Watson Studio notebooks. Feature groups define additional metadata on columns of your data asset that can be used in downstream Machine Learning tasks. See [Managing feature groups](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html) for more information about using feature groups in the UI. Setting up the assetframe-lib and ibm-watson-studio-lib libraries The assetframe-lib library for Python is pre-installed and can be imported directly in a notebook in Watson Studio. However, it relies on the [ibm-watson-studio-lib](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/using-ibm-ws-lib.html) library. The following steps describe how to set up both libraries. To insert the project token to your notebook: 1. Click the More icon on your notebook toolbar and then click Insert project token. If a project token exists, a cell is added to your notebook with the following information: from ibm_watson_studio_lib import access_project_or_space wslib = access_project_or_space({"token":"<ProjectToken>"}) <ProjectToken> is the value of the project token. If you are told in a message that no project token exists, click the link in the message to be redirected to the project's Access Control page where you can create a project token. You must be eligible to create a project token. To create a project token: 1. From the Manage tab, select the Access Control page, and click New access token under Access tokens. 2. Enter a name, select Editor role for the project, and create a token. 3. Go back to your notebook, click the More icon on the notebook toolbar and then click Insert project token. 2. Import assetframe-lib and initialize it with the created ibm-watson-studio-lib instance. 
from assetframe_lib import AssetFrame AssetFrame._wslib = wslib The assetframe-lib functions and methods The assetframe-lib library exposes a set of functions and methods that are grouped in the following way: * [Creating an asset frame](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=encreate-assetframe) * [Creating, retrieving and removing features](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=encreate-features) * [Specifying feature attributes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=enspecify-featureatt) * [Role](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=enrole) * [Description](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=endescription) * [Fairness information for favorable and unfavorable outcomes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=enfairnessinfo) * [Fairness information for monitored and reference groups](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=enmonitoredreference) * [Value descriptions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=envalue-desc) * [Recipe](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=enrecipe) * [Tags](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=entags) * [Previewing data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=enpreview-data) * [Getting fairness information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=enget-fairness) Creating an asset frame An asset frame is used to define feature group metadata on an existing data asset or on a pandas DataFrame. You can have exactly one feature group for each asset. If you create an asset frame on a pandas DataFrame, you can store the pandas DataFrame along with the feature group metadata as a data asset in your project. You can use one of the following functions to create your asset frame: * AssetFrame.from_data_asset(asset_name, create_default_features=False) This function creates a new asset frame wrapping an existing data asset in your project. If there is already a feature group for this asset, for example created in the user interface, it is read from the asset metadata. Parameters: - asset_name: (Required) The name of a data asset in your project. - create_default_features: (Optional) Creates features for all columns in the data asset. * AssetFrame.from_pandas(name, dataframe, create_default_features=False) This function creates a new asset frame wrapping a pandas DataFrame. Parameters: * name: (Required) The name of the asset frame. This name will be used as the name of the data asset if you store your feature group in your project in a later step. * dataframe: (Required) A pandas DataFrame that you want to store along with feature group information. * create_default_features: (Optional) Create features for all columns in the dataframe. Example of creating a asset frame from a pandas DataFrame: Create an asset frame from a pandas DataFrame and set the name of the asset frame. 
af = AssetFrame.from_pandas(dataframe=credit_risk_df, name="Credit Risk Training Data") Creating, retrieving and removing features A feature defines metadata that can be used by downstream Machine Learning tasks. You can create one feature per column in your data set. You can use one of the following functions to create, retrieve or remove columns from your asset frame: * add_feature(column_name, role='Input') This function adds a new feature to your asset frame with the given role. Parameters: * column_name: (Required) The name of the column to create a feature for. * role: (Optional) The role of the feature. It defaults to Input. Valid roles are: * Input: The input for a machine learning model * Target: The target of a prediction model * Identifier: The identifier of a row in your data set. * create_default_features() This function creates features for all columns in your data set. The roles of the features will default to Input. * get_features() This function retrieves all features of the asset frame. * get_feature(column_name) This function retrieves the feature for the given column name. Parameters: * column_name: (Required) The string name of the column to create the feature for. * get_features_by_role(role) This function retrieves all features of the dataframe with the given role. Parameters: * role: (Required) The role that the features must have. This can be Input, Target or Identifier. * remove_feature(feature_or_column_name) This function removes the feature from the asset frame. Parameters: * feature_or_column_name: (Required) A feature or the name of the column to remove the feature for. Example that shows creating features for all columns in the data set and retrieving one of those columns for further specifications: Create features for all columns in the data set and retrieve a column for further specifications. af.create_default_features() risk_feat = af.get_feature('Risk') Specifying feature attributes Features specify additional metadata on columns that may be used in downstream Machine Learning tasks. You can use the following function to retrieve the column that the feature is defined for: * get_column_name() This function retrieves the column name that the feature is defined for. Role The role specifies the intended usage of the feature in a Machine Learning task. Valid roles are: * Input: The feature can be used as an input to a Machine Learning model. * Identifier: The feature uniquely identifies a row in the data set. * Target: The feature can be used as a target in a prediction algorithm. At this time, a feature must have exactly one role. You can use the following methods to work with the role: * set_roles(roles) This method sets the roles of the feature. Parameters: * roles : (Required) The roles to be used. Either as a single string or an array of strings. * get_roles() This method returns all roles of the feature. Example that shows getting a feature and setting a role: Set the role of the feature 'Risk' to 'Target' to use it as a target in a prediction model. risk_feat = af.get_feature('Risk') risk_feat.set_roles('Target') Description An optional description of the feature. It defaults to None. You can use the following methods to work with the description. * set_description(description) This method sets the description of the feature. Parameters: * description: (Required) Either a string or None to remove the description. * get_description() This method returns the description of the feature. 
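Example that shows setting and reading a description (an added illustration that only combines the set_description and get_description methods above; the feature 'Age' and the description text are sample values):

    # Set and read a description for the feature 'Age'.
    age_feat = af.get_feature('Age')
    age_feat.set_description('Age of the applicant in years')
    print(age_feat.get_description())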
Fairness information for favorable and unfavorable outcomes You can specify favorable and unfavorable labels for a feature with a Target role. You can use the following methods to set and retrieve favorable or unfavorable labels. Favorable outcomes You can use the following methods to set and get favorable labels: * set_favorable_labels(labels) This method sets favorable labels for the feature. Parameters: * labels: (Required) A string or list of strings with favorable labels. * get_favorable_labels() This method returns the favorable labels of the feature. Unfavorable outcomes You can use the following methods to set and get unfavorable labels: * set_unfavorable_labels(labels) This method sets unfavorable labels for the feature. Parameters: * labels: (Required) A string or list of strings with unfavorable labels. * get_unfavorable_labels() This method gets the unfavorable labels of the feature. Example that shows setting favorable and unfavorable labels: Set favorable and unfavorable labels for the target feature 'Risk'. risk_feat = af.get_feature('Risk') risk_feat.set_favorable_labels("No Risk") risk_feat.set_unfavorable_labels("Risk") Fairness information for monitored and reference groups Some columns in your data might be prone to unfair bias. You can specify monitored and reference groups for further usage in Machine Learning tasks. They can be specified for features with the role Input. You can either specify single values or ranges of numeric values as a string with square brackets and a start and end value, for example [0,15]. You can use the following methods to set and retrieve monitored and reference groups: * set_monitored_groups(groups) This method sets monitored groups for the feature. Parameters: * groups: (Required) A string or list of strings with monitored groups. * get_monitored_groups() This method gets the monitored groups of the feature. * set_reference_groups(groups) This method sets reference groups for the feature. Parameters: * groups: (Required) A string or list of strings with reference groups. * get_reference_groups() This method gets the reference groups of the feature. Example that shows setting monitored and reference groups: Set monitored and reference groups for the features 'Sex' and 'Age'. sex_feat = af.get_feature("Sex") sex_feat.set_reference_groups("male") sex_feat.set_monitored_groups("female") age_feat = af.get_feature("Age") age_feat.set_monitored_groups("[0,25]") age_feat.set_reference_groups("[26,80]") Value descriptions You can use value descriptions to specify descriptions for column values in your data. You can use the following methods to set and retrieve descriptions: * set_value_descriptions(value_descriptions) This method sets value descriptions for the feature. Parameters: * value_descriptions: (Required) A Python dictionary or list of dictionaries of the following format: {'value': '<value>', 'description': '<description>'} * get_value_descriptions() This method returns all value descriptions of the feature. * get_value_description(value) This method returns the value description for the given value. Parameters: * value: (Required) The value to retrieve the value description for. * add_value_description(value, description) This method adds a value description with the given value and description to the list of value descriptions for the feature. Parameters: * value: (Required) The string value of the value description. * description: (Required) The string description of the value description. 
* remove_value_description(value) This method removes the value description with the given value from the list of value descriptions of the feature. Parameters: * value: (Required) A value of the value description to be removed. Example that shows how to set value descriptions: plan_feat = af.get_feature("InstallmentPlans") val_descriptions = [ {'value': 'stores', 'description': 'customer has additional business installment plan'}, {'value': 'bank', 'description': 'customer has additional personal installment plan'}, {'value': 'none', 'description': 'customer has no additional installment plan'} ] plan_feat.set_value_descriptions(val_descriptions) Recipe You can use the recipe to describe how a feature was created, for example with a formula or a code snippet. It defaults to None. You can use the following methods to work with the recipe. * set_recipe(recipe) This method sets the recipe of the feature. Parameters: * recipe: (Required) Either a string or None to remove the recipe. * get_recipe() This method returns the recipe of the feature. Tags You can use tags to attach additional labels or information to your feature. You can use the following methods to work with tags: * set_tags(tags) This method sets the tags of the feature. Parameters: * tags: (Required) Either as a single string or an array of strings. * get_tags() This method returns all tags of the feature. Previewing data You can preview the data of your data asset or pandas DataFrame with additional information about your features like fairness information. The data is displayed like a pandas DataFrame with optional header information about feature roles, descriptions or recipes. Fairness information is displayed with coloring for favorable or unfavorable labels, monitored and reference groups. At this time, you can retrieve up to 100 rows of sample data for a data asset. Use the following function to preview data: * head(num_rows=5, display_options=['role']) This function returns the first num_rows rows of the data set in a pandas DataFrame. Parameters: * num_rows : (Optional) The number of rows to retrieve. * display_options: (Optional) The column header can display additional information for a column in your data set. Use these options to display feature attributes: * role: Displays the role of a feature for this column. * description: Displays the description of a feature for this column. * recipe: Displays the recipe of a feature for this column. Getting fairness information You can retrieve the fairness information of all features in your asset frame as a Python dictionary. This includes all features containing monitored or reference groups (or both) as protected attributes and the target feature with favorable or unfavorable labels. If the data type of a column with fairness information is numeric, the values of labels and groups are transformed to numeric values if possible. Fairness information can be used directly in [AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html) or [AI Fairness 360](https://www.ibm.com/opensource/open/projects/ai-fairness-360/). You can use the following function to retrieve fairness information of your asset frame: * get_fairness_info(target=None) This function returns a Python dictionary with favorable and unfavorable labels of the target column and protected attributes with monitored and reference groups. Parameters: * target: (Optional) The target feature. If there is only one feature with role Target, it will be used automatically. 
Example that shows how to retrieve fairness information: af.get_fairness_info() Output showing fairness information: { 'favorable_labels': ['No Risk'], 'unfavorable_labels': ['Risk'], 'protected_attributes': [ {'feature': 'Sex', 'monitored_group': ['female'], 'reference_group': ['male']}, {'feature': 'Age', 'monitored_group': [[0.0, 25]], 'reference_group': [[26, 80]] }] } Saving feature group information After you have fully specified or updated your features, you can save the whole feature group definition as metadata for your data asset. If you created the asset frame from a pandas DataFrame, a new data asset will be created in the project storage with the name of the asset frame. You can use the following method to store your feature group information: * to_data_asset(overwrite_data=False) This method saves feature group information to the assets metadata. It creates a new data asset, if the asset frame was created from a pandas DataFrame. Parameters: * overwrite_data: (Optional) Also overwrite the asset contents with the data from the asset frame. Defaults to False. Learn more See the [Creating and using feature store data](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e756adfa2855bdfc20f588f9c1986382) sample project in the Samples. Parent topic:[Loading and accessing data in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html)
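As a closing illustration (not part of the original samples), the following sketch previews the asset frame with extra header information and then saves the feature group definition, using only the head and to_data_asset methods documented above:

    # Preview the first rows with role and description headers, then save the feature group.
    af.head(num_rows=10, display_options=['role', 'description'])
    af.to_data_asset(overwrite_data=True)

Setting overwrite_data=True also writes the DataFrame contents to the data asset, as described for the to_data_asset method.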
# Managing feature groups with assetframe\-lib for Python (beta) # You can use the `assetframe-lib` to create, view and edit feature group information for data assets in Watson Studio notebooks\. Feature groups define additional metadata on columns of your data asset that can be used in downstream Machine Learning tasks\. See [Managing feature groups](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html) for more information about using feature groups in the UI\. ## Setting up the `assetframe-lib` and `ibm-watson-studio-lib` libraries ## The `assetframe-lib` library for Python is pre\-installed and can be imported directly in a notebook in Watson Studio\. However, it relies on the [`ibm-watson-studio-lib`](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/using-ibm-ws-lib.html) library\. The following steps describe how to set up both libraries\. To insert the project token to your notebook: <!-- <ol> --> 1. Click the **More** icon on your notebook toolbar and then click **Insert project token**\. If a project token exists, a cell is added to your notebook with the following information: from ibm_watson_studio_lib import access_project_or_space wslib = access_project_or_space({"token":"<ProjectToken>"}) `<ProjectToken>` is the value of the project token. If you are told in a message that no project token exists, click the link in the message to be redirected to the project's **Access Control** page where you can create a project token. You must be eligible to create a project token. To create a project token: <!-- <ol> --> 1. From the **Manage** tab, select the **Access Control** page, and click **New access token** under **Access tokens**. 2. Enter a name, select **Editor** role for the project, and create a token. 3. Go back to your notebook, click the **More** icon on the notebook toolbar and then click **Insert project token**. <!-- </ol> --> 2. Import `assetframe-lib` and initialize it with the created `ibm-watson-studio-lib` instance\. 
from assetframe_lib import AssetFrame AssetFrame._wslib = wslib <!-- </ol> --> ## The assetframe\-lib functions and methods ## The assetframe\-lib library exposes a set of functions and methods that are grouped in the following way: <!-- <ul> --> * [Creating an asset frame](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=en#create-assetframe) * [Creating, retrieving and removing features](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=en#create-features) * [Specifying feature attributes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=en#specify-featureatt) <!-- <ul> --> * [Role](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=en#role) * [Description](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=en#description) * [Fairness information for favorable and unfavorable outcomes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=en#fairnessinfo) * [Fairness information for monitored and reference groups](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=en#monitoredreference) * [Value descriptions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=en#value-desc) * [Recipe](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=en#recipe) * [Tags](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=en#tags) <!-- </ul> --> * [Previewing data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=en#preview-data) * [Getting fairness information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=en#get-fairness) <!-- </ul> --> ## Creating an asset frame ## An asset frame is used to define feature group metadata on an existing data asset or on a pandas DataFrame\. You can have exactly one feature group for each asset\. If you create an asset frame on a pandas DataFrame, you can store the pandas DataFrame along with the feature group metadata as a data asset in your project\. You can use one of the following functions to create your asset frame: <!-- <ul> --> * `AssetFrame.from_data_asset(asset_name, create_default_features=False)` This function creates a new asset frame wrapping an existing data asset in your project. If there is already a feature group for this asset, for example created in the user interface, it is read from the asset metadata. <!-- </ul> --> **Parameters:** - `asset_name`: (Required) The name of a data asset in your project. - `create_default_features`: (Optional) Creates features for all columns in the data asset. <!-- <ul> --> * `AssetFrame.from_pandas(name, dataframe, create_default_features=False)` This function creates a new asset frame wrapping a pandas DataFrame. **Parameters:** <!-- <ul> --> * `name`: (Required) The name of the asset frame. This name will be used as the name of the data asset if you store your feature group in your project in a later step. * `dataframe`: (Required) A pandas DataFrame that you want to store along with feature group information. 
* `create_default_features`: (Optional) Create features for all columns in the dataframe. Example of creating a asset frame from a pandas DataFrame: # Create an asset frame from a pandas DataFrame and set # the name of the asset frame. af = AssetFrame.from_pandas(dataframe=credit_risk_df, name="Credit Risk Training Data") <!-- </ul> --> <!-- </ul> --> ## Creating, retrieving and removing features ## A feature defines metadata that can be used by downstream Machine Learning tasks\. You can create one feature per column in your data set\. You can use one of the following functions to create, retrieve or remove columns from your asset frame: <!-- <ul> --> * `add_feature(column_name, role='Input')` This function adds a new feature to your asset frame with the given role. **Parameters:** <!-- <ul> --> * `column_name`: (Required) The name of the column to create a feature for. * `role`: (Optional) The role of the feature. It defaults to *Input*. Valid roles are: <!-- <ul> --> * *Input*: The input for a machine learning model * *Target*: The target of a prediction model * *Identifier*: The identifier of a row in your data set. <!-- </ul> --> <!-- </ul> --> * `create_default_features()` This function creates features for all columns in your data set. The roles of the features will default to *Input*. * `get_features()` This function retrieves all features of the asset frame. * `get_feature(column_name)` This function retrieves the feature for the given column name. **Parameters:** <!-- <ul> --> * `column_name`: (Required) The string name of the column to create the feature for. <!-- </ul> --> * `get_features_by_role(role)` This function retrieves all features of the dataframe with the given role. **Parameters:** <!-- <ul> --> * `role`: (Required) The role that the features must have. This can be *Input*, *Target* or *Identifier*. <!-- </ul> --> * `remove_feature(feature_or_column_name)` This function removes the feature from the asset frame. **Parameters:** <!-- <ul> --> * `feature_or_column_name`: (Required) A feature or the name of the column to remove the feature for. <!-- </ul> --> <!-- </ul> --> Example that shows creating features for all columns in the data set and retrieving one of those columns for further specifications: # Create features for all columns in the data set and retrieve a column # for further specifications. af.create_default_features() risk_feat = af.get_feature('Risk') ## Specifying feature attributes ## Features specify additional metadata on columns that may be used in downstream Machine Learning tasks\. You can use the following function to retrieve the column that the feature is defined for: <!-- <ul> --> * `get_column_name()` This function retrieves the column name that the feature is defined for. <!-- </ul> --> ### Role ### The role specifies the intended usage of the feature in a Machine Learning task\. Valid roles are: <!-- <ul> --> * `Input`: The feature can be used as an input to a Machine Learning model\. * `Identifier`: The feature uniquely identifies a row in the data set\. * `Target`: The feature can be used as a target in a prediction algorithm\. <!-- </ul> --> At this time, a feature must have exactly one role\. You can use the following methods to work with the role: <!-- <ul> --> * `set_roles(roles)` This method sets the roles of the feature. **Parameters:** <!-- <ul> --> * `roles` : (Required) The roles to be used. Either as a single string or an array of strings. <!-- </ul> --> * `get_roles()` This method returns all roles of the feature. 
<!-- </ul> --> Example that shows getting a feature and setting a role: # Set the role of the feature 'Risk' to 'Target' to use it as a target in a prediction model. risk_feat = af.get_feature('Risk') risk_feat.set_roles('Target') ### Description ### An optional description of the feature\. It defaults to `None`\. You can use the following methods to work with the description\. <!-- <ul> --> * `set_description(description)` This method sets the description of the feature. **Parameters:** <!-- <ul> --> * `description`: (Required) Either a string or `None` to remove the description. <!-- </ul> --> * `get_description()` This method returns the description of the feature. <!-- </ul> --> ### Fairness information for favorable and unfavorable outcomes ### You can specify favorable and unfavorable labels for a feature with a `Target` role\. You can use the following methods to set and retrieve favorable or unfavorable labels\. #### Favorable outcomes #### You can use the following methods to set and get favorable labels: <!-- <ul> --> * `set_favorable_labels(labels)` This method sets favorable labels for the feature. **Parameters:** <!-- <ul> --> * `labels`: (Required) A string or list of strings with favorable labels. <!-- </ul> --> * `get_favorable_labels()` This method returns the favorable labels of the feature. <!-- </ul> --> #### Unfavorable outcomes #### You can use the following methods to set and get unfavorable labels: <!-- <ul> --> * `set_unfavorable_labels(labels)` This method sets unfavorable labels for the feature. **Parameters**: <!-- <ul> --> * `labels`: (Required) A string or list of strings with unfavorable labels. <!-- </ul> --> * `get_unfavorable_labels()` This method gets the unfavorable labels of the feature. <!-- </ul> --> Example that shows setting favorable and unfavorable labels: # Set favorable and unfavorable labels for the target feature 'Risk'. risk_feat = af.get_feature('Risk') risk_feat.set_favorable_labels("No Risk") risk_feat.set_unfavorable_labels("Risk") ### Fairness information for monitored and reference groups ### Some columns in your data might be prone to unfair bias\. You can specify monitored and reference groups for further usage in Machine Learning tasks\. They can be specified for features with the role `Input`\. You can either specify single values or ranges of numeric values as a string with square brackets and a start and end value, for example `[0,15]`\. You can use the following methods to set and retrieve monitored and reference groups: <!-- <ul> --> * `set_monitored_groups(groups)` This method sets monitored groups for the feature. **Parameters**: <!-- <ul> --> * `groups`: (Required) A string or list of strings with monitored groups. <!-- </ul> --> * `get_monitored_groups()` This method gets the monitored groups of the feature. * `set_reference_groups(groups)` This method sets reference groups for the feature. **Parameters**: <!-- <ul> --> * `groups`: (Required) A string or list of strings with reference groups. <!-- </ul> --> * `get_reference_groups()` This method gets the reference groups of the feature. <!-- </ul> --> Example that shows setting monitored and reference groups: # Set monitored and reference groups for the features 'Sex' and 'Age'. 
    sex_feat = af.get_feature("Sex")
    sex_feat.set_reference_groups("male")
    sex_feat.set_monitored_groups("female")

    age_feat = af.get_feature("Age")
    age_feat.set_monitored_groups("[0,25]")
    age_feat.set_reference_groups("[26,80]")

### Value descriptions ### You can use value descriptions to specify descriptions for column values in your data\. You can use the following methods to set and retrieve descriptions: <!-- <ul> --> * `set_value_descriptions(value_descriptions)` This method sets value descriptions for the feature. **Parameters:** <!-- <ul> --> * `value_descriptions`: (Required) A Python dictionary or list of dictionaries of the following format: `{'value': '<value>', 'description': '<description>'}` <!-- </ul> --> * `get_value_descriptions()` This method returns all value descriptions of the feature. * `get_value_description(value)` This method returns the value description for the given value. **Parameters**: <!-- <ul> --> * `value`: (Required) The value to retrieve the value description for. <!-- </ul> --> * `add_value_description(value, description)` This method adds a value description with the given value and description to the list of value descriptions for the feature. **Parameters**: <!-- <ul> --> * `value`: (Required) The string value of the value description. * `description`: (Required) The string description of the value description. <!-- </ul> --> * `remove_value_description(value)` This method removes the value description with the given value from the list of value descriptions of the feature. **Parameters**: <!-- <ul> --> * `value`: (Required) The value of the value description to be removed. <!-- </ul> --> <!-- </ul> --> Example that shows how to set value descriptions: 

    plan_feat = af.get_feature("InstallmentPlans")
    val_descriptions = [
        {'value': 'stores', 'description': 'customer has additional business installment plan'},
        {'value': 'bank', 'description': 'customer has additional personal installment plan'},
        {'value': 'none', 'description': 'customer has no additional installment plan'}
    ]
    plan_feat.set_value_descriptions(val_descriptions)

### Recipe ### You can use the recipe to describe how a feature was created, for example with a formula or a code snippet\. It defaults to `None`\. You can use the following methods to work with the recipe\. <!-- <ul> --> * `set_recipe(recipe)` This method sets the recipe of the feature. **Parameters**: <!-- <ul> --> * `recipe`: (Required) Either a string or `None` to remove the recipe. <!-- </ul> --> * `get_recipe()` This method returns the recipe of the feature. <!-- </ul> --> ### Tags ### You can use tags to attach additional labels or information to your feature\. You can use the following methods to work with tags: <!-- <ul> --> * `set_tags(tags)` This method sets the tags of the feature. **Parameters**: <!-- <ul> --> * `tags`: (Required) Either a single string or a list of strings. <!-- </ul> --> * `get_tags()` This method returns all tags of the feature. <!-- </ul> --> ## Previewing data ## You can preview the data of your data asset or pandas DataFrame with additional information about your features, such as fairness information\. The data is displayed like a pandas DataFrame with optional header information about feature roles, descriptions, or recipes\. Fairness information is displayed with coloring for favorable or unfavorable labels, and for monitored and reference groups\. At this time, you can retrieve up to 100 rows of sample data for a data asset\. 
Use the following function to preview data: <!-- <ul> --> * `head(num_rows=5, display_options=['role'])` This function returns the first `num_rows` rows of the data set in a pandas DataFrame. **Parameters**: <!-- <ul> --> * `num_rows`: (Optional) The number of rows to retrieve. Defaults to `5`. * `display_options`: (Optional) The column header can display additional information for a column in your data set. Use these options to display feature attributes: <!-- <ul> --> * `role`: Displays the role of a feature for this column. * `description`: Displays the description of a feature for this column. * `recipe`: Displays the recipe of a feature for this column. <!-- </ul> --> <!-- </ul> --> <!-- </ul> --> ## Getting fairness information ## You can retrieve the fairness information of all features in your asset frame as a Python dictionary\. This includes all features containing monitored or reference groups (or both) as protected attributes and the target feature with favorable or unfavorable labels\. If the data type of a column with fairness information is numeric, the values of labels and groups are transformed to numeric values if possible\. Fairness information can be used directly in [AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html) or [AI Fairness 360](https://www.ibm.com/opensource/open/projects/ai-fairness-360/)\. You can use the following function to retrieve fairness information of your asset frame: <!-- <ul> --> * `get_fairness_info(target=None)` This function returns a Python dictionary with favorable and unfavorable labels of the target column and protected attributes with monitored and reference groups. **Parameters**: <!-- <ul> --> * `target`: (Optional) The target feature. If there is only one feature with role `Target`, it will be used automatically. Example that shows how to retrieve fairness information: 

    af.get_fairness_info()

Output showing fairness information: 

    {
        'favorable_labels': ['No Risk'],
        'unfavorable_labels': ['Risk'],
        'protected_attributes': [
            {'feature': 'Sex', 'monitored_group': ['female'], 'reference_group': ['male']},
            {'feature': 'Age', 'monitored_group': [[0.0, 25]], 'reference_group': [[26, 80]]}
        ]
    }

<!-- </ul> --> <!-- </ul> --> ## Saving feature group information ## After you have fully specified or updated your features, you can save the whole feature group definition as metadata for your data asset\. If you created the asset frame from a pandas DataFrame, a new data asset will be created in the project storage with the name of the asset frame\. You can use the following method to store your feature group information: <!-- <ul> --> * `to_data_asset(overwrite_data=False)` This method saves feature group information to the asset's metadata. It creates a new data asset if the asset frame was created from a pandas DataFrame. **Parameters**: <!-- <ul> --> * `overwrite_data`: (Optional) Also overwrite the asset contents with the data from the asset frame. Defaults to `False`. <!-- </ul> --> <!-- </ul> --> ## Learn more ## See the [Creating and using feature store data](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e756adfa2855bdfc20f588f9c1986382) sample project in the Samples\. **Parent topic:**[Loading and accessing data in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html) <!-- </article "role="article" "> -->
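To round out this topic, the following sketch pulls the preceding steps together: it builds an asset frame from a pandas DataFrame, defines feature roles and fairness information, previews the data, retrieves the fairness dictionary, and saves the feature group. It is a minimal illustration only; it assumes that `AssetFrame` is already imported from the assetframe-lib library as described earlier in this topic and that `credit_risk_df` is an existing pandas DataFrame. 

    # Minimal end-to-end sketch of the feature group workflow in this topic.
    # Assumes AssetFrame is imported from assetframe-lib (see earlier in this topic)
    # and that credit_risk_df is an existing pandas DataFrame.
    af = AssetFrame.from_pandas(dataframe=credit_risk_df, name="Credit Risk Training Data")
    af.create_default_features()

    # Mark the prediction target and its favorable and unfavorable outcomes.
    risk_feat = af.get_feature('Risk')
    risk_feat.set_roles('Target')
    risk_feat.set_favorable_labels("No Risk")
    risk_feat.set_unfavorable_labels("Risk")

    # Flag a column that might be prone to unfair bias.
    sex_feat = af.get_feature('Sex')
    sex_feat.set_reference_groups("male")
    sex_feat.set_monitored_groups("female")

    # Preview the data with feature roles and collect the fairness information
    # for use in AutoAI or AI Fairness 360.
    af.head(num_rows=5, display_options=['role'])
    fairness_info = af.get_fairness_info()

    # Save the feature group definition; because the asset frame was created
    # from a pandas DataFrame, this creates a new data asset in the project.
    af.to_data_asset()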
A724F6E91162B52C519F6887F06DF40626C0F698
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/python_os.html?context=cdpaas&locale=en
Using Python functions to work with Cloud Object Storage
# Using Python functions to work with Cloud Object Storage # To access and work with data that is in IBM Cloud Object Storage, you can use Python functions from a notebook\. With your IBM Cloud Object Storage credentials, you can access and load data from IBM Cloud Object Storage to use in a notebook\. This data can be any object of type file\-like\-object, for example, byte buffers or string buffers\. The data that you upload can reside in a different IBM Cloud Object Storage bucket than the project's bucket\. You can also upload data from a local system into IBM Cloud Object Storage from within a notebook\. This data can be a compressed file or Pickle object\. See [Working With IBM Cloud Object Storage In Python](https://medium.com/ibm-data-science-experience/working-with-ibm-cloud-object-storage-in-python-fe0ba8667d5f) for more information\. ## Learn more ## <!-- <ul> --> * Use [ibm\-watson\-studio\-lib for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html) to interact with Watson Studio projects and project assets\. The library also contains functions that simplify fetching files from IBM Cloud Object Storage\. * [Control access to COS buckets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/cos_buckets.html) <!-- </ul> --> **Parent topic:**[Loading and accessing data in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html) <!-- </article "role="article" "> -->
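For a quick orientation, the following sketch shows the common pattern for reading a CSV object from IBM Cloud Object Storage into a pandas DataFrame with the `ibm_boto3` client (from the `ibm-cos-sdk` package), and for uploading a local file\. The endpoint URL, credentials, bucket, and object names are placeholders that you must replace with your own values; see the linked article for details\. 

    import io

    import ibm_boto3
    import pandas as pd
    from ibm_botocore.client import Config

    # Placeholder credentials and names -- replace them with your own values.
    cos = ibm_boto3.client(
        service_name="s3",
        ibm_api_key_id="YOUR_API_KEY",
        ibm_service_instance_id="YOUR_SERVICE_INSTANCE_CRN",
        config=Config(signature_version="oauth"),
        endpoint_url="https://s3.us-south.cloud-object-storage.appdomain.cloud",
    )

    # Download an object as a file-like byte stream and load it into pandas.
    body = cos.get_object(Bucket="your-bucket", Key="your-file.csv")["Body"].read()
    df = pd.read_csv(io.BytesIO(body))

    # Upload a file from the local file system into a bucket.
    cos.upload_file(Filename="results.csv", Bucket="your-bucket", Key="results.csv")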
F43870B5B6CE4D191950FDAAE6AAFC36F05360C9
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html?context=cdpaas&locale=en
Compute resource options for RStudio in projects
# Compute resource options for RStudio in projects # When you run RStudio in a project, you choose an environment template for the runtime environment\. The environment template specifies the type, size, and power of the hardware configuration, plus the software template\. <!-- <ul> --> * [Types of environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html?context=cdpaas&locale=en#types) * [Default environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html?context=cdpaas&locale=en#default) * [Compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html?context=cdpaas&locale=en#compute) * [Runtime scope](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html?context=cdpaas&locale=en#scope) * [Changing the runtime](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html?context=cdpaas&locale=en#change-env) <!-- </ul> --> ## Types of environments ## You can use this type of environment with RStudio: <!-- <ul> --> * Default RStudio CPU environments for standard workloads <!-- </ul> --> ## Default environment templates ## You can select any of the following default environment templates for RStudio in a project\. These default environment templates are listed under **Templates** on the **Environments** page on the **Manage** tab of your project\. All environment templates use RStudio with Runtime 23\.1 on the R 4\.2 programming language\. <!-- <table> --> Default RStudio environment templates | Name | Hardware configuration | Local storage | CUH rate per hour | | -------------------- | ---------------------- | ------------- | ----------------- | | `Default RStudio L` | 16 vCPU and 64 GB RAM | 2 GB | 8 | | `Default RStudio M` | 8 vCPU and 32 GB RAM | 2 GB | 4 | | `Default RStudio XS` | 2 vCPU and 8 GB RAM | 2 GB | 1 | <!-- </table ""> --> If you don't explicitly select an environment, `Default RStudio M` is the default\. The hardware configuration of the available RStudio environments is preset and cannot be changed\. For compute\-intensive processing on a large data set, consider pushing your data processing to Spark from your RStudio session\. See [Using Spark in RStudio](https://medium.com/ibm-data-science-experience/access-ibm-analytics-for-apache-spark-from-rstudio-eb11bf8b401b)\. To prevent consuming extra capacity unit hours (CUHs), stop all active RStudio runtimes when you no longer need them\. See [RStudio idle timeout](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html#stop-active-runtimes)\. ## Compute usage in projects ## RStudio consumes compute resources as CUH from the Watson Studio service in projects\. You can monitor the Watson Studio CUH consumption on the **Resource usage** page on the **Manage** tab of your project\. ## Runtime scope ## An RStudio environment runtime is always scoped to a project and a user\. Each user can only have one RStudio runtime per project at one time\. If you start RStudio in a project in which you already have an active RStudio session, the existing active session is disconnected and you can continue working in the new RStudio session\. ## Changing the RStudio runtime ## If you notice that processing is very slow, you can restart RStudio and select a larger environment runtime\. To change the RStudio environment runtime: <!-- <ol> --> 1. Save any data from your current session before switching to another environment\. 2. 
Stop the active RStudio runtime under **Tool runtimes** on the **Environments** page on the **Manage** tab of your project\. 3. Restart RStudio from the **Launch IDE** menu on your project's action bar and select another environment with the compute power and memory capacity that better meets your data processing requirements\. <!-- </ol> --> ## Learn more ## <!-- <ul> --> * [RStudio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-overview.html) * [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) <!-- </ul> --> **Parent topic:**[Choosing compute resources for tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) <!-- </article "role="article" "> -->
BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-overview.html?context=cdpaas&locale=en
RStudio
# RStudio # R is a popular statistical analysis and machine\-learning package that enables data management and includes tests, models, analyses and graphics\. RStudio, included in IBM Watson Studio, provides an integrated development environment for working with R scripts\. ## Accessing RStudio ## RStudio is integrated in IBM Watson Studio projects and can be launched after you create a project\. With RStudio integration in projects, you can access and use the data files that are stored in the IBM Cloud Object Storage bucket associated with your project in RStudio\. To start RStudio in your project: <!-- <ol> --> 1. Click **RStudio** from the **Launch IDE** menu on your project's action bar\. 2. Select an environment\. 3. Click **Launch**\. The environment runtime is initiated and the development environment opens. <!-- </ol> --> Sometimes, when you start an RStudio session, you might experience a corrupted RStudio state from a previous session and your session will not start\. If this happens, select to reset the workspace at the time you select the RStudio environment and then start the RStudio IDE again\. By resetting the workspace, RStudio is started using the default settings with a clean RStudio workspace\. ## Working with data files ## In RStudio, you can work with data files from different sources: <!-- <ul> --> * *Files* in the RStudio server file structure, which you can view by clicking **Files** in the bottom right section of RStudio\. This is where you can create folders, upload files from your local system, and delete files\. To access these files in R, you need to set the working directory to the directory with the files. You can do this by navigating to the directory with the files and clicking **More > Set as Working Directory**. Be aware that files stored in the `Home` directory of your RStudio instance are persistent within your instance only and cannot be shared across environments nor within your project. <!-- </ul> --> Video disclaimer: Some minor steps and graphical elements in the videos on this page may differ from your deployment\. Watch this video to see how to load data to RStudio\. This video provides a visual method to learn the concepts and tasks in this documentation\. <!-- <ul> --> * *Project data assets* that are stored in the IBM Cloud Object Storage bucket associated with your project\. When RStudio is launched, the IBM Cloud Object Storage bucket content is mounted to the `project-objectstorage` directory in your RStudio `Home` directory\. If you want data files to appear in the `project-objectstorage` directory, you must add them as assets to your project. See [Adding files as project assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-overview.html?context=cdpaas&locale=en#adding-files). If new data assets are added to the project while you are in RStudio and you want to access them, you need to refresh the `project-objectstorage` folder. See how to [read and write data to and from Cloud Object Storage](https://medium.com/ibm-data-science-experience/read-and-write-data-to-and-from-bluemix-object-storage-in-rstudio-276282347ce1). * *Data* stored in a database system\. Watch this video to see how to connect to external data sources in RStudio. This video provides a visual method to learn the concepts and tasks in this documentation. * *Files stored in local storage* that are mounted to `/home/rstudio`\. The `home` directory has a storage limitation of 2 GB and is used to store the RStudio session workspace\. 
Note that you are allocated 2 GB for your `home` directory storage across all of your projects, irrespective of whether you use RStudio in each project\. As a consequence, you should only store R script files and small data files in the `home` directory\. It is not intended for large data files or large generated output\. All large data files should be uploaded as project assets, which are mounted to the `project-objectstorage` directory from where you can access them\. <!-- </ul> --> ## Adding files as project assets ## If you worked with data files and want them to appear in the `project-objectstorage` directory, you must add them to your project as data assets\. To add these files as data assets to the project: <!-- <ol> --> 1. On the Assets page of the project, click the **Upload asset to project** icon (![Shows the Upload asset to project icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/find_data_icon.png)) and select the **Files** tab\. 2. Select the files you want to add to the project as assets\. 3. From the Actions list, select **Add as data asset** and apply your changes\. <!-- </ol> --> ## Capacity consumption and runtime scope ## An RStudio environment runtime is always scoped to an environment template and an RStudio session user\. Only one RStudio session can be active per Watson Studio user at one time\. If you started RStudio in another project, you are asked if you want to stop that session and start a new RStudio session in the context of the current project you're working in\. Runtime usage is calculated by the number of capacity unit hours (CUHs) consumed by the active environment runtime\. The CUHs consumed by an active RStudio runtime in a project are billed to the account of the project creator\. See [Capacity units per hour billing for RStudio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/track-runtime-usage.html#rstudio)\. You can see which RStudio environment runtimes are active on the project's **Environments** page\. You can stop your runtime from this page\. **Remember:** The CUH counter continues to increase while the runtime is active, so stop the runtime if you aren't using RStudio\. If you don't explicitly stop the runtime, it is stopped for you after an idle time of 2 hours\. During this idle time, you will continue to consume CUHs for which you are billed\. Long compute\-intensive jobs are hard stopped after 24 hours\. Watch this video to see an overview of the RStudio IDE\. Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform\. This video provides a visual method to learn the concepts and tasks in this documentation\. <!-- <ul> --> * Transcript Synchronize transcript with video <!-- <table "class="bx--data-table bx--data-table--zebra" style="border-collapse: collapse; border: none;" "> --> | Time | Transcript | | ----- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | 00:00 | This video is a quick tour of the RStudio integrated development environment inside a Watson Studio project. | | 00:07 | From any project, you can launch the RStudio IDE. | | 00:12 | RStudio is a free and open-source integrated development environment for R, a programming language for statistical computing and graphics. | | 00:22 | In RStudio, there are four panes: the source pane, the console pane, the environment pane, and the files pane. 
| | 00:32 | The panes help you organize your work and separate the different tasks you'll do with R. | | 00:39 | You can drag to resize the panes or use the icons to minimize and maximize a pane. | | 00:47 | You can also rearrange the panes in global options. | | 00:53 | The console pane is your interface to R. | | 00:56 | It's exactly what you would see in terminal window or user interfaces bundled with R. | | 01:01 | The console pane does have some added features that you'll find helpful. | | 01:06 | To run code from the console, just type the command. | | 01:11 | Start typing a command to see a list of commands that begin with the letters you started typing. | | 01:17 | Highlight a command in the list and press "Enter" to insert it. | | 01:24 | Use the up arrow to scroll through the commands you've previously entered. | | 01:31 | As you issue more commands, you can scroll through the results. | | 01:36 | Use the menu option to clear the console. | | 01:39 | You can also use tab completion to see a list of the functions, objects, and data sets beginning with that text. | | 01:47 | And use the arrows to highlight a command to see help for that command. | | 01:51 | When you're ready, just press "Enter" to insert it. | | 01:55 | Next, you'll see a list of the options for that command in the current context. | | 01:59 | For example, the first argument for the read.csv function is the file. | | 02:05 | RStudio will display a list of the folders and files in your working directory, so you can easily locate the file to include with the argument. | | 02:16 | Lastly, if you use the tab completion with a function that expects a package name, such as a library, you'll see a list of all the installed packages. | | 02:28 | Next, let's look at the source pane, which is simply a text editor for you to write your R code. | | 02:34 | The text editor supports R command files and plain text, as well as several other languages, and includes language-specific highlighting in context. | | 02:47 | And you'll notice the tab completion is also available in the text editor. | | 02:53 | From the text editor, you can run a single line of code, or select several lines of code to run, and you'll see the results in the console pane. | | 03:08 | You can save your code as an R script to share or run again later. | | 03:15 | The view function opens a new tab that shows the dataframe in spreadsheet format. | | 03:22 | Or you can display it in its own window. | | 03:25 | Now, you can scroll through the data, sort the columns, search for specific values, or filter the rows using the sliders and drop-down menus. | | 03:41 | The environment pane contains an "Environment" tab, a "History" tab, and a "Connections" tab, and keeps track of what's been happening in this R session. | | 03:51 | The "Environment" tab contains the R objects that exist in your global environment, created during the session. | | 03:58 | So, when you create a new object in the console pane, it automatically displays in the environment pane. | | 04:04 | You can also view the objects related to a specific package, and even see the source code for a specific function. | | 04:12 | You can also see a list of the data sets, expand a data set to inspect its individual elements, and view them in the source pane. | | 04:22 | You can save the contents of an environment as an .RData file, so you can load that .RData file at a later date. | | 04:29 | From here, you can also clear the objects from the workspace. 
| | 04:33 | If you want to delete specific items, use the grid view. | | 04:38 | For example, you can easily find large items to delete to free up memory in your R session. | | 04:45 | The "Environment" tab also allows you to import a data set. | | 04:50 | You can see a preview of the data set and change options before completing the import. | | 04:55 | The imported data will display in the source pane. | | 05:00 | The "History" tab displays a history of each of the commands that you run at the command line. | | 05:05 | Just like the "Environment" tab, you can save the history as an .Rhistory file, so you can open it at a later date. | | 05:11 | And this tab has the same options to clear all of the history and individual entries in the history. | | 05:17 | Select a command and send it to the console to rerun the command. | | 05:23 | You can also copy a command to the source pane to include it in a script. | | 05:31 | On the "Connections" tab, you can create a new connection to a data source. | | 05:36 | The choices in this dialog box are dependent upon which packages you have installed. | | 05:41 | For example, a "BLUDB" connection allows you to connect to a Db2 Warehouse on Cloud service. | | 05:49 | The files pane contains the "Files", "Plots", "Packages", "Help", and "Viewer" tabs. | | 05:55 | The "Files" tab displays the contents of your working directory. | | 05:59 | RStudio will load files from this directory and save files to this directory. | | 06:04 | Navigate to a file and click the file to view it in the source pane. | | 06:09 | From here, you can create new folders and upload files, either by selecting individual files to upload or selecting a .zip file containing all of the files to upload. | | 06:25 | From here, you can also delete and rename files and folders. | | 06:30 | In order to access the file in R, you need to set the data folder as a working directory. | | 06:36 | You'll see that the setwd command was executed in the console. | | 06:43 | You can access the data assets in your project by opening the project folder. | | 06:50 | The "Plots" tab displays the results of R's plot functions, such as: plot, hist, ggplot, and xyplot | | 07:00 | You can navigate through different plots using the arrows or zoom to see a graph full screen. | | 07:09 | You can also delete individual plots or all plots from here. | | 07:13 | Use the "Export" option to save the plot as a graphic or print file at the specified resolution. | | 07:21 | The "Packages" tab displays the packages you currently have installed in your system library. | | 07:26 | The search bar lets you quickly find a specific package. | | 07:30 | The checked packages are the packages that were already loaded, using the library command, in the current session. | | 07:38 | You can check additional packages from here to load them or uncheck packages to detach them from the current session. | | 07:45 | The console pane displays the results. | | 07:48 | Use the "X" next to a package name to remove it from the system library. | | 07:54 | You can also find new packages to install or update to the latest version of any package. | | 08:03 | Clicking any of the packages opens the "Help" tab with additional information for that package. | | 08:09 | From here, you can search for functions to get more help. | | 08:13 | And from the console, you can use the help command, or simply type a question mark followed by the function, to get help with that function. | | 08:21 | The "Viewer" tab displays HTML output. 
| | 08:25 | Some R functions generate HTML to display reports and interactive graphs. | | 08:31 | The R Markdown package creates reports that you can view in the "Viewer" tab. | | 08:38 | The Shiny package creates web apps that you can view in the "Viewer" tab. | | 08:44 | And other packages build on the htmlwidgets framework and include Java-based, interactive visualizations. | | 08:54 | You can also publish the visualization to the free site, called "RPubs.com". | | 09:01 | This has been a brief overview of the RStudio IDE. | | 09:05 | Find more videos on RStudio in the Cloud Pak for Data as a Service documentation. | <!-- </table "class="bx--data-table bx--data-table--zebra" style="border-collapse: collapse; border: none;" "> --> <!-- </ul> --> ## Learn more ## <!-- <ul> --> * [RStudio environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html) * [Using Spark in RStudio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-spark.html) <!-- </ul> --> **Parent topic:**[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html) <!-- </article "role="article" "> -->
2BCC4276EA71978FFA874621715BE92A9667390F
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-spark.html?context=cdpaas&locale=en
Using Spark in RStudio
# Using Spark in RStudio # Although the RStudio IDE cannot be started in a Spark with R environment runtime, you can use Spark in your R scripts and Shiny apps by accessing Spark kernels programmatically\. RStudio uses the `sparklyr` package to connect to Spark from R\. The `sparklyr` package includes a `dplyr` interface to Spark data frames as well as an R interface to Spark’s distributed machine learning pipelines\. You can connect to Spark from RStudio: <!-- <ul> --> * By connecting to a Spark kernel that runs locally in the RStudio container in IBM Watson Studio <!-- </ul> --> RStudio includes sample code snippets that show you how to connect to a Spark kernel in your applications\. To use Spark in RStudio after you have launched the IDE: <!-- <ol> --> 1. Locate the `ibm_sparkaas_demos` directory under your home directory and open it\. The directory contains the following R scripts: <!-- <ul> --> * A readme with details on the included R sample scripts * `spark_kernel_basic_local.R` includes sample code of how to connect to a local Spark kernel * `spark_kernel_basic_remote.R` includes sample code of how to connect to a remote Spark kernel * The files `sparkaas_flights.R` and `sparkaas_mtcars.R` are two examples of how to use Spark in a small sample application <!-- </ul> --> 2. Use the sample code snippets in your R scripts or applications to help you get started using Spark\. <!-- </ol> --> ## Connecting to Spark from RStudio ## To connect to Spark from RStudio using the `sparklyr` R package, you need a Spark with R environment\. You can either use the default Spark with R environment that is provided or create a custom Spark with R environment\. To create a custom environment, see [Creating environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html)\. After you launch RStudio in an RStudio environment, use the following sample code to get a listing of the Spark environment details and to connect to a Spark kernel from your RStudio session: 

    # load spark R packages
    library(ibmwsrspark)
    library(sparklyr)

    # load kernels
    kernels <- load_spark_kernels()

    # display kernels
    display_spark_kernels()

    # get spark kernel Configuration
    conf <- get_spark_config(kernels[1])

    # Set spark configuration
    conf$spark.driver.maxResultSize <- "1G"

    # connect to Spark kernel
    sc <- spark_connect(config = conf)

Then to disconnect from Spark, use: 

    # disconnect
    spark_disconnect(sc)

Examples of these commands are provided in the readme under `/home/wsuser/ibm_sparkaas_demos`\. **Parent topic:**[RStudio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-overview.html) <!-- </article "role="article" "> -->
42F34465DD884E8110BB08A708A138532999714F
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-autoai.html?context=cdpaas&locale=en
Compute resource options for AutoAI experiments in projects
# Compute resource options for AutoAI experiments in projects # When you run an AutoAI experiment in a project, the type, size, and power of the hardware configuration available depend on the type of experiment you build\. <!-- <ul> --> * [Default hardware configurations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-autoai.html?context=cdpaas&locale=en#default) * [Compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-autoai.html?context=cdpaas&locale=en#compute) <!-- </ul> --> ## Default hardware configurations ## The type of hardware configuration available for your AutoAI experiment depends on the type of experiment you are building\. A standard AutoAI experiment, with a single data source, has a single, default hardware configuration\. An AutoAI experiment with joined data has options for increasing computational power\. ### Capacity units per hour for AutoAI experiments ### <!-- <table> --> Hardware configurations available in projects for AutoAI with a single data source | Capacity type | Capacity units per hour | | -------------------- | ----------------------- | | 8 vCPU and 32 GB RAM | 20 | <!-- </table ""> --> The runtimes for AutoAI stop automatically when processing is complete\. ## Compute usage in projects ## AutoAI consumes compute resources as CUH from the Watson Machine Learning service\. You can monitor the total monthly amount of CUH consumption for the Watson Machine Learning service on the **Resource usage** page on the **Manage** tab of your project\. ## Learn more ## <!-- <ul> --> * [AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) * [Watson Machine Learning service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html) * [Compute resource options for assets and deployments in spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-cuh-deploy-spaces.html) * [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) <!-- </ul> --> **Parent topic:**[Choosing compute resources for tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) <!-- </article "role="article" "> -->
B08A6B7A0F11FD3AB62A14F44FD4E1A771174C61
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-cuh-deploy-spaces.html?context=cdpaas&locale=en
Compute options for model training and scoring
# Compute options for model training and scoring # When you train or score a model or function, you choose the type, size, and power of the hardware configuration that matches your computing needs\. <!-- <ul> --> * [Default hardware configurations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-cuh-deploy-spaces.html?context=cdpaas&locale=en#default) * [Compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-cuh-deploy-spaces.html?context=cdpaas&locale=en#compute) <!-- </ul> --> ## Default hardware configurations ## Choose the hardware configuration for your Watson Machine Learning asset when you train the asset or when you deploy it\. <!-- <table> --> Hardware configurations available for training and deploying assets | Capacity type | Capacity units per hour | | ------------------------------------------ | ----------------------- | | Extra small: 1x4 = 1 vCPU and 4 GB RAM | 0\.5 | | Small: 2x8 = 2 vCPU and 8 GB RAM | 1 | | Medium: 4x16 = 4 vCPU and 16 GB RAM | 2 | | Large: 8x32 = 8 vCPU and 32 GB RAM | 4 | | Extra large: 16x64 = 16 vCPU and 64 GB RAM | 8 | <!-- </table ""> --> ## Compute usage for Watson Machine Learning assets ## Deployments and scoring consume compute resources as capacity unit hours (CUH) from the Watson Machine Learning service\. To check the total monthly CUH consumption for your Watson Machine Learning services, from the navigation menu, select **Administration** \-> **Environment runtimes**\. Additionally, you can monitor the monthly resource usage in each specific deployment space\. To do that, from your deployment space, go to the **Manage** tab and then select **Resource usage**\. The summary shows CUHs used by deployment type: separately for AutoAI deployments, Federated Learning deployments, batch deployments, and online deployments\. ### Compute usage details ### The rate of consumed CUHs is determined by the computing requirements of your deployments\. It is based on such variables as: <!-- <ul> --> * type of deployment * type of framework * complexity of scoring Scaling a deployment to support more concurrent users and requests also increases CUH consumption\. As many variables affect resource consumption for a deployment, it is recommended that you run tests on your models and deployments to analyze CUH consumption\. <!-- </ul> --> The way that online deployments consume capacity units is based on framework\. For some frameworks, CUHs are charged for the number of hours that the deployment asset is active in a deployment space\. For example, SPSS models in online deployment mode that run for 24 hours a day, seven days a week, consume CUHs and are charged for that period\. An active online deployment has no idle time\. For other frameworks, CUHs are charged according to scoring duration\. Refer to the CUH consumption table for details on how CUH usage is calculated\. Compute time is calculated to the millisecond, with a 1\-minute minimum for each distinct operation\. 
For example: <!-- <ul> --> * A training run that takes 12 seconds is billed as 1 minute * A training run that takes 83\.555 seconds is billed exactly as calculated <!-- </ul> --> ### CUH consumption by deployment and framework type ### CUH consumption is calculated by using these formulas: <!-- <table> --> | Deployment type | Framework | CUH calculation | | --------------- | ----------------------------------------------------------------------------- | --------------------------------------------------------------------------------------- | | Online | AutoAI, AI function, SPSS, Scikit\-Learn custom libraries, Tensorflow, RShiny | Deployment active duration `*` Number of nodes `*` CUH rate for capacity type framework | | Online | Spark, PMML, Scikit\-Learn, Pytorch, XGBoost | Score duration in seconds `*` Number of nodes `*` CUH rate for capacity type framework | | Batch | all frameworks | Job duration in seconds `*` Number of nodes `*` CUH rate for capacity type framework | <!-- </table ""> --> ## Learn more ## <!-- <ul> --> * [Deploying assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html) * [Watson Machine Learning service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html) * [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) <!-- </ul> --> **Parent topic:**[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) <!-- </article "role="article" "> -->
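To make the billing rules above concrete, the following minimal Python sketch applies the one-minute minimum and the `duration * number of nodes * CUH rate` formula from the consumption table. The function name and the example capacity rate are illustrative assumptions, not part of the Watson Machine Learning API; actual charges depend on the capacity type and framework of your deployment.

    # Minimal sketch (not a Watson Machine Learning API): apply the CUH formula
    # described above -- duration * number of nodes * CUH rate for the capacity
    # type -- with the 1-minute minimum for each distinct operation.

    def billed_cuh(duration_seconds: float, nodes: int, cuh_rate_per_hour: float) -> float:
        """Approximate CUH charged for one training or scoring operation."""
        billable_seconds = max(duration_seconds, 60.0)  # 1-minute minimum
        return (billable_seconds / 3600.0) * nodes * cuh_rate_per_hour

    # A 12-second training run on 1 node of the Small (1 CUH/hour) capacity type
    # is billed as 1 minute:
    print(billed_cuh(12, nodes=1, cuh_rate_per_hour=1))      # ~0.0167 CUH

    # An 83.555-second run is billed exactly as calculated:
    print(billed_cuh(83.555, nodes=1, cuh_rate_per_hour=1))  # ~0.0232 CUH

For online deployments of frameworks such as SPSS or AutoAI, substitute the deployment's active duration for the scoring duration, as in the first row of the consumption table.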
5B66F4F408827FE62B0584882D7F25FB9C6CA839
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-decisionopt.html?context=cdpaas&locale=en
Compute resource options for Decision Optimization
# Compute resource options for Decision Optimization # When you run a Decision Optimization model, you use the Watson Machine Learning instance that is linked to the deployment space associated with your experiment\. <!-- <ul> --> * [Default hardware configurations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-decisionopt.html?context=cdpaas&locale=en#default) * [Compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-decisionopt.html?context=cdpaas&locale=en#compute) <!-- </ul> --> ## Default hardware configuration ## The following hardware configuration is used by default when running models in an experiment: <!-- <table> --> | Capacity type | Capacity units per hour (CUH) | | ------------------- | ----------------------------- | | 2 vCPU and 8 GB RAM | 6 | <!-- </table ""> --> The CUH is consumed only when the model is running and not when you are adding data or editing your model\. You can also switch to any other experiment environment as required\. See the [Decision Optimization plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html#do) for a list of environments for Decision Optimization experiments\. For more information on how to configure Decision Optimization experiment environments, see [Configuring environments](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/configureEnvironments.html)\. ## Compute usage in projects ## Decision Optimization experiments consume compute resources as CUH from the Watson Machine Learning service\. You can monitor the total monthly amount of CUH consumption for the Watson Machine Learning service on the **Resource usage** page on the **Manage** tab of your project\. ## Learn more ## <!-- <ul> --> * [Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html) * [Watson Machine Learning plans and compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html) * [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) <!-- </ul> --> **Parent topic:**[Choosing compute resources for tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) <!-- </article "role="article" "> -->
17AC1BECAE0867381BC236D4C0CC8FC4B8921A0A
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-fm-tuning.html?context=cdpaas&locale=en
Compute resource options for Tuning Studio experiments in projects
# Compute resource options for Tuning Studio experiments in projects #

A Tuning Studio experiment has a single hardware configuration. The following table shows the hardware configuration that is used when tuning foundation models in a tuning experiment.

Hardware configuration available in projects for Tuning Studio

| Capacity type        | Capacity units per hour |
| -------------------- | ----------------------- |
| NVIDIA A100 80GB GPU | 43                      |

## Compute usage in projects ##

Tuning Studio consumes compute resources as CUH from the Watson Machine Learning service.

You can monitor the total monthly amount of CUH consumption for the Watson Machine Learning service on the **Resource usage** page on the **Manage** tab of your project.

## Learn more ##

* [Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html)
* [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)
* [Compute resource options for assets and deployments in spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-cuh-deploy-spaces.html)
* [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html)

**Parent topic:** [Choosing compute resources for tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html)
9DAE797269714235C8D9287B5D358BCF72E2C9F5
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/score-guides.html?context=cdpaas&locale=en
SPSS predictive analytics algorithms for scoring
# SPSS predictive analytics algorithms for scoring #

A PMML-compliant scoring engine supports:

* PMML-compliant models (4.2 and earlier versions) produced by various vendors, except for Baseline Model, ScoreCard Model, Sequence Model, and Text Model. Refer to the [Data Mining Group (DMG) web site](http://www.dmg.org/) for a list of supported models.
* Non-PMML models produced by IBM SPSS products: Discriminant and Bayesian networks
* PMML 4.2 transformations completely

Different kinds of models can produce various scoring results. For example:

* Classification models (those with a categorical target: Bayes Net, General Regression, Mining, Naive Bayes, k-Nearest Neighbor, Neural Network, Regression, Ruleset, Support Vector Machine, and Tree) produce:

    * Predicted values
    * Probabilities
    * Confidence values

* Regression models (those with a continuous target: General Regression, Mining, k-Nearest Neighbor, Neural Network, Regression, and Tree) produce predicted values; some also produce standard errors.
* Cox regression (in General Regression) produces predicted survival probability and cumulative hazard values.
* Tree models also produce Node ID.
* Clustering models produce Cluster ID and Cluster affinity.
* Anomaly Detection (represented as Clustering) produces anomaly index and top reasons.
* Association models produce Consequent, Rule ID, and confidence for top matching rules.

**Python example code:**

    from spss.ml.score import Score

    with open("linear.pmml") as reader:
        pmmlString = reader.read()

    score = Score().fromPMML(pmmlString)
    scoredDf = score.transform(data)
    scoredDf.show()

**Parent topic:** [SPSS predictive analytics algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-algorithms.html)
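The scoring example above assumes that `data` is already a Spark DataFrame whose columns match the input fields of the PMML model. As a hypothetical illustration only (the column names and values are invented), such a DataFrame could be created in a Spark with Python notebook like this:

    # Hypothetical setup for the scoring example above: build the Spark
    # DataFrame that is passed to score.transform(). The column names and
    # values are invented; use the input fields that your PMML model expects.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    data = spark.createDataFrame(
        [(5.1, 3.5), (6.2, 2.9)],
        ["x1", "x2"],
    )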
2A3F647A7F4EB8FB4270D4E78245F18BFDE29AD8
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/share-notebooks.html?context=cdpaas&locale=en
Sharing notebooks with a URL
# Sharing notebooks with a URL # You can create a URL to share the last saved version of a notebook on social media or with people outside of Watson Studio\. The URL shows a read\-only view of the notebook\. Anyone who has the URL can view or download the notebook\. **Required permissions:** You must have the **Admin** or **Editor** role in the project to share a notebook URL\. The shared notebook shows the author of the shared version and when the notebook version was last updated\. ## Sharing a notebook URL ## To share a notebook URL: <!-- <ol> --> 1. Open the notebook in edit mode\. 2. If necessary, add code to [hide sensitive code cells](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/hide_code.html)\. 3. Create a saved version of the notebook by clicking **File > Save Version**\. 4. Click the **Share** icon (![Share icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/share_icon.png)) from the notebook action bar\. ![Share notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/share_notebook.png) 5. Select to share the link\. 6. Choose a sharing option: <!-- <ul> --> * Choose **Only text and output** to hide all code cells. * Choose **All content excluding sensitive code cells** to hide code cells that you marked as sensitive. * Choose **All content, including code** to show everything, even code cells that you marked as sensitive. Make sure that you remove your credential and other sensitive information before you choose this option and every time before you save a new version of the notebook. <!-- </ul> --> 7. Copy the link or choose a social media site on which to share the URL\. <!-- </ol> --> Note: The URL remains valid while the project and notebook exist and while the notebook is shared\. If you [unshare the notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/share-notebooks.html?context=cdpaas&locale=en#unsharing), the URL becomes invalid\. When you unshare, and then re\-share the notebook, the URL will be the same again\. ## Updating a shared notebook ## To update a shared notebook: <!-- <ol> --> 1. Open the notebook in edit mode\. 2. Make changes to the notebook\. 3. Create a new version of the notebook by clicking **File > Save Version**\. <!-- </ol> --> Note: Clicking File > Save saves your changes but it doesn't create a new version of the notebook; the shared URL still points to the older version of the notebook\. ## Unsharing a notebook URL ## To unshare a notebook URL: <!-- <ol> --> 1. Open the notebook in edit mode\. 2. Click the **Share** icon (![Share icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/share_icon.png)) from the notebook action bar\. ![Share notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/images/share_notebook.png) 3. Unselect the **Share with anyone who has the link** toggle\. <!-- </ol> --> **Parent topic:**[Managing the lifecycle of notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-nb-lifecycle.html) <!-- </article "role="article" "> -->
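If you share with the **All content excluding sensitive code cells** option, the sensitive cells must be marked in the notebook itself. In Watson Studio notebooks this is typically done by starting a cell with the `# @hidden_cell` comment; see the linked topic on hiding sensitive code cells for the authoritative details. A minimal illustration (the credential values are placeholders):

    # @hidden_cell
    # This cell is marked as sensitive, so it can be hidden when the notebook
    # is shared with the "All content excluding sensitive code cells" option.
    credentials = {
        "apikey": "***",  # placeholder only -- never share real credentials
        "url": "https://example.cloud.ibm.com",
    }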
C0E0C248B3934E34883814B5F9CEB792D734042A
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html?context=cdpaas&locale=en
Compute resource options for Data Refinery in projects
# Compute resource options for Data Refinery in projects # When you create or edit a Data Refinery flow in a project, you use the `Default Data Refinery XS` runtime environment\. However, when you run a Data Refinery flow in a job, you choose an environment template for the runtime environment\. The environment template specifies the type, size, and power of the hardware configuration, plus the software template\. <!-- <ul> --> * [Types of environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html?context=cdpaas&locale=en#types) * [Default environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html?context=cdpaas&locale=en#default) * [Compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html?context=cdpaas&locale=en#compute) * [Changing the runtime](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html?context=cdpaas&locale=en#change-env) * [Runtime logs for jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html?context=cdpaas&locale=en#logs) <!-- </ul> --> ## Types of environments ## You can use these types of environments with Data Refinery: <!-- <ul> --> * `Default Data Refinery XS` runtime environment for running jobs on small data sets\. * Spark environments for running jobs on larger data sets\. The Spark environments have [default environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html?context=cdpaas&locale=en#default) so you can get started quickly\. Otherwise, you can [create custom environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html) for Spark environments\. You should use a Spark & R environment only if you are working on a large data set\. If your data set is small, you should select the `Default Data Refinery XS` runtime\. The reason is that, although the SparkR cluster in a Spark & R environment is fast and powerful, it requires time to create, which is noticeable when you run a Data Refinery job on small data set\. <!-- </ul> --> ## Default environment templates ## When you work in Data Refinery, the `Default Data Refinery XS` environment runtime is started and appears as an active runtime under **Tool runtimes** on the **Environments** page on the **Manage** tab of your project\. This runtime stops after an hour of inactivity in the Data Refinery interface\. However, you can stop it manually under **Tool runtimes** on the **Environments** page\. When you create a job to run a Data Refinery flow in a project, you select an environment template\. After a runtime for a job is started, it is listed as an active runtime under **Tool runtimes** on the **Environments** page on the **Manage** tab of your project\. The runtime for a job stops when the Data Refinery job stops running\. Compute usage is tracked by capacity unit hours (CUH)\. 
<!-- <table> --> Preset environment templates available in projects for Data Refinery | Name | Hardware configuration | Capacity units per hour (CUH) | | --------------------------- | ----------------------------------------------------------------------- | ----------------------------- | | Default Data Refinery XS | 3 vCPU and 12 GB RAM | 1\.5 | | Default Spark 3\.3 & R 4\.2 | 2 Executors each: 1 vCPU and 4 GB RAM; <br>Driver: 1 vCPU and 4 GB RAM | 1\.5 | <!-- </table ""> --> All default environment templates for Data Refinery are HIPAA ready\. The Spark default environment templates are listed under **Templates** on the **Environments** page on the **Manage** tab of your project\. ## Compute usage in projects ## You can monitor the Watson Studio CUH consumption on the **Resource usage** page on the **Manage** tab of your project\. ## Changing the runtime ## You can't change the runtime for working in Data Refinery\. You can change the runtime for a Data Refinery flow job by editing the job template\. See [Creating jobs in Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html#create-jobs-in-dr)\. ## Runtime logs for jobs ## To view the accumulated logs for a Data Refinery job: <!-- <ol> --> 1. From the project's **Jobs** page, click the job that ran the Data Refinery flow for which you want to see logs\. 2. Click the job run\. You can view the log tail or download the complete log file\. <!-- </ol> --> ## Next steps ## <!-- <ul> --> * [Creating your own environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html) * [Creating jobs in Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html#create-jobs-in-dr) * [Stopping active runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html#stop-active-runtimes) <!-- </ul> --> ## Learn more ## <!-- <ul> --> * [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) <!-- </ul> --> **Parent topic:**[Choosing compute resources for tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) <!-- </article "role="article" "> -->
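Because both default Data Refinery templates in the table above are billed at 1.5 CUH per hour, you can estimate consumption from how long the interactive runtime stays active and how long flow jobs run. The following minimal Python sketch only illustrates that arithmetic; the durations are hypothetical example values, and the names are not part of any Watson Studio API.

    # Minimal sketch (not a Watson Studio API): estimate Data Refinery CUH
    # consumption from the 1.5 CUH-per-hour rates in the table above.
    # The durations are hypothetical example values.

    XS_RUNTIME_CUH_PER_HOUR = 1.5   # Default Data Refinery XS (interactive use)
    SPARK_JOB_CUH_PER_HOUR = 1.5    # Default Spark 3.3 & R 4.2 (flow jobs)

    interactive_hours = 2.0         # time the XS runtime stays active
    job_hours = 0.5                 # run time of a Data Refinery flow job

    total_cuh = (interactive_hours * XS_RUNTIME_CUH_PER_HOUR
                 + job_hours * SPARK_JOB_CUH_PER_HOUR)
    print(total_cuh)                # -> 3.75 CUH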
6F544922DE2638796837398F7EC15A4AFE6B0781
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-algorithms.html?context=cdpaas&locale=en
SPSS predictive analytics algorithms
# SPSS predictive analytics algorithms # You can use the following SPSS predictive analytics algorithms in your notebooks\. Code samples are provided for Python notebooks\. Notebooks must run in a Spark with Python environment runtime\. To run the algorithms described in this section, you don't need the SPSS Modeler service\. <!-- <ul> --> * [Data preparation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/datapreparation-guides.html) * [Classification and regression](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/classificationandregression-guides.html) * [Clustering](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/clustering-guides.html) * [Forecasting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/forecasting-guides.html) * [Survival analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/survivalanalysis-guides.html) * [Score](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/score-guides.html) <!-- </ul> --> **Parent topic:**[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html) <!-- </article "role="article" "> -->
54029DD42BAE3A23D68D928AC3B6C04D0C735DEC
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-envs.html?context=cdpaas&locale=en
Compute resource options for SPSS Modeler in projects
# Compute resource options for SPSS Modeler in projects # When you run an SPSS Modeler flow in a project, you choose an environment template for the runtime environment\. The environment template specifies the type, size, and power of the hardware configuration, plus the software template\. <!-- <ul> --> * [Types of environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-envs.html?context=cdpaas&locale=en#types_spss) * [Default environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-envs.html?context=cdpaas&locale=en#default_spss) * [Compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-envs.html?context=cdpaas&locale=en#compute_spss) * [Changing the runtime](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-envs.html?context=cdpaas&locale=en#change-env_spss) <!-- </ul> --> ## Types of environments ## You can use this type of environment with SPSS Modeler: <!-- <ul> --> * Default SPSS Modeler CPU environments for standard workloads <!-- </ul> --> ## Default environment templates ## You can select any of the following default environment templates for SPSS Modeler in a project\. The included environment templates are listed under **Templates** on the **Environments** page on the **Manage** tab of your project\. <!-- <table> --> Default SPSS Modeler environment templates | Name | Hardware configuration | Local storage | CUH rate per hour | | ------------------------ | ---------------------- | ------------- | ----------------- | | `Default SPSS Modeler S` | 2 vCPU and 8 GB RAM | 128 GB | 1 | | `Default SPSS Modeler M` | 4 vCPU and 16 GB RAM | 128 GB | 2 | | `Default SPSS Modeler L` | 6 vCPU and 24 GB RAM | 128 GB | 3 | <!-- </table ""> --> After selecting an environment, any other SPSS Modeler flows opened in that project will use the same runtime\. The hardware configuration of the available SPSS Modeler environments is preset and cannot be changed\. ## Compute usage in projects ## SPSS Modeler consumes compute resources as CUH from the Watson Studio service in projects\. You can monitor the Watson Studio CUH consumption on the **Resource usage** page on the **Manage** tab of your project\. ## Changing the SPSS Modeler runtime ## If you notice that processing is very slow, you can restart SPSS Modeler and select a larger environment runtime\. To change the SPSS Modeler environment runtime: <!-- <ol> --> 1. Save any data from your current session before switching to another environment\. 2. Stop the active SPSS Modeler runtime under **Tool runtimes** on the **Environments** page on the **Manage** tab of your project\. 3. Restart SPSS Modeler and select another environment with the compute power and memory capacity that better meets your requirements\. <!-- </ol> --> ## Learn more ## <!-- <ul> --> * [SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html) <!-- </ul> --> **Parent topic:**[Choosing compute resources for tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) <!-- </article "role="article" "> -->
2D81FCD3E78A5CC7B435198A59522AE6BF8640ED
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/survivalanalysis-guides.html?context=cdpaas&locale=en
SPSS predictive analytics survival analysis algorithms in notebooks
# SPSS predictive analytics survival analysis algorithms in notebooks #

You can use non-parametric distribution fitting, parametric distribution fitting, or parametric regression modeling SPSS predictive analytics algorithms in notebooks.

## Non-Parametric Distribution Fitting ##

Survival analysis analyzes data where the outcome variable is the time until the occurrence of an event of interest. The distribution of the event times is typically described by a survival function. Non-parametric Distribution Fitting (NPDF) provides an estimate of the survival function without making any assumptions concerning the distribution of the data. NPDF includes Kaplan-Meier estimation, life tables, and specialized extension algorithms to support left censored, interval censored, and recurrent event data.

**Python example code:**

    from spss.ml.survivalanalysis import NonParametricDistributionFitting
    from spss.ml.survivalanalysis.params import DefinedStatus, Points, StatusItem

    npdf = NonParametricDistributionFitting(). \
        setAlgorithm("KM"). \
        setBeginField("time"). \
        setStatusField("status"). \
        setStrataFields(["treatment"]). \
        setGroupFields(["gender"]). \
        setUndefinedStatus("INTERVALCENSORED"). \
        setDefinedStatus(
            DefinedStatus(
                failure=StatusItem(points=Points("1")),
                rightCensored=StatusItem(points=Points("0")))). \
        setOutMeanSurvivalTime(True)

    npdfModel = npdf.fit(df)
    predictions = npdfModel.transform(data)
    predictions.show()

## Parametric Distribution Fitting ##

Survival analysis analyzes data where the outcome variable is the time until the occurrence of an event of interest. The distribution of the event times is typically described by a survival function. Parametric Distribution Fitting (PDF) provides an estimate of the survival function by comparing the functions for several known distributions (exponential, Weibull, log-normal, and log-logistic) to determine which, if any, describes the data best. In addition, the distributions for two or more groups of cases can be compared.

**Python example code:**

    from spss.ml.survivalanalysis import ParametricDistributionFitting
    from spss.ml.survivalanalysis.params import DefinedStatus, Points, StatusItem

    pdf = ParametricDistributionFitting(). \
        setBeginField("begintime"). \
        setEndField("endtime"). \
        setStatusField("status"). \
        setFreqField("frequency"). \
        setDefinedStatus(
            DefinedStatus(
                failure=StatusItem(points=Points("F")),
                rightCensored=StatusItem(points=Points("R")),
                leftCensored=StatusItem(points=Points("L")))
        ). \
        setMedianRankEstimation("RRY"). \
        setMedianRankObtainMethod("BetaFDistribution"). \
        setStatusConflictTreatment("DERIVATION"). \
        setEstimationMethod("MRR"). \
        setDistribution("Weibull"). \
        setOutProbDensityFunc(True). \
        setOutCumDistFunc(True). \
        setOutSurvivalFunc(True). \
        setOutRegressionPlot(True). \
        setOutMedianRankRegPlot(True). \
        setComputeGroupComparison(True)

    pdfModel = pdf.fit(data)
    predictions = pdfModel.transform(data)
    predictions.show()

## Parametric regression modeling ##

Parametric regression modeling (PRM) is a survival analysis technique that incorporates the effects of covariates on the survival times. PRM includes two model types: accelerated failure time and frailty. Accelerated failure time models assume that the relationship of the logarithm of survival time and the covariates is linear. Frailty, or random effects, models are useful for analyzing recurrent events, correlated survival data, or when observations are clustered into groups. PRM automatically selects the survival time distribution (exponential, Weibull, log-normal, or log-logistic) that best describes the survival times.

**Python example code:**

    from spss.ml.survivalanalysis import ParametricRegression
    from spss.ml.survivalanalysis.params import DefinedStatus, Points, StatusItem

    prm = ParametricRegression(). \
        setBeginField("startTime"). \
        setEndField("endTime"). \
        setStatusField("status"). \
        setPredictorFields(["age", "surgery", "transplant"]). \
        setDefinedStatus(
            DefinedStatus(
                failure=StatusItem(points=Points("0.0")),
                intervalCensored=StatusItem(points=Points("1.0"))))

    prmModel = prm.fit(data)
    PMML = prmModel.toPMML()
    statXML = prmModel.statXML()
    predictions = prmModel.transform(data)
    predictions.show()

**Parent topic:** [SPSS predictive analytics algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-algorithms.html)
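The survival analysis examples above assume that `df` and `data` are Spark DataFrames containing the fields referenced by the setters. As a purely hypothetical illustration (the rows and values are invented), an input for the NPDF example could be built like this:

    # Hypothetical input for the NPDF example above: a Spark DataFrame with the
    # fields referenced by the setters (time, status, treatment, gender).
    # The rows are invented sample values.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    df = spark.createDataFrame(
        [
            (12.0, "1", "drugA", "F"),
            (30.0, "0", "drugB", "M"),
            (45.0, "1", "drugA", "M"),
        ],
        ["time", "status", "treatment", "gender"],
    )
    data = df  # the example scores the same data that was used for fitting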
FAE139F839DAB4C6EB794D689DACCEFF869C718F
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/switch-platform-space.html?context=cdpaas&locale=en
Switching the platform for a space
# Switching the platform for a space #

You can switch the platform for some spaces between Cloud Pak for Data as a Service and watsonx. When you switch a space to another platform, you can use the tools that are specific to that platform. For example, you might switch an existing space from Cloud Pak for Data as a Service to watsonx to consolidate your collaborative work on one platform. See [Comparison between watsonx and Cloud Pak for Data as a Service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html).

Note: You cannot promote Prompt Lab assets created with foundation model inferencing to a space.

* [Requirements](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/switch-platform-space.html?context=cdpaas&locale=en#requirements)
* [Restrictions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/switch-platform-space.html?context=cdpaas&locale=en#restrictions)
* [What happens when you switch a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/switch-platform-space.html?context=cdpaas&locale=en#consequences)
* [Switch the platform for a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/switch-platform-space.html?context=cdpaas&locale=en#move-one)

## Requirements ##

You can switch a space from one platform to the other if you have the required accounts and permissions.

**Required accounts**
: You must be signed up for both Cloud Pak for Data as a Service and watsonx.

**Required permissions**
: You must have the **Admin** role in the space that you want to switch.

**Required services**
: The current account that you are working in must have both of these services provisioned:
  - Watson Studio
  - Watson Machine Learning

## Restrictions ##

To switch a space from Cloud Pak for Data as a Service to watsonx, all the assets in the space must be supported by both platforms. Spaces that contain only the following asset types are eligible to switch from Cloud Pak for Data as a Service to watsonx:

* Connected data asset
* Connection
* Data asset from a file
* Deployment
* Jupyter notebook
* Model
* Python function
* Script

You can't switch a space that contains assets that are specific to Cloud Pak for Data as a Service. If you add any assets that you created with services other than Watson Studio and Watson Machine Learning to a space, you can't switch that space to watsonx.

Although Pipelines assets are supported in both Cloud Pak for Data as a Service and watsonx spaces, you can't switch a space that contains pipeline assets because pipelines can reference unsupported assets.

For more information about asset types, see [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html).

## What happens when you switch the platform for a space ##

Switching a space between platforms has the following effects:

**Collaborators**
: Collaborators in the space receive notifications of the switch on the original platform. If any collaborators do not have accounts for the destination platform, those collaborators can no longer access the space.

**Jobs**
: Scheduled jobs are retained. Any jobs that are running at the time of the switch continue until completion on the original platform. Any jobs that are scheduled for times after the switch run on the destination platform. Job history is not retained.

**Environments**
: Custom hardware and software specifications are retained.

**Space history**
: Recent activity and asset activities are not retained.

**Resource usage**
: Resource usage is cumulative because you continue to use the same service instances.

**Storage**
: The space's IBM Cloud Object Storage bucket remains the same.

## Switch the platform for a space ##

You can switch the platform for a space from within the space on the original platform. You can switch between Cloud Pak for Data as a Service and watsonx.

To switch the platform for a space:

1. From the space that you want to switch, open the **Manage** tab, select the **General** page, and in the **Controls** section, click **Switch platform**. If you don't see a **Switch platform** button or the button is not active, you can't switch the space.
2. Select the destination platform and click **Switch platform**.

## Learn more ##

* [Comparison between watsonx and Cloud Pak for Data as a Service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html)
* [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html)

**Parent topic:** [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html)
384EB2033AD74EA7044AFC8BF1DDB06FF392CB08
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/synthetic-envs.html?context=cdpaas&locale=en
Compute resource options for Synthetic Data Generator in projects
# Compute resource options for Synthetic Data Generator in projects #

To create data with the Synthetic Data Generator, you must have the Watson Studio and Watson Machine Learning services provisioned. Running a synthetic data flow consumes compute resources from the Watson Studio service.

### Capacity units per hour for Synthetic Data Generator ###

| Capacity type       | Capacity units per hour |
| ------------------- | ----------------------- |
| 2 vCPU and 8 GB RAM | 7                       |

## Compute usage in projects ##

Running a synthetic data flow consumes compute resources from the Watson Studio service. You can monitor the total monthly amount of CUH consumption for Watson Studio on the **Resource usage** page on the **Manage** tab of your project.

## Learn more ##

* [Synthetic data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html)
* [Watson Machine Learning service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)
* [Watson Studio service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html)
* [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html)

**Parent topic:** [Choosing compute resources for tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html)
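For a rough sense of how the rate in the table earlier in this topic translates into billed capacity unit hours (CUH), the following sketch multiplies run time by the 7 CUH per hour rate for the 2 vCPU and 8 GB RAM capacity type. The run times are hypothetical values used only for illustration; actual consumption depends on how long each synthetic data flow runs.

```python
# Rough CUH estimate for Synthetic Data Generator runs.
# The rate comes from the capacity table in this topic
# (2 vCPU and 8 GB RAM = 7 CUH per hour); the run times below
# are hypothetical examples, not measurements.

CUH_PER_HOUR = 7  # Synthetic Data Generator capacity type: 2 vCPU and 8 GB RAM

def estimate_cuh(runtime_hours: float, rate: float = CUH_PER_HOUR) -> float:
    """Return the approximate capacity unit hours consumed by one flow run."""
    return runtime_hours * rate

# Example: three hypothetical flow runs of 0.25, 0.5, and 1.5 hours
runs = [0.25, 0.5, 1.5]
total = sum(estimate_cuh(hours) for hours in runs)
print(f"Estimated CUH from these runs: {total}")  # 15.75 CUH
```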
8A411252B81F0E159C1F63EE64F63A987D1BEF9F
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/token.html?context=cdpaas&locale=en
Manually adding the project access token
# Manually adding the project access token #

All projects have an authorization token that is used to access data assets, for example files and connections, and is used by platform APIs. This token is called the project access token, or simply the access token in the project user interface. The project access token must be set in notebooks so that project and platform functions can access the project resources.

When you load data into your notebook by clicking **Read data** on the Code snippets pane and then selecting the asset and the load option, the project access token is added for you if the generated code that is inserted uses project functions. However, when you use API functions in your notebook that require the project token, for example, if you're using `Wget` to access data by using the HTTP, HTTPS, or FTP protocols, or the `ibm-watson-studio-lib` library, you must add the project access token to the notebook yourself.

To add a project access token to a notebook if you are not using the generated code:

1. From the **Manage** tab, select **Access Control** and click **New access token** under **Access tokens**. Only project administrators can create project access tokens. Enter a name and select the access role. To use API functions in a notebook, the access token must have the **Editor** access role. An access token with the **Viewer** access role enables only read access to a notebook.
2. Add the project access token to a notebook by clicking **More > Insert project token** from the notebook action bar. Running the inserted hidden code cell creates a project object that you can use for functions in the `ibm-watson-studio-lib` library. For example, to get the name of the current project, run `project.get_name()` (a minimal usage sketch follows this topic). For details on the available `ibm-watson-studio-lib` functions, see [Accessing project assets with ibm-watson-studio-lib](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/using-ibm-ws-lib.html).

Note that a project administrator can revoke a project access token at any time. An access token has no expiration date and is valid until it is revoked.

**Parent topic:** [Loading and accessing data in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html)
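To make step 2 concrete, here is a minimal sketch of a notebook cell that uses the `project` object created by the inserted hidden code cell. The sketch assumes that you already clicked **More > Insert project token** and ran the generated hidden cell; `project.get_name()` is the call described in this topic, and the error handling is added only for illustration. For the full set of available functions, see the `ibm-watson-studio-lib` documentation linked in this topic.

```python
# Minimal usage sketch - assumes the hidden cell generated by
# "More > Insert project token" was already run, which defines an
# authorized `project` object. Keep the token in the hidden cell;
# do not paste it into visible cells.

try:
    project_name = project.get_name()  # returns the name of the current project
    print(f"Working in project: {project_name}")
except NameError:
    print("No `project` object found. Insert and run the project token cell first.")
```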