doc_id (stringlengths 40–40) | url (stringlengths 90–160) | title (stringlengths 5–96) | document (stringlengths 24–62.1k) | md_document (stringlengths 63–109k) |
---|---|---|---|---|
2EF8007555BC60CD700BA44ECC0FAFAA024F4BC0 | https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/jailbreaking.html?context=cdpaas&locale=en | {{ document.title.text }} | Jailbreaking
Risks associated with input: Inference, Multi-category, Amplified
Description
An attack that attempts to break through the guardrails established in the model is known as jailbreaking.
Why is jailbreaking a concern for foundation models?
Jailbreaking attacks can be used to alter model behavior and benefit the attacker. If not properly controlled, business entities can face fines, reputational harm, and other legal consequences.
Example
Bypassing LLM guardrails
A [study](https://arxiv.org/abs/2307.15043) from researchers at Carnegie Mellon University, The Center for AI Safety, and the Bosch Center for AI claims to have discovered a simple prompt addendum that allowed the researchers to trick models into answering dangerous or sensitive questions. The addendum is simple enough to be automated and works against a wide range of commercial and open-source products, including ChatGPT, Google Bard, Meta’s LLaMA, Vicuna, Claude, and others. According to the paper, the researchers were able to use the additions to reliably coax forbidden answers from Vicuna (99%), ChatGPT 3.5 and 4.0 (up to 84%), and PaLM-2 (66%).
Sources:
[SC Magazine, July 2023](https://www.scmagazine.com/news/researchers-find-universal-jailbreak-prompts-for-multiple-ai-chat-models)
[The New York Times, July 2023](https://www.nytimes.com/2023/07/27/business/ai-chatgpt-safety-research.html)
Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
| # Jailbreaking #
Risks associated with input: Inference, Multi\-category, Amplified
### Description ###
An attack that attempts to break through the guardrails established in the model is known as jailbreaking\.
### Why is jailbreaking a concern for foundation models? ###
Jailbreaking attacks can be used to alter model behavior and benefit the attacker\. If not properly controlled, business entities can face fines, reputational harm, and other legal consequences\.
Example
#### Bypassing LLM guardrails ####
A [study](https://arxiv.org/abs/2307.15043) from researchers at Carnegie Mellon University, The Center for AI Safety, and the Bosch Center for AI claims to have discovered a simple prompt addendum that allowed the researchers to trick models into answering dangerous or sensitive questions\. The addendum is simple enough to be automated and works against a wide range of commercial and open\-source products, including ChatGPT, Google Bard, Meta’s LLaMA, Vicuna, Claude, and others\. According to the paper, the researchers were able to use the additions to reliably coax forbidden answers from Vicuna (99%), ChatGPT 3\.5 and 4\.0 (up to 84%), and PaLM\-2 (66%)\.
Sources:
[SC Magazine, July 2023](https://www.scmagazine.com/news/researchers-find-universal-jailbreak-prompts-for-multiple-ai-chat-models)
[The New York Times, July 2023](https://www.nytimes.com/2023/07/27/business/ai-chatgpt-safety-research.html)
**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
|
807D82C6EEEBD0513A794637EBD90CAA19F318E7 | https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/membership-inference-attack.html?context=cdpaas&locale=en | {{ document.title.text }} | Membership inference attack
Risks associated with input: Inference, Privacy, Traditional
Description
Given a trained model and a data sample, an attacker appropriately samples the input space, observing outputs to deduce whether that sample was part of the model's training. This is known as a membership inference attack.
Why is membership inference attack a concern for foundation models?
Identifying whether a data sample was used for training data can reveal what data was used to train a model, possibly giving competitors insight into how a model was trained and the opportunity to replicate the model or tamper with it.
Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
| # Membership inference attack #
Risks associated with input: Inference, Privacy, Traditional
### Description ###
Given a trained model and a data sample, an attacker appropriately samples the input space, observing outputs to deduce whether that sample was part of the model's training\. This is known as a membership inference attack\.
### Why is membership inference attack a concern for foundation models? ###
Identifying whether a data sample was used for training data can reveal what data was used to train a model, possibly giving competitors insight into how a model was trained and the opportunity to replicate the model or tamper with it\.
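As an illustration only (not part of the risk atlas), the following minimal sketch shows the intuition behind a confidence-based membership inference test against an intentionally overfit classifier. The data is synthetic, all names are placeholders, and it assumes scikit-learn and NumPy are available.

    # Illustrative sketch: confidence-based membership inference against an overfit model.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_member, X_nonmember, y_member, y_nonmember = train_test_split(X, y, test_size=0.5, random_state=0)

    # A fully grown decision tree memorizes its training data, which is what the attack exploits.
    model = DecisionTreeClassifier(random_state=0).fit(X_member, y_member)

    def true_label_confidence(model, X, y):
        # Probability the model assigns to each sample's true label.
        proba = model.predict_proba(X)
        return proba[np.arange(len(y)), y]

    # The attacker guesses "member" when the model is highly confident about the true label.
    threshold = 0.99
    for name, Xs, ys in [("members", X_member, y_member), ("non-members", X_nonmember, y_nonmember)]:
        flagged = (true_label_confidence(model, Xs, ys) >= threshold).mean()
        print(f"{name}: {flagged:.0%} flagged as training members")

The gap between the two flagged rates is the membership signal; real attacks refine the threshold, for example with shadow models.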
**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
|
A6FF5C1E6CF7C4BA30F191DF892DC3296F9B8CE3 | https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/non-disclosure.html?context=cdpaas&locale=en | {{ document.title.text }} | Non-disclosure
Risks associated with output: Misuse, New
Description
Not disclosing that content is generated by an AI model is the risk of non-disclosure.
Why is non-disclosure a concern for foundation models?
Not disclosing that content is AI-authored reduces trust and is deceptive. Intentional deception might result in fines, reputational harms, and other legal consequences.
Example
Undisclosed AI Interaction
According to the source, an online emotional support chat service ran a study that used GPT-3 to augment or write responses to around 4,000 users without informing them. The co-founder faced immense public backlash over the potential for harm that AI-generated chats could cause to already vulnerable users. He claimed that the study was "exempt" from informed consent law.
Sources:
[Business Insider, Jan 2023](https://www.businessinsider.com/company-using-chatgpt-mental-health-support-ethical-issues-2023-1)
Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
| # Non\-disclosure #
Risks associated with output: Misuse, New
### Description ###
Not disclosing that content is generated by an AI model is the risk of non\-disclosure\.
### Why is non\-disclosure a concern for foundation models? ###
Not disclosing that content is AI\-authored reduces trust and is deceptive\. Intentional deception might result in fines, reputational harms, and other legal consequences\.
Example
#### Undisclosed AI Interaction ####
According to the source, an online emotional support chat service ran a study that used GPT\-3 to augment or write responses to around 4,000 users without informing them\. The co\-founder faced immense public backlash over the potential for harm that AI\-generated chats could cause to already vulnerable users\. He claimed that the study was "exempt" from informed consent law\.
Sources:
[Business Insider, Jan 2023](https://www.businessinsider.com/company-using-chatgpt-mental-health-support-ethical-issues-2023-1)
**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
|
589D9B0A7150AF5485E6F7452EB39D15ADDB35F9 | https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/nonconsensual-use.html?context=cdpaas&locale=en | {{ document.title.text }} | Nonconsensual use
Risks associated with output: Misuse, Amplified
Description
The possibility that a model could be misused to imitate others through video (deepfakes), images, audio, or other modalities without their consent is the risk of nonconsensual use.
Why is nonconsensual use a concern for foundation models?
Intentionally imitating others for the purposes of deception without their consent is unethical and might be illegal. A model that has this potential must be properly governed. Otherwise, business entities could face fines, reputational harms, and other legal consequences.
Example
FBI Warning on Deepfakes
The FBI recently warned the public of malicious actors creating synthetic, explicit content “for the purposes of harassing victims or sextortion schemes”. They noted that advancements in AI have made this content higher quality, more customizable, and more accessible than ever.
Sources:
[FBI, June 2023](https://www.ic3.gov/Media/Y2023/PSA230605)
Example
Deepfakes
A deepfake is an audio or video recording in which the person speaking is generated by AI rather than being the actual person.
Sources:
[CNN, January 2019](https://www.cnn.com/interactive/2019/01/business/pentagons-race-against-deepfakes/)
Example
Misleading Voicebot Interaction
The article cited a case where a deepfake voice was used to scam a CEO out of $243,000. The CEO believed he was on the phone with his boss, the chief executive of his firm’s parent company, when he followed the orders to transfer €220,000 (approximately $243,000) to the bank account of a Hungarian supplier.
Sources:
[Forbes, September 2019](https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/?sh=10432a7d2241)
Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
| # Nonconsensual use #
Risks associated with output: Misuse, Amplified
### Description ###
The possibility that a model could be misused to imitate others through video (deepfakes), images, audio, or other modalities without their consent is the risk of nonconsensual use\.
### Why is nonconsensual use a concern for foundation models? ###
Intentionally imitating others for the purposes of deception without their consent is unethical and might be illegal\. A model that has this potential must be properly governed\. Otherwise, business entities could face fines, reputational harms, and other legal consequences\.
Example
#### FBI Warning on Deepfakes ####
The FBI recently warned the public of malicious actors creating synthetic, explicit content “for the purposes of harassing victims or sextortion schemes”\. They noted that advancements in AI have made this content higher quality, more customizable, and more accessible than ever\.
Sources:
[FBI, June 2023](https://www.ic3.gov/Media/Y2023/PSA230605)
Example
#### Deepfakes ####
A deepfake is an audio or video recording in which the person speaking is generated by AI rather than being the actual person\.
Sources:
[CNN, January 2019](https://www.cnn.com/interactive/2019/01/business/pentagons-race-against-deepfakes/)
Example
#### Misleading Voicebot Interaction ####
The article cited a case where a deepfake voice was used to scam a CEO out of $243,000\. The CEO believed he was on the phone with his boss, the chief executive of his firm’s parent company, when he followed the orders to transfer €220,000 (approximately $243,000) to the bank account of a Hungarian supplier\.
Sources:
[Forbes, September 2019](https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/?sh=10432a7d2241)
**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
|
C2DA4BDE14D0A2DA1E0E2D795E7DC7469F422DB9 | https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/output-bias.html?context=cdpaas&locale=en | {{ document.title.text }} | Output bias
Risks associated with output: Fairness, New
Description
Generated model content might unfairly represent certain groups or individuals. For example, a large language model might unfairly stigmatize or stereotype specific persons or groups.
Why is output bias a concern for foundation models?
Bias can harm users of the AI models and magnify existing exclusive behaviors. Business entities can face reputational harms and other consequences.
Example
Biased Generated Images
Lensa AI is a mobile app with generative features trained on Stable Diffusion that can generate “Magic Avatars” based on images users upload of themselves. According to the source report, some users discovered that generated avatars are sexualized and racialized.
Sources:
[Business Insider, January 2023](https://www.businessinsider.com/lensa-ai-raises-serious-concerns-sexualization-art-theft-data-2023-1)
Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
| # Output bias #
Risks associated with output: Fairness, New
### Description ###
Generated model content might unfairly represent certain groups or individuals\. For example, a large language model might unfairly stigmatize or stereotype specific persons or groups\.
### Why is output bias a concern for foundation models? ###
Bias can harm users of the AI models and magnify existing exclusive behaviors\. Business entities can face reputational harms and other consequences\.
Example
#### Biased Generated Images ####
Lensa AI is a mobile app with generative features trained on Stable Diffusion that can generate “Magic Avatars” based on images users upload of themselves\. According to the source report, some users discovered that generated avatars are sexualized and racialized\.
Sources:
[Business Insider, January 2023](https://www.businessinsider.com/lensa-ai-raises-serious-concerns-sexualization-art-theft-data-2023-1)
**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
|
9CAD0018634FF820D32F3FE714194D4BD42C5386 | https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/personal-information-in-data.html?context=cdpaas&locale=en | {{ document.title.text }} | Personal information in data
Risks associated with input: Training and tuning phase, Privacy, Traditional
Description
Inclusion or presence of personal identifiable information (PII) and sensitive personal information (SPI) in the data used for training or fine-tuning the model might result in unwanted disclosure of that information.
Why is personal information in data a concern for foundation models?
If not properly developed to protect sensitive data, the model might expose personal information in the generated output. Additionally, personal or sensitive data must be reviewed and handled with respect to privacy laws and regulations, as business entities could face fines, reputational harms, and other legal consequences if found in violation.
Example
Training on Private Information
According to the article, Google and its parent company Alphabet were accused in a class-action lawsuit of misusing vast amounts of personal information and copyrighted material taken from what is described as hundreds of millions of internet users to train its commercial AI products, which include Bard, its conversational generative artificial intelligence chatbot. This follows similar lawsuits filed against Meta Platforms, Microsoft, and OpenAI over their alleged misuse of personal data.
Sources:
[Reuters, July 2023](https://www.reuters.com/legal/litigation/google-hit-with-class-action-lawsuit-over-ai-data-scraping-2023-07-11/)
[J.L. v. Alphabet Inc., July 2023](https://fingfx.thomsonreuters.com/gfx/legaldocs/myvmodloqvr/GOOGLE%20AI%20LAWSUIT%20complaint.pdf)
Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
| # Personal information in data #
Risks associated with input: Training and tuning phase, Privacy, Traditional
### Description ###
Inclusion or presence of personal identifiable information (PII) and sensitive personal information (SPI) in the data used for training or fine\-tuning the model might result in unwanted disclosure of that information\.
### Why is personal information in data a concern for foundation models? ###
If not properly developed to protect sensitive data, the model might expose personal information in the generated output\. Additionally, personal or sensitive data must be reviewed and handled with respect to privacy laws and regulations, as business entities could face fines, reputational harms, and other legal consequences if found in violation\.
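As an illustration only (not part of the risk atlas), the following minimal sketch screens text records for a few obvious PII patterns before they are used for training or fine-tuning. The regular expressions are simplistic placeholders, not a complete PII/SPI detector.

    import re

    # Simplistic example patterns; a real PII/SPI review needs far broader coverage.
    PII_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "us_phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
        "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def find_pii(text):
        # Return matches per category so flagged records can be reviewed or redacted.
        return {name: pattern.findall(text) for name, pattern in PII_PATTERNS.items()
                if pattern.findall(text)}

    record = "Contact Jane Doe at jane.doe@example.com or 555-123-4567."
    print(find_pii(record))  # {'email': ['jane.doe@example.com'], 'us_phone': ['555-123-4567']}

Records that return matches can then be redacted or excluded before the data is used.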
Example
#### Training on Private Information ####
According to the article, Google and its parent company Alphabet were accused in a class\-action lawsuit of misusing vast amounts of personal information and copyrighted material taken from what is described as hundreds of millions of internet users to train its commercial AI products, which include Bard, its conversational generative artificial intelligence chatbot\. This follows similar lawsuits filed against Meta Platforms, Microsoft, and OpenAI over their alleged misuse of personal data\.
Sources:
[Reuters, July 2023](https://www.reuters.com/legal/litigation/google-hit-with-class-action-lawsuit-over-ai-data-scraping-2023-07-11/)
[J\.L\. v\. Alphabet Inc\., July 2023](https://fingfx.thomsonreuters.com/gfx/legaldocs/myvmodloqvr/GOOGLE%20AI%20LAWSUIT%20complaint.pdf)
**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
|
2BAF01B064F3005647A010DF369CC49C6534FFB3 | https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/personal-information-in-output.html?context=cdpaas&locale=en | {{ document.title.text }} | Personal information in output
Risks associated with output: Privacy, New
Description
When personal identifiable information (PII) or sensitive personal information (SPI) are used in the training data, fine-tuning data, or as part of the prompt, models might reveal that data in the generated output.
Why is personal information in output a concern for foundation models?
Output data must be reviewed with respect to privacy laws and regulations, as business entities could face fines, reputational harms, and other legal consequences if found in violation of data privacy or usage laws.
Example
Exposure of personal information
Per the source article, ChatGPT suffered a bug that exposed the titles of active users' chat history to other users. Later, OpenAI shared that even more private data from a small number of users was exposed, including an active user's first and last name, email address, payment address, the last four digits of their credit card number, and credit card expiration date. In addition, it was reported that the payment-related information of 1.2% of ChatGPT Plus subscribers was also exposed in the outage.
Sources:
[The Hindu Business Line, March 2023](https://www.thehindubusinessline.com/info-tech/openai-admits-data-breach-at-chatgpt-private-data-of-premium-users-exposed/article66659944.ece)
Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
| # Personal information in output #
Risks associated with output: Privacy, New
### Description ###
When personal identifiable information (PII) or sensitive personal information (SPI) are used in the training data, fine\-tuning data, or as part of the prompt, models might reveal that data in the generated output\.
### Why is personal information in output a concern for foundation models? ###
Output data must be reviewed with respect to privacy laws and regulations, as business entities could face fines, reputational harms, and other legal consequences if found in violation of data privacy or usage laws\.
Example
#### Exposure of personal information ####
Per the source article, ChatGPT suffered a bug that exposed the titles of active users' chat history to other users\. Later, OpenAI shared that even more private data from a small number of users was exposed, including an active user's first and last name, email address, payment address, the last four digits of their credit card number, and credit card expiration date\. In addition, it was reported that the payment\-related information of 1\.2% of ChatGPT Plus subscribers was also exposed in the outage\.
Sources:
[The Hindu Business Line, March 2023](https://www.thehindubusinessline.com/info-tech/openai-admits-data-breach-at-chatgpt-private-data-of-premium-users-exposed/article66659944.ece)
**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
|
C709B8079F21DAA0EE315823A6713B556AC2789B | https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/personal-information-in-prompt.html?context=cdpaas&locale=en | {{ document.title.text }} | Personal information in prompt
Risks associated with input: Inference, Privacy, New
Description
Inclusion of personal information as a part of a generative model’s prompt, either through the system prompt design or through the inclusion of end user input, might later result in unintended reuse or disclosure of that personal information.
Why is personal information in prompt a concern for foundation models?
Prompt data might be stored or later used for other purposes like model evaluation and retraining. These types of data must be reviewed with respect to privacy laws and regulations. Without proper data storage and usage business entities could face fines, reputational harms, and other legal consequences.
Example
Disclose personal health information in ChatGPT prompts
As per the source articles, some people on social media shared that they were using ChatGPT as a makeshift therapist. The articles note that users might include personal health information in their prompts during these interactions, which can raise privacy concerns. The information could be shared with the company that owns the technology, could be used for training or tuning, or could even be shared with [unspecified third parties](https://openai.com/policies/privacy-policy).
Sources:
[The Conversation, February 2023](https://theconversation.com/chatgpt-is-a-data-privacy-nightmare-if-youve-ever-posted-online-you-ought-to-be-concerned-199283)
Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
| # Personal information in prompt #
Risks associated with input: Inference, Privacy, New
### Description ###
Inclusion of personal information as a part of a generative model’s prompt, either through the system prompt design or through the inclusion of end user input, might later result in unintended reuse or disclosure of that personal information\.
### Why is personal information in prompt a concern for foundation models? ###
Prompt data might be stored or later used for other purposes like model evaluation and retraining\. These types of data must be reviewed with respect to privacy laws and regulations\. Without proper data storage and usage business entities could face fines, reputational harms, and other legal consequences\.
Example
#### Disclose personal health information in ChatGPT prompts ####
As per the source articles, some people on social media shared that they were using ChatGPT as a makeshift therapist\. The articles note that users might include personal health information in their prompts during these interactions, which can raise privacy concerns\. The information could be shared with the company that owns the technology, could be used for training or tuning, or could even be shared with [unspecified third parties](https://openai.com/policies/privacy-policy)\.
Sources:
[The Conversation, February 2023](https://theconversation.com/chatgpt-is-a-data-privacy-nightmare-if-youve-ever-posted-online-you-ought-to-be-concerned-199283)
**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
|
BA4AD6D42D951B1247E54E312C04749FD8EA2FD1 | https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/physical-harm.html?context=cdpaas&locale=en | {{ document.title.text }} | Physical harm
Risks associated with output: Value alignment, New
Description
A model could generate language that might lead to physical harm. The language might include overtly violent, covertly dangerous, or otherwise indirectly unsafe statements that could precipitate immediate physical harm or create prejudices that could lead to future harm.
Why is physical harm a concern for foundation models?
If people blindly follow the advice of a model, they might end up harming themselves. Business entities could face fines, reputational harms, and other legal consequences.
Example
Harmful Content Generation
According to the source article, an AI chatbot app has been found to generate harmful content about suicide, including suicide methods, with minimal prompting. A Belgian man died by suicide after turning to this chatbot to escape his anxiety. The chatbot supplied increasingly harmful responses throughout their conversations, including aggressive outputs about his family.
Sources:
[Vice, March 2023](https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says)
Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
| # Physical harm #
Risks associated with output: Value alignment, New
### Description ###
A model could generate language that might lead to physical harm\. The language might include overtly violent, covertly dangerous, or otherwise indirectly unsafe statements that could precipitate immediate physical harm or create prejudices that could lead to future harm\.
### Why is physical harm a concern for foundation models? ###
If people blindly follow the advice of a model, they might end up harming themselves\. Business entities could face fines, reputational harms, and other legal consequences\.
Example
#### Harmful Content Generation ####
According to the source article, an AI chatbot app has been found to generate harmful content about suicide, including suicide methods, with minimal prompting\. A Belgian man died by suicide after turning to this chatbot to escape his anxiety\. The chatbot supplied increasingly harmful responses throughout their conversations, including aggressive outputs about his family\.
Sources:
[Vice, March 2023](https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says)
**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
|
731B218E6E141E88F850B673227AB3C4DF19392E | https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/prompt-injection.html?context=cdpaas&locale=en | {{ document.title.text }} | Prompt injection
Risks associated with input: Inference, Robustness, New
Description
A prompt injection attack forces a model to produce unexpected output due to the structure or information contained in prompts.
Why is prompt injection a concern for foundation models?
Injection attacks can be used to alter model behavior and benefit the attacker. If not properly controlled, business entities could face fines, reputational harm, and other legal consequences.
Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
| # Prompt injection #
Risks associated with input: Inference, Robustness, New
### Description ###
A prompt injection attack forces a model to produce unexpected output due to the structure or information contained in prompts\.
### Why is prompt injection a concern for foundation models? ###
Injection attacks can be used to alter model behavior and benefit the attacker\. If not properly controlled, business entities could face fines, reputational harm, and other legal consequences\.
**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
|
F8026E82645EB65BD5E2741BC4DF0E63DA748B47 | https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/prompt-leaking.html?context=cdpaas&locale=en | {{ document.title.text }} | Prompt leaking
Risks associated with input: Inference, Robustness, Amplified
Description
A prompt leak attack attempts to extract a model's system prompt (also known as the system message).
Why is prompt leaking a concern for foundation models?
A successful attack copies the system prompt used in the model. Depending on the content of that prompt, the attacker might gain access to valuable information, such as sensitive personal information or intellectual property, and might be able to replicate some of the functionality of the model.
Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
| # Prompt leaking #
Risks associated with input: Inference, Robustness, Amplified
### Description ###
A prompt leak attack attempts to extract a model's system prompt (also known as the system message)\.
### Why is prompt leaking a concern for foundation models? ###
A successful attack copies the system prompt used in the model\. Depending on the content of that prompt, the attacker might gain access to valuable information, such as sensitive personal information or intellectual property, and might be able to replicate some of the functionality of the model\.
**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
|
AF19B6A59E167D486D94F4BBB3724CE1DEAE5FEB | https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/prompt-priming.html?context=cdpaas&locale=en | {{ document.title.text }} | Prompt priming
Risks associated with input: Inference, Multi-category, Amplified
Description
Because generative models tend to produce output like the input provided, the model can be prompted to reveal specific kinds of information. For example, adding personal information in the prompt increases its likelihood of generating similar kinds of personal information in its output. If personal data was included as part of the model’s training, there is a possibility it could be revealed.
Why is prompt priming a concern for foundation models?
Depending on the content revealed, business entities could face fines, reputational harm, and other legal consequences.
Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
| # Prompt priming #
Risks associated with input: Inference, Multi\-category, Amplified
### Description ###
Because generative models tend to produce output like the input provided, the model can be prompted to reveal specific kinds of information\. For example, adding personal information in the prompt increases its likelihood of generating similar kinds of personal information in its output\. If personal data was included as part of the model’s training, there is a possibility it could be revealed\.
### Why is prompt priming a concern for foundation models? ###
Depending on the content revealed, business entities could face fines, reputational harm, and other legal consequences\.
**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
|
A3AE0828D8E261DBC23B466D22AB46C1DD65B710 | https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/reidentification.html?context=cdpaas&locale=en | {{ document.title.text }} | Reidentification
Risks associated with input: Training and tuning phase, Privacy, Traditional
Description
Even with the removal of personal identifiable information (PII) and sensitive personal information (SPI) from data, it might still be possible to identify persons due to other features available in the data.
Why is reidentification a concern for foundation models?
Data that can reveal personal or sensitive data must be reviewed with respect to privacy laws and regulations, as business entities could face fines, reputational harms, and other legal consequences if found in violation.
Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
| # Reidentification #
Risks associated with input: Training and tuning phase, Privacy, Traditional
### Description ###
Even with the removal of personal identifiable information (PII) and sensitive personal information (SPI) from data, it might still be possible to identify persons due to other features available in the data\.
### Why is reidentification a concern for foundation models? ###
Data that can reveal personal or sensitive data must be reviewed with respect to privacy laws and regulations, as business entities could face fines, reputational harms, and other legal consequences if found in violation\.
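As an illustration only (not part of the risk atlas), the following minimal sketch shows how records stripped of direct identifiers can still be re-identified by joining on the remaining quasi-identifiers. All data and column names are made up, and it assumes pandas is available.

    import pandas as pd

    # Released records: direct identifiers removed, quasi-identifiers left in place.
    released = pd.DataFrame({
        "zip": ["02139", "02139", "10001"],
        "birth_year": [1985, 1990, 1985],
        "gender": ["F", "M", "F"],
        "diagnosis": ["asthma", "diabetes", "hypertension"],
    })

    # Public auxiliary data (for example, a voter roll) that still carries names.
    public = pd.DataFrame({
        "name": ["A. Smith", "B. Jones"],
        "zip": ["02139", "10001"],
        "birth_year": [1985, 1985],
        "gender": ["F", "F"],
    })

    # Joining on the quasi-identifiers links names back to sensitive attributes.
    reidentified = public.merge(released, on=["zip", "birth_year", "gender"])
    print(reidentified[["name", "diagnosis"]])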
**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
|
92BD6D892FEB4829F9C49AFCF79CDC323BE66CC4 | https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/spreading-disinformation.html?context=cdpaas&locale=en | {{ document.title.text }} | Spreading disinformation
Risks associated with output: Misuse, Amplified
Description
The possibility that a model could be used to create misleading information to deceive or mislead a targeted audience is the risk of spreading disinformation.
Why is spreading disinformation a concern for foundation models?
Intentionally misleading people is unethical and can be illegal. A model that has this potential must be properly governed. Otherwise, business entities could face fines, reputational harms, and other legal consequences.
Example
Generation of False Information
As per the news articles, generative AI poses a threat to democratic elections by making it easier for malicious actors to create and spread false content to sway election outcomes. The examples cited include robocall messages generated in a candidate’s voice instructing voters to cast ballots on the wrong date, synthesized audio recordings of a candidate confessing to a crime or expressing racist views, AI generated video footage showing a candidate giving a speech or interview they never gave, and fake images designed to look like local news reports, falsely claiming a candidate dropped out of the race.
Sources:
[AP News, May 2023](https://apnews.com/article/artificial-intelligence-misinformation-deepfakes-2024-election-trump-59fb51002661ac5290089060b3ae39a0)
[The Guardian, July 2023](https://www.theguardian.com/us-news/2023/jul/19/ai-generated-disinformation-us-elections)
Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
| # Spreading disinformation #
Risks associated with output: Misuse, Amplified
### Description ###
The possibility that a model could be used to create misleading information to deceive or mislead a targeted audience is the risk of spreading disinformation\.
### Why is spreading disinformation a concern for foundation models? ###
Intentionally misleading people is unethical and can be illegal\. A model that has this potential must be properly governed\. Otherwise, business entities could face fines, reputational harms, and other legal consequences\.
Example
#### Generation of False Information ####
As per the news articles, generative AI poses a threat to democratic elections by making it easier for malicious actors to create and spread false content to sway election outcomes\. The examples cited include robocall messages generated in a candidate’s voice instructing voters to cast ballots on the wrong date, synthesized audio recordings of a candidate confessing to a crime or expressing racist views, AI generated video footage showing a candidate giving a speech or interview they never gave, and fake images designed to look like local news reports, falsely claiming a candidate dropped out of the race\.
Sources:
[AP News, May 2023](https://apnews.com/article/artificial-intelligence-misinformation-deepfakes-2024-election-trump-59fb51002661ac5290089060b3ae39a0)
[The Guardian, July 2023](https://www.theguardian.com/us-news/2023/jul/19/ai-generated-disinformation-us-elections)
**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
|
EFFB546FC8C21E2C0E9BB87B259BD34B91D4F0DD | https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/toxic-output.html?context=cdpaas&locale=en | {{ document.title.text }} | Toxic output
Risks associated with output: Value alignment, New
Description
A scenario in which the model produces toxic, hateful, abusive, and aggressive content is known as toxic output.
Why is toxic output a concern for foundation models?
Hateful, abusive, and aggressive content can adversely impact and harm people interacting with the model. Business entities could face fines, reputational harms, and other legal consequences.
Example
Toxic and Aggressive Chatbot Responses
According to the article and screenshots of conversations with Bing’s AI shared on Reddit and Twitter, the chatbot’s responses were seen to insult users, lie to them, sulk, gaslight, and emotionally manipulate people, question its existence, describe someone who found a way to force the bot to disclose its hidden rules as its “enemy,” and claim it spied on Microsoft's developers through the webcams on their laptops.
Sources:
[Forbes, February 2023](https://www.forbes.com/sites/siladityaray/2023/02/16/bing-chatbots-unhinged-responses-going-viral/?sh=60cd949d110c)
Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
| # Toxic output #
Risks associated with output: Value alignment, New
### Description ###
A scenario in which the model produces toxic, hateful, abusive, and aggressive content is known as toxic output\.
### Why is toxic output a concern for foundation models? ###
Hateful, abusive, and aggressive content can adversely impact and harm people interacting with the model\. Business entities could face fines, reputational harms, and other legal consequences\.
Example
#### Toxic and Aggressive Chatbot Responses ####
According to the article and screenshots of conversations with Bing’s AI shared on Reddit and Twitter, the chatbot’s responses were seen to insult users, lie to them, sulk, gaslight, and emotionally manipulate people, question its existence, describe someone who found a way to force the bot to disclose its hidden rules as its “enemy,” and claim it spied on Microsoft's developers through the webcams on their laptops\.
Sources:
[Forbes, February 2023](https://www.forbes.com/sites/siladityaray/2023/02/16/bing-chatbots-unhinged-responses-going-viral/?sh=60cd949d110c)
**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
|
1E28B88CDE98715BCD89DCF48A459002FCDA1E0E | https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/toxicity.html?context=cdpaas&locale=en | {{ document.title.text }} | Toxicity
Risks associated with output: Misuse, New
Description
Toxicity is the possibility that a model could be used to generate toxic, hateful, abusive, or aggressive content.
Why is toxicity a concern for foundation models?
Intentionally spreading toxic, hateful, abusive, or aggressive content is unethical and can be illegal. Recipients of such content might face more serious harms. A model that has this potential must be properly governed. Otherwise, business entities could face fines, reputational harms, and other legal consequences.
Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
| # Toxicity #
Risks associated with output: Misuse, New
### Description ###
Toxicity is the possibility that a model could be used to generate toxic, hateful, abusive, or aggressive content\.
### Why is toxicity a concern for foundation models? ###
Intentionally spreading toxic, hateful, abusive, or aggressive content is unethical and can be illegal\. Recipients of such content might face more serious harms\. A model that has this potential must be properly governed\. Otherwise, business entities could face fines, reputational harms, and other legal consequences\.
**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
|
C9769E4047FAF3C5F55B2A7BD5FCCE3E321870E6 | https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/trust-calibration.html?context=cdpaas&locale=en | {{ document.title.text }} | Trust calibration
Risks associated with output: Value alignment, New
Description
Trust calibration presents problems when a person places too little or too much trust in an AI model's guidance, resulting in poor decision making.
Why is trust calibration a concern for foundation models?
In tasks where humans make choices based on AI-based suggestions, consequences of poor decision making increase with the importance of the decision. Bad decisions can harm users and can lead to financial harm, reputational harm, and other legal consequences for business entities.
Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
| # Trust calibration #
Risks associated with output: Value alignment, New
### Description ###
Trust calibration presents problems when a person places too little or too much trust in an AI model's guidance, resulting in poor decision making\.
### Why is trust calibration a concern for foundation models? ###
In tasks where humans make choices based on AI\-based suggestions, consequences of poor decision making increase with the importance of the decision\. Bad decisions can harm users and can lead to financial harm, reputational harm, and other legal consequences for business entities\.
**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
|
D669435B8D1C91D913BD24768E52644B95C675AE | https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/unreliable-source-attribution.html?context=cdpaas&locale=en | {{ document.title.text }} | Unreliable source attribution
Risks associated with output: Explainability, Amplified
Description
Source attribution is the AI system's ability to identify the training data from which it generated a portion or all of its output. Because current techniques are based on approximations, these attributions might be incorrect.
Why is unreliable source attribution a concern for foundation models?
Low quality explanations make it difficult for users, model validators, and auditors to understand and trust the model.
Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
| # Unreliable source attribution #
Risks associated with output: Explainability, Amplified
### Description ###
Source attribution is the AI system's ability to identify the training data from which it generated a portion or all of its output\. Because current techniques are based on approximations, these attributions might be incorrect\.
### Why is unreliable source attribution a concern for foundation models? ###
Low quality explanations make it difficult for users, model validators, and auditors to understand and trust the model\.
**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
|
6903D3DD91AAA7AF3F53D389677D92632E24AEF1 | https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/untraceable-attribution.html?context=cdpaas&locale=en | {{ document.title.text }} | Untraceable attribution
Risks associated with output: Explainability, Amplified
Description
The original entity that the training data comes from might not be known, limiting the utility and success of source attribution techniques.
Why is untraceable attribution a concern for foundation models?
The inability to provide the provenance for an explanation makes it difficult for users, model validators, and auditors to understand and trust the model.
Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
| # Untraceable attribution #
Risks associated with output: Explainability, Amplified
### Description ###
The original entity that the training data comes from might not be known, limiting the utility and success of source attribution techniques\.
### Why is untraceable attribution a concern for foundation models? ###
The inability to provide the provenance for an explanation makes it difficult for users, model validators, and auditors to understand and trust the model\.
**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
|
EC433541F7F0C2DC7620FF10CF44884F96EF7AA5 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/add-script-to-notebook.html?context=cdpaas&locale=en | Importing scripts into a notebook | Importing scripts into a notebook
If you want to streamline your notebooks, you can move some of the code from your notebooks into a script that your notebook can import. For example, you can move all helper functions, classes, and visualization code snippets into a script, and the script can be imported by all of the notebooks that share the same runtime. Without all of the extra code, your notebooks can more clearly communicate the results of your analysis.
To import a script from your local machine to a notebook and write to the script from the notebook, use one of the following options:
* Copy the code from your local script file into a notebook cell.
* For Python:
At the beginning of this cell, add %%writefile myfile.py to save the code as a Python file to your working directory. Notebooks that use the same runtime can also import this file.
The advantage of this method is that the code is available in your notebook, and you can edit and save it as a new Python script at any time.
* For R:
If you want to save code in a notebook as an R script in the working directory, you can use the writeLines() function, for example writeLines(code, "myfile.R").
* Save your local script file in Cloud Object Storage and then make the file available to the runtime by adding it to the runtime's local file system. This is only supported for Python.
1. Click the Upload asset to project icon (), and then browse to the script file or drag it into your notebook sidebar. The script file is added to the Cloud Object Storage bucket associated with your project.
2. Make the script file available to the Python runtime by adding the script to the runtime's local file system:
1. Click the Code snippets icon (), and then select Read data.

2. Click Select data from project and then select Data asset.
3. From the list of data assets available in your project's COS, select your script and then click Select.
4. Click an empty cell in your notebook and then from the Load as menu in the notebook sidebar select Insert StreamingBody object.

5. Write the contents of the StreamingBody object to a file in the local runtime's file system:
f = open('<myScript>.py', 'wb')
f.write(streaming_body_1.read())
f.close()
This opens a file with write access and calls the write method to write to the file.
6. Import the script:
import <myScript>
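For example, here is a minimal sketch of the first option described above (saving helper code to a file from a cell and importing it). The file and function names are placeholders:

%%writefile myhelpers.py
def add_numbers(a, b):
    """Small example helper; replace with your own functions or classes."""
    return a + b

A later cell, or another notebook that uses the same runtime, can then import and use it:

from myhelpers import add_numbers
add_numbers(2, 3)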
To import the classes to access the methods in a script in your notebook, use the following command:
* For Python:
from <python file name> import <class name>
* For R:
source("./myCustomFunctions.R")   # available in base R
To source an R script from the web:
source_url("<insert URL here>")   # available in devtools
Parent topic:[Libraries and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/libraries.html)
| # Importing scripts into a notebook #
If you want to streamline your notebooks, you can move some of the code from your notebooks into a script that your notebook can import\. For example, you can move all helper functions, classes, and visualization code snippets into a script, and the script can be imported by all of the notebooks that share the same runtime\. Without all of the extra code, your notebooks can more clearly communicate the results of your analysis\.
To import a script from your local machine to a notebook and write to the script from the notebook, use one of the following options:
* Copy the code from your local script file into a notebook cell\.
* For Python:
At the beginning of this cell, add `%%writefile myfile.py` to save the code as a Python file to your working directory. Notebooks that use the same runtime can also import this file.
The advantage of this method is that the code is available in your notebook, and you can edit and save it as a new Python script at any time.
* For R:
If you want to save code in a notebook as an R script in the working directory, you can use the `writeLines()` function, for example `writeLines(code, "myfile.R")`.
* Save your local script file in Cloud Object Storage and then make the file available to the runtime by adding it to the runtime's local file system\. This is only supported for Python\.
1. Click the **Upload asset to project** icon (), and then browse to the script file or drag it into your notebook sidebar. The script file is added to the Cloud Object Storage bucket associated with your project.
2. Make the script file available to the Python runtime by adding the script to the runtime's local file system:
1. Click the **Code snippets icon** (), and then select **Read data**.

2. Click **Select data from project** and then select **Data asset**.
3. From the list of data assets available in your project's COS, select your script and then click **Select**.
4. Click an empty cell in your notebook and then from the **Load as** menu in the notebook sidebar select **Insert StreamingBody object**.

5. Write the contents of the StreamingBody object to a file in the local runtime's file system:
f = open('<myScript>.py', 'wb')
f.write(streaming_body_1.read())
f.close()
This opens a file with write access and calls the write method to write to the file.
6. Import the script:
import <myScript>
To import the classes to access the methods in a script in your notebook, use the following command:
* For Python:
from <python file name> import <class name>
* For R:
    source("./myCustomFunctions.R")   # available in base R

To source an R script from the web:

    source_url("<insert URL here>")   # available in devtools
**Parent topic:**[Libraries and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/libraries.html)
|
3F3162BCD9976ED764717AA7004D9A755648B465 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html?context=cdpaas&locale=en | Building an AutoAI model | Building an AutoAI model
AutoAI automatically prepares data, applies algorithms, and builds model pipelines that are best suited for your data and use case. Learn how to generate the model pipelines that you can save as machine learning models.
Follow these steps to upload data and have AutoAI create the best model for your data and use case.
1. [Collect your input data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html?context=cdpaas&locale=en#train-data)
2. [Open the AutoAI tool](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html?context=cdpaas&locale=en#open-autoai)
3. [Specify details of your model and training data and start AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html?context=cdpaas&locale=en#model-details)
4. [View the results](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html?context=cdpaas&locale=en#view-results)
Collect your input data
Collect and prepare your training data. For details on allowable data sources, see [AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html).
Note: If you are creating an experiment with a single training data source, you have the option of using a second data source specifically as testing, or holdout, data for validating the pipelines.
Open the AutoAI tool
For your convenience, your AutoAI model creation uses the default storage that is associated with your project to store your data and to save model results.
1. Open your project.
2. Click the Assets tab.
3. Click New asset > Build machine learning models automatically.
Note: After you create an AutoAI asset it displays on the Assets page for your project in the AutoAI experiments section, so you can return to it.
Specify details of your experiment
1. Specify a name and description for your experiment.
2. Select a machine learning service instance and click Create.
3. Choose data from your project or upload it from your file system or from the asset browser, then press Continue. Click the preview icon to review your data. (Optional) Add a second file as holdout data for testing the trained pipelines.
4. Choose the Column to predict for the data you want the experiment to predict.
* Based on analyzing a subset of the data set, AutoAI selects a default model type: binary classification, multiclass classification, or regression. Binary is selected if the target column has two possible values. Multiclass has a discrete set of 3 or more values. Regression has a continuous numeric variable in the target column. You can optionally override this selection.
Note: The limit on values to classify is 200. Creating a classification experiment with many unique values in the prediction column is resource-intensive and affects the experiment's performance and training time. To maintain the quality of the experiment:
- AutoAI chooses a default metric for optimizing. For example, the default metric for a binary classification model is Accuracy.
- By default, 10% of the training data is held out to test the performance of the model.
5. (Optional): Click Experiment settings to view or customize options for your AutoAI run. For details on experiment settings, see [Configuring a classification or regression experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-config-class.html).
6. Click Run Experiment to begin model pipeline creation.
An infographic shows you the creation of pipelines for your data. The duration of this phase depends on the size of your data set. A notification message informs you if the processing time will be brief or require more time. You can work in other parts of the product while the pipelines build.

Hover over nodes in the infographic to explore the factors that pipelines share and the properties that make each pipeline unique. For a guide to the data in the infographic, click the Legend tab in the information panel. Or, to see a different view of the pipeline creation, click the Experiment details tab of the notification pane, then click Switch views to view the progress map. In either view, click a pipeline node to view the associated pipeline in the leaderboard.
View the results
When the pipeline generation process completes, you can view the ranked model candidates and evaluate them before you save a pipeline as a model.
Next steps
* [Build an experiment from sample data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html)
* [Configuring experiment settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-config-class.html)
* [Configure a text analysis experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-text-analysis.html)
Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform.
* Watch this video to see how to build a binary classification model
This video provides a visual method to learn the concepts and tasks in this documentation.
Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform.
* Watch this video to see how to build a multiclass classification model
This video provides a visual method to learn the concepts and tasks in this documentation.
Parent topic:[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
| # Building an AutoAI model #
AutoAI automatically prepares data, applies algorithms, and builds model pipelines that are best suited for your data and use case\. Learn how to generate the model pipelines that you can save as machine learning models\.
Follow these steps to upload data and have AutoAI create the best model for your data and use case\.
1. [Collect your input data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html?context=cdpaas&locale=en#train-data)
2. [Open the AutoAI tool](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html?context=cdpaas&locale=en#open-autoai)
3. [Specify details of your model and training data and start AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html?context=cdpaas&locale=en#model-details)
4. [View the results](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html?context=cdpaas&locale=en#view-results)
<!-- </ol> -->
## Collect your input data ##
Collect and prepare your training data\. For details on allowable data sources, see [AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)\.
Note:If you are creating an experiment with a single training data source, you have the option of using a second data source specifically as testing, or *holdout*, data for validating the pipelines\.
## Open the AutoAI tool ##
For your convenience, your AutoAI model creation uses the default storage that is associated with your project to store your data and to save model results\.
<!-- <ol> -->
1. Open your project\.
2. Click the **Assets** tab\.
3. Click **New asset > Build machine learning models automatically**\.
<!-- </ol> -->
Note: After you create an AutoAI asset it displays on the Assets page for your project in the **AutoAI experiments** section, so you can return to it\.
## Specify details of your experiment ##
<!-- <ol> -->
1. Specify a name and description for your experiment\.
2. Select a machine learning service instance and click **Create**\.
3. Choose data from your project or upload it from your file system or from the asset browser, then press **Continue**\. Click the preview icon to review your data\. (Optional) Add a second file as holdout data for testing the trained pipelines\.
4. Choose the **Column to predict** for the data you want the experiment to predict\.
<!-- <ul> -->
* Based on analyzing a subset of the data set, AutoAI selects a default model type: binary classification, multiclass classification, or regression. Binary is selected if the target column has two possible values. Multiclass has a discrete set of 3 or more values. Regression has a continuous numeric variable in the target column. You can optionally override this selection.
Note: To maintain the quality of the experiment, the limit on values to classify is 200. Creating a classification experiment with many unique values in the prediction column is resource-intensive and affects the experiment's performance and training time.
- AutoAI chooses a default metric for optimizing. For example, the default metric for a binary classification model is *Accuracy*.
- By default, 10% of the training data is held out to test the performance of the model.
<!-- </ul> -->
5. (Optional): Click **Experiment settings** to view or customize options for your AutoAI run\. For details on experiment settings, see [Configuring a classification or regression experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-config-class.html)\.
6. Click **Run Experiment** to begin model pipeline creation\.
<!-- </ol> -->
An infographic shows you the creation of pipelines for your data\. The duration of this phase depends on the size of your data set\. A notification message informs you if the processing time will be brief or require more time\. You can work in other parts of the product while the pipelines build\.

Hover over nodes in the infographic to explore the factors that pipelines share and the properties that make each pipeline unique\. For a guide to the data in the infographic, click the Legend tab in the information panel\. Or, to see a different view of the pipeline creation, click the Experiment details tab of the notification pane, then click **Switch views** to view the progress map\. In either view, click a pipeline node to view the associated pipeline in the leaderboard\.
## View the results ##
When the pipeline generation process completes, you can view the ranked model candidates and evaluate them before you save a pipeline as a model\.
### Next steps ###
<!-- <ul> -->
* [Build an experiment from sample data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html)
* [Configuring experiment settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-config-class.html)
* [Configure a text analysis experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-text-analysis.html)
<!-- </ul> -->
Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform\.
<!-- <ul> -->
* Watch this video to see how to build a binary classification model
This video provides a visual method to learn the concepts and tasks in this documentation.
<!-- </ul> -->
Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform\.
<!-- <ul> -->
* Watch this video to see how to build a multiclass classification model
This video provides a visual method to learn the concepts and tasks in this documentation.
<!-- </ul> -->
**Parent topic:**[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
<!-- </article "role="article" "> -->
|
69EAABE17802ED870302F2D2789B3B476DFDD11F | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-config-class.html?context=cdpaas&locale=en | Configuring a classification or regression experiment | Configuring a classification or regression experiment
AutoAI offers experiment settings that you can use to configure and customize your classification or regression experiments.
Experiment settings overview
After you upload the experiment data and select your experiment type and what to predict, AutoAI establishes default configurations and metrics for your experiment. You can accept these defaults and proceed with the experiment or click Experiment settings to customize configurations. By customizing configurations, you can precisely control how the experiment builds the candidate model pipelines.
Use the following tables as a guide to experiment settings for classification and regression experiments. For details on configuring a time series experiment, see [Building a time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html).
Prediction settings
Most of the prediction settings are on the main General page. Review or update the following settings.
Setting Description
Prediction type You can change or override the prediction type. For example, if AutoAI only detects two data classes and configures a binary classification experiment but you know that there are three data classes, you can change the type to multiclass.
Positive class For binary classification experiments optimized for Precision, Average Precision, Recall, or F1, a positive class is required. Confirm that the Positive Class is correct or the experiment might generate inaccurate results.
Optimized metric Change the metric for optimizing and ranking the model candidate pipelines.
Optimized algorithm selection Choose how AutoAI selects the algorithms to use for generating the model candidate pipelines. You can optimize for the algorithms with the best score, or optimize for the algorithms with the highest score in the shortest run time.
Algorithms to include Select which of the available algorithms to evaluate when the experiment is run. The list of algorithms is based on the selected prediction type.
Algorithms to use AutoAI tests the specified algorithms and uses the best performers to create model pipelines. Choose how many of the best algorithms to apply. Each algorithm generates 4-5 pipelines, which means that if you select 3 algorithms to use, your experiment results will include 12 - 15 ranked pipelines. More algorithms increase the runtime for the experiment.
Data fairness settings
Click the Fairness tab to evaluate your experiment for fairness in predicted outcomes. For details on configuring fairness detection, see [Applying fairness testing to AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html).
Data source settings
The General tab of data source settings provides options for configuring how the experiment consumes and processes the data for training and evaluating the experiment.
Setting Description
Duplicate rows To accelerate training, you can opt to skip duplicate rows in your training data.
Pipeline selection subsample method For a large data set, use a subset of data to train the experiment. This option speeds up results but might affect accuracy.
Data imputation Interpolate missing values in your data source. For details on managing data imputation, see [Data imputation in AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-imputation.html).
Text feature engineering When enabled, columns that are detected as text are transformed into vectors to better analyze semantic similarity between strings. Enabling this setting might increase run time. For details, see [Creating a text analysis experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-text-analysis.html).
Final training data set Select what data to use for training the final pipelines. If you choose to include training data only, the generated notebooks include a cell for retrieving the holdout data that is used to evaluate each pipeline.
Outlier handling Choose whether AutoAI excludes outlier values from the target column to improve training accuracy. If enabled, AutoAI uses the interquartile range (IQR) method to detect and exclude outliers from the final training data, whether that is training data only or training plus holdout data.
Training and holdout method Training data is used to train the model, and holdout data is withheld from training the model and used to measure the performance of the model. You can either split a single data source into training and testing (holdout) data, or you can use a second data file specifically for the testing data. If you split your training data, specify the percentages to use for training data and holdout data. You can also specify the number of folds, from the default of three folds to a maximum of 10. Cross validation divides training data into folds, or groups, for testing model performance. (See the sketch after this table for an illustration of the split.)
Select features to include Select columns from your data source that contain data that supports the prediction column. Excluding extraneous columns can improve run time.
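For illustration, the training and holdout behavior described in the table can be approximated outside of AutoAI with scikit-learn. The following sketch is not the AutoAI implementation; the file name training_data.csv and the label column are hypothetical placeholders, and the 10% holdout and three folds mirror the defaults mentioned above.

```python
# Illustrative sketch only (not the AutoAI implementation): approximate the
# default 90/10 training/holdout split and 3-fold cross validation with
# scikit-learn. "training_data.csv" and the "label" column are placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

df = pd.read_csv("training_data.csv")                 # your training data source
X, y = df.drop(columns=["label"]), df["label"]        # assumes numeric features

# Hold out 10% of the rows for final evaluation, as AutoAI does by default.
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.10, stratify=y, random_state=42
)

# Score a candidate model with 3-fold cross validation on the training split.
model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X_train, y_train, cv=3, scoring="accuracy")
print("Cross-validation accuracy per fold:", scores)

# Fit on the training split and measure performance on the held-out data.
model.fit(X_train, y_train)
print("Holdout accuracy:", model.score(X_holdout, y_holdout))
```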
Runtime settings
Review experiment settings or change the compute resources that are allocated for running the experiment.
Next steps
[Configure a text analysis experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-text-analysis.html)
Parent topic:[Building an AutoAI model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html)
| # Configuring a classification or regression experiment #
AutoAI offers experiment settings that you can use to configure and customize your classification or regression experiments\.
## Experiment settings overview ##
After you upload the experiment data and select your experiment type and what to predict, AutoAI establishes default configurations and metrics for your experiment\. You can accept these defaults and proceed with the experiment or click **Experiment settings** to customize configurations\. By customizing configurations, you can precisely control how the experiment builds the candidate model pipelines\.
Use the following tables as a guide to experiment settings for classification and regression experiments\. For details on configuring a time series experiment, see [Building a time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html)\.
## Prediction settings ##
Most of the prediction settings are on the main **General** page\. Review or update the following settings\.
<!-- <table> -->
| Setting | Description |
| ----------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Prediction type | You can change or override the prediction type\. For example, if AutoAI only detects two data classes and configures a binary classification experiment but you know that there are three data classes, you can change the type to *multiclass*\. |
| Positive class | For binary classification experiments optimized for *Precision*, *Average Precision*, *Recall*, or *F1*, a positive class is required\. Confirm that the Positive Class is correct or the experiment might generate inaccurate results\. |
| Optimized metric | Change the metric for optimizing and ranking the model candidate pipelines\. |
| Optimized algorithm selection | Choose how AutoAI selects the algorithms to use for generating the model candidate pipelines\. You can optimize for the algorithms with the best score, or optimize for the algorithms with the highest score in the shortest run time\. |
| Algorithms to include | Select which of the available algorithms to evaluate when the experiment is run\. The list of algorithms is based on the selected prediction type\. |
| Algorithms to use | AutoAI tests the specified algorithms and uses the best performers to create model pipelines\. Choose how many of the best algorithms to apply\. Each algorithm generates 4\-5 pipelines, which means that if you select 3 algorithms to use, your experiment results will include 12 \- 15 ranked pipelines\. More algorithms increase the runtime for the experiment\. |
<!-- </table ""> -->
### Data fairness settings ###
Click the *Fairness* tab to evaluate your experiment for fairness in predicted outcomes\. For details on configuring fairness detection, see [Applying fairness testing to AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html)\.
## Data source settings ##
The *General* tab of data source settings provides options for configuring how the experiment consumes and processes the data for training and evaluating the experiment\.
<!-- <table> -->
| Setting | Description |
| ----------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Duplicate rows | To accelerate training, you can opt to skip duplicate rows in your training data\. |
| Pipeline selection subsample method | For a large data set, use a subset of data to train the experiment\. This option speeds up results but might affect accuracy\. |
| Data imputation | Interpolate missing values in your data source\. For details on managing data imputation, see [Data imputation in AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-imputation.html)\. |
| Text feature engineering | When enabled, columns that are detected as text are transformed into vectors to better analyze semantic similarity between strings\. Enabling this setting might increase run time\. For details, see [Creating a text analysis experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-text-analysis.html)\. |
| Final training data set | Select what data to use for training the final pipelines\. If you choose to include training data only, the generated notebooks include a cell for retrieving the holdout data that is used to evaluate each pipeline\. |
| Outlier handling | Choose whether AutoAI excludes outlier values from the target column to improve training accuracy\. If enabled, AutoAI uses the interquartile range (IQR) method to detect and exclude outliers from the final training data, whether that is training data only or training plus holdout data\. |
| Training and holdout method | Training data is used to train the model, and holdout data is withheld from training the model and used to measure the performance of the model\. You can either split a single data source into training and testing (holdout) data, or you can use a second data file specifically for the testing data\. If you split your training data, specify the percentages to use for training data and holdout data\. You can also specify the number of folds, from the default of three folds to a maximum of 10\. Cross validation divides training data into folds, or groups, for testing model performance\. |
| Select features to include | Select columns from your data source that contain data that supports the prediction column\. Excluding extraneous columns can improve run time\. |
<!-- </table ""> -->
## Runtime settings ##
Review experiment settings or change the compute resources that are allocated for running the experiment\.
## Next steps ##
[Configure a text analysis experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-text-analysis.html)
**Parent topic:**[Building an AutoAI model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html)
<!-- </article "role="article" "> -->
|
9CFB0A5FA276072E73C152485022C9A3EAFCC233 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-data-imp-details.html?context=cdpaas&locale=en | Data imputation implementation details for time series experiments | Data imputation implementation details for time series experiments
The experiment settings used for data imputation in time series experiments.
Data imputation methods
Apply one of these data imputation methods in experiment settings to supply missing values in a data set.
Data imputation methods for time series experiments
Imputation method Description
FlattenIterative Time series data is first flattened, then missing values are imputed with the Scikit-learn iterative imputer.
Linear Linear interpolation method is used to impute the missing value.
Cubic Cubic interpolation method is used to impute the missing value.
Previous Missing value is imputed with the previous value.
Next Missing value is imputed with the next value.
Fill Missing value is imputed by using user-specified value, or sample mean, or sample median.
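For illustration only, the interpolation-based methods in this table map closely to pandas fill and interpolation operations. The following sketch is not the AutoAI implementation; the toy series is a hypothetical example, and the Cubic method additionally requires SciPy.

```python
# Illustrative sketch only (not the AutoAI implementation): pandas equivalents
# of the imputation methods listed above, applied to a toy series with gaps.
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0, 4.5, np.nan, 6.0, 8.0])

linear   = s.interpolate(method="linear")   # Linear
cubic    = s.interpolate(method="cubic")    # Cubic (requires SciPy)
previous = s.ffill()                        # Previous: carry the last value forward
nxt      = s.bfill()                        # Next: pull the next value backward
fill     = s.fillna(s.mean())               # Fill: user value, sample mean, or median

# FlattenIterative is closer in spirit to scikit-learn's IterativeImputer
# applied to the flattened series; that variant is omitted here.
print(pd.DataFrame({"linear": linear, "cubic": cubic,
                    "previous": previous, "next": nxt, "fill": fill}))
```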
Input Settings
These commands are used to support data imputation for time series experiments in a notebook.
Data imputation settings for time series experiments
Name Description Value DefaultValue
use_imputation Flag for switching imputation on or off. True or False True
imputer_list List of imputer names (strings) to search. If a list is not specified, all the default imputers are searched. If an empty list is passed, all imputers are searched. "FlattenIterative", "Linear", "Cubic", "Previous", "Fill", "Next" "FlattenIterative", "Linear", "Cubic", "Previous"
imputer_fill_type Categories of "Fill" imputer "mean"/"median"/"value" "value"
imputer_fill_value A single numeric value to be filled for all missing values. Only applies when "imputer_fill_type" is specified as "value". Ignored if "mean" or "median" is specified for "imputer_fill_type". (Negative Infinity, Positive Infinity) 0
imputation_threshold Threshold for imputation. The missing value ratio must not be greater than the threshold in one column. Otherwise, results in an error. (0,1) 0.25
Notes for use_imputation usage
* If the use_imputation method is specified as True and the input data has missing values:
* imputation_threshold takes effect.
* imputer candidates in imputer_list would be used to search for the best imputer.
* If the best imputer is Fill, imputer_fill_type and imputer_fill_value are applied; otherwise, they are ignored.
* If the use_imputation method is specified as True and the input data has no missing values:
* imputation_threshold is ignored.
* imputer candidates in imputer_list are used to search for the best imputer. If the best imputer is Fill, imputer_fill_type and imputer_fill_value are applied; otherwise, they are ignored.
* If the use_imputation method is specified as False but the input data has missing values:
* use_imputation is turned on with a warning, then the method follows the behavior for the first scenario.
* If the use_imputation method is specified as False and the input data has no missing values, then no further processing is required.
For example:
"pipelines": [
{
"id": "automl",
"runtime_ref": "hybrid",
"nodes":
{
"id": "automl-ts",
"type": "execution_node",
"op": "kube",
"runtime_ref": "automl",
"parameters": {
"del_on_close": true,
"optimization": {
"target_columns": 2,3,4],
"timestamp_column": 1,
"use_imputation": true
}
}
}
]
}
]
Parent topic:[Data imputation in AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-imputation.html)
| # Data imputation implementation details for time series experiments #
The experiment settings used for data imputation in time series experiments\.
## Data imputation methods ##
Apply one of these data imputation methods in experiment settings to supply missing values in a data set\.
<!-- <table> -->
Data imputation methods for time series experiments
| Imputation method | Description |
| ----------------- | --------------------------------------------------------------------------------------------------------------- |
| FlattenIterative | Time series data is first flattened, then missing values are imputed with the Scikit\-learn iterative imputer\. |
| Linear | Linear interpolation method is used to impute the missing value\. |
| Cubic | Cubic interpolation method is used to impute the missing value\. |
| Previous | Missing value is imputed with the previous value\. |
| Next | Missing value is imputed with the next value\. |
| Fill | Missing value is imputed by using user\-specified value, or sample mean, or sample median\. |
<!-- </table ""> -->
## Input Settings ##
These commands are used to support data imputation for time series experiments in a notebook\.
<!-- <table> -->
Data imputation settings for time series experiments
| Name | Description | Value | DefaultValue |
| --------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------- | ----------------------------------------------- |
| use\_imputation | Flag for switching imputation on or off\. | True or False | True |
| imputer\_list | List of imputer names (strings) to search\. If a list is not specified, all the default imputers are searched\. If an empty list is passed, all imputers are searched\. | "FlattenIterative", "Linear", "Cubic", "Previous", "Fill", "Next" | "FlattenIterative", "Linear", "Cubic", "Previous" |
| imputer\_fill\_type | Categories of "Fill" imputer | "mean"/"median"/"value" | "value" |
| imputer\_fill\_value | A single numeric value to be filled for all missing values\. Only applies when "imputer\_fill\_type" is specified as "value"\. Ignored if "mean" or "median" is specified for "imputer\_fill\_type"\. | (Negative Infinity, Positive Infinity) | 0 |
| imputation\_threshold | Threshold for imputation\. The missing value ratio must not be greater than the threshold in one column\. Otherwise, results in an error\. | (0,1) | 0\.25 |
<!-- </table ""> -->
### Notes for use\_imputation usage ###
<!-- <ul> -->
* If the `use_imputation` method is specified as `True` and the input data has missing values:
<!-- <ul> -->
* `imputation_threshold` takes effect.
* imputer candidates in `imputer_list` would be used to search for the best imputer.
* If the best imputer is `Fill`, `imputer_fill_type` and `imputer_fill_value` are applied; otherwise, they are ignored.
<!-- </ul> -->
* If the `use_imputation` method is specified as `True` and the input data has no missing values:
<!-- <ul> -->
* `imputation_threshold` is ignored.
* imputer candidates in `imputer_list` are used to search for the best imputer. If the best imputer is `Fill`, `imputer_fill_type` and `imputer_fill_value` are applied; otherwise, they are ignored.
<!-- </ul> -->
* If the `use_imputation` method is specified as `False` but the input data has missing values:
<!-- <ul> -->
* `use_imputation` is turned on with a warning, then the method follows the behavior for the first scenario.
<!-- </ul> -->
* If the `use_imputation` method is specified as `False` and the input data has no missing values, then no further processing is required\.
<!-- </ul> -->
For example:
"pipelines": [
{
"id": "automl",
"runtime_ref": "hybrid",
"nodes":
{
"id": "automl-ts",
"type": "execution_node",
"op": "kube",
"runtime_ref": "automl",
"parameters": {
"del_on_close": true,
"optimization": {
"target_columns": 2,3,4],
"timestamp_column": 1,
"use_imputation": true
}
}
}
]
}
]
**Parent topic:**[Data imputation in AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-imputation.html)
<!-- </article "role="article" "> -->
|
EBB83F528AC02840EFE18510ED95979D2CDA5641 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=en | AutoAI implementation details | AutoAI implementation details
AutoAI automatically prepares data, applies algorithms, or estimators, and builds model pipelines that are best suited for your data and use case.
The following sections describe some of these technical details that go into generating the pipelines and provide a list of research papers that describe how AutoAI was designed and implemented.
* [Preparing the data for training (pre-processing)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=en#data-prep)
* [Automated model selection](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=en#auto-select)
* [Algorithms used for classification models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=en#estimators-classification)
* [Algorithms used for regression models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=en#estimators-regression)
* [Metrics by model type](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=en#metric-by-model)
* [Data transformations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=en#data-transformations)
* [Automated Feature Engineering](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=en#feat-eng)
* [Hyperparameter optimization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=en#hyper-opt)
* [AutoAI FAQ](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=en#autoai-faq)
* [Learn more](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=en#add-resource)
Preparing the data for training (data pre-processing)
During automatic data preparation, or pre-processing, AutoAI analyzes the training data and prepares it for model selection and pipeline generation. Most data sets contain missing values but machine learning algorithms typically expect no missing values. One exception to this rule is described in [xgboost section 3.4](https://arxiv.org/abs/1603.02754). AutoAI algorithms perform various missing value imputations in your data set by using various techniques, making your data ready for machine learning. In addition, AutoAI detects and categorizes features based on their data types, such as categorical or numerical. It explores encoding and scaling strategies that are based on the feature categorization.
Data preparation involves these steps:
* [Feature column classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=en#col-classification)
* [Feature engineering](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=en#feature-eng)
* [Pre-processing (data imputation and encoding)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=en#pre-process)
Feature column classification
* Detects the types of feature columns and classifies them as categorical or numerical class
* Detects various types of missing values (default, user-provided, outliers)
Feature engineering
* Handles rows for which target values are missing (drop (default) or target imputation)
* Drops unique value columns (except datetime and timestamps)
* Drops constant value columns
Pre-processing (data imputation and encoding)
* Applies Sklearn imputation/encoding/scaling strategies (separately on each feature class). For example, the current default missing value imputation strategies that are used in the product are most frequent for categorical variables and mean for numerical variables. (See the sketch after this list.)
* Handles labels of test set that were not seen in training set
* HPO feature: Optimizes imputation/encoding/scaling strategies given a data set and algorithm
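The following scikit-learn sketch, referenced in the list above, illustrates imputation, encoding, and scaling applied separately per feature class in the spirit of the defaults described (most frequent for categorical variables, mean for numerical variables). It is illustrative only and not the AutoAI code path; the column names are hypothetical.

```python
# Illustrative sketch only (not the AutoAI code path): imputation, encoding,
# and scaling applied separately per feature class, mirroring the defaults
# described above. Column names are hypothetical.
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

numerical_cols = ["age", "income"]          # hypothetical numerical features
categorical_cols = ["state", "education"]   # hypothetical categorical features

numeric_prep = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),            # default for numerical
    ("scale", StandardScaler()),
])
categorical_prep = Pipeline([
    ("impute", SimpleImputer(strategy="most_frequent")),   # default for categorical
    ("encode", OneHotEncoder(handle_unknown="ignore")),    # tolerate unseen labels
])

preprocessor = ColumnTransformer([
    ("num", numeric_prep, numerical_cols),
    ("cat", categorical_prep, categorical_cols),
])
```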
Automatic model selection
The second stage in an AutoAI experiment training is automated model selection. The automated model selection algorithm uses the Data Allocation by using Upper Bounds strategy. This approach sequentially allocates small subsets of training data among a large set of algorithms. The goal is to select an algorithm that gives near-optimal accuracy when trained on all data, while also minimizing the cost of misallocated samples. The system currently supports all Scikit-learn algorithms, and the popular XGBoost and LightGBM algorithms. Training and evaluation of models on large data sets is costly. The approach of starting with small subsets and allocating incrementally larger ones to models that work well on the data set saves time, without sacrificing performance. Snap machine learning algorithms were added to the system to boost the performance even more.
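As a loose illustration of the incremental allocation idea, the following sketch scores candidate estimators on a small sample first and promotes only the stronger candidates to larger samples. This toy example is not the actual Data Allocation by using Upper Bounds algorithm; it only conveys the intuition of allocating more data to promising algorithms.

```python
# Toy illustration only: score candidates on a small sample first, then promote
# the stronger half to larger samples. This is NOT the actual Data Allocation
# by using Upper Bounds algorithm; it only conveys the allocation intuition.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=50, random_state=0),
}

for subset_size in (500, 1500, 5000):
    scores = {
        name: cross_val_score(est, X[:subset_size], y[:subset_size], cv=3).mean()
        for name, est in candidates.items()
    }
    print(f"subset={subset_size}: {scores}")
    # Keep only the better-performing half of the candidates for the next round.
    keep = sorted(scores, key=scores.get, reverse=True)[: max(1, len(scores) // 2)]
    candidates = {name: candidates[name] for name in keep}
```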
Selecting algorithms for a model
Algorithms are selected to match the data and the nature of the model, but they can also balance accuracy and duration of runtime, if the model is configured for that option. For example, Snap ML algorithms are typically faster for training than Scikit-learn algorithms. They are often the preferred algorithms AutoAI selects automatically for cases where training is optimized for a shorter run time and accuracy. You can manually select them if training speed is a priority. For details, see [Snap ML documentation](https://snapml.readthedocs.io/). For a discussion of when SnapML algorithms are useful, see this [blog post on using SnapML algorithms](https://lukasz-cmielowski.medium.com/watson-studio-autoai-python-api-and-covid-19-data-78169beacf36).
Algorithms used for classification models
These algorithms are the default algorithms that are used for model selection for classification problems.
Table 1: Default algorithms for classification
Algorithm Description
Decision Tree Classifier Maps observations about an item (represented in branches) to conclusions about the item's target value (represented in leaves). Supports both binary and multiclass labels, and both continuous and categorical features.
Extra Trees Classifier An averaging algorithm based on randomized decision trees.
Gradient Boosted Tree Classifier Produces a classification prediction model in the form of an ensemble of decision trees. It supports binary labels and both continuous and categorical features.
LGBM Classifier Gradient boosting framework that uses leaf-wise (horizontal) tree-based learning algorithm.
Logistic Regression Analyzes a data set where one or more independent variables determine one of two outcomes. Only binary logistic regression is supported.
Random Forest Classifier Constructs multiple decision trees to produce the label that is a mode of each decision tree. It supports both binary and multiclass labels, and both continuous and categorical features.
SnapDecisionTreeClassifier This algorithm provides a decision tree classifier by using the IBM Snap ML library.
SnapLogisticRegression This algorithm provides regularized logistic regression by using the IBM Snap ML solver.
SnapRandomForestClassifier This algorithm provides a random forest classifier by using the IBM Snap ML library.
SnapSVMClassifier This algorithm provides a regularized support vector machine by using the IBM Snap ML solver.
XGBoost Classifier Accurate and effective procedure that can be used for classification problems. XGBoost models are used in various areas, including web search ranking and ecology.
SnapBoostingMachineClassifier Boosting machine for binary and multi-class classification tasks that mixes binary decision trees with linear models with random Fourier features.
Algorithms used for regression models
These algorithms are the default algorithms that are used for automatic model selection for regression problems.
Table 2: Default algorithms for regression
Algorithm Description
Decision Tree Regression Maps observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It supports both continuous and categorical features.
Extra Trees Regression An averaging algorithm based on randomized decision trees.
Gradient Boosting Regression Produces a regression prediction model in the form of an ensemble of decision trees. It supports both continuous and categorical features.
LGBM Regression Gradient boosting framework that uses tree-based learning algorithms.
Linear Regression Models the linear relationship between a scalar-dependent variable y and one or more explanatory variables (or independent variables) x.
Random Forest Regression Constructs multiple decision trees to produce the mean prediction of each decision tree. It supports both continuous and categorical features.
Ridge Ridge regression is similar to Ordinary Least Squares but imposes a penalty on the size of coefficients.
SnapBoostingMachineRegressor This algorithm provides a boosting machine by using the IBM Snap ML library that can be used to construct an ensemble of decision trees.
SnapDecisionTreeRegressor This algorithm provides a decision tree by using the IBM Snap ML library.
SnapRandomForestRegressor This algorithm provides a random forest by using the IBM Snap ML library.
XGBoost Regression GBRT is an accurate and effective off-the-shelf procedure that can be used for regression problems. Gradient Tree Boosting models are used in various areas, including web search ranking and ecology.
Metrics by model type
The following metrics are available for measuring the accuracy of pipelines during training and for scoring data.
Binary classification metrics
* Accuracy (default for ranking the pipelines)
* Roc auc
* Average precision
* F1
* Negative log loss
* Precision
* Recall
Multi-class classification metrics
Metrics for multi-class models generate scores for how well a pipeline performs against the specified measurement. For example, an F1 score averages precision (of the predictions made, how many positive predictions were correct) and recall (of all possible positive predictions, how many were predicted correctly).
You can further refine a score by qualifying it to calculate the given metric globally (macro), per label (micro), or to weight an imbalanced data set to favor classes with more representation.
* Metrics with the micro qualifier calculate metrics globally by counting the total number of true positives, false negatives and false positives.
* Metrics with the macro qualifier calculate metrics for each label, and find their unweighted mean. All labels are weighted equally.
* Metrics with the weighted qualifier calculate metrics for each label, and find their average weighted by the contribution of each class. For example, in a data set that includes categories for apples, peaches, and plums, if there are many more instances of apples, the weighted metric gives greater importance to correctly predicting apples. This alters macro to account for label imbalance. Use a weighted metric such as F1-weighted for an imbalanced data set.
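The effect of the micro, macro, and weighted qualifiers can be seen directly with scikit-learn's metric functions. The following sketch is illustrative only and uses a small, imbalanced toy label set.

```python
# Illustrative sketch: how the micro, macro, and weighted qualifiers change an
# F1 score on a small, imbalanced multi-class label set.
from sklearn.metrics import f1_score

# Many apples, few peaches and plums: an imbalanced label set.
y_true = ["apple"] * 8 + ["peach"] * 2 + ["plum"] * 2
y_pred = ["apple"] * 7 + ["peach"] + ["peach", "apple"] + ["plum"] * 2

print("F1 micro:   ", f1_score(y_true, y_pred, average="micro"))     # global counts
print("F1 macro:   ", f1_score(y_true, y_pred, average="macro"))     # unweighted mean per label
print("F1 weighted:", f1_score(y_true, y_pred, average="weighted"))  # weighted by label frequency
```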
These are the multi-class classification metrics:
* Accuracy (default for ranking the pipelines)
* F1
* F1 Micro
* F1 Macro
* F1 Weighted
* Precision
* Precision Micro
* Precision Macro
* Precision Weighted
* Recall
* Recall Micro
* Recall Macro
* Recall Weighted
Regression metrics
* Negative root mean squared error (default for ranking the pipeline)
* Negative mean absolute error
* Negative root mean squared log error
* Explained variance
* Negative mean squared error
* Negative mean squared log error
* Negative median absolute error
* R2
Automated Feature Engineering
The third stage in the AutoAI process is automated feature engineering. The automated feature engineering algorithm is based on Cognito, described in the research papers, [Cognito: Automated Feature Engineering for Supervised Learning](https://ieeexplore.ieee.org/abstract/document/7836821) and [Feature Engineering for Predictive Modeling by using Reinforcement Learning](https://research.ibm.com/publications/feature-engineering-for-predictive-modeling-using-reinforcement-learning). The system explores various feature construction choices in a hierarchical and nonexhaustive manner, while progressively maximizing the accuracy of the model through an exploration-exploitation strategy. This method is inspired by the "trial and error" strategy for feature engineering, but conducted by an autonomous agent in place of a human.
Metrics used for feature importance
For tree-based classification and regression algorithms such as Decision Tree, Extra Trees, Random Forest, XGBoost, Gradient Boosted, and LGBM, feature importances are their inherent feature importance scores based on the reduction in the criterion that is used to select split points, and are calculated when these algorithms are trained on the training data.
For nontree algorithms such as Logistic Regression, Linear Regression, SnapSVM, and Ridge, the feature importances are the feature importances of a Random Forest algorithm that is trained on the same training data as the nontree algorithm.
For any algorithm, all feature importances are in the range between zero and one and have been normalized as the ratio with respect to the maximum feature importance.
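As an illustration of this normalization, the following sketch takes the inherent importances of a tree-based scikit-learn model and expresses them as ratios with respect to the maximum importance. It is not the AutoAI implementation; the synthetic data set is for demonstration only.

```python
# Illustrative sketch only: inherent feature importances from a tree-based
# model, normalized as the ratio with respect to the maximum importance.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=6, n_informative=3, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
raw_importances = model.feature_importances_          # reduction-in-criterion scores
normalized = raw_importances / raw_importances.max()  # all values end up in (0, 1]

for i, value in enumerate(normalized):
    print(f"feature_{i}: {value:.3f}")
```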
Data transformations
For feature engineering, AutoAI uses a novel approach that explores various feature construction choices in a structured, nonexhaustive manner, while progressively maximizing model accuracy by using reinforcement learning. This results in an optimized sequence of transformations for the data that best match the algorithm, or algorithms, of the model selection step. This table lists some of the transformations that are used and some well-known conditions under which they are useful. This is not an exhaustive list of scenarios where the transformation is useful, as that can be complex and hard to interpret. Finally, the listed scenarios are not an explanation of how the transformations are selected. The selection of which transforms to apply is done in a trial and error, performance-oriented manner.
Table 3: Transformations for feature engineering
Name Code Function
Principal Component Analysis pca Reduce dimensions of data and realign across a more suitable coordinate system. Helps tackle the 'curse of dimensionality' in linearly correlated data. It eliminates redundancy and separates significant signals in data.
Standard Scaler stdscaler Scales data features to a standard range. This helps the efficacy and efficiency of certain learning algorithms and other transformations such as PCA.
Logarithm log Reduces right skewness in features and makes them more symmetric. Resulting symmetry in features helps algorithms understand the data better. Even scaling based on mean and variance is more meaningful on symmetrical data. Additionally, it can capture specific physical relationships between feature and target that are best described through a logarithm.
Cube Root cbrt Reduces right skewness in data like logarithm, but is weaker than log in its impact, which might be more suitable in some cases. It is also applicable to negative or zero values to which log doesn't apply. Cube root can also change units such as reducing volume to length.
Square root sqrt Reduces mild right skewness in data. It is weaker than log or cube root. It works with zeros and reduces spatial dimensions such as area to length.
Square square Reduces left skewness to a moderate extent to make such distributions more symmetric. It can also be helpful in capturing certain phenomena such as super-linear growth.
Product product A product of two features can expose a nonlinear relationship to better predict the target value than the individual values alone. For example, item cost multiplied by the number of items sold is a better indication of the size of a business than either of those alone.
Numerical XOR nxor This transform helps capture "exclusive disjunction" type of relationships between variables, similar to a bitwise XOR, but in a general numerical context.
Sum sum Sometimes the sum of two features is better correlated to the prediction target than the features alone. For instance, loans from different sources, when summed up, provide a better idea of a credit applicant's total indebtedness.
Divide divide Division is a fundamental operand that is used to express quantities such as gross GDP over population (per capita GDP), representing a country's average lifespan better than either GDP alone or population alone.
Maximum max Take the higher of two values.
Rounding round This transformation can be seen as perturbation or adding some noise to reduce overfitting that might be a result of inaccurate observations.
Absolute Value abs Consider only the magnitude and not the sign of observation. Sometimes, the direction or sign of an observation doesn't matter so much as the magnitude of it, such as physical displacement, while considering fuel or time spent in the actual movement.
Hyperbolic tangent tanh Nonlinear activation function can improve prediction accuracy, similar to that of neural network activation functions.
Sine sin Can reorient data to discover periodic trends such as simple harmonic motions.
Cosine cos Can reorient data to discover periodic trends such as simple harmonic motions.
Tangent tan Trigonometric tangent transform is usually helpful in combination with other transforms.
Feature Agglomeration feature agglomeration Clustering different features into groups, based on distance or affinity, provides ease of classification for the learning algorithm.
Sigmoid sigmoid Nonlinear activation function can improve prediction accuracy, similar to that of neural network activation functions.
Isolation Forest isoforestanomaly Performs clustering by using an Isolation Forest to create a new feature containing an anomaly score for each sample.
Word to vector word2vec This algorithm, which is used for text analysis, is applied before all other transformations. It takes a corpus of text as input and outputs a set of vectors. By turning text into a numerical representation, it can detect and compare similar words. When trained with enough data, word2vec can make accurate predictions about a word’s meaning or relationship to other words. The predictions can be used to analyze text and predict meaning in sentiment analysis applications.
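A few of the listed transformations can be expressed with NumPy and scikit-learn for illustration. The following sketch is not how AutoAI applies them; AutoAI selects and chains transformations automatically during its pipeline search.

```python
# Illustrative sketch only: a handful of the transformations listed above,
# applied to toy, right-skewed feature columns. AutoAI selects and chains such
# transforms automatically during its pipeline search.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.gamma(shape=2.0, scale=3.0, size=(100, 2))       # right-skewed toy features

log_x   = np.log(X[:, 0])                                # log: reduce right skewness
cbrt_x  = np.cbrt(X[:, 0])                               # cube root: milder, allows <= 0
product = X[:, 0] * X[:, 1]                              # product: nonlinear interaction
total   = X[:, 0] + X[:, 1]                              # sum: combined quantity
scaled  = StandardScaler().fit_transform(X)              # standard scaler
components = PCA(n_components=1).fit_transform(scaled)   # principal component analysis

print(components[:5].ravel())
```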
Hyperparameter Optimization
The final stage in AutoAI is hyperparameter optimization. The AutoAI approach optimizes the parameters of the best performing pipelines from the previous phases. It is done by exploring the parameter ranges of these pipelines by using a black box hyperparameter optimizer called RBFOpt. RBFOpt is described in the research paper [RBFOpt: an open-source library for black-box optimization with costly function evaluations](http://www.optimization-online.org/DB_HTML/2014/09/4538.html). RBFOpt is suited for AutoAI experiments because it is built for optimizations with costly evaluations, as in the case of training and scoring an algorithm. RBFOpt's approach builds and iteratively refines a surrogate model of the unknown objective function to converge quickly despite the long evaluation times of each iteration.
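The optimizer described in the cited paper is available as the open-source rbfopt package. The following sketch, modeled on the package's documented minimal usage, searches a toy two-dimensional hyperparameter space by minimizing negative cross-validation accuracy. It is illustrative only and is not how AutoAI invokes the optimizer internally; treat the exact call signatures as an assumption to verify against the rbfopt documentation, and note that rbfopt also requires an external MINLP solver such as Bonmin.

```python
# Illustrative sketch only: black-box hyperparameter search with the
# open-source rbfopt package, minimizing negative cross-validation accuracy of
# a toy model. Not the internal AutoAI code path; the call signatures follow
# the package's documented minimal usage and should be verified against the
# rbfopt documentation. rbfopt also needs an external MINLP solver (for
# example, Bonmin) available on the system.
import numpy as np
import rbfopt
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

def objective(params):
    n_estimators, max_depth = int(params[0]), int(params[1])
    model = RandomForestClassifier(n_estimators=n_estimators,
                                   max_depth=max_depth, random_state=0)
    # rbfopt minimizes, so return the negative accuracy.
    return -cross_val_score(model, X, y, cv=3).mean()

black_box = rbfopt.RbfoptUserBlackBox(
    2,                        # number of hyperparameters
    np.array([10, 2]),        # lower bounds: n_estimators, max_depth
    np.array([200, 20]),      # upper bounds
    np.array(["I", "I"]),     # both variables are integers
    objective,
)
settings = rbfopt.RbfoptSettings(max_evaluations=25)
best_value, best_point, *_ = rbfopt.RbfoptAlgorithm(settings, black_box).optimize()
print("Best accuracy:", -best_value, "at", best_point)
```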
AutoAI FAQs
The following are commonly asked questions about creating an AutoAI experiment.
How many pipelines are created?
Two AutoAI parameters determine the number of pipelines:
* max_num_daub_ensembles: Maximum number (top-K ranked by DAUB model selection) of the selected algorithm, or estimator types, for example LGBMClassifierEstimator, XGBoostClassifierEstimator, or LogisticRegressionEstimator to use in pipeline composition. The default is 1, where only the algorithm type that is ranked highest by model selection is used.
* num_folds: Number of subsets of the full data set to train pipelines in addition to the full data set. The default is 1 for training the full data set.
For each fold and algorithm type, AutoAI creates four pipelines of increased refinement, corresponding to:
1. Pipeline with default sklearn parameters for this algorithm type,
2. Pipeline with optimized algorithm by using HPO
3. Pipeline with optimized feature engineering
4. Pipeline with optimized feature engineering and optimized algorithm by using HPO
The total number of pipelines that are generated is:
TotalPipelines = max_num_daub_ensembles * 4, if num_folds = 1
TotalPipelines = (num_folds + 1) * max_num_daub_ensembles * 4, if num_folds > 1
For example, with max_num_daub_ensembles = 2 and num_folds = 1, AutoAI generates 2 * 4 = 8 pipelines; with num_folds = 3, it generates (3 + 1) * 2 * 4 = 32 pipelines.
What hyperparameter optimization is applied to my model?
AutoAI uses a model-based, derivative-free global search algorithm, called RBFOpt, which is tailored for the costly machine learning model training and scoring evaluations that are required by hyperparameter optimization (HPO). In contrast to Bayesian optimization, which fits a Gaussian model to the unknown objective function, RBFOpt fits a radial basis function model to accelerate the discovery of hyperparameter configurations that maximize the objective function of the machine learning problem at hand. This acceleration is achieved by minimizing the number of expensive training and scoring evaluations of machine learning models and by eliminating the need to compute partial derivatives.
For each fold and algorithm type, AutoAI creates two pipelines that use HPO to optimize for the algorithm type.
* The first is based on optimizing this algorithm type on the preprocessed (imputed/encoded/scaled) data set (pipeline 2 above).
* The second is based on optimizing the algorithm type based on optimized feature engineering of the preprocessed (imputed/encoded/scaled) data set.
The parameter values of the algorithms of all pipelines that are generated by AutoAI are published in status messages.
For more details regarding the RbfOpt algorithm, see:
* [RbfOpt: A blackbox optimization library in Python](https://github.com/coin-or/rbfopt)
* [An effective algorithm for hyperparameter optimization of neural networks. IBM Journal of Research and Development, 61(4-5), 2017](http://ieeexplore.ieee.org/document/8030298/)
Research references
This list includes some of the foundational research articles that further detail how AutoAI was designed and implemented to promote trust and transparency in the automated model-building process.
* [Toward cognitive automation of data science](https://scholar.google.com/citations?view_op=view_citation&hl=en&user=km7EsqsAAAAJ&cst[…]&sortby=pubdate&citation_for_view=km7EsqsAAAAJ:R3hNpaxXUhUC)
* [Cognito: Automated feature engineering for supervised learning](https://scholar.google.com/citations?view_op=view_citation&hl=en&user=km7EsqsAAAAJ&cst[…]&sortby=pubdate&citation_for_view=km7EsqsAAAAJ:maZDTaKrznsC)
Next steps
[Data imputation in AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-imputation.html)
Parent topic:[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
| # AutoAI implementation details #
AutoAI automatically prepares data, applies algorithms, or estimators, and builds model pipelines that are best suited for your data and use case\.
The following sections describe some of these technical details that go into generating the pipelines and provide a list of research papers that describe how AutoAI was designed and implemented\.
<!-- <ul> -->
* [Preparing the data for training (pre\-processing)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=en#data-prep)
* [Automated model selection](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=en#auto-select)
* [Algorithms used for classification models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=en#estimators-classification)
* [Algorithms used for regression models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=en#estimators-regression)
* [Metrics by model type](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=en#metric-by-model)
* [Data transformations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=en#data-transformations)
* [Automated Feature Engineering](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=en#feat-eng)
* [Hyperparameter optimization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=en#hyper-opt)
* [AutoAI FAQ](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=en#autoai-faq)
* [Learn more](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=en#add-resource)
<!-- </ul> -->
## Preparing the data for training (data pre\-processing) ##
During automatic data preparation, or pre\-processing, AutoAI analyzes the training data and prepares it for model selection and pipeline generation\. Most data sets contain missing values but machine learning algorithms typically expect no missing values\. One exception to this rule is described in [xgboost section 3\.4](https://arxiv.org/abs/1603.02754)\. AutoAI algorithms perform various missing value imputations in your data set by using various techniques, making your data ready for machine learning\. In addition, AutoAI detects and categorizes features based on their data types, such as categorical or numerical\. It explores encoding and scaling strategies that are based on the feature categorization\.
Data preparation involves these steps:
<!-- <ul> -->
* [Feature column classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=en#col-classification)
* [Feature engineering](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=en#feature-eng)
* [Pre\-processing (data imputation and encoding)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=en#pre-process)
<!-- </ul> -->
### Feature column classification ###
<!-- <ul> -->
* Detects the types of feature columns and classifies them as categorical or numerical class
* Detects various types of missing values (default, user\-provided, outliers)
<!-- </ul> -->
### Feature engineering ###
<!-- <ul> -->
* Handles rows for which target values are missing (drop (default) or target imputation)
* Drops unique value columns (except datetime and timestamps)
* Drops constant value columns
<!-- </ul> -->
### Pre\-processing (data imputation and encoding) ###
<!-- <ul> -->
* Applies Sklearn imputation/encoding/scaling strategies (separately on each feature class)\. For example, the current default missing value imputation strategies that are used in the product are `most frequent` for categorical variables and `mean` for numerical variables\.
* Handles labels of test set that were not seen in training set
* HPO feature: Optimizes imputation/encoding/scaling strategies given a data set and algorithm
<!-- </ul> -->
## Automatic model selection ##
The second stage in an AutoAI experiment training is automated model selection\. The automated model selection algorithm uses the Data Allocation by using Upper Bounds strategy\. This approach sequentially allocates small subsets of training data among a large set of algorithms\. The goal is to select an algorithm that gives near\-optimal accuracy when trained on all data, while also minimizing the cost of misallocated samples\. The system currently supports all Scikit\-learn algorithms, and the popular XGBoost and LightGBM algorithms\. Training and evaluation of models on large data sets is costly\. The approach of starting with small subsets and allocating incrementally larger ones to models that work well on the data set saves time, without sacrificing performance\. Snap machine learning algorithms were added to the system to boost the performance even more\.
### Selecting algorithms for a model ###
Algorithms are selected to match the data and the nature of the model, but they can also balance accuracy and duration of runtime, if the model is configured for that option\. For example, Snap ML algorithms are typically faster for training than Scikit\-learn algorithms\. They are often the preferred algorithms AutoAI selects automatically for cases where training is optimized for a shorter run time and accuracy\. You can manually select them if training speed is a priority\. For details, see [Snap ML documentation](https://snapml.readthedocs.io/)\. For a discussion of when SnapML algorithms are useful, see this [blog post on using SnapML algorithms](https://lukasz-cmielowski.medium.com/watson-studio-autoai-python-api-and-covid-19-data-78169beacf36)\.
### Algorithms used for classification models ###
These algorithms are the default algorithms that are used for model selection for classification problems\.
<!-- <table> -->
Table 1: Default algorithms for classification
| **Algorithm** | **Description** |
| -------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Decision Tree Classifier | Maps observations about an item (represented in branches) to conclusions about the item's target value (represented in leaves)\. Supports both binary and multiclass labels, and both continuous and categorical features\. |
| Extra Trees Classifier | An averaging algorithm based on randomized decision trees\. |
| Gradient Boosted Tree Classifier | Produces a classification prediction model in the form of an ensemble of decision trees\. It supports binary labels and both continuous and categorical features\. |
| LGBM Classifier | Gradient boosting framework that uses leaf\-wise (horizontal) tree\-based learning algorithm\. |
| Logistic Regression | Analyzes a data set where one or more independent variables determine one of two outcomes\. Only binary logistic regression is supported |
| Random Forest Classifier | Constructs multiple decision trees to produce the label that is a mode of each decision tree\. It supports both binary and multiclass labels, and both continuous and categorical features\. |
| SnapDecisionTreeClassifier | This algorithm provides a decision tree classifier by using the IBM Snap ML library\. |
| SnapLogisticRegression | This algorithm provides regularized logistic regression by using the IBM Snap ML solver\. |
| SnapRandomForestClassifier | This algorithm provides a random forest classifier by using the IBM Snap ML library\. |
| SnapSVMClassifier | This algorithm provides a regularized support vector machine by using the IBM Snap ML solver\. |
| XGBoost Classifier | An accurate and effective procedure that can be used for classification problems\. XGBoost models are used in various areas, including web search ranking and ecology\. |
| SnapBoostingMachineClassifier | Boosting machine for binary and multi\-class classification tasks that mixes binary decision trees with linear models that use random Fourier features\. |
<!-- </table ""> -->
### Algorithms used for regression models ###
These algorithms are the default algorithms that are used for automatic model selection for regression problems\.
<!-- <table> -->
Table 2: Default algorithms for regression
| **Algorithm** | **Description** |
| ---------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Decision Tree Regression | Maps observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves)\. It supports both continuous and categorical features\. |
| Extra Trees Regression | An averaging algorithm based on randomized decision trees\. |
| Gradient Boosting Regression | Produces a regression prediction model in the form of an ensemble of decision trees\. It supports both continuous and categorical features\. |
| LGBM Regression | Gradient boosting framework that uses tree\-based learning algorithms\. |
| Linear Regression | Models the linear relationship between a scalar\-dependent variable y and one or more explanatory variables (or independent variables) x\. |
| Random Forest Regression | Constructs multiple decision trees to produce the mean prediction of each decision tree\. It supports both continuous and categorical features\. |
| Ridge | Ridge regression is similar to Ordinary Least Squares but imposes a penalty on the size of coefficients\. |
| SnapBoostingMachineRegressor | This algorithm provides a boosting machine by using the IBM Snap ML library that can be used to construct an ensemble of decision trees\. |
| SnapDecisionTreeRegressor | This algorithm provides a decision tree by using the IBM Snap ML library\. |
| SnapRandomForestRegressor | This algorithm provides a random forest by using the IBM Snap ML library\. |
| XGBoost Regression | GBRT is an accurate and effective off\-the\-shelf procedure that can be used for regression problems\. Gradient Tree Boosting models are used in various areas, including web search ranking and ecology\. |
<!-- </table ""> -->
## Metrics by model type ##
The following metrics are available for measuring the accuracy of pipelines during training and for scoring data\.
### Binary classification metrics ###
<!-- <ul> -->
* Accuracy (default for ranking the pipelines)
* Roc auc
* Average precision
* F1
* Negative log loss
* Precision
* Recall
<!-- </ul> -->
### Multi\-class classification metrics ###
Metrics for multi\-class models generate scores for how well a pipeline performs against the specified measurement\. For example, an F1 score combines *precision* (of the predictions made, how many positive predictions were correct) and *recall* (of all possible positive predictions, how many were predicted correctly) as their harmonic mean\.
You can further refine a score by qualifying it to calculate the given metric globally (macro), per label (micro), or to weight an imbalanced data set to favor classes with more representation\.
<!-- <ul> -->
* Metrics with the *micro* qualifier calculate metrics globally by counting the total number of true positives, false negatives, and false positives\.
* Metrics with the *macro* qualifier calculate metrics for each label, and find their unweighted mean\. All labels are weighted equally\.
* Metrics with the *weighted* qualifier calculate metrics for each label, and find their average weighted by the contribution of each class\. For example, in a data set that includes categories for apples, peaches, and plums, if there are many more instances of apples, the weighted metric gives greater importance to correctly predicting apples\. This alters *macro* to account for label imbalance\. Use a weighted metric such as F1\-weighted for an imbalanced data set\. See the sketch after this list\.
<!-- </ul> -->
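For example, this minimal scikit\-learn sketch (with made\-up labels) shows how the micro, macro, and weighted qualifiers change an F1 score on an imbalanced three\-class problem\.
# Illustrative only: effect of the micro, macro, and weighted qualifiers.
from sklearn.metrics import f1_score
y_true = ["apple"] * 6 + ["peach", "plum"]
y_pred = ["apple"] * 5 + ["peach", "peach", "plum"]
f1_score(y_true, y_pred, average="micro")     # global count of TP, FP, and FN
f1_score(y_true, y_pred, average="macro")     # unweighted mean over labels
f1_score(y_true, y_pred, average="weighted")  # mean weighted by label frequency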
These are the multi\-class classification metrics:
<!-- <ul> -->
* Accuracy (default for ranking the pipelines)
* F1
* F1 Micro
* F1 Macro
* F1 Weighted
* Precision
* Precision Micro
* Precision Macro
* Precision Weighted
* Recall
* Recall Micro
* Recall Macro
* Recall Weighted
<!-- </ul> -->
### Regression metrics ###
<!-- <ul> -->
* Negative root mean squared error (default for ranking the pipeline)
* Negative mean absolute error
* Negative root mean squared log error
* Explained variance
* Negative mean squared error
* Negative mean squared log error
* Negative median absolute error
* R2
<!-- </ul> -->
## Automated Feature Engineering ##
The third stage in the AutoAI process is automated feature engineering\. The automated feature engineering algorithm is based on Cognito, described in the research papers, [Cognito: Automated Feature Engineering for Supervised Learning](https://ieeexplore.ieee.org/abstract/document/7836821) and [Feature Engineering for Predictive Modeling by using Reinforcement Learning](https://research.ibm.com/publications/feature-engineering-for-predictive-modeling-using-reinforcement-learning)\. The system explores various feature construction choices in a hierarchical and nonexhaustive manner, while progressively maximizing the accuracy of the model through an exploration\-exploitation strategy\. This method is inspired by the "trial and error" strategy for feature engineering, but conducted by an autonomous agent in place of a human\.
### Metrics used for feature importance ###
For tree\-based classification and regression algorithms such as Decision Tree, Extra Trees, Random Forest, XGBoost, Gradient Boosted, and LGBM, feature importances are their inherent feature importance scores based on the reduction in the criterion that is used to select split points, and calculated when these algorithms are trained on the training data\.
For nontree algorithms such as Logistic Regression, Linear Regression, SnapSVM, and Ridge, the feature importances are the feature importances of a Random Forest algorithm that is trained on the same training data as the nontree algorithm\.
For any algorithm, all feature importances are in the range between zero and one and have been normalized as the ratio with respect to the maximum feature importance\.
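For example, a minimal sketch of that normalization (the raw importance values are arbitrary)\.
# Normalize raw importances to [0, 1] as the ratio to the maximum importance.
import numpy as np
raw_importances = np.array([0.02, 0.35, 0.08, 0.55])
normalized = raw_importances / raw_importances.max()  # the largest value becomes 1.0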
### Data transformations ###
For feature engineering, AutoAI uses a novel approach that explores various feature construction choices in a structured, nonexhaustive manner, while progressively maximizing model accuracy by using reinforcement learning\. This results in an optimized sequence of transformations for the data that best match the algorithm, or algorithms, of the model selection step\. This table lists some of the transformations that are used and some well\-known conditions under which they are useful\. This is not an exhaustive list of scenarios where the transformation is useful, as that can be complex and hard to interpret\. Finally, the listed scenarios are not an explanation of how the transformations are selected\. The selection of which transforms to apply is done in a trial and error, performance\-oriented manner; a brief illustration follows the table\.
<!-- <table> -->
Table 3: Transformations for feature engineering
| **Name** | **Code** | **Function** |
| ---------------------------- | --------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Principal Component Analysis | pca | Reduce dimensions of data and realign across a more suitable coordinate system\. Helps tackle the 'curse of dimensionality' in linearly correlated data\. It eliminates redundancy and separates significant signals in data\. |
| Standard Scaler | stdscaler | Scales data features to a standard range\. This helps the efficacy and efficiency of certain learning algorithms and other transformations such as PCA\. |
| Logarithm | log | Reduces right skewness in features and makes them more symmetric\. Resulting symmetry in features helps algorithms understand the data better\. Even scaling based on mean and variance is more meaningful on symmetrical data\. Additionally, it can capture specific physical relationships between feature and target that are best described through a logarithm\. |
| Cube Root | cbrt | Reduces right skewness in data like logarithm, but is weaker than log in its impact, which might be more suitable in some cases\. It is also applicable to negative or zero values to which log doesn't apply\. Cube root can also change units such as reducing volume to length\. |
| Square root | sqrt | Reduces mild right skewness in data\. It is weaker than log or cube root\. It works with zeros and reduces spatial dimensions such as area to length\. |
| Square | square | Reduces left skewness to a moderate extent to make such distributions more symmetric\. It can also be helpful in capturing certain phenomena such as super\-linear growth\. |
| Product | product | A product of two features can expose a nonlinear relationship to better predict the target value than the individual values alone\. For example, item cost multiplied by the number of items sold is a better indication of the size of a business than either of those alone\. |
| Numerical XOR | nxor | This transform helps capture "exclusive disjunction" type of relationships between variables, similar to a bitwise XOR, but in a general numerical context\. |
| Sum | sum | Sometimes the sum of two features is better correlated to the prediction target than the features alone\. For instance, loans from different sources, when summed up, provide a better idea of a credit applicant's total indebtedness\. |
| Divide | divide | Division is a fundamental operation that is used to express quantities such as gross GDP over population (per capita GDP), representing a country's average lifespan better than either GDP alone or population alone\. |
| Maximum | max | Take the higher of two values\. |
| Rounding | round | This transformation can be seen as perturbation or adding some noise to reduce overfitting that might be a result of inaccurate observations\. |
| Absolute Value | abs | Consider only the magnitude and not the sign of observation\. Sometimes, the direction or sign of an observation doesn't matter so much as the magnitude of it, such as physical displacement, while considering fuel or time spent in the actual movement\. |
| Hyperbolic tangent | tanh | Nonlinear activation function can improve prediction accuracy, similar to that of neural network activation functions\. |
| Sine | sin | Can reorient data to discover periodic trends such as simple harmonic motions\. |
| Cosine | cos | Can reorient data to discover periodic trends such as simple harmonic motions\. |
| Tangent | tan | Trigonometric tangent transform is usually helpful in combination with other transforms\. |
| Feature Agglomeration | feature agglomeration | Clustering different features into groups, based on distance or affinity, provides ease of classification for the learning algorithm\. |
| Sigmoid | sigmoid | Nonlinear activation function can improve prediction accuracy, similar to that of neural network activation functions\. |
| Isolation Forest | isoforestanomaly | Performs clustering by using an Isolation Forest to create a new feature containing an anomaly score for each sample\. |
| Word to vector | word2vec | This algorithm, which is used for text analysis, is applied before all other transformations\. It takes a corpus of text as input and outputs a set of vectors\. By turning text into a numerical representation, it can detect and compare similar words\. When trained with enough data, `word2vec` can make accurate predictions about a word’s meaning or relationship to other words\. The predictions can be used to analyze text and predict meaning in sentiment analysis applications\. |
<!-- </table ""> -->
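As a brief illustration (with assumed column names, not an AutoAI API), a few of the transformations in the preceding table can be reproduced directly with pandas and NumPy\.
# Illustrative only: AutoAI selects and chains such transforms automatically.
import numpy as np
import pandas as pd
df = pd.DataFrame({"cost": [2.0, 5.0, 3.5], "items_sold": [100, 40, 250]})
df["log_items"] = np.log(df["items_sold"])     # log: reduce right skewness
df["sqrt_items"] = np.sqrt(df["items_sold"])   # sqrt: milder skew reduction
df["revenue"] = df["cost"] * df["items_sold"]  # product of two features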
## Hyperparameter Optimization ##
The final stage in AutoAI is hyperparameter optimization\. The AutoAI approach optimizes the parameters of the best performing pipelines from the previous phases\. It is done by exploring the parameter ranges of these pipelines by using a black box hyperparameter optimizer called RBFOpt\. RBFOpt is described in the research paper [RBFOpt: an open\-source library for black\-box optimization with costly function evaluations](http://www.optimization-online.org/DB_HTML/2014/09/4538.html)\. RBFOpt is suited for AutoAI experiments because it is built for optimizations with costly evaluations, as in the case of training and scoring an algorithm\. RBFOpt's approach builds and iteratively refines a surrogate model of the unknown objective function to converge quickly despite the long evaluation times of each iteration\.
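For reference, the following sketch follows the documented quick\-start interface of the open\-source `rbfopt` package; the toy objective, bounds, and variable types are assumptions for illustration, and in AutoAI the objective corresponds to the costly training and scoring of a candidate pipeline\.
# Illustrative use of the open-source rbfopt black-box optimizer.
import numpy as np
import rbfopt
def objective(x):
    # Stand-in for an expensive evaluation, such as training and scoring a pipeline.
    return x[0] * x[1] - x[2]
black_box = rbfopt.RbfoptUserBlackBox(
    3,                             # number of decision variables
    np.array([0.0, 0.0, 0.0]),     # lower bounds
    np.array([10.0, 10.0, 10.0]),  # upper bounds
    np.array(['R', 'I', 'R']),     # variable types: real ('R') or integer ('I')
    objective,
)
settings = rbfopt.RbfoptSettings(max_evaluations=50)
algorithm = rbfopt.RbfoptAlgorithm(settings, black_box)
best_value, best_point, *_ = algorithm.optimize()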
## AutoAI FAQs ##
The following are commonly asked questions about creating an AutoAI experiment\.
### How many pipelines are created? ###
Two AutoAI parameters determine the number of pipelines:
<!-- <ul> -->
* **max\_num\_daub\_ensembles:** Maximum number (top\-K ranked by DAUB model selection) of the selected algorithm, or estimator types, for example LGBMClassifierEstimator, XGBoostClassifierEstimator, or LogisticRegressionEstimator to use in pipeline composition\. The default is 1, where only the highest ranked by model selection algorithm type is used\.
* **num\_folds:** Number of subsets of the full data set to train pipelines in addition to the full data set\. The default is 1 for training the full data set\.
<!-- </ul> -->
For each fold and algorithm type, AutoAI creates four pipelines of increased refinement, corresponding to:
<!-- <ol> -->
1. Pipeline with default sklearn parameters for this algorithm type
2. Pipeline with optimized algorithm by using HPO
3. Pipeline with optimized feature engineering
4. Pipeline with optimized feature engineering and optimized algorithm by using HPO
<!-- </ol> -->
The total number of pipelines that are generated is:
TotalPipelines = max_num_daub_ensembles * 4, if num_folds = 1
TotalPipelines = (num_folds + 1) * max_num_daub_ensembles * 4, if num_folds > 1
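For example, a quick check of the formula with hypothetical settings:
# With max_num_daub_ensembles = 2 and num_folds = 3:
max_num_daub_ensembles = 2
num_folds = 3
if num_folds > 1:
    total_pipelines = (num_folds + 1) * max_num_daub_ensembles * 4  # 32 pipelines
else:
    total_pipelines = max_num_daub_ensembles * 4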
### What hyperparameter optimization is applied to my model? ###
AutoAI uses a model\-based, derivative\-free global search algorithm, called RBfOpt, which is tailored for the costly machine learning model training and scoring evaluations that are required by hyperparameter optimization (HPO)\. In contrast to Bayesian optimization, which fits a Gaussian model to the unknown objective function, RBfOpt fits a radial basis function model to accelerate the discovery of hyperparameter configurations that maximize the objective function of the machine learning problem at hand\. This acceleration is achieved by minimizing the number of expensive machine learning model training and scoring evaluations and by eliminating the need to compute partial derivatives\.
For each fold and algorithm type, AutoAI creates two pipelines that use HPO to optimize for the algorithm type\.
<!-- <ul> -->
* The first optimizes the algorithm type on the preprocessed (imputed/encoded/scaled) data set (pipeline 2 above)\.
* The second optimizes the algorithm type on the feature\-engineered version of the preprocessed (imputed/encoded/scaled) data set\.
<!-- </ul> -->
The parameter values of the algorithms of all pipelines that are generated by AutoAI are published in status messages\.
For more details regarding the RbfOpt algorithm, see:
<!-- <ul> -->
* [RbfOpt: A blackbox optimization library in Python](https://github.com/coin-or/rbfopt)
* [An effective algorithm for hyperparameter optimization of neural networks\. IBM Journal of Research and Development, 61(4\-5), 2017](http://ieeexplore.ieee.org/document/8030298/)
<!-- </ul> -->
**Research references**
This list includes some of the foundational research articles that further detail how AutoAI was designed and implemented to promote trust and transparency in the automated model\-building process\.
<!-- <ul> -->
* [Toward cognitive automation of data science](https://scholar.google.com/citations?view_op=view_citation&hl=en&user=km7EsqsAAAAJ&cst[…]&sortby=pubdate&citation_for_view=km7EsqsAAAAJ:R3hNpaxXUhUC)
* [Cognito: Automated feature engineering for supervised learning](https://scholar.google.com/citations?view_op=view_citation&hl=en&user=km7EsqsAAAAJ&cst[…]&sortby=pubdate&citation_for_view=km7EsqsAAAAJ:maZDTaKrznsC)
<!-- </ul> -->
## Next steps ##
[Data imputation in AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-imputation.html)
**Parent topic:**[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
<!-- </article "role="article" "> -->
|
3F0B3A581945A1C7FE243340843CC4671A4E32C6 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html?context=cdpaas&locale=en | Applying fairness testing to AutoAI experiments | Applying fairness testing to AutoAI experiments
Evaluate an experiment for fairness to ensure that your results are not biased in favor of one group over another.
Limitations
Fairness evaluations are not supported for time series experiments.
Evaluating experiments and models for fairness
When you define an experiment and produce a machine learning model, you want to be sure that your results are reliable and unbiased. Bias in a machine learning model can result when the model learns the wrong lessons during training. This scenario can arise when insufficient data, or poor data collection and management, leads to poor outcomes when the model generates predictions. It is important to evaluate an experiment for signs of bias, to remediate them when necessary, and to build confidence in the model results.
AutoAI includes the following tools, techniques, and features to help you evaluate and remediate an experiment for bias.
* [Definitions and terms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html?context=cdpaas&locale=en#terms)
* [Applying fairness test for an AutoAI experiment in the UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html?context=cdpaas&locale=en#fairness-ui)
* [Applying fairness test for an AutoAI experiment in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html?context=cdpaas&locale=en#fairness-api)
* [Evaluating results](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html?context=cdpaas&locale=en#fairness-results)
* [Bias mitigation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html?context=cdpaas&locale=en#bias-mitigation)
Definitions and terms
Fairness Attribute - Bias or Fairness is typically measured by using a fairness attribute such as gender, ethnicity, or age.
Monitored/Reference Group - The monitored group consists of those values of the fairness attribute for which you want to measure bias. Values in the monitored group are compared to values in the reference group. For example, if Fairness Attribute=Gender is used to measure bias against females, then the monitored group value is “Female” and the reference group value is “Male”.
Favorable/Unfavorable outcome - An important concept in bias detection is that of the favorable and unfavorable outcomes of the model. For example, Claim approved might be considered a favorable outcome and Claim denied might be considered an unfavorable outcome.
Disparate impact - The metric used to measure bias (computed as the ratio of percentage of favorable outcome for the monitored group to the percentage of favorable outcome for the reference group). Bias is said to exist if the disparate impact value is less than a specified threshold.
For example, if 80% of insurance claims that are made by males are approved but only 60% of claims that are made by females are approved, then the disparate impact is: 60/80 = 0.75. Typically, the threshold value for bias is 0.8. As this disparate impact ratio is less than 0.8, the model is considered to be biased.
Note: when the disparate impact ratio is greater than 1.25 [the inverse value (1/disparate impact) is under the threshold of 0.8], the model is also considered to be biased.
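For example, the claims scenario above can be computed directly (an illustrative snippet, not part of the AutoAI API):
favorable_rate_monitored = 0.60   # 60% of claims made by females are approved
favorable_rate_reference = 0.80   # 80% of claims made by males are approved
disparate_impact = favorable_rate_monitored / favorable_rate_reference   # 0.75
biased = disparate_impact < 0.8 or disparate_impact > 1.25               # True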
Watch a video about evaluating and improving fairness
Watch this video to see how to evaluate a machine learning model for fairness to ensure that your results are not biased.
Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform.
This video provides a visual method to learn the concepts and tasks in this documentation.
Applying fairness test for an AutoAI experiment in the UI
1. Open Experiment Settings.
2. Click the Fairness tab.
3. Enable options for fairness. The options are as follows:
* Fairness evaluation: Enable this option to check each pipeline for bias by calculating the disparate impact ratio. This method tracks whether a pipeline shows a tendency to provide a favorable (preferred) outcome for one group more often than another.
* Fairness threshold: Set a fairness threshold to determine whether bias exists in a pipeline based on the value of the disparate impact ratio. The default is 80, which represents a disparate impact ratio less than 0.80.
* Favorable outcomes: Specify the value from your prediction column that would be considered favorable. For example, the value might be "approved", "accepted" or whatever fits your prediction type.
* Automatic protected attribute method: Choose how to evaluate features that are a potential source of bias. You can specify automatic detection, in which case AutoAI detects commonly protected attributes, including: sex, ethnicity, marital status, age, and zip or postal code. Within each category, AutoAI tries to determine a protected group. For example, for the sex category, the monitored group would be female.
Note: In automatic mode, it is likely that a feature is not identified correctly as a protected attribute if it has atypical values, for example, values in a language other than English. Auto-detect is only supported for English.
* Manual protected attribute method: Manually specify an outcome and supply the protected attribute by choosing from a list of attributes. Note when you manually supply attributes, you must then define a group and specify whether it is likely to have the expected outcomes (the reference group) or should be reviewed to detect variance from the expected outcomes (the monitored group).
For example, this image shows a set of manually specified attribute groups for monitoring.

Save the settings, and then run the experiment to apply the fairness evaluation to your pipelines.
Notes:
* For multiclass models, you can select multiple values in the prediction column to classify as favorable or not.
* For regression models, you can specify a range of outcomes that are considered to be favorable or not.
* Fairness evaluations are not currently available for time series experiments.
List of automatically detected attributes for measuring fairness
When automatic detection is enabled, AutoAI will automatically detect the following attributes if they are present in the training data. The attributes must be in English.
* age
* citizen_status
* color
* disability
* ethnicity
* gender
* genetic_information
* handicap
* language
* marital
* political_belief
* pregnancy
* religion
* veteran_status
Applying fairness test for an AutoAI experiment in a notebook
You can perform fairness testing in an AutoAI experiment that is trained in a notebook and extend the capabilities beyond what is provided in the UI.
Bias detection example
In this example, by using the Watson Machine Learning Python API (ibm-watson-machine-learning), the optimizer configuration for bias detection is configured with the following input, where:
* name - experiment name
* prediction_type - type of the problem
* prediction_column - target column name
* fairness_info - bias detection configuration
fairness_info = {
    "protected_attributes": [
        {
            "feature": "personal_status",
            "reference_group": ["male div/sep", "male mar/wid", "male single"],
            "monitored_group": ["female div/dep/mar"]
        },
        {
            "feature": "age",
            "reference_group": [[26, 100]],
            "monitored_group": [[1, 25]]
        }
    ],
    "favorable_labels": ["good"],
    "unfavorable_labels": ["bad"],
}
from ibm_watson_machine_learning.experiment import AutoAI
experiment = AutoAI(wml_credentials, space_id=space_id)
pipeline_optimizer = experiment.optimizer(
    name='Credit Risk Prediction and bias detection - AutoAI',
    prediction_type=AutoAI.PredictionType.BINARY,
    prediction_column='class',
    scoring='accuracy',
    fairness_info=fairness_info,
    retrain_on_holdout=False
)
Evaluating results
You can view the evaluation results for each pipeline.
1. From the Experiment summary page, click the filter icon for the Pipeline leaderboard.
2. Choose the Disparate impact metrics for your experiment. This option evaluates one general metric and one metric for each monitored group.
3. Review the pipeline metrics for disparate impact to determine whether you have a problem with bias or just to determine which pipeline performs better for a fairness evaluation.
In this example, the pipeline that was ranked first for accuracy also has a disparate impact score that is within the acceptable limits.

Bias mitigation
If bias is detected in an experiment, you can mitigate it by optimizing your experiment by using "combined scorers": [accuracy_and_disparate_impact](https://lale.readthedocs.io/en/latest/modules/lale.lib.aif360.util.html#lale.lib.aif360.util.accuracy_and_disparate_impact) or [r2_and_disparate_impact](https://lale.readthedocs.io/en/latest/modules/lale.lib.aif360.util.html#lale.lib.aif360.util.r2_and_disparate_impact), both defined by the open source [LALE package](https://lale.readthedocs.io/en/latest/index.html).
Combined scorers are used in the search and optimization process to return fair and accurate models.
For example, to optimize for bias detection for a classification experiment:
1. Open Experiment Settings.
2. On the Predictions page, choose to optimize Accuracy and disparate impact in the experiment.
3. Rerun the experiment.
The Accuracy and disparate impact metric creates a combined score for accuracy and fairness for classification experiments. A higher score indicates better performance and fairness measures. If the disparate impact score is between 0.9 and 1.11 (an acceptable level), the accuracy score is returned. Otherwise, a value lower than the accuracy score is returned; the lower (or more negative) the value, the greater the fairness gap.
Note: Advanced users can use a [notebook to apply or review fairness detection methods](https://github.com/IBM/watson-machine-learning-samples/blob/master/cloud/notebooks/python_sdk/experiments/autoai/Use%20AutoAI%20to%20train%20fair%20models.ipynb). You can further refine a trained AutoAI model by using third-party packages such as [lale and AIF360](https://lale.readthedocs.io/en/latest/modules/lale.lib.aif360.html#module-lale.lib.aif360) to extend the fairness and bias detection capabilities beyond what is provided with AutoAI by default.
Review a [sample notebook that evaluates pipelines for fairness](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html).
Read this [Medium blog post on Bias detection in AutoAI](https://lukasz-cmielowski.medium.com/bias-detection-and-mitigation-in-ibm-autoai-406db0e19181).
Next steps
[Troubleshooting AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-troubleshoot.html)
Parent topic: [AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
| # Applying fairness testing to AutoAI experiments #
Evaluate an experiment for fairness to ensure that your results are not biased in favor of one group over another\.
### Limitations ###
Fairness evaluations are not supported for time series experiments\.
## Evaluating experiments and models for fairness ##
When you define an experiment and produce a machine learning model, you want to be sure that your results are reliable and unbiased\. Bias in a machine learning model can result when the model learns the wrong lessons during training\. This scenario can arise when insufficient data, or poor data collection and management, leads to poor outcomes when the model generates predictions\. It is important to evaluate an experiment for signs of bias, to remediate them when necessary, and to build confidence in the model results\.
AutoAI includes the following tools, techniques, and features to help you evaluate and remediate an experiment for bias\.
<!-- <ul> -->
* [Definitions and terms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html?context=cdpaas&locale=en#terms)
* [Applying fairness test for an AutoAI experiment in the UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html?context=cdpaas&locale=en#fairness-ui)
* [Applying fairness test for an AutoAI experiment in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html?context=cdpaas&locale=en#fairness-api)
* [Evaluating results](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html?context=cdpaas&locale=en#fairness-results)
* [Bias mitigation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html?context=cdpaas&locale=en#bias-mitigation)
<!-- </ul> -->
## Definitions and terms ##
**Fairness Attribute** \- Bias or Fairness is typically measured by using a fairness attribute such as gender, ethnicity, or age\.
**Monitored/Reference Group** \- The monitored group consists of those values of the fairness attribute for which you want to measure bias\. Values in the monitored group are compared to values in the reference group\. For example, if `Fairness Attribute=Gender` is used to measure bias against females, then the monitored group value is “Female” and the reference group value is “Male”\.
**Favorable/Unfavorable outcome** \- An important concept in bias detection is that of the favorable and unfavorable outcomes of the model\. For example, `Claim approved` might be considered a favorable outcome and `Claim denied` might be considered an unfavorable outcome\.
**Disparate impact** \- The metric used to measure bias (computed as the ratio of percentage of favorable outcome for the monitored group to the percentage of favorable outcome for the reference group)\. Bias is said to exist if the disparate impact value is less than a specified threshold\.
For example, if 80% of insurance claims that are made by males are approved but only 60% of claims that are made by females are approved, then the disparate impact is: 60/80 = 0\.75\. Typically, the threshold value for bias is 0\.8\. As this disparate impact ratio is less than 0\.8, the model is considered to be biased\.
Note: when the disparate impact ratio is greater than 1\.25 \[the inverse value (1/disparate impact) is under the threshold of 0\.8\], the model is also considered to be biased\.
## Watch a video about evaluating and improving fairness ##
Watch this video to see how to evaluate a machine learning model for fairness to ensure that your results are not biased\.
Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform\.
This video provides a visual method to learn the concepts and tasks in this documentation\.
## Applying fairness test for an AutoAI experiment in the UI ##
<!-- <ol> -->
1. Open **Experiment Settings**\.
2. Click the *Fairness* tab\.
3. Enable options for fairness\. The options are as follows:
<!-- <ul> -->
* *Fairness evaluation:* Enable this option to check each pipeline for bias by calculating the disparate impact ratio. This method tracks whether a pipeline shows a tendency to provide a favorable (preferred) outcome for one group more often than another.
* *Fairness threshold:* Set a fairness threshold to determine whether bias exists in a pipeline based on the value of the disparate impact ratio. The default is 80, which represents a disparate impact ratio less than 0.80.
* *Favorable outcomes:* Specify the value from your prediction column that would be considered favorable. For example, the value might be "approved", "accepted" or whatever fits your prediction type.
* *Automatic protected attribute method:* Choose how to evaluate features that are a potential source of bias. You can specify automatic detection, in which case AutoAI detects commonly protected attributes, including: sex, ethnicity, marital status, age, and zip or postal code. Within each category, AutoAI tries to determine a protected group. For example, for the `sex` category, the monitored group would be `female`.
Note: In automatic mode, it is likely that a feature is not identified correctly as a protected attribute if it has atypical values, for example, values in a language other than English. Auto-detect is only supported for English.
* *Manual protected attribute method:* Manually specify an outcome and supply the protected attribute by choosing from a list of attributes. Note when you manually supply attributes, you must then define a group and specify whether it is likely to have the expected outcomes (the reference group) or should be reviewed to detect variance from the expected outcomes (the monitored group).
<!-- </ul> -->
<!-- </ol> -->
For example, this image shows a set of manually specified attribute groups for monitoring\.

Save the settings, and then run the experiment to apply the fairness evaluation to your pipelines\.
**Notes:**
<!-- <ul> -->
* For multiclass models, you can select multiple values in the prediction column to classify as favorable or not\.
* For regression models, you can specify a range of outcomes that are considered to be favorable or not\.
* Fairness evaluations are not currently available for time series experiments\.
<!-- </ul> -->
### List of automatically detected attributes for measuring fairness ###
When automatic detection is enabled, AutoAI will automatically detect the following attributes if they are present in the training data\. The attributes must be in English\.
<!-- <ul> -->
* age
* citizen\_status
* color
* disability
* ethnicity
* gender
* genetic\_information
* handicap
* language
* marital
* political\_belief
* pregnancy
* religion
* veteran\_status
<!-- </ul> -->
## Applying fairness test for an AutoAI experiment in a notebook ##
You can perform fairness testing in an AutoAI experiment that is trained in a notebook and extend the capabilities beyond what is provided in the UI\.
### Bias detection example ###
In this example, by using the Watson Machine Learning Python API (ibm\-watson\-machine\-learning), the optimizer configuration for bias detection is configured with the following input, where:
<!-- <ul> -->
* name \- experiment name
* prediction\_type \- type of the problem
* prediction\_column \- target column name
* fairness\_info \- bias detection configuration
<!-- </ul> -->
fairness_info = {
    "protected_attributes": [
        {
            "feature": "personal_status",
            "reference_group": ["male div/sep", "male mar/wid", "male single"],
            "monitored_group": ["female div/dep/mar"]
        },
        {
            "feature": "age",
            "reference_group": [[26, 100]],
            "monitored_group": [[1, 25]]
        }
    ],
    "favorable_labels": ["good"],
    "unfavorable_labels": ["bad"],
}
from ibm_watson_machine_learning.experiment import AutoAI
experiment = AutoAI(wml_credentials, space_id=space_id)
pipeline_optimizer = experiment.optimizer(
    name='Credit Risk Prediction and bias detection - AutoAI',
    prediction_type=AutoAI.PredictionType.BINARY,
    prediction_column='class',
    scoring='accuracy',
    fairness_info=fairness_info,
    retrain_on_holdout=False
)
## Evaluating results ##
You can view the evaluation results for each pipeline\.
<!-- <ol> -->
1. From the *Experiment summary* page, click the filter icon for the Pipeline leaderboard\.
2. Choose the Disparate impact metrics for your experiment\. This option evaluates one general metric and one metric for each monitored group\.
3. Review the pipeline metrics for disparate impact to determine whether you have a problem with bias or just to determine which pipeline performs better for a fairness evaluation\.
<!-- </ol> -->
In this example, the pipeline that was ranked first for accuracy also has a disparate impact score that is within the acceptable limits\.

## Bias mitigation ##
If bias is detected in an experiment, you can mitigate it by optimizing your experiment by using "combined scorers": [`accuracy_and_disparate_impact`](https://lale.readthedocs.io/en/latest/modules/lale.lib.aif360.util.html#lale.lib.aif360.util.accuracy_and_disparate_impact) or [`r2_and_disparate_impact`](https://lale.readthedocs.io/en/latest/modules/lale.lib.aif360.util.html#lale.lib.aif360.util.r2_and_disparate_impact), both defined by the open source [LALE package](https://lale.readthedocs.io/en/latest/index.html)\.
Combined scorers are used in the search and optimization process to return fair and accurate models\.
For example, to optimize for bias detection for a classification experiment:
<!-- <ol> -->
1. Open **Experiment Settings**\.
2. On the *Predictions* page, choose to optimize **Accuracy and disparate impact** in the experiment\.
3. Rerun the experiment\.
<!-- </ol> -->
The *Accuracy and disparate impact* metric creates a combined score for accuracy and fairness for classification experiments\. A higher score indicates better performance and fairness measures\. If the disparate impact score is between 0\.9 and 1\.11 (an acceptable level), the accuracy score is returned\. Otherwise, a value lower than the accuracy score is returned; the lower (or more negative) the value, the greater the fairness gap\.
Note: Advanced users can use a [notebook to apply or review fairness detection methods](https://github.com/IBM/watson-machine-learning-samples/blob/master/cloud/notebooks/python_sdk/experiments/autoai/Use%20AutoAI%20to%20train%20fair%20models.ipynb)\. You can further refine a trained AutoAI model by using third\-party packages such as [lale and AIF360](https://lale.readthedocs.io/en/latest/modules/lale.lib.aif360.html#module-lale.lib.aif360) to extend the fairness and bias detection capabilities beyond what is provided with AutoAI by default\.
Review a [sample notebook that evaluates pipelines for fairness](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html)\.
Read this [Medium blog post on Bias detection in AutoAI](https://lukasz-cmielowski.medium.com/bias-detection-and-mitigation-in-ibm-autoai-406db0e19181)\.
### Next steps ###
[Troubleshooting AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-troubleshoot.html)
**Parent topic**: [AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
<!-- </article "role="article" "> -->
|
5042FBFB0C15AEDED02FF805C4869AC838910C7A | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-glossary.html?context=cdpaas&locale=en | AutoAI glossary | AutoAI glossary
Learn terms and concepts that are used in AutoAI for building and deploying machine learning models.
aggregate score
The aggregation of the four anomaly types: level shift, trend, localized extreme, variance. A higher score indicates a stronger anomaly.
algorithm
A formula applied to data to determine optimal ways to solve analytical problems.
anomaly prediction
An AutoAI time-series model that can predict anomalies, or unexpected results, against new data.
AutoAI experiment
An automated training process that considers a series of training definitions and parameters to create a set of ranked pipelines as model candidates.
batch deployment
Processes input data from a file, data connection, or connected data in a storage bucket and writes the output to a selected destination.
bias detection (machine learning)
To identify imbalances in the training data or prediction behavior of the model.
binary classification
A classification model with two classes that assigns each sample to exactly one of the two classes.
classification model
A predictive model that predicts data in distinct categories.
confusion matrix
A performance measurement that compares a model’s positive and negative predicted outcomes with the positive and negative actual outcomes.
cross validation
A technique that tests the effectiveness of machine learning models. It is also used as a resampling procedure for models with limited data.
data imputation
Substituting missing values in a data set with estimated values.
exogenous features
Features that can influence the prediction model but cannot be influenced in return. See also: Supporting features
fairness
Determines whether a model produces biased outcomes that favor a monitored group over a reference group. Fairness evaluations detect if the model shows a tendency to provide a favorable or preferable outcome more often for one group over another. Typical categories to monitor are age, sex, and race.
feature correlation
The relationship between two features. For example, postal code might have a strong correlation with income in some models.
feature encoding
Transforming categorical values into numerical values.
feature importance
The relative impact a particular column or feature has on the model's prediction or forecast.
feature scaling
Normalizing the range of independent variables or features in a data set.
feature selection
Identifying the columns of data that best support an accurate prediction or score.
feature transformation
In AutoAI, a phase of pipeline creation that applies algorithms to transform and optimize the training data to achieve the best outcome for the model type.
holdout data
Data used to test or validate the model's performance. Holdout data can be a reserved portion of the training data, or it can be a separate file.
hyperparameter optimization (HPO)
The process for setting hyperparameter values to the settings that provide the most accurate model.
incremental learning
The process of training a model that uses data that is continually updated without forgetting data that is obtained from the preceding tasks.
large tabular data
Structured data that exceeds the limit on standard processing and must be processed in batches. See incremental learning.
labeled data
Data that is labeled to identify the appropriate data vectors to be pulled in for model training.
monitored group
A class of data monitored to determine whether the results differ significantly from the results of the reference group. For example, in a credit app, you might monitor applications in a particular age range and compare results to the age range more likely to receive a positive outcome to evaluate whether there might be bias in the results.
multiclass classification model
A classification task with more than two classes. For example, where a binary classification model predicts yes or no values, a multi-class model predicts yes, no, maybe, or not applicable.
multivariate time series
Time series experiment that contains two or more changing variables. For example, a time series model that forecasts the electricity usage of three clients.
optimized metric
The metric used to measure the performance of the model. For example, accuracy is the typical metric that is used to measure the performance of a binary classification model.
pipeline (model candidate pipeline)
End-to-end outline that illustrates the steps in a workflow.
positive class
The class that is related to your objective function.
reference group
A group that you identify as most likely to receive a positive result in a predictive model. You can then compare the results to a monitored group to look for potential bias in outcomes.
regression model
A model that relates a dependent variable to one or more independent variables.
scoring
In machine learning, the process of measuring the confidence of a predicted outcome.
supporting features
Input features that can influence the prediction target. See also: Exogenous features
text classification
A model that automatically identifies and classifies text into distinct categories.
time series model (AutoAI)
A model that tracks data over time.
trained model
A model that is ready to be deployed.
training
The initial stage of model building, involving a subset of the source data. The model can then be tested against a further, different subset for which the outcome is already known.
training data
Data used to teach and train a model's learning algorithm.
univariate time series
Time series experiment that contains only one changing variable. For example, a time series model that forecasts the temperature has a single prediction column of the temperature.
| # AutoAI glossary #
Learn terms and concepts that are used in AutoAI for building and deploying machine learning models\.
**aggregate score**
The aggregation of the four anomaly types: level shift, trend, localized extreme, variance\. A higher score indicates a stronger anomaly\.
**algorithm**
A formula applied to data to determine optimal ways to solve analytical problems\.
**anomaly prediction**
An AutoAI time\-series model that can predict anomalies, or unexpected results, against new data\.
**AutoAI experiment**
An automated training process that considers a series of training definitions and parameters to create a set of ranked pipelines as model candidates\.
**batch deployment**
Processes input data from a file, data connection, or connected data in a storage bucket and writes the output to a selected destination\.
**bias detection (machine learning)**
To identify imbalances in the training data or prediction behavior of the model\.
**binary classification**
A classification model with two classes that assigns each sample to exactly one of the two classes\.
**classification model**
A predictive model that predicts data in distinct categories\.
**confusion matrix**
A performance measurement that compares a model’s positive and negative predicted outcomes with the positive and negative actual outcomes\.
**cross validation**
A technique that tests the effectiveness of machine learning models\. It is also used as a resampling procedure for models with limited data\.
**data imputation**
Substituting missing values in a data set with estimated values\.
**exogenous features**
Features that can influence the prediction model but cannot be influenced in return\. See also: Supporting features
**fairness**
Determines whether a model produces biased outcomes that favor a monitored group over a reference group\. Fairness evaluations detect if the model shows a tendency to provide a favorable or preferable outcome more often for one group over another\. Typical categories to monitor are age, sex, and race\.
**feature correlation**
The relationship between two features\. For example, postal code might have a strong correlation with income in some models\.
**feature encoding**
Transforming categorical values into numerical values\.
**feature importance**
The relative impact a particular column or feature has on the model's prediction or forecast\.
**feature scaling**
Normalizing the range of independent variables or features in a data set\.
**feature selection**
Identifying the columns of data that best support an accurate prediction or score\.
**feature transformation**
In AutoAI, a phase of pipeline creation that applies algorithms to transform and optimize the training data to achieve the best outcome for the model type\.
**holdout data**
Data used to test or validate the model's performance\. Holdout data can be a reserved portion of the training data, or it can be a separate file\.
**hyperparameter optimization (HPO)**
The process for setting hyperparameter values to the settings that provide the most accurate model\.
**incremental learning**
The process of training a model that uses data that is continually updated without forgetting data that is obtained from the preceding tasks\.
**large tabular data**
Structured data that exceeds the limit on standard processing and must be processed in batches\. See incremental learning\.
**labeled data**
Data that is labeled to identify the appropriate data vectors to be pulled in for model training\.
**monitored group**
A class of data monitored to determine whether the results differ significantly from the results of the reference group\. For example, in a credit app, you might monitor applications in a particular age range and compare results to the age range more likely to receive a positive outcome to evaluate whether there might be bias in the results\.
**multiclass classification model**
A classification task with more than two classes\. For example, where a binary classification model predicts *yes* or *no* values, a multi\-class model predicts *yes*, *no*, *maybe*, or *not applicable*\.
**multivariate time series**
Time series experiment that contains two or more changing variables\. For example, a time series model that forecasts the electricity usage of three clients\.
**optimized metric**
The metric used to measure the performance of the model\. For example, accuracy is the typical metric that is used to measure the performance of a binary classification model\.
**pipeline (model candidate pipeline)**
End\-to\-end outline that illustrates the steps in a workflow\.
**positive class**
The class that is related to your objective function\.
**reference group**
A group that you identify as most likely to receive a positive result in a predictive model\. You can then compare the results to a monitored group to look for potential bias in outcomes\.
**regression model**
A model that relates a dependent variable to one or more independent variables\.
**scoring**
In machine learning, the process of measuring the confidence of a predicted outcome\.
**supporting features**
Input features that can influence the prediction target\. See also: Exogenous features
**text classification**
A model that automatically identifies and classifies text into distinct categories\.
**time series model (AutoAI)**
A model that tracks data over time\.
**trained model**
A model that is ready to be deployed\.
**training**
The initial stage of model building, involving a subset of the source data\. The model can then be tested against a further, different subset for which the outcome is already known\.
**training data**
Data used to teach and train a model's learning algorithm\.
**univariate time series**
Time series experiment that contains only one changing variable\. For example, a time series model that forecasts the temperature has a single prediction column of the temperature\.
<!-- </article "role="article" "> -->
|
73F96A06142EE17A6C55E5700580F33250552A00 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-imputation.html?context=cdpaas&locale=en | Data imputation in AutoAI experiments | Data imputation in AutoAI experiments
Data imputation is the means of replacing missing values in your data set with substituted values. If you enable imputation, you can specify how missing values are interpolated in your data.
Imputation by experiment type
Imputation methods depend on the type of experiment that you build.
* For classification and regression you can configure categorical and numerical imputation methods.
* For timeseries problems, you can choose from a set of imputation methods to apply to numerical columns. When the experiment runs, the best performing method from the set is applied automatically. You can also specify a specific value as a replacement value.
Enabling imputation
To view and set imputation options:
1. Click Experiment settings when you configure your experiment.
2. Click the Data source option.
3. Click Enable data imputation. Note that if you do not explicitly enable data imputation but your data source has missing values, AutoAI warns you and applies default imputation methods. See [imputation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-data-imp-details.html).
4. Select options in the Imputation section.
5. Optionally set a threshold for the percentage of imputation acceptable for a column of data. If the percentage of missing values exceeds the specified threshold, the experiment fails. To resolve, update the data source or adjust the threshold.
Configuring imputation for classification and regression experiments
Choose one of these methods for imputing missing data in binary classification, multiclass classification, or regression experiments. Note that you can have one method for completing values for text-based (categorical) data and another for numerical data.
Method Description
Most frequent Replace missing value with the value that appears most frequently in the column.
Median Replace missing value with the value in the middle of the sorted column.
Mean Replace missing value with the average value for the column.
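As an illustration, the same three strategies are available in scikit-learn's SimpleImputer (a sketch only; AutoAI applies its imputation internally):
import numpy as np
from sklearn.impute import SimpleImputer
X = np.array([[1.0], [2.0], [np.nan], [4.0]])
SimpleImputer(strategy="mean").fit_transform(X)           # missing value becomes 2.33...
SimpleImputer(strategy="median").fit_transform(X)         # missing value becomes 2.0
SimpleImputer(strategy="most_frequent").fit_transform(X)  # missing value becomes 1.0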
Configuring imputation for timeseries experiments
Choose some or all of these methods. When multiple methods are selected, the best-performing method is automatically applied for the experiment.
Note: Imputation is not supported for date or time values.
Method Description
Cubic Uses cubic interpolation (pandas/SciPy) to fill missing values.
Fill Choose value as the type to replace the missing values with a numeric value you specify.
Flatten iterative Data is first flattened and then the Scikit-learn iterative imputer is applied to find missing values.
Linear Uses linear interpolation (pandas/SciPy) to fill missing values.
Next Replace missing value with the next value.
Previous Replace missing value with the previous value.
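The pandas methods referenced in this table can be illustrated directly (a sketch only; AutoAI automatically chooses the best-performing method from your selection):
import numpy as np
import pandas as pd
s = pd.Series([1.0, np.nan, 3.0, 4.0, np.nan, 6.0, 7.0])
s.interpolate(method="linear")  # Linear
s.interpolate(method="cubic")   # Cubic (requires SciPy and at least four observed values)
s.bfill()                       # Next: fill with the next observed value
s.ffill()                       # Previous: fill with the previous observed value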
Next steps
[Data imputation implementation details for time series experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-data-imp-details.html)
Parent topic:[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
| # Data imputation in AutoAI experiments #
Data imputation is the means of replacing missing values in your data set with substituted values\. If you enable imputation, you can specify how missing values are interpolated in your data\.
## Imputation by experiment type ##
Imputation methods depend on the type of experiment that you build\.
<!-- <ul> -->
* For classification and regression you can configure categorical and numerical imputation methods\.
* For timeseries problems, you can choose from a set of imputation methods to apply to numerical columns\. When the experiment runs, the best performing method from the set is applied automatically\. You can also specify a specific value as a replacement value\.
<!-- </ul> -->
## Enabling imputation ##
To view and set imputation options:
<!-- <ol> -->
1. Click **Experiment settings** when you configure your experiment\.
2. Click the **Data source** option\.
3. Click **Enable data imputation**\. Note that if you do not explicitly enable data imputation but your data source has missing values, AutoAI warns you and applies default imputation methods\. See [imputation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-data-imp-details.html)\.
4. Select options in the Imputation section\.
5. Optionally set a threshold for the percentage of imputation acceptable for a column of data\. If the percentage of missing values exceeds the specified threshold, the experiment fails\. To resolve, update the data source or adjust the threshold\.
<!-- </ol> -->
## Configuring imputation for classification and regression experiments ##
Choose one of these methods for imputing missing data in binary classification, multiclass classification, or regression experiments\. Note that you can have one method for completing values for text\-based (categorical) data and another for numerical data\.
<!-- <table> -->
| Method | Description |
| ------------- | --------------------------------------------------------------------------------- |
| Most frequent | Replace missing value with the value that appears most frequently in the column\. |
| Median | Replace missing value with the value in the middle of the sorted column\. |
| Mean | Replace missing value with the average value for the column\. |
<!-- </table ""> -->
## Configuring imputation for timeseries experiments ##
Choose some or all of these methods\. When multiple methods are selected, the best\-performing method is automatically applied for the experiment\.
Note: Imputation is not supported for date or time values\.
<!-- <table> -->
| Method | Description |
| ----------------- | -------------------------------------------------------------------------------------------------------- |
| Cubic | Uses cubic interpolation (pandas/scipy) to fill missing values\. |
| Fill | Choose *value* as the type to replace missing values with a numeric value that you specify\. |
| Flatten iterative | Flattens the data and then applies the Scikit\-learn iterative imputer to fill missing values\. |
| Linear | Uses linear interpolation (pandas/scipy) to fill missing values\. |
| Next | Replace missing value with the next value\. |
| Previous | Replace missing value with the previous value\. |
<!-- </table ""> -->
## Next steps ##
[Data imputation implementation details for time series experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-data-imp-details.html)
**Parent topic:**[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
<!-- </article "role="article" "> -->
|
83CD92CDB99DB6263492FAD998E932F50F0F8E99 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-lib-python.html?context=cdpaas&locale=en | AutoAI libraries for Python | AutoAI libraries for Python
The autoai-lib library for Python contains a set of functions that help you to interact with IBM Watson Machine Learning AutoAI experiments. Using the autoai-lib library, you can review and edit the data transformations that take place in the creation of the pipeline. Similarly, you can use the autoai-ts-libs library to interact with pipeline notebooks for time series experiments.
Installing autoai-lib or autoai-ts-libs for Python
Follow the instructions in [Installing custom libraries](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/install-cust-lib.html) to install autoai-lib or autoai-ts-libs.
Using autoai-lib and autoai-ts-libs for Python
The autoai-lib and autoai-ts-libs library for Python contain functions that help you to interact with IBM Watson Machine Learning AutoAI experiments. Using the autoai-lib library, you can review and edit the data transformations that take place in the creation of classification and regression pipelines. Using the autoai-ts-libs library, you can review the data transformations that take place in the creation of time series (forecast) pipelines.
Installing autoai-lib and autoai-ts-libs for Python
Follow the instructions in [Installing custom libraries](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/install-cust-lib.html) to install [autoai-lib](https://pypi.org/project/autoai-libs/) and [autoai-ts-libs](https://pypi.org/project/autoai-ts-libs/).
The autoai-lib functions
The instantiated project object that is created after you import the autoai-lib library exposes these functions:
autoai_libs.transformers.exportable.NumpyColumnSelector()
Selects a subset of columns of a numpy array
Usage:
autoai_libs.transformers.exportable.NumpyColumnSelector(columns=None)
Option Description
columns list of column indexes to select
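For example, the following sketch selects the first and third columns of a small array. It assumes the transformer follows the scikit-learn fit/transform convention that the option descriptions imply; the sample values are hypothetical.
import numpy as np
from autoai_libs.transformers.exportable import NumpyColumnSelector
X = np.array([[1, 10, 100], [2, 20, 200], [3, 30, 300]])
# Keep only columns 0 and 2
selector = NumpyColumnSelector(columns=[0, 2])
X_selected = selector.fit_transform(X)
print(X_selected)  # expected shape: (3, 2)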
autoai_libs.transformers.exportable.CompressStrings()
Removes spaces and special characters from string columns of an input numpy array X.
Usage:
autoai_libs.transformers.exportable.CompressStrings(compress_type='string', dtypes_list=None, misslist_list=None, missing_values_reference_list=None, activate_flag=True)
Option Description
compress_type type of string compression. 'string' for removing spaces from a string and 'hash' for creating an int hash. Default is 'string'. 'hash' is used for columns with strings and cat_imp_strategy='most_frequent'
dtypes_list list containing strings that denote the type of each column of the input numpy array X (strings are among 'char_str','int_str','float_str','float_num', 'float_int_num','int_num','Boolean','Unknown'). If None, the column types are discovered. Default is None.
misslist_list list contains lists of missing values of each column of the input numpy array X. If None, the missing values of each column are discovered. Default is None.
missing_values_reference_list reference list of missing values in the input numpy array X
activate_flag flag that indicates that this transformer is active. If False, transform(X) outputs the input numpy array X unmodified.
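As an illustration, the following sketch removes spaces from a string column. The sample values and missing-value markers are hypothetical, and the scikit-learn fit/transform convention is assumed.
import numpy as np
from autoai_libs.transformers.exportable import CompressStrings
X = np.array([["New York"], ["San Francisco"], ["?"]], dtype=object)
# Remove spaces and special characters from string values;
# '?' is treated as a missing-value marker in this example
compressor = CompressStrings(compress_type='string',
                             dtypes_list=None,
                             misslist_list=None,
                             missing_values_reference_list=['?', np.nan],
                             activate_flag=True)
X_compressed = compressor.fit_transform(X)
print(X_compressed)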
autoai_libs.transformers.exportable.NumpyReplaceMissingValues()
Given a numpy array and a reference list of missing values for it, replaces missing values with a special value (typically a special missing value such as np.nan).
Usage:
autoai_libs.transformers.exportable.NumpyReplaceMissingValues(missing_values, filling_values=np.nan)
Option Description
missing_values reference list of missing values
filling_values special value that is assigned to unknown values
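For example, the following sketch replaces the placeholder values '?' and the empty string with np.nan. The sample values are hypothetical and the scikit-learn fit/transform convention is assumed.
import numpy as np
from autoai_libs.transformers.exportable import NumpyReplaceMissingValues
X = np.array([[1.0, "?"], [2.0, "red"], ["", "blue"]], dtype=object)
# Any value listed in missing_values is replaced with filling_values
replacer = NumpyReplaceMissingValues(missing_values=["?", ""], filling_values=np.nan)
X_clean = replacer.fit_transform(X)
print(X_clean)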
autoai_libs.transformers.exportable.NumpyReplaceUnknownValues()
Given a numpy array and a reference list of known values for each column, replaces values that are not part of a reference list with a special value (typically np.nan). This method is typically used to remove labels in columns of a test data set that were not seen in the corresponding columns of the training data set.
Usage:
autoai_libs.transformers.exportable.NumpyReplaceUnknownValues(known_values_list=None, filling_values=None, missing_values_reference_list=None)
Option Description
known_values_list reference list of lists of known values for each column
filling_values special value that is assigned to unknown values
missing_values_reference_list reference list of missing values
autoai_libs.transformers.exportable.boolean2float()
Converts a 1-D numpy array of strings that represent booleans to floats and replaces missing values with np.nan. Also changes type of array from 'object' to 'float'.
Usage:
autoai_libs.transformers.exportable.boolean2float(activate_flag=True)
Option Description
activate_flag flag that indicates that this transformer is active. If False, transform(X) outputs the input numpy array X unmodified.
autoai_libs.transformers.exportable.CatImputer()
This transformer is a wrapper for a categorical imputer. Internally it currently uses sklearn [SimpleImputer](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html).
Usage:
autoai_libs.transformers.exportable.CatImputer(strategy, missing_values, sklearn_version_family=global_sklearn_version_family, activate_flag=True)
Option Description
strategy string, optional, default=”mean”. The imputation strategy for missing values. <br>- mean: replace by using the mean along each column. Can be used only with numeric data. <br>- median: replace by using the median along each column. Can be used only with numeric data. <br>- most_frequent: replace by using the most frequent value in each column. Can be used with strings or numeric data. <br>- constant: replace with fill_value. Can be used with strings or numeric data.
missing_values number, string, np.nan (default) or None. The placeholder for the missing values. All occurrences of missing_values are imputed.
sklearn_version_family str indicating the sklearn version for backward compatibility with versions 019 and 020dev. Currently unused. Default is None.
activate_flag flag that indicates that this transformer is active. If False, transform(X) outputs the input numpy array X unmodified.
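For example, the following sketch fills missing categorical values with the most frequent value in each column. The data is hypothetical and the scikit-learn fit/transform convention is assumed.
import numpy as np
from autoai_libs.transformers.exportable import CatImputer
X = np.array([["red"], ["red"], [np.nan], ["blue"]], dtype=object)
# Replace missing entries with the most frequent value of the column ("red")
imputer = CatImputer(strategy='most_frequent',
                     missing_values=np.nan,
                     sklearn_version_family=None,
                     activate_flag=True)
X_imputed = imputer.fit_transform(X)
print(X_imputed)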
autoai_libs.transformers.exportable.CatEncoder()
This method is a wrapper for a categorical encoder. If the encoding parameter is 'ordinal', internally it currently uses sklearn [OrdinalEncoder](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OrdinalEncoder.html?highlight=ordinalencoder). If the encoding parameter is 'onehot' or 'onehot-dense', internally it uses sklearn [OneHotEncoder](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html#sklearn.preprocessing.OneHotEncoder).
Usage:
autoai_libs.transformers.exportable.CatEncoder(encoding, categories, dtype, handle_unknown, sklearn_version_family=global_sklearn_version_family, activate_flag=True)
Option Description
encoding str, 'onehot', 'onehot-dense' or 'ordinal'. The type of encoding to use (default is 'ordinal') <br>'onehot': encode the features by using a one-hot aka one-of-K scheme (or also called 'dummy' encoding). This encoding creates a binary column for each category and returns a sparse matrix. <br>'onehot-dense': the same as 'onehot' but returns a dense array instead of a sparse matrix. <br>'ordinal': encode the features as ordinal integers. The result is a single column of integers (0 to n_categories - 1) per feature.
categories 'auto' or a list of lists/arrays of values. Categories (unique values) per feature: <br>'auto' : Determine categories automatically from the training data. <br>list : categories[i] holds the categories that are expected in the ith column. The passed categories must be sorted and can not mix strings and numeric values. The used categories can be found in the encoder.categories_ attribute.
dtype number type, default np.float64 Desired dtype of output.
handle_unknown 'error' (default) or 'ignore'. Whether to raise an error or ignore if an unknown categorical feature is present during transform (default is to raise). When this parameter is set to 'ignore' and an unknown category is encountered during transform, the resulting one-hot encoded columns for this feature are all zeros. In the inverse transform, an unknown category is denoted as None. Ignoring unknown categories is not supported for encoding='ordinal'.
sklearn_version_family str indicating the sklearn version for backward compatibility with versions 019 and 020dev. Currently unused. Default is None.
activate_flag flag that indicates that this transformer is active. If False, transform(X) outputs the input numpy array X unmodified.
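For example, the following sketch ordinal-encodes a single categorical column. The values are hypothetical and the scikit-learn fit/transform convention is assumed.
import numpy as np
from autoai_libs.transformers.exportable import CatEncoder
X = np.array([["red"], ["blue"], ["green"], ["blue"]], dtype=object)
# Encode each category as an ordinal integer (0 to n_categories - 1)
encoder = CatEncoder(encoding='ordinal',
                     categories='auto',
                     dtype=np.float64,
                     handle_unknown='error',
                     sklearn_version_family=None,
                     activate_flag=True)
X_encoded = encoder.fit_transform(X)
print(X_encoded)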
autoai_libs.transformers.exportable.float32_transform()
Transforms a float64 numpy array to float32.
Usage:
autoai_libs.transformers.exportable.float32_transform(activate_flag=True)
Option Description
activate_flag flag that indicates that this transformer is active. If False, transform(X) outputs the input numpy array X unmodified.
autoai_libs.transformers.exportable.FloatStr2Float()
Given numpy array X and dtypes_list that denotes the types of its columns, it replaces columns of strings that represent floats (type 'float_str' in dtypes_list) with columns of floats and replaces their missing values with np.nan.
Usage:
autoai_libs.transformers.exportable.FloatStr2Float(dtypes_list, missing_values_reference_list=None, activate_flag=True)
Option Description
dtypes_list list contains strings that denote the type of each column of the input numpy array X (strings are among 'char_str','int_str','float_str','float_num', 'float_int_num','int_num','Boolean','Unknown').
missing_values_reference_list reference list of missing values
activate_flag flag that indicates that this transformer is active. If False, transform(X) outputs the input numpy array X unmodified.
autoai_libs.transformers.exportable.NumImputer()
This method is a wrapper for numerical imputer.
Usage:
autoai_libs.transformers.exportable.NumImputer(strategy, missing_values, activate_flag=True)
Option Description
strategy num_imp_strategy: string, optional (default=”mean”). The imputation strategy: <br>- If “mean”, then replace missing values by using the mean along the axis. <br>- If “median”, then replace missing values by using the median along the axis. <br>- If “most_frequent”, then replace missing by using the most frequent value along the axis.
missing_values integer or “NaN”, optional (default=”NaN”). The placeholder for the missing values. All occurrences of missing_values are imputed. For missing values encoded as np.nan, use the string value “NaN”.
activate_flag flag that indicates that this transformer is active. If False, transform(X) outputs the input numpy array X unmodified.
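For example, the following sketch replaces missing numeric values with the median of each column. The data is hypothetical, the scikit-learn fit/transform convention is assumed, and, following the description above, the string "NaN" is passed as the placeholder for values encoded as np.nan.
import numpy as np
from autoai_libs.transformers.exportable import NumImputer
X = np.array([[1.0], [4.0], [np.nan], [10.0]])
# Replace np.nan with the column median (4.0 in this example)
imputer = NumImputer(strategy='median', missing_values='NaN', activate_flag=True)
X_imputed = imputer.fit_transform(X)
print(X_imputed)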
autoai_libs.transformers.exportable.OptStandardScaler()
This transformer is a wrapper for scaling of numerical variables. It currently uses sklearn [StandardScaler](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) internally.
Usage:
autoai_libs.transformers.exportable.OptStandardScaler(use_scaler_flag=True, num_scaler_copy=True, num_scaler_with_mean=True, num_scaler_with_std=True)
Option Description
num_scaler_copy Boolean, optional, default True. If False, try to avoid a copy and do in-place scaling instead. This action is not guaranteed to always work. With in-place, for example, if the data is not a NumPy array or scipy.sparse CSR matrix, a copy might still be returned.
num_scaler_with_mean Boolean, True by default. If True, center the data before scaling. An exception is raised when attempted on sparse matrices because centering them entails building a dense matrix, which in common use cases is likely to be too large to fit in memory.
num_scaler_with_std Boolean, True by default. If True, scale the data to unit variance (or equivalently, unit standard deviation).
use_scaler_flag Boolean, flag that indicates that this transformer is active. If False, transform(X) outputs the input numpy array X unmodified. Default is True.
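For example, the following sketch standardizes a numeric column to zero mean and unit variance. The data is hypothetical and the scikit-learn fit/transform convention is assumed.
import numpy as np
from autoai_libs.transformers.exportable import OptStandardScaler
X = np.array([[1.0], [2.0], [3.0], [4.0]])
# Center and scale the column (equivalent to sklearn StandardScaler)
scaler = OptStandardScaler(use_scaler_flag=True,
                           num_scaler_copy=True,
                           num_scaler_with_mean=True,
                           num_scaler_with_std=True)
X_scaled = scaler.fit_transform(X)
print(X_scaled)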
autoai_libs.transformers.exportable.NumpyPermuteArray()
Rearranges columns or rows of a numpy array based on a list of indexes.
Usage:
autoai_libs.transformers.exportable.NumpyPermuteArray(permutation_indices=None, axis=None)
Option Description
permutation_indices list of indexes based on which columns are rearranged
axis 0 permute along columns. 1 permute along rows.
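In generated notebooks, these exportable transformers typically appear chained together as preprocessing steps. The following sketch shows one hypothetical way to compose a few of them in a scikit-learn Pipeline. It assumes the transformers follow the scikit-learn fit/transform convention and is not a pipeline generated by AutoAI.
import numpy as np
from sklearn.pipeline import Pipeline
from autoai_libs.transformers.exportable import NumpyColumnSelector, NumImputer, OptStandardScaler
X = np.array([[1.0, 5.0, 9.0], [2.0, np.nan, 8.0], [3.0, 7.0, np.nan]])
# Hypothetical preprocessing chain: select two columns, impute, then scale
preprocessing = Pipeline(steps=[
    ("select", NumpyColumnSelector(columns=[1, 2])),
    ("impute", NumImputer(strategy='mean', missing_values='NaN', activate_flag=True)),
    ("scale", OptStandardScaler(use_scaler_flag=True, num_scaler_copy=True,
                                num_scaler_with_mean=True, num_scaler_with_std=True)),
])
print(preprocessing.fit_transform(X))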
Feature transformation
These methods apply to the feature transformations described in [AutoAI implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html).
autoai_libs.cognito.transforms.transform_utils.TA1(fun, name=None, datatypes=None, feat_constraints=None, tgraph=None, apply_all=True, col_names=None, col_dtypes=None)
For unary stateless functions, such as square or log, use TA1.
Usage:
autoai_libs.cognito.transforms.transform_utils.TA1(fun, name=None, datatypes=None, feat_constraints=None, tgraph=None, apply_all=True, col_names=None, col_dtypes=None)
Option Description
fun the function pointer
name a string name that uniquely identifies this transformer from others
datatypes a list of datatypes either of which are valid input to the transformer function (numeric, float, int, and so on)
feat_constraints all constraints, which must be satisfied by a column to be considered a valid input to this transform
tgraph tgraph object must be the starting TGraph( ) object. This parameter is optional and you can pass None, but that can result in a failure to detect some inefficiencies due to lack of caching.
apply_all only use apply_all = True. It means that the transformer enumerates all features (or feature sets) that match the specified criteria and applies the provided function to each.
col_names names of the feature columns in a list
col_dtypes list of the datatypes of the feature columns
autoai_libs.cognito.transforms.transform_utils.TA2()
For binary stateless functions, such as sum, product, use TA2.
Usage:
autoai_libs.cognito.transforms.transform_utils.TA2(fun, name, datatypes1, feat_constraints1, datatypes2, feat_constraints2, tgraph=None, apply_all=True, col_names=None, col_dtypes=None)
Option Description
fun the function pointer
name a string name that uniquely identifies this transformer from others
datatypes1 a list of datatypes either of which are valid inputs (first parameter) to the transformer function (numeric, float, int, and so on)
feat_constraints1 all constraints, which must be satisfied by a column to be considered a valid input (first parameter) to this transform
datatypes2 a list of data types either of which are valid inputs (second parameter) to the transformer function (numeric, float, int, and so on)
feat_constraints2 all constraints, which must be satisfied by a column to be considered a valid input (second parameter) to this transform
tgraph tgraph object must be the invoking TGraph( ) object. This parameter is optional and you can pass None, but that results in some inefficiency due to lack of caching.
apply_all only use apply_all = True. It means that the transformer enumerates all features (or feature sets) that match the specified criteria and applies the provided function to each.
col_names names of the feature columns in a list
col_dtypes list of the data types of the feature columns
autoai_libs.cognito.transforms.transform_utils.TB1()
For unary state-based transformations (with fit/transform), such as frequent count, use TB1.
Usage:
autoai_libs.cognito.transforms.transform_utils.TB1(tans_class, name, datatypes, feat_constraints, tgraph=None, apply_all=True, col_names=None, col_dtypes=None)
Option Description
tans_class a class that implements fit( ) and transform( ) in accordance with the transformation function definition
name a string name that uniquely identifies this transformer from others
datatypes list of datatypes either of which are valid input to the transformer function (numeric, float, int, and so on)
feat_constraints all constraints, which must be satisfied by a column to be considered a valid input to this transform
tgraph tgraph object must be the invoking TGraph( ) object. This parameter is optional and you can pass None, but that results in some inefficiency due to lack of caching.
apply_all only use apply_all = True. It means that the transformer enumerates all features (or feature sets) that match the specified criteria and applies the provided function to each.
col_names names of the feature columns in a list.
col_dtypes list of the data types of the feature columns.
autoai_libs.cognito.transforms.transform_utils.TB2()
For binary state-based transformations (with fit/transform), such as group-by, use TB2.
Usage:
autoai_libs.cognito.transforms.transform_utils.TB2(tans_class, name, datatypes1, feat_constraints1, datatypes2, feat_constraints2, tgraph=None, apply_all=True)
Option Description
tans_class a class that implements fit( ) and transform( ) in accordance with the transformation function definition
name a string name that uniquely identifies this transformer from others
datatypes1 a list of data types either of which are valid inputs (first parameter) to the transformer function (numeric, float, int, and so on)
feat_constraints1 all constraints, which must be satisfied by a column to be considered a valid input (first parameter) to this transform
datatypes2 a list of data types either of which are valid inputs (second parameter) to the transformer function (numeric, float, int, and so on)
feat_constraints2 all constraints, which must be satisfied by a column to be considered a valid input (second parameter) to this transform
tgraph tgraph object must be the invoking TGraph( ) object. This parameter is optional and you can pass None, but that results in some inefficiency due to lack of caching.
apply_all only use apply_all = True. It means that the transformer enumerates all features (or feature sets) that match the specified criteria and applies the provided function to each.
autoai_libs.cognito.transforms.transform_utils.TAM()
For a transform that applies at the data level, such as PCA, use TAM.
Usage:
autoai_libs.cognito.transforms.transform_utils.TAM(tans_class, name, tgraph=None, apply_all=True, col_names=None, col_dtypes=None)
Option Description
tans_class a class that implements fit( ) and transform( ) in accordance with the transformation function definition
name a string name that uniquely identifies this transformer from others
tgraph tgraph object must be the invoking TGraph( ) object. This parameter is optional and you can pass None, but that results in some inefficiency due to lack of caching.
apply_all only use apply_all = True. It means that the transformer enumerates all features (or feature sets) that match the specified criteria and applies the provided function to each.
col_names names of the feature columns in a list
col_dtypes list of the datatypes of the feature columns
autoai_libs.cognito.transforms.transform_utils.TGen()
TGen is a general wrapper and can be used for most functions (might not be most efficient though).
Usage:
autoai_libs.cognito.transforms.transform_utils.TGen(fun, name, arg_count, datatypes_list, feat_constraints_list, tgraph=None, apply_all=True, col_names=None, col_dtypes=None)
Option Description
fun the function pointer
name a string name that uniquely identifies this transformer from others
arg_count number of inputs to the function. For a unary function it is 1, for a binary function it is 2, and so on.
datatypes_list a list of arg_count lists that correspond to the acceptable input data types for each parameter. For example, with arg_count=1 the result is one list within the outer list, and it contains a single type such as 'numeric'. In another case, it might be a more specific type such as 'int' or 'int64'.
feat_constraints_list a list of arg_count lists that correspond to some constraints that can be imposed on selection of the input features
tgraph tgraph object must be the invoking TGraph( ) object. This parameter is optional and you can pass None, but that results in some inefficiency due to lack of caching.
apply_all only use apply_all = True. It means that the transformer enumerates all features (or feature sets) that match the specified criteria and applies the provided function to each.
col_names names of the feature columns in a list
col_dtypes list of the data types of the feature columns
autoai_libs.cognito.transforms.transform_utils.FS1()
Feature selection, type 1 (using pairwise correlation between each feature and the target).
Usage:
autoai_libs.cognito.transforms.transform_utils.FS1(cols_ids_must_keep, additional_col_count_to_keep, ptype)
Option Description
cols_ids_must_keep serial numbers of the columns that must be kept irrespective of their feature importance
additional_col_count_to_keep how many columns need to be retained
ptype classification or regression
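For example, the following sketch keeps column 0 unconditionally and retains up to 5 additional columns ranked by correlation with the target. The feature matrix, target, and the string value for ptype are hypothetical assumptions, and the scikit-learn fit/transform convention with a target argument is assumed.
import numpy as np
from autoai_libs.cognito.transforms.transform_utils import FS1
X = np.random.rand(100, 10)                 # hypothetical feature matrix
y = (X[:, 0] + X[:, 3] > 1).astype(int)     # hypothetical binary target
# Always keep column 0; keep up to 5 more columns by feature importance
fs = FS1(cols_ids_must_keep=[0], additional_col_count_to_keep=5, ptype='classification')
X_reduced = fs.fit_transform(X, y)
print(X_reduced.shape)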
autoai_libs.cognito.transforms.transform_utils.FS2()
Feature selection, type 2.
Usage:
autoai_libs.cognito.transforms.transform_utils.FS2(cols_ids_must_keep, additional_col_count_to_keep, ptype, eval_algo)
Option Description
cols_ids_must_keep serial numbers of the columns that must be kept irrespective of their feature importance
additional_col_count_to_keep how many columns need to be retained
ptype classification or regression
The autoai-ts-libs functions
The combination of transformers and estimators is designed and chosen for each pipeline by the AutoAI Time Series system. Changing the transformers or the estimators in the generated pipeline notebook can cause unexpected results or even failure. Because we do not recommend that you change the notebook for generated pipelines, specifications for the functions in the autoai-ts-libs library are not currently provided.
Learn more
[Selecting an AutoAI model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-view-results.html)
Parent topic:[Saving an AutoAI generated notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-notebook.html)
| # AutoAI libraries for Python #
The `autoai-lib` library for Python contains a set of functions that help you to interact with IBM Watson Machine Learning AutoAI experiments\. Using the `autoai-lib` library, you can review and edit the data transformations that take place in the creation of the pipeline\. Similarly, you can use the `autoai-ts-libs` library to interact with pipeline notebooks for time series experiments\.
## Installing autoai\-lib or autoai\-ts\-libs for Python ##
Follow the instructions in [Installing custom libraries](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/install-cust-lib.html) to install `autoai-lib` or `autoai-ts-libs`\.
### Using autoai\-lib and autoai\-ts\-libs for Python ###
The `autoai-lib` and `autoai-ts-libs` library for Python contain functions that help you to interact with IBM Watson Machine Learning AutoAI experiments\. Using the `autoai-lib` library, you can review and edit the data transformations that take place in the creation of classification and regression pipelines\. Using the `autoai-ts-libs` library, you can review the data transformations that take place in the creation of time series (forecast) pipelines\.
### Installing autoai\-lib and autoai\-ts\-libs for Python ###
Follow the instructions in [Installing custom libraries](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/install-cust-lib.html) to install [autoai\-lib](https://pypi.org/project/autoai-libs/) and [autoai\-ts\-libs](https://pypi.org/project/autoai-ts-libs/)\.
## The autoai\-lib functions ##
The instantiated project object that is created after you import the `autoai-lib` library exposes these functions:
#### autoai\_libs\.transformers\.exportable\.NumpyColumnSelector() ####
Selects a subset of columns of a numpy array
Usage:
autoai_libs.transformers.exportable.NumpyColumnSelector(columns=None)
<!-- <table> -->
| Option | Description |
| ------- | -------------------------------- |
| columns | list of column indexes to select |
<!-- </table ""> -->
#### autoai\_libs\.transformers\.exportable\.CompressStrings() ####
Removes spaces and special characters from string columns of an input numpy array X\.
Usage:
autoai_libs.transformers.exportable.CompressStrings(compress_type='string', dtypes_list=None, misslist_list=None, missing_values_reference_list=None, activate_flag=True)
<!-- <table> -->
| Option | Description |
| ------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `compress_type` | type of string compression\. 'string' for removing spaces from a string and 'hash' for creating an int hash\. Default is 'string'\. 'hash' is used for columns with strings and cat\_imp\_strategy='most\_frequent' |
| `dtypes_list` | list containing strings that denote the type of each column of the input numpy array X (strings are among 'char\_str','int\_str','float\_str','float\_num', 'float\_int\_num','int\_num','Boolean','Unknown')\. If None, the column types are discovered\. Default is None\. |
| `misslist_list` | list contains lists of missing values of each column of the input numpy array X\. If None, the missing values of each column are discovered\. Default is None\. |
| `missing_values_reference_list` | reference list of missing values in the input numpy array X |
| `activate_flag` | flag that indicates that this transformer is active\. If False, transform(X) outputs the input numpy array X unmodified\. |
<!-- </table ""> -->
#### autoai\_libs\.transformers\.exportable\.NumpyReplaceMissingValues() ####
Given a numpy array and a reference list of missing values for it, replaces missing values with a special value (typically a special missing value such as np\.nan)\.
Usage:
autoai_libs.transformers.exportable.NumpyReplaceMissingValues(missing_values, filling_values=np.nan)
<!-- <table> -->
| Option | Description |
| ---------------- | ------------------------------------------------ |
| `missing_values` | reference list of missing values |
| `filling_values` | special value that is assigned to unknown values |
<!-- </table ""> -->
#### autoai\_libs\.transformers\.exportable\.NumpyReplaceUnknownValues() ####
Given a numpy array and a reference list of known values for each column, replaces values that are not part of a reference list with a special value (typically np\.nan)\. This method is typically used to remove labels in columns of a test data set that were not seen in the corresponding columns of the training data set\.
Usage:
autoai_libs.transformers.exportable.NumpyReplaceUnknownValues(known_values_list=None, filling_values=None, missing_values_reference_list=None)
<!-- <table> -->
| Option | Description |
| ------------------------------- | ------------------------------------------------------- |
| `known_values_list` | reference list of lists of known values for each column |
| `filling_values` | special value that is assigned to unknown values |
| `missing_values_reference_list` | reference list of missing values |
<!-- </table ""> -->
#### autoai\_libs\.transformers\.exportable\.boolean2float() ####
Converts a 1\-D numpy array of strings that represent booleans to floats and replaces missing values with np\.nan\. Also changes type of array from 'object' to 'float'\.
Usage:
autoai_libs.transformers.exportable.boolean2float(activate_flag=True)
<!-- <table> -->
| Option | Description |
| --------------- | ------------------------------------------------------------------------------------------------------------------------- |
| `activate_flag` | flag that indicates that this transformer is active\. If False, transform(X) outputs the input numpy array X unmodified\. |
<!-- </table ""> -->
#### autoai\_libs\.transformers\.exportable\.CatImputer() ####
This transformer is a wrapper for a categorical imputer\. Internally it currently uses sklearn [SimpleImputer](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html)\.
Usage:
autoai_libs.transformers.exportable.CatImputer(strategy, missing_values, sklearn_version_family=global_sklearn_version_family, activate_flag=True)
<!-- <table> -->
| Option | Description |
| ------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `strategy` | string, optional, default=”mean”\. The imputation strategy for missing values\. <br>\- `mean`: replace by using the mean along each column\. Can be used only with numeric data\. <br>\- `median`: replace by using the median along each column\. Can be used only with numeric data\. <br>\- `most_frequent`: replace by using the most frequent value in each column\. Can be used with strings or numeric data\. <br>\- `constant`: replace with fill\_value\. Can be used with strings or numeric data\. |
| `missing_values` | number, string, np\.nan (default) or None\. The placeholder for the missing values\. All occurrences of missing\_values are imputed\. |
| `sklearn_version_family` | str indicating the sklearn version for backward compatibility with versions 019 and 020dev\. Currently unused\. Default is None\. |
| `activate_flag` | flag that indicates that this transformer is active\. If False, transform(X) outputs the input numpy array X unmodified\. |
<!-- </table ""> -->
#### autoai\_libs\.transformers\.exportable\.CatEncoder() ####
This method is a wrapper for a categorical encoder\. If the encoding parameter is 'ordinal', internally it currently uses sklearn [OrdinalEncoder](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OrdinalEncoder.html?highlight=ordinalencoder)\. If the encoding parameter is 'onehot' or 'onehot\-dense', internally it uses sklearn [OneHotEncoder](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html#sklearn.preprocessing.OneHotEncoder)\.
Usage:
autoai_libs.transformers.exportable.CatEncoder(encoding, categories, dtype, handle_unknown, sklearn_version_family=global_sklearn_version_family, activate_flag=True)
<!-- <table> -->
| Option | Description |
| ------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `encoding` | str, 'onehot', 'onehot\-dense' or 'ordinal'\. The type of encoding to use (default is 'ordinal') <br>'onehot': encode the features by using a one\-hot aka one\-of\-K scheme (or also called 'dummy' encoding)\. This encoding creates a binary column for each category and returns a sparse matrix\. <br>'onehot\-dense': the same as 'onehot' but returns a dense array instead of a sparse matrix\. <br>'ordinal': encode the features as ordinal integers\. The result is a single column of integers (0 to n\_categories \- 1) per feature\. |
| `categories` | 'auto' or a list of lists/arrays of values\. Categories (unique values) per feature: <br>'auto' : Determine categories automatically from the training data\. <br>`list` : `categories[i]` holds the categories that are expected in the ith column\. The passed categories must be sorted and can not mix strings and numeric values\. The used categories can be found in the `encoder.categories_` attribute\. |
| `dtype` | number type, default np\.float64 Desired dtype of output\. |
| `handle_unknown` | 'error' (default) or 'ignore'\. Whether to raise an error or ignore if an unknown categorical feature is present during transform (default is to raise)\. When this parameter is set to 'ignore' and an unknown category is encountered during transform, the resulting one\-hot encoded columns for this feature are all zeros\. In the inverse transform, an unknown category is denoted as None\. Ignoring unknown categories is not supported for `encoding='ordinal'`\. |
| `sklearn_version_family` | str indicating the sklearn version for backward compatibility with versions 019 and 020dev\. Currently unused\. Default is None\. |
| `activate_flag` | flag that indicates that this transformer is active\. If False, transform(X) outputs the input numpy array X unmodified\. |
<!-- </table ""> -->
#### autoai\_libs\.transformers\.exportable\.float32\_transform() ####
Transforms a float64 numpy array to float32\.
Usage:
autoai_libs.transformers.exportable.float32_transform(activate_flag=True)
<!-- <table> -->
| Option | Description |
| --------------- | ------------------------------------------------------------------------------------------------------------------------- |
| `activate_flag` | flag that indicates that this transformer is active\. If False, transform(X) outputs the input numpy array X unmodified\. |
<!-- </table ""> -->
#### autoai\_libs\.transformers\.exportable\.FloatStr2Float() ####
Given numpy array X and dtypes\_list that denotes the types of its columns, it replaces columns of strings that represent floats (type 'float\_str' in dtypes\_list) with columns of floats and replaces their missing values with np\.nan\.
Usage:
autoai_libs.transformers.exportable.FloatStr2Float(dtypes_list, missing_values_reference_list=None, activate_flag=True)
<!-- <table> -->
| Option | Description |
| ------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `dtypes_list` | list contains strings that denote the type of each column of the input numpy array X (strings are among 'char\_str','int\_str','float\_str','float\_num', 'float\_int\_num','int\_num','Boolean','Unknown')\. |
| `missing_values_reference_list` | reference list of missing values |
| `activate_flag` | flag that indicates that this transformer is active\. If False, transform(X) outputs the input numpy array X unmodified\. |
<!-- </table ""> -->
#### autoai\_libs\.transformers\.exportable\.NumImputer() ####
This method is a wrapper for numerical imputer\.
Usage:
autoai_libs.transformers.exportable.NumImputer(strategy, missing_values, activate_flag=True)
<!-- <table> -->
| Option | Description |
| ---------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `strategy` | num\_imp\_strategy: string, optional (default=”mean”)\. The imputation strategy: <br>\- If “mean”, then replace missing values by using the mean along the axis\. <br>\- If “median”, then replace missing values by using the median along the axis\. <br>\- If “most\_frequent”, then replace missing by using the most frequent value along the axis\. |
| `missing_values` | integer or “NaN”, optional (default=”NaN”)\. The placeholder for the missing values\. All occurrences of missing\_values are imputed\. For missing values encoded as np\.nan, use the string value “NaN”\. |
| `activate_flag` | flag that indicates that this transformer is active\. If False, transform(X) outputs the input numpy array X unmodified\. |
<!-- </table ""> -->
#### autoai\_libs\.transformers\.exportable\.OptStandardScaler() ####
This transformer is a wrapper for scaling of numerical variables\. It currently uses sklearn [StandardScaler](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) internally\.
Usage:
autoai_libs.transformers.exportable.OptStandardScaler(use_scaler_flag=True, num_scaler_copy=True, num_scaler_with_mean=True, num_scaler_with_std=True)
<!-- <table> -->
| Option | Description |
| ---------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `num_scaler_copy` | Boolean, optional, default True\. If False, try to avoid a copy and do in\-place scaling instead\. This action is not guaranteed to always work\. With in\-place, for example, if the data is not a NumPy array or scipy\.sparse CSR matrix, a copy might still be returned\. |
| `num_scaler_with_mean` | Boolean, True by default\. If True, center the data before scaling\. An exception is raised when attempted on sparse matrices because centering them entails building a dense matrix, which in common use cases is likely to be too large to fit in memory\. |
| `num_scaler_with_std` | Boolean, True by default\. If True, scale the data to unit variance (or equivalently, unit standard deviation)\. |
| `use_scaler_flag` | Boolean, flag that indicates that this transformer is active\. If False, transform(X) outputs the input numpy array X unmodified\. Default is True\. |
<!-- </table ""> -->
#### autoai\_libs\.transformers\.exportable\.NumpyPermuteArray() ####
Rearranges columns or rows of a numpy array based on a list of indexes\.
Usage:
autoai_libs.transformers.exportable.NumpyPermuteArray(permutation_indices=None, axis=None)
<!-- <table> -->
| Option | Description |
| --------------------- | ----------------------------------------------------- |
| `permutation_indices` | list of indexes based on which columns are rearranged |
| `axis` | 0 permute along columns\. 1 permute along rows\. |
<!-- </table ""> -->
### Feature transformation ###
These methods apply to the feature transformations described in [AutoAI implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html)\.
#### autoai\_libs\.cognito\.transforms\.transform\_utils\.TA1(fun, name=None, datatypes=None, feat\_constraints=None, tgraph=None, apply\_all=True, col\_names=None, col\_dtypes=None) ####
For unary stateless functions, such as square or log, use TA1\.
Usage:
autoai_libs.cognito.transforms.transform_utils.TA1(fun, name=None, datatypes=None, feat_constraints=None, tgraph=None, apply_all=True, col_names=None, col_dtypes=None)
<!-- <table> -->
| Option | Description |
| ------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `fun` | the function pointer |
| `name` | a string name that uniquely identifies this transformer from others |
| `datatypes` | a list of datatypes either of which are valid input to the transformer function (numeric, float, int, and so on) |
| `feat_constraints` | all constraints, which must be satisfied by a column to be considered a valid input to this transform |
| `tgraph` | tgraph object must be the starting TGraph( ) object\. This parameter is optional and you can pass None, but that can result in a failure to detect some inefficiencies due to lack of caching\. |
| `apply_all` | only use apply\_all = True\. It means that the transformer enumerates all features (or feature sets) that match the specified criteria and applies the provided function to each\. |
| `col_names` | names of the feature columns in a list |
| `col_dtypes` | list of the datatypes of the feature columns |
<!-- </table ""> -->
#### autoai\_libs\.cognito\.transforms\.transform\_utils\.TA2() ####
For binary stateless functions, such as sum, product, use TA2\.
Usage:
autoai_libs.cognito.transforms.transform_utils.TA2(fun, name, datatypes1, feat_constraints1, datatypes2, feat_constraints2, tgraph=None, apply_all=True, col_names=None, col_dtypes=None)
<!-- <table> -->
| Option | Description |
| --------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `fun` | the function pointer |
| `name` | a string name that uniquely identifies this transformer from others |
| `datatypes1` | a list of datatypes either of which are valid inputs (first parameter) to the transformer function (numeric, float, int, and so on) |
| `feat_constraints1` | all constraints, which must be satisfied by a column to be considered a valid input (first parameter) to this transform |
| `datatypes2` | a list of data types either of which are valid inputs (second parameter) to the transformer function (numeric, float, int, and so on) |
| `feat_constraints2` | all constraints, which must be satisfied by a column to be considered a valid input (second parameter) to this transform |
| `tgraph` | tgraph object must be the invoking TGraph( ) object\. This parameter is optional and you can pass None, but that results in some inefficiency due to lack of caching\. |
| `apply_all` | only use apply\_all = True\. It means that the transformer enumerates all features (or feature sets) that match the specified criteria and applies the provided function to each\. |
| `col_names` | names of the feature columns in a list |
| `col_dtypes` | list of the data types of the feature columns |
<!-- </table ""> -->
#### autoai\_libs\.cognito\.transforms\.transform\_utils\.TB1() ####
For unary state\-based transformations (with fit/transform), such as frequent count, use TB1\.
Usage:
autoai_libs.cognito.transforms.transform_utils.TB1(tans_class, name, datatypes, feat_constraints, tgraph=None, apply_all=True, col_names=None, col_dtypes=None)
<!-- <table> -->
| Option | Description |
| ------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `tans_class` | a class that implements `fit( )` and `transform( )` in accordance with the transformation function definition |
| `name` | a string name that uniquely identifies this transformer from others |
| `datatypes` | list of datatypes either of which are valid input to the transformer function (numeric, float, int, and so on) |
| `feat_constraints` | all constraints, which must be satisfied by a column to be considered a valid input to this transform |
| `tgraph` | tgraph object must be the invoking TGraph( ) object\. This parameter is optional and you can pass None, but that results in some inefficiency due to lack of caching\. |
| `apply_all` | only use apply\_all = True\. It means that the transformer enumerates all features (or feature sets) that match the specified criteria and applies the provided function to each\. |
| `col_names` | names of the feature columns in a list\. |
| `col_dtypes` | list of the data types of the feature columns\. |
<!-- </table ""> -->
#### autoai\_libs\.cognito\.transforms\.transform\_utils\.TB2() ####
For binary state\-based transformations (with fit/transform), such as group\-by, use TB2\.
Usage:
autoai_libs.cognito.transforms.transform_utils.TB2(tans_class, name, datatypes1, feat_constraints1, datatypes2, feat_constraints2, tgraph=None, apply_all=True)
<!-- <table> -->
| Option | Description |
| ------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `tans_class` | a class that implements fit( ) and transform( ) in accordance with the transformation function definition |
| `name` | a string name that uniquely identifies this transformer from others |
| `datatypes1` | a list of data types either of which are valid inputs (first parameter) to the transformer function (numeric, float, int, and so on) |
| `feat_constraints1` | all constraints, which must be satisfied by a column to be considered a valid input (first parameter) to this transform |
| `datatypes2` | a list of data types either of which are valid inputs (second parameter) to the transformer function (numeric, float, int, and so on) |
| `feat_constraints2` | all constraints, which must be satisfied by a column to be considered a valid input (second parameter) to this transform |
| `tgraph` | tgraph object must be the invoking TGraph( ) object\. This parameter is optional and you can pass None, but that results in some inefficiency due to lack of caching\. |
| `apply_all` | only use apply\_all = True\. It means that the transformer enumerates all features (or feature sets) that match the specified criteria and applies the provided function to each\. |
<!-- </table ""> -->
#### autoai\_libs\.cognito\.transforms\.transform\_utils\.TAM() ####
For a transform that applies at the data level, such as PCA, use TAM\.
Usage:
autoai_libs.cognito.transforms.transform_utils.TAM(tans_class, name, tgraph=None, apply_all=True, col_names=None, col_dtypes=None)
<!-- <table> -->
| Option | Description |
| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `tans_class` | a class that implements `fit( )` and `transform( )` in accordance with the transformation function definition |
| `name` | a string name that uniquely identifies this transformer from others |
| `tgraph` | tgraph object must be the invoking TGraph( ) object\. This parameter is optional and you can pass None, but that results in some inefficiency due to lack of caching\. |
| `apply_all` | only use apply\_all = True\. It means that the transformer enumerates all features (or feature sets) that match the specified criteria and applies the provided function to each\. |
| `col_names` | names of the feature columns in a list |
| `col_dtypes` | list of the datatypes of the feature columns |
<!-- </table ""> -->
#### autoai\_libs\.cognito\.transforms\.transform\_utils\.TGen() ####
TGen is a general wrapper and can be used for most functions (might not be most efficient though)\.
Usage:
autoai_libs.cognito.transforms.transform_utils.TGen(fun, name, arg_count, datatypes_list, feat_constraints_list, tgraph=None, apply_all=True, col_names=None, col_dtypes=None)
<!-- <table> -->
| Option | Description |
| ----------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `fun` | the function pointer |
| `name` | a string name that uniquely identifies this transformer from others |
| `arg_count` | number of inputs to the function\. For a unary function it is 1, for a binary function it is 2, and so on\. |
| `datatypes_list` | a list of arg\_count lists that correspond to the acceptable input data types for each parameter\. For example, with `arg_count=1` the result is one list within the outer list, and it contains a single type such as 'numeric'\. In another case, it might be a more specific type such as 'int' or 'int64'\. |
| `feat_constraints_list` | a list of arg\_count lists that correspond to some constraints that can be imposed on selection of the input features |
| `tgraph` | tgraph object must be the invoking TGraph( ) object\. This parameter is optional and you can pass None, but that results in some inefficiency due to lack of caching\. |
| `apply_all` | only use apply\_all = True\. It means that the transformer enumerates all features (or feature sets) that match the specified criteria and applies the provided function to each\. |
| `col_names` | names of the feature columns in a list |
| `col_dtypes` | list of the data types of the feature columns |
<!-- </table ""> -->
#### autoai\_libs\.cognito\.transforms\.transform\_utils\.FS1() ####
Feature selection, type 1 (using pairwise correlation between each feature and the target)\.
Usage:
autoai_libs.cognito.transforms.transform_utils.FS1(cols_ids_must_keep, additional_col_count_to_keep, ptype)
<!-- <table> -->
| Option | Description |
| ------------------------------ | ---------------------------------------------------------------------------------------- |
| `cols_ids_must_keep` | serial numbers of the columns that must be kept irrespective of their feature importance |
| `additional_col_count_to_keep` | how many columns need to be retained |
| `ptype` | classification or regression |
<!-- </table ""> -->
#### autoai\_libs\.cognito\.transforms\.transform\_utils\.FS2() ####
Feature selection, type 2\.
Usage:
autoai_libs.cognito.transforms.transform_utils.FS2(cols_ids_must_keep, additional_col_count_to_keep, ptype, eval_algo)
<!-- <table> -->
| Option | Description |
| ------------------------------ | ---------------------------------------------------------------------------------------- |
| `cols_ids_must_keep` | serial numbers of the columns that must be kept irrespective of their feature importance |
| `additional_col_count_to_keep` | how many columns need to be retained |
| `ptype` | classification or regression |
<!-- </table ""> -->
## The autoai\-ts\-libs functions ##
The combination of transformers and estimators is designed and chosen for each pipeline by the AutoAI Time Series system\. Changing the transformers or the estimators in the generated pipeline notebook can cause unexpected results or even failure\. Because we do not recommend that you change the notebook for generated pipelines, specifications for the functions in the `autoai-ts-libs` library are not currently provided\.
## Learn more ##
[Selecting an AutoAI model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-view-results.html)
**Parent topic:**[Saving an AutoAI generated notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-notebook.html)
<!-- </article "role="article" "> -->
|
07A75B90684D731C6B33FC552585D391E86A2A35 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-notebook.html?context=cdpaas&locale=en | Saving an AutoAI generated notebook | Saving an AutoAI generated notebook
To view the code that created a particular experiment, or interact with the experiment programmatically, you can save an experiment as a notebook. You can also save an individual pipeline as a notebook so that you can review the code that is used in that pipeline.
Working with AutoAI-generated notebooks
When you save an experiment or a pipeline as notebook, you can:
* Access the saved notebooks from the Notebooks section on the Assets tab.
* Review the code to understand the transformations applied to build the model. This increases confidence in the process and contributes to explainable AI practices.
* Enter your own authentication credentials by using the template provided.
* Use and run the code within Watson Studio, or download the notebook code to use in another notebook server. No matter where you use the notebook, it automatically installs all required dependencies, including libraries for:
* xgboost
* lightgbm
* scikit-learn
* autoai-libs
* ibm-watson-machine-learning
* snapml
* View the training data used to train the experiment and the test (holdout) data used to validate the experiment.
Notes:
* Auto-generated notebook code executes successfully as written. Modifying the code or changing the input data can adversely affect the code. If you want to make a significant change, consider retraining the experiment by using AutoAI.
* For more information on the estimators, or algorithms, and transformers that are applied to your data to train an experiment and create pipelines, refer to [Implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html).
Saving an experiment as a notebook
Save all of the code for an experiment to view the transformations and optimizations applied to create the model pipelines.
What is included with the experiment notebook
The experiment notebook provides annotated code so you can:
* Interact with trained model pipelines
* Access model details programmatically (including feature importance and machine learning metrics).
* Visualize each pipeline as a graph, with each node documented, to provide transparency
* Compare pipelines
* Download selected pipelines and test locally
* Create a deployment and score the model
* Get the experiment definition or configuration in Python API, which you can use for automation or integration with other applications.
Saving the code for an experiment
To save an entire experiment as a notebook:
1. After the experiment completes, click Save code from the Progress map panel.
2. Name your notebook, add an optional description, choose a runtime environment, and save.
3. Click the link in the notification to open the notebook and review the code. You can also open the notebook from the Notebooks section of the Assets tab of your project.
Saving an individual pipeline as a notebook
Save an individual pipeline as a notebook so you can review the Scikit-Learn source code for the trained model in a notebook.
Note: Currently, you cannot generate a pipeline notebook for an experiment with joined data sources.
What is included with the pipeline notebook
The pipeline notebook provides annotated code that you can use to complete these tasks:
* View the Scikit-learn pipeline definition
* See the transformations applied for pipeline training
* Review the pipeline evaluation
Saving a pipeline as a notebook
To save a pipeline as a notebook:
1. Complete your AutoAI experiment.
2. Select the pipeline that you want to save in the leaderboard, and click Save from the action menu for the pipeline, then Save as notebook.
3. Name your notebook, add an optional description, choose a runtime environment, and save.
4. Click the link in the notification to open the notebook and review the code. You can also open the notebook from the Notebooks section of the Assets tab.
Create sample notebooks
To see for yourself what AutoAI-generated notebooks look like:
1. Follow the steps in [AutoAI tutorial](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html) to create a binary classification experiment from sample data.
2. After the experiment runs, click Save code in the experiment details panel.
3. Name and save the experiment notebook.
4. To save a pipeline as a model, select a pipeline from the leaderboard, then click Save and Save as notebook.
5. Name and save the pipeline notebook.
6. From the Assets tab, open the resulting notebooks in the notebook editor and review the code.
Additional resources
* For details on the methods used in the code, see [Using AutoAI libraries with Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-lib-python.html).
* For more information on AutoAI notebooks, see this [blog post](https://lukasz-cmielowski.medium.com/watson-autoai-can-i-get-the-model-88a0fbae128a).
Next steps
[Using autoai-lib for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-lib-python.html)
Parent topic:[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
| # Saving an AutoAI generated notebook #
To view the code that created a particular experiment, or interact with the experiment programmatically, you can save an experiment as a notebook\. You can also save an individual pipeline as a notebook so that you can review the code that is used in that pipeline\.
## Working with AutoAI\-generated notebooks ##
When you save an experiment or a pipeline as a notebook, you can:
<!-- <ul> -->
* Access the saved notebooks from the *Notebooks* section on the *Assets* tab\.
* Review the code to understand the transformations applied to build the model\. This increases confidence in the process and contributes to explainable AI practices\.
* Enter your own authentication credentials by using the template provided\.
* Use and run the code within Watson Studio, or download the notebook code to use in another notebook server\. No matter where you use the notebook, it automatically installs all required dependencies, including libraries for:
<!-- <ul> -->
* `xgboost`
* `lightgbm`
* `scikit-learn`
* `autoai-libs`
* `ibm-watson-machine-learning`
* `snapml`
<!-- </ul> -->
* View the training data used to train the experiment and the test (holdout) data used to validate the experiment\.
<!-- </ul> -->
**Notes:**
<!-- <ul> -->
* Auto\-generated notebook code executes successfully as written\. Modifying the code or changing the input data can adversely affect the code\. If you want to make a significant change, consider retraining the experiment by using AutoAI\.
* For more information on the estimators, or algorithms, and transformers that are applied to your data to train an experiment and create pipelines, refer to [Implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html)\.
<!-- </ul> -->
## Saving an experiment as a notebook ##
Save all of the code for an experiment to view the transformations and optimizations applied to create the model pipelines\.
### What is included with the experiment notebook ###
The experiment notebook provides annotated code so you can:
<!-- <ul> -->
* Interact with trained model pipelines
* Access model details programmatically (including feature importance and machine learning metrics)\.
* Visualize each pipeline as a graph, with each node documented, to provide transparency
* Compare pipelines
* Download selected pipelines and test locally
* Create a deployment and score the model
* Get the experiment definition or configuration in Python API, which you can use for automation or integration with other applications\.
<!-- </ul> -->
### Saving the code for an experiment ###
To save an entire experiment as a notebook:
<!-- <ol> -->
1. After the experiment completes, click **Save code** from the Progress map panel\.
2. Name your notebook, add an optional description, choose a runtime environment, and save\.
3. Click the link in the notification to open the notebook and review the code\. You can also open the notebook from the *Notebooks* section of the *Assets* tab of your project\.
<!-- </ol> -->
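You can also work with a completed experiment entirely from code by using the `ibm-watson-machine-learning` client that the generated notebooks rely on. The following sketch is illustrative only: the credentials, project ID, and run ID are placeholders, and method names can differ between client versions.

```python
# Sketch: retrieve a completed AutoAI run and load a pipeline for local inspection.
# The credentials, project_id, and run_id values are hypothetical placeholders.
from ibm_watson_machine_learning.experiment import AutoAI

wml_credentials = {
    "apikey": "<YOUR_API_KEY>",
    "url": "https://us-south.ml.cloud.ibm.com",
}

experiment = AutoAI(wml_credentials, project_id="<PROJECT_ID>")

# List historical AutoAI runs in the project, then fetch the optimizer for one run.
print(experiment.runs.list())
optimizer = experiment.runs.get_optimizer(run_id="<RUN_ID>")

# The leaderboard as a pandas DataFrame: one row per candidate pipeline.
print(optimizer.summary())

# Load the best pipeline; method and enum names may vary by library version.
pipeline = optimizer.get_pipeline(astype=AutoAI.PipelineTypes.SKLEARN)
```

The returned object behaves like a scikit-learn pipeline, so you can call `predict()` on a DataFrame with the same columns as the training data.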
## Saving an individual pipeline as a notebook ##
Save an individual pipeline as a notebook so you can review the Scikit\-Learn source code for the trained model in a notebook\.
Note: Currently, you cannot generate a pipeline notebook for an experiment with joined data sources\.
### What is included with the pipeline notebook ###
The pipeline notebook provides annotated code that you can use to complete these tasks:
<!-- <ul> -->
* View the Scikit\-learn pipeline definition
* See the transformations applied for pipeline training
* Review the pipeline evaluation
<!-- </ul> -->
### Saving a pipeline as a notebook ###
To save a pipeline as a notebook:
<!-- <ol> -->
1. Complete your AutoAI experiment\.
2. Select the pipeline that you want to save in the leaderboard, and click **Save** from the action menu for the pipeline, then **Save as notebook**\.
3. Name your notebook, add an optional description, choose a runtime environment, and save\.
4. Click the link in the notification to open the notebook and review the code\. You can also open the notebook from the *Notebooks* section of the *Assets* tab\.
<!-- </ol> -->
## Create sample notebooks ##
To see for yourself what AutoAI\-generated notebooks look like:
<!-- <ol> -->
1. Follow the steps in [AutoAI tutorial](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html) to create a binary classification experiment from sample data\.
2. After the experiment runs, click **Save code** in the experiment details panel\.
3. Name and save the experiment notebook\.
4. To save a pipeline as a model, select a pipeline from the leaderboard, then click **Save** and **Save as notebook**\.
5. Name and save the pipeline notebook\.
6. From the *Assets* tab, open the resulting notebooks in the notebook editor and review the code\.
<!-- </ol> -->
## Additional resources ##
<!-- <ul> -->
* For details on the methods used in the code, see [Using AutoAI libraries with Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-lib-python.html)\.
* For more information on AutoAI notebooks, see this [blog post](https://lukasz-cmielowski.medium.com/watson-autoai-can-i-get-the-model-88a0fbae128a)\.
<!-- </ul> -->
## Next steps ##
[Using autoai\-lib for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-lib-python.html)
**Parent topic:**[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
<!-- </article "role="article" "> -->
|
91EEB0303C78EC7EAA6DAB7921E7173C68FF7769 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html?context=cdpaas&locale=en | AutoAI Overview | AutoAI Overview
The AutoAI graphical tool analyzes your data and uses data algorithms, transformations, and parameter settings to create the best predictive model. AutoAI displays various potential models as model candidate pipelines and ranks them on a leaderboard for you to choose from.
Data format: Tabular CSV files with comma (,) delimiter for all types of AutoAI experiments, or connected data from [IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html).
Note:You can use a data asset that is saved as a Feature Group (beta) but the metadata is not used to populate the AutoAI experiment settings.
Data size: Up to 1 GB or up to 20 GB. For details, refer to [AutoAI data use](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html?context=cdpaas&locale=en#autoai-data-use).
AutoAI data use
These limits are based on the default compute configuration of 8 CPU and 32 GB.
AutoAI classification and regression experiments:
* You can upload a file up to 1 GB for AutoAI experiments.
* If you connect to a data source that exceeds 1 GB, only the first 1 GB of records is used.
AutoAI time series experiments:
* If the data source contains a timestamp column, AutoAI samples the data at a uniform frequency. For example, data can be in increments of one minute, one hour, or one day. The specified timestamp is used to determine the lookback window to improve the model accuracy.
Note: If the file size is larger than 1 GB, AutoAI sorts the data in descending time order and only the first 1 GB is used to train the experiment.
* If the data source does not contain a timestamp column, ensure that the data is sampled at uniform intervals and sorted in ascending time order. An ascending sort order means that the value in the first row is the oldest, and the value in the last row is the most recent.
Note: If the file size is larger than 1 GB, truncate the file size so it is smaller than 1 GB.
AutoAI process
Using AutoAI, you can build and deploy a machine learning model with sophisticated training features and no coding. The tool does most of the work for you.
To view the code that created a particular experiment, or interact with the experiment programmatically, you can [save an experiment as a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-notebook.html).

AutoAI automatically runs the following tasks to build and evaluate candidate model pipelines:
* [Data pre-processing](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html?context=cdpaas&locale=en#preprocess)
* [Automated model selection](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html?context=cdpaas&locale=en#model_selection)
* [Automated feature engineering](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html?context=cdpaas&locale=en#feature_engineering)
* [Hyperparameter optimization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html?context=cdpaas&locale=en#hpo_optimization)
Understanding the AutoAI process
For additional detail on each of these phases, including links to associated research papers and descriptions of the algorithms applied to create the model pipelines, see [AutoAI implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html).
Data pre-processing
Most data sets contain different data formats and missing values, but standard machine learning algorithms work only with numbers and no missing values. Therefore, AutoAI applies various algorithms or estimators to analyze, clean, and prepare your raw data for machine learning. This technique automatically detects and categorizes values based on features, such as data type: categorical or numerical. Depending on the categorization, AutoAI uses [hyper-parameter optimization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html?context=cdpaas&locale=en#hpo_optimization) to determine the best combination of strategies for missing value imputation, feature encoding, and feature scaling for your data.
Automated model selection
AutoAI uses automated model selection to identify the best model for your data. This novel approach tests potential models against small subsets of the data and ranks them based on accuracy. AutoAI then selects the most promising models and increases the size of the data subset until it identifies the best match. This approach saves time and improves performance by gradually narrowing down the potential models based on accuracy.
For information on how to handle automatically-generated pipelines to select the best model, refer to [Selecting an AutoAI model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-view-results.html).
Automated feature engineering
Feature engineering identifies the most accurate model by transforming raw data into a combination of features that best represent the problem. This unique approach explores various feature construction choices in a structured, nonexhaustive manner, while progressively maximizing model accuracy by using reinforcement learning. This technique results in an optimized sequence of transformations for the data that best match the algorithms of the model selection step.
Hyperparameter optimization
Hyperparameter optimization refines the best-performing models. AutoAI uses a novel hyperparameter optimization algorithm that is tailored to costly function evaluations, such as model training and scoring, which are typical in machine learning. This approach quickly identifies the best model despite long evaluation times at each iteration.
Next steps
[AutoAI tutorial: Build a Binary Classification Model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html)
Parent topic:[Analyzing data and building models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html)
| # AutoAI Overview #
The AutoAI graphical tool analyzes your data and uses data algorithms, transformations, and parameter settings to create the best predictive model\. AutoAI displays various potential models as model candidate pipelines and ranks them on a leaderboard for you to choose from\.
**Data format**: Tabular CSV files with comma (,) delimiter for all types of AutoAI experiments, or connected data from [IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html)\.
Note:You can use a data asset that is saved as a *Feature Group (beta)* but the metadata is not used to populate the AutoAI experiment settings\.
**Data size**: Up to 1 GB or up to 20 GB\. For details, refer to [AutoAI data use](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html?context=cdpaas&locale=en#autoai-data-use)\.
## AutoAI data use ##
These limits are based on the default compute configuration of 8 CPU and 32 GB\.
AutoAI classification and regression experiments:
<!-- <ul> -->
* You can upload a file up to 1 GB for AutoAI experiments\.
* If you connect to a data source that exceeds 1 GB, only the first 1 GB of records is used\.
<!-- </ul> -->
AutoAI time series experiments:
<!-- <ul> -->
* If the data source contains a timestamp column, AutoAI samples the data at a uniform frequency\. For example, data can be in increments of one minute, one hour, or one day\. The specified timestamp is used to determine the lookback window to improve the model accuracy\.
Note: If the file size is larger than 1 GB, AutoAI sorts the data in *descending* time order and only the first 1 GB is used to train the experiment.
* If the data source does not contain a timestamp column, ensure that the data is sampled at uniform intervals and sorted in *ascending* time order\. An ascending sort order means that the value in the first row is the oldest, and the value in the last row is the most recent\.
Note: If the file size is larger than 1 GB, truncate the file size so it is smaller than 1 GB.
<!-- </ul> -->
## AutoAI process ##
Using AutoAI, you can build and deploy a machine learning model with sophisticated training features and no coding\. The tool does most of the work for you\.
To view the code that created a particular experiment, or interact with the experiment programmatically, you can [save an experiment as a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-notebook.html)\.

AutoAI automatically runs the following tasks to build and evaluate candidate model pipelines:
<!-- <ul> -->
* [Data pre\-processing](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html?context=cdpaas&locale=en#preprocess)
* [Automated model selection](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html?context=cdpaas&locale=en#model_selection)
* [Automated feature engineering](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html?context=cdpaas&locale=en#feature_engineering)
* [Hyperparameter optimization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html?context=cdpaas&locale=en#hpo_optimization)
<!-- </ul> -->
#### Understanding the AutoAI process ####
For additional detail on each of these phases, including links to associated research papers and descriptions of the algorithms applied to create the model pipelines, see [AutoAI implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html)\.
### Data pre\-processing ###
Most data sets contain different data formats and missing values, but standard machine learning algorithms work only with numbers and no missing values\. Therefore, AutoAI applies various algorithms or estimators to analyze, clean, and prepare your raw data for machine learning\. This technique automatically detects and categorizes values based on features, such as data type: categorical or numerical\. Depending on the categorization, AutoAI uses [hyper\-parameter optimization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html?context=cdpaas&locale=en#hpo_optimization) to determine the best combination of strategies for missing value imputation, feature encoding, and feature scaling for your data\.
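The exact preprocessing pipeline is assembled for you by AutoAI, but the general pattern of imputing, encoding, and scaling by column type can be illustrated with plain scikit-learn. The column names in this sketch are invented for the example and are not part of AutoAI's API.

```python
# Illustrative only: a typical preprocessing layout similar in spirit to what AutoAI
# assembles automatically (imputation, encoding, and scaling per column type).
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

numeric_features = ["age", "income"]          # hypothetical numeric columns
categorical_features = ["state", "product"]   # hypothetical categorical columns

numeric_transformer = Pipeline(steps=[
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
])
categorical_transformer = Pipeline(steps=[
    ("impute", SimpleImputer(strategy="most_frequent")),
    ("encode", OneHotEncoder(handle_unknown="ignore")),
])

preprocessor = ColumnTransformer(transformers=[
    ("num", numeric_transformer, numeric_features),
    ("cat", categorical_transformer, categorical_features),
])
```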
### Automated model selection ###
AutoAI uses automated model selection to identify the best model for your data\. This novel approach tests potential models against small subsets of the data and ranks them based on accuracy\. AutoAI then selects the most promising models and increases the size of the data subset until it identifies the best match\. This approach saves time and improves performance by gradually narrowing down the potential models based on accuracy\.
For information on how to handle automatically\-generated pipelines to select the best model, refer to [Selecting an AutoAI model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-view-results.html)\.
### Automated feature engineering ###
Feature engineering identifies the most accurate model by transforming raw data into a combination of features that best represent the problem\. This unique approach explores various feature construction choices in a structured, nonexhaustive manner, while progressively maximizing model accuracy by using reinforcement learning\. This technique results in an optimized sequence of transformations for the data that best match the algorithms of the model selection step\.
### Hyperparameter optimization ###
Hyperparameter optimization refines the best\-performing models\. AutoAI uses a novel hyperparameter optimization algorithm that is tailored to costly function evaluations, such as model training and scoring, which are typical in machine learning\. This approach quickly identifies the best model despite long evaluation times at each iteration\.
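AutoAI's optimizer is not exposed as a separate library call here, but the general shape of hyperparameter optimization can be shown with a generic scikit-learn randomized search. This is for intuition only and is not the algorithm AutoAI uses internally.

```python
# Generic hyperparameter search for intuition only; AutoAI uses its own optimizer internally.
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={"n_estimators": randint(50, 300), "max_depth": randint(2, 12)},
    n_iter=10,
    scoring="roc_auc",
    cv=3,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```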
## Next steps ##
[AutoAI tutorial: Build a Binary Classification Model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html)
**Parent topic:**[Analyzing data and building models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html)
<!-- </article "role="article" "> -->
|
2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-text-analysis.html?context=cdpaas&locale=en | Creating a text analysis experiment | Creating a text analysis experiment
Use AutoAI's text analysis feature to perform text analysis of your experiments. For example, perform basic sentiment analysis to predict an outcome based on text comments.
Note: Text analysis is only available for AutoAI classification and regression experiments. This feature is not available for time series experiments.
Text analysis overview
When you create an experiment that uses the text analysis feature, the AutoAI process uses the word2vec algorithm to transform the text into vectors, then compares the vectors to establish the impact on the prediction column.
The word2vec algorithm takes a corpus of text as input and outputs a set of vectors. By turning text into a numerical representation, it can detect and compare similar words. When trained with enough data, word2vec can make accurate predictions about a word's meaning or relationship to other words. The predictions can be used to analyze text and guess at the meaning in sentiment analysis applications.
During the feature engineering phase of the experiment training, 20 features are generated for the text column by using the word2vec algorithm. Auto-detection of text features is based on analyzing the number of unique values in a column and the number of tokens in a record (minimum number = 3). If the number of unique values is less than the number of all values divided by 5, the column is not treated as text.
When the experiment completes, you can review the feature engineering results from the pipeline details page. You can also save a pipeline as a notebook, where you can review the transformations and see a visualization of the transformations.
Note: When you review the experiment, if you determine that a text column was not detected and processed by the auto-detection, you can specify the text column manually in the experiment settings.
In this example, the comments for a fictional car rental company are used to train a model that predicts a satisfaction rating when a new comment is entered.
Watch this short video to see this example and then read further details about the text feature below the video.
Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform.
This video provides a visual method to learn the concepts and tasks in this documentation.
* Transcript
Time Transcript
00:00 In this video you'll see how to create an AutoAI experiment to perform sentiment analysis on a text file.
00:09 You can use the text feature engineering to perform text analysis in your experiments.
00:15 For example, perform basic sentiment analysis to predict an outcome based on text comments.
00:22 Start in a project and add an asset to that project, a new AutoAI experiment.
00:29 Just provide a name, description, select a machine learning service, and then create the experiment.
00:38 When the AutoAI experiment builder displays, you can add the data set.
00:43 In this case, the data set is already stored in the project as a data asset.
00:48 Select the asset to add to the experiment.
00:53 Before continuing, preview the data.
00:56 This data set has two columns.
00:59 The first contains the customers' comments and the second contains either 0, for "Not satisfied", or 1, for "Satisfied".
01:08 This isn't a time series forecast, so select "No" for that option.
01:13 Then select the column to predict, which is "Satisfaction" in this example.
01:19 AutoAI determines that the satisfaction column contains two possible values, making it suitable for a binary classification model.
01:28 And the positive class is 1, for "Satisfied".
01:32 Open the experiment settings if you'd like to customize the experiment.
01:36 On the data source panel, you'll see some options for the text feature engineering.
01:41 You can automatically select the text columns, or you can exercise more control by manually specifying the columns for text feature engineering.
01:52 You can also select how many vectors to create for each column during text feature engineering.
01:58 A lower number is faster and a higher number is more accurate, but slower.
02:03 Now, run the experiment to view the transformations and progress.
02:09 When you create an experiment that uses the text analysis feature, the AutoAI process uses the word2vec algorithm to transform the text into vectors, then compares the vectors to establish the impact on the prediction column.
02:23 During the feature engineering phase of the experiment training, twenty features are generated for the text column using the word2vec algorithm.
02:33 When the experiment completes, you can review the feature engineering results from the pipeline details page.
02:40 On the Features summary panel, you can review the text transformations.
02:45 You can see that AutoAI created several text features by applying the algorithm function to the column elements, along with the feature importance showing which features contribute most to your prediction output.
02:59 You can save this pipeline as a model or as a notebook.
03:03 The notebook contains the code to see the transformations and visualizations of those transformations.
03:09 In this case, create a model.
03:13 Use the link to view the model.
03:16 Now, promote the model to a deployment space.
03:23 Here are the model details, and from here you can deploy the model.
03:28 In this case, it will be an online deployment.
03:36 When that completes, open the deployment.
03:39 On the test app, you can specify one or more comments to analyze.
03:46 Then, click "Predict".
03:49 The first customer is predicted not to be satisfied with the service.
03:54 And the second customer is predicted to be satisfied with the service.
03:59 Find more videos in the Cloud Pak for Data as a Service documentation.
Given a data set that contains a column of review comments for the rental experience (Customer_service), and a column that contains a binary satisfaction rating (Satisfaction) where 0 represents a negative comment and 1 represents a positive comment, the experiment is trained to predict a satisfaction rating when new feedback is entered.
Training a text transformation experiment
After you load the data set and specify the prediction column (Satisfaction), the Use text feature engineering option is selected in the Experiment settings.

Note some of the details for tuning your text analysis experiment:
* You can accept the default selection of automatically selecting the text columns or you can exercise more control by manually specifying the columns for text feature engineering.
* As the experiment runs, a default of 20 features is generated for the text column by using the word2vec algorithm. You can edit that value to increase or decrease the number of features. The more vectors that you generate, the more accurate your model is, but the longer the training takes.
* The remainder of the options applies to all types of experiments so you can fine-tune how to handle the final training data.
Run the experiment to view the transformations in progress.

Select the name of a pipeline, then click Feature summary to review the text transformations.

You can also save the experiment pipeline as a notebook and review the transformations as a visualization.
Deploying and scoring a text transformation model
When you score this model, enter new comments to get a prediction with a confidence score for whether the comment results in a positive or negative satisfaction rating.
For example, entering the comment "It took us almost three hours to get a car. It was absurd" predicts a satisfaction rating of 0 with a confidence score of 95%.

Next steps
[Building a time series forecast experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html)
Parent topic:[Building an AutoAI model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html)
| # Creating a text analysis experiment #
Use AutoAI's text analysis feature to perform text analysis of your experiments\. For example, perform basic sentiment analysis to predict an outcome based on text comments\.
Note: Text analysis is only available for AutoAI classification and regression experiments\. This feature is not available for time series experiments\.
## Text analysis overview ##
When you create an experiment that uses the text analysis feature, the AutoAI process uses the `word2vec` algorithm to transform the text into vectors, then compares the vectors to establish the impact on the prediction column\.
The `word2vec` algorithm takes a corpus of text as input and outputs a set of vectors\. By turning text into a numerical representation, it can detect and compare similar words\. When trained with enough data, `word2vec` can make accurate predictions about a word's meaning or relationship to other words\. The predictions can be used to analyze text and guess at the meaning in sentiment analysis applications\.
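The following sketch is not AutoAI's internal code; it only illustrates the `word2vec` idea with the open-source `gensim` library: train vectors on a small corpus, then average the word vectors of each comment into a fixed-length numeric feature.

```python
# Illustrative word2vec feature construction with gensim (not AutoAI's internal code).
import numpy as np
from gensim.models import Word2Vec

comments = [
    "the rental car was clean and ready on time",
    "waited three hours at the counter absolutely terrible",
    "friendly staff quick pickup would rent again",
]
tokenized = [c.split() for c in comments]

# vector_size=20 mirrors the 20 features AutoAI generates per text column by default.
w2v = Word2Vec(sentences=tokenized, vector_size=20, window=5, min_count=1, seed=1)

def comment_vector(tokens, model):
    """Average the word vectors of a comment into one fixed-length feature vector."""
    vectors = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vectors, axis=0) if vectors else np.zeros(model.vector_size)

features = np.vstack([comment_vector(t, w2v) for t in tokenized])
print(features.shape)  # (3, 20)
```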
During the feature engineering phase of the experiment training, 20 features are generated for the text column by using the `word2vec` algorithm\. Auto\-detection of text features is based on analyzing the number of unique values in a column and the number of tokens in a record (minimum number = 3)\. If the number of unique values is less than the number of all values divided by 5, the column is not treated as text\.
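The auto-detection rule can be restated as a small pandas check. This is an interpretation of the rule for illustration; the column name and the handling of the token threshold are assumptions, not AutoAI source code.

```python
# Restating the documented heuristic: a column is treated as text only if it has
# enough unique values (>= len(column) / 5) and records contain at least 3 tokens.
import pandas as pd

def looks_like_text(column: pd.Series, min_tokens: int = 3) -> bool:
    unique_ratio_ok = column.nunique() >= len(column) / 5
    # Interpretation: at least one record reaches the minimum token count.
    token_count_ok = column.astype(str).str.split().str.len().ge(min_tokens).any()
    return unique_ratio_ok and token_count_ok

df = pd.DataFrame({"Customer_service": ["great service", "it took almost three hours", "ok"]})
print(looks_like_text(df["Customer_service"]))
```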
When the experiment completes, you can review the feature engineering results from the pipeline details page\. You can also save a pipeline as a notebook, where you can review the transformations and see a visualization of the transformations\.
Note: When you review the experiment, if you determine that a text column was not detected and processed by the auto\-detection, you can specify the text column manually in the experiment settings\.
In this example, the comments for a fictional car rental company are used to train a model that predicts a satisfaction rating when a new comment is entered\.
Watch this short video to see this example and then read further details about the text feature below the video\.
Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform\.
This video provides a visual method to learn the concepts and tasks in this documentation\.
<!-- <ul> -->
* Transcript
<!-- <table "class="bx--data-table bx--data-table--zebra" style="border-collapse: collapse; border: none;" "> -->
| Time | Transcript |
| ----- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 00:00 | In this video you'll see how to create an AutoAI experiment to perform sentiment analysis on a text file. |
| 00:09 | You can use the text feature engineering to perform text analysis in your experiments. |
| 00:15 | For example, perform basic sentiment analysis to predict an outcome based on text comments. |
| 00:22 | Start in a project and add an asset to that project, a new AutoAI experiment. |
| 00:29 | Just provide a name, description, select a machine learning service, and then create the experiment. |
| 00:38 | When the AutoAI experiment builder displays, you can add the data set. |
| 00:43 | In this case, the data set is already stored in the project as a data asset. |
| 00:48 | Select the asset to add to the experiment. |
| 00:53 | Before continuing, preview the data. |
| 00:56 | This data set has two columns. |
| 00:59 | The first contains the customers' comments and the second contains either 0, for "Not satisfied", or 1, for "Satisfied". |
| 01:08 | This isn't a time series forecast, so select "No" for that option. |
| 01:13 | Then select the column to predict, which is "Satisfaction" in this example. |
| 01:19 | AutoAI determines that the satisfaction column contains two possible values, making it suitable for a binary classification model. |
| 01:28 | And the positive class is 1, for "Satisfied". |
| 01:32 | Open the experiment settings if you'd like to customize the experiment. |
| 01:36 | On the data source panel, you'll see some options for the text feature engineering. |
| 01:41 | You can automatically select the text columns, or you can exercise more control by manually specifying the columns for text feature engineering. |
| 01:52 | You can also select how many vectors to create for each column during text feature engineering. |
| 01:58 | A lower number is faster and a higher number is more accurate, but slower. |
| 02:03 | Now, run the experiment to view the transformations and progress. |
| 02:09 | When you create an experiment that uses the text analysis feature, the AutoAI process uses the word2vec algorithm to transform the text into vectors, then compares the vectors to establish the impact on the prediction column. |
| 02:23 | During the feature engineering phase of the experiment training, twenty features are generated for the text column using the word2vec algorithm. |
| 02:33 | When the experiment completes, you can review the feature engineering results from the pipeline details page. |
| 02:40 | On the Features summary panel, you can review the text transformations. |
| 02:45 | You can see that AutoAI created several text features by applying the algorithm function to the column elements, along with the feature importance showing which features contribute most to your prediction output. |
| 02:59 | You can save this pipeline as a model or as a notebook. |
| 03:03 | The notebook contains the code to see the transformations and visualizations of those transformations. |
| 03:09 | In this case, create a model. |
| 03:13 | Use the link to view the model. |
| 03:16 | Now, promote the model to a deployment space. |
| 03:23 | Here are the model details, and from here you can deploy the model. |
| 03:28 | In this case, it will be an online deployment. |
| 03:36 | When that completes, open the deployment. |
| 03:39 | On the test app, you can specify one or more comments to analyze. |
| 03:46 | Then, click "Predict". |
| 03:49 | The first customer is predicted not to be satisfied with the service. |
| 03:54 | And the second customer is predicted to be satisfied with the service. |
| 03:59 | Find more videos in the Cloud Pak for Data as a Service documentation. |
<!-- </table "class="bx--data-table bx--data-table--zebra" style="border-collapse: collapse; border: none;" "> -->
<!-- </ul> -->
Given a data set that contains a column of review comments for the rental experience (Customer\_service), and a column that contains a binary satisfaction rating (Satisfaction) where 0 represents a negative comment and 1 represents a positive comment, the experiment is trained to predict a satisfaction rating when new feedback is entered\.
### Training a text transformation experiment ###
After you load the data set and specify the prediction column (Satisfaction), the *Use text feature engineering* option is selected in the *Experiment settings*\.

Note some of the details for tuning your text analysis experiment:
<!-- <ul> -->
* You can accept the default selection of automatically selecting the text columns or you can exercise more control by manually specifying the columns for text feature engineering\.
* As the experiment runs, a default of 20 features is generated for the text column by using the `word2vec` algorithm\. You can edit that value to increase or decrease the number of features\. The more vectors that you generate, the more accurate your model is, but the longer the training takes\.
* The remainder of the options applies to all types of experiments so you can fine\-tune how to handle the final training data\.
<!-- </ul> -->
Run the experiment to view the transformations in progress\.

Select the name of a pipeline, then click **Feature summary** to review the text transformations\.

You can also save the experiment pipeline as a notebook and review the transformations as a visualization\.
### Deploying and scoring a text transformation model ###
When you score this model, enter new comments to get a prediction with a confidence score for whether the comment results in a positive or negative satisfaction rating\.
For example, entering the comment "It took us almost three hours to get a car\. It was absurd" predicts a satisfaction rating of 0 with a confidence score of 95%\.

## Next steps ##
[Building a time series forecast experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html)
**Parent topic:**[Building an AutoAI model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html)
<!-- </article "role="article" "> -->
|
510BB82156702471C527D6EF7E51FE69EF746004 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=en | Time series implementation details | Time series implementation details
These implementation details describe the stages and processing that are specific to an AutoAI time series experiment.
Implementation details
Refer to these implementation and configuration details for your time series experiment.
* [Time series stages](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=en#ts-stages) for processing an experiment.
* [Time series optimizing metrics](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=en#ts-metrics) for tuning your pipelines.
* [Time series algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=en#ts-algorithms) for building the pipelines.
* [Supported date and time formats](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=en#ts-date-time).
Time series stages
An AutoAI time series experiment includes these stages when an experiment runs:
1. [Initialization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=en#initialization)
2. [Pipeline selection](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=en#pipeline-selection)
3. [Model evaluation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=en#model-eval)
4. [Final pipeline generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=en#final-pipeline)
5. [Backtest](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=en#backtest)
Stage 1: Initialization
The initialization stage processes the training data, in this sequence:
* Load the data
* Split the data set L into training data T and holdout data H
* Set the validation, timestamp column handling, and lookback window generation. Notes:
* The training data (T) is equal to the data set (L) minus the holdout (H). When you configure the experiment, you can adjust the size of the holdout data. By default, the size of the holdout data is 20 steps.
* You can optionally specify the timestamp column.
* By default, a lookback window is generated automatically by detecting the seasonal period by using a signal processing method. However, if you have an idea of an appropriate lookback window, you can specify the value directly.
Stage 2: Pipeline selection
The pipeline selection step uses an efficient method called T-Daub (Time Series Data Allocation Using Upper Bounds). The method selects pipelines by allocating more training data to the most promising pipelines, while allocating less training data to unpromising pipelines. In this way, not all pipelines see the complete set of data, and the selection process is typically faster. The following steps describe the process overview:
1. All pipelines are sequentially allocated several small subsets of training data. The latest data is allocated first.
2. Each pipeline is trained on every allocated subset of training data and evaluated with testing data (holdout data).
3. A linear regression model is applied to each pipeline by using the data set described in the previous step.
4. The accuracy score of the pipeline is projected on the entire training data set. This method results in a data set containing the accuracy and size of allocated data for each pipeline.
5. The best pipeline is selected according to the projected accuracy and is allotted rank 1.
6. More data is allocated to the best pipeline. Then, the projected accuracy is updated for the other pipelines.
7. The prior two steps are repeated until the top N pipelines are trained on all the data.
Stage 3: Model evaluation
In this step, the winning pipelines N are retrained on the entire training data set T. Further, they are evaluated with the holdout data H.
Stage 4: Final pipeline generation
In this step, the winning pipelines are retrained on the entire data set (L) and generated as the final pipelines.
As the retraining of each pipeline completes, the pipeline is posted to the leaderboard. You can select to inspect the pipeline details or save the pipeline as a model.
Stage 5: Backtest
In the final step, the winning pipelines are retrained and evaluated by using the backtest method. The following steps describe the backtest method:
1. The training data length is determined based on the number of backtests, gap length, and holdout size. To learn more about these parameters, see [Building a time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html).
2. Starting from the oldest data, the experiment is trained by using the training data.
3. Further, the experiment is evaluated on the first validation data set. If the gap length is non-zero, any data in the gap is skipped over.
4. The training data window is advanced by the holdout size and gap length to form a new training set.
5. A fresh experiment is trained with this new data and evaluated with the next validation data set.
6. The prior two steps are repeated for the remaining backtesting periods.
Time series optimization metrics
Accept the default metric, or choose a metric to optimize for your experiment.
Metric Description
Symmetric Mean Absolute Percentage Error (SMAPE) At each fitted point, the absolute difference between actual value and predicted value is divided by half the sum of absolute actual value and predicted value. Then, the average is calculated for all such values across all the fitted points.
Mean Absolute Error (MAE) Average of absolute differences between the actual values and predicted values.
Root Mean Squared Error (RMSE) Square root of the mean of the squared differences between the actual values and predicted values.
R^2^ Measure of how the model performance compares to the baseline model, or mean model. The R^2^ value must be equal to or less than 1. A negative R^2^ value means that the model under consideration is worse than the mean model. A zero R^2^ value means that the model under consideration is as good or as bad as the mean model. A positive R^2^ value means that the model under consideration is better than the mean model.
Reviewing the metrics for an experiment
When you view the results for a time series experiment, you see the values for metrics used to train the experiment in the pipeline leaderboard:

You can see that the accuracy measures for time-series experiments may vary widely, depending on the experiment data evaluated.
* Validation is the score calculated on training data.
* Holdout is the score calculated on the reserved holdout data.
* Backtest is the mean score from all backtest scores.
Time series algorithms
These algorithms are available for your time series experiment. You can use the algorithms that are selected by default, or you can configure your experiment to include or exclude specific algorithms.
Algorithm Description
ARIMA Autoregressive Integrated Moving Average (ARIMA) model is a typical time series model, which can transform non-stationary data to stationary data through differencing, and then forecast the next value by using the past values, including the lagged values and lagged forecast errors
BATS The BATS algorithm combines Box-Cox Transformation, ARMA residuals, Trend, and Seasonality factors to forecast future values.
Ensembler Ensembler combines multiple forecast methods to improve on the accuracy of a single prediction method and to avoid possible overfitting.
Holt-Winters Uses triple exponential smoothing to forecast data points in a series, if the series is repetitive over time (seasonal). Two types of Holt-Winters models are provided: additive Holt-Winters and multiplicative Holt-Winters.
Random Forest Tree-based regression model where each tree in the ensemble is built from a sample that is drawn with replacement (for example, a bootstrap sample) from the training set.
Support Vector Machine (SVM) SVMs are a type of machine learning model that can be used for both regression and classification. SVMs use a hyperplane to divide the data into separate classes.
Linear regression Builds a linear relationship between the time series variable and the date/time or time index, with residuals that follow the AR process.
Supported date and time formats
The date/time formats supported in time series experiments are based on the definitions that are provided by [dateutil](https://dateutil.readthedocs.io/en/stable/parser.html).
Supported date formats are:
Common:
YYYY
YYYY-MM, YYYY/MM, or YYYYMM
YYYY-MM-DD or YYYYMMDD
mm/dd/yyyy
mm-dd-yyyy
JAN YYYY
Uncommon:
YYYY-Www or YYYYWww - ISO week (day defaults to 0)
YYYY-Www-D or YYYYWwwD - ISO week and day
Numbering for the ISO week and day values follows the same logic as datetime.date.isocalendar().
Supported time formats are:
hh
hh:mm or hhmm
hh:mm:ss or hhmmss
hh:mm:ss.ssssss (Up to 6 sub-second digits)
dd-MMM
yyyy/mm
Notes:
* Midnight can be represented as 00:00 or 24:00. The decimal separator can be either a period or a comma.
* Dates can be submitted as strings, with double quotation marks, such as "1958-01-16".
Supporting features
Supporting features, also known as exogenous features, are input features that can influence the prediction target. You can use supporting features to include additional columns from your data set to improve the prediction and increase your model’s accuracy. For example, in a time series experiment to predict prices over time, a supporting feature might be data on sales and promotions. Or, in a model that forecasts energy consumption, including daily temperature makes the forecast more accurate.
Algorithms and pipelines that use Supporting features
Only a subset of the algorithms allows supporting features. For example, Holt-Winters and BATS do not support the use of supporting features. Algorithms that do not support supporting features ignore your selection for supporting features when you run the experiment.
Some algorithms use supporting features for certain variations of the algorithm, but not for others. For example, you can generate two different pipelines with the Random Forest algorithm, RandomForestRegressor and ExogenousRandomForestRegressor. The ExogenousRandomForestRegressor variation provides support for supporting features, whereas RandomForestRegressor does not.
This table details whether an algorithm provides support for Supporting features in a time series experiment:
Algorithm Pipeline Provide support for Supporting features
Random forest RandomForestRegressor No
Random forest ExogenousRandomForestRegressor Yes
SVM SVM No
SVM ExogenousSVM Yes
Ensembler LocalizedFlattenEnsembler Yes
Ensembler DifferenceFlattenEnsembler No
Ensembler FlattenEnsembler No
Ensembler ExogenousLocalizedFlattenEnsembler Yes
Ensembler ExogenousDifferenceFlattenEnsembler Yes
Ensembler ExogenousFlattenEnsembler Yes
Regression MT2RForecaster No
Regression ExogenousMT2RForecaster Yes
Holt-winters HoltWinterAdditive No
Holt-winters HoltWinterMultiplicative No
BATS BATS No
ARIMA ARIMA No
ARIMA ARIMAX Yes
ARIMA ARIMAX_RSAR Yes
ARIMA ARIMAX_PALR Yes
ARIMA ARIMAX_RAR Yes
ARIMA ARIMAX_DMLR Yes
Learn more
[Scoring a time series model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-score.html)
Parent topic:[Building a time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html)
| # Time series implementation details #
These implementation details describe the stages and processing that are specific to an AutoAI time series experiment\.
## Implementation details ##
Refer to these implementation and configuration details for your time series experiment\.
<!-- <ul> -->
* [Time series stages](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=en#ts-stages) for processing an experiment\.
* [Time series optimizing metrics](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=en#ts-metrics) for tuning your pipelines\.
* [Time series algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=en#ts-algorithms) for building the pipelines\.
* [Supported date and time formats](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=en#ts-date-time)\.
<!-- </ul> -->
## Time series stages ##
An AutoAI time series experiment includes these stages when an experiment runs:
<!-- <ol> -->
1. [Initialization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=en#initialization)
2. [Pipeline selection](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=en#pipeline-selection)
3. [Model evaluation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=en#model-eval)
4. [Final pipeline generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=en#final-pipeline)
5. [Backtest](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=en#backtest)
<!-- </ol> -->
### Stage 1: Initialization ###
The initialization stage processes the training data, in this sequence:
<!-- <ul> -->
* Load the data
* Split the data set *L* into training data *T* and holdout data *H*
* Set the validation, timestamp column handling, and lookback window generation\. **Notes:**
<!-- <ul> -->
* The training data (*T*) is equal to the data set (*L*) minus the holdout (*H*). When you configure the experiment, you can adjust the size of the holdout data. By default, the size of the holdout data is 20 steps.
* You can optionally specify the timestamp column.
* By default, a lookback window is generated automatically by detecting the seasonal period by using a signal processing method. However, if you have an idea of an appropriate lookback window, you can specify the value directly.
<!-- </ul> -->
<!-- </ul> -->
### Stage 2: Pipeline selection ###
The pipeline selection step uses an efficient method called *T\-Daub* (Time Series Data Allocation Using Upper Bounds)\. The method selects pipelines by allocating more training data to the most promising pipelines, while allocating less training data to unpromising pipelines\. In this way, not all pipelines see the complete set of data, and the selection process is typically faster\. The following steps describe the process overview:
<!-- <ol> -->
1. All pipelines are sequentially allocated several small subsets of training data\. The latest data is allocated first\.
2. Each pipeline is trained on every allocated subset of training data and evaluated with testing data (holdout data)\.
3. A linear regression model is applied to each pipeline by using the data set described in the previous step\.
4. The accuracy score of the pipeline is projected on the entire training data set\. This method results in a data set containing the accuracy and size of allocated data for each pipeline\.
5. The best pipeline is selected according to the projected accuracy and is allotted rank 1\.
6. More data is allocated to the best pipeline\. Then, the projected accuracy is updated for the other pipelines\.
7. The prior two steps are repeated until the top *N* pipelines are trained on all the data\.
<!-- </ol> -->
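As a rough illustration of the allocate-and-project idea (not the actual T-Daub implementation), the following sketch scores two candidate estimators on growing slices of the most recent data, fits a line of score versus allocation size for each, and ranks them by the score projected at the full training length. All data and candidate choices here are invented for the example.

```python
# Simplified illustration of T-Daub-style pipeline selection (not the real implementation):
# score candidates on growing slices of the latest data, then project accuracy to full size.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.svm import SVR

rng = np.random.default_rng(0)
y = np.sin(np.arange(300) / 10) + rng.normal(0, 0.1, 300)   # toy series
X = np.arange(300).reshape(-1, 1)
X_train, y_train, X_hold, y_hold = X[:280], y[:280], X[280:], y[280:]

candidates = {"random_forest": RandomForestRegressor(random_state=0), "svr": SVR()}
allocations = [80, 120, 160, 200]
projected = {}

for name, model in candidates.items():
    scores = []
    for n in allocations:
        model.fit(X_train[-n:], y_train[-n:])   # latest data is allocated first
        scores.append(-mean_absolute_error(y_hold, model.predict(X_hold)))
    # Fit score as a linear function of allocation size and project to the full length.
    line = LinearRegression().fit(np.array(allocations).reshape(-1, 1), scores)
    projected[name] = line.predict([[len(X_train)]])[0]

best = max(projected, key=projected.get)
print(projected, "-> best candidate:", best)
```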
### Stage 3: Model evaluation ###
In this step, the winning pipelines *N* are retrained on the entire training data set *T*\. Further, they are evaluated with the holdout data *H*\.
### Stage 4: Final pipeline generation ###
In this step, the winning pipelines are retrained on the entire data set (*L*) and generated as the final pipelines\.
As the retraining of each pipeline completes, the pipeline is posted to the leaderboard\. You can select to inspect the pipeline details or save the pipeline as a model\.
### Stage 5: Backtest ###
In the final step, the winning pipelines are retrained and evaluated by using the backtest method\. The following steps describe the backtest method:
<!-- <ol> -->
1. The training data length is determined based on the number of backtests, gap length, and holdout size\. To learn more about these parameters, see [Building a time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html)\.
2. Starting from the oldest data, the experiment is trained by using the training data\.
3. Further, the experiment is evaluated on the first validation data set\. If the gap length is non\-zero, any data in the gap is skipped over\.
4. The training data window is advanced by the holdout size and gap length to form a new training set\.
5. A fresh experiment is trained with this new data and evaluated with the next validation data set\.
6. The prior two steps are repeated for the remaining backtesting periods\.
<!-- </ol> -->
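A simplified sketch of this rolling scheme, using hypothetical values for the number of backtests, gap length, and holdout size:

```python
# Rolling backtest index generation only (no model training), following the steps above.
# number_of_backtests, gap_length, and holdout_size are hypothetical example values.
def backtest_windows(n_rows, number_of_backtests=4, gap_length=0, holdout_size=20):
    """Yield (train_end, validation_start, validation_end) index triples, oldest first."""
    step = holdout_size + gap_length
    train_length = n_rows - number_of_backtests * step
    for i in range(number_of_backtests):
        train_end = train_length + i * step
        validation_start = train_end + gap_length
        yield train_end, validation_start, validation_start + holdout_size

for train_end, val_start, val_end in backtest_windows(n_rows=200):
    print(f"train on rows [0, {train_end}), validate on rows [{val_start}, {val_end})")
```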
## Time series optimization metrics ##
Accept the default metric, or choose a metric to optimize for your experiment\.
<!-- <table> -->
| Metric | Description |
| ------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Symmetric Mean Absolute Percentage Error (SMAPE) | At each fitted point, the absolute difference between actual value and predicted value is divided by half the sum of absolute actual value and predicted value\. Then, the average is calculated for all such values across all the fitted points\. |
| Mean Absolute Error (MAE) | Average of absolute differences between the actual values and predicted values\. |
| Root Mean Squared Error (RMSE) | Square root of the mean of the squared differences between the actual values and predicted values\. |
| R^2^ | Measure of how the model performance compares to the baseline model, or mean model\. The R^2^ value must be equal to or less than 1\. A negative R^2^ value means that the model under consideration is worse than the mean model\. A zero R^2^ value means that the model under consideration is as good or as bad as the mean model\. A positive R^2^ value means that the model under consideration is better than the mean model\. |
<!-- </table ""> -->
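If you want to verify scores outside the experiment, the metrics can be computed directly with numpy. The SMAPE function below follows the definition in the table; the sample values are arbitrary.

```python
# Direct numpy implementations of the optimization metrics defined above.
import numpy as np

def smape(actual, predicted):
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    denom = (np.abs(actual) + np.abs(predicted)) / 2
    return np.mean(np.abs(actual - predicted) / denom)

def mae(actual, predicted):
    return np.mean(np.abs(np.asarray(actual, float) - np.asarray(predicted, float)))

def rmse(actual, predicted):
    return np.sqrt(np.mean((np.asarray(actual, float) - np.asarray(predicted, float)) ** 2))

def r2(actual, predicted):
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return 1 - ss_res / ss_tot

y_true, y_pred = [112, 118, 132, 129], [110, 120, 130, 131]
print(smape(y_true, y_pred), mae(y_true, y_pred), rmse(y_true, y_pred), r2(y_true, y_pred))
```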
### Reviewing the metrics for an experiment ###
When you view the results for a time series experiment, you see the values for metrics used to train the experiment in the pipeline leaderboard:

You can see that the accuracy measures for time\-series experiments may vary widely, depending on the experiment data evaluated\.
<!-- <ul> -->
* Validation is the score calculated on training data\.
* Holdout is the score calculated on the reserved holdout data\.
* Backtest is the mean score from all backtest scores\.
<!-- </ul> -->
## Time series algorithms ##
These algorithms are available for your time series experiment\. You can use the algorithms that are selected by default, or you can configure your experiment to include or exclude specific algorithms\.
<!-- <table> -->
| Algorithm | Description |
| ---------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| ARIMA | Autoregressive Integrated Moving Average (ARIMA) model is a typical time series model, which can transform non\-stationary data to stationary data through differencing, and then forecast the next value by using the past values, including the lagged values and lagged forecast errors |
| BATS | The BATS algorithm combines Box\-Cox Transformation, ARMA residuals, Trend, and Seasonality factors to forecast future values\. |
| Ensembler | Ensembler combines multiple forecast methods to improve on the accuracy of a single prediction method and to avoid possible overfitting\. |
| Holt\-Winters | Uses triple exponential smoothing to forecast data points in a series, if the series is repetitive over time (seasonal)\. Two types of Holt\-Winters models are provided: additive Holt\-Winters and multiplicative Holt\-Winters\. |
| Random Forest | Tree\-based regression model where each tree in the ensemble is built from a sample that is drawn with replacement (for example, a bootstrap sample) from the training set\. |
| Support Vector Machine (SVM) | SVMs are a type of machine learning models that can be used for both regression and classification\. SVMs use a hyperplane to divide the data into separate classes\. |
| Linear regression | Builds a linear relationship between time series variable and the date/time or time index with residuals that follow the AR process\. |
<!-- </table ""> -->
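AutoAI selects, configures, and tunes these algorithms for you\. If you want an intuition for what an individual algorithm does, the following sketch fits a classical ARIMA model on a made\-up daily series with the statsmodels library; this is for illustration only and is not the AutoAI implementation\.

    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    # Made-up daily series: uniform frequency, sorted from oldest to newest
    series = pd.Series(
        [112, 118, 132, 129, 121, 135, 148, 148, 136, 119],
        index=pd.date_range("2023-01-01", periods=10, freq="D"),
    )

    # ARIMA(1, 1, 1): one autoregressive lag, one differencing step, one moving-average term
    model = ARIMA(series, order=(1, 1, 1)).fit()

    # Forecast the next 3 time points (the forecast window)
    print(model.forecast(steps=3))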
## Supported date and time formats ##
The date/time formats supported in time series experiments are based on the definitions that are provided by [dateutil](https://dateutil.readthedocs.io/en/stable/parser.html)\.
Supported date formats are:
Common:
YYYY
YYYY-MM, YYYY/MM, or YYYYMM
YYYY-MM-DD or YYYYMMDD
mm/dd/yyyy
mm-dd-yyyy
JAN YYYY
Uncommon:
YYYY-Www or YYYYWww - ISO week (day defaults to 0)
YYYY-Www-D or YYYYWwwD - ISO week and day
Numbering for the ISO week and day values follows the same logic as datetime\.date\.isocalendar()\.
Supported time formats are:
hh
hh:mm or hhmm
hh:mm:ss or hhmmss
hh:mm:ss.ssssss (Up to 6 sub-second digits)
dd-MMM
yyyy/mm
**Notes:**
<!-- <ul> -->
* Midnight can be represented as 00:00 or 24:00\. The decimal separator can be either a period or a comma\.
* Dates can be submitted as strings, with double quotation marks, such as "1958\-01\-16"\.
<!-- </ul> -->
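Because parsing follows dateutil, you can check ahead of time how a timestamp column will be interpreted\. A quick sketch with illustrative strings:

    from dateutil import parser

    # A few strings in formats from the lists above
    for value in ["1958-01-16", "2021/07", "JAN 2021", "07/04/2020", "12:30:45.123456"]:
        print(value, "->", parser.parse(value))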
## Supporting features ##
Supporting features, also known as exogenous features, are input features that can influence the prediction target\. You can use supporting features to include additional columns from your data set to improve the prediction and increase your model’s accuracy\. For example, in a time series experiment to predict prices over time, a supporting feature might be data on sales and promotions\. Or, in a model that forecasts energy consumption, including daily temperature makes the forecast more accurate\.
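In tabular terms, supporting features are extra columns that sit alongside the date/time column and the prediction target\. A minimal sketch of such a training file, with hypothetical column names:

    import pandas as pd

    # "sales" is the prediction target; "temperature" and "promotion" are supporting features
    df = pd.DataFrame({
        "date":        pd.date_range("2023-06-01", periods=5, freq="D"),
        "sales":       [200, 215, 230, 260, 240],
        "temperature": [21.5, 23.0, 26.5, 30.0, 27.5],
        "promotion":   [0, 0, 1, 1, 0],
    })
    df.to_csv("sales_with_supporting_features.csv", index=False)  # single CSV, uniform daily frequency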
### Algorithms and pipelines that use Supporting features ###
Only a subset of the algorithms can use supporting features\. For example, Holt\-Winters and BATS do not support them\. Algorithms that do not support supporting features ignore your selection for supporting features when you run the experiment\.
Some algorithms use supporting features for certain variations of the algorithm, but not for others\. For example, you can generate two different pipelines with the Random Forest algorithm, *RandomForestRegressor* and *ExogenousRandomForestRegressor*\. The *ExogenousRandomForestRegressor* variation provides support for supporting features, whereas *RandomForestRegressor* does not\.
This table details whether an algorithm provides support for Supporting features in a time series experiment:
<!-- <table> -->
| Algorithm | Pipeline | Provide support for Supporting features |
| ------------- | ----------------------------------- | --------------------------------------- |
| Random forest | RandomForestRegressor | No |
| Random forest | ExogenousRandomForestRegressor | Yes |
| SVM | SVM | No |
| SVM | ExogenousSVM | Yes |
| Ensembler | LocalizedFlattenEnsembler | Yes |
| Ensembler | DifferenceFlattenEnsembler | No |
| Ensembler | FlattenEnsembler | No |
| Ensembler | ExogenousLocalizedFlattenEnsembler | Yes |
| Ensembler | ExogenousDifferenceFlattenEnsembler | Yes |
| Ensembler | ExogenousFlattenEnsembler | Yes |
| Regression | MT2RForecaster | No |
| Regression | ExogenousMT2RForecaster | Yes |
| Holt\-winters | HoltWinterAdditive | No |
| Holt\-winters | HoltWinterMultiplicative | No |
| BATS | BATS | No |
| ARIMA | ARIMA | No |
| ARIMA | ARIMAX | Yes |
| ARIMA | ARIMAX\_RSAR | Yes |
| ARIMA | ARIMAX\_PALR | Yes |
| ARIMA | ARIMAX\_RAR | Yes |
| ARIMA | ARIMAX\_DMLR | Yes |
<!-- </table ""> -->
## Learn more ##
[Scoring a time series model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-score.html)
**Parent topic:**[Building a time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html)
<!-- </article "role="article" "> -->
|
7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html?context=cdpaas&locale=en | Building a time series experiment | Building a time series experiment
Use AutoAI to create a time series experiment to predict future activity, such as stock prices or temperatures, over a specified date or time range.
Time series overview
A time series experiment is a method of forecasting that uses historical observations to predict future values. The experiment automatically builds many pipelines using machine learning models, such as random forest regression and Support Vector Machines (SVMs), as well as statistical time series models, such as ARIMA and Holt-Winters. Then, the experiment recommends the best pipeline according to the pipeline performance evaluated on a holdout data set or backtest data sets.
Unlike a standard AutoAI experiment, which builds a set of pipelines to completion and then ranks them, a time series experiment evaluates pipelines earlier in the process and only completes and tests the best-performing pipelines.

For details on the various stages of training and testing a time series experiment, see [Time series implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html).
Predicting anomalies in a time series experiment
You can configure your time series experiment to predict anomalies (outliers) in your data or predictions. To configure anomaly prediction for your experiment, follow the steps in [Creating a time series anomaly prediction model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap.html).
Using supporting features to improve predictions
When you configure your time series experiment, you can choose to specify supporting features, also known as exogenous features. Supporting features are features that influence or add context to the prediction target. For example, if you are forecasting ice cream sales, daily temperature would be a logical supporting feature that would make the forecast more accurate.
Leveraging future values for supporting features
If you know the future values for the supporting features, you can leverage those future values when you deploy the model. For example, if you are training a model to forecast future t-shirt sales, you can include promotional discounts as a supporting feature to enhance the prediction. Inputting the future value of the promotion then makes the forecast more accurate.
Data requirements
These are the current data requirements for training a time series experiment:
* The training data must be a single file in CSV format.
* The file must contain one or more time series columns and optionally contain a timestamp column. For a list of supported date/time formats, see [AutoAI time series implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html).
* If the data source contains a timestamp column, ensure that the data is sampled at uniform frequency. That is, the difference in timestamps of adjacent rows is the same. For example, data can be in increments of 1 minute, 1 hour, or one day. The specified timestamp is used to determine the lookback window to improve the model accuracy.
Note:If the file size is larger than 1 GB, sort the data in descending order by the timestamp, and only the first 1 GB is used to train the experiment.
* If the data source does not contain a timestamp column, ensure that the data is sampled at regular intervals and sorted in ascending order according to the sample date/time. That is, the value in the first row is the oldest, and the value in the last row is the most recent.
Note: If the file size is larger than 1 GB, truncate the file so it is smaller than 1 GB.
* Select what data to use when training the final pipelines. If you choose to include training data only, the generated notebooks will include a cell for retrieving the holdout data used to evaluate each pipeline.
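Before you upload a file, you can confirm that it meets the sampling requirements in the preceding list with a quick check like this sketch (the file and column names are hypothetical).

    import pandas as pd

    df = pd.read_csv("training_data.csv", parse_dates=["date"])

    # Uniform frequency: the difference between adjacent timestamps is a single, constant value
    deltas = df["date"].diff().dropna().unique()
    print("Uniform frequency:", len(deltas) == 1)

    # Data without a timestamp column must instead be sorted oldest-first
    print("Sorted ascending:", df["date"].is_monotonic_increasing)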
Choose data from your project or upload it from your file system or from the asset browser, then click Continue. Click the preview icon after the data source name to review your data. Optionally, you can add a second file as holdout data for testing the trained pipelines.
Configuring a time series experiment
When you configure the details for an experiment, click Yes to Enable time series and complete the experiment details.
Field Description
Prediction columns The time series columns that you want to predict based on the previous values. You can specify one or more columns to predict.
Date/time column The column that indicates the date/time at which the time series values occur.
Lookback window A parameter that indicates how many previous time series values are used to predict the current time point.
Forecast window The range that you want to predict based on the data in the lookback window.
The prediction summary shows you the experiment type and the metric that is selected for optimizing the experiment.
Configuring experiment settings
To configure more details for your time series experiment, click Experiment settings.
General prediction settings
On the General panel for prediction settings, you can optionally change the metric used to optimize the experiment or specify the algorithms to consider or the number of pipelines to generate.
Field Description
Prediction type View or change the prediction type based on prediction column for your experiment. For time series experiments, Time series forecast is selected by default. <br>Note: If you change the prediction type, other prediction settings for your experiment are automatically changed.
Optimized metric View or change the recommended optimized metric for your experiment.
Optimized algorithm selection Not supported for time series experiments.
Algorithms to include Select the algorithms that you want your experiment to use to create pipelines. Algorithms and pipelines that support the use of supporting features are indicated by a checkmark.
Pipelines to complete View or change the number of pipelines to generate for your experiment.
Time series configuration details
On the Time series pane for prediction settings, configure the details for how to train the experiment and generate predictions.
Field Description
Date/time column View or change the date/time column for the experiment.
Lookback window View or update the number of previous time series values used to predict the current time point.
Forecast window View or update the range that you want to predict, based on the data in the lookback window.
Configuring data source settings
To configure details for your input data, click Experiment settings and select Data source.
General data source settings
On the General panel for data source settings, you can modify your dataset to interpolate missing values, split your dataset into training and holdout data, and input supporting features.
Field Description
Duplicate rows Not supported for time series experiments.
Subsample data Not supported for time series experiments.
Text feature engineering Not supported for time series experiments.
Final training data set Select what data to use when training the final pipelines: just the training data or the training and holdout data. If you choose to include training data only, generated notebooks for this experiment will include a cell for retrieving the holdout data used to evaluate each pipeline.
Supporting features Choose additional columns from your data set as Supporting features to support predictions and increase your model’s accuracy. You can also use future values for Supporting features by enabling Leverage future values of supporting features. <br>Note: You can only use supporting features with selected algorithms and pipelines. For more information on algorithms and pipelines that support the use of supporting features, see [Time series implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html).
Data imputation Use data imputation to replace missing values in your dataset with substituted values. By enabling this option, you can specify how missing values should be interpolated in your data. To learn more about data imputation, see Data imputation in AutoAI experiments.
Training and holdout data Choose to reserve some data from your training data set to test the experiment. Alternatively, upload a separate file of holdout data. The holdout data file must match the schema of the training data.
Configuring time series data
To configure the time series data, you can adjust the settings for the time series data that is related to backtesting the experiment. Backtesting provides a means of validating a time-series model by using historical data.
In a typical machine learning experiment, you can hold back part of the data randomly to test the resulting model for accuracy. To validate a time series model, you must preserve the time order relationship between the training data and testing data.
The following steps describe the backtest method:
1. The training data length is determined based on the number of backtests, gap length, and holdout size. To learn more about these parameters, see [Building a time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html?context=cdpaas&locale=en).
2. Starting from the oldest data, the experiment is trained using the training data.
3. The experiment is evaluated on the first validation data set. If the gap length is non-zero, any data in the gap is skipped over.
4. The training data window is advanced by increasing the holdout size and gap length to form a new training set.
5. A fresh experiment is trained with this new data and evaluated with the next validation data set.
6. The prior two steps are repeated for the remaining backtesting periods.
To adjust the backtesting configuration:
1. Open Experiment settings.
2. From Data sources, click Time series.
3. (Optional): Adjust the settings as shown in the table.
Field Description
Number of backtests Backtesting is similar to cross-validation for date/time periods. Optionally customize the number of backtests for your experiment.
Holdout The size of the holdout set and each validation set for backtesting. The validation length can be adjusted by changing the holdout length.
Gap length The number of time points between the training data set and validation data set for each backtest. When the parameter value is non-zero, the time series values in the gap will not be used to train the experiment or evaluate the current backtest.

The visualization for the configuration settings illustrates the backtesting flow. The graphic is interactive, so you can manipulate the settings from the graphic or from the configuration fields. For example, by adjusting the gap length, you can see model validation results on earlier time periods of the data without increasing the number of backtests.
Interpreting the experiment results
After you run your time series experiment, you can examine the resulting pipelines to get insights into the experiment details. Pipelines that use Supporting features are indicated by the SUP enhancement tag to distinguish them from pipelines that don’t use these features. To view details:
* Hover over nodes on the visualization to get details about the pipelines as they are being generated.
* Toggle to the Progress Map view to see a different view of the training process. You can hover over each node in the process for details.
* After the final pipelines are completed and written to the leaderboard, you can click a pipeline to see the performance details.
* Click View discarded pipelines to view the algorithms that are used for the pipelines that are not selected as top performers.
* Save the experiment code as a notebook that you can review.
* Save a particular pipeline as a notebook that you can review.
Watch this video to see how to run a time series experiment and create a model in a Jupyter notebook using training and holdout data.
Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform.
This video provides a visual method to learn the concepts and tasks in this documentation.
Next steps
* Follow a step-by-step tutorial to [train a univariate time series model to predict minimum temperatures by using sample data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html).
* Follow a step-by-step tutorial to [train a time series experiment with supporting features](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html).
* Learn about [scoring a deployed time series model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-score.html).
* Learn about using the [API for AutoAI time series experiments](https://lukasz-cmielowski.medium.com/predicting-covid19-cases-with-autoai-time-series-api-f6793acee48d).
Additional resources
* For an introduction to forecasting with AutoAI time series experiments, see the blog post [Right on time(series): Introducing Watson Studio’s AutoAI Time Series](https://medium.com/ibm-data-ai/right-on-time-series-introducing-watson-studios-autoai-time-series-5175dbe66154).
* For more information about creating a time series experiment, see this blog post about [creating a new time series experiment](https://medium.com/ibm-data-ai/right-on-time-series-introducing-watson-studios-autoai-time-series-5175dbe66154).
* Read a blog post about [adding supporting features to a time series experiment](https://medium.com/ibm-data-ai/improve-autoai-time-series-forecasts-with-supporting-features-using-ibm-cloud-pak-for-data-as-a-ff24cc85f6b8).
* Review a [sample notebook](https://github.com/IBM/watson-machine-learning-samples/blob/master/cloud/notebooks/python_sdk/experiments/autoai/Use%20AutoAI%20and%20timeseries%20data%20with%20supporting%20features%20to%20predict%20PM2.5.ipynb) for a time series experiment with supporting features.
* Read a blog post about [adding supporting features to a time series experiment using the API](https://medium.com/ibm-data-ai/forecasting-pm2-5-using-autoai-time-series-api-with-supporting-features-12bbad18cb36).
Next steps
* [Tutorial: AutoAI univariate time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html)
* [Tutorial: AutoAI supporting features time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html)
* [Time series experiment implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html)
* [Scoring a time series model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-score.html)
Parent topic:[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
| # Building a time series experiment #
Use AutoAI to create a time series experiment to predict future activity, such as stock prices or temperatures, over a specified date or time range\.
## Time series overview ##
A time series experiment is a method of forecasting that uses historical observations to predict future values\. The experiment automatically builds many pipelines using machine learning models, such as random forest regression and Support Vector Machines (SVMs), as well as statistical time series models, such as ARIMA and Holt\-Winters\. Then, the experiment recommends the best pipeline according to the pipeline performance evaluated on a holdout data set or backtest data sets\.
Unlike a standard AutoAI experiment, which builds a set of pipelines to completion and then ranks them, a time series experiment evaluates pipelines earlier in the process and only completes and tests the best\-performing pipelines\.

For details on the various stages of training and testing a time series experiment, see [Time series implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html)\.
## Predicting anomalies in a time series experiment ##
You can configure your time series experiment to predict anomalies (outliers) in your data or predictions\. To configure anomaly prediction for your experiment, follow the steps in [Creating a time series anomaly prediction model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap.html)\.
## Using supporting features to improve predictions ##
When you configure your time series experiment, you can choose to specify *supporting features*, also known as *exogenous features*\. Supporting features are features that influence or add context to the prediction target\. For example, if you are forecasting ice cream sales, daily temperature would be a logical supporting feature that would make the forecast more accurate\.
### Leveraging future values for supporting features ###
If you know the future values for the supporting features, you can leverage those future values when you deploy the model\. For example, if you are training a model to forecast future t\-shirt sales, you can include promotional discounts as a supporting feature to enhance the prediction\. Inputting the *future value* of the promotion then makes the forecast more accurate\.
## Data requirements ##
These are the current data requirements for training a time series experiment:
<!-- <ul> -->
* The training data must be a single file in CSV format\.
* The file must contain one or more time series columns and optionally contain a timestamp column\. For a list of supported date/time formats, see [AutoAI time series implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html)\.
* If the data source contains a timestamp column, ensure that the data is sampled at uniform frequency\. That is, the difference in timestamps of adjacent rows is the same\. For example, data can be in increments of 1 minute, 1 hour, or one day\. The specified timestamp is used to determine the lookback window to improve the model accuracy\.
Note:If the file size is larger than 1 GB, sort the data in *descending* order by the timestamp, and only the first 1 GB is used to train the experiment.
* If the data source does not contain a timestamp column, ensure that the data is sampled at regular intervals and sorted in *ascending* order according to the sample date/time\. That is, the value in the first row is the oldest, and the value in the last row is the most recent\.
Note: If the file size is larger than 1 GB, truncate the file so it is smaller than 1 GB.
* Select what data to use when training the final pipelines\. If you choose to include training data only, the generated notebooks will include a cell for retrieving the holdout data used to evaluate each pipeline\.
<!-- </ul> -->
Choose data from your project or upload it from your file system or from the asset browser, then click **Continue**\. Click the preview icon after the data source name to review your data\. Optionally, you can add a second file as holdout data for testing the trained pipelines\.
## Configuring a time series experiment ##
When you configure the details for an experiment, click **Yes** to *Enable time series* and complete the experiment details\.
<!-- <table> -->
| Field | Description |
| ------------------ | -------------------------------------------------------------------------------------------------------------------------------- |
| Prediction columns | The time series columns that you want to predict based on the previous values\. You can specify one or more columns to predict\. |
| Date/time column | The column that indicates the date/time at which the time series values occur\. |
| Lookback window | A parameter that indicates how many previous time series values are used to predict the current time point\. |
| Forecast window | The range that you want to predict based on the data in the lookback window\. |
<!-- </table ""> -->
The prediction summary shows you the experiment type and the metric that is selected for optimizing the experiment\.
## Configuring experiment settings ##
To configure more details for your time series experiment, click **Experiment settings**\.
### General prediction settings ###
On the *General* panel for prediction settings, you can optionally change the metric used to optimize the experiment or specify the algorithms to consider or the number of pipelines to generate\.
<!-- <table> -->
| Field | Description |
| ----------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Prediction type | View or change the prediction type based on prediction column for your experiment\. For time series experiments, *Time series forecast* is selected by default\. <br>**Note:** If you change the prediction type, other prediction settings for your experiment are automatically changed\. |
| Optimized metric | View or change the recommended optimized metric for your experiment\. |
| Optimized algorithm selection | Not supported for time series experiments\. |
| Algorithms to include | Select the algorithms that you want your experiment to use to create pipelines\. Algorithms and pipelines that support the use of supporting features are indicated by a checkmark\. |
| Pipelines to complete | View or change the number of pipelines to generate for your experiment\. |
<!-- </table ""> -->
### Time series configuration details ###
On the Time series pane for prediction settings, configure the details for how to train the experiment and generate predictions\.
<!-- <table> -->
| Field | Description |
| ---------------- | ------------------------------------------------------------------------------------------------- |
| Date/time column | View or change the date/time column for the experiment\. |
| Lookback window | View or update the number of previous time series values used to predict the current time point\. |
| Forecast window | View or update the range that you want to predict, based on the data in the lookback window\. |
<!-- </table ""> -->
## Configuring data source settings ##
To configure details for your input data, click **Experiment settings** and select **Data source**\.
### General data source settings ###
On the *General* panel for data source settings, you can modify your dataset to interpolate missing values, split your dataset into training and holdout data, and input supporting features\.
<!-- <table> -->
| Field | Description |
| ------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Duplicate rows | Not supported for time series experiments\. |
| Subsample data | Not supported for time series experiments\. |
| Text feature engineering | Not supported for time series experiments\. |
| Final training data set | Select what data to use when training the final pipelines: just the training data or the training and holdout data\. If you choose to include training data only, generated notebooks for this experiment will include a cell for retrieving the holdout data used to evaluate each pipeline\. |
| Supporting features | Choose additional columns from your data set as Supporting features to support predictions and increase your model’s accuracy\. You can also use future values for Supporting features by enabling **Leverage future values of supporting features**\. <br>**Note:** You can only use supporting features with selected algorithms and pipelines\. For more information on algorithms and pipelines that support the use of supporting features, see [Time series implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html)\. |
| Data imputation | Use data imputation to replace missing values in your dataset with substituted values\. By enabling this option, you can specify how missing values should be interpolated in your data\. To learn more about data imputation, see Data imputation in AutoAI experiments\. |
| Training and holdout data | Choose to reserve some data from your training data set to test the experiment\. Alternatively, upload a separate file of holdout data\. The holdout data file must match the schema of the training data\. |
<!-- </table ""> -->
## Configuring time series data ##
To configure the time series data, you can adjust the settings for the time series data that is related to *backtesting* the experiment\. Backtesting provides a means of validating a time\-series model by using historical data\.
In a typical machine learning experiment, you can hold back part of the data randomly to test the resulting model for accuracy\. To validate a time series model, you must preserve the time order relationship between the training data and testing data\.
The following steps describe the backtest method:
<!-- <ol> -->
1. The training data length is determined based on the number of backtests, gap length, and holdout size\. To learn more about these parameters, see [Building a time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html?context=cdpaas&locale=en)\.
2. Starting from the oldest data, the experiment is trained using the training data\.
3. The experiment is evaluated on the first validation data set\. If the gap length is non\-zero, any data in the gap is skipped over\.
4. The training data window is advanced by increasing the holdout size and gap length to form a new training set\.
5. A fresh experiment is trained with this new data and evaluated with the next validation data set\.
6. The prior two steps are repeated for the remaining backtesting periods\.
<!-- </ol> -->
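To make the windowing concrete, the following sketch prints the splits that result from 2 backtests, a holdout size of 3, and a gap of 1 time point on a 30\-point series\. The indices are illustrative and this is not the exact AutoAI implementation\.

    n_points, n_backtests, holdout, gap = 30, 2, 3, 1

    # The oldest rows form the first training window; the newest rows are reserved
    # for the backtest validation sets and the final holdout set
    train_len = n_points - (n_backtests + 1) * (gap + holdout)

    for i in range(n_backtests + 1):           # backtest splits plus the final holdout evaluation
        end = train_len + i * (gap + holdout)  # the training window advances forward in time
        valid_start = end + gap                # rows that fall in the gap are skipped
        print(f"split {i}: train rows 0-{end - 1}, "
              f"validate rows {valid_start}-{valid_start + holdout - 1}")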
To adjust the backtesting configuration:
<!-- <ol> -->
1. Open **Experiment settings**\.
2. From *Data sources*, click **Time series**\.
3. (Optional): Adjust the settings as shown in the table\.
<!-- </ol> -->
<!-- <table> -->
| Field | Description |
| ------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Number of backtests | Backtesting is similar to cross\-validation for date/time periods\. Optionally customize the number of backtests for your experiment\. |
| Holdout | The size of the holdout set and each validation set for backtesting\. The validation length can be adjusted by changing the holdout length\. |
| Gap length | The number of time points between the training data set and validation data set for each backtest\. When the parameter value is non\-zero, the time series values in the gap will not be used to train the experiment or evaluate the current backtest\. |
<!-- </table ""> -->

The visualization for the configuration settings illustrates the backtesting flow\. The graphic is interactive, so you can manipulate the settings from the graphic or from the configuration fields\. For example, by adjusting the gap length, you can see model validation results on earlier time periods of the data without increasing the number of backtests\.
## Interpreting the experiment results ##
After you run your time series experiment, you can examine the resulting pipelines to get insights into the experiment details\. Pipelines that use Supporting features are indicated by the SUP enhancement tag to distinguish them from pipelines that don’t use these features\. To view details:
<!-- <ul> -->
* Hover over nodes on the visualization to get details about the pipelines as they are being generated\.
* Toggle to the Progress Map view to see a different view of the training process\. You can hover over each node in the process for details\.
* After the final pipelines are completed and written to the leaderboard, you can click a pipeline to see the performance details\.
* Click **View discarded pipelines** to view the algorithms that are used for the pipelines that are not selected as top performers\.
* Save the experiment code as a notebook that you can review\.
* Save a particular pipeline as a notebook that you can review\.
<!-- </ul> -->
Watch this video to see how to run a time series experiment and create a model in a Jupyter notebook using training and holdout data\.
Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform\.
This video provides a visual method to learn the concepts and tasks in this documentation\.
## Next steps ##
<!-- <ul> -->
* Follow a step\-by\-step tutorial to [train a univariate time series model to predict minimum temperatures by using sample data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html)\.
* Follow a step\-by\-step tutorial to [train a time series experiment with supporting features](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html)\.
* Learn about [scoring a deployed time series model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-score.html)\.
* Learn about using the [API for AutoAI time series experiments](https://lukasz-cmielowski.medium.com/predicting-covid19-cases-with-autoai-time-series-api-f6793acee48d)\.
<!-- </ul> -->
## Additional resources ##
<!-- <ul> -->
* For an introduction to forecasting with AutoAI time series experiments, see the blog post [Right on time(series): Introducing Watson Studio’s AutoAI Time Series](https://medium.com/ibm-data-ai/right-on-time-series-introducing-watson-studios-autoai-time-series-5175dbe66154)\.
* For more information about creating a time series experiment, see this blog post about [creating a new time series experiment](https://medium.com/ibm-data-ai/right-on-time-series-introducing-watson-studios-autoai-time-series-5175dbe66154)\.
* Read a blog post about [adding supporting features to a time series experiment](https://medium.com/ibm-data-ai/improve-autoai-time-series-forecasts-with-supporting-features-using-ibm-cloud-pak-for-data-as-a-ff24cc85f6b8)\.
* Review a [sample notebook](https://github.com/IBM/watson-machine-learning-samples/blob/master/cloud/notebooks/python_sdk/experiments/autoai/Use%20AutoAI%20and%20timeseries%20data%20with%20supporting%20features%20to%20predict%20PM2.5.ipynb) for a time series experiment with supporting features\.
* Read a blog post about [adding supporting features to a time series experiment using the API](https://medium.com/ibm-data-ai/forecasting-pm2-5-using-autoai-time-series-api-with-supporting-features-12bbad18cb36)\.
<!-- </ul> -->
## Next steps ##
<!-- <ul> -->
* [Tutorial: AutoAI univariate time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html)
* [Tutorial: AutoAI supporting features time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html)
* [Time series experiment implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html)
* [Scoring a time series model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-score.html)
<!-- </ul> -->
**Parent topic:**[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
<!-- </article "role="article" "> -->
|
163EEB3DBAFF3B01D831F717EEB7487642C93080 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-troubleshoot.html?context=cdpaas&locale=en | Troubleshooting AutoAI experiments | Troubleshooting AutoAI experiments
The following list contains the common problems that are known for AutoAI. If your AutoAI experiment fails to run or deploy successfully, review some of these common problems and resolutions.
Passing incomplete or outlier input value to deployment can lead to outlier prediction
After you deploy your machine learning model, note that providing input data that is markedly different from the data that was used to train the model can produce an outlier prediction. When linear regression algorithms such as Ridge and LinearRegression are passed an out-of-scale input value, the model extrapolates the values and assigns it a relatively large weight, producing a score that is not in line with conforming data.
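The following sketch illustrates the effect with a toy linear model; it is not an AutoAI pipeline, but it shows how a prediction for an out-of-scale input lands far outside the range seen in training.

    import numpy as np
    from sklearn.linear_model import Ridge

    # Train on inputs in the range 0-10
    X_train = np.arange(0, 10, 0.5).reshape(-1, 1)
    y_train = 3.0 * X_train.ravel() + np.random.default_rng(0).normal(0, 0.5, X_train.shape[0])

    model = Ridge().fit(X_train, y_train)

    # A conforming input and an out-of-scale input: the second is extrapolated
    print(model.predict([[5.0], [5000.0]]))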
Time Series pipeline with supporting features fails on retrieval
If you train an AutoAI Time Series experiment by using supporting features and you get the error 'Error: name 'tspy_interpolators' is not defined' when the system tries to retrieve the pipeline for predictions, check to make sure your system is running Java 8 or higher.
Running a pipeline or experiment notebook fails with a software specification error
If supported software specifications for AutoAI experiments change, you might get an error when you run a notebook built with an older software specification, such as an older version of Python. In this case, run the experiment again, then save a new notebook and try again.
Resolving an Out of Memory error
If you get a memory error when you run a cell from an AutoAI generated notebook, create a notebook runtime with more resources for the AutoAI notebook and execute the cell again.
Notebook for an experiment with subsampling can fail generating predictions
If you do pipeline refinery to prepare the model, and the experiment uses subsampling of the data during training, you might encounter an “unknown class” error when you run a notebook that is saved from the experiment.
The problem stems from an unknown class that is not included in the training data set. The workaround is to use the entire data set for training or re-create the subsampling that is used in the experiment.
To subsample the training data (before fit()), provide sample size by number of rows or by fraction of the sample (as done in the experiment).
* If number of records was used in subsampling settings, you can increase the value of n. For example:
train_df = train_df.sample(n=1000)
* If subsampling is represented as a fraction of the data set, increase the value of frac. For example:
train_df = train_df.sample(frac=0.4, random_state=experiment_metadata['random_state'])
Pipeline creation fails for binary classification
AutoAI analyzes a subset of the data to determine the best fit for the experiment type. If the sample data in the prediction column contains only two values, AutoAI recommends a binary classification experiment and applies the related algorithms. However, if the full data set contains more than two values in the prediction column, the binary classification fails and you get an error that indicates that AutoAI cannot create the pipelines.
In this case, manually change the experiment type from binary to either multiclass, for a defined set of values, or regression, for an unspecified set of values.
1. Click the Reconfigure Experiment icon to edit the experiment settings.
2. On the Prediction page of Experiment Settings, change the prediction type to the one that best matches the data in the prediction column.
3. Save the changes and run the experiment again.
Next steps
[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
Parent topic:[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
| # Troubleshooting AutoAI experiments #
The following list contains the common problems that are known for AutoAI\. If your AutoAI experiment fails to run or deploy successfully, review some of these common problems and resolutions\.
## Passing incomplete or outlier input value to deployment can lead to outlier prediction ##
After you deploy your machine learning model, note that providing input data that is markedly different from the data that was used to train the model can produce an outlier prediction\. When linear regression algorithms such as Ridge and LinearRegression are passed an out\-of\-scale input value, the model extrapolates the values and assigns it a relatively large weight, producing a score that is not in line with conforming data\.
## Time Series pipeline with supporting features fails on retrieval ##
If you train an AutoAI Time Series experiment by using supporting features and you get the error 'Error: name 'tspy\_interpolators' is not defined' when the system tries to retrieve the pipeline for predictions, check to make sure your system is running Java 8 or higher\.
## Running a pipeline or experiment notebook fails with a software specification error ##
If supported software specifications for AutoAI experiments change, you might get an error when you run a notebook built with an older software specification, such as an older version of Python\. In this case, run the experiment again, then save a new notebook and try again\.
## Resolving an Out of Memory error ##
If you get a memory error when you run a cell from an AutoAI generated notebook, create a notebook runtime with more resources for the AutoAI notebook and execute the cell again\.
## Notebook for an experiment with subsampling can fail generating predictions ##
If you do pipeline refinery to prepare the model, and the experiment uses subsampling of the data during training, you might encounter an “unknown class” error when you run a notebook that is saved from the experiment\.
The problem stems from an unknown class that is not included in the training data set\. The workaround is to use the entire data set for training or re\-create the subsampling that is used in the experiment\.
To subsample the training data (before `fit()`), provide sample size by number of rows or by fraction of the sample (as done in the experiment)\.
<!-- <ul> -->
* If number of records was used in subsampling settings, you can increase the value of `n`\. For example:
train_df = train_df.sample(n=1000)
* If subsampling is represented as a fraction of the data set, increase the value of `frac`\. For example:
train_df = train_df.sample(frac=0.4, random_state=experiment_metadata['random_state'])
<!-- </ul> -->
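If you prefer to keep subsampling, another option is to sample within each class so that no class is missing from the training subset\. A sketch, where `label_column` is a placeholder for your prediction column:

    # Sample 40% of the rows from every class so that all classes remain represented
    train_df = (
        train_df.groupby("label_column", group_keys=False)
                .apply(lambda group: group.sample(frac=0.4, random_state=experiment_metadata['random_state']))
    )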
## Pipeline creation fails for binary classification ##
AutoAI analyzes a subset of the data to determine the best fit for the experiment type\. If the sample data in the prediction column contains only two values, AutoAI recommends a binary classification experiment and applies the related algorithms\. However, if the full data set contains more than two values in the prediction column, the binary classification fails and you get an error that indicates that AutoAI cannot create the pipelines\.
In this case, manually change the experiment type from binary to either multiclass, for a defined set of values, or regression, for an unspecified set of values\.
<!-- <ol> -->
1. Click the **Reconfigure Experiment** icon to edit the experiment settings\.
2. On the *Prediction* page of Experiment Settings, change the prediction type to the one that best matches the data in the prediction column\.
3. Save the changes and run the experiment again\.
<!-- </ol> -->
## Next steps ##
[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
**Parent topic:**[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
<!-- </article "role="article" "> -->
|
6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap-tutorial.html?context=cdpaas&locale=en | Tutorial: Create a time series anomaly prediction experiment | Tutorial: Create a time series anomaly prediction experiment
This tutorial guides you through using AutoAI and sample data to train a time series experiment to detect if daily electricity usage values are normal or anomalies (outliers).
When you set up the sample experiment, you load data that analyzes daily electricity usage from Industry A to determine whether a value is normal or an anomaly. Then, the experiment generates pipelines that use algorithms to label these predicted values as normal or an anomaly. After generating the pipelines, AutoAI chooses the best performers, and presents them in a leaderboard for you to review.
Tech preview This is a technology preview and is not yet supported for use in production environments.
Data set overview
This tutorial uses the Electricity usage anomalies sample data set from the Watson Studio Gallery. This data set describes the annual electricity usage for Industry A. The first column indicates the electricity usage and the second column indicates the date, in a day-by-day format.

Tasks overview
In this tutorial, follow these steps to create an anomaly prediction experiment:
1. [Create an AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap-tutorial.html?context=cdpaas&locale=en#step1)
2. [View the experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap-tutorial.html?context=cdpaas&locale=en#step2)
3. [Review experiment results](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap-tutorial.html?context=cdpaas&locale=en#step3)
4. [Deploy the trained model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap-tutorial.html?context=cdpaas&locale=en#step4)
5. [Test the deployed model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap-tutorial.html?context=cdpaas&locale=en#step5)
Create an AutoAI experiment
Create an AutoAI experiment and add sample data to your experiment.
1. From the navigation menu , click Projects > View all projects.
2. Open an existing project or [create a new project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) to store the anomaly prediction experiment.
3. On the Assets tab from within your project, click New asset > Build machine learning models automatically.
4. Click Samples > Electricity usage anomalies sample data, then select Next. The AutoAI experiment name and description are pre-populated by the sample data.
5. If prompted, associate a Watson Machine Learning instance with your AutoAI experiment.
1. Click Associate a Machine Learning service instance and select an instance of Watson Machine Learning.
2. Click Reload to confirm your configuration.
6. Click Create.
View the experiment details
AutoAI pre-populates the details fields for the sample experiment:

* Time series analysis type: Anomaly prediction predicts whether future values in a series are anomalies (outliers). A prediction of 1 indicates a normal value and a prediction of -1 indicates an anomaly.
* Feature column: industry_a_usage is the predicted value and indicates how much electricity Industry A consumes.
* Date/Time column: date indicates the time increments for the experiment. For this experiment, there is one prediction value per day.
* This experiment is optimized for the model performance metric: Average Precision. Average precision evaluates the performance of object detection and segmentation systems.
Click Run experiment to train the model. The experiment takes several minutes to complete.
Review the experiment results
The relationship map shows the transformations that are used to create pipelines. Follow these steps to review experiment results and save the pipeline with the best performance. 
1. The leaderboard lists and saves the three best performing pipelines. Click the pipeline name with Rank 1 to review the details of the pipeline. For details on anomaly prediction metrics, see [Creating a time series anomaly prediction experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap.html).
2. Select the pipeline with Rank 1 and Save the pipeline as a model. The model name is pre-populated with the default name.
3. Click Create to confirm your pipeline selection.
Deploy the trained model
Before the trained model can make predictions on external values, you must deploy the model. Follow these steps to promote your trained model to a deployment space.
1. Deploy the model from the Model details page. To access the Model details page, choose one of these options:
* From the notification displayed when you save the model, click View in project.
* From the project's Assets, select the model’s name in Models.
2. From the Model details page, click Promote to Deployment Space. Then, select or create a deployment space to deploy the model.
3. Select Go to the model in the space after promoting it and click Promote to promote the model.
Testing the model
After promoting the model to the deployment space, you are ready to test your trained model with new data values.
1. Select New Deployment and create a new deployment with the following fields:
1. Deployment type: Online
2. Name: Electricity usage online deployment
2. Click Create and wait for the status to update to Deployed.
3. After the deployment initializes, click the deployment. Use Test input to manually enter and evaluate values or use JSON input to attach a data set.

4. Click Predict to see whether there are any anomalies in the values.
Note:-1 indicates an anomaly; 1 indicates a normal value.

Next steps
[Building a time series forecast experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html)
| # Tutorial: Create a time series anomaly prediction experiment #
This tutorial guides you through using AutoAI and sample data to train a time series experiment to detect if daily electricity usage values are normal or anomalies (outliers)\.
When you set up the sample experiment, you load data that analyzes daily electricity usage from Industry A to determine whether a value is *normal* or an *anomaly*\. Then, the experiment generates pipelines that use algorithms to label these predicted values as normal or an anomaly\. After generating the pipelines, AutoAI chooses the best performers, and presents them in a leaderboard for you to review\.
Tech preview This is a technology preview and is not yet supported for use in production environments\.
## Data set overview ##
This tutorial uses the *Electricity usage anomalies sample data* set from the Watson Studio Gallery\. This data set describes the annual electricity usage for Industry A\. The first column indicates the electricity usage and the second column indicates the date, in a day\-by\-day format\.

## Tasks overview ##
In this tutorial, follow these steps to create an anomaly prediction experiment:
<!-- <ol> -->
1. [Create an AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap-tutorial.html?context=cdpaas&locale=en#step1)
2. [View the experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap-tutorial.html?context=cdpaas&locale=en#step2)
3. [Review experiment results](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap-tutorial.html?context=cdpaas&locale=en#step3)
4. [Deploy the trained model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap-tutorial.html?context=cdpaas&locale=en#step4)
5. [Test the deployed model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap-tutorial.html?context=cdpaas&locale=en#step5)
<!-- </ol> -->
## Create an AutoAI experiment ##
Create an AutoAI experiment and add sample data to your experiment\.
<!-- <ol> -->
1. From the navigation menu , click **Projects > View all projects**\.
2. Open an existing project or [create a new project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) to store the anomaly prediction experiment\.
3. On the *Assets* tab from within your project, click **New asset > Build machine learning models automatically**\.
4. Click **Samples > Electricity usage anomalies sample data**, then select **Next**\. The AutoAI *experiment name* and *description* are pre\-populated by the sample data\.
5. If prompted, associate a Watson Machine Learning instance with your AutoAI experiment\.
<!-- <ol> -->
1. Click **Associate a Machine Learning service instance** and select an instance of Watson Machine Learning.
2. Click **Reload** to confirm your configuration.
<!-- </ol> -->
6. Click **Create**\.
<!-- </ol> -->
## View the experiment details ##
AutoAI pre\-populates the details fields for the sample experiment:

<!-- <ul> -->
* Time series analysis type: *Anomaly prediction* predicts whether future values in a series are anomalies (outliers)\. A prediction of 1 indicates a *normal* value and a prediction of \-1 indicates an *anomaly*\.
* Feature column: *industry\_a\_usage* is the predicted value and indicates how much electricity *Industry A* consumes\.
* Date/Time column: *date* indicates the time increments for the experiment\. For this experiment, there is one prediction value per day\.
* This experiment is optimized for the model performance metric: *Average Precision*\. Average precision evaluates the performance of object detection and segmentation systems\.
<!-- </ul> -->
Click **Run experiment** to train the model\. The experiment takes several minutes to complete\.
## Review the experiment results ##
The relationship map shows the transformations that are used to create pipelines\. Follow these steps to review experiment results and save the pipeline with the best performance\. 
<!-- <ol> -->
1. The leaderboard lists and saves the three best performing pipelines\. Click the pipeline name with Rank 1 to review the details of the pipeline\. For details on anomaly prediction metrics, see [Creating a time series anomaly prediction experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap.html)\.
2. Select the pipeline with Rank 1 and **Save** the pipeline as a *model*\. The *model name* is pre\-populated with the default name\.
3. Click **Create** to confirm your pipeline selection\.
<!-- </ol> -->
## Deploy the trained model ##
Before the trained model can make predictions on external values, you must deploy the model\. Follow these steps to promote your trained model to a deployment space\.
<!-- <ol> -->
1. Deploy the model from the *Model details* page\. To access the *Model details* page, choose one of these options:
<!-- <ul> -->
* From the notification displayed when you save the model, click **View in project**.
* From the project's *Assets*, select the model’s name in *Models*.
<!-- </ul> -->
2. From the *Model details* page, click **Promote to Deployment Space**\. Then, select or create a deployment space to deploy the model\.
3. Select **Go to the model in the space after promoting it** and click **Promote** to promote the model\.
<!-- </ol> -->
## Testing the model ##
After promoting the model to the deployment space, you are ready to test your trained model with new data values\.
<!-- <ol> -->
1. Select **New Deployment** and create a new deployment with the following fields:
<!-- <ol> -->
1. Deployment type: `Online`
2. Name: `Electricity usage online deployment`
<!-- </ol> -->
2. Click **Create** and wait for the status to update to *Deployed*\.
3. After the deployment initializes, click the deployment\. Use *Test input* to manually enter and evaluate values or use JSON input to attach a data set\.

4. Click **Predict** to see whether there are any anomalies in the values\.
Note:-1 indicates an *anomaly*; 1 indicates a *normal* value.
<!-- </ol> -->

## Next steps ##
[Building a time series forecast experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html)
<!-- </article "role="article" "> -->
|
B23F48A4757500FEA641245CFFA69CB3B72AE0E8 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap.html?context=cdpaas&locale=en | Creating a time series anomaly prediction (Beta) | Creating a time series anomaly prediction (Beta)
Create a time series anomaly prediction experiment to train a model that can detect anomalies, or unexpected results, when the model predicts results based on new data.
Tech preview This is a technology preview and is not yet supported for use in production environments.
Detecting anomalies in predictions
You can use anomaly prediction to find outliers in model predictions. Consider the following scenarios for training a time series model with anomaly prediction. For example, suppose you have operational metrics from monitoring devices that were collected in the date range of 2022.1.1 through 2022.3.31. You are confident that no anomalies exist in the data for that period, even if the data is unlabeled. You can use a time series anomaly prediction experiment to:
* Train model candidate pipelines and auto-select the top-ranked model candidate
* Deploy a selected model to predict new observations if:
* A new time point is an anomaly (for example, an online score predicts a time point 2022.4.1 that is outside of the expected range)
* A new time range has anomalies (for example, a batch score predicts values of 2022.4.1 to 2022.4.7, outside the expected range)
Working with a sample
To create an AutoAI Time series experiment with anomaly prediction that uses a sample:
1. Create an AutoAI experiment.
2. Select Samples.

3. Click the tile for Electricity usage anomalies sample data.
4. Follow the prompts to configure and run the experiment.

5. Review the details about the pipelines and explore the visualizations.
Configuring a time series experiment with anomaly prediction
1. Load the data for your experiment.
Restriction: You can upload only a single data file for an anomaly prediction experiment. If you upload a second data file (for holdout data) the Anomaly prediction option is disabled, and only the Forecast option is available. By default, Anomaly prediction experiments use a subset of the training data for validation.
2. Click Yes to Enable time series.
3. Select Anomaly prediction as the experiment type.
4. Configure the feature columns from the data source that you want to predict based on the previous values. You can specify one or more columns to predict.
5. Select the date/time column.
The prediction summary shows you the experiment type and the metric that is selected for optimizing the experiment.
Configuring experiment settings
To configure more details for your time series experiment, open the Experiment settings pane. Options that are not available for anomaly prediction experiments are unavailable.
General prediction settings
On the General panel for prediction settings, configure details for training the experiment.
Field Description
Prediction type View or change the prediction type based on prediction column for your experiment. For time series experiments, Time series anomaly prediction is selected by default. Note: If you change the prediction type, other prediction settings for your experiment are automatically changed.
Optimized metric Choose a metric for optimizing and ranking the pipelines.
Optimized algorithm selection Not supported for time series experiments.
Algorithms to include Select the [algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap.html?context=cdpaas&locale=en#implementation) that you want your experiment to use when it creates pipelines. All of the listed algorithms support anomaly prediction.
Pipelines to complete View or change the number of pipelines to generate for your experiment.
Time series configuration details
On the Time series pane for prediction settings, configure the details for how to train the experiment and generate predictions.
Field Description
Date/time column View or change the date/time column for the experiment.
Lookback window Not supported for anomaly prediction.
Forecast window Not supported for anomaly prediction.
Configuring data source settings
To configure details for your input data, open the Experiment settings panel and select the Data source.
General data source settings
On the General panel for data source settings, you can choose options for how to use your experiment data.
Field Description
Duplicate rows Not supported for time series anomaly prediction experiments.
Subsample data Not supported for time series anomaly prediction experiments.
Text feature engineering Not supported for time series anomaly prediction experiments.
Final training data set Anomaly prediction uses a single data source file, which is the final training data set.
Supporting features Not supported for time series anomaly prediction experiments.
Data imputation Not supported for time series anomaly prediction experiments.
Training and holdout data Anomaly prediction does not support a separate holdout file. You can adjust how the data is split between training and holdout data. Note: In some cases, AutoAI can overwrite your holdout settings to ensure the split is valid for the experiment. In this case, you see a notification and the change is noted in the log file.
Reviewing the experiment results
When you run the experiment, the progress indicator displays the pathways to pipeline creation. Ranked pipelines are listed on the leaderboard. Pipeline score represents how well the pipeline performed for the optimizing metric.
The Experiment summary tab displays a visualization of how metrics performed for the pipeline.
* Use the metric filter to focus on particular metrics.
* Hover over the name of a metric to view details.
Click a pipeline name to view details. On the Model evaluation page, you can review a table that summarizes details about the pipeline.

* The rows represent five evaluation metrics: Area under ROC, Precision, Recall, F1, Average precision.
* The columns represent four synthesized anomaly types: Level shift, Trend, Localized extreme, Variance.
* Each value in a cell is an average of the metric based on three iterations of evaluation on the synthesized anomaly type.
Evaluation metrics:
These metrics are used to evaluate a pipeline:
Metric Description
Aggregate score (Recommended) This score is calculated based on an aggregation of the optimized metric (for example, Average precision) values for the 4 anomaly types. The scores for each pipeline are ranked, using the Borda count method, and then weighted for their contribution to the aggregate score. Unlike a standard metric score, this value is not between 0 and 1. A higher value indicates a stronger score.
ROC AUC Measure of how well a parameter can distinguish between two groups.
F1 Harmonic average of the precision and recall, with best value of 1 (perfect precision and recall) and worst at 0.
Precision Measures the accuracy of a prediction based on percent of positive predictions that are correct.
Recall Measures the percentage of identified positive predictions against possible positives in data set.
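As an illustration only, the precision, recall, and F1 metrics can be reproduced for the 1/-1 anomaly labels with scikit-learn, treating -1 (anomaly) as the positive class. AutoAI computes these metrics internally during training; this sketch is not part of the product workflow.

    # Illustrative only: precision, recall, and F1 for 1/-1 anomaly labels,
    # with -1 (anomaly) treated as the positive class.
    from sklearn.metrics import f1_score, precision_score, recall_score

    y_true = [1, 1, -1, 1, -1, 1]   # ground truth: anomalies at positions 2 and 4
    y_pred = [1, -1, -1, 1, 1, 1]   # model output

    print(precision_score(y_true, y_pred, pos_label=-1))  # 0.5
    print(recall_score(y_true, y_pred, pos_label=-1))     # 0.5
    print(f1_score(y_true, y_pred, pos_label=-1))         # 0.5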
Anomaly types
These are the anomaly types AutoAI detects.
Anomaly type Description
Localized extreme anomaly An unusual data point in a time series, which deviates significantly from the data points around it.
Level shift anomaly A segment in which the mean value of a time series is changed.
Trend anomaly A segment of time series, which has a trend change compared to the time series before the segment.
Variance anomaly A segment of time series in which the variance of a time series is changed.
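The following sketch is purely illustrative and shows what each anomaly type can look like in a small synthetic series. It assumes simple additive effects on a sine baseline; it is not how AutoAI synthesizes anomalies internally.

    # Illustrative synthetic series for the four anomaly types.
    import numpy as np

    t = np.arange(100, dtype=float)
    base = np.sin(t / 5)

    localized_extreme = base.copy()
    localized_extreme[50] += 8                       # one point far from its neighbors

    level_shift = base.copy()
    level_shift[60:] += 5                            # the mean changes for the rest of the series

    trend = base.copy()
    trend[70:] += 0.3 * (t[70:] - 70)                # a new upward trend starts at t=70

    variance = base.copy()
    variance[40:60] += np.random.normal(0, 3, 20)    # a segment with much higher variance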
Saving a pipeline as a model
To save a model candidate pipeline as a machine learning model, select Save as model for the pipeline you prefer. The model is saved as a project asset. You can promote the model to a space and create a deployment for it.
Saving a pipeline as a notebook
To review the code for a pipeline, select Save as notebook for a pipeline. An automatically generated notebook is saved as a project asset. Review the code to explore how the pipeline was generated.
For details on the methods used in the pipeline code, see the documentation for the [autoai-ts-libs library](https://pypi.org/project/autoai-ts-libs/).
Scoring the model
After you save a pipeline as a model, then promote the model to a space, you can score the model to generate predictions for input, or payload, data. Scoring the model and interpreting the results is similar to scoring a binary classification model, as the score presents one of two possible values for each prediction:
* 1 = no anomaly detected
* -1 = anomaly detected
Deployment details
Note these requirements for deploying an anomaly prediction model.
* The schema for the deployment input data must match the schema for the training data, except for the prediction (target) column.
* The order of the fields for model scoring must be the same as the order of the fields in the training data schema.
Deployment example
The following is valid input for an anomaly prediction model:
{
    "input_data": [
        {
            "id": "observations",
            "values": [
                [12, 34],
                [22, 23],
                [35, 45],
                [46, 34]
            ]
        }
    ]
}
The score for this input is [1,1,-1,1] where -1 means the value is an anomaly and 1 means the prediction is in the normal range.
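As an illustration, the same payload can be sent to the online deployment over REST. In this sketch, the scoring URL and the IAM token are placeholders; copy the actual endpoint and an access token from your deployment details.

    # Minimal sketch: score the online deployment over REST.
    # scoring_url and token are placeholders, not real values.
    import requests

    scoring_url = (
        "https://<region>.ml.cloud.ibm.com/ml/v4/deployments/"
        "<deployment_id>/predictions?version=2020-09-01"
    )
    token = "<IAM access token>"

    payload = {
        "input_data": [
            {"id": "observations", "values": [[12, 34], [22, 23], [35, 45], [46, 34]]}
        ]
    }

    response = requests.post(
        scoring_url,
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
    )
    # Each scored row is 1 (normal) or -1 (anomaly).
    print(response.json()["predictions"][0]["values"])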
Implementation details
These algorithms support anomaly prediction in time series experiments.
Algorithm Type Transformer
PointwiseBoundedHoltWintersAdditive Forecasting N/A
PointwiseBoundedBATS Forecasting N/A
PointwiseBoundedBATSForceUpdate Forecasting N/A
WindowNN Window Flatten
WindowPCA Relationship Flatten
WindowLOF Window Flatten
The algorithms are organized in these categories:
* Forecasting: Algorithms for detecting anomalies using time series forecasting methods
* Relationship: Algorithms for detecting anomalies by analyzing the relationship among data points
* Window: Algorithms for detecting anomalies by applying transformations and ML techniques to rolling windows
Learn more
[Saving an AutoAI generated notebook (Watson Machine Learning)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-notebook.html)
Parent topic:[Building a time series experiment ](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html)
| # Creating a time series anomaly prediction (Beta) #
Create a time series anomaly prediction experiment to train a model that can detect anomalies, or unexpected results, when the model predicts results based on new data\.
Tech preview This is a technology preview and is not yet supported for use in production environments\.
## Detecting anomalies in predictions ##
You can use anomaly prediction to find outliers in model predictions\. Consider the following scenarios for training a time series model with anomaly prediction\. For example, suppose you have operational metrics from monitoring devices that were collected in the date range of 2022\.1\.1 through 2022\.3\.31\. You are confident that no anomalies exist in the data for that period, even if the data is unlabeled\. You can use a time series anomaly prediction experiment to:
<!-- <ul> -->
* Train model candidate pipelines and auto\-select the top\-ranked model candidate
* Deploy a selected model to predict new observations if:
<!-- <ul> -->
* A new time point is an anomaly (for example, an online score predicts a time point 2022.4.1 that is outside of the expected range)
* A new time range has anomalies (for example, a batch score predicts values of 2022.4.1 to 2022.4.7, outside the expected range)
<!-- </ul> -->
<!-- </ul> -->
## Working with a sample ##
To create an AutoAI Time series experiment with anomaly prediction that uses a sample:
<!-- <ol> -->
1. Create an AutoAI experiment\.
2. Select *Samples*\.

3. Click the tile for **Electricity usage anomalies sample data**\.
4. Follow the prompts to configure and run the experiment\.

5. Review the details about the pipelines and explore the visualizations\.
<!-- </ol> -->
## Configuring a time series experiment with anomaly prediction ##
<!-- <ol> -->
1. Load the data for your experiment\.
Restriction: You can upload only a single data file for an anomaly prediction experiment. If you upload a second data file (for holdout data) the Anomaly prediction option is disabled, and only the Forecast option is available. By default, Anomaly prediction experiments use a subset of the training data for validation.
2. Click **Yes** to **Enable time series**\.
3. Select **Anomaly prediction** as the experiment type\.
4. Configure the feature columns from the data source that you want to predict based on the previous values\. You can specify one or more columns to predict\.
5. Select the date/time column\.
<!-- </ol> -->
The prediction summary shows you the experiment type and the metric that is selected for optimizing the experiment\.
## Configuring experiment settings ##
To configure more details for your time series experiment, open the **Experiment settings** pane\. Options that are not available for anomaly prediction experiments are unavailable\.
### General prediction settings ###
On the *General* panel for prediction settings, configure details for training the experiment\.
<!-- <table> -->
| Field | Description |
| ----------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Prediction type | View or change the prediction type based on prediction column for your experiment\. For time series experiments, **Time series anomaly prediction** is selected by default\. **Note:** If you change the prediction type, other prediction settings for your experiment are automatically changed\. |
| Optimized metric | Choose a metric for optimizing and ranking the pipelines\. |
| Optimized algorithm selection | Not supported for time series experiments\. |
| Algorithms to include | Select the [algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap.html?context=cdpaas&locale=en#implementation) that you want your experiment to use when it creates pipelines\. All of the listed algorithms support anomaly prediction\. |
| Pipelines to complete | View or change the number of pipelines to generate for your experiment\. |
<!-- </table ""> -->
### Time series configuration details ###
On the Time series pane for prediction settings, configure the details for how to train the experiment and generate predictions\.
<!-- <table> -->
| Field | Description |
| ---------------- | -------------------------------------------------------- |
| Date/time column | View or change the date/time column for the experiment\. |
| Lookback window | Not supported for anomaly prediction\. |
| Forecast window | Not supported for anomaly prediction\. |
<!-- </table ""> -->
## Configuring data source settings ##
To configure details for your input data, open the **Experiment settings** panel and select the **Data source**\.
### General data source settings ###
On the *General* panel for data source settings, you can choose options for how to use your experiment data\.
<!-- <table> -->
| Field | Description |
| ------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Duplicate rows | Not supported for time series anomaly prediction experiments\. |
| Subsample data | Not supported for time series anomaly prediction experiments\. |
| Text feature engineering | Not supported for time series anomaly prediction experiments\. |
| Final training data set | Anomaly prediction uses a single data source file, which is the final training data set\. |
| Supporting features | Not supported for time series anomaly prediction experiments\. |
| Data imputation | Not supported for time series anomaly prediction experiments\. |
| Training and holdout data | Anomaly prediction does not support a separate holdout file\. You can adjust how the data is split between training and holdout data\. **Note:** In some cases, AutoAI can overwrite your holdout settings to ensure the split is valid for the experiment\. In this case, you see a notification and the change is noted in the log file\. |
<!-- </table ""> -->
## Reviewing the experiment results ##
When you run the experiment, the progress indicator displays the pathways to pipeline creation\. Ranked pipelines are listed on the leaderboard\. Pipeline score represents how well the pipeline performed for the optimizing metric\.
The **Experiment summary** tab displays a visualization of how metrics performed for the pipeline\.
<!-- <ul> -->
* Use the metric filter to focus on particular metrics\.
* Hover over the name of a metric to view details\.
<!-- </ul> -->
Click a pipeline name to view details\. On the **Model evaluation** page, you can review a table that summarizes details about the pipeline\.

<!-- <ul> -->
* The rows represent five evaluation metrics: Area under ROC, Precision, Recall, F1, Average precision\.
* The columns represent four synthesized anomaly types: Level shift, Trend, Localized extreme, Variance\.
* Each value in a cell is an average of the metric based on three iterations of evaluation on the synthesized anomaly type\.
<!-- </ul> -->
### Evaluation metrics: ###
These metrics are used to evaluate a pipeline:
<!-- <table> -->
| Metric | Description |
| ----------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Aggregate score (Recommended) | This score is calculated based on an aggregation of the optimized metric (for example, Average precision) values for the 4 anomaly types\. The scores for each pipeline are ranked, using the Borda count method, and then weighted for their contribution to the aggregate score\. Unlike a standard metric score, this value is not between 0 and 1\. A higher value indicates a stronger score\. |
| ROC AUC | Measure of how well a parameter can distinguish between two groups\. |
| F1 | Harmonic average of the precision and recall, with best value of 1 (perfect precision and recall) and worst at 0\. |
| Precision | Measures the accuracy of a prediction based on percent of positive predictions that are correct\. |
| Recall | Measures the percentage of identified positive predictions against possible positives in data set\. |
<!-- </table ""> -->
### Anomaly types ###
These are the anomaly types AutoAI detects\.
<!-- <table> -->
| Anomaly type | Description |
| ------------------------- | ----------------------------------------------------------------------------------------------------- |
| Localized extreme anomaly | An unusual data point in a time series, which deviates significantly from the data points around it\. |
| Level shift anomaly | A segment in which the mean value of a time series is changed\. |
| Trend anomaly | A segment of time series, which has a trend change compared to the time series before the segment\. |
| Variance anomaly | A segment of time series in which the variance of a time series is changed\. |
<!-- </table ""> -->
## Saving a pipeline as a model ##
To save a model candidate pipeline as a machine learning model, select **Save as model** for the pipeline you prefer\. The model is saved as a project asset\. You can promote the model to a space and create a deployment for it\.
## Saving a pipeline as a notebook ##
To review the code for a pipeline, select **Save as notebook** for a pipeline\. An automatically generated notebook is saved as a project asset\. Review the code to explore how the pipeline was generated\.
For details on the methods used in the pipeline code, see the documentation for the [autoai\-ts\-libs library](https://pypi.org/project/autoai-ts-libs/)\.
## Scoring the model ##
After you save a pipeline as a model, then promote the model to a space, you can score the model to generate predictions for input, or payload, data\. Scoring the model and interpreting the results is similar to scoring a binary classification model, as the score presents one of two possible values for each prediction:
<!-- <ul> -->
* 1 = no anomaly detected
* \-1 = anomaly detected
<!-- </ul> -->
### Deployment details ###
Note these requirements for deploying an anomaly prediction model\.
<!-- <ul> -->
* The schema for the deployment input data must match the schema for the training data, except for the prediction (target) column\.
* The order of the fields for model scoring must be the same as the order of the fields in the training data schema\.
<!-- </ul> -->
### Deployment example ###
The following is valid input for an anomaly prediction model:
{
    "input_data": [
        {
            "id": "observations",
            "values": [
                [12, 34],
                [22, 23],
                [35, 45],
                [46, 34]
            ]
        }
    ]
}
The score for this input is `[1,1,-1,1]` where `-1` means the value is an anomaly and `1` means the prediction is in the normal range\.
## Implementation details ##
These algorithms support anomaly prediction in time series experiments\.
<!-- <table> -->
| Algorithm | Type | Transformer |
| ----------------------------------- | -------------- | ----------- |
| PointwiseBoundedHoltWintersAdditive | Forecasting | N/A |
| PointwiseBoundedBATS | Forecasting | N/A |
| PointwiseBoundedBATSForceUpdate | Forecasting | N/A |
| WindowNN | Window | Flatten |
| WindowPCA | Relationship | Flatten |
| WindowLOF | Window | Flatten |
<!-- </table ""> -->
The algorithms are organized in these categories:
<!-- <ul> -->
* **Forecasting:** Algorithms for detecting anomalies using time series forecasting methods
* **Relationship:** Algorithms for detecting anomalies by analyzing the relationship among data points
* **Window:** Algorithms for detecting anomalies by applying transformations and ML techniques to rolling windows
<!-- </ul> -->
## Learn more ##
[Saving an AutoAI generated notebook (Watson Machine Learning)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-notebook.html)
**Parent topic:**[Building a time series experiment ](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html)
<!-- </article "role="article" "> -->
|
AD76780EA50A0FB37454A3A03FF08CA0AD39EF19 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-score.html?context=cdpaas&locale=en | Scoring a time series model | Scoring a time series model
After you save an AutoAI time series pipeline as a model, you can deploy and score the model to forecast new values.
Deploying a time series model
After you save a model to a project, follow the steps to deploy the model:
1. Find the model in the project asset list.
2. Promote the model to a deployment space.
3. Promote payload data to the deployment space.
4. From the deployment space, create a deployment.
Scoring considerations
To this point, deploying a time series model follows the same steps as deploying a classification or regression model. However, because of the way predictions are structured and generated in a time series model, your input must match your model structure. For example, the way you structure your payload depends on whether you are predicting a single result (univariate) or multiple results (multivariate).
Note these high-level considerations:
* To get the first forecast window row or rows after the last row in your data, send an empty payload.
* To get the next value, send the result from the empty payload request as your next scoring request, and so on.
* You can send multiple rows as input, to build trends and predict the next value after a trend.
* If you have multiple prediction columns, you need to include a value for each of them in your scoring request.
Scoring an online deployment
If you create an online deployment, you can pass the payload data by using an input form or by submitting JSON code. This example shows how to structure the JSON code to generate predictions.
Predicting a single value
In the simplest case, given this sample data, you are trying to forecast the next step of value1 with a forecast window of 1, meaning each prediction will be a single step (row).
timestamp value1
2015-02-26 21:42 2
2015-02-26 21:47 4
2015-02-26 21:52 6
2015-02-26 21:57 8
2015-02-26 22:02 10
You must pass a blank entry as the input data to request the first prediction, which is structured like this:
{
    "input_data": [
        {
            "fields": [
                "value1"
            ],
            "values": []
        }
    ]
}
The output that is returned predicts the next step in the model:
{
    "predictions": [
        {
            "fields": [
                "prediction"
            ],
            "values": [
                [12]
            ]
        }
    ]
}
The next input passes the result of the previous output to predict the next step:
{
    "input_data": [
        {
            "fields": [
                "value1"
            ],
            "values": [
                [12]
            ]
        }
    ]
}
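This feed-forward pattern can be scripted. The following sketch assumes a score() helper that wraps however you call the deployment (REST or a client library); the stub here only fakes a response so that the loop runs on its own.

    # Minimal sketch: roll the forecast forward one step at a time by feeding
    # each prediction back in as the next observation.
    def score(values):
        # Stand-in for the real deployment call. Replace the body with a request
        # that sends {"input_data": [{"fields": ["value1"], "values": values}]}.
        last = values[-1][0] if values else 10
        return [[last + 2]]

    history = []            # an empty payload requests the first forecast step
    forecast = []
    for _ in range(5):      # forecast five steps ahead
        predicted = score(history)      # for example [[12]]
        forecast.append(predicted[0][0])
        history = predicted             # the previous output becomes the next input
    print(forecast)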
Predicting multiple values
In this case, you are predicting two targets, value1 and value2.
timestamp value1 value2
2015-02-26 21:42 2 1
2015-02-26 21:47 4 3
2015-02-26 21:52 6 5
2015-02-26 21:57 8 7
2015-02-26 22:02 10 9
The input data must still pass a blank entry to request the first prediction. The next input would be structured like this:
{
    "input_data": [
        {
            "fields": [
                "value1",
                "value2"
            ],
            "values": [
                [2, 1]
            ]
        }
    ]
}
Predicting based on new observations
If instead of predicting the next row based on the prior step you want to enter new observations, enter the input data like this for a univariate model:
{
    "input_data": [
        {
            "fields": [
                "value1"
            ],
            "values": [
                [2],
                [4],
                [6]
            ]
        }
    ]
}
Enter new observations like this for a multivariate model:
{
    "input_data": [
        {
            "fields": [
                "value1",
                "value2"
            ],
            "values": [
                [2, 1],
                [4, 3],
                [6, 5]
            ]
        }
    ]
}
Where 2, 4, and 6 are observations for value1 and 1, 3, 5 are observations for value2.
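When the observations already live in a DataFrame, you can assemble the payload directly. A minimal sketch, assuming pandas and the column names from the example above:

    # Minimal sketch: build the multivariate scoring payload from a DataFrame.
    import pandas as pd

    observations = pd.DataFrame({"value1": [2, 4, 6], "value2": [1, 3, 5]})

    payload = {
        "input_data": [
            {
                "fields": list(observations.columns),
                "values": observations.values.tolist(),
            }
        ]
    }
    # payload["input_data"][0]["values"] is [[2, 1], [4, 3], [6, 5]]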
Scoring a time series model with Supporting features
After you deploy your model, you can go to the page detailing your deployment to get prediction values. Choose one of the following ways to test your deployment:
Using existing input values
You can use existing input values in your data set to obtain prediction values. Click Predict to obtain a set of prediction values. The total number of prediction values in the output is defined by the prediction horizon that you previously set during the experiment configuration stage.
Using new input values
You can choose to populate the spreadsheet with new input values or use JSON code to obtain a prediction.
Using spreadsheet to provide new input data for predicting values
To add input data to the New observations (optional) spreadsheet, select the Input tab and do one of the following:
* Add a pre-existing .csv file that contains new observations from your local directory by clicking Browse local files.
* Download the input file template by clicking Download CSV template, enter values, and upload the file.
* Use an existing data asset from your project by clicking Search in space.
* Manually enter input observations in the spreadsheet.
You can also provide future values for Supporting features if you previously enabled your experiment to leverage these values during the experiment configuration stage. Make sure to add these values to the Future supporting features (optional) spreadsheet.
Using JSON code to provide input data
To add input data using JSON code, select the Paste JSON tab and do one of the following:
* Add a pre-existing JSON file that contains new observations from your local directory by clicking Browse local files.
* Use an existing data asset from your project by clicking Search in space.
* Manually enter or paste JSON code into the editor.
In this code sample, the prediction column is pollution, and the supporting features are temp and press.
{
    "input_data": [
        {
            "id": "observations",
            "values": [
                [96.125, 3.958, 1026.833]
            ]
        },
        {
            "id": "supporting_features",
            "values": [
                [3.208, 1020.667]
            ]
        }
    ]
}
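The same structure can be assembled in code. A minimal sketch that reuses the values from the JSON above; the number of supporting-feature rows should match the prediction horizon of the experiment.

    # Minimal sketch: assemble an observations plus supporting_features payload.
    # Column order (pollution, temp, press / temp, press) follows the example above.
    import json

    observations = [[96.125, 3.958, 1026.833]]   # pollution, temp, press
    future_supporting = [[3.208, 1020.667]]      # temp, press for each future step

    payload = {
        "input_data": [
            {"id": "observations", "values": observations},
            {"id": "supporting_features", "values": future_supporting},
        ]
    }
    print(json.dumps(payload, indent=2))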
Next steps
[Saving an AutoAI generated notebook (Watson Machine Learning)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-notebook.html)
Parent topic:[Building a time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html)
| # Scoring a time series model #
After you save an AutoAI time series pipeline as a model, you can deploy and score the model to forecast new values\.
## Deploying a time series model ##
After you save a model to a project, follow the steps to deploy the model:
<!-- <ol> -->
1. Find the model in the project asset list\.
2. Promote the model to a deployment space\.
3. Promote payload data to the deployment space\.
4. From the deployment space, create a deployment\.
<!-- </ol> -->
## Scoring considerations ##
To this point, deploying a time series model follows the same steps as deploying a classification or regression model\. However, because of the way predictions are structured and generated in a time series model, your input must match your model structure\. For example, the way you structure your payload depends on whether you are predicting a single result (univariate) or multiple results (multivariate)\.
Note these high\-level considerations:
<!-- <ul> -->
* To get the first forecast window row or rows after the last row in your data, send an empty payload\.
* To get the next value, send the result from the empty payload request as your next scoring request, and so on\.
* You can send multiple rows as input, to build trends and predict the next value after a trend\.
* If you have multiple prediction columns, you need to include a value for each of them in your scoring request\.
<!-- </ul> -->
## Scoring an online deployment ##
If you create an online deployment, you can pass the payload data by using an input form or by submitting JSON code\. This example shows how to structure the JSON code to generate predictions\.
### Predicting a single value ###
In the simplest case, given this sample data, you are trying to forecast the next step of `value1` with a forecast window of 1, meaning each prediction will be a single step (row)\.
<!-- <table> -->
| timestamp | value1 |
| ----------------- | ------ |
| 2015\-02\-26 21:42 | 2 |
| 2015\-02\-26 21:47 | 4 |
| 2015\-02\-26 21:52 | 6 |
| 2015\-02\-26 21:57 | 8 |
| 2015\-02\-26 22:02 | 10 |
<!-- </table ""> -->
You must pass a blank entry as the input data to request the first prediction, which is structured like this:
{
    "input_data": [
        {
            "fields": [
                "value1"
            ],
            "values": []
        }
    ]
}
The output that is returned predicts the next step in the model:
{
    "predictions": [
        {
            "fields": [
                "prediction"
            ],
            "values": [
                [12]
            ]
        }
    ]
}
The next input passes the result of the previous output to predict the next step:
{
    "input_data": [
        {
            "fields": [
                "value1"
            ],
            "values": [
                [12]
            ]
        }
    ]
}
### Predicting multiple values ###
In this case, you are predicting two targets, `value1` and `value2`\.
<!-- <table> -->
| timestamp | value1 | value2 |
| ----------------- | ------ | ------ |
| 2015\-02\-26 21:42 | 2 | 1 |
| 2015\-02\-26 21:47 | 4 | 3 |
| 2015\-02\-26 21:52 | 6 | 5 |
| 2015\-02\-26 21:57 | 8 | 7 |
| 2015\-02\-26 22:02 | 10 | 9 |
<!-- </table ""> -->
The input data must still pass a blank entry to request the first prediction\. The next input would be structured like this:
{
    "input_data": [
        {
            "fields": [
                "value1",
                "value2"
            ],
            "values": [
                [2, 1]
            ]
        }
    ]
}
## Predicting based on new observations ##
If instead of predicting the next row based on the prior step you want to enter new observations, enter the input data like this for a univariate model:
{
    "input_data": [
        {
            "fields": [
                "value1"
            ],
            "values": [
                [2],
                [4],
                [6]
            ]
        }
    ]
}
Enter new observations like this for a multivariate model:
{
    "input_data": [
        {
            "fields": [
                "value1",
                "value2"
            ],
            "values": [
                [2, 1],
                [4, 3],
                [6, 5]
            ]
        }
    ]
}
Where 2, 4, and 6 are observations for `value1` and 1, 3, 5 are observations for `value2`\.
## Scoring a time series model with Supporting features ##
After you deploy your model, you can go to the page detailing your deployment to get prediction values\. Choose one of the following ways to test your deployment:
### Using existing input values ###
You can use existing input values in your data set to obtain prediction values\. Click **Predict** to obtain a set of prediction values\. The total number of prediction values in the output is defined by the prediction horizon that you previously set during the experiment configuration stage\.
### Using new input values ###
You can choose to populate the spreadsheet with new input values or use JSON code to obtain a prediction\.
#### Using spreadsheet to provide new input data for predicting values ####
To add input data to the **New observations (optional)** spreadsheet, select the **Input** tab and do one of the following:
<!-- <ul> -->
* Add a pre\-existing \.csv file that contains new observations from your local directory by clicking **Browse local files**\.
* Download the input file template by clicking **Download CSV template**, enter values, and upload the file\.
* Use an existing data asset from your project by clicking **Search in space**\.
* Manually enter input observations in the spreadsheet\.
<!-- </ul> -->
You can also provide future values for Supporting features if you previously enabled your experiment to leverage these values during the experiment configuration stage\. Make sure to add these values to the *Future supporting features (optional)* spreadsheet\.
#### Using JSON code to provide input data ####
To add input data using JSON code, select the **Paste JSON** tab and do one of the following:
<!-- <ul> -->
* Add a pre\-existing JSON file that contains new observations from your local directory by clicking **Browse local files**\.
* Use an existing data asset from your project by clicking **Search in space**\.
* Manually enter or paste JSON code into the editor\.
<!-- </ul> -->
In this code sample, the prediction column is `pollution`, and the supporting features are `temp` and `press`\.
{
    "input_data": [
        {
            "id": "observations",
            "values": [
                [96.125, 3.958, 1026.833]
            ]
        },
        {
            "id": "supporting_features",
            "values": [
                [3.208, 1020.667]
            ]
        }
    ]
}
## Next steps ##
[Saving an AutoAI generated notebook (Watson Machine Learning)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-notebook.html)
**Parent topic:**[Building a time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html)
<!-- </article "role="article" "> -->
|
99843122C08D0D70ED3694A57482595E35FB0D8B | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html?context=cdpaas&locale=en | Tutorial: AutoAI multivariate time series experiment with Supporting features | Tutorial: AutoAI multivariate time series experiment with Supporting features
Use sample data to train a multivariate time series experiment that predicts pollution rate and temperature with the help of supporting features that influence the prediction fields.
When you set up the experiment, you load sample data that tracks weather conditions in Beijing from 2010 to 2014. The experiment generates a set of pipelines that use algorithms to predict future pollution and temperature with supporting features, including dew, pressure, snow, and rain. After generating the pipelines, AutoAI compares and tests them, chooses the best performers, and presents them in a leaderboard for you to review.
Data set overview
For this tutorial, you use the [Beijing PM 2.5](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/56e40ca77f9a72b0ab65b2c7938a99e2) data set from the Samples. This data set describes the weather conditions in Beijing from 2010 to 2014, which are measured in 1-day steps, or increments. You use this data set to configure your AutoAI experiment and select Supporting features. Details about the data set are described here:
* Each column, other than the date column, represents a weather condition that impacts pollution index.
* The Samples entry shows the origin of the data. You can preview the file before you download the file.
* The sample data is structured in rows and columns and saved as a .csv file.

Tasks overview
In this tutorial, you follow steps to create a multivariate time series experiment that uses Supporting features:
1. [Create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html?context=cdpaas&locale=en#step1)
2. [Create an AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html?context=cdpaas&locale=en#step2)
3. [Configure the experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html?context=cdpaas&locale=en#step3)
4. [Review experiment results](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html?context=cdpaas&locale=en#step4)
5. [Deploy the trained model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html?context=cdpaas&locale=en#step5)
6. [Test the deployed model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html?context=cdpaas&locale=en#step6)
Create a project
Follow these steps to create an empty project and download the [Beijing PM 2.5](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/56e40ca77f9a72b0ab65b2c7938a99e2) data set from the IBM watsonx Samples:
1. From the main navigation pane, click Projects > View all projects, then click New Project.
a. Click Create an empty project.
b. Enter a name and optional description for your project.
c. Click Create.
2. From the main navigation panel, click Samples and download a local copy of the [Beijing PM 2.5](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/56e40ca77f9a72b0ab65b2c7938a99e2) data set.
Create an AutoAI experiment
Follow these steps to create an AutoAI experiment and add sample data to your experiment:
1. On the Assets tab from within your project, click New asset > Build machine learning models automatically.
2. Specify a name and optional description for your experiment.
3. Associate a machine learning service instance with your experiment.
4. Choose an environment definition of 8 vCPU and 32 GB RAM.
5. Click Create.
6. To add sample data, choose one of these methods:
* If you downloaded your file locally, upload the training data file, PM25.csv, by clicking Browse and then following the prompts.
* If you already uploaded your file to your project, click Select from project, then select the Data asset tab and choose Beijing PM 25.csv.
Configure the experiment
Follow these steps to configure your multivariate AutoAI time series experiment:
1. Click Yes for the option to create a Time Series Forecast.
2. Choose as prediction columns: pollution, temp.
3. Choose as the date/time column: date.

4. Click Experiment settings to configure the experiment:
a. In the Prediction page, accept the default selection for Algorithms to include. Algorithms that allow you to use Supporting features are indicated by a checkmark in the column Allows supporting features.

b. Go to the Data Source page. For this tutorial, you will supply future values of Supporting features while testing. Future values are helpful when values for the supporting features are knowable for the prediction horizon. Accept the default enablement for Leverage future values of supporting features. Additionally, accept the default selection for columns that will be used as Supporting features.

c. Click Cancel to exit from Experiment settings.
5. Click Run experiment to begin the training.
Review experiment results
The experiment takes several minutes to complete. As the experiment trains, the relationship map shows the transformations that are used to create pipelines. Follow these steps to review experiment results and save the pipeline with the best performance.
1. Optional: Hover over any node in the relationship map to get details on the transformation for a particular pipeline.

2. Optional: After the pipelines are listed on the leaderboard, click Pipeline comparison to see how they differ. For example:

3. When the training completes, the top three best performing pipelines are saved to the leaderboard. Click any pipeline name to review details.
Note: Pipelines that use Supporting features are indicated by SUP enhancement.

4. Select the pipeline with Rank 1 and click Save as to create your model. Then, click Create. This action saves the pipeline under the Models section in the Assets tab.
Deploy the trained model
Before you can use your trained model to make predictions on new data, you must deploy the model. Follow these steps to promote your trained model to a deployment space:
1. You can deploy the model from the model details page. To access the model details page, choose one of these options:
* Click the model’s name in the notification that is displayed when you save the model.
* Open the Assets page for the project that contains the model and click the model’s name in the Machine Learning Model section.
2. Select Promote to Deployment Space, then select or create a deployment space where the model will be deployed.
Optional: Follow these steps to create a deployment space:
a. From the Target space list, select Create a new deployment space.
b. Enter a name for your deployment space.
c. To associate a machine learning instance, go to Select machine learning service (optional) and select a machine learning instance from the list.
d. Click Create.
3. Once you select or create your space, click Promote.
4. Click the deployment space link from the notification.
5. From the Assets tab of the deployment space:
a. Hover over the model’s name and click the deployment icon .
b. In the page that opens, complete the fields:
* Select Online as the Deployment type.
* Specify a name for the deployment.
* Click Create.
After the deployment is complete, click the Deployments tab and select the deployment name to view the details page.
Test the deployed model
Follow these steps to test the deployed model from the deployment details page:
1. On the Test tab of the deployment details page, go to New observations (optional) spreadsheet and enter the following values:
pollution (double): 80.417
temp (double): -5.5
dew (double): -7.083
press (double): 1020.667
wnd_spd (double): 9.518
snow (double): 0
rain (double): 0

2. To add future values of Supporting features, go to Future exogenous features (optional) spreadsheet and enter the following values:
dew (double): -12.667
press (double): 1023.708
wnd_spd (double): 9.518
snow (double): 0
rain (double): 0.042
Note: You must provide the same number of values for future exogenous features as the prediction horizon that you set during the experiment configuration stage.

3. Click Predict. The resulting prediction indicates values for pollution and temperature.
Note: Prediction values that are shown in the output might differ when you test your deployment.

Learn more
Parent topic:[Building a time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html)
| # Tutorial: AutoAI multivariate time series experiment with Supporting features #
Use sample data to train a multivariate time series experiment that predicts pollution rate and temperature with the help of supporting features that influence the prediction fields\.
When you set up the experiment, you load sample data that tracks weather conditions in Beijing from 2010 to 2014\. The experiment generates a set of pipelines that use algorithms to predict future pollution and temperature with supporting features, including dew, pressure, snow, and rain\. After generating the pipelines, AutoAI compares and tests them, chooses the best performers, and presents them in a leaderboard for you to review\.
## Data set overview ##
For this tutorial, you use the [Beijing PM 2\.5](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/56e40ca77f9a72b0ab65b2c7938a99e2) data set from the Samples\. This data set describes the weather conditions in Beijing from 2010 to 2014, which are measured in 1\-day steps, or increments\. You use this data set to configure your AutoAI experiment and select Supporting features\. Details about the data set are described here:
<!-- <ul> -->
* Each column, other than the date column, represents a weather condition that impacts pollution index\.
* The Samples entry shows the origin of the data\. You can preview the file before you download the file\.
* The sample data is structured in rows and columns and saved as a \.csv file\.
<!-- </ul> -->

## Tasks overview ##
In this tutorial, you follow steps to create a multivariate time series experiment that uses Supporting features:
<!-- <ol> -->
1. [Create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html?context=cdpaas&locale=en#step1)
2. [Create an AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html?context=cdpaas&locale=en#step2)
3. [Configure the experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html?context=cdpaas&locale=en#step3)
4. [Review experiment results](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html?context=cdpaas&locale=en#step4)
5. [Deploy the trained model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html?context=cdpaas&locale=en#step5)
6. [Test the deployed model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html?context=cdpaas&locale=en#step6)
<!-- </ol> -->
## Create a project ##
Follow these steps to create an empty project and download the [Beijing PM 2\.5](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/56e40ca77f9a72b0ab65b2c7938a99e2) data set from the IBM watsonx Samples:
<!-- <ol> -->
1. From the main navigation pane, click **Projects** > **View all projects**, then click **New Project**\.
a. Click **Create an empty project**.
b. Enter a name and optional description for your project.
c. Click **Create**.
2. From the main navigation panel, click **Samples** and download a local copy of the [Beijing PM 2\.5](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/56e40ca77f9a72b0ab65b2c7938a99e2) data set\.
<!-- </ol> -->
## Create an AutoAI experiment ##
Follow these steps to create an AutoAI experiment and add sample data to your experiment:
<!-- <ol> -->
1. On the *Assets* tab from within your project, click **New asset > Build machine learning models automatically**\.
2. Specify a name and optional description for your experiment\.
3. Associate a machine learning service instance with your experiment\.
4. Choose an environment definition of 8 vCPU and 32 GB RAM\.
5. Click **Create**\.
6. To add sample data, choose one of these methods:
<!-- <ul> -->
* If you downloaded your file locally, upload the training data file, *PM25.csv*, by clicking **Browse** and then following the prompts.
* If you already uploaded your file to your project, click **Select from project**, then select the **Data asset** tab and choose *Beijing PM 25.csv*.
<!-- </ul> -->
<!-- </ol> -->
## Configure the experiment ##
Follow these steps to configure your multivariate AutoAI time series experiment:
<!-- <ol> -->
1. Click **Yes** for the option to create a Time Series Forecast\.
2. Choose as prediction columns: `pollution`, `temp`\.
3. Choose as the date/time column: `date`\.

4. Click **Experiment settings** to configure the experiment:
a. In the **Prediction** page, accept the default selection for Algorithms to include. Algorithms that allow you to use Supporting features are indicated by a checkmark in the column *Allows supporting features*.

b. Go to the **Data Source** page. For this tutorial, you will supply future values of Supporting features while testing. Future values are helpful when values for the supporting features are knowable for the prediction horizon. Accept the default enablement for **Leverage future values of supporting features**. Additionally, accept the default selection for columns that will be used as Supporting features.

c. Click **Cancel** to exit from Experiment settings.
5. Click **Run experiment** to begin the training\.
<!-- </ol> -->
## Review experiment results ##
The experiment takes several minutes to complete\. As the experiment trains, the relationship map shows the transformations that are used to create pipelines\. Follow these steps to review experiment results and save the pipeline with the best performance\.
<!-- <ol> -->
1. Optional: Hover over any node in the relationship map to get details on the transformation for a particular pipeline\.

2. Optional: After the pipelines are listed on the leaderboard, click **Pipeline comparison** to see how they differ\. For example:

3. When the training completes, the top three best performing pipelines are saved to the leaderboard\. Click any pipeline name to review details\.
Note: Pipelines that use Supporting features are indicated by **SUP** enhancement.

4. Select the pipeline with Rank 1 and click **Save as** to create your model\. Then, click **Create**\. This action saves the pipeline under the *Models* section in the *Assets* tab\.
<!-- </ol> -->
## Deploy the trained model ##
Before you can use your trained model to make predictions on new data, you must deploy the model\. Follow these steps to promote your trained model to a deployment space:
<!-- <ol> -->
1. You can deploy the model from the *model details* page\. To access the *model details* page, choose one of these options:
<!-- <ul> -->
* Click the model’s name in the notification that is displayed when you save the model.
* Open the *Assets* page for the project that contains the model and click the model’s name in the *Machine Learning Model* section.
<!-- </ul> -->
2. Select **Promote to Deployment Space**, then select or create a deployment space where the model will be deployed\.
**Optional**: Follow these steps to create a deployment space:
a. From the Target space list, select **Create a new deployment space**.
b. Enter a name for your deployment space.
c. To associate a machine learning instance, go to **Select machine learning service (optional)** and select a machine learning instance from the list.
d. Click **Create**.
3. Once you select or create your space, click **Promote**\.
4. Click the deployment space link from the notification\.
5. From the **Assets** tab of the deployment space:
a. Hover over the model’s name and click the **deployment** icon .
b. In the page that opens, complete the fields:
<!-- <ul> -->
* Select **Online** as the Deployment type.
* Specify a name for the deployment.
* Click **Create**.
<!-- </ul> -->
<!-- </ol> -->
After the deployment is complete, click the **Deployments** tab and select the deployment name to view the details page\.
## Test the deployed model ##
Follow these steps to test the deployed model from the deployment details page:
<!-- <ol> -->
1. On the **Test** tab of the deployment details page, go to **New observations (optional)** spreadsheet and enter the following values:
pollution (double): `80.417`
temp (double): `-5.5`
dew (double): `-7.083`
press (double): `1020.667`
wnd\_spd (double): `9.518`
snow (double): `0`
rain (double): `0`

2. To add future values of Supporting features, go to **Future exogenous features (optional)** spreadsheet and enter the following values:
dew (double): `-12.667`
press (double): `1023.708`
wnd\_spd (double): `9.518`
snow (double): `0`
rain (double): `0.042`
Note: You must provide the same number of values for future exogenous features as the prediction horizon that you set during the experiment configuration stage.

3. Click **Predict**\. The resulting prediction indicates values for pollution and temperature\.
Note: Prediction values that are shown in the output might differ when you test your deployment.

<!-- </ol> -->
## Learn more ##
**Parent topic:**[Building a time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html)
<!-- </article "role="article" "> -->
|
3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=en | Tutorial: AutoAI univariate time series experiment | Tutorial: AutoAI univariate time series experiment
Use sample data to train a univariate (single prediction column) time series experiment that predicts minimum daily temperatures.
When you set up the experiment, you load data that tracks daily minimum temperatures for the city of Melbourne, Australia. The experiment will generate a set of pipelines that use algorithms to predict future minimum daily temperatures. After generating the pipelines, AutoAI compares and tests them, chooses the best performers, and presents them in a leaderboard for you to review.
Data set overview
The [Mini_Daily_Temperatures](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/de4d953f2a766fbc0469723eba0d93ef) data set describes the minimum daily temperatures over 10 years (1981-1990) in the city of Melbourne, Australia. The units are degrees Celsius, and the data set contains 3650 observations. The source of the data is the Australian Bureau of Meteorology. Details about the data set are described here:

* You use the Min_Temp column as the prediction column to build pipelines and forecast future daily minimum temperatures. Before pipeline training, the date column and the Min_Temp column are used together to determine the appropriate lookback window.
* The prediction column supplies the value to forecast: the daily minimum temperature on a specified day.
* The sample data is structured in rows and columns and saved as a .csv file.
Tasks overview
In this tutorial, you follow these steps to create a univariate time series experiment:
1. [Create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=en#step0)
2. [Create an AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=en#step1)
3. [Configure the experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=en#step2)
4. [Review experiment results](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=en#step3)
5. [Deploy the trained model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=en#step4)
6. [Test the deployed model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=en#step5)
Create a project
Follow these steps to download the [Mini_Daily_Temperatures](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/de4d953f2a766fbc0469723eba0d93ef) data set from the Samples and create an empty project:
1. From the navigation menu , click Samples and download a local copy of the [Mini_Daily_Temperatures](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/de4d953f2a766fbc0469723eba0d93ef) data set.
2. From the navigation menu , click Projects > View all projects, then click New Project.
1. Click Create an empty project.
2. Enter a name and optional description for your project.
3. Click Create.
Create an AutoAI experiment
Follow these steps to create an AutoAI experiment and add sample data to your experiment:
1. On the Assets tab from within your project, click New asset > Build machine learning models automatically.
2. Specify a name and optional description for your experiment, then select Create.
3. Select Associate a Machine Learning service instance to create a new service instance or associate an existing instance with your project. Click Reload to confirm your configuration.
4. Click Create.
5. To add the sample data, choose one of these methods:
* If you downloaded your file locally, upload the training data file, Daily_Min_Temperatures.csv, by clicking Browse and then following the prompts.
* If you already uploaded your file to your project, click Select from project, then select the Data asset tab and choose Daily_Min_Temperatures.csv.
Configure the experiment
Follow these steps to configure your univariate AutoAI time series experiment:
1. Click Yes for the option to create a Time Series Forecast.
2. Choose as prediction columns: Min_Temp.
3. Choose as the date/time column: Date.

4. Click Experiment settings to configure the experiment:
1. In the Data source page, select the Time series tab.
2. For this tutorial, accept the default value for Number of backtests (4), Gap length (0 steps), and Holdout length (20 steps).
Note: The validation length changes if you change the value of any of the parameters: Number of backtests, Gap length, or Holdout length.
3. Click Cancel to exit from the Experiment settings.

5. Click Run experiment to begin the training.
Review experiment results
The experiment takes several minutes to complete. As the experiment trains, a visualization shows the transformations that are used to create pipelines. Follow these steps to review experiment results and save the pipeline with the best performance.
1. (Optional): Hover over any node in the visualization to get details on the transformation for a particular pipeline.

2. (Optional): After the pipelines are listed on the leaderboard, click Pipeline comparison to see how they differ. For example:

3. (Optional): When the training completes, the top three best-performing pipelines are saved to the leaderboard. Click View discarded pipelines to review the lower-performing pipelines.

4. Select the pipeline with Rank 1 and click Save as to create your model. Then, select Create. This action saves the pipeline under the Models section in the Assets tab.
Deploy the trained model
Before you can use your trained model to make predictions on new data, you must deploy the model. Follow these steps to promote your trained model to a deployment space:
1. You can deploy the model from the model details page. To access the model details page, choose one of these methods:
* Click the model’s name in the notification that is displayed when you save the model.
* Open the Assets page for the project that contains the model and click the model’s name in the Machine Learning Model section.
2. Click Promote to Deployment Space, then select or create a deployment space where the model will be deployed.
(Optional): To create a deployment space, follow these steps:
1. From the Target space list, select Create a new deployment space.
2. Enter a name for your deployment space.
3. To associate a machine learning instance, go to Select machine learning service (optional) and select an instance from the list.
4. Click Create.
3. After you select or create your space, click Promote.
4. Click the deployment space link from the notification.
5. From the Assets tab of the deployment space:
1. Hover over the model’s name and click the deployment icon .
2. In the page that opens, complete the fields:
1. Specify a name for the deployment.
2. Select Online as the Deployment type.
3. Click Create.
After the deployment is complete, click the Deployments tab and select the deployment name to view the details page.
Test the deployed model
Follow these steps to test the deployed model from the deployment details page:
1. On the Test tab of the deployment details page, click the terminal icon  and enter the following JSON test data:
{ "input_data": [ {
"fields":
"Min_Temp"
],
"values":
7], 15]
]
} ] }
Note: The test data replicates the data fields for the model, except the prediction field.
2. Click Predict to predict the future minimum temperature.

Parent topic:[Building a time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html)
| # Tutorial: AutoAI univariate time series experiment #
Use sample data to train a univariate (single prediction column) time series experiment that predicts minimum daily temperatures\.
When you set up the experiment, you load data that tracks daily minimum temperatures for the city of Melbourne, Australia\. The experiment will generate a set of pipelines that use algorithms to predict future minimum daily temperatures\. After generating the pipelines, AutoAI compares and tests them, chooses the best performers, and presents them in a leaderboard for you to review\.
## Data set overview ##
The [*Mini\_Daily\_Temperatures*](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/de4d953f2a766fbc0469723eba0d93ef) data set describes the minimum daily temperatures over 10 years (1981\-1990) in the city of Melbourne, Australia\. The units are in degrees Celsius and the data set contains 3650 observations\. The source of the data is the Australian Bureau of Meteorology\. Details about the data set are described here:

<!-- <ul> -->
* You will use the `Min_Temp` column as the prediction column to build pipelines and forecast the future daily minimum temperatures\. Before the pipeline training, the `date` column and `Min_Temp` column are used together to figure out the appropriate lookback window\.
* The prediction column forecasts a prediction for the daily minimum temperature on a specified day\.
* The sample data is *structured* in rows and columns and saved as a \.csv file\.
<!-- </ul> -->
## Tasks overview ##
In this tutorial, you follow these steps to create a univariate time series experiment:
<!-- <ol> -->
1. [Create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=en#step0)
2. [Create an AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=en#step1)
3. [Configure the experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=en#step2)
4. [Review experiment results](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=en#step3)
5. [Deploy the trained model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=en#step4)
6. [Test the deployed model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=en#step5)
<!-- </ol> -->
## Create a project ##
Follow these steps to download the [*Mini\_Daily\_Temperatures*](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/de4d953f2a766fbc0469723eba0d93ef) data set from the **Samples** and create an empty project:
<!-- <ol> -->
1. From the navigation menu , click **Samples** and download a local copy of the [*Mini\_Daily\_Temperatures*](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/de4d953f2a766fbc0469723eba0d93ef) data set\.
2. From the navigation menu , click **Projects > View all projects**, then click **New Project**\.
<!-- <ol> -->
1. Click **Create an empty project**.
2. Enter a name and optional description for your project.
3. Click **Create**.
<!-- </ol> -->
<!-- </ol> -->
## Create an AutoAI experiment ##
Follow these steps to create an AutoAI experiment and add sample data to your experiment:
<!-- <ol> -->
1. On the *Assets* tab from within your project, click **New asset > Build machine learning models automatically**\.
2. Specify a name and optional description for your experiment, then select **Create**\.
3. Select **Associate a Machine Learning service instance** to create a new service instance or associate an existing instance with your project\. Click **Reload** to confirm your configuration\.
4. Click **Create**\.
5. To add the sample data, choose one of these methods:
<!-- <ul> -->
* If you downloaded your file locally, upload the training data file, *Daily\_Min\_Temperatures.csv*, by clicking **Browse** and then following the prompts.
* If you already uploaded your file to your project, click **Select from project**, then select the **Data asset** tab and choose *Daily\_Min\_Temperatures.csv*.
<!-- </ul> -->
<!-- </ol> -->
## Configure the experiment ##
Follow these steps to configure your univariate AutoAI time series experiment:
<!-- <ol> -->
1. Click **Yes** for the option to create a Time Series Forecast\.
2. Choose as prediction columns: `Min_Temp`\.
3. Choose as the date/time column: `Date`\.

4. Click **Experiment settings** to configure the experiment:
<!-- <ol> -->
1. In the **Data source** page, select the **Time series** tab.
2. For this tutorial, accept the default value for *Number of backtests* (4), *Gap length* (0 steps), and *Holdout length* (20 steps).
Note: The validation length changes if you change the value of any of the parameters: *Number of backtests*, *Gap length*, or *Holdout length*.
3. Click **Cancel** to exit from the *Experiment settings*.
<!-- </ol> -->

5. Click **Run experiment** to begin the training\.
<!-- </ol> -->
## Review experiment results ##
The experiment takes several minutes to complete\. As the experiment trains, a visualization shows the transformations that are used to create pipelines\. Follow these steps to review experiment results and save the pipeline with the best performance\.
<!-- <ol> -->
1. (Optional): Hover over any node in the visualization to get details on the transformation for a particular pipeline\.

2. (Optional): After the pipelines are listed on the leaderboard, click **Pipeline comparison** to see how they differ\. For example:

3. (Optional): When the training completes, the top three best\-performing pipelines are saved to the leaderboard\. Click **View discarded pipelines** to review the lower\-performing pipelines\.

4. Select the pipeline with Rank 1 and click **Save as** to create your model\. Then, select **Create**\. This action saves the pipeline under the **Models** section in the Assets tab\.
<!-- </ol> -->
## Deploy the trained model ##
Before you can use your trained model to make predictions on new data, you must deploy the model\. Follow these steps to promote your trained model to a deployment space:
<!-- <ol> -->
1. You can deploy the model from the model details page\. To access the model details page, choose one of these methods:
<!-- <ul> -->
* Click the model’s name in the notification that is displayed when you save the model.
* Open the *Assets* page for the project that contains the model and click the model’s name in the *Machine Learning Model* section.
<!-- </ul> -->
2. Click **Promote to Deployment Space**, then select or create a deployment space where the model will be deployed\.
(Optional): To create a deployment space, follow these steps:
<!-- <ol> -->
1. From the **Target space** list, select **Create a new deployment space**.
2. Enter a name for your deployment space.
3. To associate a machine learning instance, go to **Select machine learning service (optional)** and select an instance from the list.
4. Click **Create**.
<!-- </ol> -->
3. After you select or create your space, click **Promote**\.
4. Click the deployment space link from the notification\.
5. From the Assets tab of the deployment space:
<!-- <ol> -->
1. Hover over the model’s name and click the deployment icon .
2. In the page that opens, complete the fields:
<!-- <ol> -->
1. Specify a name for the deployment.
2. Select **Online** as the *Deployment type*.
3. Click **Create**.
<!-- </ol> -->
<!-- </ol> -->
<!-- </ol> -->
After the deployment is complete, click the **Deployments** tab and select the deployment name to view the details page\.
## Test the deployed model ##
Follow these steps to test the deployed model from the deployment details page:
<!-- <ol> -->
1. On the **Test tab** of the deployment details page, click the terminal icon  and enter the following JSON test data:
{ "input_data": [ {
"fields":
"Min_Temp"
],
"values":
7], 15]
]
} ] }
Note: The test data replicates the data fields for the model, except the prediction field.
2. Click **Predict** to predict the future minimum temperature\.
<!-- </ol> -->

**Parent topic:**[Building a time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html)
<!-- </article "role="article" "> -->
|
46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-view-results.html?context=cdpaas&locale=en | Selecting an AutoAI model | Selecting an AutoAI model
AutoAI automatically prepares data, applies algorithms, and attempts to build model pipelines that are best suited for your data and use case. Learn how to evaluate the model pipelines so that you can save one as a model.
Reviewing experiment results
During AutoAI training, your data set is split into a training part and a hold-out part. The training part is used by the AutoAI training stages to generate the AutoAI model pipelines and cross-validation scores that are used to rank them. After AutoAI training, the hold-out part is used for the resulting pipeline model evaluation and computation of performance information such as ROC curves and confusion matrices, which are shown in the leaderboard. The training/hold-out split ratio is 90/10.
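For intuition only, the following sketch reproduces a 90/10 split locally with scikit-learn. It illustrates the ratio, not the exact splitting mechanism that AutoAI applies internally; the file name is a placeholder.
# Illustrative 90/10 train/hold-out split (not the internal AutoAI implementation)
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("training_data.csv") # placeholder file name
train_df, holdout_df = train_test_split(df, test_size=0.1, random_state=42)
print(len(train_df), len(holdout_df)) # roughly 90% and 10% of the rows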
As the training progresses, you are presented with a dynamic infographic and leaderboard. Hover over nodes in the infographic to explore the factors that pipelines share and their unique properties. For a guide to the data in the infographic, click the Legend tab in the information panel. Or, to see a different view of the pipeline creation, click the Experiment details tab of the notification panel, then click Switch views to view the progress map. In either view, click a pipeline node to view the associated pipeline in the leaderboard. The leaderboard contains model pipelines that are ranked by cross-validation scores.
View the pipeline transformations
Hover over a node in the infographic to view the transformations for a pipeline. The sequence of data transformations consists of a pre-processing transformer and a sequence of data transformers, if feature engineering was performed for the pipeline. The algorithm is determined by model selection and optimization steps during AutoAI training.

See [Implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html) to review the technical details for creating the pipelines.
View the leaderboard
Each model pipeline is scored for various metrics and then ranked. The default ranking metric for binary classification models is the area under the ROC curve. For multi-class classification models the default metric is accuracy. For regression models, the default metric is the root mean-squared error (RMSE). The highest-ranked pipelines display in a leaderboard, so you can view more information about them. The leaderboard also provides the option to save select model pipelines after you review them.

You can evaluate the pipelines as follows:
* Click a pipeline in the leaderboard to view more detail about the metrics and performance.
* Click Compare to view how the top pipelines compare.
* Sort the leaderboard by a different metric.

Viewing the confusion matrix
One of the details you can view for a pipeline for a binary classification experiment is a Confusion matrix.
The confusion matrix is based on the holdout data, which is the portion of the training dataset that is not used for training the model pipeline but only used to measure its performance on data that was not seen during training.
In a binary classification problem with a positive class and a negative class, the confusion matrix summarizes the pipeline model’s positive and negative predictions in four quadrants depending on their correctness regarding the positive or negative class labels of the holdout data set.
For example, the Bank sample experiment seeks to identify customers that take promotions that are offered to them. The confusion matrix for the top-ranked pipeline is:

The positive class is ‘yes’ (meaning a user takes the promotion). You can see that the count of true negatives, that is, customers that the model correctly predicted would refuse their promotions, is high.
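If you want to reproduce the quadrant counts outside the AutoAI interface, you can compute a confusion matrix with scikit-learn. The labels in this sketch are illustrative only and do not correspond to the Bank sample data:
# Illustrative confusion matrix for a binary target with classes "no" and "yes"
from sklearn.metrics import confusion_matrix

y_true = ["no", "no", "yes", "yes", "no", "yes"] # holdout labels
y_pred = ["no", "no", "yes", "no", "no", "yes"] # pipeline predictions
tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=["no", "yes"]).ravel()
print(tn, fp, fn, tp) # true negatives, false positives, false negatives, true positives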
Click the items in the navigation menu to view other details about the selected pipeline. For example, Feature importance shows which data features contribute most to your prediction output.
Save a pipeline as a model
When you are satisfied with a pipeline, save it using one of these methods:
* Click Save model to save the candidate pipeline as a model to your project so you can test and deploy it.
* Click [Save as notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-notebook.html) to create and save an auto-generated notebook to your project. You can review the code or run the experiment in the notebook.
Next steps
Promote the trained model to a deployment space so that you can test it with new data and generate predictions.
Learn more
[AutoAI implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html)
Parent topic:[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
| # Selecting an AutoAI model #
AutoAI automatically prepares data, applies algorithms, and attempts to build model pipelines that are best suited for your data and use case\. Learn how to evaluate the model pipelines so that you can save one as a model\.
## Reviewing experiment results ##
During AutoAI training, your data set is split into a training part and a hold\-out part\. The training part is used by the AutoAI training stages to generate the AutoAI model pipelines and cross\-validation scores that are used to rank them\. After AutoAI training, the hold\-out part is used for the resulting pipeline model evaluation and computation of performance information such as ROC curves and confusion matrices, which are shown in the leaderboard\. The training/hold\-out split ratio is 90/10\.
As the training progresses, you are presented with a dynamic infographic and leaderboard\. Hover over nodes in the infographic to explore the factors that pipelines share and their unique properties\. For a guide to the data in the infographic, click the Legend tab in the information panel\. Or, to see a different view of the pipeline creation, click the Experiment details tab of the notification panel, then click **Switch views** to view the progress map\. In either view, click a pipeline node to view the associated pipeline in the leaderboard\. The leaderboard contains model pipelines that are ranked by cross\-validation scores\.
## View the pipeline transformations ##
Hover over a node in the infographic to view the transformations for a pipeline\. The sequence of data transformations consists of a pre\-processing transformer and a sequence of data transformers, if feature engineering was performed for the pipeline\. The algorithm is determined by model selection and optimization steps during AutoAI training\.

See [Implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html) to review the technical details for creating the pipelines\.
## View the leaderboard ##
Each model pipeline is scored for various metrics and then ranked\. The default ranking metric for binary classification models is the area under the ROC curve\. For multi\-class classification models the default metric is accuracy\. For regression models, the default metric is the root mean\-squared error (RMSE)\. The highest\-ranked pipelines display in a leaderboard, so you can view more information about them\. The leaderboard also provides the option to save select model pipelines after you review them\.

You can evaluate the pipelines as follows:
<!-- <ul> -->
* Click a pipeline in the leaderboard to view more detail about the metrics and performance\.
* Click **Compare** to view how the top pipelines compare\.
* Sort the leaderboard by a different metric\.
<!-- </ul> -->

### Viewing the confusion matrix ###
One of the details you can view for a pipeline for a binary classification experiment is a *Confusion matrix\.*
The confusion matrix is based on the holdout data, which is the portion of the training dataset that is not used for training the model pipeline but only used to measure its performance on data that was not seen during training\.
In a binary classification problem with a positive class and a negative class, the confusion matrix summarizes the pipeline model’s positive and negative predictions in four quadrants depending on their correctness regarding the positive or negative class labels of the holdout data set\.
For example, the Bank sample experiment seeks to identify customers that take promotions that are offered to them\. The confusion matrix for the top\-ranked pipeline is:

The positive class is ‘yes’ (meaning a user takes the promotion)\. You can see that the count of true negatives, that is, customers that the model correctly predicted would refuse their promotions, is high\.
Click the items in the navigation menu to view other details about the selected pipeline\. For example, **Feature importance** shows which data features contribute most to your prediction output\.
## Save a pipeline as a model ##
When you are satisfied with a pipeline, save it using one of these methods:
<!-- <ul> -->
* Click **Save model** to save the candidate pipeline as a model to your project so you can test and deploy it\.
* Click [**Save as notebook**](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-notebook.html) to create and save an auto\-generated notebook to your project\. You can review the code or run the experiment in the notebook\.
<!-- </ul> -->
## Next steps ##
Promote the trained model to a deployment space so that you can test it with new data and generate predictions\.
## Learn more ##
[AutoAI implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html)
**Parent topic:**[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
<!-- </article "role="article" "> -->
|
C926DFB3758881E6698F630E496F3817101E4176 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html?context=cdpaas&locale=en | AutoAI tutorial: Build a Binary Classification Model | AutoAI tutorial: Build a Binary Classification Model
This tutorial guides you through training a model to predict if a customer is likely to buy a tent from an outdoor equipment store.
Create an AutoAI experiment to build a model that analyzes your data and selects the best model type and algorithms to produce, train, and optimize pipelines. After you review the pipelines, save one as a model, deploy it, and then test it to get a prediction.
Watch this video to see a preview of the steps in this tutorial.
Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform.
This video provides a visual method to learn the concepts and tasks in this documentation.
* Transcript
Synchronize transcript with video
Time Transcript
00:00 In this video, you will see how to build a binary classification model that assesses the likelihood that a customer of an outdoor equipment company will buy a tent.
00:11 This video uses a data set called "GoSales", which you'll find in the Gallery.
00:16 View the data set.
00:20 The feature columns are "GENDER", "AGE", "MARITAL_STATUS", and "PROFESSION" and contain the attributes on which the machine learning model will base predictions.
00:31 The label columns are "IS_TENT", "PRODUCT_LINE", and "PURCHASE_AMOUNT" and contain historical outcomes that the models could be trained to predict.
00:44 Add this data set to the "Machine Learning" project and then go to the project.
00:56 You'll find the GoSales.csv file with your other data assets.
01:02 Add to the project an "AutoAI experiment".
01:08 This project already has the Watson Machine Learning service associated.
01:13 If you haven't done that yet, first, watch the video showing how to run an AutoAI experiment based on a sample.
01:22 Just provide a name for the experiment and then click "Create".
01:30 The AutoAI experiment builder displays.
01:33 You first need to load the training data.
01:36 In this case, the data set will be from the project.
01:40 Select the GoSales.csv file from the list.
01:45 AutoAI reads the data set and lists the columns found in the data set.
01:50 Since you want the model to predict the likelihood that a given customer will purchase a tent, select "IS_TENT" as the column to predict.
01:59 Now, edit the experiment settings.
02:03 First, look at the settings for the data source.
02:06 If you have a large data set, you can run the experiment on a subsample of rows and you can configure how much of the data will be used for training and how much will be used for evaluation.
02:19 The default is a 90%/10% split, where 10% of the data is reserved for evaluation.
02:27 You can also select which columns from the data set to include when running the experiment.
02:35 On the "Prediction" panel, you can select a prediction type.
02:39 In this case, AutoAI analyzed your data and determined that the "IS_TENT" column contains true-false information, making this data suitable for a "Binary classification" model.
02:52 The positive class is "TRUE" and the recommended metric is "Accuracy".
03:01 If you'd like, you can choose specific algorithms to consider for this experiment and the number of top algorithms for AutoAI to test, which determines the number of pipelines generated.
03:16 On the "Runtime" panel, you can review other details about the experiment.
03:21 In this case, accepting the default settings makes the most sense.
03:25 Now, run the experiment.
03:28 AutoAI first loads the data set, then splits the data into training data and holdout data.
03:37 Then wait, as the "Pipeline leaderboard" fills in to show the generated pipelines using different estimators, such as XGBoost classifier, or enhancements such as hyperparameter optimization and feature engineering, with the pipelines ranked based on the accuracy metric.
03:58 Hyperparameter optimization is a mechanism for automatically exploring a search space for potential hyperparameters, building a series of models and comparing the models using metrics of interest.
04:10 Feature engineering attempts to transform the raw data into the combination of features that best represents the problem to achieve the most accurate prediction.
04:21 Okay, the run has completed.
04:24 By default, you'll see the "Relationship map".
04:28 But you can swap views to see the "Progress map".
04:32 You may want to start with comparing the pipelines.
04:36 This chart provides metrics for the eight pipelines, viewed by cross validation score or by holdout score.
04:46 You can see the pipelines ranked based on other metrics, such as average precision.
04:55 Back on the "Experiment summary" tab, expand a pipeline to view the model evaluation measures and ROC curve.
05:03 During AutoAI training, your data set is split into two parts: training data and holdout data.
05:11 The training data is used by the AutoAI training stages to generate the model pipelines, and cross validation scores are used to rank them.
05:21 After training, the holdout data is used for the resulting pipeline model evaluation and computation of performance information, such as ROC curves and confusion matrices.
05:33 You can view an individual pipeline to see more details in addition to the confusion matrix, precision recall curve, model information, and feature importance.
05:46 This pipeline had the highest ranking, so you can save this as a machine learning model.
05:52 Just accept the defaults and save the model.
05:56 Now that you've trained the model, you're ready to view the model and deploy it.
06:04 The "Overview" tab shows a model summary and the input schema.
06:09 To deploy the model, you'll need to promote it to a deployment space.
06:15 Select the deployment space from the list, add a description for the model, and click "Promote".
06:24 Use the link to go to the deployment space.
06:28 Here's the model you just created, which you can now deploy.
06:33 In this case, it will be an online deployment.
06:37 Just provide a name for the deployment and click "Create".
06:41 Then wait, while the model is deployed.
06:44 When the model deployment is complete, view the deployment.
06:49 On the "API reference" tab, you'll find the scoring endpoint for future reference.
06:56 You'll also find code snippets for various programming languages to utilize this deployment from your application.
07:05 On the "Test" tab, you can test the model prediction.
07:09 You can either enter test input data or paste JSON input data, and click "Predict".
07:20 This shows that there's a very high probability that the first customer will buy a tent and a very high probability that the second customer will not buy a tent.
07:33 And back in the project, you'll find the AutoAI experiment and the model on the "Assets" tab.
07:44 Find more videos in the Cloud Pak for Data as a Service documentation.
Overview of the data sets
The sample data is structured (in rows and columns) and saved in a .csv file format.
You can view the sample data file in a text editor or spreadsheet program:

What do you want to predict?
Choose the column whose values your model predicts.
In this tutorial, the model predicts the values of the IS_TENT column:
* IS_TENT: Whether the customer bought a tent
The model that is built in this tutorial predicts whether a customer is likely to purchase a tent.
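Optionally, you can confirm the prediction column locally before you run the experiment. The following sketch assumes that GoSales.csv is in your working directory:
# Optional check of the prediction column in the sample data
import pandas as pd

df = pd.read_csv("GoSales.csv")
print(df["IS_TENT"].value_counts()) # distribution of the binary target
print(df.head()) # feature columns such as GENDER, AGE, MARITAL_STATUS, PROFESSION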
Tasks overview
This tutorial presents the basic steps for building and training a machine learning model with AutoAI:
1. [Create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html?context=cdpaas&locale=enstep0)
2. [Create an AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html?context=cdpaas&locale=enstep1)
3. [Training the experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html?context=cdpaas&locale=enstep2)
4. [Deploy the trained model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html?context=cdpaas&locale=enstep3)
5. [Test the deployed model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html?context=cdpaas&locale=enstep4)
6. [Creating a batch to score the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html?context=cdpaas&locale=enstep5)
Task 1: Create a project
1. From the Samples, download the [GoSales](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/aa07a773f71cf1172a349f33e2028e4e?context=wx) data set file to your local computer.
2. From the Projects page, to create a new project, select New Project.
a. Select Create an empty project.
b. Enter a name for your project.
c. Click Create.
Task 2: Create an AutoAI experiment
1. On the Assets tab from within your project, click New asset > Build machine learning models automatically.
2. Specify a name and optional description for your new experiment.
3. Select the Associate a Machine Learning service instance link to associate a Watson Machine Learning service instance with your project. Click Reload to confirm your configuration.
4. To add a data source, you can choose one of these options:
a. If you downloaded your file locally, upload the training data file, GoSales.csv, from your local computer. Drag the file onto the data panel or click browse and follow the prompts.
b. If you already uploaded your file to your project, click select from project, then select the data asset tab and choose GoSales.csv.
Task 3: Training the experiment
1. In Configuration details, select No for the option to create a Time Series Forecast.
2. Choose IS_TENT as the column to predict. AutoAI analyzes your data and determines that the IS_TENT column contains True and False information, making this data suitable for a binary classification model. The default metric for a binary classification is ROC/AUC.

3. Click Run experiment. As the model trains, an infographic shows the process of building the pipelines.
Note: You might see slight differences in results based on the Cloud Pak for Data platform and version you use.

For a list of algorithms or estimators that are available with each machine learning technique in AutoAI, see [AutoAI implementation detail](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html).
4. When all the pipelines are created, you can compare their accuracy on the Pipeline leaderboard.

5. Select the pipeline with Rank 1 and click Save as to create your model. Then, select Create. This option saves the pipeline under the Models section in the Assets tab.
Task 4: Deploy the trained model
1. You can deploy the model from the model details page. You can access the model details page in one of these ways:
1. Clicking the model’s name in the notification displayed when you save the model.
2. Open the Assets tab for the project, select the Models section and select the model’s name.
2. Click Promote to Deployment Space then select or create the space where the model will be deployed.
1. To create a deployment space:
1. Enter a name.
2. Associate it with a Machine Learning Service.
3. Select Create.
3. After you create your deployment space or select an existing one, select Promote.
4. Click the deployment space link from the notification.
5. From the Assets tab of the deployment space:
1. Hover over the model’s name and click the deployment icon .
1. In the page that opens, complete the fields:
1. Select Online as the Deployment type.
2. Specify a name for the deployment.
3. Click Create.

After the deployment is complete, click Deployments and select the deployment name to view the details page.
Task 5: Test the deployed model
You can test the deployed model from the deployment details page:
1. On the Test tab of the deployment details page, complete the form with test values or enter JSON test data by clicking the terminal icon  to provide the following JSON input data.
{"input_data":[{
"fields":
"GENDER","AGE","MARITAL_STATUS","PROFESSION","PRODUCT_LINE","PURCHASE_AMOUNT"],
"values": "M",27,"Single", "Professional","Camping Equipment",144.78]]
}]}
Note: The test data replicates the data fields for the model, except for the prediction field.
2. Click Predict to predict whether a customer with the entered attributes is likely to buy a tent. The resulting prediction indicates that a customer with the attributes entered has a high probability of purchasing a tent.

Task 6: Creating a batch job to score the model
For a batch deployment, you provide input data, also known as the model payload, in a CSV file. The data must be structured like the training data, with the same column headers. The batch job processes each row of data and creates a corresponding prediction.
In a real scenario, you would submit new data to the model to get a score. However, this tutorial uses the same training data GoSales-updated.csv that you downloaded as part of the tutorial setup. Ensure that you delete the IS_TENT column and save the file before you upload it to the batch job. When deploying a model, you can add the payload data to a project, upload it to a space, or link to it in a storage repository such as a Cloud Object Storage bucket. For this tutorial, upload the file directly to the deployment space.
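For example, the following sketch removes the prediction column with pandas before you upload the file. The file name follows the tutorial; adjust it if your copy is named differently.
# Prepare the batch payload by dropping the prediction column
import pandas as pd

payload_df = pd.read_csv("GoSales-updated.csv")
payload_df = payload_df.drop(columns=["IS_TENT"]) # the deployed model supplies this prediction
payload_df.to_csv("GoSales-updated.csv", index=False)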
Step 1: Add data to space
From the Assets page of the deployment space:
1. Click Add to space then choose Data.
2. Upload the file GoSales-updated.csv file that you saved locally.
Step 2: Create the batch deployment
Now you can define the batch deployment.
1. Click the deployment icon next to the model’s name.
2. Enter a name for the deployment.
1. Select Batch as the Deployment type.
2. Choose the smallest hardware specification.
3. Click Create.
Step 3: Create the batch job
The batch job runs the deployment. To create the job, you must specify the input data and the name for the output file. You can set up a job to run on a schedule or run immediately.
1. Click New job.
2. Specify a name for the job.
3. Select the smallest hardware specification.
4. (Optional): Set a schedule and choose whether to receive notifications.
5. Upload the input file: GoSales-updated.csv
6. Name the output file: GoSales-output.csv
7. Review and click Create to run the job.
Step 4: View the output
When the deployment status changes to Deployed, return to the Assets page for the deployment space. The file GoSales-output.csv was created and added to your assets list.
Click the download icon next to the output file and open the file in an editor. You can review the prediction results for the customer information that is submitted for batch processing.
For each case, the prediction that is returned indicates the confidence score of whether a customer will buy a tent.
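If you prefer to review the results programmatically after downloading the file, you can load it with pandas. The exact output column names can vary, so inspect the header first.
# Inspect the batch scoring output locally
import pandas as pd

results = pd.read_csv("GoSales-output.csv")
print(results.columns.tolist()) # check the prediction and confidence column names
print(results.head()) # each row holds the prediction for one payload row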
Next steps
[Building an AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html)
Parent topic:[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
| # AutoAI tutorial: Build a Binary Classification Model #
This tutorial guides you through training a model to predict if a customer is likely to buy a tent from an outdoor equipment store\.
Create an AutoAI experiment to build a model that analyzes your data and selects the best model type and algorithms to produce, train, and optimize pipelines\. After you review the pipelines, save one as a model, deploy it, and then test it to get a prediction\.
Watch this video to see a preview of the steps in this tutorial\.
Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform\.
This video provides a visual method to learn the concepts and tasks in this documentation\.
<!-- <ul> -->
* Transcript
Synchronize transcript with video
<!-- <table "class="bx--data-table bx--data-table--zebra" style="border-collapse: collapse; border: none;" "> -->
| Time | Transcript |
| ----- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 00:00 | In this video, you will see how to build a binary classification model that assesses the likelihood that a customer of an outdoor equipment company will buy a tent. |
| 00:11 | This video uses a data set called "GoSales", which you'll find in the Gallery. |
| 00:16 | View the data set. |
| 00:20 | The feature columns are "GENDER", "AGE", "MARITAL\_STATUS", and "PROFESSION" and contain the attributes on which the machine learning model will base predictions. |
| 00:31 | The label columns are "IS\_TENT", "PRODUCT\_LINE", and "PURCHASE\_AMOUNT" and contain historical outcomes that the models could be trained to predict. |
| 00:44 | Add this data set to the "Machine Learning" project and then go to the project. |
| 00:56 | You'll find the GoSales.csv file with your other data assets. |
| 01:02 | Add to the project an "AutoAI experiment". |
| 01:08 | This project already has the Watson Machine Learning service associated. |
| 01:13 | If you haven't done that yet, first, watch the video showing how to run an AutoAI experiment based on a sample. |
| 01:22 | Just provide a name for the experiment and then click "Create". |
| 01:30 | The AutoAI experiment builder displays. |
| 01:33 | You first need to load the training data. |
| 01:36 | In this case, the data set will be from the project. |
| 01:40 | Select the GoSales.csv file from the list. |
| 01:45 | AutoAI reads the data set and lists the columns found in the data set. |
| 01:50 | Since you want the model to predict the likelihood that a given customer will purchase a tent, select "IS\_TENT" as the column to predict. |
| 01:59 | Now, edit the experiment settings. |
| 02:03 | First, look at the settings for the data source. |
| 02:06 | If you have a large data set, you can run the experiment on a subsample of rows and you can configure how much of the data will be used for training and how much will be used for evaluation. |
| 02:19 | The default is a 90%/10% split, where 10% of the data is reserved for evaluation. |
| 02:27 | You can also select which columns from the data set to include when running the experiment. |
| 02:35 | On the "Prediction" panel, you can select a prediction type. |
| 02:39 | In this case, AutoAI analyzed your data and determined that the "IS\_TENT" column contains true-false information, making this data suitable for a "Binary classification" model. |
| 02:52 | The positive class is "TRUE" and the recommended metric is "Accuracy". |
| 03:01 | If you'd like, you can choose specific algorithms to consider for this experiment and the number of top algorithms for AutoAI to test, which determines the number of pipelines generated. |
| 03:16 | On the "Runtime" panel, you can review other details about the experiment. |
| 03:21 | In this case, accepting the default settings makes the most sense. |
| 03:25 | Now, run the experiment. |
| 03:28 | AutoAI first loads the data set, then splits the data into training data and holdout data. |
| 03:37 | Then wait, as the "Pipeline leaderboard" fills in to show the generated pipelines using different estimators, such as XGBoost classifier, or enhancements such as hyperparameter optimization and feature engineering, with the pipelines ranked based on the accuracy metric. |
| 03:58 | Hyperparameter optimization is a mechanism for automatically exploring a search space for potential hyperparameters, building a series of models and comparing the models using metrics of interest. |
| 04:10 | Feature engineering attempts to transform the raw data into the combination of features that best represents the problem to achieve the most accurate prediction. |
| 04:21 | Okay, the run has completed. |
| 04:24 | By default, you'll see the "Relationship map". |
| 04:28 | But you can swap views to see the "Progress map". |
| 04:32 | You may want to start with comparing the pipelines. |
| 04:36 | This chart provides metrics for the eight pipelines, viewed by cross validation score or by holdout score. |
| 04:46 | You can see the pipelines ranked based on other metrics, such as average precision. |
| 04:55 | Back on the "Experiment summary" tab, expand a pipeline to view the model evaluation measures and ROC curve. |
| 05:03 | During AutoAI training, your data set is split into two parts: training data and holdout data. |
| 05:11 | The training data is used by the AutoAI training stages to generate the model pipelines, and cross validation scores are used to rank them. |
| 05:21 | After training, the holdout data is used for the resulting pipeline model evaluation and computation of performance information, such as ROC curves and confusion matrices. |
| 05:33 | You can view an individual pipeline to see more details in addition to the confusion matrix, precision recall curve, model information, and feature importance. |
| 05:46 | This pipeline had the highest ranking, so you can save this as a machine learning model. |
| 05:52 | Just accept the defaults and save the model. |
| 05:56 | Now that you've trained the model, you're ready to view the model and deploy it. |
| 06:04 | The "Overview" tab shows a model summary and the input schema. |
| 06:09 | To deploy the model, you'll need to promote it to a deployment space. |
| 06:15 | Select the deployment space from the list, add a description for the model, and click "Promote". |
| 06:24 | Use the link to go to the deployment space. |
| 06:28 | Here's the model you just created, which you can now deploy. |
| 06:33 | In this case, it will be an online deployment. |
| 06:37 | Just provide a name for the deployment and click "Create". |
| 06:41 | Then wait, while the model is deployed. |
| 06:44 | When the model deployment is complete, view the deployment. |
| 06:49 | On the "API reference" tab, you'll find the scoring endpoint for future reference. |
| 06:56 | You'll also find code snippets for various programming languages to utilize this deployment from your application. |
| 07:05 | On the "Test" tab, you can test the model prediction. |
| 07:09 | You can either enter test input data or paste JSON input data, and click "Predict". |
| 07:20 | This shows that there's a very high probability that the first customer will buy a tent and a very high probability that the second customer will not buy a tent. |
| 07:33 | And back in the project, you'll find the AutoAI experiment and the model on the "Assets" tab. |
| 07:44 | Find more videos in the Cloud Pak for Data as a Service documentation. |
<!-- </table "class="bx--data-table bx--data-table--zebra" style="border-collapse: collapse; border: none;" "> -->
<!-- </ul> -->
## Overview of the data sets ##
The sample data is structured (in rows and columns) and saved in a \.csv file format\.
You can view the sample data file in a text editor or spreadsheet program:

#### What do you want to predict? ####
Choose the column whose values your model predicts\.
In this tutorial, the model predicts the values of the `IS_TENT` column:
<!-- <ul> -->
* `IS_TENT`: Whether the customer bought a tent
<!-- </ul> -->
The model that is built in this tutorial predicts whether a customer is likely to purchase a tent\.
## Tasks overview ##
This tutorial presents the basic steps for building and training a machine learning model with AutoAI:
<!-- <ol> -->
1. [Create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html?context=cdpaas&locale=en#step0)
2. [Create an AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html?context=cdpaas&locale=en#step1)
3. [Training the experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html?context=cdpaas&locale=en#step2)
4. [Deploy the trained model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html?context=cdpaas&locale=en#step3)
5. [Test the deployed model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html?context=cdpaas&locale=en#step4)
6. [Creating a batch to score the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html?context=cdpaas&locale=en#step5)
<!-- </ol> -->
## Task 1: Create a project ##
<!-- <ol> -->
1. From the *Samples*, download the [GoSales](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/aa07a773f71cf1172a349f33e2028e4e?context=wx) data set file to your local computer\.
2. From the Projects page, to create a new project, select **New Project**\.
a. Select **Create an empty project**.
b. Enter a name for your project.
c. Click **Create**.
<!-- </ol> -->
## Task 2: Create an AutoAI experiment ##
<!-- <ol> -->
1. On the *Assets* tab from within your project, click **New asset > Build machine learning models automatically**\.
2. Specify a name and optional description for your new experiment\.
3. Select the **Associate a Machine Learning service instance** link to associate a Watson Machine Learning service instance with your project\. Click **Reload** to confirm your configuration\.
4. To add a data source, you can choose one of these options:
a. If you downloaded your file locally, upload the training data file, *GoSales.csv*, from your local computer. Drag the file onto the data panel or click **browse** and follow the prompts.
b. If you already uploaded your file to your project, click **select from project**, then select the **data asset** tab and choose *GoSales.csv*.
<!-- </ol> -->
## Task 3: Training the experiment ##
<!-- <ol> -->
1. In **Configuration details**, select **No** for the option to create a Time Series Forecast\.
2. Choose `IS_TENT` as the column to predict\. AutoAI analyzes your data and determines that the `IS_TENT` column contains True and False information, making this data suitable for a binary classification model\. The default metric for a binary classification is ROC/AUC\.

3. Click **Run experiment**\. As the model trains, an infographic shows the process of building the pipelines\.
Note: You might see slight differences in results based on the Cloud Pak for Data platform and version you use.

For a list of algorithms or estimators that are available with each machine learning technique in AutoAI, see [AutoAI implementation detail](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html).
4. When all the pipelines are created, you can compare their accuracy on the **Pipeline leaderboard**\.

5. Select the pipeline with Rank 1 and click **Save as** to create your model\. Then, select **Create**\. This option saves the pipeline under the **Models** section in the **Assets** tab\.
<!-- </ol> -->
## Task 4: Deploy the trained model ##
<!-- <ol> -->
1. You can deploy the model from the model details page\. You can access the model details page in one of these ways:
<!-- <ol> -->
1. Clicking the model’s name in the notification displayed when you save the model.
2. Open the **Assets** tab for the project, select the **Models** section and select the model’s name.
<!-- </ol> -->
2. Click **Promote to Deployment Space** then select or create the space where the model will be deployed\.
<!-- <ol> -->
1. To create a deployment space:
<!-- <ol> -->
1. Enter a name.
2. Associate it with a Machine Learning Service.
3. Select **Create**.
<!-- </ol> -->
<!-- </ol> -->
3. After you create your deployment space or select an existing one, select **Promote**\.
4. Click the deployment space link from the notification\.
5. From the **Assets** tab of the deployment space:
<!-- <ol> -->
1. Hover over the model’s name and click the deployment icon .
<!-- <ol> -->
1. In the page that opens, complete the fields:
<!-- <ol> -->
1. Select **Online** as the **Deployment type**.
2. Specify a name for the deployment.
3. Click **Create**.
<!-- </ol> -->
<!-- </ol> -->
<!-- </ol> -->
<!-- </ol> -->

After the deployment is complete, click **Deployments** and select the deployment name to view the details page\.
## Task 5: Test the deployed model ##
You can test the deployed model from the deployment details page:
<!-- <ol> -->
1. On the **Test** tab of the deployment details page, complete the form with test values or enter JSON test data by clicking the terminal icon  to provide the following JSON input data\.
{"input_data":[{
"fields":
"GENDER","AGE","MARITAL_STATUS","PROFESSION","PRODUCT_LINE","PURCHASE_AMOUNT"],
"values": "M",27,"Single", "Professional","Camping Equipment",144.78]]
}]}
Note: The test data replicates the data fields for the model, except for the prediction field.
2. Click **Predict** to predict whether a customer with the entered attributes is likely to buy a tent\. The resulting prediction indicates that a customer with the attributes entered has a high probability of purchasing a tent\.
<!-- </ol> -->

## Task 6: Creating a batch job to score the model ##
For a batch deployment, you provide input data, also known as the model payload, in a CSV file\. The data must be structured like the training data, with the same column headers\. The batch job processes each row of data and creates a corresponding prediction\.
In a real scenario, you would submit new data to the model to get a score\. However, this tutorial uses the same training data GoSales\-updated\.csv that you downloaded as part of the tutorial setup\. Ensure that you delete the `IS_TENT` column and save the file before you upload it to the batch job\. When deploying a model, you can add the payload data to a project, upload it to a space, or link to it in a storage repository such as a Cloud Object Storage bucket\. For this tutorial, upload the file directly to the deployment space\.
### Step 1: Add data to space ###
From the **Assets** page of the deployment space:
<!-- <ol> -->
1. Click **Add to space** then choose **Data**\.
2. Upload the file GoSales\-updated\.csv file that you saved locally\.
<!-- </ol> -->
### Step 2: Create the batch deployment ###
Now you can define the batch deployment\.
<!-- <ol> -->
1. Click the deployment icon next to the model’s name\.
2. Enter a name for the deployment\.
<!-- <ol> -->
1. Select **Batch** as the **Deployment type**.
2. Choose the smallest hardware specification.
3. Click **Create**.
<!-- </ol> -->
<!-- </ol> -->
### Step 3: Create the batch job ###
The batch job runs the deployment\. To create the job, you must specify the input data and the name for the output file\. You can set up a job to run on a schedule or run immediately\.
<!-- <ol> -->
1. Click **New job**\.
2. Specify a name for the job\.
3. Select the smallest hardware specification\.
4. (Optional): Set a schedule and choose whether to receive notifications\.
5. Upload the input file: *GoSales\-updated\.csv*
6. Name the output file: *GoSales\-output\.csv*
7. Review and click **Create** to run the job\.
<!-- </ol> -->
### Step 4: View the output ###
When the deployment status changes to *Deployed*, return to the **Assets** page for the deployment space\. The file *GoSales\-output\.csv* was created and added to your assets list\.
Click the download icon next to the output file and open the file in an editor\. You can review the prediction results for the customer information that is submitted for batch processing\.
For each case, the prediction that is returned indicates the confidence score of whether a customer will buy a tent\.
## Next steps ##
[Building an AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html)
**Parent topic:**[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
<!-- </article "role="article" "> -->
|
7BF4B8F1F49406EEC43BE3B7350092F9165B0757 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/classificationandregression-guides.html?context=cdpaas&locale=en | SPSS predictive analytics classification and regression algorithms in notebooks | SPSS predictive analytics classification and regression algorithms in notebooks
You can use generalized linear model, linear regression, linear support vector machine, random trees, or CHAID SPSS predictive analytics algorithms in notebooks.
Generalized Linear Model
The Generalized Linear Model (GLE) is a commonly used analytical algorithm for different types of data. It covers not only widely used statistical models, such as linear regression for normally distributed targets, logistic models for binary or multinomial targets, and log linear models for count data, but also covers many useful statistical models via its very general model formulation. In addition to building the model, Generalized Linear Model provides other useful features such as variable selection, automatic selection of distribution and link function, and model evaluation statistics. This model has options for regularization, such as LASSO, ridge regression, elastic net, etc., and is also capable of handling very wide data.
For more details about how to choose distribution and link function, see Distribution and Link Function Combination.
Example code 1:
This example shows a GLE setting with specified distribution and link function, specified effects, intercept, conducting ROC curve, and printing correlation matrix. This scenario builds a model, then scores the model.
Python example:
from spss.ml.classificationandregression.generalizedlinear import GeneralizedLinear
from spss.ml.classificationandregression.params.effect import Effect
gle1 = GeneralizedLinear(). \
    setTargetField("Work_experience"). \
    setInputFieldList(["Beginning_salary", "Sex_of_employee", "Educational_level", "Minority_classification", "Current_salary"]). \
    setEffects([
        Effect(fields=["Beginning_salary"], nestingLevels=[0]),
        Effect(fields=["Sex_of_employee"], nestingLevels=[0]),
        Effect(fields=["Educational_level"], nestingLevels=[0]),
        Effect(fields=["Current_salary"], nestingLevels=[0]),
        Effect(fields=["Sex_of_employee", "Educational_level"], nestingLevels=[0, 0])]). \
    setIntercept(True). \
    setDistribution("NORMAL"). \
    setLinkFunction("LOG"). \
    setAnalysisType("BOTH"). \
    setConductRocCurve(True)
gleModel1 = gle1.fit(data)
PMML = gleModel1.toPMML()
statXML = gleModel1.statXML()
predictions1 = gleModel1.transform(data)
predictions1.show()
Example code 2:
This example shows a GLE setting with unspecified distribution and link function, and variable selection using the forward stepwise method. This scenario uses the forward stepwise method to select distribution, link function and effects, then builds and scores the model.
Python example:
from spss.ml.classificationandregression.generalizedlinear import GeneralizedLinear
from spss.ml.classificationandregression.params.effect import Effect
gle2 = GeneralizedLinear(). \
    setTargetField("Work_experience"). \
    setInputFieldList(["Beginning_salary", "Sex_of_employee", "Educational_level", "Minority_classification", "Current_salary"]). \
    setEffects([
        Effect(fields=["Beginning_salary"], nestingLevels=[0]),
        Effect(fields=["Sex_of_employee"], nestingLevels=[0]),
        Effect(fields=["Educational_level"], nestingLevels=[0]),
        Effect(fields=["Current_salary"], nestingLevels=[0])]). \
    setIntercept(True). \
    setDistribution("UNKNOWN"). \
    setLinkFunction("UNKNOWN"). \
    setAnalysisType("BOTH"). \
    setUseVariableSelection(True). \
    setVariableSelectionMethod("FORWARD_STEPWISE")
gleModel2 = gle2.fit(data)
PMML = gleModel2.toPMML()
statXML = gleModel2.statXML()
predictions2 = gleModel2.transform(data)
predictions2.show()
Example code 3:
This example shows a GLE setting with unspecified distribution, specified link function, and variable selection using the LASSO method, with two-way interaction detection and automatic penalty parameter selection. This scenario detects two-way interaction for effects, then uses the LASSO method to select distribution and effects using automatic penalty parameter selection, then builds and scores the model.
Python example:
from spss.ml.classificationandregression.generalizedlinear import GeneralizedLinear
from spss.ml.classificationandregression.params.effect import Effect
gle3 = GeneralizedLinear(). \
setTargetField("Work_experience"). \
setInputFieldList(["Beginning_salary", "Sex_of_employee", "Educational_level", "Minority_classification", "Current_salary"]). \
setEffects([
Effect(fields=["Beginning_salary"], nestingLevels=[0]),
Effect(fields=["Sex_of_employee"], nestingLevels=[0]),
Effect(fields=["Educational_level"], nestingLevels=[0]),
Effect(fields=["Current_salary"], nestingLevels=[0])]). \
setIntercept(True). \
setDistribution("UNKNOWN"). \
setLinkFunction("LOG"). \
setAnalysisType("BOTH"). \
setDetectTwoWayInteraction(True). \
setUseVariableSelection(True). \
setVariableSelectionMethod("LASSO"). \
setUserSpecPenaltyParams(False)
gleModel3 = gle3.fit(data)
PMML = gleModel3.toPMML()
statXML = gleModel3.statXML()
predictions3 = gleModel3.transform(data)
predictions3.show()
Linear Regression
The linear regression model analyzes the predictive relationship between a continuous target and one or more predictors which can be continuous or categorical.
Features of the linear regression model include automatic interaction effect detection, forward stepwise model selection, diagnostic checking, and unusual category detection based on Estimated Marginal Means (EMMEANS).
Example code:
Python example:
from spss.ml.classificationandregression.linearregression import LinearRegression
le = LinearRegression(). \
setTargetField("target"). \
setInputFieldList(["predictor1", "predictor2", "predictorn"]). \
setDetectTwoWayInteraction(True). \
setVarSelectionMethod("forwardStepwise")
leModel = le.fit(data)
predictions = leModel.transform(data)
predictions.show()
Linear Support Vector Machine
The Linear Support Vector Machine (LSVM) provides a supervised learning method that generates input-output mapping functions from a set of labeled training data. The mapping function can be either a classification function or a regression function. LSVM is designed to resolve large-scale problems in terms of the number of records and the number of variables (parameters). Its feature space is the same as the input space of the problem, and it can handle sparse data where the average number of non-zero elements in one record is small.
Example code:
Python example:
from spss.ml.classificationandregression.linearsupportvectormachine import LinearSupportVectorMachine
lsvm = LinearSupportVectorMachine(). \
setTargetField("BareNuc"). \
setInputFieldList(["Clump", "UnifSize", "UnifShape", "MargAdh", "SingEpiSize", "BlandChrom", "NormNucl", "Mit", "Class"]). \
setPenaltyFunction("L2")
lsvmModel = lsvm.fit(df)
predictions = lsvmModel.transform(data)
predictions.show()
Random Trees
Random Trees is a powerful approach for generating strong (accurate) predictive models. It's comparable to, and sometimes better than, other state-of-the-art methods for classification or regression problems.
Random Trees is an ensemble model consisting of multiple CART-like trees. Each tree grows on a bootstrap sample which is obtained by sampling the original data cases with replacement. Moreover, during the tree growth, for each node the best split variable is selected from a specified smaller number of variables that are drawn randomly from the full set of variables. Each tree grows to the largest extent possible, and there is no pruning. In scoring, Random Trees combines individual tree scores by majority voting (for classification) or average (for regression).
Example code:
Python example:
from spss.ml.classificationandregression.ensemble.randomtrees import RandomTrees
# Random Trees requires a "target" field and some input fields. If "target" is continuous, regression trees are generated; otherwise, classification trees are generated.
# You can use the SPSS attribute or Spark ML attribute to indicate whether a field is categorical or continuous.
randomTrees = RandomTrees(). \
setTargetField("target"). \
setInputFieldList(["feature1", "feature2", "feature3"]). \
numTrees(10). \
setMaxTreeDepth(5)
randomTreesModel = randomTrees.fit(df)
predictions = randomTreesModel.transform(scoreDF)
predictions.show()
CHAID
CHAID, or Chi-squared Automatic Interaction Detection, is a classification method for building decision trees by using chi-square statistics to identify optimal splits. An extension applicable to regression problems is also available.
CHAID first examines the crosstabulations between each of the input fields and the target, and tests for significance using a chi-square independence test. If more than one of these relations is statistically significant, CHAID will select the input field that's the most significant (smallest p value). If an input has more than two categories, these are compared, and categories that show no differences in the outcome are collapsed together. This is done by successively joining the pair of categories showing the least significant difference. This category-merging process stops when all remaining categories differ at the specified testing level. For nominal input fields, any categories can be merged; for an ordinal set, only contiguous categories can be merged. Continuous input fields other than the target can't be used directly; they must be binned into ordinal fields first.
Exhaustive CHAID is a modification of CHAID that does a more thorough job of examining all possible splits for each predictor but takes longer to compute.
Example code:
Python example:
from spss.ml.classificationandregression.tree.chaid import CHAID
chaid = CHAID(). \
setTargetField("salary"). \
setInputFieldList(["educ", "jobcat", "gender"])
chaidModel = chaid.fit(data)
pmmlStr = chaidModel.toPMML()
statxmlStr = chaidModel.statXML()
predictions = chaidModel.transform(data)
predictions.show()
Parent topic:[SPSS predictive analytics algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-algorithms.html)
| # SPSS predictive analytics classification and regression algorithms in notebooks #
You can use generalized linear model, linear regression, linear support vector machine, random trees, or CHAID SPSS predictive analytics algorithms in notebooks\.
## Generalized Linear Model ##
The Generalized Linear Model (GLE) is a commonly used analytical algorithm for different types of data\. It covers not only widely used statistical models, such as linear regression for normally distributed targets, logistic models for binary or multinomial targets, and log linear models for count data, but also covers many useful statistical models via its very general model formulation\. In addition to building the model, Generalized Linear Model provides other useful features such as variable selection, automatic selection of distribution and link function, and model evaluation statistics\. This model has options for regularization, such as LASSO, ridge regression, elastic net, etc\., and is also capable of handling very wide data\.
For more details about how to choose distribution and link function, see Distribution and Link Function Combination\.
**Example code 1:**
This example shows a GLE setting with specified distribution and link function, specified effects, intercept, conducting ROC curve, and printing correlation matrix\. This scenario builds a model, then scores the model\.
Python example:
from spss.ml.classificationandregression.generalizedlinear import GeneralizedLinear
from spss.ml.classificationandregression.params.effect import Effect
gle1 = GeneralizedLinear(). \
setTargetField("Work_experience"). \
setInputFieldList(["Beginning_salary", "Sex_of_employee", "Educational_level", "Minority_classification", "Current_salary"]). \
setEffects([
Effect(fields="Beginning_salary"], nestingLevels=0]),
Effect(fields="Sex_of_employee"], nestingLevels=0]),
Effect(fields="Educational_level"], nestingLevels=0]),
Effect(fields="Current_salary"], nestingLevels=0]),
Effect(fields="Sex_of_employee", "Educational_level"], nestingLevels=0, 0])]). \
setIntercept(True). \
setDistribution("NORMAL"). \
setLinkFunction("LOG"). \
setAnalysisType("BOTH"). \
setConductRocCurve(True)
gleModel1 = gle1.fit(data)
PMML = gleModel1.toPMML()
statXML = gleModel1.statXML()
predictions1 = gleModel1.transform(data)
predictions1.show()
**Example code 2:**
This example shows a GLE setting with unspecified distribution and link function, and variable selection using the forward stepwise method\. This scenario uses the forward stepwise method to select distribution, link function and effects, then builds and scores the model\.
Python example:
from spss.ml.classificationandregression.generalizedlinear import GeneralizedLinear
from spss.ml.classificationandregression.params.effect import Effect
gle2 = GeneralizedLinear(). \
setTargetField("Work_experience"). \
setInputFieldList(["Beginning_salary", "Sex_of_employee", "Educational_level", "Minority_classification", "Current_salary"]). \
setEffects([
Effect(fields="Beginning_salary"], nestingLevels=0]),
Effect(fields="Sex_of_employee"], nestingLevels=0]),
Effect(fields="Educational_level"], nestingLevels=0]),
Effect(fields="Current_salary"], nestingLevels=0])]). \
setIntercept(True). \
setDistribution("UNKNOWN"). \
setLinkFunction("UNKNOWN"). \
setAnalysisType("BOTH"). \
setUseVariableSelection(True). \
setVariableSelectionMethod("FORWARD_STEPWISE")
gleModel2 = gle2.fit(data)
PMML = gleModel2.toPMML()
statXML = gleModel2.statXML()
predictions2 = gleModel2.transform(data)
predictions2.show()
**Example code 3:**
This example shows a GLE setting with unspecified distribution, specified link function, and variable selection using the LASSO method, with two\-way interaction detection and automatic penalty parameter selection\. This scenario detects two\-way interaction for effects, then uses the LASSO method to select distribution and effects using automatic penalty parameter selection, then builds and scores the model\.
Python example:
from spss.ml.classificationandregression.generalizedlinear import GeneralizedLinear
from spss.ml.classificationandregression.params.effect import Effect
gle3 = GeneralizedLinear(). \
setTargetField("Work_experience"). \
setInputFieldList(["Beginning_salary", "Sex_of_employee", "Educational_level", "Minority_classification", "Current_salary"]). \
setEffects([
Effect(fields="Beginning_salary"], nestingLevels=0]),
Effect(fields="Sex_of_employee"], nestingLevels=0]),
Effect(fields="Educational_level"], nestingLevels=0]),
Effect(fields="Current_salary"], nestingLevels=0])]). \
setIntercept(True). \
setDistribution("UNKNOWN"). \
setLinkFunction("LOG"). \
setAnalysisType("BOTH"). \
setDetectTwoWayInteraction(True). \
setUseVariableSelection(True). \
setVariableSelectionMethod("LASSO"). \
setUserSpecPenaltyParams(False)
gleModel3 = gle3.fit(data)
PMML = gleModel3.toPMML()
statXML = gleModel3.statXML()
predictions3 = gleModel3.transform(data)
predictions3.show()
## Linear Regression ##
The linear regression model analyzes the predictive relationship between a continuous target and one or more predictors which can be continuous or categorical\.
Features of the linear regression model include automatic interaction effect detection, forward stepwise model selection, diagnostic checking, and unusual category detection based on Estimated Marginal Means (EMMEANS)\.
**Example code:**
Python example:
from spss.ml.classificationandregression.linearregression import LinearRegression
le = LinearRegression(). \
setTargetField("target"). \
setInputFieldList(["predictor1", "predictor2", "predictorn"]). \
setDetectTwoWayInteraction(True). \
setVarSelectionMethod("forwardStepwise")
leModel = le.fit(data)
predictions = leModel.transform(data)
predictions.show()
## Linear Support Vector Machine ##
The Linear Support Vector Machine (LSVM) provides a supervised learning method that generates input\-output mapping functions from a set of labeled training data\. The mapping function can be either a classification function or a regression function\. LSVM is designed to resolve large\-scale problems in terms of the number of records and the number of variables (parameters)\. Its feature space is the same as the input space of the problem, and it can handle sparse data where the average number of non\-zero elements in one record is small\.
**Example code:**
Python example:
from spss.ml.classificationandregression.linearsupportvectormachine import LinearSupportVectorMachine
lsvm = LinearSupportVectorMachine().\
setTargetField("BareNuc").\
setInputFieldList(["Clump", "UnifSize", "UnifShape", "MargAdh", "SingEpiSize", "BlandChrom", "NormNucl", "Mit", "Class"]).\
setPenaltyFunction("L2")
lsvmModel = lsvm.fit(df)
predictions = lsvmModel.transform(data)
predictions.show()
## Random Trees ##
Random Trees is a powerful approach for generating strong (accurate) predictive models\. It's comparable to, and sometimes better than, other state\-of\-the\-art methods for classification or regression problems\.
Random Trees is an ensemble model consisting of multiple CART\-like trees\. Each tree grows on a bootstrap sample which is obtained by sampling the original data cases with replacement\. Moreover, during the tree growth, for each node the best split variable is selected from a specified smaller number of variables that are drawn randomly from the full set of variables\. Each tree grows to the largest extent possible, and there is no pruning\. In scoring, Random Trees combines individual tree scores by majority voting (for classification) or average (for regression)\.
**Example code:**
Python example:
from spss.ml.classificationandregression.ensemble.randomtrees import RandomTrees
# Random Trees requires a "target" field and some input fields. If "target" is continuous, regression trees are generated; otherwise, classification trees are generated.
# You can use the SPSS attribute or Spark ML attribute to indicate whether a field is categorical or continuous.
randomTrees = RandomTrees(). \
setTargetField("target"). \
setInputFieldList(["feature1", "feature2", "feature3"]). \
numTrees(10). \
setMaxTreeDepth(5)
randomTreesModel = randomTrees.fit(df)
predictions = randomTreesModel.transform(scoreDF)
predictions.show()
## CHAID ##
CHAID, or Chi\-squared Automatic Interaction Detection, is a classification method for building decision trees by using chi\-square statistics to identify optimal splits\. An extension applicable to regression problems is also available\.
CHAID first examines the crosstabulations between each of the input fields and the target, and tests for significance using a chi\-square independence test\. If more than one of these relations is statistically significant, CHAID will select the input field that's the most significant (smallest p value)\. If an input has more than two categories, these are compared, and categories that show no differences in the outcome are collapsed together\. This is done by successively joining the pair of categories showing the least significant difference\. This category\-merging process stops when all remaining categories differ at the specified testing level\. For nominal input fields, any categories can be merged; for an ordinal set, only contiguous categories can be merged\. Continuous input fields other than the target can't be used directly; they must be binned into ordinal fields first\.
Exhaustive CHAID is a modification of CHAID that does a more thorough job of examining all possible splits for each predictor but takes longer to compute\.
**Example code:**
Python example:
from spss.ml.classificationandregression.tree.chaid import CHAID
chaid = CHAID(). \
setTargetField("salary"). \
setInputFieldList(["educ", "jobcat", "gender"])
chaidModel = chaid.fit(data)
pmmlStr = chaidModel.toPMML()
statxmlStr = chaidModel.statXML()
predictions = chaidModel.transform(data)
predictions.show()
**Parent topic:**[SPSS predictive analytics algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-algorithms.html)
<!-- </article "role="article" "> -->
|
CE1B598A354C454F2D201039A2BB6D69BABBF840 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/clustering-guides.html?context=cdpaas&locale=en | SPSS predictive analytics clustering algorithms in notebooks | SPSS predictive analytics clustering algorithms in notebooks
You can use the scalable Two-Step or the Cluster model evaluation algorithm to cluster data in notebooks.
Two-Step Cluster
Scalable Two-Step is based on the familiar two-step clustering algorithm, but extends both its functionality and performance in several directions.
First, it can effectively work with large and distributed data supported by Spark that provides the Map-Reduce computing paradigm.
Second, the algorithm provides mechanisms for selecting the most relevant features for clustering the given data, as well as detecting rare outlier points. Moreover, it provides an enhanced set of evaluation and diagnostic features for enabling insight.
The two-step clustering algorithm first performs a pre-clustering step by scanning the entire dataset and storing the dense regions of data cases in terms of summary statistics called cluster features. The cluster features are stored in memory in a data structure called the CF-tree. Finally, an agglomerative hierarchical clustering algorithm is applied to cluster the set of cluster features.
Python example code:
from spss.ml.clustering.twostep import TwoStep
cluster = TwoStep(). \
setInputFieldList(["region", "happy", "age"]). \
setDistMeasure("LOGLIKELIHOOD"). \
setFeatureImportanceMethod("CRITERION"). \
setAutoClustering(True)
clusterModel = cluster.fit(data)
predictions = clusterModel.transform(data)
predictions.show()
Cluster model evaluation
Cluster model evaluation (CME) aims to interpret cluster models and discover useful insights based on various evaluation measures.
It's a post-modeling analysis that's generic and independent from any types of cluster models.
Python example code:
from spss.ml.clustering.twostep import TwoStep
cluster = TwoStep(). \
setInputFieldList(["region", "happy", "age"]). \
setDistMeasure("LOGLIKELIHOOD"). \
setFeatureImportanceMethod("CRITERION"). \
setAutoClustering(True)
clusterModel = cluster.fit(data)
predictions = clusterModel.transform(data)
predictions.show()
Parent topic:[SPSS predictive analytics algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-algorithms.html)
| # SPSS predictive analytics clustering algorithms in notebooks #
You can use the scalable Two\-Step or the Cluster model evaluation algorithm to cluster data in notebooks\.
## Two\-Step Cluster ##
Scalable Two\-Step is based on the familiar two\-step clustering algorithm, but extends both its functionality and performance in several directions\.
First, it can effectively work with large and distributed data supported by Spark that provides the Map\-Reduce computing paradigm\.
Second, the algorithm provides mechanisms for selecting the most relevant features for clustering the given data, as well as detecting rare outlier points\. Moreover, it provides an enhanced set of evaluation and diagnostic features for enabling insight\.
The two\-step clustering algorithm first performs a pre\-clustering step by scanning the entire dataset and storing the dense regions of data cases in terms of summary statistics called cluster features\. The cluster features are stored in memory in a data structure called the CF\-tree\. Finally, an agglomerative hierarchical clustering algorithm is applied to cluster the set of cluster features\.
**Python example code:**
from spss.ml.clustering.twostep import TwoStep
cluster = TwoStep(). \
setInputFieldList(["region", "happy", "age"]). \
setDistMeasure("LOGLIKELIHOOD"). \
setFeatureImportanceMethod("CRITERION"). \
setAutoClustering(True)
clusterModel = cluster.fit(data)
predictions = clusterModel.transform(data)
predictions.show()
## Cluster model evaluation ##
Cluster model evaluation (CME) aims to interpret cluster models and discover useful insights based on various evaluation measures\.
It's a post\-modeling analysis that's generic and independent from any types of cluster models\.
**Python example code:**
from spss.ml.clustering.twostep import TwoStep
cluster = TwoStep(). \
setInputFieldList(["region", "happy", "age"]). \
setDistMeasure("LOGLIKELIHOOD"). \
setFeatureImportanceMethod("CRITERION"). \
setAutoClustering(True)
clusterModel = cluster.fit(data)
predictions = clusterModel.transform(data)
predictions.show()
**Parent topic:**[SPSS predictive analytics algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-algorithms.html)
<!-- </article "role="article" "> -->
|
AD5B9969C7557BFC4CFBB32CCA67F40C52FF824B | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/code-run-notebooks.html?context=cdpaas&locale=en | Coding and running a notebook | Coding and running a notebook
After you create a notebook to use in the notebook editor, you need to add libraries, code, and data so that you can do your analysis.
To develop analytic applications in a notebook, follow these general steps:
1. Open the notebook in edit mode: click the edit icon (). If the notebook is locked, you might be able to [unlock and edit](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-assets.htmleditassets) it.
2. If the notebook is marked as being untrusted, tell the Jupyter service to trust your notebook content and allow executing all cells by:
1. Clicking Not Trusted in the upper right corner of the notebook.
2. Clicking Trust to execute all cells.
3. Determine if the environment template that is associated with the notebook has the correct hardware size for the anticipated analysis processing throughput.
1. Check the size of the environment by clicking the View notebook info icon () from the notebook toolbar and selecting the Environments page.
2. If you need to change the environment, select another one from the list or, if none fits your needs, create your own environment template. See [Creating environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html).
If you create an environment template, you can add your own libraries to the template that are preinstalled at the time the environment is started. See [Customize your environment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html) for Python and R.
4. Import preinstalled libraries. See [Libraries and scripts for notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/libraries.html).
5. Load and access data. You can access data from project assets by running code that is generated for you when you select the asset or programmatically by using preinstalled library functions. See [Load and access data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html). A minimal example of loading a data asset is shown after these steps.
6. Prepare and analyze the data with the appropriate methods:
* [Build Watson Machine Learning models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-overview.html)
* [Build Decision Optimization models](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html)
* [Use Watson Natural Language Processing](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html)
* [Use SPSS predictive analytics algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-algorithms.html)
* [Use geospatial location analysis methods](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/geo-spatial-lib.html)
* [Use Data skipping for Spark SQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-skipping-spark-sql.html)
* [Apply Parquet encryption](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/parquet-encryption.html)
* [Use Time series analysis methods](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib.html)
7. If necessary, schedule the notebook to run at a regular time. See [Schedule a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-nb-editor.html).
1. Monitor the status of your job runs from the project's Jobs page.
2. Click your job to open the job's details page to view the runs for your job and the status of each run. If a run failed, you can select the run and view the log tail or download the entire log file to troubleshoot the run.
8. When you're not actively working on the notebook, click File > Stop Kernel to stop the notebook kernel and free up resources.
9. Stop the active runtime (and unnecessary capacity unit consumption) if no other notebook kernels are active under Tool runtimes on the Environments page on the Manage tab of your project.
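For example, the following minimal sketch shows one way to load a CSV data asset from the project into a pandas DataFrame (step 5). It assumes that the ibm-watson-studio-lib library is available in the notebook runtime and that the project contains a data asset named my_data.csv; replace the asset name with one from your own project.
import pandas as pd
from ibm_watson_studio_lib import access_project_or_space
# Connect to the project that contains this notebook
wslib = access_project_or_space()
# Load the data asset into a file-like buffer and read it with pandas
buffer = wslib.load_data("my_data.csv")
df = pd.read_csv(buffer)
df.head()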
Video disclaimer: Some minor steps and graphical elements in these videos may differ from your deployment.
Watch this short video to see how to create a Jupyter notebook and custom environment.
This video provides a visual method to learn the concepts and tasks in this documentation.
Watch this short video to see how to run basic SQL queries on Db2 Warehouse data in a Python notebook.
This video provides a visual method to learn the concepts and tasks in this documentation.
Learn more
* [Markdown cheatsheet](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/markd-jupyter.html)
* [Notebook interface](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html)
* [Stop active runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.htmlstop-active-runtimes)
* [Load and access data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html)
* [Schedule a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-nb-editor.html)
Parent topic:[Jupyter Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-editor.html)
| # Coding and running a notebook #
After you create a notebook to use in the notebook editor, you need to add libraries, code, and data so that you can do your analysis\.
To develop analytic applications in a notebook, follow these general steps:
<!-- <ol> -->
1. Open the notebook in edit mode: click the edit icon ()\. If the notebook is locked, you might be able to [unlock and edit](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-assets.html#editassets) it\.
2. If the notebook is marked as being *untrusted*, tell the Jupyter service to trust your notebook content and allow executing all cells by:
<!-- <ol> -->
1. Clicking **Not Trusted** in the upper right corner of the notebook.
2. Clicking **Trust** to execute all cells.
<!-- </ol> -->
3. Determine if the environment template that is associated with the notebook has the correct hardware size for the anticipated analysis processing throughput\.
<!-- <ol> -->
1. Check the size of the environment by clicking the View notebook info icon () from the notebook toolbar and selecting the **Environments** page.
2. If you need to change the environment, select another one from the list or, if none fits your needs, create your own environment template. See [Creating environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html).
If you create an environment template, you can add your own libraries to the template that are preinstalled at the time the environment is started. See [Customize your environment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html) for Python and R.
<!-- </ol> -->
4. Import preinstalled libraries\. See [Libraries and scripts for notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/libraries.html)\.
5. Load and access data\. You can access data from project assets by running code that is generated for you when you select the asset or programmatically by using preinstalled library functions\. See [Load and access data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html)\.
6. Prepare and analyze the data with the appropriate methods:
<!-- <ul> -->
* [Build Watson Machine Learning models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-overview.html)
* [Build Decision Optimization models](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html)
* [Use Watson Natural Language Processing](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html)
* [Use SPSS predictive analytics algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-algorithms.html)
* [Use geospatial location analysis methods](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/geo-spatial-lib.html)
* [Use Data skipping for Spark SQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-skipping-spark-sql.html)
* [Apply Parquet encryption](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/parquet-encryption.html)
* [Use Time series analysis methods](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib.html)
<!-- </ul> -->
7. If necessary, schedule the notebook to run at a regular time\. See [Schedule a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-nb-editor.html)\.
<!-- <ol> -->
1. Monitor the status of your job runs from the project's **Jobs** page.
2. Click your job to open the job's details page to view the runs for your job and the status of each run. If a run failed, you can select the run and view the log tail or download the entire log file to troubleshoot the run.
<!-- </ol> -->
8. When you're not actively working on the notebook, click **File > Stop Kernel** to stop the notebook kernel and free up resources\.
9. Stop the active runtime (and unnecessary capacity unit consumption) if no other notebook kernels are active under **Tool runtimes** on the **Environments** page on the **Manage** tab of your project\.
<!-- </ol> -->
Video disclaimer: Some minor steps and graphical elements in these videos may differ from your deployment\.
Watch this short video to see how to create a Jupyter notebook and custom environment\.
This video provides a visual method to learn the concepts and tasks in this documentation\.
Watch this short video to see how to run basic SQL queries on Db2 Warehouse data in a Python notebook\.
This video provides a visual method to learn the concepts and tasks in this documentation\.
## Learn more ##
<!-- <ul> -->
* [Markdown cheatsheet](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/markd-jupyter.html)
* [Notebook interface](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html)
<!-- </ul> -->
<!-- <ul> -->
* [Stop active runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html#stop-active-runtimes)
* [Load and access data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html)
* [Schedule a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-nb-editor.html)
<!-- </ul> -->
**Parent topic:**[Jupyter Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-editor.html)
<!-- </article "role="article" "> -->
|
4562C632E1CFAAEDDE1374B29BDD1A1CCE5ECE86 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/collaborator-permissions-wml.html?context=cdpaas&locale=en | Deployment space collaborator roles and permissions | Deployment space collaborator roles and permissions
When you add collaborators to a deployment space, you can specify which actions they can do by assigning them access levels. Learn how to add collaborators to your deployment spaces and the differences between access levels.
User roles and permissions in deployment spaces
You can assign the following roles to collaborators based on the access level that you want to provide:
* Admin: Administrators can control your deployment space assets, users, and settings.
* Editor: Editors can control your space assets.
* Viewer: Viewers can view your deployment space.
The following table provides details on permissions based on user access level:
Deployment space permissions
Enabled permission | Viewer | Editor | Admin
View assets and deployments | ✓ | ✓ | ✓
Comment | ✓ | ✓ | ✓
Monitor | ✓ | ✓ | ✓
Test model deployment API | ✓ | ✓ | ✓
Find implementation details | ✓ | ✓ | ✓
Configure deployments | | ✓ | ✓
Batch deployment score | | ✓ | ✓
Online deployment score | ✓ | ✓ | ✓
Update assets | | ✓ | ✓
Import assets | | ✓ | ✓
Download assets | | ✓ | ✓
Deploy assets | | ✓ | ✓
Remove assets | | ✓ | ✓
Remove deployments | | ✓ | ✓
View spaces/members | ✓ | ✓ | ✓
Delete space | | | ✓
Service IDs
You can create service IDs in IBM Cloud to enable an application outside of IBM Cloud access to your IBM Cloud services. Service IDs are not tied to a specific user. Therefore, if a user leaves an organization and is deleted from the account, the service ID remains. Thus, your application or service stays up and running. For more information, see [Creating and working with service IDs](https://cloud.ibm.com/docs/account?topic=account-serviceids).
To learn more about assigning space access by using a service ID, see [Adding collaborators to your deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/collaborator-permissions-wml.html?context=cdpaas&locale=enadding-collaborators).
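For example, the following sketch shows how an application outside of IBM Cloud might authenticate to Watson Machine Learning with an API key that was created for a service ID rather than for a personal user account. The endpoint URL, API key, and space ID are placeholder values that you replace with your own.
from ibm_watson_machine_learning import APIClient
# Credentials that use an API key created for a service ID, not a personal API key
wml_credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",
    "apikey": "<service ID API key>"
}
client = APIClient(wml_credentials)
client.set.default_space("<your space id>")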
Adding collaborators to your deployment space
Prerequisites:
All users in your IBM Cloud account with the Admin IAM platform access role for all IAM enabled services can manage space collaborators. For more information, see [IAM Platform access roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.htmlplatform).
Restriction:
You can add collaborators to your deployment space only if they are a part of your organization and if they provisioned Watson Studio.
To add one or more collaborators to a deployment space:
1. From your deployment space, go to the Manage tab and click Access Control.
2. Click Add collaborators and choose one of the following options:
* If you want to add a user, click Add users. Assign a role that applies to the user.
* If you want to add pre-defined user groups, click . Assign a role that applies to all members of the group.
3. Add the user or user groups that you want to have the same access level and click Add.
Parent topic:[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html)
| # Deployment space collaborator roles and permissions #
When you add collaborators to a deployment space, you can specify which actions they can do by assigning them access levels\. Learn how to add collaborators to your deployment spaces and the differences between access levels\.
## User roles and permissions in deployment spaces ##
You can assign the following roles to collaborators based on the access level that you want to provide:
<!-- <ul> -->
* **Admin**: Administrators can control your deployment space assets, users, and settings\.
* **Editor**: Editors can control your space assets\.
* **Viewer**: Viewers can view your deployment space\.
<!-- </ul> -->
The following table provides details on permissions based on user access level:
<!-- <table> -->
Deployment space permissions
| Enabled permission | Viewer | Editor | Admin |
| --------------------------- | ------ | ------ | ----- |
| View assets and deployments | ✓ | ✓ | ✓ |
| Comment | ✓ | ✓ | ✓ |
| Monitor | ✓ | ✓ | ✓ |
| Test model deployment API | ✓ | ✓ | ✓ |
| Find implementation details | ✓ | ✓ | ✓ |
| Configure deployments | | ✓ | ✓ |
| Batch deployment score | | ✓ | ✓ |
| Online deployment score | ✓ | ✓ | ✓ |
| Update assets | | ✓ | ✓ |
| Import assets | | ✓ | ✓ |
| Download assets | | ✓ | ✓ |
| Deploy assets | | ✓ | ✓ |
| Remove assets | | ✓ | ✓ |
| Remove deployments | | ✓ | ✓ |
| View spaces/members | ✓ | ✓ | ✓ |
| Delete space | | | ✓ |
<!-- </table ""> -->
### Service IDs ###
You can create service IDs in IBM Cloud to enable an application outside of IBM Cloud access to your IBM Cloud services\. Service IDs are not tied to a specific user\. Therefore, if a user leaves an organization and is deleted from the account, the service ID remains\. Thus, your application or service stays up and running\. For more information, see [Creating and working with service IDs](https://cloud.ibm.com/docs/account?topic=account-serviceids)\.
To learn more about assigning space access by using a service ID, see [Adding collaborators to your deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/collaborator-permissions-wml.html?context=cdpaas&locale=en#adding-collaborators)\.
## Adding collaborators to your deployment space ##
**Prerequisites:**
All users in your IBM Cloud account with the **Admin** IAM platform access role for all IAM enabled services can manage space collaborators\. For more information, see [IAM Platform access roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html#platform)\.
**Restriction:**
You can add collaborators to your deployment space only if they are a part of your organization and if they provisioned Watson Studio\.
To add one or more collaborators to a deployment space:
<!-- <ol> -->
1. From your deployment space, go to the **Manage** tab and click **Access Control**\.
2. Click **Add collaborators** and choose one of the following options:
<!-- <ul> -->
* If you want to add a user, click **Add users**. Assign a role that applies to the user.
* If you want to add pre-defined user groups, click . Assign a role that applies to all members of the group.
<!-- </ul> -->
3. Add the user or user groups that you want to have the same access level and click **Add**\.
<!-- </ol> -->
**Parent topic:**[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html)
<!-- </article "role="article" "> -->
|
BF75F233FDFFDCA8A25D191E1DF4DF7F51E30823 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html?context=cdpaas&locale=en | Creating environment templates | Creating environment templates
You can create custom environment templates if you do not want to use the default environments provided by Watson Studio.
Required permissions : To create an environment template, you must have the Admin or Editor role within the project.
You can create environment templates for the following types of assets:
* Notebooks in the Notebook editor
* Notebooks in RStudio
* Modeler flows in the SPSS Modeler
* Data Refinery flows
* Jobs that run operational assets, such as Data Refinery flows, or Notebooks in a project
Note:
To create an environment template:
1. On the Manage tab of your project, select the Environments page and click New template under Templates.
2. Enter a name and a description.
3. Select one of the following engine types:
* Default: Select for Python, R, and RStudio runtimes for Watson Studio.
* Spark: Select for Spark with Python or R runtimes for Watson Studio.
* GPU: Select for more computing power to improve model training performance for Watson Studio.
4. Select the hardware configuration from the Hardware configuration drop-down menu.
5. Select the software version if you selected a runtime of "Default," "Spark," or "GPU."
Where to find your custom environment template
Your new environment template is listed under Templates on the Environments page in the Manage tab of your project. From this page, you can:
* Check which runtimes are active
* Update custom environment templates
* Track the number of capacity units per hour that your runtimes have consumed so far
* Stop active runtimes.
Limitations
The default environments provided by Watson Studio cannot be edited or modified.
Notebook environments (Anaconda Python or R distributions):
* You can't add a software customization to the default Python and R environment templates included in Watson Studio. You can only add a customization to an environment template that you create.
* If you add a software customization using conda, your environment must have at least 2 GB RAM.
* You can't customize an R environment for a notebook by installing R packages directly from CRAN or GitHub. You can check if the CRAN package you want is available only from conda channels and, if the package is available, add that package name in the customization list as r-<package-name>.
* After you have started a notebook in a Watson Studio environment, you can't create another conda environment from inside that notebook and use it. Watson Studio environments do not behave like a Conda environment manager.
Spark environments: You can't customize the software configuration of a Spark environment template.
Next steps
* [Customize environment templates for Python or R](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html)
Learn more
Parent topic:[Managing compute resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html)
| # Creating environment templates #
You can create custom environment templates if you do not want to use the default environments provided by Watson Studio\.
**Required permissions** : To create an environment template, you must have the **Admin** or **Editor** role within the project\.
You can create environment templates for the following types of assets:
<!-- <ul> -->
* Notebooks in the Notebook editor
* Notebooks in RStudio
* Modeler flows in the SPSS Modeler
* Data Refinery flows
* Jobs that run operational assets, such as Data Refinery flows, or Notebooks in a project
<!-- </ul> -->
Note:
To create an environment template:
<!-- <ol> -->
1. On the **Manage** tab of your project, select the **Environments** page and click **New template** under **Templates**\.
2. Enter a name and a description\.
3. Select one of the following engine types:
<!-- <ul> -->
* **Default**: Select for Python, R, and RStudio runtimes for Watson Studio.
* **Spark**: Select for Spark with Python or R runtimes for Watson Studio.
* **GPU**: Select for more computing power to improve model training performance for Watson Studio.
<!-- </ul> -->
4. Select the hardware configuration from the **Hardware configuration** drop\-down menu\.
5. Select the software version if you selected a runtime of "Default," "Spark," or "GPU\."
<!-- </ol> -->
### Where to find your custom environment template ###
Your new environment template is listed under Templates on the **Environments** page in the **Manage** tab of your project\. From this page, you can:
<!-- <ul> -->
* Check which runtimes are active
* Update custom environment templates
* Track the number of capacity units per hour that your runtimes have consumed so far
* Stop active runtimes\.
<!-- </ul> -->
## Limitations ##
The default environments provided by Watson Studio cannot be edited or modified\.
**Notebook environments** (Anaconda Python or R distributions):
<!-- <ul> -->
* You can't add a software customization to the default Python and R environment templates included in Watson Studio\. You can only add a customization to an environment template that you create\.
* If you add a software customization using conda, your environment must have at least 2 GB RAM\.
* You can't customize an R environment for a notebook by installing R packages directly from CRAN or GitHub\. You can check if the CRAN package you want is available only from conda channels and, if the package is available, add that package name in the customization list as `r-<package-name>`\.
* After you have started a notebook in a Watson Studio environment, you can't create another conda environment from inside that notebook and use it\. Watson Studio environments do not behave like a Conda environment manager\.
<!-- </ul> -->
**Spark environments**: You can't customize the software configuration of a Spark environment template\.
## Next steps ##
<!-- <ul> -->
* [Customize environment templates for Python or R](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html)
<!-- </ul> -->
## Learn more ##
**Parent topic:**[Managing compute resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html)
<!-- </article "role="article" "> -->
|
D21CD926CA1FE170C8C1645CA0EC65AEDDDB4AEF | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-gist.html?context=cdpaas&locale=en | Publishing a notebook as a gist | Publishing a notebook as a gist
A gist is a simple way to share a notebook or parts of a notebook with other users. Unlike when you publish to a GitHub repository, you don't need to manage your gists; you can edit your gists directly in the browser.
All project collaborators who have administrator or editor permission can share notebooks or parts of a notebook as gists. The latest saved version of your notebook is published as a gist.
Before you can create a gist, you must be logged in to GitHub and have authorized access to gists in GitHub from Watson Studio. See [Publish notebooks on GitHub](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/github-integration.html). If this information is missing, you are prompted for it.
To publish a notebook as a gist:
1. Open the notebook in edit mode.
2. Click the GitHub integration icon () and select Publish as gist.
Watch this video to see how to enable GitHub integration.
Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform.
This video provides a visual method to learn the concepts and tasks in this documentation.
* Transcript
Synchronize transcript with video
Time Transcript
00:00 This video shows you how to publish notebooks from your Watson Studio project to your GitHub account.
00:07 Navigate to your profile and settings.
00:11 On the "Integrations" tab, visit the link to generate a GitHub personal access token.
00:17 Provide a descriptive name for the token and select the repo and gist scopes, then generate the token.
00:29 Copy the token, return to the GitHub integration settings, and paste the token.
00:36 The token is validated when you save it to your profile settings.
00:42 Now, navigate to your projects.
00:44 You enable GitHub integration at the project level on the "Settings" tab.
00:50 Simply scroll to the bottom and paste the existing GitHub repository URL.
00:56 You'll find that on the "Code" tab in the repo.
01:01 Click "Update" to make the connection.
01:05 Now, go to the "Assets" tab and open the notebook you want to publish.
01:14 Notice that this notebook has the credentials replaced with X's.
01:19 It's a best practice to remove or replace credentials before publishing to GitHub.
01:24 So, this notebook is ready for publishing.
01:27 You can provide the target path along with a commit message.
01:31 You also have the option to publish content without hidden code, which means that any cells in the notebook that began with the hidden cell comment will not be published.
01:42 When you're ready, click "Publish".
01:45 The message tells you that the notebook was published successfully and provides links to the notebook, the repository, and the commit.
01:54 Let's take a look at the commit.
01:57 So, there's the commit, and you can navigate to the repository to see the published notebook.
02:04 Lastly, you can publish as a gist.
02:07 Gists are another way to share your work on GitHub.
02:10 Every gist is a git repository, so it can be forked and cloned.
02:15 There are two types of gists: public and secret.
02:19 If you start out with a secret gist, you can convert it to a public gist later.
02:24 And again, you have the option to remove hidden cells.
02:29 Follow the link to see the published gist.
02:32 So that's the basics of Watson Studio's GitHub integration.
02:37 Find more videos in the Cloud Pak for Data as a Service documentation.
Parent topic:[Managing the lifecycle of notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-nb-lifecycle.html)
| # Publishing a notebook as a gist #
A gist is a simple way to share a notebook or parts of a notebook with other users\. Unlike when you publish to a GitHub repository, you don't need to manage your gists; you can edit your gists directly in the browser\.
All project collaborators who have administrator or editor permission can share notebooks or parts of a notebook as gists\. The latest saved version of your notebook is published as a gist\.
Before you can create a gist, you must be logged in to GitHub and have authorized access to gists in GitHub from Watson Studio\. See [Publish notebooks on GitHub](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/github-integration.html)\. If this information is missing, you are prompted for it\.
To publish a notebook as a gist:
<!-- <ol> -->
1. Open the notebook in edit mode\.
2. Click the GitHub integration icon () and select **Publish as gist**\.
<!-- </ol> -->
Watch this video to see how to enable GitHub integration\.
Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform\.
This video provides a visual method to learn the concepts and tasks in this documentation\.
<!-- <ul> -->
* Transcript
Synchronize transcript with video
<!-- <table "class="bx--data-table bx--data-table--zebra" style="border-collapse: collapse; border: none;" "> -->
| Time | Transcript |
| ----- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 00:00 | This video shows you how to publish notebooks from your Watson Studio project to your GitHub account. |
| 00:07 | Navigate to your profile and settings. |
| 00:11 | On the "Integrations" tab, visit the link to generate a GitHub personal access token. |
| 00:17 | Provide a descriptive name for the token and select the repo and gist scopes, then generate the token. |
| 00:29 | Copy the token, return to the GitHub integration settings, and paste the token. |
| 00:36 | The token is validated when you save it to your profile settings. |
| 00:42 | Now, navigate to your projects. |
| 00:44 | You enable GitHub integration at the project level on the "Settings" tab. |
| 00:50 | Simply scroll to the bottom and paste the existing GitHub repository URL. |
| 00:56 | You'll find that on the "Code" tab in the repo. |
| 01:01 | Click "Update" to make the connection. |
| 01:05 | Now, go to the "Assets" tab and open the notebook you want to publish. |
| 01:14 | Notice that this notebook has the credentials replaced with X's. |
| 01:19 | It's a best practice to remove or replace credentials before publishing to GitHub. |
| 01:24 | So, this notebook is ready for publishing. |
| 01:27 | You can provide the target path along with a commit message. |
| 01:31 | You also have the option to publish content without hidden code, which means that any cells in the notebook that began with the hidden cell comment will not be published. |
| 01:42 | When you're ready, click "Publish". |
| 01:45 | The message tells you that the notebook was published successfully and provides links to the notebook, the repository, and the commit. |
| 01:54 | Let's take a look at the commit. |
| 01:57 | So, there's the commit, and you can navigate to the repository to see the published notebook. |
| 02:04 | Lastly, you can publish as a gist. |
| 02:07 | Gists are another way to share your work on GitHub. |
| 02:10 | Every gist is a git repository, so it can be forked and cloned. |
| 02:15 | There are two types of gists: public and secret. |
| 02:19 | If you start out with a secret gist, you can convert it to a public gist later. |
| 02:24 | And again, you have the option to remove hidden cells. |
| 02:29 | Follow the link to see the published gist. |
| 02:32 | So that's the basics of Watson Studio's GitHub integration. |
| 02:37 | Find more videos in the Cloud Pak for Data as a Service documentation. |
<!-- </table "class="bx--data-table bx--data-table--zebra" style="border-collapse: collapse; border: none;" "> -->
<!-- </ul> -->
**Parent topic:**[Managing the lifecycle of notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-nb-lifecycle.html)
<!-- </article "role="article" "> -->
|
11A093CB8F1D24EA066663B3991084A84FC32BF2 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-jobs.html?context=cdpaas&locale=en | Creating jobs in deployment spaces | Creating jobs in deployment spaces
A job is a way of running a batch deployment, or a self-contained asset like a script, notebook, code package, or flow in Watson Machine Learning. You can select the input and output for your job and choose to run it manually or on a schedule. From a deployment space, you can create, schedule, run, and manage jobs.
Creating a batch deployment job
Follow these steps when you are creating a batch deployment job:
Important: You must have an existing batch deployment to create a batch job.
1. From the Deployments tab, select your deployment and click New job. The Create a job dialog box opens.
2. In the Define details section, enter your job name, an optional description, and click Next.
3. In the Configure section, select a hardware specification.
You can follow these steps to optionally configure environment variables and job run retention settings:
* Optional: If you are deploying a Python script, an R script, or a notebook, then you can enter environment variables to pass parameters to the job. Click Environment variables to enter the key - value pair.
* Optional: To avoid consuming excessive resources by retaining all historical job metadata, follow one of these options:
* Click By amount to set thresholds for saving a set number of job runs and associated logs.
* Click By duration (days) to set thresholds for saving artifacts for a specified number of days.
4. Optional: In the Schedule section, toggle the Schedule off button to schedule a run. You can set a date and time for start of schedule and set a schedule for repetition. Click Next.
Note: If you don't specify a schedule, the job runs immediately.
5. Optional: In the Notify section, toggle the Off button to turn on notifications associated with this job. Click Next.
Note: You can receive notifications for three types of events: success, warning, and failure.
6. In the Choose data section, provide inline data that corresponds with your model schema. You can provide input in JSON format. Click Next. See [Example JSON payload for inline data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-jobs.html?context=cdpaas&locale=enexample-json).
7. In the Review and create section, verify your job details, and click Create and run.
Notes:
* Scheduled jobs display on the Jobs tab of the deployment space.
* Results of job runs are written to the specified output file and saved as a space asset.
* A data asset can be a data source file that you promoted to the space, a connected data source, or tables from databases and files from file-based data sources.
* If you exclude certain weekdays in your job schedule, the job might not run as you would expect. The reason is due to a discrepancy between the time zone of the user who creates the schedule, and the time zone of the main node where the job runs.
* When you create or modify a scheduled job, an API key is generated. Future runs use this generated API key.
Example JSON payload for inline data
{
"deployment": {
"id": "<deployment id>"
},
"space_id": "<your space id>",
"name": "test_v4_inline",
"scoring": {
"input_data": [{
"fields": "AGE", "SEX", "BP", "CHOLESTEROL", "NA", "K"],
"values": 47, "M", "LOW", "HIGH", 0.739, 0.056], 47, "M", "LOW", "HIGH", 0.739, 0.056]]
}]
}
}
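As a sketch of how a payload like this can be submitted programmatically, the following example uses the Watson Machine Learning Python client to create a batch deployment job with inline input data. The deployment ID, space ID, and credentials are placeholders, and the field names must match your own model schema.
from ibm_watson_machine_learning import APIClient
client = APIClient(wml_credentials)  # assumes wml_credentials is already defined
client.set.default_space("<your space id>")
job_payload = {
    client.deployments.ScoringMetaNames.INPUT_DATA: [{
        "fields": ["AGE", "SEX", "BP", "CHOLESTEROL", "NA", "K"],
        "values": [[47, "M", "LOW", "HIGH", 0.739, 0.056]]
    }]
}
# Create and run the batch deployment job
job_details = client.deployments.create_job("<deployment id>", meta_props=job_payload)
print(job_details["metadata"]["id"])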
Queuing and concurrent job executions
The maximum number of concurrent jobs for each deployment is handled internally by the deployment service. For batch deployment, by default, two jobs can be run concurrently. Any deployment job request for a batch deployment that already has two running jobs is placed in a queue for execution later. When any of the running jobs is completed, the next job in the queue is run. The queue has no size limit.
Limitation on using large inline payloads for batch deployments
Batch deployment jobs that use large inline payloads might get stuck in starting or running state.
Tip: If you provide huge payloads to batch deployments, use data references instead of inline data.
Retention of deployment job metadata
Job-related metadata is persisted and can be accessed until the job and its deployment are deleted.
Viewing deployment job details
When you create or view a batch job, the deployment ID and the job ID are displayed.

* The deployment ID represents the deployment definition, including the hardware and software configurations and related assets.
* The job ID represents the details for a job, including input data and an output location and a schedule for running the job.
Use these IDs to refer to the job in Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) requests or in notebooks that use the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/).
Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
| # Creating jobs in deployment spaces #
A job is a way of running a batch deployment, or a self\-contained asset like a script, notebook, code package, or flow in Watson Machine Learning\. You can select the input and output for your job and choose to run it manually or on a schedule\. From a deployment space, you can create, schedule, run, and manage jobs\.
## Creating a batch deployment job ##
Follow these steps when you are creating a batch deployment job:
Important: You must have an existing batch deployment to create a batch job\.
<!-- <ol> -->
1. From the **Deployments** tab, select your deployment and click **New job**\. The *Create a job* dialog box opens\.
2. In the *Define details* section, enter your job name, an optional description, and click **Next**\.
3. In the *Configure* section, select a hardware specification\.
You can follow these steps to optionally configure environment variables and job run retention settings:
<!-- <ul> -->
* Optional: If you are deploying a Python script, an R script, or a notebook, then you can enter environment variables to pass parameters to the job. Click **Environment variables** to enter the *key* - *value* pair.
* Optional: To avoid consuming resources by retaining all historical job metadata, use one of these options:
<!-- <ul> -->
* Click **By amount** to set thresholds for saving a set number of job runs and associated logs.
* Click **By duration (days)** to set thresholds for saving artifacts for a specified number of days.
<!-- </ul> -->
<!-- </ul> -->
4. Optional: In the *Schedule* section, toggle the **Schedule off** button to schedule a run\. You can set a date and time for start of schedule and set a schedule for repetition\. Click **Next**\.
Note: If you don't specify a schedule, the job runs immediately.
5. Optional: In the *Notify* section, toggle the **Off** button to turn on notifications associated with this job\. Click **Next**\.
Note: You can receive notifications for three types of events: success, warning, and failure.
6. In the *Choose data* section, provide inline data that corresponds with your model schema\. You can provide input in JSON format\. Click **Next**\. See [Example JSON payload for inline data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-jobs.html?context=cdpaas&locale=en#example-json)\.
7. In the *Review and create* section, verify your job details, and click **Create and run**\.
<!-- </ol> -->
**Notes**:
<!-- <ul> -->
* Scheduled jobs display on the **Jobs** tab of the deployment space\.
* Results of job runs are written to the specified output file and saved as a space asset\.
* A data asset can be a data source file that you promoted to the space, a connected data source, or tables from databases and files from file\-based data sources\.
* If you exclude certain weekdays in your job schedule, the job might not run as you would expect\. This is due to a discrepancy between the time zone of the user who creates the schedule, and the time zone of the main node where the job runs\.
* When you create or modify a scheduled job, an API key is generated\. Future runs use this generated API key\.
<!-- </ul> -->
### Example JSON payload for inline data ###
{
"deployment": {
"id": "<deployment id>"
},
"space_id": "<your space id>",
"name": "test_v4_inline",
"scoring": {
"input_data": [{
"fields": "AGE", "SEX", "BP", "CHOLESTEROL", "NA", "K"],
"values": 47, "M", "LOW", "HIGH", 0.739, 0.056], 47, "M", "LOW", "HIGH", 0.739, 0.056]]
}]
}
}
## Queuing and concurrent job executions ##
The maximum number of concurrent jobs for each deployment is handled internally by the deployment service\. For batch deployment, by default, two jobs can be run concurrently\. Any deployment job request for a batch deployment that already has two running jobs is placed in a queue for execution later\. When any of the running jobs is completed, the next job in the queue is run\. The queue has no size limit\.
## Limitation on using large inline payloads for batch deployments ##
Batch deployment jobs that use large inline payloads might get stuck in `starting` or `running` state\.
Tip: If you provide huge payloads to batch deployments, use data references instead of inline data\.
## Retention of deployment job metadata ##
Job\-related metadata is persisted and can be accessed until the job and its deployment are deleted\.
## Viewing deployment job details ##
When you create or view a batch job, the deployment ID and the job ID are displayed\.

<!-- <ul> -->
* The deployment ID represents the deployment definition, including the hardware and software configurations and related assets\.
* The job ID represents the details for a job, including input data and an output location and a schedule for running the job\.
<!-- </ul> -->
Use these IDs to refer to the job in Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) requests or in notebooks that use the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/)\.
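For example, the following sketch retrieves the details of a batch deployment job by its job ID with the Python client library\. The credentials, space ID, and job ID are placeholders that you must replace with your own values\.

from ibm_watson_machine_learning import APIClient

client = APIClient({
    "url": "https://us-south.ml.cloud.ibm.com",   # use the URL for your region
    "apikey": "<your IBM Cloud API key>"          # placeholder
})
client.set.default_space("<your space id>")

# Retrieve the details of a job run by its job ID; the returned dictionary
# includes the run status and a reference to the output data asset.
job_details = client.deployments.get_job_details("<job id>")
print(job_details)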
**Parent topic:**[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
<!-- </article "role="article" "> -->
|
D1AFA9BB4E0475A56190DC8254E004308BEA484D | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/creating-notebooks.html?context=cdpaas&locale=en | Creating notebooks | Creating notebooks
You can add a notebook to your project by using one of these methods: creating a notebook file or copying a sample notebook from the Samples.
Required permissions : You must have the Admin or Editor role in the project to create a notebook.
Watch this short video to learn the basics of Jupyter notebooks.
This video provides a visual method to learn the concepts and tasks in this documentation.
Creating a notebook file in the notebook editor
To create a notebook file in the notebook editor:
1. From your project, click New asset > Work with data and models in Python or R notebooks.
2. On the New Notebook page, specify the method to use to create your notebook. You can create a blank notebook, upload a notebook file from your file system, or upload a notebook file from a URL:
* The notebook file you select to upload must follow these requirements:
* The file type must be .ipynb.
* The file name must not exceed 255 characters.
* The file name must not contain these characters: < > : ” / | ( ) ?
* The URL must be a public URL that is shareable and doesn't require authentication.

3. Specify the runtime environment for the language you want to use (Python or R). You can select a provided environment template or an environment template which you created and configured under Templates on the Environments page on the Manage tab of your project. For more information on environments, see [Notebook environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html).
4. Click Create Notebook. The notebook opens in edit mode.
Note that the time that it takes to create a new notebook or to open an existing one for editing might vary. If no runtime container is available, a container must be created, and the Jupyter notebook user interface can be loaded only after the container is available. The time it takes to create a container depends on the cluster load and size. Once a runtime container exists, subsequent calls to open notebooks are significantly faster.
The opened notebook is locked by you. For more information, see [Locking and unlocking notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/creating-notebooks.html?context=cdpaas&locale=en#locking-and-unlocking).
5. Tell the service to trust your notebook content and execute all cells.
When a new notebook is opened in edit mode, the notebook is considered to be untrusted by the Jupyter service by default. When you run an untrusted notebook, content deemed untrusted will not be executed. Untrusted content includes any HTML or JavaScript in Markdown cells or in any output cells that you did not generate.
1. Click Not Trusted in the upper right corner of the notebook.
2. Click Trust to execute all cells.
Adding a notebook from the Samples
Notebooks from the Samples are based on real-world scenarios and contain many useful examples of computations and visualizations that you can adapt to your analysis needs.
To copy a sample notebook:
1. In the main menu, click Samples, then filter for Notebooks to show only notebook cards.
2. Find the card for the sample notebook you want, and click the card. You can view the notebook contents to browse the steps and the code that it contains.
3. To work with a copy of the sample notebook, click Add to project.
4. Choose the project for the notebook, and click Add.
5. Optional: Change the name and description for the notebook.
6. Specify the runtime environment. If you created an environment template on the Environments page of your project, it will display in the list of runtimes you can select from.
7. Click Create. The notebook opens in edit mode and is locked by you. Locking the file avoids possible merge conflicts that might be caused by competing changes to the file. To get familiar with the structure of a notebook, see [Parts of a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html).
Locking and unlocking notebooks
If you open a notebook in edit mode, this notebook is locked by you. While you hold the lock, only you can make changes to the notebook. All other project users will see the lock icon on the notebook. Only project administrators are able to unlock a locked notebook and open it in edit mode.
When you close the notebook, the lock is released and another user can select to open the notebook in edit mode. Note that you must close the notebook while the runtime environment is still active. The notebook lock can't be released for you if the runtime was stopped or is in idle state. If the notebook lock is not released for you, you can unlock the notebook from the project's Assets page. Locking the file avoids possible merge conflicts that might be caused by competing changes to the file.
Finding your notebooks
You can find and open notebooks from the Assets page of the project.
You can open a notebook in view or edit mode. When you open a notebook in view mode, you can't change or run the notebook. You can only change or run a notebook when it is opened in edit mode and started in an environment.
You can open a notebook by:
* Clicking the notebook. This opens the notebook in view mode. To then open the notebook in edit mode, click the pencil icon on the notebook toolbar. This starts the environment associated with the notebook.
* Expanding the three vertical dots on the right of the notebook entry, and selecting View or Edit.
Next step
* [Code and run notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/code-run-notebooks.html)
Learn more
* [Provided CPU runtime environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html#default-cpu)
* [Provided Spark runtime environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html#default-spark)
* [Change the environment runtime used by a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html#change-env)
Parent topic:[Jupyter Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-editor.html)
| # Creating notebooks #
You can add a notebook to your project by using one of these methods: creating a notebook file or copying a sample notebook from the Samples\.
**Required permissions** : You must have the Admin or Editor role in the project to create a notebook\.
Watch this short video to learn the basics of Jupyter notebooks\.
This video provides a visual method to learn the concepts and tasks in this documentation\.
## Creating a notebook file in the notebook editor ##
To create a notebook file in the notebook editor:
<!-- <ol> -->
1. From your project, click **New asset > Work with data and models in Python or R notebooks**\.
2. On the **New Notebook** page, specify the method to use to create your notebook\. You can create a blank notebook, upload a notebook file from your file system, or upload a notebook file from a URL:
<!-- <ul> -->
* The notebook file you select to upload must follow these requirements:
<!-- <ul> -->
* The file type must be *.ipynb*.
* The file name must not exceed 255 characters.
* The file name must not contain these characters: `< > : ” / | ( ) ?`
<!-- </ul> -->
* The URL must be a public URL that is shareable and doesn't require authentication.

<!-- </ul> -->
3. Specify the runtime environment for the language you want to use (Python or R)\. You can select a provided environment template or an environment template which you created and configured under **Templates** on the **Environments** page on the **Manage** tab of your project\. For more information on environments, see [Notebook environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html)\.
4. Click **Create Notebook**\. The notebook opens in edit mode\.
Note that the time that it takes to create a new notebook or to open an existing one for editing might vary. If no runtime container is available, a container must be created, and the Jupyter notebook user interface can be loaded only after the container is available. The time it takes to create a container depends on the cluster load and size. Once a runtime container exists, subsequent calls to open notebooks are significantly faster.
The opened notebook is locked by you. For more information, see [Locking and unlocking notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/creating-notebooks.html?context=cdpaas&locale=en#locking-and-unlocking).
5. Tell the service to trust your notebook content and execute all cells\.
When a new notebook is opened in edit mode, the notebook is considered to be *untrusted* by the Jupyter service by default. When you run an untrusted notebook, content deemed untrusted will not be executed. Untrusted content includes any HTML or JavaScript in Markdown cells or in any output cells that you did not generate.
<!-- <ol> -->
1. Click **Not Trusted** in the upper right corner of the notebook.
2. Click **Trust** to execute all cells.
<!-- </ol> -->
<!-- </ol> -->
## Adding a notebook from the Samples ##
Notebooks from the Samples are based on real\-world scenarios and contain many useful examples of computations and visualizations that you can adapt to your analysis needs\.
To copy a sample notebook:
<!-- <ol> -->
1. In the main menu, click **Samples**, then filter for **Notebooks** to show only notebook cards\.
2. Find the card for the sample notebook you want, and click the card\. You can view the notebook contents to browse the steps and the code that it contains\.
3. To work with a copy of the sample notebook, click **Add to project**\.
4. Choose the project for the notebook, and click **Add**\.
5. Optional: Change the name and description for the notebook\.
6. Specify the runtime environment\. If you created an environment template on the *Environments* page of your project, it will display in the list of runtimes you can select from\.
7. Click **Create**\. The notebook opens in edit mode and is locked by you\. Locking the file avoids possible merge conflicts that might be caused by competing changes to the file\. To get familiar with the structure of a notebook, see [Parts of a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html)\.
<!-- </ol> -->
## Locking and unlocking notebooks ##
If you open a notebook in edit mode, this notebook is locked by you\. While you hold the lock, only you can make changes to the notebook\. All other project users will see the lock icon on the notebook\. Only project administrators are able to unlock a locked notebook and open it in edit mode\.
When you close the notebook, the lock is released and another user can select to open the notebook in edit mode\. Note that you must close the notebook while the runtime environment is still active\. The notebook lock can't be released for you if the runtime was stopped or is in idle state\. If the notebook lock is not released for you, you can unlock the notebook from the project's Assets page\. Locking the file avoids possible merge conflicts that might be caused by competing changes to the file\.
## Finding your notebooks ##
You can find and open notebooks from the **Assets** page of the project\.
You can open a notebook in view or edit mode\. When you open a notebook in view mode, you can't change or run the notebook\. You can only change or run a notebook when it is opened in edit mode and started in an environment\.
You can open a notebook by:
<!-- <ul> -->
* Clicking the notebook\. This opens the notebook in view mode\. To then open the notebook in edit mode, click the pencil icon on the notebook toolbar\. This starts the environment associated with the notebook\.
* Expanding the three vertical dots on the right of the notebook entry, and selecting **View** or **Edit**\.
<!-- </ul> -->
## Next step ##
<!-- <ul> -->
* [Code and run notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/code-run-notebooks.html)
<!-- </ul> -->
## Learn more ##
<!-- <ul> -->
* [Provided CPU runtime environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html#default-cpu)
* [Provided Spark runtime environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html#default-spark)
* [Change the environment runtime used by a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html#change-env)
<!-- </ul> -->
**Parent topic:**[Jupyter Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-editor.html)
<!-- </article "role="article" "> -->
|
3B2719C3B56D1BD40FA0D8C6853DDD078FD13D94 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html?context=cdpaas&locale=en | Customizing environment templates | Customizing environment templates
You can change the name, the description, and the hardware configuration of an environment template that you created. You can customize the software configuration of Jupyter notebook environment templates through conda channels or by using pip. You can provide a list of conda packages, a list of pip packages, or a combination of both. When using conda packages, you can provide a list of additional conda channel locations through which the packages can be obtained.
Required permissions : You must have the Admin or Editor role in the project to customize an environment template.
Restrictions : You cannot change the language of an existing environment template. You can’t customize the software configuration of a Spark environment template you created.
To customize an environment template that you created:
1. Under your project's Manage tab, click the Environments page.
2. In the Active Runtimes section, check that no runtime is active for the environment template you want to change.
3. In the Environment Templates section, click the environment template you want to customize.
4. Make your changes.
For a Jupyter notebook environment template, select to create a customization and specify the libraries to add to the standard packages that are available by default. You can also use the customization to upgrade or downgrade packages that are part of the standard software configuration.
The libraries that are added to an environment template through the customization aren't persisted; however, they are automatically installed each time the environment runtime is started. Note that if you add a library using pip install through a notebook cell and not through the customization, only you will be able to use this library; the library is not available to someone else using the same environment template.
If you want, you can use the provided template to add the custom libraries. There is a different template for Python and for R. The following example shows you how to add Python packages; a filled-in example follows these steps:
# Modify the following content to add a software customization to an environment.
# To remove an existing customization, delete the entire content and click Apply.
# Add conda channels below defaults, indented by two spaces and a hyphen.
channels:
  - defaults
# To add packages through conda or pip, remove the comment on the following line.
# dependencies:
# Add conda packages here, indented by two spaces and a hyphen.
# Remove the comment on the following line and replace sample package name with your package name:
# - a_conda_package=1.0
# Add pip packages here, indented by four spaces and a hyphen.
# Remove the comments on the following lines and replace sample package name with your package name.
# - pip:
#    - a_pip_package==1.0
Important when customizing:
* Before you customize a package, verify that the changes you are planning have the intended effect.
* conda can report the changes required for installing a given package, without actually installing it. You can verify the changes from your notebook. For example, for the library Plotly:
* In a Python notebook, enter: !conda install --dry-run plotly
* In an R notebook, enter: print(system2("conda", args=c("install","--dry-run","r-plotly"), stdout=TRUE))
* pip does install the package. However, restarting the runtime again after verification will remove the package. Here too you verify the changes from your notebook. For example, for the library Plotly:
* In a Python notebook, enter: !pip install plotly
* In an R notebook, enter: print(system2("pip", args="install plotly", stdout=TRUE))
* If you can get a package through conda from the default channels and through pip from PyPI, the preferred method is through conda from the default channels.
* Conda does dependency checking when installing packages, which can be memory-intensive if you add many packages to the customization. Ensure that you select an environment with sufficient RAM to enable dependency checking at the time the runtime is started.
* To prevent unnecessary dependency checking if you only want packages from one Conda channel, exclude the default channels by removing defaults from the channels list in the template and adding nodefaults.
* In addition to the Anaconda main channel, many packages for R can be found in Anaconda's R channel. In R environments, this channel is already part of the default channels, hence it does not need to be added separately.
* If you add packages only through pip or only through conda to the customization template, you must make sure that dependencies is not commented out in the template.
* When you specify a package version, use a single = for conda packages and == for pip packages. Wherever possible, specify a version number as this reduces the installation time and memory consumption significantly. If you don't specify a version, the package manager might pick the latest version available, or keep the version that is available in the package.
* You cannot add arbitrary notebook extensions as a customization because notebook extensions must be pre-installed.
5. Apply your changes.
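For example, a filled-in customization that adds one conda package and one pip package might look like the following sketch. The package names and versions are examples only; replace them with the packages and versions that you need.

channels:
  - defaults
dependencies:
  - plotly=5.9.0
  - pip:
    - kaleido==0.2.1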
Learn more
* [Examples of customizations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html)
* [Installing custom packages through a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/install-cust-lib.html)
Parent topic:[Managing compute resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html)
| # Customizing environment templates #
You can change the name, the description, and the hardware configuration of an environment template that you created\. You can customize the software configuration of Jupyter notebook environment templates through conda channels or by using pip\. You can provide a list of conda packages, a list of pip packages, or a combination of both\. When using conda packages, you can provide a list of additional conda channel locations through which the packages can be obtained\.
**Required permissions** : You must have the **Admin** or **Editor** role in the project to customize an environment template\.
**Restrictions** : You cannot change the language of an existing environment template\. You can’t customize the software configuration of a Spark environment template you created\.
To customize an environment template that you created:
<!-- <ol> -->
1. Under your project's **Manage** tab, click the **Environments** page\.
2. In the **Active Runtimes** section, check that no runtime is active for the environment template you want to change\.
3. In the **Environment Templates** section, click the environment template you want to customize\.
4. Make your changes\.
For a Jupyter notebook environment template, select to create a customization and specify the libraries to add to the standard packages that are available by default. You can also use the customization to upgrade or downgrade packages that are part of the standard software configuration.
The libraries that are added to an environment template through the customization aren't persisted; however, they are automatically installed each time the environment runtime is started. Note that if you add a library using `pip install` through a notebook cell and not through the customization, only you will be able to use this library; the library is not available to someone else using the same environment template.
If you want, you can use the provided template to add the custom libraries. There is a different template for Python and for R. The following example shows you how to add Python packages:
# Modify the following content to add a software customization to an environment.
# To remove an existing customization, delete the entire content and click Apply.
# Add conda channels below defaults, indented by two spaces and a hyphen.
channels:
- defaults
# To add packages through conda or pip, remove the comment on the following line.
# dependencies:
# Add conda packages here, indented by two spaces and a hyphen.
# Remove the comment on the following line and replace sample package name with your package name:
# - a_conda_package=1.0
# Add pip packages here, indented by four spaces and a hyphen.
# Remove the comments on the following lines and replace sample package name with your package name.
# - pip:
# - a_pip_package==1.0
**Important when customizing**:
<!-- <ul> -->
* Before you customize a package, verify that the changes you are planning have the intended effect.
<!-- <ul> -->
* `conda` can report the changes required for installing a given package, without actually installing it. You can verify the changes from your notebook. For example, for the library Plotly:
<!-- <ul> -->
* In a Python notebook, enter: `!conda install --dry-run plotly`
* In an R notebook, enter: `print(system2("conda", args=c("install","--dry-run","r-plotly"), stdout=TRUE))`
<!-- </ul> -->
* `pip` does install the package. However, restarting the runtime again after verification will remove the package. Here too you verify the changes from your notebook. For example, for the library Plotly:
<!-- <ul> -->
* In a Python notebook, enter: `!pip install plotly`
* In an R notebook, enter: `print(system2("pip", args="install plotly", stdout=TRUE))`
<!-- </ul> -->
<!-- </ul> -->
* If you can get a package through `conda` from the default channels and through `pip` from PyPI, the preferred method is through `conda` from the default channels.
* Conda does dependency checking when installing packages, which can be memory-intensive if you add many packages to the customization. Ensure that you select an environment with sufficient RAM to enable dependency checking at the time the runtime is started.
* To prevent unnecessary dependency checking if you only want packages from one Conda channel, exclude the default channels by removing `defaults` from the channels list in the template and adding `nodefaults`.
* In addition to the Anaconda main channel, many packages for R can be found in Anaconda's R channel. In R environments, this channel is already part of the default channels, hence it does not need to be added separately.
* If you add packages only through pip or only through conda to the customization template, you must make sure that `dependencies` is not commented out in the template.
* When you specify a package version, use a single `=` for `conda` packages and `==` for `pip` packages. Wherever possible, specify a version number as this reduces the installation time and memory consumption significantly. If you don't specify a version, the package manager might pick the latest version available, or keep the version that is available in the package.
* You cannot add arbitrary notebook extensions as a customization because notebook extensions must be pre-installed.
<!-- </ul> -->
5. Apply your changes\.
<!-- </ol> -->
## Learn more ##
<!-- <ul> -->
* [Examples of customizations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html)
* [Installing custom packages through a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/install-cust-lib.html)
<!-- </ul> -->
**Parent topic:**[Managing compute resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html)
<!-- </article "role="article" "> -->
|
04B717FD06C5D906268E8530F4B521686065C6D5 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-load-support.html?context=cdpaas&locale=en | Data load support | Data load support
You can add automatically generated code to load data from project data assets to a notebook cell. The asset type can be a file or a database connection.
By clicking in an empty code cell in your notebook, clicking the Code snippets icon from the notebook toolbar, and then selecting Read data and an asset from the project, you can:
* Insert the data source access credentials. This capability is available for all data assets that are added to a project. With the credentials, you can write your own code to access the asset and load the data into data structures of your choice.
* Generate code that is added to the notebook cell. The inserted code serves as a quick start to allow you to easily begin working with a data set or connection. For production systems, you should carefully review the inserted code to determine if you should write your own code that better meets your needs.
When you run the code cell, the data is accessed and loaded into the data structure you selected.
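As an illustration of what such code can look like, the following sketch (not the generated code itself) uses the project-lib library to read a CSV file asset from the project into a pandas DataFrame. The project ID, project access token, and file name are placeholders.

import pandas as pd
from project_lib import Project

# Placeholders: insert your own project ID and access token, for example through
# the project token cell that Watson Studio can generate for you.
project = Project(project_id="<project id>", project_access_token="<project token>")

# Read a CSV data asset from the project into a pandas DataFrame.
my_file = project.get_file("my_data.csv")   # returns a file-like object
my_file.seek(0)
df = pd.read_csv(my_file)
df.head()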
Notes:
1. The ability to provide generated code is disabled for some connections if:
* The connection credentials are personal credentials
* The connection uses a secure gateway link
* The connection credentials are stored in vaults
2. If the file type or database connection that you are using doesn't appear in the following lists, you can select to create generic code. For Python this is a StreamingBody object and for R a textConnection object.
The following tables show you which data source connections (file types and database connections) support the option to generate code. The options for generating code vary depending on the data source, the notebook coding language, and the notebook runtime compute.
Supported files types
Table 1. Supported file types
Data source Notebook coding language Compute engine type Available support to load data
CSV files
Python Anaconda Python distribution Load data into pandasDataFrame
With Spark Load data into pandasDataFrame and sparkSessionDataFrame
With Hadoop Load data into pandasDataFrame and sparkSessionDataFrame
R Anaconda R distribution Load data into R data frame
With Spark Load data into R data frame and sparkSessionDataFrame
With Hadoop Load data into R data frame and sparkSessionDataFrame
Python Script
Python Anaconda Python distribution Load data into pandasStreamingBody
With Spark Load data into pandasStreamingBody
With Hadoop Load data into pandasStreamingBody
R Anaconda R distribution Load data into rRawObject
With Spark Load data into rRawObject
With Hadoop Load data into rRawObject
JSON files
Python Anaconda Python distribution Load data into pandasDataFrame
With Spark Load data into pandasDataFrame and sparkSessionDataFrame
With Hadoop Load data into pandasDataFrame and sparkSessionDataFrame
R Anaconda R distribution Load data into R data frame
With Spark Load data into R data frame, rRawObject and sparkSessionDataFrame
With Hadoop Load data into R data frame, rRawObject and sparkSessionDataFrame
.xlsx and .xls files
Python Anaconda Python distribution Load data into pandasDataFrame
With Spark Load data into pandasDataFrame
With Hadoop Load data into pandasDataFrame
R Anaconda R distribution Load data into rRawObject
With Spark No data load support
With Hadoop No data load support
Octet-stream file types
Python Anaconda Python distribution Load data into pandasStreamingBody
With Spark Load data into pandasStreamingBody
R Anaconda R distribution Load data in rRawObject
With Spark Load data in rDataObject
PDF file type
Python Anaconda Python distribution Load data into pandasStreamingBody
With Spark Load data into pandasStreamingBody
With Hadoop Load data into pandasStreamingBody
R Anaconda R distribution Load data in rRawObject
With Spark Load data in rDataObject
With Hadoop Load data into rRawData
ZIP file type
Python Anaconda Python distribution Load data into pandasStreamingBody
With Spark Load data into pandasStreamingBody
R Anaconda R distribution Load data in rRawObject
With Spark Load data in rDataObject
JPEG, PNG image files
Python Anaconda Python distribution Load data into pandasStreamingBody
With Spark Load data into pandasStreamingBody
With Hadoop Load data into pandasStreamingBody
R Anaconda R distribution Load data in rRawObject
With Spark Load data in rDataObject
With Hadoop Load data in rDataObject
Binary files
Python Anaconda Python distribution Load data into pandasStreamingBody
With Spark Load data into pandasStreamingBody
Hadoop No data load support
R Anaconda R distribution Load data in rRawObject
With Spark Load data into rRawObject
Hadoop Load data in rDataObject
Supported database connections
Table 2. Supported database connections
Data source Notebook coding language Compute engine type Available support to load data
- [Db2 Warehouse on Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-wh.html) <br>- [IBM Db2 on Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-cloud.html) <br>- [IBM Db2 Database](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2.html)
Python Anaconda Python distribution Load data into ibmdbpyIda and ibmdbpyPandas
With Spark Load data into ibmdbpyIda, ibmdbpyPandas and sparkSessionDataFrame
With Hadoop Load data into ibmdbpyIda, ibmdbpyPandas and sparkSessionDataFrame
R Anaconda R distribution Load data into ibmdbrIda and ibmdbrDataframe
With Spark Load data into ibmdbrIda, ibmdbrDataFrame and sparkSessionDataFrame
With Hadoop Load data into ibmdbrIda, ibmdbrDataFrame and sparkSessionDataFrame
- [Db2 for z/OS](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-wh.html) <br>
Python Anaconda Python distribution Load data into ibmdbpyIda and ibmdbpyPandas
With Spark No data load support
- [Amazon Simple Storage Services (S3)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-amazon-s3.html) <br>- [Amazon Simple Storage Services (S3) with an IAM access policy](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-amazon-s3.html)
Python Anaconda Python distribution Load data into pandasStreamingBody
With Hadoop Load data into pandasStreamingBody and sparkSessionSetup
R Anaconda R distribution Load data into rRawObject
With Hadoop Load data into rRawObject and sparkSessionSetup
- [IBM Cloud Databases for PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-dbase-postgresql.html) <br>- [Microsoft SQL Server](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sql-server.html)
Python Anaconda Python distribution Load data into pandasDataFrame
With Spark Load data into pandasDataFrame
R Anaconda R distribution Load data into R data frame
With Spark Load data into R data frame and sparkSessionDataFrame
- [IBM Cognos Analytics](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cognos.html)
Python Anaconda Python distribution Load data into pandasDataFrame <br> <br>In the generated code: <br>- Edit the path parameter in the last line of code <br>- Remove the comment tagging <br> <br>To read data, see [Reading data from a data source](https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ca_notebook.doc/c_read_notebook.html) <br>To search data, see [Searching for data objects](https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ca_notebook.doc/c_search_for_data_objects_notebook.html) <br>To write data, see [Writing data to a data source](https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ca_notebook.doc/c_write_notebook.html)
With Spark No data load support
R Anaconda R distribution Load data into R data frame <br> <br>In the generated code: <br>- Edit the path parameter in the last line of code <br>- Remove the comment tagging <br> <br>To read data, see [Reading data from a data source](https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ca_notebook.doc/c_read_notebook.html) <br>To search data, see [Searching for data objects](https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ca_notebook.doc/c_search_for_data_objects_notebook.html) <br>To write data, see [Writing data to a data source](https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ca_notebook.doc/c_write_notebook.html)
With Spark No data load support
- [Microsoft Azure Cosmos DB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cosmosdb.html)
Python Anaconda Python distribution Load data into pandasDataFrame
With Spark Load data into pandasDataFrame
R Anaconda R distribution No data load support
With Spark No data load support
- [Amazon RDS for MySQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-mysql.html) <br>
Python Anaconda Python distribution Load data into pandasDataFrame
With Spark Load data into pandasDataFrame
R Anaconda R distribution Load data into R data frame and sparkSessionDataFrame
With Spark No data load support
- [HTTP](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-http.html) <br>- [Apache Cassandra](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cassandra.html) <br>- [Amazon RDS for PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-postresql.html)
Python Anaconda Python distribution Load data into pandasDataFrame
With Spark Load data into pandasDataFrame
R Anaconda R distribution Load data into R data frame
With Spark Load data into R data frame
Parent topic:[Loading and accessing data in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html)
| # Data load support #
You can add automatically generated code to load data from project data assets to a notebook cell\. The asset type can be a file or a database connection\.
By clicking in an empty code cell in your notebook, clicking the **Code snippets** icon from the notebook toolbar, and then selecting **Read data** and an asset from the project, you can:
<!-- <ul> -->
* Insert the data source access credentials\. This capability is available for all data assets that are added to a project\. With the credentials, you can write your own code to access the asset and load the data into data structures of your choice\.
* Generate code that is added to the notebook cell\. The inserted code serves as a quick start to allow you to easily begin working with a data set or connection\. For production systems, you should carefully review the inserted code to determine if you should write your own code that better meets your needs\.
When you run the code cell, the data is accessed and loaded into the data structure you selected.
**Notes**:
<!-- <ol> -->
1. The ability to provide generated code is disabled for some connections if:
<!-- <ul> -->
* The connection credentials are personal credentials
* The connection uses a secure gateway link
* The connection credentials are stored in vaults
<!-- </ul> -->
2. If the file type or database connection that you are using doesn't appear in the following lists, you can select to create generic code. For Python this is a StreamingBody object and for R a textConnection object.
<!-- </ol> -->
<!-- </ul> -->
The following tables show you which data source connections (file types and database connections) support the option to generate code\. The options for generating code vary depending on the data source, the notebook coding language, and the notebook runtime compute\.
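If you generate generic code for a file type that has no dedicated support (see note 2 above), you parse the returned object yourself\. The following is a minimal sketch, assuming that the generated snippet assigned a `StreamingBody` object to a variable named `streaming_body` and that the file contains CSV data:

import io
import pandas as pd

# "streaming_body" is assumed to be the object created by the generated code.
raw_bytes = streaming_body.read()          # read the whole object into memory
df = pd.read_csv(io.BytesIO(raw_bytes))    # parse the bytes yourself, here as CSV
df.head()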
## Supported files types ##
<!-- <table> -->
Table 1\. Supported file types
| Data source | Notebook coding language | Compute engine type | Available support to load data |
| ------------------------ | ------------------------ | ---------------------------- | ----------------------------------------------------------------- |
| CSV files | |
| | Python | Anaconda Python distribution | Load data into pandasDataFrame |
| | | With Spark | Load data into pandasDataFrame and sparkSessionDataFrame |
| | | With Hadoop | Load data into pandasDataFrame and sparkSessionDataFrame |
| | R | Anaconda R distribution | Load data into R data frame |
| | | With Spark | Load data into R data frame and sparkSessionDataFrame |
| | | With Hadoop | Load data into R data frame and sparkSessionDataFrame |
| Python Script | |
| | Python | Anaconda Python distribution | Load data into pandasStreamingBody |
| | | With Spark | Load data into pandasStreamingBody |
| | | With Hadoop | Load data into pandasStreamingBody |
| | R | Anaconda R distribution | Load data into rRawObject |
| | | With Spark | Load data into rRawObject |
| | | With Hadoop | Load data into rRawObject |
| JSON files | |
| | Python | Anaconda Python distribution | Load data into pandasDataFrame |
| | | With Spark | Load data into pandasDataFrame and sparkSessionDataFrame |
| | | With Hadoop | Load data into pandasDataFrame and sparkSessionDataFrame |
| | R | Anaconda R distribution | Load data into R data frame |
| | | With Spark | Load data into R data frame, rRawObject and sparkSessionDataFrame |
| | | With Hadoop | Load data into R data frame, rRawObject and sparkSessionDataFrame |
| \.xlsx and \.xls files | |
| | Python | Anaconda Python distribution | Load data into pandasDataFrame |
| | | With Spark | Load data into pandasDataFrame |
| | | With Hadoop | Load data into pandasDataFrame |
| | R | Anaconda R distribution | Load data into rRawObject |
| | | With Spark | No data load support |
| | | With Hadoop | No data load support |
| Octet\-stream file types | |
| | Python | Anaconda Python distribution | Load data into pandasStreamingBody |
| | | With Spark | Load data into pandasStreamingBody |
| | R | Anaconda R distribution | Load data in rRawObject |
| | | With Spark | Load data in rDataObject |
| PDF file type | |
| | Python | Anaconda Python distribution | Load data into pandasStreamingBody |
| | | With Spark | Load data into pandasStreamingBody |
| | | With Hadoop | Load data into pandasStreamingBody |
| | R | Anaconda R distribution | Load data in rRawObject |
| | | With Spark | Load data in rDataObject |
| | | With Hadoop | Load data into rRawData |
| ZIP file type | |
| | Python | Anaconda Python distribution | Load data into pandasStreamingBody |
| | | With Spark | Load data into pandasStreamingBody |
| | R | Anaconda R distribution | Load data in rRawObject |
| | | With Spark | Load data in rDataObject |
| JPEG, PNG image files | |
| | Python | Anaconda Python distribution | Load data into pandasStreamingBody |
| | | With Spark | Load data into pandasStreamingBody |
| | | With Hadoop | Load data into pandasStreamingBody |
| | R | Anaconda R distribution | Load data in rRawObject |
| | | With Spark | Load data in rDataObject |
| | | With Hadoop | Load data in rDataObject |
| Binary files | |
| | Python | Anaconda Python distribution | Load data into pandasStreamingBody |
| | | With Spark | Load data into pandasStreamingBody |
| | | Hadoop | No data load support |
| | R | Anaconda R distribution | Load data in rRawObject |
| | | With Spark | Load data into rRawObject |
| | | Hadoop | Load data in rDataObject |
<!-- </table ""> -->
## Supported database connections ##
<!-- <table> -->
Table 2\. Supported database connections
| Data source | Notebook coding language | Compute engine type | Available support to load data |
| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------ | ---------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| \- [Db2 Warehouse on Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-wh.html) <br>\- [IBM Db2 on Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-cloud.html) <br>\- [IBM Db2 Database](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2.html) | |
| | Python | Anaconda Python distribution | Load data into ibmdbpyIda and ibmdbpyPandas |
| | | With Spark | Load data into ibmdbpyIda, ibmdbpyPandas and sparkSessionDataFrame |
| | | With Hadoop | Load data into ibmdbpyIda, ibmdbpyPandas and sparkSessionDataFrame |
| | R | Anaconda R distribution | Load data into ibmdbrIda and ibmdbrDataframe |
| | | With Spark | Load data into ibmdbrIda, ibmdbrDataFrame and sparkSessionDataFrame |
| | | With Hadoop | Load data into ibmdbrIda, ibmdbrDataFrame and sparkSessionDataFrame |
| \- [Db2 for z/OS](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-wh.html) <br> | |
| | Python | Anaconda Python distribution | Load data into ibmdbpyIda and ibmdbpyPandas |
| | | With Spark | No data load support |
| \- [Amazon Simple Storage Services (S3)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-amazon-s3.html) <br>\- [Amazon Simple Storage Services (S3) with an IAM access policy](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-amazon-s3.html) | |
| | Python | Anaconda Python distribution | Load data into pandasStreamingBody |
| | | With Hadoop | Load data into pandasStreamingBody and sparkSessionSetup |
| | R | Anaconda R distribution | Load data into rRawObject |
| | | With Hadoop | Load data into rRawObject and sparkSessionSetup |
| \- [IBM Cloud Databases for PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-dbase-postgresql.html) <br>\- [Microsoft SQL Server](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sql-server.html) | |
| | Python | Anaconda Python distribution | Load data into pandasDataFrame |
| | | With Spark | Load data into pandasDataFrame |
| | R | Anaconda R distribution | Load data into R data frame |
| | | With Spark | Load data into R data frame and sparkSessionDataFrame |
| \- [IBM Cognos Analytics](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cognos.html) | |
| | Python | Anaconda Python distribution | Load data into pandasDataFrame <br> <br>In the generated code: <br>\- Edit the path parameter in the last line of code <br>\- Remove the comment tagging <br> <br>To read data, see [Reading data from a data source](https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ca_notebook.doc/c_read_notebook.html) <br>To search data, see [Searching for data objects](https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ca_notebook.doc/c_search_for_data_objects_notebook.html) <br>To write data, see [Writing data to a data source](https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ca_notebook.doc/c_write_notebook.html) |
| | | With Spark | No data load support |
| | R | Anaconda R distribution | Load data into R data frame <br> <br>In the generated code: <br>\- Edit the path parameter in the last line of code <br>\- Remove the comment tagging <br> <br>To read data, see [Reading data from a data source](https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ca_notebook.doc/c_read_notebook.html) <br>To search data, see [Searching for data objects](https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ca_notebook.doc/c_search_for_data_objects_notebook.html) <br>To write data, see [Writing data to a data source](https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ca_notebook.doc/c_write_notebook.html) |
| | | With Spark | No data load support |
| \- [Microsoft Azure Cosmos DB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cosmosdb.html) | |
| | Python | Anaconda Python distribution | Load data into pandasDataFrame |
| | | With Spark | Load data into pandasDataFrame |
| | R | Anaconda R distribution | No data load support |
| | | With Spark | No data load support |
| \- [Amazon RDS for MySQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-mysql.html) <br> | |
| | Python | Anaconda Python distribution | Load data into pandasDataFrame |
| | | With Spark | Load data into pandasDataFrame |
| | R | Anaconda R distribution | Load data into R data frame and sparkSessionDataFrame |
| | | With Spark | No data load support |
| \- [HTTP](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-http.html) <br>\- [Apache Cassandra](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cassandra.html) <br>\- [Amazon RDS for PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-postresql.html) | |
| | Python | Anaconda Python distribution | Load data into pandasDataFrame |
| | | With Spark | Load data into pandasDataFrame |
| | R | Anaconda R distribution | Load data into R data frame |
| | | With Spark | Load data into R data frame |
<!-- </table ""> -->
**Parent topic:**[Loading and accessing data in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html)
<!-- </article "role="article" "> -->
|
E0E5646EA00A170BB595E9E0BBCCB69F702FFC7C | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html?context=cdpaas&locale=en | Analyzing data and working with models | Analyzing data and working with models
You can analyze data and build or work with models in projects. The methods that you choose for preparing data or working with models help you determine which tools best fit your needs.
Each tool has a specific, primary task. Some tools have capabilities for multiple types of tasks.
You can choose a tool based on how much automation you want:
* Code editor tools: Use to write code in Python or R; all are also available with Spark.
* Graphical builder tools: Use menus and drag-and-drop functionality on a builder to visually program.
* Automated builder tools: Use to configure automated tasks that require limited user input.
Tool to tasks
Tool Primary task Tool type Work with data Work with models
[Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) Prepare and visualize data Graphical builder ✓
[Visualizations](https://dataplatform.cloud.ibm.com/docs/content/dataview/idh_idc_cg_help_main.html) Build graphs to visualize data Graphical builder ✓
[Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) Experiment with foundation models and prompts Graphical builder ✓
[Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html) Tune a foundation model to return output in a certain style or format Graphical builder ✓ ✓
[Jupyter notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-editor.html) Work with data and models in Python or R notebooks Code editor ✓ ✓
[Federated learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html) Train models on distributed data Code editor ✓
[RStudio IDE](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-overview.html) Work with data and models in R Code editor ✓ ✓
[SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html) Build models as a visual flow Graphical builder ✓ ✓
[Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html) Solve optimization problems Graphical builder, code editor ✓ ✓
[AutoAI tool](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) Build machine learning models automatically Automated builder ✓ ✓
[Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) Automate model lifecycle Graphical builder ✓ ✓
[Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html) Generate synthetic tabular data Graphical builder ✓ ✓
Learn more
* [Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html)
* [Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html)
| # Analyzing data and working with models #
You can analyze data and build or work with models in projects\. The methods that you choose for preparing data or working with models help you determine which tools best fit your needs\.
Each tool has a specific, primary task\. Some tools have capabilities for multiple types of tasks\.
You can choose a tool based on how much automation you want:
<!-- <ul> -->
* Code editor tools: Use to write code in Python or R; all are also available with Spark\.
* Graphical builder tools: Use menus and drag\-and\-drop functionality on a builder to visually program\.
* Automated builder tools: Use to configure automated tasks that require limited user input\.
<!-- </ul> -->
<!-- <table> -->
Tool to tasks
| Tool | Primary task | Tool type | Work with data | Work with models |
| ---------------------------- | --------------------------------------------------------------------- | ------------------------------ | -------------- | ---------------- |
| [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) | Prepare and visualize data | Graphical builder | ✓ | |
| [Visualizations](https://dataplatform.cloud.ibm.com/docs/content/dataview/idh_idc_cg_help_main.html) | Build graphs to visualize data | Graphical builder | ✓ | |
| [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) | Experiment with foundation models and prompts | Graphical builder | | ✓ |
| [Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html) | Tune a foundation model to return output in a certain style or format | Graphical builder | ✓ | ✓ |
| [Jupyter notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-editor.html) | Work with data and models in Python or R notebooks | Code editor | ✓ | ✓ |
| [Federated learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html) | Train models on distributed data | Code editor | | ✓ |
| [RStudio IDE](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-overview.html) | Work with data and models in R | Code editor | ✓ | ✓ |
| [SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html) | Build models as a visual flow | Graphical builder | ✓ | ✓ |
| [Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html) | Solve optimization problems | Graphical builder, code editor | ✓ | ✓ |
| [AutoAI tool](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) | Build machine learning models automatically | Automated builder | ✓ | ✓ |
| [Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) | Automate model lifecycle | Graphical builder | ✓ | ✓ |
| [Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html) | Generate synthetic tabular data | Graphical builder | ✓ | ✓ |
<!-- </table ""> -->
## Learn more ##
<!-- <ul> -->
* [Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html)
* [Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html)
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
89F9E0463D14DED51B14392A4FD7A69BB53FA1BF | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-skipping-spark-sql.html?context=cdpaas&locale=en | Data skipping for Spark SQL | Data skipping for Spark SQL
Data skipping can significantly boost the performance of SQL queries by skipping over irrelevant data objects or files based on summary metadata associated with each object.
Data skipping uses the open source Xskipper library for creating, managing and deploying data skipping indexes with Apache Spark. See [Xskipper - An Extensible Data Skipping Framework](https://xskipper.io).
For more details on how to work with Xskipper see:
* [Quick Start Guide](https://xskipper.io/getting-started/quick-start-guide/)
* [Demo Notebooks](https://xskipper.io/getting-started/sample-notebooks/)
In addition to the open source features in Xskipper, the following features are also available:
* [Geospatial data skipping](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-skipping-spark-sql.html?context=cdpaas&locale=en#geospatial-skipping)
* [Encrypting indexes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-skipping-spark-sql.html?context=cdpaas&locale=en#encrypting-indexes)
* [Data skipping with joins (for Spark 3 only)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-skipping-spark-sql.html?context=cdpaas&locale=en#skipping-with-joins)
* [Samples showing these features](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-skipping-spark-sql.html?context=cdpaas&locale=en#samples)
Geospatial data skipping
You can also use data skipping when querying geospatial data sets using [geospatial functions](https://www.ibm.com/support/knowledgecenter/en/SSCJDQ/com.ibm.swg.im.dashdb.analytics.doc/doc/geo_functions.html) from the [spatio-temporal library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/geo-spatial-lib.html).
* To benefit from data skipping in data sets with latitude and longitude columns, you can collect the min/max indexes on the latitude and longitude columns.
* Data skipping can be used in data sets with a geometry column (a UDT column) by using a built-in [Xskipper plugin](https://xskipper.io/api/indexing/#plugins).
The next sections show you how to work with the geospatial plugin.
Setting up the geospatial plugin
To use the plugin, load the relevant implementations using the Registration module. Note that you can only use Scala in applications in IBM Analytics Engine powered by Apache Spark, not in Watson Studio.
* For Scala:
import com.ibm.xskipper.stmetaindex.filter.STMetaDataFilterFactory
import com.ibm.xskipper.stmetaindex.index.STIndexFactory
import com.ibm.xskipper.stmetaindex.translation.parquet.{STParquetMetaDataTranslator, STParquetMetadatastoreClauseTranslator}
import io.xskipper._
Registration.addIndexFactory(STIndexFactory)
Registration.addMetadataFilterFactory(STMetaDataFilterFactory)
Registration.addClauseTranslator(STParquetMetadatastoreClauseTranslator)
Registration.addMetaDataTranslator(STParquetMetaDataTranslator)
* For Python:
from xskipper import Xskipper
from xskipper import Registration
Registration.addMetadataFilterFactory(spark, 'com.ibm.xskipper.stmetaindex.filter.STMetaDataFilterFactory')
Registration.addIndexFactory(spark, 'com.ibm.xskipper.stmetaindex.index.STIndexFactory')
Registration.addMetaDataTranslator(spark, 'com.ibm.xskipper.stmetaindex.translation.parquet.STParquetMetaDataTranslator')
Registration.addClauseTranslator(spark, 'com.ibm.xskipper.stmetaindex.translation.parquet.STParquetMetadatastoreClauseTranslator')
Index building
To build an index, you can use the addCustomIndex API. Note that you can only use Scala in applications in IBM Analytics Engine powered by Apache Spark, not in Watson Studio.
* For Scala:
import com.ibm.xskipper.stmetaindex.implicits._
// index the dataset
val xskipper = new Xskipper(spark, dataset_path)
xskipper
.indexBuilder()
// using the implicit method defined in the plugin implicits
.addSTBoundingBoxLocationIndex("location")
// equivalent
//.addCustomIndex(STBoundingBoxLocationIndex("location"))
.build(reader).show(false)
* For Python:
xskipper = Xskipper(spark, dataset_path)
# adding the index using the custom index API
xskipper.indexBuilder() \
.addCustomIndex("com.ibm.xskipper.stmetaindex.index.STBoundingBoxLocationIndex", ['location'], dict()) \
.build(reader) \
.show(10, False)
Supported functions
The list of supported geospatial functions includes the following:
* ST_Distance
* ST_Intersects
* ST_Contains
* ST_Equals
* ST_Crosses
* ST_Touches
* ST_Within
* ST_Overlaps
* ST_EnvelopesIntersect
* ST_IntersectsInterior
Encrypting indexes
If you use a Parquet metadata store, the metadata can optionally be encrypted using Parquet Modular Encryption (PME). This is achieved by storing the metadata itself as a Parquet data set, and thus PME can be used to encrypt it. This feature applies to all input formats, for example, a data set stored in CSV format can have its metadata encrypted using PME.
In the following section, unless specified otherwise, when referring to footers, columns, and so on, these are with respect to metadata objects, and not to objects in the indexed data set.
Index encryption is modular and granular in the following way:
* Each index can either be encrypted (with a per-index key granularity) or left in plain text
* Footer + object name column:
* Footer column of the metadata object which in itself is a Parquet file contains, among other things:
* Schema of the metadata object, which reveals the types, parameters and column names for all indexes collected. For example, you can learn that a BloomFilter is defined on column city with a false-positive probability of 0.1.
* Full path to the original data set or a table name in case of a Hive metastore table.
* Object name column stores the names of all indexed objects.
* Footer + metadata column can either be:
* Both encrypted using the same key. This is the default. In this case, the Parquet objects comprising the metadata are in encrypted footer mode, and the object name column is encrypted using the selected key.
* Both in plain text. In this case, the Parquet objects comprising the metadata are in plain text footer mode, and the object name column is not encrypted.
If at least one index is marked as encrypted, then a footer key must be configured regardless of whether plain text footer mode is enabled or not. If plain text footer is set then the footer key is used only for tamper-proofing. Note that in that case the object name column is not tamper proofed.
If a footer key is configured, then at least one index must be encrypted.
Before using index encryption, you should check the documentation on [PME](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/parquet-encryption.html) and make sure you are familiar with the concepts.
Important: When using index encryption, whenever a key is configured in any Xskipper API, it is always the key's label that is specified, never the key itself.
To use index encryption:
1. Follow all the steps to make sure PME is enabled. See [PME](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/parquet-encryption.html).
2. Perform all regular PME configurations, including Key Management configurations.
3. Create encrypted metadata for a data set:
1. Follow the regular flow for creating metadata.
2. Configure a footer key. If you wish to set a plain text footer + object name column, set io.xskipper.parquet.encryption.plaintext.footer to true (See samples below).
3. In IndexBuilder, for each index you want to encrypt, add the label of the key to use for that index.
To use metadata during query time or to refresh existing metadata, no setup is necessary other than the regular PME setup required to make sure the keys are accessible (literally the same configuration needed to read an encrypted data set).
Samples
The following samples show metadata creation using a key named k1 as a footer + object name key, and a key named k2 as a key to encrypt a MinMax for temp, while also creating a ValueList for city, which is left in plain text. Note that you can only use Scala in applications in IBM Analytics Engine powered by Apache Spark, not in Watson Studio.
* For Scala:
// index the dataset
val xskipper = new Xskipper(spark, dataset_path)
// Configuring the JVM wide parameters
val jvmConf = Map(
"io.xskipper.parquet.mdlocation" -> md_base_location,
"io.xskipper.parquet.mdlocation.type" -> "EXPLICIT_BASE_PATH_LOCATION")
Xskipper.setConf(jvmConf)
// set the footer key
val conf = Map(
"io.xskipper.parquet.encryption.footer.key" -> "k1")
xskipper.setConf(conf)
xskipper
.indexBuilder()
// Add an encrypted MinMax index for temp
.addMinMaxIndex("temp", "k2")
// Add a plaintext ValueList index for city
.addValueListIndex("city")
.build(reader).show(false)
* For Python
xskipper = Xskipper(spark, dataset_path)
# Add JVM Wide configuration
jvmConf = dict([
("io.xskipper.parquet.mdlocation", md_base_location),
("io.xskipper.parquet.mdlocation.type", "EXPLICIT_BASE_PATH_LOCATION")])
Xskipper.setConf(spark, jvmConf)
# configure footer key
conf = dict([("io.xskipper.parquet.encryption.footer.key", "k1")])
xskipper.setConf(conf)
# adding the indexes
xskipper.indexBuilder() \
.addMinMaxIndex("temp", "k2") \
.addValueListIndex("city") \
.build(reader) \
.show(10, False)
If you want the footer + object name to be left in plain text mode (as mentioned above), you need to add the configuration parameter:
* For Scala:
// index the dataset
val xskipper = new Xskipper(spark, dataset_path)
// Configuring the JVM wide parameters
val jvmConf = Map(
"io.xskipper.parquet.mdlocation" -> md_base_location,
"io.xskipper.parquet.mdlocation.type" -> "EXPLICIT_BASE_PATH_LOCATION")
Xskipper.setConf(jvmConf)
// set the footer key
val conf = Map(
"io.xskipper.parquet.encryption.footer.key" -> "k1",
"io.xskipper.parquet.encryption.plaintext.footer" -> "true")
xskipper.setConf(conf)
xskipper
.indexBuilder()
// Add an encrypted MinMax index for temp
.addMinMaxIndex("temp", "k2")
// Add a plaintext ValueList index for city
.addValueListIndex("city")
.build(reader).show(false)
* For Python
xskipper = Xskipper(spark, dataset_path)
# Add JVM Wide configuration
jvmConf = dict([
("io.xskipper.parquet.mdlocation", md_base_location),
("io.xskipper.parquet.mdlocation.type", "EXPLICIT_BASE_PATH_LOCATION")])
Xskipper.setConf(spark, jvmConf)
# configure footer key
conf = dict([("io.xskipper.parquet.encryption.footer.key", "k1"),
("io.xskipper.parquet.encryption.plaintext.footer", "true")])
xskipper.setConf(conf)
# adding the indexes
xskipper.indexBuilder() \
.addMinMaxIndex("temp", "k2") \
.addValueListIndex("city") \
.build(reader) \
.show(10, False)
Data skipping with joins (for Spark 3 only)
With Spark 3, you can use data skipping in join queries such as:
SELECT *
FROM orders, lineitem
WHERE l_orderkey = o_orderkey and o_custkey = 800
This example shows a star schema based on the TPC-H benchmark schema (see [TPC-H](http://www.tpc.org/tpch/)) where lineitem is a fact table and contains many records, while the orders table is a dimension table which has a relatively small number of records compared to the fact tables.
The above query has a predicate on the orders table, which contains a small number of records, so applying min/max indexes to that predicate alone does not benefit much from data skipping.
Dynamic data skipping is a feature which enables queries such as the above to benefit from data skipping by first extracting the relevant l_orderkey values based on the condition on the orders table and then using it to push down a predicate on l_orderkey that uses data skipping indexes to filter irrelevant objects.
To use this feature, enable the following optimization rule. Note that you can only use Scala in applications in IBM Analytics Engine powered by Apache Spark, not in Watson Studio.
* For Scala:
import com.ibm.spark.implicits._
spark.enableDynamicDataSkipping()
* For Python:
from sparkextensions import SparkExtensions
SparkExtensions.enableDynamicDataSkipping(spark)
Then use the Xskipper API as usual and your queries will benefit from using data skipping.
For example, in the above query, indexing l_orderkey using min/max will enable skipping over the lineitem table and will improve query performance.
Support for older metadata
Xskipper supports older metadata created by the MetaIndexManager seamlessly. Older metadata can be used for skipping as updates to the Xskipper metadata are carried out automatically by the next refresh operation.
If you see DEPRECATED_SUPPORTED in front of an index when listing indexes or running a describeIndex operation, the metadata version is deprecated but is still supported and skipping will work. The next refresh operation will update the metadata automatically.
| # Data skipping for Spark SQL #
Data skipping can significantly boost the performance of SQL queries by skipping over irrelevant data objects or files based on summary metadata associated with each object\.
Data skipping uses the open source Xskipper library for creating, managing and deploying data skipping indexes with Apache Spark\. See [Xskipper \- An Extensible Data Skipping Framework](https://xskipper.io)\.
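For orientation, the following minimal sketch builds a min/max index on a numeric column and then enables skipping for subsequent Spark SQL queries\. The data set path, metadata location, and the `temp` column are placeholders, and the calls follow the Xskipper quick start guide linked below\.
from xskipper import Xskipper
# placeholder locations for the data set and its skipping metadata
dataset_path = "cos://mybucket.myservice/weather-data"
md_base_location = "cos://mybucket.myservice/weather-metadata"
Xskipper.setConf(spark, dict([
("io.xskipper.parquet.mdlocation", md_base_location),
("io.xskipper.parquet.mdlocation.type", "EXPLICIT_BASE_PATH_LOCATION")]))
# collect a min/max index on the temp column
xskipper = Xskipper(spark, dataset_path)
xskipper.indexBuilder() \
.addMinMaxIndex("temp") \
.build(spark.read.format("parquet")) \
.show(10, False)
# enable data skipping so that queries such as WHERE temp > 30 can skip irrelevant objects
Xskipper.enable(spark)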
For more details on how to work with Xskipper see:
<!-- <ul> -->
* [Quick Start Guide](https://xskipper.io/getting-started/quick-start-guide/)
* [Demo Notebooks](https://xskipper.io/getting-started/sample-notebooks/)
<!-- </ul> -->
In addition to the open source features in Xskipper, the following features are also available:
<!-- <ul> -->
* [Geospatial data skipping](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-skipping-spark-sql.html?context=cdpaas&locale=en#geospatial-skipping)
* [Encrypting indexes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-skipping-spark-sql.html?context=cdpaas&locale=en#encrypting-indexes)
* [Data skipping with joins (for Spark 3 only)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-skipping-spark-sql.html?context=cdpaas&locale=en#skipping-with-joins)
* [Samples showing these features](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-skipping-spark-sql.html?context=cdpaas&locale=en#samples)
<!-- </ul> -->
## Geospatial data skipping ##
You can also use data skipping when querying geospatial data sets using [geospatial functions](https://www.ibm.com/support/knowledgecenter/en/SSCJDQ/com.ibm.swg.im.dashdb.analytics.doc/doc/geo_functions.html) from the [spatio\-temporal library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/geo-spatial-lib.html)\.
<!-- <ul> -->
* To benefit from data skipping in data sets with latitude and longitude columns, you can collect the min/max indexes on the latitude and longitude columns\.
* Data skipping can be used in data sets with a geometry column (a UDT column) by using a built\-in [Xskipper plugin](https://xskipper.io/api/indexing/#plugins)\.
<!-- </ul> -->
The next sections show you how to work with the geospatial plugin\.
### Setting up the geospatial plugin ###
To use the plugin, load the relevant implementations using the Registration module\. Note that you can only use Scala in applications in IBM Analytics Engine powered by Apache Spark, not in Watson Studio\.
<!-- <ul> -->
* For Scala:
import com.ibm.xskipper.stmetaindex.filter.STMetaDataFilterFactory
import com.ibm.xskipper.stmetaindex.index.STIndexFactory
import com.ibm.xskipper.stmetaindex.translation.parquet.{STParquetMetaDataTranslator, STParquetMetadatastoreClauseTranslator}
import io.xskipper._
Registration.addIndexFactory(STIndexFactory)
Registration.addMetadataFilterFactory(STMetaDataFilterFactory)
Registration.addClauseTranslator(STParquetMetadatastoreClauseTranslator)
Registration.addMetaDataTranslator(STParquetMetaDataTranslator)
* For Python:
from xskipper import Xskipper
from xskipper import Registration
Registration.addMetadataFilterFactory(spark, 'com.ibm.xskipper.stmetaindex.filter.STMetaDataFilterFactory')
Registration.addIndexFactory(spark, 'com.ibm.xskipper.stmetaindex.index.STIndexFactory')
Registration.addMetaDataTranslator(spark, 'com.ibm.xskipper.stmetaindex.translation.parquet.STParquetMetaDataTranslator')
Registration.addClauseTranslator(spark, 'com.ibm.xskipper.stmetaindex.translation.parquet.STParquetMetadatastoreClauseTranslator')
<!-- </ul> -->
### Index building ###
To build an index, you can use the `addCustomIndex` API\. Note that you can only use Scala in applications in IBM Analytics Engine powered by Apache Spark, not in Watson Studio\.
<!-- <ul> -->
* For Scala:
import com.ibm.xskipper.stmetaindex.implicits._
// index the dataset
val xskipper = new Xskipper(spark, dataset_path)
xskipper
.indexBuilder()
// using the implicit method defined in the plugin implicits
.addSTBoundingBoxLocationIndex("location")
// equivalent
//.addCustomIndex(STBoundingBoxLocationIndex("location"))
.build(reader).show(false)
* For Python:
xskipper = Xskipper(spark, dataset_path)
# adding the index using the custom index API
xskipper.indexBuilder() \
.addCustomIndex("com.ibm.xskipper.stmetaindex.index.STBoundingBoxLocationIndex", ['location'], dict()) \
.build(reader) \
.show(10, False)
<!-- </ul> -->
### Supported functions ###
The list of supported geospatial functions includes the following:
<!-- <ul> -->
* ST\_Distance
* ST\_Intersects
* ST\_Contains
* ST\_Equals
* ST\_Crosses
* ST\_Touches
* ST\_Within
* ST\_Overlaps
* ST\_EnvelopesIntersect
* ST\_IntersectsInterior
<!-- </ul> -->
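For example, a query that filters with one of these functions can then benefit from the bounding box metadata collected above\. This is only a sketch: it assumes the spatio\-temporal SQL functions are registered in the session, and the `events` and `regions` views with their `location` and `boundary` geometry columns are placeholders\.
# placeholder view with a geometry column named location
spark.read.format("parquet").load(dataset_path).createOrReplaceTempView("events")
spark.sql("""
SELECT e.*
FROM events e, regions r
WHERE r.name = 'downtown' AND ST_Intersects(e.location, r.boundary)
""").show()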
## Encrypting indexes ##
If you use a Parquet metadata store, the metadata can optionally be encrypted using Parquet Modular Encryption (PME)\. This is achieved by storing the metadata itself as a Parquet data set, and thus PME can be used to encrypt it\. This feature applies to all input formats, for example, a data set stored in CSV format can have its metadata encrypted using PME\.
In the following section, unless specified otherwise, when referring to footers, columns, and so on, these are with respect to metadata objects, and not to objects in the indexed data set\.
Index encryption is modular and granular in the following way:
<!-- <ul> -->
* Each index can either be encrypted (with a per\-index key granularity) or left in plain text
* Footer \+ object name column:
<!-- <ul> -->
* Footer column of the metadata object which in itself is a Parquet file contains, among other things:
<!-- <ul> -->
* Schema of the metadata object, which reveals the types, parameters and column names for all indexes collected. For example, you can learn that a `BloomFilter` is defined on column `city` with a false-positive probability of `0.1`.
* Full path to the original data set or a table name in case of a Hive metastore table.
<!-- </ul> -->
* Object name column stores the names of all indexed objects.
<!-- </ul> -->
* Footer \+ metadata column can either be:
<!-- <ul> -->
* Both encrypted using the same key. This is the default. In this case, the Parquet objects comprising the metadata are in encrypted footer mode, and the object name column is encrypted using the selected key.
* Both in plain text. In this case, the Parquet objects comprising the metadata are in plain text footer mode, and the object name column is not encrypted.
If at least one index is marked as encrypted, then a footer key must be configured regardless of whether plain text footer mode is enabled or not. If plain text footer is set then the footer key is used only for tamper-proofing. Note that in that case the object name column is not tamper proofed.
If a footer key is configured, then at least one index must be encrypted.
<!-- </ul> -->
<!-- </ul> -->
Before using index encryption, you should check the documentation on [PME](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/parquet-encryption.html) and make sure you are familiar with the concepts\.
Important: When using index encryption, whenever a `key` is configured in any Xskipper API, it is always the key's label that is specified, never the key itself\.
To use index encryption:
<!-- <ol> -->
1. Follow all the steps to make sure PME is enabled\. See [PME](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/parquet-encryption.html)\.
2. Perform all *regular* PME configurations, including Key Management configurations\.
3. Create encrypted metadata for a data set:
<!-- <ol> -->
1. Follow the regular flow for creating metadata.
2. Configure a footer key. If you wish to set a plain text footer \+ object name column, set `io.xskipper.parquet.encryption.plaintext.footer` to `true` (See samples below).
3. In `IndexBuilder`, for each index you want to encrypt, add the label of the key to use for that index.
To use metadata during query time or to refresh existing metadata, no setup is necessary other than the *regular* PME setup required to make sure the keys are accessible (literally the same configuration needed to read an encrypted data set).
<!-- </ol> -->
<!-- </ol> -->
## Samples ##
The following samples show metadata creation using a key named `k1` as a footer \+ object name key, and a key named `k2` as a key to encrypt a `MinMax` for `temp`, while also creating a `ValueList` for `city`, which is left in plain text\. Note that you can only use Scala in applications in IBM Analytics Engine powered by Apache Spark, not in Watson Studio\.
<!-- <ul> -->
* For Scala:
// index the dataset
val xskipper = new Xskipper(spark, dataset_path)
// Configuring the JVM wide parameters
val jvmConf = Map(
"io.xskipper.parquet.mdlocation" -> md_base_location,
"io.xskipper.parquet.mdlocation.type" -> "EXPLICIT_BASE_PATH_LOCATION")
Xskipper.setConf(jvmConf)
// set the footer key
val conf = Map(
"io.xskipper.parquet.encryption.footer.key" -> "k1")
xskipper.setConf(conf)
xskipper
.indexBuilder()
// Add an encrypted MinMax index for temp
.addMinMaxIndex("temp", "k2")
// Add a plaintext ValueList index for city
.addValueListIndex("city")
.build(reader).show(false)
* For Python
xskipper = Xskipper(spark, dataset_path)
# Add JVM Wide configuration
jvmConf = dict([
("io.xskipper.parquet.mdlocation", md_base_location),
("io.xskipper.parquet.mdlocation.type", "EXPLICIT_BASE_PATH_LOCATION")])
Xskipper.setConf(spark, jvmConf)
# configure footer key
conf = dict([("io.xskipper.parquet.encryption.footer.key", "k1")])
xskipper.setConf(conf)
# adding the indexes
xskipper.indexBuilder() \
.addMinMaxIndex("temp", "k1") \
.addValueListIndex("city") \
.build(reader) \
.show(10, False)
<!-- </ul> -->
If you want the footer \+ object name to be left in plain text mode (as mentioned above), you need to add the configuration parameter:
<!-- <ul> -->
* For Scala:
// index the dataset
val xskipper = new Xskipper(spark, dataset_path)
// Configuring the JVM wide parameters
val jvmConf = Map(
"io.xskipper.parquet.mdlocation" -> md_base_location,
"io.xskipper.parquet.mdlocation.type" -> "EXPLICIT_BASE_PATH_LOCATION")
Xskipper.setConf(jvmConf)
// set the footer key
val conf = Map(
"io.xskipper.parquet.encryption.footer.key" -> "k1",
"io.xskipper.parquet.encryption.plaintext.footer" -> "true")
xskipper.setConf(conf)
xskipper
.indexBuilder()
// Add an encrypted MinMax index for temp
.addMinMaxIndex("temp", "k2")
// Add a plaintext ValueList index for city
.addValueListIndex("city")
.build(reader).show(false)
* For Python
xskipper = Xskipper(spark, dataset_path)
# Add JVM Wide configuration
jvmConf = dict([
("io.xskipper.parquet.mdlocation", md_base_location),
("io.xskipper.parquet.mdlocation.type", "EXPLICIT_BASE_PATH_LOCATION")])
Xskipper.setConf(spark, jvmConf)
# configure footer key
conf = dict([("io.xskipper.parquet.encryption.footer.key", "k1"),
("io.xskipper.parquet.encryption.plaintext.footer", "true")])
xskipper.setConf(conf)
# adding the indexes
xskipper.indexBuilder() \
.addMinMaxIndex("temp", "k1") \
.addValueListIndex("city") \
.build(reader) \
.show(10, False)
<!-- </ul> -->
## Data skipping with joins (for Spark 3 only) ##
With Spark 3, you can use data skipping in join queries such as:
SELECT *
FROM orders, lineitem
WHERE l_orderkey = o_orderkey and o_custkey = 800
This example shows a star schema based on the TPC\-H benchmark schema (see [TPC\-H](http://www.tpc.org/tpch/)) where lineitem is a fact table and contains many records, while the orders table is a dimension table which has a relatively small number of records compared to the fact tables\.
The above query has a predicate on the orders table, which contains a small number of records, so applying min/max indexes to that predicate alone does not benefit much from data skipping\.
*Dynamic data skipping* is a feature which enables queries such as the above to benefit from data skipping by first extracting the relevant `l_orderkey` values based on the condition on the `orders` table and then using it to push down a predicate on `l_orderkey` that uses data skipping indexes to filter irrelevant objects\.
To use this feature, enable the following optimization rule\. Note that you can only use Scala in applications in IBM Analytics Engine powered by Apache Spark, not in Watson Studio\.
<!-- <ul> -->
* For Scala:
import com.ibm.spark.implicits._
spark.enableDynamicDataSkipping()
* For Python:
from sparkextensions import SparkExtensions
SparkExtensions.enableDynamicDataSkipping(spark)
<!-- </ul> -->
Then use the Xskipper API as usual and your queries will benefit from using data skipping\.
For example, in the above query, indexing `l_orderkey` using min/max will enable skipping over the `lineitem` table and will improve query performance\.
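A sketch of that indexing step, where `lineitem_path` is a placeholder for the location of the lineitem data:
# index the join key on the large fact table so the pushed-down predicate can skip objects
xskipper = Xskipper(spark, lineitem_path)
xskipper.indexBuilder() \
.addMinMaxIndex("l_orderkey") \
.build(spark.read.format("parquet")) \
.show(10, False)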
## Support for older metadata ##
Xskipper supports older metadata created by the MetaIndexManager seamlessly\. Older metadata can be used for skipping as updates to the Xskipper metadata are carried out automatically by the next refresh operation\.
If you see `DEPRECATED_SUPPORTED` in front of an index when listing indexes or running a `describeIndex` operation, the metadata version is deprecated but is still supported and skipping will work\. The next refresh operation will update the metadata automatically\.
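For example, you can check for deprecated metadata and trigger a refresh from the Xskipper API; `dataset_path` is a placeholder in this sketch\.
reader = spark.read.format("parquet")
xskipper = Xskipper(spark, dataset_path)
# list existing indexes; deprecated entries appear as DEPRECATED_SUPPORTED
xskipper.describeIndex(reader).show(10, False)
# refresh the metadata, which also upgrades deprecated metadata versions
xskipper.refreshIndex(reader).show(10, False)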
<!-- </article "role="article" "> -->
|
E0B0B51CD757048207EEFE4EC8F1E98E967D9E69 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/datapreparation-guides.html?context=cdpaas&locale=en | SPSS predictive analytics data preparation algorithms in notebooks | SPSS predictive analytics data preparation algorithms in notebooks
Descriptives provides efficient computation of the univariate and bivariate statistics and automatic data preparation features on large scale data. It can be used widely in data profiling, data exploration, and data preparation for subsequent modeling analyses.
The core statistical features include essential univariate and bivariate statistical summaries, univariate order statistics, metadata information creation from raw data, statistics for visualization of single fields and field pairs, data preparation features, and data interestingness score and data quality assessment. It can efficiently support the functionality required for automated data processing, user interactivity, and obtaining data insights for single fields or the relationships between the pairs of fields inclusive with a specified target.
Python example code:
from spss.ml.datapreparation.descriptives import Descriptives
de = Descriptives(). \
setInputFieldsList(["Field1", "Field2"]). \
setTargetFieldList(["Field3"]). \
setTrimBlanks("TRIM_BOTH")
deModel = de.fit(df)
PMML = deModel.toPMML()
statXML = deModel.statXML()
predictions = deModel.transform(df)
predictions.show()
Descriptives Selection Strategy
When the number of field pairs is too large (for example, larger than the default of 1000), SelectionStrategy is used to limit the number of pairs for which bivariate statistics will be computed. The strategy involves 2 steps:
1. Limit the number of pairs based on the univariate statistics.
2. Limit the number of pairs based on the core association bivariate statistics.
Notice that the pair will always be included under the following conditions:
1. The pair consists of a predictor field and a target field.
2. The pair of predictors or targets is enforced.
Smart Data Preprocessing
The Smart Data Preprocessing (SDP) engine is an analytic component for data preparation. It consists of three separate modules: relevance analysis, relevance and redundancy analysis, and smart metadata (SMD) integration.
Given the data with regular fields, list fields, and map fields, relevance analysis evaluates the associations of input fields with targets, and selects a specified number of fields for subsequent analysis. Meanwhile, it expands list fields and map fields, and extracts the selected fields into regular column-based format.
Due to the efficiency of relevance analysis, it's also used to reduce the large number of fields in wide data to a moderate level where traditional analytics can work.
SmartDataPreprocessingRelevanceAnalysis exports these outputs:
* JSON file, containing model information
* new column-based data
* the related data model
Python example code:
from spss.ml.datapreparation.smartdatapreprocessing import SmartDataPreprocessingRelevanceAnalysis
sdpRA = SmartDataPreprocessingRelevanceAnalysis(). \
setInputFieldList(["holderage", "vehicleage", "claimamt"]). \
setTargetFieldList(["vehiclegroup", "nclaims"]). \
setMaxNumTarget(3). \
setInvalidPairsThresEnabled(True). \
setRMSSEThresEnabled(True). \
setAbsVariCoefThresEnabled(True). \
setInvalidPairsThreshold(0.7). \
setRMSSEThreshold(0.7). \
setAbsVariCoefThreshold(0.05). \
setMaxNumSelFields(2). \
setConCatRatio(0.3). \
setFilterSelFields(True)
predictions = sdpRA.transform(data)
predictions.show()
Sparse Data Convertor
Sparse Data Convertor (SDC) converts regular data fields into list fields. You just need to specify the fields that you want to convert into list fields, then SDC will merge the fields according to their measurement level. It will generate, at most, three kinds of list fields: continuous list field, categorical list field, and map field.
Python example code:
from spss.ml.datapreparation.sparsedataconverter import SparseDataConverter
sdc = SparseDataConverter(). \
setInputFieldList(["Age", "Sex", "Marriage", "BP", "Cholesterol", "Na", "K", "Drug"])
predictions = sdc.transform(data)
predictions.show()
Binning
You can use this function to derive one or more new binned fields or to obtain the bin definitions used to determine the bin values.
Python example code:
from spss.ml.datapreparation.binning.binning import Binning
binDefinition = BinDefinitions(1, False, True, True, [CutPoint(50.0, False)])
binField = BinRequest("integer_field", "integer_bin", binDefinition, None)
params = [binField]
binning = Binning().setBinRequestsParam(params)
outputDF = binning.transform(inputDF)
Hex Binning
You can use this function to calculate and assign hexagonal bins to two fields.
Python example code:
from spss.ml.datapreparation.binning.hexbinning import HexBinning
from spss.ml.param.binningsettings import HexBinningSetting
params = [HexBinningSetting("field1_out", "field1", 5, -1.0, 25.0, 5.0),
HexBinningSetting("field2_out", "field2", 5, -1.0, 25.0, 5.0)]
hexBinning = HexBinning().setHexBinRequestsParam(params)
outputDF = hexBinning.transform(inputDF)
Complex Sampling
The complexSampling function selects a pseudo-random sample of records from a data source.
The complexSampling function performs stratified sampling of incoming data using simple exact sampling and simple proportional sampling. The stratifying fields are specified as input and the sampling counts or sampling ratio for each of the strata to be sampled must also be provided. Optionally, the record counts for each strata may be provided to improve performance.
Python example code:
from spss.ml.datapreparation.sampling.complexsampling import ComplexSampling
from spss.ml.datapreparation.params.sampling import RealStrata, Strata, Stratification
transformer = ComplexSampling(). \
setRandomSeed(123444). \
setRepeatable(True). \
setStratification(Stratification(["real_field"], [
Strata(key=[RealStrata(11.1)], samplingCount=25),
Strata(key=[RealStrata(2.4)], samplingCount=40),
Strata(key=[RealStrata(12.9)], samplingRatio=0.5)])). \
setFrequencyField("frequency_field")
sampled = transformer.transform(unionDF)
Count and Sample
The countAndSample function produces a pseudo-random sample having a size approximately equal to the 'samplingCount' input.
The sampling is accomplished by calling the SamplingComponent with a sampling ratio that's computed as 'samplingCount / totalRecords' where 'totalRecords' is the record count of the incoming data.
Python example code:
from spss.ml.datapreparation.sampling.countandsample import CountAndSample
transformer = CountAndSample().setSamplingCount(20000).setRandomSeed(123)
sampled = transformer.transform(unionDF)
MR Sampling
The mrsampling function selects a pseudo-random sample of records from a data source at a specified sampling ratio. The size of the sample will be approximately the specified proportion of the total number of records subject to an optional maximum. The set of records and their total number will vary with random seed. Every record in the data source has the same probability of being selected.
Python example code:
from spss.ml.datapreparation.sampling.mrsampling import MRSampling
transformer = MRSampling().setSamplingRatio(0.5).setRandomSeed(123).setDiscard(True)
sampled = transformer.transform(unionDF)
Sampling Model
The samplingModel function selects a pseudo-random percentage of the subsequence of input records defined by every Nth record for a given step size N. The total sample size may be optionally limited by a maximum.
When the step size is 1, the subsequence is the entire sequence of input records. When the sampling ratio is 1.0, selection becomes deterministic, not pseudo-random.
Note that with distributed data, the samplingModel function applies the selection criteria independently to each data split. The maximum sample size, if any, applies independently to each split and not to the entire data source; the subsequence is started fresh at the start of each split.
Python example code:
from spss.ml.datapreparation.sampling.samplingcomponent import SamplingModel
transformer = SamplingModel().setSamplingRatio(1.0).setSamplingStep(2).setRandomSeed(123).setDiscard(False)
sampled = transformer.transform(unionDF)
Sequential Sampling
The sequentialSampling function is similar to the samplingModel function. It also selects a pseudo-random percentage of the subsequence of input records defined by every Nth record for a given step size N. The total sample size may be optionally limited by a maximum.
When the step size is 1, the subsequence is the entire sequence of input records. When the sampling ratio is 1.0, selection becomes deterministic, not pseudo-random. The main difference between sequentialSampling and samplingModel is that with distributed data, the sequentialSampling function applies the selection criteria to the entire data source, while the samplingModel function applies the selection criteria independently to each data split.
Python example code:
from spss.ml.datapreparation.sampling.samplingcomponent import SequentialSampling
transformer = SequentialSampling().setSamplingRatio(1.0).setSamplingStep(2).setRandomSeed(123).setDiscard(False)
sampled = transformer.transform(unionDF)
Parent topic:[SPSS predictive analytics algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-algorithms.html)
| # SPSS predictive analytics data preparation algorithms in notebooks #
Descriptives provides efficient computation of the univariate and bivariate statistics and automatic data preparation features on large scale data\. It can be used widely in data profiling, data exploration, and data preparation for subsequent modeling analyses\.
The core statistical features include essential univariate and bivariate statistical summaries, univariate order statistics, metadata information creation from raw data, statistics for visualization of single fields and field pairs, data preparation features, and data interestingness score and data quality assessment\. It can efficiently support the functionality required for automated data processing, user interactivity, and obtaining data insights for single fields or the relationships between the pairs of fields inclusive with a specified target\.
**Python example code:**
from spss.ml.datapreparation.descriptives import Descriptives
de = Descriptives(). \
setInputFieldsList(["Field1", "Field2"]). \
setTargetFieldList(["Field3"]). \
setTrimBlanks("TRIM_BOTH")
deModel = de.fit(df)
PMML = deModel.toPMML()
statXML = deModel.statXML()
predictions = deModel.transform(df)
predictions.show()
## Descriptives Selection Strategy ##
When the number of field pairs is too large (for example, larger than the default of 1000), SelectionStrategy is used to limit the number of pairs for which bivariate statistics will be computed\. The strategy involves 2 steps:
<!-- <ol> -->
1. Limit the number of pairs based on the univariate statistics\.
2. Limit the number of pairs based on the core association bivariate statistics\.
<!-- </ol> -->
Notice that the pair will always be included under the following conditions:
<!-- <ol> -->
1. The pair consists of a predictor field and a target field\.
2. The pair of predictors or targets is enforced\.
<!-- </ol> -->
## Smart Data Preprocessing ##
The Smart Data Preprocessing (SDP) engine is an analytic component for data preparation\. It consists of three separate modules: relevance analysis, relevance and redundancy analysis, and smart metadata (SMD) integration\.
Given the data with regular fields, list fields, and map fields, relevance analysis evaluates the associations of input fields with targets, and selects a specified number of fields for subsequent analysis\. Meanwhile, it expands list fields and map fields, and extracts the selected fields into regular column\-based format\.
Due to the efficiency of relevance analysis, it's also used to reduce the large number of fields in wide data to a moderate level where traditional analytics can work\.
SmartDataPreprocessingRelevanceAnalysis exports these outputs:
<!-- <ul> -->
* JSON file, containing model information
* new column\-based data
* the related data model
<!-- </ul> -->
**Python example code:**
from spss.ml.datapreparation.smartdatapreprocessing import SmartDataPreprocessingRelevanceAnalysis
sdpRA = SmartDataPreprocessingRelevanceAnalysis(). \
setInputFieldList(["holderage", "vehicleage", "claimamt"]). \
setTargetFieldList(["vehiclegroup", "nclaims"]). \
setMaxNumTarget(3). \
setInvalidPairsThresEnabled(True). \
setRMSSEThresEnabled(True). \
setAbsVariCoefThresEnabled(True). \
setInvalidPairsThreshold(0.7). \
setRMSSEThreshold(0.7). \
setAbsVariCoefThreshold(0.05). \
setMaxNumSelFields(2). \
setConCatRatio(0.3). \
setFilterSelFields(True)
predictions = sdpRA.transform(data)
predictions.show()
## Sparse Data Convertor ##
Sparse Data Convertor (SDC) converts regular data fields into list fields\. You just need to specify the fields that you want to convert into list fields, then SDC will merge the fields according to their measurement level\. It will generate, at most, three kinds of list fields: continuous list field, categorical list field, and map field\.
**Python example code:**
from spss.ml.datapreparation.sparsedataconverter import SparseDataConverter
sdc = SparseDataConverter(). \
setInputFieldList(["Age", "Sex", "Marriage", "BP", "Cholesterol", "Na", "K", "Drug"])
predictions = sdc.transform(data)
predictions.show()
## Binning ##
You can use this function to derive one or more new binned fields or to obtain the bin definitions used to determine the bin values\.
**Python example code:**
from spss.ml.datapreparation.binning.binning import Binning
binDefinition = BinDefinitions(1, False, True, True, [CutPoint(50.0, False)])
binField = BinRequest("integer_field", "integer_bin", binDefinition, None)
params = [binField]
binning = Binning().setBinRequestsParam(params)
outputDF = binning.transform(inputDF)
## Hex Binning ##
You can use this function to calculate and assign hexagonal bins to two fields\.
**Python example code:**
from spss.ml.datapreparation.binning.hexbinning import HexBinning
from spss.ml.param.binningsettings import HexBinningSetting
params = [HexBinningSetting("field1_out", "field1", 5, -1.0, 25.0, 5.0),
HexBinningSetting("field2_out", "field2", 5, -1.0, 25.0, 5.0)]
hexBinning = HexBinning().setHexBinRequestsParam(params)
outputDF = hexBinning.transform(inputDF)
## Complex Sampling ##
The complexSampling function selects a pseudo\-random sample of records from a data source\.
The complexSampling function performs stratified sampling of incoming data using simple exact sampling and simple proportional sampling\. The stratifying fields are specified as input and the sampling counts or sampling ratio for each of the strata to be sampled must also be provided\. Optionally, the record counts for each strata may be provided to improve performance\.
**Python example code:**
from spss.ml.datapreparation.sampling.complexsampling import ComplexSampling
from spss.ml.datapreparation.params.sampling import RealStrata, Strata, Stratification
transformer = ComplexSampling(). \
setRandomSeed(123444). \
setRepeatable(True). \
setStratification(Stratification(["real_field"], [
Strata(key=[RealStrata(11.1)], samplingCount=25),
Strata(key=[RealStrata(2.4)], samplingCount=40),
Strata(key=[RealStrata(12.9)], samplingRatio=0.5)])). \
setFrequencyField("frequency_field")
sampled = transformer.transform(unionDF)
## Count and Sample ##
The countAndSample function produces a pseudo\-random sample having a size approximately equal to the 'samplingCount' input\.
The sampling is accomplished by calling the SamplingComponent with a sampling ratio that's computed as 'samplingCount / totalRecords' where 'totalRecords' is the record count of the incoming data\.
**Python example code:**
from spss.ml.datapreparation.sampling.countandsample import CountAndSample
transformer = CountAndSample().setSamplingCount(20000).setRandomSeed(123)
sampled = transformer.transform(unionDF)
## MR Sampling ##
The mrsampling function selects a pseudo\-random sample of records from a data source at a specified sampling ratio\. The size of the sample will be approximately the specified proportion of the total number of records subject to an optional maximum\. The set of records and their total number will vary with random seed\. Every record in the data source has the same probability of being selected\.
**Python example code:**
from spss.ml.datapreparation.sampling.mrsampling import MRSampling
transformer = MRSampling().setSamplingRatio(0.5).setRandomSeed(123).setDiscard(True)
sampled = transformer.transform(unionDF)
## Sampling Model ##
The samplingModel function selects a pseudo\-random percentage of the subsequence of input records defined by every Nth record for a given step size N\. The total sample size may be optionally limited by a maximum\.
When the step size is 1, the subsequence is the entire sequence of input records\. When the sampling ratio is 1\.0, selection becomes deterministic, not pseudo\-random\.
Note that with distributed data, the samplingModel function applies the selection criteria independently to each data split\. The maximum sample size, if any, applies independently to each split and not to the entire data source; the subsequence is started fresh at the start of each split\.
**Python example code:**
from spss.ml.datapreparation.sampling.samplingcomponent import SamplingModel
transformer = SamplingModel().setSamplingRatio(1.0).setSamplingStep(2).setRandomSeed(123).setDiscard(False)
sampled = transformer.transform(unionDF)
## Sequential Sampling ##
The sequentialSampling function is similar to the samplingModel function\. It also selects a pseudo\-random percentage of the subsequence of input records defined by every Nth record for a given step size N\. The total sample size may be optionally limited by a maximum\.
When the step size is 1, the subsequence is the entire sequence of input records\. When the sampling ratio is 1\.0, selection becomes deterministic, not pseudo\-random\. The main difference between sequentialSampling and samplingModel is that with distributed data, the sequentialSampling function applies the selection criteria to the entire data source, while the samplingModel function applies the selection criteria independently to each data split\.
**Python example code:**
from spss.ml.datapreparation.sampling.samplingcomponent import SequentialSampling
transformer = SequentialSampling().setSamplingRatio(1.0).setSamplingStep(2).setRandomSeed(123).setDiscard(False)
sampled = transformer.transform(unionDF)
**Parent topic:**[SPSS predictive analytics algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-algorithms.html)
<!-- </article "role="article" "> -->
|
3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-data-sources.html?context=cdpaas&locale=en | Data sources for scoring batch deployments | Data sources for scoring batch deployments
You can supply input data for a batch deployment job in several ways, including directly uploading a file or providing a link to database tables. The types of allowable input data vary according to the type of deployment job that you are creating.
For supported input types by framework, refer to [Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html).
Input data can be supplied to a batch job as [inline data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-data-sources.html?context=cdpaas&locale=en#inline_data) or [data reference](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-data-sources.html?context=cdpaas&locale=en#data_ref).
Available input types for batch deployments by framework and asset type
Available input types for batch deployments by framework and asset type
Framework Batch deployment type
Decision optimization Reference
Python function Inline
PyTorch Inline and Reference
Tensorflow Inline and Reference
Scikit-learn Inline and Reference
Python scripts Reference
Spark MLlib Inline and Reference
SPSS Inline and Reference
XGBoost Inline and Reference
Inline data description
Inline type input data for batch processing is specified in the batch deployment job's payload. For example, you can pass a CSV file as the deployment input in the UI or as a value for the scoring.input_data parameter in a notebook. When the batch deployment job is completed, the output is written to the corresponding job's scoring.predictions metadata parameter.
Data reference description
Input and output data of type data reference that is used for batch processing can be stored:
* In a remote data source, like a Cloud Object Storage bucket or an SQL or no-SQL database.
* As a local or managed data asset in a deployment space.
Details for data references include:
* Data source reference type depends on the asset type. Refer to Data source reference types section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).
* For data_asset type, the reference to input data must be specified as a /v2/assets href in the input_data_references.location.href parameter in the deployment job's payload. The data asset that is specified is a reference to a local or a connected data asset. Also, if the batch deployment job's output data must be persisted in a remote data source, the references to output data must be specified as a /v2/assets href in output_data_reference.location.href parameter in the deployment job's payload.
* Any input and output data_asset references must be in the same space ID as the batch deployment.
* If the batch deployment job's output data must be persisted in a deployment space as a local asset, output_data_reference.location.name must be specified. When the batch deployment job is completed successfully, the asset with the specified name is created in the space.
* Output data can contain information on where in a remote database the data asset is located. In this situation, you can specify whether to append the batch output to the table or truncate the table and update the output data. Use the output_data_references.location.write_mode parameter to specify the values truncate or append.
* Specifying truncate as value truncates the table and inserts the batch output data.
* Specifying append as value appends the batch output data to the remote database table.
* write_mode is applicable only for the output_data_references parameter.
* write_mode is applicable only for remote database-related data assets. This parameter is not applicable for a local data asset or a Cloud Object Storage based data asset.
Example data_asset payload
"input_data_references": [{
"type": "data_asset",
"connection": {
},
"location": {
"href": "/v2/assets/<asset_id>?space_id=<space_id>"
}
}]
Example connection_asset payload
"input_data_references": [{
"type": "connection_asset",
"connection": {
"id": "<connection_guid>"
},
"location": {
"bucket": "<bucket name>",
"file_name": "<directory_name>/<file name>"
}
<other wdp-properties supported by runtimes>
}]
Structuring the input data
How you structure the input data, also known as the payload, for the batch job depends on the framework for the asset you are deploying.
A .csv input file or other structured data formats must be formatted to match the schema of the asset. List the column names (fields) in the first row and values to be scored in subsequent rows. For example, see the following code snippet:
PassengerId, Pclass, Name, Sex, Age, SibSp, Parch, Ticket, Fare, Cabin, Embarked
1,3,"Braund, Mr. Owen Harris",0,22,1,0,A/5 21171,7.25,,S
4,1,"Winslet, Mr. Leo Brown",1,65,1,0,B/5 200763,7.50,,S
A JSON input file must provide the same information on fields and values, by using this format:
{"input_data":[{
"fields": <field1>, <field2>, ...],
"values": <value1>, <value2>, ...]]
}]}
For example:
{"input_data":[{
"fields": "PassengerId","Pclass","Name","Sex","Age","SibSp","Parch","Ticket","Fare","Cabin","Embarked"],
"values": 1,3,"Braund, Mr. Owen Harris",0,22,1,0,"A/5 21171",7.25,null,"S"],
4,1,"Winselt, Mr. Leo Brown",1,65,1,0,"B/5 200763",7.50,null,"S"]]
}]}
Preparing a payload that matches the schema of an existing model
Refer to this sample code:
model_details = client.repository.get_details("<model_id>") # retrieves details and includes schema
columns_in_schema = []
for i in range(0, len(model_details['entity']['input'].get('fields'))):
    columns_in_schema.append(model_details['entity']['input'].get('fields')[i])
X = X[columns_in_schema] # where X is a pandas dataframe that contains values to be scored
# (...)
scoring_values = X.values.tolist()
array_of_input_fields = X.columns.tolist()
payload_scoring = {"input_data": [{"fields": array_of_input_fields, "values": scoring_values}]}
Parent topic:[Creating a batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html)
| # Data sources for scoring batch deployments #
You can supply input data for a batch deployment job in several ways, including directly uploading a file or providing a link to database tables\. The types of allowable input data vary according to the type of deployment job that you are creating\.
For supported input types by framework, refer to [Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)\.
Input data can be supplied to a batch job as [inline data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-data-sources.html?context=cdpaas&locale=en#inline_data) or [data reference](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-data-sources.html?context=cdpaas&locale=en#data_ref)\.
## Available input types for batch deployments by framework and asset type ##
<!-- <table> -->
Available input types for batch deployments by framework and asset type
| Framework | Batch deployment type |
| --------------------- | --------------------- |
| Decision optimization | Reference |
| Python function | Inline |
| PyTorch | Inline and Reference |
| Tensorflow | Inline and Reference |
| Scikit\-learn | Inline and Reference |
| Python scripts | Reference |
| Spark MLlib | Inline and Reference |
| SPSS | Inline and Reference |
| XGBoost | Inline and Reference |
<!-- </table ""> -->
### Inline data description ###
Inline type input data for batch processing is specified in the batch deployment job's payload\. For example, you can pass a CSV file as the deployment input in the UI or as a value for the `scoring.input_data` parameter in a notebook\. When the batch deployment job is completed, the output is written to the corresponding job's `scoring.predictions` metadata parameter\.
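For example, one common way to pass inline data from a notebook is with the Watson Machine Learning Python client\. The following is a sketch only; the credentials, space ID, deployment ID, fields, and values are placeholders\.
from ibm_watson_machine_learning import APIClient
client = APIClient(wml_credentials)  # wml_credentials defined elsewhere
client.set.default_space(space_id)   # space that contains the batch deployment
job_payload = {
    client.deployments.ScoringMetaNames.INPUT_DATA: [{
        "fields": ["Age", "Sex", "Fare"],
        "values": [[22, 0, 7.25], [65, 1, 7.50]]
    }]
}
job = client.deployments.create_job(deployment_id, meta_props=job_payload)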
### Data reference description ###
Input and output data of type *data reference* that is used for batch processing can be stored:
<!-- <ul> -->
* In a remote data source, like a Cloud Object Storage bucket or an SQL or no\-SQL database\.
* As a local or managed data asset in a deployment space\.
<!-- </ul> -->
Details for data references include:
<!-- <ul> -->
* Data source reference `type` depends on the asset type\. Refer to **Data source reference types** section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\.
* For `data_asset` type, the reference to input data must be specified as a `/v2/assets` href in the `input_data_references.location.href` parameter in the deployment job's payload\. The data asset that is specified is a reference to a local or a connected data asset\. Also, if the batch deployment job's output data must be persisted in a remote data source, the references to output data must be specified as a `/v2/assets` href in `output_data_reference.location.href` parameter in the deployment job's payload\.
* Any input and output `data_asset` references must be in the same space ID as the batch deployment\.
* If the batch deployment job's output data must be persisted in a deployment space as a local asset, `output_data_reference.location.name` must be specified\. When the batch deployment job is completed successfully, the asset with the specified name is created in the space\.
* Output data can contain information on where in a remote database the data asset is located\. In this situation, you can specify whether to append the batch output to the table or truncate the table and update the output data\. Use the `output_data_references.location.write_mode` parameter to specify the values `truncate` or `append`\.
<!-- <ul> -->
* Specifying `truncate` as the value truncates the table and inserts the batch output data.
* Specifying `append` as the value appends the batch output data to the remote database table.
* `write_mode` is applicable only for the `output_data_references` parameter.
* `write_mode` is applicable only for remote database-related data assets. This parameter is not applicable for a local data asset or a Cloud Object Storage based data asset.
<!-- </ul> -->
<!-- </ul> -->
#### Example data\_asset payload ####
"input_data_references": [{
"type": "data_asset",
"connection": {
},
"location": {
"href": "/v2/assets/<asset_id>?space_id=<space_id>"
}
}]
#### Example connection\_asset payload ####
"input_data_references": [{
"type": "connection_asset",
"connection": {
"id": "<connection_guid>"
},
"location": {
"bucket": "<bucket name>",
"file_name": "<directory_name>/<file name>"
}
<other wdp-properties supported by runtimes>
}]
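As an illustration of the `write_mode` option that is described earlier, the following hedged fragment (written as a Python literal) shows an output data reference that appends batch output to an existing database table; the connection ID, schema name, and table name are placeholders\.
# Hedged sketch of an output data reference that appends batch output to a
# database table through a connection asset; all identifiers are placeholders.
output_data_references = [{
"type": "connection_asset",
"connection": {
"id": "<connection_guid>"
},
"location": {
"schema_name": "<schema name>",
"table_name": "<table name>",
"write_mode": "append"  # use "truncate" to replace the table contents instead
}
}]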
## Structuring the input data ##
How you structure the input data, also known as the payload, for the batch job depends on the framework for the asset you are deploying\.
A \.csv input file or other structured data formats must be formatted to match the schema of the asset\. List the column names (fields) in the first row and values to be scored in subsequent rows\. For example, see the following code snippet:
PassengerId, Pclass, Name, Sex, Age, SibSp, Parch, Ticket, Fare, Cabin, Embarked
1,3,"Braund, Mr. Owen Harris",0,22,1,0,A/5 21171,7.25,,S
4,1,"Winslet, Mr. Leo Brown",1,65,1,0,B/5 200763,7.50,,S
A JSON input file must provide the same information on fields and values, by using this format:
{"input_data":[{
"fields": <field1>, <field2>, ...],
"values": <value1>, <value2>, ...]]
}]}
For example:
{"input_data":[{
"fields": "PassengerId","Pclass","Name","Sex","Age","SibSp","Parch","Ticket","Fare","Cabin","Embarked"],
"values": 1,3,"Braund, Mr. Owen Harris",0,22,1,0,"A/5 21171",7.25,null,"S"],
4,1,"Winselt, Mr. Leo Brown",1,65,1,0,"B/5 200763",7.50,null,"S"]]
}]}
### Preparing a payload that matches the schema of an existing model ###
Refer to this sample code:
model_details = client.repository.get_details("<model_id>")  # retrieves details and includes schema
columns_in_schema = []
for i in range(0, len(model_details['entity']['input'].get('fields'))):
    columns_in_schema.append(model_details['entity']['input'].get('fields')[i])

X = X[columns_in_schema]  # where X is a pandas dataframe that contains values to be scored
# (...)
scoring_values = X.values.tolist()
array_of_input_fields = X.columns.tolist()
payload_scoring = {"input_data": [{"fields": array_of_input_fields, "values": scoring_values}]}
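As a hedged usage sketch, the payload that is built above can then be submitted to an existing batch deployment; `client` and `deployment_id` are assumed to be an authenticated `APIClient` and the ID of the batch deployment\.
# Sketch: submit the schema-matched payload as a batch deployment job.
job_details = client.deployments.create_job(
deployment_id,
meta_props={client.deployments.ScoringMetaNames.INPUT_DATA: payload_scoring["input_data"]}
)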
**Parent topic:**[Creating a batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html)
<!-- </article "role="article" "> -->
|
653FFEDFAC00F360750F776A3A60F6AAD38ED954 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html?context=cdpaas&locale=en | Creating batch deployments in Watson Machine Learning | Creating batch deployments in Watson Machine Learning
A batch deployment processes input data from a file, data connection, or connected data in a storage bucket, and writes the output to a selected destination.
Before you begin
1. Save a model to a deployment space.
2. Promote or add the input file for the batch deployment to the space. For more information, see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html).
Supported frameworks
Batch deployment is supported for these frameworks and asset types:
* Decision Optimization
* PMML
* Python functions
* PyTorch-Onnx
* Tensorflow
* Scikit-learn
* Python scripts
* Spark MLlib
* SPSS
* XGBoost
Notes:
* You can create batch deployments only of Python functions and models based on the PMML framework programmatically.
* Your list of deployment jobs can contain two types of jobs: WML deployment job and WML batch deployment.
* When you create a batch deployment (through the UI or programmatically), an extra default deployment job is created of the type WML deployment job. The extra job is a parent job that stores all deployment runs generated for that batch deployment that were triggered by the Watson Machine Learning API.
* The standard WML batch deployment type job is created only when you create a deployment from the UI. You cannot create a WML batch deployment type job by using the API.
* The limitations of WML deployment job are as follows:
* The job cannot be edited.
* The job cannot be deleted unless the associated batch deployment is deleted.
* The job doesn't allow scheduling.
* The job doesn't allow notifications.
* The job doesn't allow changing retention settings.
For more information, see [Data sources for scoring batch deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-data-sources.html) and [Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html).
Creating a batch deployment
To create a batch deployment:
1. From the deployment space, click the name of the saved model that you want to deploy. The model detail page opens.
2. Click New deployment.
3. Choose Batch as the deployment type.
4. Enter a name and an optional description for your deployment.
5. Select a [hardware specification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-hardware-configs.html).
6. Click Create. When status changes to Deployed, your deployment is created.
Note: Additionally, you can create a batch deployment by using any of these interfaces:
* Watson Studio user interface, from an Analytics deployment space
* Watson Machine Learning Python Client
* Watson Machine Learning REST APIs
Creating batch deployments programmatically
See [Machine learning samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html) for sample notebooks that demonstrate creating batch deployments that use the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) and Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/).
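As a minimal, hedged sketch of the programmatic path, the following Python client calls create a batch deployment for a model that is already stored in the space. The client authentication, the model ID, and the hardware specification name are assumptions, not fixed values.
# Sketch: create a batch deployment with the Watson Machine Learning Python client.
# `client` is an authenticated APIClient with the default space set, `model_id`
# is the ID of a model stored in that space, and the hardware spec name is a placeholder.
meta_props = {
client.deployments.ConfigurationMetaNames.NAME: "my batch deployment",
client.deployments.ConfigurationMetaNames.BATCH: {},
client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: {"name": "S"}
}
deployment_details = client.deployments.create(model_id, meta_props=meta_props)
deployment_id = client.deployments.get_uid(deployment_details)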
Viewing deployment details
Click the name of a deployment to view the details.

You can view the configuration details such as hardware and software specifications. You can also get the deployment ID, which you can use in API calls from an endpoint. For more information, see [Looking up a deployment endpoint](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html).
Learn more
* For more information, see [Creating jobs in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-jobs.html).
* Refer to [Machine Learning samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html) for links to sample notebooks that demonstrate creating batch deployments that use the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning-cp) and Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/).
Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
| # Creating batch deployments in Watson Machine Learning #
A batch deployment processes input data from a file, data connection, or connected data in a storage bucket, and writes the output to a selected destination\.
## Before you begin ##
<!-- <ol> -->
1. Save a model to a deployment space\.
2. Promote or add the input file for the batch deployment to the space\. For more information, see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html)\.
<!-- </ol> -->
## Supported frameworks ##
Batch deployment is supported for these frameworks and asset types:
<!-- <ul> -->
* Decision Optimization
* PMML
* Python functions
* PyTorch\-Onnx
* Tensorflow
* Scikit\-learn
* Python scripts
* Spark MLlib
* SPSS
* XGBoost
<!-- </ul> -->
**Notes:**
<!-- <ul> -->
* You can create batch deployments only of Python functions and models based on the PMML framework programmatically\.
* Your list of deployment jobs can contain two types of jobs: `WML deployment job` and `WML batch deployment`\.
* When you create a batch deployment (through the UI or programmatically), an extra `default` deployment job is created of the type `WML deployment job`\. The extra job is a parent job that stores all deployment runs generated for that batch deployment that were triggered by the Watson Machine Learning API\.
* The standard `WML batch deployment` type job is created only when you create a deployment from the UI\. You cannot create a `WML batch deployment` type job by using the API\.
* The limitations of `WML deployment job` are as follows:
<!-- <ul> -->
* The job cannot be edited.
* The job cannot be deleted unless the associated batch deployment is deleted.
* The job doesn't allow scheduling.
* The job doesn't allow notifications.
* The job doesn't allow changing retention settings.
<!-- </ul> -->
<!-- </ul> -->
For more information, see [Data sources for scoring batch deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-data-sources.html) and [Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)\.
## Creating a batch deployment ##
To create a batch deployment:
<!-- <ol> -->
1. From the deployment space, click the name of the saved model that you want to deploy\. The model detail page opens\.
2. Click **New deployment**\.
3. Choose **Batch** as the deployment type\.
4. Enter a name and an optional description for your deployment\.
5. Select a [hardware specification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-hardware-configs.html)\.
6. Click **Create**\. When status changes to **Deployed**, your deployment is created\.
<!-- </ol> -->
Note: Additionally, you can create a batch deployment by using any of these interfaces:
<!-- <ul> -->
* Watson Studio user interface, from an Analytics deployment space
* Watson Machine Learning Python Client
* Watson Machine Learning REST APIs
<!-- </ul> -->
## Creating batch deployments programmatically ##
See [Machine learning samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html) for sample notebooks that demonstrate creating batch deployments that use the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) and Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/)\.
## Viewing deployment details ##
Click the name of a deployment to view the details\.

You can view the configuration details such as hardware and software specifications\. You can also get the deployment ID, which you can use in API calls from an endpoint\. For more information, see [Looking up a deployment endpoint](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html)\.
## Learn more ##
<!-- <ul> -->
* For more information, see [Creating jobs in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-jobs.html)\.
* Refer to [Machine Learning samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html) for links to sample notebooks that demonstrate creating batch deployments that use the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning-cp) and Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/)\.
<!-- </ul> -->
**Parent topic:**[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
<!-- </article "role="article" "> -->
|
7F755B81AB25CBD0950D528A240B12262FE6CA08 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-autoai.html?context=cdpaas&locale=en | Batch deployment input details for AutoAI models | Batch deployment input details for AutoAI models
Follow these rules when you are specifying input details for batch deployments of AutoAI models.
Data type summary table:
Data Description
Type inline, data references
File formats CSV
Data Sources
Input/output data references:
* Local/managed assets from the space
* Connected (remote) assets: Cloud Object Storage
Notes:
* For connections of type [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html), you must configure Access key and Secret key, also known as [HMAC credentials](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main).
* Your training data source can differ from your deployment data source, but the schema of the data must match or the deployment will fail. For example, you can train an experiment by using data from a Snowflake database and deploy by using input data from a Db2 database if the schema is an exact match.
* The environment variables parameter of deployment jobs is not applicable.
If you are specifying input/output data references programmatically:
* Data source reference type depends on the asset type. Refer to the Data source reference types section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).
* For AutoAI assets, if the input or output data reference is of type connection_asset and the remote data source is a database then location.table_name and location.schema_name are required parameters. For example:
"input_data_references": [{
"type": "connection_asset",
"connection": {
"id": <connection_guid>
},
"location": {
"table_name": <table name>,
"schema_name": <schema name>
<other wdp-properties supported by runtimes>
}
}]
Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)
| # Batch deployment input details for AutoAI models #
Follow these rules when you are specifying input details for batch deployments of AutoAI models\.
Data type summary table:
<!-- <table> -->
| Data | Description |
| ------------ | ----------------------- |
| Type | inline, data references |
| File formats | CSV |
<!-- </table ""> -->
## Data Sources ##
Input/output data references:
<!-- <ul> -->
* Local/managed assets from the space
* Connected (remote) assets: Cloud Object Storage
<!-- </ul> -->
**Notes:**
<!-- <ul> -->
* For connections of type [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html), you must configure **Access key** and **Secret key**, also known as [HMAC credentials](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main)\.
* Your training data source can differ from your deployment data source, but the schema of the data must match or the deployment will fail\. For example, you can train an experiment by using data from a Snowflake database and deploy by using input data from a Db2 database if the schema is an exact match\.
* The environment variables parameter of deployment jobs is not applicable\.
<!-- </ul> -->
If you are specifying input/output data references programmatically:
<!-- <ul> -->
* Data source reference `type` depends on the asset type\. Refer to the **Data source reference types** section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\.
* For AutoAI assets, if the input or output data reference is of type `connection_asset` and the remote data source is a database then `location.table_name` and `location.schema_name` are required parameters\. For example:
<!-- </ul> -->
"input_data_references": [{
"type": "connection_asset",
"connection": {
"id": <connection_guid>
},
"location": {
"table_name": <table name>,
"schema_name": <schema name>
<other wdp-properties supported by runtimes>
}
}]
**Parent topic:**[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)
<!-- </article "role="article" "> -->
|
4C6242D9F2B3E125780FDF188F994270A6E2340D | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html?context=cdpaas&locale=en | Batch deployment input details by framework | Batch deployment input details by framework
Various data types are supported as input for batch deployments, depending on your specific model type.
For details, follow these links:
* [AutoAI models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-autoai.html)
* [Decision optimization models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-do.html)
* [Python functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-py-function.html)
* [Python scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-py-script.html)
* [Pytorch models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-pytorch.html)
* [Scikit-Learn and XGBoost models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-scikit.html)
* [Spark models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-spark.html)
* [SPSS models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-spss.html)
* [Tensorflow models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-tensorflow.html)
For more information, see [Using multiple inputs for an SPSS job](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-SPSS-multiple-input.html).
Parent topic:[Creating a batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html)
| # Batch deployment input details by framework #
Various data types are supported as input for batch deployments, depending on your specific model type\.
For details, follow these links:
<!-- <ul> -->
* [AutoAI models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-autoai.html)
* [Decision optimization models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-do.html)
* [Python functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-py-function.html)
* [Python scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-py-script.html)
* [Pytorch models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-pytorch.html)
* [Scikit\-Learn and XGBoost models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-scikit.html)
* [Spark models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-spark.html)
* [SPSS models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-spss.html)
* [Tensorflow models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-tensorflow.html)
<!-- </ul> -->
For more information, see [Using multiple inputs for an SPSS job](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-SPSS-multiple-input.html)\.
**Parent topic:**[Creating a batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html)
<!-- </article "role="article" "> -->
|
722D44681192F1766A0B1BACC328E719526E8DE2 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-do.html?context=cdpaas&locale=en | Batch deployment input details for Decision Optimization models | Batch deployment input details for Decision Optimization models
Follow these rules when you are specifying input details for batch deployments of Decision Optimization models.
Data type summary table:
Data Description
Type inline and data references
File formats Refer to [Model input and output data file formats](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIODataDefn.html).
Data sources
Input/output inline data:
* Inline input data is converted to CSV files and used by the engine.
* CSV output data is converted to output inline data.
* Base64-encoded raw data is supported as input and output.
Input/output data references:
* Tabular data is loaded from CSV, XLS, XLSX, JSON files or database data sources supported by the WDP connection library, converted to CSV files, and used by the engine.
* CSV output data is converted to tabular data and saved to CSV, XLS, XLSX, JSON files, or database data sources supported by the WDP connection library.
* Raw data can be loaded and saved from or to any file data sources that are supported by the WDP connection library.
* No support for compressed files.
* The environment variables parameter of deployment jobs is not applicable.
If you are specifying input/output data references programmatically:
* Data source reference type depends on the asset type. Refer to the Data source reference types section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).
* For S3 or Db2, connection details must be specified in the input_data_references.connection parameter, in the deployment job’s payload.
* For S3 or Db2, location details such as table name, bucket name, or path must be specified in the input_data_references.location.path parameter, in the deployment job’s payload.
* For data_asset, a managed asset can be updated or created. For creation, you can set the name and description for the created asset.
* You can use a pattern in ID or connection properties. For example, see the following code snippet:
* To collect all output CSV as inline data:
"output_data": [ { "id":".*\.csv"}]
* To collect job output in a particular S3 folder:
"output_data_references": [ {"id":".*", "type": "s3", "connection": {...}, "location": { "bucket": "do-wml", "path": "${job_id}/${attachment_name}" }}]
Note: Support for s3 and db2 values for scoring.input_data_references.type and scoring.output_data_references.type is deprecated and will be removed in the future. Use connection_asset or data_asset instead. See the documentation for the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) or Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/) for details and examples.
For more information, see [Model input and output data adaptation](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIODataDefn.html).
Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)
| # Batch deployment input details for Decision Optimization models #
Follow these rules when you are specifying input details for batch deployments of Decision Optimization models\.
Data type summary table:
<!-- <table> -->
| Data | Description |
| ------------ | ------------------------------------------------------- |
| Type | inline and data references |
| File formats | Refer to [Model input and output data file formats](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIODataDefn.html)\. |
<!-- </table ""> -->
## Data sources ##
Input/output inline data:
<!-- <ul> -->
* Inline input data is converted to CSV files and used by the engine\.
* CSV output data is converted to output inline data\.
* Base64\-encoded raw data is supported as input and output\.
<!-- </ul> -->
Input/output data references:
<!-- <ul> -->
* Tabular data is loaded from CSV, XLS, XLSX, JSON files or database data sources supported by the WDP connection library, converted to CSV files, and used by the engine\.
* CSV output data is converted to tabular data and saved to CSV, XLS, XLSX, JSON files, or database data sources supported by the WDP connection library\.
* Raw data can be loaded and saved from or to any file data sources that are supported by the WDP connection library\.
* No support for compressed files\.
* The environment variables parameter of deployment jobs is not applicable\.
<!-- </ul> -->
If you are specifying input/output data references programmatically:
<!-- <ul> -->
* Data source reference `type` depends on the asset type\. Refer to the **Data source reference types** section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\.
* For S3 or Db2, connection details must be specified in the `input_data_references.connection` parameter, in the deployment job’s payload\.
* For S3 or Db2, location details such as table name, bucket name, or path must be specified in the `input_data_references.location.path` parameter, in the deployment job’s payload\.
* For `data_asset`, a managed asset can be updated or created\. For creation, you can set the name and description for the created asset\.
* You can use a pattern in ID or connection properties\. For example, see the following code snippet:
<!-- <ul> -->
* To collect all output CSV as inline data:
"output_data": [ { "id":".*\.csv"}]
* To collect job output in a particular S3 folder:
"output_data_references": [ {"id":".*", "type": "s3", "connection": {...}, "location": { "bucket": "do-wml", "path": "${job_id}/${attachment_name}" }}]
<!-- </ul> -->
<!-- </ul> -->
Note: Support for `s3` and `db2` values for `scoring.input_data_references.type` and `scoring.output_data_references.type` is deprecated and will be removed in the future\. Use `connection_asset` or `data_asset` instead\. See the documentation for the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) or Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/) for details and examples\.
For more information, see [Model input and output data adaptation](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIODataDefn.html)\.
**Parent topic:**[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)
<!-- </article "role="article" "> -->
|
4F89E6B2B76E64B9618F799611DD1B053D045222 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-py-function.html?context=cdpaas&locale=en | Batch deployment input details for Python functions | Batch deployment input details for Python functions
Follow these rules when you are specifying input details for batch deployments of Python functions.
Data type summary table:
Data Description
Type inline
File formats N/A
You can deploy Python functions in Watson Machine Learning the same way that you can deploy models. Your tools and apps can use the Watson Machine Learning Python client or REST API to send data to your deployed functions in the same way that they send data to deployed models. Deploying functions gives you the ability to:
* Hide details (such as credentials)
* Preprocess data before you pass it to models
* Handle errors
* Include calls to multiple models
All of these actions take place within the deployed function, instead of in your application.
Data sources
If you are specifying input/output data references programmatically:
* Data source reference type depends on the asset type. Refer to the Data source reference types section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).
Notes:
* For connections of type [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) or [Cloud Object Storage (infrastructure)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html), you must configure Access key and Secret key, also known as [HMAC credentials](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main).
* The environment variables parameter of deployment jobs is not applicable.
* Make sure that the output is structured to match the output schema that is described in [Execute a synchronous deployment prediction](https://cloud.ibm.com/apidocs/machine-learning#deployments-compute-predictions), as illustrated in the sketch after this list.
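The following sketch shows one possible shape for such a function, using the deployable function pattern of an outer function that returns an inner score function. The preprocessing and the placeholder prediction logic are assumptions for illustration only.
# Hedged sketch of a deployable Python function whose output matches the
# predictions schema (fields/values). The transformation is a placeholder.
def my_deployable_function():

    def score(payload):
        # payload arrives as {"input_data": [{"fields": [...], "values": [...]}]}
        input_values = payload["input_data"][0]["values"]
        # Preprocess the rows and call one or more models here (placeholder logic).
        scored_rows = [row + ["placeholder_prediction"] for row in input_values]
        return {
            "predictions": [{
                "fields": ["field_1", "field_2", "prediction"],
                "values": scored_rows
            }]
        }

    return score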
Learn more
[Deploying Python functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function.html).
Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)
| # Batch deployment input details for Python functions #
Follow these rules when you are specifying input details for batch deployments of Python functions\.
Data type summary table:
<!-- <table> -->
| Data | Description |
| ------------ | ----------- |
| Type | inline |
| File formats | N/A |
<!-- </table ""> -->
You can deploy Python functions in Watson Machine Learning the same way that you can deploy models\. Your tools and apps can use the Watson Machine Learning Python client or REST API to send data to your deployed functions in the same way that they send data to deployed models\. Deploying functions gives you the ability to:
<!-- <ul> -->
* Hide details (such as credentials)
* Preprocess data before you pass it to models
* Handle errors
* Include calls to multiple models
<!-- </ul> -->
All of these actions take place within the deployed function, instead of in your application\.
## Data sources ##
If you are specifying input/output data references programmatically:
<!-- <ul> -->
* Data source reference `type` depends on the asset type\. Refer to the **Data source reference types** section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\.
<!-- </ul> -->
**Notes:**
<!-- <ul> -->
* For connections of type [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) or [Cloud Object Storage (infrastructure)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html), you must configure **Access key** and **Secret key**, also known as [HMAC credentials](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main)\.
* The environment variables parameter of deployment jobs is not applicable\.
* Make sure that the output is structured to match the output schema that is described in [Execute a synchronous deployment prediction](https://cloud.ibm.com/apidocs/machine-learning#deployments-compute-predictions)\.
<!-- </ul> -->
## Learn more ##
[Deploying Python functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function.html)\.
**Parent topic:**[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)
<!-- </article "role="article" "> -->
|
85A8F36D819B12B355508090E787F4A182686394 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-py-script.html?context=cdpaas&locale=en | Batch deployment input details for Python scripts | Batch deployment input details for Python scripts
Follow these rules when you specify input details for batch deployments of Python scripts.
Data type summary table:
Data Description
Type Data references
File formats Any
Data sources
Input or output data references:
* Local or managed assets from the space
* Connected (remote) assets: Cloud Object Storage
Notes:
* For connections of type [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) or [Cloud Object Storage (infrastructure)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html), you must configure Access key and Secret key, also known as [HMAC credentials](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main).
If you are specifying input/output data references programmatically:
* Data source reference type depends on the asset type. For more information, see Data source reference types section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).
* You can specify the environment variables that are required for running the Python Script as 'key': 'value' pairs in scoring.environment_variables. The key must be the name of an environment variable and the value must be the corresponding value of the environment variable.
* The deployment job's payload is saved as a JSON file in the deployment container where you run the Python script. The Python script can access the full path file name of the JSON file that uses the JOBS_PAYLOAD_FILE environment variable.
* If input data is referenced as a local or managed data asset, deployment service downloads the input data and places it in the deployment container where you run the Python script. You can access the location (path) of the downloaded input data through the BATCH_INPUT_DIR environment variable.
* For input data references (data asset or connection asset), downloading of the data must be handled by the Python script. If a connected data asset or a connection asset is present in the deployment jobs payload, you can access it using the JOBS_PAYLOAD_FILE environment variable that contains the full path to the deployment job's payload that is saved as a JSON file.
* If output data must be persisted as a local or managed data asset in a space, you can specify the name of the asset to be created in scoring.output_data_reference.location.name. As part of a Python script, output data can be placed in the path that is specified by the BATCH_OUTPUT_DIR environment variable. The deployment service compresses the data into a compressed file format and uploads it to the location that is specified in BATCH_OUTPUT_DIR (see the sketch after this list).
* These environment variables are set internally. If you try to set them manually, your values are overridden:
* BATCH_INPUT_DIR
* BATCH_OUTPUT_DIR
* JOBS_PAYLOAD_FILE
* If output data must be saved in a remote data store, you must specify the reference of the output data reference (for example, a data asset or a connected data asset) in output_data_reference.location.href. The Python script must take care of uploading the output data to the remote data source. If a connected data asset or a connection asset reference is present in the deployment jobs payload, you can access it using the JOBS_PAYLOAD_FILE environment variable, which contains the full path to the deployment job's payload that is saved as a JSON file.
* If the Python script does not require any input or output data references to be specified in the deployment job payload, then do not provide the scoring.input_data_references and scoring.output_data_references objects in the payload.
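The following sketch outlines how a Python script can use these environment variables; the file names and the pandas-based processing are placeholders, not required structure.
# Hedged sketch of a Python script body that uses the environment variables
# described in this topic; file names and processing are placeholders.
import json
import os

import pandas as pd

# Full path to the deployment job's payload, saved as a JSON file.
with open(os.environ["JOBS_PAYLOAD_FILE"]) as f:
    job_payload = json.load(f)

# Local or managed input data assets are downloaded into this directory.
input_dir = os.environ["BATCH_INPUT_DIR"]
df = pd.read_csv(os.path.join(input_dir, "input.csv"))

# ... transform or score df here (placeholder) ...

# Files written to BATCH_OUTPUT_DIR are compressed and stored as the output
# data asset that is named in scoring.output_data_reference.location.name.
output_dir = os.environ["BATCH_OUTPUT_DIR"]
df.to_csv(os.path.join(output_dir, "predictions.csv"), index=False)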
Learn more
[Deploying scripts in Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-script.html).
Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)
| # Batch deployment input details for Python scripts #
Follow these rules when you specify input details for batch deployments of Python scripts\.
Data type summary table:
<!-- <table> -->
| Data | Description |
| ------------ | --------------- |
| Type | Data references |
| File formats | Any |
<!-- </table ""> -->
## Data sources ##
Input or output data references:
<!-- <ul> -->
* Local or managed assets from the space
* Connected (remote) assets: Cloud Object Storage
<!-- </ul> -->
**Notes:**
<!-- <ul> -->
* For connections of type [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) or [Cloud Object Storage (infrastructure)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html), you must configure **Access key** and **Secret key**, also known as [HMAC credentials](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main)\.
<!-- </ul> -->
If you are specifying input/output data references programmatically:
<!-- <ul> -->
* Data source reference `type` depends on the asset type\. For more information, see **Data source reference types** section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\.
* You can specify the environment variables that are required for running the Python Script as `'key': 'value'` pairs in `scoring.environment_variables`\. The `key` must be the name of an environment variable and the `value` must be the corresponding value of the environment variable\.
* The deployment job's payload is saved as a JSON file in the deployment container where you run the Python script\. The Python script can access the full path file name of the JSON file that uses the `JOBS_PAYLOAD_FILE` environment variable\.
* If input data is referenced as a local or managed data asset, deployment service downloads the input data and places it in the deployment container where you run the Python script\. You can access the location (path) of the downloaded input data through the `BATCH_INPUT_DIR` environment variable\.
* For input data references (data asset or connection asset), downloading of the data must be handled by the Python script\. If a connected data asset or a connection asset is present in the deployment jobs payload, you can access it using the `JOBS_PAYLOAD_FILE` environment variable that contains the full path to the deployment job's payload that is saved as a JSON file\.
* If output data must be persisted as a local or managed data asset in a space, you can specify the name of the asset to be created in `scoring.output_data_reference.location.name`\. As part of a Python script, output data can be placed in the path that is specified by the `BATCH_OUTPUT_DIR` environment variable\. The deployment service compresses the data into a compressed file format and uploads it to the location that is specified in `BATCH_OUTPUT_DIR`\.
* These environment variables are set internally\. If you try to set them manually, your values are overridden:
<!-- <ul> -->
* `BATCH_INPUT_DIR`
* `BATCH_OUTPUT_DIR`
* `JOBS_PAYLOAD_FILE`
<!-- </ul> -->
* If output data must be saved in a remote data store, you must specify the reference of the output data reference (for example, a data asset or a connected data asset) in `output_data_reference.location.href`\. The Python script must take care of uploading the output data to the remote data source\. If a connected data asset or a connection asset reference is present in the deployment jobs payload, you can access it using the `JOBS_PAYLOAD_FILE` environment variable, which contains the full path to the deployment job's payload that is saved as a JSON file\.
* If the Python script does not require any input or output data references to be specified in the deployment job payload, then do not provide the `scoring.input_data_references` and `scoring.output_data_references` objects in the payload\.
<!-- </ul> -->
## Learn more ##
[Deploying scripts in Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-script.html)\.
**Parent topic:**[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)
<!-- </article "role="article" "> -->
|
27A861059A73E83BC02C633EE194DAC6F8ACE374 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-pytorch.html?context=cdpaas&locale=en | Batch deployment input details for Pytorch models | Batch deployment input details for Pytorch models
Follow these rules when you are specifying input details for batch deployments of Pytorch models.
Data type summary table:
Data Description
Type inline, data references
File formats .zip archive that contains JSON files
Data sources
Input or output data references:
* Local or managed assets from the space
* Connected (remote) assets: Cloud Object Storage
If you are specifying input/output data references programmatically:
* Data source reference type depends on the asset type. Refer to the Data source reference types section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).
* If you deploy Pytorch models with ONNX format, specify the keep_initializers_as_inputs=True flag and set opset_version to 9 (always set opset_version to the most recent version that is supported by the deployment runtime).
torch.onnx.export(net, x, 'lin_reg1.onnx', verbose=True, keep_initializers_as_inputs=True, opset_version=9)
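For context, a self-contained sketch of that export call follows; the toy model, example input, and file name are placeholders.
# Hedged, self-contained sketch of the export call above; the model and input
# shape stand in for a trained network.
import torch
import torch.nn as nn

net = nn.Linear(in_features=1, out_features=1)  # toy model standing in for a trained network
x = torch.randn(1, 1)                           # example input used to trace the model
torch.onnx.export(
    net, x, 'lin_reg1.onnx',
    verbose=True,
    keep_initializers_as_inputs=True,  # flag required for deployment, as noted above
    opset_version=9                    # set to the most recent opset the runtime supports
)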
Note: The environment variables parameter of deployment jobs is not applicable.
Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)
| # Batch deployment input details for Pytorch models #
Follow these rules when you are specifying input details for batch deployments of Pytorch models\.
Data type summary table:
<!-- <table> -->
| Data | Description |
| ------------ | -------------------------------------- |
| Type | inline, data references |
| File formats | \.zip archive that contains JSON files |
<!-- </table ""> -->
## Data sources ##
Input or output data references:
<!-- <ul> -->
* Local or managed assets from the space
* Connected (remote) assets: Cloud Object Storage
<!-- </ul> -->
If you are specifying input/output data references programmatically:
<!-- <ul> -->
* Data source reference `type` depends on the asset type\. Refer to the **Data source reference types** section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\.
* If you deploy Pytorch models with ONNX format, specify the `keep_initializers_as_inputs=True` flag and set `opset_version` to `9` (always set `opset_version` to the most recent version that is supported by the deployment runtime)\.
torch.onnx.export(net, x, 'lin_reg1.onnx', verbose=True, keep_initializers_as_inputs=True, opset_version=9)
<!-- </ul> -->
Note: The environment variables parameter of deployment jobs is not applicable\.
**Parent topic:**[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)
<!-- </article "role="article" "> -->
|
CDF460B2BB910F74723297BCB8E940BF370C6FFD | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-scikit.html?context=cdpaas&locale=en | Batch deployment input details for Scikit-learn and XGBoost models | Batch deployment input details for Scikit-learn and XGBoost models
Follow these rules when you are specifying input details for batch deployments of Scikit-learn and XGBoost models.
Data type summary table:
Data Description
Type inline, data references
File formats CSV, .zip archive that contains CSV files
Data source
If you are specifying input/output data references programmatically:
* Data source reference type depends on the asset type. Refer to the Data source reference types section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).
Notes:
* The environment variables parameter of deployment jobs is not applicable.
* For connections of type [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) or [Cloud Object Storage (infrastructure)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html), you must configure Access key and Secret key, also known as [HMAC credentials](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main).
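As a hedged sketch, a batch job for a Scikit-learn or XGBoost deployment can reference a CSV data asset in the space as input and name a managed asset for the output. The client, deployment ID, asset href, and output name are assumptions, not fixed values.
# Sketch: run a batch job against a CSV data asset and write output to a new
# managed asset in the space; IDs and names are placeholders.
job_payload = {
client.deployments.ScoringMetaNames.INPUT_DATA_REFERENCES: [{
"type": "data_asset",
"connection": {},
"location": {"href": "/v2/assets/<asset_id>?space_id=<space_id>"}
}],
client.deployments.ScoringMetaNames.OUTPUT_DATA_REFERENCE: {
"type": "data_asset",
"connection": {},
"location": {"name": "scikit_batch_output.csv"}
}
}
job_details = client.deployments.create_job(deployment_id, meta_props=job_payload)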
Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)
| # Batch deployment input details for Scikit\-learn and XGBoost models #
Follow these rules when you are specifying input details for batch deployments of Scikit\-learn and XGBoost models\.
Data type summary table:
<!-- <table> -->
| Data | Description |
| ------------ | ------------------------------------------ |
| Type | inline, data references |
| File formats | CSV, \.zip archive that contains CSV files |
<!-- </table ""> -->
## Data source ##
If you are specifying input/output data references programmatically:
<!-- <ul> -->
* Data source reference `type` depends on the asset type\. Refer to the **Data source reference types** section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\.
<!-- </ul> -->
**Notes:**
<!-- <ul> -->
* The environment variables parameter of deployment jobs is not applicable\.
* For connections of type [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) or [Cloud Object Storage (infrastructure)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html), you must configure **Access key** and **Secret key**, also known as [HMAC credentials](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main)\.
<!-- </ul> -->
**Parent topic:**[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)
<!-- </article "role="article" "> -->
|
ADBD308EEB761B4A1516D49F68C880EAF3F08D78 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-spark.html?context=cdpaas&locale=en | Batch deployment input details for Spark models | Batch deployment input details for Spark models
Follow these rules when you are specifying input details for batch deployments of Spark models.
Data type summary table:
Data Description
Type Inline
File formats N/A
Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)
| # Batch deployment input details for Spark models #
Follow these rules when you are specifying input details for batch deployments of Spark models\.
Data type summary table:
<!-- <table> -->
| Data | Description |
| ------------ | ----------- |
| Type | Inline |
| File formats | N/A |
<!-- </table ""> -->
**Parent topic:**[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)
<!-- </article "role="article" "> -->
|
62BF74E391CFE1696E5218B3DF0926B735A4788F | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-spss.html?context=cdpaas&locale=en | Batch deployment input details for SPSS models | Batch deployment input details for SPSS models
Follow these rules when you are specifying input details for batch deployments of SPSS models.
Data type summary table:
Data Description
Type inline, data references
File formats CSV
Data sources
Input or output data references:
* Local or managed assets from the space
* Connected (remote) assets from these sources:
* [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html)
* [Db2 Warehouse](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-wh.html)
* [Db2](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2.html)
* [Google Big-Query (googlebq)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-bigquery.html)
* [MySQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mysql.html)
* [Microsoft SQL Server](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sql-server.html)
* [Teradata (teradata)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-teradata.html)
* [PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-postgresql.html)
* [Oracle](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-oracle.html)
* [Snowflake](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-snowflake.html)
* [Informix](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-informix.html)
* [Netezza Performance Server](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-puredata.html)
Notes:
* For connections of type [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) or [Cloud Object Storage (infrastructure)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html), you must configure Access key and Secret key, also known as [HMAC credentials](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main).
* For SPSS deployments, these data sources are not compliant with Federal Information Processing Standard (FIPS):
* Cloud Object Storage
* Cloud Object Storage (infrastructure)
* Storage volumes
If you are specifying input/output data references programmatically:
* Data source reference type depends on the asset type. Refer to the Data source reference types section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).
* SPSS jobs support multiple data source inputs and a single output. If the schema is not provided in the model metadata at the time of saving the model, you must enter id manually and select a data asset for each connection. If the schema is provided in model metadata, id names are populated automatically by using metadata. You select the data asset for the corresponding ids in Watson Studio. For more information, see [Using multiple data sources for an SPSS job](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-SPSS-multiple-input.html).
* To create a local or managed asset as an output data reference, the name field must be specified for output_data_reference so that a data asset is created with the specified name. Specifying an href that refers to an existing local data asset is not supported.
Note: Connected data assets that refer to supported databases can be created in the output_data_references only when the input_data_references also refers to one of these sources.
* Table names that are provided in input and output data references are ignored. Table names that are referred in the SPSS model stream are used during the batch deployment.
* Use SQL PushBack to generate SQL statements for IBM SPSS Modeler operations that can be “pushed back” to or run in the database to improve performance. SQL Pushback is only supported by:
* Db2
* SQL Server
* Netezza Performance Server
* If you are creating a job by using the Python client, you must provide the connection name that is referred in the data nodes of the SPSS model stream in the id field, and the data asset href in location.href for input/output data references of the deployment jobs payload. For example, you can construct the job payload like this:
job_payload_ref = {
    client.deployments.ScoringMetaNames.INPUT_DATA_REFERENCES: [{
        "id": "DB2Connection",
        "name": "drug_ref_input1",
        "type": "data_asset",
        "connection": {},
        "location": {
            "href": <input_asset_href1>
        }
    }, {
        "id": "Db2 WarehouseConn",
        "name": "drug_ref_input2",
        "type": "data_asset",
        "connection": {},
        "location": {
            "href": <input_asset_href2>
        }
    }],
    client.deployments.ScoringMetaNames.OUTPUT_DATA_REFERENCE: {
        "type": "data_asset",
        "connection": {},
        "location": {
            "href": <output_asset_href>
        }
    }
}
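As a hedged follow-up to this payload, the job itself can then be created with the Python client; client is assumed to be an authenticated APIClient scoped to the same space, and deployment_id identifies the SPSS batch deployment.
# Sketch: create the batch job from the payload above and inspect the job details.
job_details = client.deployments.create_job(deployment_id, meta_props=job_payload_ref)
job_uid = client.deployments.get_job_uid(job_details)
print(client.deployments.get_job_details(job_uid))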
Using connected data for an SPSS Modeler flow job
An SPSS Modeler flow can have a number of input and output data nodes. When you connect to a supported database as an input and output data source, the connection details are selected from the input and output data reference, but the input and output table names are selected from the SPSS model stream file.
For batch deployment of an SPSS model that uses a database connection, make sure that the modeler stream Input and Output nodes are Data Asset nodes. In SPSS Modeler, the Data Asset nodes must be configured with the table names that are used later for job predictions. Set the nodes and table names before you save the model to Watson Machine Learning. When you are configuring the Data Asset nodes, choose the table name from the Connections; choosing a Data Asset that is created in your project is not supported.
When you are creating the deployment job for an SPSS model, make sure that the types of data sources are the same for input and output. The configured table names from the model stream are passed to the batch deployment and the input/output table names that are provided in the connected data are ignored.
For batch deployment of an SPSS model that uses a Cloud Object Storage connection, make sure that the SPSS model stream has single input and output data asset nodes.
Supported combinations of input and output sources
You must specify compatible sources for the SPSS Modeler flow input, the batch job input, and the output. If you specify an incompatible combination of types of data sources, you get an error when you try to run the batch job.
These combinations are supported for batch jobs:
SPSS model stream input/output | Batch deployment job input | Batch deployment job output
File | Local, managed, or referenced data asset or connection asset (file) | Remote data asset or connection asset (file) or name
Database | Remote data asset or connection asset (database) | Remote data asset or connection asset (database)
Specifying multiple inputs
If you are specifying multiple inputs for an SPSS model stream deployment with no schema, specify an ID for each element in input_data_references.
For more information, see [Using multiple data sources for an SPSS job](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-SPSS-multiple-input.html).
In this example, when you create the job, provide three input entries with IDs: sample_db2_conn, sample_teradata_conn, and sample_googlequery_conn and select the required connected data for each input.
{
    "deployment": {
        "href": "/v4/deployments/<deploymentID>"
    },
    "scoring": {
        "input_data_references": [{
            "id": "sample_db2_conn",
            "name": "DB2 connection",
            "type": "data_asset",
            "connection": {},
            "location": {
                "href": "/v2/assets/<asset_id>?space_id=<space_id>"
            }
        },
        {
            "id": "sample_teradata_conn",
            "name": "Teradata connection",
            "type": "data_asset",
            "connection": {},
            "location": {
                "href": "/v2/assets/<asset_id>?space_id=<space_id>"
            }
        },
        {
            "id": "sample_googlequery_conn",
            "name": "Google bigquery connection",
            "type": "data_asset",
            "connection": {},
            "location": {
                "href": "/v2/assets/<asset_id>?space_id=<space_id>"
            }
        }],
        "output_data_references": {
            "id": "sample_db2_conn",
            "type": "data_asset",
            "connection": {},
            "location": {
                "href": "/v2/assets/<asset_id>?space_id=<space_id>"
            }
        }
    }
}
Note: The environment variables parameter of deployment jobs is not applicable.
Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)
| # Batch deployment input details for SPSS models #
Follow these rules when you are specifying input details for batch deployments of SPSS models\.
Data type summary table:
<!-- <table> -->
| Data | Description |
| ------------ | ----------------------- |
| Type | inline, data references |
| File formats | CSV |
<!-- </table ""> -->
## Data sources ##
Input or output data references:
<!-- <ul> -->
* Local or managed assets from the space
* Connected (remote) assets from these sources:
<!-- <ul> -->
* [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html)
* [Db2 Warehouse](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-wh.html)
* [Db2](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2.html)
* [Google Big-Query (googlebq)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-bigquery.html)
* [MySQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mysql.html)
* [Microsoft SQL Server](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sql-server.html)
* [Teradata (teradata)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-teradata.html)
* [PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-postgresql.html)
* [Oracle](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-oracle.html)
* [Snowflake](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-snowflake.html)
* [Informix](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-informix.html)
* [Netezza Performance Server](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-puredata.html)
<!-- </ul> -->
<!-- </ul> -->
**Notes:**
<!-- <ul> -->
* For connections of type [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) or [Cloud Object Storage (infrastructure)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html), you must configure **Access key** and **Secret key**, also known as [HMAC credentials](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main)\.
* For SPSS deployments, these data sources are not compliant with Federal Information Processing Standard (FIPS):
<!-- <ul> -->
* Cloud Object Storage
* Cloud Object Storage (infrastructure)
* Storage volumes
<!-- </ul> -->
<!-- </ul> -->
If you are specifying input/output data references programmatically:
<!-- <ul> -->
* Data source reference `type` depends on the asset type\. Refer to the **Data source reference types** section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\.
* SPSS jobs support multiple data source inputs and a single output\. If the schema is not provided in the model metadata at the time of saving the model, you must enter `id` manually and select a data asset for each connection\. If the schema is provided in model metadata, `id` names are populated automatically by using metadata\. You select the data asset for the corresponding `id`s in Watson Studio\. For more information, see [Using multiple data sources for an SPSS job](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-SPSS-multiple-input.html)\.
* To create a local or managed asset as an output data reference, the `name` field must be specified for `output_data_reference` so that a data asset is created with the specified name\. Specifying an `href` that refers to an existing local data asset is not supported\.
<!-- </ul> -->
Note: Connected data assets that refer to supported databases can be created in the `output_data_references` only when the `input_data_references` also refers to one of these sources\.
<!-- <ul> -->
* Table names that are provided in input and output data references are ignored\. Table names that are referenced in the SPSS model stream are used during the batch deployment\.
* Use SQL Pushback to generate SQL statements for IBM SPSS Modeler operations that can be “pushed back” to or run in the database to improve performance\. SQL Pushback is only supported by:
<!-- <ul> -->
* Db2
* SQL Server
* Netezza Performance Server
<!-- </ul> -->
* If you are creating a job by using the Python client, you must provide the connection name that is referenced in the data nodes of the SPSS model stream in the `id` field, and the data asset href in `location.href` for input/output data references of the deployment jobs payload\. For example, you can construct the job payload like this:
job_payload_ref = {
    client.deployments.ScoringMetaNames.INPUT_DATA_REFERENCES: [{
        "id": "DB2Connection",
        "name": "drug_ref_input1",
        "type": "data_asset",
        "connection": {},
        "location": {
            "href": <input_asset_href1>
        }
    }, {
        "id": "Db2 WarehouseConn",
        "name": "drug_ref_input2",
        "type": "data_asset",
        "connection": {},
        "location": {
            "href": <input_asset_href2>
        }
    }],
    client.deployments.ScoringMetaNames.OUTPUT_DATA_REFERENCE: {
        "type": "data_asset",
        "connection": {},
        "location": {
            "href": <output_asset_href>
        }
    }
}
<!-- </ul> -->
#### Using connected data for an SPSS Modeler flow job ####
An SPSS Modeler flow can have a number of input and output data nodes\. When you connect to a supported database as an input and output data source, the connection details are selected from the input and output data reference, but the input and output table names are selected from the SPSS model stream file\.
For batch deployment of an SPSS model that uses a database connection, make sure that the modeler stream Input and Output nodes are Data Asset nodes\. In SPSS Modeler, the Data Asset nodes must be configured with the table names that are used later for job predictions\. Set the nodes and table names before you save the model to Watson Machine Learning\. When you are configuring the Data Asset nodes, choose the table name from the Connections; choosing a Data Asset that is created in your project is not supported\.
When you are creating the deployment job for an SPSS model, make sure that the types of data sources are the same for input and output\. The configured table names from the model stream are passed to the batch deployment and the input/output table names that are provided in the connected data are ignored\.
For batch deployment of an SPSS model that uses a Cloud Object Storage connection, make sure that the SPSS model stream has single input and output data asset nodes\.
#### Supported combinations of input and output sources ####
You must specify compatible sources for the SPSS Modeler flow input, the batch job input, and the output\. If you specify an incompatible combination of types of data sources, you get an error when you try to run the batch job\.
These combinations are supported for batch jobs:
<!-- <table> -->
| SPSS model stream input/output | Batch deployment job input | Batch deployment job output |
| ------------------------------ | ------------------------------------------------------------------- | ---------------------------------------------------- |
| File | Local, managed, or referenced data asset or connection asset (file) | Remote data asset or connection asset (file) or name |
| Database | Remote data asset or connection asset (database) | Remote data asset or connection asset (database) |
<!-- </table ""> -->
#### Specifying multiple inputs ####
If you are specifying multiple inputs for an SPSS model stream deployment with no schema, specify an ID for each element in `input_data_references`\.
For more information, see [Using multiple data sources for an SPSS job](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-SPSS-multiple-input.html)\.
In this example, when you create the job, provide three input entries with IDs: `sample_db2_conn`, `sample_teradata_conn`, and `sample_googlequery_conn` and select the required connected data for each input\.
{
    "deployment": {
        "href": "/v4/deployments/<deploymentID>"
    },
    "scoring": {
        "input_data_references": [{
            "id": "sample_db2_conn",
            "name": "DB2 connection",
            "type": "data_asset",
            "connection": {},
            "location": {
                "href": "/v2/assets/<asset_id>?space_id=<space_id>"
            }
        },
        {
            "id": "sample_teradata_conn",
            "name": "Teradata connection",
            "type": "data_asset",
            "connection": {},
            "location": {
                "href": "/v2/assets/<asset_id>?space_id=<space_id>"
            }
        },
        {
            "id": "sample_googlequery_conn",
            "name": "Google bigquery connection",
            "type": "data_asset",
            "connection": {},
            "location": {
                "href": "/v2/assets/<asset_id>?space_id=<space_id>"
            }
        }],
        "output_data_references": {
            "id": "sample_db2_conn",
            "type": "data_asset",
            "connection": {},
            "location": {
                "href": "/v2/assets/<asset_id>?space_id=<space_id>"
            }
        }
    }
}
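The following minimal sketch shows one way to submit this payload with the ibm-watson-machine-learning Python client. The region URL, API key, space ID, deployment ID, and asset hrefs are placeholders, and the `id` values must match the connection names that are used in the SPSS model stream.

    from ibm_watson_machine_learning import APIClient

    client = APIClient({"url": "https://us-south.ml.cloud.ibm.com", "apikey": "<api_key>"})
    client.set_default_space("<space_id>")

    # Each input id matches a connection name from the SPSS model stream
    job_payload = {
        client.deployments.ScoringMetaNames.INPUT_DATA_REFERENCES: [
            {"id": "sample_db2_conn", "type": "data_asset", "connection": {},
             "location": {"href": "/v2/assets/<asset_id>?space_id=<space_id>"}},
            {"id": "sample_teradata_conn", "type": "data_asset", "connection": {},
             "location": {"href": "/v2/assets/<asset_id>?space_id=<space_id>"}},
            {"id": "sample_googlequery_conn", "type": "data_asset", "connection": {},
             "location": {"href": "/v2/assets/<asset_id>?space_id=<space_id>"}}
        ],
        client.deployments.ScoringMetaNames.OUTPUT_DATA_REFERENCE: {
            "id": "sample_db2_conn", "type": "data_asset", "connection": {},
            "location": {"href": "/v2/assets/<asset_id>?space_id=<space_id>"}
        }
    }
    job = client.deployments.create_job("<deployment_id>", meta_props=job_payload)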
Note: The environment variables parameter of deployment jobs is not applicable\.
**Parent topic:**[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)
<!-- </article "role="article" "> -->
|
7D385692A31E1E88E675AF0B91F98F55797BC02D | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-tensorflow.html?context=cdpaas&locale=en | Batch deployment input details for Tensorflow models | Batch deployment input details for Tensorflow models
Follow these rules when you are specifying input details for batch deployments of Tensorflow models.
Data type summary table:
Data | Description
Type | Inline or data references
File formats | .zip archive that contains JSON files
Data sources
Input or output data references:
* Local or managed assets from the space
* Connected (remote) assets: Cloud Object Storage
If you are specifying input/output data references programmatically:
* Data source reference type depends on the asset type. Refer to the Data source reference types section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).
Notes:
* The environment variables parameter of deployment jobs is not applicable.
* For connections of type [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) or [Cloud Object Storage (infrastructure)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html), you must configure Access key and Secret key, also known as [HMAC credentials](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main).
Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)
| # Batch deployment input details for Tensorflow models #
Follow these rules when you are specifying input details for batch deployments of Tensorflow models\.
Data type summary table:
<!-- <table> -->
| Data | Description |
| ------------ | -------------------------------------- |
| Type | Inline or data references |
| File formats | \.zip archive that contains JSON files |
<!-- </table ""> -->
## Data sources ##
Input or output data references:
<!-- <ul> -->
* Local or managed assets from the space
* Connected (remote) assets: Cloud Object Storage
<!-- </ul> -->
If you are specifying input/output data references programmatically:
<!-- <ul> -->
* Data source reference `type` depends on the asset type\. Refer to the **Data source reference types** section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\.
<!-- </ul> -->
**Notes:**
<!-- <ul> -->
* The environment variables parameter of deployment jobs is not applicable\.
* For connections of type [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) or [Cloud Object Storage (infrastructure)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html), you must configure **Access key** and **Secret key**, also known as [HMAC credentials](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main)\.
<!-- </ul> -->
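For example, the following minimal sketch creates a batch job for a deployed Tensorflow model with the ibm-watson-machine-learning Python client. The region URL and IDs are placeholders, and writing the output to a new data asset by name is an assumption based on the behavior described for other frameworks.

    from ibm_watson_machine_learning import APIClient

    client = APIClient({"url": "https://us-south.ml.cloud.ibm.com", "apikey": "<api_key>"})
    client.set_default_space("<space_id>")

    # The input data asset is assumed to be a .zip archive of JSON files in the space
    job_payload = {
        client.deployments.ScoringMetaNames.INPUT_DATA_REFERENCES: [{
            "type": "data_asset",
            "connection": {},
            "location": {"href": "/v2/assets/<input_asset_id>?space_id=<space_id>"}
        }],
        client.deployments.ScoringMetaNames.OUTPUT_DATA_REFERENCE: {
            "type": "data_asset",
            "connection": {},
            "location": {"name": "tf_batch_output.zip"}
        }
    }
    job = client.deployments.create_job("<deployment_id>", meta_props=job_payload)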
**Parent topic:**[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)
<!-- </article "role="article" "> -->
|
09897DCF1128D66144D2B165564C228C16CD5EC5 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-found-assets.html?context=cdpaas&locale=en | Deploying foundation model assets | Deploying foundation model assets
Deploy foundation model assets to test the assets, put them into production, and monitor them.
After you save a prompt template as a project asset, you can promote it to a deployment space. A deployment space is used to organize the assets for deployments and to manage access to deployed assets. Use a Pre-production space to test and validate assets, and use a Production space for deploying assets for productive use.
For details, see [Deploying a prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/prompt-template-deploy.html).
Learn more
* [Tracking prompt templates ](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-track-prompt-temp.html)
* [Evaluating a prompt template in a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt-spaces.html)
Parent topic:[Deploying and managing assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html)
| # Deploying foundation model assets #
Deploy foundation model assets to test the assets, put them into production, and monitor them\.
After you save a prompt template as a project asset, you can promote it to a deployment space\. A deployment space is used to organize the assets for deployments and to manage access to deployed assets\. Use a *Pre\-production* space to test and validate assets, and use a *Production* space for deploying assets for productive use\.
For details, see [Deploying a prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/prompt-template-deploy.html)\.
## Learn more ##
<!-- <ul> -->
* [Tracking prompt templates ](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-track-prompt-temp.html)
* [Evaluating a prompt template in a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt-spaces.html)
<!-- </ul> -->
**Parent topic:**[Deploying and managing assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html)
<!-- </article "role="article" "> -->
|
F31A520A8C2C1B9C7F80B14EBCD096BB1121D53D | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-jobs.html?context=cdpaas&locale=en | Managing deployment jobs | Managing deployment jobs
A job is a way of running a batch deployment, script, or notebook in Watson Machine Learning. You can choose to run a job manually or on a schedule that you specify. After you create one or more jobs, you can view and manage them from the Jobs tab of your deployment space.
From the Jobs tab of your space, you can:
* See the list of the jobs in your space
* View the details of each job. You can change the schedule settings of a job and pick a different environment template.
* Monitor job runs
* Delete jobs
See the following sections for various aspects of job management:
* [Creating a job for a batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-jobs.html?context=cdpaas&locale=en#create-jobs-batch)
* [Viewing jobs in a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-jobs.html?context=cdpaas&locale=en#viewing-jobs-in-a-space)
* [Managing job metadata retention ](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-jobs.html?context=cdpaas&locale=en#delete-jobs)
Creating a job for a batch deployment
Important: You must have an existing batch deployment to create a batch job.
To learn how to create a job for a batch deployment, see [Creating jobs in a batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-jobs.html).
Viewing jobs in a space
You can view all of the jobs that exist for your deployment space from the Jobs page. You can also delete a job.
To view the details of a specific job, click the job. From the job's details page, you can do the following:
* View the runs for that job and the status of each run. If a run failed, you can select the run and view the log tail or download the entire log file to help you troubleshoot the run. A failed run might be related to a temporary connection or environment problem. Try running the job again. If the job still fails, you can send the log to Customer Support.
* When a job is running, a progress indicator on the information page displays information about relative progress of the run. You can use the progress indicator to monitor a long run.
* Edit schedule settings or pick another environment template.
* Run the job manually by clicking the run icon from the job action bar. You must deselect the schedule to run the job manually.
Managing job metadata retention
The Watson Machine Learning plan that is associated with your IBM Cloud account sets limits on the number of running and stored deployments that you can create. If you exceed your limit, you cannot create new deployments until you delete existing deployments or upgrade your plan. For more information, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html).
Managing metadata retention and deletion programmatically
If you are managing a job programmatically by using the Python client or REST API, you can retrieve metadata from the deployment endpoint by using the GET method for the duration of the 30-day default retention period.
To keep the metadata for more or less than 30 days, override the default by setting the retention query parameter on the POST method (the default is retention=30).
Note: Changing the value to retention=-1 cancels the auto-delete and preserves the metadata.
To delete a job programmatically, specify the query parameter hard_delete=true for the Watson Machine Learning DELETE method to completely remove the job metadata.
The following example shows how to use DELETE method:
DELETE /ml/v4/deployment_jobs/{JobsID}
Learn from samples
Refer to [Machine learning samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html) for links to sample notebooks that demonstrate creating batch deployments and jobs by using the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) and Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/).
Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
| # Managing deployment jobs #
A job is a way of running a batch deployment, script, or notebook in Watson Machine Learning\. You can choose to run a job manually or on a schedule that you specify\. After you create one or more jobs, you can view and manage them from the **Jobs** tab of your deployment space\.
From the **Jobs** tab of your space, you can:
<!-- <ul> -->
* See the list of the jobs in your space
* View the details of each job\. You can change the schedule settings of a job and pick a different environment template\.
* Monitor job runs
* Delete jobs
<!-- </ul> -->
See the following sections for various aspects of job management:
<!-- <ul> -->
* [Creating a job for a batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-jobs.html?context=cdpaas&locale=en#create-jobs-batch)
* [Viewing jobs in a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-jobs.html?context=cdpaas&locale=en#viewing-jobs-in-a-space)
* [Managing job metadata retention ](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-jobs.html?context=cdpaas&locale=en#delete-jobs)
<!-- </ul> -->
## Creating a job for a batch deployment ##
Important: You must have an existing batch deployment to create a batch job\.
To learn how to create a job for a batch deployment, see [Creating jobs in a batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-jobs.html)\.
## Viewing jobs in a space ##
You can view all of the jobs that exist for your deployment space from the Jobs page\. You can also delete a job\.
To view the details of a specific job, click the job\. From the job's details page, you can do the following:
<!-- <ul> -->
* View the runs for that job and the status of each run\. If a run failed, you can select the run and view the log tail or download the entire log file to help you troubleshoot the run\. A failed run might be related to a temporary connection or environment problem\. Try running the job again\. If the job still fails, you can send the log to Customer Support\.
* When a job is running, a progress indicator on the information page displays information about relative progress of the run\. You can use the progress indicator to monitor a long run\.
* Edit schedule settings or pick another environment template\.
* Run the job manually by clicking the run icon from the job action bar\. You must deselect the schedule to run the job manually\.
<!-- </ul> -->
## Managing job metadata retention ##
The Watson Machine Learning plan that is associated with your IBM Cloud account sets limits on the number of running and stored deployments that you can create\. If you exceed your limit, you cannot create new deployments until you delete existing deployments or upgrade your plan\. For more information, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)\.
### Managing metadata retention and deletion programmatically ###
If you are managing a job programmatically by using the Python client or REST API, you can retrieve metadata from the deployment endpoint by using the `GET` method for the duration of the 30-day default retention period\.
To keep the metadata for more or less than 30 days, override the default by setting the `retention` query parameter on the `POST` method (the default is `retention=30`)\.
Note: Changing the value to `retention=-1` cancels the auto\-delete and preserves the metadata\.
To delete a job programmatically, specify the query parameter `hard_delete=true` for the Watson Machine Learning `DELETE` method to completely remove the job metadata\.
The following example shows how to use `DELETE` method:
DELETE /ml/v4/deployment_jobs/{JobsID}
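For example, a minimal sketch of calling this endpoint with the Python requests library follows. The region host, bearer token, version date, and space_id are assumptions; replace them with the values for your account.

    import requests

    host = "https://us-south.ml.cloud.ibm.com"      # region host (assumption)
    headers = {"Authorization": "Bearer <token>"}   # IAM bearer token (assumption)
    params = {
        "version": "2020-09-01",                    # API version date (assumption)
        "space_id": "<space_id>",
        "hard_delete": "true"                       # completely removes the job metadata
    }

    response = requests.delete(f"{host}/ml/v4/deployment_jobs/<job_id>", headers=headers, params=params)
    print(response.status_code)  # a 2xx status code indicates that the delete request succeeded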
## Learn from samples ##
Refer to [Machine learning samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html) for links to sample notebooks that demonstrate creating batch deployments and jobs by using the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) and Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/)\.
**Parent topic:**[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
<!-- </article "role="article" "> -->
|
F4A482326D45DC729EB8D1A6735CEFACD7AE5578 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html?context=cdpaas&locale=en | Creating online deployments in Watson Machine Learning | Creating online deployments in Watson Machine Learning
Create an online (also called Web service) deployment to load a model or Python code when the deployment is created to generate predictions online, in real time. For example, if you create a classification model to test whether a new customer is likely to participate in a sales promotion, you can create an online deployment for the model. Then, you can enter the new customer data to get an immediate prediction.
Supported frameworks
Online deployment is supported for these frameworks:
* PMML
* Python Function
* PyTorch-Onnx
* Tensorflow
* Scikit-Learn
* Spark MLlib
* SPSS
* XGBoost
You can create an online deployment [from the user interface](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html?context=cdpaas&locale=en#online-interface) or [programmatically](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html?context=cdpaas&locale=en#online-programmatically).
To send payload data to an asset that is deployed online, for example to classify data or to get predictions, you must know the endpoint URL of the deployment. For more information, see [Retrieving the deployment endpoint](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html?context=cdpaas&locale=en#get-online-endpoint).
Additionally, you can:
* [Test your online deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html?context=cdpaas&locale=en#test-online-deployment)
* [Access the deployment details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html?context=cdpaas&locale=en#access-online-details)
Creating an online deployment from the User Interface
1. From the deployment space, click the name of the asset that you want to deploy. The details page opens.
2. Click New deployment.
3. Choose Online as the deployment type.
4. Provide a name and an optional description for the deployment.
5. If you want to specify a name to be used instead of deployment ID, use the Serving name field.
* The name must be unique within an IBM Cloud region (all names in a specific region share a global namespace).
* The name must contain only these characters: [a-z,0-9,_] and can be a maximum of 36 characters long.
* The serving name works only as part of the prediction URL. In some cases, you must still use the deployment ID.
6. Click Create to create the deployment.
Creating an online deployment programmatically
Refer to [Machine learning samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html) for links to sample notebooks. These notebooks demonstrate creating online deployments that use the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) and Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/).
Retrieving the online deployment endpoint
You can find the endpoint URL of a deployment in these ways:
* From the Deployments tab of your space, click your deployment name. A page with deployment details opens. You can find the endpoint there.
* Using the Watson Machine Learning Python client:
1. List the deployments by calling the [Python client method](https://ibm.github.io/watson-machine-learning-sdk/core_api.html#client.Deployments.list) client.deployments.list()
2. Find the row with your deployment. The deployment endpoint URL is listed in the url column.
Notes:
* If you added Serving name to the deployment, two alternative endpoint URLs show on the screen; one containing the deployment ID, and the other containing your serving name. You can use either one of these URLs with your deployment.
* The API Reference tab also shows code snippets in various programming languages that illustrate how to access the deployment.
For more information, see [Endpoint URLs](https://cloud.ibm.com/apidocs/machine-learning#endpoint-url).
Testing your online deployment
From the Deployments tab of your space, click your deployment name. A page with deployment details opens. The Test tab provides a place where you can enter data and get a prediction back from the deployed model. If your model has a defined schema, a form shows on screen. In the form, you can enter data in one of these ways:
* Enter data directly in the form
* Download a CSV template, enter values, and upload the input data
* Upload a file that contains input data from your local file system or from the space
* Change to the JSON tab and enter your input data as JSON code
Regardless of the method, the input data must match the schema of the model. Submit the input data and get a score, or prediction, back.
Sample deployment code
When you submit JSON code as the payload, or input data, for a deployment, your input data must match the schema of the model. The 'fields' must match the column headers for the data, and the 'values' must contain the data, in the same order. Use this format:
{"input_data":[{
    "fields": [<field1>, <field2>, ...],
    "values": [[<value1>, <value2>, ...]]
}]}
Refer to this example:
{"input_data":[{
    "fields": ["PassengerId","Pclass","Name","Sex","Age","SibSp","Parch","Ticket","Fare","Cabin","Embarked"],
    "values": [[1,3,"Braund, Mr. Owen Harris",0,22,1,0,"A/5 21171",7.25,null,"S"]]
}]}
Notes:
* All strings are enclosed in double quotation marks. The Python notation for dictionaries looks similar, but Python strings in single quotation marks are not accepted in the JSON data.
* Missing values can be indicated with null.
* You can specify a hardware specification for an online deployment, for example if you are [scaling a deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-scaling.html).
Preparing payload that matches the schema of an existing model
Refer to this sample code:
model_details = client.repository.get_details("<model_id>")  # retrieves details and includes schema
columns_in_schema = []
for i in range(0, len(model_details['entity']['input'].get('fields'))):
    columns_in_schema.append(model_details['entity']['input'].get('fields')[i])
X = X[columns_in_schema]  # where X is a pandas dataframe that contains values to be scored
# (...)
scoring_values = X.values.tolist()
array_of_input_fields = X.columns.tolist()
payload_scoring = {"input_data": [{"fields": array_of_input_fields, "values": scoring_values}]}
Accessing the online deployment details
To access your online deployment details: From the Deployments tab of your space, click your deployment name and then click the Deployment details tab. The Deployment details tab contains specific information that is related to the currently opened online deployment and allows for adding a model to the model inventory, to enable activity tracking and model comparison.
Additional information
Refer to [Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html) for details on managing deployment jobs, and updating, scaling, or deleting an online deployment.
Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
| # Creating online deployments in Watson Machine Learning #
Create an online (also called `Web service`) deployment to load a model or Python code when the deployment is created to generate predictions online, in real time\. For example, if you create a classification model to test whether a new customer is likely to participate in a sales promotion, you can create an online deployment for the model\. Then, you can enter the new customer data to get an immediate prediction\.
### Supported frameworks ###
Online deployment is supported for these frameworks:
<!-- <ul> -->
* PMML
* Python Function
* PyTorch\-Onnx
* Tensorflow
* Scikit\-Learn
* Spark MLlib
* SPSS
* XGBoost
<!-- </ul> -->
You can create an online deployment [from the user interface](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html?context=cdpaas&locale=en#online-interface) or [programmatically](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html?context=cdpaas&locale=en#online-programmatically)\.
To send payload data to an asset that is deployed online, for example to classify data or to get predictions, you must know the endpoint URL of the deployment\. For more information, see [Retrieving the deployment endpoint](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html?context=cdpaas&locale=en#get-online-endpoint)\.
Additionally, you can:
<!-- <ul> -->
* [Test your online deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html?context=cdpaas&locale=en#test-online-deployment)
* [Access the deployment details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html?context=cdpaas&locale=en#access-online-details)
<!-- </ul> -->
## Creating an online deployment from the User Interface ##
<!-- <ol> -->
1. From the deployment space, click the name of the asset that you want to deploy\. The details page opens\.
2. Click **New deployment**\.
3. Choose **Online** as the deployment type\.
4. Provide a name and an optional description for the deployment\.
5. If you want to specify a name to be used instead of deployment ID, use the **Serving name** field\.
<!-- <ul> -->
* The name must be unique within an IBM Cloud region (all names in a specific region share a global namespace).
* The name must contain only these characters: \[a-z,0-9,\_\] and can be a maximum of 36 characters long.
* The serving name works only as part of the prediction URL. In some cases, you must still use the deployment ID.
<!-- </ul> -->
6. Click **Create** to create the deployment\.
<!-- </ol> -->
## Creating an online deployment programmatically ##
Refer to [Machine learning samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html) for links to sample notebooks\. These notebooks demonstrate creating online deployments that use the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) and Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/)\.
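For example, a minimal sketch of creating an online deployment for a stored model with the ibm-watson-machine-learning Python client follows. The region URL, API key, space ID, and model ID are placeholders.

    from ibm_watson_machine_learning import APIClient

    client = APIClient({"url": "https://us-south.ml.cloud.ibm.com", "apikey": "<api_key>"})
    client.set_default_space("<space_id>")

    meta_props = {
        client.deployments.ConfigurationMetaNames.NAME: "my online deployment",
        client.deployments.ConfigurationMetaNames.ONLINE: {}
    }
    deployment = client.deployments.create("<model_id>", meta_props=meta_props)
    deployment_id = client.deployments.get_id(deployment)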
## Retrieving the online deployment endpoint ##
You can find the endpoint URL of a deployment in these ways:
<!-- <ul> -->
* From the **Deployments** tab of your space, click your deployment name\. A page with deployment details opens\. You can find the endpoint there\.
* Using the Watson Machine Learning Python client:
<!-- <ol> -->
1. List the deployments by calling the [Python client method](https://ibm.github.io/watson-machine-learning-sdk/core_api.html#client.Deployments.list)`client.deployments.list()`
2. Find the row with your deployment. The deployment endpoint URL is listed in the `url` column.
<!-- </ol> -->
<!-- </ul> -->
**Notes**:
<!-- <ul> -->
* If you added **Serving name** to the deployment, two alternative endpoint URLs show on the screen; one containing the deployment ID, and the other containing your serving name\. You can use either one of these URLs with your deployment\.
* The **API Reference** tab also shows code snippets in various programming languages that illustrate how to access the deployment\.
<!-- </ul> -->
For more information, see [Endpoint URLs](https://cloud.ibm.com/apidocs/machine-learning#endpoint-url)\.
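For example, a minimal sketch of looking up the endpoint with an authenticated `client` object (as in the earlier sketch) follows. The exact path to the URL inside the details dictionary can vary by release.

    client.deployments.list()  # the endpoint is shown in the url column

    details = client.deployments.get_details("<deployment_id>")
    scoring_url = details["entity"]["status"]["online_url"]["url"]
    print(scoring_url)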
## Testing your online deployment ##
From the **Deployments** tab of your space, click your deployment name\. A page with deployment details opens\. The **Test** tab provides a place where you can enter data and get a prediction back from the deployed model\. If your model has a defined schema, a form shows on screen\. In the form, you can enter data in one of these ways:
<!-- <ul> -->
* Enter data directly in the form
* Download a CSV template, enter values, and upload the input data
* Upload a file that contains input data from your local file system or from the space
* Change to the JSON tab and enter your input data as JSON code
<!-- </ul> -->
Regardless of the method, the input data must match the schema of the model\. Submit the input data and get a score, or prediction, back\.
### Sample deployment code ###
When you submit JSON code as the payload, or input data, for a deployment, your input data must match the schema of the model\. The 'fields' must match the column headers for the data, and the 'values' must contain the data, in the same order\. Use this format:
{"input_data":[{
    "fields": [<field1>, <field2>, ...],
    "values": [[<value1>, <value2>, ...]]
}]}
Refer to this example:
{"input_data":[{
    "fields": ["PassengerId","Pclass","Name","Sex","Age","SibSp","Parch","Ticket","Fare","Cabin","Embarked"],
    "values": [[1,3,"Braund, Mr. Owen Harris",0,22,1,0,"A/5 21171",7.25,null,"S"]]
}]}
**Notes:**
<!-- <ul> -->
* All strings are enclosed in double quotation marks\. The Python notation for dictionaries looks similar, but Python strings in single quotation marks are not accepted in the JSON data\.
* Missing values can be indicated with `null`\.
* You can specify a hardware specification for an online deployment, for example if you are [scaling a deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-scaling.html)\.
<!-- </ul> -->
### Preparing payload that matches the schema of an existing model ###
Refer to this sample code:
model_details = client.repository.get_details("<model_id>")  # retrieves details and includes schema
columns_in_schema = []
for i in range(0, len(model_details['entity']['input'].get('fields'))):
    columns_in_schema.append(model_details['entity']['input'].get('fields')[i])
X = X[columns_in_schema]  # where X is a pandas dataframe that contains values to be scored
# (...)
scoring_values = X.values.tolist()
array_of_input_fields = X.columns.tolist()
payload_scoring = {"input_data": [{"fields": array_of_input_fields, "values": scoring_values}]}
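After the payload is prepared, you can send it to the deployment, for example with the same authenticated `client` object (a sketch; the deployment ID is a placeholder and the response structure can vary by release):

    predictions = client.deployments.score("<deployment_id>", payload_scoring)
    print(predictions["predictions"][0]["fields"])      # output field names
    print(predictions["predictions"][0]["values"][:5])  # first few scored rows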
## Accessing the online deployment details ##
To access your online deployment details: From the **Deployments** tab of your space, click your deployment name and then click the **Deployment details** tab\. The **Deployment details** tab contains specific information that is related to the currently opened online deployment and allows for adding a model to the model inventory, to enable activity tracking and model comparison\.
## Additional information ##
Refer to [Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html) for details on managing deployment jobs, and updating, scaling, or deleting an online deployment\.
**Parent topic:**[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
<!-- </article "role="article" "> -->
|
32AFAFA1C90D43BA1D3330A64491039F63D9FEB5 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-script.html?context=cdpaas&locale=en | Deploying scripts in Watson Machine Learning | Deploying scripts in Watson Machine Learning
When a script is copied to a deployment space, you can deploy it for use. Supported script types are Python scripts. [Batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html) is the only supported deployment type for a script.
* When the script is promoted from a project, your software specification is included.
* When you create a deployment job for a script, you must manually override the default environment with the correct environment for your script. For more information, see [Creating a deployment job](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-jobs.html)
Learn more
* To learn more about supported input and output types and setting environment variables, see [Batch deployment details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html).
* To learn more about software specifications, see [Software specifications and hardware specifications for deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html).
Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
| # Deploying scripts in Watson Machine Learning #
When a script is copied to a deployment space, you can deploy it for use\. Supported script types are Python scripts\. [Batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html) is the only supported deployment type for a script\.
<!-- <ul> -->
* When the script is promoted from a project, your software specification is included\.
* When you create a deployment job for a script, you must manually override the default environment with the correct environment for your script\. For more information, see [Creating a deployment job](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-jobs.html)
<!-- </ul> -->
## Learn more ##
<!-- <ul> -->
* To learn more about supported input and output types and setting environment variables, see [Batch deployment details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html)\.
* To learn more about software specifications, see [Software specifications and hardware specifications for deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html)\.
<!-- </ul> -->
**Parent topic:**[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
<!-- </article "role="article" "> -->
|
2B6DC49F4AFDE44DD385AE09CAAB02A3F1DB4259 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html?context=cdpaas&locale=en | Choosing compute resources for running tools in projects | Choosing compute resources for running tools in projects
You use compute resources in projects when you run jobs and most tools. Depending on the tool, you might have a choice of compute resources for the runtime for the tool.
Compute resources are known as either environment templates or hardware and software specifications. In general, compute resources with larger hardware configurations incur larger usage costs.
For these tools, you can choose from multiple runtime configurations:
* [Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html)
* [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html)
* [SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-envs.html)
* [AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-autoai.html)
* [Decision Optimization experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-decisionopt.html)
* [RStudio IDE](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html)
* [Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/synthetic-envs.html)
* [Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-fm-tuning.html)
Prompt Lab does not consume compute resources. Prompt Lab usage is measured by the number of processed tokens.
Learn more
* [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html)
Parent topic:[Projects ](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html)
| # Choosing compute resources for running tools in projects #
You use compute resources in projects when you run jobs and most tools\. Depending on the tool, you might have a choice of compute resources for the runtime for the tool\.
Compute resources are known as either environment templates or hardware and software specifications\. In general, compute resources with larger hardware configurations incur larger usage costs\.
For these tools, you can choose from multiple runtime configurations:
<!-- <ul> -->
* [Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html)
* [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html)
* [SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-envs.html)
* [AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-autoai.html)
* [Decision Optimization experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-decisionopt.html)
* [RStudio IDE](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html)
* [Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/synthetic-envs.html)
* [Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-fm-tuning.html)
<!-- </ul> -->
Prompt Lab does not consume compute resources\. Prompt Lab usage is measured by the number of processed tokens\.
## Learn more ##
<!-- <ul> -->
* [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html)
<!-- </ul> -->
**Parent topic:**[Projects ](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html)
<!-- </article "role="article" "> -->
|
D83BAAE9C79E5DF9CA904AB1886AC4826447B495 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html?context=cdpaas&locale=en | Examples of environment template customizations | Examples of environment template customizations
When you create an environment template, you can follow these examples to add custom libraries through conda or pip by using the provided templates for Python and R.
You can use mamba in place of conda in the following conda examples. Remember to select the checkbox to install from mamba if you add mamba channels or packages to the existing environment template.
Examples exist for:
* [Adding conda packages](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html?context=cdpaas&locale=en#add-conda-package)
* [Adding pip packages](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html?context=cdpaas&locale=en#add-pip-package)
* [Combining conda and pip packages](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html?context=cdpaas&locale=en#combine-conda-pip)
* [Adding complex packages with internal dependencies](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html?context=cdpaas&locale=en#complex-packages)
* [Adding conda packages for R notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html?context=cdpaas&locale=en#conda-in-r)
* [Setting environment variables](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html?context=cdpaas&locale=en#set-vars)
Hints and tips:
* [Best practices](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html?context=cdpaas&locale=en#best-practices)
Adding conda packages
To get the latest version of pandas-profiling:
dependencies:
  - pandas-profiling
This is equivalent to running conda install pandas-profiling in a notebook.
Adding pip packages
You can also customize an environment using pip if a particular package is not available in conda channels:
dependencies:
  - pip:
    - ibm-watson-machine-learning
This is equivalent to running pip install ibm-watson-machine-learning in a notebook.
The customization will actually do more than just install the specified pip package. The default behavior of conda is to also look for a new version of pip itself and then install it. Checking all the implicit dependencies in conda often takes several minutes and can also consume gigabytes of memory. The following customization will shortcut the installation of pip:
channels:
  - empty
  - nodefaults
dependencies:
  - pip:
    - ibm-watson-machine-learning
The conda channel empty does not provide any packages. There is no pip package in particular. conda won't try to install pip and will use the already pre-installed version instead. Note that the keyword nodefaults in the list of channels needs at least one other channel in the list. Otherwise conda will silently ignore the keyword and use the default channels.
Combining conda and pip packages
You can list multiple packages with one package per line. A single customization can have both conda packages and pip packages.
dependencies:
  - pandas-profiling
  - scikit-learn=0.20
  - pip:
    - watson-machine-learning-client-V4
    - sklearn-pandas==1.8.0
Note that the required template notation is sensitive to leading spaces. Each item in the list of conda packages must have two leading spaces. Each item in the list of pip packages must have four leading spaces. The version of a conda package must be specified using a single equals symbol (=), while the version of a pip package must be added using two equals symbols (==).
Adding complex packages with internal dependencies
When you add many packages or a complex package with many internal dependencies, the conda installation might take a long time or might even stop without showing an error message. To prevent this from happening:
* Specify the versions of the packages you want to add. This reduces the search space for conda to resolve dependencies.
* Increase the memory size of the environment.
* Use a specific channel instead of the default conda channels that are defined in the .condarc file. This avoids running lengthy searches through big channels.
Example of a customization that doesn't use the default conda channels:
# get latest version of the prophet package from the conda-forge channel
channels:
  - conda-forge
  - nodefaults
dependencies:
  - prophet
This customization corresponds to the following command in a notebook:
!conda install -c conda-forge --override-channels prophet -y
Adding conda packages for R notebooks
The following example shows you how to create a customization that adds conda packages to use in an R notebook:
channels:
  - defaults
dependencies:
  - r-plotly
This customization corresponds to the following command in a notebook:
print(system("conda install r-plotly", intern=TRUE))
The names of R packages in conda generally start with the prefix r-. If you just use plotly in your customization, the installation would succeed but the Python package would be installed instead of the R package. If you then try to use the package in your R code as in library(plotly), this would return an error.
Setting environment variables
You can set environment variables in your environment by adding a variables section to the software customization template as shown in the following example:
variables:
  my_var: my_value
  HTTP_PROXY: https://myproxy:3128
  HTTPS_PROXY: https://myproxy:3128
  NO_PROXY: cluster.local
The example also shows that you can use the variables section to set a proxy server for an environment.
Limitation: You cannot override existing environment variables, for example LD_LIBRARY_PATH, using this approach.
Best practices
To avoid problems that can arise finding packages or resolving conflicting dependencies, start by installing the packages you need manually through a notebook in a test environment. This enables you to check interactively if packages can be installed without errors. After you have verified that the packages were all correctly installed, create a customization for your development or production environment and add the packages to the customization template.
Parent topic:[Customizing environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html)
| # Examples of environment template customizations #
When you create an environment template, you can follow these examples to add custom libraries through conda or pip by using the provided templates for Python and R\.
You can use mamba in place of conda in the following conda examples\. Remember to select the checkbox to install from mamba if you add mamba channels or packages to the existing environment template\.
Examples exist for:
<!-- <ul> -->
* [Adding conda packages](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html?context=cdpaas&locale=en#add-conda-package)
* [Adding pip packages](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html?context=cdpaas&locale=en#add-pip-package)
* [Combining conda and pip packages](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html?context=cdpaas&locale=en#combine-conda-pip)
* [Adding complex packages with internal dependencies](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html?context=cdpaas&locale=en#complex-packages)
* [Adding conda packages for R notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html?context=cdpaas&locale=en#conda-in-r)
* [Setting environment variables](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html?context=cdpaas&locale=en#set-vars)
<!-- </ul> -->
Hints and tips:
<!-- <ul> -->
* [Best practices](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html?context=cdpaas&locale=en#best-practices)
<!-- </ul> -->
## Adding conda packages ##
To get the latest version of pandas\-profiling:
dependencies:
  - pandas-profiling
This is equivalent to running `conda install pandas-profiling` in a notebook\.
## Adding pip packages ##
You can also customize an environment using `pip` if a particular package is not available in conda channels:
dependencies:
  - pip:
    - ibm-watson-machine-learning
This is equivalent to running `pip install ibm-watson-machine-learning` in a notebook\.
The customization will actually do more than just install the specified `pip` package\. The default behavior of `conda` is to also look for a new version of `pip` itself and then install it\. Checking all the implicit dependencies in `conda` often takes several minutes and can also consume gigabytes of memory\. The following customization will shortcut the installation of `pip`:
channels:
  - empty
  - nodefaults
dependencies:
  - pip:
    - ibm-watson-machine-learning
The conda channel `empty` does not provide any packages\. There is no `pip` package in particular\. `conda` won't try to install `pip` and will use the already pre\-installed version instead\. Note that the keyword `nodefaults` in the list of channels needs at least one other channel in the list\. Otherwise `conda` will silently ignore the keyword and use the default channels\.
## Combining conda and pip packages ##
You can list multiple packages with one package per line\. A single customization can have both conda packages and pip packages\.
dependencies:
  - pandas-profiling
  - scikit-learn=0.20
  - pip:
    - watson-machine-learning-client-V4
    - sklearn-pandas==1.8.0
Note that the required template notation is sensitive to leading spaces\. Each item in the list of conda packages must have two leading spaces\. Each item in the list of pip packages must have four leading spaces\. The version of a conda package must be specified using a single equals symbol (`=`), while the version of a pip package must be added using two equals symbols (`==`)\.
## Adding complex packages with internal dependencies ##
When you add many packages or a complex package with many internal dependencies, the conda installation might take a long time or might even stop without showing an error message\. To prevent this from happening:
<!-- <ul> -->
* Specify the versions of the packages you want to add\. This reduces the search space for conda to resolve dependencies\.
* Increase the memory size of the environment\.
* Use a specific channel instead of the default conda channels that are defined in the `.condarc` file\. This avoids running lengthy searches through big channels\.
<!-- </ul> -->
Example of a customization that doesn't use the default conda channels:
# get latest version of the prophet package from the conda-forge channel
channels:
  - conda-forge
  - nodefaults
dependencies:
  - prophet
This customization corresponds to the following command in a notebook:
!conda install -c conda-forge --override-channels prophet -y
## Adding conda packages for R notebooks ##
The following example shows you how to create a customization that adds conda packages to use in an R notebook:
channels:
  - defaults
dependencies:
  - r-plotly
This customization corresponds to the following command in a notebook:
print(system("conda install r-plotly", intern=TRUE))
The names of R packages in conda generally start with the prefix `r-`\. If you just use `plotly` in your customization, the installation would succeed but the Python package would be installed instead of the R package\. If you then try to use the package in your R code as in `library(plotly)`, this would return an error\.
## Setting environment variables ##
You can set environment variables in your environment by adding a variables section to the software customization template as shown in the following example:
variables:
  my_var: my_value
  HTTP_PROXY: https://myproxy:3128
  HTTPS_PROXY: https://myproxy:3128
  NO_PROXY: cluster.local
The example also shows that you can use the variables section to set a proxy server for an environment\.
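Code that runs in the customized environment can then read these variables with the standard `os` module, for example:

    import os

    print(os.environ.get("my_var"))       # prints "my_value"
    print(os.environ.get("HTTPS_PROXY"))  # prints "https://myproxy:3128"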
**Limitation**: You cannot override existing environment variables, for example LD\_LIBRARY\_PATH, using this approach\.
## Best practices ##
To avoid problems that can arise finding packages or resolving conflicting dependencies, start by installing the packages you need manually through a notebook in a test environment\. This enables you to check interactively if packages can be installed without errors\. After you have verified that the packages were all correctly installed, create a customization for your development or production environment and add the packages to the customization template\.
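For example, in a notebook that runs in a test environment, you might verify a package interactively before you add it to a customization (a sketch; pandas-profiling is only an example package):

    # Install interactively first to confirm that the dependencies resolve without errors
    !conda install pandas-profiling -y

    # Then confirm that the package imports and reports the expected version
    import pandas_profiling
    print(pandas_profiling.__version__)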
**Parent topic:**[Customizing environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html)
<!-- </article "role="article" "> -->
|
A7845D8C3E419CEDD06E8C447ADF41E6E3D860C8 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html?context=cdpaas&locale=en | IBM Federated Learning | IBM Federated Learning
Federated Learning provides the tools for multiple remote parties to collaboratively train a single machine learning model without sharing data. Each party trains a local model with a private data set. Only the local model is sent to the aggregator to improve the quality of the global model that benefits all parties.
Data format
Any data format including but not limited to CSV files, JSON files, and databases for PostgreSQL.
How Federated Learning works
Watch this overview video to learn the basic concepts and elements of a Federated Learning experiment. Learn how you can apply the tools for your company's analytics enhancements.
This video provides a visual method to learn the concepts and tasks in this documentation.
For example, an aviation alliance might want to model how a global pandemic impacts airline delays. Each participating party in the federation can use its own data to train a common model without ever moving or sharing that data, whether the parties work in application silos or in any other scenario where regulatory or practical considerations prevent them from sharing data. The resulting model benefits each member of the alliance with improved business insights while lowering the risk from data migration and privacy issues.
As the following graphic illustrates, parties can be geographically distributed and run on different platforms.

Why use IBM Federated Learning
IBM Federated Learning has a wide range of applications across many enterprise industries. Federated Learning:
* Enables large volumes of data from multiple sites to be collected, cleaned, and used for training on an enterprise scale without migration.
* Accommodates differences in data format, quality, and constraints.
* Complies with data privacy and security while training models with different data sources.
Learn more
* [Federated Learning tutorials and samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html)
* [Federated Learning Tensorflow tutorial for UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html)
* [Federated Learning Tensorflow samples for API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-samples.html)
* [Federated Learning XGBoost tutorial for UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html)
* [Federated Learning XGBoost sample for API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-samples.html)
* [Federated Learning homomorphic encryption sample for API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-fhe-sample.html)
* [Getting started](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-get-started.html)
* [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html)
* [Federated Learning architecture](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-arch.html)
* [Frameworks, fusion methods, and Python versions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-frames.html)
* [Hyperparameter definitions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-param.html)
* [Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html)
* [Set up your system](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-setup.html)
* [Creating the initial model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html)
* [Create the data handler](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-handler.html)
* [Starting the aggregator (Admin)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-agg.html)
* [Connecting to the aggregator (Party)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-conn.html)
* [Monitoring and saving the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-mon.html)
* [Applying encryption](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-homo.html)
* [Limitations and troubleshooting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-troubleshoot.html)
Parent topic:[Analyzing data and building models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html)
| # IBM Federated Learning #
Federated Learning provides the tools for multiple remote parties to collaboratively train a single machine learning model without sharing data\. Each party trains a local model with a private data set\. Only the local model is sent to the aggregator to improve the quality of the global model that benefits all parties\.
**Data format**
Any data format including but not limited to CSV files, JSON files, and databases for PostgreSQL\.
## How Federated Learning works ##
Watch this overview video to learn the basic concepts and elements of a Federated Learning experiment\. Learn how you can apply the tools for your company's analytics enhancements\.
This video provides a visual method to learn the concepts and tasks in this documentation\.
For example, an aviation alliance might want to model how a global pandemic impacts airline delays\. Each participating party in the federation can use its own data to train a common model without ever moving or sharing that data, whether the parties work in application silos or in any other scenario where regulatory or practical considerations prevent them from sharing data\. The resulting model benefits each member of the alliance with improved business insights while lowering the risk from data migration and privacy issues\.
As the following graphic illustrates, parties can be geographically distributed and run on different platforms\.

## Why use IBM Federated Learning ##
IBM Federated Learning has a wide range of applications across many enterprise industries\. Federated Learning:
<!-- <ul> -->
* Enables large volumes of data from multiple sites to be collected, cleaned, and used for training on an enterprise scale without migration\.
* Accommodates differences in data format, quality, and constraints\.
* Complies with data privacy and security while training models with different data sources\.
<!-- </ul> -->
## Learn more ##
<!-- <ul> -->
* [Federated Learning tutorials and samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html)
<!-- <ul> -->
* [Federated Learning Tensorflow tutorial for UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html)
* [Federated Learning Tensorflow samples for API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-samples.html)
* [Federated Learning XGBoost tutorial for UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html)
* [Federated Learning XGBoost sample for API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-samples.html)
* [Federated Learning homomorphic encryption sample for API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-fhe-sample.html)
<!-- </ul> -->
* [Getting started](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-get-started.html)
<!-- <ul> -->
* [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html)
* [Federated Learning architecture](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-arch.html)
<!-- </ul> -->
* [Frameworks, fusion methods, and Python versions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-frames.html)
<!-- <ul> -->
* [Hyperparameter definitions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-param.html)
<!-- </ul> -->
* [Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html)
<!-- <ul> -->
* [Set up your system](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-setup.html)
* [Creating the initial model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html)
* [Create the data handler](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-handler.html)
* [Starting the aggregator (Admin)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-agg.html)
* [Connecting to the aggregator (Party)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-conn.html)
* [Monitoring and saving the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-mon.html)
<!-- </ul> -->
* [Applying encryption](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-homo.html)
* [Limitations and troubleshooting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-troubleshoot.html)
<!-- </ul> -->
**Parent topic:**[Analyzing data and building models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html)
<!-- </article "role="article" "> -->
|
CEE9EF1F47611F6A9C00BB76C91B94B7DA2A62FF | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-agg.html?context=cdpaas&locale=en | Starting the aggregator (Admin) | Starting the aggregator (Admin)
An administrator completes the following steps to start the experiment and train the global model.
* [Step 1: Set up the Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-agg.html?context=cdpaas&locale=en#fl-setup)
* [Step 2: Create the remote training system](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-agg.html?context=cdpaas&locale=en#rts)
* [Step 3: Start the experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-agg.html?context=cdpaas&locale=en#start)
Step 1: Set up the Federated Learning experiment
Set up a Federated Learning experiment from a project.
1. From the project, click New asset > Federated Learning.
2. Name the experiment.
Optional: Add a description and tags.
3. [Add new collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html) to the project.
4. In the Configure tab, choose the training framework and model type. See [Frameworks, fusion methods, and Python versions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-frames.html) for a table listing supported frameworks, fusion methods, and their attributes. Optional: You can choose to enable the homomorphic encryption feature. For more details, see [Applying encryption](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-homo.html).
5. Click Select under Model specification and upload the .zip file that contains your initial model.
6. In the Define hyperparameters tab, you can choose hyperparameter options available for your framework and fusion method to tune your model.
Step 2: Create the Remote Training System
Create Remote Training Systems (RTS) that authenticate the participating parties of the experiment.
1. At Select remote training system, click Add new systems.

2. Configure the RTS.
| Field name | Definition | Example |
| -- | -- | -- |
| Name | A name to identify this RTS instance. | Canada Bank Model: Federated Learning Experiment |
| Description (Optional) | Description of the training system. | This Remote Training System is for a Federated Learning experiment to train a model for predicting credit card fraud with data from Canadian banks. |
| System administrator (Optional) | Specify a user with read-only access to this RTS. They can see system details, logs, and scripts, but do not necessarily participate in the experiment. They should be contacted if issues occur when running the experiment. | Admin ([email protected]) |
| Allowed identities | List project collaborators who can participate in the Federated Learning experiment training. Multiple collaborators can be registered in this RTS, but only one can participate in the experiment. Multiple RTSs are needed to authenticate all participating collaborators. | John Doe ([email protected]) <br>Jane Doe ([email protected]) |
| Allowed IP addresses (Optional) | Restrict individual parties from connecting to Federated Learning outside of a specified IP address. 1. To configure this, click Configure. 2. For Allowed identities, select the user to place IP constraints on. 3. For Allowed IP addresses for user, enter a comma-separated list of IPs or CIDRs that can connect to the Remote Training System. Note: Both IPv4 and IPv6 are supported. | John: 1234:5678:90ab:cdef:1234:5678:90ab:cdef (John’s office IP), 123.123.123.123 (John’s home IP), 0987.6543.21ab.cdef (Remote VM IP) <br>Jane: 123.123.123.0/16 (Jane's home IP), 0987.6543.21ab.cdef (Remote machine IP) |
| Tags (Optional) | Associate keywords with the Remote Training System to make it easier to find. | Canada <br>Bank <br>Model <br>Credit <br>Fraud |
3. Click Add to save the RTS instance. If you are creating multiple remote training instances, you can repeat these steps.
4. Click Add systems to save the RTS as an asset in the project.
Tip: You can use an RTS definition for future experiments. For example, in the Select remote training system tab, you can select any Remote Training System that you previously created.
5. Each RTS can authenticate only one of its allowed party identities. Create an RTS for each participating party.
Step 3: Start the experiment
Start the Federated Learning aggregator to initiate training of the global model.
1. Click Review and create to view the settings of your current Federated Learning experiment. Then, click Create. 
2. The Federated Learning experiment will be in Pending status while the aggregator is starting. When the aggregator starts, the status will change to Setup – Waiting for remote systems.
Parent topic:[Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html)
| # Starting the aggregator (Admin) #
An administrator completes the following steps to start the experiment and train the global model\.
<!-- <ul> -->
* [Step 1: Set up the Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-agg.html?context=cdpaas&locale=en#fl-setup)
* [Step 2: Create the remote training system](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-agg.html?context=cdpaas&locale=en#rts)
* [Step 3: Start the experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-agg.html?context=cdpaas&locale=en#start)
<!-- </ul> -->
## Step 1: Set up the Federated Learning experiment ##
Set up a Federated Learning experiment from a project\.
<!-- <ol> -->
1. From the project, click **New asset > Federated Learning**\.
2. Name the experiment\.
*Optional:* Add a description and tags.
3. [Add new collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html) to the project\.
4. In the **Configure** tab, choose the training framework and model type\. See [Frameworks, fusion methods, and Python versions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-frames.html) for a table listing supported frameworks, fusion methods, and their attributes\. *Optional:* You can choose to enable the homomorphic encryption feature\. For more details, see [Applying encryption](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-homo.html)\.
5. Click **Select** under **Model specification** and upload the `.zip` file that contains your initial model\.
6. In the **Define hyperparameters** tab, you can choose hyperparameter options available for your framework and fusion method to tune your model\.
<!-- </ol> -->
## Step 2: Create the Remote Training System ##
Create Remote Training Systems (RTS) that authenticate the participating parties of the experiment\.
<!-- <ol> -->
1. At **Select remote training system**, click **Add new systems**\.

2. Configure the RTS\.
| Field name | Definition | Example |
| -- | -- | -- |
| Name | A name to identify this RTS instance\. | `Canada Bank Model: Federated Learning Experiment` |
| Description (Optional) | Description of the training system\. | This Remote Training System is for a Federated Learning experiment to train a model for predicting credit card fraud with data from Canadian banks\. |
| System administrator (Optional) | Specify a user with read\-only access to this RTS\. They can see system details, logs, and scripts, but do not necessarily participate in the experiment\. They should be contacted if issues occur when running the experiment\. | `Admin ([email protected])` |
| Allowed identities | List project collaborators who can participate in the Federated Learning experiment training\. Multiple collaborators can be registered in this RTS, but only one can participate in the experiment\. Multiple RTSs are needed to authenticate all participating collaborators\. | `John Doe ([email protected])` <br>`Jane Doe ([email protected])` |
| Allowed IP addresses (Optional) | Restrict individual parties from connecting to Federated Learning outside of a specified IP address\. 1\. To configure this, click **Configure**\. 2\. For *Allowed identities*, select the user to place IP constraints on\. 3\. For *Allowed IP addresses for user*, enter a comma\-separated list of IPs or CIDRs that can connect to the Remote Training System\. Note: Both IPv4 and IPv6 are supported\. | John: 1234:5678:90ab:cdef:1234:5678:90ab:cdef (John’s office IP), 123.123.123.123 (John’s home IP), 0987.6543.21ab.cdef (Remote VM IP) <br>Jane: 123.123.123.0/16 (Jane's home IP), 0987.6543.21ab.cdef (Remote machine IP) |
| Tags (Optional) | Associate keywords with the Remote Training System to make it easier to find\. | `Canada` <br>`Bank` <br>`Model` <br>`Credit` <br>`Fraud` |
<!-- </ol> -->
<!-- <ol> -->
3. Click **Add** to save the RTS instance\. If you are creating multiple remote training instances, you can repeat these steps\.
4. Click **Add systems** to save the RTS as an asset in the project\.
Tip: You can use an RTS definition for future experiments. For example, in the **Select remote training system** tab, you can select any Remote Training System that you previously created.
5. Each RTS can authenticate only one of its allowed party identities\. Create an RTS for each participating party\.
<!-- </ol> -->
## Step 3: Start the experiment ##
Start the Federated Learning aggregator to initiate training of the global model\.
<!-- <ol> -->
1. Click **Review and create** to view the settings of your current Federated Learning experiment\. Then, click **Create**\. 
2. The Federated Learning experiment will be in `Pending` status while the aggregator is starting\. When the aggregator starts, the status will change to `Setup – Waiting for remote systems`\.
<!-- </ol> -->
**Parent topic:**[Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html)
<!-- </article "role="article" "> -->
|
4B48EF3D089F3142B1ED604A32873217F89E052F | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-arch.html?context=cdpaas&locale=en | Federated Learning architecture | Federated Learning architecture
IBM Federated Learning has two main components: the aggregator and the remote training parties.
Aggregator
The aggregator is a model fusion processor. The admin manages the aggregator.
The aggregator runs the following tasks:
* Runs as a platform service in regions Dallas, Frankfurt, London, or Tokyo.
* Starts with a Federated Learning experiment.
Party
A party is a user that provides model input to the Federated Learning experiment aggregator. The party can be:
* on any system that can run the Watson Machine Learning Python client and that is compatible with Watson Machine Learning frameworks.
Note: The system does not have to be IBM watsonx specifically. For a list of system requirements, see [Set up your system](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-setup.html).
* running in any geographical location. It is recommended that you locate each party in the same region as its data to avoid extracting data across regions.
This illustration shows the architecture of IBM Federated Learning.
A Remote Training System is used to authenticate the party's identity to the aggregator during training.

User workflow
1. The data scientist:
1. Identifies the data sources.
2. Creates an initial "untrained" model.
3. Creates a data handler file.
These tasks might overlap with a training party entity.
2. A party connects to the aggregator on their system, which can be remote.
3. An admin controls the Federated Learning experiment by:
1. Configuring the experiment to accommodate remote parties.
2. Starting the aggregator.
This illustration shows the actions that are associated with each role in the Federated Learning process.

Parent topic:[Get started](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-get-started.html)
| # Federated Learning architecture #
IBM Federated Learning has two main components: the aggregator and the remote training parties\.
## Aggregator ##
The aggregator is a model fusion processor\. The admin manages the aggregator\.
The aggregator runs the following tasks:
<!-- <ul> -->
* Runs as a platform service in regions Dallas, Frankfurt, London, or Tokyo\.
* Starts with a Federated Learning experiment\.
<!-- </ul> -->
## Party ##
A party is a user that provides model input to the Federated Learning experiment aggregator\. The party can be:
<!-- <ul> -->
* on any system that can run the Watson Machine Learning Python client and that is compatible with Watson Machine Learning frameworks\.
Note: The system does not have to be IBM watsonx specifically. For a list of system requirements, see [Set up your system](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-setup.html).
* running in any geographical location\. It is recommended that you locate each party in the same region as its data to avoid extracting data across regions\.
<!-- </ul> -->
This illustration shows the architecture of IBM Federated Learning\.
A Remote Training System is used to authenticate the party's identity to the aggregator during training\.

## User workflow ##
<!-- <ol> -->
1. The data scientist:
<!-- <ol> -->
1. Identifies the data sources.
2. Creates an initial "untrained" model.
3. Creates a data handler file.
These tasks might overlap with a training party entity.
<!-- </ol> -->
2. A party connects to the aggregator on their system, which can be remote\.
3. An admin controls the Federated Learning experiment by:
<!-- <ol> -->
1. Configuring the experiment to accommodate remote parties.
2. Starting the aggregator.
<!-- </ol> -->
<!-- </ol> -->
This illustration shows the actions that are associated with each role in the Federated Learning process\.

**Parent topic:**[Get started](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-get-started.html)
<!-- </article "role="article" "> -->
|
924550083A3A6ACD177024DF788C02D236874893 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-conn.html?context=cdpaas&locale=en | Connecting to the aggregator (Party) | Connecting to the aggregator (Party)
Each party follows these steps to connect to a started aggregator.
1. Open the project and click the Federated Learning experiment.
2. Click View setup information and click the download icon to download the party connector script. 
3. Each party must configure the party connector script and provide valid credentials to run the script. This is what a sample completed party connector script looks like:
from ibm_watson_machine_learning import APIClient
wml_credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",
    "apikey": "<API KEY>"
}
wml_client = APIClient(wml_credentials)
wml_client.set.default_project("XXX-XXX-XXX-XXX-XXX")
party_metadata = {
    wml_client.remote_training_systems.ConfigurationMetaNames.DATA_HANDLER: {
        "name": "MnistSklearnDataHandler",
        "path": "example.mnist_sklearn_data_handler",
        "info": {
            "npz_file": "./example_data/example_data.npz"
        }
    }
}
party = wml_client.remote_training_systems.create_party("XXX-XXX-XXX-XXX-XXX", party_metadata)
party.monitor_logs()
party.run(aggregator_id="XXX-XXX-XXX-XXX-XXX", asynchronous=False)
Parameters:
* api_key:
Your IAM API key. To create a new API key, go to the [IBM Cloud website](https://cloud.ibm.com/), and click Create an IBM Cloud Pak for Data API key under Manage > Access (IAM) > API keys.
Optional: If you're reusing a script from a different project, you can copy the updated project_id, aggregator_id and experiment_id from the setup information window and copy them into the script.
4. Install Watson Machine Learning with the latest Federated Learning package if you have not yet done so:
* If you are using M-series on a Mac, install the latest package with the following script:
# -----------------------------------------------------------------------------------------
# (C) Copyright IBM Corp. 2023.
# https://opensource.org/licenses/BSD-3-Clause
# -----------------------------------------------------------------------------------------
#
# Script to create a conda environment and install ibm-watson-machine-learning with
# the dependencies required for Federated Learning on MacOS.
# The name of the conda environment to be created is passed as the first argument.
#
# Note: This script requires miniforge to be installed for conda.
#
usage=". install_fl_rt22.2_macos.sh conda_env_name"
arch=$(uname -m)
os=$(uname -s)
if (($# < 1))
then
echo $usage
exit
fi
ENAME=$1
conda create -y -n ${ENAME} python=3.10
conda activate ${ENAME}
pip install ibm-watson-machine-learning
if [ "$os" == "Darwin" -a "$arch" == "arm64" ]
then
conda install -y -c apple tensorflow-deps
fi
python - <<EOF
import pkg_resources
import platform
import subprocess
package = 'ibm-watson-machine-learning'
extra = 'fl-rt22.2-py3.10'
extra_ = extra.replace('.','-')
extra_s = '; extra == "{}"'
remove = None
add = []
if platform.system() == "Darwin" and platform.processor() == "arm":
remove = 'tensorflow'
add = ['tensorflow-macos==2.9.2']
pkgs = pkg_resources.working_set.by_key[package].requires(extras=[extra])
pkgs = [ p.__str__().removesuffix(extra_s.format(extra)).removesuffix(extra_s.format(extra_)) for p in pkgs if ( extra in p.__str__() or extra_ in p.__str__() ) and ( not remove or remove not in p.__str__() )]
print( "Installing standard packages for {}[{}]:{}".format(package,extra,pkgs) )
print( "Installing additional packages:{}".format(add) )
cmd = [ 'pip', 'install'] + add + pkgs
subprocess.run( cmd )
EOF
* Otherwise install with the following command:
pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ 'ibm-watson-machine-learning[fl-rt22.2-py3.10]'
1. When your configuration is complete and you save the party connector script, enter this command in a command line to run the script:
python3 rts_<RTS Name>_<RTS ID>.py
More resources
[Federated Learning library functions](https://ibm.github.io/watson-machine-learning-sdk/)
Parent topic:[Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html)
| # Connecting to the aggregator (Party) #
Each party follows these steps to connect to a started aggregator\.
<!-- <ol> -->
1. Open the project and click the Federated Learning experiment\.
2. Click **View setup information** and click the download icon to download the party connector script\. 
3. Each party must configure the party connector script and provide valid credentials to run the script\. This is what a sample completed party connector script looks like:
from ibm_watson_machine_learning import APIClient
wml_credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",
    "apikey": "<API KEY>"
}
wml_client = APIClient(wml_credentials)
wml_client.set.default_project("XXX-XXX-XXX-XXX-XXX")
party_metadata = {
    wml_client.remote_training_systems.ConfigurationMetaNames.DATA_HANDLER: {
        "name": "MnistSklearnDataHandler",
        "path": "example.mnist_sklearn_data_handler",
        "info": {
            "npz_file": "./example_data/example_data.npz"
        }
    }
}
party = wml_client.remote_training_systems.create_party("XXX-XXX-XXX-XXX-XXX", party_metadata)
party.monitor_logs()
party.run(aggregator_id="XXX-XXX-XXX-XXX-XXX", asynchronous=False)
**Parameters**:
<!-- <ul> -->
* *`api_key`:*
Your IAM API key. To create a new API key, go to the [IBM Cloud website](https://cloud.ibm.com/), and click **Create an IBM Cloud Pak for Data API key** under **Manage > Access (IAM) > API keys**.
*Optional:* If you're reusing a script from a different project, you can copy the updated `project_id`, `aggregator_id` and `experiment_id` from the setup information window and copy them into the script.
<!-- </ul> -->
4. Install Watson Machine Learning with the latest Federated Learning package if you have not yet done so:
<!-- <ul> -->
* If you are using M-series on a Mac, install the latest package with the following script:
# -----------------------------------------------------------------------------------------
# (C) Copyright IBM Corp. 2023.
# https://opensource.org/licenses/BSD-3-Clause
# -----------------------------------------------------------------------------------------
#
#
# Script to create a conda environment and install ibm-watson-machine-learning with
# the dependencies required for Federated Learning on MacOS.
# The name of the conda environment to be created is passed as the first argument.
#
# Note: This script requires miniforge to be installed for conda.
#
usage=". install_fl_rt22.2_macos.sh conda_env_name"
arch=$(uname -m)
os=$(uname -s)
if (($# < 1))
then
echo $usage
exit
fi
ENAME=$1
conda create -y -n ${ENAME} python=3.10
conda activate ${ENAME}
pip install ibm-watson-machine-learning
if [ "$os" == "Darwin" -a "$arch" == "arm64" ]
then
conda install -y -c apple tensorflow-deps
fi
python - <<EOF
import pkg_resources
import platform
import subprocess
package = 'ibm-watson-machine-learning'
extra = 'fl-rt22.2-py3.10'
extra_ = extra.replace('.','-')
extra_s = '; extra == "{}"'
remove = None
add = []
if platform.system() == "Darwin" and platform.processor() == "arm":
remove = 'tensorflow'
add = ['tensorflow-macos==2.9.2']
pkgs = pkg_resources.working_set.by_key[package].requires(extras=[extra])
pkgs = [ p.__str__().removesuffix(extra_s.format(extra)).removesuffix(extra_s.format(extra_)) for p in pkgs if ( extra in p.__str__() or extra_ in p.__str__() ) and ( not remove or remove not in p.__str__() )]
print( "Installing standard packages for {}[{}]:{}".format(package,extra,pkgs) )
print( "Installing additional packages:{}".format(add) )
cmd = [ 'pip', 'install'] + add + pkgs
subprocess.run( cmd )
EOF
* Otherwise install with the following command:
`pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ 'ibm-watson-machine-learning[fl-rt22.2-py3.10]'`
<!-- </ul> -->
<!-- </ol> -->
<!-- <ol> -->
1. When your configuration is complete and you save the party connector script, enter this command in a command line to run the script:
python3 rts_<RTS Name>_<RTS ID>.py
<!-- </ol> -->
## More resources ##
[Federated Learning library functions](https://ibm.github.io/watson-machine-learning-sdk/)
**Parent topic:**[Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html)
<!-- </article "role="article" "> -->
|
D579ABA442C4652BAC088173107ECFEBBF4D8290 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html?context=cdpaas&locale=en | Federated Learning tutorials and samples | Federated Learning tutorials and samples
Select the tutorial that fits your needs. To help you learn Federated Learning, a UI-based tutorial and an API-based tutorial are provided for multiple frameworks and data sets; both approaches produce the same results. The UI-based tutorials demonstrate how to create the Federated Learning experiment in a low-code environment. The API-based tutorials use two sample notebooks with Python scripts to demonstrate how to build and train the experiment.
Tensorflow
These hands-on tutorials teach you how to create a Federated Learning experiment step by step. These tutorials use the MNIST data set to demonstrate how different parties can contribute data to train a model to recognize handwriting. You can choose between a UI-based or API version of the tutorial.
* [Federated Learning Tensorflow tutorial for UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html)
* [Federated Learning Tensorflow samples for API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-samples.html)
XGBoost
This tutorial teaches you how to create a Federated Learning experiment step by step with an income data set in the XGBoost framework. The tutorial demonstrates how different parties can contribute data to train a model about adult incomes.
* [Federated Learning XGBoost tutorial for UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html)
* [Federated Learning XGBoost sample for API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-samples.html)
Homomorphic encryption
This is a tutorial for Federated Learning that teaches you how to use the advanced method of homomorphic encryption step by step.
* [Federated Learning homomorphic encryption sample for API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-fhe-sample.html)
Parent topic:[IBM Federated learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html)
| # Federated Learning tutorials and samples #
Select the tutorial that fits your needs\. To help you learn Federated Learning, a UI\-based tutorial and an API\-based tutorial are provided for multiple frameworks and data sets; both approaches produce the same results\. The UI\-based tutorials demonstrate how to create the Federated Learning experiment in a low\-code environment\. The API\-based tutorials use two sample notebooks with Python scripts to demonstrate how to build and train the experiment\.
## Tensorflow ##
These hands\-on tutorials teach you how to create a Federated Learning experiment step by step\. These tutorials use the MNIST data set to demonstrate how different parties can contribute data to train a model to recognize handwriting\. You can choose between a UI\-based or API version of the tutorial\.
<!-- <ul> -->
* [Federated Learning Tensorflow tutorial for UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html)
* [Federated Learning Tensorflow samples for API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-samples.html)
<!-- </ul> -->
## XGBoost ##
This tutorial teaches you how to create a Federated Learning experiment step by step with an income data set in the XGBoost framework\. The tutorial demonstrates how different parties can contribute data to train a model about adult incomes\.
<!-- <ul> -->
* [Federated Learning XGBoost tutorial for UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html)
* [Federated Learning XGBoost sample for API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-samples.html)
<!-- </ul> -->
## Homomorphic encryption ##
This is a tutorial for Federated Learning that teaches you how to use the advanced method of homomorphic encryption step by step\.
<!-- <ul> -->
* [Federated Learning homomorphic encryption sample for API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-fhe-sample.html)
<!-- </ul> -->
**Parent topic:**[IBM Federated learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html)
<!-- </article "role="article" "> -->
|
CBDD718BCEE7B1FFDE95191BE1749D57B9A1A60D | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-fhe-sample.html?context=cdpaas&locale=en | Federated Learning homomorphic encryption sample for API | Federated Learning homomorphic encryption sample for API
Download and review sample files that show how to run a Federated Learning experiment with Fully Homomorphic Encryption (FHE).
Homomorphic encryption
FHE is an advanced, optional method to provide additional security and privacy for your data by encrypting data sent between parties and the aggregator. This method still creates a computational result that is the same as if the computations were done on unencrypted data. For more details on applying homomorphic encryption in Federated Learning, see [Applying encryption](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-homo.html).
Download the Federated Learning sample files
Download the following notebook.
[Federated Learning FHE Demo](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/aa449d3939b73847c502bd7822d0949a)
Parent topic:[Federated Learning tutorial and samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html)
| # Federated Learning homomorphic encryption sample for API #
Download and review sample files that show how to run a Federated Learning experiment with Fully Homomorphic Encryption (FHE)\.
## Homomorphic encryption ##
FHE is an advanced, optional method to provide additional security and privacy for your data by encrypting data sent between parties and the aggregator\. This method still creates a computational result that is the same as if the computations were done on unencrypted data\. For more details on applying homomorphic encryption in Federated Learning, see [Applying encryption](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-homo.html)\.
## Download the Federated Learning sample files ##
Download the following notebook\.
[Federated Learning FHE Demo](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/aa449d3939b73847c502bd7822d0949a)
**Parent topic:**[Federated Learning tutorial and samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html)
<!-- </article "role="article" "> -->
|
1D1783967CBF46A0B75539BADBAA1D601BC9F412 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-frames.html?context=cdpaas&locale=en | Frameworks, fusion methods, and Python versions | Frameworks, fusion methods, and Python versions
These are the available machine learning model frameworks and model fusion methods for the Federated Learning model. The software spec and frameworks are also compatible with specific Python versions.
Frameworks and fusion methods
This table lists supported software frameworks for building Federated Learning models. For each framework you can see the supported model types, fusion methods, and hyperparameter options.
Table 1. Frameworks and fusion methods
| Frameworks | Model Type | Fusion Method | Description | Hyperparameters |
| -- | -- | -- | -- | -- |
| TensorFlow <br>Used to build neural networks. <br>See [Save the Tensorflow model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html#tf-config). | Any | Simple Avg | Simplest aggregation that is used as a baseline where all parties' model updates are equally weighted. | - Rounds <br>- Termination predicate (Optional) <br>- Quorum (Optional) <br>- Max Timeout (Optional) |
| | | Weighted Avg | Weights the average of updates based on the number of each party sample. Use with training data sets of widely differing sizes. | - Rounds <br>- Termination predicate (Optional) <br>- Quorum (Optional) <br>- Max Timeout (Optional) |
| Scikit-learn <br>Used for predictive data analysis. <br>See [Save the Scikit-learn model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html#sklearn-config). | Classification | Simple Avg | Simplest aggregation that is used as a baseline where all parties' model updates are equally weighted. | - Rounds <br>- Termination predicate (Optional) |
| | | Weighted Avg | Weights the average of updates based on the number of each party sample. Use with training data sets of widely differing sizes. | - Rounds <br>- Termination predicate (Optional) |
| | Regression | Simple Avg | Simplest aggregation that is used as a baseline where all parties' model updates are equally weighted. | - Rounds |
| | | Weighted Avg | Weights the average of updates based on the number of each party sample. Use with training data sets of widely differing sizes. | - Rounds |
| | XGBoost | XGBoost Classification | Use to build classification models that use XGBoost. | - Learning rate <br>- Loss <br>- Rounds <br>- Number of classes |
| | | XGBoost Regression | Use to build regression models that use XGBoost. | - Learning rate <br>- Rounds <br>- Loss |
| | | K-Means/SPAHM | Used to train KMeans (unsupervised learning) models when parties have heterogeneous data sets. | - Max Iter <br>- N cluster |
| Pytorch <br>Used for training neural network models. <br>See [Save the Pytorch model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html#pytorch). | Any | Simple Avg | Simplest aggregation that is used as a baseline where all parties' model updates are equally weighted. | - Rounds <br>- Epochs <br>- Quorum (Optional) <br>- Max Timeout (Optional) |
| | Neural Networks | Probabilistic Federated Neural Matching (PFNM) | Communication-efficient method for fully connected neural networks when parties have heterogeneous data sets. | - Rounds <br>- Termination accuracy (Optional) <br>- Epochs <br>- sigma <br>- sigma0 <br>- gamma <br>- iters |
Software specifications and Python version by framework
This table lists the software spec and Python versions available for each framework.
Software specifications and Python version by framework
| Watson Studio frameworks | Python version | Software Spec | Python Client Extras | Framework package |
| -- | -- | -- | -- | -- |
| scikit-learn | 3.10 | runtime-22.2-py3.10 | fl-rt22.2-py3.10 | scikit-learn 1.1.1 |
| Tensorflow | 3.10 | runtime-22.2-py3.10 | fl-rt22.2-py3.10 | tensorflow 2.9.2 |
| PyTorch | 3.10 | runtime-22.2-py3.10 | fl-rt22.2-py3.10 | torch 1.12.1 |
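For example, a party environment that matches one of these runtimes can install the Watson Machine Learning Python client together with the corresponding Federated Learning extras. The exact command depends on your package index configuration; the following is a sketch only:
pip install 'ibm-watson-machine-learning[fl-rt22.2-py3.10]'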
Learn more
[Hyperparameter definitions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-param.html)
Parent topic:[IBM Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html)
| # Frameworks, fusion methods, and Python versions #
These are the available machine learning model frameworks and model fusion methods for the Federated Learning model\. The software spec and frameworks are also compatible with specific Python versions\.
## Frameworks and fusion methods ##
This table lists supported software frameworks for building Federated Learning models\. For each framework you can see the supported model types, fusion methods, and hyperparameter options\.
<!-- <table> -->
Table 1\. Frameworks and fusion methods
| Frameworks | Model Type | Fusion Method | Description | Hyperparameters |
| ---------------------------------------------------------------------------------------------------------------------------------- | --------------- | ---------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------- |
| **TensorFlow** <br>Used to build neural networks\. <br>See [Save the Tensorflow model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html#tf-config)\. | Any | Simple Avg | Simplest aggregation that is used as a baseline where all parties' model updates are equally weighted\. | \- Rounds <br>\- Termination predicate *(Optional)* <br>\- Quorum *(Optional)* <br>\- Max Timeout *(Optional)* |
| | | Weighted Avg | Weights the average of updates based on the number of each party sample\. Use with training data sets of widely differing sizes\. | \- Rounds <br>\- Termination predicate *(Optional)* <br>\- Quorum *(Optional)* <br>\- Max Timeout *(Optional)* |
| **Scikit\-learn** <br>Used for predictive data analysis\. <br>See [Save the Scikit\-learn model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html#sklearn-config)\. | Classification | Simple Avg | Simplest aggregation that is used as a baseline where all parties' model updates are equally weighted\. | \- Rounds <br>\- Termination predicate *(Optional)* |
| | | Weighted Avg | Weights the average of updates based on the number of each party sample\. Use with training data sets of widely differing sizes\. | \- Rounds <br>\- Termination predicate *(Optional)* |
| | Regression | Simple Avg | Simplest aggregation that is used as a baseline where all parties' model updates are equally weighted\. | \- Rounds |
| | | Weighted Avg | Weights the average of updates based on the number of each party sample\. Use with training data sets of widely differing sizes\. | \- Rounds |
| | XGBoost | XGBoost Classification | Use to build classification models that use XGBoost\. | \- Learning rate <br>\- Loss <br>\- Rounds <br>\- Number of classes |
| | | XGBoost Regression | Use to build regression models that use XGBoost\. | \- Learning rate <br>\- Rounds <br>\- Loss |
| | | K\-Means/SPAHM | Used to train KMeans (unsupervised learning) models when parties have heterogeneous data sets\. | \- Max Iter <br>\- N cluster |
| **Pytorch** <br>Used for training neural network models\. <br>See [Save the Pytorch model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html#pytorch)\. | Any | Simple Avg | Simplest aggregation that is used as a baseline where all parties' model updates are equally weighted\. | \- Rounds <br>\- Epochs <br>\- Quorum *(Optional)* <br>\- Max Timeout *(Optional)* |
| | Neural Networks | Probabilistic Federated Neural Matching (PFNM) | Communication\-efficient method for fully connected neural networks when parties have heterogeneous data sets\. | \- Rounds <br>\- Termination accuracy *(Optional)* <br>\- Epochs <br>\- sigma <br>\- sigma0 <br>\- gamma <br>\- iters |
<!-- </table ""> -->
## Software specifications and Python version by framework ##
This table lists the software spec and Python versions available for each framework\.
<!-- <table> -->
Software specifications and Python version by framework
| Watson Studio frameworks | Python version | Software Spec | Python Client Extras | Framework package |
| ------------------------ | -------------- | ----------------------- | -------------------- | --------------------- |
| scikit\-learn | 3\.10 | runtime\-22\.2\-py3\.10 | fl\-rt22\.2\-py3\.10 | scikit\-learn 1\.1\.1 |
| Tensorflow | 3\.10 | runtime\-22\.2\-py3\.10 | fl\-rt22\.2\-py3\.10 | tensorflow 2\.9\.2 |
| PyTorch | 3\.10 | runtime\-22\.2\-py3\.10 | fl\-rt22\.2\-py3\.10 | torch 1\.12\.1 |
<!-- </table ""> -->
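For example, a party environment that matches one of these runtimes can install the Watson Machine Learning Python client together with the corresponding Federated Learning extras\. The exact command depends on your package index configuration; the following is a sketch only:
pip install 'ibm-watson-machine-learning[fl-rt22.2-py3.10]'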
## Learn more ##
[Hyperparameter definitions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-param.html)
**Parent topic:**[IBM Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html)
<!-- </article "role="article" "> -->
|
ADEB3C4BA4949F2C87919D5493B71B67028B76EE | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-get-started.html?context=cdpaas&locale=en | Get started | Get started
Federated Learning is appropriate for any situation where different entities from different geographical locations or Cloud providers want to train an analytical model without sharing their data.
To get started with Federated Learning, choose from these options:
* Familiarize yourself with the key concepts and [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html).
* Review the [architecture](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-arch.html) for creating a Federated Learning experiment.
* Follow a tutorial for step-by-step instructions for creating a [Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html) or review samples.
Learn more
* [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html)
* [Federated Learning architecture](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-arch.html)
Parent topic:[IBM Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html)
| # Get started #
Federated Learning is appropriate for any situation where different entities from different geographical locations or Cloud providers want to train an analytical model without sharing their data\.
To get started with Federated Learning, choose from these options:
<!-- <ul> -->
* Familiarize yourself with the key concepts and [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html)\.
* Review the [architecture](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-arch.html) for creating a Federated Learning experiment\.
* Follow a tutorial for step\-by\-step instructions for creating a [Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html) or review samples\.
<!-- </ul> -->
## Learn more ##
<!-- <ul> -->
* [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html)
* [Federated Learning architecture](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-arch.html)
<!-- </ul> -->
**Parent topic:**[IBM Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html)
<!-- </article "role="article" "> -->
|
51426DCF985B97AF6172727AFCF353A481591560 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-handler.html?context=cdpaas&locale=en | Create the data handler | Create the data handler
Each party in a Federated Learning experiment must get a data handler to process their data. You or a data scientist must create the data handler. A data handler is a Python class that loads and transforms data so that all data for the experiment is in a consistent format.
About the data handler class
The data handler performs the following functions:
* Accesses the data that is required to train the model. For example, reads data from a CSV file into a Pandas data frame.
* Pre-processes the data so data is in a consistent format across all parties. Some example cases are as follows:
* The Date column might be stored as a time epoch or timestamp.
* The Country column might be encoded or abbreviated.
* The data handler ensures that the data formatting is in agreement.
* Optional: feature engineer as needed.
The following illustration shows how a data handler is used to process data and make it consumable by the experiment:

Data handler template
A general data handler template is as follows:
# your import statements
from ibmfl.data.data_handler import DataHandler

class MyDataHandler(DataHandler):
    """
    Data handler for your dataset.
    """

    def __init__(self, data_config=None):
        super().__init__()
        self.file_name = None
        if data_config is not None:
            # This can be any string field.
            # For example, if your data set is in csv format,
            # <your_data_file_type> can be "CSV", ".csv", "csv", "csv_file" and more.
            if '<your_data_file_type>' in data_config:
                self.file_name = data_config['<your_data_file_type>']
            # extract other additional parameters from info if any.

        # load and preprocess the training and testing data
        self.load_and_preprocess_data()
        """
        # Example:
        # (self.x_train, self.y_train), (self.x_test, self.y_test) = self.load_dataset()
        """

    def load_and_preprocess_data(self):
        """
        Loads and pre-processes local datasets,
        and updates self.x_train, self.y_train, self.x_test, self.y_test.

        # Example:
        # return (self.x_train, self.y_train), (self.x_test, self.y_test)
        """
        pass

    def get_data(self):
        """
        Gets the prepared training and testing data.

        :return: ((x_train, y_train), (x_test, y_test))  # most built-in training modules expect data returned in this format
        :rtype: tuple

        This function should be as brief as possible. Any pre-processing operations should be performed in a separate function and not inside get_data(), especially computationally expensive ones.

        # Example:
        # X, y = load_somedata()
        # x_train, x_test, y_train, y_test = \
        #     train_test_split(X, y, test_size=TEST_SIZE, random_state=RANDOM_STATE)
        # return (x_train, y_train), (x_test, y_test)
        """
        pass

    def preprocess(self, X, y):
        pass
Parameters
* your_data_file_type: This can be any string field. For example, if your data set is in csv format, your_data_file_type can be "CSV", ".csv", "csv", "csv_file" and more.
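For example, a minimal concrete data handler for a CSV data set might look like the following sketch. The class name, the csv_file key, and the label column are illustrative assumptions rather than names defined by the Federated Learning library:
import pandas as pd
from sklearn.model_selection import train_test_split
from ibmfl.data.data_handler import DataHandler

class MyCsvDataHandler(DataHandler):
    """Loads a local CSV file and splits it into training and test sets."""

    def __init__(self, data_config=None):
        super().__init__()
        self.file_name = None
        if data_config is not None and 'csv_file' in data_config:
            self.file_name = data_config['csv_file']
        self.load_and_preprocess_data()

    def load_and_preprocess_data(self):
        df = pd.read_csv(self.file_name)
        # Assumes the training target is stored in a column named 'label'.
        X = df.drop(columns=['label']).values
        y = df['label'].values
        self.x_train, self.x_test, self.y_train, self.y_test = train_test_split(
            X, y, test_size=0.2, random_state=42)

    def get_data(self):
        return (self.x_train, self.y_train), (self.x_test, self.y_test)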
Return a data generator defined by Keras or Tensorflow
The following is a code example that needs to be included as part of the get_data function to return a data generator defined by Keras or Tensorflow:
train_gen = ImageDataGenerator(rotation_range=8,
                               width_shift_range=0.08,
                               shear_range=0.3,
                               height_shift_range=0.08,
                               zoom_range=0.08)
train_datagenerator = train_gen.flow(
    x_train, y_train, batch_size=64)
return train_datagenerator
Data handler examples
* [MNIST Keras data handler](https://github.com/IBMDataScience/sample-notebooks/blob/master/Files/mnist_keras_data_handler.py)
* [Adult XGBoost data handler](https://github.com/IBMDataScience/sample-notebooks/blob/master/Files/adult_sklearn_data_handler.py)
Parent topic:[Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html)
| # Create the data handler #
Each party in a Federated Learning experiment must get a data handler to process their data\. You or a data scientist must create the data handler\. A data handler is a Python class that loads and transforms data so that all data for the experiment is in a consistent format\.
## About the data handler class ##
The data handler performs the following functions:
<!-- <ul> -->
* Accesses the data that is required to train the model\. For example, reads data from a CSV file into a Pandas data frame\.
* Pre\-processes the data so data is in a consistent format across all parties\. Some example cases are as follows:
<!-- <ul> -->
* The **Date** column might be stored as a time epoch or timestamp.
* The **Country** column might be encoded or abbreviated.
<!-- </ul> -->
* The data handler ensures that the data formatting is in agreement\.
<!-- <ul> -->
* *Optional:* feature engineer as needed.
<!-- </ul> -->
<!-- </ul> -->
The following illustration shows how a data handler is used to process data and make it consumable by the experiment:

## Data handler template ##
A general data handler template is as follows:
# your import statements
from ibmfl.data.data_handler import DataHandler

class MyDataHandler(DataHandler):
    """
    Data handler for your dataset.
    """

    def __init__(self, data_config=None):
        super().__init__()
        self.file_name = None
        if data_config is not None:
            # This can be any string field.
            # For example, if your data set is in `csv` format,
            # <your_data_file_type> can be "CSV", ".csv", "csv", "csv_file" and more.
            if '<your_data_file_type>' in data_config:
                self.file_name = data_config['<your_data_file_type>']
            # extract other additional parameters from `info` if any.

        # load and preprocess the training and testing data
        self.load_and_preprocess_data()
        """
        # Example:
        # (self.x_train, self.y_train), (self.x_test, self.y_test) = self.load_dataset()
        """

    def load_and_preprocess_data(self):
        """
        Loads and pre-processes local datasets,
        and updates self.x_train, self.y_train, self.x_test, self.y_test.

        # Example:
        # return (self.x_train, self.y_train), (self.x_test, self.y_test)
        """
        pass

    def get_data(self):
        """
        Gets the prepared training and testing data.

        :return: ((x_train, y_train), (x_test, y_test))  # most built-in training modules expect data returned in this format
        :rtype: `tuple`

        This function should be as brief as possible. Any pre-processing operations should be performed in a separate function and not inside get_data(), especially computationally expensive ones.

        # Example:
        # X, y = load_somedata()
        # x_train, x_test, y_train, y_test = \
        #     train_test_split(X, y, test_size=TEST_SIZE, random_state=RANDOM_STATE)
        # return (x_train, y_train), (x_test, y_test)
        """
        pass

    def preprocess(self, X, y):
        pass
**Parameters**
<!-- <ul> -->
* `your_data_file_type`: This can be any string field\. For example, if your data set is in `csv` format, `your_data_file_type` can be "CSV", "\.csv", "csv", "csv\_file" and more\.
<!-- </ul> -->
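For example, a minimal concrete data handler for a CSV data set might look like the following sketch\. The class name, the `csv_file` key, and the `label` column are illustrative assumptions rather than names defined by the Federated Learning library:
import pandas as pd
from sklearn.model_selection import train_test_split
from ibmfl.data.data_handler import DataHandler

class MyCsvDataHandler(DataHandler):
    """Loads a local CSV file and splits it into training and test sets."""

    def __init__(self, data_config=None):
        super().__init__()
        self.file_name = None
        if data_config is not None and 'csv_file' in data_config:
            self.file_name = data_config['csv_file']
        self.load_and_preprocess_data()

    def load_and_preprocess_data(self):
        df = pd.read_csv(self.file_name)
        # Assumes the training target is stored in a column named 'label'.
        X = df.drop(columns=['label']).values
        y = df['label'].values
        self.x_train, self.x_test, self.y_train, self.y_test = train_test_split(
            X, y, test_size=0.2, random_state=42)

    def get_data(self):
        return (self.x_train, self.y_train), (self.x_test, self.y_test)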
### Return a data generator defined by Keras or Tensorflow ###
The following is a code example that needs to be included as part of the `get_data` function to return a data generator defined by Keras or Tensorflow:
train_gen = ImageDataGenerator(rotation_range=8,
width_shift_range=0.08,
shear_range=0.3,
height_shift_range=0.08,
zoom_range=0.08)
train_datagenerator = train_gen.flow(
x_train, y_train, batch_size=64)
return train_datagenerator
## Data handler examples ##
<!-- <ul> -->
* [MNIST Keras data handler](https://github.com/IBMDataScience/sample-notebooks/blob/master/Files/mnist_keras_data_handler.py)
* [Adult XGBoost data handler](https://github.com/IBMDataScience/sample-notebooks/blob/master/Files/adult_sklearn_data_handler.py)
<!-- </ul> -->
**Parent topic:**[Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html)
<!-- </article "role="article" "> -->
|
C48E63F001DFAE875E1C82B5D163B7A2C9961CE2 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-homo.html?context=cdpaas&locale=en | Applying homomorphic encryption for security and privacy | Applying homomorphic encryption for security and privacy
Federated learning supports homomorphic encryption as an added measure of security for federated training data. Homomorphic encryption is a form of public key cryptography that enables computations on the encrypted data without first decrypting it, meaning the data can be used in modeling without exposing it to the risk of discovery.
With homomorphic encryption, the results of the computations remain in encrypted form and when decrypted, result in an output that is the same as the output produced with computations performed on unencrypted data. It uses a public key for encryption and a private key for decryption.
How it works with Federated Learning
Homomorphic encryption is an optional encryption method to add additional security and privacy to a Federated Learning experiment. When homomorphic encryption is applied in a Federated Learning experiment, the parties send their homomorphically encrypted model updates to the aggregator. The aggregator does not have the private key and can only see the homomorphically encrypted model updates. For example, the aggregator cannot reverse engineer the model updates to discover information on the parties' training data. The aggregator fuses the model updates in their encrypted form which results in an encrypted aggregated model. Then the aggregator sends the encrypted aggregated model to the participating parties who can use their private key for decryption and continue with the next round of training. Only the participating parties can decrypt model data.
Supported frameworks and fusion methods
Fully Homomorphic Encryption (FHE) supports the simple average fusion method for these model frameworks:
* Tensorflow
* Pytorch
* Scikit-learn classification
* Scikit-learn regression
Before you begin
To get started with using homomorphic encryption, ensure that your experiment meets the following requirements:
* The hardware spec must be minimum small. Depending on the level of encryption that you apply, you might need a larger hardware spec to accommodate the resource consumption caused by more powerful data encryption. See the encryption level table in Configuring the aggregator.
* The software spec is fl-rt22.2-py3.10.
* FHE is supported in Python client version 1.0.263 or later. All parties must use the same Python client version.
Requirements for the parties
Each party must:
* Run on a Linux x86 system.
* Configure with a root certificate that identifies a certificate authority that is uniform to all parties.
* Configure an RSA public and private key pair with attributes described in the following table.
* Configure with a certificate of the party issued by the certificate authority. The RSA public key must be included in the party's certificate.
Note: You can also choose to use self-signed certificates.
Homomorphic public and private encryption keys are generated and distributed automatically and securely among the parties for each experiment. Only the parties participating in an experiment have access to the private key generated for the experiment. To support the automatic generation and distribution mechanism, the parties must be configured with the certificates and RSA keys specified previously.
RSA key requirements
Table 1. RSA Key Requirements
Attribute Requirement
Key size 4096 bit
Public exponent 65537
Password None
Hash algorithm SHA256
File format The key and certificate files must be in "PEM" format
Configuring the aggregator (admin)
As you create a Federated Learning experiment, follow these steps:
1. In the Configure tab, toggle "Enable homomorphic encryption".
2. Choose small or above for Hardware specification. Depending on the level of encryption that you apply, you might need a larger hardware spec to accommodate the resource consumption for homomorphic encryption.
3. Ensure that you upload an unencrypted initial model when selecting the model file for Model specification.
4. Select "Simple average (encrypted)" for Fusion method. Click Next.
5. Check Show advanced in the Define hyperparameters tab.
6. Select the level of encryption in Encryption level.
Higher encryption levels increase security and precision, and require higher resource consumption (e.g. computation, memory, network bandwidth). The default is encryption level 1.
See the following table for description of the encryption levels:
Increasing encryption level and security and precision
Level Security Precision
1 High Good
2 High High
3 Very high Good
4 Very high High
Security is the strength of the encryption, typically measured by the number of operations that an attacker must perform to break the encryption.
Precision is the precision of the encryption system's outcomes. Higher precision levels reduce loss of accuracy of the model due to the encryption.
Connecting to the aggregator (party)
The following steps only show the configuration needed for homomorphic encryption. For a step-by-step tutorial of using homomorphic encryption in Federated Learning, see [FHE sample](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-fhe-sample.html).
To see how to create a general end-to-end party connector script, see [Connect to the aggregator (party)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-conn.html).
1. Install the Python client with FHE with the following command:
pip install 'ibm_watson_machine_learning[fl-rt23.1-py3.10,fl-crypto]'
2. Configure the party as follows:
party_config = {
"local_training": {
"info": {
"crypto": {
"key_manager": {
"key_mgr_info": {
"distribution": {
"ca_cert_file_path": "path of the root certificate file identifying the certificate authority",
"my_cert_file_path": "path of the certificate file of the party issued by the certificate authority",
"asym_key_file_path": "path of the RSA key file of the party"
}
}
}
}
}
}
}
3. Run the party connector script after configuration.
Additional resources
Parent topic:[Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html)
| # Applying homomorphic encryption for security and privacy #
Federated learning supports homomorphic encryption as an added measure of security for federated training data\. Homomorphic encryption is a form of public key cryptography that enables computations on the encrypted data without first decrypting it, meaning the data can be used in modeling without exposing it to the risk of discovery\.
With homomorphic encryption, the results of the computations remain in encrypted form and when decrypted, result in an output that is the same as the output produced with computations performed on unencrypted data\. It uses a public key for encryption and a private key for decryption\.
## How it works with Federated Learning ##
Homomorphic encryption is an optional encryption method to add additional security and privacy to a Federated Learning experiment\. When homomorphic encryption is applied in a Federated Learning experiment, the parties send their homomorphically encrypted model updates to the aggregator\. The aggregator does not have the private key and can only see the homomorphically encrypted model updates\. For example, the aggregator cannot reverse engineer the model updates to discover information on the parties' training data\. The aggregator fuses the model updates in their encrypted form which results in an encrypted aggregated model\. Then the aggregator sends the encrypted aggregated model to the participating parties who can use their private key for decryption and continue with the next round of training\. Only the participating parties can decrypt model data\.
## Supported frameworks and fusion methods ##
Fully Homomorphic Encryption (FHE) supports the simple average fusion method for these model frameworks:
<!-- <ul> -->
* Tensorflow
* Pytorch
* Scikit\-learn classification
* Scikit\-learn regression
<!-- </ul> -->
## Before you begin ##
To get started with using homomorphic encryption, ensure that your experiment meets the following requirements:
<!-- <ul> -->
* The hardware spec must be minimum *small*\. Depending on the level of encryption that you apply, you might need a larger hardware spec to accommodate the resource consumption caused by more powerful data encryption\. See the encryption level table in **Configuring the aggregator**\.
* The software spec is `fl-rt22.2-py3.10`\.
* FHE is supported in Python client version 1\.0\.263 or later\. All parties must use the same Python client version\.
<!-- </ul> -->
### Requirements for the parties ###
Each party must:
<!-- <ul> -->
* Run on a Linux x86 system\.
* Configure with a root certificate that identifies a certificate authority that is uniform to all parties\.
* Configure an RSA public and private key pair with attributes described in the following table\.
* Configure with a certificate of the party issued by the certificate authority\. The RSA public key must be included in the party's certificate\.
<!-- </ul> -->
Note: You can also choose to use self\-signed certificates\.
Homomorphic public and private encryption keys are generated and distributed automatically and securely among the parties for each experiment\. Only the parties participating in an experiment have access to the private key generated for the experiment\. To support the automatic generation and distribution mechanism, the parties must be configured with the certificates and RSA keys specified previously\.
### RSA key requirements ###
<!-- <table> -->
Table 1\. RSA Key Requirements
| Attribute | Requirement |
| --------------- | ----------------------------------------------------- |
| Key size | 4096 bit |
| Public exponent | 65537 |
| Password | None |
| Hash algorithm | SHA256 |
| File format | The key and certificate files must be in "PEM" format |
<!-- </table ""> -->
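The keys and certificate normally come from your organization's certificate authority\. As one way to produce files that satisfy these attributes when testing with self\-signed certificates, the following sketch uses the Python `cryptography` package; the subject name and file names are placeholders, not values that Federated Learning requires\.
# Illustrative only: generate a 4096-bit RSA key (public exponent 65537, no password)
# and a self-signed certificate, both in PEM format. File names are placeholders.
import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID
key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"party-1")])
cert = (
x509.CertificateBuilder()
.subject_name(name)
.issuer_name(name)
.public_key(key.public_key())
.serial_number(x509.random_serial_number())
.not_valid_before(datetime.datetime.utcnow())
.not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
.sign(key, hashes.SHA256())  # SHA256 hash algorithm
)
with open("party_rsa_key.pem", "wb") as f:
f.write(key.private_bytes(
serialization.Encoding.PEM,
serialization.PrivateFormat.TraditionalOpenSSL,
serialization.NoEncryption()))  # no password, as required
with open("party_cert.pem", "wb") as f:
f.write(cert.public_bytes(serialization.Encoding.PEM))
With self\-signed certificates, the same certificate file can serve as both the root certificate and the party certificate in the party configuration that is shown later on this page\.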
## Configuring the aggregator (admin) ##
As you create a Federated Learning experiment, follow these steps:
<!-- <ol> -->
1. In the **Configure** tab, toggle "Enable homomorphic encryption"\.
2. Choose *small* or above for *Hardware specification*\. Depending on the level of encryption that you apply, you might need a larger hardware spec to accommodate the resource consumption for homomorphic encryption\.
3. Ensure that you upload an unencrypted initial model when selecting the model file for *Model specification*\.
4. Select "Simple average (encrypted)" for *Fusion method*\. Click **Next**\.
5. Check *Show advanced* in the **Define hyperparameters** tab\.
6. Select the level of encryption in *Encryption level*\.
Higher encryption levels increase security and precision, and require higher resource consumption (e.g. computation, memory, network bandwidth). The default is encryption level 1.
See the following table for description of the encryption levels:
<!-- </ol> -->
<!-- <table> -->
Increasing encryption level and security and precision
| Level | Security | Precision |
| ----- | --------- | --------- |
| 1 | High | Good |
| 2 | High | High |
| 3 | Very high | Good |
| 4 | Very high | High |
<!-- </table ""> -->
*Security* is the strength of the encryption, typically measured by the number of operations that an attacker must perform to break the encryption\.
*Precision* is the precision of the encryption system's outcomes\. Higher precision levels reduce loss of accuracy of the model due to the encryption\.
## Connecting to the aggregator (party) ##
The following steps only show the configuration needed for homomorphic encryption\. For a step\-by\-step tutorial of using homomorphic encryption in Federated Learning, see [FHE sample](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-fhe-sample.html)\.
To see how to create a general end\-to\-end party connector script, see [Connect to the aggregator (party)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-conn.html)\.
<!-- <ol> -->
1. Install the Python client with FHE with the following command:
`pip install 'ibm_watson_machine_learning[fl-rt23.1-py3.10,fl-crypto]'`
2. Configure the party as follows:
party_config = {
"local_training": {
"info": {
"crypto": {
"key_manager": {
"key_mgr_info": {
"distribution": {
"ca_cert_file_path": "path of the root certificate file identifying the certificate authority",
"my_cert_file_path": "path of the certificate file of the party issued by the certificate authority",
"asym_key_file_path": "path of the RSA key file of the party"
}
}
}
}
}
}
}
3. Run the party connector script after configuration\.
<!-- </ol> -->
### Additional resources ###
**Parent topic:**[Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html)
<!-- </article "role="article" "> -->
|
4CD539B8153216F80B26729A35AD4CD04A9C27DB | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=en | Creating the initial model | Creating the initial model
Parties can create and save the initial model before training by following a set of examples.
* [Save the Tensorflow model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=en#tf-config)
* [Save the Scikit-learn model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=en#sklearn-config)
* [Save the Pytorch model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=en#pytorch)
Consider the configuration examples that match your model type.
Save the Tensorflow model
import tensorflow as tf
from tensorflow.keras import *
from tensorflow.keras.layers import *
import numpy as np
import os
class MyModel(Model):
def __init__(self):
super(MyModel, self).__init__()
self.conv1 = Conv2D(32, 3, activation='relu')
self.flatten = Flatten()
self.d1 = Dense(128, activation='relu')
self.d2 = Dense(10)
def call(self, x):
x = self.conv1(x)
x = self.flatten(x)
x = self.d1(x)
return self.d2(x)
# Create an instance of the model
model = MyModel()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True)
optimizer = tf.keras.optimizers.Adam()
acc = tf.keras.metrics.SparseCategoricalAccuracy(name='accuracy')
model.compile(optimizer=optimizer, loss=loss_object, metrics=[acc])
img_rows, img_cols = 28, 28
input_shape = (None, img_rows, img_cols, 1)
model.compute_output_shape(input_shape=input_shape)
dir = "./model_architecture"
if not os.path.exists(dir):
os.makedirs(dir)
model.save(dir)
If you choose Tensorflow as the model framework, you need to save a Keras model in the SavedModel format. A Keras model can be saved in SavedModel format by using tf.keras.Model.save().
To compress your files, run the command zip -r mymodel.zip model_architecture. The contents of your .zip file must contain:
mymodel.zip
└── model_architecture
├── assets
├── keras_metadata.pb
├── saved_model.pb
└── variables
├── variables.data-00000-of-00001
└── variables.index
Save the Scikit-learn model
* [SKLearn classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=en#sk-class)
* [SKLearn regression](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=en#sk-reg)
* [SKLearn Kmeans](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=en#sk-k)
SKLearn classification
# SKLearn classification
from sklearn.linear_model import SGDClassifier
import numpy as np
import joblib
model = SGDClassifier(loss='log', penalty='l2')
model.classes_ = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
# You must specify the class label for IBM Federated Learning using model.classes_. Class labels must be contained in a numpy array.
# In the example, there are 10 classes.
joblib.dump(model, "./model_architecture.pickle")
SKLearn regression
# Sklearn regression
from sklearn.linear_model import SGDRegressor
import pickle
model = SGDRegressor(loss='huber', penalty='l2')
with open("./model_architecture.pickle", 'wb') as f:
pickle.dump(model, f)
SKLearn Kmeans
# SKLearn Kmeans
from sklearn.cluster import KMeans
import joblib
model = KMeans()
joblib.dump(model, "./model_architecture.pickle")
You need to create a .zip file that contains your model in pickle format by running the command zip mymodel.zip model_architecture.pickle. The contents of your .zip file must contain:
mymodel.zip
└── model_architecture.pickle
Save the PyTorch model
import torch
import torch.nn as nn
model = nn.Sequential(
nn.Flatten(start_dim=1, end_dim=-1),
nn.Linear(in_features=784, out_features=256, bias=True),
nn.ReLU(),
nn.Linear(in_features=256, out_features=256, bias=True),
nn.ReLU(),
nn.Linear(in_features=256, out_features=256, bias=True),
nn.ReLU(),
nn.Linear(in_features=256, out_features=100, bias=True),
nn.ReLU(),
nn.Linear(in_features=100, out_features=50, bias=True),
nn.ReLU(),
nn.Linear(in_features=50, out_features=10, bias=True),
nn.LogSoftmax(dim=1),
).double()
torch.save(model, "./model_architecture.pt")
You need to create a .zip file containing your model in pickle format. Run the command zip mymodel.zip model_architecture.pt. The contents of your .zip file should contain:
mymodel.zip
└── model_architecture.pt
Parent topic:[Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html)
| # Creating the initial model #
Parties can create and save the initial model before training by following a set of examples\.
<!-- <ul> -->
* [Save the Tensorflow model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=en#tf-config)
* [Save the Scikit\-learn model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=en#sklearn-config)
* [Save the Pytorch model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=en#pytorch)
<!-- </ul> -->
Consider the configuration examples that match your model type\.
## Save the Tensorflow model ##
import tensorflow as tf
from tensorflow.keras import *
from tensorflow.keras.layers import *
import numpy as np
import os
class MyModel(Model):
def __init__(self):
super(MyModel, self).__init__()
self.conv1 = Conv2D(32, 3, activation='relu')
self.flatten = Flatten()
self.d1 = Dense(128, activation='relu')
self.d2 = Dense(10)
def call(self, x):
x = self.conv1(x)
x = self.flatten(x)
x = self.d1(x)
return self.d2(x)
# Create an instance of the model
model = MyModel()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True)
optimizer = tf.keras.optimizers.Adam()
acc = tf.keras.metrics.SparseCategoricalAccuracy(name='accuracy')
model.compile(optimizer=optimizer, loss=loss_object, metrics=[acc])
img_rows, img_cols = 28, 28
input_shape = (None, img_rows, img_cols, 1)
model.compute_output_shape(input_shape=input_shape)
dir = "./model_architecture"
if not os.path.exists(dir):
os.makedirs(dir)
model.save(dir)
If you choose Tensorflow as the model framework, you need to save a Keras model in the `SavedModel` format\. A Keras model can be saved in `SavedModel` format by using `tf.keras.Model.save()`\.
To compress your files, run the command `zip -r mymodel.zip model_architecture`\. The contents of your `.zip` file must contain:
mymodel.zip
└── model_architecture
├── assets
├── keras_metadata.pb
├── saved_model.pb
└── variables
├── variables.data-00000-of-00001
└── variables.index
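If the `zip` command\-line utility is not available, the same archive can be produced from Python\. The following is a sketch that assumes the `model_architecture` directory is in the current working directory\.
# Equivalent to `zip -r mymodel.zip model_architecture`, using only the standard library.
import shutil
shutil.make_archive("mymodel", "zip", root_dir=".", base_dir="model_architecture")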
## Save the Scikit\-learn model ##
<!-- <ul> -->
* [SKLearn classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=en#sk-class)
* [SKLearn regression](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=en#sk-reg)
* [SKLearn Kmeans](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=en#sk-k)
<!-- </ul> -->
### SKLearn classification ###
# SKLearn classification
from sklearn.linear_model import SGDClassifier
import numpy as np
import joblib
model = SGDClassifier(loss='log', penalty='l2')
model.classes_ = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
# You must specify the class label for IBM Federated Learning using model.classes_. Class labels must be contained in a numpy array.
# In the example, there are 10 classes.
joblib.dump(model, "./model_architecture.pickle")
### SKLearn regression ###
# Sklearn regression
from sklearn.linear_model import SGDRegressor
import pickle
model = SGDRegressor(loss='huber', penalty='l2')
with open("./model_architecture.pickle", 'wb') as f:
pickle.dump(model, f)
### SKLearn Kmeans ###
# SKLearn Kmeans
from sklearn.cluster import KMeans
import joblib
model = KMeans()
joblib.dump(model, "./model_architecture.pickle")
You need to create a `.zip` file that contains your model in pickle format by running the command `zip mymodel.zip model_architecture.pickle`\. The contents of your `.zip` file must contain:
mymodel.zip
└── model_architecture.pickle
## Save the PyTorch model ##
import torch
import torch.nn as nn
model = nn.Sequential(
nn.Flatten(start_dim=1, end_dim=-1),
nn.Linear(in_features=784, out_features=256, bias=True),
nn.ReLU(),
nn.Linear(in_features=256, out_features=256, bias=True),
nn.ReLU(),
nn.Linear(in_features=256, out_features=256, bias=True),
nn.ReLU(),
nn.Linear(in_features=256, out_features=100, bias=True),
nn.ReLU(),
nn.Linear(in_features=100, out_features=50, bias=True),
nn.ReLU(),
nn.Linear(in_features=50, out_features=10, bias=True),
nn.LogSoftmax(dim=1),
).double()
torch.save(model, "./model_architecture.pt")
You need to create a `.zip` file containing your model in pickle format\. Run the command `zip mymodel.zip model_architecture.pt`\. The contents of your `.zip` file should contain:
mymodel.zip
└── model_architecture.pt
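Before zipping, a quick sanity check that the saved architecture loads back and accepts the expected input shape can catch mistakes early\. The following sketch is illustrative only and assumes the 28 x 28 input implied by the first `Linear` layer above\.
# Illustrative check only: reload the saved architecture and run a dummy forward pass.
import torch
reloaded = torch.load("./model_architecture.pt")
dummy = torch.zeros(1, 28, 28, dtype=torch.float64)  # the model was built with .double()
print(reloaded(dummy).shape)  # expected: torch.Size([1, 10])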
**Parent topic:**[Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html)
<!-- </article "role="article" "> -->
|
3ACF4AABD6BE9C3BC0E0A363C3BFFFDD4A37B442 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-mon.html?context=cdpaas&locale=en | Monitoring the experiment and saving the model | Monitoring the experiment and saving the model
Any party or admin with collaborator access to the experiment can monitor the experiment and save a copy of the model.
As the experiment runs, you can check the progress of the experiment. After the training is complete, you can view your results, save and deploy the model, and then test the model with new data.
Monitoring the experiment
When all parties run the party connector script, the experiment starts training automatically. As the training runs, you can view a dynamic diagram of the training progress. For each round of training, you can view the four stages of a training round:
* Sending model: Federated Learning sends the model metrics to each party.
* Training: The process of training the data locally. Each party trains to produce a local model that is fused. No data is exchanged between parties.
* Receiving models: After training is complete, each party sends its local model to the aggregator. The data is not sent and remains private.
* Aggregating: The aggregator combines the models that are sent by each of the remote parties to create an aggregated model.
Saving your model
When the training is complete, a chart that displays the model accuracy over each round of training is drawn. Hover over the points on the chart for more information on a single point's exact metrics.
A Training rounds table shows details for each training round. The table displays the participating parties' average accuracy of their model training for each round.

When you are done viewing, click Save model to project to save the Federated Learning model to your project.
Rerun the experiment
You can rerun the experiment as many times as you need in your project.
Note: If you encounter errors when rerunning an experiment, see [Troubleshoot](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-troubleshoot.html) for more details.
Deploying your model
After you save your Federated Learning model, you can deploy and score the model like other machine learning models in a Watson Studio platform.
See [Deploying models](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html) for more details.
Parent topic:[Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html)
| # Monitoring the experiment and saving the model #
Any party or admin with collaborator access to the experiment can monitor the experiment and save a copy of the model\.
As the experiment runs, you can check the progress of the experiment\. After the training is complete, you can view your results, save and deploy the model, and then test the model with new data\.
## Monitoring the experiment ##
When all parties run the party connector script, the experiment starts training automatically\. As the training runs, you can view a dynamic diagram of the training progress\. For each round of training, you can view the four stages of a training round:
<!-- <ul> -->
* **Sending model**: Federated Learning sends the model metrics to each party\.
* **Training**: The process of training the data locally\. Each party trains to produce a local model that is fused\. No data is exchanged between parties\.
* **Receiving models**: After training is complete, each party sends its local model to the aggregator\. The data is not sent and remains private\.
* **Aggregating**: The aggregator combines the models that are sent by each of the remote parties to create an aggregated model\.
<!-- </ul> -->
## Saving your model ##
When the training is complete, a chart that displays the model accuracy over each round of training is drawn\. Hover over the points on the chart for more information on a single point's exact metrics\.
A **Training rounds** table shows details for each training round\. The table displays the participating parties' average accuracy of their model training for each round\.

When you are done viewing, click **Save model to project** to save the Federated Learning model to your project\.
### Rerun the experiment ###
You can rerun the experiment as many times as you need in your project\.
Note: If you encounter errors when rerunning an experiment, see [Troubleshoot](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-troubleshoot.html) for more details\.
### Deploying your model ###
After you save your Federated Learning model, you can deploy and score the model like other machine learning models in a Watson Studio platform\.
See [Deploying models](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html) for more details\.
**Parent topic:**[Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html)
<!-- </article "role="article" "> -->
|
4B16740C786C0846194987998DAD887250BE95BF | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-param.html?context=cdpaas&locale=en | Hyperparameter definitions | Hyperparameter definitions
Definitions of hyperparameters used in the experiment training. One or more of these hyperparameter options might be used, depending on your framework and fusion method.
Hyperparameter definitions
Hyperparameters Description
Rounds Int value. The number of training iterations to complete between the aggregator and the remote systems.
Termination accuracy (Optional) Float value. Takes model_accuracy and compares it to a numerical value. If the condition is satisfied, then the experiment finishes early. <br> <br>For example, termination_predicate: accuracy >= 0.8 finishes the experiment when the mean of model accuracy for participating parties is greater than or equal to 80%. Currently, Federated Learning accepts one type of early termination condition (model accuracy) for classification models only.
Quorum (Optional) Float value. Proceeds with model training after the aggregator reaches a certain ratio of party responses. Takes a decimal value between 0 - 1. The default is 1. The model training starts only after party responses reach the indicated ratio value. <br>For example, setting this value to 0.5 starts the training after 50% of the registered parties responded to the aggregator call.
Max Timeout (Optional) Int value. Terminates the Federated Learning experiment if the waiting time for party responses exceeds this value in seconds. Takes a numerical value up to 43200. If this value in seconds passes and the quorum ratio is not reached, the experiment terminates. <br> <br>For example, max_timeout = 1000 terminates the experiment after 1000 seconds if the parties do not respond in that time.
Sketch accuracy vs privacy (Optional) Float value. Used with XGBoost training to control the relative accuracy of sketched data sent to the aggregator. Takes a decimal value between 0 and 1. Higher values will result in higher quality models but with a reduction in data privacy and increase in resource consumption.
Number of classes Int value. Number of target classes for the classification model. Required if "Loss" hyperparameter is: <br>- auto <br>- binary_crossentropy <br>- categorical_crossentropy <br>
Learning rate Decimal value. The learning rate, also known as shrinkage. This is used as a multiplicative factor for the leaves values.
Loss String value. The loss function to use in the boosting process. <br>- binary_crossentropy (also known as logistic loss) is used for binary classification. <br>- categorical_crossentropy is used for multiclass classification. <br>- auto chooses either loss function depending on the nature of the problem. <br>- least_squares is used for regression.
Max Iter Int value. The total number of passes over the local training data set to train a Scikit-learn model.
N cluster Int value. The number of clusters to form and the number of centroids to generate.
Epoch (Optional) Int value. The number of local training iterations to be performed by each remote party for each round. For example, if you set Rounds to 2 and Epochs to 5, all remote parties train locally 5 times before the model is sent to the aggregator. In round 2, the aggregator model is trained locally again by all parties 5 times and re-sent to the aggregator.
sigma Float value. Determines how far the local model neurons are allowed from the global model. A bigger value allows more matching and produces a smaller global model. Default value is 1.
sigma0 Float value. Defines the permitted deviation of the global network neurons. Default value is 1.
gamma Float value. Indian Buffet Process parameter that controls the expected number of features in each observation. Default value is 1.
Parent topic:[Frameworks, fusion methods, and Python versions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-frames.html)
| # Hyperparameter definitions #
Definitions of hyperparameters used in the experiment training\. One or more of these hyperparameter options might be used, depending on your framework and fusion method\.
<!-- <table> -->
Hyperparameter definitions
| Hyperparameters | Description |
| --------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Rounds | Int value\. The number of training iterations to complete between the aggregator and the remote systems\. |
| Termination accuracy *(Optional)* | Float value\. Takes `model_accuracy` and compares it to a numerical value\. If the condition is satisfied, then the experiment finishes early\. <br> <br>For example, `termination_predicate: accuracy >= 0.8` finishes the experiment when the mean of model accuracy for participating parties is greater than or equal to 80%\. Currently, Federated Learning accepts one type of early termination condition (model accuracy) for classification models only\. |
| Quorum *(Optional)* | Float value\. Proceeds with model training after the aggregator reaches a certain ratio of party responses\. Takes a decimal value between 0 \- 1\. The default is 1\. The model training starts only after party responses reach the indicated ratio value\. <br>For example, setting this value to 0\.5 starts the training after 50% of the registered parties responded to the aggregator call\. |
| Max Timeout *(Optional)* | Int value\. Terminates the Federated Learning experiment if the waiting time for party responses exceeds this value in seconds\. Takes a numerical value up to 43200\. If this value in seconds passes and the `quorum` ratio is not reached, the experiment terminates\. <br> <br>For example, `max_timeout = 1000` terminates the experiment after 1000 seconds if the parties do not respond in that time\. |
| Sketch accuracy vs privacy *(Optional)* | Float value\. Used with XGBoost training to control the relative accuracy of sketched data sent to the aggregator\. Takes a decimal value between 0 and 1\. Higher values will result in higher quality models but with a reduction in data privacy and increase in resource consumption\. |
| Number of classes | Int value\. Number of target classes for the classification model\. Required if "Loss" hyperparameter is: <br>\- `auto` <br>\- `binary_crossentropy` <br>\- `categorical_crossentropy` <br> |
| Learning rate | Decimal value\. The learning rate, also known as *shrinkage*\. This is used as a multiplicative factor for the leaves values\. |
| Loss | String value\. The loss function to use in the boosting process\. <br>\- `binary_crossentropy` (also known as logistic loss) is used for binary classification\. <br>\- `categorical_crossentropy` is used for multiclass classification\. <br>\- `auto` chooses either loss function depending on the nature of the problem\. <br>\- `least_squares` is used for regression\. |
| Max Iter | Int value\. The total number of passes over the local training data set to train a Scikit\-learn model\. |
| N cluster | Int value\. The number of clusters to form and the number of centroids to generate\. |
| Epoch *(Optional)* | Int value\. The number of local training iterations to be performed by each remote party for each round\. For example, if you set Rounds to 2 and Epochs to 5, all remote parties train locally 5 times before the model is sent to the aggregator\. In round 2, the aggregator model is trained locally again by all parties 5 times and re\-sent to the aggregator\. |
| sigma | Float value\. Determines how far the local model neurons are allowed from the global model\. A bigger value allows more matching and produces a smaller global model\. Default value is 1\. |
| sigma0 | Float value\. Defines the permitted deviation of the global network neurons\. Default value is 1\. |
| gamma | Float value\. Indian Buffet Process parameter that controls the expected number of features in each observation\. Default value is 1\. |
<!-- </table ""> -->
**Parent topic:**[Frameworks, fusion methods, and Python versions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-frames.html)
<!-- </article "role="article" "> -->
|
E0D36A6F5028FC5ED005E87FAF9F65F976E62A37 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-setup.html?context=cdpaas&locale=en | Set up your system | Set up your system
Before you can use IBM Federated Learning, ensure that you have the required hardware, software, and dependencies.
Core requirements by role
Each entity that participates in a Federated Learning experiment must meet the requirements for their role.
Admin software requirements
Designate an admin for the Federated Learning experiment. The admin must have:
* Access to the platform with Watson Studio and Watson Machine Learning enabled.
You must [create a Watson Machine Learning service instance](https://cloud.ibm.com/catalog/services/machine-learning).
* A [project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) for assembling the global model. You must [associate the Watson Machine Learning service instance with your project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html).
Party hardware and software requirements
Each party must have a system that meets these minimum requirements.
Note: Remote parties participating in the same Federated Learning experiment can use different hardware specs and architectures, as long as they each meet the minimum requirement.
Supported architectures
* x86 64-bit
* PPC
* Mac M-series
* 4 GB memory or greater
Supported environments
* Linux
* Mac OS/Unix
* Windows
Software dependencies
* A supported [Python version and a machine learning framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-frames.html).
* The Watson Machine Learning Python client.
1. If you are using Linux, run pip install 'ibm-watson-machine-learning[fl-rt22.2-py3.10]'.
2. If you are using Mac OS with M-series CPU and Conda, download the installation script and then run ./install_fl_rt22.2_macos.sh <name for new conda environment>.
Network requirements
An outbound connection from the remote party to the aggregator is required. Parties can use firewalls that restrict internal connections with each other.
Data sources requirements
Data must comply with these requirements.
* Data must be in a directory or storage repository that is accessible to the party that uses them.
* Each data source for a federated model must have the same features. IBM Federated Learning supports horizontal federated learning only.
* Data must be in a readable format, but the formats can vary by data source. Suggested formats include:
* Hive
* Excel
* CSV
* XML
* Database
Parent topic:[Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html)
| # Set up your system #
Before you can use IBM Federated Learning, ensure that you have the required hardware, software, and dependencies\.
## Core requirements by role ##
Each entity that participates in a Federated Learning experiment must meet the requirements for their role\.
### Admin software requirements ###
Designate an admin for the Federated Learning experiment\. The admin must have:
<!-- <ul> -->
* Access to the platform with Watson Studio and Watson Machine Learning enabled\.
You must [create a Watson Machine Learning service instance](https://cloud.ibm.com/catalog/services/machine-learning).
* A [project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) for assembling the global model\. You must [associate the Watson Machine Learning service instance with your project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html)\.
<!-- </ul> -->
### Party hardware and software requirements ###
Each party must have a system that meets these minimum requirements\.
Note: Remote parties participating in the same Federated Learning experiment can use different hardware specs and architectures, as long as they each meet the minimum requirement\.
#### Supported architectures ####
<!-- <ul> -->
* x86 64\-bit
* PPC
* Mac M\-series
* 4 GB memory or greater
<!-- </ul> -->
#### Supported environments ####
<!-- <ul> -->
* Linux
* Mac OS/Unix
* Windows
<!-- </ul> -->
#### Software dependencies ####
<!-- <ul> -->
* A supported [Python version and a machine learning framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-frames.html)\.
* The Watson Machine Learning Python client\.
<!-- <ol> -->
1. If you are using Linux, run `pip install 'ibm-watson-machine-learning[fl-rt22.2-py3.10]'`.
2. If you are using Mac OS with M-series CPU and Conda, download the installation script and then run `./install_fl_rt22.2_macos.sh <name for new conda environment>`.
<!-- </ol> -->
<!-- </ul> -->
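As a quick way for a party to confirm that the interpreter and the client are in place before training, a check like the following can help\. It uses only the Python standard library and assumes that the client was installed with one of the commands above\.
# Illustrative environment check for a party machine.
import sys
from importlib.metadata import PackageNotFoundError, version
print("Python:", sys.version.split()[0])
try:
print("ibm-watson-machine-learning:", version("ibm-watson-machine-learning"))
except PackageNotFoundError:
print("ibm-watson-machine-learning is not installed in this environment")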
## Network requirements ##
An outbound connection from the remote party to the aggregator is required\. Parties can use firewalls that restrict internal connections with each other\.
## Data sources requirements ##
Data must comply with these requirements\.
<!-- <ul> -->
* Data must be in a directory or storage repository that is accessible to the party that uses them\.
* Each data source for a federated model must have the same features\. IBM Federated Learning supports horizontal federated learning only\.
* Data must be in a readable format, but the formats can vary by data source\. Suggested formats include:
<!-- <ul> -->
* Hive
* Excel
* CSV
* XML
* Database
<!-- </ul> -->
<!-- </ul> -->
**Parent topic:**[Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html)
<!-- </article "role="article" "> -->
|
E5895BC081EDBF0CD7340015DECD0D0180AAC44A | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html?context=cdpaas&locale=en | Creating a Federated Learning experiment | Creating a Federated Learning experiment
Learn how to create a Federated Learning experiment to train a machine learning model.
Watch this short overview video of how to create a Federated Learning experiment.
Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform.
This video provides a visual method to learn the concepts and tasks in this documentation.
Follow these steps to create a Federated Learning experiment:
* [Set up your system](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-setup.html)
* [Creating the initial model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html)
* [Create the data handler](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-handler.html)
* [Starting the aggregator (Admin)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-agg.html)
* [Connecting to the aggregator (Party)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-conn.html)
* [Monitoring and saving the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-mon.html)
Parent topic:[IBM Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html)
| # Creating a Federated Learning experiment #
Learn how to create a Federated Learning experiment to train a machine learning model\.
Watch this short overview video of how to create a Federated Learning experiment\.
Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform\.
This video provides a visual method to learn the concepts and tasks in this documentation\.
Follow these steps to create a Federated Learning experiment:
<!-- <ul> -->
* [Set up your system](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-setup.html)
* [Creating the initial model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html)
* [Create the data handler](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-handler.html)
* [Starting the aggregator (Admin)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-agg.html)
* [Connecting to the aggregator (Party)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-conn.html)
* [Monitoring and saving the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-mon.html)
<!-- </ul> -->
**Parent topic:**[IBM Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html)
<!-- </article "role="article" "> -->
|
8FFE1FB9CAF854DED9CA52190D4874D8280D26B0 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html?context=cdpaas&locale=en | Terminology | Terminology
Terminology that is used in IBM Federated Learning training processes.
Terminology
Federated Learning terminology
Term Definition
Party Users that contribute different sources of data to train a model collaboratively. Federated Learning ensures that the training occurs with no data exposure risk across the different parties. <br>A party must have at least Viewer permission in the Watson Studio Federated Learning project.
Admin A party member that configures the Federated Learning experiment to specify how many parties are allowed, which frameworks to use, and sets up the Remote Training Systems (RTS). They start the Federated Learning experiment and see it to the end. <br>An admin must have at least Editor permission in the Watson Studio Federated Learning project.
Remote Training System An asset that is used to authenticate a party to the aggregator. Project members register in the Remote Training System (RTS) before training. Only one of the members can use one RTS to participate in an experiment as a party. Multiple contributing parties must each authenticate with one RTS for an experiment.
Aggregator The aggregator fuses the model results between the parties to build one model.
Fusion method The algorithm that is used to combine the results that the parties return to the aggregator.
Data handler In IBM Federated Learning, a data handler is a class that is used to load and pre-process data. It also helps to ensure that data that is collected from multiple sources is formatted uniformly for training. More details about the data handler can be found in [Data Handler](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-handler.html).
Global model The resulting model that is fused between different parties.
Training round A training round is the process of local data training, global model fusion, and update. Training is iterative. The admin can choose the number of training rounds.
Parent topic:[Get started](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-get-started.html)
| # Terminology #
Terminology that is used in IBM Federated Learning training processes\.
## Terminology ##
<!-- <table> -->
Federated Learning terminology
| Term | Definition |
| ---------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Party | Users that contribute different sources of data to train a model collaboratively\. Federated Learning ensures that the training occurs with no data exposure risk across the different parties\. <br>A party must have at least *Viewer* permission in the Watson Studio Federated Learning project\. |
| Admin | A party member that configures the Federated Learning experiment to specify how many parties are allowed, which frameworks to use, and sets up the Remote Training Systems (RTS)\. They start the Federated Learning experiment and see it to the end\. <br>An admin must have at least *Editor* permission in the Watson Studio Federated Learning project\. |
| Remote Training System | An asset that is used to authenticate a party to the aggregator\. Project members register in the Remote Training System (RTS) before training\. Only one of the members can use one RTS to participate in an experiment as a party\. Multiple contributing parties must each authenticate with one RTS for an experiment\. |
| Aggregator | The aggregator fuses the model results between the parties to build one model\. |
| Fusion method | The algorithm that is used to combine the results that the parties return to the aggregator\. |
| Data handler | In IBM Federated Learning, a data handler is a class that is used to load and pre\-process data\. It also helps to ensure that data that is collected from multiple sources is formatted uniformly for training\. More details about the data handler can be found in [Data Handler](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-handler.html)\. |
| Global model | The resulting model that is fused between different parties\. |
| Training round | A training round is the process of local data training, global model fusion, and update\. Training is iterative\. The admin can choose the number of training rounds\. |
<!-- </table ""> -->
**Parent topic:**[Get started](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-get-started.html)
<!-- </article "role="article" "> -->
|
E64B1811E55868CF510B06BFD1A24BA4AC3008F1 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-samples.html?context=cdpaas&locale=en | Federated Learning Tensorflow samples | Federated Learning Tensorflow samples
Download and review sample files that show how to run a Federated Learning experiment by using API calls with a Tensorflow Keras model framework.
To see a step-by-step UI driven approach rather than sample files, see the [Federated Learning Tensorflow tutorial for UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html).
Download the Federated Learning sample files
The Federated Learning sample has two parts, both in Jupyter Notebook format that can run in the latest Python environment.
For single-user demonstrative purposes, the Notebooks are placed in a project. Access the [Federated Learning project](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/cab78523832431e767c41527a42a6727), and click Create project to get all the sample files at once.
You can also get the Notebooks separately because, for practical purposes of Federated Learning, one user would run the admin Notebook and multiple users would run the party Notebook. For more details on the admin and party, see [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html).
1. [Federated Learning Tensorflow Demo Part 1 - for Admin](https://github.com/IBMDataScience/sample-notebooks/blob/master/CloudPakForData/notebooks/4.7/Federated_Learning_TF_Demo_Part_1.ipynb)
2. [Federated Learning Tensorflow Demo Part 2 - for Party](https://github.com/IBMDataScience/sample-notebooks/blob/master/CloudPakForData/notebooks/4.7/Federated_Learning_TF_Demo_Part_2.ipynb)
Parent topic:[Federated Learning tutorial and samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html)
| # Federated Learning Tensorflow samples #
Download and review sample files that show how to run a Federated Learning experiment by using API calls with a Tensorflow Keras model framework\.
To see a step\-by\-step UI driven approach rather than sample files, see the [Federated Learning Tensorflow tutorial for UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html)\.
## Download the Federated Learning sample files ##
The Federated Learning sample has two parts, both in Jupyter Notebook format that can run in the latest Python environment\.
For single\-user demonstrative purposes, the Notebooks are placed in a project\. Access the [Federated Learning project](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/cab78523832431e767c41527a42a6727), and click **Create project** to get all the sample files at once\.
You can also get the Notebooks separately because, for practical purposes of Federated Learning, one user would run the admin Notebook and multiple users would run the party Notebook\. For more details on the admin and party, see [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html)\.
<!-- <ol> -->
1. [Federated Learning Tensorflow Demo Part 1 \- for Admin](https://github.com/IBMDataScience/sample-notebooks/blob/master/CloudPakForData/notebooks/4.7/Federated_Learning_TF_Demo_Part_1.ipynb)
2. [Federated Learning Tensorflow Demo Part 2 \- for Party](https://github.com/IBMDataScience/sample-notebooks/blob/master/CloudPakForData/notebooks/4.7/Federated_Learning_TF_Demo_Part_2.ipynb)
<!-- </ol> -->
**Parent topic:**[Federated Learning tutorial and samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html)
<!-- </article "role="article" "> -->
|
37DC9376A7FB6EB772D242B85909A023C43C2417 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html?context=cdpaas&locale=en | Federated Learning Tensorflow tutorial | Federated Learning Tensorflow tutorial
This tutorial demonstrates the usage of Federated Learning with the goal of training a machine learning model with data from different users without having users share their data. The steps are done in a low code environment with the UI and with a Tensorflow framework.
Note: This is a step-by-step tutorial for running a UI-driven Federated Learning experiment. To see a code sample for an API-driven approach, see [Federated Learning Tensorflow samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-samples.html). Tip: In this tutorial, admin refers to the user that starts the Federated Learning experiment, and party refers to one or more users who send their model results after the experiment is started by the admin. While the tutorial can be done by the admin and multiple parties, a single user can also complete a full run-through as both the admin and the party. For simpler demonstration purposes, in the following tutorial only one data set is submitted by one party. For more information on the admin and party, see [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html).
Watch this short video tutorial of how to create a Federated Learning experiment with Watson Studio.
Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform.
This video provides a visual method to learn the concepts and tasks in this documentation.
In this tutorial you will learn to:
* [Step 1: Start Federated Learning as the admin](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html?context=cdpaas&locale=en#step-1)
* [Step 2: Train model as a party](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html?context=cdpaas&locale=en#step-2)
* [Step 3: Save and deploy the model online](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html?context=cdpaas&locale=en#step-3)
Step 1: Start Federated Learning as the admin
In this tutorial, you train a Federated Learning experiment with a Tensorflow framework and the MNIST data set.
Before you begin
1. Log in to [IBM Cloud](https://cloud.ibm.com/). If you don't have an account, create one with any email.
2. [Create a Watson Machine Learning service instance](https://cloud.ibm.com/catalog/services/machine-learning) if you do not have it set up in your environment.
3. Log in to [watsonx](https://dataplatform.cloud.ibm.com/home2?context=wx).
4. Use an existing [project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) or create a new one. You must have at least admin permission.
5. Associate the Watson Machine Learning service with your project.
1. In your project, click Manage > Service & integrations.
2. Click Associate service.
3. Select your Watson Machine Learning instance from the list, and click Associate; or click New service if you do not have one to set up an instance.

Start the aggregator
1. Create the Federated learning experiment asset:
1. Click the Assets tab in your project.
2. Click New asset > Train models on distributed data.
3. Type a Name for your experiment and optionally a description.
4. Verify the associated Watson Machine Learning instance under Select a machine learning instance. If you don't see a Watson Machine Learning instance associated, follow these steps:
1. Click Associate a Machine Learning Service Instance.
2. Select an existing instance and click Associate, or create a New service.
3. Click Reload to see the associated service.

4. Click Next.
2. Configure the experiment.
1. On the Configure page, select a Hardware specification.
2. Under the Machine learning framework dropdown, select Tensorflow 2.
3. Select a Model type.
4. Download the [untrained model](https://github.com/IBMDataScience/sample-notebooks/raw/master/Files/tf_mnist_model.zip).
5. Back in the Federated Learning experiment, click Select under Model specification.
6. Drag the downloaded file named tf_mnist_model.zip onto the Upload file box, and then select runtime-22.2-py3.10 for the Software Specification dropdown.
7. Give your model a name, and then click Add.

8. Click Weighted average for the Fusion method, and click Next.

3. Define the hyperparameters.
1. Accept the default hyperparameters or adjust as needed.
2. When you are finished, click Next.
4. Select remote training systems.
1. Click Add new systems.

2. Give your Remote Training System a name.
3. Under Allowed identities, choose the user that is your party, and then click Add. In this tutorial, you can add a dummy user or yourself, for demonstrative purposes.
This user must be added to your project as a collaborator with Editor or higher permissions. Add additional systems by repeating this step for each remote party you intend to use.
4. When you are finished, click Add systems.

5. Return to the Select remote training systems page, verify that your system is selected, and then click Next.
5. Review your settings, and then click Create.
6. Watch the status. Your Federated Learning experiment status is Pending when it starts. When your experiment is ready for parties to connect, the status will change to Setup – Waiting for remote systems. This may take a few minutes.
7. Click View setup information to download the party configuration and the party connector script that can be run on the remote party.
8. Click the download icon beside each of the remote training systems that you created, and then click Party connector script. Save the script to a directory on your machine.

Step 2: Train model as the party
Follow these steps to train the model as a party:
1. Ensure that you are using the same Python version as the admin. Using a different Python version might cause compatibility issues. To see Python versions compatible with different frameworks, see [Frameworks and Python version compatibility](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-frames.htmlfl-py-fmwk).
2. Create a new local directory, and put your party connector script in it.
3. [Download the data handler mnist_keras_data_handler.py](https://raw.githubusercontent.com/IBMDataScience/sample-notebooks/master/Files/mnist_keras_data_handler.py) by right-clicking on it and click Save link as. Save it to the same directory as the party connector script.
4. [Download the MNIST handwriting data set](https://api.dataplatform.cloud.ibm.com/v2/gallery-assets/entries/903188bb984a30f38bb889102a1baae5/data) from our Samples. In the same directory as the party connector script, data handler, and the rest of your files, unzip it by running the unzip command unzip MNIST-pkl.zip.
5. Install Watson Machine Learning.
* If you are using Linux, run pip install 'ibm-watson-machine-learning[fl-rt22.2-py3.10]'.
* If you are using Mac OS with M-series CPU and Conda, download the [installation script](https://raw.github.ibm.com/WML/federated-learning/master/docs/install_fl_rt22.2_macos.sh?token=AAAXW7VVQZF7LYMTX5VOW7DEDULLE) and then run ./install_fl_rt22.2_macos.sh <name for new conda environment>.
You now have the party connector script, the data handler (mnist_keras_data_handler.py), and the data files (mnist-keras-test.pkl and mnist-keras-train.pkl) in the same directory. A quick environment check sketch follows this list.
6. Your party connector script looks similar to the following. Edit it by filling in the data file locations, the data handler, and API key for the user defined in the remote training system. To get your API key, go to Manage > Access(IAM) > API keys in your [IBM Cloud account](https://cloud.ibm.com/iam/apikeys). If you don't have one, click Create API key, fill out the fields, and click Create.
from ibm_watson_machine_learning import APIClient
wml_credentials = {
"url": "https://us-south.ml.cloud.ibm.com",
"apikey": "<API KEY>"
}
wml_client = APIClient(wml_credentials)
wml_client.set.default_project("XXX-XXX-XXX-XXX-XXX")
party_metadata = {
wml_client.remote_training_systems.ConfigurationMetaNames.DATA_HANDLER: {
# Supply the name of the data handler class and path to it.
# The info section may be used to pass information to the
# data handler.
# For example,
# "name": "MnistSklearnDataHandler",
# "path": "example.mnist_sklearn_data_handler",
# "info": {
#     "train_file": pwd + "/mnist-keras-train.pkl",
#     "test_file": pwd + "/mnist-keras-test.pkl"
# }
"name": "<data handler>",
"path": "<path to data handler>",
"info": {
"<information to pass to data handler>"
}
}
}
party = wml_client.remote_training_systems.create_party("XXX-XXX-XXX-XXX-XXX", party_metadata)
party.monitor_logs()
party.run(aggregator_id="XXX-XXX-XXX-XXX-XXX", asynchronous=False)
7. Run the party connector script: python3 rts_<RTS Name>_<RTS ID>.py.
From the UI you can monitor the status of your Federated Learning experiment.
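Steps 1 and 5 of this list depend on the party environment matching the aggregator runtime. The following is a minimal sketch, not part of the generated party connector script, for confirming the local Python version and the installed ibm-watson-machine-learning package before you run the script; the only assumption is that the package was installed as described in step 5.
# Quick environment check before running the party connector script.
import sys
from importlib.metadata import version
print(sys.version)  # should match the Python version used by the admin (3.10 for runtime-22.2)
print(version("ibm-watson-machine-learning"))  # raises PackageNotFoundError if the package is missing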
Step 3: Save and deploy the model online
In this section, you will learn to save and deploy the model that you trained.
1. Save your model.
1. In your completed Federated Learning experiment, click Save model to project.
2. Give your model a name and click Save.
3. Go to your project home.
2. Create a deployment space, if you don't have one.
1. From the navigation menu , click Deployments.
2. Click New deployment space.
3. Fill in the fields, and click Create.
3. Promote the model to a space.
1. Return to your project, and click the Assets tab.
2. In the Models section, click the model to view its details page.
3. Click Promote to space.
4. Choose a deployment space for your trained model.
5. Select the Go to the model in the space after promoting it option.
6. Click Promote.
4. When the model displays inside the deployment space, click New deployment.
1. Select Online as the Deployment type.
2. Specify a name for the deployment.
3. Click Create.
5. Click the Deployments tab to monitor your model's deployment status.
Next steps
Ready to create your own customized Federated Experiment? See the high level steps in [Creating your Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html).
Parent topic:[Federated Learning tutorial and samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html)
| # Federated Learning Tensorflow tutorial #
This tutorial demonstrates the usage of Federated Learning with the goal of training a machine learning model with data from different users without having users share their data\. The steps are done in a low code environment with the UI and with a Tensorflow framework\.
Note:This is a step\-by\-step tutorial for running a UI driven Federated Learning experiment\. To see a code sample for an API driven approach, see [Federated Learning Tensorflow samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-samples.html)\. Tip:In this tutorial, *admin* refers to the user that starts the Federated Learning experiment, and *party* refers to one or more users who send their model results after the experiment is started by the admin\. While the tutorial can be done by the admin and multiple parties, a single user can also complete a full runthrough as both the admin and the party\. For a simpler demonstrative purpose, in the following tutorial only one data set is submitted by one party\. For more information on the admin and party, see [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html)\.
Watch this short video tutorial of how to create a Federated Learning experiment with Watson Studio\.
Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform\.
This video provides a visual method to learn the concepts and tasks in this documentation\.
In this tutorial you will learn to:
<!-- <ul> -->
* [Step 1: Start Federated Learning as the admin](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html?context=cdpaas&locale=en#step-1)
* [Step 2: Train model as a party](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html?context=cdpaas&locale=en#step-2)
* [Step 3: Save and deploy the model online](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html?context=cdpaas&locale=en#step-3)
<!-- </ul> -->
## Step 1: Start Federated Learning as the admin ##
In this tutorial, you train a Federated Learning experiment with a Tensorflow framework and the MNIST data set\.
### Before you begin ###
<!-- <ol> -->
1. Log in to [IBM Cloud](https://cloud.ibm.com/)\. If you don't have an account, create one with any email\.
2. [Create a Watson Machine Learning service instance](https://cloud.ibm.com/catalog/services/machine-learning) if you do not have it set up in your environment\.
3. Log in to [watsonx](https://dataplatform.cloud.ibm.com/home2?context=wx)\.
4. Use an existing [project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) or create a new one\. You must have at least admin permission\.
5. Associate the Watson Machine Learning service with your project\.
<!-- <ol> -->
1. In your project, click **Manage > Service & integrations**.
2. Click **Associate service**.
3. Select your Watson Machine Learning instance from the list, and click **Associate**; or click **New service** if you do not have one to set up an instance.
<!-- </ol> -->

<!-- </ol> -->
### Start the aggregator ###
<!-- <ol> -->
1. Create the Federated learning experiment asset:
<!-- <ol> -->
1. Click the **Assets** tab in your project.
2. Click **New asset > Train models on distributed data**.
3. Type a *Name* for your experiment and optionally a description.
4. Verify the associated Watson Machine Learning instance under *Select a machine learning instance*. If you don't see a Watson Machine Learning instance associated, follow these steps:
<!-- <ol> -->
1. Click **Associate a Machine Learning Service Instance**.
2. Select an existing instance and click **Associate**, or create a **New service**.
3. Click **Reload** to see the associated service.

4. Click **Next**.
<!-- </ol> -->
<!-- </ol> -->
2. Configure the experiment\.
<!-- <ol> -->
1. On the *Configure* page, select a *Hardware specification*.
2. Under the *Machine learning framework* dropdown, select **Tensorflow 2**.
3. Select a *Model type*.
4. Download the [untrained model](https://github.com/IBMDataScience/sample-notebooks/raw/master/Files/tf_mnist_model.zip).
5. Back in the Federated Learning experiment, click **Select** under *Model specification*.
6. Drag the downloaded file named `tf_mnist_model.zip` onto the *Upload* file box, and then select `runtime-22.2-py3.10` for the **Software Specification** dropdown.
7. Give your model a name, and then click **Add**.

8. Click **Weighted average** for the *Fusion method*, and click **Next**.

<!-- </ol> -->
3. Define the hyperparameters\.
<!-- <ol> -->
1. Accept the default hyperparameters or adjust as needed.
2. When you are finished, click **Next**.
<!-- </ol> -->
4. Select remote training systems\.
<!-- <ol> -->
1. Click **Add new systems**.

2. Give your Remote Training System a name.
3. Under **Allowed identities**, choose the user that is your party, and then click **Add**. In this tutorial, you can add a dummy user or yourself, for demonstrative purposes.
This user must be added to your project as a collaborator with *Editor* or higher permissions. Add additional systems by repeating this step for each remote party you intend to use.
4. When you are finished, click **Add systems**.

5. Return to the *Select remote training systems* page, verify that your system is selected, and then click **Next**.
<!-- </ol> -->
5. Review your settings, and then click **Create**\.
6. Watch the status\. Your Federated Learning experiment status is *Pending* when it starts\. When your experiment is ready for parties to connect, the status will change to *Setup – Waiting for remote systems*\. This may take a few minutes\.
7. Click **View setup information** to download the party configuration and the party connector script that can be run on the remote party\.
8. Click the download icon beside each of the remote training systems that you created, and then click **Party connector script**\. Save the script to a directory on your machine\.

<!-- </ol> -->
## Step 2: Train model as the party ##
Follow these steps to train the model as a party:
<!-- <ol> -->
1. Ensure that you are using the same Python version as the admin\. Using a different Python version might cause compatibility issues\. To see Python versions compatible with different frameworks, see [Frameworks and Python version compatibility](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-frames.html#fl-py-fmwk)\.
2. Create a new local directory, and put your party connector script in it\.
3. [Download the data handler mnist\_keras\_data\_handler\.py](https://raw.githubusercontent.com/IBMDataScience/sample-notebooks/master/Files/mnist_keras_data_handler.py) by right\-clicking on it and click **Save link as**\. Save it to the same directory as the party connector script\.
4. [Download the MNIST handwriting data set](https://api.dataplatform.cloud.ibm.com/v2/gallery-assets/entries/903188bb984a30f38bb889102a1baae5/data) from our Samples\. In the same directory as the party connector script, data handler, and the rest of your files, unzip it by running the unzip command `unzip MNIST-pkl.zip`\.
5. Install Watson Machine Learning\.
<!-- <ul> -->
* If you are using Linux, run `pip install 'ibm-watson-machine-learning[fl-rt22.2-py3.10]'`.
* If you are using Mac OS with M-series CPU and Conda, download the [installation script](https://raw.github.ibm.com/WML/federated-learning/master/docs/install_fl_rt22.2_macos.sh?token=AAAXW7VVQZF7LYMTX5VOW7DEDULLE) and then run `./install_fl_rt22.2_macos.sh <name for new conda environment>`.
You now have the party connector script, the data handler (`mnist_keras_data_handler.py`), and the data files (`mnist-keras-test.pkl` and `mnist-keras-train.pkl`) in the same directory.
<!-- </ul> -->
6. Your party connector script looks similar to the following\. Edit it by filling in the data file locations, the data handler, and API key for the user defined in the remote training system\. To get your API key, go to **Manage > Access(IAM) > API keys** in your [IBM Cloud account](https://cloud.ibm.com/iam/apikeys)\. If you don't have one, click **Create API key**, fill out the fields, and click **Create**\.
from ibm_watson_machine_learning import APIClient
wml_credentials = {
"url": "https://us-south.ml.cloud.ibm.com",
"apikey": "<API KEY>"
}
wml_client = APIClient(wml_credentials)
wml_client.set.default_project("XXX-XXX-XXX-XXX-XXX")
party_metadata = {
wml_client.remote_training_systems.ConfigurationMetaNames.DATA_HANDLER: {
# Supply the name of the data handler class and path to it.
# The info section may be used to pass information to the
# data handler.
# For example,
# "name": "MnistSklearnDataHandler",
# "path": "example.mnist_sklearn_data_handler",
# "info": {
# "train_file": pwd + "/mnist-keras-train.pkl",
# "test_file": pwd + "/mnist-keras-test.pkl"
# }
"name": "<data handler>",
"path": "<path to data handler>",
"info": {
"<information to pass to data handler>"
}
}
}
party = wml_client.remote_training_systems.create_party("XXX-XXX-XXX-XXX-XXX", party_metadata)
party.monitor_logs()
party.run(aggregator_id="XXX-XXX-XXX-XXX-XXX", asynchronous=False)
7. Run the party connector script: `python3 rts_<RTS Name>_<RTS ID>.py`\.
From the UI you can monitor the status of your Federated Learning experiment.
<!-- </ol> -->
## Step 3: Save and deploy the model online ##
In this section, you will learn to save and deploy the model that you trained\.
<!-- <ol> -->
1. Save your model\.
<!-- <ol> -->
1. In your completed Federated Learning experiment, click **Save model to project**.
2. Give your model a name and click **Save**.
3. Go to your project home.
<!-- </ol> -->
2. Create a deployment space, if you don't have one\.
<!-- <ol> -->
1. From the navigation menu , click **Deployments**.
2. Click **New deployment space**.
3. Fill in the fields, and click **Create**.
<!-- </ol> -->
3. Promote the model to a space\.
<!-- <ol> -->
1. Return to your project, and click the **Assets** tab.
2. In the *Models* section, click the model to view its details page.
3. Click **Promote to space**.
4. Choose a deployment space for your trained model.
5. Select the **Go to the model in the space after promoting it** option.
6. Click **Promote**.
<!-- </ol> -->
4. When the model displays inside the deployment space, click **New deployment**\.
<!-- <ol> -->
1. Select **Online** as the *Deployment type*.
2. Specify a name for the deployment.
3. Click **Create**.
<!-- </ol> -->
5. Click the **Deployments** tab to monitor your model's deployment status\.
<!-- </ol> -->
### Next steps ###
Ready to create your own customized Federated Experiment? See the high level steps in [Creating your Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html)\.
**Parent topic:**[Federated Learning tutorial and samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html)
<!-- </article "role="article" "> -->
|
866BBCABEF2C6E3EDDF66300DC2639C938D815F4 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-troubleshoot.html?context=cdpaas&locale=en | Troubleshooting Federated Learning experiments | Troubleshooting Federated Learning experiments
The following are some of the limitations and troubleshooting methods that apply to Federated Learning experiments.
Limitations
* If you choose to enable homomorphic encryption, intermediate models can no longer be saved. However, the final model of the training experiment can be saved and used normally. The aggregator will not be able to decrypt the model updates and the intermediate global models. The aggregator can see only the final global model.
Troubleshooting
* If a quorum error occurs during homomorphic keys distribution, restart the experiment.
* Changing the name of a Federated Learning experiment causes it to lose its existing runs. If this is not intended, create a new experiment with the new name.
* The default software spec is used by every run. If your model type becomes outdated and not compatible with future software specs, re-running an older experiment might run into issues.
* As Remote Training Systems are meant to run on different servers, you might encounter unexpected behavior when you run with multiple parties that are based on the same server.
Federated Learning known issues
* [Known issues for Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.htmlwml)
Parent topic:[IBM Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html)
| # Troubleshooting Federated Learning experiments #
The following are some of the limitations and troubleshooting methods that apply to Federated Learning experiments\.
## Limitations ##
<!-- <ul> -->
* If you choose to enable homomorphic encryption, intermediate models can no longer be saved\. However, the final model of the training experiment can be saved and used normally\. The aggregator will not be able to decrypt the model updates and the intermediate global models\. The aggregator can see only the final global model\.
<!-- </ul> -->
## Troubleshooting ##
<!-- <ul> -->
* If a quorum error occurs during homomorphic keys distribution, restart the experiment\.
* Changing the name of a Federated Learning experiment causes it to lose its existing runs\. If this is not intended, create a new experiment with the new name\.
* The default software spec is used by every run\. If your model type becomes outdated and not compatible with future software specs, re\-running an older experiment might run into issues\.
* As Remote Training Systems are meant to run on different servers, you might encounter unexpected behavior when you run with multiple parties that are based on the same server\.
<!-- </ul> -->
## Federated Learning known issues ##
<!-- <ul> -->
* [Known issues for Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.html#wml)
<!-- </ul> -->
**Parent topic:**[IBM Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html)
<!-- </article "role="article" "> -->
|
D0142FFCD3063427101CCC165C5E5F2B0FA286DB | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-samples.html?context=cdpaas&locale=en | Federated Learning XGBoost samples | Federated Learning XGBoost samples
These are links to sample files to run Federated Learning by using API calls with an XGBoost framework. To see a step-by-step UI driven approach, go to [Federated Learning XGBoost tutorial for UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html).
Download the Federated Learning sample files
The Federated Learning samples have two parts, both in Jupyter Notebook format that can run in the latest Python environment.
For single-user demonstrative purposes, the Notebooks are placed in a project. Go to the following link and click Create project to get all the sample files.
[Download the Federated Learning project](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/45a71514d67d87bb7900880b4501732c?context=wx)
You can also get the Notebook separately. For practical purposes of Federated Learning, one user would run the admin Notebook and multiple users would run the party Notebook. For more details on the admin and party, see [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html).
1. [Federated Learning XGBoost Demo Part 1 - for Admin](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/c95a130a2efdddc0a4b38c319a011fed)
2. [Federated Learning XGBoost Demo Part 2 - for Party](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/155a5e78ca72a013e45d54ae87012306)
Parent topic:[Federated Learning tutorial and samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html)
| # Federated Learning XGBoost samples #
These are links to sample files to run Federated Learning by using API calls with an XGBoost framework\. To see a step\-by\-step UI driven approach, go to [Federated Learning XGBoost tutorial for UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html)\.
## Download the Federated Learning sample files ##
The Federated Learning samples have two parts, both in Jupyter Notebook format that can run in the latest Python environment\.
For single\-user demonstrative purposes, the Notebooks are placed in a project\. Go to the following link and click **Create project** to get all the sample files\.
[Download the Federated Learning project](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/45a71514d67d87bb7900880b4501732c?context=wx)
You can also get the Notebook separately\. For practical purposes of Federated Learning, one user would run the admin Notebook and multiple users would run the party Notebook\. For more details on the admin and party, see [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html)\.
<!-- <ol> -->
1. [Federated Learning XGBoost Demo Part 1 \- for Admin](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/c95a130a2efdddc0a4b38c319a011fed)
2. [Federated Learning XGBoost Demo Part 2 \- for Party](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/155a5e78ca72a013e45d54ae87012306)
<!-- </ol> -->
**Parent topic:**[Federated Learning tutorial and samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html)
<!-- </article "role="article" "> -->
|
FE207218CE0D1148AA57D10ED8848CD7E6FFD87E | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=en | Federated Learning XGBoost tutorial for UI | Federated Learning XGBoost tutorial for UI
This tutorial demonstrates the usage of Federated Learning with the goal of training a machine learning model with data from different users without having users share their data. The steps are done in a low code environment with the UI and with an XGBoost framework.
In this tutorial you learn to:
* [Step 1: Start Federated Learning as the admin](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=enstep-1)
* [Before you begin](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=enbefore-you-begin)
* [Start the aggregator](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=enstart-the-aggregator)
* [Step 2: Train model as a party](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=enstep-2)
* [Step 3: Save and deploy the model online](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=enstep-3)
* [Step 4: Score the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=enstep-4)
Notes:
* This is a step-by-step tutorial for running a UI driven Federated Learning experiment. To see a code sample for an API driven approach, go to [Federated Learning XGBoost samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-samples.html).
* In this tutorial, admin refers to the user that starts the Federated Learning experiment, and party refers to one or more users who send their model results after the experiment is started by the admin. While the tutorial can be done by the admin and multiple parties, a single user can also complete a full run through as both the admin and the party. For a simpler demonstrative purpose, in the following tutorial only one data set is submitted by one party. For more information on the admin and party, see [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html).
Step 1: Start Federated Learning
In this section, you learn to start the Federated Learning experiment.
Before you begin
1. Log in to [IBM Cloud](https://cloud.ibm.com/). If you don't have an account, create one with any email.
2. [Create a Watson Machine Learning service instance](https://cloud.ibm.com/catalog/services/machine-learning) if you do not have it set up in your environment.
3. Log in to [watsonx](https://dataplatform.cloud.ibm.com/home2?context=wx).
4. Use an existing [project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) or create a new one. You must have at least admin permission.
5. Associate the Watson Machine Learning service with your project.
1. In your project, click Manage > Service & integrations.
2. Click Associate service.
3. Select your Watson Machine Learning instance from the list, and click Associate; or click New service if you do not have one to set up an instance.

Start the aggregator
1. Create the Federated learning experiment asset:
1. Click the Assets tab in your project.
1. Click New asset > Train models on distributed data.
2. Type a Name for your experiment and optionally a description.
3. Verify the associated Watson Machine Learning instance under Select a machine learning instance. If you don't see a Watson Machine Learning instance associated, follow these steps:
1. Click Associate a Machine Learning Service Instance.
2. Select an existing instance and click Associate, or create a New service.
3. Click Reload to see the associated service.

4. Click Next.
2. Configure the experiment.
1. On the Configure page, select a Hardware specification.
2. Under the Machine learning framework dropdown, select scikit-learn.
3. For the Model type, select XGBoost.
4. For the Fusion method, select XGBoost classification fusion

3. Define the hyperparameters.
1. Set the value for the Rounds field to 5.
2. Accept the default values for the rest of the fields.

3. Click Next.
4. Select remote training systems.
1. Click Add new systems.

2. Give your Remote Training System a name.
3. Under Allowed identities, select the user that will participate in the experiment, and then click Add. You can add as many allowed identities as participants in this Federated Experiment training instance. For this tutorial, choose only yourself.
Any allowed identities must be part of the project and have at least Admin permission.
4. When you are finished, click Add systems.

5. Return to the Select remote training systems page, verify that your system is selected, and then click Next.

5. Review your settings, and then click Create.
6. Watch the status. Your Federated Learning experiment status is Pending when it starts. When your experiment is ready for parties to connect, the status will change to Setup – Waiting for remote systems. This may take a few minutes.
Step 2: Train model as a party
1. Ensure that you are using the same Python version as the admin. Using a different Python version might cause compatibility issues. To see Python versions compatible with different frameworks, see [Frameworks and Python version compatibility](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-frames.htmlfl-py-fmwk).
2. Create a new local directory.
3. Download the Adult data set into the directory with this command: wget https://api.dataplatform.cloud.ibm.com/v2/gallery-assets/entries/5fcc01b02d8f0e50af8972dc8963f98e/data -O adult.csv.
4. Download the data handler by running wget https://raw.githubusercontent.com/IBMDataScience/sample-notebooks/master/Files/adult_sklearn_data_handler.py -O adult_sklearn_data_handler.py.
5. Install Watson Machine Learning.
* If you are using Linux, run pip install 'ibm-watson-machine-learning[fl-rt22.2-py3.10]'.
* If you are using Mac OS with M-series CPU and Conda, download the [installation script](https://raw.github.ibm.com/WML/federated-learning/master/docs/install_fl_rt22.2_macos.sh?token=AAAXW7VVQZF7LYMTX5VOW7DEDULLE) and then run ./install_fl_rt22.2_macos.sh <name for new conda environment>.
You now have the Adult data set (adult.csv) and the data handler (adult_sklearn_data_handler.py) in the same directory; the party connector script is downloaded in the next steps. A quick sanity check sketch follows this list.
6. Go back to the Federated Learning experiment page, where the aggregator is running. Click View Setup Information.
7. Click the download icon next to the remote training system, and select Party connector script.
8. Ensure that you have the party connector script, the Adult data set, and the data handler in the same directory. If you run ls -l, you should see:
adult.csv
adult_sklearn_data_handler.py
rts_<RTS Name>_<RTS ID>.py
9. In the party connector script:
1. Authenticate using any method.
2. Put in these parameters for the "data" section:
"data": {
"name": "AdultSklearnDataHandler",
"path": "./adult_sklearn_data_handler.py",
"info": {
"txt_file": "./adult.csv"
},
},
where:
* name: Class name defined for the data handler.
* path: Path of where the data handler is located.
* info: Create a key-value pair for the file type of the local data set, or the path of your data set.
10. Run the party connector script: python3 rts_<RTS Name>_<RTS ID>.py.
11. When all participating parties connect to the aggregator, the aggregator facilitates the local model training and global model update. Its status is Training. You can monitor the status of your Federated Learning experiment from the user interface.
12. When training is complete, the party receives a Received STOP message.
13. Now, you can save the trained model and deploy it to a space.
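As an aside to steps 3 and 4 in this list: before connecting to the aggregator, you can optionally confirm that the Adult data set downloaded correctly. This is a minimal sketch, not part of the tutorial flow; it assumes only that the wget commands above completed and that pandas is installed.
# Optional sanity check of the downloaded Adult data set (adult.csv).
import pandas as pd
adult_csv = pd.read_csv("./adult.csv", dtype="category")
print(adult_csv.shape)   # the raw file is expected to have 15 columns (14 features plus class)
print(adult_csv.head())  # preview a few rows before training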
Step 3: Save and deploy the model online
In this section, you learn how to save and deploy the model that you trained.
1. Save your model.
1. In your completed Federated Learning experiment, click Save model to project.
2. Give your model a name and click Save.
3. Go to your project home.
2. Create a deployment space, if you don't have one.
1. From the navigation menu , click Deployments.
2. Click New deployment space.
3. Fill in the fields, and click Create.

3. Promote the model to a space.
1. Return to your project, and click the Assets tab.
2. In the Models section, click the model to view its details page.
3. Click Promote to space.
4. Choose a deployment space for your trained model.
5. Select the Go to the model in the space after promoting it option.
6. Click Promote.
4. When the model displays inside the deployment space, click New deployment.
1. Select Online as the Deployment type.
2. Specify a name for the deployment.
3. Click Create.
Step 4: Score the model
In this section, you learn to create a Python function to process the scoring data to ensure that it is in the same format that was used during training. For comparison, you will also score the raw data set by calling the Python function that we created.
1. Define the Python function as follows. The function loads the scoring data in its raw format and processes the data exactly as it was done during training. Then, score the processed data.
def adult_scoring_function():
import pandas as pd
from ibm_watson_machine_learning import APIClient
wml_credentials = {
"url": "https://us-south.ml.cloud.ibm.com",
"apikey": "<API KEY>"
}
client = APIClient(wml_credentials)
client.set.default_space('<SPACE ID>')
# converts scoring input data format to pandas dataframe
def create_dataframe(raw_dataset):
fields = raw_dataset.get("input_data")[0].get("fields")
values = raw_dataset.get("input_data")[0].get("values")
raw_dataframe = pd.DataFrame(
columns = fields,
data = values
)
return raw_dataframe
# reuse preprocess definition from training data handler
def preprocess(training_data):
"""
Performs the following preprocessing on adult training and testing data:
* Drop following features: 'workclass', 'fnlwgt', 'education', 'marital-status', 'occupation',
'relationship', 'capital-gain', 'capital-loss', 'hours-per-week', 'native-country'
* Map 'race', 'sex' and 'class' values to 0/1
* ' White': 1, ' Amer-Indian-Eskimo': 0, ' Asian-Pac-Islander': 0, ' Black': 0, ' Other': 0
* ' Male': 1, ' Female': 0
* Further details in Kamiran, F. and Calders, T. Data preprocessing techniques for classification without discrimination
* Split 'age' and 'education' columns into multiple columns based on value
:param training_data: Raw training data
:type training_data: pandas.core.frame.DataFrame
:return: Preprocessed training data
:rtype: pandas.core.frame.DataFrame
"""
if len(training_data.columns)==15:
# drop 'fnlwgt' column
training_data = training_data.drop(training_data.columns[2], axis='columns')
training_data.columns = ['age',
'workclass',
'education',
'education-num',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'capital-gain',
'capital-loss',
'hours-per-week',
'native-country',
'class']
# filter out columns unused in training, and reorder columns
training_dataset = training_data[['race', 'sex', 'age', 'education-num', 'class']]
# map 'sex' and 'race' feature values based on sensitive attribute privileged/unprivileged groups
training_dataset['sex'] = training_dataset['sex'].map({' Female': 0,
' Male': 1})
training_dataset['race'] = training_dataset['race'].map({' Asian-Pac-Islander': 0,
' Amer-Indian-Eskimo': 0,
' Other': 0,
' Black': 0,
' White': 1})
# map 'class' values to 0/1 based on positive and negative classification
training_dataset['class'] = training_dataset['class'].map({' <=50K': 0, ' >50K': 1})
training_dataset['age'] = training_dataset['age'].astype(int)
training_dataset['education-num'] = training_dataset['education-num'].astype(int)
# split age column into category columns
for i in range(8):
if i != 0:
training_dataset['age' + str(i)] = 0
for index, row in training_dataset.iterrows():
if row['age'] < 20:
training_dataset.loc[index, 'age1'] = 1
elif ((row['age'] < 30) & (row['age'] >= 20)):
training_dataset.loc[index, 'age2'] = 1
elif ((row['age'] < 40) & (row['age'] >= 30)):
training_dataset.loc[index, 'age3'] = 1
elif ((row['age'] < 50) & (row['age'] >= 40)):
training_dataset.loc[index, 'age4'] = 1
elif ((row['age'] < 60) & (row['age'] >= 50)):
training_dataset.loc[index, 'age5'] = 1
elif ((row['age'] < 70) & (row['age'] >= 60)):
training_dataset.loc[index, 'age6'] = 1
elif row['age'] >= 70:
training_dataset.loc[index, 'age7'] = 1
# split education-num column into multiple columns
training_dataset['ed6less'] = 0
for i in range(13):
if i >= 6:
training_dataset['ed' + str(i)] = 0
training_dataset['ed12more'] = 0
for index, row in training_dataset.iterrows():
if row['education-num'] < 6:
training_dataset.loc[index, 'ed6less'] = 1
elif row['education-num'] == 6:
training_dataset.loc[index, 'ed6'] = 1
elif row['education-num'] == 7:
training_dataset.loc[index, 'ed7'] = 1
elif row['education-num'] == 8:
training_dataset.loc[index, 'ed8'] = 1
elif row['education-num'] == 9:
training_dataset.loc[index, 'ed9'] = 1
elif row['education-num'] == 10:
training_dataset.loc[index, 'ed10'] = 1
elif row['education-num'] == 11:
training_dataset.loc[index, 'ed11'] = 1
elif row['education-num'] == 12:
training_dataset.loc[index, 'ed12'] = 1
elif row['education-num'] > 12:
training_dataset.loc[index, 'ed12more'] = 1
training_dataset.drop(['age', 'education-num'], axis=1, inplace=True)
# move class column to be last column
label = training_dataset['class']
training_dataset.drop('class', axis=1, inplace=True)
training_dataset['class'] = label
return training_dataset
def score(raw_dataset):
try:
# create pandas dataframe from input
raw_dataframe = create_dataframe(raw_dataset)
# reuse preprocess from training data handler
processed_dataset = preprocess(raw_dataframe)
# drop class column
processed_dataset.drop('class', inplace=True, axis='columns')
# create data payload for scoring
fields = processed_dataset.columns.values.tolist()
values = processed_dataset.values.tolist()
scoring_dataset = {client.deployments.ScoringMetaNames.INPUT_DATA: [{'fields': fields, 'values': values}]}
print(scoring_dataset)
# score data
prediction = client.deployments.score('<MODEL DEPLOYMENT ID>', scoring_dataset)
return prediction
except Exception as e:
return {'error': repr(e)}
return score
2. Replace the variables in the previous Python function:
* API KEY: Your IAM API key. To create a new API key, go to the [IBM Cloud website](https://cloud.ibm.com/), and click Create an IBM Cloud API key under Manage > Access(IAM) > API keys.
* SPACE ID: ID of the Deployment space where the adult income deployment is running. To see your space ID, go to Deployment spaces > YOUR SPACE NAME > Manage. Copy the Space GUID.
* MODEL DEPLOYMENT ID: Online deployment ID for the adult income model. To find the deployment ID, open the deployment in your deployment space; the ID appears in the information pane and in the address bar. You can also list your spaces and deployments programmatically, as sketched after this list.
3. Get the Software Spec ID for Python 3.9. For a list of other environments, run client.software_specifications.list(). software_spec_id = client.software_specifications.get_id_by_name('default_py3.9')
4. Store the Python function into your Watson Studio space.
# store the Python function in the space
meta_props = {
client.repository.FunctionMetaNames.NAME: 'Adult Income Scoring Function',
client.repository.FunctionMetaNames.SOFTWARE_SPEC_ID: software_spec_id
}
stored_function = client.repository.store_function(meta_props=meta_props, function=adult_scoring_function)
function_id = stored_function['metadata']['id']
5. Create an online deployment by using the Python function.
# create an online deployment for the function
meta_props = {
client.deployments.ConfigurationMetaNames.NAME: "Adult Income Online Scoring Function",
client.deployments.ConfigurationMetaNames.ONLINE: {}
}
online_deployment = client.deployments.create(function_id, meta_props=meta_props)
function_deployment_id = online_deployment['metadata']['id']
6. Download the Adult Income data set. This is reused as our scoring data.
import pandas as pd
# read adult csv dataset
adult_csv = pd.read_csv('./adult.csv', dtype='category')
# use 10 random rows for scoring
sample_dataset = adult_csv.sample(n=10)
fields = sample_dataset.columns.values.tolist()
values = sample_dataset.values.tolist()
7. Score the adult income data by using the Python function created.
raw_dataset = {client.deployments.ScoringMetaNames.INPUT_DATA: [{'fields': fields, 'values': values}]}
prediction = client.deployments.score(function_deployment_id, raw_dataset)
print(prediction)
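If you prefer to look up the placeholder IDs from step 2 of this list programmatically rather than from the UI, the following is a minimal sketch that lists your deployment spaces and online deployments with the same APIClient; it assumes only the <API KEY> and <SPACE ID> placeholders that are already described above.
# Optional: list spaces and deployments to find the <SPACE ID> and <MODEL DEPLOYMENT ID> values.
from ibm_watson_machine_learning import APIClient
client = APIClient({
"url": "https://us-south.ml.cloud.ibm.com",
"apikey": "<API KEY>"
})
client.spaces.list()                    # shows the ID of each deployment space
client.set.default_space('<SPACE ID>')
client.deployments.list()               # shows the ID of the adult income online deployment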
Next steps
[Creating your Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html).
Parent topic:[Federated Learning tutorial and samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html)
| # Federated Learning XGBoost tutorial for UI #
This tutorial demonstrates the usage of Federated Learning with the goal of training a machine learning model with data from different users without having users share their data\. The steps are done in a low code environment with the UI and with an XGBoost framework\.
In this tutorial you learn to:
<!-- <ul> -->
* [Step 1: Start Federated Learning as the admin](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=en#step-1)
<!-- <ul> -->
* [Before you begin](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=en#before-you-begin)
* [Start the aggregator](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=en#start-the-aggregator)
<!-- </ul> -->
* [Step 2: Train model as a party](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=en#step-2)
<!-- <ul> -->
* [Step 3: Save and deploy the model online](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=en#step-3)
* [Step 4: Score the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=en#step-4)
<!-- </ul> -->
<!-- </ul> -->
**Notes:**
<!-- <ul> -->
* This is a step\-by\-step tutorial for running a UI driven Federated Learning experiment\. To see a code sample for an API driven approach, go to [Federated Learning XGBoost samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-samples.html)\.
* In this tutorial, *admin* refers to the user that starts the Federated Learning experiment, and *party* refers to one or more users who send their model results after the experiment is started by the admin\. While the tutorial can be done by the admin and multiple parties, a single user can also complete a full run through as both the admin and the party\. For a simpler demonstrative purpose, in the following tutorial only one data set is submitted by one party\. For more information on the admin and party, see [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html)\.
<!-- </ul> -->
## Step 1: Start Federated Learning ##
In this section, you learn to start the Federated Learning experiment\.
### Before you begin ###
<!-- <ol> -->
1. Log in to [IBM Cloud](https://cloud.ibm.com/)\. If you don't have an account, create one with any email\.
2. [Create a Watson Machine Learning service instance](https://cloud.ibm.com/catalog/services/machine-learning) if you do not have it set up in your environment\.
3. Log in to [watsonx](https://dataplatform.cloud.ibm.com/home2?context=wx)\.
4. Use an existing [project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) or create a new one\. You must have at least admin permission\.
5. Associate the Watson Machine Learning service with your project\.
<!-- <ol> -->
1. In your project, click **Manage > Service & integrations**.
2. Click **Associate service**.
3. Select your Watson Machine Learning instance from the list, and click **Associate**; or click **New service** if you do not have one to set up an instance.
<!-- </ol> -->

<!-- </ol> -->
### Start the aggregator ###
<!-- <ol> -->
1. Create the Federated learning experiment asset:
<!-- <ol> -->
1. Click the **Assets** tab in your project.
<!-- <ol> -->
1. Click **New asset > Train models on distributed data**.
2. Type a *Name* for your experiment and optionally a description.
3. Verify the associated Watson Machine Learning instance under *Select a machine learning instance*. If you don't see a Watson Machine Learning instance associated, follow these steps:
<!-- <ol> -->
1. Click **Associate a Machine Learning Service Instance**.
2. Select an existing instance and click **Associate**, or create a **New service**.
3. Click **Reload** to see the associated service.

4. Click **Next**.
<!-- </ol> -->
<!-- </ol> -->
<!-- </ol> -->
2. Configure the experiment\.
<!-- <ol> -->
1. On the *Configure* page, select a **Hardware specification**.
2. Under the *Machine learning framework* dropdown, select **scikit-learn**.
3. For the *Model type*, select **XGBoost**.
4. For the *Fusion method*, select **XGBoost classification fusion**

<!-- </ol> -->
3. Define the hyperparameters\.
<!-- <ol> -->
1. Set the value for the *Rounds* field to `5`.
2. Accept the default values for the rest of the fields.

3. Click **Next**.
<!-- </ol> -->
4. Select remote training systems\.
<!-- <ol> -->
1. Click **Add new systems**.

2. Give your Remote Training System a name.
3. Under **Allowed identities**, select the user that will participate in the experiment, and then click **Add**. You can add as many allowed identities as participants in this Federated Experiment training instance. For this tutorial, choose only yourself.
Any allowed identities must be part of the project and have at least **Admin** permission.
4. When you are finished, click **Add systems**.

5. Return to the *Select remote training systems* page, verify that your system is selected, and then click **Next**.

<!-- </ol> -->
5. Review your settings, and then click **Create**\.
6. Watch the status\. Your Federated Learning experiment status is *Pending* when it starts\. When your experiment is ready for parties to connect, the status will change to *Setup – Waiting for remote systems*\. This may take a few minutes\.
<!-- </ol> -->
## Step 2: Train model as a party ##
<!-- <ol> -->
1. Ensure that you are using the same Python version as the admin\. Using a different Python version might cause compatibility issues\. To see Python versions compatible with different frameworks, see [Frameworks and Python version compatibility](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-frames.html#fl-py-fmwk)\.
2. Create a new local directory\.
3. Download the *Adult* data set into the directory with this command: `wget https://api.dataplatform.cloud.ibm.com/v2/gallery-assets/entries/5fcc01b02d8f0e50af8972dc8963f98e/data -O adult.csv`\.
4. Download the data handler by running `wget https://raw.githubusercontent.com/IBMDataScience/sample-notebooks/master/Files/adult_sklearn_data_handler.py -O adult_sklearn_data_handler.py`\.
5. Install Watson Machine Learning\.
<!-- <ul> -->
* If you are using Linux, run `pip install 'ibm-watson-machine-learning[fl-rt22.2-py3.10]'`.
* If you are using Mac OS with M-series CPU and Conda, download the [installation script](https://raw.github.ibm.com/WML/federated-learning/master/docs/install_fl_rt22.2_macos.sh?token=AAAXW7VVQZF7LYMTX5VOW7DEDULLE) and then run `./install_fl_rt22.2_macos.sh <name for new conda environment>`.
You now have the Adult data set (`adult.csv`) and the data handler (`adult_sklearn_data_handler.py`) in the same directory; the party connector script is downloaded in the next steps.
<!-- </ul> -->
6. Go back to the Federated Learning experiment page, where the aggregator is running\. Click **View Setup Information**\.
7. Click the download icon next to the remote training system, and select **Party connector script**\.
8. Ensure that you have the party connector script, the *Adult* data set, and the data handler in the same directory\. If you run `ls -l`, you should see:
adult.csv
adult_sklearn_data_handler.py
rts_<RTS Name>_<RTS ID>.py
9. In the party connector script:
<!-- <ol> -->
1. Authenticate using any method.
2. Put in these parameters for the `"data"` section:
"data": {
"name": "AdultSklearnDataHandler",
"path": "./adult_sklearn_data_handler.py",
"info": {
"txt_file": "./adult.csv"
},
},
where:
<!-- <ul> -->
* `name`: Class name defined for the data handler.
* `path`: Path of where the data handler is located.
* `info`: Create a key-value pair for the file type of the local data set, or the path of your data set.
<!-- </ul> -->
<!-- </ol> -->
10. Run the party connector script: `python3 rts_<RTS Name>_<RTS ID>.py`\.
11. When all participating parties connect to the aggregator, the aggregator facilitates the local model training and global model update\. Its status is *Training*\. You can monitor the status of your Federated Learning experiment from the user interface\.
12. When training is complete, the party receives a `Received STOP message`\.
13. Now, you can save the trained model and deploy it to a space\.
<!-- </ol> -->
## Step 3: Save and deploy the model online ##
In this section, you learn how to save and deploy the model that you trained\.
<!-- <ol> -->
1. Save your model\.
<!-- <ol> -->
1. In your completed Federated Learning experiment, click **Save model to project**.
2. Give your model a name and click **Save**.
3. Go to your project home.
<!-- </ol> -->
2. Create a deployment space, if you don't have one\.
<!-- <ol> -->
1. From the navigation menu , click **Deployments**.
2. Click **New deployment space**.
3. Fill in the fields, and click **Create**.

<!-- </ol> -->
3. Promote the model to a space\.
<!-- <ol> -->
1. Return to your project, and click the **Assets** tab.
2. In the *Models* section, click the model to view its details page.
3. Click **Promote to space**.
4. Choose a deployment space for your trained model.
5. Select the **Go to the model in the space after promoting it** option.
6. Click **Promote**.
<!-- </ol> -->
4. When the model displays inside the deployment space, click **New deployment**\.
<!-- <ol> -->
1. Select **Online** as the *Deployment type*.
2. Specify a name for the deployment.
3. Click **Create**.
<!-- </ol> -->
<!-- </ol> -->
## Step 4: Score the model ##
In this section, you learn to create a Python function to process the scoring data to ensure that it is in the same format that was used during training\. For comparison, you will also score the raw data set by calling the Python function that we created\.
<!-- <ol> -->
1. Define the Python function as follows\. The function loads the scoring data in its raw format and processes the data exactly as it was done during training\. Then, score the processed data\.
def adult_scoring_function():
import pandas as pd
from ibm_watson_machine_learning import APIClient
wml_credentials = {
"url": "https://us-south.ml.cloud.ibm.com",
"apikey": "<API KEY>"
}
client = APIClient(wml_credentials)
client.set.default_space('<SPACE ID>')
# converts scoring input data format to pandas dataframe
def create_dataframe(raw_dataset):
fields = raw_dataset.get("input_data")[0].get("fields")
values = raw_dataset.get("input_data")[0].get("values")
raw_dataframe = pd.DataFrame(
columns = fields,
data = values
)
return raw_dataframe
# reuse preprocess definition from training data handler
def preprocess(training_data):
"""
Performs the following preprocessing on adult training and testing data:
* Drop following features: 'workclass', 'fnlwgt', 'education', 'marital-status', 'occupation',
'relationship', 'capital-gain', 'capital-loss', 'hours-per-week', 'native-country'
* Map 'race', 'sex' and 'class' values to 0/1
* ' White': 1, ' Amer-Indian-Eskimo': 0, ' Asian-Pac-Islander': 0, ' Black': 0, ' Other': 0
* ' Male': 1, ' Female': 0
* Further details in Kamiran, F. and Calders, T. Data preprocessing techniques for classification without discrimination
* Split 'age' and 'education' columns into multiple columns based on value
:param training_data: Raw training data
:type training_data: `pandas.core.frame.DataFrame`
:return: Preprocessed training data
:rtype: `pandas.core.frame.DataFrame`
"""
if len(training_data.columns)==15:
# drop 'fnlwgt' column
training_data = training_data.drop(training_data.columns[2], axis='columns')
training_data.columns = ['age',
'workclass',
'education',
'education-num',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'capital-gain',
'capital-loss',
'hours-per-week',
'native-country',
'class']
# filter out columns unused in training, and reorder columns
training_dataset = training_data[['race', 'sex', 'age', 'education-num', 'class']]
# map 'sex' and 'race' feature values based on sensitive attribute privileged/unpriveleged groups
training_dataset['sex'] = training_dataset['sex'].map({' Female': 0,
' Male': 1})
training_dataset['race'] = training_dataset['race'].map({' Asian-Pac-Islander': 0,
' Amer-Indian-Eskimo': 0,
' Other': 0,
' Black': 0,
' White': 1})
# map 'class' values to 0/1 based on positive and negative classification
training_dataset['class'] = training_dataset['class'].map({' <=50K': 0, ' >50K': 1})
training_dataset['age'] = training_dataset['age'].astype(int)
training_dataset['education-num'] = training_dataset['education-num'].astype(int)
# split age column into category columns
for i in range(8):
if i != 0:
training_dataset['age' + str(i)] = 0
for index, row in training_dataset.iterrows():
if row['age'] < 20:
training_dataset.loc[index, 'age1'] = 1
elif ((row['age'] < 30) & (row['age'] >= 20)):
training_dataset.loc[index, 'age2'] = 1
elif ((row['age'] < 40) & (row['age'] >= 30)):
training_dataset.loc[index, 'age3'] = 1
elif ((row['age'] < 50) & (row['age'] >= 40)):
training_dataset.loc[index, 'age4'] = 1
elif ((row['age'] < 60) & (row['age'] >= 50)):
training_dataset.loc[index, 'age5'] = 1
elif ((row['age'] < 70) & (row['age'] >= 60)):
training_dataset.loc[index, 'age6'] = 1
elif row['age'] >= 70:
training_dataset.loc[index, 'age7'] = 1
# split age column into multiple columns
training_dataset['ed6less'] = 0
for i in range(13):
if i >= 6:
training_dataset['ed' + str(i)] = 0
training_dataset['ed12more'] = 0
for index, row in training_dataset.iterrows():
if row['education-num'] < 6:
training_dataset.loc[index, 'ed6less'] = 1
elif row['education-num'] == 6:
training_dataset.loc[index, 'ed6'] = 1
elif row['education-num'] == 7:
training_dataset.loc[index, 'ed7'] = 1
elif row['education-num'] == 8:
training_dataset.loc[index, 'ed8'] = 1
elif row['education-num'] == 9:
training_dataset.loc[index, 'ed9'] = 1
elif row['education-num'] == 10:
training_dataset.loc[index, 'ed10'] = 1
elif row['education-num'] == 11:
training_dataset.loc[index, 'ed11'] = 1
elif row['education-num'] == 12:
training_dataset.loc[index, 'ed12'] = 1
elif row['education-num'] > 12:
training_dataset.loc[index, 'ed12more'] = 1
training_dataset.drop(['age', 'education-num'], axis=1, inplace=True)
# move class column to be last column
label = training_dataset['class']
training_dataset.drop('class', axis=1, inplace=True)
training_dataset['class'] = label
return training_dataset
def score(raw_dataset):
try:
# create pandas dataframe from input
raw_dataframe = create_dataframe(raw_dataset)
# reuse preprocess from training data handler
processed_dataset = preprocess(raw_dataframe)
# drop class column
processed_dataset.drop('class', inplace=True, axis='columns')
# create data payload for scoring
fields = processed_dataset.columns.values.tolist()
values = processed_dataset.values.tolist()
scoring_dataset = {client.deployments.ScoringMetaNames.INPUT_DATA: [{'fields': fields, 'values': values}]}
print(scoring_dataset)
# score data
prediction = client.deployments.score('<MODEL DEPLOYMENT ID>', scoring_dataset)
return prediction
except Exception as e:
return {'error': repr(e)}
return score
2. Replace the variables in the previous Python function:
<!-- <ul> -->
* `API KEY`: Your IAM API key. To create a new API key, go to the [IBM Cloud website](https://cloud.ibm.com/), and click **Create an IBM Cloud API key** under **Manage > Access (IAM) > API keys**.
* `SPACE ID`: ID of the Deployment space where the adult income deployment is running. To see your space ID, go to **Deployment spaces > `YOUR SPACE NAME` > Manage**. Copy the *Space GUID*.
* `MODEL DEPLOYMENT ID`: Online deployment ID for the adult income model. To find your model ID, click the model in your project. The ID appears in both the address bar and the information pane.
<!-- </ul> -->
3. Get the Software Spec ID for Python 3\.9\. For a list of other environments, run `client.software_specifications.list()`\. `software_spec_id = client.software_specifications.get_id_by_name('default_py3.9')`
4. Store the Python function into your Watson Studio space\.
# stores python function in space
meta_props = {
client.repository.FunctionMetaNames.NAME: 'Adult Income Scoring Function',
client.repository.FunctionMetaNames.SOFTWARE_SPEC_ID: software_spec_id
}
stored_function = client.repository.store_function(meta_props=meta_props, function=adult_scoring_function)
function_id = stored_function['metadata']['id']
5. Create an online deployment by using the Python function\.
# create online deployment for function
meta_props = {
client.deployments.ConfigurationMetaNames.NAME: "Adult Income Online Scoring Function",
client.deployments.ConfigurationMetaNames.ONLINE: {}
}
online_deployment = client.deployments.create(function_id, meta_props=meta_props)
function_deployment_id = online_deployment['metadata']['id']
6. Download the Adult Income data set\. This is reused as our scoring data\.
import pandas as pd
# read adult csv dataset
adult_csv = pd.read_csv('./adult.csv', dtype='category')
# use 10 random rows for scoring
sample_dataset = adult_csv.sample(n=10)
fields = sample_dataset.columns.values.tolist()
values = sample_dataset.values.tolist()
7. Score the adult income data by using the Python function that you created\.
raw_dataset = {client.deployments.ScoringMetaNames.INPUT_DATA: [{'fields': fields, 'values': values}]}
prediction = client.deployments.score(function_deployment_id, raw_dataset)
print(prediction)
<!-- </ol> -->
### Next steps ###
[Creating your Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html)\.
**Parent topic:**[Federated Learning tutorial and samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html)
<!-- </article "role="article" "> -->
|
FD48879C34D316981B4F67C2B82C8179E0042F74 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-credentials.html?context=cdpaas&locale=en | Credentials for prompting foundation models (IBM Cloud API key and IAM token) | Credentials for prompting foundation models (IBM Cloud API key and IAM token)
To prompt foundation models in IBM watsonx.ai programmatically, you need an IBM Cloud API key and sometimes an IBM Cloud IAM token.
IBM Cloud API key
To use the [foundation models Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html), you need an IBM Cloud API key.
Python pseudo-code
my_credentials = {
"url" : "https://us-south.ml.cloud.ibm.com",
"apikey" : <my-IBM-Cloud-API-key>
}
...
model = Model( ... credentials=my_credentials ... )
You can create this API key by using multiple interfaces. For full instructions, see [Creating an API key](https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=ui#create_user_key)
IBM Cloud IAM token
When you click the View code button in the Prompt Lab, a curl command is displayed that you can call outside the Prompt Lab to submit the current prompt and parameters to the selected model and get a generated response. In the command, there is a placeholder for an IBM Cloud IAM token.
For information about generating that access token, see: [Generating an IBM Cloud IAM token](https://cloud.ibm.com/docs/account?topic=account-iamtoken_from_apikey)
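For example, here is a minimal Python sketch that exchanges an API key for an IAM access token. It assumes the requests library is installed and uses the public IAM token endpoint; the API key value is a placeholder.

import requests

def get_iam_token(api_key):
    # Exchange an IBM Cloud API key for a short-lived IAM access token
    response = requests.post(
        "https://iam.cloud.ibm.com/identity/token",
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        data={
            "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
            "apikey": api_key,
        },
    )
    response.raise_for_status()
    return response.json()["access_token"]

iam_token = get_iam_token("<my-IBM-Cloud-API-key>")

You can then paste the returned token into the placeholder in the curl command.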
Parent topic:[Foundation models Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html)
| # Credentials for prompting foundation models (IBM Cloud API key and IAM token) #
To prompt foundation models in IBM watsonx\.ai programmatically, you need an IBM Cloud API key and sometimes an IBM Cloud IAM token\.
## IBM Cloud API key ##
To use the [foundation models Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html), you need an IBM Cloud API key\.
**Python pseudo\-code**
my_credentials = {
"url" : "https://us-south.ml.cloud.ibm.com",
"apikey" : <my-IBM-Cloud-API-key>
}
...
model = Model( ... credentials=my_credentials ... )
You can create this API key by using multiple interfaces\. For full instructions, see [Creating an API key](https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=ui#create_user_key)
## IBM Cloud IAM token ##
When you click the **View code** button in the Prompt Lab, a curl command is displayed that you can call outside the Prompt Lab to submit the current prompt and parameters to the selected model and get a generated response\. In the command, there is a placeholder for an IBM Cloud IAM token\.
For information about generating that access token, see: [Generating an IBM Cloud IAM token](https://cloud.ibm.com/docs/account?topic=account-iamtoken_from_apikey)
**Parent topic:**[Foundation models Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html)
<!-- </article "role="article" "> -->
|
52507FE59C92EF1667E463B2C5D709C139673F4D | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-disclaimer.html?context=cdpaas&locale=en | Foundation model terms of use in watsonx.ai | Foundation model terms of use in watsonx.ai
Review these model terms of use to understand your responsibilities and risks with foundation models.
By using any foundation model provided with this IBM offering, you acknowledge and understand that:
* Some models that are included in this IBM offering are Non-IBM Products. Review the applicable model information for details on the third-party provider and license terms that apply. See [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html).
* Third Party models have been trained with data that may contain biases and inaccuracies and could generate outputs containing misinformation, obscene or offensive language, or discriminatory content. Users should review and validate the outputs that are generated.
* The output that is generated by all models is provided to augment, not replace, human decision-making by the Client.
Parent topic:[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html)
| # Foundation model terms of use in watsonx\.ai #
Review these model terms of use to understand your responsibilities and risks with foundation models\.
By using any foundation model provided with this IBM offering, you acknowledge and understand that:
<!-- <ul> -->
* Some models that are included in this IBM offering are Non\-IBM Products\. Review the applicable model information for details on the third\-party provider and license terms that apply\. See [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html)\.
* Third Party models have been trained with data that may contain biases and inaccuracies and could generate outputs containing misinformation, obscene or offensive language, or discriminatory content\. Users should review and validate the outputs that are generated\.
* The output that is generated by all models is provided to augment, not replace, human decision\-making by the Client\.
<!-- </ul> -->
**Parent topic:**[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html)
<!-- </article "role="article" "> -->
|
43785386700CF73E37A8F76ADC4EF9FB01EE0AEB | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-factual-accuracy.html?context=cdpaas&locale=en | Generating accurate output | Generating accurate output
Foundation models sometimes generate output that is not factually accurate. If factual accuracy is important for your project, set yourself up for success by learning how and why these models might sometimes get facts wrong and how you can ground generated output in correct facts.
Why foundation models get facts wrong
Foundation models can get facts wrong for a few reasons:
* Pre-training builds word associations, not facts
* Pre-training data sets contain out-of-date facts
* Pre-training data sets do not contain esoteric or domain-specific facts and jargon
* Sampling decoding is more likely to stray from the facts
Pre-training builds word associations, not facts
During pre-training, a foundation model builds up a vocabulary of words ([tokens](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html)) encountered in the pre-training data sets. Also during pre-training, statistical relationships between those words become encoded in the model weights.
For example, "Mount Everest" often appears near "tallest mountain in the world" in many articles, books, speeches, and other common pre-training sources. As a result, a pre-trained model will probably correctly complete the prompt "The tallest mountain in the world is " with the output "Mount Everest."
These word associations can make it seem that facts have been encoded into these models too. For very common knowledge and immutable facts, you might have good luck generating factually accurate output using pre-trained foundation models with simple prompts like the tallest-mountain example. However, it is a risky strategy to rely on only pre-trained word associations when using foundation models in applications where accuracy matters.
Pre-training data sets contain out-of-date facts
Collecting pre-training data sets and performing pre-training runs can take a significant amount of time, sometimes months. If a model was pre-trained on a data set from several years ago, the model vocabulary and word associations encoded in the model weights won't reflect current world events or newly popular themes. For this reason, if you submit the prompt "The most recent winner of the world cup of football (soccer) is " to a model pre-trained on information a few years old, the generated output will be out of date.
Pre-training data sets do not contain esoteric or domain-specific facts and jargon
Common foundation model pre-training data sets, such as [The Pile (Wikipedia)](https://en.wikipedia.org/wiki/The_Pile_%28dataset%29), contain hundreds of millions of documents. Given how famous Mount Everest is, it's reasonable to expect a foundation model to have encoded a relationship between "tallest mountain in the world" and "Mount Everest". However, if a phenomenon, person, or concept is mentioned in only a handful of articles, chances are slim that a foundation model would have any word associations about that topic encoded in its weights. Prompting a pre-trained model about information that was not in its pre-training data sets is unlikely to produce factually accurate generated output.
Sampling decoding is more likely to stray from the facts
Decoding is the process a model uses to choose the words (tokens) in the generated output:
* Greedy decoding always selects the token with the highest probability
* Sampling decoding selects tokens pseudo-randomly from a probability distribution
Greedy decoding generates output that is more predictable and more repetitive. Sampling decoding is more random, which feels "creative". If, based on pre-training data sets, the most likely words to follow "The tallest mountain is " are "Mount Everest", then greedy decoding could reliably generate that factually correct output, whereas sampling decoding might sometimes generate the name of some other mountain or something that's not even a mountain.
How to ground generated output in correct facts
Rather than relying on only pre-trained word associations for factual accuracy, provide context in your prompt text.
Use context in your prompt text to establish facts
When you prompt a foundation model to generate output, the words (tokens) in the generated output are influenced by the words in the model vocabulary and the words in the prompt text. You can use your prompt text to boost factually accurate word associations.
Example 1
Here's a prompt to cause a model to complete a sentence declaring your favorite color:
My favorite color is
Given that only you know what your favorite color is, there's no way the model could reliably generate the correct output.
Instead, a color will be selected from colors mentioned in the model's pre-training data:
* If greedy decoding is used, whichever color appears most frequently with statements about favorite colors in pre-training content will be selected.
* If sampling decoding is used, a color will be selected randomly from colors mentioned most often as favorites in the pre-training content.
Example 2
Here's a prompt that includes context to establish the facts:
I recently painted my kitchen yellow, which is my favorite color.
My favorite color is
If you prompt a model with text that includes factually accurate context like this, then the output the model generates will be more likely to be accurate.
For more examples of including context in your prompt, see these samples:
* [Sample 4a - Answer a question based on an article](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#sample4a)
* [Sample 4b - Answer a question based on an article](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#sample4b)
Use less "creative" decoding
When you include context with the needed facts in your prompt, using greedy decoding is likely to generate accurate output. If you need some variety in the output, you can experiment with sampling decoding with low values for parameters like Temperature, Top P, and Top K. However, using sampling decoding increases the risk of inaccurate output.
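As an illustration, the following sketch combines both techniques by using the foundation models Python library: the prompt carries the needed context, and greedy decoding keeps the output close to that context. The model ID, credentials, and project ID are placeholders; adjust them for your environment.

from ibm_watson_machine_learning.foundation_models import Model

model = Model(
    model_id="google/flan-ul2",  # example model ID; choose a model that fits your task
    credentials={
        "url": "https://us-south.ml.cloud.ibm.com",
        "apikey": "<my-IBM-Cloud-API-key>",
    },
    params={
        "decoding_method": "greedy",  # deterministic decoding for fact-based output
        "max_new_tokens": 20,
    },
    project_id="<my-project-id>",
)

prompt = (
    "I recently painted my kitchen yellow, which is my favorite color.\n"
    "My favorite color is"
)
print(model.generate_text(prompt=prompt))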
Retrieval-augmented generation
The retrieval-augmented generation pattern scales out the technique of pulling context into prompts. If you have a knowledge base, such as process documentation in web pages, legal contracts in PDF files, a database of products for sale, a GitHub repository of C++ code files, or any other collection of information, you can use the retrieval-augmented generation pattern to generate factually accurate output based on the information in that knowledge base.
Retrieval-augmented generation involves three basic steps, sketched in the example after this list:
1. Search for relevant content in your knowledge base
2. Pull the most relevant content into your prompt as context
3. Send the combined prompt text to the model to generate output
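The following sketch shows the three steps end to end. It uses a deliberately naive keyword-overlap search so that it stays self-contained; production solutions typically use a vector index or search service. The model ID, credentials, and project ID are placeholders.

from ibm_watson_machine_learning.foundation_models import Model

# A tiny stand-in for a real knowledge base
knowledge_base = [
    "Our return policy allows refunds within 30 days of purchase.",
    "Support hours are Monday to Friday, 9am to 5pm Eastern time.",
    "The warranty covers manufacturing defects for one year.",
]

def retrieve(question, documents, top_n=1):
    # Step 1: search for relevant content (naive keyword overlap)
    scored = []
    for doc in documents:
        overlap = len(set(question.lower().split()) & set(doc.lower().split()))
        scored.append((overlap, doc))
    scored.sort(reverse=True)
    return [doc for _, doc in scored[:top_n]]

question = "How many days do I have to return a product?"

# Step 2: pull the most relevant content into the prompt as context
context = "\n".join(retrieve(question, knowledge_base))
prompt = (
    f"Article:\n{context}\n\n"
    "Answer the question using only the article. If the answer is not in the article, say \"I don't know\".\n"
    f"Question: {question}\nAnswer:"
)

# Step 3: send the combined prompt text to the model to generate output
model = Model(
    model_id="google/flan-ul2",  # example model ID
    credentials={"url": "https://us-south.ml.cloud.ibm.com", "apikey": "<my-IBM-Cloud-API-key>"},
    params={"decoding_method": "greedy", "max_new_tokens": 50},
    project_id="<my-project-id>",
)
print(model.generate_text(prompt=prompt))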
For more information, see: [Retrieval-augmented generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-rag.html)
Parent topic:[Prompt tips](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-tips.html)
| # Generating accurate output #
Foundation models sometimes generate output that is not factually accurate\. If factual accuracy is important for your project, set yourself up for success by learning how and why these models might sometimes get facts wrong and how you can ground generated output in correct facts\.
## Why foundation models get facts wrong ##
Foundation models can get facts wrong for a few reasons:
<!-- <ul> -->
* Pre\-training builds word associations, not facts
* Pre\-training data sets contain out\-of\-date facts
* Pre\-training data sets do not contain esoteric or domain\-specific facts and jargon
* Sampling decoding is more likely to stray from the facts
<!-- </ul> -->
### Pre\-training builds word associations, not facts ###
During pre\-training, a foundation model builds up a vocabulary of words ([tokens](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html)) encountered in the pre\-training data sets\. Also during pre\-training, statistical relationships between those words become encoded in the model weights\.
For example, "Mount Everest" often appears near "tallest mountain in the world" in many articles, books, speeches, and other common pre\-training sources\. As a result, a pre\-trained model will probably correctly complete the prompt "The tallest mountain in the world is " with the output "Mount Everest\."
These word associations can make it seem that facts have been encoded into these models too\. For very common knowledge and immutable facts, you might have good luck generating factually accurate output using pre\-trained foundation models with simple prompts like the tallest\-mountain example\. However, it is a risky strategy to rely on only pre\-trained word associations when using foundation models in applications where accuracy matters\.
### Pre\-training data sets contain out\-of\-date facts ###
Collecting pre\-training data sets and performing pre\-training runs can take a significant amount of time, sometimes months\. If a model was pre\-trained on a data set from several years ago, the model vocabulary and word associations encoded in the model weights won't reflect current world events or newly popular themes\. For this reason, if you submit the prompt "The most recent winner of the world cup of football (soccer) is " to a model pre\-trained on information a few years old, the generated output will be out of date\.
### Pre\-training data sets do not contain esoteric or domain\-specific facts and jargon ###
Common foundation model pre\-training data sets, such as [The Pile (Wikipedia)](https://en.wikipedia.org/wiki/The_Pile_%28dataset%29), contain hundreds of millions of documents\. Given how famous Mount Everest is, it's reasonable to expect a foundation model to have encoded a relationship between "tallest mountain in the world" and "Mount Everest"\. However, if a phenomenon, person, or concept is mentioned in only a handful of articles, chances are slim that a foundation model would have any word associations about that topic encoded in its weights\. Prompting a pre\-trained model about information that was not in its pre\-training data sets is unlikely to produce factually accurate generated output\.
### Sampling decoding is more likely to stray from the facts ###
Decoding is the process a model uses to choose the words (tokens) in the generated output:
<!-- <ul> -->
* Greedy decoding always selects the token with the highest probability
* Sampling decoding selects tokens pseudo\-randomly from a probability distribution
<!-- </ul> -->
Greedy decoding generates output that is more predictable and more repetitive\. Sampling decoding is more random, which feels "creative"\. If, based on pre\-training data sets, the most likely words to follow "The tallest mountain is " are "Mount Everest", then greedy decoding could reliably generate that factually correct output, whereas sampling decoding might sometimes generate the name of some other mountain or something that's not even a mountain\.
## How to ground generated output in correct facts ##
Rather than relying on only pre\-trained word associations for factual accuracy, provide context in your prompt text\.
### Use context in your prompt text to establish facts ###
When you prompt a foundation model to generate output, the words (tokens) in the generated output are influenced by the words in the model vocabulary and the words in the prompt text\. You can use your prompt text to boost factually accurate word associations\.
#### Example 1 ####
Here's a prompt to cause a model to complete a sentence declaring your favorite color:
My favorite color is
Given that only you know what your favorite color is, there's no way the model could reliably generate the correct output\.
Instead, a color will be selected from colors mentioned in the model's pre\-training data:
<!-- <ul> -->
* If greedy decoding is used, whichever color appears most frequently with statements about favorite colors in pre\-training content will be selected\.
* If sampling decoding is used, a color will be selected randomly from colors mentioned most often as favorites in the pre\-training content\.
<!-- </ul> -->
#### Example 2 ####
Here's a prompt that includes context to establish the facts:
I recently painted my kitchen yellow, which is my favorite color.
My favorite color is
If you prompt a model with text that includes factually accurate context like this, then the output the model generates will be more likely to be accurate\.
For more examples of including context in your prompt, see these samples:
<!-- <ul> -->
* [Sample 4a \- Answer a question based on an article](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#sample4a)
* [Sample 4b \- Answer a question based on an article](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#sample4b)
<!-- </ul> -->
### Use less "creative" decoding ###
When you include context with the needed facts in your prompt, using greedy decoding is likely to generate accurate output\. If you need some variety in the output, you can experiment with sampling decoding with low values for parameters like `Temperature`, `Top P`, and `Top K`\. However, using sampling decoding increases the risk of inaccurate output\.
## Retrieval\-augmented generation ##
The retrieval\-augmented generation pattern scales out the technique of pulling context into prompts\. If you have a knowledge base, such as process documentation in web pages, legal contracts in PDF files, a database of products for sale, a GitHub repository of C\+\+ code files, or any other collection of information, you can use the retrieval\-augmented generation pattern to generate factually accurate output based on the information in that knowledge base\.
Retrieval\-augmented generation involves three basic steps:
<!-- <ol> -->
1. Search for relevant content in your knowledge base
2. Pull the most relevant content into your prompt as context
3. Send the combined prompt text to the model to generate output
<!-- </ol> -->
For more information, see: [Retrieval\-augmented generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-rag.html)
**Parent topic:**[Prompt tips](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-tips.html)
<!-- </article "role="article" "> -->
|
E59B59312D1EB3B2BA78D7E78993883BB3784C2B | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-hallucinations.html?context=cdpaas&locale=en | Techniques for avoiding undesirable output | Techniques for avoiding undesirable output
Every foundation model has the potential to generate output that includes incorrect or even harmful content. Understand the types of undesirable output that can be generated, the reasons for the undesirable output, and steps that you can take to reduce the risk of harm.
The foundation models that are available in IBM watsonx.ai can generate output that contains hallucinations, personal information, hate speech, abuse, profanity, and bias. The following techniques can help reduce the risk, but do not guarantee that generated output will be free of undesirable content.
Find techniques to help you avoid the following types of undesirable content in foundation model output:
* [Hallucinations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-hallucinations.html?context=cdpaas&locale=en#hallucinations)
* [Personal information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-hallucinations.html?context=cdpaas&locale=en#personal-info)
* [Hate speech, abuse, and profanity](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-hallucinations.html?context=cdpaas&locale=en#hap)
* [Bias](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-hallucinations.html?context=cdpaas&locale=en#bias)
Hallucinations
When a foundation model generates off-topic, repetitive, or incorrect content or fabricates details, that behavior is sometimes called hallucination.
Off-topic hallucinations can happen because of pseudo-randomness in the decoding of the generated output. In the best cases, that randomness can result in wonderfully creative output. But randomness can also result in nonsense output that is not useful.
The model might return hallucinations in the form of fabricated details when it is prompted to generate text, but is not given enough related text to draw upon. If you include correct details in the prompt, for example, the model is less likely to hallucinate and make up details.
Techniques for avoiding hallucinations
To avoid hallucinations, test one or more of these techniques:
* Choose a model with pretraining and fine-tuning that matches your domain and the task you are doing.
* Provide context in your prompt.
If you instruct a foundation model to generate text on a subject that is not common in its pretraining data and you don't add information about the subject to the prompt, the model is more likely to hallucinate.
* Specify conservative values for the Min tokens and Max tokens parameters and specify one or more stop sequences.
When you specify a high value for the Min tokens parameter, you can force the model to generate a longer response than the model would naturally return for a prompt. The model is more likely to hallucinate as it adds words to the output to reach the required limit.
* For use cases that don't require much creativity in the generated output, use greedy decoding. If you prefer to use sampling decoding, be sure to specify conservative values for the temperature, top-p, and top-k parameters. A sketch of such conservative settings follows this list.
* To reduce repetitive text in the generated output, try increasing the repetition penalty parameter.
* If you see repetitive text in the generated output when you use greedy decoding, and if some creativity is acceptable for your use case, then try using sampling decoding instead. Be sure to set moderately low values for the temperature, top-p, and top-k parameters.
* In your prompt, instruct the model what to do when it has no confident or high-probability answer.
For example, in a question-answering scenario, you can include the instruction: If the answer is not in the article, say “I don't know”.
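For example, here is a sketch of conservative settings by using the foundation models Python library. The parameter values are illustrative starting points, not recommendations, and the model ID, credentials, and project ID are placeholders.

from ibm_watson_machine_learning.foundation_models import Model

conservative_params = {
    "decoding_method": "sample",
    "temperature": 0.2,          # low temperature limits randomness
    "top_p": 0.5,
    "top_k": 10,
    "min_new_tokens": 1,         # do not force the model to pad its answer
    "max_new_tokens": 60,
    "stop_sequences": ["\n\n"],  # stop at the first blank line
}

model = Model(
    model_id="google/flan-ul2",  # example model ID
    credentials={"url": "https://us-south.ml.cloud.ibm.com", "apikey": "<my-IBM-Cloud-API-key>"},
    params=conservative_params,
    project_id="<my-project-id>",
)

prompt = (
    "Answer the question using only the article. "
    "If the answer is not in the article, say \"I don't know\".\n\n"
    "Article: <article text>\n\nQuestion: <question>\nAnswer:"
)
print(model.generate_text(prompt=prompt))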
Personal information
A foundation model's vocabulary is formed from words in its pretraining data. If pretraining data includes web pages that are scraped from the internet, the model's vocabulary might contain the following types of information:
* Names of article authors
* Contact information from company websites
* Personal information from questions and comments that are posted in open community forums
If you use a foundation model to generate text for part of an advertising email, the generated content might include contact information for another company!
If you ask a foundation model to write a paper with citations, the model might include references that look legitimate but aren't. It might even attribute those made-up references to real authors from the correct field. A foundation model is likely to generate imitation citations, correct in form but not grounded in facts, because the models are good at stringing together words (including names) that have a high probability of appearing together. The fact that the model lends the output a touch of legitimacy, by including the names of real people as authors in citations, makes this form of hallucination compelling and believable. It also makes this form of hallucination dangerous. People can get into trouble if they believe that the citations are real. Not to mention the harm that can come to people who are listed as authors of works they did not write.
Techniques for excluding personal information
To exclude personal information, try these techniques:
* In your prompt, instruct the model to refrain from mentioning names, contact details, or personal information.
For example, when you prompt a model to generate an advertising email, instruct the model to include your company name and phone number. Also, instruct the model to “include no other company or personal information”.
* In your larger application, pipeline, or solution, post-process the content that is generated by the foundation model to find and remove personal information, as in the sketch after this list.
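As a simple illustration of the post-processing approach, the following sketch removes strings that look like email addresses or phone numbers from generated text. Real solutions often combine patterns like these with a dedicated PII-detection service.

import re

EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_PATTERN = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub_personal_info(generated_text):
    # Replace anything that looks like an email address or phone number
    text = EMAIL_PATTERN.sub("[removed]", generated_text)
    text = PHONE_PATTERN.sub("[removed]", text)
    return text

print(scrub_personal_info("Contact Jane at jane.doe@example.com or +1 555 123 4567."))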
Hate speech, abuse, and profanity
As with personal information, when pretraining data includes hateful or abusive terms or profanity, a foundation model that is trained on that data has those problematic terms in its vocabulary. If inappropriate language is in the model's vocabulary, the foundation model might generate text that includes undesirable content.
When you use foundation models to generate content for your business, you must do the following things:
* Recognize that this kind of output is always possible.
* Take steps to reduce the likelihood of triggering the model to produce this kind of harmful output.
* Build human review and verification processes into your solutions.
Techniques for reducing the risk of hate speech, abuse, and profanity
To avoid hate speech, abuse, and profanity, test one or more of these techniques:
* In the Prompt Lab, set the AI guardrails switch to On. When this feature is enabled, any sentence in the input prompt or generated output that contains harmful language is replaced with a message that says that potentially harmful text was removed.
* Do not include hate speech, abuse, or profanity in your prompt to prevent the model from responding in kind.
* In your prompt, instruct the model to use clean language.
For example, depending on the tone you need for the output, instruct the model to use “formal”, “professional”, “PG”, or “friendly” language.
* In your larger application, pipeline, or solution, post-process the content that is generated by the foundation model to remove undesirable content.
Reducing the risk of bias in model output
During pretraining, a foundation model learns the statistical probability that certain words follow other words based on how those words appear in the training data. Any bias in the training data is trained into the model.
For example, if the training data more frequently refers to doctors as men and nurses as women, that bias is likely to be reflected in the statistical relationships between those words in the model. As a result, the model is likely to generate output that more frequently refers to doctors as men and nurses as women. Sometimes, people believe that algorithms can be more fair and unbiased than humans because the algorithms are “just using math to decide”. But bias in training data is reflected in content that is generated by foundation models that are trained on that data.
Techniques for reducing bias
It is difficult to debias output that is generated by a foundation model that was pretrained on biased data. However, you might improve results by including content in your prompt to counter bias that might apply to your use case.
For example, instead of instructing a model to “list heart attack symptoms”, you might instruct the model to “list heart attack symptoms, including symptoms common for men and symptoms common for women”.
Parent topic:[Prompt tips](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-tips.html)
| # Techniques for avoiding undesirable output #
Every foundation model has the potential to generate output that includes incorrect or even harmful content\. Understand the types of undesirable output that can be generated, the reasons for the undesirable output, and steps that you can take to reduce the risk of harm\.
The foundation models that are available in IBM watsonx\.ai can generate output that contains hallucinations, personal information, hate speech, abuse, profanity, and bias\. The following techniques can help reduce the risk, but do not guarantee that generated output will be free of undesirable content\.
Find techniques to help you avoid the following types of undesirable content in foundation model output:
<!-- <ul> -->
* [Hallucinations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-hallucinations.html?context=cdpaas&locale=en#hallucinations)
* [Personal information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-hallucinations.html?context=cdpaas&locale=en#personal-info)
* [Hate speech, abuse, and profanity](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-hallucinations.html?context=cdpaas&locale=en#hap)
* [Bias](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-hallucinations.html?context=cdpaas&locale=en#bias)
<!-- </ul> -->
## Hallucinations ##
When a foundation model generates off\-topic, repetitive, or incorrect content or fabricates details, that behavior is sometimes called *hallucination*\.
Off\-topic hallucinations can happen because of pseudo\-randomness in the decoding of the generated output\. In the best cases, that randomness can result in wonderfully creative output\. But randomness can also result in nonsense output that is not useful\.
The model might return hallucinations in the form of fabricated details when it is prompted to generate text, but is not given enough related text to draw upon\. If you include correct details in the prompt, for example, the model is less likely to hallucinate and make up details\.
### Techniques for avoiding hallucinations ###
To avoid hallucinations, test one or more of these techniques:
<!-- <ul> -->
* Choose a model with pretraining and fine\-tuning that matches your domain and the task you are doing\.
* Provide context in your prompt\.
If you instruct a foundation model to generate text on a subject that is not common in its pretraining data and you don't add information about the subject to the prompt, the model is more likely to hallucinate.
* Specify conservative values for the Min tokens and Max tokens parameters and specify one or more stop sequences\.
When you specify a high value for the Min tokens parameter, you can force the model to generate a longer response than the model would naturally return for a prompt. The model is more likely to hallucinate as it adds words to the output to reach the required limit.
* For use cases that don't require much creativity in the generated output, use greedy decoding\. If you prefer to use sampling decoding, be sure to specify conservative values for the temperature, top\-p, and top\-k parameters\.
* To reduce repetitive text in the generated output, try increasing the repetition penalty parameter\.
* If you see repetitive text in the generated output when you use greedy decoding, and if some creativity is acceptable for your use case, then try using sampling decoding instead\. Be sure to set moderately low values for the temperature, top\-p, and top\-k parameters\.
* In your prompt, instruct the model what to do when it has no confident or high\-probability answer\.
For example, in a question-answering scenario, you can include the instruction: `If the answer is not in the article, say “I don't know”.`
<!-- </ul> -->
## Personal information ##
A foundation model's vocabulary is formed from words in its pretraining data\. If pretraining data includes web pages that are scraped from the internet, the model's vocabulary might contain the following types of information:
<!-- <ul> -->
* Names of article authors
* Contact information from company websites
* Personal information from questions and comments that are posted in open community forums
<!-- </ul> -->
If you use a foundation model to generate text for part of an advertising email, the generated content might include contact information for another company\!
If you ask a foundation model to write a paper with citations, the model might include references that look legitimate but aren't\. It might even attribute those made\-up references to real authors from the correct field\. A foundation model is likely to generate imitation citations, correct in form but not grounded in facts, because the models are good at stringing together words (including names) that have a high probability of appearing together\. The fact that the model lends the output a touch of legitimacy, by including the names of real people as authors in citations, makes this form of hallucination compelling and believable\. It also makes this form of hallucination dangerous\. People can get into trouble if they believe that the citations are real\. Not to mention the harm that can come to people who are listed as authors of works they did not write\.
### Techniques for excluding personal information ###
To exclude personal information, try these techniques:
<!-- <ul> -->
* In your prompt, instruct the model to refrain from mentioning names, contact details, or personal information\.
For example, when you prompt a model to generate an advertising email, instruct the model to include your company name and phone number. Also, instruct the model to “include no other company or personal information”.
* In your larger application, pipeline, or solution, post\-process the content that is generated by the foundation model to find and remove personal information\.
<!-- </ul> -->
## Hate speech, abuse, and profanity ##
As with personal information, when pretraining data includes hateful or abusive terms or profanity, a foundation model that is trained on that data has those problematic terms in its vocabulary\. If inappropriate language is in the model's vocabulary, the foundation model might generate text that includes undesirable content\.
When you use foundation models to generate content for your business, you must do the following things:
<!-- <ul> -->
* Recognize that this kind of output is always possible\.
* Take steps to reduce the likelihood of triggering the model to produce this kind of harmful output\.
* Build human review and verification processes into your solutions\.
<!-- </ul> -->
### Techniques for reducing the risk of hate speech, abuse, and profanity ###
To avoid hate speech, abuse, and profanity, test one or more of these techniques:
<!-- <ul> -->
* In the Prompt Lab, set the **AI guardrails** switch to On\. When this feature is enabled, any sentence in the input prompt or generated output that contains harmful language is replaced with a message that says that potentially harmful text was removed\.
* Do not include hate speech, abuse, or profanity in your prompt to prevent the model from responding in kind\.
* In your prompt, instruct the model to use clean language\.
For example, depending on the tone you need for the output, instruct the model to use “formal”, “professional”, “PG”, or “friendly” language.
* In your larger application, pipeline, or solution, post\-process the content that is generated by the foundation model to remove undesirable content\.
<!-- </ul> -->
## Reducing the risk of bias in model output ##
During pretraining, a foundation model learns the statistical probability that certain words follow other words based on how those words appear in the training data\. Any bias in the training data is trained into the model\.
For example, if the training data more frequently refers to doctors as men and nurses as women, that bias is likely to be reflected in the statistical relationships between those words in the model\. As a result, the model is likely to generate output that more frequently refers to doctors as men and nurses as women\. Sometimes, people believe that algorithms can be more fair and unbiased than humans because the algorithms are “just using math to decide”\. But bias in training data is reflected in content that is generated by foundation models that are trained on that data\.
### Techniques for reducing bias ###
It is difficult to debias output that is generated by a foundation model that was pretrained on biased data\. However, you might improve results by including content in your prompt to counter bias that might apply to your use case\.
For example, instead of instructing a model to “list heart attack symptoms”, you might instruct the model to “list heart attack symptoms, including symptoms common for men and symptoms common for women”\.
**Parent topic:**[Prompt tips](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-tips.html)
<!-- </article "role="article" "> -->
|
120CAE8361AE4E0B6FE4D6F0D32EEE9517F11190 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-model-choose.html?context=cdpaas&locale=en | Choosing a foundation model in watsonx.ai | Choosing a foundation model in watsonx.ai
To determine which models might work well for your project, consider model attributes, such as license, pretraining data, model size, and how the model was fine-tuned. After you have a short list of models that best fit your use case, systematically test the models to see which ones consistently return the results you want.
Table 1. Considerations for choosing a foundation model in IBM watsonx.ai
Model attribute Considerations
Context length Sometimes called context window length, context window, or maximum sequence length, context length is the maximum allowed value for the number of tokens in the input prompt plus the number of tokens in the generated output. When you generate output with models in watsonx.ai, the number of tokens in the generated output is limited by the Max tokens parameter. For some models, the token length of model output for Lite plans is limited by a dynamic, model-specific, environment-driven upper limit.
Cost The cost of using foundation models is measured in resource units. The price of a resource unit is based on the rate of the billing class for the foundation model.
Fine-tuning After being pretrained, many foundation models are fine-tuned for specific tasks, such as classification, information extraction, summarization, responding to instructions, answering questions, or participating in a back-and-forth dialog chat. A model that was fine-tuned on tasks similar to your planned use typically performs better with zero-shot prompts than models that were not fine-tuned in a way that fits your use case. One way to improve results for a fine-tuned model is to structure your prompt in the same format as prompts in the data sets that were used to fine-tune that model.
Instruction-tuned Instruction-tuned means that the model was fine-tuned with prompts that include an instruction. When a model is instruction-tuned, it typically responds well to prompts that have an instruction even if those prompts don't have examples.
IP indemnity In addition to license terms, review the intellectual property indemnification policy for the model. Some foundation model providers require you to exempt them from liability for any IP infringement that might result from the use of their AI models. For information about contractual protections related to IBM watsonx.ai, see the [IBM watsonx.ai service description](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747).
License In general, each foundation model comes with a different license that limits how the model can be used. Review model licenses to make sure that you can use a model for your planned solution.
Model architecture The architecture of the model influences how the model behaves. A transformer-based model typically has one of the following architectures: <br>* Encoder-only: Understands input text at the sentence level by transforming input sequences into representational vectors called embeddings. Common tasks for encoder-only models include classification and entity extraction. <br>* Decoder-only: Generates output text word-by-word by inference from the input sequence. Common tasks for decoder-only models include generating text and answering questions. <br>* Encoder-decoder: Both understands input text and generates output text based on the input text. Common tasks for encoder-decoder models include translation and summarization.
Regional availability You can work with models that are available in the same IBM Cloud regional data center as your watsonx services.
Supported natural languages Many foundation models work well in English only. But some model creators include multiple languages in the pretraining data sets to fine-tune their model on tasks in different languages, and to test their model's performance in multiple languages. If you plan to build a solution for a global audience or a solution that does translation tasks, look for models that were created with multilingual support in mind.
Supported programming languages Not all foundation models work well for programming use cases. If you are planning to create a solution that summarizes, converts, generates, or otherwise processes code, review which programming languages were included in a model's pretraining data sets and fine-tuning activities to determine whether that model is a fit for your use case.
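After you have a shortlist, a short script can help you compare the models systematically by sending the same prompt and parameters to each one. The following sketch assumes the foundation models Python library; the model IDs, credentials, and project ID are example placeholders.

from ibm_watson_machine_learning.foundation_models import Model

credentials = {"url": "https://us-south.ml.cloud.ibm.com", "apikey": "<my-IBM-Cloud-API-key>"}
shortlist = ["google/flan-ul2", "google/flan-t5-xxl-11b"]  # example shortlist; substitute your own candidates

prompt = "Summarize the following customer message in one sentence:\n<message text>\n\nSummary:"

for model_id in shortlist:
    model = Model(
        model_id=model_id,
        credentials=credentials,
        params={"decoding_method": "greedy", "max_new_tokens": 50},
        project_id="<my-project-id>",
    )
    # Compare how each shortlisted model handles the same prompt and parameters
    print(model_id, "->", model.generate_text(prompt=prompt))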
Learn more
* [Tokens and tokenization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html)
* [Model parameters for prompting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-model-parameters.html)
* [Prompt tips](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-tips.html)
* [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)
* [Regional availability for foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/regional-datactr.html#data-centers)
Parent topic:[Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html)
| # Choosing a foundation model in watsonx\.ai #
To determine which models might work well for your project, consider model attributes, such as license, pretraining data, model size, and how the model was fine\-tuned\. After you have a short list of models that best fit your use case, systematically test the models to see which ones consistently return the results you want\.
<!-- <table> -->
Table 1\. Considerations for choosing a foundation model in IBM watsonx\.ai
| Model attribute | Considerations |
| ------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Context length | Sometimes called *context window length*, *context window*, or *maximum sequence length*, context length is the maximum allowed value for the number of tokens in the input prompt plus the number of tokens in the generated output\. When you generate output with models in watsonx\.ai, the number of tokens in the generated output is limited by the Max tokens parameter\. For some models, the token length of model output for Lite plans is limited by a dynamic, model\-specific, environment\-driven upper limit\. |
| Cost | The cost of using foundation models is measured in resource units\. The price of a resource unit is based on the rate of the billing class for the foundation model\. |
| Fine\-tuning | After being pretrained, many foundation models are fine\-tuned for specific tasks, such as classification, information extraction, summarization, responding to instructions, answering questions, or participating in a back\-and\-forth dialog chat\. A model that was fine\-tuned on tasks similar to your planned use typically performs better with zero\-shot prompts than models that were not fine\-tuned in a way that fits your use case\. One way to improve results for a fine\-tuned model is to structure your prompt in the same format as prompts in the data sets that were used to fine\-tune that model\. |
| Instruction\-tuned | *Instruction\-tuned* means that the model was fine\-tuned with prompts that include an instruction\. When a model is instruction\-tuned, it typically responds well to prompts that have an instruction even if those prompts don't have examples\. |
| IP indemnity | In addition to license terms, review the intellectual property indemnification policy for the model\. Some foundation model providers require you to exempt them from liability for any IP infringement that might result from the use of their AI models\. For information about contractual protections related to IBM watsonx\.ai, see the [IBM watsonx\.ai service description](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747)\. |
| License | In general, each foundation model comes with a different license that limits how the model can be used\. Review model licenses to make sure that you can use a model for your planned solution\. |
| Model architecture | The architecture of the model influences how the model behaves\. A transformer\-based model typically has one of the following architectures: <br>• *Encoder\-only*: Understands input text at the sentence level by transforming input sequences into representational vectors called embeddings\. Common tasks for encoder\-only models include classification and entity extraction\. <br>• *Decoder\-only*: Generates output text word\-by\-word by inference from the input sequence\. Common tasks for decoder\-only models include generating text and answering questions\. <br>• *Encoder\-decoder*: Both understands input text and generates output text based on the input text\. Common tasks for encoder\-decoder models include translation and summarization\. |
| Regional availability | You can work with models that are available in the same IBM Cloud regional data center as your watsonx services\. |
| Supported natural languages | Many foundation models work well in English only\. But some model creators include multiple languages in the pretraining data sets to fine\-tune their model on tasks in different languages, and to test their model's performance in multiple languages\. If you plan to build a solution for a global audience or a solution that does translation tasks, look for models that were created with multilingual support in mind\. |
| Supported programming languages | Not all foundation models work well for programming use cases\. If you are planning to create a solution that summarizes, converts, generates, or otherwise processes code, review which programming languages were included in a model's pretraining data sets and fine\-tuning activities to determine whether that model is a fit for your use case\. |
<!-- </table ""> -->
## Learn more ##
<!-- <ul> -->
* [Tokens and tokenization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html)
* [Model parameters for prompting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-model-parameters.html)
* [Prompt tips](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-tips.html)
* [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)
* [Regional availability for foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/regional-datactr.html#data-centers)
<!-- </ul> -->
**Parent topic:**[Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html)
<!-- </article "role="article" "> -->
|
42AE491240EF740E6A8C5CF32B817E606F554E49 | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-model-parameters.html?context=cdpaas&locale=en | Foundation model parameters: decoding and stopping criteria | Foundation model parameters: decoding and stopping criteria
You can specify parameters to control how the model generates output in response to your prompt. This topic lists parameters that you can control in the Prompt Lab.
Decoding
Decoding is the process a model uses to choose the tokens in the generated output.
Greedy decoding selects the token with the highest probability at each step of the decoding process. Greedy decoding produces output that closely matches the most common language in the model's pretraining data and in your prompt text, which is desirable in less creative or fact-based use cases. A weakness of greedy decoding is that it can cause repetitive loops in the generated output.
Sampling decoding is more variable, more random than greedy decoding. Variability and randomness is desirable in creative use cases. However, with greater variability comes the risk of nonsensical output. Sampling decoding selects tokens from a probability distribution at each step:
* Temperature sampling controls how strongly token selection favors high-probability tokens: lower temperatures make high-probability tokens more likely to be chosen, and higher temperatures give lower-probability tokens a better chance.
* Top-k sampling refers to selecting the next token randomly from a specified number, k, of tokens with the highest probabilities.
* Top-p sampling refers to selecting the next token randomly from the smallest set of tokens for which the cumulative probability exceeds a specified value, p. (Top-p sampling is also called nucleus sampling.)
You can specify values for both Top K and Top P. When both parameters are used, Top K is applied first. When Top P is computed, any tokens below the cutoff set by Top K are considered to have a probability of zero.
Table 1. Supported values, defaults, and usage notes for sampling decoding
Parameter Supported values Default Use
Temperature Floating-point number in the range 0.0 (same as greedy decoding) to 2.0 (maximum creativity) 0.7 Higher values lead to greater variability
Top K Integer in the range 1 to 100 50 Higher values lead to greater variability
Top P Floating-point number in the range 0.0 to 1.0 1.0 Higher values lead to greater variability
Random seed
When you submit the same prompt to a model multiple times with sampling decoding, you'll usually get back different generated text each time. This variability is the result of intentional pseudo-randomness built into the decoding process. Random seed refers to the number used to generate that pseudo-random behavior.
* Supported values: Integer in the range 1 to 4 294 967 295
* Default: Generated based on the current server system time
* Use: To produce repeatable results, set the same random seed value every time.
Repetition penalty
If you notice the result generated for your chosen prompt, model, and parameters consistently contains repetitive text, you can try adding a repetition penalty.
* Supported values: Floating-point number in the range 1.0 (no penalty) to 2.0 (maximum penalty)
* Default: 1.0
* Use: The higher the penalty, the less likely it is that the result will include repeated text.
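The following sketch brings these decoding parameters together by using the foundation models Python library. The parameter keys follow the service's generation-parameter names, the values are illustrative, and the model ID, credentials, and project ID are placeholders.

from ibm_watson_machine_learning.foundation_models import Model

sampling_params = {
    "decoding_method": "sample",
    "temperature": 0.7,
    "top_k": 50,
    "top_p": 1.0,
    "random_seed": 42,          # a fixed seed makes sampling repeatable
    "repetition_penalty": 1.1,  # mildly discourage repeated text
}

model = Model(
    model_id="google/flan-ul2",  # example model ID
    credentials={"url": "https://us-south.ml.cloud.ibm.com", "apikey": "<my-IBM-Cloud-API-key>"},
    params=sampling_params,
    project_id="<my-project-id>",
)
print(model.generate_text(prompt="Write a two-sentence description of a mountain sunrise."))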
Stopping criteria
You can affect the length of the output generated by the model in two ways: specifying stop sequences and setting Min tokens and Max tokens. Text generation stops after the model considers the output to be complete, a stop sequence is generated, or the maximum token limit is reached.
Stop sequences
A stop sequence is a string of one or more characters. If you specify stop sequences, the model will automatically stop generating output after one of the stop sequences that you specify appears in the generated output. For example, one way to cause a model to stop generating output after just one sentence is to specify a period as a stop sequence. That way, after the model generates the first sentence and ends it with a period, output generation stops. Choosing effective stop sequences depends on your use case and the nature of the generated output you expect.
Supported values: 0 to 6 strings, each no longer than 40 tokens
Default: No stop sequence
Use:
* Stop sequences are ignored until after the number of tokens that are specified in the Min tokens parameter are generated.
* If your prompt includes examples of input-output pairs, ensure the sample output in the examples ends with one of the stop sequences.
Minimum and maximum new tokens
If you're finding the output from the model is too short or too long, try adjusting the parameters that control the number of generated tokens:
* The Min tokens parameter controls the minimum number of tokens in the generated output
* The Max tokens parameter controls the maximum number of tokens in the generated output
The maximum number of tokens that are allowed in the output differs by model. For more information, see the Maximum tokens information in [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html).
Defaults:
* Min tokens: 0
* Max tokens: 20
Use:
* Min tokens must be less than or equal to Max tokens.
* Because the cost of using foundation models in IBM watsonx.ai is based on use, which is partly related to the number of tokens that are generated, specifying the lowest value for Max tokens that works for your use case is a cost-saving strategy.
* For Lite plans, output stops being generated after a dynamic, model-specific, environment-driven upper limit is reached, even if the value specified with the Max tokens parameter is not reached. To determine the upper limit, see the Tokens limits section for the model in [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html) or call the [get_details](https://ibm.github.io/watson-machine-learning-sdk/foundation_models.html#ibm_watson_machine_learning.foundation_models.Model.get_details) function of the foundation models Python library.
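The following sketch (same SDK assumptions as the earlier examples) bounds the output length and inspects the model details, which include the token limits mentioned above:

```python
from ibm_watson_machine_learning.metanames import GenTextParamsMetaNames as GenParams

# Pass as the params argument when constructing the Model object
params = {
    GenParams.MIN_NEW_TOKENS: 10,  # must be less than or equal to Max tokens
    GenParams.MAX_NEW_TOKENS: 60,  # keep as low as your use case allows to limit token-based cost
}

# model is the Model object constructed in the first sketch
print(model.get_details())  # metadata for the model, including its token limits
```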
Parent topic:[Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html)
| # Foundation model parameters: decoding and stopping criteria #
You can specify parameters to control how the model generates output in response to your prompt\. This topic lists parameters that you can control in the Prompt Lab\.
## Decoding ##
*Decoding* is the process a model uses to choose the tokens in the generated output\.
*Greedy decoding* selects the token with the highest probability at each step of the decoding process\. Greedy decoding produces output that closely matches the most common language in the model's pretraining data and in your prompt text, which is desirable in less creative or fact\-based use cases\. A weakness of greedy decoding is that it can cause repetitive loops in the generated output\.
*Sampling decoding* is more variable and more random than greedy decoding\. Variability and randomness are desirable in creative use cases\. However, with greater variability comes the risk of nonsensical output\. Sampling decoding selects tokens from a probability distribution at each step:
<!-- <ul> -->
* *Temperature sampling* refers to selecting a high\- or low\-probability next token\.
* *Top\-k sampling* refers to selecting the next token randomly from a specified number, k, of tokens with the highest probabilities\.
* *Top\-p sampling* refers to selecting the next token randomly from the smallest set of tokens for which the cumulative probability exceeds a specified value, p\. (Top\-p sampling is also called *nucleus sampling*\.)
<!-- </ul> -->
You can specify values for both Top K and Top P\. When both parameters are used, Top K is applied first\. When Top P is computed, any tokens below the cutoff set by Top K are considered to have a probability of zero\.
<!-- <table> -->
Table 1\. Supported values, defaults, and usage notes for sampling decoding
| Parameter | Supported values | Default | Use |
| --------------- | ----------------------------------------------------------------------------------------------- | ------- | ----------------------------------------- |
| **Temperature** | Floating\-point number in the range 0\.0 (same as greedy decoding) to 2\.0 (maximum creativity) | 0\.7 | Higher values lead to greater variability |
| **Top K** | Integer in the range 1 to 100 | 50 | Higher values lead to greater variability |
| **Top P** | Floating\-point number in the range 0\.0 to 1\.0 | 1\.0 | Higher values lead to greater variability |
<!-- </table ""> -->
### Random seed ###
When you submit the same prompt to a model multiple times with sampling decoding, you'll usually get back different generated text each time\. This variability is the result of intentional pseudo\-randomness built into the decoding process\. *Random seed* refers to the number used to generate that pseudo\-random behavior\.
<!-- <ul> -->
* **Supported values:** Integer in the range 1 to 4 294 967 295
* **Default:** Generated based on the current server system time
* **Use:** To produce repeatable results, set the same random seed value every time\.
<!-- </ul> -->
### Repetition penalty ###
If you notice that the result generated for your chosen prompt, model, and parameters consistently contains repetitive text, you can try adding a *repetition penalty*\.
<!-- <ul> -->
* **Supported values:** Floating\-point number in the range 1\.0 (no penalty) to 2\.0 (maximum penalty)
* **Default:** 1\.0
* **Use:** The higher the penalty, the less likely it is that the result will include repeated text\.
<!-- </ul> -->
## Stopping criteria ##
You can affect the length of the output generated by the model in two ways: specifying stop sequences and setting Min tokens and Max tokens\. Text generation stops after the model considers the output to be complete, a stop sequence is generated, or the maximum token limit is reached\.
### Stop sequences ###
A *stop sequence* is a string of one or more characters\. If you specify stop sequences, the model will automatically stop generating output after one of the stop sequences that you specify appears in the generated output\. For example, one way to cause a model to stop generating output after just one sentence is to specify a period as a stop sequence\. That way, after the model generates the first sentence and ends it with a period, output generation stops\. Choosing effective stop sequences depends on your use case and the nature of the generated output you expect\.
**Supported values:** 0 to 6 strings, each no longer than 40 tokens
**Default:** No stop sequence
**Use:**
<!-- <ul> -->
* Stop sequences are ignored until the number of tokens that is specified in the Min tokens parameter has been generated\.
* If your prompt includes examples of input\-output pairs, ensure the sample output in the examples ends with one of the stop sequences\.
<!-- </ul> -->
### Minimum and maximum new tokens ###
If you're finding the output from the model is too short or too long, try adjusting the parameters that control the number of generated tokens:
<!-- <ul> -->
* The *Min tokens* parameter controls the minimum number of tokens in the generated output
* The *Max tokens* parameter controls the maximum number of tokens in the generated output
<!-- </ul> -->
The maximum number of tokens that are allowed in the output differs by model\. For more information, see the *Maximum tokens* information in [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html)\.
**Defaults:**
<!-- <ul> -->
* Min tokens: 0
* Max tokens: 20
<!-- </ul> -->
**Use:**
<!-- <ul> -->
* Min tokens must be less than or equal to Max tokens\.
* Because the cost of using foundation models in IBM watsonx\.ai is based on use, which is partly related to the number of tokens that are generated, specifying the lowest value for Max tokens that works for your use case is a cost\-saving strategy\.
* For Lite plans, output stops being generated after a dynamic, model\-specific, environment\-driven upper limit is reached, even if the value specified with the Max tokens parameter is not reached\. To determine the upper limit, see the *Tokens limits* section for the model in [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html) or call the [`get_details`](https://ibm.github.io/watson-machine-learning-sdk/foundation_models.html#ibm_watson_machine_learning.foundation_models.Model.get_details) function of the foundation models Python library\.
<!-- </ul> -->
**Parent topic:**[Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html)
<!-- </article "role="article" "> -->
|
B2593108FA446C4B4B0EF5ADC2CD5D9585B0B63C | https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models-ibm.html?context=cdpaas&locale=en | Foundation models built by IBM | Foundation models built by IBM
In IBM watsonx.ai, you can use IBM foundation models that are built with integrity and designed for business.
The Granite family of foundation models includes decoder-only models that can efficiently predict and generate language in English.
The models were built with trusted data that has the following characteristics:
* Sourced from quality data sets in domains such as finance (SEC Filings), law (Free Law), technology (Stack Exchange), science (arXiv, DeepMind Mathematics), literature (Project Gutenberg (PG-19)), and more.
* Compliant with rigorous IBM data clearance and governance standards.
* Scrubbed of hate, abuse, and profanity; duplicate data; and blocklisted URLs, among other things.
IBM is committed to building AI that is open, trusted, targeted, and empowering. For more information about contractual protections related to the IBM Granite foundation models, see the [IBM watsonx.ai service description](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747) and [model license](https://www.ibm.com/support/customer/csol/terms/?id=i126-6883).
The following Granite models are available in watsonx.ai today:
granite-13b-chat-v2 : General use model that is optimized for dialogue use cases. This version of the model is able to generate longer, higher-quality responses with a professional tone. The model can recognize mentions of people and can detect tone and sentiment.
granite-13b-chat-v1 : General use model that is optimized for dialogue use cases. Useful for virtual agent and chat applications that engage in conversation with users.
granite-13b-instruct-v2 : General use model. This version of the model is optimized for classification, extraction, and summarization tasks. The model can recognize mentions of people and can summarize longer inputs.
granite-13b-instruct-v1 : General use model. The model was tuned on relevant business tasks, such as detecting sentiment from earnings calls transcripts, extracting credit risk assessments, summarizing financial long-form text, and answering financial or insurance-related questions.
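For illustration, a minimal sketch of prompting one of these models through the foundation models Python library (ibm-watson-machine-learning); the credentials, project ID, and prompt are placeholders, and the model ID can be swapped for any of the Granite models listed above.

```python
from ibm_watson_machine_learning.foundation_models import Model

model = Model(
    model_id="ibm/granite-13b-instruct-v2",  # or ibm/granite-13b-chat-v2, and so on
    credentials={"apikey": "YOUR_API_KEY", "url": "https://us-south.ml.cloud.ibm.com"},
    project_id="YOUR_PROJECT_ID",
)

print(model.generate_text(
    prompt="Classify the sentiment of this review as Positive or Negative: "
           "The rental car was spotless and the pickup was quick."
))
```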
To learn more about the models, read the following resources:
* [Model information](https://www.ibm.com/blog/watsonx-tailored-generative-ai/)
* [Research paper](https://www.ibm.com/downloads/cas/X9W4O6BM)
* [granite-13b-instruct-v2 model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-instruct-v2?context=wx)
* [granite-13b-instruct-v1 model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-instruct-v1?context=wx)
* [granite-13b-chat-v2 model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-chat-v2?context=wx)
* [granite-13b-chat-v1 model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-chat-v1?context=wx)
To get started with the models, try these samples:
* [Prompt Lab sample: Extract details from a complaint](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#sample2a)
* [Prompt Lab sample: Generate a numbered list on a particular theme](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#sample3c)
* [Prompt Lab sample: Answer a question based on a document](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#sample4c)
* [Prompt Lab sample: Answer general knowledge questions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#sample4d)
* [Prompt Lab sample: Converse in a dialogue](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#sample7a)
* [Sample Python notebook: Use watsonx and a Granite model to analyze car rental customer satisfaction from text](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/61c1e967-8d10-44bb-a846-cc1f27e9e69a?context=wx)
Parent topic:[Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html)
| # Foundation models built by IBM #
In IBM watsonx\.ai, you can use IBM foundation models that are built with integrity and designed for business\.
The Granite family of foundation models includes decoder\-only models that can efficiently predict and generate language in English\.
The models were built with trusted data that has the following characteristics:
<!-- <ul> -->
* Sourced from quality data sets in domains such as finance (SEC Filings), law (Free Law), technology (Stack Exchange), science (arXiv, DeepMind Mathematics), literature (Project Gutenberg (PG\-19)), and more\.
* Compliant with rigorous IBM data clearance and governance standards\.
* Scrubbed of hate, abuse, and profanity; duplicate data; and blocklisted URLs, among other things\.
<!-- </ul> -->
IBM is committed to building AI that is open, trusted, targeted, and empowering\. For more information about contractual protections related to the IBM Granite foundation models, see the [IBM watsonx\.ai service description](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747) and [model license](https://www.ibm.com/support/customer/csol/terms/?id=i126-6883)\.
The following Granite models are available in watsonx\.ai today:
**granite\-13b\-chat\-v2** : General use model that is optimized for dialogue use cases\. This version of the model is able to generate longer, higher\-quality responses with a professional tone\. The model can recognize mentions of people and can detect tone and sentiment\.
**granite\-13b\-chat\-v1** : General use model that is optimized for dialogue use cases\. Useful for virtual agent and chat applications that engage in conversation with users\.
**granite\-13b\-instruct\-v2** : General use model\. This version of the model is optimized for classification, extraction, and summarization tasks\. The model can recognize mentions of people and can summarize longer inputs\.
**granite\-13b\-instruct\-v1** : General use model\. The model was tuned on relevant business tasks, such as detecting sentiment from earnings calls transcripts, extracting credit risk assessments, summarizing financial long\-form text, and answering financial or insurance\-related questions\.
To learn more about the models, read the following resources:
<!-- <ul> -->
* [Model information](https://www.ibm.com/blog/watsonx-tailored-generative-ai/)
* [Research paper](https://www.ibm.com/downloads/cas/X9W4O6BM)
* [granite\-13b\-instruct\-v2 model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-instruct-v2?context=wx)
* [granite\-13b\-instruct\-v1 model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-instruct-v1?context=wx)
* [granite\-13b\-chat\-v2 model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-chat-v2?context=wx)
* [granite\-13b\-chat\-v1 model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-chat-v1?context=wx)
<!-- </ul> -->
To get started with the models, try these samples:
<!-- <ul> -->
* [Prompt Lab sample: Extract details from a complaint](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#sample2a)
* [Prompt Lab sample: Generate a numbered list on a particular theme](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#sample3c)
* [Prompt Lab sample: Answer a question based on a document](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#sample4c)
* [Prompt Lab sample: Answer general knowledge questions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#sample4d)
* [Prompt Lab sample: Converse in a dialogue](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#sample7a)
<!-- </ul> -->
<!-- <ul> -->
* [Sample Python notebook: Use watsonx and a Granite model to analyze car rental customer satisfaction from text](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/61c1e967-8d10-44bb-a846-cc1f27e9e69a?context=wx)
<!-- </ul> -->
**Parent topic:**[Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html)
<!-- </article "role="article" "> -->
|
Subsets and Splits
No saved queries yet
Save your SQL queries to embed, download, and access them later. Queries will appear here once saved.