question_id,question,correct_answer,correct_answer_document_ids
watsonx_q_2,What foundation models have been built by IBM?,,['B2593108FA446C4B4B0EF5ADC2CD5D9585B0B63C']
|
watsonx_q_4,How can you ensure the removal of harmful content when utilizing foundation models in the Prompt Lab?,,['812C39CF410F9FE3F0D0E7C62ED1BC015370C849'] |
|
watsonx_q_5,When to tune a foundation model?,,['FBC3C5F81D060CD996489B772ABAC886F12130A3']
|
watsonx_q_9,How do I avoid generating personal information with foundation models?,,['E59B59312D1EB3B2BA78D7E78993883BB3784C2B']
|
watsonx_q_11,,,['83CD92CDB99DB6263492FAD998E932F50F0F8E99'] |
|
watsonx_q_15,What are the steps involved in configuring the watsonx platform for an organization's use?,"The setup process for the watsonx platform on IBM watsonx.ai involves several steps: signing up for the service, upgrading to a paid plan, configuring the required services, and assigning appropriate permissions to users within your organization. IBM watsonx.ai, hosted on the watsonx platform, offers cloud-based services for tasks such as data preparation, data science, and AI modeling. Additionally, the platform benefits from robust security measures comparable to those found on IBM Cloud.",['27DB2218237B89F557D3702F4270288E4460E9CB'] |
|
watsonx_q_16,What is the difference between fine-tuning and prompt-tuning foundation models?,"Fine-tuning changes the parameters of the underlying foundation model to guide the model to generate output that is optimized for a task. Prompt-tuning adjusts the content of the prompt that is passed to the model to guide the model to generate output that matches a pattern you specify. In this case the underlying foundation model and its parameters are not edited, only the prompt input is altered.",['15A014C514B00FF78C689585F393E21BAE922DB2'] |
|
watsonx_q_19,How are words mapped to tokens?,"The mapping from words to tokens is context dependent. It depends on the word's position in a sentence, the surrounding words, and on the language and model chosen.",['0999F59BB8E2E2AB7722D57CDBC051A0984ABE45']

,,"There are several types of joins that can be performed in Data Refinery, including left join, right join, inner join, full join, semi join, and anti join. Each type of join has a specific purpose and can be used to combine data from two data sets based on a comparison of the values in specified key columns.",

,,"To edit the sample size in Data Refinery, open the Flow settings and go to the Source data sets tab. Click the overflow menu next to the data source and select Edit sample.",

,,"To create a Watson Query connection, you need the following information: database name, hostname or IP address of the database, port number, instance ID, credentials information, application name (optional), client accounting information (optional), client hostname (optional), client user (optional), and SSL certificate (if required by the database server).",

,,"To trust a notebook in Jupyter, click the Not Trusted button in the upper right corner of the notebook and then click Trust to execute all cells.",

,,"The Data Audit node provides a comprehensive first look at the data you bring to SPSS Modeler, presented in an interactive, easy-to-read matrix that can be sorted and used to generate full-size graphs. This node can be used to gain a preliminary understanding of the data, including information about outliers, extremes, and missing values.",

,,"To create a connection to Db2 Warehouse, you need the following information: database name, hostname or IP address of the database server, port number, API key or username and password, application name (optional), client accounting information (optional), client hostname (optional), client user (optional), and SSL certificate (if required by the database server).",

,,"Optimization is the process of finding the most appropriate solution to a precisely defined problem while respecting the imposed constraints and limitations. For example, determining how to allocate resources or how to find the best elements or combinations from a large set of alternatives.",

,,"A Jupyter notebook is a web-based environment for interactive computing. It allows you to run small pieces of code that process your data and immediately view the results of your computation. Notebooks include all of the building blocks you need to work with data, including the data itself, the code computations that process the data, visualizations of the results, and text and rich media to enhance understanding.",

,,"A resource group is a logical grouping of resources that helps with access control. Resources are any service that is managed by IAM, such as databases. Whenever you create a service instance from the Cloud catalog, you must assign it to a resource group.",

,,"There are three different families of classification algorithms that can be used to train a custom classification model: classic machine learning using SVM (Support Vector Machines), deep learning using CNN (Convolutional Neural Networks), and a transformer-based algorithm using a pre-trained transformer model.",

,,"A visualization is a visual representation of data, such as a graph, chart, plot, table, or map.",

,,"The IBM Cloud account owner or administrator assigns appropriate roles to users to provide access to Cloud Object Storage. Storage delegation must be disabled when using role-based access. Additionally, rather than assigning each individual user a set of roles, you can create an access group. Access groups expedite role assignments by grouping permissions. For instructions on creating access groups, see the IBM Cloud docs: Setting up access groups.",

,"What are the different methods for imputing missing data in binary classification, multiclass classification, or regression experiments?","There are three methods for imputing missing data in binary classification, multiclass classification, or regression experiments: most frequent, median, and mean. Most frequent replaces missing values with the value that appears most frequently in the column, median replaces missing values with the value in the middle of the sorted column, and mean replaces missing values with the average value for the column.",

,,"A candlestick chart is a type of financial chart that is used to describe price movements of a security, derivative, or currency. It typically shows one day of data and is most often used in the analysis of equity and currency price patterns. The data set that is used to create a candlestick chart must contain open, high, low, and close values for each time period that you want to display.",

,,"Scripting in SPSS Modeler can be used to automate repetitive tasks, impose a specific order for node executions in a flow, set properties for a node, and perform derivations using a subset of CLEM. Additionally, scripting can be used to specify an automatic sequence of actions that normally involves user interaction, such as building a model and then testing it.",

,,"Watson OpenScale is a tool that helps organizations evaluate and monitor the performance of their AI models. It tracks and measures outcomes from AI models, and helps ensure that they remain fair, explainable, and compliant no matter where the models were built or are running. Watson OpenScale also detects and helps correct drift in accuracy when an AI model is in production.",

,,"A weight is a coefficient for a node that transforms input data within the network's layer. It is a parameter that an AI model learns through training, adjusting its value to reduce errors in the model's predictions.",

,,"Traditional AI models are trained on large, structured, well-labeled data sets that encompass a specific task and can be used for a single task. Foundation models are trained on large, diverse, unlabeled data sets and can be used for many different tasks.",

,,"The Symmetric Mean Absolute Percentage Error (SMAPE) metric is calculated by dividing the absolute difference between the actual value and the predicted value by half the sum of the absolute actual value and the absolute predicted value, and then averaging the result across all fitted points. The Root Mean Squared Error (RMSE) metric is calculated by taking the square root of the mean of the squared differences between the actual values and the predicted values.",

,,"You can monitor the progress of a federated learning experiment by viewing a dynamic diagram of the training progress. The diagram shows the four stages of a training round: sending model, training, receiving models, and aggregating.",

,,"RFM analysis is a quantitative method for determining which customers are likely to be the best ones by examining how recently they last purchased from you (recency), how often they purchase (frequency), and how much they spend over all transactions (monetary).",

,,"A knowledge base is a collection of information-containing artifacts, such as process information in internal company wiki pages, files in GitHub, messages in a collaboration tool, topics in product documentation, text passages in a database like Db2, a collection of legal contracts in PDF files, or customer support tickets in a content management system.",

,"How do you edit, duplicate, insert, or delete a step in Data Refinery?","In the Steps pane, click the overflow menu on the step for the operation that you want to change. Select the action (Edit, Duplicate, Insert step before, Insert step after, or Delete). If you select Edit, Data Refinery goes into edit mode and displays the operation to be edited either on the command line or in the Operation pane; apply the edited operation. If you select Duplicate, the duplicated step is inserted after the selected step. Note: The Duplicate action is not available for the Join or Union operations. Data Refinery updates the Data Refinery flow to reflect the changes and reruns all the operations.",

,,"To save a Data Refinery flow, click the Save Data Refinery flow icon in the Data Refinery toolbar. The default output of the Data Refinery flow is saved as a data asset with the name source-file-name_shaped. For example, if the source file is mydata.csv, the default output name is mydata_csv_shaped. You can edit the name and add an extension by changing the target of the Data Refinery flow.",

,,"To export the data from a Data Refinery flow to a CSV file, click the Export icon on the toolbar. This creates a CSV file that is downloaded to your computer's Downloads folder (or the user-specified download location) at the current step in the Data Refinery flow. If you are in snapshot view, the output of the CSV file is at the step that you clicked. If you are viewing a sample (subset) of the data, only the sample data is in the output.",
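The SMAPE and RMSE definitions given in the answers above can be sketched as plain Python; the function names and sample values are mine for illustration, not part of any watsonx API. This sketch assumes actual and predicted values are not both zero at the same point (SMAPE is undefined there).

```python
import math

def smape(actual, predicted):
    # Mean of |a - p| / ((|a| + |p|) / 2) across all fitted points.
    return sum(abs(a - p) / ((abs(a) + abs(p)) / 2)
               for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    # Square root of the mean squared difference.
    return math.sqrt(sum((a - p) ** 2
                         for a, p in zip(actual, predicted)) / len(actual))

print(rmse([1, 2, 3], [1, 2, 5]))  # → 1.1547005383792515 (sqrt(4/3))
```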
|
ag_150,How do I import a space or a project to a new deployment space?,"To import a space or a project to a new deployment space, you need to create a new deployment space and enter the details for the space. Then, in the Upload space assets section, upload the exported compressed file that contains data assets and click Create. The assets from the exported file will be added as space assets.",['A11374B50B49477362FA00BBB32A277776F7E8E2'] |
|
ag_82,What is the purpose of the Feature Selection node?,"The Feature Selection node is used to identify the most important fields for a given analysis. It consists of three steps: screening, ranking, and selecting. Screening removes unimportant and problematic inputs and records, or cases, such as input fields with too many missing values or with too much or too little variation to be useful. Ranking sorts the remaining inputs and assigns ranks based on importance. Selecting identifies the subset of features to use in subsequent models, for example by preserving only the most important inputs and filtering or excluding all others.",['9E1CDB994E758D43D9D8CDC5D88E2B5C7E0088D7']
|
ag_306,What is a Pareto chart?,A Pareto chart is a type of chart that contains both bars and a line graph. The bars represent individual variable categories and the line graph represents the cumulative total.,['6B4213FC5352021865E77592EBC27242E746B5AA'] |
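The computation behind the Pareto chart described in ag_306 can be sketched as follows: category counts sorted in descending order (the bars) plus a running cumulative total (the line graph). The function name and sample data are hypothetical, chosen only to illustrate the idea.

```python
from collections import Counter

def pareto_data(categories):
    """Return (sorted category counts, cumulative totals) for a Pareto chart."""
    counts = Counter(categories).most_common()  # bars, largest category first
    cumulative, running = [], 0
    for _, count in counts:
        running += count
        cumulative.append(running)              # values for the line graph
    return counts, cumulative

counts, cumulative = pareto_data(["a", "b", "a", "c", "a", "b"])
# counts → [('a', 3), ('b', 2), ('c', 1)]; cumulative → [3, 5, 6]
```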
|
ag_769,What are the four main areas of Watson OpenScale?,"The four main areas of Watson OpenScale are Insights, Explain a transaction, Configuration, and Support. Insights displays the models that you are monitoring and provides status on the results of model evaluations. Explain a transaction describes how the model determined a prediction. Configuration can be used to select a database, set up a machine learning provider, and optionally add integrated services. Support provides you with resources to get the help you need with Watson OpenScale.",['777F72F32FD20E96C4A5F0CCA461FE9A79334E96'] |
|
ag_300,What is artificial intelligence?,"Artificial intelligence is the capability to acquire, process, create and apply knowledge in the form of a model to make predictions, recommendations or decisions.",['F003581774D3028EF53E61A002C20A6D36BA8E00'] |
|
|