TECHNOLOGY | QUESTION | SOLUTION |
---|---|---|
Azure Web Storage | Error "The value for one of the HTTP headers is not in the correct format" when using the storage emulator | This scenario typically occurs if you install and use the latest version of the Storage Client
Library without updating the storage emulator. You should either install the latest version of the storage emulator or use cloud storage instead of the emulator for development and testing. |
Azure Web Storage | I am experiencing unexpected delays in message delivery on a
queue | 1. Verify that the application is successfully adding the messages to the queue. Check that the application is not retrying the AddMessage method several times before succeeding.
2. Verify there is no clock skew between the worker role that adds the message to the queue and the worker role that reads the message from the queue. A clock skew makes it appear as if there is a delay in processing.
3. Check if the worker role that reads the messages from the queue is failing. If a queue client calls the GetMessage method but fails to respond with an acknowledgment, the message will remain invisible on the queue until the invisibilityTimeout period expires. At this point, the message becomes available for processing again.
4. Check if the queue length is growing over time. This can occur if you don't have sufficient workers available to process all of the messages that other workers are placing in the queue. Also, check the metrics to see if delete requests are failing and the dequeue count on messages, which might indicate repeated failed attempts to delete the message.
5. Examine the Storage logs for any queue operations that have higher than expected E2ELatency and ServerLatency values over a longer period of time than usual. |
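As a rough illustration of points 3 and 4, here is a minimal sketch of a well-behaved consumer using the Python azure-storage-queue package (the connection string and queue name are placeholders): messages that are received but never deleted reappear after the visibility timeout, which shows up as delivery delays and a growing dequeue count.

```python
from azure.storage.queue import QueueClient

# Placeholders: substitute your own connection string and queue name.
queue = QueueClient.from_connection_string("<connection-string>", "orders")

# A received message stays invisible for visibility_timeout seconds; if it is
# not deleted within that window it reappears and its dequeue count grows.
for msg in queue.receive_messages(visibility_timeout=30):
    print("processing:", msg.content)   # do the real work here
    queue.delete_message(msg)           # acknowledge so it is not redelivered
```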
Azure Web Storage | Root not redirecting to the index document | When you enable static website hosting on Azure Storage, you need to specify the name of the index document that will be served when a user requests the root URL of your website. For example, if you set the index document name to "index.html", then your website will display the content of that file when someone visits https://yourwebsite.zxx.web.core.windows.net/.
However, sometimes you may find that the root URL does not redirect to the index document, and instead shows a blank page or an error message. This could happen for several reasons:
Ensure the index document name set in the portal exactly matches the file name in the $web container, including case. File names and extensions are case sensitive: even though the site is served over HTTP, index.html != Index.html for static websites.
Ensure that the index document exists in the $web container and has a valid content type. You can check this by using Azure Portal, Azure CLI, or Azure Storage Explorer.
Ensure that there are no other files or folders in the $web container that have the same name as the index document. For example, if you have a folder named "index.html" in the $web container, it will conflict with the index document and prevent it from being served. |
Azure Web Storage | Unable to acquire token, tenant is filtered out | Sometimes you may see an error message that says a token can't be acquired because a tenant
is filtered out. This means you're trying to access a resource that's in a tenant you filtered out. To include the tenant, go to the Account Panel. Make sure the checkbox for the tenant specified in the error is selected. For more information on filtering tenants in Storage Explorer, see Managing accounts. |
Azure Web Storage | Slow performance when unzipping files in SMB file shares | Depending on the exact compression method and unzip operation used, decompression operations may perform more slowly on an Azure file share than on your local disk. This is often because unzipping tools perform a number of metadata operations in the process of performing
the decompression of a compressed archive. For the best performance, we recommend copying the compressed archive from the Azure file share to your local disk, unzipping there, and then using a copy tool such as Robocopy (or AzCopy) to copy back to the Azure file share. Using a copy tool like Robocopy can compensate for the decreased performance of metadata operations in Azure Files relative to your local disk by using multiple threads to copy data in parallel. |
Azure Web Storage | How to change the Lease state of Azure Blob to Available | A lease can only be cleanly released by using the lease id that was returned during the original lease operation.
You can change the lease state to Available manually by releasing the lease with the original lease ID, or by breaking the lease, using the Azure CLI or any of the SDKs. |
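For illustration, a minimal sketch using the Python azure-storage-blob package (the connection string, container, blob, and lease ID are placeholders):

```python
from azure.storage.blob import BlobClient, BlobLeaseClient

blob = BlobClient.from_connection_string("<connection-string>", "mycontainer", "myblob.xlsx")

# If you still have the original lease ID, release the lease cleanly:
BlobLeaseClient(blob, lease_id="<original-lease-id>").release()

# If the lease ID is lost, break the lease instead; the blob becomes
# available once the break period (0 seconds here) expires:
BlobLeaseClient(blob).break_lease(lease_break_period=0)
```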
Azure Web Storage | I am trying to upload a binary file (a blob for an excel file, actually) to
my storage account but the client fails to authenticate with the error message: 403 (Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.) | You'll get this message if your SAS token has expired. If that's the case, create a new version of the secret using a SAS token with a longer duration. |
Azure Web Storage | Unable to provision a network drive as Server Endpoint in Azure File Sync. Shows Server endpoint creation fails, with this error: "MgmtServerJobFailed" (Error code: -2147024894 or 0x80070002) | This error occurs if the server endpoint path specified is not valid. Verify the
server endpoint path specified is a locally attached NTFS volume. Note, Azure File Sync does not support mapped drives as a server endpoint path. |
Azure Web Storage | copy file from local machine to Azure Blob not successful. Error INFO: Any
empty folders will not be processed, because source and/or destination doesn't have full folder support | As the INFO message indicates, azcopy does not process empty folders. Create a file inside the source directory and try the azcopy command again. |
Azure Web Storage | Unable to trigger Azure Function through Service Bus queue | This indicates that your Function is not being activated. If it isn't activated while a message is sitting on the queue, the likely issue is the Function configuration.
You need to specify the connection string and queue name. If there is no connectivity exception, the connection string is working; just validate that it points to the right namespace. Then check whether the queue the Function is configured with is the right queue. |
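As an illustration of the configuration being described, here is a minimal sketch of a Service Bus queue-triggered function using the Python v2 programming model. The queue name "orders" and the app setting name "ServiceBusConnection" are assumptions; they must match your own queue and the app setting that holds the namespace connection string.

```python
import logging
import azure.functions as func

app = func.FunctionApp()

# queue_name must be the queue the sender writes to; connection names an app
# setting that contains the Service Bus namespace connection string.
@app.service_bus_queue_trigger(arg_name="msg", queue_name="orders",
                               connection="ServiceBusConnection")
def process_order(msg: func.ServiceBusMessage):
    logging.info("Received message: %s", msg.get_body().decode("utf-8"))
```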
Azure Web Storage | What are the possible ways to scale VMs out/in based on the number of outstanding requests in an Azure Storage queue? | You can use metric alerts (assuming the number of requests corresponds to Queue Message Count or some other metric) attached to an action group that links to an automation service such as an Azure Automation runbook or an Azure Function. The logic in those services is the code that scales out/in.
|
Azure Web Storage | I am not able to create a folder under the blob container | From the Azure portal, go inside your container -> click Upload -> in the Advanced section, go to Upload to Folder and provide a folder name -> browse to the file to upload -> click the Upload button. You should see a folder getting created. |
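Folders in a blob container are virtual, so the same result can be achieved programmatically by uploading a blob whose name contains a "/" prefix. A minimal sketch with the Python azure-storage-blob package (connection string, container, and blob name are placeholders):

```python
from azure.storage.blob import ContainerClient

container = ContainerClient.from_connection_string("<connection-string>", "mycontainer")

# Uploading a blob named "reports/2024/summary.txt" makes the virtual folder
# "reports/2024" appear in the portal automatically.
container.upload_blob(name="reports/2024/summary.txt", data=b"hello")
```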
Azure Web Storage | Lifecycle policy moving from cool to hot not working | The policy you have defined is moving the blob from "Hot" to "Cool" after 2 days of modification. If you want to move the blob from "Cool" to "Hot" after it gets modified, you need to change the action in the first rule to "tierToHot" instead of "tierToCool".
Also, you have defined the second rule to enable auto-tiering to "Hot" from "Cool" based on last access time. However, this rule will only take effect if the blob is currently in "Cool" and then accessed. It will not move the blob from "Cool" to "Hot" immediately after it gets modified.
You can try adding a new rule that moves the blob from "Cool" to "Hot" based on last modified time. |
Azure Web Storage | Missing error details for some failures in Insights for storage account | You need to create a diagnostic setting to collect resource logs for blobs. Once
the diagnostic setting is created you can investigate the logs. If you are using Log Analytics this can be done directly from Logs (preview) under monitoring. |
Azure Web Storage | Unable to create Storage account; error loading the Creation page of
Storage account | 1. Disable any ad blocker, clear all your cookies, restart the browser, and sign in to the Azure portal again.
2. If you are using Chrome or Firefox, try opening the Azure portal in the Edge browser and creating the resource there.
3. Open an InPrivate session in your browser and sign in to the portal. |
Azure Web Storage | how to set access permissions for azure blob storage container at folder
(prefix) level | 1. If you use ADLS Gen2 (hierarchical namespace enabled), you can set an ACL on a folder. For an existing blob container in a standard storage account, you would need to copy the data into an HNS-enabled storage account.
2. You could produce a SAS for a blob container or for individual blobs. A SAS token can restrict access to an entire container or an individual blob, but not to a folder, because a folder in blob storage is virtual and not a real folder. |
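For option 2, a minimal sketch of generating a read-only SAS for a single blob with the Python azure-storage-blob package (the account name, key, and blob path are placeholders):

```python
from datetime import datetime, timedelta
from azure.storage.blob import generate_blob_sas, BlobSasPermissions

# Placeholders: substitute your own account, key, container, and blob name.
sas = generate_blob_sas(
    account_name="<account>",
    container_name="mycontainer",
    blob_name="reports/2024/summary.txt",
    account_key="<account-key>",
    permission=BlobSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(hours=1),
)
url = f"https://<account>.blob.core.windows.net/mycontainer/reports/2024/summary.txt?{sas}"
```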
Azure Web Storage | Error trying to delete container in storage account | If you are getting the error "Failed to delete 1 out of 1 container(s). The request uri is invalid", first try a hard refresh of the browser page; there may be an interface issue. |
Azure Web Storage | Is there a way to enable Soft delete on Storage Account through custom
policy | Yes, you can turn on soft delete for storage accounts through a policy. You can do this through the portal, PowerShell, Azure CLI, or template options. You can specify a retention period between 1 and 365 days (for example, 7 days in PowerShell). |
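As a programmatic alternative to the portal and PowerShell options mentioned above, here is a hedged sketch of enabling blob soft delete with a 7-day retention using the Python azure-mgmt-storage package (the subscription ID, resource group, and account name are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import BlobServiceProperties, DeleteRetentionPolicy

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Enable blob soft delete with a 7-day retention period (valid range: 1-365 days).
client.blob_services.set_service_properties(
    "<resource-group>",
    "<storage-account>",
    BlobServiceProperties(delete_retention_policy=DeleteRetentionPolicy(enabled=True, days=7)),
)
```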
Azure Web Storage | How to stream blobs to Azure Blob Storage with Node.js | Navigate to your storage account in the Azure Portal and copy the account name
and key (under Settings > Access keys) into the .env.example file. Save the file and then rename it from .env.example to .env. |
Azure Web Storage - Web App | How do I automate App Service web apps by using PowerShell? | You can use PowerShell cmdlets to manage and maintain App Service web apps. In our blog post Automate web apps hosted in Azure App Service by using PowerShell, we describe how to use Azure Resource Manager-based PowerShell cmdlets to automate common tasks. The blog post also has sample code for various web apps management tasks. |
Azure Web Storage - Web App | How do I view my web app's event logs? | To view your web app's event logs:
1. Sign in to your Kudu website (https://*yourwebsitename*.scm.azurewebsites.net).
2. In the menu, select Debug Console > CMD.
3. Select the LogFiles folder.
4. To view event logs, select the pencil icon next to eventlog.xml.
5. To download the logs, run the PowerShell cmdlet Save-AzureWebSiteLog -Name webappname. |
Azure Web Storage - Web App | How do I capture a user-mode memory dump of my web app? | To capture a user-mode memory dump of your web app:
1. Sign in to your Kudu website (https://*yourwebsitename*.scm.azurewebsites.net).
2. Select the Process Explorer menu.
3. Right-click the w3wp.exe process or your WebJob process.
4. Select Download Memory Dump > Full Dump. |
Azure Web Storage - Web App | I cannot create or delete a web app due to a permission error. What permissions do I need to create or delete a web app? | You need at least Contributor access on the resource group to deploy App Services. If you have Contributor access only on the App Service plan and web app, you won't be able to create the app service in the resource group. |
Azure Web Storage - Web App | How do I restore a deleted web app or a deleted App Service Plan? | If the web app was deleted within the last 30 days, you can restore it using Restore-AzDeletedWebApp. |
Azure SQL | Which SQL cloud database deployment options are
available? | Azure SQL Database is available as a single database with
its own set of resources managed via a logical server, and as a pooled database in an elastic pool, with a shared set of resources managed through a logical server. In general, elastic pools are designed for a typical software-as-a-service (SaaS) application pattern, with one database per customer or tenant. With pools, you manage the collective performance, and the databases scale up or down automatically. |
Azure SQL | Error message: Conversion failed when converting from a
character string to uniqueidentifier | In the copy activity sink, under PolyBase settings, set the use type
default option to false. |
Azure SQL | Cannot open database "master" requested by the login. The login
failed | 1. On the login screen of SSMS, select Options, and then select Connection Properties.
2. In the Connect to database field, enter the user's default database name as the default login database, and then select Connect. |
Azure SQL | Error 40552: The session has been terminated because of
excessive transaction log space usage | The issue can occur in any DML operation such as insert, update, or
delete. Review the transaction to avoid unnecessary writes. Try to reduce the number of rows that are operated on immediately by implementing batching or splitting into multiple smaller transactions. |
Azure SQL | Error 5: Cannot connect to < servername > | To resolve this issue, make sure that port 1433 is open for outbound
connections on all firewalls between the client and the internet. |
Azure SQL | Error 40551: The session has been terminated because of
excessive tempdb usage | 1. Change the queries to reduce temporary table space usage.
2. Drop temporary objects after they're no longer needed.
3. Truncate tables or remove unused tables. |
Azure SQL | Elastic pool not found for server: '%ls', elastic pool name: '%ls'.
Specified elastic pool does not exist in the specified server. | Provide a valid elastic pool name. |
Azure SQL | Getting error as Elastic pool does not support service tier '%ls'. Specified service tier is not supported for elastic pool provisioning. | Provide the correct edition or leave service tier blank to use the default
service tier. |
Azure SQL | Error Code: 40860
Elastic pool '%ls' and service objective '%ls' combination is invalid. | Specify correct combination of elastic pool and service tier. |
Azure SQL | Error Code: 40877
I am not able to delete the elastic pool | Remove databases from the elastic pool in order to delete it. |
Azure SQL | Error Code: 40857
Elastic pool not found for server: '%ls', elastic pool name: '%ls'. | Provide a valid elastic pool name. |
Azure SQL | Error code: 2056 - SqlInfoValidationFailed | Make sure to change the target Azure SQL Database collation to the same as the source SQL Server database. Azure SQL Database uses the SQL_Latin1_General_CP1_CI_AS collation by default. If your source SQL Server database uses a different collation, you might need to re-create the target database or select a different target database whose collation matches. |
Azure SQL | Not able to decrease the storage limit of the elastic pool | Consider reducing the storage usage of individual databases in the elastic pool or removing databases from the pool in order to reduce its DTUs or storage limit. |
Azure SQL | C# error when connecting to MySQL: "Object cannot be cast from DBNull to other types" (MariaDB 10.3) | When a column value is NULL, the DBNull object is returned rather than a typed value. You must first test that the column value is not NULL via the API (for example, the data reader's IsDBNull method) before accessing it as the desired type. |
Azure SQL | Error code: AzureTableDuplicateColumnsFromSource | Double-check and fix the source columns, as necessary. |
Azure SQL | Error code: MongoDbUnsupportedUuidType | In the MongoDB connection string, add the uuidRepresentation=standard option. |
Azure SQL | Error message: Request rate is large in Azure CosmosDB | Try either of the following two solutions:
1. Increase the container RUs number to a greater value in Azure Cosmos DB. This solution will improve the copy activity performance, but it will incur more cost in Azure Cosmos DB.
2. Decrease writeBatchSize to a lesser value, such as 1000, and decrease parallelCopies to a lesser value, such as 1. This solution will reduce copy run performance, but it won't incur more cost in Azure Cosmos DB. |
Azure SQL | Error code: SqlOpenConnectionTimeout | Retry the operation. If the problem persists, update the linked service connection string with a larger connection timeout value. |
Azure SQL | Error code: SqlAutoCreateTableTypeMapFailed | Update the column type in mappings, or manually create the sink table
in the target server. |
Azure SQL | Error code: SqlParallelFailedToDetectPartitionColumn | Check the table to make sure that a primary key or a unique index is
created. |
Azure AKS | Client can't reach an Azure Kubernetes Service (AKS) cluster's API | Ensure that your client's IP address is within the ranges authorized by the cluster's API server:
1. Find your local IP address. For information on how to find it on Windows and Linux, see How to find my IP.
2. Update the range that's authorized by the API server by using the az aks update command in Azure CLI. Authorize your client's IP address. |
Azure AKS | When I try to upgrade an Azure Kubernetes Service (AKS) cluster, I get a "PodDrainFailure" error | 1. Adjust the PDB to enable pod draining. Generally, the allowed disruption is controlled by the Min Available / Max Unavailable or Running pods / Replicas parameter. You can modify the Min Available / Max Unavailable parameter at the PDB level or increase the number of Running pods / Replicas to push the Allowed Disruption value to 1 or greater.
2. Try again to upgrade the AKS cluster to the same version that you tried to upgrade to previously. This process will trigger a reconciliation. |
Azure AKS | AKS cluster upgrade fails with a "PublicIPCountLimitReached" error message | To raise the limit or quota for your subscription, go to the Azure portal, file a
Service and subscription limits (quotas) support ticket, and set the quota type to Networking.
After the quota change takes effect, try to upgrade the cluster to the same version that you previously tried to upgrade to. This process will trigger a reconciliation. |
Azure AKS | "SubnetIsFull" error code during an AKS cluster upgrade | Reduce the cluster nodes to reserve IP addresses for the upgrade.
If scaling down isn't an option, and your virtual network CIDR has enough IP addresses, try to add a node pool that has a unique subnet:
1. Add a new user node pool in the virtual network on a larger subnet.
2. Switch the original node pool to a system node pool type.
3. Scale up the user node pool.
4. Scale down the original node pool. |
Azure AKS | Failed to upgrade or scale Azure Kubernetes Service cluster due to missing
Log Analytics workspace | If it has been more than 14 days since the workspace was deleted, disable monitoring on the AKS cluster and then run the upgrade or scale operation again.
To disable monitoring on the AKS cluster, run the following command:
az aks disable-addons -a monitoring -g <clusterRG> -n <clusterName>
If the same error occurs while disabling the monitoring add-on, recreate the missing Log Analytics workspace and then run the upgrade or scale operation again. |
Azure AKS | Upgrades to Kubernetes 1.16 fail when node labels have a kubernetes.io
prefix | To mitigate this issue:
Upgrade your cluster control plane to 1.16 or later.
Add a new node pool on 1.16 or higher without the unsupported kubernetes.io labels.
Delete the older node pool. |
Azure AKS | CannotDeleteLoadBalancerWithPrivateLinkService or
PrivateLinkServiceWithPrivateEndpointConnectionsCannotBeDeleted error code | Make sure that the private link service isn't associated with any private endpoint
connections. Delete all private endpoint connections before you delete the private link service. |
Azure AKS | PublicIPAddressCannotBeDeleted, InUseSubnetCannotBeDeleted, or
InUseNetworkSecurityGroupCannotBeDeleted error code | 1. Remove all public IP addresses that are associated with Azure Load Balancer and the resource that's used by the subnet. For more information, see View, modify settings for, or delete a public IP address.
2. In the load balancer, remove the rules for Load Balance rules, Health probes, and Backend pools.
3. For the NSG and subnet, remove all associated rules. |
Azure AKS | When I try to delete a Microsoft Azure Kubernetes Service (AKS) cluster, I get the InUseRouteTableCannotBeDeleted error code | Remove the associated subnet from the route table. |
Azure AKS | When I tried to delete an AKS cluster while the virtual machine scale set was still using the associated public IP address or network security group (NSG), I got a LoadBalancerInUseByVirtualMachineScaleSet or NetworkSecurityGroupInUseByVirtualMachineScaleSet error code | Remove all public IP addresses that are associated with the subnet, and remove
the NSG that's used by the subnet. |
Azure AKS | When I try to delete a Microsoft Azure Kubernetes Service (AKS) cluster, I get a RequestDisallowedByPolicy error (for cluster deletions) | Verify that you have permission to make any changes to policy services. If you
don't have permission, find someone who has access so that they can make the necessary changes. Also, check the policy name that's causing the problem, and then temporarily deny that rule so that you (or someone who has permission) can do the delete operation. |
Azure AKS | Getting a TooManyRequestsReceived or SubscriptionRequestsThrottled error when I try to delete a Microsoft Azure Kubernetes Service (AKS) cluster | The HTTP response includes a Retry-After value. This specifies the number of seconds that your application should wait (or sleep) before it sends the next request. If you send a request before the retry value has elapsed, your request isn't processed, and a new retry value is returned. |
Azure AKS | I get an "insufficientSubnetSize" error when I deploy an AKS cluster that
uses advanced networking | Because you can't update an existing subnet's CIDR range, you must have permission to create a new subnet to resolve this issue. Rebuild a new subnet with a larger CIDR range that's sufficient for operation goals by following these steps:
1. Create a new subnet that has a larger, non-overlapping range.
2. Create a new node pool on the new subnet.
3. Drain pods from the old node pool that resides in the old subnet that will be replaced.
4. Delete the old subnet and old node pool. |
Azure AKS | Cluster autoscaler fails to scale with "failed to fix node group sizes" error | To get out of this state, disable and re-enable the cluster autoscaler. |
Azure AKS | Node Not Ready failures that are followed by recoveries error | To prevent this issue from occurring in the future, take one or more of the following actions:
1. Make sure that your service tier is fully paid for.
2. Reduce the number of watch and get requests to the API server.
3. Replace the node pool with a healthy node pool. |
Azure AKS | Can't view resources in Kubernetes resource viewer in Azure portal | Make sure that when you run the az aks create or az aks update command in
Azure CLI, the --api-server-authorized-ip-ranges parameter includes access for the local client computer to the IP addresses or IP address ranges from which the portal is being browsed. |
Azure AKS | Getting an error when I try to upgrade or scale a Microsoft Azure Kubernetes Service (AKS) cluster | To resolve these scenarios, follow these steps:
1. Scale your cluster back to a stable goal state within the quota.
2. Request an increase in your resource quota.
3. Try to scale up again beyond the initial quota limits.
4. Retry the original operation. This second operation should bring your cluster to a successful state. |
Azure AKS | Insufficient subnet size error while deploying an AKS cluster with advanced networking | Create new subnets. Because you can't update an existing subnet's CIDR range, you'll need to be granted the permission to create a new subnet.
Rebuild a new subnet with a larger CIDR range that's sufficient for operation goals by following these steps:
1. Create a new subnet with a larger, non-overlapping range.
2. Create a new node pool on the new subnet.
3. Drain the pods from the old node pool that resides in the old subnet.
4. Delete the old subnet and old node pool. |
Azure AKS | Missing or invalid service principal when creating an AKS cluster | Make sure that there's a valid, findable service principal. To do this, use one of the following methods:
During cluster creation, use an existing service principal that has already propagated across regions to pass into AKS.
If you use automation scripts, add time delays between service principal creation and AKS cluster creation.
If you use the Azure portal, return to the cluster settings after you try to create the cluster, and then retry the validation page after a few minutes. |
Azure AKS | When creating an AKS cluster, I get errors after restricting egress traffic in AKS | Verify that your configuration doesn't conflict with any of the required or optionally recommended settings for the following items:
1. Outbound ports
2. Network rules
3. Fully qualified domain names (FQDNs)
4. Application rules |
Azure AKS | Error: TCP time-outs when kubectl or other third-party tools connect to the
API server | Make sure the nodes that host this pod aren't overly utilized or under stress.
Consider moving the nodes to their own system node pool. |
Azure Security IAM | How can I identify how and when key vaults are accessed? | After you create one or more key vaults, you'll likely want to monitor how and
when your key vaults are accessed, and by whom. You can do this monitoring by enabling logging for Azure Key Vault. |
Azure Security IAM | How can I monitor vault availability, service latency periods or other
performance metrics for key vault? | As you start to scale your service, the number of requests sent to your key vault
will rise. Such demand has the potential to increase the latency of your requests and, in extreme cases, cause your requests to be throttled, which will degrade the performance of your service. You can monitor Key Vault performance metrics and get alerted on specific thresholds. |
Azure Security IAM | I'm not able to modify access policy, how can it be enabled? | The user needs to have sufficient Azure AD permissions to modify access policy.
In this case, the user would need the Contributor role or higher. |
Azure Security IAM | How can I give the AD group access to the key vault? | Give the AD group permissions to your key vault using the Azure CLI az keyvault set-policy command, or the Azure PowerShell Set-AzKeyVaultAccessPolicy cmdlet.
The application also needs at least one Identity and Access Management (IAM) role assigned to the key vault. Otherwise it will not be able to log in and will fail with insufficient rights to access the subscription. Azure AD Groups with Managed Identities may require up to eight hours to refresh tokens and become effective. |
Azure Security IAM | Unable to assign a role using a service principal with Azure CLI | There are two ways to potentially resolve this error. The first way is to assign the Directory Readers role to the service principal so that it can read data in the directory.
The second way to resolve this error is to create the role assignment by using the --assignee-object-id parameter instead of --assignee. By using --assignee-object-id, Azure CLI will skip the Azure AD lookup. You'll need to get the object ID of the user, group, or application that you want to assign the role to. |
Azure Security IAM | ClientCertificateCredential authentication issue: Client assertion contains
an invalid signature. | Ensure the specified certificate has been uploaded to the AAD application registration. |
Azure Security IAM | ManagedIdentityCredential authentication unavailable, no managed
identity endpoint found | Ensure the managed identity has been properly configured on the App Service.
Verify the App Service environment is properly configured and the managed identity endpoint is available. |
Azure Security IAM | Deleted or rejected private endpoint still shows Approved in ADF | You should delete the managed private endpoint in ADF once the existing private endpoints are rejected/deleted from the source/sink datasets. |
Azure Security IAM | Connection error in public endpoint | 1. Make sure the private endpoint is enabled on both the source and the sink side when using the Managed VNet IR.
2. If you still want to use the public endpoint, you can switch to public IR only instead of using the Managed VNet IR for the source and the sink. Even if you switch back to public IR, the service may still use the Managed VNet IR if the Managed VNet IR is still there. |
Azure Security IAM | Not able to use self-hosted IR to bridge two on-premises datastores | Install drivers for both the source and destination datastores on the destination IR, and make sure that it can access the source datastore.
If the traffic can't pass through the network between the two datastores (for example, they're configured in two virtual networks), you might not finish copying in one activity even with the IR installed. If you can't finish copying in a single activity, you can create two copy activities with two IRs, each in its own VNet:
1. Copy activity 1 (first IR): from datastore 1 to Azure Blob Storage.
2. Copy activity 2 (second IR): from Azure Blob Storage to datastore 2.
This solution could simulate the requirement to use the IR to create a bridge that connects two disconnected datastores. |
Azure Security IAM | Unable to register the self-hosted IR | Use localhost IP address 127.0.0.1 to host the file and resolve the issue. |
Azure Security IAM | I can sign in to Azure portal, but I see the error, No subscriptions found | To fix this issue:
1. Verify that the correct Azure directory is selected by selecting your account at the top-right corner.
2. If the correct Azure directory is selected, but you still receive the error message, have your account added as an Owner. |
Azure Security IAM | How do I check my current consumption level? | Azure customers can view their current usage levels in Cost Management |
Azure Security IAM | Unable to remove a credit card from a saved billing payment method | By design, you can't remove a credit card from the active subscription.
If an existing card has to be deleted, one of the following actions is required:
1. A new card must be added to the subscription so that the old payment instrument can be successfully deleted.
2. You can cancel the subscription to delete the subscription permanently and then remove the card. |
Azure Security IAM | VisualStudioCredential authentication issue: Failed To Read Credentials | 1. In Visual Studio select the Tools > Options menu to launch the Options dialog.
2. Navigate to the Azure Service Authentication options to sign in with your Azure Active Directory account.
3. If you had already logged in to your account, try logging out and logging in again, as that will repopulate the cache and potentially mitigate the error you're getting. |
Azure Security IAM | AzureCliCredential authentication issue: Azure CLI not installed | 1. Ensure the Azure CLI is properly installed.
2. Validate the installation location has been added to the PATH environment variable. |
Azure Security IAM | RequestFailedException raised from the client with a status code of 401 or
403 | 1. Enable logging to determine which credential in the chain returned the authenticating token.
2. If a credential other than the expected one is returning a token, bypass it by either signing out of the corresponding development tool or excluding the credential with the ExcludeXXXCredential property in DefaultAzureCredentialOptions.
3. Ensure that the correct role is assigned to the account being used. For example, a service specific role rather than the subscription Owner role. |
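The ExcludeXXXCredential properties above belong to the .NET SDK; as an illustration of the same idea in the Python azure-identity package, credentials can be excluded from the default chain with keyword arguments (which ones you exclude depends on which unexpected credential is returning a token):

```python
import logging
from azure.identity import DefaultAzureCredential

# Verbose logging shows which credential in the chain produced the token.
logging.basicConfig(level=logging.DEBUG)

credential = DefaultAzureCredential(
    exclude_shared_token_cache_credential=True,   # skip cached desktop accounts
    exclude_visual_studio_code_credential=True,   # skip the VS Code sign-in
)
```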
Azure Security IAM | UsernamePasswordCredential authentication Error Code: AADSTS50126
| Ensure the username and password provided when constructing the credential are valid. |
Azure Security IAM | CredentialUnavailableException: The requested identity hasn't been
assigned to this resource. | If using a user assigned identity, ensure the specified clientId is correct.
If using a system assigned identity, make sure it has been enabled properly. |
Azure Security IAM | CredentialUnavailableException: ManagedIdentityCredential
authentication unavailable. | Ensure the managed identity has been properly configured on the App Service.
Verify the App Service environment is properly configured and the managed identity endpoint is available |
Azure - AML | Error code: 4110
Message: AzureMLExecutePipeline activity missing LinkedService definition in JSON.
| Check that the input AzureMLExecutePipeline activity JSON
definition has correctly linked service details. |
Azure - AML | Error code: 4111
Message: AzureMLExecutePipeline activity has wrong LinkedService type in JSON. Expected LinkedService type: '%expectedLinkedServiceType;', current LinkedService type: Expected LinkedService type: '%currentLinkedServiceType;'. | Check that the input AzureMLExecutePipeline activity JSON definition has correctly linked service details. |
Azure - AML | Error code: 4112
Message: AzureMLService linked service has invalid value for property '%propertyName;'. | Check if the linked service has the property %propertyName; defined with correct data. |
Azure - AML | Error code: 4121
Message: Request sent to Azure Machine Learning for operation '%operation;' failed with http status code '%statusCode;'. Error message from Azure Machine Learning: '%externalMessage;'. | This might be caused by an expired credential used to access Azure Machine Learning. Verify that the credential is valid and retry. |
Azure - AML | Error code: 4122
Message: Request sent to Azure Machine Learning for operation '%operation;' failed with http status code '%statusCode;'. Error message from Azure Machine Learning: '%externalMessage;'. | Verify that the credential in Linked Service is valid, and has permission to access Azure Machine Learning. |
Azure - AML | Request sent to Azure Machine Learning for operation '%operation;' failed with http status code '%statusCode;'. Error message from Azure Machine Learning: '%externalMessage;'.
| Check that the value of activity properties matches the expected payload of the published Azure ML pipeline specified in Linked Service |
Azure - AML | Azure ML pipeline run failed with status: '%amlPipelineRunStatus;'. Azure ML pipeline run Id: '%amlPipelineRunId;'. Please check in Azure Machine Learning for more error logs.
| Check Azure Machine Learning for more error logs, then fix the ML pipeline. |
Azure - AML | Unable to pass data to PipelineData directory | Ensure you have created a directory in the script that corresponds to where
your pipeline expects the step output data. In most cases, an input argument will define the output directory, and then you create the directory explicitly. Use os.makedirs(args.output_dir, exist_ok=True) to create the output directory. |
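A minimal sketch of such a step script (the --output_dir argument name is an assumption; it must match whatever argument your pipeline definition wires the PipelineData object to):

```python
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument("--output_dir", type=str)
args = parser.parse_args()

# Create the directory the pipeline expects, then write the step output into it.
os.makedirs(args.output_dir, exist_ok=True)
with open(os.path.join(args.output_dir, "result.txt"), "w") as f:
    f.write("step output")
```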
Azure - AML | Pipeline is rerunning unnecessarily | To ensure that steps only rerun when their underlying data or scripts change,
decouple your source-code directories for each step. If you use the same source directory for multiple steps, you may experience unnecessary reruns. Use the source_directory parameter on a pipeline step object to point to your isolated directory for that step, and ensure you aren't using the same source_directory path for multiple steps. |
Azure - AML | Pipeline not reusing steps | Step reuse is enabled by default, but ensure you haven't disabled it in a
pipeline step. If reuse is disabled, the allow_reuse parameter in the step will be set to False. |
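Here is a sketch illustrating both points, the isolated source_directory from the previous entry and the allow_reuse flag from this one, using the Azure ML SDK v1 (the step name, script, folder, and compute target name are placeholders):

```python
from azureml.pipeline.steps import PythonScriptStep

train_step = PythonScriptStep(
    name="train",
    script_name="train.py",
    source_directory="steps/train",   # a folder containing only this step's code
    compute_target="cpu-cluster",     # name of an existing compute target
    allow_reuse=True,                 # default; set to False only to force reruns
)
```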
Azure - AML | I am getting the following error: ModuleNotFoundError: No module
named 'azureml.train' whenever I try to import the HyperDriveConfig module: from azureml.train.hyperdrive import HyperDriveConfig. | The azureml-train package has already been deprecated; it might not receive future updates and may be removed from the distribution altogether. Please use azureml-train-core instead. |
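A quick sketch of the replacement, assuming the azureml.train.hyperdrive namespace is provided by the azureml-train-core package:

```python
# pip install azureml-train-core
from azureml.train.hyperdrive import HyperDriveConfig, RandomParameterSampling, PrimaryMetricGoal
```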
Azure - AML | ModuleNotFoundError: No module named 'azureml' even after
installation | To resolve the issue, try installing from within a notebook cell by adding % at the beginning of the pip install command. |
Azure - AML | I am using Azure ML for real-time machine learning. I have installed
the Kafka server, but I am having a connection issue when trying to create a topic. I received the following warning: WARN [AdminClient clientId=adminclient-1] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient). | 1. Verify that the Kafka broker is running: You can check if the Kafka broker is running by using the following command in a new terminal window: ./kafka_2.13-3.3.2/bin/kafka-server-start.sh ./kafka_2.13-3.3.2/config/server.properties
2. Verify that the address and port are correct: Make sure that the address and port specified in the bootstrap-server parameter are correct and that there are no firewall or network configuration issues preventing you from connecting to the broker.
3. Check the Kafka logs for errors: Check the Kafka logs to see if there are any error messages that could help identify the issue. You can find the Kafka logs in the logs directory of your Kafka installation.
4. Try using a different topic name: It's possible that the topic name you're using is already in use or is invalid. Try using a different topic name to see if that resolves the issue. |
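For step 2, a small connectivity check from Python using the kafka-python package (the broker address is the same localhost:9092 assumed in the warning):

```python
from kafka import KafkaConsumer            # pip install kafka-python
from kafka.errors import NoBrokersAvailable

try:
    consumer = KafkaConsumer(bootstrap_servers="localhost:9092")
    print("Broker reachable. Existing topics:", consumer.topics())
except NoBrokersAvailable:
    print("Could not reach a broker at localhost:9092 -- check that Kafka is running.")
```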
Azure - AML | In Azure ML studio, the deploy option is not there | In Azure Machine Learning Studio, the ability to deploy a model is only available in the paid tiers of the service. If you are using a trial account, you may not have access to the deploy functionality.
To deploy a model in Azure Machine Learning Studio, you will need to upgrade to a paid subscription. The deploy functionality is available in the Standard and Enterprise tiers of the service.
Once you have upgraded your subscription, you can follow these steps to deploy your trained model:
Open the Azure Machine Learning Studio and navigate to your workspace.
Navigate to the "Models" tab and select the trained model you want to deploy.
Click on the "Deploy" button and select the deployment target, such as Azure Kubernetes Service (AKS) or Azure Container Instances (ACI).
Configure the deployment settings, such as the number of nodes and the CPU and memory settings.
Click on the "Deploy" button to start the deployment process.
Once the deployment is complete, you can test the deployed model by sending requests to the endpoint. |
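The steps above use the studio UI; as a rough equivalent with the Azure ML Python SDK v1, here is a hedged sketch of deploying a registered model to ACI (the model name, scoring script, environment file, and endpoint name are all placeholders):

```python
from azureml.core import Workspace, Environment
from azureml.core.model import Model, InferenceConfig
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()
model = Model(ws, name="my-trained-model")

# score.py and environment.yml are placeholders for your scoring script and conda spec.
inference_config = InferenceConfig(
    entry_script="score.py",
    environment=Environment.from_conda_specification("deploy-env", "environment.yml"),
)
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

service = Model.deploy(ws, "my-endpoint", [model], inference_config, deployment_config)
service.wait_for_deployment(show_output=True)
```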
Azure - AML | Can I use a prebuilt component in custom pipeline mode? | Classic prebuilt components provide components mainly for data processing and traditional machine learning tasks like regression and classification. This type of component continues to be supported, but no new components will be added.
Custom components allow you to provide your own code as a component. It supports sharing across workspaces and seamless authoring across Studio, CLI, and SDK interfaces. |