```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-vertex-ai-workbench-instance' AND json.rule = state equals "ACTIVE" AND shieldedInstanceConfig.enableIntegrityMonitoring is false```
GCP Vertex AI Workbench Instance has Integrity monitoring disabled This policy identifies GCP Vertex AI Workbench Instances that have Integrity monitoring disabled. Integrity Monitoring continuously monitors the boot integrity, kernel integrity, and persistent data integrity of the underlying VM of the shielded workbench instances. It detects unauthorized modifications or tampering, enhancing security by verifying the trusted state of VM components throughout their lifecycle. Integrity monitoring provides active alerts, enabling administrators to respond to integrity failures and prevent compromised nodes from being deployed into the cluster. It is recommended to enable Integrity Monitoring for Workbench instances to detect and mitigate advanced threats, such as rootkits and bootkit malware. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Navigate to the 'Vertex AI' service\n3. In the side panel, under 'Notebooks', go to 'Workbench'\n4. Under 'INSTANCES' tab, select 'VIEW' as 'INSTANCES'\n5. Click on the alerting instance\n6. Under the 'SYSTEM' tab, in front of 'VM details', click on the 'View in Compute Engine' link\n7. Stop the VM by clicking on the 'STOP' button. Click the 'STOP' button on the confirmation dialogue.\n8. Once the VM has been stopped, click on the 'EDIT' button\n9. Under 'Shielded VM', enable 'Turn on vTPM'\n10. Enable 'Turn on Integrity Monitoring'\n11. Click on 'Save'\n12. Click on 'START/RESUME' from the top menu.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-storage-account-list' AND json.rule = properties.supportsHttpsTrafficOnly !exists```
VenuTestPolicyRem This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-apigateway-method' AND json.rule = requestValidatorId does not exist ```
AWS API gateway request parameter is not validated This policy identifies the AWS API gateways for which the request parameters are not validated. When the validation fails, API Gateway fails the request, returns a 400 error response to the caller, and publishes the validation results in CloudWatch Logs. It is recommended to perform basic validation of an API request before proceeding with the integration request to block unvalidated calls to the backend. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to AWS management console\n2. Navigate to 'API Gateway' service\n3. Select the region for which the API gateway is reported.\n4. Find the alerted API by the API gateway ID, which is the first part of the reported resource, and click on it\n5. Navigate to the reported method\n6. Click on the clickable link of 'Method Request'\n7. Under the 'Settings' section, click on the pencil symbol for the 'Request Validator' field\n8. From the dropdown, select the type of Request Validator as per the requirement\n9. Click on the tick symbol next to it to save the changes.
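The same remediation can be scripted. A minimal boto3 sketch, assuming placeholder API, resource, and method identifiers (not taken from the policy above):

```python
# Hypothetical sketch: attach a request validator to an API Gateway method with boto3.
# The REST API ID, resource ID, and HTTP method below are placeholders.
import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")

# Create a validator that checks request parameters before the integration request.
validator = apigw.create_request_validator(
    restApiId="a1b2c3d4e5",
    name="params-validator",
    validateRequestParameters=True,
    validateRequestBody=False,
)

# Attach the validator to the reported method so unvalidated calls are rejected with a 400.
apigw.update_method(
    restApiId="a1b2c3d4e5",
    resourceId="abc123",
    httpMethod="GET",
    patchOperations=[
        {"op": "replace", "path": "/requestValidatorId", "value": validator["id"]}
    ],
)
```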
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = 'autoProvisioningSettings[*].name equals default and (autoProvisioningSettings[*].properties.autoProvision equals Off or autoProvisioningSettings[*] does not exist)'```
Azure Microsoft Defender for Cloud automatic provisioning of log Analytics agent for Azure VMs is set to Off This policy identifies Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) settings where automatic provisioning of the Log Analytics agent for Azure VMs is set to Off. Microsoft Defender for Cloud provisions the Microsoft Monitoring Agent on all existing supported Azure virtual machines and any new ones that are created. The Microsoft Monitoring Agent scans for various security-related configurations and events such as system updates, OS vulnerabilities, and endpoint protection, and provides alerts. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud' dashboard\n3. Select 'Environment Settings'\n4. Click on the reported subscription name\n5. Select the 'Settings & monitoring'\n6. Set Status 'On' for 'Log Analytics agent/Azure Monitor agent' component\n7. Click on 'Continue'\n8. Click on 'Save'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = ['sqlServer'].['properties.state'] equal ignore case "Ready" and (['sqlServer'].['properties.minimalTlsVersion'] equal ignore case "None" or ['sqlServer'].['properties.minimalTlsVersion'] equals "1.0" or ['sqlServer'].['properties.minimalTlsVersion'] equals "1.1")```
Azure SQL server using insecure TLS version This policy identifies Azure SQL servers that use an insecure TLS version. Enforcing TLS connections between database server and client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and application. As a security best practice, it is recommended to use the latest TLS version for Azure SQL server. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Select 'SQL servers'\n3. Click on the reported SQL server instance you want to modify\n4. Navigate to Security -> Networking -> Connectivity\n5. Under 'Encryption in transit' section, Set 'Minimum TLS Version' to 'TLS 1.2' or higher.\n6. Click on 'Save'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-kms-get-key-rotation-status' AND json.rule = keyMetadata.enabled is false```
AWS KMS Customer Managed Key (CMK) is disabled This policy identifies the AWS KMS Customer Managed Key (CMK) that is disabled. Ensuring that your Amazon Key Management Service (AWS KMS) key is enabled is important because it determines whether the key can be used to perform cryptographic operations. If an AWS KMS Key is disabled, any operations dependent on that key, such as encryption or decryption of data, will fail. This can lead to application downtime, data access issues, and potential data loss if not addressed promptly. It is recommended to enable the AWS KMS Customer Managed Key (CMK) if it is used in the application, to restore cryptographic operations and ensure your applications and services can access encrypted data. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable the AWS KMS customer managed keys:\n\n1. Sign in to the AWS Management Console and open the AWS Key Management Service (AWS KMS) console at https://console.aws.amazon.com/kms.\n2. To change the AWS Region that the reported resource is presented in, use the Region selector in the upper-right corner of the page.\n3. In the navigation pane, choose 'Customer-managed keys'.\n4. Select the reported CMK and click on the dropdown 'Key Actions'.\n5. Choose the 'Enable' option.
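A minimal boto3 sketch of the same fix, assuming a placeholder key ID:

```python
# Hypothetical sketch: re-enable a disabled customer managed KMS key with boto3.
# The key ID below is a placeholder for the reported CMK.
import boto3

kms = boto3.client("kms", region_name="us-east-1")

key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"

# Only act on keys that are actually disabled.
state = kms.describe_key(KeyId=key_id)["KeyMetadata"]["KeyState"]
if state == "Disabled":
    kms.enable_key(KeyId=key_id)  # restores the key for cryptographic operations
```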
```config from cloud.resource where api.name = 'aws-iam-list-users' AND json.rule = createDate contains 2018```
Edited_pwdzvysgyp_ui_auto_policies_tests_name kjbqahijfa_ui_auto_policies_tests_descr This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' as X; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Y; filter '($.Y.s3BucketName==$.X.bucketName) and ($.X.versioningConfiguration.mfaDeleteEnabled does not exist)'; show X;```
AWS CloudTrail S3 buckets have not enabled MFA Delete This policy identifies the S3 buckets which do not have Multi-Factor Authentication enabled for CloudTrails. For encryption of log files, CloudTrail defaults to use of S3 server-side encryption (SSE). We recommend adding an additional layer of security by adding MFA Delete to your S3 bucket. This will help to prevent deletion of CloudTrail logs without your explicit authorization. We also encourage you to use a bucket policy that places restrictions on which of your identity and access management (IAM) users are allowed to delete S3 objects. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MFA']. Mitigation of this issue can be done as follows: Enable MFA Delete on the bucket(s) you have configured to receive CloudTrail log files.\nNote: We recommend that you do not configure CloudTrail to write into an S3 bucket that resides in a different AWS account.\nAdditional information on how to do this can be found here:\n http://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html#MultiFactorAuthenticationDelete.
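A minimal boto3 sketch of enabling MFA Delete, assuming placeholder bucket name, MFA device, and token; this call must be made with the bucket owner's (root) credentials:

```python
# Hypothetical sketch: enable MFA Delete on a CloudTrail log bucket with boto3.
# Bucket name, MFA device ARN, and the 6-digit token are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_versioning(
    Bucket="my-cloudtrail-logs",
    # MFA is "<device serial or ARN> <current token>"
    MFA="arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)
```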
```config from cloud.resource where api.name = 'azure-synapse-workspace' AND json.rule = properties.provisioningState equal ignore case Succeeded as X; config from cloud.resource where api.name = 'azure-synapse-workspace-managed-sql-server-vulnerability-assessments' AND json.rule = properties.recurringScans.isEnabled is false as Y; filter '$.X.name equals $.Y.workspaceName'; show X;```
Azure Synapse Workspace vulnerability assessment is disabled This policy identifies Azure Synapse workspaces which have the Vulnerability Assessment setting disabled. The Vulnerability Assessment service scans Synapse workspaces for known security vulnerabilities and highlights deviations from best practices, such as misconfigurations, excessive permissions, and unprotected sensitive data. It is recommended to enable Vulnerability assessment on Synapse workspaces. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure vulnerability assessment for your existing Azure Synapse workspace, follow below steps:\n\n1. Log in to Azure Portal and Navigate to the Azure Synapse Analytics dashboard\n2. Select the reported Synapse Workspace\n3. Under Security, select Microsoft Defender for Cloud\n4. Enable Defender for Cloud to configure vulnerability assessment for the selected Azure Synapse Workspace.\n5. To configure vulnerability assessments to automatically run periodic scans, set Periodic recurring scans to On.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-kusto-clusters' AND json.rule = properties.state equal ignore case Running and properties.enableDoubleEncryption is false```
Azure Data Explorer cluster double encryption is disabled This policy identifies Azure Data Explorer clusters in which double encryption is disabled. Double encryption adds a second layer of encryption using service-managed keys. It is recommended to enable infrastructure double encryption on Data Explorer clusters so that encryption can be implemented at the layer closest to the storage device or network wires. For more details: https://learn.microsoft.com/en-us/azure/data-explorer/cluster-encryption-double This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Enabling double encryption is only possible during cluster creation. Once infrastructure encryption is enabled on your cluster, you can't disable it.\n\nTo create an Azure Data Explorer cluster with double encryption, follow below URL:\nhttps://learn.microsoft.com/en-us/azure/data-explorer/cluster-encryption-double\n\nNOTE: Using infrastructure double encryption will have a performance impact on the Azure Database for PostgreSQL server due to the additional encryption process.
```config from cloud.resource where api.name = 'gcloud-projects-get-iam-user' AND json.rule = roles[*] contains "roles/viewer" or roles[*] contains "roles/editor" or roles[*] contains "roles/owner" as X; config from cloud.resource where api.name = 'gcloud-cloud-function-v2' as Y; filter '$.Y.serviceConfig.serviceAccountEmail equals $.X.user'; show Y;```
GCP Cloud Function is granted a basic role This policy identifies GCP Cloud Functions that are granted a basic role. This includes both Cloud Functions v1 and Cloud Functions v2. Basic roles are highly permissive roles that existed before the introduction of IAM and grant wide access over project to the grantee. The use of basic roles for granting permissions increases the blast radius and could help to escalate privilege further in case the Cloud Function is compromised. Following the principle of least privilege, it is recommended to avoid the use of basic roles. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: It is recommended to follow the principle of least privilege for granting access.\n\nTo update privileges granted to a service account, please refer to the steps below: \n1. Log in to the GCP console\n2. Navigate to the Cloud Functions\n3. Click on the cloud function for which alert is generated\n4. Go to 'DETAILS' tab\n5. Note the service account mentioned attached to the cloud function\n6. Navigate to the IAM & ADMIN\n7. Go to IAM tab\n8. Go to 'VIEW BY PRINCIPALS' tab\n9. Find the previously noted service account and click on 'Edit principal' button (pencil icon)\n10. Remove any binding to any basic role (roles/viewer or roles/editor or roles/owner)\n11. Click 'SAVE'..
```config from cloud.resource where api.name = 'azure-storage-account-list' AND json.rule= 'publicContainersList[*] contains insights-operational-logs and (totalPublicContainers > 0 and (properties.allowBlobPublicAccess is true or properties.allowBlobPublicAccess does not exist) and properties.publicNetworkAccess equal ignore case Enabled and networkRuleSet.virtualNetworkRules is empty and (properties.privateEndpointConnections is empty or properties.privateEndpointConnections does not exist))' as X; config from cloud.resource where api.name = 'azure-monitor-log-profiles-list' as Y; filter '$.X.id contains $.Y.properties.storageAccountId'; show X;```
Azure Storage account container storing activity logs is publicly accessible This policy identifies Storage account containers that contain the activity log export and are publicly accessible. Allowing public access to activity log content may aid an adversary in identifying weaknesses in the affected account's use or configuration. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to 'Storage accounts'\n3. Select the reported storage account\n4. Under 'Data storage' section, Select 'Containers'\n5. Select the container named 'insights-operational-logs'\n6. Click on 'Change access level'\n7. Set 'Public access level' to 'Private (no anonymous access)'\n8. Click on 'OK'.
```config from cloud.resource where api.name = 'aws-rds-describe-db-instances' and json.rule = storageEncrypted is true as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' AND json.rule = keyMetadata.keyState equals Enabled and keyMetadata.keyManager equals CUSTOMER and keyMetadata.origin equals AWS_KMS and (rotation_status.keyRotationEnabled is false or rotation_status.keyRotationEnabled equals "null") as Y; filter '($.X.kmsKeyId equals $.Y.key.keyArn)'; show X;```
AWS RDS database instance encrypted with Customer Managed Key (CMK) is not enabled for regular rotation This policy identifies Amazon RDS instances that use Customer Managed Keys (CMKs) for encryption but are not enabled with key rotation. Amazon RDS instance encryption key rotation failure can result in prolonged exposure of sensitive data and potential compliance violations. As a security best practice, it is important to periodically rotate these keys. This ensures that if the keys are compromised, the data in the underlying service remains secure with the new keys. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: The following steps are recommended to enable the automatic rotation of the KMS key used by the RDS instance\n\n1. Log in to the AWS console.\n2. In the console, select the specific region from the region drop-down in the top right corner, for which the alert is generated.\n3. Navigate to the 'RDS' service.\n4. Select the RDS instance reported in the alert, and click on the 'Configuration' tab.\n5. Under the 'Storage' section, click on the KMS key link in 'AWS KMS key'.\n6. Under the 'Key rotation' tab on the navigated KMS key window, enable the 'Automatically rotate this CMK every year' check box.\n7. Click on 'Save'.
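A minimal boto3 sketch of the same remediation, assuming a placeholder DB instance identifier:

```python
# Hypothetical sketch: enable yearly rotation on the customer managed key that
# encrypts a reported RDS instance. The instance identifier is a placeholder.
import boto3

rds = boto3.client("rds", region_name="us-east-1")
kms = boto3.client("kms", region_name="us-east-1")

instance = rds.describe_db_instances(DBInstanceIdentifier="reported-db")["DBInstances"][0]
key_arn = instance["KmsKeyId"]  # CMK used for storage encryption

# Turn on automatic annual rotation if it is not already enabled.
if not kms.get_key_rotation_status(KeyId=key_arn)["KeyRotationEnabled"]:
    kms.enable_key_rotation(KeyId=key_arn)
```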
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = "permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(3306,3306)"```
Alibaba Cloud Security group allow internet traffic to MySQL port (3306) This policy identifies Security groups that allow inbound traffic on MySQL port (3306) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network. This is applicable to alibaba_cloud cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In the Inbound tab, select the rule having 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 3306, and click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with a specific IP address range.\n7. Click on 'OK'.
```config from cloud.resource where cloud.type = 'aws' AND cloud.account = 'jScheel AWS Account' AND api.name = 'aws-route53-domain' AND json.rule = dnssecKeys[*] is empty```
jScheel AWS Route53 domain configured without DNSSEC List of AWS Route53 domains configured without DNSSEC. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: https://aws.amazon.com/blogs/networking-and-content-delivery/configuring-dnssec-signing-and-validation-with-amazon-route-53/.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-instance-template' AND json.rule = properties.canIpForward is true and (name does not start with "gke-" or (name starts with "gke-" and properties.disks[*].initializeParams.labels does not exist) )```
GCP VM instance template with IP forwarding enabled This policy identifies VM instance templates that have IP forwarding enabled. IP Forwarding could open unintended and undesirable communication paths and allows VM instances to send and receive packets with non-matching destination or source IPs. To enable source and destination IP match checks, disable IP Forwarding. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: GCP VM instance templates are used to create VM instances based on a preexisting configuration. The IP forwarding setting of a GCP VM instance template cannot be updated; after an instance template is created, the IP forwarding field becomes read-only. So to fix this alert, create a new VM instance template with IP forwarding disabled, migrate all required data from the reported template to the newly created one, and delete the reported VM instance template.\n\nTo create a new VM Instance template with IP forwarding disabled:\n1. Login to GCP Portal\n2. Go to 'Compute Engine' (Left Panel)\n3. Go to 'Instance templates'\n4. Click on 'CREATE INSTANCE TEMPLATE'\n5. Specify the mandatory parameters as required\n6. Click 'Management, security, disk, networking, sole tenancy'\n7. Click 'Networking'\n8. Click on the specific Network interfaces\n9. Set 'IP forwarding' to 'Off'\n10. Click on 'Create' button\n\nTo delete the VM instance template which has IP forwarding enabled:\n1. Login to GCP Portal\n2. Go to 'Compute Engine' (Left Panel)\n3. Go to 'Instance templates'\n4. From the list, choose the reported templates\n5. Click on the 'Delete' button.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = properties.state equal ignore case "Running" AND kind contains "functionapp" AND kind does not contain "workflowapp" AND kind does not equal "app" AND config.http20Enabled is false```
Azure Function App doesn't use HTTP 2.0 This policy identifies Azure Function Apps that don't use HTTP 2.0. HTTP 2.0 improves on older HTTP versions with fixes for the head-of-line blocking problem, header compression, and prioritisation of requests. HTTP 2.0 no longer supports HTTP 1.1's chunked transfer encoding mechanism, as it provides its own, more efficient, mechanisms for data streaming. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to Function App\n3. Click on the reported Function App\n4. Under Setting section, Click on 'Configuration'\n5. Under 'General Settings' tab, In 'Platform settings', Set 'HTTP version' to '2.0'\n6. Click on 'Save'\n\nIf the Function App is hosted in Linux using a Consumption (Serverless) Plan, follow the steps below\nAzure CLI Command - "az functionapp config set --http20-enabled true --name MyFunctionApp --resource-group MyResourceGroup".
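An equivalent sketch using the azure-mgmt-web Python SDK; the subscription ID, resource group, and app name are placeholders, and keyword/model names may differ slightly between SDK versions:

```python
# Hypothetical sketch: turn on HTTP 2.0 for a Function App's site configuration
# via the Azure Python SDK. All identifiers below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient

client = WebSiteManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.web_apps.update_configuration(
    resource_group_name="my-rg",
    name="my-function-app",
    site_config={"http20_enabled": True},
)
```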
```config from cloud.resource where api.name = 'gcloud-compute-backend-bucket' as X; config from cloud.resource where api.name = 'gcloud-storage-buckets-list' as Y; filter ' not (Y.name intersects X.bucketName) '; show X;```
bobby gcp policy This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-container-registry' AND json.rule = properties.provisioningState equal ignore case Succeeded and (properties.policies.exportPolicy.status contains enabled or properties.publicNetworkAccess contains enabled)```
Azure Container Registry with exports enabled This policy identifies Azure Container Registries with exports enabled. Azure Container Registries with exports enabled allow data in the registry to be moved out using commands like acr import or acr transfer. Export functionality can expose registry data, increasing the risk of unauthorized data movement. Disabling exports ensures that data in a registry is accessed only via the dataplane (e.g., docker pull) and cannot be moved out using other methods. As a security best practice, it is recommended to disable export configuration for Azure Container Registries. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Note: To remediate the alert, ensure the registry is on the Premium service tier, disable public network access to turn off exports (supported only for managed registries in Premium SKU), and use the provided az command as this setting cannot be changed through the UI.\n\nCLI command: az acr update --name ${registryName} --allow-exports false --public-network-enabled false.
```config from cloud.resource where api.name = 'gcloud-logging-metric' as X; config from cloud.resource where api.name = 'gcloud-monitoring-policies-list' as Y; filter '$.Y.conditions[*].metricThresholdFilter contains $.X.name and ($.X.filter contains "resource.type =" or $.X.filter contains "resource.type=") and ($.X.filter does not contain "resource.type !=" and $.X.filter does not contain "resource.type!=") and $.X.filter contains "gce_route" and ($.X.filter contains "jsonPayload.event_subtype=" or $.X.filter contains "jsonPayload.event_subtype =") and ($.X.filter does not contain "jsonPayload.event_subtype!=" and $.X.filter does not contain "jsonPayload.event_subtype !=") and $.X.filter contains "compute.routes.delete" and $.X.filter contains "compute.routes.insert"'; show X; count(X) less than 1```
GCP Log metric filter and alert does not exist for VPC network route changes This policy identifies the GCP account which does not have a log metric filter and alert for VPC network route changes. Monitoring network routes deletion and insertion activities will help in identifying VPC traffic flows through an expected path. It is recommended to create a metric filter and alarm to detect activities related to the deletion and insertion of VPC network routes. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to GCP Console\n2. Navigate to 'Logs-based metrics' under the 'Logging' section.\n3. Click on 'CREATE METRIC'.\n4. Provide 'Metric Type' and 'Details'.\n5. In 'Filter selection', add filter as \nresource.type="gce_route" AND jsonPayload.event_subtype="compute.routes.delete" OR jsonPayload.event_subtype="compute.routes.insert"\n6. Click on 'CREATE METRIC'.\n7. Under 'User-defined metrics' section, choose the metric you created in step 6 and click on the kebab menu (Vertical 3 dots) on the right side of the metrics\n8. Click on 'Create alert from metric'; it will navigate to 'Create alerting policy' under the section 'Monitoring'.\n9. Add the metric name created above if not auto-filled in the Monitoring filter. Choose an appropriate value for other alert condition parameters as desired. Then Click on 'NEXT'\n10. Configure all alert trigger settings as desired. Then Click on 'NEXT'\n11. Configure notifications as desired and provide an appropriate name for the alert policy. Then Click on 'NEXT'\n12. Click on 'CREATE POLICY'..
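A minimal sketch of creating the log-based metric with the google-cloud-logging client; the metric name is a placeholder and the alerting policy itself still has to be created in Cloud Monitoring as described above:

```python
# Hypothetical sketch: create a log-based metric for VPC route insert/delete events.
from google.cloud import logging

client = logging.Client()

route_filter = (
    'resource.type="gce_route" AND '
    '(jsonPayload.event_subtype="compute.routes.delete" OR '
    'jsonPayload.event_subtype="compute.routes.insert")'
)

metric = client.metric(
    "vpc-route-changes",  # placeholder metric name
    filter_=route_filter,
    description="Counts VPC network route insert/delete events",
)
metric.create()
```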
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = "acl.grants[?(@.grantee.typeIdentifier=='id')].grantee.identifier size > 0 and acl.grants[?(@.grantee.typeIdentifier=='id')].grantee.identifier does not contain c4c1ede66af53448b93c283ce9448c4ba468c9432aa01d700d3878632f77d2d0 and _AWSCloudAccount.isRedLockMonitored(acl.grants[?(@.grantee.typeIdentifier=='id')].grantee.identifier) is false"```
AWS S3 bucket accessible to unmonitored cloud accounts This policy identifies those S3 buckets which have either the read/write permission opened up for Cloud Accounts which are NOT part of Cloud Accounts monitored by Prisma Cloud. These accounts with read/write privileges should be reviewed and confirmed that these are valid accounts of your organization (or authorised by your organization) and are not active under Prisma Cloud monitoring. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the reported S3 bucket\n4. Click on the 'Permissions' tab\n5. Navigate to the 'Access control list (ACL)' section and Click on the 'Edit'\n6. Under 'Access for other AWS accounts', Add the Cloud Accounts that are monitored by Prisma Cloud\n7. Click on 'Save changes'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-server' AND json.rule = properties.userVisibleState equal ignore case Ready and sku.tier does not equal ignore case Basic and properties.publicNetworkAccess equal ignore case Enabled```
Azure PostgreSQL database server deny public network access setting is not set This policy identifies Azure PostgreSQL database servers for which the 'Deny public network access' setting is not set. When 'Deny public network access' is set to Yes, only private endpoint connections are allowed to access this resource. It is highly recommended to set the 'Deny public network access' setting to Yes, so that the PostgreSQL database server can be accessed only through private endpoints. Note: This feature is available in all Azure regions where Azure Database for PostgreSQL - Single server supports General Purpose and Memory Optimized pricing tiers. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure portal.\n2. Navigate to 'Azure Database for PostgreSQL servers'\n3. Click on the reported PostgreSQL server instance you want to modify\n4. Select 'Connection security' under 'Settings' from the left panel\n5. Set 'Deny public network access' to 'Yes'\n6. Click on 'Save'\n\nNote: When 'Deny public network access' is set to Yes, only private endpoint connections will be allowed to access this resource.
```config from cloud.resource where api.name = 'aws-elbv2-describe-load-balancers' AND json.rule = scheme equals internet-facing and type equals application as X; config from cloud.resource where api.name = 'aws-waf-v2-web-acl-resource' AND json.rule = (webACL.postProcessFirewallManagerRuleGroups.firewallManagerStatement.name does not contain AWSManagedRulesAnonymousIpList or webACL.postProcessFirewallManagerRuleGroups.firewallManagerStatement.name does not contain AWSManagedRulesKnownBadInputsRuleSet) and NOT ( webACL.rules[*].statement.managedRuleGroupStatement.name contains AWSManagedRulesAnonymousIpList and webACL.rules[*].statement.managedRuleGroupStatement.name contains AWSManagedRulesKnownBadInputsRuleSet ) as Y; filter '$.Y.resources.applicationLoadBalancer[*] contains $.X.loadBalancerArn'; show X;```
AWS ALB attached WAFv2 WebACL is not configured with AMR for Log4j Vulnerability This policy identifies AWS Application Load Balancer (ALB) attached with WAFv2 WebACL which is not configured with AWS Managed Rules (AMR) for Log4j Vulnerability. As per the guidelines given by AWS, Application Load Balancer (ALB) attached with WAFv2 WebACL should be configured with AWS Managed Rules (AMR) AWSManagedRulesKnownBadInputsRuleSet and AWSManagedRulesAnonymousIpList to protect from Log4j Vulnerability (CVE-2021-44228). For more information please refer below URL, https://aws.amazon.com/security/security-bulletins/AWS-2021-006/ This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. Navigate to EC2 Dashboard, and select 'Load Balancers'\n3. Make sure your reported Application Load Balancer requires WAF based on your requirement and Note down the load balancer name\n4. Navigate to WAF & Shield Service\n5. Go to the Web ACL associated to the reported Application Load Balancer\n6. Under 'Rules' tab click on 'Add rules' and select 'Add managed rule groups'\n7. Under 'AWS managed rule groups' enable 'Anonymous IP list' and 'Known bad inputs'\n8. Click on 'Add rules'.
```config from cloud.resource where api.name = 'azure-storage-account-list' AND json.rule = properties.allowCrossTenantReplication exists and properties.allowCrossTenantReplication is true```
Azure 'Cross Tenant Replication' is enabled This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = settings[?any( name equals WDATP and properties.enabled is false )] exists```
Azure Microsoft Defender for Cloud WDATP integration Disabled This policy identifies Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) which has Microsoft Defender for Endpoint (WDATP) integration disabled. Enabling Microsoft Defender for Cloud provides the tools needed to harden your resources, track your security posture, protect against cyberattacks, and streamline security management. It is highly recommended to enable the Microsoft Defender for Endpoint (WDATP) integration. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Integrations'\n6. Check/Enable option 'Allow Microsoft Defender for Endpoint to access my data'\n7. Select 'Save'.
```config from cloud.resource where api.name = 'aws-sqs-get-queue-attributes' AND json.rule = attributes.SqsManagedSseEnabled equals "false" and attributes.KmsMasterKeyId does not exist```
RomanTest - Ensure SQS service is encrypted at-rest This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultCacheBehavior.viewerProtocolPolicy contains "allow-all" or cacheBehaviors.items[?any( viewerProtocolPolicy contains "allow-all" )] exists```
AWS CloudFront viewer protocol policy is not configured with HTTPS For web distributions, you can configure CloudFront to require that viewers use HTTPS to request your objects, so connections are encrypted when CloudFront communicates with viewers. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: Configure CloudFront to require HTTPS between viewers and CloudFront.\n\n1. Go to the AWS console CloudFront dashboard.\n2. Select your distribution Id.\n3. Select the 'Behaviors' tab.\n4. Check the behavior you want to modify then select Edit.\n5. Choose 'HTTPS Only' or 'Redirect HTTP to HTTPS' for Viewer Protocol Policy.\n6. Select 'Yes, Edit.'.
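A minimal boto3 sketch of the same change, assuming a placeholder distribution ID:

```python
# Hypothetical sketch: switch a distribution's default cache behavior to
# redirect HTTP to HTTPS with boto3. The distribution ID is a placeholder.
import boto3

cf = boto3.client("cloudfront")

dist_id = "E1ABCDEFGHIJKL"
resp = cf.get_distribution_config(Id=dist_id)
config, etag = resp["DistributionConfig"], resp["ETag"]

# Enforce HTTPS between viewers and CloudFront on the default behavior;
# entries under config["CacheBehaviors"] should be updated the same way if present.
config["DefaultCacheBehavior"]["ViewerProtocolPolicy"] = "redirect-to-https"

cf.update_distribution(Id=dist_id, DistributionConfig=config, IfMatch=etag)
```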
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-glue-connection' AND json.rule = ((connectionType equals KAFKA and connectionProperties.KAFKA_SSL_ENABLED is false) or (connectionType does not equal KAFKA and connectionProperties.JDBC_ENFORCE_SSL is false)) and connectionType does not equal "NETWORK"```
AWS Glue connection does not have SSL configured This policy identifies Glue connections that are not configured with SSL to encrypt connections. It is recommended to use an SSL connection with hostname matching enforced for the DB connection on the client; enforcing SSL connections helps protect against 'man in the middle' attacks by encrypting the data stream between connections. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Navigate to AWS Glue service\n4. Click on 'Connections', Click on the reported Connection\n5. Click on 'Edit'\n6. On the 'Edit connection' page, Select 'Require SSL connection'\n7. Click on 'Next'\n8. Click on 'Finish'.
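A minimal boto3 sketch of enforcing SSL on a JDBC connection, assuming a placeholder connection name:

```python
# Hypothetical sketch: enforce SSL on a reported Glue JDBC connection with boto3.
import boto3

glue = boto3.client("glue", region_name="us-east-1")

conn = glue.get_connection(Name="reported-jdbc-connection")["Connection"]

props = dict(conn["ConnectionProperties"])
props["JDBC_ENFORCE_SSL"] = "true"  # for Kafka connections, KAFKA_SSL_ENABLED applies instead

connection_input = {
    "Name": conn["Name"],
    "ConnectionType": conn["ConnectionType"],
    "ConnectionProperties": props,
}
if "PhysicalConnectionRequirements" in conn:
    connection_input["PhysicalConnectionRequirements"] = conn["PhysicalConnectionRequirements"]

glue.update_connection(Name=conn["Name"], ConnectionInput=connection_input)
```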
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-describe-network-acls' AND json.rule = "entries[?any(egress equals false and ((protocol equals 6 and ((portRange.to equals 22 or portRange.to equals 3389 or portRange.from equals 22 or portRange.from equals 3389) or (portRange.to > 22 and portRange.from < 22) or (portRange.to > 3389 and portRange.from < 3389))) or protocol equals -1) and (cidrBlock equals 0.0.0.0/0 or ipv6CidrBlock equals ::/0) and ruleAction equals allow)] exists"```
AWS Network ACLs allow ingress traffic on Admin ports 22/3389 This policy identifies AWS Network Access Control Lists (NACLs) that have a rule allowing ingress traffic to server administration ports. AWS NACLs provide filtering of ingress and egress network traffic to AWS resources. Allowing ingress traffic on admin ports 22 (SSH) and 3389 (RDP) via AWS Network ACLs increases the vulnerability of EC2 instances and other network resources to unauthorized access and cyberattacks. It is recommended that no NACL allows unrestricted ingress access to server administration ports, such as SSH port 22 and RDP port 3389. NOTE: This policy may also report NACLs whose rule set contains deny rules. While remediating, make sure the rule set does not combine Allow and Deny rules that overlap on the same ports. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To update the AWS Network Access Control List perform the following actions:\n1. Sign into the AWS console and navigate to the Amazon VPC console.\n2. In the navigation pane, choose 'Network ACLs' under the 'Security' section.\n3. Select the reported Network ACL\n4. Click on 'Actions' and select 'Edit inbound rules'\n5. Click on Delete towards the right of the rule which has source '0.0.0.0/0' or '::/0', shows 'ALLOW', and has a 'Port Range' that includes port 22 or 3389 or is set to 'ALL'\n6. Click on 'Save'.
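A minimal boto3 sketch of removing such entries, assuming a placeholder NACL ID; review the matched entries before deleting in a real environment:

```python
# Hypothetical sketch: delete NACL ingress allow-entries that expose ports 22/3389
# to 0.0.0.0/0 or ::/0. The NACL ID is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

acl = ec2.describe_network_acls(NetworkAclIds=["acl-0123456789abcdef0"])["NetworkAcls"][0]

for entry in acl["Entries"]:
    if entry["Egress"] or entry["RuleAction"] != "allow":
        continue
    if entry.get("CidrBlock") != "0.0.0.0/0" and entry.get("Ipv6CidrBlock") != "::/0":
        continue
    if entry["Protocol"] not in ("6", "-1"):  # TCP or all protocols
        continue
    # Protocol -1 entries have no PortRange and cover every port.
    port_range = entry.get("PortRange", {"From": 0, "To": 65535})
    if port_range["From"] <= 22 <= port_range["To"] or port_range["From"] <= 3389 <= port_range["To"]:
        ec2.delete_network_acl_entry(
            NetworkAclId=acl["NetworkAclId"],
            RuleNumber=entry["RuleNumber"],
            Egress=False,
        )
```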
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = 'kind starts with app and properties.state equal ignore case running and properties.publicNetworkAccess exists and properties.publicNetworkAccess equal ignore case Enabled and config.ipSecurityRestrictions[?any(action equals Allow and ipAddress equals Any)] exists'```
Azure App Service web apps with public network access This policy identifies Azure App Service web apps that are configured with public network access. Publicly accessible web apps could allow malicious actors to remotely exploit any vulnerabilities in the hosted application. It is recommended to configure App Service web apps with private endpoints so that the hosted web apps are accessible only to restricted entities. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To restrict App Service network access, follow below URL:\nhttps://learn.microsoft.com/en-us/azure/app-service/app-service-ip-restrictions.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-monitor-log-profiles-list' AND json.rule = properties.logs[?any((enabled is true and category equals Administrative))] exists and properties.logs[?any((enabled is true and category equals Alert))] exists and properties.logs[?any((enabled is true and category equals Policy))] exists and properties.logs[?any((enabled is true and category equals Security))] exists as X; count(X) less than 1```
Azure Monitor Diagnostic Setting does not capture appropriate categories This policy identifies Azure Monitor Diagnostic Settings that do not capture appropriate categories. Capturing appropriate diagnostic setting categories allows proper alerting. It is recommended to select the Administrative, Alert, Policy, and Security diagnostic setting categories. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Navigate to 'Monitor' and select 'Activity log'\n3. Click on 'Diagnostic settings' in top pane\n4. Select 'Add diagnostic setting' if no 'Diagnostic settings' present\nOR\nClick on 'Edit setting' for the existing 'Diagnostic settings'\n5. Under 'Category details', select 'Administrative', 'Alert', 'Policy', and 'Security' for 'log'\n6. Add 'Destination details' and other required fields\n7. Click on 'Save'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = "location equal ignore case Global and properties.scopes[*] does not contain resourceGroups and properties.enabled equals true and properties.condition.allOf[?(@.field=='operationName')].equals equals Microsoft.Authorization/policyAssignments/delete" as X; count(X) less than 1```
Azure Activity log alert for delete policy assignment does not exist This policy identifies the Azure accounts in which an activity log alert for Delete policy assignment does not exist. Creating an activity log alert for Delete policy assignment gives insight into changes made to Azure policy assignments and may reduce the time it takes to detect unsolicited changes. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Click on Monitor (Left Panel)\n3. Select 'Alerts'\n4. Click on Create > Alert rule\n5. In 'Create an alert rule' page, choose the Scope as your Subscription and under the CONDITION section, choose 'Delete policy assignment (Microsoft.Authorization/policyAssignments)'; other fields can be set based on your custom settings.\n6. Click on Create.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-mysql-server' AND json.rule = properties.sslEnforcement equals Enabled and properties.minimalTlsVersion does not equal TLS1_2```
Azure MySQL Database Server using insecure TLS version This policy identifies Azure MySQL Database Servers that are using an insecure TLS version. As a security best practice, use the newer TLS version as the minimum TLS version for the Azure MySQL Database Server. Currently, Azure MySQL Database Server supports TLS 1.2, which resolves the security gaps of its preceding versions. https://docs.microsoft.com/en-gb/azure/mysql/howto-tls-configurations This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure TLS settings on the reported Azure MySQL Database Server, follow the below-mentioned URL:\nhttps://docs.microsoft.com/en-gb/azure/mysql/howto-tls-configurations.
```config from cloud.resource where api.name = 'aws-describe-mount-targets' AND json.rule = fileSystemDescription.encrypted is true as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' AND json.rule = keyMetadata.keyManager does not equal CUSTOMER or (keyMetadata.keyManager equals CUSTOMER and keyMetadata.keyState equals Disabled) as Y; filter '$.X.fileSystemDescription.kmsKeyId equals $.Y.key.keyArn'; show X;```
AWS Elastic File System (EFS) not encrypted using Customer Managed Key This policy identifies Elastic File Systems (EFSs) which are encrypted with default KMS keys and not with customer managed keys. It is a best practice to use customer managed KMS keys to encrypt your EFS data, as it gives you full control over the encrypted data. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: AWS EFS encryption of data at rest can only be enabled during file system creation. So to resolve this alert, create a new EFS with encryption enabled with the customer-managed key, then migrate all required data from the reported EFS to this newly created EFS and delete the reported EFS.\n\nTo create a new EFS with encryption enabled, perform the following:\n1. Sign into the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Navigate to EFS dashboard\n4. Click on 'File systems' (Left Panel)\n5. Click on 'Create file system' button\n6. On the 'Configure file system access' step, specify EFS details as per your requirements and Click on 'Next Step'\n7. On the 'Configure optional settings' step, Under 'Enable encryption' Choose 'Enable encryption of data at rest' and Select a customer managed key [i.e. other than the (default) aws/elasticfilesystem] from the 'Select KMS master key' dropdown list along with other parameters and Click on 'Next Step'\n8. On the 'Review and create' step, Review all your settings and Click on 'Create File System' button\n\nTo delete the reported EFS which is not encrypted with a customer managed key, perform the following:\n1. Sign into the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Navigate to EFS dashboard\n4. Click on 'File systems' (Left Panel)\n5. Select the reported file system\n6. Click on 'Actions' drop-down\n7. Click on 'Delete file system'\n8. In the 'Permanently delete file system' popup box, to confirm the deletion enter the file system's ID and Click on 'Delete File System'.
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-object-storage-bucket' AND json.rule = metrics_monitoring does not exist or metrics_monitoring.request_metrics_enabled does not equal ignore case "true" or metrics_monitoring.usage_metrics_enabled does not equal ignore case "true"```
IBM Cloud Object Storage bucket is not enabled with IBM Cloud Monitoring This policy identifies IBM Cloud Object Storage buckets which have Monitoring disabled or not enabled properly. Use IBM Cloud Monitoring to gain operational visibility into the performance and health of your applications, services, and platforms. It is recommended to enable Monitoring to monitor all usage/request metrics of a bucket. This is applicable to ibm cloud and is considered an informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud Console\n2. Click on the Menu Icon and navigate to 'Resource list'. From the list of resources, select the object storage instance in which the reported bucket resides.\n3. Select the bucket and click on the 'Configuration' tab.\n4. Navigate to 'Monitoring', click on the 'Create' button if it is not enabled already.\n5. If already enabled, click on the three dots and click 'Edit'.\n6. Select 'Usage Metrics' and 'Request Metrics' checkboxes to get all metrics.\n7. Click on 'Save'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-db-list' AND json.rule = 'securityAlertPolicy does not exist or securityAlertPolicy[*] is empty or (securityAlertPolicy.properties.state equals Enabled and securityAlertPolicy.properties.emailAccountAdmins equals Disabled)'```
Azure SQL Databases with disabled Email service and co-administrators for Threat Detection This policy identifies Azure SQL Databases which have the ADS Vulnerability Assessment 'Also send email notifications to admins and subscription owners' setting not configured. This setting enables ADS - VA scan reports to be sent to admins and subscription owners. It is recommended to enable the 'Also send email notifications to admins and subscription owners' setting, which would help in reducing the time required for identifying risks and taking corrective measures. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to SQL databases (Left Panel)\n3. Choose each reported DB instance\n4. Click on 'Microsoft Defender for Cloud' under 'Security'\n5. Click on '(Configure)' next to 'Microsoft Defender for SQL: Enabled at the server-level'\n6. In 'VULNERABILITY ASSESSMENT SETTINGS' section, Ensure 'Also send email notifications to admins and subscription owners' is checked\n7. 'Save' your changes.
```config from cloud.resource where api.name = 'aws-docdb-db-cluster-parameter-group' AND json.rule = parameters.tls.ParameterValue equals "disabled" as X; config from cloud.resource where api.name = 'aws-docdb-db-cluster' AND json.rule = Status equals available as Y; filter '$.X.DBClusterParameterGroupName equals $.Y.DBClusterParameterGroup'; show Y;```
AWS DocumentDB Cluster is not enabled with data encryption in transit This policy identifies Amazon DocumentDB Clusters for which data encryption in transit is disabled. Each DocumentDB Cluster is associated with a Cluster Parameter Group. It is highly recommended to implement in-transit encryption in order to protect data from unauthorized access as it travels through the network, between clients and the cluster. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: To modify the Parameter group\n1. Sign in to the AWS console\n2. Select the region, from the region drop-down, for which the alert is generated\n3. Navigate to Amazon DocumentDB Dashboard\n4. Click on the 'Parameter groups' (Left panel)\n5. Select the db cluster parameter group which is associated with the DocumentDB cluster on which the alert is generated\n6. Select the "tls" parameter\n7. Click on "Edit" button\n8. Set value to "enabled"\n9. Click on "Modify cluster parameter" button\n\nTo restart the DocumentDB cluster\n1. Sign in to the AWS console\n2. Select the region, from the region drop-down, for which the alert is generated\n3. Navigate to Amazon DocumentDB Dashboard\n4. Click on the 'Clusters' (Left panel)\n5. Select the DocumentDB cluster on which the alert is generated by choosing the button to the left of its name\n6. Choose "Actions", and then "Reboot".\n7. Click on "Reboot" button.
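A minimal boto3 sketch of the same remediation, assuming placeholder parameter group and cluster names:

```python
# Hypothetical sketch: re-enable TLS on the cluster parameter group associated with
# a reported DocumentDB cluster, then reboot its instances so the static parameter applies.
import boto3

docdb = boto3.client("docdb", region_name="us-east-1")

docdb.modify_db_cluster_parameter_group(
    DBClusterParameterGroupName="reported-cluster-param-group",
    Parameters=[
        {"ParameterName": "tls", "ParameterValue": "enabled", "ApplyMethod": "pending-reboot"}
    ],
)

# Reboot each cluster member so the pending "tls" change takes effect.
cluster = docdb.describe_db_clusters(DBClusterIdentifier="reported-cluster")["DBClusters"][0]
for member in cluster["DBClusterMembers"]:
    docdb.reboot_db_instance(DBInstanceIdentifier=member["DBInstanceIdentifier"])
```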
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-waf-v2-rule-group' AND json.rule = VisibilityConfig.CloudWatchMetricsEnabled is false or Rules[?any( VisibilityConfig.CloudWatchMetricsEnabled is false)] exists```
AWS WAF Rule Group CloudWatch metrics disabled This policy identifies AWS WAF Rule Groups that have CloudWatch metrics disabled. AWS WAF rule groups have CloudWatch metrics that provide information about the number of allowed and blocked web requests, counted requests, and requests that pass through without matching any rule in the rule group. These metrics can be used to monitor and analyse the performance of the web access control list (web ACL) and its associated rules. It is recommended to enable CloudWatch metrics for a WAF rule group to help in the monitoring and analysis of web requests. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable a RuleGroup with CloudWatch metrics please follow below steps:\n\n1. Run the below command to get the ruleGroup details to be used for update\n aws wafv2 list-rule-groups --scope {scopeOfRuleGroup}\n2. Get the ruleGroup 'Id' and 'LockToken' values for the ruleGroup to be updated from the output.\n3. Run the below command with the name and 'Id' obtained from the above output\n aws wafv2 get-rule-group --name {ruleGroupName} --scope {scopeOfRuleGroup} --id {IdFromAboveOutput}\n4. Get the 'Rules' block output from the above command and save it in a file for further reference\n5. Update the 'CloudWatchMetricsEnabled' field to true for every rule in the file saved above, along with providing a metric name in the 'MetricName' field\n6. Run the below command to enable CloudWatch metrics on the ruleGroup.\n aws wafv2 update-rule-group \n --name {ruleGroupName} \n --scope {scopeOfRuleGroup} \n --id {ruleGroupId} \n --lock-token {tokenFromAboveOutput} \n --rules file://{fileFromAboveOutput}\n --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName={metricNameForRuleGroup}.
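The same CLI workflow can be sketched in boto3; the rule group name, scope, and ID below are placeholders:

```python
# Hypothetical sketch: enable CloudWatch metrics for every rule in a reported WAFv2 rule group.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

resp = wafv2.get_rule_group(
    Name="reported-rule-group",
    Scope="REGIONAL",
    Id="00000000-0000-0000-0000-000000000000",
)
group, lock_token = resp["RuleGroup"], resp["LockToken"]

# Turn on per-rule metrics, then re-submit the rule group with group-level metrics enabled.
for rule in group["Rules"]:
    rule["VisibilityConfig"]["CloudWatchMetricsEnabled"] = True

group_visibility = dict(group["VisibilityConfig"])
group_visibility["CloudWatchMetricsEnabled"] = True

kwargs = dict(
    Name=group["Name"],
    Scope="REGIONAL",
    Id=group["Id"],
    Rules=group["Rules"],
    VisibilityConfig=group_visibility,
    LockToken=lock_token,
)
if group.get("Description"):
    kwargs["Description"] = group["Description"]

wafv2.update_rule_group(**kwargs)
```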
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-vpn-connections-summary' AND json.rule = 'vpnConnectionsSummary[*].vpnConnectionsCount greater than 7'```
AWS regions nearing VPC Private Gateway IPSec Limit This policy identifies if your account is nearing the private gateway IPSec limitation per VPC per Region. AWS provides a reasonable starting limitation for the maximum number of VPC Private Gateway IPSec connections you can assign in each VPC. If you approach the limit in a particular VPC, this alert indicates that you have nearly exhausted your allocation. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console.\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.\n3. Navigate to VPC Dashboard\n4. Click on 'Site-to-Site VPN Connections' (Left Panel)\n5. Choose the VPN connection you want to delete, i.e. one that is no longer used or required\n6. Click on 'Actions' dropdown\n7. Click on 'Delete'\n8. On 'Delete' popup dialog, Click on 'Delete'\nNOTE: If all existing VPN connections are required and you have exhausted your VPC Site-to-Site VPN Connections limit allocation, you can contact AWS for a service limit increase.
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-iam-identity-account-setting' AND json.rule = restrict_create_service_id does not equal "RESTRICTED" ```
IBM Cloud Service ID creation is not restricted in account settings This policy identifies IBM cloud accounts where Service ID creation is not restricted in account settings. By default, all members of an account can create service IDs. Enabling Service ID creation setting will restrict the users from creating service IDs unless correct access is granted explicitly. It is recommended to enable Service ID creation setting and grant access only on a need basis. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Follow the below mentioned URL to enable the restriction to create service IDs:\n\nhttps://cloud.ibm.com/docs/account?topic=account-restrict-service-id-create&interface=ui#enable-restrict-create-serviceid-ui.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-kubernetes-cluster' AND json.rule = properties.addonProfiles.omsagent.config does not exist or properties.addonProfiles.omsagent.enabled is false```
Azure AKS cluster monitoring not enabled Azure Monitor for containers is a feature designed to monitor the performance of container workloads deployed to either Azure Container Instances or managed Kubernetes clusters hosted on Azure Kubernetes Service (AKS). Monitoring your containers is critical, especially when you're running a production cluster, at scale, with multiple applications. This policy checks your AKS cluster monitoring add-on setting and alerts if no configuration is found, or monitoring is disabled. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable monitoring for your AKS cluster, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/aks/monitor-aks#configure-monitoring.
```config from cloud.resource where cloud.type = 'AWS' and api.name = 'aws-ec2-describe-subnets' AND json.rule = 'mapPublicIpOnLaunch is true'```
AWS VPC subnets should not allow automatic public IP assignment This policy identifies VPC subnets which allow automatic public IP assignment. VPC subnet is a part of the VPC having its own rules for traffic. Assigning the Public IP to the subnet automatically (on launch) can accidentally expose the instances within this subnet to internet and should be edited to 'No' post creation of the Subnet. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console.\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated.\n3. Navigate to the 'VPC' service.\n4. In the navigation pane, click on 'Subnets'.\n5. Select the identified Subnet and choose the option 'Modify auto-assign IP settings' under the Subnet Actions.\n6. Disable the 'Auto-Assign IP' option and save it..
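The same change can be made programmatically. Below is a minimal boto3 sketch (subnet ID and region are hypothetical placeholders) that disables automatic public IP assignment on a subnet.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Disable automatic public IP assignment for instances launched in this subnet.
ec2.modify_subnet_attribute(
    SubnetId="subnet-0123456789abcdef0",   # hypothetical subnet ID
    MapPublicIpOnLaunch={"Value": False},
)
```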
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = 'logging.enabled is false and logging.bucket is empty'```
AWS CloudFront distribution with access logging disabled This policy identifies CloudFront distributions which have access logging disabled. Enabling access log on distributions creates log files that contain detailed information about every user request that CloudFront receives. Access logs are available for web distributions. If you enable logging, you can also specify the Amazon S3 bucket that you want CloudFront to save files in. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to CloudFront Distributions Dashboard\n4. Click on the reported distribution\n5. On 'General' tab, Click on 'Edit' button\n6. On 'Edit Distribution' page, Set 'Logging' to 'On', choose a 'Bucket for Logs' and 'Log Prefix' as desired\n7. Click on 'Yes, Edit'.
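If you prefer to enable access logging programmatically, the sketch below uses boto3; the distribution ID, log bucket, and prefix are hypothetical placeholders. Note that the full distribution config must be sent back together with the current ETag.

```python
import boto3

cf = boto3.client("cloudfront")
DIST_ID = "E1EXAMPLE123"  # hypothetical distribution ID

# Fetch the current distribution configuration and its ETag.
resp = cf.get_distribution_config(Id=DIST_ID)
config, etag = resp["DistributionConfig"], resp["ETag"]

# Turn on standard access logging to an S3 bucket of your choice.
config["Logging"] = {
    "Enabled": True,
    "IncludeCookies": False,
    "Bucket": "my-log-bucket.s3.amazonaws.com",  # hypothetical log bucket
    "Prefix": "cloudfront/",
}

cf.update_distribution(Id=DIST_ID, DistributionConfig=config, IfMatch=etag)
```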
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-server' AND json.rule = "configurations.value[?(@.name=='log_disconnections')].properties.value equals OFF or configurations.value[?(@.name=='log_disconnections')].properties.value equals off"```
Azure PostgreSQL database server with log disconnections parameter disabled This policy identifies PostgreSQL database servers for which the 'log_disconnections' server parameter is not enabled. Enabling log_disconnections causes PostgreSQL to log the end of each session, including its duration, which in turn generates query and error logs. Query and error logs can be used to identify, troubleshoot, and repair configuration errors and sub-optimal performance. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure console.\n2. Navigate to 'Azure Database for PostgreSQL servers' dashboard\n3. Click on the alerted database name\n4. Go to 'Server parameters' under the 'Settings' block\n5. From the list of parameters, find 'log_disconnections' and set it to ON\n6. Click on the 'Save' button from the top menu to save the change.
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ram-password-policy' AND json.rule = 'passwordReusePrevention !isType Integer or passwordReusePrevention == 0'```
Alibaba Cloud RAM password history check policy is disabled This policy identifies Alibaba Cloud accounts for which password history check policy is disabled. As a best practice, enable RAM password history check policy to prevent RAM users from reusing a specified number of previous passwords. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Resource Access Management (RAM) service\n3. In the left-side navigation pane, click on 'Settings'\n4. In the 'Security Settings' tab, In the 'Password Strength Settings' Section, Click on 'Edit Password Rule'\n5. In the 'Password History Check Policy' field, enter the value between 1 to 24 instead of 0 based on your requirement.\n6. Click on 'OK'\n7. Click on 'Close'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-acm-describe-certificate' AND json.rule = config.remoteDebuggingEnabled is true```
mosh-stam2 This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: mosh_ recommendation.
```config from cloud.resource where api.name = 'aws-secretsmanager-describe-secret' as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' AND json.rule = keyMetadata.keyManager does not equal CUSTOMER or (keyMetadata.keyManager equals CUSTOMER and keyMetadata.keyState equals Disabled) as Y; filter '($.X.kmsKeyId does not exist ) or ($.X.kmsKeyId exists and $.X.kmsKeyId equals $.Y.keyMetadata.arn)'; show X;```
AWS Secrets Manager secret not encrypted by Customer Managed Key (CMK) This policy identifies AWS Secrets Manager secrets that are encrypted using the default KMS key instead of CMK (Customer Managed Key) or using a CMK that is disabled. AWS Secrets Manager secrets are a secure storage solution for sensitive information like passwords, API keys, and tokens in the AWS cloud. Secrets Manager secrets are encrypted by default by AWS managed key but users can specify CMK to get enhanced security, control over the encryption key, and also comply with any regulatory requirements. As a security best practice, using CMK to encrypt your Secrets Manager secrets is advisable as it gives you full control over the encrypted data. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To change the encryption key for a Secrets Manager secret:\n1. Open the Secrets Manager console at https://console.aws.amazon.com/secretsmanager/.\n2. From the list of secrets, choose the reported secret.\n3. On the secret details page, in the Secrets details section, choose Actions, and then choose 'Edit encryption key'.\n4. in the 'Encryption key' section choose the Customer Managed Key created and managed by you in AWS Key Management Service (KMS) based on your business requirement.\n5. Click 'Save' button to save the changes.\nNote: When using customer-managed CMKs to encrypt Secrets Manager secret, Ensure authorized entities have access to the key and its associated operations..
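The key change in steps 3-4 can also be applied via the API. The sketch below uses boto3 to point a secret at a customer managed key; the secret name and KMS key ARN are hypothetical placeholders, and secret versions created after the change are encrypted with the CMK.

```python
import boto3

sm = boto3.client("secretsmanager", region_name="us-east-1")

# Re-point the secret at a customer managed KMS key (hypothetical ARN).
sm.update_secret(
    SecretId="my-app/db-password",  # hypothetical secret name
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
)
```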
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-dynamodb-describe-table' AND json.rule = tableStatus equal ignore case ACTIVE AND continuousBackupsDescription.pointInTimeRecoveryDescription.pointInTimeRecoveryStatus does not equal ENABLED```
AWS DynamoDB table point-in-time recovery (PITR) disabled This policy identifies AWS DynamoDB tables that do not have point-in-time recovery (backup) enabled. AWS DynamoDB enables you to back up your table data continuously by using point-in-time recovery (PITR) with per-second granularity. This helps in protecting your data against accidental write or delete operations. It is recommended to enable point-in-time recovery functionality on the DynamoDB table to protect data. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable Point-in-Time Recovery (PITR) for a DynamoDB table, you can follow these steps:\n\n1. Sign in to the AWS Management Console.\n2. Navigate to the DynamoDB service.\n3. Click on 'Tables' in the left navigation pane.\n4. Select the table you want to enable Point-in-Time Recovery (PITR) for.\n5. Switch to the 'Backups' tab and click on 'Edit' next to Point-in-time recovery.\n6. Click on the 'Turn on point-in-time recovery' check box and click on 'Save changes'.
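The equivalent API call is a single boto3 request; the table name and region below are hypothetical placeholders.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Enable point-in-time recovery (continuous backups) on the table.
dynamodb.update_continuous_backups(
    TableName="my-table",  # hypothetical table name
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)
```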
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Udp or protocol equals *) and (destinationPortRange contains _Port.inRange(445,445) or destinationPortRanges[*] contains _Port.inRange(445,445) ))] exists```
Azure Network Security Group allows all traffic on CIFS (UDP Port 445) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on Windows SMB UDP port 445. Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict CIFS access solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.
```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-machine-learning-workspace' AND json.rule = properties.provisioningState equal ignore case Succeeded and (properties.encryption.status does not exist or properties.encryption.status equal ignore case disabled)```
Azure Machine Learning workspace not encrypted with Customer Managed Key (CMK) This policy identifies Azure Machine Learning workspaces that are not encrypted with a Customer Managed Key (CMK). Azure handles encryption using platform-managed keys by default, but customer-managed keys (CMKs) provide greater control and help meet specific security and compliance requirements. Without CMKs, organizations may not have full control over key management and rotation, increasing the risk of compliance issues and unauthorized data access. Configuring the workspace to use CMKs enhances security by allowing organizations to manage key access and rotation, ensuring stronger protection and compliance for sensitive data. As a security best practice, it is recommended to configure the workspace to use Customer Managed Keys (CMKs). This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Note: Once a Azure Machine Learning workspace is deployed, you can't switch from Microsoft-managed keys to customer-managed keys. You'll need to delete and recreate the workspace with customer-managed keys enabled.\n\n1. Log in to Azure Portal and search for 'Azure Machine Learning'\n2. Select 'Azure Machine Learning'\n3. Select the reported Azure Machine Learning workspace\n4. Delete the workspace and then recreate it, ensuring you enable 'Encrypt data using a customer-managed key' under the 'Encryption' tab.
```config from cloud.resource where api.name = 'aws-ec2-describe-flow-logs' as X; config from cloud.resource where api.name = 'aws-ec2-describe-vpcs' AND json.rule = shared is false as Y; filter 'not($.X.resourceId equals $.Y.vpcId)' ; show Y;```
AWS VPC Flow Logs not enabled This policy identifies VPCs which have flow logs disabled. VPC Flow logs capture information about IP traffic going to and from network interfaces in your VPC. Flow logs are used as a security tool to monitor the traffic that is reaching your instances. Without the flow logs turned on, it is not possible to get any visibility into network traffic. This is applicable to aws cloud and is considered a critical severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to VPC Dashboard\n4. Click on 'Your VPCs' and Choose the reported VPC\n5. Click on the 'Flow logs' tab and follow the instructions as in link below to enable Flow Logs for the VPC:\nhttps://aws.amazon.com/blogs/aws/vpc-flow-logs-log-and-view-network-traffic-flows/.
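Flow logs can also be enabled via the API. Here is a minimal boto3 sketch publishing to CloudWatch Logs, assuming a pre-existing log group and an IAM role that permits flow log delivery; the VPC ID, log group name, and role ARN are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Enable VPC Flow Logs for all traffic, delivered to CloudWatch Logs.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],          # hypothetical VPC ID
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="cloud-watch-logs",
    LogGroupName="vpc-flow-logs",                   # hypothetical log group
    DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/flow-logs-role",
)
```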
```config from cloud.resource where api.name = 'gcloud-kms-crypto-keys-list' AND json.rule = primary.state equals "ENABLED" and (rotationPeriod does not exist or rotationPeriod greater than 7776000) as X; config from cloud.resource where api.name = 'gcloud-storage-buckets-list' as Y; filter ' $.X.name equals $.Y.encryption.defaultKmsKeyName'; show Y;```
GCP Storage bucket CMEK not rotated every 90 days This policy identifies GCP Storage buckets whose CMEKs are not rotated at least every 90 days. A CMEK (Customer-Managed Encryption Key) configured for a GCP bucket becomes vulnerable over time due to prolonged use. Without regular rotation, the key is at greater risk of being compromised, which could lead to unauthorized access to the encrypted data in the bucket. This can undermine the security of your data and increase the chances of a breach if the key is exposed or exploited. It is recommended to configure a rotation period of 90 days or less for CMEKs used for GCP buckets. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate to the Cloud Storage Buckets page\n3. Click on the reported bucket\n4. Go to the 'Configuration' tab\n5. Under 'Default encryption key', click on the key name\n6. Click on 'EDIT ROTATION PERIOD'\n7. Select 90 days or less from the 'Rotation period' dropdown\n8. Click 'SAVE'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = pricings[?any( name equals AppServices and properties.pricingTier does not equal Standard)] exists```
Azure Microsoft Defender for Cloud is set to Off for App Service This policy identifies subscriptions in which the Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) plan for App Service is set to Off. Enabling Microsoft Defender for Cloud provides the tools needed to harden your resources, track your security posture, protect against cyberattacks, and streamline security management. It is highly recommended to enable Microsoft Defender for App Service. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Select 'Enable all Microsoft Defender for Cloud plans' if not already enabled\n7. On the line in the table for 'App Service', select 'On' under Plan.\n8. Select 'Save'
```config from cloud.resource where api.name = 'gcloud-vertex-ai-notebook-instance' AND json.rule = state equals "ACTIVE" and serviceAccount contains "[email protected]" as X; config from cloud.resource where api.name = 'gcloud-projects-get-iam-user' AND json.rule = user contains "[email protected]" and roles[*] contains "roles/editor" as Y; filter ' $.X.serviceAccount equals $.Y.user'; show X;```
GCP Vertex AI Workbench user-managed notebook is using default service account with the editor role This policy identifies GCP Vertex AI Workbench user-managed notebooks that are using the default service account with the editor role. When you create a new Vertex AI Workbench user-managed notebook, the compute engine default service account is associated with the notebook by default if any other service account is not configured. The compute engine default service account is automatically created when the Compute Engine API is enabled and is granted the IAM basic Editor role if you have not disabled this behavior explicitly. These permissions can be exploited to get admin access to the GCP project. To be compliant with the principle of least privileges and prevent potential privilege escalation, it is recommended that Vertex AI Workbench user-managed notebooks are not assigned the 'Compute Engine default service account' especially when the editor role is granted to the service account. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Navigate to the 'Vertex AI' service (Left Panel)\n3. Under 'Notebooks', go to 'Workbench'\n4. Open the 'USER-MANAGED NOTEBOOKS' tab\n5. Click on the alerting notebook\n6. Under the 'SYSTEM' tab, in front of 'VM details', click on the 'View in Compute Engine' link\n7. Stop the VM by clicking on the 'STOP' button. Click the 'STOP' button on the confirmation dialogue.\n8. Once the the VM has been stopped, click on the 'EDIT' button\n9. Under 'Identity and API access', use the dropdown to select a non-default service account as per needs\n10. Click on 'Save'\n11. Click on 'START/RESUME' from the top menu.
```config from cloud.resource where cloud.type = 'azure' AND cloud.service = 'Azure Network Watcher' AND api.name = 'azure-network-watcher-list' AND json.rule = ' provisioningState !exists or provisioningState != Succeeded'```
Azure Network Watcher is not enabled This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'azure' AND cloud.account = 'Azure_Redlock_QA_BVT_25FE' AND api.name = 'azure-disk-list' AND json.rule = id exists ```
dnd-azure-disk-flip-flop-policy This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where api.name = 'oci-networking-loadbalancer' and json.rule = lifecycleState equal ignore case "ACTIVE" as X; config from cloud.resource where api.name = 'oci-networking-subnet' and json.rule = lifecycleState equal ignore case "AVAILABLE" as Y; config from cloud.resource where api.name = 'oci-networking-security-list' AND json.rule = lifecycleState equal ignore case AVAILABLE as Z; filter 'not ($.X.listeners does not equal "{}" and ($.X.subnetIds contains $.Y.id and $.Y.securityListIds contains $.Z.id and $.Z.ingressSecurityRules is not empty))'; show X;```
OCI Load Balancer not configured with inbound rules or listeners This policy identifies Load Balancers that are not configured with inbound rules or listeners. A Load Balancer's subnet security lists should include ingress rules, and the Load Balancer should have at least one listener to handle incoming traffic. Without these configurations, the load balancer cannot receive and route incoming traffic, rendering it ineffective. As best practice, it is recommended to configure Load Balancers with proper inbound rules and listeners. This is applicable to oci cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure the OCI Load Balancers with inbound rules and listeners, refer to the following documentation:\nhttps://docs.cloud.oracle.com/iaas/Content/Security/Reference/configuration_tasks.htm#lb-enable-traffic.
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-mysql-deployment-info' AND json.rule = allowedListIPAddresses[*] size equals 0 or allowedListIPAddresses[?any( address equals 0.0.0.0/0 )] exists```
IBM Cloud MySQL Database network access is not restricted to a specific IP range This policy identifies IBM Cloud MySQL Databases with no specified IP range for network access. To restrict access to your databases, you can allowlist specific IP addresses or ranges of IP addresses on your deployment. When no IP addresses are in the allowlist, the allowlist is disabled and the deployment accepts connections from any IP address. It is recommended to create an allowlist; only IP addresses that match the allowlist, or fall within the ranges of IP addresses in the allowlist, can connect to your deployment. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud Console\n2. Click on the 'Menu' icon and navigate to 'Resource list'; from the list of resources, select the MySQL database reported in the alert.\n3. Refer to the below URL for setting allowlist IP addresses: https://cloud.ibm.com/docs/cloud-databases?topic=cloud-databases-allowlisting&interface=ui#set-allowlist-ui\n4. Remove any IP address starting with '0.0.0.0' that has already been added to the allowlist and make sure to add only IP addresses other than '0.0.0.0'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-autoscaling-launch-configuration' AND json.rule = blockDeviceMappings[*].ebs exists AND blockDeviceMappings[?any(ebs.encrypted is false)] exists```
AWS EC2 Auto Scaling Launch Configuration is not using encrypted EBS volumes This policy identifies AWS EC2 Auto Scaling Launch Configurations that are not using encrypted EBS volumes. A launch configuration defines an instance configuration template that an Auto Scaling group uses to launch EC2 instances. Amazon Elastic Block Store (EBS) volumes allow you to create encrypted launch configurations when creating EC2 instances and auto scaling groups. When the entire EBS volume is encrypted, data stored at rest, in transit, and snapshots are encrypted. This protects the data from unauthorized access. As a security best practice for data protection, enable encryption for all EBS volumes in the auto scaling launch configuration. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Once an Auto Scaling Launch Configuration is created, you cannot modify the encryption for its EBS volumes. To resolve this alert, copy the reported launch configuration, create a new launch template using the copied launch configuration data with the encryption option selected for the EBS volumes, and then delete the reported launch configuration.\n\nTo create a new launch template,\n1. Log in to AWS console\n2. Navigate to the Amazon EC2 dashboard\n3. Under the 'Auto Scaling' section, select 'Auto Scaling groups'\n4. Click on 'Launch Templates'\n5. On the 'Launch Templates' page, click on 'Create launch template'\n6. Create the new launch template using the same settings as the reported launch configuration.\n7. Under 'Storage (volumes)', make sure 'Encrypted' is set for all EBS volumes you added.
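A minimal boto3 sketch of the replacement launch template described above; the template name, AMI ID, instance type, and device name are hypothetical and should mirror the reported launch configuration.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create a launch template that mirrors the old launch configuration,
# but with EBS encryption enabled on the root volume.
ec2.create_launch_template(
    LaunchTemplateName="asg-template-encrypted",   # hypothetical name
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",        # hypothetical AMI
        "InstanceType": "t3.micro",
        "BlockDeviceMappings": [
            {
                "DeviceName": "/dev/xvda",
                "Ebs": {"VolumeSize": 8, "VolumeType": "gp3", "Encrypted": True},
            }
        ],
    },
)
```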
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-secretsmanager-describe-secret' AND json.rule = rotationEnabled is false and owningService is not member of (appflow, databrew, datasync, directconnect, events, opsworks-cm, rds, sqlworkbench)```
AWS Secret Manager Automatic Key Rotation is not enabled This policy identifies AWS Secrets Manager secrets that do not have automatic rotation enabled. As a security best practice, it is important to rotate keys periodically so that if the keys are compromised, the data in the underlying service is still secure with the new keys. NOTE: This policy does not include secrets that are managed by some of the AWS services that store AWS Secrets Manager secrets on your behalf. Refer: https://docs.aws.amazon.com/secretsmanager/latest/userguide/service-linked-secrets.html This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable automatic rotation for AWS Secrets Manager secrets, follow the steps mentioned in the below URL:\n\nhttps://aws.amazon.com/blogs/security/how-to-configure-rotation-windows-for-secrets-stored-in-aws-secrets-manager/#:~:text=Use%20Case%203.
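Automatic rotation can also be turned on via the API, provided a rotation Lambda function already exists; the secret name and Lambda ARN below are hypothetical placeholders.

```python
import boto3

sm = boto3.client("secretsmanager", region_name="us-east-1")

# Enable automatic rotation every 30 days using an existing rotation Lambda.
sm.rotate_secret(
    SecretId="my-app/db-password",  # hypothetical secret name
    RotationLambdaARN="arn:aws:lambda:us-east-1:111122223333:function:rotate-db-password",
    RotationRules={"AutomaticallyAfterDays": 30},
)
```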
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = not (pricings[?any(properties.extensions[?any(name equal ignore case ContainerRegistriesVulnerabilityAssessments AND isEnabled is true)] exists AND properties.pricingTier equal ignore case Standard )] exists)```
Azure Microsoft Defender for Cloud set to Off for Agentless container vulnerability assessment This policy identifies Azure Microsoft Defender for Cloud where the Agentless container vulnerability assessment is set to Off. Agentless container vulnerability assessment enables automatic scanning for vulnerabilities in container images stored in Azure Container Registry or running in Azure Kubernetes Service without additional agents. Disabling it exposes container images to unpatched security issues and misconfigurations, risking exploitation and data breaches. Enabling agentless container vulnerability assessment ensures continuous scanning for known vulnerabilities, enhancing security by proactively identifying risks and providing remediation suggestions to maintain compliance with industry standards. As a security best practice, it is recommended to enable Agentless container vulnerability assessment in Azure Microsoft Defender for Cloud. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to 'Microsoft Defender for Cloud'\n3. Under 'Management', select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Click on 'Settings & monitoring' at the top\n7. In the table, find 'Agentless container vulnerability assessment' and select 'On' under Plan\n8. Click 'Continue' in the top left\n9. Click 'Save'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-active-directory-admin-consent-request-policy' AND json.rule = ['@odata.context'] exists```
pcsup-26179-policy This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where api.name = 'aws-msk-cluster' AND json.rule = state equal ignore case active and enhancedMonitoring is member of (DEFAULT, PER_BROKER)```
AWS MSK clusters not configured with enhanced monitoring This policy identifies MSK clusters that are not configured with enhanced monitoring. Amazon MSK is a fully managed Apache Kafka service on AWS that handles the provisioning, setup, and maintenance of Kafka clusters. Amazon MSK's PER_TOPIC_PER_BROKER monitoring level provides granular insights into the audit, performance and resource utilization of individual topics and brokers, enabling you to identify and optimize bottlenecks in your Kafka cluster. It is recommended to enable at least PER_TOPIC_PER_BROKER monitoring on the MSK cluster to get enhanced monitoring capabilities. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure MSK clusters with enhanced monitoring:\n\n1. Sign in to the AWS console. Navigate to the Amazon MSK console.\n2. In the navigation pane, choose 'Clusters'. Then, select the reported cluster.\n3. For 'Action', select 'Edit monitoring'.\n4. Select either 'Enhanced partition-level monitoring' or 'Enhanced topic-level monitoring' option.\n5. Choose 'Save changes'..
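A minimal boto3 sketch of the same change, assuming the cluster ARN is known (the ARN below is hypothetical); the update requires the cluster's current version token.

```python
import boto3

kafka = boto3.client("kafka", region_name="us-east-1")
CLUSTER_ARN = "arn:aws:kafka:us-east-1:111122223333:cluster/my-cluster/abcd1234"  # hypothetical

# The monitoring update needs the cluster's current version token.
current_version = kafka.describe_cluster(ClusterArn=CLUSTER_ARN)["ClusterInfo"]["CurrentVersion"]

# Raise the monitoring level to topic-per-broker granularity.
kafka.update_monitoring(
    ClusterArn=CLUSTER_ARN,
    CurrentVersion=current_version,
    EnhancedMonitoring="PER_TOPIC_PER_BROKER",
)
```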
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-role-definition' AND json.rule = properties.permissions[*].actions any start with "*" and properties.permissions[*].actions any end with "*" and properties.type equal ignore case "CustomRole" and properties.assignableScopes starts with "/subscriptions" and properties.assignableScopes does not contain "resourceGroups"```
Azure Custom subscription administrator roles found This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = isNodeVersionSupported exists AND isNodeVersionSupported does not equal "true"```
GCP GKE unsupported node version This policy identifies the GKE node version and generates an alert if the version running is unsupported. Using an unsupported version of Google Kubernetes Engine (GKE) on Google Cloud Platform (GCP) can lead to several potential issues and risks, such as security vulnerabilities, compatibility issues, performance and stability problems, and compliance concerns. To mitigate these risks, it's crucial to regularly update the GKE clusters to supported versions recommended by Google Cloud. As a security best practice, it is always recommended to use the latest version of GKE. Note: The Policy updates will be made as per the release schedule https://cloud.google.com/kubernetes-engine/docs/release-schedule#schedule-for-release-channels This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Manually upgrading your nodes:\n\n1. Visit the Google Kubernetes Engine Clusters menu in GCP Console.\n2. Next to the cluster you want to edit, Click the Edit button which looks like a pencil under Actions.\n3. On the Cluster details page, click the Nodes tab.\n4. In the Node Pools section, click the name of the node pool that you want to upgrade.\n5. Click the Edit button which looks like a pencil.\n6. Click "Change" under Node version.\n7. Select the desired version from the Node version drop-down list, then click "Upgrade"..
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-waf-classic-web-acl-resource' AND json.rule = '(resources.applicationLoadBalancer[*] exists or resources.apiGateway[*] exists or resources.other[*] exists) and loggingConfiguration.resourceArn does not exist'```
AWS Web Application Firewall (AWS WAF) Classic logging is disabled This policy identifies Classic Web Application Firewalls (AWS WAFs) for which logging is disabled. Enabling WAF logging, logs all web requests inspected by the service which can be used for debugging and additional forensics. The logs will help to understand why certain rules are triggered and why certain web requests are blocked. You can also integrate the logs with any SIEM and log analysis tools for further analysis. It is recommended to enable logging on your Classic Web Application Firewalls (WAFs). For details: https://docs.aws.amazon.com/waf/latest/developerguide/classic-logging.html NOTE: Global (CloudFront) WAF resources are out of scope for this policy. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable logging on your reported WAFs, follow below mentioned URL:\nhttps://docs.aws.amazon.com/waf/latest/developerguide/classic-logging.html#logging-management\n\nNOTE: No additional cost to enable logging on AWS WAF (minus Kinesis Firehose and any storage cost).\nFor Kinesis Firehose or any storage additional charges refer https://aws.amazon.com/cloudwatch/pricing/.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = networkConfig.datapathProvider does not equal ADVANCED_DATAPATH and (addonsConfig.networkPolicyConfig.disabled is true or networkPolicy.enabled does not exist or networkPolicy.enabled is false )```
GCP Kubernetes Engine Clusters have Network policy disabled This policy identifies Kubernetes Engine Clusters which have disabled Network policy. A network policy defines how groups of pods are allowed to communicate with each other and other network endpoints. By enabling network policy in a namespace for a pod, it will reject any connections that are not allowed by the network policy. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to Kubernetes Engine (Left Panel)\n3. Select Kubernetes clusters\n4. From the list of clusters, choose the reported cluster\n5. Under 'Networking', Click on EDIT button for 'Calico Kubernetes Network policy'\n6. Select 'Enable Calico Kubernetes network policy for control plane'\n7. Click on Save\n8. Repeat Step 5 and Select 'Enable Calico Kubernetes network policy for nodes'\n9. Click on Save.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-storage-account-list' AND json.rule = (properties.supportsHttpsTrafficOnly does not exist or properties.supportsHttpsTrafficOnly is false) as X; config from cloud.resource where api.name = 'azure-storage-file-shares' as Y; filter '($.X.kind does not equal ignore case "FileStorage") or ($.X.kind equal ignore case "FileStorage" and $.Y.id contains $.X.name and $.Y.properties.enabledProtocols does not contain NFS)'; show X;```
Azure Storage Account without Secure transfer enabled This policy identifies Storage accounts which have the Secure transfer feature disabled. The secure transfer option enhances the security of your storage account by only allowing requests to the storage account over a secure connection. When "secure transfer required" is disabled, REST APIs accessing your storage accounts may connect over insecure HTTP, which is not advised. Hence, it is highly recommended to enable the secure transfer feature on your storage account. NOTE: Azure Storage doesn't support HTTPS for custom domain names, so this option is not applied when using a custom domain name. Additionally, this property does not apply to NFS Azure file shares, which require it to be disabled in order to work. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable the secure transfer feature on your storage account, follow the below URL:\nhttps://learn.microsoft.com/en-gb/azure/storage/common/storage-require-secure-transfer#require-secure-transfer-for-an-existing-storage-account.
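For scripted remediation, the Azure SDK for Python can toggle the setting. This is a sketch assuming the azure-identity and azure-mgmt-storage packages are installed; the subscription ID, resource group, and account name are hypothetical placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import StorageAccountUpdateParameters

subscription_id = "00000000-0000-0000-0000-000000000000"  # hypothetical subscription
client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

# Require HTTPS (secure transfer) for all requests to the storage account.
client.storage_accounts.update(
    "my-resource-group",        # hypothetical resource group
    "mystorageaccount",         # hypothetical account name
    StorageAccountUpdateParameters(enable_https_traffic_only=True),
)
```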
```config from cloud.resource where api.name = 'gcloud-compute-instances-list' AND resource.status = Active AND json.rule = tags[*].key none equal "application" AND tags[*].key none equal "Application"```
pcsup-gcp-policy This is applicable to gcp cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where api.name = 'aws-ec2-describe-vpcs' AND json.rule = _AWSCloudAccount.orgHierarchyNames() intersects ("all-accounts") as X; config from cloud.resource where api.name = 'aws-ec2-describe-subnets' AND json.rule = _AWSCloudAccount.orgHierarchyNames() intersects ("all-accounts") as Y; filter '$.X.vpcId equals $.Y.vpcId'; show Y;```
jashah_ms_join_pol This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ram-password-policy' AND json.rule = 'maxLoginAttemps !isType Integer or maxLoginAttemps == 0'```
Alibaba Cloud RAM password retry constraint policy is disabled This policy identifies Alibaba Cloud accounts for which password retry constraint policy is disabled. As a best practice, enable RAM password retry constraint policy to prevent multiple login attempts with an incorrect password within an hour. This is applicable to alibaba_cloud cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Resource Access Management (RAM) service\n3. In the left-side navigation pane, click on 'Settings'\n4. In the 'Security Settings' tab, In the 'Password Strength Settings' Section, Click on 'Edit Password Rule'\n5. In the 'Password Retry Constraint Policy' field, enter the value between 1 to 32 instead of 0 based on your requirement.\n6. Click on 'OK'\n7. Click on 'Close'.
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-loadbalancer' AND json.rule = profile.family equal ignore case "application" and operating_status equal ignore case "online" and is_public is true```
IBM Cloud Application Load Balancer for VPC has public access enabled This policy identifies IBM Cloud Application Load Balancer for VPC which has public access enabled. Creating a load balancer with public access will lead to unexpected malicious requests getting sent to the public DNS address assigned. A private load balancer is only accessible from within a specific virtual private cloud (VPC). It is highly recommended to use load balancers of type private to protect from unauthorized access. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: A Load balancer can be made private only at the time of creation. To create a private application\nload balancer, follow below URL:\nhttps://cloud.ibm.com/docs/vpc?topic=vpc-load-balancer&interface=ui\nMake sure to select 'Private' for load balancer 'Type' under 'details' section.\n\nNote: Please make sure to create new load balancer in accordance with alerted resource.\nAlso update load balancer reference at all the clients/places of usage with newly created\nload balancer..
```config from cloud.resource where api.name = 'azure-storage-account-list' as X; config from cloud.resource where api.name = 'azure-app-service-web-apps-configurations' as Y; config from cloud.resource where api.name = 'azure-app-service' AND json.rule = 'kind contains functionapp and kind does not contain workflowapp and kind does not equal app and properties.state equal ignore case running and ((properties.publicNetworkAccess exists and properties.publicNetworkAccess equal ignore case Enabled) or (properties.publicNetworkAccess does not exist)) and config.ipSecurityRestrictions[?any((action equals Allow and ipAddress equals Any) or (action equals Allow and ipAddress equals 0.0.0.0/0))] exists' as Z; filter ' $.Y.properties.azureStorageAccounts contains $.X.name and $.Z.name equal ignore case $.Y.name' ; show Z;```
Azure Function App with public access linked to Blob Storage This policy identifies Azure Function Apps configured with public access and linked to Azure Blob Storage. Azure Function Apps often access Blob Storage to retrieve or store data. When public access is enabled for the Function App, it exposes the application and, potentially, the associated Blob Storage to unauthorized access, leading to potential security risks. As a security best practice, it is recommended to evaluate public access for Azure Function Apps and secure Azure Blob Storage. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To restrict access to App Service and secure Azure Blob Storage, refer to the following links for security recommendations:\n\nhttps://learn.microsoft.com/en-us/azure/app-service/app-service-ip-restrictions\nhttps://learn.microsoft.com/en-us/azure/storage/blobs/security-recommendations.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = properties.state equal ignore case Running and kind contains workflowapp and properties.httpsOnly is false```
Azure Logic app does not redirect HTTP requests to HTTPS This policy identifies Azure Logic apps that fail to redirect HTTP traffic to HTTPS. By default, Azure Logic app data is accessible through unsecured HTTP traffic. HTTP does not include any encryption and data sent over HTTP is susceptible to interception and eavesdropping. To secure web traffic, use HTTPS which incorporates encryption through SSL/TLS protocols, providing a secure channel over which data can be transmitted safely. As a security best practice, it is recommended to configure HTTP to HTTPS redirection to prevent unauthorized parties from being able to read or modify the data in transit. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure portal\n2. Navigate to Logic apps\n3. Click on the reported Logic app\n4. Under 'Setting' section, click on 'Configuration'\n5. Under 'General settings' tab, Select 'On' radio button for 'HTTPS Only' option.\n6. Click on 'Save'.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-iam-get-audit-config' AND json.rule = 'auditConfigs[*].service does not contain allServices or (auditConfigs[*].auditLogConfigs[*].exemptedMembers exists and auditConfigs[*].auditLogConfigs[*].exemptedMembers is not empty)'```
GCP Project audit logging is not configured properly across all services and all users in a project This policy identifies the GCP projects in which cloud audit logging is not configured properly across all services and all users. It is recommended that cloud audit logging is configured to track all Admin activities and read, write access to user data. Logs should be captured for all users and there should be no exempted users in any of the audit config section. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. To read the project's IAM policy and store it in a file run a command:\ngcloud projects get-iam-policy [PROJECT_ID] > /tmp/policy.yaml\n2. Edit policy in /tmp/policy.yaml, adding or changing only the audit logs configuration to:\nauditConfigs:\n- auditLogConfigs:\n - logType: DATA_WRITE\n - logType: DATA_READ\nservice: allServices\nNote: Make sure 'exemptedMembers:' is not set, as audit logging should be enabled for all the users.\n3. To set audit config run:\ngcloud projects set-iam-policy [PROJECT_ID] /tmp/policy.yaml.
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-key-protect-key' AND json.rule = dualAuthDelete does not exist or dualAuthDelete.enabled is false```
IBM Cloud Key Protect Key dual authorization for deletion is not enabled This policy identifies IBM Cloud Key Protect keys for which dual authorization for deletion is disabled. Dual authorization for Key Protect service instances is an extra policy that helps to prevent accidental or malicious deletion of keys in your Key Protect instance. It is recommended that dual authorization for deletion be enabled for all keys in a Key Protect instance. This is applicable to ibm cloud and is considered an informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Login to IBM Cloud CLI\n2. For setting up the IBM Cloud CLI for Key Protect, please refer to the below URL:\nhttps://cloud.ibm.com/docs/key-protect?topic=key-protect-set-up-cli#install-cli\n3. To target the region where the reported Key Protect instance is located, run the following IBM Cloud CLI command:\nibmcloud target -r <TARGET_INSTANCE_REGION>\n4. To enable the dual authorization policy for your Key Protect instance key, run the following IBM Cloud CLI command:\nibmcloud kp key policy-update dual-auth-delete <Reported KeyID or KEY_ALIAS> --enable --instance-id <TARGET_INSTANCE_ID where the reported key is present>\nReference: https://cloud.ibm.com/docs/key-protect?topic=key-protect-key-protect-cli-reference#kp-key-policy-update-dual\n5. To enable dual authorization settings at the instance level, please refer to the below URL.\nhttps://cloud.ibm.com/docs/key-protect?topic=key-protect-manage-dual-auth.
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-iam-group' as X; config from cloud.resource where api.name = 'oci-iam-user' as Y; filter '($.X.name equals Administrators) and ($.X.groupMembers[*].userId contains $.Y.id) and ($.Y.apiKeys[*] size greater than 0)';show Y;```
OCI tenancy administrator users are associated with API keys This policy identifies OCI users who are members of the Administrators group and have API keys associated. It is recommended not to allow OCI users with API keys to have direct tenancy access, to preserve the privileged access security principle. As a best practice, dissociate the API keys for the OCI users in the Administrators group and use service-level administrative users with API keys instead. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['KEYS_AND_SECRETS']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console Page: https://console.ap-mumbai-1.oraclecloud.com/\n2. Select Identity from the Services menu.\n3. Select Users from the Identity menu.\n4. For each tenancy administrator user who has an API key, select API Keys from the menu in the lower left-hand corner.\n5. Delete any associated keys from the API Keys table.\n\nNote: The console URL is region-specific; your tenancy might have a different home region and thus a different console URL.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = 'versioningConfiguration.status!=Enabled'```
DemoAggPolicy - AWS S3 Object Versioning is disabled This policy identifies the S3 buckets which have Object Versioning disabled. S3 Object Versioning is an important capability in protecting your data within a bucket. Once you enable Object Versioning, you cannot remove it; you can suspend Object Versioning at any time on a bucket if you do not wish for it to persist. It is recommended to enable Object Versioning on S3. This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log into your AWS Console and select the S3 service.\n2. Choose the reported S3 bucket and click the Properties tab in the upper right frame.\n3. Expand the Versioning option\n4. Click Enable Versioning\n5. Click Save.
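The same setting can be applied with a single boto3 call; the bucket name is a hypothetical placeholder.

```python
import boto3

s3 = boto3.client("s3")

# Turn on object versioning for the bucket.
s3.put_bucket_versioning(
    Bucket="my-bucket",  # hypothetical bucket name
    VersioningConfiguration={"Status": "Enabled"},
)
```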
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-redis-instances-list' AND json.rule = state equal ignore case ready and transitEncryptionMode does not equal ignore case SERVER_AUTHENTICATION```
GCP Memorystore for Redis instance does not use in-transit encryption This policy identifies GCP Memorystore for Redis instances without in-transit encryption. GCP Memorystore for Redis is a fully managed in-memory data store that simplifies Redis deployment and scaling while ensuring high availability and low-latency access. When in-transit encryption is disabled, all data transmitted between your clients and Redis flows as plaintext over the network, making it vulnerable to man-in-the-middle attacks and packet sniffing, potentially exposing sensitive information like session tokens, personal data, or business secrets. It is recommended to enable in-transit encryption for GCP Memorystore for Redis to prevent malicious actors from intercepting sensitive data. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: In-transit encryption cannot be changed for existing Memorystore for Redis instances. A new Memorystore for Redis instance should be created.\n\nTo create a new Memorystore for Redis instance with in-transit encryption, please refer to the steps below:\n\n1. Sign in to the Google Cloud Management Console. Navigate to the 'Memorystore for Redis' page\n2. Click on 'CREATE INSTANCE'\n3. Provide all the other details as per the requirements\n4. Under 'Security', select the 'Enable in-transit encryption' checkbox\n5. Click on 'CREATE INSTANCE'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-storage-account-list' AND json.rule = properties.provisioningState equal ignore case Succeeded and (properties.encryption.requireInfrastructureEncryption does not exist or properties.encryption.requireInfrastructureEncryption is false)```
Azure storage account infrastructure encryption is disabled The policy identifies Azure storage accounts for which infrastructure encryption is disabled. Infrastructure double encryption adds a second layer of encryption using service-managed keys. When infrastructure encryption is enabled for a storage account or an encryption scope, data is encrypted twice. Once at the service level and once at the infrastructure level - with two different encryption algorithms and two different keys. Infrastructure encryption is recommended for scenarios where double encrypted data is necessary for compliance requirements. It is recommended to enable infrastructure encryption on Azure storage accounts so that encryption can be implemented at the layer closest to the storage device or network wires. This is applicable to azure cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Configuring Infrastructure double encryption for Azure Storage accounts is only allowed during storage account creation. Once the storage account is provisioned, you cannot change the storage encryption.\n\nTo create an Azure Storage account with Infrastructure double encryption, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/storage/common/infrastructure-encryption-enable\n\nNOTE: Using Infrastructure double encryption will have performance impact on the read and write speeds of Azure storage accounts due to the additional encryption process..
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-networking-security-list' AND json.rule = ingressSecurityRules[?any( isStateless is false )] exists```
OCI VCN Security list has stateful security rules This policy identifies the OCI Virtual Cloud Networks (VCN) security lists that have stateful ingress rules configured in their security lists. It is recommended that Virtual Cloud Networks (VCN) security lists are configured with stateless ingress rules to slow the impact of a denial-of-service (DoS) attack. This is applicable to oci cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Click on Ingress rule where Stateless column is set to No\n5. Click on Edit\n6. Select the checkbox STATELESS\n7. Click on Save Changes.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(139,139) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists```
GCP Firewall rule allows all traffic on NetBIOS-SSN port (139) This policy identifies GCP Firewall rules which allow all inbound traffic on NetBIOS-SSN port (139). Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended that the NetBIOS-SSN port (139) should be allowed to specific IP addresses. This is applicable to gcp cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If the Firewall rule reported indeed needs to restrict all traffic, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to the 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IP\n7. Click on 'SAVE'..
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-rds-describe-event-subscriptions' AND json.rule = 'sourceType equals db-security-group and ((status does not equal active or enabled is false) or (status equals active and enabled is true and (sourceIdsList is not empty or eventCategoriesList is not empty)))'```
AWS RDS event subscription disabled for DB security groups This policy identifies RDS event subscriptions for which DB security groups event subscription is disabled. You can create an Amazon RDS event notification subscription so that you can be notified when an event occurs for given DB security groups. This is applicable to aws cloud and is considered a informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to Amazon RDS Dashboard\n4. Click on 'Event subscriptions' (Left Panel)\n5. Choose the reported Event subscription\n6. Click on 'Edit'\n7. On 'Edit event subscription' page, Under 'Details' section; Select 'Yes' for 'Enabled' and Make sure you have subscribed your DB to 'All instances' and 'All event categories'\n8. Click on 'Edit'.
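Re-enabling a subscription can also be done programmatically; a minimal boto3 sketch follows, with the subscription name as a hypothetical placeholder (event categories and source scope can be broadened in the same call if needed).

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Re-enable notifications for the db-security-group event subscription.
rds.modify_event_subscription(
    SubscriptionName="my-db-sg-events",  # hypothetical subscription name
    SourceType="db-security-group",
    Enabled=True,
)
```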
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-regional-forwarding-rule' AND json.rule = target contains "/targetHttpProxies/" and loadBalancingScheme contains "EXTERNAL"```
GCP public-facing (external) regional load balancer using HTTP protocol This policy identifies GCP public-facing (external) regional load balancers that are using HTTP protocol. Using the HTTP protocol with a GCP external load balancer transmits data in plaintext, making it vulnerable to eavesdropping, interception, and modification by malicious actors. This lack of encryption exposes sensitive information, increases the risk of man-in-the-middle attacks, and compromises the overall security and privacy of the data exchanged between clients and servers. It is recommended to use HTTPS protocol with external-facing load balancers. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Navigate to 'Network Service' and then 'Load Balancing'\n3. Click on the 'FRONTENDS' tab\n4. Identify the frontend that is using the reported forwarding rule.\n5. Click on the load balancer name associated with the frontend identified above\n6. Click 'Edit'\n7. Go to 'Frontend configuration'\n8. Delete the frontend rule that allows HTTP protocol.\n9. Add new frontend rule(s) as required. Make sure to use HTTPS protocol instead of HTTP for new rules.\n10. Click 'Update'\n11. Click 'UPDATE LOAD BALANCER' in the pop-up..
```config from cloud.resource where api.name = 'aws-waf-v2-web-acl-resource' AND json.rule = resources.applicationLoadBalancer[*] exists as X; config from cloud.resource where api.name = 'aws-elbv2-describe-load-balancers' AND json.rule = scheme equals internet-facing and type equals application as Y; filter 'not($.X.resources.applicationLoadBalancer[*] contains $.Y.loadBalancerArn)'; show Y;```
AWS Application Load Balancer (ALB) not configured with AWS Web Application Firewall v2 (AWS WAFv2) This policy identifies AWS Application Load Balancers (ALBs) that are not configured with AWS Web Application Firewall v2 (AWS WAFv2). As a best practice, configure the AWS WAFv2 service on the application load balancers to protect against application-layer attacks. To block malicious requests to your application load balancers, define the block criteria in the WAFv2 web access control list (web ACL). This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. Make sure the reported Application Load Balancer requires WAF based on your requirement and note down the load balancer name.\n3. Navigate to WAF & Shield dashboard\n4. Click on Web ACLs, under AWS WAF section from left panel\n5. If a Web ACL is not created, create a new Web ACL and add the reported Application Load Balancer to Associated AWS resources.\n6. If you already have a Web ACL created, click on the Web ACL and add the reported Application Load Balancer to Associated AWS resources.
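A minimal boto3 sketch of the association step, assuming a REGIONAL web ACL already exists; both ARNs below are placeholders.

```python
# Minimal sketch (placeholder ARNs): associate an existing REGIONAL WAFv2
# web ACL with an internet-facing Application Load Balancer.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.associate_web_acl(
    WebACLArn="arn:aws:wafv2:us-east-1:111122223333:regional/webacl/example-acl/a1b2c3d4",
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/example-alb/50dc6c495c0c9188",
)
```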
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Tcp or protocol equals *) and (destinationPortRange contains _Port.inRange(5432,5432) or destinationPortRanges[*] contains _Port.inRange(5432,5432) ))] exists```
Azure Network Security Group allows all traffic on PostgreSQL (TCP Port 5432) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on PostgreSQL (TCP Port 5432). Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict PostgreSQL access solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.
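A hedged sketch of the rule edit using the azure-mgmt-network SDK; the subscription, resource group, NSG name, rule name, priority, and CIDR are placeholders, and creating a rule with an existing name overwrites that rule.

```python
# Hypothetical sketch: rewrite an inbound NSG rule so TCP/5432 is reachable
# only from a known CIDR instead of the Internet. All names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

rule = SecurityRule(
    protocol="Tcp",
    source_address_prefix="203.0.113.0/24",  # known static range, not Internet/*
    source_port_range="*",
    destination_address_prefix="*",
    destination_port_range="5432",
    access="Allow",
    direction="Inbound",
    priority=300,
)

poller = client.security_rules.begin_create_or_update(
    "my-resource-group", "my-nsg", "allow-postgres-known-ips", rule
)
poller.result()  # wait for the rule update to complete
```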
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-instances-list' AND json.rule = status equals RUNNING and canIpForward is true and name does not start with "gke-"```
GCP VM instances have IP Forwarding enabled This policy identifies VM instances that have IP Forwarding enabled. IP Forwarding could open unintended and undesirable communication paths, as it allows VM instances to send and receive packets with non-matching destination or source IPs. To enable the source and destination IP match check, disable IP Forwarding. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: The IP forwarding setting of a GCP VM instance cannot be updated; after an instance is created, the IP forwarding field becomes read-only. So to fix this alert, create a new VM instance with IP forwarding disabled, migrate all required data from the reported VM to the newly created instance, and delete the reported VM instance.\n\nTo create a new VM Instance with IP forwarding disabled:\n1. Login to GCP Portal\n2. Go to Compute Engine (Left Panel)\n3. Go to VM instances\n4. Click the CREATE INSTANCE button\n5. Specify other instance parameters as you desire\n6. Click Management, disk, networking, SSH keys\n7. Click Networking\n8. Click on the specific Network interfaces\n9. Set IP forwarding to Off\n10. Click on Done\n11. Click on Create button\n\nTo delete the VM instance which has IP forwarding enabled:\n1. Login to GCP Portal\n2. Go to Compute Engine (Left Panel)\n3. Go to VM instances\n4. From the list of VMs, choose the reported VM\n5. Click on Delete button.
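To inventory affected instances before planning the rebuild, a sketch like the following mirrors the policy's query with the google-cloud-compute client; the project is a placeholder.

```python
# Hypothetical audit sketch: flag running, non-GKE instances with IP
# forwarding enabled, mirroring this policy's query.
from google.cloud import compute_v1

def find_ip_forwarding_instances(project_id: str):
    client = compute_v1.InstancesClient()
    for _zone, scoped in client.aggregated_list(project=project_id):
        for instance in scoped.instances:
            if (instance.status == "RUNNING"
                    and instance.can_ip_forward
                    and not instance.name.startswith("gke-")):
                yield instance.name
```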
```config from cloud.resource where api.name = 'gcloud-logging-sinks-list' AND json.rule = "filter exists" as X; count(X) less than 1```
GCP Log Entries without sinks configured This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
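The query above alerts when no logging sink with a filter is configured. A minimal sketch with the google-cloud-logging client could create one; the project, sink name, filter, and destination bucket below are placeholders, not values implied by the policy.

```python
# Hypothetical sketch: create a log sink that exports filtered log entries
# to a Cloud Storage bucket. All identifiers are placeholders.
from google.cloud import logging

client = logging.Client(project="my-project")  # placeholder project

sink = client.sink(
    "my-audit-sink",                                    # placeholder sink name
    filter_="logName:activity",                         # placeholder filter
    destination="storage.googleapis.com/my-log-bucket", # placeholder bucket
)
if not sink.exists():
    sink.create()
```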
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ram-password-policy' AND json.rule = 'requireNumbers does not exist or requireNumbers is false'```
Alibaba Cloud RAM password policy does not have a number This policy identifies Alibaba Cloud accounts whose RAM password policy does not require numbers in passwords. As a security best practice, configure a strong password policy for secure access to the Alibaba Cloud console. This is applicable to alibaba_cloud cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Resource Access Management (RAM) service\n3. In the left-side navigation pane, click on 'Settings'\n4. In the 'Security Settings' tab, in the 'Password Strength Settings' section, click on 'Edit Password Rule'\n5. In the 'Required Elements in Password' field, select 'Numbers'\n6. Click on 'OK'\n7. Click on 'Close'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-machine-learning-workspace' AND json.rule = properties.provisioningState equal ignore case Succeeded and (properties.privateEndpointConnections[*] does not exist or properties.privateEndpointConnections[*] is empty or (properties.privateEndpointConnections[*] exists and properties.privateEndpointConnections[*].properties.privateLinkServiceConnectionState.status does not equal ignore case Approved))```
Azure Machine learning workspace is not configured with private endpoint This policy identifies Azure Machine learning workspaces that are not configured with a private endpoint. Private endpoints in workspace resources allow clients on a virtual network to securely access data over Azure Private Link. Configuring a private endpoint allows traffic only from known networks and prevents access from malicious or unknown IP addresses, including IP addresses within Azure. It is recommended to create a private endpoint for secure communication with your Machine learning workspaces. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure portal\n2. Navigate to 'Azure Machine Learning' dashboard\n3. Click on the reported Azure Machine learning workspace\n4. Configure Private endpoint connections under 'Networking' from the left panel.\n\nFor more information, refer:\nhttps://learn.microsoft.com/en-us/azure/machine-learning/how-to-configure-private-link.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-data-factory-v2' AND json.rule = properties.provisioningState equal ignore case Succeeded and identity does not exist or identity.type equal ignore case "None"```
Azure Data Factory (V2) is not configured with managed identity This policy identifies Data Factories (V2) that are not configured with a managed identity. A managed identity can be used to authenticate to any service that supports Azure AD authentication, without having credentials in your code. Storing credentials in code increases the threat surface in case of exploitation, and managed identities eliminate the need for developers to manage credentials. As a security best practice, it is recommended to assign a managed identity to your Data Factory. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure portal.\n2. Navigate to 'Data factories'\n3. Click on the reported Data factory\n4. Select 'Managed identities' under 'Settings' from left panel \n5. Configure either 'System assigned' or 'User assigned' identity\nFor more on Data factories managed identities refer https://docs.microsoft.com/en-gb/azure/data-factory/data-factory-service-identity?tabs=data-factory\n6. Click on 'Save'.
```config from cloud.resource where api.name = 'aws-iam-list-users' AND json.rule = createDate contains 2018```
gfssrguptn_ui_auto_policies_tests_name njfeujtwmv_ui_auto_policies_tests_descr This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-s3api-get-bucket-acl' AND json.rule = (sseAlgorithm contains "aws:kms" or sseAlgorithm contains "aws:kms:dsse") and kmsMasterKeyID exists as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' AND json.rule = keyMetadata.keyManager equal ignore case CUSTOMER and keyMetadata.keyState contains PendingDeletion as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '$.X.kmsMasterKeyID contains $.Y.key.keyArn and $.Z.s3BucketName equals $.X.bucketName'; show X;```
AWS CloudTrail S3 bucket encrypted with Customer Managed Key (CMK) that is scheduled for deletion This policy identifies AWS CloudTrail S3 buckets encrypted with a Customer Managed Key (CMK) that is scheduled for deletion. CloudTrail logs contain account activity related to actions across your AWS infrastructure. These log files stored in Amazon S3 are encrypted by AWS KMS keys. Deleting keys in AWS KMS that are used by CloudTrail is a common defense evasion technique and could be potential ransomware attacker activity. After a key is deleted, you can no longer decrypt the data that was encrypted under that key, which helps attackers hide their malicious activities. It is recommended to regularly monitor the key used for encryption to prevent accidental deletion. This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: The following steps are recommended to cancel deletion of KMS CMKs which are scheduled for deletion and used by the S3 bucket:\n\n1. Log in to the AWS Console and navigate to the 'S3' service.\n2. Click on the S3 bucket reported in the alert.\n3. Click on the 'Properties' tab.\n4. Under the 'Default encryption' section, click on the KMS key link in 'Encryption key ARN'.\n5. Navigate to Key Management Service (KMS).\n6. Click on 'Key actions' dropdown.\n7. Click on 'Cancel key deletion'.\n8. Click on 'Enable'.
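Steps 5–8 can also be scripted; a minimal boto3 sketch follows, where the key ARN is a placeholder for the key identified from the bucket's encryption settings.

```python
# Minimal sketch (placeholder key ARN): cancel the pending deletion on the
# CMK that encrypts the CloudTrail S3 bucket, then re-enable the key.
import boto3

kms = boto3.client("kms", region_name="us-east-1")

key_id = "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder

kms.cancel_key_deletion(KeyId=key_id)  # key returns to the Disabled state
kms.enable_key(KeyId=key_id)           # restore it for encrypt/decrypt use
```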
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = (origins.items[*] contains "customOriginConfig") and (origins.items[?(@.customOriginConfig.originSslProtocols.items)] contains "SSLv3")```
AWS CloudFront distribution is using insecure SSL protocols for HTTPS communication This policy identifies CloudFront distributions whose custom origins accept the insecure SSLv3 protocol for HTTPS communication. CloudFront is a content delivery network (CDN) offered by AWS. It is a security best practice to enforce the use of the secure protocols TLSv1.0, TLSv1.1, and/or TLSv1.2 in a CloudFront distribution's origin SSL protocol configuration. This policy scans for any deviations from this practice and returns the results. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: Communication between CloudFront and your custom origin should enforce the use of secure protocols. Modify the CloudFront origin's Origin SSL Protocols to include TLSv1.0, TLSv1.1, and/or TLSv1.2 and exclude SSLv3.\n\n1. Go to the AWS console CloudFront dashboard.\n2. Select your distribution Id.\n3. Select the 'Origins' tab.\n4. Check the origin you want to modify then select Edit.\n5. Remove (uncheck) 'SSLv3' from Origin SSL Protocols.\n6. Select 'Yes, Edit.'.
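A hedged boto3 sketch of the same change, dropping SSLv3 from each custom origin and pushing the updated configuration; the distribution id is a placeholder.

```python
# Minimal sketch (placeholder distribution id): remove SSLv3 from every
# custom origin's allowed SSL/TLS protocols, then update the distribution.
import boto3

cf = boto3.client("cloudfront")
dist_id = "E1EXAMPLE"  # placeholder

resp = cf.get_distribution_config(Id=dist_id)
config, etag = resp["DistributionConfig"], resp["ETag"]

for origin in config["Origins"]["Items"]:
    custom = origin.get("CustomOriginConfig")
    if custom:
        protocols = [p for p in custom["OriginSslProtocols"]["Items"] if p != "SSLv3"]
        custom["OriginSslProtocols"] = {"Quantity": len(protocols), "Items": protocols}

cf.update_distribution(Id=dist_id, IfMatch=etag, DistributionConfig=config)
```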
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = ((((publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false) or (publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false)) and acl.grantsAsList[?any(grantee equals AllUsers and permission is member of (ReadAcp,Read,FullControl))] exists) or ((policyStatus.isPublic is true and ((publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false) or (publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false))) and (policy.Statement[?any(Effect equals Allow and (Principal equals * or Principal.AWS equals *) and (Action contains s3:* or Action contains s3:Get or Action contains s3:List) and (Condition does not exist))] exists))) and websiteConfiguration does not exist```
AWS S3 bucket publicly readable This policy identifies S3 buckets that are publicly readable via Get/Read/List bucket operations. These permissions permit anyone, malicious or not, to perform Get/Read/List operations on your S3 bucket if they can guess its name. The S3 service does not protect the namespace if ACLs and the bucket policy are not handled properly; with this configuration, you risk compromising critical data by leaving the bucket public. For more details: https://docs.aws.amazon.com/AmazonS3/latest/user-guide/set-permissions.html https://docs.aws.amazon.com/AmazonS3/latest/dev/about-object-ownership.html#ensure-object-ownership This is applicable to aws cloud and is considered a critical severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the S3 resource reported in the alert\n4. Click on the 'Permissions' tab\n5. If Access Control List is set to 'Public' follow the below steps\na. Under 'Access Control List', Click on 'Everyone' and uncheck all items\nb. Click on Save changes\n6. If 'Bucket Policy' is set to public follow the below steps\na. Under 'Bucket Policy', Select 'Edit Bucket Policy' and consider defining what explicit 'Principal' should have the ability to GET/LIST objects in your S3 bucket. You may also want to specifically limit the 'Principal' ability to perform specific GET/LIST functions, without the wild card.\nIf 'Bucket Policy' is not required delete the existing 'Bucket Policy'.\nb. Click on Save changes\n\nNote: Make sure updating 'Access Control List' or 'Bucket Policy' does not affect S3 bucket data access.
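One scripted option, a minimal boto3 sketch, is to enable the bucket-level public access block, which overrides public ACL grants and public bucket policies; the bucket name is a placeholder, and you should first verify that nothing legitimate relies on public reads.

```python
# Minimal sketch (placeholder bucket name): block public ACLs and public
# bucket policies at the bucket level.
import boto3

s3 = boto3.client("s3")

s3.put_public_access_block(
    Bucket="example-bucket",  # placeholder
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```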
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-batch-account' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.networkProfile.accountAccess.defaultAction equal ignore case Allow and properties.publicNetworkAccess equal ignore case Enabled```
Azure Batch Account configured with overly permissive network access This policy identifies Batch Accounts configured with overly permissive network access. By default, Batch accounts are accessible from all networks. With an account access IP firewall, you can restrict access to a set of IPv4 addresses or IPv4 address ranges. With private access over Virtual Networks, the network traffic path is secured on both ends. It is recommended to configure the Batch account with an IP firewall or a Virtual Network, so that the Batch account is accessible only to restricted entities. This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure private access using a private endpoint, follow the below URL:\nhttps://docs.microsoft.com/en-gb/azure/batch/private-connectivity#azure-portal\n\nTo disable public network access, follow the below URL:\nhttps://docs.microsoft.com/en-gb/azure/batch/public-network-access#disable-public-network-access\n\nIf the Batch account is intended to be accessed from public networks, restrict access to specific IP ranges. To allow public network access with specific network rules, follow the below URL:\nhttps://docs.microsoft.com/en-gb/azure/batch/public-network-access#access-from-selected-public-networks.
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-loadbalancer' AND json.rule = listeners[?any( protocol does not equal ignore case https AND https_redirect does not exist )] exists```
IBM Cloud Application Load Balancer for VPC not configured with HTTPS Listeners This policy identifies IBM Cloud Application Load Balancers for VPC that have listeners configured with a protocol other than HTTPS. HTTPS listeners use TLS (SSL) to encrypt normal HTTP requests and responses. It is highly recommended to use application load balancers with HTTPS listeners for additional security. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud Console \n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Load balancers'\n3. Select the 'Load balancer' reported in the alert\n4. Under the 'Front-end listeners' tab, click on the three dots on the right corner of a row containing a listener with a protocol other than HTTPS. Then click on 'Edit'.\n5. If the protocol is 'TCP', delete the listener by clicking on the three dots on the right corner. Then click on 'Delete'.\n6. Click on 'Create listener'.\n7. In the 'Edit front-end listener' screen, select 'HTTPS' from the 'Protocol' dropdown.\n8. Under 'Secrets Manager', select an instance and select an SSL 'Certificate'. Make sure that the load balancer is authorized to access the SSL certificate.\n9. Click on 'Save'.
```config from cloud.resource where api.name = 'alibaba-cloud-rds-instance' as X; config from cloud.resource where api.name = 'alibaba-cloud-vpc' as Y; filter '$.X.vpcId equals $.Y.vpcId and $.Y.isDefault is true'; show X;```
Alibaba Cloud ApsaraDB RDS instance is using the default VPC This policy identifies ApsaraDB RDS instances which are configured with the default VPC. It is recommended to use a VPC configuration based on your security and networking requirements. You should create your own VPC for the RDS instance instead of using the default so that you have full control over the RDS network. This is applicable to alibaba_cloud cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: NOTE: The VPC switching process will interrupt the availability of your instance for 30 seconds. Make sure that your application is configured with automatic reconnection policies.\n\n1. Log in to Alibaba Cloud Portal\n2. Go to ApsaraDB for RDS\n3. In the left navigation pane, click on 'Instances'\n4. Click on the reported instance\n5. In the left navigation pane, click on 'Database Connection'\n6. In the 'Database Connection' section, click on 'Switch VSwitch'\n7. On the 'Switch VSwitch' popup window, choose a custom VPC and Virtual Switch instead of the default VPC from the 'Switch To' dropdown list.\n8. Click on OK\n9. Read the Notes properly and make sure all necessary actions are taken, and then click on 'Switch'.
```config from cloud.resource where api.name = 'azure-storage-account-list' AND json.rule = properties.encryption.keySource does not equal ignore case "Microsoft.Keyvault" as X; config from cloud.resource where api.name = 'azure-log-analytics-linked-storage-accounts' AND json.rule = properties.dataSourceType equal ignore case Query as Y; filter '$.X.id contains $.Y.properties.storageAccountIds'; show X;```
Azure Log analytics linked storage account is not configured with CMK encryption This policy identifies Azure Log Analytics linked storage accounts which are not encrypted with a CMK. By default, Azure Storage accounts are encrypted using Microsoft-managed keys. It is recommended to use Customer Managed Keys (CMK) to encrypt data in storage accounts linked to Log Analytics for better control over the data. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure encryption using a CMK for an existing Azure Log Analytics linked storage account, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/azure-monitor/logs/private-storage#customer-managed-key-data-encryption.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-cdn-endpoint' AND json.rule = properties.customDomains[?any( properties.customHttpsProvisioningState equals Enabled and properties.customHttpsParameters.minimumTlsVersion equals TLS10 )] exists```
Azure CDN Endpoint Custom domains using insecure TLS version This policy identifies Azure CDN Endpoint custom domains which use an insecure TLS version. TLS 1.2 resolves the security gaps present in its preceding versions. As a security best practice, use TLS 1.2 as the minimum TLS version for Azure CDN Endpoint custom domains. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to 'CDN profiles'\n3. Choose the reported 'CDN Endpoint' under the 'CDN profile'\n4. Under the 'Settings' section, click on 'Custom domains'\n5. Select the 'Custom domain' for which you need to set the TLS version\n6. Under 'Configure', select 'TLS 1.2' for 'Minimum TLS version'\n7. Click on 'Save'.