Columns: query (string, lengths 107 to 3k) and description (string, lengths 183 to 5.37k).
```config from cloud.resource where api.name = 'gcloud-logging-sinks-list' AND json.rule = 'destination.bucket exists' as X; config from cloud.resource where api.name = 'gcloud-storage-buckets-list' AND json.rule = (retentionPolicy.isLocked does not exist or retentionPolicy.isLocked is false) as Y; filter '($.X.destination.bucket contains $.Y.name)'; show Y;```
GCP Log bucket retention policy is not configured using bucket lock This policy identifies GCP log buckets for which a retention policy is not configured using bucket lock. It is recommended to configure the data retention policy for cloud storage buckets using bucket lock to permanently prevent the policy from being reduced or removed in case the system is compromised by an attacker or a malicious insider. Note: Locking a bucket is an irreversible action. Once you lock a bucket, you cannot remove the retention policy from the bucket or decrease the retention period for the policy. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To set a retention policy on a bucket, please refer to the URL given below:\nhttps://cloud.google.com/storage/docs/using-bucket-lock#set-policy\n\nTo lock a bucket, please refer to the URL given below:\nhttps://cloud.google.com/storage/docs/using-bucket-lock#lock-bucket.
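For illustration, a minimal gsutil sketch of setting and then locking a retention policy (the bucket name and the 90-day period are placeholder assumptions; remember that locking is irreversible):

```bash
# Set a 90-day retention policy on the log bucket (placeholder name).
gsutil retention set 90d gs://my-log-bucket

# Permanently lock the policy; once locked it cannot be removed or reduced.
gsutil retention lock gs://my-log-bucket
```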
```config from cloud.resource where cloud.type = 'aws' and api.name = 'aws-iam-get-credential-report' AND json.rule = 'user does not equal <root_account> and _DateTime.ageInDays(user_creation_time) > 30 and (password_last_used equals N/A or password_last_used equals no_information or _DateTime.ageInDays(password_last_used) > 30) and ((access_key_1_last_used_date equals N/A or _DateTime.ageInDays(access_key_1_last_used_date) > 30) and (access_key_2_last_used_date equals N/A or _DateTime.ageInDays(access_key_2_last_used_date) > 30))'```
AWS Inactive users for more than 30 days This policy identifies users who have been inactive for more than 30 days. Inactive user accounts are an easy target for attackers because activity on such accounts will largely go unnoticed. NOTE: This policy does not apply to SSO login users and Root users This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNUSED_PRIVILEGES']. Mitigation of this issue can be done as follows: 1. Sign in to AWS console and navigate to IAM.\n2. Identify the reported user and make sure that the user has a legitimate reason to be inactive for such an extended period.\n3. Delete the user account if the user no longer needs access to the console or no longer exists.
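As a sketch, the credential report this query relies on can be pulled via the AWS CLI, and a confirmed-unused user removed (the user name below is a placeholder):

```bash
# Generate and download the IAM credential report (base64-encoded CSV).
aws iam generate-credential-report
aws iam get-credential-report --query Content --output text | base64 -d

# After confirming the user is genuinely unused, remove console access and
# the user (the user must first be detached from groups, policies, and keys).
aws iam delete-login-profile --user-name inactive-user
aws iam delete-user --user-name inactive-user
```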
```config from cloud.resource where cloud.type = 'gcp' AND api.name='gcloud-compute-firewall-rules-list' AND json.rule= 'sourceRanges[*] contains 0.0.0.0/0 and allowed[?any(ports contains _Port.inRange(25,25) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)) )] exists'```
GCP Firewall rule allows all traffic on SMTP port (25) This policy identifies GCP Firewall rules which allow all inbound traffic on SMTP port (25). Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended to restrict access on SMTP port (25) to specific IP addresses. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: If the reported Firewall rule does indeed need to be restricted, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to the 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IP\n7. Click on 'SAVE'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-cdn-endpoint' AND json.rule = properties.customDomains[?any( properties.customHttpsProvisioningState does not equal Enabled )] exists```
Azure CDN Endpoint Custom domains are not configured with HTTPS This policy identifies Azure CDN Endpoint custom domains which are not configured with HTTPS. Enabling HTTPS allows sensitive data to be delivered securely via TLS/SSL encryption when it is sent across the internet. It is recommended to enable HTTPS on Azure CDN Endpoint custom domains, which provides additional security and protects your web applications from attacks. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to 'CDN profiles'\n3. Choose each reported 'CDN Endpoint' under each 'CDN profile'\n4. Under the 'Settings' section, click on 'Custom domains'\n5. Select the 'Custom domain' for which you need to enable HTTPS\n6. Under 'Configure' select 'On' for 'Custom domain HTTPS'\n7. Select 'Certificate management type' and 'Minimum TLS version'\n8. Click on 'Save'.
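A minimal az CLI sketch for enabling CDN-managed HTTPS on a custom domain (resource group, profile, endpoint, and domain names are placeholder assumptions):

```bash
# Enable HTTPS with a CDN-managed certificate on the custom domain.
az cdn custom-domain enable-https \
  --resource-group my-rg \
  --profile-name my-cdn-profile \
  --endpoint-name my-endpoint \
  --name my-custom-domain
```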
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-route53-domain' AND json.rule = statusList[*] does not contain "clientTransferProhibited"```
AWS Route53 Domain transfer lock is not enabled This policy identifies AWS Route53 domains which are not enabled with transfer lock. Route 53 Domain Transfer Lock is a security feature that prevents unauthorized domain transfers by locking the domain at the registrar level. The feature sets the "clientTransferProhibited" flag, a registry setting enabled by the registrar to force all transfer requests to be rejected automatically. If Route 53 Domain Transfer Lock is disabled, your domain is vulnerable to unauthorized transfers, which can lead to service disruptions, data breaches, reputational damage, and financial loss. It is recommended to enable Route 53 Domain Transfer Lock to prevent unauthorized domain transfers and protect your domain from potential security threats and disruptions. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To lock a domain to prevent unauthorized transfer to another registrar, perform the following actions:\n\n1. Sign in to the AWS Management Console and open the Route 53 console at https://console.aws.amazon.com/route53/.\n2. In the navigation pane, choose 'Registered Domains'.\n3. Choose the name of the domain that is reported.\n4. On the 'Details' section, in the 'Actions' dropdown, choose 'Turn on transfer lock' to turn the transfer lock on.\n5. You can navigate to the 'Requests' page to see the progress of your request.
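The same can be done from the CLI; a sketch with a placeholder domain (the route53domains API is only available in us-east-1):

```bash
# Turn on the transfer lock (sets the clientTransferProhibited status).
aws route53domains enable-domain-transfer-lock \
  --region us-east-1 \
  --domain-name example.com
```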
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = "permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(53,53)"```
Alibaba Cloud Security group allow internet traffic to DNS port (53) This policy identifies Security groups that allow inbound traffic on DNS port (53) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network. This is applicable to alibaba_cloud cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In the Inbound tab, select the rule with 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 53, and click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with a specific IP address range\n7. Click on 'OK'.
```config from cloud.resource where api.name = 'aws-ec2-describe-vpcs' as X; config from cloud.resource where api.name = 'aws-ec2-describe-subnets' as Y; filter 'not $.X.vpcId equals $.Y.vpcId'; show X;```
AWS VPC not in use This policy identifies VPCs which are not in use. These VPC resources might have been unintentionally launched, and AWS also imposes a limit on the number of VPCs allowed per region. So it is recommended to either delete such VPCs that do not have resources attached to them or use them effectively. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from the region drop down on the top right corner, for which the alert is generated\n3. Navigate to VPC Dashboard\n4. Click on 'Your VPCs' and choose the reported VPC\n5. If you want to use the reported VPC, associate subnets to the VPC; or if you want to delete the VPC, click on 'Actions' and choose 'Delete VPC' from the dropdown.
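A short CLI sketch for confirming the VPC has no subnets and then deleting it (the VPC ID is a placeholder):

```bash
# Verify the VPC has no subnets attached (expect an empty list).
aws ec2 describe-subnets --filters Name=vpc-id,Values=vpc-0123456789abcdef0

# Delete the unused VPC (fails if dependent resources still exist).
aws ec2 delete-vpc --vpc-id vpc-0123456789abcdef0
```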
```config from cloud.resource where api.name = 'alibaba-cloud-action-trail' AND json.rule = ossBucketName equals 42```
Tamir policy This is applicable to alibaba_cloud cloud and is considered an informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-lightsail-instance' AND json.rule = state.name contains "running" and networking.ports[?any( accessDirection equals inbound and (cidrs contains "0.0.0.0/0" or ipv6Cidrs contains "::/0") and (((toPort == 22 or fromPort == 22) or (toPort > 22 and fromPort < 22)) or ((toPort == 3389 or fromPort == 3389) or (toPort > 3389 and fromPort < 3389))))] exists```
AWS Lightsail Instance does not restrict traffic on admin ports This policy identifies AWS Lightsail instances having a network rule with unrestricted access ("0.0.0.0/0" or "::/0") on port 22 or 3389. The firewall in Amazon Lightsail manages inbound traffic permitted to connect to your instance via its public IP address, controlling access to specific IPs and ports. Leaving administrative ports open to unrestricted access increases the risk of unauthorized access, such as brute-force attacks, which can compromise the instance and expose sensitive data. It is recommended to limit access to specific IP addresses in the firewall rules to reduce unauthorized access attempts. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To restrict the traffic on the AWS Lightsail instance firewall rule to a known IP/CIDR range, perform the following actions:\n\n1. Sign in to the Lightsail console https://lightsail.aws.amazon.com/.\n2. In the left navigation pane, choose Instances.\n3. Choose the reported instance.\n4. Choose the Networking tab on your instance's management page.\n5. Click on the Edit icon on the rule that contains unrestricted access ("0.0.0.0/0" or "::/0") on port 22 or 3389 under the 'IPv4 Firewall' or 'IPv6 firewall' section\n6a. Click on 'Restrict to IP address' to update the Source IP address to the trusted CIDR range\nor\n6b. Remove the rule which has a 'Source' value of 0.0.0.0/0 or ::/0 and a 'Port Range' value of 22 or 3389 (or a range containing 3389 or 22) by clicking the delete icon.\n\nNote: Before making any changes, please check the impact on your applications/services.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-es-describe-elasticsearch-domain' AND json.rule = 'processing is false and (elasticsearchClusterConfig.dedicatedMasterEnabled is false or elasticsearchClusterConfig.dedicatedMasterEnabled does not exist)'```
AWS Elasticsearch domain has Dedicated master set to disabled This policy identifies Elasticsearch domains for which Dedicated master is disabled in your AWS account. If dedicated master nodes are provisioned, they handle the management tasks, and the cluster data nodes can easily manage index and search requests from different types of workloads, making them more resilient in production. Dedicated master nodes improve environmental stability by freeing the cluster data nodes from all management tasks. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from the region drop down on the top right corner, for which the alert is generated\n3. Navigate to Elasticsearch Service Dashboard\n4. Choose the reported Elasticsearch domain\n5. Click on 'Edit Domain'\n6. On the 'Edit domain' page,\n a. Check the 'Enable dedicated master' checkbox to enable dedicated master nodes for the current cluster.\n b. Select the 'Instance type' based on your ES cluster requirements from the dropdown list.\n Note: As dedicated master nodes do not hold any data nor process any search and query requests, the instance node for this role typically does not require a large amount of CPU/RAM memory.\n c. Select the 'Number of master nodes' from the dropdown list to allocate dedicated master nodes.\n7. Click on 'Submit'.
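A hedged CLI alternative using the legacy es API that this query targets (the domain name, instance type, and node count are placeholder assumptions):

```bash
# Enable three dedicated master nodes on the reported domain.
aws es update-elasticsearch-domain-config \
  --domain-name my-es-domain \
  --elasticsearch-cluster-config \
  DedicatedMasterEnabled=true,DedicatedMasterType=m5.large.elasticsearch,DedicatedMasterCount=3
```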
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-get-account-password-policy' AND json.rule = 'requireNumbers contains false and requireSymbols contains false and expirePasswords contains false and allowUsersToChangePassword contains false and requireLowercaseCharacters contains false and requireUppercaseCharacters contains false and maxPasswordAge does not exist and passwordReusePrevention does not exist and minimumPasswordLength==6'```
Copy of AWS IAM Password policy is insecure Checks to ensure that an IAM password policy is in place for the cloud accounts. As a security best practice, customers must have strong password policies in place. This policy ensures password policies are set with all of the following options:\n- Minimum Password Length\n- At least one Uppercase letter\n- At least one Lowercase letter\n- At least one Number\n- At least one Symbol/non-alphanumeric character\n- Users have permission to change their own password\n- Password expiration period\n- Password reuse\n- Password expiration requires administrator reset\nThis is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Login to AWS Console and navigate to the 'IAM' Service\n2. Click on 'Account Settings'\n3. Under 'Password Policy', select and set all the options\n4. Click on 'Apply password policy'.
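An equivalent CLI sketch covering the options listed above (the specific values, such as length 14 and 24-password reuse, are placeholder choices, not mandated by the policy):

```bash
# Set a strong account password policy covering all checked options.
aws iam update-account-password-policy \
  --minimum-password-length 14 \
  --require-uppercase-characters \
  --require-lowercase-characters \
  --require-numbers \
  --require-symbols \
  --allow-users-to-change-password \
  --max-password-age 90 \
  --password-reuse-prevention 24 \
  --hard-expiry
```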
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = 'network equals default'```
GCP Kubernetes Engine Clusters using the default network This policy identifies Google Kubernetes Engine (GKE) clusters that are configured to use the default network. Because GKE uses this network when creating routes and firewalls for the cluster, as a best practice define a network configuration that meets your security and networking requirements for ingress and egress traffic, instead of using the default network. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: You cannot change the network attached to an existing GKE cluster. To resolve this alert, create a new cluster with a custom network that meets your requirements, then migrate the cluster data from the reported cluster to this newly created GKE cluster and delete the reported GKE cluster.\n\nTo create a new Kubernetes engine cluster with a custom network, perform the following:\n1. Login to GCP Portal\n2. Go to Kubernetes Engine (Left Panel)\n3. Select Kubernetes clusters\n4. Click on CREATE CLUSTER button\n5. Set new cluster parameters as per your requirement and make sure 'Network' is set to other than 'default' under the Networking section.\n6. Click on Save\n\nTo delete the reported Kubernetes engine cluster, perform the following:\n1. Login to GCP Portal\n2. Go to Kubernetes Engine (Left Panel)\n3. Select Kubernetes clusters\n4. Click on the reported Kubernetes cluster\n5. Click on the DELETE button\n6. On the 'Delete a cluster' popup dialog, click on DELETE to confirm the deletion of the cluster.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = ['sqlServer'].['properties.state'] equal ignore case Ready and ['sqlServer'].['properties.privateEndpointConnections'] is empty```
Azure SQL Database server not configured with private endpoint This policy identifies Azure SQL database servers that are not configured with a private endpoint. Private endpoint connections enforce secure communication by enabling private connectivity to Azure SQL Database. Configuring a private endpoint enables access to traffic coming only from known networks and prevents access from malicious or unknown IP addresses, including IP addresses within Azure. It is recommended to create a private endpoint for secure communication for your Azure SQL database. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure Portal\n2. Navigate to 'SQL Servers' and select the reported server\n3. Open the Private endpoint settings\n4. Click on Add Private endpoint to create and add a private endpoint\n\nRefer to the below link for the step by step process:\nhttps://learn.microsoft.com/en-us/azure/private-link/tutorial-private-endpoint-sql-portal.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-projects-get-iam-user' AND json.rule = '(roles[*] contains roles/iam.serviceAccountAdmin) and (roles[*] contains roles/iam.serviceAccountUser)'```
GCP IAM Users have overly permissive service account privileges This policy identifies IAM users which have overly permissive service account privileges. No user should have both the Service Account Admin and Service Account User roles assigned at the same time. The built-in/predefined IAM role Service Account Admin allows the user to create, delete, and manage service accounts. The built-in/predefined IAM role Service Account User allows the user to assign service accounts to Apps/Compute Instances. It is recommended to follow the principle of 'Separation of Duties', ensuring that one individual does not have all the permissions necessary to complete a malicious action, which helps avoid security or privacy incidents and errors. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to IAM & Admin (Left Panel)\n3. Select IAM\n4. From the list of users, choose the reported IAM user\n5. Click on the Edit permissions pencil icon\n6. For the member having both 'Service Account Admin' and 'Service Account User' roles granted/assigned, click on the Delete Bin icon to remove the role from the member.
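A gcloud sketch for removing one of the two roles from a member (the project ID and user email are placeholders):

```bash
# Drop the Service Account Admin role from the reported user, keeping
# Service Account User, to restore separation of duties.
gcloud projects remove-iam-policy-binding my-project-id \
  --member="user:alice@example.com" \
  --role="roles/iam.serviceAccountAdmin"
```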
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' AND json.rule = properties.provisioningState equals Succeeded and properties.privateEndpointConnections[*] does not exist```
Azure Key vault Private endpoint connection is not configured This policy identifies Key vaults that are not configured with a private endpoint connection. Azure Key vault private endpoints can be configured using Azure Private Link. Private Link allows users to access an Azure Key vault from within the virtual network or from any peered virtual network. When Private Link is combined with restricted NSG policies, it helps reduce the risk of data exfiltration. It is recommended to configure a Private endpoint connection to Key vaults. For more details: https://docs.microsoft.com/en-us/azure/key-vault/general/private-link-service This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Refer to the following URL for configuring Private endpoints on your Key vaults:\nhttps://docs.microsoft.com/en-us/azure/key-vault/general/private-link-service\n\nNOTE: The Key vault associated with private endpoints should not allow access from all networks in the Firewalls and virtual networks section; make sure the Selected networks are configured with restrictive Virtual networks access. Otherwise, the security provided by private endpoints will not be realized.
```config from cloud.resource where api.name = 'azure-sql-server-list' AND json.rule = '(serverBlobAuditingPolicy does not exist or serverBlobAuditingPolicy is empty or serverBlobAuditingPolicy.properties.state equals Disabled or serverBlobAuditingPolicy.properties.retentionDays does not exist or (serverBlobAuditingPolicy.properties.storageEndpoint is not empty and serverBlobAuditingPolicy.properties.state equals Enabled and serverBlobAuditingPolicy.properties.retentionDays does not equal 0 and serverBlobAuditingPolicy.properties.retentionDays less than 90))' as X; config from cloud.resource where api.name = 'azure-sql-db-list' AND json.rule = '(blobAuditPolicy does not exist or blobAuditPolicy is empty or blobAuditPolicy.properties.retentionDays does not exist or (blobAuditPolicy.properties.storageEndpoint is not empty and blobAuditPolicy.properties.state equals Enabled and blobAuditPolicy.properties.retentionDays does not equal 0 and blobAuditPolicy.properties.retentionDays less than 90))' as Y; filter '$.Y.blobAuditPolicy.id contains $.X.sqlServer.name'; show Y;```
Azure SQL Database with Auditing Retention less than 90 days This policy identifies SQL Databases that have an Auditing Retention of less than 90 days. Audit Logs can be used to check for anomalies and give insight into suspected breaches or misuse of information and access. If server auditing is enabled, it always applies to the database, and the database will be audited regardless of the database auditing settings. It is recommended to configure SQL database Audit Retention to be greater than or equal to 90 days and leave database-level auditing disabled for all databases. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If server auditing is enabled, it always applies to the database, and the database will be audited regardless of the database auditing settings. It is recommended that you enable only the server-level auditing setting and leave database-level auditing disabled for all databases.\n\nTo configure the server-level audit setting:\n1. Log in to the Azure Portal\n2. Go to SQL servers\n3. Choose each reported DB server\n4. Select 'Auditing', and verify that 'Enable Azure SQL Auditing' is set\n5. If Storage is selected, expand 'Advanced properties'\n6. Set the Retention (days) setting to greater than 90 days, or 0 for unlimited retention.\nNote: The default value for the retention period is 0 (unlimited retention).\n7. Click on 'Save'\n\nIt is recommended to avoid enabling both server auditing and database blob auditing together, unless you want to use a different storage account, retention period, or Log Analytics Workspace for a specific database, or want audit event types or categories for a specific database that differ from the rest of the databases on the server.\nTo configure the database-level audit setting:\n1. Log in to the Azure Portal\n2. Go to SQL databases\n3. Choose each reported DB\n4. Select 'Auditing', and verify that 'Enable Azure SQL Auditing' is set\n5. If Storage is selected, expand 'Advanced properties'\n6. Set the Retention (days) setting to greater than 90 days, or 0 for unlimited retention.\nNote: The default value for the retention period is 0 (unlimited retention).\n7. Click on 'Save'.
```config from cloud.resource where api.name = 'aws-eks-describe-cluster' as X; config from cloud.resource where api.name = 'aws-ec2-describe-security-groups' AND json.rule = isShared is false and (ipPermissions[*].ipv4Ranges[*] contains 0.0.0.0/0 or ipPermissions[*].ipv6Ranges[*] contains ::/0) as Y; filter '$.X.resourcesVpcConfig.securityGroupIds contains $.Y.groupId or $.X.resourcesVpcConfig.clusterSecurityGroupId contains $.Y.groupId'; show Y;```
AWS EKS cluster security group overly permissive to all traffic This policy identifies EKS cluster Security groups that are overly permissive to all traffic. Doing so may allow a bad actor to brute force their way into the system and potentially get access to the entire network. Review your list of security group rules to ensure that your resources are not exposed. As a best practice, restrict traffic solely from known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact on your applications/services. If the reported Security Group does indeed need to restrict all traffic, follow the instructions below:\n1. Log in to the AWS console\n2. Navigate to the 'VPC' service\n3. Select the 'Security Group' reported in the alert\n4. Click on 'Inbound Rules'\n5. Remove the rule which has the 'Source' value as 0.0.0.0/0 or ::/0.
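A CLI sketch of revoking the open ingress rule (the security group ID is a placeholder; adjust the protocol and ports to match the reported rule):

```bash
# Revoke the all-traffic ingress rule open to the world on the EKS security group.
aws ec2 revoke-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --ip-permissions 'IpProtocol=-1,IpRanges=[{CidrIp=0.0.0.0/0}]'
```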
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Outbound and (sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (destinationAddressPrefix equals * or destinationAddressPrefix equals Internet))] exists```
Azure Network Security Group with overly permissive outbound rule This policy identifies NSGs with overly permissive outbound rules allowing outgoing traffic from a source of type any or a source with a public IP range. A network security group contains a list of security rules that allow or deny inbound or outbound network traffic based on source or destination IP address, port, and protocol. As a best practice, it is recommended to configure NSGs to restrict traffic to known sources on authorized protocols and ports. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the Azure Portal\n2. On the left navigation, click on All services\n3. Under NETWORKING, click on Network security groups\n4. Choose the reported resource\n5. Under SETTINGS, click on Outbound security rules\n6. Identify the row which matches the conditions mentioned below:\na) Source: Any, public IPs\nb) Destination: Any\nc) Action: Allow\n7. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-recovery-service-vault' AND json.rule = properties.provisioningState equals Succeeded and (identity does not exist or identity.type equal ignore case "None")```
Azure Recovery Services vault is not configured with managed identity This policy identifies Recovery Services vaults that are not configured with a managed identity. A managed identity can be used to authenticate to any service that supports Azure AD authentication, without having credentials in your code. Storing credentials in code increases the threat surface in case of exploitation, and managed identities eliminate the need for developers to manage credentials. So as a security best practice, it is recommended to assign a managed identity to your Recovery Services vault. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to the Recovery Services vaults dashboard\n3. Click on the reported Recovery Services vault\n4. Under the Settings section, click on 'Identity'\n5. Configure either a 'System assigned' or 'User assigned' managed identity based on your requirement.\n6. Click on 'Save'.
```config from cloud.resource where api.name = 'aws-ecs-container-instance' AND json.rule = status equals ACTIVE as X; config from cloud.resource where api.name = 'aws-ec2-describe-volumes' AND json.rule = state contains in-use and encrypted is false as Y; filter '$.Y.attachments[*].instanceId contains $.X.ec2InstanceId'; show Y;```
AWS ECS Cluster instance volume encryption for data at rest is disabled This policy identifies the ECS Cluster instance volumes for which encryption for data at rest is disabled. Encrypting data at rest reduces unintentional exposure of data and prevents unauthorized users from accessing sensitive data on your AWS ECS clusters. It is recommended to configure encryption for your ECS cluster instance volumes using an encryption key. NOTE: ECS can be launched using the ECS Fargate launch type or an EC2 Instance. The ECS Fargate launch type pulls images from the Elastic Container Registry, which are transmitted over HTTPS and are automatically encrypted at rest using S3 server-side encryption. So this policy is only applicable to ECS launched using EC2 Instances. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: To enable encryption for your ECS Cluster instance volumes, refer to the URL below:\nhttps://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html\n\nNOTE: Existing EBS volumes or snapshots cannot be encrypted in place, but when you copy unencrypted snapshots or restore unencrypted volumes, the resulting snapshots or volumes can be encrypted.
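As a forward-looking sketch, enabling EBS encryption by default ensures newly created volumes for cluster instances are encrypted (the region and key alias are placeholders; existing volumes are unaffected):

```bash
# Make all new EBS volumes in this region encrypted by default.
aws ec2 enable-ebs-encryption-by-default --region us-east-1

# Optionally set a customer managed KMS key (placeholder alias) as the default.
aws ec2 modify-ebs-default-kms-key-id --kms-key-id alias/my-ebs-key --region us-east-1
```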
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = "state equals RUNNABLE and databaseVersion contains POSTGRES and (settings.databaseFlags[*].name does not contain log_executor_stats or settings.databaseFlags[?any(name contains log_executor_stats and value contains on)] exists)"```
GCP PostgreSQL instance database flag log_executor_stats is not set to off This policy identifies PostgreSQL database instances in which the database flag log_executor_stats is not set to off. The log_executor_stats flag enables a crude profiling method for logging PostgreSQL executor performance statistics. Even though it can be useful for troubleshooting, it may increase the number of logs significantly and have performance overhead. It is recommended to set the log_executor_stats flag to off. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the GCP console\n2. Navigate to the SQL Instances page\n3. Click on the reported PostgreSQL instance\n4. Click EDIT\n5. If the flag has not been set on the instance,\nUnder 'Customize your instance', click on 'ADD FLAG' in the 'Flags' section, choose the flag 'log_executor_stats' from the drop-down menu, and set the value as 'off'\nOR\nIf the flag has been set to a value other than off, under 'Customize your instance', in the 'Flags' section choose the flag 'log_executor_stats' and set the value as 'off'\n6. Click on 'DONE' and then 'SAVE'.
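A gcloud sketch (the instance name is a placeholder; note that --database-flags replaces the instance's entire flag list, so include any existing flags as well):

```bash
# Set log_executor_stats to off; this overwrites the current flag set.
gcloud sql instances patch my-postgres-instance \
  --database-flags=log_executor_stats=off
```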
```config from cloud.resource where api.name = 'aws-ec2-describe-images' AND json.rule = image.blockDeviceMappings[*].deviceName exists```
haridemo This is applicable to aws cloud and is considered a critical severity issue. Sample categories of findings relevant here are [None]. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = config.remoteDebuggingEnabled is true```
Azure App Services Remote debugging is enabled This policy identifies Azure App Services which have Remote debugging enabled. Enabling the Remote debugging feature opens up inbound ports on App Services. It is recommended to disable Remote debugging on Azure App Services. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Select 'App Services' from the left pane\n3. Select the reported App Service\n4. Go to 'Configuration' under 'Settings'\n5. Click on 'General settings'\n6. Select 'Off' for 'Remote debugging' under the 'Debugging' section\n7. Click on 'Save'.
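A one-call az CLI sketch (the app and resource group names are placeholders):

```bash
# Turn off remote debugging for the reported App Service.
az webapp config set \
  --resource-group my-rg \
  --name my-app \
  --remote-debugging-enabled false
```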
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Tcp or protocol equals *) and (destinationPortRange contains _Port.inRange(135,135) or destinationPortRanges[*] contains _Port.inRange(135,135) ))] exists```
Azure Network Security Group allows all traffic on Windows RPC (TCP Port 135) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on Windows RPC (TCP Port 135). Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict Windows RPC solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact on your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.
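A CLI sketch for tightening or removing the offending NSG rule (all names and the CIDR are placeholders):

```bash
# Option A: narrow the rule's source to a trusted range.
az network nsg rule update \
  --resource-group my-rg \
  --nsg-name my-nsg \
  --name allow-rpc \
  --source-address-prefixes 203.0.113.0/24

# Option B: delete the rule entirely.
az network nsg rule delete \
  --resource-group my-rg \
  --nsg-name my-nsg \
  --name allow-rpc
```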
```config from cloud.resource where api.name = 'aws-dms-replication-task' AND json.rule = ReplicationTaskSettings.Logging.EnableLogging is false or ReplicationTaskSettings.Logging.LogComponents[?any( Id is member of ("SOURCE_CAPTURE","SOURCE_UNLOAD") and Severity is not member of ("LOGGER_SEVERITY_DEFAULT","LOGGER_SEVERITY_DEBUG","LOGGER_SEVERITY_DETAILED_DEBUG") )] exists```
AWS DMS replication task for the source database has logging not set to the minimum severity level This policy identifies AWS DMS replication tasks where logging is either not enabled or set below the minimum severity level, such as LOGGER_SEVERITY_DEFAULT, for SOURCE_CAPTURE and SOURCE_UNLOAD. Logging is indispensable in DMS replication for various purposes, including monitoring, troubleshooting, auditing, performance analysis, error detection, recovery, and historical reporting. SOURCE_CAPTURE captures ongoing replication or CDC data from the source database, while SOURCE_UNLOAD unloads data during full load. Logging these tasks is crucial for ensuring data integrity, compliance, and accountability during migration. It is recommended to enable logging for AWS DMS replication tasks and set a minimal logging level of DEFAULT for SOURCE_CAPTURE and SOURCE_UNLOAD to ensure that essential messages are logged, facilitating effective monitoring, troubleshooting, and compliance efforts. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable logging for the Source Capture and Source Unload log components of a DMS replication task during migration:\n\n1. Log in to the AWS Management Console\n2. In the console, select the specific region from the region drop-down in the top right corner, for which the alert is generated\n3. Navigate to 'Migration & Transfer' from the 'Services' dropdown and select 'Database Migration Service'\n4. In the navigation panel, under 'Migrate data', click on 'Database migration tasks'\n5. Select the reported replication task and choose 'Modify' from the 'Actions' dropdown on the right\n6. Under the 'Task settings' section, enable 'Turn on CloudWatch logs' under 'Task logs'\n7. Set the log component severity for both 'Source capture' and 'Source Unload' components to 'Default' or greater according to your business requirements\n8. Click 'Save' to save the changes.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = "location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals equals Microsoft.Security/securitySolutions/write" as X; count(X) less than 1```
Azure Activity log alert for Create or update security solution does not exist This policy identifies the Azure accounts in which an activity log alert for 'Create or update security solution' does not exist. Creating an activity log alert for 'Create or update security solution' gives insight into changes to the active security solutions and may reduce the time it takes to detect suspicious activity. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Click on Monitor (Left Panel)\n3. Select 'Alerts'\n4. Click on Create > Alert rule\n5. On the 'Create an alert rule' page, choose the Scope as your Subscription and under the CONDITION section, choose 'Create or Update Security Solutions (Microsoft.Security/securitySolutions)'; other fields can be set based on your requirements.\n6. Click on Create.
```config from cloud.resource where api.name = 'aws-code-build-project' AND json.rule = environment.environmentVariables[*].name exists and environment.environmentVariables[?any( (name contains "AWS_ACCESS_KEY_ID" or name contains "AWS_SECRET_ACCESS_KEY" or name contains "PASSWORD" ) and type equals "PLAINTEXT")] exists```
AWS CodeBuild project environment variables contain plaintext AWS credentials This policy identifies AWS CodeBuild projects that contain the environment variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, or a password in plaintext. AWS CodeBuild environment variables configure build settings, pass contextual information, and manage sensitive data during the build process. Authentication credentials like AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY should never be stored in clear text, as this could lead to unintended data exposure and unauthorized access. It is recommended that AWS CodeBuild environment variables be securely managed using AWS Secrets Manager or AWS Systems Manager Parameter Store to store sensitive data and remove plaintext credentials. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To remove environment variables from an AWS CodeBuild project,\n\n1. Log in to the AWS Management Console.\n2. In the console, select the specific region from the region drop-down in the top right corner, for which the alert is generated.\n3. Navigate to 'Developer Tools' from the 'Services' dropdown and select 'CodeBuild'.\n4. In the navigation pane, choose 'Build projects'.\n5. Select the reported Build project and choose Edit, then click 'Environment' and expand 'Additional configuration'.\n6. Choose 'Remove' next to the environment variables that contain plaintext credentials.\n7. When you have finished changing your CodeBuild environment configuration, click 'Update environment'.\n\nYou can store environment variables with sensitive values in the AWS Systems Manager Parameter Store or AWS Secrets Manager and then retrieve them from your build spec according to your business requirements.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = 'versioningConfiguration.status!=Enabled'```
Informational - AWS S3 Object Versioning is disabled This policy identifies the S3 buckets which have Object Versioning disabled. S3 Object Versioning is an important capability in protecting your data within a bucket. Once you enable Object Versioning, you cannot remove it; you can suspend Object Versioning at any time on a bucket if you do not wish for it to persist. It is recommended to enable Object Versioning on S3. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log into your AWS Console and select the S3 service.\n2. Choose the reported S3 bucket and click the Properties tab in the upper right frame.\n3. Expand the Versioning option\n4. Click Enable Versioning\n5. Click Save.
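The CLI equivalent is a single call (the bucket name is a placeholder):

```bash
# Enable object versioning on the reported bucket.
aws s3api put-bucket-versioning \
  --bucket my-bucket \
  --versioning-configuration Status=Enabled
```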
```config from cloud.resource where api.name='aws-cloudtrail-describe-trails' AND cloud.type = 'aws' AND json.rule = 'kmsKeyId does not exist'```
AWS CloudTrail logs are not encrypted using Customer Master Keys (CMKs) Checks to ensure that CloudTrail logs are encrypted. AWS CloudTrail is a service that enables governance, compliance, and operational & risk auditing of the AWS account. It is a compliance and security best practice to encrypt the CloudTrail data since it may contain sensitive information. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Login to AWS Console and navigate to the 'CloudTrail' service.\n2. For each trail, under Configuration > Storage Location, select 'Yes' for the 'Encrypt log files' setting\n3. Choose an existing KMS key or create a new one to encrypt the logs with.
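A CLI sketch (the trail name and KMS key alias are placeholders; the key policy must allow CloudTrail to use the key):

```bash
# Encrypt CloudTrail log files with a customer managed KMS key.
aws cloudtrail update-trail \
  --name my-trail \
  --kms-key-id alias/my-cloudtrail-key
```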
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = binaryAuthorization.evaluationMode does not exist or binaryAuthorization.evaluationMode equal ignore case EVALUATION_MODE_UNSPECIFIED or binaryAuthorization.evaluationMode equal ignore case DISABLED```
GCP Kubernetes Engine Clusters have binary authorization disabled This policy identifies Google Kubernetes Engine (GKE) clusters that have disabled binary authorization. Binary authorization is a security control that ensures only trusted container images are deployed on GKE clusters. As a best practice, verify images prior to deployment to reduce the risk of running unintended or malicious code in your environment. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable Binary authorization for a GKE cluster, please refer to the URL given below:\nhttps://cloud.google.com/binary-authorization/docs/enable-cluster#console.
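A hedged gcloud sketch; the exact flag varies by gcloud version (older releases use --enable-binauthz, newer ones --binauthz-evaluation-mode), and the cluster name and zone are placeholders:

```bash
# Enable Binary Authorization enforcement on an existing cluster.
gcloud container clusters update my-cluster \
  --zone us-central1-a \
  --binauthz-evaluation-mode=PROJECT_SINGLETON_POLICY_ENFORCE
```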
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-virtual-machine-scale-set' AND json.rule = properties.virtualMachineProfile.storageProfile.osDisk.vhdContainers exists```
Azure Virtual machine scale sets are not utilizing Managed Disks This policy identifies Azure Virtual machine scale sets which are not utilizing Managed Disks. Using Azure Managed Disks over traditional BLOB-storage-based VHDs has several advantages: managed disks are encrypted by default, reduce cost compared to storage accounts, and are more resilient, as Microsoft manages the disk storage and moves it if the underlying hardware goes faulty. It is recommended to move BLOB-based VHDs to Managed Disks. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Migrating existing Virtual machine scale set VHD disks to Azure Managed Disks is currently not available.\nIt is recommended that all new future scale sets be deployed with managed disks.\n\nFollow the steps given in the URL to create new Virtual machine scale sets:\n\nhttps://docs.microsoft.com/en-us/azure/virtual-machine-scale-sets/quick-create-portal.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-event-hub-namespace' AND json.rule = authorizationRules[*].name exists and authorizationRules[?any(name does not equal RootManageSharedAccessKey)] exists```
Azure Event Hub Namespace having authorization rules except RootManageSharedAccessKey This policy identifies Azure Event Hub Namespaces which have authorization rules other than RootManageSharedAccessKey. Event Hub namespace authorization rules other than 'RootManageSharedAccessKey' provide access to all queues and topics under the namespace, which poses a risk if these additional rules are not properly managed or secured. As a best practice, it is recommended to remove Event Hub namespace authorization rules other than RootManageSharedAccessKey and create access policies at the entity level, which provide access to only that specific entity for queues and topics. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to the 'Event Hubs' dashboard\n3. Select the reported Event Hubs Namespace\n4. Select 'Shared access policies' under the 'Settings' section\n5. Delete all other Shared access policy rules except 'RootManageSharedAccessKey'.
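An az CLI sketch for listing and removing extra namespace-level rules (all names are placeholders):

```bash
# List the namespace-level authorization rules.
az eventhubs namespace authorization-rule list \
  --resource-group my-rg \
  --namespace-name my-namespace

# Delete any rule other than RootManageSharedAccessKey (placeholder rule name).
az eventhubs namespace authorization-rule delete \
  --resource-group my-rg \
  --namespace-name my-namespace \
  --name my-extra-rule
```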
```config from cloud.resource where api.name = 'azure-sql-db-list' AND json.rule = sqlDatabase.properties.status equals Online and (securityAlertPolicy.properties.state equals Disabled or securityAlertPolicy does not exist or securityAlertPolicy.[*] isEmpty) as X; config from cloud.resource where api.name = 'azure-sql-server-list' AND json.rule = ['sqlServer'].['properties.state'] equals Ready and (serverSecurityAlertPolicy.properties.state equals Disabled or serverSecurityAlertPolicy does not exist or serverSecurityAlertPolicy isEmpty) as Y; filter "$.X.blobAuditPolicy.id contains $.Y.sqlServer.name"; show X;```
Azure SQL databases Defender setting is set to Off This policy identifies Azure SQL databases which have the Defender setting set to Off. Azure Defender for SQL provides a new layer of security, which enables customers to detect and respond to potential threats as they occur by providing security alerts on anomalous activities. Users will receive an alert upon suspicious database activities, potential vulnerabilities, and SQL injection attacks, as well as anomalous database access patterns. Advanced threat protection alerts provide details of the suspicious activity and recommend actions on how to investigate and mitigate the threat. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If Azure Defender is enabled at the server level, it will also be applied to all the databases, regardless of the database Azure Defender settings. It is recommended that you enable only server-level Azure Defender settings.\nTo enable at the server level:\n1. Log in to the Azure Portal\n2. Note down the reported SQL database and SQL server\n3. Select 'SQL servers', and click on the SQL server instance you want to modify\n4. Click on 'Microsoft Defender for Cloud' under 'Security'\n5. Click on 'Enable Microsoft Defender for SQL'\n\nIt is recommended to avoid enabling Azure Defender at both the server and database levels, unless you want to use a different storage account, email addresses for scan and alert notifications, or 'Advanced Threat Protection types' for a specific database that differ from the rest of the databases on the server. In that case, enable at the database level:\n1. Log in to the Azure Portal\n2. Note down the reported SQL database\n3. Select 'SQL databases', and click on the SQL database instance you want to modify\n4. Click on 'Microsoft Defender for Cloud' under 'Security'\n5. Click on 'Enable Microsoft Defender for SQL'.
```config from cloud.resource where api.name = 'aws-elb-describe-load-balancers' as X; config from cloud.resource where api.name = 'aws-iam-list-server-certificates' as Y; filter '($.X.description.listenerDescriptions[*].listener.sslcertificateId equals $.Y.arn and ((_DateTime.ageInDays($.Y.expiration) > -90 and (_DateTime.ageInDays($.Y.expiration) < 0 or _DateTime.ageInDays($.Y.expiration) == 0)) or (_DateTime.ageInDays($.Y.expiration) > 0)))'; show X;```
AWS Elastic Load Balancer (ELB) with IAM certificate expiring in 90 days This policy identifies Elastic Load Balancers (ELB) which are using IAM certificates expiring in 90 days or using expired certificates. Removing expired IAM certificates eliminates the risk and prevents damage to the credibility of the application/website behind the ELB. As a best practice, it is recommended to reimport expiring certificates while preserving the ELB associations of the original certificate. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Removing invalid certificates via the AWS Management Console is not currently supported. To delete/upload SSL/TLS certificates stored in IAM via the AWS API, use the Command Line Interface (CLI).\n\nRemediation CLI:\n1. Run the describe-load-balancers command to make sure that the expiring server certificate is not currently used by any active load balancer.\naws elb describe-load-balancers --region <COMPUTE_REGION> --load-balancer-names <ELB_NAME> --query 'LoadBalancerDescriptions[*].ListenerDescriptions[*].Listener.SSLCertificateId'\nThis command output will return the Amazon Resource Name (ARN) for the SSL certificate currently used by the selected ELB:\n[\n [\n \"arn:aws:iam::1234567890:server-certificate/MyCertificate\"\n ]\n]\n2. Create a new AWS IAM certificate with your desired parameter values\n3. To upload the new IAM Certificate:\naws iam upload-server-certificate --server-certificate-name <NEW_CERTIFICATE_NAME> --certificate-body file://Certificate.pem --certificate-chain file://CertificateChain.pem --private-key file://PrivateKey.pem\n4. To replace the existing SSL certificate for the specified HTTPS load balancer:\naws elb set-load-balancer-listener-ssl-certificate --load-balancer-name <ELB_NAME> --load-balancer-port 443 --ssl-certificate-id arn:aws:iam::1234567890:server-certificate/<NEW_CERTIFICATE_NAME>\n5. Now that it is safe to remove the expiring SSL/TLS certificate from AWS IAM, delete it by running:\naws iam delete-server-certificate --server-certificate-name <CERTIFICATE_NAME>.
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ram-policy' AND json.rule = "(policy.policyType does not contain System) and (defaultPolicyVersion.policyDocument.Statement[?(@.Resource == '*' && @.Effect== 'Allow')].Action equals *)"```
Alibaba Cloud RAM policy allows full administrative privileges This policy identifies RAM policies with full administrative privileges. RAM policies are the means by which privileges are granted to users, groups, or roles. It is recommended to grant least-privilege access, granting only the permissions required to perform a task, instead of allowing full administrative privileges. This is applicable to alibaba_cloud cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Resource Access Management\n3. In the left-side navigation pane, click 'Policies'\n4. Click on the reported RAM policy\n5. Under the 'References' tab, 'Revoke Permission' for all users/roles/groups attached to the policy.\n6. Delete the reported policy\n\nDetermine what users need to do and then craft policies for them that let the users perform only those tasks, instead of allowing full administrative privileges.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elb-describe-load-balancers' AND json.rule = 'attributes.crossZoneLoadBalancing.enabled is false'```
AWS Elastic Load Balancer (Classic) with cross-zone load balancing disabled This policy identifies Classic Elastic Load Balancers which have cross-zone load balancing disabled. When cross-zone load balancing is enabled, the classic load balancer distributes requests evenly across the registered instances in all enabled Availability Zones. Cross-zone load balancing reduces the need to maintain equivalent numbers of instances in each enabled Availability Zone, and improves your application's ability to handle the loss of one or more instances. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from the region drop down on the top right corner, for which the alert is generated\n3. Navigate to the EC2 dashboard\n4. Click on 'Load Balancers' (Left Panel)\n5. Select the reported ELB\n6. On the Description tab, choose 'Change cross-zone load balancing setting'\n7. On the 'Configure Cross-Zone Load Balancing' popup dialog, select 'Enable'\n8. Click on 'Save'.
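A CLI sketch (the load balancer name is a placeholder):

```bash
# Enable cross-zone load balancing on the classic ELB.
aws elb modify-load-balancer-attributes \
  --load-balancer-name my-classic-elb \
  --load-balancer-attributes '{"CrossZoneLoadBalancing":{"Enabled":true}}'
```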
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = "nodePools[*].config.metadata does not exist or nodePools[*].config.metadata does not contain disable-legacy-endpoints or nodePools[*].config.metadata.disable-legacy-endpoints does not contain true"```
GCP Kubernetes Engine Clusters have legacy compute engine metadata endpoints enabled This policy identifies Google Kubernetes Engine (GKE) clusters that have legacy compute engine metadata endpoints enabled. Because GKE uses instance metadata to configure node VMs, some of this metadata is potentially sensitive and should be protected from workloads running on the cluster. Legacy metadata APIs expose Compute Engine instance metadata server endpoints. As a best practice, disable the legacy APIs and use the v1 APIs to restrict a potential attacker from retrieving instance metadata. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: You can currently disable legacy metadata APIs only when creating a new cluster, or when adding a new node pool to an existing cluster. To fix this alert, create a new GKE cluster with legacy metadata APIs disabled, and migrate all required data from the reported cluster to the newly created cluster before you delete the reported GKE cluster.\n\nTo create a new Kubernetes engine cluster with legacy metadata APIs disabled, perform the following:\n1. Login to GCP Portal\n2. Go to Kubernetes Engine (Left Panel)\n3. Select Kubernetes clusters\n4. Click on CREATE CLUSTER button\n5. Under the Node pools section, click on the 'More node pool options' button\n6. On the 'Edit node pool' window, for 'GCE instance metadata' click on 'Add metadata'\n7. Add 'disable-legacy-endpoints' as a metadata key and 'true' as a metadata value\n8. Click on 'Save'\n9. Click on 'Create'\n\nTo delete the reported Kubernetes engine cluster, perform the following:\n1. Login to GCP Portal\n2. Go to Kubernetes Engine (Left Panel)\n3. Select Kubernetes clusters\n4. Click on the reported Kubernetes cluster\n5. Click on the DELETE button\n6. On the 'Delete a cluster' popup dialog, click on DELETE to confirm the deletion of the cluster.
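As the text notes, the setting can only be applied when creating a cluster or node pool; a gcloud sketch for adding a compliant node pool (the pool name, cluster name, and zone are placeholders):

```bash
# Create a new node pool with legacy metadata endpoints disabled.
gcloud container node-pools create my-secure-pool \
  --cluster my-cluster \
  --zone us-central1-a \
  --metadata disable-legacy-endpoints=true
```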
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-network-subnet-list' AND json.rule = networkSecurityGroupId does not exist and name does not equal ignore case "GatewaySubnet" and name does not equal ignore case "RouteServerSubnet" and name does not equal ignore case "AzureFirewallSubnet" and name does not equal ignore case "AzureFirewallManagementSubnet" and ['properties.delegations'][*].['properties.serviceName'] does not equal "Microsoft.Netapp/volumes"```
Azure Virtual Network subnet is not configured with a Network Security Group This policy identifies Azure Virtual Network (VNet) subnets that are not associated with a Network Security Group (NSG). While binding an NSG to a network interface of a Virtual Machine (VM) enables fine-grained control of the VM, associating an NSG to a subnet enables better control over network traffic to all resources within a subnet. It is recommended to associate an NSG with a subnet so that you can protect your VMs at the subnet level. For more information, see https://learn.microsoft.com/en-gb/archive/blogs/igorpag/azure-network-security-groups-nsg-best-practices-and-lessons-learned and https://learn.microsoft.com/en-us/azure/private-link/private-endpoint-overview#limitations Note: This policy will not report for subnets used by Azure Firewall Subnet, Azure Firewall Management Subnet, Gateway Subnet, NetApp File Share, Route Server Subnet, Private endpoints and Private links as Azure recommends not to configure Network Security Group (NSG) for these services. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal.\n2. Select 'Virtual Networks', and select the virtual network you need to modify.\n3. Select 'Subnets', and select the subnet you need to modify.\n4. Select the Network security group (NSG) you want to associate with the subnet.\n5. 'Save' your changes..
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-application-insights-component' AND json.rule = properties.provisioningState equals Succeeded and (properties.publicNetworkAccessForQuery equals Enabled or properties.publicNetworkAccessForIngestion equals Enabled)```
Azure Application Insights configured with overly permissive network access This policy identifies Application Insights configured with overly permissive network access. Virtual network access configuration in Application Insights allows you to restrict data ingestion and queries coming from public networks. It is recommended to configure Application Insights with the virtual network access configuration set to restrict, so that the Application Insights resource is accessible only to restricted Azure Monitor Private Link Scopes. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to Application Insights dashboard \n3. Click on the reported Application Insights\n4. Under the 'Configure' menu, click on 'Network Isolation'\n5. Create an Azure Monitor Private Link Scope if it is not already created by referring:\nhttps://docs.microsoft.com/en-us/azure/azure-monitor/logs/private-link-configure#create-an-azure-monitor-private-link-scope\n6. After creating, Under 'Virtual networks access configuration', \nSet 'Accept data ingestion from public networks not connected through a Private Link Scope' to 'No' and \nSet 'Accept queries from public networks not connected through a Private Link Scope' to 'No'\n7. Click on 'Save'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = pricings[?any( name equals SqlServers and properties.pricingTier does not equal Standard)] exists```
Azure Microsoft Defender for Cloud is set to Off for Azure SQL Databases This policy identifies Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) configurations in which the Defender setting for Azure SQL Databases is set to Off. Enabling Microsoft Defender for Cloud provides the tools needed to harden your resources, track your security posture, protect against cyberattacks, and streamline security management. It is highly recommended to enable Microsoft Defender for Azure SQL Databases. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Select 'Enable all Microsoft Defender for Cloud plans' if not already enabled\n7. On the line in the table for 'Azure SQL Databases' Select 'On' under Plan.\n8. Select 'Save'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudtrail-describe-trails' as X; config from cloud.resource where api.name = 'aws-cloudtrail-get-trail-status' as Y; filter '(($.X.name == $.Y.trail) and ($.X.cloudWatchLogsLogGroupArn is not empty and $.X.cloudWatchLogsLogGroupArn exists) and $.X.isMultiRegionTrail is false and ($.Y.status.latestCloudWatchLogsDeliveryTime exists))'; show X;```
AWS CloudTrail logs should integrate with CloudWatch for all regions This policy identifies CloudTrail trails that are not integrated with CloudWatch Logs for all regions. CloudTrail uses Amazon S3 for log file storage and delivery, so log files are stored durably. In addition to capturing CloudTrail logs within a specified S3 bucket for long-term analysis, real-time analysis can be performed by configuring CloudTrail to send logs to CloudWatch Logs. For a trail that is enabled in all regions in an account, CloudTrail sends log files from all those regions to a CloudWatch Logs log group. It is recommended that CloudTrail logs be sent to CloudWatch Logs. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into AWS and navigate to the CloudTrail service.\n2. Click on 'Trails' in the left menu navigation and choose the reported trail.\n3. Go to the CloudWatch Logs section and click Configure.\n4. Define a new or select an existing log group and click Continue to complete the process..
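Where console access is impractical, the trail can be wired to CloudWatch Logs with boto3. A sketch under the assumption that a log group and an IAM role CloudTrail can assume already exist; both ARNs and the trail name are placeholders:

```python
import boto3

# Attach a CloudWatch Logs log group to the reported trail and make it
# multi-region so logs from every region land in the same log group.
cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")
cloudtrail.update_trail(
    Name="my-trail",
    IsMultiRegionTrail=True,
    CloudWatchLogsLogGroupArn="arn:aws:logs:us-east-1:111122223333:log-group:CloudTrail/logs:*",
    CloudWatchLogsRoleArn="arn:aws:iam::111122223333:role/CloudTrail_CloudWatchLogs_Role",
)
```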
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Tcp or protocol equals *) and (destinationPortRange contains _Port.inRange(1433,1433) or destinationPortRanges[*] contains _Port.inRange(1433,1433) ))] exists```
Azure Network Security Group allows all traffic on SQL Server (TCP Port 1433) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on SQL Server (TCP Port 1433). Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict SQL Server solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes..
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = ((((publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false) or (publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false)) and acl.grantsAsList[?any(grantee equals AllUsers and permission is member of (ReadAcp,Read,FullControl))] exists) or ((policyStatus.isPublic is true and ((publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false) or (publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false))) and (policy.Statement[?any(Effect equals Allow and (Principal equals * or Principal.AWS equals *) and (Action contains s3:* or Action contains s3:Get or Action contains s3:List) and (Condition does not exist))] exists))) and websiteConfiguration does not exist```
AWS S3 bucket publicly readable This policy identifies the S3 buckets that are publicly readable by Get/Read/List bucket operations. These permissions permit anyone, malicious or not, to perform Get/Read/List operations on your S3 bucket if they can guess the namespace. The S3 service does not protect the namespace if ACLs and the bucket policy are not handled properly; with this configuration you may be at risk of compromising critical data by leaving the S3 bucket public. For more details: https://docs.aws.amazon.com/AmazonS3/latest/user-guide/set-permissions.html https://docs.aws.amazon.com/AmazonS3/latest/dev/about-object-ownership.html#ensure-object-ownership This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the S3 resource reported in the alert\n4. Click on the 'Permissions' tab\n5. If Access Control List is set to 'Public' follow the below steps\na. Under 'Access Control List', Click on 'Everyone' and uncheck all items\nb. Click on Save changes\n6. If 'Bucket Policy' is set to public follow the below steps\na. Under 'Bucket Policy', Select 'Edit Bucket Policy' and consider defining what explicit 'Principal' should have the ability to GET/LIST objects in your S3 bucket. You may also want to specifically limit the 'Principal' ability to perform specific GET/LIST functions, without the wild card.\nIf 'Bucket Policy' is not required delete the existing 'Bucket Policy'.\nb. Click on Save changes\n\nNote: Make sure updating 'Access Control List' or 'Bucket Policy' does not affect S3 bucket data access..
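One scripted way to close the public-read path is to turn on the bucket-level public access block. A sketch; the bucket name is a placeholder, and this should not be applied to buckets that intentionally serve public content such as static websites (which this policy already excludes):

```python
import boto3

# Block public ACLs and public bucket policies on the reported bucket.
s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="my-reported-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```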
```config from cloud.resource where api.name = 'oci-database-autonomous-database' AND json.rule = lifecycleState contains AVAILABLE AND whitelistedIps is member of ("null") AND privateEndpoint is member of ("null")```
OCI Oracle Autonomous Database (ADB) access is not restricted to allowed sources or deployed within a Virtual Cloud Network This policy identifies Oracle Autonomous Databases (ADBs) that are not restricted to specific sources or not deployed within a Virtual Cloud Network (VCN). Autonomous Database automates critical database management tasks, and restricting its access to corporate IP addresses or VCNs is crucial for enhancing security. Deploying Autonomous Databases within a VCN and configuring access control rules ensure that only authorized sources can connect, significantly reducing the risk of unauthorized access. This protection is vital for maintaining the integrity and security of the databases. As a best practice, it is recommended to have new Autonomous Database instances deployed within a VCN, and existing instances should have access control rules set to restrict connectivity to approved sources. This is applicable to oci cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: To configure the OCI Oracle Autonomous Database (ADB) access, refer to the following documentation:\nhttps://docs.oracle.com/en/cloud/paas/autonomous-database/serverless/adbsb/access-control-rules-autonomous.html#GUID-F0B59281-E545-48B1-BA49-1FD51B65D123.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-kubernetes-cluster' AND json.rule = properties.powerState.code equal ignore case Running and properties.apiServerAccessProfile.enablePrivateCluster is false and (properties.apiServerAccessProfile.authorizedIPRanges does not exist or properties.apiServerAccessProfile.authorizedIPRanges is empty)```
Azure AKS cluster API server access is not restricted This policy identifies Azure Kubernetes Service (AKS) clusters whose API server is publicly accessible without private cluster mode or authorized IP ranges configured, based on the accompanying query. This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-mq-broker' AND json.rule = 'brokerState equals RUNNING and publiclyAccessible is true'```
AWS MQ is publicly accessible This policy identifies the AWS MQ brokers which are publicly accessible. It is advisable to access MQ brokers privately, only from within your AWS Virtual Private Cloud (VPC). Ensure that the AWS MQ brokers provisioned in your AWS account are not publicly accessible from the Internet to avoid sensitive data exposure and minimize security risks. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: Note: The public accessibility setting of an MQ broker cannot be modified after creation. To make an existing broker private, you need to recreate it.\n\n1. Go to AWS console\n2. Navigate to service 'Amazon MQ' from the 'Services' Menu\n3. From the list of 'Brokers' select the reported MQ broker\n4. From 'Details' section, copy all the configuration information.\n5. Within 'Users' section, locate and copy the ActiveMQ Web Console access credentials.\n6. Click on 'Brokers' from left panel, click on 'Create broker' \n7. Provide an unique name in field 'Broker name'\n8. In 'Advanced settings' section, select 'No' for 'Public accessibility'\n9. Set the new broker configuration parameters using the information copied at step no. 4\n10. Set the existing ActiveMQ Web Console access credentials copied at step no. 5\n11. Click on 'Create broker'\n12. Once the new broker is created, you can replace the broker endpoints within your applications\n\nTo delete the publicly accessible broker, \n1. Select the reported broker from the list of 'Brokers' \n2. Click on 'Delete' button\n3. When a dialog box pops up, enter the broker name to confirm and click on 'delete' button.
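Because the setting cannot be flipped in place, a quick audit pass with boto3 can at least enumerate the brokers that need recreation. A minimal sketch (pagination omitted for brevity):

```python
import boto3

# Flag Amazon MQ brokers that were created with public accessibility.
mq = boto3.client("mq", region_name="us-east-1")
for summary in mq.list_brokers()["BrokerSummaries"]:
    broker = mq.describe_broker(BrokerId=summary["BrokerId"])
    if broker.get("PubliclyAccessible"):
        print(f"Publicly accessible broker: {broker['BrokerName']}")
```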
```config from cloud.resource where api.name = "aws-ec2-describe-instances" AND json.rule = architecture contains "foo"```
API automation policy vwptv This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'gcp' and api.name = 'gcloud-secretsmanager-secret' AND json.rule = expireTime does not exist```
GCP Secrets Manager secret has no expiration date This policy identifies GCP Secret Manager secrets that have no expiration date. GCP Secret Manager securely stores and controls access to API keys, passwords, certificates, and other sensitive data. Without an expiration date, secrets remain vulnerable indefinitely. Setting an expiration date limits the potential damage of a security breach, as compromised credentials will eventually become invalid. It is recommended to configure secrets with an expiration date to reduce the risk of long-lived secrets being compromised or abused. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the Google Cloud Management Console. Navigate to the 'Secret Manager' page\n2. Under 'Secrets', click on the reported secret\n3. Select 'EDIT SECRET' on the top navigation bar\n4. Under the 'Edit secret' page, under 'Expiration', select the 'Set expiration date' checkbox and set the date and time for expiration\n5. Click on 'UPDATE SECRET'..
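Expiration can also be set programmatically; a hedged sketch with the google-cloud-secret-manager client, where the project and secret names are placeholders and the 90-day window is an arbitrary example, not a value this policy mandates:

```python
from datetime import datetime, timedelta, timezone

from google.cloud import secretmanager
from google.protobuf import timestamp_pb2

# Give an existing secret an expiration 90 days out.
client = secretmanager.SecretManagerServiceClient()
expire = timestamp_pb2.Timestamp()
expire.FromDatetime(datetime.now(timezone.utc) + timedelta(days=90))
client.update_secret(
    request={
        "secret": {
            "name": "projects/my-project/secrets/my-secret",
            "expire_time": expire,
        },
        "update_mask": {"paths": ["expire_time"]},
    }
)
```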
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultRootObject is empty```
dnd_test_create_hyperion_policy_without_asset_type_finding_1 Description-bf90f2fb-d709-4040-a033-b74ef4a2f6d8 This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['SSH_BRUTE_FORCE']. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix does not equal 96.116.133.104/32 or sourceAddressPrefix does not equal 96.116.134.8/32 or sourceAddressPrefix does not equal 96.118.251.38/32 or sourceAddressPrefix does not equal 96.118.251.70/32 or sourceAddressPrefix does not equal 2001:558:fc0c::f816:3eff:fe2b:7e9f/128 or sourceAddressPrefix does not equal 2001:558:fc0c::f816:3eff:fe2d:f8c0/128 or sourceAddressPrefix does not equal 2001:558:fc18:2:f816:3eff:fea9:fec9/128 or sourceAddressPrefix does not equal 2001:558:fc18:2:f816:3eff:fe86:aa73/128) and (destinationPortRange contains _Port.inRange(22,22) or destinationPortRanges[*] contains _Port.inRange(22,22) ))] exists```
comcast-policy This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-service-bus-namespace' AND json.rule = authorizationRules[*] size greater than 1 and authorizationRules[?any(name does not equal RootManageSharedAccessKey and properties.rights contains Manage)] exists```
Azure Service bus namespace configured with overly permissive authorization rules This policy identifies Azure Service Bus namespaces configured with overly permissive authorization rules. Service Bus clients should not use a namespace-level access policy that provides access to all queues and topics in a namespace. It is recommended to follow the least privileged security model and create access policies at the entity level for queues and topics, to provide access to only the specific entity. All authorization rules except RootManageSharedAccessKey should be removed from the Service Bus namespace. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Go to Service Bus\n3. Select the reported Service bus namespace\n4. Click on 'Shared access policies' under 'Settings'\n5. Select and remove all authorization rules except RootManageSharedAccessKey..
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elb-describe-load-balancers' AND json.rule = 'instancesAttached is false'```
AWS Elastic Load Balancer (ELB) not in use This policy identifies unused Elastic Load Balancers (ELBs) in your AWS account. Any Elastic Load Balancer in your AWS account adds charges to your monthly bill, even when it is not used by any resources. As a best practice, it is recommended to remove ELBs that are not associated with any instances; this will also help you avoid unexpected charges on your bill. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To check and remove an ELB that has no registered instances, perform the following:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to EC2 dashboard\n4. In the navigation pane, under 'LOAD BALANCING', click on 'Load Balancers'\n5. Select the reported Elastic Load Balancer\n6. Select the 'Description' tab from the bottom panel\n7. In 'Basic Configuration' section, see if the selected load balancer 'Status' is '0 of 0 instances in service'.\nIt means that there are no registered instances and the ELB can be safely removed.\n8. Click the 'Actions' dropdown from the ELB dashboard top menu\n9. Select Delete\n10. In the 'Delete Load Balancer' pop-up dialog, confirm the action to delete on clicking 'Yes, Delete' button.
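The same check the query performs can be reproduced with boto3 before deleting anything; in this sketch the delete call is left commented out because deletion is irreversible:

```python
import boto3

# List Classic Load Balancers with no registered instances.
elb = boto3.client("elb", region_name="us-east-1")
for lb in elb.describe_load_balancers()["LoadBalancerDescriptions"]:
    if not lb["Instances"]:
        print(f"Unused load balancer: {lb['LoadBalancerName']}")
        # elb.delete_load_balancer(LoadBalancerName=lb["LoadBalancerName"])
```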
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = backendType equals SECOND_GEN and ipAddresses[*].type contains PRIMARY```
GCP SQL database is assigned with public IP This policy identifies GCP SQL databases which are assigned a public IP. To lower the organisation's attack surface, Cloud SQL databases should not have public IPs. Private IPs provide improved network security and lower latency for your application. It is recommended to configure Second Generation SQL instances to use private IPs instead of public IPs. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Console\n2. Navigate to SQL\n3. Click on the reported SQL instance\n4. On overview page, click on 'EDIT' from top menu\n5. Under 'Configuration options' Click on 'Connectivity'\n6. Deselect the 'Public IP' checkbox \n7. Click on 'Save'.
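For scripted remediation, a hedged sketch against the Cloud SQL Admin API via the Google API Python client; the project, instance, and VPC network names are placeholders, and private connectivity must already be provisioned before the public IP is removed:

```python
from googleapiclient import discovery

# Disable the public IPv4 address on a Cloud SQL instance, leaving only
# the private IP on the named VPC network.
sqladmin = discovery.build("sqladmin", "v1beta4")
body = {
    "settings": {
        "ipConfiguration": {
            "ipv4Enabled": False,
            "privateNetwork": "projects/my-project/global/networks/my-vpc",
        }
    }
}
sqladmin.instances().patch(
    project="my-project", instance="my-instance", body=body
).execute()
```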
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = pricings[?any(name equals OpenSourceRelationalDatabases and properties.pricingTier does not equal Standard)] exists```
Azure Microsoft Defender for Cloud set to Off for Open-Source Relational Databases This policy identifies Azure Microsoft Defender for Cloud configurations in which the Defender setting for Open-Source Relational Databases is set to Off. Enabling Azure Defender for Cloud provides advanced security capabilities like threat intelligence, anomaly detection, and behaviour analytics. Microsoft Defender for Cloud detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. It is highly recommended to enable Azure Defender for Open-Source Relational Databases. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Click 'Select types >' in the row for 'Databases'\n7. Set the radio button next to 'Open-source relational databases' to 'On'\n8. Click on 'Save'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-role-assignment' as X; config from cloud.resource where api.name = 'azure-active-directory-user' as Y; filter '((_DateTime.ageInDays($.X.properties.updatedOn) < 1) and (($.X.properties.principalId contains $.Y.id)))'; show X;```
llatorre - RoleAssigment v2 This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where api.name = "aws-ec2-describe-instances" AND json.rule = architecture contains "foo"```
API automation policy pkifp This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-kms-get-key-rotation-status' AND json.rule = 'keyMetadata.keyState contains PendingDeletion'```
AWS KMS Key scheduled for deletion This policy identifies KMS Keys which are scheduled for deletion. Deleting keys in AWS KMS is destructive and potentially dangerous. It deletes the key material and all metadata associated with it and is irreversible. After a key is deleted, you can no longer decrypt the data that was encrypted under that key, which means that data becomes unrecoverable. You should delete a key only when you are sure that you don't need to use it anymore. If you are not sure, it is recommended to disable the key instead of deleting it. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: You should delete a KMS key only when you are sure that you don't need to use it anymore. To fix this alert, if you are sure you no longer need the reported KMS key, dismiss the alert. If you are not sure, consider disabling the KMS key instead of deleting it.\n\nTo re-enable KMS CMKs which are scheduled for deletion, perform the following:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to Key Management Service (KMS)\n4. Click on 'Customer managed keys' (Left Panel)\n5. Select reported KMS Customer managed key\n6. Click on 'Key actions' dropdown\n7. Click on 'Cancel key deletion'\n8. Click on 'Enable'.
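If the key is still needed, the pending deletion can be cancelled and the key re-enabled in two boto3 calls (the key ID below is a placeholder):

```python
import boto3

# Cancel the scheduled deletion, then re-enable the key for use.
kms = boto3.client("kms", region_name="us-east-1")
key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"
kms.cancel_key_deletion(KeyId=key_id)
kms.enable_key(KeyId=key_id)
```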
```config from cloud.resource where cloud.type = 'aws' and api.name = 'aws-elbv2-describe-load-balancers' AND json.rule = ['attributes'].['load_balancing.cross_zone.enabled'] is false```
AWS Elastic Load Balancer v2 (ELBv2) with cross-zone load balancing disabled This policy identifies load balancers that do not have cross-zone load balancing enabled. Cross-zone load balancing evenly distributes incoming traffic across healthy targets in all availability zones. This can help to ensure your application can manage additional traffic and limit the risk of any single availability zone getting overwhelmed and perhaps affecting load balancer performance. It is recommended to enable cross-zone load balancing. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable cross-zone load balancing, please follow the below steps:\n\n1. Log in to the AWS console.\n2. Go to the EC2 Dashboard and select 'Load Balancers'\n3. Click on the reported load balancer. Under the 'Actions' dropdown, select 'Edit load balancer attributes'.\n4. For Gateway load balancers, under 'Availability Zone routing Configuration', enable 'Cross-zone load balancing'.\n5. For Network load balancers, under 'Availability Zone routing Configuration', select the 'Enable cross-zone load balancing' option.\n6. Click on 'Save changes'..
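For Network and Gateway Load Balancers, the attribute named in the query can be set directly. A sketch; the ARN is a placeholder, and Application Load Balancers have cross-zone load balancing on by default:

```python
import boto3

# Enable cross-zone load balancing via the load balancer attribute
# checked by the policy query.
elbv2 = boto3.client("elbv2", region_name="us-east-1")
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/my-nlb/0123456789abcdef",
    Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "true"}],
)
```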
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-network-watcher-list' AND json.rule = provisioningState equals Succeeded as X; count(X) less than 1```
Azure Network Watcher not enabled This policy identifies Azure subscription regions where Network Watcher is not enabled. Azure Network Watcher provides tools to monitor, diagnose, view metrics, and enable or disable logs for resources in an Azure virtual network. Without Network Watcher enabled, you lose critical capabilities to monitor and diagnose network issues, making it difficult to identify and resolve performance bottlenecks, network security rules, and connectivity issues. As a best practice, it is recommended to enable Azure Network Watcher for your region to leverage its monitoring and diagnostic capabilities. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Note: Enabling Network Watcher will incur costs. There are additional costs per transaction to run and store network data. For high-volume networks these charges will add up quickly.\n\nTo enable Network Watcher, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/network-watcher/network-watcher-create?tabs=portal#enable-network-watcher-for-your-region.
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-load-balancer' AND json.rule = 'listenerPortsAndProtocal[*].listenerProtocal equals https and ([*].tlscipherPolicy equals tls_cipher_policy_1_0 or [*].tlscipherPolicy equals tls_cipher_policy_1_1)'```
Alibaba Cloud SLB listener is configured with SSL policy having TLS version 1.1 or lower This policy identifies Server Load Balancer (SLB) listeners which are configured with an SSL policy having TLS version 1.1 or lower. As a best security practice, use TLS 1.2 as the minimum TLS version in your load balancers' SSL security policies. This is applicable to alibaba_cloud cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Server Load Balancer\n3. Click on the reported load balancer\n4. In the 'Listeners' tab, Choose HTTPS Listener, Click on 'Configure'\n5. In the 'Configure Listener' page, Click on 'Next'\n6. In the 'SSL Certificates', Click on 'Modify' for 'Advanced' section\n7. For 'TLS Security Policy', Choose TLS 1.2 or later version policy as per your requirement.\n8. Click on 'Next'\n9. If no changes to Backend Servers and Health Check, Click on 'Next'\n10. In 'Submit' section, click on 'Submit'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = (securityContacts is empty or securityContacts[*].properties.email is empty or securityContacts[*].properties.alertsToAdmins equal ignore case Off) and pricings[?any(properties.pricingTier equal ignore case Standard)] exists```
Azure Microsoft Defender for Cloud email notification for subscription owner is not set This policy identifies Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) configurations in which email notification for subscription owners is not set. Enabling security alert emails to subscription owners ensures that they receive security alert emails from Microsoft. This ensures that they are aware of any potential security issues and can mitigate the risk in a timely fashion. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Click on 'Email notifications'\n6. In the drop down of the 'All users with the following roles' field select 'Owner'\n7. Select 'Save'.
```config from cloud.resource where api.name = 'azure-frontdoor' AND json.rule = properties.provisioningState equals Succeeded as X; config from cloud.resource where api.name = 'azure-frontdoor-waf-policy' as Y; filter '$.X.properties.frontendEndpoints[*].properties.webApplicationFirewallPolicyLink.id does not exist or ($.X.properties.frontendEndpoints[*].properties.webApplicationFirewallPolicyLink.id equal ignore case $.Y.id and $.Y.properties.policySettings.enabledState equals Disabled)'; show X;```
Azure Front Door does not have the Azure Web application firewall (WAF) enabled This policy identifies Azure Front Doors which do not have the Azure Web application firewall (WAF) enabled. As a best practice, configure the Azure WAF service on the Front Doors to protect against application-layer attacks. To block malicious requests to your Front Doors, define the block criteria in the WAF rules. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Select 'All services' > 'Front Doors'\n3. Click on the reported Front Door\n4. Click on the 'Web application firewall' from the left panel\n5. Select the frontend to attach WAF policy and Click on 'Apply Policy'\n6. In 'Associate a Waf policy' dialog, select appropriate enabled WAF policy from the 'Policy' dropdown.\n7. Click on 'Add' \n8. Click on 'Save' to save your changes.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = 'properties.state equal ignore case running and kind contains workflowapp and ((properties.publicNetworkAccess exists and properties.publicNetworkAccess equal ignore case Enabled) or (properties.publicNetworkAccess does not exist and (properties.privateLinkIdentifiers does not exist or properties.privateLinkIdentifiers is empty))) and config.ipSecurityRestrictions[?any((action equals Allow and ipAddress equals Any) or (action equals Allow and ipAddress equals 0.0.0.0/0))] exists'```
Azure Logic app configured with public network access This policy identifies Azure Logic apps that are configured with public network access. Exposing Logic Apps directly to the public internet increases the attack surface, making them more susceptible to unauthorized access, security threats, and potential breaches. By limiting Logic Apps to private network access, they are securely managed and less prone to external vulnerabilities. As a security best practice, it is recommended to configure private network access or restrict the public exposure only to the required entities instead of wide ranges. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure portal\n2. Navigate to Logic apps\n3. Click on the reported Logic app\n4. Under the 'Settings' section, click on 'Networking'\n5. On the 'Networking' page, under 'Inbound traffic configuration' section, select the 'Public network access' setting.\n6. On the 'Access Restrictions' page, review the list of access restriction rules that are defined for your app and avoid providing access to all networks.\n7. Click on 'Save'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-get-policy-version' AND json.rule = document.Statement[?any((Condition.IpAddress.aws:SourceIp contains 0.0.0.0/0 or Condition.IpAddress.aws:SourceIp contains ::/0) and Effect equals Allow and Action anyStartWith es:)] exists```
AWS Elasticsearch IAM policy overly permissive to all traffic This policy identifies Elasticsearch IAM policies that are overly permissive to all traffic. The Amazon Elasticsearch service makes it easy to deploy and manage Elasticsearch. Customers can create a domain where the service is accessible. The domain should be granted access restrictions so that only authorized users and applications have access to the service. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: 1. Log in to AWS console\n2. Go to the IAM service\n3. Click on 'Policies' in the left-hand panel\n4. Search for the Policy for which the Alert is generated and click on it\n5. Under the Permissions tab, click on Edit policy\n6. Under the Visual editor, for each of the 'Elasticsearch Service', click to expand and perform the following.\n6.a. Click to expand 'Request conditions'\n6.b. Under the 'Source IP', remove the row with the entry '0.0.0.0/0' or '::/0'. Add condition with restrictive IP ranges.\n7. Click on Review policy and Save changes..
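A rough boto3 audit that mirrors the query's logic, flagging customer-managed policies whose default version allows es: actions from 0.0.0.0/0 or ::/0. A sketch only; it inspects default policy versions and ignores non-default ones:

```python
import boto3

iam = boto3.client("iam")
for page in iam.get_paginator("list_policies").paginate(Scope="Local"):
    for pol in page["Policies"]:
        doc = iam.get_policy_version(
            PolicyArn=pol["Arn"], VersionId=pol["DefaultVersionId"]
        )["PolicyVersion"]["Document"]
        stmts = doc.get("Statement", [])
        stmts = [stmts] if isinstance(stmts, dict) else stmts
        for stmt in stmts:
            actions = stmt.get("Action", [])
            actions = [actions] if isinstance(actions, str) else actions
            cidrs = stmt.get("Condition", {}).get("IpAddress", {}).get("aws:SourceIp", [])
            cidrs = [cidrs] if isinstance(cidrs, str) else cidrs
            if (stmt.get("Effect") == "Allow"
                    and any(a.startswith("es:") for a in actions)
                    and any(c in ("0.0.0.0/0", "::/0") for c in cidrs)):
                print(f"Overly permissive policy: {pol['PolicyName']}")
```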
```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case "access" and roles[?any( role_id contains "crn:v1:bluemix:public:iam::::role:Administrator" )] exists and resources[?any( attributes[?any( name equal ignore case "serviceName" and value equal ignore case "sysdig-monitor" and operator is member of ("stringEquals", "stringMatch"))] exists and attributes[?any( name is member of ("region","resource","resourceGroupId","resourceType","serviceInstance","sysdigTeam"))] does not exist )] exists and subjects[?any( attributes[?any( name contains "iam_id" and value contains "IBMid")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-user' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.iam_id'; show Y;```
IBM Cloud user with IAM policies provide administrative privileges for Cloud Monitoring Service This policy identifies IBM Cloud users with an overly permissive IBM Cloud Monitoring Administrator role. When a user whose policy grants admin rights gets compromised, the whole service can be compromised. As a security best practice, it is recommended to grant the least privilege access, such as granting only the permissions required to perform a task, instead of providing excessive permissions. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', click on 'Users' in the left panel.\n3. Select the user which is reported and you want to edit access for.\n4. Go to 'Access' tab and under the 'Access policies' section, click on three dots on the right corner of a row for the policy which is having Administrator permission on 'IBM Cloud Monitoring' service.\n5. Click on Remove OR Edit to assign limited permission to the policy.\n6. Review the policy details that you're about to Edit/Remove, and confirm by clicking Save/Remove..
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-es-describe-elasticsearch-domain' AND json.rule = 'processing is false and (logPublishingOptions does not exist or logPublishingOptions.SEARCH_SLOW_LOGS.enabled is false or logPublishingOptions.SEARCH_SLOW_LOGS.cloudWatchLogsLogGroupArn is empty)'```
AWS Elasticsearch domain has Search slow logs set to disabled This policy identifies Elasticsearch domains for which Search slow logs is disabled in your AWS account. Enabling support for publishing Search slow logs to AWS CloudWatch Logs enables you to have full insight into the performance of search operations performed on your Elasticsearch clusters. This will help you identify performance issues caused by specific search queries so that you can optimize your queries to address the problem. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to Elasticsearch Service Dashboard\n4. Choose reported Elasticsearch domain\n5. Select the 'Logs' tab\n6. In 'Set up Search slow logs' section,\n a. click on 'Setup'\n b. In 'Select CloudWatch Logs log group' setting, Create/Use existing CloudWatch Logs log group as per your requirement\n c. In 'Specify CloudWatch access policy', Create new/Select an existing policy as per your requirement\n d. Click on 'Enable'\n\nThe search slow logs setting 'Status' should change now to 'Enabled'..
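The same change can be scripted; this sketch assumes the CloudWatch Logs log group already exists and carries a resource policy allowing the Elasticsearch service to write to it (the domain name and ARN are placeholders):

```python
import boto3

# Enable publishing of search slow logs for the reported domain.
es = boto3.client("es", region_name="us-east-1")
es.update_elasticsearch_domain_config(
    DomainName="my-domain",
    LogPublishingOptions={
        "SEARCH_SLOW_LOGS": {
            "CloudWatchLogsLogGroupArn": (
                "arn:aws:logs:us-east-1:111122223333:"
                "log-group:/aws/aes/domains/my-domain/search-slow-logs"
            ),
            "Enabled": True,
        }
    },
)
```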
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-bedrock-agent' AND json.rule = agentStatus is not member of ( "DELETING","FAILED") and guardrailConfiguration.guardrailIdentifier does not exist```
AWS Bedrock agent is not associated with Bedrock guardrails This policy identifies AWS Bedrock agents that are not associated with Bedrock guardrails. Amazon Bedrock Guardrails provides governance and compliance controls for generative AI applications, ensuring safe and responsible model use. Associating guardrails with the Bedrock agent is useful for implementing governance and compliance controls in generative AI applications. Not linking guardrails to the Bedrock agent raises the risk of non-compliance and harmful AI application outputs. It is recommended that AWS Bedrock agents be associated with Bedrock guardrails to implement safeguards and prevent unwanted behavior from model responses or user messages. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To associate the AWS Bedrock agent with a Bedrock guardrail, perform the following actions:\n\n1. Log in to the AWS console and navigate to the Amazon Bedrock console available at https://console.aws.amazon.com/bedrock/.\n2. In the navigation panel, under 'Builder tools', select 'Agents'.\n3. In the Agents, click on the agent that is reported.\n4. Click on the 'Edit in Agent Builder' button on the right corner.\n5. In the Agent builder window, under the 'Guardrail details' section click 'Edit' and select the name and version of the Amazon Bedrock guardrail created previously, or click on the link to create a new guardrail.\n6. Choose 'Save and exit' to attach the selected guardrail to your Amazon Bedrock agent..
```config from cloud.resource where api.name = 'gcloud-compute-external-backend-service' AND json.rule = logConfig.enable does not exist or logConfig.enable is false```
GCP Cloud Load Balancer HTTP(S) logging is not enabled This policy identifies GCP external HTTP(S) load balancer backend services for which logging is not enabled. Without backend service logging, requests handled by the load balancer leave no record, limiting visibility for troubleshooting, auditing, and security analysis. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-dms-replication-instance' AND json.rule = replicationInstanceStatus is not member of ('creating','deleted','deleting') and publiclyAccessible is true```
AWS DMS replication instance is publicly accessible This policy identifies AWS DMS (Database Migration Service) replication instances with public accessibility enabled. A DMS replication instance is used to connect and read the source data and prepare it for consumption by the target data store. When AWS DMS replication instances are publicly accessible, it increases the risk of unauthorized access, data breaches, and potentially malicious activities. It is recommended to disable the public accessibility of DMS replication instances to decrease the attack surface. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Public accessibility can be disabled only at the time of creation, please follow the below steps to create a new DMS replication instance:\n\n1. Sign in to the AWS Management Console and navigate to the AWS DMS console.\n2. In the navigation pane, choose 'Replication instances' and then click the 'Create replication instance' button.\n3. Under the 'Connectivity and security' section, Leave the 'Publicly accessible' option unchecked to ensure that the replication instance does not have public IP addresses or DNS names.\n4. Configure other settings based on your requirements.\n5. Click the 'Create replication instance' button to create the replication instance.\n\nTo delete the reported AWS DMS replication instance, Please follow the below steps:\n\n1. Sign in to the AWS Management Console and navigate to the AWS DMS console.\n2. In the navigation pane, choose 'Replication instances' to see a list of your existing replication instances.\n3. Select the replication instance that you want to delete from the list.\n4. After selecting the replication instance, choose 'Actions' and then 'Delete' from the menu.\n5. A confirmation dialog box will appear. Review the details and confirm that you want to delete the replication instance by selecting the 'Delete' button..
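Since public accessibility is fixed at creation time, an inventory pass helps identify which instances must be recreated; a minimal boto3 sketch:

```python
import boto3

# Report DMS replication instances created with public accessibility.
dms = boto3.client("dms", region_name="us-east-1")
for inst in dms.describe_replication_instances()["ReplicationInstances"]:
    if inst.get("PubliclyAccessible"):
        print(f"Public replication instance: {inst['ReplicationInstanceIdentifier']}")
```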
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-vertex-ai-workbench-instance' AND json.rule = state equals "ACTIVE" AND shieldedInstanceConfig.enableVtpm is false```
GCP Vertex AI Workbench Instance has vTPM disabled This policy identifies GCP Vertex AI Workbench Instances that have the Virtual Trusted Platform Module (vTPM) feature disabled. The Virtual Trusted Platform Module (vTPM) validates the guest VM's pre-boot and boot integrity and provides key generation and protection. The root keys of the vTPM, as well as the keys it generates, cannot leave the vTPM, thereby offering enhanced protection against compromised operating systems or highly privileged project administrators. It is recommended to enable the virtual TPM device on GCP Vertex AI Workbench Instances to support measured boot and other OS security features that require a TPM. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Navigate to the 'Vertex AI' service\n3. In the side panel, under 'Notebooks', go to 'Workbench'\n4. Under 'INSTANCES' tab, select 'VIEW' as 'INSTANCES'\n5. Click on the alerting instance\n6. Under the 'SYSTEM' tab, in front of 'VM details', click on the 'View in Compute Engine' link\n7. Stop the VM by clicking on the 'STOP' button. Click the 'STOP' button on the confirmation dialogue.\n8. Once the VM has been stopped, click on the 'EDIT' button\n9. Under 'Shielded VM', enable 'Turn on vTPM'\n10. Click on 'Save'\n11. Click on 'START/RESUME' from the top menu.
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-apimanagement-apigateway' AND json.rule = lifecycleState equal ignore case ACTIVE and (networkSecurityGroupIds[*] is empty or networkSecurityGroupIds[*] does not exist)```
OCI API Gateway is not configured with Network Security Groups This policy identifies API Gateways that are not configured with Network Security Groups. Network security groups give fine-grained control of resources and help in restricting network access to your private API Gateway to specific ports or a specific IP address range. As a best practice, it is recommended to restrict access to the API Gateway by configuring network security groups. This is applicable to oci cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to OCI console\n2. Go to Developer Services -> API Management -> Gateways\n3. Click on the reported Gateway\n4. Click on the 'Edit' button\nNOTE: Before you update API gateway with Network security group, make sure you have a restrictive Network Security Group already created with only specific traffic ports or with specific IP address range based on requirement.\n5. On the 'Edit gateway' dialog, select the 'Enable network security groups' and select the restrictive Network Security Group \n6. Click on the 'Save Changes' button..
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-armor-security-policy' AND json.rule = type does not equal ignore case CLOUD_ARMOR_EDGE and (rules[*].match.expr.expression does not contain cve-canary or rules[?any(match.expr.expression contains cve-canary and action equals allow)] exists)```
GCP Cloud Armor policy not configured with cve-canary rule This policy identifies GCP Cloud Armor policies where the cve-canary rule is not enabled. The preconfigured WAF rule called 'cve-canary' can help detect and block exploit attempts of CVE-2021-44228 and CVE-2021-45046 to address the Apache Log4j vulnerability. It is recommended to create a Cloud Armor security policy with a rule blocking Apache Log4j exploit attempts. Reference: https://cloud.google.com/blog/products/identity-security/cloud-armor-waf-rule-to-help-address-apache-log4j-vulnerability This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To update existing rules follow the below steps,\n\n1. Login to GCP console\n2. Navigate to 'Cloud Armor' from service 'Network Security'(Left Panel)\n3. Click on the alerted policy\n4. Click on the pencil icon on the rule to edit the rule\n5. Under 'Mode', select 'Advanced mode', add expression "evaluatePreconfiguredExpr('cve-canary')"\n6. Under 'Action', select 'Deny' to block the exploit\n7. Click on 'Update'\n\nTo add a rule follow the below steps,\n\n1. Login to GCP console\n2. Navigate to 'Cloud Armor' from service 'Network Security'(Left Panel)\n3. Click on the alerted policy\n4. Click on 'Add rule'\n5. Under 'Mode', select 'Advanced mode', add expression "evaluatePreconfiguredExpr('cve-canary')"\n6. Under 'Action', select 'Deny' to block the exploit\n7. Update other details and click on 'Add'.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-api-key' as X; count(X) greater than 0```
GCP API key is created for a project This policy identifies GCP projects where API keys are created. Keys are insecure because they can be viewed publicly, such as from within a browser, or they can be accessed on a device where the key resides. To avoid the security risk in using API keys, it is recommended to use a standard authentication flow instead. Note: There are limited cases where API keys are more appropriate. For example, if there is a mobile application that needs to use the Google Cloud Translation API, but doesn't otherwise need a backend server, API keys are the simplest way to authenticate to that API. If a business requires API keys to be used, then the API keys should be secured using appropriate IAM policies. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Use of API keys is generally considered a less secure authentication mechanism and should be avoided. A secure authentication mechanism should be used. Follow the below mentioned URL to evaluate an alternate, suitable authentication mechanism:\nhttps://cloud.google.com/endpoints/docs/openapi/authentication-method\n\nTo delete an API Key:\n1. Log in to google cloud console\n2. Navigate to section 'Credentials', under 'APIs & Services'.\n3. To delete API Key, go to 'API Keys' section, click the Actions button (three dots) in front of key name.\n4. Click on 'Delete API key' button.\n5. In the 'Delete credential' dialog, click 'DELETE' button.\n\nNote: Deleting API keys might break dependent applications. It is recommended to thoroughly review and evaluate the impact of the API key before deletion..
```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-s3api-get-bucket-acl' AND json.rule = "((((acl.grants[?(@.grantee=='AllUsers')] size > 0) or policyStatus.isPublic is true) and publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration does not exist) or ((acl.grants[?(@.grantee=='AllUsers')] size > 0) and ((publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false) or (publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false))) or (policyStatus.isPublic is true and ((publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false) or (publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false))))" as X; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Y; filter'$.X.bucketName equals $.Y.s3BucketName'; show X;```
AWS CloudTrail bucket is publicly accessible This policy identifies publicly accessible S3 buckets that store CloudTrail data. These buckets contain sensitive audit data, and only authorized users and applications should have access. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. Login to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the 'S3' resource reported in the alert\n4. Click on the 'Permissions'\n5. If 'Access Control List' is set to 'Public' follow below steps\na. Under 'Access Control List', Click on 'Everyone' and uncheck all items\nb. Click on Save\n6. If 'Bucket Policy' is set to public follow below steps\na. Under 'Bucket Policy', modify the policy to remove public access\nb. Click on Save\nc. If 'Bucket Policy' is not required delete the existing 'Bucket Policy'.\n\nNote: Make sure updating 'Access Control List' or 'Bucket Policy' does not affect S3 bucket data access..
```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-rds-describe-db-instances' AND json.rule = dbinstanceStatus contains available and dbclusterIdentifier does not exist and (engine contains postgres or engine contains mysql) and engineVersion is not member of (8.0.11, 8.0.13, 8.0.15, 9.6.1, 9.6.2, 9.6.3, 9.6.5, 9.6.6, 9.6.8, 9.6.9, 9.6.10, 10.1, 10.3, 10.4, 10.5) and iamdatabaseAuthenticationEnabled is false```
AWS RDS instance not configured with IAM authentication This policy identifies RDS instances that are not configured with IAM authentication. If you enable IAM authentication, you don't need to store user credentials in the database, because authentication is managed externally using IAM. With IAM database authentication, network traffic to and from database instances is encrypted using Secure Sockets Layer (SSL), you can centrally manage access to your database resources, and you can use profile credentials instead of a password, for greater security. For details: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html NOTE: IAM database authentication works only with MySQL and PostgreSQL. IAM database authentication is not available on all database engines; please refer to https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.html#UsingWithRDS.IAMDBAuth.Availability for available versions. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable IAM authentication follow the below mentioned URL:\nhttps://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.Enabling.html.
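The flag can also be enabled with one boto3 call. A sketch; the instance identifier is a placeholder, and ApplyImmediately should be weighed against maintenance-window policies:

```python
import boto3

# Turn on IAM database authentication for a MySQL/PostgreSQL instance.
rds = boto3.client("rds", region_name="us-east-1")
rds.modify_db_instance(
    DBInstanceIdentifier="my-postgres-instance",
    EnableIAMDatabaseAuthentication=True,
    ApplyImmediately=True,
)
```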
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cognito-identity-pool' AND json.rule = allowUnauthenticatedIdentities is true```
AWS Cognito identity pool allows unauthenticated guest access This policy identifies AWS Cognito identity pools that allow unauthenticated guest access. Unauthenticated guest access allows unauthenticated users to assume a role in your AWS account. These unauthenticated users are granted the permissions of the assumed role, which may have more privileges than intended. This could lead to unauthorized access or data leakage. It is recommended to disable unauthenticated guest access for Cognito identity pools. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To deactivate guest access in an identity pool,\n1. Log in to AWS console\n2. Navigate to the reported resource region by selecting the region from the dropdown in the top right corner.\n3. Navigate to the Amazon Cognito dashboard\n4. Under 'Identity pools' section, select the reported identity pool\n5. In the 'User access' tab, under the 'Guest access' section\n6. Click on the 'Deactivate' button to deactivate the guest access configured.\n\nNOTE: Before you deactivate unauthenticated guest access, you must have at least one authenticated access method configured in your identity pool..
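Guest access can be switched off programmatically as well; note that update_identity_pool replaces the pool configuration, so in production all fields from the describe call should be carried over, not just the two shown. The pool ID below is a placeholder:

```python
import boto3

# Disable unauthenticated (guest) identities on the reported pool.
cognito = boto3.client("cognito-identity", region_name="us-east-1")
pool = cognito.describe_identity_pool(
    IdentityPoolId="us-east-1:11111111-2222-3333-4444-555555555555"
)
cognito.update_identity_pool(
    IdentityPoolId=pool["IdentityPoolId"],
    IdentityPoolName=pool["IdentityPoolName"],
    AllowUnauthenticatedIdentities=False,
)
```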
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Tcp or protocol equals *) and (destinationPortRange contains _Port.inRange(53,53) or destinationPortRanges[*] contains _Port.inRange(53,53) ))] exists```
Azure Network Security Group allows all traffic on DNS (TCP Port 53) This policy identifies Azure Network Security Groups (NSG) that allow all traffic on DNS TCP port 53. Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict DNS solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes..
```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-iam-get-account-password-policy' AND json.rule='isDefaultPolicy is true or requireUppercaseCharacters is false or requireUppercaseCharacters does not exist'```
AWS IAM password policy does not have an uppercase character This policy identifies AWS accounts in which the IAM password policy does not require an uppercase character. AWS IAM (Identity & Access Management) allows customers to secure AWS console access. As a security best practice, customers must have strong password policies in place. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Login to the AWS console and navigate to the 'IAM' service.\n2. On the left navigation panel, Click on 'Account Settings'\n3. Check 'Require at least one uppercase letter'.\n4. Click on 'Apply password policy'.
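The same change can be scripted. A hedged boto3 sketch; note that this call replaces the entire password policy, so any setting omitted reverts to its default, and the values below are illustrative (the symbol flag shown also covers the related symbol-policy check later in this document):

```
# Hedged sketch: enforce a strong account password policy, including at
# least one uppercase letter. Adjust values to your own standard.
import boto3

iam = boto3.client("iam")

iam.update_account_password_policy(
    MinimumPasswordLength=14,         # example value
    RequireUppercaseCharacters=True,  # fixes this finding
    RequireLowercaseCharacters=True,
    RequireNumbers=True,
    RequireSymbols=True,              # also covers the symbol-policy check
)
```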
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-vm-list' AND json.rule = powerState equal ignore case "PowerState/running" and ['properties.storageProfile'].['imageReference'].['publisher'] equal ignore case microsoftsqlserver and (['properties.osProfile'].['linuxConfiguration'] exists and ['properties.osProfile'].['linuxConfiguration'].['disablePasswordAuthentication'] is false)```
Azure SQL on Virtual Machine (Linux) with basic authentication This policy identifies Azure Virtual Machines (Linux) that host SQL Server and use basic (password) authentication. Azure Virtual Machines with basic authentication could allow attackers to brute force and gain access to the SQL database hosted on them, which might lead to information leakage. It is recommended to use SSH keys for authentication to avoid brute force attacks on SQL database hosted virtual machines. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure an existing Azure Virtual machine with SSH key authentication, follow the URL below:\nhttps://learn.microsoft.com/en-us/azure/virtual-machines/extensions/vmaccess#update-ssh-key\n\nIf the changes do not take effect, you may need to take a backup, create a new virtual machine with SSH key-based authentication, and delete the reported virtual machine..
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(3389,3389) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists```
GCP Firewall rule allows all traffic on RDP port (3389) This policy identifies GCP Firewall rules which allow all inbound traffic on RDP port (3389). Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended that the RDP port (3389) should be allowed to specific IP addresses. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If the reported Firewall rule should not allow all inbound traffic, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to the 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IP\n7. Click on 'SAVE'..
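The source-range change can also be applied programmatically. A hedged sketch using the google-cloud-compute Python client; the project, rule name, and replacement CIDR are placeholders:

```
# Hedged sketch: replace the open source range on a firewall rule with a
# specific CIDR. Only fields supplied in the patch body are changed.
from google.cloud import compute_v1

client = compute_v1.FirewallsClient()

patch = compute_v1.Firewall(source_ranges=["203.0.113.0/24"])  # placeholder CIDR
op = client.patch(project="my-project", firewall="allow-rdp", firewall_resource=patch)
op.result()  # wait for the operation to complete
```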
```config from cloud.resource where finding.source = 'AWS Inspector' AND finding.type = 'AWS Inspector Security Best Practices'```
PCSUP-23654 This policy surfaces findings reported by AWS Inspector with the finding type 'AWS Inspector Security Best Practices'. This is applicable to all cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-iam-service-accounts-list' AND json.rule = ( iamPolicy.bindings[*].members contains "allUsers" or iamPolicy.bindings[*].members contains "allAuthenticatedUsers" ) and ( disabled does not exist or disabled is false )```
GCP Service account is publicly accessible This policy identifies GCP Service accounts that are publicly accessible. GCP Service accounts are intended to be used by an application or compute workload, rather than a person. A service account can be granted permission to perform actions in the GCP project as any other GCP user. Allowing access to 'allUsers' or 'allAuthenticatedUsers' over a service account would allow unwanted access to the public and could lead to a security breach. As a security best practice, follow the Principle of Least Privilege and grant permissions to entities only on a need basis. It is recommended to avoid granting permission to 'allUsers' or 'allAuthenticatedUsers'. This is applicable to gcp cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To revoke access from 'allUsers'/'allAuthenticatedUsers', follow the below mentioned steps:\n1. Login to the GCP console\n2. Navigate to the 'IAM and Admin' service (Left Panel)\n3. Go to 'Service Accounts'\n4. Click on the alerting service account\n5. Under the 'PERMISSIONS' tab, select the 'VIEW BY PRINCIPALS' tab\n6. Select the entries with 'allUsers' or 'allAuthenticatedUsers'\n7. Click on 'REMOVE ACCESS' to revoke access from 'allUsers'/'allAuthenticatedUsers'\n8. Click on 'CONFIRM'.
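For many service accounts, removing the public principals is easier through the IAM API. A hedged sketch using google-api-python-client; the service account email is a placeholder and application default credentials are assumed:

```
# Hedged sketch: strip allUsers / allAuthenticatedUsers bindings from a
# service account's IAM policy.
from googleapiclient import discovery

iam = discovery.build("iam", "v1")
resource = "projects/-/serviceAccounts/my-sa@my-project.iam.gserviceaccount.com"

policy = iam.projects().serviceAccounts().getIamPolicy(resource=resource).execute()
for binding in policy.get("bindings", []):
    binding["members"] = [
        m for m in binding["members"]
        if m not in ("allUsers", "allAuthenticatedUsers")
    ]
iam.projects().serviceAccounts().setIamPolicy(
    resource=resource, body={"policy": policy}
).execute()
```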
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ecs-cluster' AND json.rule = status equals ACTIVE and settings[?any(name equals containerInsights and value equals disabled)] exists```
AWS ECS cluster with container insights feature disabled This policy identifies ECS clusters that have the container insights feature disabled. Container Insights collects metrics at the cluster, task, and service levels. As a best practice, enable container insights to start collecting this data for the reported ECS cluster. For details: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/deploy-container-insights-ECS-cluster.html This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable the container insights feature on your existing ECS cluster follow the below mentioned URL:\n\nhttps://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/deploy-container-insights-ECS-cluster.html#deploy-container-insights-ECS-existing.
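A hedged boto3 sketch for the same change; the cluster name is a placeholder:

```
# Hedged sketch: enable Container Insights on an existing ECS cluster.
import boto3

ecs = boto3.client("ecs")

ecs.update_cluster_settings(
    cluster="my-cluster",  # placeholder
    settings=[{"name": "containerInsights", "value": "enabled"}],
)
```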
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = settings.backupConfiguration.enabled is false and instanceType is not member of ("READ_REPLICA_INSTANCE","ON_PREMISES_INSTANCE")```
GCP SQL database instance is not configured with automated backups This policy identifies the GCP SQL database instances that are not configured with automated backups. Automated backups need to be set for any instance that contains data that should be protected from loss or damage. It is recommended to have all SQL database instances set to enable automated backups. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Console\n2. Navigate to 'SQL'\n3. Click on the reported SQL instance\n4. From the left menu go to 'Backups'\n5. Go to section 'Settings', click on 'EDIT'\n6. From the pop-up window 'Edit backups settings' click on 'Automated backups'\n7. Provide a time window from the available dropdown\n8. Click on 'Save'.
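Automated backups can also be enabled through the Cloud SQL Admin API. A hedged sketch using google-api-python-client; the project, instance, and start time are placeholders. The same settings-patch pattern applies to database flags such as the log_temp_files entry later in this document:

```
# Hedged sketch: enable automated backups on a Cloud SQL instance. patch
# merges the supplied settings into the existing configuration.
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1")

sqladmin.instances().patch(
    project="my-project",    # placeholder
    instance="my-instance",  # placeholder
    body={"settings": {"backupConfiguration": {"enabled": True, "startTime": "03:00"}}},
).execute()
```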
```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-iam-get-account-password-policy' AND json.rule='isDefaultPolicy is true or requireSymbols equals null or requireSymbols is false or requireSymbols does not exist'```
AWS IAM password policy does not have a symbol This policy checks that the IAM password policy requires at least one symbol. AWS IAM (Identity & Access Management) allows customers to secure AWS console access. As a security best practice, customers must have strong password policies in place. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Login to the AWS console and navigate to the 'IAM' service.\n2. On the left navigation panel, Click on 'Account Settings'\n3. Check 'Require at least one non-alphanumeric character'.\n4. Click on 'Apply password policy'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = 'config.isPythonVersionLatest exists and config.isPythonVersionLatest equals false'```
Azure App Service Web app doesn't use latest Python version This policy identifies App Service Web apps that are not configured with the latest Python version. Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. It is recommended to use the latest Python version for web apps in order to take advantage of security fixes, if any. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Navigate to 'App Services' dashboard\n3. Select the reported web app service\n4. Under 'Settings' section, Click on 'Configuration'\n5. Click on 'General settings' tab, ensure that Stack is set to Python and the Minor version is set to the latest version.\n6. Click on 'Save'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = "location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals equals Microsoft.Network/networkSecurityGroups/delete" as X; count(X) less than 1```
Azure Activity log alert for Delete network security group does not exist This policy identifies the Azure accounts in which an activity log alert for 'Delete network security group' does not exist. Creating an activity log alert for deleting network security groups gives insight into network access changes and may reduce the time it takes to detect suspicious activity. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Click on Monitor (Left Panel)\n3. Select 'Alerts'\n4. Click on Create > Alert rule\n5. On the 'Create an alert rule' page, choose the Scope as your Subscription and under the CONDITION section, choose 'Delete Network Security Group (Microsoft.Network/networkSecurityGroups)'; other fields can be set based on your requirements.\n6. Click on Create.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-vertex-ai-notebook-instance' AND json.rule = state equals "ACTIVE" and ( metadata.proxy-mode equals "mail" or metadata.proxy-user-mail exists )```
GCP Vertex AI Workbench user-managed notebook's JupyterLab interface access mode is set to single user This policy identifies GCP Vertex AI Workbench user-managed notebooks with the JupyterLab interface access mode set to single user. A Vertex AI Workbench user-managed notebook can be accessed using the web-based JupyterLab interface, and the access mode controls who can access this interface. Allowing access to only a single user could limit collaboration, increase the chances of credential sharing, and hinder security audits and reviews of the resource. It is recommended to avoid single user access and make use of the service account access mode for user-managed notebooks. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Access mode cannot be changed for existing Vertex AI Workbench user-managed notebooks. A new Vertex AI Workbench user-managed notebook should be created.\n\nTo create a new Vertex AI Workbench user-managed notebook with access mode set to service account, please refer to the steps below:\n1. Login to the GCP console\n2. Under 'Vertex AI', navigate to the 'Workbench' (Left Panel)\n3. Select 'USER-MANAGED NOTEBOOKS' tab\n4. Click 'CREATE NEW'\n5. Click 'ADVANCED OPTIONS'\n6. Configure the instance as required\n7. Go to 'IAM and security' tab\n8. Select 'Service account'\n9. Click 'CREATE'.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = "databaseVersion contains POSTGRES and (settings.databaseFlags[?(@.name=='log_temp_files')] does not exist or settings.databaseFlags[?(@.name=='log_temp_files')].value does not equal 0)"```
GCP PostgreSQL instance database flag log_temp_files is not set to 0 This policy identifies PostgreSQL database instances in which the database flag log_temp_files is not set to 0. The log_temp_files flag controls the logging of the names and sizes of temporary files. Configuring log_temp_files to 0 causes all temporary file information to be logged, while positive values log only files whose size is greater than or equal to the specified number of kilobytes. A value of -1 disables temporary file information logging. If all temporary files are not logged, it may be more difficult to identify potential performance issues caused by either poor application coding or deliberate resource starvation attempts. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate SQL Instances page\n3. Click on reported PostgreSQL instance\n4. Click EDIT\nNOTE: If the instance is stopped, You need to START instance first to edit the configurations, then Click on EDIT.\n5. If the flag has not been set on the instance, \nUnder 'Configuration options', click on 'Add item' in 'Flags' section, choose the flag 'log_temp_files' from the drop-down menu and set the value as '0'\nOR\nIf the flag has been set to other than 0, Under 'Configuration options', In 'Flags' section choose the flag 'log_temp_files' and set the value as '0'\n6. Click Save.
```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-lambda-list-functions' AND json.rule = authType equal ignore case NONE```
Copy of PCSUP-16458-CLI-Test This policy identifies Lambda functions whose function URL auth type is set to 'NONE', allowing unauthenticated invocation. This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Select the region, from the region drop-down, for which the alert is generated\n3. Navigate to AWS Lambda Dashboard\n4. Click on the 'Functions' (Left panel)\n5. Select the lambda function on which the alert is generated\n6. Go to 'Configuration' tab\n7. Select 'Function URL'\n8. Click on 'Edit'\n9. Set 'Auth type' to 'AWS_IAM'\n10. Click on 'Save'.
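A hedged boto3 sketch for the same change; the function name is a placeholder:

```
# Hedged sketch: require IAM authentication on a Lambda function URL.
import boto3

lam = boto3.client("lambda")

lam.update_function_url_config(
    FunctionName="my-function",  # placeholder
    AuthType="AWS_IAM",
)
```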
```config from cloud.resource where cloud.type = 'aws' and api.name = 'aws-iam-get-policy-version' AND json.rule = isAttached is true and document.Statement[?any(Effect equals Allow and Action contains sts:* and Resource equals * and Condition does not exist)] exists```
AWS IAM policy overly permissive to STS services This policy identifies the IAM policies that are overly permissive to STS services. AWS Security Token Service (AWS STS) is a web service that enables you to request temporary credentials for AWS Identity and Access Management (IAM) users or for users that you authenticate (federated users). It is recommended to follow the principle of least privilege, ensuring that policies grant only the required STS actions on specific resources. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the 'IAM' service\n3. Click on the 'Policies' in left hand panel and Click on the reported IAM policy\n4. Under Permissions tab, change the policy document to be more restrictive so that it only allows required STS permissions on selected resources instead of wildcards (sts:* and Resource:*), OR add a condition statement enforcing least privilege access..
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-list-ssh-public-keys' AND json.rule = '(_DateTime.ageInDays($.uploadDate) > 91) and status==Active'```
AWS IAM SSH keys for AWS CodeCommit have aged more than 90 days without being rotated This policy identifies all of your IAM SSH public keys which haven't been rotated in the past 90 days. It is recommended to verify that they are rotated on a regular basis in order to protect your AWS CodeCommit repositories. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to AWS Console\n2. Go to IAM and select Users\n3. Choose the reported user\n4. Go to 'Security credentials'\n5. Delete the SSH key ID and upload a new SSH key\nKey creation steps: https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-ssh-unixes.html.
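Stale keys can also be found and deactivated with a short script. A hedged boto3 sketch with a placeholder user name; upload a replacement key before deactivating the old one:

```
# Hedged sketch: deactivate active IAM SSH public keys older than 90 days.
from datetime import datetime, timezone

import boto3

iam = boto3.client("iam")
user = "codecommit-user"  # placeholder

for key in iam.list_ssh_public_keys(UserName=user)["SSHPublicKeys"]:
    age = datetime.now(timezone.utc) - key["UploadDate"]
    if key["Status"] == "Active" and age.days > 90:
        iam.update_ssh_public_key(
            UserName=user, SSHPublicKeyId=key["SSHPublicKeyId"], Status="Inactive"
        )
```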
```config from cloud.resource where cloud.type = 'azure' and api.name= 'azure-storage-account-list' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.allowSharedKeyAccess is true and properties.sasPolicy does not exist```
Azure Storage account not configured with SAS expiration policy This policy identifies Azure Storage accounts not configured with a SAS expiration policy. A Shared Access Signature (SAS) expiration policy specifies a recommended interval over which the SAS is valid. SAS expiration policies apply to a service SAS or an account SAS. When a user generates a service SAS or an account SAS with a validity interval larger than the recommended interval, they'll see a warning. If Azure Storage logging with Azure Monitor is enabled, an entry is written to the Azure Storage logs. It is recommended that you limit the interval for a SAS in case it is compromised. For more details: https://learn.microsoft.com/en-us/azure/storage/common/sas-expiration-policy This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure an expiration policy for shared access signatures for the reported Storage account, follow the URL below:\nhttps://learn.microsoft.com/en-us/azure/storage/common/sas-expiration-policy.
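A hedged sketch using the azure-mgmt-storage Python SDK; the subscription, resource group, and account names are placeholders, and the period format is days.hours:minutes:seconds:

```
# Hedged sketch: set a one-day SAS expiration policy on a storage account.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.storage_accounts.update(
    "my-resource-group",  # placeholder
    "mystorageaccount",   # placeholder
    {"sas_policy": {"sas_expiration_period": "1.00:00:00", "expiration_action": "Log"}},
)
```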
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-cloud-function-v2' AND json.rule = state equals "ACTIVE" AND environment equals "GEN_1" AND serviceConfig.securityLevel exists AND serviceConfig.securityLevel does not equal "SECURE_ALWAYS"```
GCP Cloud Function v1 is using unsecured HTTP trigger This policy identifies GCP Cloud Functions v1 that are using an unsecured HTTP trigger. Using HTTP triggers for cloud functions poses significant security risks, including vulnerability to interception, tampering, and various attacks like man-in-the-middle. Conversely, HTTPS triggers provide encrypted communication, safeguarding sensitive data and ensuring confidentiality. HTTPS also supports authentication mechanisms, enhancing overall security and trust. It is recommended to enable 'Require HTTPS' on the HTTP trigger of all v1 cloud functions. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Login to GCP console\n2. Navigate to 'Cloud Functions' service\n3. Click on the alerting function\n4. Click on 'EDIT'\n5. Under section 'Trigger', click on 'EDIT' for HTTP trigger\n6. Select the checkbox against the field 'Require HTTPS'\n7. Click on 'SAVE'\n8. Click on 'NEXT'\n9. Click on 'DEPLOY'.
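The same setting can be patched through the Cloud Functions v1 API. A hedged sketch using google-api-python-client; the function's full resource name is a placeholder:

```
# Hedged sketch: force HTTPS on a Gen 1 Cloud Function's HTTP trigger.
from googleapiclient import discovery

functions = discovery.build("cloudfunctions", "v1")
name = "projects/my-project/locations/us-central1/functions/my-function"  # placeholder

functions.projects().locations().functions().patch(
    name=name,
    updateMask="httpsTrigger.securityLevel",
    body={"httpsTrigger": {"securityLevel": "SECURE_ALWAYS"}},
).execute()
```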
```config from cloud.resource where api.name = 'gcloud-projects-get-iam-user' AND json.rule = user contains "compute@developer.gserviceaccount.com" and roles[*] contains "roles/editor" as X; config from cloud.resource where api.name = 'gcloud-cloud-run-services-list' AND json.rule = spec.template.spec.serviceAccountName contains "compute@developer.gserviceaccount.com" as Y; filter ' $.X.user equals $.Y.spec.template.spec.serviceAccountName '; show Y; ```
GCP Cloud Run service is using default service account with editor role This policy identifies GCP Cloud Run services that are utilizing the default service account with the editor role. When you create a new Cloud Run service, the Compute Engine default service account is associated with the service by default if no other service account is configured. The Compute Engine default service account is automatically created when the Compute Engine API is enabled and is granted the IAM basic Editor role unless you have explicitly disabled this behavior. These permissions can be exploited to get admin access to the GCP project. To be compliant with the principle of least privilege and to prevent potential privilege escalation, it is recommended that Cloud Run services are not assigned the Compute Engine default service account, especially when the editor role is granted to that service account. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: It is not possible to change the service account of an existing revision of a Cloud Run service. To update the service account used, a new revision can be deployed.\n\nTo deploy a new service with a user-managed service account, please refer to the URL given below:\nhttps://cloud.google.com/run/docs/securing/service-identity#deploying_a_new_service_with_a_user-managed_service_account.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-instances-list' AND json.rule = status equals RUNNING and name does not start with "gke-" and (shieldedInstanceConfig does not exist or shieldedInstanceConfig.enableSecureBoot is false )```
GCP VM instance with Shielded VM Secure Boot disabled This policy identifies GCP VM instances that have Shielded VM Secure Boot disabled. Secure Boot is a security feature that ensures only trusted, digitally signed software runs during the boot process of a computer. Enabling it helps protect against malware and unauthorized software by verifying the integrity of the bootloader and operating system. Without Secure Boot, systems are vulnerable to rootkits, bootkits, and other malicious code that can compromise the system from the start, making it difficult to detect and remove such threats. It is recommended to enable Shielded VM secure boot for GCP VM instances. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate to 'Compute Engine' and then 'VM instances'\n3. Click on the reported VM name\n4. Stop the VM by clicking on the 'STOP' button. Click the 'STOP' button on the confirmation dialogue\n5. Once the VM has been stopped, click on the 'EDIT' button\n6. Under 'Shielded VM', enable 'Turn on Secure Boot'\n7. Click on 'Save'\n8. Click on 'START/RESUME' from the top menu..
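After the instance is stopped, Secure Boot can also be enabled via the Compute API. A hedged sketch using the google-cloud-compute Python client; the project, zone, and instance names are placeholders:

```
# Hedged sketch: enable Secure Boot on a stopped VM instance.
from google.cloud import compute_v1

client = compute_v1.InstancesClient()

op = client.update_shielded_instance_config(
    project="my-project",    # placeholder
    zone="us-central1-a",    # placeholder
    instance="my-instance",  # placeholder
    shielded_instance_config_resource=compute_v1.ShieldedInstanceConfig(
        enable_secure_boot=True
    ),
)
op.result()  # wait for the zonal operation, then restart the VM
```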
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = shieldedNodes.enabled does not exist or shieldedNodes.enabled equals "false"```
GCP Kubernetes cluster Shielded GKE Nodes feature disabled This policy identifies GCP Kubernetes clusters for which the Shielded GKE Nodes feature is not enabled. Shielded GKE nodes protect clusters against boot- or kernel-level malware or rootkits that persist beyond the infected OS. It is recommended to enable Shielded GKE Nodes for all Kubernetes clusters. For more information: https://cloud.google.com/kubernetes-engine/docs/how-to/shielded-gke-nodes This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Google cloud console\n2. Navigate to Google Kubernetes Engine, click on 'Clusters' to get the list\n3. Browse the alerted cluster\n4. Click on the 'Edit' button on top\n5. From the drop-down for 'Shielded GKE Nodes' select 'Enable'\n6. Click on 'Save'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = vulnerabilityAssessments[*].properties.storageContainerPath exists and vulnerabilityAssessments[*].properties.recurringScans.emailSubscriptionAdmins is false```
Azure SQL Server ADS Vulnerability Assessment 'Also send email notifications to admins and subscription owners' is disabled This policy identifies Azure SQL Servers which have the ADS Vulnerability Assessment setting 'Also send email notifications to admins and subscription owners' disabled. This setting enables sending ADS - VA scan reports to admins and subscription owners. It is recommended to enable the 'Also send email notifications to admins and subscription owners' setting, which helps reduce the time required for identifying risks and taking corrective measures. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Select 'SQL servers', and select the SQL server you need to modify\n3. Click on 'Microsoft Defender for Cloud' under 'Security'\n4. Click on '(Configure)' next to 'Microsoft Defender for SQL: Enabled at the server-level'\n5. In 'VULNERABILITY ASSESSMENT SETTINGS' section, Ensure 'Also send email notifications to admins and subscription owners' is checked\n6. 'Save' your changes.
```config from cloud.resource where cloud.type = 'azure' AND cloud.accountgroup NOT IN ( 'PCF Azure') AND api.name = 'azure-vm-list' AND json.rule = ['properties.storageProfile'].['osDisk'].['vhd'].['uri'] exists```
RomanPolicy This policy identifies Azure Virtual Machines whose OS disk is an unmanaged disk (backed by a VHD blob URI). This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-instances-list' AND json.rule = (status equals RUNNING and name does not start with "gke-") and serviceAccounts[*].email contains "compute@developer.gserviceaccount.com" and serviceAccounts[*].scopes[*] any equal "https://www.googleapis.com/auth/cloud-platform"```
GCP VM instance using a default service account with Cloud Platform access scope This policy identifies the GCP VM instances that are using a default service account with the cloud-platform access scope. To comply with the principle of least privilege and prevent potential privilege escalation, it is recommended that instances are not assigned the default 'Compute Engine default service account' with the 'cloud-platform' scope. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the GCP portal\n2. Go to Compute Engine\n3. Choose VM instances\n4. Click on the reported VM instance for which you want to change the service account\n5. If the instance is not stopped, click the 'Stop' button. Wait for the instance to be stopped\n6. Next, click the 'Edit' button\n7. Scroll down to the 'Service Account' section; from the drop-down menu, select the desired service account\n8. Ensure 'Allow full access to all Cloud APIs' is not selected or 'Cloud Platform' under 'Set access for each API' is not enabled\n9. Click the 'Save' button and then click 'START' to start the VM instance..