query | description |
---|---|
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-postgresql-deployment-info' AND json.rule = deployment.platform_options.disk_encryption_key_crn is empty``` | IBM Cloud PostgreSQL Database disk encryption is not enabled with customer managed keys
This policy identifies IBM Cloud PostgreSQL Databases with default disk encryption. Using customer managed keys gives customers significantly greater control, because the keys are managed by the customer. It is recommended to use customer managed keys for disk encryption, which provides customer control over the lifecycle of the keys.
This is applicable to ibm cloud and is considered an informational severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: PostgreSQL database disk encryption can be enabled with Customer managed keys only at the time of\ncreation.\n\nUse the link below to grant PostgreSQL service to KMS service authorization if it is not authorized already:\nhttps://cloud.ibm.com/docs/cloud-databases?topic=cloud-databases-key-protect&interface=ui#granting-service-auth\n\nUse the link below to provision a KMS instance with a key to use for encryption if one is not provisioned:\nhttps://cloud.ibm.com/docs/key-protect?topic=key-protect-getting-started-tutorial#create-keys\n\nFollow the steps below to create a new PostgreSQL deployment from a backup of the vulnerable PostgreSQL deployment:\n1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'Resource list'; from the list of resources, select the PostgreSQL database reported in the alert.\n3. In the left navigation pane, navigate to 'Backups and restore'; under the 'Available Backups' section, click on 'Create backup' to get the latest backup of the database.\n4. Under the 'Available Backups' tab, click on the three dots on the right corner of the row containing the latest backup and click on 'Restore backup'.\n5. On the 'Create a new Database for PostgreSQL from backup' page, select all the configuration as per the requirement.\n6. Under the 'Encryption' section, under 'KMS Instance', select a KMS instance and a key from the instance to use for encryption.\n7. Click on 'Restore backup'.\n\nFollow the steps below to delete the reported database deployment:\n1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'Resource list'.\n3. Select your deployment. Next, using the stacked three-dot menu icon, choose 'Delete' from the drop-down list. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-eks-describe-cluster' as X; config from cloud.resource where api.name = 'aws-ec2-describe-vpcs' as Y; filter ' $.X.resourcesVpcConfig.vpcId contains $.Y.vpcId and $.Y.isDefault is true'; show X;``` | AWS EKS cluster using the default VPC
This policy identifies AWS EKS clusters which are configured with the default VPC. It is recommended to use a VPC configuration based on your security and networking requirements. You should create your own EKS VPC instead of using the default, so that you can have full control over the cluster network.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: An AWS EKS cluster VPC cannot be changed once it is created. To resolve this alert, create a new cluster with the custom VPC as per your requirements, then migrate all required cluster data from the reported cluster to this newly created cluster and delete the reported Kubernetes cluster.\n\n1. Open the Amazon EKS dashboard.\n2. Choose Create cluster.\n3. On the Create cluster page, fill in the following fields:\n\n- Cluster name\n- Kubernetes version\n- Role name\n- VPC - Choose your new custom VPC.\n- Subnets\n- Security Groups\n- Endpoint private access\n- Endpoint public access\n- Logging\n\n4. Choose Create.. |
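As an alternative to the console steps above, cluster creation can be scripted. The following is a minimal sketch using the AWS SDK for Python (boto3), assuming an illustrative region, cluster role ARN, and subnet/security-group IDs taken from your custom (non-default) VPC:

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")  # assumed region

# All identifiers below are hypothetical; replace them with resources from your custom VPC.
response = eks.create_cluster(
    name="my-custom-vpc-cluster",
    version="1.29",
    roleArn="arn:aws:iam::111122223333:role/eks-cluster-role",
    resourcesVpcConfig={
        "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
        "securityGroupIds": ["sg-cccc3333"],
        "endpointPublicAccess": False,
        "endpointPrivateAccess": True,
    },
)
print(response["cluster"]["status"])  # typically "CREATING"
```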
```config from cloud.resource where api.name = 'gcloud-logging-metric' as X; config from cloud.resource where api.name = 'gcloud-monitoring-policies-list' as Y; filter '$.Y.conditions[*].metricThresholdFilter contains $.X.name and ($.X.filter contains "protoPayload.methodName=" or $.X.filter contains "protoPayload.methodName =") and ($.X.filter does not contain "protoPayload.methodName!=" and $.X.filter does not contain "protoPayload.methodName !=") and $.X.filter contains "cloudsql.instances.update"'; show X; count(X) less than 1``` | GCP Log metric filter and alert does not exist for SQL instance configuration changes
This policy identifies the GCP account which does not have a log metric filter and alert for SQL instance configuration changes. Monitoring SQL instance configuration activities will help reduce the time to detect and correct misconfigurations on the SQL server. It is recommended to create a metric filter and alarm to detect activities related to SQL instance configuration.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign in to GCP Console\n2. Navigate to 'Logs-based metrics' under the 'Logging' section.\n3. Click on 'CREATE METRIC'.\n4. Provide 'Metric Type' and 'Details'.\n5. In 'Filter selection', add filter as \nprotoPayload.methodName="cloudsql.instances.update"\n6. Click on 'CREATE METRIC'.\n7. Under 'User-defined metrics' section, choose the metric you created in step 6 and click on the kebab menu (Vertical 3 dots) on the right side of the metrics\n8. Click on 'Create alert from metric'; it will navigate to 'Create alerting policy' under the section 'Monitoring'.\n9. Add the metric name created above if not auto-filled in the Monitoring filter. Choose an appropriate value for other alert condition parameters as desired. Then Click on 'NEXT'\n10. Configure all alert trigger settings as desired. Then Click on 'NEXT'\n11. Configure notifications as desired and provide an appropriate name for the alert policy. Then Click on 'NEXT'\n12. Click on 'CREATE POLICY'.. |
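If you prefer to script steps 1-6, the logs-based metric can be created with the google-cloud-logging client; the alerting policy from steps 8-12 still needs to be configured in Cloud Monitoring. A sketch, with the metric name below as an illustrative placeholder:

```python
from google.cloud import logging

client = logging.Client()  # uses the active project from the environment

# Metric name is illustrative; the filter matches Cloud SQL instance configuration changes.
metric = client.metric(
    "sql-instance-config-changes",
    filter_='protoPayload.methodName="cloudsql.instances.update"',
    description="Counts Cloud SQL instance configuration changes",
)
if not metric.exists():
    metric.create()
```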
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-kubernetes-cluster' AND json.rule = properties.enableRBAC is false``` | Azure AKS enable role-based access control (RBAC) not enforced
To provide granular filtering of the actions that users can perform, Kubernetes uses role-based access controls (RBAC). This control mechanism lets you assign users, or groups of users, permission to do things like create or modify resources, or view logs from running application workloads. These permissions can be scoped to a single namespace, or granted across the entire AKS cluster.
This policy checks your AKS cluster RBAC setting and alerts if disabled.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To create a new AKS cluster with RBAC enabled, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/aks/manage-azure-rbac#create-a-new-cluster-using-azure-rbac-and-managed-azure-ad-integration. |
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any((sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and protocol equals Tcp and access equals Allow and direction equals Inbound and destinationPortRange contains *)] exists``` | Azure Network Security Group having Inbound rule overly permissive to all traffic on TCP protocol
This policy identifies Azure Network Security Groups (NSGs) which are overly permissive to all traffic on TCP protocol. A network security group contains a list of security rules that allow or deny inbound or outbound network traffic based on source or destination IP address, port, and protocol. As a best practice, it is recommended to configure NSGs to restrict traffic from known sources, allowing only authorized protocols and ports.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses and Port ranges OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-list-attached-user-policies' AND json.rule = attachedPolicies[*].policyArn contains "arn:aws:iam::aws:policy/AmazonElasticTranscoderFullAccess"``` | AWS IAM deprecated managed policies in use by User
This policy checks for any usage of deprecated AWS IAM managed policies and returns an alert if it finds one in your cloud resources.
When AWS deprecates an IAM managed policy, a new alternative is released with improved access restrictions. Existing IAM users and roles can continue to use the previous policy without interruption; however, new IAM users and roles will use the new replacement policy.
Before you migrate any user or role to the new replacement policy, we recommend you review their differences in the Policy section of AWS IAM console. If you require one or more of the removed permissions, please add them separately to any user or role.
List of deprecated AWS IAM managed policies:
AmazonElasticTranscoderFullAccess (replaced by AmazonElasticTranscoder_FullAccess)
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['UNUSED_PRIVILEGES'].
Mitigation of this issue can be done as follows: 1. Go to the AWS console IAM dashboard.\n2. Click Policies on the left navigation menu.\n3. Enter the deprecated IAM policy name into the filter.\n4. Click on the policy name.\n5. Select the Policy usage tab.\n6. Check all attached users, make note of them, then select Detach.\n7. Click Policies on the left navigation menu.\n8. Enter the new IAM policy name into the filter.\n9. Click on the policy name.\n10. Select the Policy usage tab.\n11. Select Attach and check all the users you made a note of.\n12. Click Attach policy.. |
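The detach/attach steps above can also be performed with boto3. A sketch that swaps the deprecated policy for its replacement on every attached user; the policy ARNs mirror the ones named in the description:

```python
import boto3

iam = boto3.client("iam")

DEPRECATED = "arn:aws:iam::aws:policy/AmazonElasticTranscoderFullAccess"
REPLACEMENT = "arn:aws:iam::aws:policy/AmazonElasticTranscoder_FullAccess"

# List every user the deprecated policy is attached to, then swap it for the replacement.
paginator = iam.get_paginator("list_entities_for_policy")
for page in paginator.paginate(PolicyArn=DEPRECATED, EntityFilter="User"):
    for user in page["PolicyUsers"]:
        iam.detach_user_policy(UserName=user["UserName"], PolicyArn=DEPRECATED)
        iam.attach_user_policy(UserName=user["UserName"], PolicyArn=REPLACEMENT)
```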
```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-s3api-get-bucket-acl' AND json.rule = (sseAlgorithm contains "aws:kms" or sseAlgorithm contains "aws:kms:dsse") and kmsMasterKeyID exists as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' AND json.rule = keyMetadata.keyState equals Enabled and keyMetadata.keyManager equals CUSTOMER and policies.default.Statement[?any((Principal.AWS equals * or Principal equals *)and Condition does not exist)] exists as Y; filter '$.X.kmsMasterKeyID contains $.Y.key.keyArn' ; show X;``` | AWS S3 bucket encrypted using Customer Managed Key (CMK) with overly permissive policy
This policy identifies Amazon S3 buckets that use Customer Managed Keys (CMKs) for encryption where the key policy is overly permissive. An overly permissive S3 bucket encryption key policy can result in the exposure of sensitive data and potential compliance violations. As a security best practice, it is recommended to follow the principle of least privilege, ensuring that the KMS key policy does not grant permissions broad enough to complete a malicious action.
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: The following steps are recommended to add changes to existing key policy of the KMS key used by the S3 bucket\n1. Log in to the AWS Console and navigate to the 'S3' service.\n2. Click on the S3 bucket reported in the alert.\n3. Click on the 'Properties' tab.\n4. Under the 'Default encryption' section, click on the KMS key link in 'Encryption key ARN'.\n5. Click on the 'Key policy' tab on the navigated KMS key window.\n6. Click on 'Edit'.\n7. Replace the 'Everyone' grantee (i.e. '*') from the Principal element value with an AWS account ID or an AWS account ARN.\n OR \nAdd a Condition clause to the existing policy statement so that the KMS key is restricted.\n8. Click on 'Save Changes'.. |
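Step 7 can equivalently be done through the KMS API by rewriting the key policy document. A hedged boto3 sketch, where the key ID and the account principal are placeholders to replace with your own values:

```python
import json
import boto3

kms = boto3.client("kms")
key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"  # placeholder CMK ID

policy = json.loads(kms.get_key_policy(KeyId=key_id, PolicyName="default")["Policy"])
for stmt in policy["Statement"]:
    # Replace the wildcard principal with a specific account ARN (placeholder account ID).
    if stmt.get("Principal") in ("*", {"AWS": "*"}) and "Condition" not in stmt:
        stmt["Principal"] = {"AWS": "arn:aws:iam::111122223333:root"}

kms.put_key_policy(KeyId=key_id, PolicyName="default", Policy=json.dumps(policy))
```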
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-server' AND json.rule = properties.publicNetworkAccess equal ignore case Enabled and firewallRules.value[*].properties.startIpAddress equals "0.0.0.0" and firewallRules.value[*].properties.endIpAddress equals "0.0.0.0"``` | Azure PostgreSQL Database Server 'Allow access to Azure services' enabled
This policy identifies Azure PostgreSQL Database Servers which have the 'Allow access to Azure services' setting enabled. When this setting is enabled, the PostgreSQL Database server will accept connections from all Azure resources, including resources from other subscriptions. It is recommended to use firewall rules or VNET rules to allow access only from specific network ranges or virtual networks.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to Azure console\n2. Navigate to 'Azure Database for PostgreSQL servers' dashboard\n3. Select the reported PostgreSQL server\n4. Go to 'Connection security' under 'Settings'\n5. Select 'No' for 'Allow access to Azure services' under 'Firewall rules'\n6. Click on 'Save'. |
```config from cloud.resource where api.name = 'aws-rds-describe-db-instances' as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' as Y; filter '$.X.storageEncrypted is true and $.X.kmsKeyId equals $.Y.key.keyArn and $.Y.keyMetadata.keyManager contains AWS'; show X;``` | AWS RDS database not encrypted using Customer Managed Key
This policy identifies RDS databases that are encrypted with default KMS keys and not with customer managed keys. As a best practice, use customer managed keys to encrypt the data on your RDS databases and maintain control of your keys and data on sensitive workloads.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: Because you can set AWS RDS database encryption only during database creation, the process for resolving this alert requires you to create a new RDS database with a customer managed key for encryption, migrate the data from the reported database to this newly created database, and delete the RDS database identified in the alert.\n\nTo create a new RDS database with encryption using a customer managed key:\n1. Log in to the AWS console.\n2. Select the region for which the alert was generated.\n3. Navigate to the Amazon RDS Dashboard.\n4. Select 'Create database'.\n5. On the 'Select engine' page, select 'Engine options' and 'Next'.\n6. On the 'Choose use case' page, select 'Use case' of database and 'Next'.\n7. On the 'Specify DB details' page, specify the database details you need and click 'Next'.\nNote: Amazon RDS encryption has some limitations on regions and instance types. For availability of Amazon RDS encryption refer to: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html#Overview.Encryption.Availability\n8. On the 'Configure advanced settings' page, under 'Encryption', select 'Enable encryption' and select the customer managed key [i.e. other than (default) aws/rds] from the 'Master key' dropdown list.\n9. Select 'Create database'.\n\nTo delete the RDS database that uses the default KMS keys, which triggered the alert:\n1. Log in to the AWS console.\n2. Select the region for which the alert was generated.\n3. Navigate to the Amazon RDS Dashboard.\n4. Click on Instances, and select the reported RDS database.\n5. Select the 'Instance actions' drop-down and click 'Delete'.\n6. In the 'Delete' dialog, select the 'Create final snapshot?' checkbox if you want a backup. Provide a name for the final snapshot, confirm deletion and select 'Delete'. |
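Creating the replacement encrypted instance can also be scripted. A minimal boto3 sketch, with the instance identifier, engine, credentials, and CMK ARN as illustrative placeholders:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # assumed region

rds.create_db_instance(
    DBInstanceIdentifier="mydb-encrypted",        # placeholder
    Engine="postgres",                            # placeholder engine
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,
    MasterUsername="admin_user",
    MasterUserPassword="change-me-immediately",   # placeholder; prefer a secrets store
    StorageEncrypted=True,
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
)
```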
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ecs-describe-task-definition' AND json.rule = status equals "ACTIVE" AND containerDefinitions[*].readonlyRootFilesystem any false or containerDefinitions[*].readonlyRootFilesystem does not exist``` | AWS ECS task definition is not configured with read-only access to container root filesystems
This policy identifies the AWS Elastic Container Service (ECS) task definitions with readonlyRootFilesystem parameter set to false or if the parameter does not exist in the container definition within the task definition.
ECS root filesystem is the base filesystem that containers run on, providing the necessary environment and isolation for the containerized application.
If a containerized application is compromised, it could enable an attacker to alter the root file system of the host machine, thus compromising the entire system or application. This could lead to significant data loss, system crashes, or a broader security breach.
It is recommended to limit all ECS containers to have read-only access on ECS task definition to limit the potential impact of a compromised container.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To limit ECS task definitions to read-only access to root filesystems, perform the following actions:\n\n1. Sign into the AWS console and navigate to the Amazon ECS console\n2. In the navigation pane, choose 'Task definitions'\n3. Choose the task definition that is reported\n4. Select 'Create new revision', and then click on 'Create new revision'\n5. On the 'Create new task definition revision' page, select the container with Read-only root file system disabled\n6. Under the 'Read-only root file system' section, enable 'Read only'\n7. Specify the remaining configuration as per the requirements\n8. Choose 'Create'. |
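The same change can be made by registering a new task definition revision through the API. A hedged boto3 sketch with a placeholder family name; copy any other fields (network mode, CPU/memory, roles, volumes) from the current definition as your setup requires:

```python
import boto3

ecs = boto3.client("ecs")

# Family name is a placeholder for the reported task definition.
current = ecs.describe_task_definition(taskDefinition="my-task-family")["taskDefinition"]

for container in current["containerDefinitions"]:
    container["readonlyRootFilesystem"] = True  # enforce read-only root filesystem

# Register a new revision; in practice also copy networkMode, cpu/memory,
# task/execution role ARNs and volumes from `current` as your definition requires.
ecs.register_task_definition(
    family=current["family"],
    containerDefinitions=current["containerDefinitions"],
)
```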
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-eks-describe-cluster' AND json.rule = resourcesVpcConfig.endpointPublicAccess is true and resourcesVpcConfig.publicAccessCidrs contains "0.0.0.0/0"``` | AWS EKS cluster public endpoint access overly permissive to all traffic
This policy identifies EKS clusters that have an overly permissive public endpoint accessible to all traffic. When you create a new cluster, Amazon EKS creates an endpoint for the managed Kubernetes API server that you use to communicate with your cluster (using Kubernetes management tools such as kubectl). By default, this API server endpoint accepts all connections from the public internet, and access to the API server is secured using a combination of AWS Identity and Access Management (IAM) and native Kubernetes Role Based Access Control (RBAC).
Allowing all traffic to the EKS cluster may allow a bad actor to brute force their way into the system and potentially gain access to the entire network. As a best practice, allow traffic only from known static IP addresses. Limit the access list to known hosts, services, or specific employees only.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Either disable public access to your API server so that it is not accessible from the internet and allow only private access, or allow traffic only from known static IP addresses.\n\nFor more details on Amazon EKS cluster endpoint access control, refer to the following URL:\nhttps://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html. |
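If public access must remain enabled, the endpoint can be restricted in place with the UpdateClusterConfig API. A boto3 sketch, with the cluster name and allowed CIDR as placeholders:

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")  # assumed region

eks.update_cluster_config(
    name="my-cluster",  # placeholder for the reported cluster
    resourcesVpcConfig={
        "endpointPublicAccess": True,
        "endpointPrivateAccess": True,
        # Restrict the public endpoint to known static addresses instead of 0.0.0.0/0.
        "publicAccessCidrs": ["203.0.113.0/24"],
    },
)
```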
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-events-rule' AND json.rule = '(isEnabled equals true) and (actions.actions[?any( actionType equals ONS and isEnabled equals true and lifecycleState equals ACTIVE)] exists) and (condition.eventType[*] contains com.oraclecloud.virtualnetwork.changenetworksecuritygroupcompartment and condition.eventType[*] contains com.oraclecloud.virtualnetwork.createnetworksecuritygroup and condition.eventType[*] contains com.oraclecloud.virtualnetwork.deletenetworksecuritygroup and condition.eventType[*] contains com.oraclecloud.virtualnetwork.updatenetworksecuritygroup) and actions.actions[*].topicId exists' as X; count(X) less than 1``` | OCI Event Rule and Notification does not exist for Network Security Groups changes
This policy identifies the OCI compartments which do not have an Event Rule and Notification that gets triggered for Network Security Group (NSG) changes. Monitoring and alerting on changes to security groups will help in identifying changes to traffic flowing between Virtual Network Cards attached to Compute instances. It is recommended that an Event Rule and Notification be configured to catch changes made to Network Security Groups.
NOTE:
1. Event Rules are compartment scoped and will detect events in child compartments; it is recommended to create the Event Rule at the root compartment level.
2. This policy triggers an alert only when no such Event Rule and Notification exists, regardless of whether the OCI tenancy has a single compartment or multiple compartments.
This is applicable to oci cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the Event into the Search box at the top of the Console.\n3. Click the Event Service from the Services submenu\n4. Select the compartment that should host the rule\n5. Click Create Rule\n6. Provide a Display Name and Description\n7. Create a Rule Condition by selecting Networking in the Service Name Drop-down and selecting Network Security Group – Change Compartment, Network Security Group – Create, Network Security Group - Delete and Network Security Group – Update\n8. In the Actions section select Notifications as Action Type\n9. Select the Compartment that hosts the Topic to be used.\n10. Select the Topic to be used\n11. Optionally add Tags to the Rule\n12. Click Create Rule. |
```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains "eventName=" or $.X.filterPattern contains "eventName =") and ($.X.filterPattern does not contain "eventName!=" and $.X.filterPattern does not contain "eventName !=") and $.X.filterPattern contains CreateRoute and $.X.filterPattern contains CreateRouteTable and $.X.filterPattern contains ReplaceRoute and $.X.filterPattern contains ReplaceRouteTableAssociation and $.X.filterPattern contains DeleteRouteTable and $.X.filterPattern contains DeleteRoute and $.X.filterPattern contains DisassociateRouteTable) and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1``` | AWS Log metric filter and alarm does not exist for Route table changes
This policy identifies the AWS regions which do not have a log metric filter and alarm for Route table changes. Monitoring changes to route tables will help ensure that all VPC traffic flows through an expected path. It is recommended that a metric filter and alarm be established for changes to route tables.
NOTE: This policy will trigger an alert if you have at least one CloudTrail with multi-region trail enabled that logs all management events in your account and is not set with the specific log metric filter and alarm.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign in to AWS Console\n2. Navigate to CloudWatch dashboard\n3. Click on 'Log groups' in the 'Logs' section (Left panel)\n4. Select the log group created for your CloudTrail trail event logs (CloudTrail should be multi trail enabled with all management events captured) and click 'Create Metric Filter' button.\n5. In 'Define Logs Metric Filter' page, add 'Filter pattern' value as \n{ ($.eventName = CreateRoute) || ($.eventName = CreateRouteTable) || ($.eventName = ReplaceRoute) || ($.eventName = ReplaceRouteTableAssociation) || ($.eventName = DeleteRouteTable) || ($.eventName = DeleteRoute) || ($.eventName = DisassociateRouteTable) }\nand Click on 'Assign Metric'\n6. In 'Create Metric Filter and Assign a Metric' page, Choose Filter Name, Metric Details parameter according to your requirement and click on 'Create Filter'\n7. Click on 'Create Alarm',\n - In Step 1 specify metric details and conditions details as required and click on 'Next'\n - In Step 2 Select an SNS topic either by creating a new topic or use existing SNS topic/ARN and click on 'Next'\n - In Step 3 Select name and description to alarm and click on 'Next'\n - In Step 4 Preview your data entered and click on 'Create Alarm'. |
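Steps 4-7 can be scripted against an existing CloudTrail log group. A hedged boto3 sketch, with the log group name, metric namespace, and SNS topic ARN as placeholders:

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

FILTER_PATTERN = (
    "{ ($.eventName = CreateRoute) || ($.eventName = CreateRouteTable) || "
    "($.eventName = ReplaceRoute) || ($.eventName = ReplaceRouteTableAssociation) || "
    "($.eventName = DeleteRouteTable) || ($.eventName = DeleteRoute) || "
    "($.eventName = DisassociateRouteTable) }"
)

# Create the metric filter on the CloudTrail log group (placeholder names).
logs.put_metric_filter(
    logGroupName="CloudTrail/DefaultLogGroup",
    filterName="RouteTableChanges",
    filterPattern=FILTER_PATTERN,
    metricTransformations=[{
        "metricName": "RouteTableChangeCount",
        "metricNamespace": "CISBenchmark",
        "metricValue": "1",
    }],
)

# Alarm on any occurrence, notifying a placeholder SNS topic.
cloudwatch.put_metric_alarm(
    AlarmName="RouteTableChangesAlarm",
    MetricName="RouteTableChangeCount",
    Namespace="CISBenchmark",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:security-alerts"],
)
```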
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = "location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals equals Microsoft.Network/networkSecurityGroups/write" as X; count(X) less than 1``` | Azure Activity log alert for Create or update network security group does not exist
This policy identifies the Azure accounts in which activity log alert for Create or update network security group does not exist. Creating an activity log alert for Create or update network security group gives insight into network access changes and may reduce the time it takes to detect suspicious activity.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Click on Monitor (Left Panel)\n3. Select 'Alerts'\n4. Click on Create > Alert rule\n5. In 'Create an alert rule' page, choose the Scope as your Subscription and under the CONDITION section, choose 'Create or Update Network Security Group (Microsoft.Network/networkSecurityGroups)' and Other fields you can set based on your custom settings.\n6. Click on Create. |
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-virtual-server-image' AND json.rule = status equals "available" and encryption equal ignore case "none"``` | IBM Cloud Virtual Server Image for Virtual Private Cloud (VPC) using basic Provider Managed Encryption
This policy identifies IBM Cloud Virtual Server Images for Virtual Private Cloud (VPC) which are not provisioned with Customer Managed Encryption and instead use the basic Provider Managed Encryption. With customer-managed encryption, you can import your own root keys to the cloud; this process is commonly called "bring your own key". When the encryption is managed by a cloud service provider, the image may still be vulnerable to unauthorized user access and manipulation. Customer-managed encryption (Key Protect & Hyper Protect Crypto Services) provides better audit records for root key usage; therefore, it is recommended to use Customer Managed Encrypted Images.
This is applicable to ibm cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: The encryption type of the image cannot be changed once set. If the image's encryption type is set to the default (Provider Managed Encryption), then the image must be deleted and created again with Customer Managed Encryption.\nTo safely delete the image which has the default Provider Managed Encryption, follow these steps:\n1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then select 'Images'\n3. Select the 'Image Name' reported in the alert\n4. Click on the 'Actions' dropdown\n5. Click on 'Delete'. |
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-secret-manager-secret' AND json.rule = 'secret_type equals arbitrary and (expiration_date does not exist or (_DateTime.ageInDays(expiration_date) > -1))'``` | IBM Cloud Secrets Manager has expired arbitrary secrets
This policy identifies IBM Cloud Secrets Manager arbitrary secrets which are expired. Arbitrary secrets should be rotated to ensure that data cannot be accessed with an old secret which might have been lost, cracked, or stolen. It is recommended that all arbitrary secrets are set with an expiration date and that expired secrets are rotated regularly.
This is applicable to ibm cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: If the IBM Cloud Secrets Manager arbitrary secret is expired, the secret needs to be deleted.\nPlease use the below URL as reference:\nhttps://cloud.ibm.com/docs/secrets-manager?topic=secrets-manager-delete-secrets&interface=ui#delete-secret-ui\n\nIf the IBM Cloud Secrets Manager arbitrary secret is about to expire, the secret has to be rotated.\nPlease use the below URL as reference:\nhttps://cloud.ibm.com/docs/secrets-manager?topic=secrets-manager-manual-rotation&interface=ui#manual-rotate-arbitrary-ui\n\nMake sure to set an expiration date for each secret.\nPlease follow the below steps to set an expiration date:\n1. Log in to the IBM Cloud Console\n2. Click on Menu Icon and navigate to 'Resource list'; from the list of resources, select the secret manager instance in which the reported secret resides, under the security section.\n3. Select the secret.\n4. Under the 'Expiration date' section, provide the expiration date as required.\n5. Click on 'Update'. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-logging-bucket' AND json.rule = name contains "pk"``` | pk-gcp-global
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-configservice-describe-configuration-recorders' AND json.rule = 'status.recording is true and status.lastStatus contains FAILURE'``` | AWS Config fails to deliver log files
This policy identifies AWS Config recorders which are failing to deliver their log files to the specified S3 bucket. This happens when AWS Config does not have sufficient permissions to complete the operation. To deliver information to the S3 bucket, AWS Config needs to assume an IAM role that manages the permissions required to access the designated S3 bucket.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Navigate to the AWS Config Dashboard\n4. Go to 'Settings' (Left Pane)\n5. In the 'AWS Config role' section, select the 'Choose a role from your account' option and provide a unique name for the new IAM role in the 'Role name' box; make sure the role has permission to access the S3 bucket.\n6. Click Save. |
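Once an IAM role with permissions on the delivery S3 bucket exists, the recorder can also be pointed at it through the API. A boto3 sketch with placeholder names:

```python
import boto3

config = boto3.client("config", region_name="us-east-1")  # assumed region

# Point the existing recorder at an IAM role that can write to the delivery S3 bucket.
config.put_configuration_recorder(
    ConfigurationRecorder={
        "name": "default",  # typical recorder name; verify with describe_configuration_recorders()
        "roleARN": "arn:aws:iam::111122223333:role/aws-config-delivery-role",  # placeholder
    }
)
```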
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-networks-subnets-list' AND json.rule = purpose is not member of (REGIONAL_MANAGED_PROXY, PRIVATE_SERVICE_CONNECT, GLOBAL_MANAGED_PROXY, PRIVATE_NAT) and (privateIpGoogleAccess does not exist or privateIpGoogleAccess is false)``` | GCP VPC Network subnets have Private Google access disabled
This policy identifies GCP VPC Network subnets that have Private Google access disabled. Private Google access enables virtual machine instances on a subnet to reach Google APIs and services using an internal IP address rather than an external IP address. Internal (private) IP addresses are internal to Google Cloud Platform and are not routable or reachable over the Internet. You can use Private Google access to allow VMs without Internet access to reach Google APIs, services, and properties that are accessible over HTTP/HTTPS.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to VPC network (Left Panel)\n3. Select VPC networks\n4. Click on the name of a reported subnet, The 'Subnet details' page will be displayed\n5. Click on 'EDIT' button\n6. Set 'Private Google access' to 'On'\n7. Click on 'Save'\n\nFor more information, refer: https://cloud.google.com/vpc/docs/configure-private-google-access#enabling-pga. |
```config from cloud.resource where api.name = 'aws-glue-datacatalog' as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' AND json.rule = keyMetadata.keyManager does not equal CUSTOMER or (keyMetadata.keyManager equals CUSTOMER and keyMetadata.keyState equals Disabled) as Y; filter '($.X.DataCatalogEncryptionSettings.EncryptionAtRest.CatalogEncryptionMode equals "DISABLED" or $.X.ConnectionPasswordEncryption.ReturnConnectionPasswordEncrypted equals "false") or ($.X.DataCatalogEncryptionSettings.EncryptionAtRest.SseAwsKmsKeyId exists and ($.X.DataCatalogEncryptionSettings.EncryptionAtRest.SseAwsKmsKeyId equals $.Y.keyMetadata.arn or $.X.DataCatalogEncryptionSettings.EncryptionAtRest.SseAwsKmsKeyId starts with "alias/aws/")) or ($.X.DataCatalogEncryptionSettings.ConnectionPasswordEncryption.AwsKmsKeyId exists and ($.X.DataCatalogEncryptionSettings.ConnectionPasswordEncryption.AwsKmsKeyId equals $.Y.keyMetadata.arn or $.X.DataCatalogEncryptionSettings.ConnectionPasswordEncryption.AwsKmsKeyId starts with "alias/aws/"))' ; show X;``` | AWS Glue Data Catalog not encrypted by Customer Managed Key (CMK)
This policy identifies AWS Glue Data Catalogs that are encrypted using the default KMS key instead of a CMK (Customer Managed Key), or that use a CMK that is disabled.
AWS Glue Data Catalog is a managed metadata repository centralizing schema information for AWS Glue resources, facilitating data discovery and management. To protect sensitive data from unauthorized access, users can specify CMK to get enhanced security, and control over the encryption key and comply with any regulatory requirements.
It is recommended to use a CMK to encrypt the AWS Glue Data Catalog as it provides complete control over the encrypted data.
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: To enable the encryption for Glue data catalog\n1. Sign in to the AWS Management Console, Go to the AWS Management Console at https://console.aws.amazon.com/.\n2. Navigate to the reported resource region by selecting the region from the dropdown in the top right corner.\n3. Navigate to AWS Glue: In the 'Find Services' search box, type 'Glue' and select 'AWS Glue' from the search results.\n4. Choose the 'Data Catalog' dropdown in the navigation pane and select 'Catalog settings'.\n5. On the 'Data catalog settings' page, select the 'Metadata encryption' check box, and choose an AWS KMS CMK key that you are managing according to your business requirements.\nNote: When you use a customer managed key to encrypt your Data Catalog, the Data Catalog provides an option to register an IAM role to encrypt and decrypt resources. You need to grant your IAM role permissions that AWS Glue can assume on your behalf. This includes AWS KMS permissions to encrypt and decrypt data.\n6. To enable an IAM role that AWS Glue can assume to encrypt and decrypt data on your behalf, select the 'Delegate KMS operations to an IAM role' option.\n7. Select an IAM role equipped with the necessary permissions to conduct the required KMS operations for AWS Glue to assume.\n8. To Encrypt connection passwords, select 'Encrypt connection passwords', and choose an AWS KMS CMK key that you are managing according to your business requirements.\n9. And click 'save'.. |
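The same settings can be applied through the Glue API. A hedged boto3 sketch, with the CMK ARN as a placeholder (the IAM role delegation mentioned in the note is configured separately):

```python
import boto3

glue = boto3.client("glue", region_name="us-east-1")  # assumed region
CMK_ARN = "arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"

# Enable CMK encryption for catalog metadata and connection passwords.
glue.put_data_catalog_encryption_settings(
    DataCatalogEncryptionSettings={
        "EncryptionAtRest": {
            "CatalogEncryptionMode": "SSE-KMS",
            "SseAwsKmsKeyId": CMK_ARN,
        },
        "ConnectionPasswordEncryption": {
            "ReturnConnectionPasswordEncrypted": True,
            "AwsKmsKeyId": CMK_ARN,
        },
    }
)
```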
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = "databaseVersion contains POSTGRES and (settings.databaseFlags[?(@.name=='log_checkpoints')] does not exist or settings.databaseFlags[?(@.name=='log_checkpoints')].value equals off)"``` | GCP PostgreSQL instance with log_checkpoints database flag is disabled
This policy identifies PostgreSQL instances in which log_checkpoints database flag is not set. Enabling the log_checkpoints database flag would enable logging of checkpoints and restart points to the server log.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to GCP Console\n2. Navigate to SQL\n3. Select the PostgreSQL instance for which you want to enable the database flag from the list\n4. Click 'Edit'\nNOTE: If the instance is stopped, You need to START instance first to edit the configurations, then Click on EDIT.\n5. Go to the 'Flags' section under 'Configuration options'\n6. Click 'Add item', choose the flag 'log_checkpoints' from the drop-down menu and set the value to 'on'\nOR\nIf 'log_checkpoints' database flag is already set to 'off', from the drop-down menu set the value to 'on'\n7. Click on 'Save'. |
```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains "eventName=" or $.X.filterPattern contains "eventName =") and ($.X.filterPattern does not contain "eventName!=" and $.X.filterPattern does not contain "eventName !=") and $.X.filterPattern contains ConsoleLogin and ($.X.filterPattern contains "errorMessage=" or $.X.filterPattern contains "errorMessage =") and ($.X.filterPattern does not contain "errorMessage!=" and $.X.filterPattern does not contain "errorMessage !=") and $.X.filterPattern contains "Failed authentication") and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1``` | AWS Log metric filter and alarm does not exist for AWS management console authentication failures
This policy identifies the AWS accounts which do not have a log metric filter and alarm for AWS management console authentication failures. Monitoring failed console logins may decrease lead time to detect an attempt to brute force a credential, which may provide an indicator, such as source IP, that can be used in other event correlation. It is recommended that a metric filter and alarm be established for failed console authentication attempts.
NOTE: This policy will trigger an alert if you have at least one CloudTrail with multi-region trail enabled that logs all management events and is not set with the specific log metric filter and alarm in your account.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign in to AWS Console\n2. Navigate to CloudWatch dashboard\n3. Click on 'Log groups' in the 'Logs' section (Left panel)\n4. Select the log group created for your CloudTrail trail event logs (Cloudtrail should be multi trail enabled with all Management Events captured) and click 'Create Metric Filter' button.\n5. In 'Define Logs Metric Filter' page, add 'Filter pattern' value as \n{ ($.eventName = ConsoleLogin) && ($.errorMessage = "Failed authentication") }\nand Click on 'Assign Metric'\n6. In 'Create Metric Filter and Assign a Metric' page, Choose Filter Name, Metric Details parameter according to your requirement and click on 'Create Filter'\n7. Click on 'Create Alarm',\n - In Step 1, specify metric details and conditions details as required and click on 'Next'\n - In Step 2, Select an SNS topic either by creating a new topic or use existing SNS topic/ARN and click on 'Next'\n - In Step 3, Select name and description to alarm and click on 'Next'\n - In Step 4, Preview your data entered and click on 'Create Alarm'. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = $.nodePools[*].management.autoUpgrade is true and $.currentNodeCount less than 3``` | GCP Kubernetes cluster size contains less than 3 nodes with auto upgrade enabled
Ensure your Kubernetes cluster size contains 3 or more nodes. (Clusters smaller than 3 may experience downtime during upgrades.)
This policy checks the size of your cluster pools and alerts if there are fewer than 3 nodes in a pool.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Resize your cluster.\n\n1. Visit the Google Kubernetes Engine menu in GCP Console.\n2. Click the cluster's Edit button, which looks like a pencil.\n3. In the Node pools section, expand the disclosure arrow for the node pool you want to change, and change the value of the Current size field to the desired value, then click Save.\n4. Repeat for each node pool as needed.\n5. Click Save to exit the cluster modification screen.. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(21,21) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists``` | GCP Firewall rule allows all traffic on FTP port (21)
This policy identifies GCP Firewall rules which allow all inbound traffic on FTP port (21). Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended that access to the FTP port (21) be restricted to specific IP addresses.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: If the reported Firewall rule should not allow unrestricted traffic, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to the 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IPs\n7. Click on 'SAVE'. |
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-object-storage-bucket' AND json.rule = kmsKeyId is member of ("null")``` | OCI Object Storage Bucket is not encrypted with a Customer Managed Key (CMK)
This policy identifies the OCI Object Storage buckets that are not encrypted with a Customer Managed Key (CMK). It is recommended that Object Storage buckets be encrypted with a Customer Managed Key (CMK); using a CMK provides an additional level of security on your data by allowing you to manage the encryption key lifecycle for the bucket yourself.
This is applicable to oci cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Click Assign next to Encryption Key: Oracle managed key.\n5. Select a Vault from the appropriate compartment\n6. Select a Master Encryption Key\n7. Click Assign. |
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = "permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(80,80)"``` | Alibaba Cloud Security group allow internet traffic to HTTP port (80)
This policy identifies Security groups that allow inbound traffic on HTTP port (80) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network.
This is applicable to alibaba_cloud cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In Inbound tab, Select the rule with 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 80, Click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with specific IP address range\n7. Click on 'OK'. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elasticache-describe-replication-groups' AND json.rule = 'atRestEncryptionEnabled is false or atRestEncryptionEnabled does not exist'``` | AWS ElastiCache Redis cluster with encryption for data at rest disabled
This policy identifies ElastiCache Redis clusters which have encryption for data at rest (at-rest) disabled. It is highly recommended to implement at-rest encryption in order to prevent unauthorized users from reading sensitive data saved to persistent media available on your Redis clusters and their associated cache storage systems.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: AWS ElastiCache Redis cluster at-rest encryption can be set only at the time of the creation of the cluster. So to fix this alert, create a new cluster with at-rest encryption, then migrate all required ElastiCache Redis cluster data from the reported ElastiCache Redis cluster to this newly created cluster and delete reported ElastiCache Redis cluster.\n\nTo create new ElastiCache Redis cluster with at-rest encryption set, perform the following:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to ElastiCache Dashboard\n4. Click on Redis\n5. Click on 'Create' button\n6. On the 'Create your Amazon ElastiCache cluster' page,\na. Select 'Redis' cache engine type.\nb. Enter a name for the new cache cluster\nc. Select Redis engine version from 'Engine version compatibility' dropdown list.\nNote: As of July 2018, In-transit encryption can be enabled only for AWS ElastiCache clusters with Redis engine version 3.2.6 and 4.0.10.\nd. Click on 'Advanced Redis settings' to expand the cluster advanced settings panel\ne. Select 'Encryption at-rest' checkbox to enable encryption along with other necessary parameters\n7. Click on 'Create' button to launch your new ElastiCache Redis cluster\n\nTo delete reported ElastiCache Redis cluster, perform the following:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to ElastiCache Dashboard\n4. Click on Redis\n5. Select reported Redis cluster\n6. Click on 'Delete' button\n7. In the 'Delete Cluster' dialog box, if you want a backup for your cluster select 'Yes' from the 'Create final backup' dropdown menu, provide a name for the cluster backup, then click on 'Delete'.. |
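Creating the replacement replication group can be scripted; as noted above, at-rest encryption is set only at creation time. A boto3 sketch with placeholder identifiers:

```python
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")  # assumed region

elasticache.create_replication_group(
    ReplicationGroupId="my-redis-encrypted",               # placeholder
    ReplicationGroupDescription="Redis with at-rest encryption",
    Engine="redis",
    CacheNodeType="cache.t3.micro",
    NumCacheClusters=2,
    AtRestEncryptionEnabled=True,                          # cannot be changed after creation
    TransitEncryptionEnabled=True,                         # optional but recommended
    # KmsKeyId="arn:aws:kms:...:key/...",                  # optional CMK for at-rest encryption
)
```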
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-iam-user' AND json.rule = '(_DateTime.ageInDays(apiKeys[*].timeCreated) > 90)'``` | OCI users API keys have aged more than 90 days without being rotated
This policy identifies all of your IAM API keys which have not been rotated in the past 90 days. It is recommended to verify that they are rotated on a regular basis in order to protect OCI API access directly or via SDKs or OCI CLI.
This is applicable to oci cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['KEYS_AND_SECRETS'].
Mitigation of this issue can be done as follows: 1. Login to the OCI Console Page: https://console.ap-mumbai-1.oraclecloud.com/\n2. Select Identity from the Services menu.\n3. Select Users from the Identity menu.\n4. Click on an individual user under the Name heading.\n5. Click on API Keys in the lower left hand corner of the page.\n6. Delete any API Keys with a date of 90 days or older under the Created column of the API Key table.\n\nNote : The console URL is region specific, your tenancy might have a different home region and thus console URL.. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = "location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals equals Microsoft.Security/policies/write" as X; count(X) less than 1``` | Azure Activity log alert for Update security policy does not exist
This policy identifies the Azure accounts in which activity log alert for Update security policy does not exist. Creating an activity log alert for Update security policy gives insight into changes to security policy and may reduce the time it takes to detect suspicious activity.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Click on Monitor (Left Panel)\n3. Select 'Alerts'\n4. Click on Create > Alert rule\n5. In 'Create an alert rule' page, choose the Scope as your Subscription and under the CONDITION section, choose 'Update security policy (Microsoft.Security/policies)' and Other fields you can set based on your custom settings.\n6. Click on Create. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-route53-list-hosted-zones' AND json.rule = 'hostedZone.config.privateZone is false and resourceRecordSet[*].type any equal A and (resourceRecordSet[*].resourceRecords[*].value any start with 10. or resourceRecordSet[*].resourceRecords[*].value any start with _IPAddress.inRange("172.%d",16,31) or resourceRecordSet[*].resourceRecords[*].value any start with 192.168.)'``` | AWS Route53 Public Zone with Private Records
A hosted zone is a container for records (an object in a hosted zone that you use to define how you want to route traffic for the domain or a subdomain), which include information about how you want to route traffic for a domain (such as example.com) and all of its subdomains (such as www.example.com, retail.example.com, and seattle.accounting.example.com). A hosted zone has the same name as the corresponding domain. A public hosted zone is a container that holds information about how you want to route traffic on the internet for a specific domain. It is best practice to avoid AWS Route 53 Public Hosted Zones containing DNS records for private IPs or resources within your AWS account, to prevent information leakage about your internal network and resources.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: You cannot convert a public hosted zone into a private hosted zone. So, it is recommended to create and configure a Private Hosted Zone to manage private IPs within your Virtual Private Cloud (VPC), as the Amazon Route 53 service will only return your private DNS records when queried from within the associated VPC, and delete the associated public hosted zone once the Private hosted zone is configured with all the records.\nTo create a private hosted zone using the Route 53 console:\n1. For each VPC that you want to associate with the Route 53 hosted zone, change the following VPC settings to true:\n 'enableDnsHostnames'\n 'enableDnsSupport'\nFor more information, see Updating DNS Support (http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-dns.html#vpc-dns-updating) for Your VPC in the Amazon VPC User Guide.\n2. Sign in to the AWS console\n3. Go to the Route 53 console\n4. If you are new to Route 53, choose Get Started Now under DNS Management. If you are already using Route 53, choose Hosted Zones in the navigation pane.\n5. Choose 'Create Hosted Zone'\n6. In the Create Private Hosted Zone pane, enter a domain name and, optionally, a comment.\nFor information about how to specify characters other than a-z, 0-9, and - (hyphen) and how to specify internationalized domain names, see DNS Domain Name Format (https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/DomainNameFormat.html).\n7. In the Type list, choose Private Hosted Zone for Amazon VPC\n8. In the VPC ID list, choose the VPC that you want to associate with the hosted zone. If you want to associate more than one VPC with the hosted zone, you can add VPCs after you create the hosted zone.\nNote: If the console displays the following message, you are trying to associate a VPC with this hosted zone that has already been associated with another hosted zone that has an overlapping namespace, such as example.com and retail.example.com:\n'A conflicting domain is already associated with the given VPC or Delegation Set.'\n9. Choose Create\n10. To associate more VPCs with the new hosted zone, perform the following steps:\n a. Choose Back to Hosted Zones.\n b. Choose the radio button for the hosted zone.\n c. In the right pane, in VPC ID, choose another VPC that you want to associate with the hosted zone.\n d. Choose Associate New VPC.\n e. Repeat steps c and d until you have associated all of the VPCs that you want to associate with the hosted zone.\nFor More Information: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/hosted-zone-private-creating.html\n\nTo delete a public hosted zone using the Route 53 console:\n1. Sign into the AWS console\n2. Go to the Route 53 console\n3. Confirm that the hosted zone that you want to delete contains only an NS and an SOA record. If it contains additional records, delete them:\n a. Choose the name of the hosted zone that you want to delete.\n b. On the Record Sets page, if the list of records includes any records for which the value of the Type column is something other than NS or SOA, choose the row, and choose Delete Record Set. To select multiple, consecutive records, choose the first row, press and hold the Shift key, and choose the last row. To select multiple, non-consecutive records, choose the first row, press and hold the Ctrl key, and choose the remaining rows. Note: If you created any NS records for subdomains in the hosted zone, delete those records, too.\n c. Choose Back to Hosted Zones\n4. On the Hosted Zones page, choose the row for the hosted zone that you want to delete.\n5. Choose Delete Hosted Zone.\n6. Choose OK to confirm.\n7. If you want to make the domain unavailable on the internet, we recommend that you transfer DNS service to a free DNS service and then delete the Route 53 hosted zone. This prevents future DNS queries from possibly being misrouted. If the domain is registered with Route 53, see Adding or Changing Name Servers and Glue Records for a Domain (https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/domain-name-servers-glue-records.html) for information about how to replace Route 53 nameservers with name servers for the new DNS service. If the domain is registered with another registrar, use the method provided by the registrar to change name servers for the domain.\nFor More Information: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/DeleteHostedZone.html. |
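Creating the private hosted zone and deleting the public one can also be scripted. A hedged boto3 sketch with placeholder domain, VPC, and hosted zone IDs:

```python
import time
import boto3

route53 = boto3.client("route53")

# Create a private hosted zone associated with the VPC (placeholders throughout).
route53.create_hosted_zone(
    Name="internal.example.com",
    CallerReference=str(time.time()),
    HostedZoneConfig={"PrivateZone": True},
    VPC={"VPCRegion": "us-east-1", "VPCId": "vpc-0abc1234"},
)

# After migrating records, delete the public zone once only NS and SOA records remain.
route53.delete_hosted_zone(Id="Z0PUBLICZONEID")  # placeholder public zone ID
```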
```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case "access" and roles[?any( role_id contains "crn:v1:bluemix:public:iam::::role:Administrator" )] exists and resources[?any(tags does not exist and attributes[?any( value equal ignore case "service" and name equal ignore case "serviceType" and operator is member of ("stringEquals", "stringMatch"))] exists and attributes[?any( name equal ignore case "region")] does not exist)] exists and subjects[?any( attributes[?any( name contains "iam_id" and value contains "iam-ServiceId")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-service-id' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.iam_id'; show Y;``` | IBM Cloud Service ID with IAM policies provide administrative privileges for all Identity and Access enabled services
This policy identifies IBM Cloud Service ID, which has administrator role permission across 'All Identity and Access enabled services'. Service IDs with administrator permission on 'All Identity and Access enabled services' can access all services or resources in the account. If a Service ID with administrator privileges becomes compromised, it may result in compromised resources in the account. As a security best practice, granting the least privilege access, such as granting only the permissions required to perform a task instead of providing excessive permissions, is recommended.
This is applicable to ibm cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', click on 'Service IDs' in the left panel.\n3. Select the Service ID that is reported and that you want to edit access for.\n4. Under the 'Access' tab, go to the 'Access policies' section > Click on three dots on the right corner of a row for the policy, which has administrator permission on 'All Identity and Access enabled services' \n5. Click on Remove OR Edit to assign limited permission to the policy.\n6. Review the policy details that you're about to Edit/Remove, and confirm by clicking Save/Remove.. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ecs-describe-task-definition' AND json.rule = containerDefinitions[*].user exists and containerDefinitions[*].user contains root``` | AWS ECS Fargate task definition root user found
This policy identifies AWS ECS Fargate task definitions in which the user name is set to root. As a best practice, the user running inside the container should not be root.
Note: This parameter is not supported for Windows containers.
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE'].
Mitigation of this issue can be done as follows: Create a task definition revision.\n\n1. Open the Amazon ECS console.\n2. From the navigation bar, choose the region that contains your task definition.\n3. In the navigation pane, choose Task Definitions.\n4. On the Task Definitions page, select the box to the left of the task definition to revise and choose Create new revision.\n5. On the Create new revision of Task Definition page, change the existing Container Definitions.\n6. Under Security, remove root from the User field.\n7. Verify the information and choose Update, then Create.\n8. If your task definition is used in a service, update your service with the updated task definition.\n9. Deactivate previous task definition. |
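As an illustration of the console steps above, the same change can be scripted. The sketch below is only one possible approach, assuming boto3 is available; the task definition family name and the replacement UID are placeholders. It copies the latest revision, replaces any root user in the container definitions, and registers a new revision.

```python
import boto3

ecs = boto3.client("ecs")

# Fetch the current task definition (the family name is a placeholder).
current = ecs.describe_task_definition(taskDefinition="my-task-family")["taskDefinition"]

# Drop read-only fields that register_task_definition does not accept.
for field in ("taskDefinitionArn", "revision", "status", "requiresAttributes",
              "compatibilities", "registeredAt", "registeredBy"):
    current.pop(field, None)

# Replace the root user with a non-root user in every container definition.
for container in current["containerDefinitions"]:
    if container.get("user", "").startswith("root"):
        container["user"] = "1000"  # placeholder non-root UID

# Register the corrected definition as a new revision.
ecs.register_task_definition(**current)
```

After registering the new revision, the service or scheduled task that references this family still needs to be updated to pick it up, and the previous revision can then be deregistered as described in steps 8 and 9.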
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = "permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(22,22)"``` | Alibaba Cloud Security group allow internet traffic to SSH port (22)
This policy identifies Security groups that allow inbound traffic on SSH port (22) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network.
This is applicable to alibaba_cloud cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In Inbound tab, Select the rule having 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 22, Click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with specific IP address range.\n7. Click on 'OK'. |
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ram-password-policy' AND json.rule = 'minimumPasswordLength does not exist or minimumPasswordLength less than 14'``` | Alibaba Cloud RAM password policy does not have a minimum of 14 characters
This policy identifies Alibaba Cloud accounts that do not have a minimum of 14 characters in the password policy. As a security best practice, configure a strong password policy for secure access to the Alibaba Cloud console.
This is applicable to alibaba_cloud cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['WEAK_PASSWORD'].
Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Resource Access Management (RAM) service\n3. In the left-side navigation pane, click on 'Settings'\n4. In the 'Security Settings' tab, In the 'Password Strength Settings' Section, Click on 'Edit Password Rule'\n5. In the 'Password Length' field, enter 14 as the minimum number of characters for password complexity.\n6. Click on 'OK'\n7. Click on 'Close'. |
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-block-storage-volume' AND json.rule = volume_attachments[*] size equals 0 and encryption equal ignore case provider_managed``` | IBM Cloud unattached disk is not encrypted with customer managed key
This policy identifies IBM Cloud unattached disks (storage volume) which are not encrypted with customer managed keys. As a best practice, use customer managed keys to encrypt the data and maintain control of your keys and sensitive data.
This is applicable to ibm cloud and is considered a informational severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: A disk (storage volume) can be encrypted with customer managed keys only at the time of\ncreation. Please delete the reported data disk following the below URL:\nhttps://cloud.ibm.com/docs/vpc?topic=vpc-managing-block-storage&interface=ui#delete\n\nBefore deleting a disk, make sure to take a snapshot of the disk by attaching it to a virtual\nserver instance and follow the below URL to create a snapshot:\nhttps://cloud.ibm.com/docs/vpc?topic=vpc-snapshots-vpc-create&interface=ui#snapshots-vpc-create-from-vol-details. |
```config from cloud.resource where api.name = "aws-ec2-describe-instances" AND json.rule = architecture contains "foo"``` | API automation policy mtmay
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = pricings[?any(name equals Dns and properties.pricingTier does not equal Standard)] exists``` | Copy of Azure Microsoft Defender for Cloud set to Off for DNS
This policy identifies Azure Microsoft Defender for Cloud where the Defender setting for DNS is set to Off. Enabling Azure Defender provides advanced security capabilities such as threat intelligence, anomaly detection, and behavior analytics in Azure Microsoft Defender for Cloud. Defender for DNS monitors the queries and detects suspicious activities without the need for any additional agents on your resources. It is highly recommended to enable Azure Defender for DNS.
This is applicable to azure cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Expand 'Select Defender plan by resource type'\n7. Select 'On' status for 'DNS' under the column 'Microsoft Defender for'\n8. Click on 'Save'. |
```config from cloud.resource where cloud.type = 'aws' and api.name='aws-redshift-describe-clusters' AND json.rule='encrypted is false'``` | AWS Redshift instances are not encrypted
This policy identifies AWS Redshift instances which are not encrypted. These clusters should be encrypted to help protect data at rest, which otherwise could be exposed in the event of a data breach.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: To enable encryption on your Redshift cluster follow the steps mentioned in below URL:\nhttps://docs.aws.amazon.com/redshift/latest/mgmt/changing-cluster-encryption.html. |
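For reference, the encryption change described in the linked documentation can also be applied through the API. The sketch below assumes boto3; the cluster identifier and KMS key alias are placeholders, and the KMS key argument is only needed when using a customer managed key.

```python
import boto3

redshift = boto3.client("redshift")

# Enable encryption on an existing cluster; Redshift migrates the data
# to an encrypted configuration in the background.
redshift.modify_cluster(
    ClusterIdentifier="my-redshift-cluster",   # placeholder
    Encrypted=True,
    KmsKeyId="alias/my-redshift-key",          # placeholder customer managed key (optional)
)
```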
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = "permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(5900,5900)"``` | Alibaba Cloud Security group allow internet traffic to VNC Server port (5900)
This policy identifies Security groups that allow inbound traffic on VNC Server port (5900) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network.
This is applicable to alibaba_cloud cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In Inbound tab, Select the rule with 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 5900, Click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with specific IP address range\n7. Click on 'OK'. |
```config from cloud.resource where api.name = 'gcloud-domain-users' AND json.rule = isAdmin is true and isEnrolledIn2Sv is false and archived is false and suspended is false``` | GCP Google Workspace Super Admin not enrolled with 2-step verification
This policy identifies Google Workspace Super Admins that do not have 2-Step Verification enabled.
Super Admin accounts have access to all features in the Admin console and Admin API. This additional layer of 2SV significantly reduces the risk of unauthorized access, protecting administrative controls and sensitive data from potential breaches. Implementing 2-Step Verification safeguards your entire Google Workspace environment, maintaining robust security and compliance standards.
It is recommended to enable 2-Step Verification for all Super Admins as it provides an additional layer of security in case account credentials are compromised.
This is applicable to gcp cloud and is considered a high severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Workspace users should be allowed to turn on 2-Step verification (2SV) before enabling 2SV. Follow the steps mentioned below to allow users to turn on 2SV.\n1. Sign in to Workspace Admin Console with an administrator account. \n2. Go to Menu then 'Security' > 'Authentication' > '2-step verification'.\n3. Check the 'Allow users to turn on 2-Step Verification' box.\n4. Select 'Enforcement' as per need.\n5. Click Save.\n\nFor more details, please refer to below URL:\nhttps://support.google.com/a/answer/9176657\n\n\nTo enable 2-Step Verification for GCP Workspace User accounts, follow the steps below.\n1. Open your Google Account.\n2. In the navigation panel, select 'Security'.\n3. Under 'How you sign in to Google', select '2-Step Verification' > 'Get started'.\n4. Follow the on-screen steps.\n\nFor more details, please refer to below URL:\nhttps://support.google.com/accounts/answer/185839. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-get-policy-version' AND json.rule = isAttached is true and document.Statement[?any((Condition.ForAnyValue:IpAddress.aws:SourceIp contains 0.0.0.0/0 or Condition.IpAddress.aws:SourceIp contains 0.0.0.0/0 or Condition.IpAddress.aws:SourceIp contains ::/0 or Condition.ForAnyValue:IpAddress.aws:SourceIp contains ::/0) and Effect equals Allow and Action anyStartWith ecs:)] exists``` | AWS ECS IAM policy overly permissive to all traffic
This policy identifies ECS IAM policies that are overly permissive to all traffic. It is recommended to restrict access to ECS so that only authorized users and applications have access to the service.
For more details:
https://docs.aws.amazon.com/AmazonECS/latest/userguide/security_iam_id-based-policy-examples.html#security_iam_service-with-iam-policy-best-practices
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE'].
Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. Go to IAM Services\n3. Click on 'Policies' in the left-hand panel\n4. Search for the Policy for which the Alert is generated and click on it\n5. Under Permissions tab, click on Edit policy\n6. Under the Visual editor, for each of the 'ECS' Service, click to expand and perform the following.\n6.a. Click to expand 'Request conditions'\n6.b. Under the 'Source IP', remove the row with the entry '0.0.0.0/0' or '::/0'. Add a condition with restrictive IP ranges.\n7. Click on Review policy and Save changes.. |
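One way to apply the same remediation programmatically is to publish a tightened default policy version. This is a sketch only, assuming boto3; the policy ARN and the replacement CIDR are placeholders, and it handles the simple case where the `aws:SourceIp` condition value is a single string.

```python
import json
import boto3

iam = boto3.client("iam")
policy_arn = "arn:aws:iam::123456789012:policy/ecs-access"  # placeholder

# Fetch the current default policy document.
policy = iam.get_policy(PolicyArn=policy_arn)["Policy"]
version = iam.get_policy_version(PolicyArn=policy_arn,
                                 VersionId=policy["DefaultVersionId"])
document = version["PolicyVersion"]["Document"]  # boto3 returns this as a dict

# Replace the open 0.0.0.0/0 or ::/0 condition with a restrictive range (placeholder CIDR).
for statement in document.get("Statement", []):
    condition = statement.get("Condition", {})
    for operator in ("IpAddress", "ForAnyValue:IpAddress"):
        if condition.get(operator, {}).get("aws:SourceIp") in ("0.0.0.0/0", "::/0"):
            condition[operator]["aws:SourceIp"] = "203.0.113.0/24"

# Publish the tightened document as the new default version.
iam.create_policy_version(PolicyArn=policy_arn,
                          PolicyDocument=json.dumps(document),
                          SetAsDefault=True)
```

Note that a managed policy can hold at most five versions, so an old version may need to be deleted before the new one can be created.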
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-networkfirewall-firewall' AND json.rule = FirewallStatus.Status equals READY and Firewall.DeleteProtection is false``` | VenuTestCLi
This is applicable to aws cloud and is considered a informational severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = "state equals RUNNABLE and databaseVersion contains POSTGRES and (settings.databaseFlags[*].name does not contain log_planner_stats or settings.databaseFlags[?any(name contains log_planner_stats and value contains on)] exists)"``` | GCP PostgreSQL instance database flag log_planner_stats is not set to off
This policy identifies PostgreSQL database instances in which the database flag log_planner_stats is not set to off. The PostgreSQL planner/optimizer is responsible for creating an optimal execution plan for each query. The log_planner_stats flag controls the inclusion of PostgreSQL planner performance statistics in the PostgreSQL logs for each query. This can be useful for troubleshooting but may increase the number of logs significantly and add performance overhead. It is recommended to set log_planner_stats to off.
This is applicable to gcp cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate SQL Instances page\n3. Click on reported PostgreSQL instance\n4. Click EDIT\n5. If the flag has not been set on the instance, \nUnder 'Customize your instance', click on 'ADD FLAG' in 'Flags' section, choose the flag 'log_planner_stats' from the drop-down menu and set the value as 'off'\nOR\nIf the flag has been set to other than off, Under 'Customize your instance', In 'Flags' section choose the flag 'log_planner_stats' and set the value as 'off'\n6. Click on 'DONE' and then 'SAVE'. |
```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains "eventName=" or $.X.filterPattern contains "eventName =") and ($.X.filterPattern does not contain "eventName!=" and $.X.filterPattern does not contain "eventName !=") and ($.X.filterPattern contains "eventSource=" or $.X.filterPattern contains "eventSource =") and ($.X.filterPattern does not contain "eventSource!=" and $.X.filterPattern does not contain "eventSource !=") and $.X.filterPattern contains kms.amazonaws.com and $.X.filterPattern contains DisableKey and $.X.filterPattern contains ScheduleKeyDeletion) and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1``` | AWS Log metric filter and alarm does not exist for disabling or scheduled deletion of customer created CMKs
This policy identifies the AWS regions which do not have a log metric filter and alarm for disabling or scheduled deletion of customer created CMKs. Data encrypted with disabled or deleted keys will no longer be accessible. It is recommended that a metric filter and alarm be established for customer created CMKs which have changed state to disabled or scheduled deletion.
NOTE: This policy will trigger an alert if you have at least one CloudTrail trail with multi-region enabled, which logs all management events in your account and is not set with the specific log metric filter and alarm.
This is applicable to aws cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign in to AWS Console\n2. Navigate to CloudWatch dashboard\n3. Click on 'Log groups' in the 'Logs' section (Left panel)\n4. Select the log group created for your CloudTrail trail event logs (CloudTrail should be multi trail enabled with all management events captured) and click 'Create Metric Filter' button.\n5. In 'Define Logs Metric Filter' page, add 'Filter pattern' value as \n{ ($.eventSource = kms.amazonaws.com) && (($.eventName=DisableKey)||($.eventName=ScheduleKeyDeletion)) }\nand Click on 'Assign Metric'\n6. In 'Create Metric Filter and Assign a Metric' page, Choose Filter Name, Metric Details parameter according to your requirement and click on 'Create Filter'\n7. Click on 'Create Alarm',\n - In Step 1 specify metric details and conditions details as required and click on 'Next'\n - In Step 2 Select an SNS topic either by creating a new topic or use existing SNS topic/ARN and click on 'Next'\n - In Step 3 Select name and description to alarm and click on 'Next'\n - In Step 4 Preview your data entered and click on 'Create Alarm'. |
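The console steps above map to two API calls. The sketch below assumes boto3; the log group name, metric namespace, alarm name, and SNS topic ARN are placeholders, while the filter pattern is the one given in step 5.

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

FILTER_PATTERN = ('{ ($.eventSource = kms.amazonaws.com) && '
                  '(($.eventName=DisableKey)||($.eventName=ScheduleKeyDeletion)) }')

# Metric filter on the CloudTrail log group (names are placeholders).
logs.put_metric_filter(
    logGroupName="CloudTrail/DefaultLogGroup",
    filterName="CMKDisabledOrScheduledDeletion",
    filterPattern=FILTER_PATTERN,
    metricTransformations=[{
        "metricName": "CMKDisabledOrScheduledDeletionCount",
        "metricNamespace": "CISBenchmark",
        "metricValue": "1",
    }],
)

# Alarm that fires whenever the metric records at least one matching event.
cloudwatch.put_metric_alarm(
    AlarmName="cmk-disabled-or-scheduled-deletion",
    MetricName="CMKDisabledOrScheduledDeletionCount",
    Namespace="CISBenchmark",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:security-alerts"],  # placeholder
)
```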
```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-application-gateway' AND json.rule = ['properties.provisioningState'] equal ignore case Succeeded AND ['properties.httpListeners'][*].['properties.provisioningState'] equal ignore case Succeeded AND ['properties.httpListeners'][*].['properties.protocol'] equal ignore case Https AND ['properties.httpListeners'][*].['properties.sslProfile'].['id'] does not exist``` | Azure Application Gateway listener not secured with SSL profile
This policy identifies Azure Application Gateway listeners that are not secured with an SSL profile.
An SSL profile provides a secure channel by encrypting the data transferred between the client and the application gateway. Without SSL profiles, the data transferred is vulnerable to interception, posing security risks. This could lead to potential data breaches and compromise sensitive information.
As a security best practice, it is recommended to secure all Application Gateway listeners with SSL profiles. This ensures data confidentiality and integrity by encrypting traffic.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal and search for 'Application gateways'.\n2. Select 'Application gateways'.\n3. Click on reported Application gateway.\n4. Under 'Settings' select 'Listeners' from the left-side menu.\n5. Select the HTTPS listener.\n6. Check the 'Enable SSL Profile' box.\n7. Select the SSL profile you created (e.g., applicationGatewaySSLProfile) from the dropdown. If no profile exists, you'll need to create one first.\n8. Finish configuring the listener as needed.\n9. Click 'Add' to save the listener with the SSL profile.. |
```config from cloud.resource where api.name = 'aws-route53-list-hosted-zones' AND json.rule = hostedZone.config.privateZone is false and resourceRecordSet[?any( type equals CNAME and resourceRecords[*].value contains elasticbeanstalk.com)] exists as X; config from cloud.resource where api.name = 'aws-elasticbeanstalk-environment' as Y; filter 'not (X.resourceRecordSet[*].resourceRecords[*].value intersects $.Y.cname)'; show X;``` | AWS Route53 Hosted Zone having dangling DNS record with subdomain takeover risk associated with AWS Elastic Beanstalk Instance
This policy identifies AWS Route53 Hosted Zones which have dangling DNS records with subdomain takeover risk. A Route53 Hosted Zone having a CNAME entry pointing to a non-existing Elastic Beanstalk (EBS) will have a risk of these dangling domain entries being taken over by an attacker by creating a similar Elastic beanstalk (EBS) in any AWS account which the attacker owns / controls. Attackers can use this domain to do phishing attacks, spread malware and other illegal activities. As a best practice, it is recommended to delete dangling DNS records entry from your AWS Route 53 hosted zones.
Note: Please ignore the reported alert if the Elastic Beanstalk (EBS) configured in the Route53 Hosted Zone DNS record are in different accounts.
This is applicable to aws cloud and is considered a high severity issue.
Sample categories of findings relevant here are ['RESOURCE_HIJACKING'].
Mitigation of this issue can be done as follows: Identify DNS record entry pointing to a non-existing Elastic Beanstalk (EBS) resource.\n\nTo remove DNS record entry, follow steps given in following URL:\nhttps://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-deleting.html. |
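Once the dangling entry has been identified, it can be removed with a single change batch. The sketch below assumes boto3; the hosted zone ID, record name, TTL, and CNAME value are placeholders and must match the existing record exactly for the DELETE action to succeed.

```python
import boto3

route53 = boto3.client("route53")

# All identifiers below are placeholders.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000000000",
    ChangeBatch={
        "Comment": "Remove dangling CNAME pointing to a deleted Elastic Beanstalk environment",
        "Changes": [{
            "Action": "DELETE",
            "ResourceRecordSet": {
                "Name": "app.example.com.",
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [{"Value": "old-env.us-east-1.elasticbeanstalk.com"}],
            },
        }],
    },
)
```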
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudwatch-log-group' AND json.rule = retentionInDays exists and retentionInDays less than 365``` | AWS CloudWatch log groups retention set to less than 365 days
This policy identifies the AWS CloudWatch LogGroups having a retention period set to less than 365 days.
CloudWatch Logs centralize and store logs from AWS services and systems. 1-year retention of the logs aids in compliance with log retention standards. Shorter retention periods can lead to the loss of historical logs needed for audits, forensic analysis, and compliance, increasing the risk of undetected issues or non-compliance.
It is recommended that AWS CloudWatch log group retention be set to at least 365 days to meet compliance needs and support audits, investigations, and analysis.
This is applicable to aws cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To change the log retention setting, perform the following actions:\n\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to 'CloudWatch Dashboard' by selecting 'CloudWatch' under the 'Management & Governance' in All services\n4. In the navigation pane, choose 'Log groups' under the 'Logs' section\n5. Select the log group that is reported and select 'Edit retention setting(s)' under the 'Actions' drop-down\n6. In 'Retention setting', for 'Expire events after', choose a log retention value either 'Never expire' or the value more than 365 days according to your business requirements\n7. Choose 'Save'. |
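The same retention change is a single API call per log group. A minimal sketch assuming boto3, with a placeholder log group name:

```python
import boto3

logs = boto3.client("logs")

# Set a 365-day retention on the reported log group (name is a placeholder).
logs.put_retention_policy(
    logGroupName="/aws/lambda/my-function",
    retentionInDays=365,
)
```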
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-network-acl' AND json.rule = rules[?any( action equals allow and direction equals outbound and destination equals 0.0.0.0/0 )] exists``` | IBM Cloud ACL for VPC with overly permissive egress rule
This policy identifies IBM Cloud VPC Access Control Lists that have overly permissive outbound rules allowing outgoing traffic to the internet (0.0.0.0/0). An ACL contains a list of security rules that allow or deny inbound or outbound network traffic based on source or destination IP address, port, and protocol. As a best practice, it is recommended to configure ACLs to restrict traffic to known destinations on authorised protocols and ports.
This is applicable to ibm cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. If the VPC ACL reported indeed need to restrict all traffic, follow the instructions below:\n1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Access control lists'\n3. Select the Access control list reported in the alert\n4. Go to 'Outbound rules' under 'Rules' tab\n5. Click on three dots on the right corner of a row containing rule that has 'Destination Type' as 'Any' or 'IP or CIDR' as '0.0.0.0/0'\n6. Click on 'Delete'. |
```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-s3api-get-bucket-acl' AND json.rule='loggingConfiguration.targetBucket equals null or loggingConfiguration.targetPrefix equals null'``` | AWS Access logging not enabled on S3 buckets
Checks for S3 buckets without access logging turned on. Access logging allows customers to view a complete audit trail on sensitive workloads such as S3 buckets. It is recommended that access logging be turned on for all S3 buckets to meet audit and compliance requirements.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the AWS Console and navigate to the 'S3' service.\n2. Click on the S3 bucket that was reported.\n3. Click on the 'Properties' tab.\n4. Under the 'Server access logging' section, select 'Enable logging' option.. |
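Programmatically, the same setting is one call against the bucket, provided the target bucket already permits log delivery. A minimal sketch assuming boto3, with placeholder bucket names and prefix:

```python
import boto3

s3 = boto3.client("s3")

# Enable server access logging on the reported bucket (names are placeholders).
# The target bucket must already grant the S3 log delivery service permission to write.
s3.put_bucket_logging(
    Bucket="my-data-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-logging-bucket",
            "TargetPrefix": "access-logs/my-data-bucket/",
        }
    },
)
```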
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-active-directory-enforcement-policy' AND json.rule = isEnabled is false``` | Azure Active Directory Security Defaults is disabled
This policy identifies Azure Active Directory tenants which have the Security Defaults configuration disabled. Security Defaults contains preconfigured security settings for common identity-related attacks, providing a basic level of security enabled by default. It is recommended to enable this configuration as a security best practice.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to Azure portal.\n2. Navigate to 'Azure Active Directory'\n3. Select 'Properties' under 'Manage'\n4. Click on 'Manage Security defaults' if not selected\n5. Under 'Enable Security defaults' select 'Yes' for 'Enable Security defaults'\n6. Click on 'Save'. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-cosmos-db' AND json.rule = properties.provisioningState equals Succeeded AND properties.publicNetworkAccess equal ignore case Enabled AND properties.virtualNetworkRules[*] is empty``` | Azure Cosmos DB Virtual network is not configured
This policy identifies Azure Cosmos DBs that are not configured with a Virtual network. Azure Cosmos DB by default is accessible from any source if the request is accompanied by a valid authorization token. By configuring a Virtual network, only requests originating from the configured subnets will get a valid response. It is recommended to configure a Virtual network for Cosmos DB.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Refer to the following URL to configure Virtual networks on your Cosmos DB:\nhttps://docs.microsoft.com/en-us/azure/cosmos-db/how-to-configure-vnet-service-endpoint. |
```config from cloud.resource where api.name = 'gcloud-compute-ssl-policies' AND json.rule = profile does not equal RESTRICTED and profile does not equal CUSTOM as X; config from cloud.resource where api.name = 'gcloud-compute-target-https-proxies' AND json.rule = sslPolicy exists as Y; filter " $.X.selfLink contains $.Y.sslPolicy "; show Y;``` | GCP HTTPS Load balancer SSL Policy not using restrictive profile
This policy identifies HTTPS Load balancers which are not using a restrictive profile in their SSL Policy, which controls the set of features used in negotiating SSL with clients. As a best security practice, use RESTRICTED as the SSL policy profile as it meets stricter compliance requirements and does not include any out-of-date SSL features.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to Network services (Left Panel)\n3. Select Load balancing\n4. Click on 'advanced menu' hyperlink to view target proxies\n5. Click on 'Target proxies' tab\n6. Click on the reported HTTPS target proxy\n7. Click on the hyperlink under 'URL map'\n8. Click on the 'EDIT' button\n9. Select 'Frontend configuration', Click on HTTPS protocol rule\n10. Select SSL policy which uses the RESTRICTED/CUSTOM profile or if no SSL policy is already present then create a new SSL policy with RESTRICTED as Profile.\nNOTE: If you choose CUSTOM as profile then make sure you are using profile features equally restrictive as the RESTRICTED profile or more than the RESTRICTED profile.\n11. Click on 'Done'\n12. Click on 'Update'. |
```config from cloud.resource where api.name = 'azure-storage-account-list' as X; config from cloud.resource where api.name = 'azure-monitor-log-profiles-list' as Y; filter '($.X.properties.encryption.keySource does not equal "Microsoft.Keyvault" and $.X.properties.encryption.keyvaultproperties.keyname is not empty and $.X.properties.encryption.keyvaultproperties.keyversion is not empty and $.X.properties.encryption.keyvaultproperties.keyvaulturi is not empty and $.Y.properties.storageAccountId contains $.X.name)'; show X;``` | Azure Storage Account Container with activity log has BYOK encryption disabled
This policy identifies the Storage Accounts in which the container holding activity logs has BYOK encryption disabled. An Azure storage account with activity logs being exported to a container should use BYOK (Bring Your Own Key) for encryption, which provides additional confidentiality controls on log data.
This is applicable to azure cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to Storage accounts dashboard and Click on reported storage account\n3. Under the Settings menu, click on Encryption\n4. Select Customer Managed Keys\n- Choose 'Enter key URI' and Enter 'Key URI'\nOR\n- Choose 'Select from Key Vault', Enter 'Key Vault' and 'Encryption Key'\n5. Click on 'Save'. |
```config from cloud.resource where resource.status = Deleted and api.name = 'aws-securityhub-hub' AND json.rule = SubscribedAt exists``` | test-resource-status
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = 'kind contains functionapp and kind does not contain workflowapp and kind does not equal app and properties.state equal ignore case running and ((properties.publicNetworkAccess exists and properties.publicNetworkAccess equal ignore case Enabled) or (properties.publicNetworkAccess does not exist)) and config.ipSecurityRestrictions[?any((action equals Allow and ipAddress equals Any) or (action equals Allow and ipAddress equals 0.0.0.0/0))] exists'``` | Azure Function app configured with public network access
This policy identifies Azure Function apps that are configured with public network access. Publicly accessible Function apps could allow malicious actors to remotely exploit any vulnerabilities present. It is recommended to configure the Function apps with private endpoints so that the functions hosted are accessible only to restricted entities.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To restrict App Service access, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/app-service/app-service-ip-restrictions. |
```config from cloud.resource where api.name = 'aws-iam-service-last-accessed-details' AND json.rule = '(arn contains :role or arn contains :user) and serviceLastAccesses[?any(serviceNamespace contains cloudtrail and totalAuthenticatedEntities any equal 0)] exists' as X; config from cloud.resource where api.name = 'aws-iam-get-policy-version' AND json.rule = 'isAttached is true and (document.Statement[?any(Effect equals Allow and (Action[*] contains DeleteTrail or Action contains DeleteTrail or Action contains cloudtrail:* or Action[*] contains cloudtrail:*))] exists)' as Y; filter '($.Y.entities.policyRoles[*].roleName exists and $.X.arn contains $.Y.entities.policyRoles[*].roleName) or ($.Y.entities.policyUsers[*].userName exists and $.X.arn contains $.Y.entities.policyUsers[*].userName)'; show X;``` | AWS IAM role/user with unused CloudTrail delete or full permission
This policy identifies IAM roles/users that have unused CloudTrail delete permission or CloudTrail full permissions. As a security best practice, it is recommended to grant the least privilege access like granting only the permissions required to perform a task, instead of providing excessive permissions to a particular role/user. It helps to reduce the potential improper or unintended access to your critical CloudTrail infrastructure.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION'].
Mitigation of this issue can be done as follows: If Roles with unused CloudTrail delete permission,\n1. Log in to AWS console\n2. Navigate IAM service\n3. Click on Roles\n4. Click on reported IAM role\n5. In the Permissions tab, Under the 'Permissions policies' section, Remove the policies which have CloudTrail permissions or Delete role if is not required.\n\nIf Users with unused CloudTrail delete permission,\n1. Log in to AWS console\n2. Navigate IAM service\n3. Click on Users\n4. Click on reported IAM user\n5. In the Permissions tab, Under the 'Permissions policies' section, Remove the policies which have CloudTrail permissions or Delete user if is not required.. |
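Where the offending permissions come from an attached managed policy, the detach step can be scripted. The sketch below assumes boto3; the role name, user name, and policy ARN are placeholders, and only the call that matches the reported identity type is needed.

```python
import boto3

iam = boto3.client("iam")
policy_arn = "arn:aws:iam::123456789012:policy/cloudtrail-admin"  # placeholder

# Detach the policy granting CloudTrail delete/full access from the reported role...
iam.detach_role_policy(RoleName="reported-role", PolicyArn=policy_arn)

# ...or from the reported user, whichever the alert refers to.
iam.detach_user_policy(UserName="reported-user", PolicyArn=policy_arn)
```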
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-service-bus-namespace' AND json.rule = sku.tier equals "Premium" and properties.status equals "Active" and networkRuleSets[*].properties.defaultAction equals "Allow" and networkRuleSets[*].properties.publicNetworkAccess equals Enabled``` | Azure Service bus namespace configured with overly permissive network access
This policy identifies Azure Service bus namespaces configured with overly permissive network access. By default, Service Bus namespaces are accessible from the internet as long as the request comes with valid authentication and authorization. With an IP firewall, you can restrict it further to only a set of IPv4 addresses or IPv4 address ranges. With Virtual Networks, the network traffic path is secured on both ends. It is recommended to configure the Service bus namespace with an IP firewall or by Virtual Network; so that the Service bus namespace is accessible only to restricted entities.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To restrict Service bus namespace access to only a set of IPv4 addresses or IPv4 address ranges; follow below URL:\nhttps://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-ip-filtering\n\nTo restrict Service bus namespace access with a virtual network; follow below URL:\nhttps://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-service-endpoints. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-list-roles' AND json.rule = role.assumeRolePolicyDocument.Statement[*].Action contains "sts:AssumeRoleWithWebIdentity" and role.assumeRolePolicyDocument.Statement[*].Principal.Federated contains "cognito-identity.amazonaws.com" and role.assumeRolePolicyDocument.Statement[*].Effect contains "Allow" and role.assumeRolePolicyDocument.Statement[*].Condition.StringEquals does not contain "cognito-identity.amazonaws.com:aud"``` | AWS Cognito service role does not have identity pool verification
This policy identifies the AWS Cognito service role that does not have identity pool verification.
AWS Cognito is an identity and access management service for web and mobile apps. AWS Cognito service roles define permissions for AWS services accessing resources. The 'aud' claim in a Cognito-issued token specifies the intended audience (the identity pool) for the token. If the 'aud' claim is not enforced in the Cognito service role trust policy, it could potentially allow tokens issued for one audience to be used to access resources intended for a different audience. This oversight increases the risk of unauthorized access, compromising access controls and elevating the potential for data breaches within the AWS environment.
It is recommended to implement proper validation of the 'aud' claim by adding an 'aud' condition to the Cognito service role trust policy.
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['UNAUTHORIZED_ACCESS'].
Mitigation of this issue can be done as follows: To mitigate the absence of 'aud' claim validation in service roles associated with Cognito identity pools, follow these steps:\n\n1. Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.\n2. In the navigation pane of the IAM console, choose 'Roles'.\n3. In the list of roles in account, choose the name of the role that is reported.\n4. Choose the 'Trust relationships' tab, and then choose 'Edit trust policy'.\n5. Edit the trust policy, add a condition to verify that the 'aud' claim matches the expected identity pool.\n6. Click 'Update Policy'.\n\nRefer to the below link to add the required aud validation in service roles\nhttps://docs.aws.amazon.com/cognito/latest/developerguide/role-based-access-control.html#creating-roles-for-role-mapping. |
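Step 5 amounts to adding an 'aud' condition to the role's trust policy. The sketch below assumes boto3; the role name and identity pool ID are placeholders, and the policy shape follows the documented role-trust pattern for Cognito identity pools.

```python
import json
import boto3

iam = boto3.client("iam")

# Placeholders: the reported role and the identity pool it should trust.
ROLE_NAME = "cognito-authenticated-role"
IDENTITY_POOL_ID = "us-east-1:11111111-2222-3333-4444-555555555555"

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated": "cognito-identity.amazonaws.com"},
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {
            # Only tokens issued for this identity pool may assume the role.
            "StringEquals": {"cognito-identity.amazonaws.com:aud": IDENTITY_POOL_ID},
            "ForAnyValue:StringLike": {"cognito-identity.amazonaws.com:amr": "authenticated"},
        },
    }],
}

# Replace the role's trust policy with the version that enforces the 'aud' claim.
iam.update_assume_role_policy(RoleName=ROLE_NAME,
                              PolicyDocument=json.dumps(trust_policy))
```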
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-instances-list' AND json.rule = metadata.items[?any(key contains "serial-port-enable" and value contains "true")] exists and (status equals RUNNING and name does not start with "gke-")``` | GCP VM instances have serial port access enabled
This policy identifies VM instances which have serial port access enabled. Interacting with a serial port is often referred to as the serial console. The interactive serial console does not support IP-based access restrictions such as IP allowlists. If you enable the interactive serial console on an instance, clients can attempt to connect to that instance from any IP address. So it is recommended to keep interactive serial console support disabled.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to Compute Engine (Left Panel)\n3. Go to VM instances\n4. From the list of VMs, choose the reported VM\n5. Click on Edit\n6. Under Remote access section, Uncheck 'Enable connecting to serial ports'\n7. Click on Save button. |
```config from cloud.resource where api.name = 'azure-cognitive-services-account-diagnostic-settings' AND json.rule = (properties.logs[?any(enabled equal ignore case "true")] exists or properties.metrics[?any( enabled equal ignore case "true" )] exists) and properties.storageAccountId exists as X; config from cloud.resource where api.name = 'azure-storage-account-list' AND json.rule = 'totalPublicContainers > 0 and (properties.allowBlobPublicAccess is true or properties.allowBlobPublicAccess does not exist) and properties.publicNetworkAccess equal ignore case Enabled and networkRuleSet.virtualNetworkRules is empty and (properties.privateEndpointConnections is empty or properties.privateEndpointConnections does not exist)' as Y; filter '$.X.properties.storageAccountId contains $.Y.id'; show Y;``` | Azure Storage Account storing Cognitive service diagnostic logs is publicly accessible
This policy identifies Azure Storage Accounts storing Cognitive service diagnostic logs that are publicly accessible.
The Azure Storage account stores Cognitive service diagnostic logs, which might contain detailed information from platform logs, resource logs, trace logs, and metrics. Diagnostic log data may contain sensitive data and helps in identifying potentially malicious activity. An attacker could exploit a publicly accessible storage account to obtain the cognitive diagnostic logs, breach the system by leveraging the exposed data, and propagate across your environment.
As a best security practice, it is recommended to restrict storage account access to only the services as per business requirement.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to 'Storage Accounts' dashboard\n3. Select the reported storage account\n4. Under 'Data storage' section, Select 'Containers'\n5. Select the blob container you need to modify\n6. Click on 'Change access level'\n7. Set 'Public access level' to 'Private (no anonymous access)'\n8. Click on 'OK'. |
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-compute-instance' AND json.rule = instanceOptions.areLegacyImdsEndpointsDisabled is false``` | OCI Compute Instance has Legacy MetaData service endpoint enabled
This policy identifies the OCI Compute Instances that are configured with the Legacy MetaData service (IMDSv1) endpoints enabled. It is recommended that Compute Instances be configured with the legacy v1 endpoints (Instance Metadata Service v1) disabled, and use Instance Metadata Service v2 instead, following security best practices.
This is applicable to oci cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. In the Instance Details section, next to Instance Metadata Service, click Edit.\n5. For the Allowed IMDS version, select the Version 2 only option.\n6. Click Save Changes.\n\nNote : \nIf you disable IMDSv1 on an instance that does not support IMDSv2, you might not be able to connect to the instance when you launch it. To re enable IMDSv1: using the Console, on the Instance Details page, next to Instance Metadata Service, click Edit. Select the Version 1 and version 2 option, save your changes, and then restart the instance. Using the API, use the UpdateInstance operation.\n\nFMI : https://docs.cloud.oracle.com/en-us/iaas/Content/Compute/Tasks/gettingmetadata.htm#upgrading-v2. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-dax-cluster' AND json.rule = Status equals "available" and SSEDescription.Status equals "DISABLED"``` | AWS DAX cluster not configured with encryption at rest
This policy identifies the AWS DAX cluster where encryption at rest is disabled.
AWS DAX cluster encryption at rest provides an additional layer of data protection, helping secure your data from unauthorized access to underlying storage. Without encryption, anyone with access to the storage media could potentially intercept and view the data.
It is recommended to enable encryption at rest for the AWS DAX cluster.
This is applicable to aws cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To enable DAX encryption at rest while creating the new DynamoDB cluster, perform the following actions:\n\n1. Sign in to the AWS Management Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Go to 'DynamoDB' service under the 'Database' section in 'Services' menu\n4. In the navigation pane on the left side of the console, under 'DAX', choose 'Clusters'\n5. Choose 'Create cluster'\n6. Set the Cluster name and other configurations according to your reported DAX cluster\n7. On the 'Configure security' panel, in the 'Encryption' section, select the checkbox 'Turn on encryption at rest' and click 'Next'\n8. On the 'Verify advanced settings' page, set values according to your reported DAX cluster and click 'Next'\n9. On the 'Review and create' page, click 'Create cluster'\n\nOnce the new cluster is created, change the cluster endpoint within your DynamoDB application to reference the new resource.\n\nTo delete the existing DAX cluster where encryption is not enabled:\n\n1. Sign in to the AWS Management Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Go to 'DynamoDB' service under the 'Database' section in 'Services' menu\n4. In the navigation pane on the left side of the console, under 'DAX', choose Clusters\n5. Select the DAX cluster that is reported and needs to be removed\n6. Click 'Delete' to delete the cluster. |
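For step 7, the equivalent API call sets SSESpecification at creation time. A hedged boto3 sketch; the cluster name, node type, replication factor, and IAM role ARN are placeholders for the replacement cluster.

```python
import boto3

dax = boto3.client("dax")

# All values below are placeholders for the replacement cluster.
dax.create_cluster(
    ClusterName="my-dax-cluster-encrypted",
    NodeType="dax.r4.large",
    ReplicationFactor=3,
    IamRoleArn="arn:aws:iam::123456789012:role/DAXServiceRole",
    SSESpecification={"Enabled": True},   # encryption at rest
)
```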
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-es-describe-elasticsearch-domain' AND json.rule = 'processing is false and (elasticsearchClusterConfig.zoneAwarenessEnabled is false or elasticsearchClusterConfig.zoneAwarenessEnabled does not exist)'``` | AWS Elasticsearch domain has Zone Awareness set to disabled
This policy identifies Elasticsearch domains for which Zone Awareness is disabled in your AWS account. Enabling Zone Awareness (cross-zone replication) increases the availability by distributing your Elasticsearch data nodes across two availability zones available in the same AWS region. It also prevents data loss and minimizes downtime in the event of node or availability zone failure.
This is applicable to aws cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To enable Zone Awareness feature on existing Elasticsearch, following CLI can be used:\naws es update-elasticsearch-domain-config --domain-name <DOMAINNAME> --region <REGION> --elasticsearch-cluster-config ZoneAwarenessEnabled=true, ZoneAwarenessConfig={AvailabilityZoneCount=<COUNT>}\n\nFor more refer:\nhttps://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-managedomains-multiaz.html. |
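The same change expressed with boto3 as a sketch; the domain name and Availability Zone count are placeholders and should match your deployment:

```python
import boto3

es = boto3.client("es")

# Enable zone awareness across two Availability Zones (values are placeholders).
es.update_elasticsearch_domain_config(
    DomainName="my-es-domain",
    ElasticsearchClusterConfig={
        "ZoneAwarenessEnabled": True,
        "ZoneAwarenessConfig": {"AvailabilityZoneCount": 2},
    },
)
```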
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-network-private-endpoint' AND json.rule = properties.privateLinkServiceConnections[*].properties.privateLinkServiceId is not empty and properties.privateLinkServiceConnections[*].properties.privateLinkServiceId contains id``` | Test-Uilian
This is applicable to azure cloud and is considered a informational severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-active-directory-authorization-policy' AND json.rule = defaultUserRolePermissions.permissionGrantPoliciesAssigned[*] contains microsoft-user-default-legacy``` | Azure AD Users can consent to apps accessing company data on their behalf is enabled
This policy identifies Azure Active Directory tenants which have the 'Users can consent to apps accessing company data on their behalf' configuration enabled. User profiles contain private information which could be shared with others without requiring any further consent from the user if this configuration is enabled. It is recommended not to allow users to use their identity outside of the cloud environment.
This is applicable to azure cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION'].
Mitigation of this issue can be done as follows: To configure user consent to apps accessing company data on their behalf, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/active-directory/manage-apps/configure-user-consent?pivots=portal. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-projects-get-iam-user' AND json.rule = 'user does not end with @yourcompanydomainname and user does not end with gserviceaccount.com'``` | CUSTOMIZE: Non-corporate accounts have access to Google Cloud Platform (GCP) resources
Using personal accounts to access GCP resources may compromise the security of your business. Using fully managed corporate Google accounts to access Google Cloud Platform resources is recommended to make sure that your resources are secure.
NOTE : This policy requires customization before using it.
To customize, follow the steps mentioned below:
- Clone this policy and replace '@yourcompanydomainname' in RQL with your domain name. For example: 'user does not end with @prismacloud.io and user does not end with gserviceaccount.com'.
- For multiple domains, update the RQL with conditions for each domain. For example: 'user does not end with @prismacloud.io and user does not end with @prismacloud.com and user does not end with gserviceaccount.com'.
This is applicable to gcp cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['USER_ANOMALY'].
Mitigation of this issue can be done as follows: It is recommended to use fully managed corporate Google accounts for increased visibility, auditing, and control over access to Google Cloud Platform resources. Do not access GCP resources through personal accounts.. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = 'config.isPhpVersionLatest exists and config.isPhpVersionLatest equals false'``` | Azure App Service Web app doesn't use latest PHP version
This policy identifies App Service Web apps that are not configured with the latest PHP version. Periodically, newer versions are released for PHP either due to security flaws or to include additional functionality. It is recommended to use the latest PHP version for web apps in order to take advantage of security fixes, if any.
This is applicable to azure cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Navigate to 'App Services' dashboard\n3. Select the reported web app service\n4. Under 'Settings' section, Click on 'Configuration'\n5. Click on 'General settings' tab, Ensure that Stack is set to PHP and Minor version is set to latest version.\n6. Click on 'Save'. |
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-loadbalancer' AND json.rule = profile.family equal ignore case application and operating_status equal ignore case online and pools[?any( health_monitor.type does not equal ignore case https )] exists``` | IBM Cloud Application Load Balancer for VPC has backend pool with health check protocol not configured with HTTPS
This policy identifies IBM Cloud Application Load Balancers for VPC that have a health check protocol other than HTTPS. HTTPS pools use TLS (SSL) to encrypt normal HTTP requests and responses. It is highly recommended to use application load balancers with HTTPS backend pools for additional security.
This is applicable to ibm cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Load balancers'\n3. Select the 'Load balancers' reported in the alert\n4. Under 'Back-end pools' tab, click on three dots on the right corner of a row containing back-end pool with health check protocol other than HTTPS. Then click on 'Edit'\n5. In the 'Edit back-end pool' screen, under 'Health check' section, select 'HTTPS' from the 'Health protocol' dropdown.\n6. Click on 'Save'. |
```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case "access" and roles[?any( role_id contains "crn:v1:bluemix:public:iam::::role:Administrator" )] exists and resources[?any( attributes[?any( name equal ignore case "serviceName" and value equal ignore case "cloud-object-storage" and operator is member of ("stringEquals", "stringMatch"))] exists and (attributes[?any( name is member of ("resource","resourceGroupId","serviceInstance","prefix"))] does not exist or attributes[?any( name equal ignore case "resourceType" and value equal ignore case "bucket" )] exists ) )] exists and subjects[?any( attributes[?any( name contains "iam_id" and value contains "IBMid")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-user' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.iam_id'; show Y;``` | IBM Cloud user with IAM policies provide administrative privileges for Cloud object storage buckets
This policy identifies IBM Cloud users with an overly permissive administrative role on the IBM Cloud Object Storage service.
IBM Cloud Object Storage is a highly scalable, resilient, and secure managed data storage service on the IBM Cloud platform that offers an alternative to traditional block and file storage solutions. If a user who has a policy with admin rights on object storage is compromised, the whole service is compromised.
As a security best practice, it is recommended to grant the least privilege access, such as granting only the permissions required to perform a task, instead of providing excessive permissions.
This is applicable to ibm cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE'].
Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', and then click on 'Users' in the left panel.\n3. Select the user for whom you want to edit access.\n4. Go to the 'Access' tab, and under the 'Access policies' section, click on the three dots on the right corner of a row for the policy that has administrator permission on the 'IBM Cloud Object Storage' service.\n5. Click on Remove or Edit to assign limited permission to the policy.\n6. Review the policy details that you're about to edit or remove, and confirm by clicking Save or Remove.. |
```config from cloud.resource where cloud.account = 'Aws_sand_2743_Dipankar_Again' AND api.name = 'aws-configservice-describe-configuration-recorders' AND json.rule = 'status.recording is true and status.lastStatus equals SUCCESS and recordingGroup.allSupported is true' as X; config from cloud.resource where api.name = 'aws-region' AND json.rule = optInStatus equals opted-in or optInStatus equals opt-in-not-required as Y; filter '$.X.region equals $.Y.regionName'; show X; count(X) less than 1``` | NSK test AWS config recorder
test
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.account = 'Bikram-Personal-AWS Account' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = ((((publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false) or (publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false)) and acl.grantsAsList[?any(grantee equals AllUsers and permission is member of (ReadAcp,Read,FullControl))] exists) or ((policyStatus.isPublic is true and ((publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false) or (publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false))) and (policy.Statement[?any(Effect equals Allow and (Principal equals * or Principal.AWS equals *) and (Action contains s3:* or Action contains s3:Get or Action contains s3:List) and (Condition does not exist))] exists))) and websiteConfiguration does not exist``` | bikram-test-public-s3-bucket
bikram-test-public-s3-bucket
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-ec2-describe-security-groups' AND json.rule = isShared is false and (ipPermissions[?any((ipRanges[*] contains 0.0.0.0/0 or ipv6Ranges[*].cidrIpv6 contains ::/0) and ((toPort == 1433 or fromPort == 1433) or (toPort > 1433 and fromPort < 1433)))] exists)``` | Copy of AWS Security Group allows all traffic on SSH port (22)
This policy identifies Security groups that allow all traffic on SSH port 22. Doing so, may allow a bad actor to brute force their way into the system and potentially get access to the entire network. Review your list of security group rules to ensure that your resources are not exposed. As a best practice, restrict SSH solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only.
This is applicable to aws cloud and is considered a informational severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. If the Security Group reported indeed needs to restrict all traffic, follow the instructions below:\n1. Log in to the AWS Console\n2. Navigate to the 'VPC' service\n3. Select the 'Security Group' reported in the alert\n4. Click on the 'Inbound Rule'\n5. Remove the rule which has 'Source' value as 0.0.0.0/0 or ::/0 and 'Port Range' value as 22 (or range containing 22). |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-dynamodb-describe-table' AND json.rule = 'ssedescription does not exist or (ssedescription exists and ssedescription.ssetype == AES256)'``` | AWS DynamoDB encrypted using AWS owned CMK instead of AWS managed CMK
This policy identifies the DynamoDB tables that use an AWS owned CMK (default) instead of an AWS managed CMK (KMS) to encrypt data. AWS managed CMKs provide additional features such as the ability to view the CMK and key policy, and to audit the encryption and decryption of DynamoDB tables.
This is applicable to aws cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: 1. Sign in to AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated.\n3. Navigate to 'DynamoDB' dashboard\n4. Select the reported table from the list of DynamoDB tables\n5. In 'Overview' tab, go to 'Table Details' section\n6. Click on the 'Manage Encryption' link available for 'Encryption Type'\n7. On 'Manage Encryption' pop up window, Select 'KMS' as the encryption type.. |
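As a supplement to the console steps above, the same change can likely be made with the AWS CLI; this is a minimal sketch, and the table name is a placeholder, not a value from this policy.

```bash
# Switch the table's server-side encryption from the AWS owned key to the
# AWS managed KMS key; omitting KMSMasterKeyId selects the AWS managed key
# for DynamoDB (aws/dynamodb). Table name is a placeholder.
aws dynamodb update-table \
  --table-name my-example-table \
  --sse-specification Enabled=true,SSEType=KMS
```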
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = pricings[?any(name equals VirtualMachines and properties.pricingTier equal ignore case Standard and properties.subPlan equal ignore case P2)] does not exist or pricings[?any(name equals Dns and properties.deprecated is false and properties.pricingTier does not equal Standard)] exists``` | Azure Microsoft Defender for Cloud set to Off for DNS
This policy identifies Azure Microsoft Defender for Cloud which has a defender setting for DNS set to Off. Enabling Azure Defender for the cloud provides advanced security capabilities like threat intelligence, anomaly detection, and behavior analytics. Defender for DNS monitors the queries and detects suspicious activities without the need for any additional agents on your resources. It is highly recommended to enable Azure Defender for DNS.
Note: This policy checks for the classic Defender for DNS configuration. If Defender for Servers Plan 2 is enabled, the Defender setting for DNS will be set by default.
This is applicable to azure cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: For customers who are using Microsoft Defender for Servers Plan 2:\n\n1. Go to Microsoft Defender for Cloud\n2. Select Environment Settings\n3. Click on the subscription name\n4. Select the Defender plans\n5. Ensure Status is set to On for Servers Plan 2\n\nFor customers who are using Microsoft Defender for Servers Plan 1:\n\n1. Go to Microsoft Defender for Cloud\n2. Select Environment Settings\n3. Click on the subscription name\n4. Select the Defender plans\n5. Ensure Status is set to On for DNS.. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = serverAdmins does not exist or serverAdmins[*] size equals 0 or (serverAdmins[*].properties.administratorType exists and serverAdmins[*].properties.administratorType does not equal ActiveDirectory and serverAdmins[*].properties.login is not empty)``` | Azure SQL server not configured with Active Directory admin authentication
This policy identifies Azure SQL servers that are not configured with Active Directory admin authentication. Azure Active Directory authentication is a mechanism of connecting to Microsoft Azure SQL Database and SQL Data Warehouse by using identities in Azure Active Directory (Azure AD). With Azure AD authentication, you can centrally manage the identities of database users and other Microsoft services in one central location. It is recommended to configure SQL servers with Active Directory admin authentication.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Navigate to the SQL servers dashboard\n3. Select each reported SQL server\n4. Click on Azure Active Directory (under 'Settings')\n5. Click on 'Set admin'\n6. Select an Azure Active Directory from available options\n7. Click on Select\n8. Click on Save. |
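The same configuration can typically be applied with the Azure CLI; this is a sketch with placeholder resource names, and flag spellings may differ slightly between CLI versions.

```bash
# Look up the object ID of the user or group that should become the AAD admin
# (display name is a placeholder; older az versions expose this as objectId).
ADMIN_OBJECT_ID=$(az ad user show --id admin@example.com --query id -o tsv)

# Set the Azure Active Directory admin on the reported SQL server
# (resource group and server name are placeholders).
az sql server ad-admin create \
  --resource-group my-resource-group \
  --server-name my-sql-server \
  --display-name admin@example.com \
  --object-id "$ADMIN_OBJECT_ID"
```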
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-redis-instances-list' AND json.rule = state equal ignore case ready and authEnabled is false``` | GCP Memorystore for Redis instance has AUTH disabled
This policy identifies GCP Memorystore for Redis instances having AUTH disabled.
GCP Memorystore for Redis is a fully managed in-memory data store that simplifies Redis deployment and scaling while ensuring high availability and low-latency access. When AUTH is disabled, any client that can reach the Redis instance over the network can freely connect and perform operations without providing any credentials, creating a significant security risk to your data.
It is recommended to enable authentication (AUTH) on the GCP Memorystore for Redis to ensure only authorized clients can connect.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign in to the Google Cloud Management Console. Navigate to the 'Memorystore for Redis' page\n2. Under 'Instances', click on the reported instance.\n3. Select 'EDIT' on the top navigation bar\n4. Under 'Edit Redis instance' page, under 'Security', select the 'Enable AUTH' checkbox\n5. Click on 'SAVE'.. |
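Where console access is not convenient, AUTH can usually be enabled on an existing instance with gcloud; this is a sketch with placeholder instance and region values.

```bash
# Enable AUTH on an existing Memorystore for Redis instance
# (instance name and region are placeholders).
gcloud redis instances update my-redis-instance \
  --region=us-central1 \
  --enable-auth

# After enabling AUTH, retrieve the generated auth string for clients.
gcloud redis instances get-auth-string my-redis-instance --region=us-central1
```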
```config from cloud.resource where api.name = 'aws-emr-describe-cluster' AND json.rule = status.state does not contain TERMINATING as X; config from cloud.resource where api.name= 'aws-ec2-describe-security-groups' AND json.rule = isShared is false and (ipPermissions[?any((ipRanges[*] contains 0.0.0.0/0 or ipv6Ranges[*].cidrIpv6 contains ::/0) and ((toPort == 8088 or fromPort == 8088) or (toPort > 8088 and fromPort < 8088)))] exists) as Y; filter '$.X.ec2InstanceAttributes.emrManagedMasterSecurityGroup equals $.Y.groupId or $.X.ec2InstanceAttributes.additionalMasterSecurityGroups[*] contains $.Y.groupId'; show X;``` | AWS EMR cluster Master Security Group allows all traffic to port 8088
This policy identifies AWS EMR clusters whose Master Security Group allows all traffic to port 8088. Exposing port 8088 to all traffic exposes the web interfaces of the master node of an EMR cluster. This configuration is highly susceptible to EMR cluster hijacking attacks. It is highly recommended to limit access on the EMR cluster's attached Master Security Group to your IP only, or to configure an SSH tunnel.
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services.\n\n1. Log in to the AWS Console\n2. Select Clusters in left side pane\n3. Select the EMR Cluster reported in the alert\n4. Select the Security groups for Master link under Security and access\n5. Choose ElasticMapReduce-master from the list \n6. Click on the 'Inbound Rule'\n7. Delete the rule which has 'Source' value as 0.0.0.0/0 or ::/0 and 'Port Range' value as 8088 (or range containing 8088). |
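The offending ingress rules can also be removed with the AWS CLI; a minimal sketch with a placeholder security group ID.

```bash
# Remove the IPv4 any-source rule for port 8088 from the master security group
# (group ID is a placeholder).
aws ec2 revoke-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 8088 --cidr 0.0.0.0/0

# Remove the equivalent IPv6 rule, if present.
aws ec2 revoke-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --ip-permissions 'IpProtocol=tcp,FromPort=8088,ToPort=8088,Ipv6Ranges=[{CidrIpv6=::/0}]'
```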
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-rds-describe-db-instances' as X; config from cloud.resource where api.name = 'aws-rds-describe-db-parameter-groups' AND json.rule = (((dbparameterGroupFamily starts with "postgres" or dbparameterGroupFamily contains "sqlserver") and (['parameters'].['rds.force_ssl'].['parameterValue'] does not equal 1 or ['parameters'].['rds.force_ssl'].['parameterValue'] does not exist)) or ((dbparameterGroupFamily starts with "mariadb" or dbparameterGroupFamily starts with "mysql") and (parameters.require_secure_transport.parameterValue does not equal 1 or parameters.require_secure_transport.parameterValue does not exist)) or (dbparameterGroupFamily contains "db2-ae" and (parameters.db2comm.parameterValue does not equal ignore case "SSL" or parameters.db2comm.parameterValue does not exist))) as Y; filter '$.X.dbparameterGroups[*].dbparameterGroupArn equals $.Y.dbparameterGroupArn' ; show X;``` | AWS RDS database instance not configured with encryption in transit
This policy identifies AWS RDS database instances that are not configured with encryption in transit. This covers MySQL, SQL Server, PostgreSQL, MariaDB, and DB2 RDS instances.
Enabling encryption is crucial to protect data as it moves through the network and enhances the security between clients and storage servers. Without encryption, sensitive data transmitted between your application and the database is vulnerable to interception by malicious actors. This could lead to unauthorized access, data breaches, and potential compromises of confidential information.
It is recommended that data be encrypted while in transit to ensure its security and reduce the risk of unauthorized access or data breaches.
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To enable the in-transit encryption feature for your Amazon RDS databases, perform the following actions:\n\nDefault parameter groups for RDS DB instances cannot be modified. Therefore, you must create a custom parameter group, modify it, and then attach it to your RDS for DB instances. Changes to parameters in a customer-created DB parameter group are applied to all DB instances that are associated with the DB parameter group.\n\nFollow the below links to create and associate a DB parameter group with a DB instance,\n\nTo create a DB parameter group, refer to the below link\nhttps://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithDBInstanceParamGroups.html#USER_WorkingWithParamGroups.Creating\n\nTo associate a DB parameter group with a DB instance, refer to the below link\nhttps://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithDBInstanceParamGroups.html#USER_WorkingWithParamGroups.Associating\n\nTo modify parameters in a DB parameter group,\n\n1. Sign in to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/.\n2. In the navigation pane, choose 'Parameter Groups'.\n3. In the list, choose the parameter group that is associated with the RDS instance.\n4. For Parameter group actions, choose 'Edit'.\n5. Change the values of the parameters that you want to modify. You can scroll through the parameters using the arrow keys at the top right of the dialog box.\n6. In the 'Modifiable parameters' section, enter 'rds.force_ssl' in the Filter Parameters search box for SQL Server and PostgreSQL databases, and type 'require_secure_transport' in the search box for MySQL and MariaDB databases and type 'DB2COMM' for DB2 databases.\n a. For the 'rds.force_ssl' database parameter, enter '1' in the Value configuration box to enable the Transport Encryption feature. \n or\n b. For the 'require_secure_transport' parameter, enter '1' in the Value configuration box to enable the Transport Encryption feature.\n or\n c. For the 'DB2COMM' parameter, enter 'SSL' in the Value box based on the allowed values to enable Transport Encryption.\n7. Choose Save changes.. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-apigateway-get-stages' AND json.rule = webAclArn does not exist or webAclArn does not start with arn:aws:wafv2``` | AWS API Gateway REST API not configured with AWS Web Application Firewall v2 (AWS WAFv2)
This policy identifies AWS API Gateway REST API which is not configured with AWS Web Application Firewall. As a best practice, enable the AWS WAF service on API Gateway REST API to protect against application layer attacks. To block malicious requests to your API Gateway REST API, define the block criteria in the WAF web access control list (web ACL).
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. Make sure your the reported API Gateway REST API requires WAF based on your requirement and Note down the API Gateway REST API name\n\nFollow steps given in below URL to associate API Gateway REST API to WAF Web ACL ,\nhttps://docs.aws.amazon.com/waf/latest/developerguide/web-acl-associating-aws-resource.html. |
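For reference, the association described in the linked documentation can generally also be done with the AWS CLI; both ARNs below are placeholders.

```bash
# Associate an existing WAFv2 web ACL (REGIONAL scope) with an API Gateway
# REST API stage. Web ACL ARN, account ID, API ID, and stage name are placeholders.
aws wafv2 associate-web-acl \
  --web-acl-arn arn:aws:wafv2:us-east-1:111122223333:regional/webacl/my-web-acl/11111111-2222-3333-4444-555555555555 \
  --resource-arn arn:aws:apigateway:us-east-1::/restapis/a1b2c3d4e5/stages/prod
```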
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-networkfirewall-firewall' AND json.rule = FirewallStatus.Status equals READY and Firewall.DeleteProtection is false``` | AWS Network Firewall delete protection is disabled
This policy identifies the AWS Network Firewall for which delete protection is disabled.
AWS Network Firewall manages inbound and outbound traffic for the AWS resources within Virtual Private Clouds (VPCs). The deletion protection setting protects against accidental deletion of the firewall. Deletion of a firewall increases the risk of unauthorized access, data breaches, and compliance issues.
It is recommended to enable deletion protection for a network firewall to safeguard against accidental deletion.
This is applicable to aws cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To enable delete protection on an AWS Network Firewall, perform the following actions:\n\n1. Log into the AWS console\n2. Select the specific region from the drop-down in the top right corner for which the alert is generated\n3. Navigate to VPC Dashboard\n4. In the navigation pane, Under 'Network Firewall', choose 'Firewalls'\n5. On the Firewalls page, select the reported firewall\n6. In the 'Firewall details' tab, under the 'Change protections' section, click on 'Edit'\n7. In the pop-up window, choose the 'Enable' checkbox under the 'Delete protection' option\n8. Click on 'Save' to save the changes. |
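The same setting can be flipped without the console; a minimal AWS CLI sketch with a placeholder firewall name.

```bash
# Turn on delete protection for the reported firewall (name is a placeholder).
aws network-firewall update-firewall-delete-protection \
  --firewall-name my-network-firewall \
  --delete-protection
```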
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-networks-subnets-list' AND json.rule = purpose does not contain INTERNAL_HTTPS_LOAD_BALANCER and purpose does not contain REGIONAL_MANAGED_PROXY and purpose does not contain GLOBAL_MANAGED_PROXY and purpose does not contain PRIVATE_SERVICE_CONNECT and (enableFlowLogs is false or enableFlowLogs does not exist)``` | GCP VPC Flow logs for the subnet is set to Off
This policy identifies the subnets in VPC Network which have Flow logs disabled. Flow logs enable the capturing of information about the IP traffic going to and from network interfaces in VPC Subnets. It is recommended to enable the flow logs which can be used for network monitoring, forensics, real-time security analysis.
This is applicable to gcp cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Goto VPC Network (on Left Panel)\n3. Select the reported VPC network and then click on the alerted subnet\n4. On 'Subnet details' page, click on 'EDIT'\n5. Set 'Flow Logs' to value 'On'\n6. Click on 'SAVE'\nFor more information, refer : https://cloud.google.com/vpc/docs/using-flow-logs#enable-subnet. |
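Equivalently, flow logs can usually be enabled on the subnet from the command line; subnet and region names are placeholders.

```bash
# Enable VPC flow logs on the reported subnet (subnet name and region are placeholders).
gcloud compute networks subnets update my-subnet \
  --region=us-central1 \
  --enable-flow-logs
```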
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-get-account-password-policy' AND json.rule='isDefaultPolicy is true or requireLowercaseCharacters is false or requireLowercaseCharacters does not exist'``` | AWS IAM password policy does not have a lowercase character
Checks to ensure that IAM password policy requires a lowercase character. AWS IAM (Identity & Access Management) allows customers to secure AWS console access. As a security best practice, customers must have strong password policies in place.
This is applicable to aws cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['WEAK_PASSWORD'].
Mitigation of this issue can be done as follows: 1. Login to the AWS console and navigate to the 'IAM' service.\n2. On the left navigation panel, Click on 'Account Settings'\n3. check 'Require at least one lowercase letter'.\n4. Click on 'Apply password policy'. |
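Because the account password policy is a single document, the CLI call below replaces the whole policy, so every desired setting should be passed together; this is a sketch, and the values other than the lowercase requirement are illustrative assumptions.

```bash
# update-account-password-policy overwrites the existing policy, so include
# every setting you want to keep, not just the lowercase requirement.
aws iam update-account-password-policy \
  --require-lowercase-characters \
  --require-uppercase-characters \
  --require-numbers \
  --require-symbols \
  --minimum-password-length 14
```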
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-acm-describe-certificate' AND json.rule = 'status equals ISSUED and (_DateTime.ageInDays($.notAfter) > -31)'``` | AWS Certificate Manager (ACM) has certificates expiring in 30 days or less
This policy identifies ACM certificates expiring in 30 days or less, which are in the AWS Certificate Manager. If SSL/TLS certificates are not renewed prior to their expiration date, they will become invalid and the communication between the client and the AWS resource that implements the certificates is no longer secure. As a best practice, it is recommended to renew certificates before their validity period ends. AWS Certificate Manager automatically renews certificates issued by the service that are used with other AWS resources. However, the ACM service does not automatically renew certificates that are not in use or are no longer associated with other AWS resources. So the renewal process must be done manually before these certificates become invalid.
NOTE: If you wanted to be notified other than before or less than 30 days; you can clone this policy and replace '30' in RQL with your desired days value. For example, 15 days OR 7 days which will alert certificates expiring in 15 days or less OR 7 days or less respectively.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Go to the Certificate Manager(ACM) service\n4. Choose the reported certificate\n5. Verify that the 'Status' column shows 'Issued' for the reported certificate\n6. Under 'Actions' drop-down select 'Reimport certificate' option\n7. On the Import a certificate page, perform the following actions:\n7a. In 'Certificate body*' box, paste the PEM-encoded certificate to import, purchased from your SSL certificate provider.\n7b. In 'Certificate private key*' box, paste the PEM-encoded, unencrypted private key that matches the SSL/TLS certificate public key.\n7c.(Optional) In 'Certificate chain' box, paste the PEM-encoded certificate chain delivered with the certificate body specified at step 7a.\n8. Click on 'Review and import' button\n9. On the Review and import page, review the imported certificate details then click on 'Import'. |
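Steps 6-9 of the console flow can also be performed with the AWS CLI; the certificate ARN and file names below are placeholders for the renewed certificate material.

```bash
# Reimport a renewed certificate into the existing ACM certificate ARN
# (ARN and file paths are placeholders).
aws acm import-certificate \
  --certificate-arn arn:aws:acm:us-east-1:111122223333:certificate/11111111-2222-3333-4444-555555555555 \
  --certificate fileb://certificate.pem \
  --private-key fileb://private-key.pem \
  --certificate-chain fileb://certificate-chain.pem
```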
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-container-instances-container-group' AND json.rule = properties.provisioningState equals Succeeded and properties.containers[*].properties.environmentVariables[*] exists and properties.containers[*].properties.environmentVariables[*].value exists``` | Azure Container Instance environment variable with regular value type
This policy identifies Azure Container Instances (ACI) in which environment variables are configured with the regular value type instead of the secure value property. Objects with secure values are intended to hold sensitive information like passwords or keys for your application. Using secure values for environment variables is both safer and more flexible than including them in your container's image. It is therefore recommended to secure the environment variable by specifying the 'secureValue' property instead of the regular 'value' for the variable's type.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Environment variables can only be configured with secure values at the time of container instance creation. It is not possible to modify environment variables once instance is created. Hence, it is suggested to delete an existing container instance having not configured with secure values and create a new container instance having required environment variables configured with secure values.\nNote: Backup or migrate data from the container instance before deleting it.\n\nTo create a container instance with environment variables with secure value property; follow below URL:\nhttps://docs.microsoft.com/en-us/azure/container-instances/container-instances-environment-variables#secure-values\n\nTo delete a reported container instance; follow below URL:\nhttps://docs.microsoft.com/en-us/azure/container-instances/container-instances-quickstart-portal#clean-up-resources. |
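When recreating the container instance, the secure value property maps to the --secure-environment-variables flag in the Azure CLI; the sketch below uses placeholder names and a placeholder secret.

```bash
# Create a replacement container instance whose sensitive variable is passed
# as a secure value rather than a plain environment variable
# (resource group, name, image, and secret are placeholders).
az container create \
  --resource-group my-resource-group \
  --name my-container-group \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --secure-environment-variables 'DB_PASSWORD=ReplaceWithSecret'
```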
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-active-directory-authorization-policy' AND json.rule = defaultUserRolePermissions.permissionGrantPoliciesAssigned[*] does not contain "ManagePermissionGrantsForSelf.microsoft-user-default-low"``` | Azure Microsoft Entra ID users can consent to apps accessing company data on their behalf not set to verified publishers
This policy identifies instances in the Microsoft Entra ID configuration where users in your Azure Microsoft Entra ID (formerly Azure Active Directory) can consent to applications accessing company data on their behalf, even if the applications are not from verified publishers.
Allowing unverified applications to access company data increases the likelihood of data breaches and unauthorized access, which could lead to the exposure of confidential information. Using unverified applications can lead to non-compliance with data protection regulations and undermine trust in the organization's data handling practices.
As a best practice, it is recommended to configure the user consent settings to restrict access only to applications from verified publishers.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal and search for 'Microsoft Entra ID'\n2. Select 'Microsoft Entra ID'\n3. Select 'Enterprise Applications'\n4. Select 'Consent and permissions'\n5. Select 'User consent settings'\n6. Under User consent for applications, select 'Allow user consent for apps from verified publishers, for selected permissions (Recommended)'\n7. Select Save. |
```config from cloud.resource where api.name = 'aws-ecs-service' AND json.rule = networkConfiguration.awsvpcConfiguration.assignPublicIp exists and networkConfiguration.awsvpcConfiguration.assignPublicIp equal ignore case "ENABLED"``` | AWS ECS services have automatic public IP address assignment enabled
This policy identifies whether Amazon ECS services are configured to assign public IP addresses automatically. Assigning public IP addresses to ECS services may expose them to the internet. If the services are not adequately secured or have vulnerabilities, they could be susceptible to unauthorized access, DDoS attacks, or other malicious activities. It is recommended that the Amazon ECS environment not have an associated public IP address except for limited edge cases.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To modify a disable auto-assign public IP for an ECS Service:\n\n1. Use the AWS CLI console or AWS API, as you cannot update network configurations for an ECS Service using the AWS Management Console.\n\n2. Run update-service command in AWS CLI to disable auto-assign public IP for an ECS Service\n aws ecs update-service --cluster <ECS Cluster Name> --service <ECS Service Name> --network-configuration "awsvpcConfiguration={subnets=[string, string],securityGroups=[string, string],assignPublicIp=DISABLED}"\nPlease Refer to the below URL:\nhttps://docs.aws.amazon.com/cli/latest/reference/ecs/update-service.html. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultRootObject is empty``` | dnd_test_validate_compliance_hyperion_policy_ss_finding_2
Description-0b771ac4-26e0-4857-8391-b8e39e24555b
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE'].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = 'privateClusterConfig.enablePrivateNodes does not exist or privateClusterConfig.enablePrivateNodes is false'``` | GCP Kubernetes Engine Clusters not configured with private nodes feature
This policy identifies Google Kubernetes Engine (GKE) Clusters which are not configured with the private nodes feature. Private nodes feature makes your master inaccessible from the public internet and nodes do not have public IP addresses, so your workloads run in an environment that is isolated from the internet.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: GCP Kubernetes private node feature can be enabled at the time of cluster creation. So to fix this alert, create a new cluster with the private node feature enabled, migrate all required data from the reported cluster to the newly created cluster, and delete the reported Kubernetes engine cluster.\n\nTo create a new Kubernetes engine cluster with the private node feature enabled, perform the following: \n1. Login to GCP Portal\n2. Go to Kubernetes Engine (Left Panel)\n3. Select Kubernetes clusters\n4. Click on CREATE CLUSTER button\n5. Click on 'Advanced options'\n6. Under the Networking section, Check the 'Enable VPC-native (using alias IP)' option\n7. Choose the required Network, Node subnet parameters\n8. From Network security, select the Private cluster check box.\n9. To create a master that is accessible from authorized external IP ranges, keep the 'Access master using its external IP address' checkbox selected.\n10. Set 'Master IP range' as per your required IP range\n11. Click on 'Create'\nNOTE: When you create a private cluster, you must specify a /28 CIDR range for the VMs that run the Kubernetes master components.\n\nTo delete the reported Kubernetes engine cluster, perform the following:\n1. Login to GCP Portal\n2. Go to Kubernetes Engine (Left Panel)\n3. Select Kubernetes clusters \n4. Click on reported Kubernetes cluster\n5. Click on the DELETE button\n6. On 'Delete a cluster' popup dialog, Click on DELETE to confirm the deletion of the cluster.. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elbv2-describe-load-balancers' AND json.rule = type equals application and ['attributes'].['routing.http.drop_invalid_header_fields.enabled'] is false``` | AWS Application Load Balancer (ALB) is not configured to drop HTTP headers
This policy identifies AWS Application Load Balancers that are not configured to drop HTTP headers.
AWS Application Load Balancers distribute incoming HTTP/HTTPS traffic across multiple targets such as EC2 instances, containers, and Lambda functions, based on routing rules and health checks. By default, ALBs are not configured to drop invalid HTTP header values, which can leave the load balancer vulnerable to HTTP desync attacks. HTTP desync attacks manipulate request headers to exploit inconsistencies between servers, potentially leading to security vulnerabilities and unauthorized access.
It is recommended to enable this feature, to prevent the load balancer from forwarding requests with invalid HTTP headers to mitigate potential security vulnerabilities.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To configure the application load balancer to drop invalid HTTP header fields, perform the following actions:\n\n1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/\n2. In the navigation pane, choose 'Load balancers'\n3. Choose the reported Application Load Balancer \n4. From 'Actions', choose 'Edit load balancer attributes' \n5. Enable the 'Drop invalid header fields' option\n6. Click on 'Save changes'. |
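The attribute can also be set directly with the AWS CLI; the load balancer ARN is a placeholder.

```bash
# Enable dropping of invalid HTTP header fields on the reported ALB
# (ARN is a placeholder).
aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/1234567890abcdef \
  --attributes Key=routing.http.drop_invalid_header_fields.enabled,Value=true
```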
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-guardduty-detector' AND json.rule = status does not equal ENABLED``` | AWS GuardDuty detector is not enabled
This policy identifies the AWS GuardDuty detector that is not enabled in specific regions.
GuardDuty identifies potential security threats in the AWS environment by analyzing data collected from various sources. The GuardDuty detector is the entity within the GuardDuty service that does this analysis. Failure to enable GuardDuty increases the risk of undetected threats and vulnerabilities which could lead to compromises in the AWS environment.
It is recommended to enable GuardDuty detectors in all regions to reduce the risk of security breaches.
This is applicable to aws cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To enable Amazon GuardDuty in the region,\n1. Log in to the AWS console.\n2. In the console, select the specific region from the region drop-down menu located at the top right corner for which the alert has been generated.\n3. Navigate to service 'Amazon GuardDuty' from the 'Services' Menu.\n4. Choose 'Get Started'.\n5. Choose 'Enable GuardDuty' to enable on a specific region.\n\nTo re-enable Amazon GuardDuty after suspending,\n1. Log in to the AWS console.\n2. In the console, select the specific region from the region drop-down menu located at the top right corner for which the alert has been generated.\n3. Navigate to service 'Amazon GuardDuty' from the 'Services' Menu.\n4. In the navigation pane, choose 'Settings'.\n5. Choose 'Re-enable GuardDuty' to re-enable on a specific region.. |
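For scripted remediation across regions, a hedged AWS CLI sketch is shown below; the region and detector ID are placeholders.

```bash
# Enable GuardDuty in a region that has no detector yet (region is a placeholder).
aws guardduty create-detector --enable --region us-west-2

# Re-enable a previously suspended detector (detector ID is a placeholder).
aws guardduty update-detector \
  --detector-id 12abc34d567e8fa901bc2d34e56789f0 \
  --enable --region us-west-2
```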
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-instance' AND json.rule = 'ramRoleName is empty'``` | Alibaba Cloud ECS instance RAM role not enabled
This policy identifies ECS instances for which the Resource Access Management (RAM) role is not enabled. Alibaba Cloud provides RAM roles to securely access Alibaba Cloud services and resources. As a best practice, create RAM roles and attach the role to manage ECS instance permissions securely instead of distributing or sharing keys or passwords.
This is applicable to alibaba_cloud cloud and is considered a informational severity issue.
Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION'].
Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, click 'Instances'\n4. Select the reported ECS instance\n5. Select More > Instance Settings > Bind/Unbind RAM Role\n6. Select a required RAM Role\nNOTE: If already RAM role is not created create new RAM role and follow the same procedure to attach.\n7. Click on 'OK'. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ecr-get-repository-policy' AND json.rule = policy.Statement[?any((Principal equals * or Principal.AWS contains *) and Effect equals Allow and Condition does not exist)] exists``` | AWS Private ECR repository policy is overly permissive
This policy identifies AWS Private ECR repositories that have overly permissive registry policies. An ECR (Elastic Container Registry) repository is a collection of Docker images available on the AWS cloud. These images might contain sensitive information, access to which should be restricted to authorized users only.
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION'].
Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated.\n3. Navigate to 'ECR' dashboard from 'Services' dropdown\n4. Go to 'Repository', from the left panel\n5. Select the repository for which alert is being generated\n6. Select the 'Permissions' option from left menu below 'repositories'\n7. Click on 'Edit policy JSON' to modify the JSON so that Principal is restrictive\n8. After modifications, click on 'Save'.. |
```config from cloud.resource where api.name = 'azure-app-service-basic-publishing-credentials-policies' AND json.rule = properties.allow is true as X; config from cloud.resource where api.name = 'azure-app-service' AND json.rule = properties.state equal ignore case Running as Y; filter '$.X.id contains $.Y.id'; show Y;``` | Azure App Service basic authentication enabled
This policy identifies Azure App Services which have basic authentication enabled.
Basic Authentication allows local identity management for App Services without using a centralized identity provider like Azure Entra ID, posing a security risk by creating isolated identity systems that lack centralized control and are vulnerable to credential compromise and unauthorized access. Disabling Basic Authentication and integrating with a centralized solution like Azure Entra ID enhances security with stronger authentication, improved access management, and reduced attack risks.
As a security best practice, it is recommended to disable basic authentication for Azure App Services.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to App Services\n3. Click on the reported App Service\n4. Under 'Settings' section, Click on 'Configuration'\n5. Under the 'General settings' tab, scroll down to locate the two Basic Auth settings:\n - Set the 'SCM Basic Auth Publishing Credentials' radio button to Off\n - Set the 'FTP Basic Auth Publishing Credentials' radio button to Off\n6. At the top, click on 'Save'\n7. Click 'Continue' to save the changes. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-eks-describe-cluster' AND json.rule = resourcesVpcConfig.securityGroupIds[*] size greater than 1``` | AWS EKS cluster control plane assigned multiple security groups
Amazon EKS strongly recommends that you use a dedicated security group for each cluster control plane (one per cluster). This policy checks the number of security groups assigned to your cluster's control plane and alerts if more than one is assigned.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Create a single dedicated VPC security group for your EKS cluster control plane.\n\nFrom the AWS console a security group cannot be added to, nor removed from, a Kubernetes cluster once it is created. To resolve this alert, create a new cluster with a single dedicated security group as per your requirements, then migrate all required cluster data from the reported cluster to this newly created cluster and delete the reported Kubernetes cluster.\n\n1. Open the Amazon EKS dashboard.\n2. Choose Create cluster.\n3. On the Create cluster page, fill in the following fields:\n\n- Cluster name\n- Kubernetes version\n- Role name\n- VPC\n- Subnets\n- Security Groups - Choose your new dedicated control plane security group.\n- Endpoint private access\n- Endpoint public access\n- Logging\n\n4. Choose Create.. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' AND json.rule = properties.provisioningState equals Succeeded and diagnosticSettings.value[*].properties.workspaceId does not equal ignore case "/subscriptions/8dff688e-d9b0-477c-b2b0-b0e729fb06bd/resourceGroups/rg-analytics-sh-prd-scus/providers/Microsoft.OperationalInsights/workspaces/log-sh-workspace"``` | test-p3
This is applicable to azure cloud and is considered a high severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-subscription-tenantpolicy' AND json.rule = properties.blockSubscriptionsIntoTenant is false or properties.blockSubscriptionsLeavingTenant is false``` | Azure subscription permission for Microsoft Entra tenant is set to 'Allow everyone'
This policy identifies Microsoft Entra tenant that are not configured with restrictions for 'Subscription entering Microsoft Entra tenant' and 'Subscription leaving Microsoft Entra tenant'.
Users who are set as subscription owners can make administrative changes to the subscriptions and move them into and out of the Microsoft Entra tenant. Allowing subscriptions to enter or leave the Microsoft Entra tenant without restrictions can expose the organization to unauthorized access and potential security breaches.
As best practice, it is recommended to configure the settings for 'Subscription entering Microsoft Entra tenant' and 'Subscription leaving Microsoft Entra tenant' to 'Permit no one' to ensure only authorized subscriptions can interact with the tenant, thus enhancing the security of your Azure environment.
This is applicable to azure cloud and is considered a high severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To configure policy settings to control the movement of Azure subscriptions from and into Microsoft Entra tenant follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/cost-management-billing/manage/manage-azure-subscription-policy#setting-subscription-policy. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = status equals RUNNING and privateClusterConfig exists and privateClusterConfig.enablePrivateEndpoint does not exist``` | GCP Kubernetes Engine private cluster has private endpoint disabled
This policy identifies GCP Kubernetes Engine private clusters with private endpoint disabled. A public endpoint might expose the current cluster and Kubernetes API version and an attacker may be able to determine whether it is vulnerable to an attack. Unless required, disabling the public endpoint will help prevent such threats, and require the attacker to be on the master's VPC network to perform any attack on the Kubernetes API. It is recommended to enable the private endpoint and disable public access on Kubernetes clusters.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Once a cluster is created without enabling Private Endpoint, it cannot be remediated. Rather, the cluster must be recreated. \nTo create the private cluster with public access disabled, refer to the below link,\nhttps://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#private_cp\n\nTo resolve the alert, ensure deletion of the old cluster after the new private cluster is created and is in running state and once all the data has been migrated.. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ssm-parameter' AND json.rule = 'type does not contain SecureString'``` | AWS SSM Parameter is not encrypted
This policy identifies AWS SSM Parameters which are not encrypted. AWS Systems Manager (SSM) parameters that store sensitive data, for example passwords, database strings, and permit codes, should be encrypted to meet security and compliance requirements. An encrypted SSM parameter protects any sensitive information that should be stored and referenced in a secure way.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: 1. Sign in to the AWS Console\n2. Go to System Manager\n3. In the navigation panel, Click on 'Parameter Store'\n4. Choose the reported parameter and port it to a new parameter with Type 'SecureString'\n5. Delete the reported parameter by clicking on 'Delete'\n6. Click on 'Delete parameters'. |
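The port-and-delete flow described above can likely be scripted with the AWS CLI; parameter names and the secret value below are placeholders.

```bash
# Recreate the parameter as an encrypted SecureString
# (names and value are placeholders; omit --key-id to use the default aws/ssm key).
aws ssm put-parameter \
  --name /app/db-password \
  --type SecureString \
  --value 'ReplaceWithSecret'

# Remove the original unencrypted parameter once consumers reference the new one.
aws ssm delete-parameter --name /app/db-password-plaintext
```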
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-mysql-server' AND json.rule = properties.userVisibleState equal ignore case Ready and properties.privateEndpointConnections[*] is empty``` | Copy of Copy of Copy of build information
This is applicable to azure cloud and is considered a informational severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elb-describe-load-balancers' AND json.rule = description.availabilityZones[*] size less than 2``` | AWS Classic Load Balancer not configured to span multiple Availability Zones
This policy identifies AWS Classic Load Balancers that are not configured to span multiple Availability Zones. A Classic Load Balancer would not be able to redirect traffic to targets in another Availability Zone if the sole configured Availability Zone becomes unavailable. As a best practice, it is recommended to configure the Classic Load Balancer to span multiple Availability Zones.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To configure AWS Classic Load Balancer to span multiple Availability Zones follow the steps mentioned in below URL:\n\nhttps://docs.aws.amazon.com/elasticloadbalancing/latest/classic/enable-disable-az.html#add-availability-zone. |
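The linked procedure has a CLI equivalent; the load balancer name and the additional Availability Zone below are placeholders.

```bash
# Add a second Availability Zone to the reported Classic Load Balancer
# (name and zone are placeholders).
aws elb enable-availability-zones-for-load-balancer \
  --load-balancer-name my-classic-elb \
  --availability-zones us-east-1b
```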