query | description
---|---
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = config.ftpsState equals AllAllowed``` | Azure App Services FTP deployment is All allowed
This policy identifies Azure App Services which have the FTP deployment setting set to 'All allowed'. An attacker could listen to Wi-Fi traffic, capture the login credentials of an FTP deployment (which may be transmitted in plain text), and gain full control of the code base of the app or service. It is highly recommended to use FTPS if FTP deployment is essential to your workflow, and otherwise to disable FTP deployment for Azure App Services.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: The following recommendation steps are for resources hosted in App Service, Premium, and Windows Consumption plans,\n\n1. Log in to the Azure Portal\n2. Select 'App Services' from the left pane\n3. Select the reported App Service\n4. Go to 'Configurations' under 'Settings'\n5. Click on 'General settings'\n6. Select 'FTPS only' or 'Disabled' for 'FTP state' under 'Platform settings'\n7. Click on 'Save'\n\nIf the Function App is hosted in Linux using a Consumption (Serverless) plan, follow the steps below\n\nAzure CLI Command\nFTP Disable - "az functionapp config set --ftps-state Disabled --name MyFunctionApp --resource-group MyResourceGroup"\n\nFTPS only - "az functionapp config set --ftps-state FtpsOnly --name MyFunctionApp --resource-group MyResourceGroup". |
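For regular App Service web apps, the same change can be scripted. A minimal Azure CLI sketch, assuming hypothetical app and resource group names:

```bash
# Enforce FTPS-only deployments on a web app (names are placeholders)
az webapp config set --name MyWebApp --resource-group MyResourceGroup --ftps-state FtpsOnly

# Or disable FTP deployments entirely
az webapp config set --name MyWebApp --resource-group MyResourceGroup --ftps-state Disabled
```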
```config from cloud.resource where cloud.type = 'gcp' AND api.name= 'gcloud-storage-buckets-list' AND json.rule = '($.logging does not exist or $.logging equals null) and ($.acl[*].email exists and $.acl[*].email contains logging)'``` | GCP Bucket containing Operations Suite Logs have bucket logging disabled
This policy identifies the buckets containing Operations Suite Logs for which logging is disabled. Enabling bucket logging logs all the requests made on the bucket, which can be used for debugging and forensics. It is recommended to enable logging on buckets containing Operations Suite Logs.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Follow the below mentioned URL to enable logging for a bucket:\n\nhttps://cloud.google.com/storage/docs/access-logs. |
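The linked procedure can also be done from the command line. A minimal gsutil sketch, assuming hypothetical bucket names:

```bash
# Enable usage/storage logging on a bucket, writing logs to a separate log bucket
gsutil logging set on -b gs://my-log-bucket -o my-prefix gs://my-ops-suite-logs-bucket

# Verify the logging configuration
gsutil logging get gs://my-ops-suite-logs-bucket
```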
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-dms-replication-instance' AND json.rule = replicationInstanceStatus equals "available" and autoMinorVersionUpgrade is false``` | AWS DMS replication instance automatic version upgrade disabled
This policy identifies the AWS DMS (Database Migration Service) replication instances that do not have the auto minor version upgrade feature enabled.
A replication instance in DMS is a compute resource used to replicate data between a source and target database during the migration or ongoing replication process. Failure to enable automatic minor upgrades can leave your database instances vulnerable to security risks stemming from outdated software.
It is recommended to enable automatic minor version upgrades on DMS replication instances to receive timely patches and updates, reduce the risk of security vulnerabilities and improve overall performance and stability.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To modify an AWS DMS (Database Migration Service) Replication Instance's Automatic version upgrade using the AWS console, follow these steps:\n\n1. Sign in to the AWS Management Console.\n2. In the console, select the specific region from the region dropdown in the top right corner, for which the alert is generated.\n3. Go to the DMS console by either searching for 'DMS' in the AWS services search bar or navigating directly to the DMS service.\n4. From the navigation pane on the left, select 'Replication Instances' under the 'Migrate data' section.\n5. Select the replication instance that is reported and select 'Modify' from the 'Action' dropdown in the right corner.\n6. Under the 'Maintenance' section, choose the 'Yes' option for the 'Automatic version upgrade'.\n7. Under the 'When to apply the modifications' section, choose 'Apply immediately' or 'Apply during the next scheduled maintenance window' according to your business requirements.\n8. Click 'Save' to save the changes.. |
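The same change can be made with the AWS CLI. A minimal sketch, assuming a hypothetical replication instance ARN:

```bash
# Enable automatic minor version upgrades on a DMS replication instance
# (the ARN is a placeholder)
aws dms modify-replication-instance \
  --replication-instance-arn arn:aws:dms:us-east-1:123456789012:rep:EXAMPLE \
  --auto-minor-version-upgrade \
  --apply-immediately
```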
```config from cloud.resource where api.name = 'aws-networkfirewall-firewall' AND json.rule = FirewallStatus.Status equals "READY" as X; config from cloud.resource where api.name = 'aws-network-firewall-logging-configuration' AND json.rule = LoggingConfiguration.LogDestinationConfigs[*].LogType does not exist as Y; filter '$.X.Firewall.FirewallArn equal ignore case $.Y.FirewallArn' ; show X;``` | AWS Network Firewall is not configured with logging configuration
This policy identifies an AWS Network Firewall where logging is not configured.
AWS Network Firewall manages inbound and outbound traffic for the AWS resources within the AWS environment. Logging configuration for the network firewall involves enabling logging of network traffic, including allowed and denied requests, to provide visibility into network activity. Failure to configure logging results in a lack of visibility into potential security threats, making it difficult to detect and respond to malicious activity effectively and hindering threat detection and compliance.
It is recommended to enable logging to ensure comprehensive monitoring, threat detection, compliance adherence, and effective incident response.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To update a firewall's logging configuration through the console, Perform the following actions:\n\n1. Sign in to the AWS Management Console\n2. Select the specific region from the region drop-down in the top right corner, for which the alert is generated\n3. Navigate to the VPC Dashboard\n4. In the navigation pane, Under 'Network Firewall', choose 'Firewalls'\n5. On the Firewalls page, select the reported firewall\n6. In the 'Firewall details' tab, under the 'Logging' section, click on 'Edit'\n7. Select the Log type as needed for your requirement. You can configure logging for alert and flow logs.\n\nAlert – Sends logs for traffic that matches any stateful rule whose action is set to Alert or Drop. For more information about stateful rules and rule groups, see Rule groups in AWS Network Firewall.\n\nFlow – Sends logs for all network traffic that the stateless engine forwards to the stateful rules engine.\n\n8. For each selected log type, choose the destination type, then provide the information for the logging destination that you prepared following the guidance in Firewall logging destinations.\n9. Choose 'Save' to save your changes and return to the firewall's detail page.. |
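If you prefer the CLI, logging can be configured with `aws network-firewall update-logging-configuration`. A minimal sketch sending ALERT logs to S3, assuming hypothetical ARN and bucket names:

```bash
# Send ALERT logs to an S3 bucket (ARN and bucket name are placeholders)
aws network-firewall update-logging-configuration \
  --firewall-arn arn:aws:network-firewall:us-east-1:123456789012:firewall/example \
  --logging-configuration '{
    "LogDestinationConfigs": [{
      "LogType": "ALERT",
      "LogDestinationType": "S3",
      "LogDestination": {"bucketName": "my-firewall-logs", "prefix": "alerts"}
    }]
  }'
```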
```config from cloud.resource where api.name = 'aws-dynamodb-describe-table' AND json.rule = tableStatus equal ignore case ACTIVE and deletionProtectionEnabled is false``` | AWS DynamoDB table deletion protection is disabled
This policy identifies AWS DynamoDB tables with deletion protection disabled.
DynamoDB is a fully managed NoSQL database that provides a highly reliable, scalable, low-latency database solution for applications that require consistent, single-digit millisecond latency at any scale. The deletion protection feature allows authorised administrators to prevent accidental deletion of DynamoDB tables. Enabling deletion protection helps reduce the risk of data loss, maintain data integrity, ensure compliance, and protect DynamoDB tables across different environments.
It is recommended to enable deletion protection on DynamoDB tables to prevent unintended data loss.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To enable deletion protection on a DynamoDB table, follow these steps:\n\n1. Sign into the AWS console and navigate to the DynamoDB console.\n2. In the navigation pane, under 'Tables', locate the table you want to enable deletion protection for and select it.\n3. In the table details page, under the 'Additional settings' tab, go to the 'Deletion protection' section and click on 'Turn on'.\n4. Under the confirmation screen, click on 'Confirm'.. |
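Deletion protection can also be enabled with a single AWS CLI call. A minimal sketch, assuming a hypothetical table name:

```bash
# Turn on deletion protection for a table (table name is a placeholder)
aws dynamodb update-table --table-name MyTable --deletion-protection-enabled
```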
```config from cloud.resource where api.name = 'aws-ec2-describe-network-interfaces' AND json.rule = association.allocationId exists``` | amtest-eni
This is applicable to aws cloud and is considered a critical severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-storage-account-list' AND json.rule = 'networkRuleSet.defaultAction equals Allow'``` | Azure Storage Account default network access is set to 'Allow'
This policy identifies Storage accounts which have default network access set to 'Allow'. Restricting default network access provides an additional layer of security, since storage accounts otherwise accept connections from clients on any network. To limit access to selected networks, the default action must be changed.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To change the default network access rule, follow below URL:\nhttps://learn.microsoft.com/en-us/azure/storage/common/storage-network-security?tabs=azure-portal#change-the-default-network-access-rule. |
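Alternatively, the default action can be changed with the Azure CLI. A minimal sketch, assuming hypothetical account and resource group names:

```bash
# Set the default network rule action to Deny (names are placeholders)
az storage account update --name mystorageaccount --resource-group MyResourceGroup --default-action Deny

# Then allow only selected networks, e.g. a specific IP range
az storage account network-rule add --account-name mystorageaccount --resource-group MyResourceGroup --ip-address 203.0.113.0/24
```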
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ecr-get-repository-policy' AND json.rule = imageTagMutability equal ignore case mutable``` | AWS ECR private repository tag mutable
This policy identifies AWS ECR private repositories whose tag immutability is not configured.
AWS Elastic Container Registry (ECR) tag immutability ensures that once an image is pushed to a repository with tag immutability enabled, the tag cannot be overwritten or updated. This feature is useful for ensuring the security, integrity, and reliability of container images in production environments. It prevents tags from being overwritten, which can help prevent unauthorised changes to images.
It is recommended to enable tag immutability on ECR repositories to maintain the integrity and security of the images pushed.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To enable tag immutability for an ECR repository, follow the below steps:\n\n1. Log into the AWS console and navigate to the ECR dashboard.\n2. In the navigation pane, choose 'Repositories' under 'Private registry'.\n3. Select the repository you want to edit and choose 'Edit' from the 'Actions' dropdown.\n4. Set 'Tag immutability' to 'enabled'.\n5. Choose 'Save'.. |
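Tag immutability can also be set from the AWS CLI. A minimal sketch, assuming a hypothetical repository name:

```bash
# Make image tags immutable for a private repository (repo name is a placeholder)
aws ecr put-image-tag-mutability --repository-name my-repo --image-tag-mutability IMMUTABLE
```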
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = nodePools[?any(management.autoRepair does not exist or management.autoRepair is false)] exists``` | GCP Kubernetes cluster node auto-repair configuration disabled
This policy identifies GCP Kubernetes cluster nodes with auto-repair configuration disabled. GKE's node auto-repair feature helps you keep the nodes in your cluster in a healthy, running state. When enabled, GKE makes periodic checks on the health state of each node in your cluster. If a node fails consecutive health checks over an extended time period, GKE initiates a repair process for that node.
For more information: https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-repair
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Google cloud console\n2. Navigate to Google Kubernetes Engine, click on 'Clusters' to get the list\n3. Click on the alerted cluster and go to section 'Node pools'\n4. Click on a node pool to ensure 'Auto repair' is enabled in the 'Management' section\n5. To modify, click on the 'Edit' button at the top\n6. To enable the configuration, click on the checkbox against 'Enable auto-repair'\n7. Click on 'Save'\n8. Repeat Steps 4-7 for each node pool associated with the reported cluster.
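Auto-repair can also be enabled on an existing node pool from the command line. A minimal gcloud sketch, assuming hypothetical pool, cluster, and zone names:

```bash
# Enable auto-repair on an existing node pool (names/zone are placeholders)
gcloud container node-pools update my-pool \
  --cluster my-cluster --zone us-central1-a --enable-autorepair
```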
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-role-assignment' as X; config from cloud.resource where api.name = 'azure-active-directory-user' as Y; filter '((_DateTime.ageInDays($.X.properties.updatedOn) < 80) and (($.X.properties.principalId contains $.Y.id)))'; show X; addcolumn properties.roleDefinition.properties.roleName``` | llatorre - RoleAssignment v1
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: Reach out to [email protected]. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-dms-endpoint' AND json.rule = status equals active and (endpointType equals SOURCE and sslMode equals none and engineName is not member of ("s3", "azuredb")) or (endpointType equals TARGET and sslMode equals none and engineName is not member of ("dynamodb", "kinesis", "neptune", "redshift", "s3", "elasticsearch", "kafka"))``` | AWS Database Migration Service endpoint do not have SSL configured
This policy identifies Database Migration Service (DMS) endpoints that are not configured with SSL to encrypt connections for source and target endpoints. It is recommended to use SSL connection for source and target endpoints; enforcing SSL connections help protect against 'man in the middle' attacks by encrypting the data stream between endpoint connections.
NOTE: Not all databases use SSL in the same way. An Amazon Redshift endpoint already uses an SSL connection and does not require an SSL connection set up by AWS DMS. So some exclusions are included in the policy RQL to report only those endpoints which can be configured using the DMS SSL feature.
For more details:
https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Security.html#CHAP_Security.SSL
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the AWS DMS dashboard\n3. In the navigation pane, choose 'Endpoints'\n4. Select the reported DMS endpoint\n5. Under 'Actions', choose 'Modify'\n6. In the 'Endpoint configuration' section, from the 'Secure Socket Layer (SSL) mode' dropdown list, select a suitable SSL mode other than 'none' according to your requirement.\n7. Click on 'Save'\n\nNOTE: Before modifying the SSL setting, you should have the proper certificate you want to use for the SSL connection configured under the DMS 'Certificate' service.. |
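The endpoint can also be modified from the AWS CLI. A minimal sketch, assuming hypothetical ARNs and certificate file:

```bash
# Import the CA certificate, then require verified SSL on the endpoint
# (ARNs and file path are placeholders)
aws dms import-certificate --certificate-identifier my-ca-cert \
  --certificate-pem file://rds-ca-cert.pem
aws dms modify-endpoint \
  --endpoint-arn arn:aws:dms:us-east-1:123456789012:endpoint:EXAMPLE \
  --ssl-mode verify-full \
  --certificate-arn arn:aws:dms:us-east-1:123456789012:cert:EXAMPLE
```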
```config from cloud.resource where api.name = 'aws-ec2-describe-security-groups' AND json.rule = _AWSCloudAccount.orgHierarchyNames() intersects ("all-accounts")``` | jashah_ms_config
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-instances-list' AND json.rule = status equal ignore case "RUNNING" and (machineType contains "machineTypes/n2d-" or machineType contains "machineTypes/c2d-" or machineType contains "machineTypes/c3d-" or machineType contains "machineTypes/c3-standard-") and (disks[*].guestOsFeatures[*].type contains "SEV_CAPABLE" or disks[*].guestOsFeatures[*].type contains "SEV_LIVE_MIGRATABLE_V2" or disks[*].guestOsFeatures[*].type contains "SEV_SNP_CAPABLE" or disks[*].guestOsFeatures[*].type contains "TDX_CAPABLE") and (confidentialInstanceConfig.enableConfidentialCompute does not exist or confidentialInstanceConfig.enableConfidentialCompute is false)``` | GCP VM instance Confidential VM service disabled
This policy identifies GCP VM instances that have Confidential VM service disabled.
GCP VMs encrypt data at rest and in transit, but the data must be decrypted before processing. The Confidential VM service (Confidential Computing) allows a GCP VM to keep in-memory data secure by utilizing hardware-based memory encryption. This protects against leakage of sensitive data in case the VM is compromised.
It is recommended to enable Confidential VM service on GCP VMs to enhance the confidentiality and integrity of in-memory data on the VMs.
This is applicable to gcp cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Confidential VM services cannot be enabled for existing VM instances. A new VM should be created to enable confidential VM services on the instance.\n\nTo create a new VM instance with confidential VM services enabled, please refer to the steps below:\n1. Login to the GCP console\n2. Under 'Compute Engine', navigate to the 'VM instances' (Left Panel)\n3. Click on 'Create instance'\n4. Navigate to 'Security' section, Click Enable under 'Confidential VM service'.\n5. In the Enable Confidential Computing dialog, review the list of settings updated when you enable the service, and then click 'Enable'.\n6. Review other settings for the VM instance.\n7. Click 'Create'.\n\nNote: For the list of supported VM configurations for confidential VM services, please refer to the URL given below: https://cloud.google.com/confidential-computing/confidential-vm/docs/supported-configurations. |
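Creating the replacement VM can also be scripted. A minimal gcloud sketch, assuming a hypothetical instance name, zone, and a SEV-capable machine type:

```bash
# Create a new VM with Confidential VM (AMD SEV) enabled; names are placeholders.
# Confidential VMs require a supported machine type and TERMINATE maintenance policy.
gcloud compute instances create my-confidential-vm \
  --zone us-central1-a \
  --machine-type n2d-standard-2 \
  --confidential-compute \
  --maintenance-policy TERMINATE \
  --image-family ubuntu-2204-lts --image-project ubuntu-os-cloud
```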
```config from cloud.resource where api.name = 'aws-elbv2-describe-load-balancers' AND json.rule = state.code contains active and listeners[?any( protocol is member of (HTTP,TCP,UDP,TCP_UDP) and defaultActions[?any( redirectConfig.protocol contains HTTPS)] does not exist )] exists as X; config from cloud.resource where api.name = 'aws-elbv2-target-group' AND json.rule = targetType does not equal alb and protocol exists and protocol is not member of ('TLS', 'HTTPS') as Y; filter '$.X.listeners[?any( protocol equals HTTP or protocol equals UDP or protocol equals TCP_UDP )] exists or ( $.X.listeners[*].protocol equals TCP and $.X.listeners[*].defaultActions[*].targetGroupArn contains $.Y.targetGroupArn)'; show X;``` | AWS Elastic Load Balancer v2 (ELBv2) with listener TLS/SSL is not configured
This policy identifies AWS Elastic Load Balancers v2 (ELBv2) which have non-secure listeners. Since load balancers handle all incoming requests and route the traffic accordingly, the listeners on the load balancers should always receive traffic over a secure channel with a valid SSL certificate configured.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to EC2 dashboard\n4. Click on 'Load Balancers' (Left Panel)\n5. Select the reported ELB\n6. On the Listeners tab, Click the 'Edit' button under the available listeners\n7. If the load balancer type is application, select the listener protocol as 'HTTPS (Secure HTTP)'; if the load balancer type is network, select the listener protocol as TLS\n8. Select an appropriate 'Security policy'\n9. In the SSL Certificate column, click 'Change'\n10. On 'Select Certificate' popup dialog, Choose a certificate from ACM or IAM or upload a new certificate based on requirement and Click on 'Save'\n11. Back to the 'Edit listeners' dialog box, review the secure listeners configuration, then click on 'Save'
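An existing listener can also be switched to a secure protocol from the AWS CLI. A minimal sketch for an application load balancer, assuming hypothetical listener and certificate ARNs:

```bash
# Switch an existing ALB listener to HTTPS with an ACM certificate
# (ARNs are placeholders)
aws elbv2 modify-listener \
  --listener-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/EXAMPLE/EXAMPLE \
  --protocol HTTPS --port 443 \
  --ssl-policy ELBSecurityPolicy-TLS13-1-2-2021-06 \
  --certificates CertificateArn=arn:aws:acm:us-east-1:123456789012:certificate/EXAMPLE
```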
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-containers-artifacts-kubernetes-cluster' AND json.rule = lifecycleState equal ignore case ACTIVE and options.admissionControllerOptions.isPodSecurityPolicyEnabled is false``` | OCI Kubernetes Engine Cluster pod security policy not enforced
This policy identifies Kubernetes Engine Clusters that are not enforced with pod security policy. The Pod Security Policy defines a set of conditions that pods must meet to be accepted by the cluster; when a request to create or update a pod does not meet the conditions in the pod security policy, that request is rejected and an error is returned.
This is applicable to oci cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To configure Pod Security Policies for Container Engine for Kubernetes, refer below URL:\nhttps://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengusingpspswithoke.htm\n\nNOTE: You must define pod security policies for the pod security policy admission controller to enforce when accepting pods into the cluster. If you do not define pod security policies, the pod security policy admission controller will prevent any pods from being created in the cluster.. |
```config from cloud.resource where cloud.type = 'aws' and api.name = 'aws-iam-get-policy-version' AND json.rule = isAttached is true and document.Statement[?any(Effect equals Allow and (Action contains iam:CreatePolicyVersion or Action contains iam:SetDefaultPolicyVersion or Action contains iam:PassRole or Action contains iam:CreateAccessKey or Action contains iam:CreateLoginProfile or Action contains iam:UpdateLoginProfile or Action contains iam:AttachUserPolicy or Action contains iam:AttachGroupPolicy or Action contains iam:AttachRolePolicy or Action contains iam:PutUserPolicy or Action contains iam:PutGroupPolicy or Action contains iam:PutRolePolicy or Action contains iam:AddUserToGroup or Action contains iam:UpdateAssumeRolePolicy or Action contains iam:*))] exists``` | AWS IAM Policy permission may cause privilege escalation
This policy identifies AWS IAM policies which have permissions that may cause privilege escalation. An AWS IAM policy with weak permissions could be exploited by an attacker to elevate privileges. It is recommended to follow the principle of least privilege, ensuring that AWS IAM policies do not have these sensitive permissions.
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION'].
Mitigation of this issue can be done as follows: Refer to the following URL to remove the below listed weak permissions from the reported AWS IAM Policies,\nhttps://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_manage-attach-detach.html#remove-policies-console\n\nBelow are the permissions which can lead to privilege escalation,\niam:CreatePolicyVersion\niam:SetDefaultPolicyVersion\niam:PassRole\niam:CreateAccessKey\niam:CreateLoginProfile\niam:UpdateLoginProfile\niam:AttachUserPolicy\niam:AttachGroupPolicy\niam:AttachRolePolicy\niam:PutUserPolicy\niam:PutGroupPolicy\niam:PutRolePolicy\niam:AddUserToGroup\niam:UpdateAssumeRolePolicy\niam:*.
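Before detaching, you can enumerate where a risky policy is attached. A minimal AWS CLI sketch, assuming a hypothetical policy ARN and user name:

```bash
# Find where the risky policy is attached, then detach it (ARN is a placeholder)
aws iam list-entities-for-policy --policy-arn arn:aws:iam::123456789012:policy/RiskyPolicy
aws iam detach-user-policy --user-name some-user \
  --policy-arn arn:aws:iam::123456789012:policy/RiskyPolicy
# Similar commands exist for roles and groups:
#   aws iam detach-role-policy / aws iam detach-group-policy
```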
```config from cloud.resource where cloud.type = 'gcp' AND api.name='gcloud-compute-firewall-rules-list' AND json.rule= disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(80,80) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists``` | GCP Firewall rule allows all traffic on HTTP port (80)
This policy identifies GCP Firewall rules which allow all inbound traffic on HTTP port (80). Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended that the HTTP port (80) should be allowed to specific IP addresses.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: If the reported Firewall rule does not actually need to allow traffic from all sources, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to the 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IPs\n7. Click on 'SAVE'.. |
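The source range can also be tightened from the command line. A minimal gcloud sketch, assuming a hypothetical rule name and CIDR:

```bash
# Restrict the rule's source to a specific CIDR (rule name and range are placeholders)
gcloud compute firewall-rules update my-http-rule --source-ranges=203.0.113.0/24
```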
```config from cloud.resource where cloud.type = 'gcp' AND api.name='gcloud-sql-instances-list' and json.rule = "(settings.ipConfiguration.sslMode equal ignore case TRUSTED_CLIENT_CERTIFICATE_REQUIRED and _DateTime.ageInDays(serverCaCert.expirationTime) > -1) or settings.ipConfiguration.sslMode equal ignore case ALLOW_UNENCRYPTED_AND_ENCRYPTED"``` | GCP SQL Instances do not have valid SSL configuration
This policy identifies GCP SQL instances that either lack SSL configuration or have SSL certificates that have expired.
If an SQL instance is not configured to use SSL, it may accept unencrypted and insecure connections, leading to potential risks such as data interception and authentication vulnerabilities.
It is a best practice to enable SSL configuration to ensure data security and integrity when communicating with a GCP SQL instance.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To configure the SQL instance with SSL configuration, follow the steps mentioned below:\n\n1. Log in to the Google Cloud console\n2. Navigate to 'Cloud SQL Instances'\n3. Click on the alerted instance and navigate to 'Security' under the 'Connections' tab\n4. Select one of the following under 'Manage SSL mode':\n i. Allow only SSL connections\n ii. Require trusted client certificates\n\nTo verify the validity of the current certificate, follow the steps mentioned below:\n\n1. Log in to the Google Cloud console\n2. Navigate to 'Cloud SQL Instances'\n3. Click on the alerted instance and navigate to 'Security' under the 'Connections' tab\n4. Verify the expiration date of your server certificate under the 'Manage server CA certificates' table\n\nTo create a new client certificate, follow the URL mentioned: https://cloud.google.com/sql/docs/mysql/configure-ssl-instance#client-certs.
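SSL enforcement can also be set from the command line. A minimal gcloud sketch, assuming a hypothetical instance name (the `--ssl-mode` flag is available in recent gcloud releases):

```bash
# Require SSL/TLS for all connections to the instance (name is a placeholder)
gcloud sql instances patch my-instance --ssl-mode=ENCRYPTED_ONLY
# Or require trusted client certificates:
# gcloud sql instances patch my-instance --ssl-mode=TRUSTED_CLIENT_CERTIFICATE_REQUIRED
```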
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-events-rule' AND json.rule = '(isEnabled equals true) and (actions.actions[?any( actionType equals ONS and isEnabled equals true and lifecycleState equals ACTIVE)] exists) and (condition.eventType[*] contains com.oraclecloud.identitycontrolplane.creategroup and condition.eventType[*] contains com.oraclecloud.identitycontrolplane.deletegroup and condition.eventType[*] contains com.oraclecloud.identitycontrolplane.updategroup) and actions.actions[*].topicId exists' as X; count(X) less than 1``` | OCI Event Rule and Notification does not exist for IAM group changes
This policy identifies the OCI compartments which do not have an Event Rule and Notification that gets triggered for IAM group changes. Monitoring and alerting on changes to IAM groups will help in identifying changes and ensuring the least privilege principle is upheld. It is recommended that an Event Rule and Notification be configured to catch changes made to IAM groups.
NOTE:
1. Event Rules are compartment scoped and will detect events in child compartments; it is therefore recommended to create the Event Rule at the root compartment level.
2. This policy will not trigger an alert if you have at least one matching Event Rule and Notification, regardless of whether your OCI tenancy has a single or multiple compartments.
This is applicable to oci cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the Event into the Search box at the top of the Console.\n3. Click the Event Service from the Services submenu\n4. Select the compartment that should host the rule\n5. Click Create Rule\n6. Provide a Display Name and Description\n7. Create a Rule Condition by selecting Identity in the Service Name Drop-down and selecting Group – Create, Group – Delete and Group – Update\n8. In the Actions section select Notifications as Action Type\n9. Select the Compartment that hosts the Topic to be used.\n10. Select the Topic to be used\n11. Optionally add Tags to the Rule\n12. Click Create Rule. |
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-postgresql-deployment-info' AND json.rule = deployment.enable_public_endpoints is true``` | IBM Cloud Database PostgreSQL is exposed to public
The policy identifies IBM Cloud Database PostgreSQL instances exposed to the public via public endpoints. When provisioning an IBM Cloud database service, it is generally not recommended to use public endpoints because it can pose a security risk. Public endpoints can make your database accessible to anyone with internet access, potentially leaving your data vulnerable to unauthorized access or malicious attacks. Instead, it is recommended to use private endpoints when provisioning a database service in IBM Cloud.
This is applicable to ibm cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: Refer to the IBM documentation to change the service endpoints from public to private\nhttps://cloud.ibm.com/docs/cloud-databases?topic=cloud-databases-service-endpoints. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-services-list' AND json.rule = services[?any( name ends with "/cloudasset.googleapis.com" and state equals "ENABLED" )] does not exist``` | GCP Cloud Asset Inventory is disabled
This policy identifies GCP accounts where GCP Cloud Asset Inventory is disabled.
GCP Cloud Asset Inventory is a metadata inventory service that allows you to view, monitor, and analyze Google Cloud and Anthos assets across projects and services. This data can prove to be crucial in security analysis, resource change tracking, and compliance auditing.
It is recommended to enable GCP Cloud Asset Inventory for centralized visibility and control over your cloud assets.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Under 'APIs and Services', navigate to the 'API Library' (Left Panel)\n3. Search and select 'Cloud Asset API'\n4. Click 'ENABLE'.. |
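Enabling the API is a one-liner with gcloud. A minimal sketch, assuming a hypothetical project ID:

```bash
# Enable the Cloud Asset API for a project (project ID is a placeholder)
gcloud services enable cloudasset.googleapis.com --project my-project-id
```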
```config from cloud.resource where api.name = 'azure-container-registry' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.policies.azureADAuthenticationAsArmPolicy.status contains enabled``` | Azure Container Registry with ARM audience token authentication enabled
This policy identifies Azure Container Registries that permit ARM audience tokens for authentication.
When ARM audience tokens are enabled, they allow authentication intended for broader Azure services, which could introduce potential security risks. Disabling ARM audience tokens ensures that only ACR-specific tokens are valid, enhancing security by limiting authentication exclusively to Azure Container Registry audience tokens.
As a security best practice, it is recommended to disable ARM audience tokens for Azure Container Registries.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To disable ARM audience tokens for Azure Container Registries, refer to the following link:\nhttps://learn.microsoft.com/en-us/azure/container-registry/container-registry-disable-authentication-as-arm#assign-a-built-in-policy-definition-to-disable-arm-audience-token-authentication---azure-portal. |
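The linked document also describes a CLI path. A minimal Azure CLI sketch, assuming a hypothetical registry name:

```bash
# Disable ARM audience token authentication for a registry (name is a placeholder)
az acr config authentication-as-arm update --registry myregistry --status disabled
```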
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = "permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress')].sourceCidrIp contains 0.0.0.0/0"``` | Alibaba Cloud Security group is overly permissive to all traffic
This policy identifies Security groups that are overly permissive to all traffic. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network.
This is applicable to alibaba_cloud cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In Inbound tab, Select the rule with 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0, Click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with specific IP address range.\n7. Click on 'OK'. |
```config from cloud.resource where cloud.type = 'aws' and api.name='aws-cloudtrail-describe-trails' as X; count(X) less than 1 ``` | test_aggr_pk
This is applicable to aws cloud and is considered a high severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-flexible-server' AND json.rule = properties.state equal ignore case Ready and require_secure_transport.value does not equal ignore case on``` | Azure PostgreSQL flexible server secure transport parameter is disabled
This policy identifies PostgreSQL flexible servers for which the secure transport (SSL connectivity) parameter is disabled.
Secure transport (SSL connectivity) provides a layer of security by connecting the server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between the server and client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and application.
As a security best practice, it is recommended to enable the secure transport parameter for Azure PostgreSQL flexible servers.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Navigate to 'Azure Database for PostgreSQL flexible server'\n3. Click on the reported PostgreSQL flexible server\n4. Navigate to Settings -> Server parameters\n5. Search for the parameter 'require_secure_transport' and set its VALUE to 'ON'. You can also set a minimum TLS version by setting the 'ssl_min_protocol_version' server parameter as per your business requirement.\n6. Click on 'Save'
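The parameter can also be set with the Azure CLI. A minimal sketch, assuming hypothetical server and resource group names:

```bash
# Turn the require_secure_transport server parameter back on (names are placeholders)
az postgres flexible-server parameter set \
  --resource-group MyResourceGroup --server-name my-flex-server \
  --name require_secure_transport --value on
```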
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-kubernetes-cluster' AND json.rule = 'properties.powerState.code equal ignore case Running and properties.agentPoolProfiles[?any(type equal ignore case AvailabilitySet and count less than 3)] exists'``` | Azure AKS cluster pool profile count contains less than 3 nodes
This policy identifies AKS clusters that are configured with node pools of fewer than 3 nodes. It is recommended to have at least 3 nodes in a node pool for a more resilient cluster. (Clusters with fewer than 3 nodes may experience downtime during upgrades.)
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To scale AKS cluster node pool nodes count, follow below URL:\nhttps://learn.microsoft.com/en-gb/azure/aks/scale-cluster?tabs=azure-cli. |
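Scaling can also be done with the Azure CLI. A minimal sketch, assuming hypothetical cluster, node pool, and resource group names:

```bash
# Scale a node pool to 3 nodes (names are placeholders)
az aks nodepool scale --resource-group MyResourceGroup \
  --cluster-name my-aks-cluster --name nodepool1 --node-count 3
```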
```config from cloud.resource where cloud.service = 'AWS Auto Scaling' AND api.name = 'aws-describe-auto-scaling-groups' AND json.rule = createdTime does not contain "foo"``` | Automation Audit Log Cron BUVZK Policy
Automation Audit Log Policy
This is applicable to aws cloud and is considered a high severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-networking-loadbalancer' AND json.rule = lifecycleState equals ACTIVE and listeners.* is not empty and listeners.*.sslConfiguration.certificateName is empty and listeners.*.protocol does not equal ignore case HTTP``` | OCI Load balancer listener is not configured with SSL certificate
This policy identifies Load balancers for which the listener is not configured with an SSL certificate.
Enforcing an SSL connection helps prevent unauthorized users from reading sensitive data that is intercepted as it travels through the network, between clients/applications and cache servers.
It is recommended to implement SSL between the load balancer and your client, so that the load balancer can accept encrypted traffic from a client.
This is applicable to oci cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To configure SSL to your Load balancer listener follow below URLs details:\nFor adding certificate - https://docs.oracle.com/en-us/iaas/Content/Balance/Tasks/create_certificate.htm\n\nFor editing listener - https://docs.oracle.com/en-us/iaas/Content/Balance/Tasks/managinglisteners_topic-Editing_Listeners.htm. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-cognitive-services-account' AND json.rule = properties.provisioningState equal ignore case Succeeded and (identity.type does not exist or identity.type equal ignore case None)``` | Azure Cognitive Services account is not configured with managed identity
This policy identifies Azure Cognitive Services accounts that are not configured with a managed identity. A managed identity can be used to authenticate to any service that supports Azure AD authentication, without having credentials in your code. Storing credentials in code increases the threat surface in case of exploitation, and managed identities eliminate the need for developers to manage credentials. As a security best practice, it is recommended to assign a managed identity to your Cognitive Services account.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to Azure portal\n2. Navigate to 'Azure AI services'\n3. Click on the reported Azure AI service\n4. Select 'Identity' under 'Resource Management' from left panel\n5. Configure either System assigned or User assigned identity\n6. Click on Save. |
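A system-assigned identity can also be configured via the Azure CLI. A minimal sketch, assuming hypothetical account and resource group names:

```bash
# Assign a system-assigned managed identity to the account (names are placeholders)
az cognitiveservices account identity assign \
  --name my-ai-account --resource-group MyResourceGroup
```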
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-compute-instance' AND json.rule = launchOptions.isPvEncryptionInTransitEnabled is false``` | OCI Compute Instance boot volume in-transit data encryption is disabled
This policy identifies OCI Compute Instances whose boot or block volumes have in-transit data encryption disabled. It is recommended that Compute Instance boot or block volumes be configured with in-transit data encryption to minimize the risk of sensitive data being leaked.
This is applicable to oci cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Click Edit\n5. Click on Show Advanced Options\n6. Select USE IN-TRANSIT ENCRYPTION\n7. Click Save Changes\n\nNote : To update the instance properties, the instance must be rebooted.. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-role-assignment' AND json.rule = '((_DateTime.ageInDays($.properties.updatedOn) < 60) and (properties.principalType contains User) and (properties.scope starts with"/subscriptions"))' addcolumn properties.roleDefinition.properties.roleName properties.roleDefinition.properties.type properties.principalId properties.updatedBy``` | llatorre - RoleAssigment v4
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: Go to investigate and identify the user that was assigned this role:\nconfig from cloud.resource where api.name = 'azure-active-directory-user' AND json.rule = id contains <principalId_from_the_json_output>\n\nGo to investigate and identify who assigned this role:\nconfig from cloud.resource where api.name = 'azure-active-directory-user' AND json.rule = id contains <updatedby_from_the_json_output>. |
```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-iam-get-account-password-policy' AND json.rule='isDefaultPolicy is true or minimumPasswordLength < 16 or minimumPasswordLength does not exist'``` | AWS IAM password policy does not have a minimum of 16 characters
This is applicable to aws cloud and is considered a high severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = nodePools[?any(config.shieldedInstanceConfig.enableSecureBoot does not exist or config.shieldedInstanceConfig.enableSecureBoot is false)] exists``` | GCP Kubernetes cluster shielded GKE node with Secure Boot disabled
This policy identifies GCP Kubernetes cluster shielded GKE nodes with Secure Boot disabled. An attacker may seek to alter boot components to persist malware or rootkits during system initialization. It is recommended to enable Secure Boot for Shielded GKE Nodes to verify the digital signature of node boot components.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Note: Once a Node pool is provisioned, it cannot be updated to enable Secure Boot. You must create new Node pools within the cluster with Secure Boot enabled. You will also need to migrate workloads from existing non-conforming Node pools to the newly created Node pool, then delete the non-conforming pools.\n\nTo create a node pool with Secure Boot enabled, follow the below steps,\n\n1. Log in to gcloud console\n2. Navigate to service 'Kubernetes Engine'\n3. Select the alerted cluster and click 'ADD NODE POOL'\n4. Ensure that the 'Enable secure boot' checkbox is checked under the 'Shielded options' in section 'Security'\n5. Click on 'CREATE'.. |
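Creating the replacement node pool can be scripted as well. A minimal gcloud sketch, assuming hypothetical pool, cluster, and zone names:

```bash
# Create a replacement node pool with Secure Boot enabled (names/zone are placeholders)
gcloud container node-pools create secure-pool \
  --cluster my-cluster --zone us-central1-a \
  --shielded-secure-boot --shielded-integrity-monitoring
```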
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-storage-buckets-list' AND json.rule = iamConfiguration.publicAccessPrevention does not equal ignore case "enforced" and iam.bindings[*] size greater than 0 and iam.bindings[*].members[*] any equal allUsers``` | GCP Storage buckets are publicly accessible to all users
This policy identifies the buckets which are publicly accessible to all users. Enabling public access to Storage buckets allows anybody with an internet connection to access sensitive information that is critical to business. Access over a whole bucket is controlled by IAM. Access to individual objects within the bucket is controlled by its ACLs.
This is applicable to gcp cloud and is considered a high severity issue.
Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION'].
Mitigation of this issue can be done as follows: To remove public access from a bucket, either enable "Public access prevention" or edit/remove any permissions granted to 'allUsers' on a bucket.\n\nTo edit/remove permissions granted over the bucket, follow the instructions below:\n1. Login to GCP Portal\n2. Go to the Cloud Storage Buckets page.\n3. Go to Buckets\n4. Click on the Storage bucket for which alert has been generated\n5. Select the Permissions tab near the top of the page.\n6. Edit/remove any permissions granted to 'allUsers'\n \nTo prevent public access over the bucket, follow the instructions below:\n1. Login to GCP Portal\n2. Go to the Cloud Storage Buckets page.\n3. Go to Buckets\n4. Click on the Storage bucket for which alert has been generated\n5. Select the Permissions tab near the top of the page.\n6. In the Public access card, click "Prevent public access" to enforce public access prevention.. |
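Both remediations can be done from the command line. A minimal gsutil sketch, assuming a hypothetical bucket name:

```bash
# Remove the allUsers grant and enforce public access prevention
# (bucket name is a placeholder)
gsutil iam ch -d allUsers gs://my-bucket
gsutil pap set enforced gs://my-bucket
```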
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-iam-identity-account-setting' AND json.rule = restrict_create_platform_apikey does not equal "RESTRICTED"``` | IBM Cloud API key creation is not restricted in account settings
This policy identifies IBM Cloud accounts where API key creation is not restricted in account settings. By default, all members of an account can create API keys. Enabling the API key creation restriction will prevent users from creating API keys unless the correct access is granted explicitly. It is recommended to enable this setting and grant access only on a need basis.
This is applicable to ibm cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: Follow the below mentioned URL to enable the API key creation setting:\n\nhttps://cloud.ibm.com/docs/account?topic=account-allow-api-create&interface=ui#allow-all-api-create. |
```config from cloud.resource where api.name = 'gcloud-iam-service-accounts-keys-list' as X; config from cloud.resource where api.name = 'gcloud-iam-service-accounts-list' as Y; filter '($.X.name does not contain prisma-cloud and $.X.name contains iam.gserviceaccount.com and $.X.name contains $.Y.email and $.X.keyType contains USER_MANAGED)' ; show X;``` | GCP User managed service accounts have user managed service account keys
This policy identifies user-managed service accounts that use user-managed service account keys instead of Google-managed ones. For user-managed keys, the user has to take ownership of key management activities. Even with owner precautions, keys can be easily leaked through common development malpractices like checking keys into source code, leaving them in a downloads directory, or accidentally leaving them on support blogs/channels. It is therefore recommended to limit the use of user-managed service account keys and instead use Google-managed keys, which cannot be downloaded.
Note: This policy might alert the service accounts which are not created using Terraform for cloud account onboarding. These alerts are valid as no user-managed service account should be used for cloud account onboarding.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['KEYS_AND_SECRETS'].
Mitigation of this issue can be done as follows: Follow the below mentioned URL to delete user managed service account keys:\n\nhttps://cloud.google.com/iam/docs/creating-managing-service-account-keys#deleting. |
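Key cleanup can be scripted with gcloud. A minimal sketch, assuming a hypothetical service account email and key ID:

```bash
# List and delete a user-managed key (key ID and account email are placeholders)
gcloud iam service-accounts keys list --iam-account [email protected]
gcloud iam service-accounts keys delete KEY_ID --iam-account [email protected]
```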
```config from cloud.resource where api.name = 'aws-elasticache-cache-clusters' as X; config from cloud.resource where api.name = 'aws-elasticache-describe-replication-groups' as Y; filter '$.Y.memberClusters contains $.X.cacheClusterId and $.X.cacheClusterStatus equals available and ($.X.cacheSubnetGroupName is empty or $.X.cacheSubnetGroupName does not exist)'; show Y;``` | AWS ElastiCache cluster not associated with VPC
This policy identifies ElastiCache Clusters which are not associated with a VPC. It is highly recommended to associate ElastiCache with a VPC, as it provides a virtual network in your own logically isolated area and features such as selecting an IP address range, creating subnets, and configuring route tables, network gateways, and security settings.
NOTE: If you created your AWS account before 2013-12-04, you might have support for the EC2-Classic platform in some regions. AWS has deprecated the use of Amazon EC2-Classic for launching ElastiCache clusters. All current generation nodes are launched in Amazon Virtual Private Cloud only. So this policy only applies to legacy ElastiCache clusters which were created using EC2-Classic.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: AWS ElastiCache cluster VPC association can be done only at the time of the creation of the cluster. So to fix this alert, create a new cluster within a VPC, then migrate all required ElastiCache cluster data from the reported ElastiCache cluster to this newly created cluster and delete the reported ElastiCache cluster.\n\nTo create a new ElastiCache cluster within a VPC, perform the following:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to ElastiCache Dashboard\n4. Click on Redis or Memcached based on your requirement\n5. Choose cluster parameters as per your requirement\n6. Click on 'Advanced Redis settings' to expand the cluster advanced settings panel\n7. Select desired VPC for 'Subnet group' along with other parameters\nNOTE: If you don't specify a subnet when you launch a cluster, the cluster launches into your default Amazon VPC.\n8. Click on the 'Create' button to launch your new ElastiCache cluster\n\nTo delete the reported ElastiCache cluster, perform the following:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to ElastiCache Dashboard\n4. Select reported cluster\n5. Click on 'Delete' button\n6. In the 'Delete Cluster' dialog box, if you want a backup for your cluster select 'Yes' from the 'Create final backup' dropdown menu, provide a name for the cluster backup, then click on 'Delete'.. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' AND json.rule = (properties.publicNetworkAccess does not equal ignore case disabled and properties.networkAcls does not exist) or (properties.publicNetworkAccess does not equal ignore case disabled and properties.networkAcls.defaultAction equal ignore case allow ) ``` | Azure Key Vault Firewall is not enabled
This policy identifies Azure Key Vaults which have the Firewall disabled. Enabling the Azure Key Vault Firewall feature prevents unauthorised traffic from reaching your key vault. It is recommended to enable the Azure Key Vault Firewall, which provides an additional layer of protection for your secrets.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure portal\n2. Navigate to 'Key vaults', and select the reported key vault from the list\n3. Under 'Settings' select 'Networking'\n4. In order to "Allow public access from specific virtual networks and IP addresses", Click on 'Allow public access from specific virtual networks and IP addresses' Under 'Firewalls and virtual networks'. Add 'IPv4 address or CIDR'.\n5. In order to disable public access, Click on 'Disable public access'.\n6. Click on 'Save'.. |
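The firewall default action can also be set with the Azure CLI. A minimal sketch, assuming hypothetical vault and resource group names and an example CIDR:

```bash
# Deny by default, then allow a specific network (names are placeholders)
az keyvault update --name my-key-vault --resource-group MyResourceGroup --default-action Deny
az keyvault network-rule add --name my-key-vault --ip-address 203.0.113.0/24
```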
```config from cloud.resource where api.name = 'gcloud-projects-get-iam-user' AND json.rule = user contains "[email protected]" and roles[*] contains "roles/editor" as X; config from cloud.resource where api.name = 'gcloud-cloud-run-revisions-list' AND json.rule = spec.serviceAccountName contains "[email protected]" as Y; filter ' $.X.user equals $.Y.spec.serviceAccountName '; show Y;``` | GCP Cloud Run service revision is using default service account with editor role
This policy identifies GCP Cloud Run service revisions that are utilizing the default service account with the editor role.
GCP Compute Engine Default service account is automatically created upon enabling the Compute Engine API. This service account is granted the IAM basic Editor role by default, unless explicitly disabled. Assigning default service account with the editor role to cloud run revisions could lead to privilege escalation. Granting minimal access rights helps in promoting a better security posture.
Following the principle of least privileges, it is recommended to avoid assigning default service account with the editor role to cloud run revision.
This is applicable to gcp cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Changing a service account of an existing cloud run service revision is impossible. The service revision can be deleted and a new revision with appropriate permissions can be deployed.\n\nTo delete a cloud run service revision that is serving traffic, please refer to the steps below:\n1. Login to the GCP console\n2. Navigate to the 'Cloud Run' service\n3. Click on the cloud run service on whose revision, alert is generated\n4. Go to the 'REVISIONS' tab\n5. Click on 'MANAGE TRAFFIC'\n6. Click on the delete icon in front of the alerting revision. Adjust traffic distribution appropriately.\n7. Click on 'Save'\n8. Under the 'REVISIONS' tab, click the actions button (three dots) in front of the alerting revision.\n9. Click 'Delete'\n10. Click 'DELETE'\n\nTo delete a cloud run service revision that is not serving any traffic, please refer to the steps below:\n1. Login to the GCP console\n2. Navigate to the 'Cloud Run' service\n3. Click on the cloud run service on whose revision, alert is generated\n4. Go to the 'REVISIONS' tab\n5. Under the 'REVISIONS' tab, click the actions button (three dots) in front of the alerting revision.\n6. Click 'Delete'\n7. Click 'DELETE'. |
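Deploying a new revision under a least-privilege service account can be done in one gcloud call. A minimal sketch, assuming hypothetical service, region, and service account names:

```bash
# Deploy a new revision that runs as a dedicated, least-privilege service account
# (service, region, and account email are placeholders)
gcloud run services update my-service --region us-central1 \
  [email protected]
```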
```config from cloud.resource where api.name = 'gcloud-logging-metric' as X; config from cloud.resource where api.name = 'gcloud-monitoring-policies-list' as Y; filter '($.Y.conditions[*].metricThresholdFilter contains $.X.name) and ($.X.filter contains "protoPayload.serviceName=" or $.X.filter contains "protoPayload.serviceName =") and ($.X.filter does not contain "protoPayload.serviceName !=" and $.X.filter does not contain "protoPayload.serviceName!=") and $.X.filter contains "cloudresourcemanager.googleapis.com" and ($.X.filter contains "ProjectOwnership OR projectOwnerInvitee" or $.X.filter contains "ProjectOwnership or projectOwnerInvitee") and ($.X.filter contains "protoPayload.serviceData.policyDelta.bindingDeltas.action=" or $.X.filter contains "protoPayload.serviceData.policyDelta.bindingDeltas.action =") and ($.X.filter does not contain "protoPayload.serviceData.policyDelta.bindingDeltas.action!=" and $.X.filter does not contain "protoPayload.serviceData.policyDelta.bindingDeltas.action !=") and ($.X.filter contains "protoPayload.serviceData.policyDelta.bindingDeltas.role=" or $.X.filter contains "protoPayload.serviceData.policyDelta.bindingDeltas.role =") and ($.X.filter does not contain "protoPayload.serviceData.policyDelta.bindingDeltas.role!=" and $.X.filter does not contain "protoPayload.serviceData.policyDelta.bindingDeltas.role !=") and $.X.filter contains "REMOVE" and $.X.filter contains "ADD" and $.X.filter contains "roles/owner"'; show X; count(X) less than 1``` | GCP Log metric filter and alert does not exist for Project Ownership assignments/changes
This policy identifies the GCP accounts which do not have a log metric filter and alert for Project Ownership assignments/changes. Project ownership carries the highest level of privileges on a project; to avoid misuse of project resources, the project ownership assignment/change actions mentioned should be monitored and alerted to concerned recipients.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign in to GCP Console\n2. Navigate to 'Logs-based metrics' under the 'Logging' section.\n3. Click on 'CREATE METRIC'.\n4. Provide 'Metric Type' and 'Details'.\n5. In 'Filter selection', add filter as \n(protoPayload.serviceName="cloudresourcemanager.googleapis.com") AND (ProjectOwnership OR projectOwnerInvitee) OR (protoPayload.serviceData.policyDelta.bindingDeltas.action="REMOVE" AND protoPayload.serviceData.policyDelta.bindingDeltas.role="roles/owner") OR (protoPayload.serviceData.policyDelta.bindingDeltas.action="ADD" AND protoPayload.serviceData.policyDelta.bindingDeltas.role="roles/owner")\n6. Click on 'CREATE METRIC'.\n7. Under 'User-defined metrics' section, choose the metric you created in step 6 and click on the kebab menu (Vertical 3 dots) on the right side of the metrics\n8. Click on 'Create alert from metric'; it will navigate to 'Create alerting policy' under the section 'Monitoring'.\n9. Add the metric name created above if not auto-filled in the Monitoring filter. Choose an appropriate value for other alert condition parameters as desired. Then Click on 'NEXT'\n10. Configure all alert trigger settings as desired. Then Click on 'NEXT'\n11. Configure notifications as desired and provide an appropriate name for the alert policy. Then Click on 'NEXT'\n12. Click on 'CREATE POLICY'.. |
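For teams that script this remediation, the metric filter from step 5 can also be created with the google-cloud-logging Python client. The sketch below assumes application-default credentials and placeholder project and metric names; the alerting policy (steps 8-12) still needs to be created separately.

```python
# Sketch: create the log-based metric from step 5 programmatically,
# assuming the google-cloud-logging client library is installed and
# application-default credentials are configured. Project and metric
# names are placeholders.
from google.cloud import logging

FILTER = (
    '(protoPayload.serviceName="cloudresourcemanager.googleapis.com") '
    'AND (ProjectOwnership OR projectOwnerInvitee) '
    'OR (protoPayload.serviceData.policyDelta.bindingDeltas.action="REMOVE" '
    'AND protoPayload.serviceData.policyDelta.bindingDeltas.role="roles/owner") '
    'OR (protoPayload.serviceData.policyDelta.bindingDeltas.action="ADD" '
    'AND protoPayload.serviceData.policyDelta.bindingDeltas.role="roles/owner")'
)

client = logging.Client(project="my-project")
metric = client.metric(
    "project-ownership-changes",
    filter_=FILTER,
    description="Counts project ownership assignments/changes",
)
if not metric.exists():
    metric.create()
```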
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = "(((((acl.grants[?(@.grantee=='AllUsers')] size > 0) or policyStatus.isPublic is true) and publicAccessBlockConfiguration does not exist) or ((acl.grants[?(@.grantee=='AllUsers')] size > 0) and publicAccessBlockConfiguration.ignorePublicAcls is false) or (policyStatus.isPublic is true and publicAccessBlockConfiguration.restrictPublicBuckets is false)) and websiteConfiguration does not exist) and ((policy.Statement[*].Condition.Bool.aws:SecureTransport does not exist) or ((policy.Statement[?(@.Principal=='*' || @.Principal.AWS=='*')].Action contains s3: or policy.Statement[?(@.Principal=='*' || @.Principal.AWS=='*')].Action[*] contains s3:) and (policy.Statement[?(@.Principal=='*' && @.Effect=='Allow')].Condition.Bool.aws:SecureTransport contains false or policy.Statement[?(@.Principal.AWS=='*' && @.Effect=='Allow')].Condition.Bool.aws:SecureTransport contains false or policy.Statement[?(@.Principal=='*' && @.Effect=='Allow')].Condition.Bool.aws:SecureTransport contains FALSE or policy.Statement[?(@.Principal.AWS=='*' && @.Effect=='Allow')].Condition.Bool.aws:SecureTransport contains FALSE or policy.Statement[?(@.Principal=='*' && @.Effect=='Deny')].Condition.Bool.aws:SecureTransport contains true or policy.Statement[?(@.Principal.AWS=='*' && @.Effect=='Deny')].Condition.Bool.aws:SecureTransport contains true or policy.Statement[?(@.Principal=='*' && @.Effect=='Deny')].Condition.Bool.aws:SecureTransport contains TRUE or policy.Statement[?(@.Principal.AWS=='*' && @.Effect=='Deny')].Condition.Bool.aws:SecureTransport contains TRUE)))"``` | pkodoth - AWS S3 bucket not configured with secure data transport policy
This policy identifies S3 buckets which are not configured with secure data transport policy. AWS S3 buckets should enforce encryption of data over the network using Secure Sockets Layer (SSL). It is recommended to add a bucket policy that explicitly denies (Effect: Deny) all access (Action: s3:*) from anybody who browses (Principal: *) to Amazon S3 objects within an Amazon S3 bucket if they are not accessed through HTTPS (aws:SecureTransport: false).
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. Navigate to Amazon S3 Dashboard\n3. Click on 'Buckets' (Left Panel)\n4. Choose the reported S3 bucket\n5. On 'Permissions' tab, Click on 'Bucket Policy'\n6. Add a bucket policy that explicitly denies (Effect: Deny) all access (Action: s3:*) from anybody who browses (Principal: *) to Amazon S3 objects within an Amazon S3 bucket if they are not accessed through HTTPS (aws:SecureTransport: false). Below is the sample policy:\n{\n "Sid": "ForceSSLOnlyAccess",\n "Effect": "Deny",\n "Principal": "*",\n "Action": "s3:GetObject",\n "Resource": "arn:aws:s3:::bucket_name/*",\n "Condition": {\n "Bool": {\n "aws:SecureTransport": "false"\n }\n }\n}. |
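Where the console steps above are scripted, the same ForceSSLOnlyAccess statement can be attached with boto3. This is a minimal sketch assuming credentials with s3:PutBucketPolicy; note that put_bucket_policy replaces any existing policy, so merge with the current policy document first if one exists. 'bucket_name' is a placeholder.

```python
# Sketch: attach the ForceSSLOnlyAccess statement from the steps above
# with boto3. put_bucket_policy overwrites any existing policy, so
# fetch and merge the current policy first if the bucket has one.
import json
import boto3

bucket = "bucket_name"  # placeholder
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ForceSSLOnlyAccess",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```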
```config from cloud.resource where api.name = 'gcloud-vertex-ai-workbench-instance' AND json.rule = state equals "ACTIVE" and gceSetup.serviceAccounts[*].email contains "[email protected]" as X; config from cloud.resource where api.name = 'gcloud-projects-get-iam-user' AND json.rule = user contains "[email protected]" and roles[*] contains "roles/editor" as Y; filter ' $.X.gceSetup.serviceAccounts[*].email equals $.Y.user'; show X;``` | GCP Vertex AI Workbench Instance is using default service account with the editor role
This policy identifies GCP Vertex AI Workbench Instances that are using the default service account with the Editor role.
The Compute Engine default service account is automatically created with an autogenerated name and email address when you enable the Compute Engine API. By default, this service account is granted the IAM basic Editor role unless you explicitly disable this behavior. If this service account is assigned to a Vertex AI Workbench instance, it may lead to potential privilege escalation.
In line with the principle of least privilege, it is recommended that Vertex AI Workbench Instances are not assigned the 'Compute Engine default service account', particularly when the Editor role is granted to the service account.
This is applicable to gcp cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Navigate to the 'Vertex AI' service\n3. In the side panel, under 'Notebooks', go to 'Workbench'\n4. Under 'INSTANCES' tab, select 'VIEW' as 'INSTANCES'\n5. Click on the alerting instance\n6. Under the 'SYSTEM' tab, in front of 'VM details', click on the 'View in Compute Engine' link\n7. Stop the VM by clicking on the 'STOP' button. Click the 'STOP' button on the confirmation dialogue.\n8. Once the VM has been stopped, click on the 'EDIT' button\n9. Under 'Identity and API access', use the dropdown to select a non-default service account as per needs\n10. Click on 'Save'\n11. Click on 'START/RESUME' from the top menu. |
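As an alternative to steps 7-10, the service account on the stopped underlying Compute Engine VM can be swapped with the google-cloud-compute client. A sketch follows, assuming placeholder project, zone, instance, and service-account values; the instance must be stopped before the call.

```python
# Sketch: swap the service account on the stopped underlying Compute
# Engine VM, assuming the google-cloud-compute library. All names and
# the replacement service account are placeholders.
from google.cloud import compute_v1

client = compute_v1.InstancesClient()
request_body = compute_v1.InstancesSetServiceAccountRequest(
    email="limited-sa@my-project.iam.gserviceaccount.com",  # placeholder
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)
# The instance must be TERMINATED (stopped) for this call to succeed.
op = client.set_service_account(
    project="my-project",
    zone="us-central1-a",
    instance="my-workbench-vm",
    instances_set_service_account_request_resource=request_body,
)
op.result()  # wait for the zonal operation to finish
```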
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-kubernetes-cluster' AND json.rule = properties.networkProfile.networkPlugin does not contain azure``` | Azure AKS cluster Azure CNI networking not enabled
Azure CNI provides the following features over kubenet networking:
- Every pod in the cluster is assigned an IP address in the virtual network. The pods can directly communicate with other pods in the cluster, and other nodes in the virtual network.
- Pods in a subnet that have service endpoints enabled can securely connect to Azure services, such as Azure Storage and SQL DB.
- You can create user-defined routes (UDR) to route traffic from pods to a Network Virtual Appliance.
- Support for Network Policies securing communication between pods.
This policy checks your AKS cluster for the Azure CNI network plugin and generates an alert if not found.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To create a new AKS cluster with the Azure CNI network plugin enabled, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/aks/configure-azure-cni. |
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-key-protect-key' AND json.rule = 'extractable is false and state equals 1 and ((lastRotateDate does not exist and _DateTime.ageInDays(creationDate) > 90 ) or _DateTime.ageInDays(lastRotateDate) > 90)'``` | IBM Cloud Key Protect root key have aged more than 90 days without being rotated
This policy identifies IBM Cloud Key Protect root keys that have aged more than 90 days without being rotated. Rotating keys on a regular basis is a security best practice: if the keys are compromised, the data in the underlying service is still secure with the new keys.
This is applicable to ibm cloud and is considered an informational severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. Click on Menu Icon and navigate to 'Resource list', From the list of resources, select your provisioned instance of Key Protect in which the reported root key resides.\n3. Select the key and click on three dots on the right corner of the row to open the list of options for the key that you want to rotate.\n4. Click on 'Rotate'.\n5. In the 'Rotation' window, click on 'Rotate Key' \n6. In order to set the rotation policy, Under 'Manage rotation policy', enable 'Rotation policy' checkbox and select the day intervals for the key rotation as per the requirement.\n7. Click on 'Set policy' button to establish this policy.. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-storage-buckets-list' AND json.rule = '(acl[*].email exists and acl[*].email contains logging) and (versioning.enabled is false or versioning does not exist)'``` | GCP Storage log buckets have object versioning disabled
This policy identifies Storage log buckets which have object versioning disabled. Enabling object versioning on storage log buckets will protect your cloud storage data from being overwritten or accidentally deleted. It is recommended to enable object versioning feature on all storage buckets where sinks are configured.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Follow the below mentioned URL to enable object versioning on a bucket:\n\nhttps://cloud.google.com/storage/docs/using-object-versioning#set. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elasticache-cache-clusters' AND json.rule = engine equals redis and transitEncryptionEnabled is false and replicationGroupId does not exist``` | AWS ElastiCache Redis with in-transit encryption disabled (Non-replication group)
This policy identifies ElastiCache Redis clusters that are not part of a replication group (individual clusters) and have in-transit encryption disabled. It is highly recommended to implement in-transit encryption in order to protect data from unauthorized access as it travels through the network between clients and cache servers. Enabling encryption in transit helps prevent unauthorized users from reading sensitive data as it moves between your Redis clusters and their associated cache storage systems.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: AWS ElastiCache Redis in-transit encryption can be set, only at the time of creation. So to resolve this alert, create a new cluster with in-transit encryption enabled, then migrate all required ElastiCache Redis cluster data from the reported ElastiCache Redis cluster to this newly created cluster and delete reported ElastiCache Redis cluster.\n\nTo create new ElastiCache Redis cluster with In-transit encryption set, perform the following:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to ElastiCache Dashboard\n4. Click on Redis\n5. Click on 'Create' button\n6. On the 'Create your Amazon ElastiCache cluster' page,\na. Select 'Redis' cache engine type.\nb. Enter a name for the new cache cluster\nc. Select Redis engine version from 'Engine version compatibility' dropdown list.\nNote: As of July 2018, In-transit encryption can be enabled only for AWS ElastiCache clusters with Redis engine version 3.2.6 and 4.0.10.\nd. Click on 'Advanced Redis settings' to expand the cluster advanced settings panel\ne. Select 'Encryption in-transit' checkbox to enable encryption along with other necessary parameters\n7. Click on 'Create' button to launch your new ElastiCache Redis cluster\n\nTo delete reported ElastiCache Redis cluster, perform the following:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to ElastiCache Dashboard\n4. Click on Redis\n5. Select reported Redis cluster\n6. Click on 'Delete' button\n7. In the 'Delete Cluster' dialog box, if you want a backup for your cluster select 'Yes' from the 'Create final backup' dropdown menu, provide a name for the cluster backup, then click on 'Delete'.. |
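To find affected clusters before migrating, a short boto3 audit mirroring this policy's logic can help; the sketch below assumes read-only ElastiCache permissions in the current region.

```python
# Sketch: list Redis cache clusters that are not in a replication group
# and have in-transit encryption disabled, mirroring this policy.
import boto3

ec = boto3.client("elasticache")
paginator = ec.get_paginator("describe_cache_clusters")
for page in paginator.paginate():
    for cluster in page["CacheClusters"]:
        if (
            cluster.get("Engine") == "redis"
            and not cluster.get("TransitEncryptionEnabled")
            and not cluster.get("ReplicationGroupId")
        ):
            print(cluster["CacheClusterId"])
```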
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-iam-service-accounts-keys-list' AND json.rule = 'name contains iam.gserviceaccount.com and (_DateTime.ageInDays($.validAfterTime) > -1) and keyType equals USER_MANAGED'``` | bboiko test 02 - policy
This is applicable to gcp cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-mysql-flexible-server' AND json.rule = properties.state equal ignore case "Ready" and require_secure_transport.value equal ignore case "OFF"``` | Azure MySQL database flexible server SSL enforcement is disabled
This policy identifies Azure MySQL database flexible servers for which the SSL enforcement is disabled. SSL connectivity helps to provide a new layer of security, by connecting database server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between database server and client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and application.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To enable MySQL database flexible server SSL connection, refer below URL:\nhttps://docs.microsoft.com/en-us/azure/mysql/flexible-server/how-to-connect-tls-ssl. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = settings.ipConfiguration.authorizedNetworks[?any(value contains 0.0.0.0/0 or value contains ::/0)] exists``` | GCP SQL instance configured with overly permissive authorized networks
This policy identifies GCP Cloud SQL instances that are configured with overly permissive authorized networks. You can connect to the SQL instance securely by using the Cloud SQL Proxy or adding your client's public address as an authorized network. If your client application is connecting directly to a Cloud SQL instance on its public IP address, you have to add your client's external address as an Authorized network for securing the connection. It is recommended to add specific IPs instead of public IPs as authorized networks as per the requirement.
Reference: https://cloud.google.com/sql/docs/mysql/authorize-networks
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Navigate to the 'Instances' page on section 'SQL'(Left Panel)\n3. Click on the alerted instance name \n4. Select the 'Connections' tab on the left panel\n5. Inspect for the networks added as Authorized Networks\n6. If any public IP is set for 'Authorized networks', review and delete the network by clicking the delete icon on the network\n7. Click on 'DONE'.\n8. Click on 'SAVE'.. |
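If the authorized networks are managed programmatically, step 6 can be performed through the Cloud SQL Admin API with google-api-python-client. The sketch below replaces the authorized networks list with one specific CIDR; project, instance, and CIDR values are placeholders.

```python
# Sketch: replace overly permissive authorized networks with a specific
# CIDR via the Cloud SQL Admin API. Note that PATCH replaces the whole
# authorizedNetworks list, so include every network you want to keep.
from googleapiclient import discovery

service = discovery.build("sqladmin", "v1beta4")
body = {
    "settings": {
        "ipConfiguration": {
            "authorizedNetworks": [
                {"name": "office", "value": "203.0.113.0/24"}  # placeholder
            ]
        }
    }
}
req = service.instances().patch(
    project="my-project", instance="my-instance", body=body
)
print(req.execute())
```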
```config from cloud.resource where api.name = 'aws-securityhub-hub' AND json.rule = SubscribedAt exists``` | test
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-storage-buckets-list' AND json.rule = versioning.enabled is false or versioning does not exist``` | GCP Storage bucket with object versioning disabled
This policy identifies GCP Storage buckets that have object versioning disabled.
Object versioning is a method of keeping multiple variants of an object in the same storage bucket. Enabling object versioning on storage buckets will protect your cloud storage data from being overwritten or accidentally deleted.
It is recommended to enable the object versioning feature on all storage buckets.
This is applicable to gcp cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the GCP console\n2. Navigate to the Cloud Storage Buckets page. Select 'Buckets' from the left panel\n3. Click on the reported bucket\n4. Go to the 'Protection' tab\n5. Under the 'Object versioning' section, select 'OBJECT VERSIONING OFF'\n6. In the 'Turn on object versioning' dialog, select the 'Add recommended lifecycle rules to manage version costs' checkbox if required.\n7. Click on 'CONFIRM'.. |
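For buckets managed in code, steps 4-7 map to a couple of calls in the google-cloud-storage client; the sketch below also adds a lifecycle rule to manage version costs, as the console dialog suggests. The bucket name is a placeholder.

```python
# Sketch: turn on object versioning and cap version costs with a
# lifecycle rule, assuming the google-cloud-storage library and
# application-default credentials.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-bucket")  # placeholder
bucket.versioning_enabled = True
# Keep at most 3 noncurrent versions of each object.
bucket.add_lifecycle_delete_rule(number_of_newer_versions=3)
bucket.patch()
```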
```config from cloud.resource where cloud.type = 'aws' and api.name='aws-cloudtrail-describe-trails' AND json.rule='logFileValidationEnabled is false'``` | AWS CloudTrail log validation is not enabled in all regions
This policy identifies AWS CloudTrails in which log validation is not enabled in all regions. CloudTrail log file validation creates a digitally signed digest file containing a hash of each log that CloudTrail writes to S3. These digest files can be used to determine whether a log file was modified after CloudTrail delivered the log. It is recommended that file validation be enabled on all CloudTrails.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Access the 'CloudTrail' service.\n4. For each trail reported, under Configuration > Storage Location, make sure 'Enable log file validation' is set to 'Yes'.. |
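The same setting can be flipped with boto3; a minimal sketch, assuming the trail name is known:

```python
# Sketch: enable log file validation on an existing trail.
# "my-trail" is a placeholder for the reported trail name.
import boto3

ct = boto3.client("cloudtrail")
ct.update_trail(Name="my-trail", EnableLogFileValidation=True)
```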
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-instances-list' AND json.rule = '(status equals RUNNING and name does not start with "gke-") and shieldedInstanceConfig exists and (shieldedInstanceConfig.enableVtpm is false or shieldedInstanceConfig.enableIntegrityMonitoring is false)'``` | GCP VM instance with Shielded VM features disabled
This policy identifies VM instances which have Shielded VM features disabled. Shielded VMs are virtual machines (VMs) on Google Cloud Platform hardened by a set of security controls that help defend against rootkits and bootkits. Shielded VM's verifiable integrity is achieved through the use of Secure Boot, virtual trusted platform module (vTPM)-enabled Measured Boot, and integrity monitoring. Shielded VM instances run firmware which is signed and verified using Google's Certificate Authority, ensuring that the instance's firmware is unmodified and establishing the root of trust for Secure Boot.
NOTE: You can only enable Shielded VM options on instances that have Shielded VM support. This policy reports VM instances that have Shielded VM support and are disabled with the Shielded VM features.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate to the VM instances page\n3. STOP the reported VM instance before editing the instance\nNOTE: Before stopping the instance, check the VM instance's operational requirements.\n4. After the instance stops, click 'EDIT'\n5. In the Shielded VM section, select 'Turn on vTPM' and 'Turn on Integrity Monitoring'.\nOptionally, if you do not use any custom or unsigned drivers on the instance, also select 'Turn on Secure Boot'.\n6. Click on 'Save' and then START the instance. |
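Steps 4-6 can also be applied with the google-cloud-compute client once the instance is stopped; a sketch with placeholder names follows.

```python
# Sketch: enable vTPM and integrity monitoring on a stopped instance,
# assuming the google-cloud-compute library. Project, zone, and
# instance names are placeholders.
from google.cloud import compute_v1

client = compute_v1.InstancesClient()
config = compute_v1.ShieldedInstanceConfig(
    enable_vtpm=True,
    enable_integrity_monitoring=True,
    # enable_secure_boot=True,  # only if no custom/unsigned drivers
)
op = client.update_shielded_instance_config(
    project="my-project",
    zone="us-central1-a",
    instance="my-instance",
    shielded_instance_config_resource=config,
)
op.result()  # wait for the zonal operation to finish
```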
```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case "access" and roles[?any( role_id contains "crn:v1:bluemix:public:iam::::role:Administrator" )] exists and resources[?any( attributes[?any( name equal ignore case "serviceName" and value equal ignore case "databases-for-mysql" and operator is member of ("stringEquals", "stringMatch"))] exists and attributes[?any( name is member of ("region","resourceGroupId","serviceInstance"))] does not exist )] exists and subjects[?any( attributes[?any( name contains "iam_id" and value contains "IBMid")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-user' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.iam_id'; show Y;``` | IBM Cloud user with IAM policies provide administrative privileges for Databases for MySQL service
This policy identifies IBM Cloud users with administrator role permission for Databases for MySQL service. A user has full platform control as an administrator, including the ability to assign other users access policies and modify deployment passwords. If a user with administrator privilege becomes compromised, it may result in a compromised database. As a security best practice, it is advised to provide the least privilege access, such as allowing only the rights necessary to complete a task, instead of excessive permissions.
This is applicable to ibm cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', click on 'Users' in the left panel.\n3. Select the user which is reported and you want to edit access for.\n4. Go to 'Access' tab and check the 'Access policies' section> Click on three dots on the right corner of a row for the policy which is having Administrator permission on 'Databases for MySQL' service\n5. Click on Remove OR Edit to assign limited permission to the policy.\n6. Review the policy details that you're about to Edit/Remove, and confirm by clicking Save/Remove.. |
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-functions-applications' AND json.rule = lifecycleState equal ignore case ACTIVE and (networkSecurityGroupIds does not exist or networkSecurityGroupIds[*] is empty)``` | OCI Function Application is not configured with Network Security Groups
This policy identifies Function Applications that are not configured with Network Security Groups.
OCI Function Applications allow you to execute code in response to events without provisioning or managing infrastructure. When these function applications are not configured with NSGs, they are more vulnerable to unauthorized access and potential security breaches. NSGs help isolate and protect your functions by ensuring that only trusted sources can communicate with them.
As a best practice, it is recommended to restrict access to the application traffic by configuring network security groups.
This is applicable to oci cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To configure Network Security Group for your function application, refer below URL:\nhttps://docs.oracle.com/en-us/iaas/Content/Functions/Tasks/functionsusingnsgs.htm\nNOTE: Before you update Function Application with Network security group, make sure you have a restrictive Network Security Group already created with only specific traffic ports based on requirement.. |
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-networking-security-list' AND json.rule = (displayName contains "Default Security List for") and (ingressSecurityRules[?any((source equals 0.0.0.0/0) and (((*.destinationPortRange.min == 22 or *.destinationPortRange.max == 22) or (*.destinationPortRange.min < 22 and *.destinationPortRange.max > 22)) or (protocol equals "all") or ((tcpOptions does not exist) and (udpOptions does not exist) and (protocol does not equal 1))))] exists)``` | OCI Default Security List of every VCN allows all traffic on SSH port (22)
This policy identifies OCI Default Security lists associated with every VCN that allow unrestricted ingress access to port 22. It is recommended that no security group allows unrestricted ingress access to port 22. As a best practice, remove unfettered connectivity to remote console services, such as Secure Shell (SSH), to reduce server's exposure to risk.
This is applicable to oci cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Under Resources, click Ingress Rules.\n5. If you want to add a rule, click Add Ingress Rules\n6. If you want to delete an existing rule, click the Actions icon (three dots), and then click Remove.\n7. If you want to edit an existing rule, click the Actions icon (three dots), and then click Edit. |
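When many default security lists need cleaning, step 6 can be scripted with the OCI Python SDK. The sketch below removes world-open ingress rules that cover port 22, assuming ~/.oci/config authentication and a placeholder security list OCID; review the kept rules before applying.

```python
# Sketch: drop world-open SSH ingress rules from a security list using
# the OCI Python SDK. The OCID is a placeholder.
import oci

config = oci.config.from_file()
vcn = oci.core.VirtualNetworkClient(config)
seclist_id = "ocid1.securitylist.oc1..example"  # placeholder

seclist = vcn.get_security_list(seclist_id).data

def world_open_ssh(rule):
    # Matches 0.0.0.0/0 rules that are protocol "all" or whose TCP
    # destination port range covers 22. (Sketch only: rules with no
    # tcp_options for protocol 6 are left for manual review.)
    if rule.source != "0.0.0.0/0":
        return False
    if rule.protocol == "all":
        return True
    opts = rule.tcp_options
    if opts and opts.destination_port_range:
        r = opts.destination_port_range
        return r.min <= 22 <= r.max
    return False

kept = [r for r in seclist.ingress_security_rules if not world_open_ssh(r)]
vcn.update_security_list(
    seclist_id,
    oci.core.models.UpdateSecurityListDetails(ingress_security_rules=kept),
)
```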
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-networking-loadbalancer' AND json.rule = lifecycleState equals ACTIVE and networkSecurityGroupIds[*] does not exist``` | OCI Load balancer not configured with Network Security Groups
This policy identifies Load balancers that are not configured with Network Security Groups.
Without Network Security Groups, load balancers may be exposed to unwanted traffic, increasing the risk of security breaches and unauthorized access. NSGs allow administrators to define security rules that specify the types of traffic allowed to flow in and out of the load balancer, enhancing overall network security.
As a best practice, it is recommended to restrict access to the load balancer by configuring network security groups.
This is applicable to oci cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to OCI console\n2. Go to Networking -> Load Balancers\n3. Click on the reported load balancer\nNOTE: Before you update load balancer with Network security group, make sure you have a restrictive Network Security Group already created with only specific traffic ports based on requirements. \n4. On the 'Load Balancer Details' page, click on the 'Edit' button next to 'Network Security Groups' to make the changes.\n5. On the 'Edit Network Security Groups' dialog, select the restrictive Network Security Group and click on the 'Save Changes' button.. |
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-file-storage-export' AND json.rule = exportOptions[?any( identitySquash equals ROOT and (anonymousGid does not equal 65534 or anonymousUid does not equal 65534))] exists``` | OCI File Storage File System access is not restricted to root users
This policy identifies OCI File Storage file systems that allow unrestricted access to root users. It is recommended that File Storage file systems limit root user access by restricting root privileges, to increase the security of file systems.
This is applicable to oci cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION'].
Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Click on the export path reported in the alert\n5. Click on Edit NFS Export Options\n6. Update the NFS Export Options where Squash is set Root and update Squash UID and Squash GID to 65534. |
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-key-protect-key' AND json.rule = extractable is false and state equals 1 and ((policy[*].rotation exists and policy[*].rotation.enabled is false ) or policy[*].rotation does not exist)``` | IBM Cloud Key Protect root key automatic key rotation is not enabled
This policy identifies IBM Cloud Key Protect root keys that are not enabled with automatic key rotation. As a security best practice, it is important to rotate the keys periodically. So that if the keys are compromised, the data in the underlying service is still secure with the new keys.
This is applicable to ibm cloud and is considered an informational severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. Click on Menu Icon and navigate to 'Resource list', From the list of resources, under security section, select your provisioned instance of Key Protect, in which the reported root key resides.\n3. Select the key and click on the three dots on the right corner of the row to open the list of options for the key for which you want to set the rotation policy.\n4. Click on 'Rotate'\n5. In order to set the rotation policy, Under 'Manage rotation policy' section, enable the 'Rotation policy' checkbox and select the day intervals for the key rotation as per the requirement.\n6. Click on the 'Set policy' button to establish this policy.. |
```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains "eventName=" or $.X.filterPattern contains "eventName =") and ($.X.filterPattern does not contain "eventName!=" and $.X.filterPattern does not contain "eventName !=") and ($.X.filterPattern contains "eventSource=" or $.X.filterPattern contains "eventSource =") and ($.X.filterPattern does not contain "eventSource!=" and $.X.filterPattern does not contain "eventSource !=") and $.X.filterPattern contains organizations.amazonaws.com and $.X.filterPattern contains AcceptHandshake and $.X.filterPattern contains AttachPolicy and $.X.filterPattern contains CreateAccount and $.X.filterPattern contains CreateOrganizationalUnit and $.X.filterPattern contains CreatePolicy and $.X.filterPattern contains DeclineHandshake and $.X.filterPattern contains DeleteOrganization and $.X.filterPattern contains DeleteOrganizationalUnit and $.X.filterPattern contains DeletePolicy and $.X.filterPattern contains DetachPolicy and $.X.filterPattern contains DisablePolicyType and $.X.filterPattern contains EnablePolicyType and $.X.filterPattern contains InviteAccountToOrganization and $.X.filterPattern contains LeaveOrganization and $.X.filterPattern contains MoveAccount and $.X.filterPattern contains RemoveAccountFromOrganization and $.X.filterPattern contains UpdatePolicy and $.X.filterPattern contains UpdateOrganizationalUnit) and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1``` | AWS Log metric filter and alarm does not exist for AWS Organization changes
This policy identifies the AWS regions that do not have a log metric filter and alarm for AWS Organizations changes. Monitoring changes to AWS Organizations will help to ensure any unwanted, accidental, or intentional modifications that may lead to unauthorized access or other security breaches within the AWS account. It is recommended that a metric filter and alarm be established for detecting changes to AWS Organization's configurations.
NOTE: This policy will trigger an alert if you have at least one CloudTrail trail with multi-region enabled that logs all management events in your account and is not set with the specific log metric filter and alarm.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign in to the AWS Console.\n2. Navigate to the CloudWatch dashboard.\n3. Click on 'Log groups' in the 'Logs' section (Left panel)\n4. Select the log group created for your CloudTrail trail event logs (Cloudtrail should be multi-trail enabled with all Management Events captured) and click the Actions Dropdown Button -> Click 'Create Metric Filter' button.\n5. In the 'Define Pattern' page, add the 'Filter pattern' value as\n{ ($.eventSource = organizations.amazonaws.com) && (($.eventName = "AcceptHandshake") || ($.eventName = "AttachPolicy") || ($.eventName = "CreateAccount") || ($.eventName = "CreateOrganizationalUnit") || ($.eventName = "CreatePolicy") || ($.eventName = "DeclineHandshake") || ($.eventName = "DeleteOrganization") || ($.eventName = "DeleteOrganizationalUnit") || ($.eventName = "DeletePolicy") || ($.eventName = "DetachPolicy") || ($.eventName = "DisablePolicyType") || ($.eventName = "EnablePolicyType") || ($.eventName = "InviteAccountToOrganization") || ($.eventName = "LeaveOrganization") || ($.eventName = "MoveAccount") || ($.eventName = "RemoveAccountFromOrganization") || ($.eventName = "UpdatePolicy") || ($.eventName = "UpdateOrganizationalUnit")) }\nand Click on 'NEXT'.\n6. In the 'Assign Metric' page, Choose Filter Name, and Metric Details parameter according to your requirement and click on 'Next'.\n7. Under the ‘Review and Create' page, Review details and click 'Create Metric Filter’.\n8. To create an alarm based on a log group-metric filter, Refer to the below link \n https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Create_alarm_log_group_metric_filter.html. |
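Steps 4-8 can be scripted with boto3. The sketch below abbreviates the filter pattern to two of the event names from step 5 for readability (use the full pattern in practice); the log group, namespace, and SNS topic ARN are placeholders.

```python
# Sketch: create the metric filter and a matching alarm with boto3.
# The filter pattern is abbreviated; substitute the full pattern from
# step 5. Log group, namespace, and SNS topic are placeholders.
import boto3

logs = boto3.client("logs")
logs.put_metric_filter(
    logGroupName="CloudTrail/DefaultLogGroup",
    filterName="OrganizationsChanges",
    filterPattern=(
        '{ ($.eventSource = organizations.amazonaws.com) && '
        '(($.eventName = "AcceptHandshake") || ($.eventName = "AttachPolicy")) }'
    ),
    metricTransformations=[{
        "metricName": "OrganizationsChanges",
        "metricNamespace": "CISBenchmark",
        "metricValue": "1",
    }],
)

cw = boto3.client("cloudwatch")
cw.put_metric_alarm(
    AlarmName="OrganizationsChanges",
    MetricName="OrganizationsChanges",
    Namespace="CISBenchmark",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:alerts"],
)
```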
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-networks-list' AND json.rule = 'autoCreateSubnetworks does not exist'``` | GCP project is configured with legacy network
This policy identifies the projects which are configured with legacy networks. Legacy networks have a single network IPv4 prefix range and a single gateway IP address for the whole network. Subnetworks cannot be created in a legacy network. Legacy networks can have an impact on projects with high network traffic and are subject to a single point of failure.
This is applicable to gcp cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: For each Google Cloud Platform project,\nFollow the documentation and delete the reported network which is in the legacy mode:\nhttps://cloud.google.com/vpc/docs/using-legacy. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-api-key' AND json.rule = (restrictions.browserKeyRestrictions does not exist and restrictions.serverKeyRestrictions does not exist and restrictions.androidKeyRestrictions does not exist and restrictions.iosKeyRestrictions does not exist) or (restrictions.browserKeyRestrictions exists and (restrictions.browserKeyRestrictions[?any(allowedReferrers[*] equals "*")] exists or restrictions.browserKeyRestrictions[?any(allowedReferrers[*] equals "*.[TLD]")] exists or restrictions.browserKeyRestrictions[?any(allowedReferrers[*] equals "*.[TLD]/*")] exists)) or (restrictions.serverKeyRestrictions exists and (restrictions.serverKeyRestrictions[?any(allowedIps[*] equals 0.0.0.0)] exists or restrictions.serverKeyRestrictions[?any(allowedIps[*] equals 0.0.0.0/0)] exists or restrictions.serverKeyRestrictions[?any(allowedIps[*] equals ::/0)] exists or restrictions.serverKeyRestrictions[?any(allowedIps[*] equals ::0)] exists))``` | GCP API key not restricted to use by specified Hosts and Apps
This policy identifies GCP API keys that are not restricted for use by specified hosts and apps. Unrestricted keys are insecure because they can be viewed publicly, such as from within a browser, or accessed on a device where the key resides. It is recommended to restrict API key usage to trusted hosts, HTTP referrers, and apps.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to google cloud console\n2. Navigate to 'Credentials', Under service 'APIs & Services' (Left Panel)\n3. In the section 'API Keys', Click on the reported 'API Key Name'\n4. In the 'Key restrictions' section, set the application restrictions to any of HTTP referrers, IP Addresses, Android Apps, iOS Apps.\n5. Click 'SAVE'.\nNote: Do not set 'HTTP referrers' to wild-cards (* or *.[TLD] or *.[TLD]/*). \nDo not set 'IP addresses' restriction to any overly permissive IP (0.0.0.0 or 0.0.0.0/0 or ::0 or ::/0). |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-rds-db-cluster' AND json.rule = 'storageEncrypted is false'``` | AWS RDS DB cluster encryption is disabled
This policy identifies RDS DB clusters for which encryption is disabled. Amazon Aurora encrypted DB clusters provide an additional layer of data protection by securing your data from unauthorized access to the underlying storage. You can use Amazon Aurora encryption to increase data protection of your applications deployed in the cloud, and to fulfill compliance requirements for data-at-rest encryption.
NOTE: This policy is applicable only for Aurora DB clusters.
https://docs.aws.amazon.com/cli/latest/reference/rds/describe-db-clusters.html
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: AWS DB clusters can be encrypted only while creating the database cluster. You can't convert an unencrypted DB cluster to an encrypted one. However, you can restore an unencrypted Aurora DB cluster snapshot to an encrypted Aurora DB cluster. To do this, specify a KMS encryption key when you restore from the unencrypted DB cluster snapshot.\n\nFor AWS RDS,\n1. To create a 'Snapshot' of the unencrypted DB cluster, follow the instruction mentioned in below link:\nRDS Link: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_CreateSnapshotCluster.html\n\nNOTE: As you can't restore from a DB cluster snapshot to an existing DB cluster; a new DB cluster is created when you restore. Once the Snapshot status is 'Available', delete the unencrypted DB cluster before restoring from the DB cluster Snapshot by following below steps for AWS RDS,\na. Sign to the AWS Management Console and open the Amazon RDS console at https://console.aws.amazon.com/rds/\nb. In the navigation pane, choose 'Databases'.\nc. In the list of DB instances, choose a writer instance for the DB cluster.\nd. Choose 'Actions', and then choose 'Delete'.\n\n2. To restoring the Cluster from a DB Cluster Snapshot, follow the instruction mentioned in below link:\nRDS Link: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/USER_RestoreFromSnapshot.html\n\nFor AWS Document DB,\n1. To create a 'Snapshot' of the unencrypted DB cluster, follow the instruction mentioned in below link:\nDocument DB Link: https://docs.aws.amazon.com/documentdb/latest/developerguide/backup_restore-create_manual_cluster_snapshot.html\n\nNOTE: As you can't restore from a DB cluster snapshot to an existing DB cluster; a new DB cluster is created when you restore. Once the Snapshot status is 'Available', delete the unencrypted DB cluster before restoring from the DB cluster Snapshot by following below steps for AWS Document DB, \n a. Sign to the AWS Management Console and open the Amazon DocumentDB console at https://console.aws.amazon.com/docdb/\n b. In the navigation pane, choose 'Clusters'.\n c. Select the cluster from the list which needs to be deleted\n d. Choose 'Actions', and then choose 'Delete'.\n\n2. To restoring the Cluster from a DB Cluster Snapshot, follow the instruction mentioned in below link:\nDocument DB Link: https://docs.aws.amazon.com/documentdb/latest/developerguide/backup_restore-restore_from_snapshot.html. |
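For the Aurora case, step 2 (restoring the snapshot into an encrypted cluster) reduces to a single boto3 call; identifiers, engine, and KMS key below are placeholders.

```python
# Sketch: restore an unencrypted Aurora cluster snapshot into an
# encrypted cluster. Supplying KmsKeyId forces encryption at rest.
import boto3

rds = boto3.client("rds")
rds.restore_db_cluster_from_snapshot(
    DBClusterIdentifier="my-encrypted-cluster",        # placeholder
    SnapshotIdentifier="my-unencrypted-cluster-snap",  # placeholder
    Engine="aurora-mysql",                             # placeholder
    KmsKeyId="alias/aws/rds",                          # placeholder key
)
```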
```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-ec2-describe-security-groups' and json.rule = groupName contains "ahazra" ``` | Demo AWS Security Group overly permissive to all traffic
This policy identifies Security groups that are overly permissive to all traffic. Doing so may allow a bad actor to brute force their way into the system and potentially gain access to the entire network. Review your list of security group rules to ensure that your resources are not exposed. As a best practice, restrict traffic to known static IP addresses only. Limit the access list to include known hosts, services, or specific employees only.
This is applicable to aws cloud and is considered a high severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: Before making any changes, please check the impact on your applications/services. If the Security Group reported indeed need to restrict all traffic, follow the instructions below:\n1. Log in to the AWS Console\n2. Navigate to the 'VPC' service\n3. Select the 'Security Group' reported in the alert\n4. Click on the 'Inbound Rule'\n5. Remove the rule which has 'Source' value as 0.0.0.0/0 or ::/0.. |
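If the offending rule should simply be removed, steps 4-5 correspond to one boto3 call; the group ID and rule shape below are placeholders and must match the existing rule exactly.

```python
# Sketch: remove a world-open inbound rule from a security group.
# GroupId and the rule description are placeholders; IpPermissions
# must match the offending rule exactly for the revoke to apply.
import boto3

ec2 = boto3.client("ec2")
ec2.revoke_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "-1",  # all traffic
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```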
```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-sql-server-list' AND json.rule = sqlEncryptionProtectors[*].kind != azurekeyvault and sqlEncryptionProtectors[*].properties.serverKeyType != AzureKeyVault and sqlEncryptionProtectors[*].properties.uri !exists``` | Azure SQL server TDE protector is not encrypted with BYOK (Use your own key)
This policy identifies Azure SQL servers whose Transparent Data Encryption (TDE) protector is not encrypted with Bring Your Own Key (BYOK) support. With BYOK, the data encryption key (DEK) is protected by an asymmetric key stored in Azure Key Vault, which gives the user control of the TDE encryption keys and restricts who can access them and when.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Navigate to 'SQL servers' dashboard, and select the SQL server instance you want to modify\n3. In the left navigation, select 'Transparent data encryption'\n4. Select Customer-managed key > Select a key > Change key\n- In Key vault, select an existing key vault or create new key vault\n- In Key, select an existing key or create a new key\n- In Version, select an existing version or create new version\nOR\nSelect Customer-managed key > Enter a key identifier\n- In Key identifier add key vault URI, if URI is already noted.\n5. Click on 'Save'. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = "location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals does not equal Microsoft.Network/publicIPAddresses/write and properties.condition.allOf[?(@.field=='category')].['equals'] contains Administrative" as X; count(X) less than 1``` | Azure Activity Log alert for Create or Update Public IP does not exist
This policy identifies Azure accounts in which an activity log alert for 'Create or Update Public IP address' events does not exist. Creating an activity log alert for public IP address creation and update events gives insight into network address changes and may reduce the time it takes to detect suspicious activity.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where api.name = 'azure-spring-cloud-service' AND json.rule = properties.powerState equals Running and sku.tier does not equal Basic as X; config from cloud.resource where api.name = 'azure-spring-cloud-app' AND json.rule = properties.provisioningState equals Succeeded and properties.enableEndToEndTLS is false as Y; filter '$.X.name equals $.Y.serviceName'; show Y;``` | Azure Spring Cloud app end-to-end TLS is disabled
This policy identifies Azure Spring Cloud apps in which end-to-end TLS is disabled. Enabling end-to-end TLS/SSL will secure traffic from the ingress controller to the apps. After you enable end-to-end TLS and load a cert from the key vault, all communications within Azure Spring Cloud are secured with TLS. As a security best practice, it is recommended to enable end-to-end TLS to secure Spring Cloud app traffic.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to Azure Spring Cloud dashboard\n3. Choose Azure Spring Cloud service for which Azure Spring Cloud app is reported\n4. Under the 'Settings', click on 'Apps'\n5. Click on reported Azure Spring Cloud app\n6. Under the 'Settings', click on 'Ingress-to-app TLS'\n7. Set 'Yes' to 'Ingress-to-app TLS'. |
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = "permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(25,25)"``` | Alibaba Cloud Security group allow internet traffic to SMTP port (25)
This policy identifies Security groups that allow inbound traffic on SMTP port (25) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network.
This is applicable to alibaba_cloud cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In Inbound tab, Select the rule having 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 25, Click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with specific IP address range.\n7. Click on 'OK'. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-instances-list' AND json.rule = 'scheduling.preemptible equals true and (status equals RUNNING and name does not start with "gke-")'``` | GCP VM Instances enabled with Pre-Emptible termination
This policy checks whether any VM instance was launched with the 'Preemptible' flag set to true. Setting this flag to true implies that the VM instance will shut down within 24 hours, and it can also be terminated by the service when high demand is encountered. While this might save costs, it can also lead to unexpected loss of service when the VM instance is terminated.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Once a VM instance is started with Pre-Emptible set to Yes, it cannot be changed. If this instance with Pre-Emptible set is a critical resource, then spin up a new VM instance with necessary services, processes, and updates so that there will be no interruption of services.. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = "location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals equals Microsoft.Network/networkSecurityGroups/securityRules/delete" as X; count(X) less than 1``` | Azure Activity log alert for Delete network security group rule does not exist
This policy identifies the Azure accounts in which activity log alert for Delete network security group rule does not exist. Creating an activity log alert for Delete network security group rule gives insight into network rule access changes and may reduce the time it takes to detect suspicious activity.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Click on Monitor (Left Panel)\n3. Select 'Alerts'\n4. Click on Create > Alert rule\n5. In 'Create an alert rule' page, choose the Scope as your Subscription and under the CONDITION section, choose 'Delete Security Rule (Microsoft.Network/networkSecurityGroups/securityRules)' and Other fields you can set based on your custom settings.\n6. Click on Create. |
```config from cloud.resource where cloud.type ='aws' and api.name = 'aws-iam-get-credential-report' AND json.rule = '(access_key_1_active is true and ((access_key_1_last_used_date != N/A and _DateTime.ageInDays(access_key_1_last_used_date) > 45) or (access_key_1_last_used_date == N/A and access_key_1_last_rotated != N/A and _DateTime.ageInDays(access_key_1_last_rotated) > 45))) or (access_key_2_active is true and ((access_key_2_last_used_date != N/A and _DateTime.ageInDays(access_key_2_last_used_date) > 45) or (access_key_2_last_used_date == N/A and access_key_2_last_rotated != N/A and _DateTime.ageInDays(access_key_2_last_rotated) > 45)))'``` | AWS access keys not used for more than 45 days
This policy identifies IAM users whose access keys have not been used for more than 45 days. Access keys allow users programmatic access to resources. However, if an access key has not been used in the past 45 days, that access key needs to be deleted (even if the access key is inactive).
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['KEYS_AND_SECRETS'].
Mitigation of this issue can be done as follows: To delete the reported AWS User access key follow below mentioned URL:\nhttps://aws.amazon.com/premiumsupport/knowledge-center/delete-access-key/. |
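Before deleting keys, an inventory of candidates can be built with boto3 using the same 45-day logic as this policy; a sketch (pagination omitted for brevity) follows.

```python
# Sketch: list active access keys unused (or, if never used, unrotated)
# for more than 45 days. Pagination is omitted for brevity.
from datetime import datetime, timezone
import boto3

iam = boto3.client("iam")
now = datetime.now(timezone.utc)

for user in iam.list_users()["Users"]:
    keys = iam.list_access_keys(UserName=user["UserName"])
    for key in keys["AccessKeyMetadata"]:
        if key["Status"] != "Active":
            continue
        # Fall back to the creation date if the key has never been used.
        last_used = iam.get_access_key_last_used(
            AccessKeyId=key["AccessKeyId"]
        )["AccessKeyLastUsed"].get("LastUsedDate", key["CreateDate"])
        if (now - last_used).days > 45:
            print(user["UserName"], key["AccessKeyId"])
```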
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-database-maria-db-server' AND json.rule = properties.userVisibleState equals Ready and properties.sslEnforcement equals Disabled``` | Azure MariaDB database server with SSL connection disabled
This policy identifies MariaDB database servers for which SSL enforce status is disabled. Azure Database for MariaDB supports connecting your Azure Database for MariaDB server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. It is recommended to enforce SSL for accessing your database server.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To configure SSL connection on an existing Azure Database for MariaDB, follow the below URL:\nhttps://docs.microsoft.com/en-us/azure/mariadb/howto-configure-ssl. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name='gcloud-compute-firewall-rules-list' AND json.rule= disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(80,80) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists``` | IR-test-GCP Firewall rule allows all traffic on HTTP port (80)
Test GCP policy to check cli remediation / can be deleted.
This is applicable to gcp cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elasticache-cache-clusters' AND json.rule = engine equals memcached and transitEncryptionEnabled is false``` | AWS ElastiCache Memcached cluster with in-transit encryption disabled
This policy identifies AWS ElastiCache Memcached clusters that have in-transit encryption disabled. It is highly recommended to implement in-transit encryption in order to protect data from unauthorized access as it travels through the network, between clients and cache servers. Enabling data encryption in-transit helps to prevent unauthorized users from reading sensitive data between your Memcached and their associated cache storage systems.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: AWS ElastiCache Memcached in-transit encryption can be set, only at the time of creation. So to resolve this alert, create a new cluster with in-transit encryption enabled, then migrate all required ElastiCache Memcached cluster data from the reported ElastiCache Memcached cluster to this newly created cluster and delete reported ElastiCache Memcached cluster.\n\nTo create new ElastiCache Memcached cluster with In-transit encryption set, perform the following:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to ElastiCache\n4. Click on 'Memcached clusters' under 'Resources'\n5. Click on 'Create Memcached clusters' button\n6. On the 'Cluster settings' page,\na. Enter a name for the new cache cluster\nb. Select Memcached engine version from 'Engine version' dropdown list\nNote: As of September 2022,In-transit encryption can be enabled only for AWS ElastiCache clusters with Memcached engine version 1.6.12 or later\nc. Enter the 'Subnet group settings' and click on 'Next'\nd. Under 'Security', Select 'Enable' checkbox under 'Encryption in transit'\ne. Fill in other necessary parameters\n7. Click on 'Create' button to launch your new ElastiCache Memcached cluster\n\nTo delete reported ElastiCache Memcached cluster follow below given URL:\nhttps://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/GettingStarted.DeleteCacheCluster.html. |
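Step 6 of the replacement-cluster creation can be scripted with boto3; the sketch below uses placeholder identifiers, node type, and subnet group, and an engine version that supports in-transit encryption.

```python
# Sketch: create a replacement Memcached cluster with in-transit
# encryption enabled. All identifiers are placeholders; the engine
# version must be 1.6.12 or later for TransitEncryptionEnabled.
import boto3

ec = boto3.client("elasticache")
ec.create_cache_cluster(
    CacheClusterId="memcached-tls",          # placeholder
    Engine="memcached",
    EngineVersion="1.6.17",
    CacheNodeType="cache.t3.micro",          # placeholder
    NumCacheNodes=2,
    CacheSubnetGroupName="my-subnet-group",  # placeholder
    TransitEncryptionEnabled=True,
)
```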
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-kubernetes-cluster' AND json.rule = properties.powerState.code equal ignore case Running and properties.addonProfiles.azureKeyvaultSecretsProvider.enabled is false``` | Azure AKS cluster is not configured with disk encryption set
This policy identifies AKS clusters that are not configured with the Azure Key Vault Provider for Secrets Store CSI Driver. The Azure Key Vault Provider for Secrets Store CSI Driver allows for the integration of an Azure key vault as a secrets store with an Azure Kubernetes Service (AKS) cluster via a CSI volume. It is recommended to enable the secrets store CSI driver for your Kubernetes clusters.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to Kubernetes services dashboard\n3. Click on the reported Kubernetes cluster\n4. Under Setting section, Click on 'Cluster configuration'\n5. Select 'Enable secret store CSI driver'\nNOTE: Once the CSI driver is enabled, Azure will deploy additional pods onto the cluster. You'll still need to configure Azure Key Vault, define secrets to securely fetch, and redeploy the application to use these secrets.\nFor more details: https://azure.github.io/secrets-store-csi-driver-provider-azure/docs/demos/standard-walkthrough/\n6. Click on 'Save'. |
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-iam-authentication-policy' AND json.rule = 'passwordPolicy.isLowercaseCharactersRequired isFalse'``` | OCI IAM password policy for local (non-federated) users does not have a lowercase character
This policy identifies Oracle Cloud Infrastructure(OCI) accounts that do not have a lowercase character in the password policy for local (non-federated) users. As a security best practice, configure a strong password policy for secure access to the OCI console.
This is applicable to oci cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['WEAK_PASSWORD'].
Mitigation of this issue can be done as follows: 1. Login to the OCI Console Page: https://console.ap-mumbai-1.oraclecloud.com/\n2. Go to Identity in the Services menu.\n3. Select Authentication Settings from the Identity menu.\n4. Click Edit Authentication Settings in the middle of the page.\n5. Ensure the checkbox is selected next to MUST CONTAIN AT LEAST 1 LOWERCASE CHARACTER.\nNote: The console URL is region specific; your tenancy might have a different home region and thus console URL. |
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-block-storage-volume' AND json.rule = volumeBackupPolicyAssignment[*] size equals 0 and volumeGroupId equal ignore case "null"``` | OCI Block Storage Block Volume does not have backup enabled
This policy identifies the OCI Block Storage Volumes that do not have backup enabled. It is recommended to have block volume backup policies on each block volume so that the block volume can be restored during data loss events.
Note: This Policy is not applicable for block volumes that are added to volume groups.
This is applicable to oci cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Click on Edit button\n5. Select the Backup Policy from the Backup Policies section as appropriate\n6. Click Save Changes. |
```config from cloud.resource where api.name = 'gcloud-projects-get-iam-user' AND json.rule = roles[*] contains "roles/editor" or roles[*] contains "roles/owner" as X; config from cloud.resource where api.name = 'gcloud-compute-instances-list' AND json.rule = status equals RUNNING and name does not start with "gke-" as Y; filter '$.Y.serviceAccounts[*].email contains $.X.user'; show Y;``` | GCP VM instance has risky basic role assigned
This policy identifies GCP VM instances configured with risky basic roles. Basic roles are highly permissive roles that existed prior to the introduction of IAM and grant the grantee wide access over the project. To reduce the blast radius and defend against privilege escalation if the VM is compromised, it is recommended to follow the principle of least privilege and avoid the use of basic roles.
This is applicable to gcp cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION'].
Mitigation of this issue can be done as follows: It is recommended to follow the principle of least privilege when granting access.\n\nTo create a new instance with the desired service account, please refer to the URL given below:\nhttps://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances#using\n\nTo update the service account assigned to the VM, please refer to the URL given below:\nhttps://cloud.google.com/compute/docs/access/create-enable-service-accounts-for-instances#changeserviceaccountandscopes\n\nTo update privileges granted to a service account, please refer to the URL given below:\nhttps://cloud.google.com/iam/docs/granting-changing-revoking-access. |
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (protocol equals Udp or protocol equals *) and (destinationPortRange contains _Port.inRange(1434,1434) or destinationPortRanges[*] contains _Port.inRange(1434,1434) ))] exists``` | Azure Network Security Group allows all traffic on SQL Server (UDP Port 1434)
This policy identifies Azure Network Security Groups (NSG) that allow all traffic on SQL Server (UDP Port 1434). Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict SQL Server solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Before making any changes, please check the impact on your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-cloud-run-services-list' AND json.rule = "status.conditions[?any(type equals Ready and status equals True)] exists and status.conditions[?any(type equals RoutesReady and status equals True)] exists and ['metadata'].['annotations'].['run.googleapis.com/ingress'] equals all"``` | GCP Cloud Run service with overly permissive ingress rule
This policy identifies GCP Cloud Run services configured with overly permissive ingress rules. It is recommended to restrict traffic from the internet and other resources by allowing traffic to enter only through load balancers or from internal sources, for better network-based access control.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to GCP console\n2. Navigate to service 'Cloud Run'\n3. Click on the alerted service, go to tab 'TRIGGERS'\n4. Under section 'Ingress', select an ingress type other than 'Allow all traffic'\n5. Click on 'SAVE'. |
```config from cloud.resource where api.name = 'aws-connect-instance' AND json.rule = InstanceStatus equals "ACTIVE" and storageConfig[?any( resourceType is member of ('CHAT_TRANSCRIPTS','CALL_RECORDINGS','SCREEN_RECORDINGS') and storageConfigs[*] exists )] exists as X; config from cloud.resource where api.name='aws-s3api-get-bucket-acl' AND json.rule = "((((acl.grants[?(@.grantee=='AllUsers')] size > 0) or policyStatus.isPublic is true) and publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration does not exist) or ((acl.grants[?(@.grantee=='AllUsers')] size > 0) and ((publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false) or (publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false))) or (policyStatus.isPublic is true and ((publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false) or (publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false))))" as Y; filter ' $.X.storageConfig[*].storageConfigs[*].S3Config.BucketName intersects $.Y.bucketName' ; show Y;``` | AWS Connect instance using publicly accessible S3 bucket
This policy identifies publicly accessible S3 buckets used by AWS Connect instances to store CHAT_TRANSCRIPTS, CALL_RECORDINGS, and SCREEN_RECORDINGS.
Public access to a bucket containing chat transcripts, call recordings, or screen recordings is significant because it exposes sensitive customer and internal data to anyone on the internet.
It is recommended to secure the identified S3 buckets by enforcing stricter access controls and eliminating public read permissions for the reported S3 bucket used for AWS Connect instances.
This is applicable to aws cloud and is considered a high severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To update the publicly accessible setting of a bucket, perform the following actions:\n1. Log in to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the S3 resource reported in the alert\n4. Click on the 'Permissions' tab\n5. If Access Control List is set to 'Public' follow the below steps\n a. Under 'Access Control List', Click on 'Everyone' and uncheck all items\n b. Under 'Access Control List', Click on 'Authenticated users group' and uncheck all items\n c. Click on Save changes\n6. If 'Bucket Policy' is set to public follow the below steps\n a. Under 'Bucket Policy', Select 'Edit Bucket Policy' and consider defining what explicit 'Principal' should have the ability to GET/LIST objects in your S3 bucket. You may also want to limit the 'Principal' ability to perform specific GET/LIST functions, without the wild card.\n b. If 'Bucket Policy' is not required delete the existing 'Bucket Policy'.\n c. Click on Save changes\nNote: Ensure updating the 'Access Control List' or 'Bucket Policy' does not affect S3 bucket data access. |
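One way to script this remediation is to turn on all four S3 public access block settings for the affected bucket; a minimal boto3 sketch with a hypothetical bucket name (verify first that no legitimate integration relies on public access, as the change takes effect immediately):

```python
import boto3

s3 = boto3.client('s3')

# Blocks new public ACLs/policies and neutralizes existing ones in one call.
s3.put_public_access_block(
    Bucket='example-connect-recordings',  # hypothetical bucket name
    PublicAccessBlockConfiguration={
        'BlockPublicAcls': True,
        'IgnorePublicAcls': True,
        'BlockPublicPolicy': True,
        'RestrictPublicBuckets': True,
    },
)
```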
```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-ec2-describe-security-groups' AND json.rule = isShared is false and (ipPermissions[?any((ipRanges[*] contains 0.0.0.0/0 or ipv6Ranges[*].cidrIpv6 contains ::/0) and ((toPort == 443 or fromPort == 443) or (toPort > 443 and fromPort < 443)))] exists)``` | Allowing all to HTTPS
This is applicable to aws cloud and is considered a high severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where api.name = 'aws-route53-list-hosted-zones' AND json.rule = resourceRecordSet[?any( resourceRecords[*].value contains s3-website or aliasTarget.dnsname contains s3-website )] exists as X; config from cloud.resource where api.name = 'aws-s3api-get-bucket-acl' as Y; filter 'not($.X.resourceRecordSet[*].name contains $.Y.bucketName)'; show X;``` | AWS Route53 Hosted Zone having dangling DNS record with subdomain takeover risk
This policy identifies AWS Route53 Hosted Zones that have dangling DNS records with subdomain takeover risk. If a hosted zone has a CNAME entry pointing to a non-existent S3 bucket, an attacker can take over the dangling domain by creating an identically named S3 bucket in any AWS account that they own or control. Attackers can then use the domain for phishing attacks, spreading malware, and other illegal activities. As a best practice, it is recommended to delete dangling DNS record entries from your AWS Route53 hosted zones.
This is applicable to aws cloud and is considered a high severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
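Although no scripted remediation is given above, detection of such dangling records can be automated; a rough boto3 sketch, relying on the S3 website-hosting convention that the bucket name must match the record name (the hosted zone ID is hypothetical, and pagination is omitted for brevity):

```python
import boto3
from botocore.exceptions import ClientError

route53 = boto3.client('route53')
s3 = boto3.client('s3')

zone_id = 'Z0123456789ABC'  # hypothetical hosted zone ID
records = route53.list_resource_record_sets(HostedZoneId=zone_id)['ResourceRecordSets']

for record in records:
    values = [r['Value'] for r in record.get('ResourceRecords', [])]
    alias = record.get('AliasTarget', {}).get('DNSName', '')
    if any('s3-website' in v for v in values) or 's3-website' in alias:
        bucket = record['Name'].rstrip('.')  # website hosting requires bucket == record name
        try:
            s3.head_bucket(Bucket=bucket)
        except ClientError as err:
            if err.response['Error']['Code'] in ('404', 'NoSuchBucket'):
                print(f"Dangling record with takeover risk: {record['Name']}")
```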
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-mysql-deployment-info' AND json.rule = deployment.enable_public_endpoints is true``` | IBM Cloud Database MySQL is exposed to public
The policy identifies IBM Cloud Database MySQL instances exposed to the public via public endpoints. When provisioning an IBM Cloud database service, using public endpoints is generally not recommended because they can pose a security risk: they can make your database accessible to anyone with internet access, potentially leaving your data vulnerable to unauthorized access or malicious attacks. Instead, it is recommended to use private endpoints when provisioning a database service in IBM Cloud.
This is applicable to ibm cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: Refer to the IBM documentation to change the service endpoints from public to private\nhttps://cloud.ibm.com/docs/cloud-databases?topic=cloud-databases-service-endpoints. |
```config from cloud.resource where cloud.type = 'aws' and api.name='aws-rds-describe-db-snapshots' AND json.rule="attributes[?(@.attributeName=='restore')].attributeValues[*] contains all"``` | AWS RDS snapshots are accessible to public
This policy identifies AWS RDS snapshots which are accessible to the public. Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up and manage databases. If RDS snapshots are inadvertently shared publicly, any unauthorized user with AWS console access can copy or restore the snapshots and gain access to sensitive data.
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['INTERNET_EXPOSURE'].
Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to the 'RDS' service\n4. Go to 'Snapshots' and select the snapshot reported in the alert\n5. From the 'Actions' menu, choose 'Share snapshot'\n6. Set 'DB snapshot visibility' to 'Private' and save. |
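The fix can also be scripted: removing the special attribute value 'all' from the snapshot's 'restore' attribute makes it private again. A minimal boto3 sketch with a hypothetical snapshot identifier:

```python
import boto3

rds = boto3.client('rds')

# 'all' in the 'restore' attribute is what marks a snapshot public;
# removing it keeps any explicitly shared account IDs intact.
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier='example-db-snapshot',  # hypothetical
    AttributeName='restore',
    ValuesToRemove=['all'],
)
```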
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-sagemaker-notebook-instance' AND json.rule = notebookInstanceStatus equals InService and rootAccess equals Enabled and notebookInstanceLifecycleConfigName does not exist``` | AWS SageMaker notebook instance with root access enabled
This policy identifies SageMaker notebook instances that are enabled with root access. Root access means having administrator privileges; users with root access can access and edit all files on the compute instance, including system-critical files. Removing root access prevents notebook users from deleting system-level software, installing new software, and modifying essential environment components.
NOTE: Lifecycle configurations need root access to be able to set up a notebook instance. Because of this, lifecycle configurations associated with a notebook instance always run with root access even if you disable root access for users.
For more details:
https://docs.aws.amazon.com/sagemaker/latest/dg/nbi-root-access.html
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION'].
Mitigation of this issue can be done as follows: 1. Log in to AWS console\n2. Navigate to the AWS SageMaker dashboard\n3. Choose Notebook instances (Left panel)\n4. Click on the reported SageMaker notebook instance\nNote: To update root access for SageMaker notebook instances; Instances need to be stopped. So stop running instance before editing.\n5. In the 'Notebook instance settings' section, click on 'Edit'\n6. On the Edit notebook instance page, within the 'Permissions and encryption' section,\nFrom the 'Root access - optional' options, select 'Disable - Don't give users root access to the notebook'\n7. Click on the 'Update notebook instance'. |
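A boto3 sketch of the same stop/update/start cycle the console steps describe; the notebook instance name is hypothetical:

```python
import boto3

sagemaker = boto3.client('sagemaker')
name = 'example-notebook'  # hypothetical instance name

# Root access can only be changed while the instance is stopped.
sagemaker.stop_notebook_instance(NotebookInstanceName=name)
sagemaker.get_waiter('notebook_instance_stopped').wait(NotebookInstanceName=name)

sagemaker.update_notebook_instance(NotebookInstanceName=name, RootAccess='Disabled')
sagemaker.get_waiter('notebook_instance_stopped').wait(NotebookInstanceName=name)

sagemaker.start_notebook_instance(NotebookInstanceName=name)
```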
```config from cloud.resource where Resource.status = Active AND api.name = 'aws-application-autoscaling-scaling-policy' as Y; config from cloud.resource where api.name = 'aws-dynamodb-describe-table' AND json.rule = tableStatus equal ignore case ACTIVE AND billingModeSummary.billingMode does not equal PAY_PER_REQUEST as X; filter 'not($.Y.ResourceName equals $.X.tableName)'; show X;``` | AWS DynamoDB table Auto Scaling not enabled
This policy identifies AWS DynamoDB tables with auto-scaling disabled.
DynamoDB is a fully managed NoSQL database that provides a highly reliable, scalable, low-latency database solution for applications that require consistent, single-digit millisecond latency at any scale. Auto-scaling allows you to dynamically alter the allocated throughput capacity for your DynamoDB tables based on current traffic patterns. This feature employs the Application Auto Scaling service to automatically raise provisioned read and write capacity to handle unexpected traffic increases, and to reduce throughput when the workload falls so that you avoid paying for unused provisioned capacity.
It is recommended to enable auto-scaling for the DynamoDB table to ensure efficient resource utilization, cost optimization, improved performance, simplified management, and scalability.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To enable auto-scaling for a DynamoDB table through the AWS Management Console, follow these steps:\n\n1. Sign into the AWS console. Navigate to the DynamoDB console.\n2. In the navigation pane, choose 'Tables'.\n3. Select the table you want to enable auto-scaling for.\n4. Choose the 'Additional settings' tab.\n5. In the 'Read/write capacity' section, choose 'Edit'.\n6. In the 'Capacity mode' section, choose 'Provisioned'.\n7. For 'Table capacity', set 'Auto scaling' to 'On' for read capacity, write capacity, or both.\n8. Set the minimum and maximum capacity units, and the target utilization percentage for read and write capacity.\n9. Choose 'Save changes'. |
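The console steps map onto the Application Auto Scaling API; a boto3 sketch that registers target-tracking policies for both read and write capacity (the table name, capacity limits, and 70% target are hypothetical values to adjust):

```python
import boto3

autoscaling = boto3.client('application-autoscaling')
resource_id = 'table/ExampleTable'  # hypothetical table

for dimension, metric in [
    ('dynamodb:table:ReadCapacityUnits', 'DynamoDBReadCapacityUtilization'),
    ('dynamodb:table:WriteCapacityUnits', 'DynamoDBWriteCapacityUtilization'),
]:
    # Register the table dimension as a scalable target with floor/ceiling.
    autoscaling.register_scalable_target(
        ServiceNamespace='dynamodb',
        ResourceId=resource_id,
        ScalableDimension=dimension,
        MinCapacity=5,
        MaxCapacity=100,
    )
    # Track ~70% consumed-to-provisioned utilization for that dimension.
    autoscaling.put_scaling_policy(
        PolicyName=dimension.split(':')[-1] + '-target-tracking',
        ServiceNamespace='dynamodb',
        ResourceId=resource_id,
        ScalableDimension=dimension,
        PolicyType='TargetTrackingScaling',
        TargetTrackingScalingPolicyConfiguration={
            'TargetValue': 70.0,
            'PredefinedMetricSpecification': {'PredefinedMetricType': metric},
        },
    )
```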
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-sns-get-topic-attributes' AND json.rule = Policy.Statement[?any(Effect equals Allow and (Principal.AWS does not equal * and Principal does not equal * and Principal.AWS contains arn and Principal.AWS does not contain $.Owner))] exists``` | AWS SNS topic with cross-account access
This policy identifies AWS SNS topics that are configured with cross-account access. Allowing unknown cross-account access to your SNS topics enables other accounts to gain control over them. To prevent unknown cross-account access, allow only trusted entities to access your Amazon SNS topics by implementing the appropriate SNS policies.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['CROSS_ACCOUNT_TRUST'].
Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.\n3. Navigate to the 'Simple Notification Service' dashboard\n4. Go to 'Topics', from the left panel\n5. Select the reported SNS topic\n6. Click on the 'Edit' button from the top options bar\n7. On the edit page go to the 'Access Policy - optional' section\n8. In the Access Policy section, verify all ARN values in 'Principal' elements are from trusted entities; if not, remove those ARNs from the policy.\n9. Click on 'Save changes'. |
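Step 8 can be pre-checked programmatically; a boto3 detection sketch that flags Allow statements whose AWS principals point at another account (the topic ARN is hypothetical, and the substring check on the account ID is only a heuristic):

```python
import boto3
import json

sns = boto3.client('sns')
own_account = boto3.client('sts').get_caller_identity()['Account']

topic_arn = 'arn:aws:sns:us-east-1:111111111111:example-topic'  # hypothetical
attrs = sns.get_topic_attributes(TopicArn=topic_arn)['Attributes']

for stmt in json.loads(attrs['Policy']).get('Statement', []):
    if stmt.get('Effect') != 'Allow':
        continue
    principal = stmt.get('Principal', {})
    aws_principals = principal.get('AWS', []) if isinstance(principal, dict) else []
    if isinstance(aws_principals, str):
        aws_principals = [aws_principals]
    for arn in aws_principals:
        if arn != '*' and own_account not in arn:
            print(f'Cross-account principal on {topic_arn}: {arn}')
```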
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = "databaseVersion contains POSTGRES and (settings.databaseFlags[?(@.name=='cloudsql.enable_pgaudit')] does not exist or settings.databaseFlags[?(@.name=='cloudsql.enable_pgaudit')].value does not equal on)"``` | GCP PostgreSQL instance database flag cloudsql.enable_pgaudit is not set to on
This policy identifies PostgreSQL database instances in which the database flag cloudsql.enable_pgaudit is not set to on. Enabling the cloudsql.enable_pgaudit flag turns on logging by the pgAudit extension for the database (if installed). The pgAudit extension for PostgreSQL databases provides detailed session and object logging to comply with government, financial, and ISO standards, and provides auditing capabilities to mitigate threats by monitoring security events on the instance. Any changes to the database logging configuration should be made in accordance with the organization's logging policy.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: It is recommended to set the 'cloudsql.enable_pgaudit' flag to 'on' for PostgreSQL database.\n\nTo update the flag of GCP PostgreSQL instance, please refer to the URL given below and set cloudsql.enable_pgaudit flag to on:\nhttps://cloud.google.com/sql/docs/postgres/flags#set_a_database_flag. |
```config from cloud.resource where api.name = 'azure-key-vault-list' AND json.rule = 'properties.enableSoftDelete does not exist or properties.enablePurgeProtection does not exist'``` | Azure Key Vault is not recoverable
A key vault contains objects such as keys, secrets, and certificates. Accidental unavailability of a key vault can cause immediate data loss or loss of security functions (authentication, validation, verification, non-repudiation, etc.) supported by the key vault objects.
It is recommended the key vault be made recoverable by enabling the "Do Not Purge" and "Soft Delete" functions. This is in order to prevent loss of encrypted data including storage accounts, SQL databases, and/or dependent services provided by key vault objects (Keys, Secrets, Certificates) etc., as may happen in the case of accidental deletion by a user or from disruptive activity by a malicious user.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Azure Portal\nAzure Portal does not have provision to update the respective configurations\n\nAzure CLI 2.0\naz resource update --id /subscriptions/xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/<resourceGroupName>/providers/Microsoft.KeyVault/vaults/<keyVaultName> --set properties.enablePurgeProtection=true properties.enableSoftDelete=true. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = ((((publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false) or (publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false)) and acl.grantsAsList[?any(grantee equals AllUsers and permission is member of (ReadAcp,Read,FullControl))] exists) or ((policyStatus.isPublic is true and ((publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false) or (publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false))) and (policy.Statement[?any(Effect equals Allow and (Principal equals * or Principal.AWS equals *) and (Action contains s3:* or Action contains s3:Get or Action contains s3:List) and (Condition does not exist))] exists))) and websiteConfiguration does not exist``` | RLP-83104 - Copy of Critical of AWS S3 bucket publicly readable
This policy identifies S3 buckets that are publicly readable via Get/Read/List bucket operations. These permissions permit anyone, malicious or not, to perform Get/Read/List operations on your S3 bucket if they can guess the namespace. The S3 service does not protect the namespace; if ACLs and the bucket policy are not handled properly, you are at risk of compromising critical data by leaving the bucket public.
For more details:
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/set-permissions.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/about-object-ownership.html#ensure-object-ownership
This is applicable to aws cloud and is considered a critical severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the S3 resource reported in the alert\n4. Click on the 'Permissions' tab\n5. If Access Control List is set to 'Public' follow the below steps\na. Under 'Access Control List', Click on 'Everyone' and uncheck all items\nb. Click on Save changes\n6. If 'Bucket Policy' is set to public follow the below steps\na. Under 'Bucket Policy', Select 'Edit Bucket Policy' and consider defining what explicit 'Principal' should have the ability to GET/LIST objects in your S3 bucket. You may also want to specifically limit the 'Principal' ability to perform specific GET/LIST functions, without the wild card.\nIf 'Bucket Policy' is not required delete the existing 'Bucket Policy'.\nb. Click on Save changes\n\nNote: Make sure updating 'Access Control List' or 'Bucket Policy' does not affect S3 bucket data access. |
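Scripted equivalents of steps 5 and 6, assuming the public grants and the bucket policy are simply not needed; the bucket name is hypothetical and both calls take effect immediately:

```python
import boto3

s3 = boto3.client('s3')
bucket = 'example-public-bucket'  # hypothetical

# Step 5 equivalent: reset the ACL, removing AllUsers/AuthenticatedUsers grants.
s3.put_bucket_acl(Bucket=bucket, ACL='private')

# Step 6 equivalent: drop the bucket policy if it is not required; otherwise
# rewrite it with explicit, non-wildcard principals instead of deleting it.
s3.delete_bucket_policy(Bucket=bucket)
```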
```config from cloud.resource where api.name = 'aws-ec2-describe-instances' AND json.rule = enaSupport is true and clientToken contains "foo" ``` | ajay ec2 describe
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-es-describe-elasticsearch-domain' AND json.rule = domainProcessingStatus equal ignore case active and (logPublishingOptions does not exist or logPublishingOptions.ES_APPLICATION_LOGS.enabled is false)``` | AWS Opensearch domain Error logging disabled
This policy identifies AWS OpenSearch domains with no error logging configuration.
OpenSearch application logs contain information about errors and warnings raised during the operation of the service and can be useful for troubleshooting. Error logs from domains can aid in security assessments, access monitoring, and troubleshooting availability problems.
It is recommended to enable error logs on the AWS OpenSearch domain, which will help with security audits and troubleshooting.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To enable the AWS OpenSearch domain with error logs:\n\n1. Sign into the AWS console and navigate to the OpenSearch Service Dashboard\n2. In the navigation pane, under 'Managed Clusters', select 'Domains'\n3. Choose the reported OpenSearch domain\n4. On the Logs tab, select 'Error logs' and choose 'Enable'\n5. In the 'Set up error logs' section, in the 'Select log group from CloudWatch logs' setting, create/use an existing CloudWatch Logs log group as per your requirement\n6. In 'Specify CloudWatch access policy', create a new/select an existing policy as per your requirement\n7. Click on 'Enable'. |
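The same change can be made via the API; a boto3 sketch using update_domain_config, where the domain name and log group ARN are hypothetical and the log group must already have a resource policy allowing the OpenSearch service to write to it:

```python
import boto3

opensearch = boto3.client('opensearch')

opensearch.update_domain_config(
    DomainName='example-domain',  # hypothetical
    LogPublishingOptions={
        'ES_APPLICATION_LOGS': {  # the error-log channel this policy checks
            'CloudWatchLogsLogGroupArn': (
                'arn:aws:logs:us-east-1:111111111111:'
                'log-group:/aws/opensearch/example-domain'  # hypothetical
            ),
            'Enabled': True,
        },
    },
)
```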
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-macie2-session' AND json.rule = status equals "ENABLED" as X; count(X) less than 1``` | AWS Macie is not enabled
This policy identifies AWS regions in which Amazon Macie is not enabled.
AWS Macie is a data security service that automatically discovers, classifies, and protects sensitive data in AWS, enhancing your security and compliance posture. Failing to activate AWS Macie means missing out on automated detection and protection of sensitive data, leaving your organization more vulnerable to data breaches and compliance violations.
It is recommended to enable Macie in all regions for comprehensive adherence to security and compliance requirements.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To enable Macie in the specific region,\n\n1. Log in to your AWS Management Console.\n2. By using the AWS Region selector in the upper-right corner of the page, select the Region which is reported.\n3. In the AWS Management Console, search for "Macie" in the services search bar or locate it under the "Security, Identity, & Compliance" category.\n4. On the Amazon Macie page, choose Get started.\n5. Choose Enable Macie.\n\nTo re-enable Macie after it has been suspended in the region,\n\n1. Log in to your AWS Management Console.\n2. By using the AWS Region selector in the upper-right corner of the page, select the Region which is reported.\n3. In the AWS Management Console, search for "Macie" in the services search bar or locate it under the "Security, Identity, & Compliance" category.\n4. In the Macie dashboard, navigate to the 'settings' section.\n5. Click on the 'Re-enable Macie' button under the 'Suspend Macie' section.\n\nAfter enabling Macie, you can further configure policies, alerts, and other settings according to your organization's security and compliance needs. |
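Enabling Macie per region is a single API call; a minimal boto3 sketch looping over a hypothetical region list (for an account where Macie was suspended rather than never enabled, update_macie_session is the call instead):

```python
import boto3

# Hypothetical list; use the regions reported in the alert.
for region in ['us-east-1', 'eu-west-1']:
    macie = boto3.client('macie2', region_name=region)
    macie.enable_macie(
        status='ENABLED',
        findingPublishingFrequency='FIFTEEN_MINUTES',
    )
    # For a suspended account: macie.update_macie_session(status='ENABLED')
```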
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = 'restrictions.geoRestriction.restrictionType contains none'``` | AWS CloudFront web distribution with geo restriction disabled
This policy identifies CloudFront web distributions which have the geo restriction feature disabled. Geo restriction can block IP addresses based on Geo IP, by allowlisting or denylisting countries, in order to allow or restrict users in specific locations from accessing web application content.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Select the region, from the region drop-down, in which the alert is generated\n3. Navigate to CloudFront Distributions Dashboard\n4. Click on the reported distribution\n5. On 'Restrictions' tab, Click on the 'Edit' button\n6. On 'Edit Geo-Restrictions' page, Set 'Enable Geo-Restriction' to 'Yes' and allowlist/denylist countries as per your requirement.\n7. Click on 'Yes, Edit'. |
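Geo restriction can also be set through the API; a boto3 sketch with a hypothetical distribution ID and country list (update_distribution requires sending back the full distribution config along with the ETag from the read):

```python
import boto3

cloudfront = boto3.client('cloudfront')
dist_id = 'E1EXAMPLE12345'  # hypothetical distribution ID

resp = cloudfront.get_distribution_config(Id=dist_id)
config, etag = resp['DistributionConfig'], resp['ETag']

# Allowlist two hypothetical countries; 'blacklist' denylists instead.
config['Restrictions'] = {
    'GeoRestriction': {
        'RestrictionType': 'whitelist',
        'Quantity': 2,
        'Items': ['US', 'CA'],
    }
}

cloudfront.update_distribution(DistributionConfig=config, Id=dist_id, IfMatch=etag)
```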
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = state equals RUNNABLE and databaseVersion contains MYSQL and (settings.databaseFlags[*].name does not contain skip_show_database or settings.databaseFlags[?any(name contains skip_show_database and value does not contain on)] exists)``` | GCP MySQL instance database flag skip_show_database is not set to on
This policy identifies MySQL database instances in which the database flag skip_show_database is not set to on. This flag prevents people from using the SHOW DATABASES statement if they do not have the SHOW DATABASES privilege. This can improve security if you have concerns about users being able to see databases belonging to other users. It is recommended to set skip_show_database to on.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate SQL Instances page\n3. Click on reported MYSQL instance\n4. Click on 'EDIT'\nNOTE: If the instance is stopped, You need to START instance first to edit the configurations, then Click on EDIT.\n5. If the flag has not been set on the instance, \nUnder 'Customize your instance', click on 'ADD FLAG' in 'Flags' section, choose the flag 'skip_show_database' from the drop-down menu and set the value as 'on'\nOR\nIf the flag has been set to off, Under 'Customize your instance', In 'Flags' section choose the flag 'skip_show_database' and set the value as 'on', Click on DONE\n6. Click on 'DONE' and then 'SAVE' and if popup window appears, select 'SAVE AND RESTART'. |
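A sketch of the same change through the Cloud SQL Admin API with google-api-python-client, assuming application-default credentials; the project and instance names are hypothetical. Because patching settings.databaseFlags replaces the entire list, existing flags are merged first, and the instance may restart to apply the change:

```python
from googleapiclient import discovery  # pip install google-api-python-client

sqladmin = discovery.build('sqladmin', 'v1beta4')
project, instance = 'example-project', 'example-mysql-instance'  # hypothetical

# Read current flags so the patch does not wipe unrelated ones.
current = sqladmin.instances().get(project=project, instance=instance).execute()
flags = [f for f in current['settings'].get('databaseFlags', [])
         if f['name'] != 'skip_show_database']
flags.append({'name': 'skip_show_database', 'value': 'on'})

sqladmin.instances().patch(
    project=project,
    instance=instance,
    body={'settings': {'databaseFlags': flags}},
).execute()
```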
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-projects-get-iam-user' AND json.rule = 'roles[*] contains roles/cloudkms.admin and roles[*] contains roles/cloudkms.crypto'``` | GCP IAM user have overly permissive Cloud KMS roles
This policy identifies IAM users who have overly permissive Cloud KMS roles. The built-in/predefined IAM role Cloud KMS Admin allows the user to create, delete, and manage Cloud KMS key rings and keys. The built-in/predefined IAM role Cloud KMS CryptoKey Encrypter/Decrypter allows the user to encrypt and decrypt data at rest using the encryption keys. It is recommended to follow the principle of 'Separation of Duties', ensuring that one individual does not have all the necessary permissions to complete a malicious action.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE'].
Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to IAM & Admin (Left Panel)\n3. Select IAM\n4. From the list of users, choose the reported IAM user\n5. Click on Edit permissions pencil icon\n6. For member having 'Cloud KMS Admin' and any of the 'Cloud KMS CryptoKey Encrypter/Decrypter', 'Cloud KMS CryptoKey Encrypter', 'Cloud KMS CryptoKey Decrypter' or any CryptoKey roles granted/assigned, Click on the Delete Bin icon to remove the role from a member. |
```config from cloud.resource where api.name = 'aws-iam-list-users' AND json.rule = createDate contains 2018``` | xnbnuowcaz_ui_auto_policies_tests_name
kszyashfvs_ui_auto_policies_tests_descr
This is applicable to aws cloud and is considered a critical severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-postgresql-deployment-info' AND json.rule = allowedListIPAddresses[*] size equals 0 or allowedListIPAddresses[?any( address equals 0.0.0.0/0 )] exists``` | IBM Cloud PostgreSQL Database network access is not restricted to a specific IP range
This policy identifies IBM Cloud PostgreSQL Databases with no specified IP range for network access. To restrict access to your databases, you can allowlist specific IP addresses or ranges of IP addresses on your deployment. When no IP addresses are in the allowlist, the allowlist is disabled and the deployment accepts connections from any IP address. It is recommended to create an allowlist so that only IP addresses that match the allowlist, or fall within the ranges it contains, can connect to your deployment.
This is applicable to ibm cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'Resource list', from the list of resources select the PostgreSQL database reported in the alert.\n3. Refer to the URL below for setting allowlist IP addresses:\nhttps://cloud.ibm.com/docs/cloud-databases?topic=cloud-databases-allowlisting&interface=ui#set-allowlist-ui\n4. Remove the '0.0.0.0/0' entry if it is already present in the allowlist, and make sure to add only specific IP addresses or ranges. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elbv2-describe-load-balancers' AND json.rule = ['attributes'].['deletion_protection.enabled'] contains false``` | AWS Elastic Load Balancer v2 (ELBv2) with deletion protection disabled
This policy identifies Elastic Load Balancers v2 (ELBv2) that are configured with the deletion protection feature disabled.
AWS Elastic Load Balancer automatically distributes incoming traffic across multiple targets, such as EC2 instances, containers, and IP addresses, to improve the availability and fault tolerance of applications. To prevent your load balancer from being deleted accidentally, you can enable deletion protection.
It is recommended to enable deletion protection on AWS Elastic load balancers to protect them from being deleted accidentally.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To enable deletion protection on a load balancer:\n\n1. Log in to the AWS console. Navigate to the EC2 dashboard\n2. Select 'Load Balancers'\n3. Click on the reported Load Balancer\n4. On the 'Attributes' tab, choose 'Edit'\n5. On the Edit load balancer attributes page, select 'Enable' for 'Deletion protection'\n6. Click on 'Save' to save your changes. |
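The attribute toggled by the console steps can be set directly with boto3; the load balancer ARN below is hypothetical:

```python
import boto3

elbv2 = boto3.client('elbv2')

elbv2.modify_load_balancer_attributes(
    LoadBalancerArn=(
        'arn:aws:elasticloadbalancing:us-east-1:111111111111:'
        'loadbalancer/app/example-alb/0123456789abcdef'  # hypothetical ARN
    ),
    Attributes=[{'Key': 'deletion_protection.enabled', 'Value': 'true'}],
)
```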