query | description
---|---
```config from cloud.resource where cloud.type = 'aws' and api.name = 'aws-iam-get-credential-report' AND json.rule = '(access_key_1_active is true and access_key_1_last_rotated != N/A and _DateTime.ageInDays(access_key_1_last_rotated) > 90) or (access_key_2_active is true and access_key_2_last_rotated != N/A and _DateTime.ageInDays(access_key_2_last_rotated) > 90)'``` | AWS access keys are not rotated for 90 days
This policy identifies IAM users for which access keys are not rotated for 90 days. Access keys are used to sign API requests to AWS. As a security best practice, it is recommended that all access keys are regularly rotated to make sure that in the event of key compromise, unauthorized users are not able to gain access to your AWS services.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['KEYS_AND_SECRETS'].
Mitigation of this issue can be done as follows: 1. Sign in to the AWS console and navigate to the 'IAM' service.\n2. Click on the user that was reported in the alert.\n3. Click on 'Security Credentials' and review each 'Access Key'.\n4. Follow the instructions at the link below to rotate any Access Keys that are older than 90 days:\nhttps://aws.amazon.com/blogs/security/how-to-rotate-access-keys-for-iam-users/. |
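The rotation described above can also be scripted with the AWS CLI. A minimal sketch, assuming a hypothetical user name `example-user` and access key ID `AKIAEXAMPLE`:

```bash
# Create a new access key for the user (store the returned secret securely)
aws iam create-access-key --user-name example-user

# After updating all applications to use the new key, deactivate the old key
aws iam update-access-key --user-name example-user \
  --access-key-id AKIAEXAMPLE --status Inactive

# Once you have confirmed nothing depends on the old key, delete it
aws iam delete-access-key --user-name example-user --access-key-id AKIAEXAMPLE
```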
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-describe-images' AND json.rule = image.public is true and image.shared is false and image.imageOwnerAlias does not exist``` | AWS Amazon Machine Image (AMI) is publicly accessible
This policy identifies AWS AMIs which are owned by the AWS account and are accessible to the public. Amazon Machine Image (AMI) provides information to launch an instance in the cloud. The AMIs may contain proprietary customer information and should be accessible only to authorized internal users.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['INTERNET_EXPOSURE'].
Mitigation of this issue can be done as follows: 1. Log in to the AWS Console and navigate to the 'EC2' service.\n2. In the navigation pane, choose AMIs.\n3. Select your AMI from the list, and then choose Actions, Modify Image Permissions.\n4. Choose 'Private' and then choose 'Save'. |
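Equivalently, the public launch permission can be removed with the AWS CLI. A sketch assuming a hypothetical AMI ID and region:

```bash
# Remove the public ("all") launch permission from the AMI, making it private
aws ec2 modify-image-attribute \
  --image-id ami-0123456789abcdef0 \
  --launch-permission "Remove=[{Group=all}]" \
  --region us-east-1

# Verify that the AMI is no longer publicly launchable
aws ec2 describe-image-attribute \
  --image-id ami-0123456789abcdef0 \
  --attribute launchPermission \
  --region us-east-1
```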
```config from cloud.resource where api.name = 'azure-storage-account-list' as X; config from cloud.resource where api.name = 'azure-storage-account-table-diagnostic-settings' AND json.rule = properties.logs[*].enabled all true as Y; filter 'not($.X.id contains $.Y.properties.storageAccountId)'; show X;``` | Azure Storage Logging is not Enabled for Table Service for Read Write and Delete requests
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-events-rule' AND json.rule = '(isEnabled equals true) and (actions.actions[?any( actionType equals ONS and isEnabled equals true and lifecycleState equals ACTIVE)] exists) and (condition.eventType[*] contains com.oraclecloud.virtualnetwork.changesecuritylistcompartment and condition.eventType[*] contains com.oraclecloud.virtualnetwork.createsecuritylist and condition.eventType[*] contains com.oraclecloud.virtualnetwork.deletesecuritylist and condition.eventType[*] contains com.oraclecloud.virtualnetwork.updatesecuritylist) and actions.actions[*].topicId exists' as X; count(X) less than 1``` | OCI Event Rule and Notification does not exist for security list changes
This policy identifies the OCI compartments which do not have an Event Rule and Notification that gets triggered for security list changes. Monitoring and alerting on changes to Security Lists will help in identifying changes to traffic flowing into and out of Subnets within a Virtual Cloud Network. It is recommended that an Event Rule and Notification be configured to catch changes made to the security list.
NOTE:
1. Event Rules are compartment scoped and will detect events in child compartments; it is recommended to create the Event Rule at the root compartment level.
2. This policy will not trigger an alert if you have at least one qualifying Event Rule and Notification, regardless of whether your OCI tenancy has a single compartment or multiple compartments.
This is applicable to oci cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the Event into the Search box at the top of the Console.\n3. Click the Event Service from the Services submenu\n4. Select the compartment that should host the rule\n5. Click Create Rule\n6. Provide a Display Name and Description\n7. Create a Rule Condition by selecting Networking in the Service Name Drop-down and selecting Network Security List – Change Compartment, Security List – Create, Security List - Delete and Security List – Update\n8. In the Actions section select Notifications as Action Type\n9. Select the Compartment that hosts the Topic to be used.\n10. Select the Topic to be used\n11. Optionally add Tags to the Rule\n12. Click Create Rule. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elb-describe-load-balancers' AND json.rule = description.scheme contains internet-facing``` | AWS Classic Load Balancer is in use for internet-facing applications
This policy identifies Classic Load Balancers that are being used for internet-facing HTTP/HTTPS applications. A Classic Load Balancer should be used when you have an existing application running in the EC2-Classic network. An Application Load Balancer (ALB) is recommended for internet-facing HTTP/HTTPS web applications.
For more details:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-application-load-balancer.html
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['INTERNET_EXPOSURE'].
Mitigation of this issue can be done as follows: To create an Application Load Balancer (ALB) refer to,\nhttps://docs.aws.amazon.com/elasticloadbalancing/latest/application/create-application-load-balancer.html\n\nOnce the Application Load Balancer is created, you can delete the reported Classic Load Balancer by:\n1. Log in to the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Navigate to the EC2 Dashboard\n4. Click on 'Load Balancers', choose the reported load balancer\n5. Click on 'Actions' and from the drop-down click on 'Delete'\n6. Click on 'Yes, Delete'. |
```config from cloud.resource where api.name = 'ibm-vpc' as X; config from cloud.resource where api.name = 'ibm-vpc-flow-log-collector' as Y; filter 'not($.X.id equals $.Y.target.id)'; show X;``` | IBM Cloud VPC Flow Logs not enabled
This policy identifies IBM Cloud VPCs which have flow logs disabled. VPC Flow logs capture information about IP traffic going to and from network interfaces in your VPC. Flow logs are used as a security tool to monitor the traffic that is reaching your instances. Without the flow logs turned on, it is not possible to get any visibility into network traffic.
This is applicable to ibm cloud and is considered an informational severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: To configure a Flow log on a VPC, please follow the below URL. Please make sure to provide target as 'VPC':\nhttps://cloud.ibm.com/docs/vpc?topic=vpc-ordering-flow-log-collector&interface=ui\n. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = 'resourceUsageExportConfig.enableNetworkEgressMetering does not exist or resourceUsageExportConfig.enableNetworkEgressMetering is false'``` | GCP Kubernetes Engine Clusters not configured with network traffic egress metering
This policy identifies Kubernetes Engine Clusters which are not configured with network traffic egress metering. When network traffic egress metering is enabled, a deployed DaemonSet pod meters network egress traffic by collecting data from the conntrack table and exports the metered metrics to the specified destination. It is recommended to use network egress metering so that you have data on, and can track, monitored network traffic.
NOTE: Measuring network egress requires a network metering agent (NMA) running on each node. The NMA runs as a privileged pod, consumes some resources on the node (CPU, memory, and disk space), and enables the nf_conntrack_acct sysctl flag on the kernel (for connection tracking flow accounting). If you are comfortable with these caveats, you can enable network egress tracking for use with GKE usage metering.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Follow the URL below to enable GKE usage metering:\n\nhttps://cloud.google.com/kubernetes-engine/docs/how-to/cluster-usage-metering#enabling. |
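If you prefer the gcloud CLI, GKE usage metering with network egress metering can be enabled roughly as follows; the cluster name, zone, and BigQuery dataset are placeholders, and the dataset must already exist:

```bash
# Enable GKE usage metering and network egress metering on an existing cluster,
# exporting usage records to an existing BigQuery dataset
gcloud container clusters update example-cluster \
  --zone us-central1-a \
  --resource-usage-bigquery-dataset example_usage_metering_dataset \
  --enable-network-egress-metering
```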
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-monitor-log-profiles-list' as X; count(X) less than 1``` | Azure Monitoring log profile is not configured to export activity logs
This policy identifies Azure accounts in which no monitoring log profile is configured. A Log Profile controls how your Activity Log is exported, allowing you to export the logs and store them for a longer duration for analyzing security activities within your Azure account. It is therefore recommended to have at least one monitoring log profile per account to export all activity logs.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To create a new log profile (Export to a storage account) use following command:\naz monitor log-profiles create --name <NAME_OF_LOGPROFILE> --location <LOCATION> --locations <LOCATIONS_LIST> --categories "Delete" "Write" "Action" --enabled true --days <RETENTION_IN_DAYS> --storage-account-id "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUPNAME>/providers/Microsoft.Storage/storageAccounts/<STORAGE_ACCOUNT_NAME>"\n\nOR\n\nTo create a new log profile (Export to an event hub) use following command:\naz monitor log-profiles create --name <NAME_OF_LOGPROFILE> --location <LOCATION> --locations <LOCATIONS_LIST> --categories "Delete" "Write" "Action" --enabled true --days <RETENTION_IN_DAYS> --service-bus-rule-id "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUPNAME>/providers/Microsoft.EventHub/namespaces/<EVENTHUB_NAMESPACENAME>/authorizationrules/RootManageSharedAccessKey"\n\nNOTE: Make sure before referring Storage Account or Eventhub in above CLI commands, you have already created Storage Account or Eventhub as per your requirements.. |
```config from cloud.resource where api.name = 'gcloud-compute-project-info' AND json.rule = commonInstanceMetadata.kind equals "compute#metadata" and commonInstanceMetadata.items[?any(key contains "enable-oslogin" and (value contains "Yes" or value contains "Y" or value contains "True" or value contains "true" or value contains "TRUE" or value contains "1"))] does not exist and commonInstanceMetadata.items[?any(key contains "ssh-keys")] exists as X; config from cloud.resource where api.name = 'gcloud-compute-instances-list' AND json.rule = status equals RUNNING and ( metadata.items[?any(key exists and key contains "block-project-ssh-keys" and (value contains "Yes" or value contains "Y" or value contains "True" or value contains "true" or value contains "TRUE" or value contains "1"))] does not exist and metadata.items[?any(key exists and key contains "enable-oslogin" and (value contains "Yes" or value contains "Y" or value contains "True" or value contains "true" or value contains "TRUE" or value contains "1"))] does not exist and name does not start with "gke-") as Y; filter '$.Y.zone contains $.X.name'; show Y;``` | HD-GCP VM instances have block project-wide SSH keys feature disabled
This policy identifies VM instances which have the block project-wide SSH keys feature disabled. Project-wide SSH keys are stored in Compute/Project metadata and can be used to log in to all the instances within a project. Using project-wide SSH keys eases SSH key management, but if compromised they pose a security risk that can impact all the instances within a project. It is recommended to use instance-specific SSH keys, which can limit the attack surface if the SSH keys are compromised.
This is applicable to gcp cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: 1. Log in to the GCP Portal\n2. Go to Compute Engine (Left Panel)\n3. Go to VM instances\n4. From the list of VMs, choose the reported VM\n5. Click on the Edit button\n6. Under the SSH Keys section, check the 'Block project-wide SSH keys' checkbox\n7. Click on Save. |
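The same setting can be applied from the gcloud CLI by setting the instance metadata key; a sketch with a placeholder instance name and zone:

```bash
# Block project-wide SSH keys on a specific VM instance
gcloud compute instances add-metadata example-instance \
  --zone us-central1-a \
  --metadata block-project-ssh-keys=TRUE
```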
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-describe-security-groups' AND json.rule = '((groupName == default) and (ipPermissions[*] is not empty or ipPermissionsEgress[*] is not empty))'``` | AWS Default Security Group does not restrict all traffic
This policy identifies default security groups which do not restrict inbound and outbound traffic. A VPC comes with a default security group whose initial configuration denies all inbound traffic and allows all outbound traffic. If you do not specify a security group when you launch an instance, the instance is automatically assigned to this default security group. As a result, the instance may accidentally send outbound traffic. It is recommended to remove all inbound and outbound rules from the default security group and not to attach the default security group to any resources.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Before making any changes, please check the impact on your applications/services.\n\nFor Resources associated with the alerted security group:\n1. Identify AWS resources that exist within the default security group\n2. Create a set of least privilege security groups for those resources\n3. Place the resources in those security groups\n4. Remove the associated resources from the default security group\n\nFor alerted Security Groups:\n1. Log in to the AWS console\n2. In the console, select the specific region from the 'Region' drop-down on the top right corner, for which the alert is generated\n3. Navigate to the 'VPC' service\n4. For each region, Click on 'Security Groups' specific to the alert\n5. On section 'Inbound rules', Click on 'Edit Inbound Rules' and remove the existing rule, click on 'Save'\n6. On section 'Outbound rules', Click on 'Edit Outbound Rules' and remove the existing rule, click on 'Save'. |
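As a sketch, the default rules can also be removed with the AWS CLI; the security group ID and the exact rules shown are assumptions (the usual defaults), so inspect the group first with `describe-security-groups` and revoke what is actually present:

```bash
# Remove the default "allow all from this group" inbound rule
aws ec2 revoke-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --ip-permissions '[{"IpProtocol":"-1","UserIdGroupPairs":[{"GroupId":"sg-0123456789abcdef0"}]}]'

# Remove the default "allow all" outbound rule
aws ec2 revoke-security-group-egress \
  --group-id sg-0123456789abcdef0 \
  --ip-permissions '[{"IpProtocol":"-1","IpRanges":[{"CidrIp":"0.0.0.0/0"}]}]'
```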
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-redshift-describe-clusters' AND json.rule = publiclyAccessible is true and masterUsername is member of ("awsuser","administrator","admin")``` | AWS Redshift cluster with commonly used master username and public access setting enabled
This policy identifies AWS Redshift clusters configured with commonly used master usernames like 'awsuser', 'administrator', or 'admin', and the public access setting is enabled.
AWS Redshift, a managed data warehousing service, typically stores sensitive and critical data. Allowing public access increases the risk of unauthorized access, data breaches, and potential malicious activities. Using standard usernames increases the risk of password brute-force attacks by potential intruders.
As a recommended security measure, it is advised not to use commonly used usernames and to disable public access for the Redshift cluster.
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Changing the default master user name for your existing Amazon Redshift clusters requires relaunching those clusters with a different master user name and migrating the existing data to the new clusters.\n\nTo launch the new Redshift database clusters,\n1. Sign in to the AWS Management Console and open the Amazon Redshift console at https://console.aws.amazon.com/redshift/.\n2. On the navigation menu, choose 'Clusters'. The clusters for your account in the current AWS Region are listed. A subset of the properties of each cluster is displayed in columns in the list.\n3. Choose 'Create cluster' to create a cluster.\n4. Follow the instructions on the console page to enter the properties for Cluster configuration.\n5. In the 'Database configuration' section, type a unique (non-default) user name within the 'Master user name' field.\n6. In 'Additional configurations', under the 'Network and security' dropdown, ensure the checkbox 'Turn on Publicly accessible' in the 'Publicly accessible' section is unchecked.\n7. Fill out the rest of the fields available on this page with the information taken from the existing cluster.\n8. Choose 'Create cluster' to create the cluster. The cluster might take several minutes to be ready to use.\n9. Once the Cluster Status value changes to available and the DB Health status changes to healthy, the new cluster can be used to load the existing data from the old cluster.\n10. Once the data migration process is completed, all the data is loaded into the new Redshift cluster, and all applications are configured to use the new cluster, delete the old cluster.\n\nTo delete the existing cluster, refer to the below link.\nhttps://docs.aws.amazon.com/redshift/latest/mgmt/managing-clusters-console.html#delete-cluster. |
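For the cluster creation step, a minimal AWS CLI sketch; the cluster identifier, node type, node count, username, and password are placeholder values, shown with a non-default master username and public accessibility disabled:

```bash
# Launch a replacement Redshift cluster with a non-default master username
# and public accessibility disabled
aws redshift create-cluster \
  --cluster-identifier example-replacement-cluster \
  --node-type ra3.xlplus \
  --number-of-nodes 2 \
  --master-username exampledbadmin \
  --master-user-password 'ExamplePassw0rd!' \
  --no-publicly-accessible
```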
```config from cloud.resource where api.name = 'azure-machine-learning-workspace' AND json.rule = properties.keyVault exists as X; config from cloud.resource where api.name = 'azure-key-vault-list' AND json.rule = "not (diagnosticSettings.value[*].properties.logs[*].enabled any equal true and diagnosticSettings.value[*].properties.logs[*].enabled size greater than 0)" as Y; filter '$.X.properties.keyVault contains $.Y.name'; show Y;``` | Azure Key vault used for machine learning workspace secrets storage is not enabled with audit logging
This policy identifies Azure Key vaults used for machine learning workspace secrets storage that are not enabled with audit logging.
Azure Key vaults are used to store machine learning workspace secrets and other sensitive information that is needed by the workspace. Enabling key vaults with audit logging will help in monitoring how and when machine learning workspace secrets are accessed, and by whom. This audit log data enhances visibility by providing valuable insights into the trail of interactions involving confidential information.
As a best practice, it is recommended to enable audit event logging for key vaults used for machine learning workspace secrets storage.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure portal\n2. Select 'Key vaults'\n3. Select the key vault instance to modify\n4. Select 'Diagnostic settings' under 'Monitoring'\n5. Click on '+Add diagnostic setting'\n6. In the 'Diagnostic setting' page, Select the Logs, Metrics and Destination details as per your business requirements.\n7. Click on 'Save'. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-external-backend-service' AND json.rule = backends exists and ( protocol equal ignore case "HTTP" or protocol equal ignore case "HTTPS" or protocol equal ignore case "HTTP2" ) and ( logConfig.enable does not exist or logConfig.enable is false )``` | GCP External Load Balancer logging is disabled
This policy identifies GCP External Load Balancers using any of the protocols like HTTP, HTTPS, and HTTP/2 having logging disabled.
GCP external load balancers distribute incoming traffic across multiple instances or services hosted on Google Cloud Platform. The logging feature for external load balancers captures and records detailed information about the traffic flowing through the load balancers. This includes data such as incoming requests, responses, errors, latency metrics, and other relevant information. By enabling logging for external load balancers, you gain visibility into the performance, health, and security of the applications. Logged data comes in handy for troubleshooting incidents, monitoring, analysis, and compliance purposes.
It is recommended to enable logging for all external load balancers.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the GCP console.\n2. Navigate to 'Network Services' and select 'Load Balancing' from the left panel.\n3. Click on 'BACKENDS'.\n4. Click on the load balancer link under the 'Load balancer' column for the reported backend service.\n5. On the Load Balancer details page, click on 'EDIT'.\n6. Click on 'Backend configuration', and then click the edit icon next to the reported backend service under the 'Backend services' section.\n7. Under 'Logging', select 'Enable logging' checkbox.\n8. Choose the appropriate Sample rate.\n9. To finish editing the backend service, click 'UPDATE'.\n10. To finish editing the load balancer, click 'UPDATE'.. |
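The equivalent change from the gcloud CLI, assuming a hypothetical global backend service name:

```bash
# Enable request logging on an external HTTP(S) load balancer backend service,
# capturing all requests (sample rate 1.0)
gcloud compute backend-services update example-backend-service \
  --global \
  --enable-logging \
  --logging-sample-rate=1.0
```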
```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-machine-learning-workspace' AND json.rule = properties.provisioningState equal ignore case Succeeded and (properties.managedNetwork.isolationMode equal ignore case Disabled OR properties.managedNetwork.isolationMode does not exist)``` | Azure Machine Learning workspace not enforced with Managed Virtual Network Isolation
This policy identifies Azure Machine Learning workspaces that are not enforced with Managed Virtual Network Isolation.
Managed Virtual Network Isolation ensures that the workspace and its resources are accessible only within a secure virtual network. Without enforcing this isolation, the environment becomes vulnerable to security risks like external threats, data leaks, and non-compliance. If not properly isolated, the workspace may be exposed to public networks, increasing the chances of unauthorized access and data breaches.
As a security best practice, it is recommended to configure Azure Machine Learning workspaces with Managed Virtual Network Isolation. This will restrict network access to the workspace and ensure that it can only be accessed from authorized networks.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Note: To update an existing Azure Machine Learning workspace to use a managed virtual network, you first need to delete all its compute resources, including compute instances, compute clusters, and managed online endpoints.\n\n1. Log in to Azure Portal and search for 'Azure Machine Learning'\n2. Select 'Azure Machine Learning'\n3. Select the reported Azure Machine Learning Workspace\n4. Under 'Settings' go to 'Networking' section\n5. At the top, select the 'Workspace managed outbound access' tab\n6. Choose either 'Allow Internet Outbound' or 'Allow Only Approved Outbound' based on your needs\n7. Configure the workspace outbound rules according to your requirements\n8. Click on 'Save'. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-mysql-server' AND json.rule = properties.userVisibleState equal ignore case Ready and properties.privateEndpointConnections[*] is empty``` | Copy of build information
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-network-security-group' AND json.rule = rules[?any( remote.cidr_block equals "0.0.0.0/0" and direction equals "inbound" and ( protocol equals "all" or ( protocol equals "tcp" and ( port_max greater than 3389 and port_min less than 3389 ) or ( port_max equals 3389 and port_min equals 3389 ))))] exists``` | IBM Cloud Security Group allow all traffic on RDP port (3389)
This policy identifies IBM Cloud Security groups that allow all traffic on RDP port 3389. Doing so may allow a bad actor to brute force their way into the system and potentially gain access to the entire network. Review your list of security group rules to ensure that your resources are not exposed. As a best practice, restrict RDP solely to known static IP addresses. Limit the access list to include only known hosts, services, or specific employees.
This is applicable to ibm cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. If the Security Groups reported indeed need to restrict all traffic, follow the instructions below:\n1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Security Groups'\n3. Select the 'Security Groups' reported in the alert\n4. Go to 'Inbound rules' under 'Rules' tab\n5. Click on three dots on the right corner of a row containing rule that has 'Source type' as 'Any' and 'Value' as 3389 (or range containing 3389)\n6. Click on 'Delete'. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-vm-list' AND json.rule = ['properties.storageProfile'].['osDisk'].['vhd'].['uri'] exists ``` | Azure Virtual Machines are not utilising Managed Disks
This policy identifies Azure Virtual Machines which are not utilising Managed Disks. Using Azure Managed Disks instead of traditional blob-based VHDs has several advantages: managed disks are encrypted by default, reduce cost compared to storage accounts, and are more resilient, as Microsoft manages the disk storage and moves it if the underlying hardware goes faulty. It is recommended to move blob-based VHDs to Managed Disks.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Select 'Virtual Machines' from the left pane\n3. Select the reported virtual machine\n4. Select 'Disks' under 'Settings'\n5. Click on 'Migrate to managed disks'\n6. Select 'Migrate'. |
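Migration to managed disks can also be done with the Azure CLI; a sketch with placeholder resource group and VM names (the VM must be deallocated before conversion):

```bash
# Deallocate the VM before converting its unmanaged (blob-based) disks
az vm deallocate --resource-group example-rg --name example-vm

# Convert the VM's unmanaged disks to managed disks
az vm convert --resource-group example-rg --name example-vm

# Start the VM again once the conversion completes
az vm start --resource-group example-rg --name example-vm
```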
```config from cloud.resource where api.name = 'aws-elasticbeanstalk-environment' AND json.rule = status does not equal "Terminated" as X; config from cloud.resource where api.name = 'aws-elasticbeanstalk-configuration-settings' AND json.rule = configurationSettings[*].optionSettings[?any( optionName equals "ManagedActionsEnabled" and namespace equals "aws:elasticbeanstalk:managedactions" and value equals "false")] exists as Y; filter ' $.X.environmentName equals $.Y.configurationSettings[*].environmentName and $.X.applicationName equals $.Y.configurationSettings[*].applicationName'; show X;``` | AWS Elastic Beanstalk environment managed platform updates are not enabled
This policy identifies the AWS Elastic Beanstalk Environment where managed platform updates are not enabled.
Elastic Beanstalk is a platform as a service (PaaS) product from Amazon Web Services (AWS) that provides automated application deployment and scaling features. Enabling managed platform updates ensures that the latest available platform fixes, updates, and features for the environment are installed. Without managed updates, users must apply updates manually, risking missed critical updates and potential security vulnerabilities. This can result in high-severity security risks, loss of data, and possible system downtime.
Ensuring that platform updates are managed automatically is crucial for the overall security and performance of the applications running on the platform.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To configure managed platform updates for Elastic Beanstalk environment, perform the following actions\n\n1. Sign in to the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Go to 'Elastic Beanstalk' service\n4. In the navigation pane, choose 'Environments', then select the reported environment's name from the list\n5. In the navigation pane, choose Configuration\n6. In the 'Updates, monitoring, and logging' configuration category, choose Edit\n7. Under 'Managed platform updates' section, Enable Managed updates by selecting the 'Activated' checkbox\n8. If managed updates are enabled, select a maintenance window, and then select an 'Update level' according to your business requirements\n9. To save the changes choose 'Apply' at the bottom of the page. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = isMasterVersionSupported exists AND isMasterVersionSupported does not equal "true"``` | GCP GKE unsupported Master node version
This policy identifies the GKE master node version and generates an alert if the version running is unsupported.
Using an unsupported version of Google Kubernetes Engine (GKE) on Google Cloud Platform (GCP) can lead to several potential issues and risks, such as security vulnerabilities, compatibility issues, performance and stability problems, and compliance concerns. To mitigate these risks, it's crucial to regularly update the GKE clusters to supported versions recommended by Google Cloud.
As a security best practice, it is always recommended to use the latest version of GKE.
Note: This Policy is in line with the GCP GKE release version schedule https://cloud.google.com/kubernetes-engine/docs/release-schedule#schedule-for-release-channels
This is applicable to gcp cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Manually initiate a master upgrade:\n\n1. Visit the Google Kubernetes Engine Clusters menu in Google Cloud Platform Console.\n2. Click the desired cluster name.\n3. Under Cluster basics, click "Upgrade Available" next to Version.\n4. Select the desired version, then click Save Changes.. |
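A manual control-plane upgrade can also be started from the gcloud CLI; the cluster name, zone, and target version below are placeholders, and the version must be chosen from the currently supported versions for your release channel:

```bash
# Upgrade the cluster control plane (master) to a supported GKE version
gcloud container clusters upgrade example-cluster \
  --zone us-central1-a \
  --master \
  --cluster-version 1.29.5-gke.1091002
```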
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-sagemaker-notebook-instance' AND json.rule = notebookInstanceStatus equals InService and subnetId does not exist``` | AWS SageMaker notebook instance is not placed in VPC
This policy identifies SageMaker notebook instances that are not placed inside a VPC. It is recommended to place your SageMaker notebook instance inside a VPC so that only VPC resources are able to access your SageMaker data, which then cannot be accessed from outside the VPC network.
For more details:
https://docs.aws.amazon.com/sagemaker/latest/dg/process-vpc.html
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: An AWS SageMaker notebook instance cannot be placed in a VPC after it is created. You need to create a new notebook instance placed in a VPC and migrate all required data from the reported notebook instance to the newly created notebook instance before you delete the reported notebook instance.\n\nTo create a new AWS SageMaker notebook instance,\n1. Log in to the AWS console\n2. Navigate to the AWS SageMaker dashboard\n3. Choose Notebook instances and then choose 'Create notebook instance'\n4. On the Create notebook instance page, within the 'Network' section,\nfrom the 'VPC – optional' dropdown list, select the VPC where you want to deploy the new SageMaker notebook instance.\n5. Choose other parameters as per your requirement and click on the 'Create notebook instance' button\n\nTo delete the reported notebook instance,\n1. Log in to the AWS console\n2. Navigate to the AWS SageMaker dashboard\n3. Choose Notebook instances and choose the reported notebook instance\n4. Click on the 'Actions' dropdown menu, select the 'Stop' option, and when the instance stops, select the 'Delete' option.\n5. Within the Delete <notebook-instance-name> dialog box, click the Delete button to confirm the action. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-describe-instances' AND json.rule = iamInstanceProfile.arn does not exist and state.code equals 16``` | AWS EC2 Instance IAM Role not enabled
AWS provides Identity and Access Management (IAM) roles to securely access AWS services and resources. A role is an identity with permission policies that define what the identity can and cannot do in AWS. As a best practice, create IAM roles and attach them to EC2 instances to manage instance permissions securely instead of distributing or sharing keys or passwords.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: The most common setup is the AWS default that allows for EC2 access to AWS Services. For most, this is a great way to realize flexible, yet secure, EC2 access enabled for your instances. Select this when you launch EC2 instances to automatically inherit these permissions.\n\nIAM\n1. Go to the AWS console IAM dashboard.\n2. In the navigation pane, choose Roles, Create new role.\n3. Under 'Choose the service that will use this role' select EC2, then 'Next:Permissions.'\n4. On the Attach permissions policies page, select an AWS managed policy that grants your instances access to the resources that they need, then 'Next:Tags.'\n5. Add tags (optional), then select 'Next:Review.'\n6. On the Create role and Review page, type a name for the role and choose Create role.\n\nEC2\n1. Go to the AWS console EC2 dashboard.\n2. Select Running Instances.\n3. Check the instance you want to modify.\n4. From the Actions pull-down menu, select Instance Settings and Attach/Replace IAM Role.\n5. On the Attach/Replace IAM Role page, under the IAM role pull-down menu, choose the role created in the IAM steps above. |
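The same outcome can be scripted with the AWS CLI; the role, instance profile, and instance identifiers below are placeholders, and the IAM role is assumed to already exist with an EC2 trust policy:

```bash
# Create an instance profile and add the existing IAM role to it
aws iam create-instance-profile --instance-profile-name example-ec2-profile
aws iam add-role-to-instance-profile \
  --instance-profile-name example-ec2-profile \
  --role-name example-ec2-role

# Attach the instance profile to the running EC2 instance
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=example-ec2-profile
```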
```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains "eventName=" or $.X.filterPattern contains "eventName =") and ($.X.filterPattern does not contain "eventName!=" and $.X.filterPattern does not contain "eventName !=") and $.X.filterPattern contains CreateNetworkAcl and $.X.filterPattern contains CreateNetworkAclEntry and $.X.filterPattern contains DeleteNetworkAcl and $.X.filterPattern contains DeleteNetworkAclEntry and $.X.filterPattern contains ReplaceNetworkAclEntry and $.X.filterPattern contains ReplaceNetworkAclAssociation) and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1``` | AWS Log metric filter and alarm does not exist for Network Access Control Lists (NACL) changes
This policy identifies the AWS regions which do not have a log metric filter and alarm for Network Access Control Lists (NACL) changes. Monitoring changes to NACLs will help ensure that AWS resources and services are not unintentionally exposed. It is recommended that a metric filter and alarm be established for changes made to NACLs.
NOTE: This policy will trigger an alert if you have at least one CloudTrail trail that is multi-region enabled, logs all management events in your account, and is not set with the specific log metric filter and alarm.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign in to AWS Console\n2. Navigate to CloudWatch dashboard\n3. Click on 'Log groups' in the 'Logs' section (Left panel)\n4. Select the log group created for your CloudTrail trail event logs (CloudTrail should be multi trail enabled with all management events captured) and click 'Create Metric Filter' button.\n5. In 'Define Logs Metric Filter' page, add 'Filter pattern' value as \n{ ($.eventName = CreateNetworkAcl) || ($.eventName = CreateNetworkAclEntry) || ($.eventName = DeleteNetworkAcl) || ($.eventName = DeleteNetworkAclEntry) || ($.eventName = ReplaceNetworkAclEntry) || ($.eventName = ReplaceNetworkAclAssociation) }\nand Click on 'Assign Metric'\n6. In 'Create Metric Filter and Assign a Metric' page, Choose Filter Name, Metric Details parameter according to your requirement and click on 'Create Filter'\n7. Click on 'Create Alarm',\n - In Step 1 specify metric details and conditions details as required and click on 'Next'\n - In Step 2 Select an SNS topic either by creating a new topic or use existing SNS topic/ARN and click on 'Next'\n - In Step 3 Select name and description to alarm and click on 'Next'\n - In Step 4 Preview your data entered and click on 'Create Alarm'. |
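Steps 4-7 can also be performed with the AWS CLI; the log group, metric, namespace, and SNS topic names below are placeholders, and the SNS topic is assumed to already exist:

```bash
# Create a metric filter for NACL changes on the CloudTrail log group
aws logs put-metric-filter \
  --log-group-name example-cloudtrail-log-group \
  --filter-name NACLChanges \
  --filter-pattern '{ ($.eventName = CreateNetworkAcl) || ($.eventName = CreateNetworkAclEntry) || ($.eventName = DeleteNetworkAcl) || ($.eventName = DeleteNetworkAclEntry) || ($.eventName = ReplaceNetworkAclEntry) || ($.eventName = ReplaceNetworkAclAssociation) }' \
  --metric-transformations metricName=NACLChangeCount,metricNamespace=CISBenchmark,metricValue=1

# Create an alarm on the new metric that notifies an existing SNS topic
aws cloudwatch put-metric-alarm \
  --alarm-name nacl-changes-alarm \
  --metric-name NACLChangeCount \
  --namespace CISBenchmark \
  --statistic Sum \
  --period 300 \
  --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --evaluation-periods 1 \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:example-topic
```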
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = "state equals RUNNABLE and databaseVersion contains POSTGRES and settings.databaseFlags[?any(name contains log_hostname and value contains on)] exists"``` | GCP PostgreSQL instance database flag log_hostname is not set to off
This policy identifies PostgreSQL database instances in which the database flag log_hostname is not set to off. Logging hostnames can incur overhead on server performance because, for each statement logged, DNS resolution is required to convert the IP address to a hostname. It is recommended to set log_hostname to off.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate SQL Instances page\n3. Click on reported PostgreSQL instance\n4. Click EDIT\n5. If the flag has not been set on the instance, \nUnder 'Customize your instance', click on 'ADD FLAG' in 'Flags' section, choose the flag 'log_hostname' from the drop-down menu and set the value as 'off'\nOR\nIf the flag has been set to other than off, Under 'Customize your instance', In 'Flags' section choose the flag 'log_hostname' and set the value as 'off'\n6. Click on 'DONE' and then 'SAVE'. |
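From the gcloud CLI this is a single patch call; the instance name is a placeholder, and note that the flag list passed here replaces whatever flags are currently set:

```bash
# Set log_hostname to off on the reported PostgreSQL instance.
# Warning: --database-flags overwrites all flags currently set on the instance,
# so list the existing flags first and include them in this command as well.
gcloud sql instances patch example-postgres-instance \
  --database-flags log_hostname=off
```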
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-activity-log-alerts' AND json.rule = "location equal ignore case Global and properties.enabled equals true and properties.scopes[*] does not contain resourceGroups and properties.condition.allOf[?(@.field=='operationName')].equals equals Microsoft.Network/publicIPAddresses/delete" as X; count(X) less than 1``` | Azure Activity log alert for Delete public IP address rule does not exist
This policy identifies the Azure accounts in which activity log alert for Delete public IP address rule does not exist.
Creating an activity log alert for Delete public IP address rule gives insight into network rule access changes and may reduce the time it takes to detect suspicious activity. By enabling this monitoring, you get alerts whenever any deletions are made to public IP addresses rules.
As a best practice, it is recommended to have an activity log alert for Delete public IP address rule to enhance network security monitoring and detect suspicious activities.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Click on Monitor (Left Panel)\n3. Select 'Alerts'\n4. Click on Create > Alert rule\n5. In 'Create an alert rule' page, choose the Scope as your Subscription and under the CONDITION section, choose 'Delete Public Ip Address (Public Ip Address)' and Other fields you can set based on your custom settings.\n6. Click on Create. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = policy.Statement[?any(Effect equals Deny and Action equals s3:* and (Principal.AWS equals * or Principal equals *) and Condition.Bool.aws:SecureTransport contains false )] does not exist``` | AWS S3 bucket policy does not enforce HTTPS request only
This policy identifies AWS S3 buckets that have a policy which does not enforce HTTPS-only requests. Enforcing the S3 bucket to accept only HTTPS requests would prevent potential attackers from eavesdropping on data in transit or manipulating network traffic using man-in-the-middle or similar attacks. It is highly recommended to explicitly deny HTTP requests in the S3 bucket policy.
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. Navigate to the S3 dashboard\n3. Choose the reported S3 bucket\n4. In the 'Permissions' tab, Click on 'Edit' under 'Bucket policy'\n5. To update S3 bucket policy to enforce HTTPS request only, follow the below URL:\nhttps://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-policy-for-config-rule/. |
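The deny statement referenced above can be applied directly with the AWS CLI; the bucket name is a placeholder, and if the bucket already has a policy, merge this statement into it instead of overwriting:

```bash
# Attach a bucket policy that denies any request not made over HTTPS
aws s3api put-bucket-policy --bucket example-bucket --policy '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyInsecureTransport",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": { "Bool": { "aws:SecureTransport": "false" } }
    }
  ]
}'
```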
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' AND json.rule = properties.provisioningState equals Succeeded and diagnosticSettings.value[*].properties.workspaceId does not equal ignore case "/subscriptions/8dff688e-d9b0-477c-b2b0-b0e729fb06bd/resourceGroups/rg-analytics-sh-prd-scus/providers/Microsoft.OperationalInsights/workspaces/log-sh-workspace"``` | bboiko test 04 - policy
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = 'webACLId is empty'``` | AWS CloudFront web distribution with AWS Web Application Firewall (AWS WAF) service disabled
This policy identifies Amazon CloudFront web distributions which have the AWS Web Application Firewall (AWS WAF) service disabled. As a best practice, enable the AWS WAF service on CloudFront web distributions to protect against application layer attacks. To block malicious requests to your Cloudfront Content Delivery Network, define the block criteria in the WAF web access control list (web ACL).
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Go to the CloudFront Distributions Dashboard\n3. Click on the reported web distribution\n4. On 'General' tab, Click on 'Edit' button\n5. On 'Edit Distribution' page, Choose a 'AWS WAF Web ACL' from dropdown.\n6. Click on 'Yes, Edit'. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-network-lb-list' AND json.rule = diagnosticSettings.value[*] size equals 0``` | Azure Load Balancer diagnostics logs are disabled
Azure Load Balancers provide different types of logs related to alert events, health probes, and metrics to help you manage and troubleshoot issues. This policy identifies Azure Load Balancers that have diagnostics logs disabled. As a best practice, enable diagnostic logs to start collecting the data available through these logs.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Diagnostic logs are not supported for Azure Load Balancers that use the Basic SKU.\nPlease create a new Load Balancer selecting the Standard SKU\nOR\nTo upgrade a Basic SKU Load Balancer to Standard SKU follow the steps provided in the link below,\nhttps://docs.microsoft.com/en-us/azure/load-balancer/upgrade-basic-standard\n\nFor an Azure Load Balancer with Standard SKU follow the steps below,\n1. Log in to the Azure portal.\n2. Navigate to 'Load Balancers', and select the reported load balancer from the list\n3. Select 'Diagnostic settings' under the 'Monitoring' section\n4. Click on '+Add diagnostic setting'\n5. Specify a 'Diagnostic settings name'\n6. Under the 'Category details' section, select the type of 'Log' that you want to enable\n7. Under the section 'Destination details',\na. If you select 'Send to Log Analytics', select the 'Subscription' and 'Log Analytics workspace'\nb. If you set 'Archive to storage account', select the 'Subscription', 'Storage account' and set the 'Retention (days)'\nc. If you set 'Stream to an event hub', select the 'Subscription', 'Event hub namespace', 'Event hub name' and set the 'Event hub policy name'\n8. Click on 'Save'. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(110,110) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists``` | GCP Firewall rule allows all traffic on POP3 port (110)
This policy identifies GCP Firewall rules which allow all inbound traffic on POP3 port (110). Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended that access to the POP3 port (110) be allowed only from specific IP addresses.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: If the Firewall rule reported indeed needs to restrict all traffic, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to the 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IP\n7. Click on 'SAVE'.. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-es-describe-elasticsearch-domain' AND json.rule = advancedSecurityOptions.enabled is false and advancedSecurityOptions.internalUserDatabaseEnabled is false``` | AWS OpenSearch Fine-grained access control is disabled
This policy identifies AWS OpenSearch domains which have fine-grained access control disabled. Fine-grained access control offers additional ways of controlling access to your data on the AWS OpenSearch Service. It is highly recommended to enable fine-grained access control to protect the data on your domain.
For more information, please follow the URL given below,
https://docs.aws.amazon.com/opensearch-service/latest/developerguide/fgac.html
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Refer the following URL for configuring Fine-grained access control on your AWS OpenSearch:\nhttps://docs.aws.amazon.com/opensearch-service/latest/developerguide/fgac.html#fgac-forget\n\nNotes: \n1. You can't enable fine-grained access control on existing domains, only new ones. After you enable fine-grained access control, you can't disable it.\n2. Fine-grained access control is supported only from ElasticSearch 6.7 or later. To upgrade older versions of AWS OpenSearch please refer to the URL given below,\nhttps://docs.aws.amazon.com/opensearch-service/latest/developerguide/version-migration.html. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-cognitive-services-account' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.privateEndpointConnections[*] is empty``` | Azure Cognitive Services account not configured with private endpoint
This policy identifies Azure Cognitive Services accounts that are not configured with a private endpoint. Private endpoints in Azure AI service resources allow clients on a virtual network to securely access data over Azure Private Link. Configuring a private endpoint enables access only to traffic coming from known networks and prevents access from malicious or unknown IP addresses, including IP addresses within Azure. It is recommended to create a private endpoint for secure communication for your Cognitive Services account.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to Azure portal\n2. Navigate to 'Azure AI services'\n3. Click on the reported Azure AI service\n4. Configure Private endpoint connections under 'Networking' from left panel. |
```config from cloud.resource where api.name = 'aws-apigateway-get-stages' AND json.rule = methodSettings.[*].loggingLevel does not exist or methodSettings.[*].loggingLevel equal ignore case off as X; config from cloud.resource where api.name = 'aws-apigateway-get-rest-apis' as Y; filter ' $.X.restApi equal ignore case $.Y.id '; show Y;``` | AWS API Gateway REST API execution logging disabled
This policy identifies AWS API Gateway REST APIs that have execution logging disabled in their stages.
AWS API Gateway REST API is a service for creating and managing RESTful APIs integrated with backend services like Lambda and HTTP endpoints. Execution logging sends all API activity logs to CloudWatch, which helps with incident response, security and compliance, troubleshooting, and monitoring.
It is recommended to enable logging on the API Gateway REST API to track API activity.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To enable execution logging on API Gateway Rest API, follow the below steps:\n\n1. Sign in to the AWS console. Navigate to the API Gateway dashboard\n2. Under the navigation page, select the 'APIs'\n3. Select the REST API reported; under the navigation page, select 'Stages'\n4. Select a stage and click on 'Edit' under the 'Logs and tracing' section\n5. Under the 'Edit logs and tracing' page, select a value other than 'Off' under the 'CloudWatch logs' dropdown.\n6. Click on 'Save'.\n7. Repeat this process for all the stages of the reported REST API.. |
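The same change can be made per stage with the AWS CLI; the REST API ID and stage name below are placeholders:

```bash
# Turn on execution logging (INFO level) for every method in the stage
aws apigateway update-stage \
  --rest-api-id a1b2c3d4e5 \
  --stage-name prod \
  --patch-operations 'op=replace,path=/*/*/logging/loglevel,value=INFO'
```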
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elbv2-describe-load-balancers' AND json.rule = "state.code contains active and ['attributes'].['access_logs.s3.enabled'] contains false"``` | AWS Elastic Load Balancer v2 (ELBv2) with access log disabled
This policy identifies Elastic Load Balancers v2 (ELBv2) which have access log disabled. Access logs capture detailed information about requests sent to your load balancer and each log contains information such as the time the request was received, the client's IP address, latencies, request paths, and server responses. You can use these access logs to analyze traffic patterns and troubleshoot issues.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to EC2 dashboard\n4. Click on 'Load Balancers' (Left Panel)\n5. Select the reported ELB\n6. Click on 'Actions' drop-down\n7. Click on 'Edit attributes'\n8. In the 'Edit load balancer attributes' popup box, Choose 'Enable' for 'Access logs' and configure S3 location where you want to store ELB logs.\n9. Click on 'Save'. |
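With the AWS CLI, access logging can be enabled per load balancer; the load balancer ARN and bucket name below are placeholders, and the S3 bucket must already exist with a policy that allows ELB log delivery:

```bash
# Enable access logs on an Application/Network Load Balancer (ELBv2)
aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/example-alb/0123456789abcdef \
  --attributes Key=access_logs.s3.enabled,Value=true \
               Key=access_logs.s3.bucket,Value=example-elb-logs-bucket \
               Key=access_logs.s3.prefix,Value=example-alb
```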
```config from cloud.resource where cloud.type = 'aws' and api.name='aws-iam-get-credential-report' AND json.rule='user does not equal "<root_account>" and password_enabled equals true and mfa_active is false'``` | AWS MFA not enabled for IAM users
This policy identifies AWS IAM users for whom MFA is not enabled. AWS Multi-Factor Authentication (MFA) is a simple best practice that adds an extra layer of protection on top of your user name and password. Multiple factors provide increased security for your AWS account settings and resources.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MFA'].
Mitigation of this issue can be done as follows: 1. Sign in to the AWS and navigate to the 'IAM' service.\n2. Navigate to the user that was reported in the alert.\n3. Under 'Security Credentials', check "Assigned MFA Device" and follow the instructions to enable MFA for the user.. |
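Assigning a virtual MFA device can also be scripted with the AWS CLI; the user name, device name, account ID, and the two authentication codes below are placeholders:

```bash
# Create a virtual MFA device and save its QR code seed locally
aws iam create-virtual-mfa-device \
  --virtual-mfa-device-name example-user-mfa \
  --outfile example-user-mfa-qr.png \
  --bootstrap-method QRCodePNG

# Enable the device for the user with two consecutive codes from the authenticator app
aws iam enable-mfa-device \
  --user-name example-user \
  --serial-number arn:aws:iam::123456789012:mfa/example-user-mfa \
  --authentication-code1 123456 \
  --authentication-code2 654321
```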
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-firewall-rules-list' AND json.rule = disabled is false and direction equals INGRESS and (sourceRanges[*] equals ::0 or sourceRanges[*] equals 0.0.0.0 or sourceRanges[*] equals 0.0.0.0/0 or sourceRanges[*] equals ::/0 or sourceRanges[*] equals ::) and allowed[?any(ports contains _Port.inRange(1521,1521) or (ports does not exist and (IPProtocol contains tcp or IPProtocol contains udp)))] exists``` | GCP Firewall rule allows all traffic on Oracle DB port (1521)
This policy identifies GCP Firewall rules which allow all inbound traffic on the Oracle DB port (1521). Allowing access from arbitrary IP addresses to this port increases the attack surface of your network. It is recommended that access to the DB port (1521) be allowed only from specific IP addresses.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: If the Firewall rule reported indeed needs to restrict all traffic, follow the instructions below:\n1. Login to GCP Console\n2. Go to 'VPC Network'\n3. Go to the 'Firewall'\n4. Click on the reported Firewall rule\n5. Click on 'EDIT'\n6. Modify Source IP ranges to specific IP\n7. Click on 'SAVE'.. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = 'legacyAbac.enabled equals true'``` | GCP Kubernetes Engine Clusters have Legacy Authorization enabled
This policy identifies GCP Kubernetes Engine Clusters which have the legacy authorizer enabled. The legacy authorizer in Kubernetes Engine grants broad and statically defined permissions to all cluster users. After the legacy authorizer setting is disabled, RBAC can limit permissions for authorized users based on need.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to Kubernetes Engine (Left Panel)\n3. Select Kubernetes clusters\n4. From the list of clusters, choose the reported cluster\n5. Under 'Security', click on edit button (Pencil Icon) for Legacy authorization\n6. Uncheck 'Enable legacy authorization' checkbox\n7. Click on Save Changes. |
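The equivalent gcloud command, with a placeholder cluster name and zone:

```bash
# Disable legacy ABAC (legacy authorization) on the cluster so RBAC governs access
gcloud container clusters update example-cluster \
  --zone us-central1-a \
  --no-enable-legacy-authorization
```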
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultRootObject is empty``` | AWS CloudFront distributions does not have a default root object configured
This policy identifies CloudFront distributions which do not have a default root object configured. If a CloudFront distribution does not have a default root object configured, requests for the root of your distribution pass to your origin server, which might return a list of the private contents of your origin. To avoid exposing the contents of your distribution or returning an error, it is recommended to specify a default root object.
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To configure a default root object for your distribution follow the steps mentioned in below URL:\nhttps://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DefaultRootObject.html#DefaultRootObjectHowToDefine. |
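As a scripted alternative to the documentation steps above, here is a minimal boto3 sketch that sets a default root object on an existing distribution. The distribution ID and object name are hypothetical placeholders.

```python
# Minimal boto3 sketch: set a default root object on a CloudFront distribution.
import boto3

cloudfront = boto3.client("cloudfront")
dist_id = "E1EXAMPLE"                      # hypothetical distribution ID

resp = cloudfront.get_distribution_config(Id=dist_id)
config = resp["DistributionConfig"]
config["DefaultRootObject"] = "index.html" # object served for requests to "/"

cloudfront.update_distribution(
    Id=dist_id,
    IfMatch=resp["ETag"],                  # ETag is required as an optimistic-locking token
    DistributionConfig=config,
)
```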
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = vulnerabilityAssessments[*].properties.storageContainerPath exists and vulnerabilityAssessments[*].properties.recurringScans.isEnabled is false``` | Azure SQL Server ADS Vulnerability Assessment Periodic recurring scans is disabled
This policy identifies Azure SQL Servers which have ADS Vulnerability Assessment 'Periodic recurring scans' disabled. Advanced Data Security - Vulnerability Assessment 'Periodic recurring scans' schedules periodic vulnerability scanning for the SQL server and its databases. It is recommended to enable ADS - VA Periodic recurring scans, which provide risk visibility based on updated known vulnerability signatures and best practices.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Select 'SQL servers', and select the SQL server you need to modify\n3. Click on 'Microsoft Defender for Cloud' under 'Security'\n4. Click on '(Configure)' next to 'Microsoft Defender for SQL: Enabled at the server-level'\n5. Set 'Periodic recurring scans' to 'ON' under 'VULNERABILITY ASSESSMENT SETTINGS'\n6. 'Save' your changes. |
```config from cloud.resource where api.name = 'aws-cloudfront-list-distributions' AND json.rule = webACLId is not empty as X; config from cloud.resource where api.name = 'aws-waf-v2-global-web-acl-resource' AND json.rule =(webACL.postProcessFirewallManagerRuleGroups.firewallManagerStatement.name does not contain AWSManagedRulesAnonymousIpList or webACL.postProcessFirewallManagerRuleGroups.firewallManagerStatement.name does not contain AWSManagedRulesKnownBadInputsRuleSet) and NOT ( webACL.rules[*].statement.managedRuleGroupStatement.name contains AWSManagedRulesAnonymousIpList and webACL.rules[*].statement.managedRuleGroupStatement.name contains AWSManagedRulesKnownBadInputsRuleSet ) as Y; filter '$.Y.webACL.arn equals $.X.webACLId'; show X;``` | cloneAWS CloudFront attached WAFv2 WebACL is not configured with AMR for Log4j Vulnerability
This policy identifies AWS CloudFront attached with WAFv2 WebACL which is not configured with AWS Managed Rules (AMR) for Log4j Vulnerability. As per the guidelines given by AWS, CloudFront attached with WAFv2 WebACL should be configured with AWS Managed Rules (AMR) AWSManagedRulesKnownBadInputsRuleSet and AWSManagedRulesAnonymousIpList to protect from Log4j Vulnerability (CVE-2021-44228).
For more information, refer to the URL below:
https://aws.amazon.com/security/security-bulletins/AWS-2021-006/
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Go to the CloudFront Distributions Dashboard\n3. Click on the reported web distribution\n4. On 'General' tab, Click on 'Edit' button under 'Settings'\n5. Note down the associated AWS WAF web ACL\n6. Go to the noted WAF web ACL in AWS WAF & Shield Service\n7. Under 'Rules' tab click on 'Add rules' and select 'Add managed rule groups'\n8. Under 'AWS managed rule groups' enable 'Anonymous IP list' and 'Known bad inputs'\n9. Click on 'Add rules'. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-rds-describe-db-instances' AND json.rule = 'backupRetentionPeriod equals 0 or backupRetentionPeriod does not exist'``` | AWS RDS instance without Automatic Backup setting
This policy identifies RDS instances for which the automatic backup setting is not enabled. If automatic backup is set, RDS creates a storage volume snapshot of your DB instance, backing up the entire DB instance and not just individual databases, which provides for point-in-time recovery. The automatic backup happens during the specified backup window and keeps the backups for a limited period of time, as defined in the retention period. It is recommended to set automatic backups for your critical RDS servers to help with the data restoration process.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to Amazon RDS console\n4. Choose Instances, and then select the reported DB instance\n5. On 'Instance Actions' drop-down list, choose 'Modify'\n6. In 'Backup' section,\na. From the 'Backup Retention Period' drop-down list, select the number of days you want RDS should retain automatic backups of this DB instance\nb. Choose 'Start Time' and 'Duration' in 'Backup window' which is the daily time range (in UTC) during which automated backups created\n7. Click on 'Continue'\n8. On the confirmation page, choose 'Modify DB Instance' to save your changes. |
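If you prefer to apply the same change programmatically, the following is a minimal boto3 sketch; the instance identifier, retention period, and backup window are hypothetical placeholders and should be chosen to fit your requirements.

```python
# Minimal boto3 sketch: enable automated backups by setting a non-zero retention period.
import boto3

rds = boto3.client("rds")

rds.modify_db_instance(
    DBInstanceIdentifier="my-db-instance",   # hypothetical instance name
    BackupRetentionPeriod=7,                 # days to keep automated backups
    PreferredBackupWindow="03:00-04:00",     # daily UTC window for the backup
    ApplyImmediately=True,                   # otherwise applied in the next maintenance window
)
```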
```config from cloud.resource where api.name = "aws-ec2-describe-instances" AND json.rule = architecture contains "foo"``` | API automation policy wvpvq
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = 'kind starts with app and config.http20Enabled equals false'``` | Azure App Service Web app doesn't use HTTP 2.0
HTTP 2.0 improves on the head-of-line blocking problem of older HTTP versions and adds header compression and prioritization of requests. HTTP 2.0 no longer supports HTTP 1.1's chunked transfer encoding mechanism, as it provides its own, more efficient, mechanisms for data streaming.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to App Services\n3. Click on the reported App\n4. Under Setting section, Click on 'Configuration'\n5. Under 'General Settings' tab, In 'Platform settings', Set 'HTTP version' to '2.0'. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-configservice-describe-configuration-recorders' AND json.rule = '(status.recording is true and status.lastStatus equals SUCCESS) and (recordingGroup.allSupported is false or recordingGroup.includeGlobalResourceTypes is false)'``` | AWS Config must record all possible resources
This policy identifies resources for which AWS Config recording is enabled but recording for all possible resources is disabled. AWS Config provides an inventory of your AWS resources and a history of configuration changes to these resources. You can use AWS Config to define rules that evaluate these configurations for compliance. Hence, it is important to enable this feature.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the AWS console and navigate to the 'Config' service\n2. Change to the respective region and in the navigation pane, click on 'Settings'\n3. Review the 'All resources' and Check the 2 options (3.a and 3.b)\n3.a Record all resources supported in this region\n3.b Include global resources (e.g., AWS IAM resources). |
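The same setting can be applied with a short script. The following boto3 sketch updates each existing configuration recorder to record all supported and global resource types; the recorder name and role ARN are read from the existing recorder, and the sketch assumes at least one recorder already exists.

```python
# Minimal boto3 sketch: make each AWS Config recorder record all supported and global resources.
import boto3

config = boto3.client("config")

recorders = config.describe_configuration_recorders()["ConfigurationRecorders"]
for rec in recorders:
    config.put_configuration_recorder(
        ConfigurationRecorder={
            "name": rec["name"],
            "roleARN": rec["roleARN"],
            "recordingGroup": {
                "allSupported": True,                 # record all supported resource types
                "includeGlobalResourceTypes": True,   # include IAM and other global resources
            },
        }
    )
```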
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-compute-instance' AND json.rule = agentConfig.isMonitoringDisabled is true``` | OCI Compute Instance has monitoring disabled
This policy identifies OCI Compute Instances that have monitoring disabled. It is recommended that Compute Instances be configured with monitoring enabled, following security best practices.
This is applicable to oci cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Under Resources, click Metrics.\n5. Click Enable monitoring. (If monitoring is not enabled (and the instance uses a supported image), then a button is available to enable monitoring.)\n\nFMI : https://docs.cloud.oracle.com/en-us/iaas/Content/Compute/Tasks/enablingmonitoring.htm#ExistingEnabling. |
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-activity-tracker-route' AND json.rule = rules[?any( (locations[*] equal ignore case "global") or (locations[*] equals "*") )] exists as X; count(X) less than 1``` | IBM Cloud Activity Tracker Event Routing is not configured to collect global events
This policy identifies IBM Cloud accounts which do not have at least one Activity Tracker event route defined to collect global event data. An Activity Tracker event route configured for global events collects all global services' event data and sends it to the configured target, which can be used for access pattern analysis from a security perspective. It is recommended to define at least one route with the location set to global.
This is applicable to ibm cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: To configure an Activity tracker route to collect global events, please follow the below URL. Please make sure to provide location value either as 'global' or '*' to make the route collect global service events.:\n\nhttps://cloud.ibm.com/docs/atracker?topic=atracker-route_v2&interface=cli#route-create-cli. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-redshift-describe-clusters' AND json.rule ='loggingStatus.loggingEnabled is false'``` | cloned copy - RLP-93423 - 1
Audit logging is not enabled by default in Amazon Redshift. When you enable logging on your cluster, Amazon Redshift creates and uploads logs to Amazon S3 that capture data from the creation of the cluster to the present time.
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to AWS Console.\n2. Goto Amazon Redshift service\n3. On left navigation panel, click on Clusters\n4. Click on the reported cluster\n5. Click on Database tab and choose 'Configure Audit Logging'\n6. On Enable Audit Logging, choose 'Yes'\n7. Create a new s3 bucket or use an existing bucket\n8. click Save. |
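For a scripted version of the console steps above, here is a minimal boto3 sketch that enables Redshift audit logging to an S3 bucket. The cluster name, bucket name, and prefix are hypothetical placeholders, and the bucket must already grant Redshift permission to write logs.

```python
# Minimal boto3 sketch: turn on Redshift audit logging to an S3 bucket.
import boto3

redshift = boto3.client("redshift")

redshift.enable_logging(
    ClusterIdentifier="my-redshift-cluster",   # hypothetical cluster name
    BucketName="my-audit-log-bucket",          # bucket must allow Redshift to write logs
    S3KeyPrefix="redshift-audit/",
)
```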
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-rds-describe-db-snapshots' AND json.rule = 'snapshot.status equals available and snapshot.encrypted is false'``` | AWS RDS DB snapshot is not encrypted
This policy identifies AWS RDS DB (Relational Database Service Database) snapshots which are not encrypted. It is highly recommended to implement encryption at rest when you are working with production data that contains sensitive information, to protect it from unauthorized access.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: You can encrypt a copy of an unencrypted snapshot. This way, you can quickly add encryption to a previously unencrypted DB instance.\nFollow below steps to encrypt a copy of an unencrypted snapshot:\n1. Log in to the AWS Console.\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated.\n3. Navigate to 'RDS' dashboard from 'Services' dropdown.\n4. Click on 'Snapshot' from left menu.\n5. Select the alerted snapshot\n6. From 'Action' dropdown, select 'Copy Snapshot'\n7. In 'Settings' section, from 'Destination Region' select a region,\n8. Provide an identifier for the new snapshot in field 'New DB Snapshot Identifier'\n9.In 'Encryption' section, select 'Enable Encryption'\n10. Select a master key for encryption from the dropdown 'Master key'.\n11. Click on 'Copy Snapshot'.\n\nThe source snapshot needs to be removed once the copy is available.\nNote: If you delete a source snapshot before the target snapshot becomes available, the snapshot copy may fail. Verify that the target snapshot has a status of AVAILABLE before you delete a source snapshot.. |
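The encrypted-copy step described above can also be done with a short script. This boto3 sketch copies an unencrypted snapshot into an encrypted one; the snapshot identifiers and KMS key alias are hypothetical placeholders.

```python
# Minimal boto3 sketch: create an encrypted copy of an unencrypted RDS DB snapshot.
import boto3

rds = boto3.client("rds")

rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="my-unencrypted-snapshot",
    TargetDBSnapshotIdentifier="my-encrypted-snapshot",
    KmsKeyId="alias/my-rds-key",   # supplying a KMS key makes the copied snapshot encrypted
)
# Delete the unencrypted source snapshot only after the copy reaches the 'available' status.
```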
```config from cloud.resource where api.name = 'aws-vpc-transit-gateway' AND json.rule = isShared is false and options.autoAcceptSharedAttachments exists and options.autoAcceptSharedAttachments equal ignore case "enable"``` | AWS Transit Gateway auto accept vpc attachment is enabled
This policy identifies if Transit Gateways are automatically accepting shared VPC attachments. When this feature is enabled, the Transit Gateway automatically accepts any VPC attachment requests from other AWS accounts without requiring explicit authorization or verification. This can be a security risk, as it may allow unauthorized VPC attachments to connect to the Transit Gateway. As per the best practices for authorization and authentication, it is recommended to turn off the AutoAcceptSharedAttachments feature.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To modify a transit gateway Auto accept shared attachments:\n\n 1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.\n 2. On the navigation pane, choose Transit Gateways.\n 3. Choose the transit gateway to modify.\n 4. Under the ‘Actions' dropdown, choose the ‘Modify transit gateway’ option.\n 5. On the 'Modify transit gateway' page, uncheck the 'Auto accept shared attachments' checkbox under the 'Configure cross-account sharing options' section.\n 6. Click 'Modify transit gateway' to update the changes.. |
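The console change above maps to a single API call. Here is a minimal boto3 sketch that disables automatic acceptance of shared VPC attachments; the transit gateway ID is a hypothetical placeholder.

```python
# Minimal boto3 sketch: require explicit acceptance of shared VPC attachments on a transit gateway.
import boto3

ec2 = boto3.client("ec2")

ec2.modify_transit_gateway(
    TransitGatewayId="tgw-0123456789abcdef0",            # hypothetical transit gateway ID
    Options={"AutoAcceptSharedAttachments": "disable"},   # attachments must now be accepted manually
)
```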
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-acm-describe-certificate' AND json.rule = 'domainValidationOptions[*].domainName contains *'``` | AWS ACM Certificate with wildcard domain name
This policy identifies ACM Certificates which are using wildcard certificates for wildcard domain name instead of single domain name certificates. ACM allows you to use an asterisk (*) in the domain name to create an ACM Certificate containing a wildcard name that can protect several sites in the same domain. For example, a wildcard certificate issued for *.prismacloud.io can match both www.prismacloud.io and images.prismacloud.io. When you use wildcard certificates, if the private key of a certificate is compromised, then all domain and subdomains that use the compromised certificate are potentially impacted. So it is recommended to use single domain name certificates instead of wildcard certificates to reduce the associated risks with a compromised domain or subdomain.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION'].
Mitigation of this issue can be done as follows: To resolve this alert, you have to replace the reported wildcard certificate with single domain name certificate for all the first-level subdomains resulted from the domain name of the website secured with the wildcard certificate and delete the reported wildcard domain certificate.\n\nTo create a new certificate with a single domain:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to Certificate Manager\n4. In 'Request a certificate' page,\na. On Step 1: 'Add domain names' page, in the 'Domain name' box, type the fully qualified domain name. Click on 'Next'\nb. On Step 2: 'Select validation method' page, Select the validation method. Click on 'Review'\nc. On Step 3: 'Review' page, review the domain name and validation method details. click on 'Confirm'\nd. On Step 4: 'Validation' page, validate the certificate request based on the validation method selected. then click on 'Continue'\nThe certificate status should change from 'Pending validation' to 'Issued'. Now access your application's web server configuration and replace the wildcard certificate with the newly issued single domain name certificate.\n\nTo delete wildcard certificate:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Go to Certificate Manager(ACM) service\n4. Choose the reported certificate\n5. Under 'Actions' drop-down click on 'Delete'\n6. On 'Delete certificate' popup windows, Click on 'Delete' button. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = status equals "RUNNING" and nodePools[?any(config.bootDiskKmsKey does not exist)] exists``` | GCP GKE cluster node boot disk not encrypted with CMEK
This policy identifies GCP GKE clusters that do not have their node boot disk encrypted with CMEK.
The GKE node boot disk is the persistent disk that houses the Kubernetes node file system. By default, this disk is encrypted with a GCP-managed key, but users can specify a customer-managed encryption key to get enhanced security, control over the encryption key, and compliance with any regulatory requirements.
It is recommended to use CMEK to encrypt the boot disk of GKE cluster nodes as it gives you full control over the encrypted data.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: The KMS key used for node boot disk encryption for existing GKE clusters/cluster nodes cannot be changed. \n\nFor standard clusters:\nEither create a new standard cluster with node boot disk encryption using CMEK or add new node pools with disk encryption using CMEK to an existing standard cluster while removing older node pools which do not have node boot disk CMEK configured. To encrypt GKE standard cluster node boot disks using CMEK, please refer to the URLs given below:\nhttps://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek#boot-disks\n\nFor autopilot clusters:\nAutopilot cluster node boot disk encryption cannot be updated for existing autopilot clusters. To create a new autopilot cluster with CMEK protected node boot disk, please refer to the URLs given below:\nhttps://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek#create_a_cluster_with_a_cmek-protected_node_boot_disk. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-monitor-log-profiles-list' AND json.rule = 'isLegacy is true and (properties.categories[*] does not contain Write or properties.categories[*] does not contain Delete or properties.categories[*] does not contain Action)'``` | Azure Monitor log profile does not capture all activities
This policy identifies the Monitor log profiles which are not configured to capture all activities. A log profile controls how the activity log is exported. Configuring the log profile to collect logs for the categories 'Write', 'Delete' and 'Action' ensures that all the control/management plane activities performed on the subscription are exported.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: On the Azure Portal, there is no provision to check or set categories. However, when a log profile is created using the Azure Portal, Write, Delete and Action categories are set by default.\n\nLog profile activities can be set only through CLI using REST API and CLI is:\n1. To list the Log profile run,\naz monitor log-profiles list\n\n2. Note the name of reported log profile and replace it with <LOG_PROFILE_NAME> in below command:\naz account get-access-token --query "{subscription:subscription,accessToken:accessToken}" --out tsv | xargs -L1 bash -c 'curl -X GET -H "Authorization: Bearer $1" -H "Content-Type: application/json" https://management.azure.com/subscriptions/$0/providers/microsoft.insights/logprofiles/<LOG_PROFILE_NAME>?api-version=2016-03-01' | jq\nCopy the JSON output and save it as 'input.json' file.\n\n3. Edit the saved 'input.json' file to add all activities 'Write', 'Delete' and 'Action' in categories JSON array section.\n\n4. Run below command taking 'input.json' as input file,\naz account get-access-token --query "{subscription:subscription,accessToken:accessToken}" --out tsv | xargs -L1 bash -c 'curl -X PUT -H "Authorization: Bearer $1" -H "Content-Type: application/json" https://management.azure.com/subscriptions/$0/providers/microsoft.insights/logprofiles/<LOG_PROFILE_NAME>?api-version=2016-03-01 -d@"input.json"'\n\nNOTE: To run all above CLIs you have to be configured with Azure subscription and accessToken locally. And these CLI commands require 'microsoft.insights/logprofiles/*' permission.. |
```config from cloud.resource where cloud.type = 'aws' and api.name = 'aws-iam-get-policy-version' AND json.rule = isAttached is true and document.Statement[?any(Effect equals Allow and Action contains sts:AssumeRole and Resource anyStartWith * and Condition does not exist)] exists and policyArn does not contain iam::aws``` | AWS IAM policy allows assume role permission across all services
This policy identifies AWS IAM policies which allow assume role permission across all services. Typically, AssumeRole is used when you have multiple accounts and need to access resources from each of them: you create long-term credentials in one account and then use temporary security credentials, obtained by assuming roles in the other accounts, to access them.
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION'].
Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the 'IAM' service.\n3. Identify the reported policy\n4. Change the Service element of the policy document to be more restrictive so that it only allows AssumeRole permission on select services.. |
```config from cloud.resource where cloud.type ='aws' and api.name = 'aws-rds-describe-db-snapshots' AND json.rule = "attributes[?(@.attributeName=='restore')].attributeValues[*] size != 0 and _AWSCloudAccount.isRedLockMonitored(attributes[?(@.attributeName=='restore')].attributeValues) is false"``` | AWS RDS Snapshot with access for unmonitored cloud accounts
This policy identifies RDS snapshots with access for unmonitored cloud accounts. These RDS snapshots have read/write permissions opened up to cloud accounts which are NOT part of the cloud accounts monitored by Prisma Cloud. Accounts with read/write privileges should be reviewed to confirm that they are valid accounts of your organization (or authorized by your organization), even though they are not under active Prisma Cloud monitoring.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign into the AWS console.\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to the RDS service.\n4. Select the identified 'RDS Snapshot' under the 'Snapshots' in the left hand menu.\n5. Under the tab 'Snapshot Actions', selection the option 'Share Snapshot'.\n6. Review and delete the AWS Accounts which should not have read access.. |
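To review and revoke snapshot sharing from a script instead of the console, the following boto3 sketch lists the accounts with restore access and removes one of them. The snapshot identifier and the account ID to revoke are hypothetical placeholders.

```python
# Minimal boto3 sketch: review and revoke RDS snapshot restore access for an unrecognized account.
import boto3

rds = boto3.client("rds")
snapshot_id = "my-db-snapshot"                 # hypothetical snapshot identifier

attrs = rds.describe_db_snapshot_attributes(DBSnapshotIdentifier=snapshot_id)
for attr in attrs["DBSnapshotAttributesResult"]["DBSnapshotAttributes"]:
    if attr["AttributeName"] == "restore":
        print("Accounts with restore access:", attr["AttributeValues"])

rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier=snapshot_id,
    AttributeName="restore",
    ValuesToRemove=["111122223333"],           # placeholder account ID to revoke
)
```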
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-disk-list' AND json.rule = 'osType does not exist and managedBy exists and (encryptionSettings does not exist or encryptionSettings.enabled is false) and encryption.type is not member of ("EncryptionAtRestWithCustomerKey", "EncryptionAtRestWithPlatformAndCustomerKeys")'``` | Azure VM data disk is encrypted with the default encryption key instead of ADE/CMK
This policy identifies data disks which are encrypted with the default encryption key instead of ADE/CMK. Azure encrypts data disks by default using Server-Side Encryption (SSE) with platform-managed keys [SSE with PMK]. It is recommended to use either SSE with Azure Disk Encryption [SSE with PMK+ADE] or a Customer Managed Key [SSE with CMK], which improves on platform-managed keys by giving you control of the encryption keys to meet your compliance needs.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: To enable SSE with Azure Disk Encryption [SSE with PMK+ADE],\nFollow https://docs.microsoft.com/en-us/azure/security/azure-security-disk-encryption-prerequisites based VM the data disk is assigned.\n\nTo enable SSE with Customer Managed Key [SSE with CMK],\nFollow https://docs.microsoft.com/en-us/azure/virtual-machines/disks-enable-customer-managed-keys-portal. |
```config from cloud.resource where api.name = 'azure-storage-account-list' AND json.rule = properties.encryption.keySource equal ignore case "Microsoft.Keyvault" as X; config from cloud.resource where api.name = 'azure-key-vault-list' and json.rule = properties.enableRbacAuthorization is false and properties.accessPolicies[*].permissions exists and (properties.accessPolicies[*].permissions.keys[*] intersects ('Decrypt', 'Encrypt', 'Release', 'Purge', 'all') or properties.accessPolicies[*].permissions.secrets[*] intersects ('Purge', 'all') or properties.accessPolicies[*].permissions.certificates[*] intersects ('Purge', 'all')) as Y; filter '$.Y.properties.vaultUri contains $.X.properties.encryption.keyvaultproperties.keyvaulturi'; show X;``` | Azure Storage account encryption key configured by access policy with privileged operations
This policy identifies Azure Storage accounts which are encrypted by an encryption key whose access policy is configured with privileged operations. Encryption keys should be kept confidential and be accessible only to authorized entities with limited operation access. Allowing privileged access to an encryption key also allows altering/deleting the data that is encrypted by it, making the data more easily accessible. It is recommended to have restricted access policies on an encryption key so that only authorized entities can access it with limited operation access.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Navigate to your Storage account and display the Encryption settings\n3. Keep note of the Key vault and Key used\n4. Navigate to the Key Vault resource noted\n5. Select Access policies, select the key noted\n6. Click on Edit and make sure only required permissions are checked instead of Select all and only required operations are selected instead of privileged operations as per business requirement.. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = nodePools[?any(management.autoUpgrade does not exist or management.autoUpgrade is false)] exists``` | GCP Kubernetes cluster node auto-upgrade configuration disabled
This policy identifies GCP Kubernetes cluster nodes with auto-upgrade configuration disabled. Node auto-upgrades help you keep the nodes in your cluster up to date with the cluster master version when your master is updated on your behalf. When you create a new cluster using Google Cloud Platform Console, node auto-upgrade is enabled by default.
FMI: https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-upgrades
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Google cloud console\n2. Navigate to Google Kubernetes Engine, click on 'Clusters' to get the list\n3. Click on the alerted cluster and go to section 'Node pools'\n4. Click on a node pool to ensure 'Auto-upgrade' is enabled in the 'Management' section\n5. To modify click on the 'Edit' button at the top\n6. To enable the configuration click on the check box against 'Enable auto-upgrade'\n7. Click on 'Save'\n8. Repeat Step 4-7 for each node pool associated with the reported cluster. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-bigquery-dataset-list' AND json.rule = iamPolicy.bindings[?any(members[*] equals "allUsers" or members[*] equals "allAuthenticatedUsers")] exists``` | GCP BigQuery dataset is publicly accessible
This policy identifies BigQuery datasets that are anonymously or publicly accessible. Granting permissions to allUsers or allAuthenticatedUsers allows anyone to access the dataset. Such access might not be desirable if sensitive data is being stored in the dataset. So it is recommended to not allow anonymous and/or public access to BigQuery datasets.
This is applicable to gcp cloud and is considered a high severity issue.
Sample categories of findings relevant here are ['INTERNET_EXPOSURE'].
Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate to service 'BigQuery'(Left Panel)\n3. Under the 'Explorer' section, search for the reported BigQuery dataset and select 'Open' from the kebab menu\n4. Click on dropdown 'SHARING' and select 'Permissions'\n5. In 'Filter', search for 'allUsers' or 'allAuthenticatedUsers', review each attached role and click the delete icon\n6. On the popup 'Remove role from principal?', select the checkbox and click on 'REMOVE'. |
```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains "eventName=" or $.X.filterPattern contains "eventName =") and ($.X.filterPattern does not contain "eventName!=" and $.X.filterPattern does not contain "eventName !=") and $.X.filterPattern contains "ConsoleLogin" and ($.X.filterPattern contains "MFAUsed !=" or $.X.filterPattern contains "MFAUsed!=") and $.X.filterPattern contains "Yes" and ($.X.filterPattern contains "userIdentity.type =" or $.X.filterPattern contains "userIdentity.type=") and $.X.filterPattern contains "IAMUser" and ($.X.filterPattern contains "responseElements.ConsoleLogin =" or $.X.filterPattern contains "responseElements.ConsoleLogin=") and $.X.filterPattern contains "Success") and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1``` | AWS Log metric filter and alarm does not exist for management console sign-in without MFA
This policy identifies the AWS regions that do not have a log metric filter and alarm for management console sign-in without MFA.
A log metric filter in AWS CloudWatch scans log data for specific patterns and generates metrics based on those patterns. Unauthorized access attempts may go undetected without a log metric filter and alarm for console sign-ins without MFA. This increases the risk of account compromise and potential data breaches due to inadequate security monitoring.
It is recommended that a metric filter and alarm be established for management console sign-in without MFA to increase visibility into accounts that are not protected by MFA.
NOTE: This policy will trigger an alert if you have at least one CloudTrail trail that is multi-region enabled, logs all management events in your account, and is not set with the specific log metric filter and alarm.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign in to the AWS Console.\n2. Navigate to the CloudWatch dashboard.\n3. Click on 'Log groups' in the 'Logs' section (Left panel)\n4. Select the log group created for your CloudTrail trail event logs (Cloudtrail should be multi-trail enabled with all Management Events captured) and click the Actions Dropdown Button -> Click 'Create Metric Filter' button.\n5. In the 'Define Pattern' page, add the 'Filter pattern' value as\n\n{ ($.eventName = "ConsoleLogin") && ($.additionalEventData.MFAUsed != "Yes") && ($.userIdentity.type = "IAMUser") && ($.responseElements.ConsoleLogin = "Success") }\n\nand Click on 'NEXT'.\n6. In the 'Assign Metric' page, Choose Filter Name, and Metric Details parameter according to your requirement and click on 'Next'.\n7. Under the ‘Review and Create' page, Review details and click 'Create Metric Filter’.\n8. To create an alarm based on a log group-metric filter, Refer to the below link \n https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Create_alarm_log_group_metric_filter.html. |
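Steps 4–8 above can also be automated. The following boto3 sketch creates the metric filter and a matching alarm; the log group name, metric namespace, and SNS topic ARN are hypothetical placeholders to be replaced with your own values.

```python
# Minimal boto3 sketch: metric filter and alarm for console sign-ins without MFA.
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

pattern = ('{ ($.eventName = "ConsoleLogin") && ($.additionalEventData.MFAUsed != "Yes") '
           '&& ($.userIdentity.type = "IAMUser") && ($.responseElements.ConsoleLogin = "Success") }')

logs.put_metric_filter(
    logGroupName="CloudTrail/DefaultLogGroup",        # hypothetical CloudTrail log group
    filterName="ConsoleSigninWithoutMFA",
    filterPattern=pattern,
    metricTransformations=[{
        "metricName": "ConsoleSigninWithoutMFACount",
        "metricNamespace": "CISBenchmark",            # placeholder namespace
        "metricValue": "1",
    }],
)

cloudwatch.put_metric_alarm(
    AlarmName="ConsoleSigninWithoutMFA",
    MetricName="ConsoleSigninWithoutMFACount",
    Namespace="CISBenchmark",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:security-alerts"],  # placeholder SNS topic ARN
)
```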
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-vertex-ai-workbench-instance' AND json.rule = state equals "ACTIVE" AND gceSetup.metadata.notebook-disable-root is false``` | GCP Vertex AI Workbench Instance has root access enabled
This policy identifies GCP Vertex AI Workbench Instances that have root access enabled.
Enabling root access on a GCP Vertex AI Workbench instance increases the risk of unauthorized system changes, privilege escalation, and data exposure. It can also make the instance more vulnerable to attacks if not properly secured. Limiting root access and applying strict access controls are essential to reduce these risks.
It is recommended to disable root access for GCP Vertex AI Workbench Instances.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Navigate to the 'Vertex AI' service\n3. In side panel, under 'Notebooks', go to 'Workbench'\n4. Under 'INSTANCES' tab, select 'VIEW' as 'INSTANCES'\n5. Click on the alerting instance\n6. Go to the 'SOFTWARE AND SECURITY' tab\n7. Under 'Modify software and security configuration', disable (uncheck) 'Root access to the instance' checkbox\n8. At the bottom of the page, click 'SUBMIT'. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-redshift-describe-clusters' AND json.rule = publiclyAccessible is true``` | AWS Redshift cluster instance with public access setting enabled
This policy identifies AWS Redshift clusters with public access setting enabled.
AWS Redshift, a managed data warehousing service, typically stores sensitive and critical data. Allowing public access increases the risk of unauthorized access, data breaches, and potential malicious activities.
As a recommended security measure, it is advised to disable public access for the Redshift cluster.
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To modify the publicly accessible setting of the Redshift cluster,\n1. Sign in to the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Navigate to the 'Redshift' service.\n4. Click on the checkbox for the identified Redshift cluster name.\n5. In the top menu options, click on 'Actions' and select 'Modify publicly accessible setting' as the option.\n6. Uncheck the checkbox 'Turn on Publicly accessible' in the 'Publicly accessible' section and click on 'Save Changes' button.. |
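The same change can be made with a single API call. Here is a minimal boto3 sketch; the cluster identifier is a hypothetical placeholder.

```python
# Minimal boto3 sketch: turn off public accessibility for a Redshift cluster.
import boto3

redshift = boto3.client("redshift")

redshift.modify_cluster(
    ClusterIdentifier="my-redshift-cluster",   # hypothetical cluster name
    PubliclyAccessible=False,                  # cluster is reachable only from within the VPC
)
```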
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-server' AND json.rule = "configurations.value[?(@.name=='log_duration')].properties.value equals OFF or configurations.value[?(@.name=='log_duration')].properties.value equals off"``` | Azure PostgreSQL database server with log duration parameter disabled
This policy identifies PostgreSQL database servers for which the log_duration server parameter is not enabled. Enabling log_duration makes the PostgreSQL database log the duration of each completed SQL statement, which in turn generates query and error logs. Query and error logs can be used to identify, troubleshoot, and repair configuration errors and sub-optimal performance.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to Azure console.\n2. Navigate to 'Azure Database for PostgreSQL servers' dashboard\n3. Click on the alerted database name\n4. Go to 'Server parameters' under 'Settings' block\n5. From the list of parameters find 'log_duration' and set it to on\n6. Click on 'Save' button from top menu to save the change.. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-vm-list' AND json.rule = ['properties.storageProfile'].osDisk.vhd.uri exists as X; config from cloud.resource where api.name = 'azure-storage-account-list' AND json.rule = properties.encryption.keySource equals "Microsoft.Storage" as Y; filter "$.['X'].['properties.storageProfile'].['osDisk'].['vhd'].['uri'] contains $.Y.name"; show Y;``` | Azure Storage account containing VHD OS disk is not encrypted with CMK
This policy identifies Azure Storage accounts containing VHD OS disks which are not encrypted with CMK. VHDs attached to Virtual Machines are stored in Azure Storage. By default, an Azure Storage account is encrypted using Microsoft Managed Keys. It is recommended to use Customer Managed Keys to encrypt data in Azure Storage accounts for better control over the data.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to Storage accounts dashboard and Click on the reported storage account\n3. Under the Settings menu, click on Encryption\n4. Select Customer Managed Keys\n- Choose 'Enter key URI' and Enter 'Key URI'\nOR\n- Choose 'Select from Key Vault', Enter 'Key Vault' and 'Encryption Key'\n5. Click on 'Save'. |
```config from cloud.resource where api.name = 'gcloud-logging-sinks-list' AND json.rule = 'destination.bucket exists' as X; config from cloud.resource where api.name = 'gcloud-storage-buckets-list' AND json.rule = (retentionPolicy does not exist ) as Y; filter '($.X.destination.bucket contains $.Y.name)'; show Y;``` | GCP Log bucket retention policy not enabled
This policy identifies GCP log buckets for which retention policy is not enabled. Enabling retention policies on log buckets will protect logs stored in cloud storage buckets from being overwritten or accidentally deleted. It is recommended to configure a data retention policy for these cloud storage buckets to store the activity logs for forensics and security investigations.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Google Cloud console\n2. Navigate to section 'Browser', under 'Storage'\n3. Select the alerted log bucket\n4. In the 'RETENTION' tab, click on '+SET RETENTION POLICY' to set a retention policy\n5. Set a value for 'Retention period' in the pop-up 'Set a retention policy'\n6. Click on 'SAVE'.. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = ((((publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false) or (publicAccessBlockConfiguration.ignorePublicAcls is false and accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false)) and acl.grantsAsList[?any(grantee equals AllUsers and permission is member of (ReadAcp,Read,FullControl))] exists) or ((policyStatus.isPublic is true and ((publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration does not exist) or (publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false) or (publicAccessBlockConfiguration.restrictPublicBuckets is false and accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false))) and (policy.Statement[?any(Effect equals Allow and (Principal equals * or Principal.AWS equals *) and (Action contains s3:* or Action contains s3:Get or Action contains s3:List) and (Condition does not exist))] exists))) and websiteConfiguration does not exist``` | Low of AWS S3 bucket publicly readable
This policy identifies S3 buckets that are publicly readable via Get/Read/List bucket operations. These permissions allow anyone, malicious or not, to perform Get/Read/List operations on your S3 bucket if they can guess the namespace. The S3 service does not protect the namespace if ACLs and the bucket policy are not handled properly; with this configuration you risk compromising critical data by leaving the S3 bucket public.
For more details:
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/set-permissions.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/about-object-ownership.html#ensure-object-ownership
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the S3 resource reported in the alert\n4. Click on the 'Permissions' tab\n5. If Access Control List is set to 'Public' follow the below steps\na. Under 'Access Control List', Click on 'Everyone' and uncheck all items\nb. Click on Save changes\n6. If 'Bucket Policy' is set to public follow the below steps\na. Under 'Bucket Policy', Select 'Edit Bucket Policy' and consider defining what explicit 'Principal' should have the ability to GET/LIST objects in your S3 bucket. You may also want to specifically limit the 'Principal' ability to perform specific GET/LIST functions, without the wild card.\nIf 'Bucket Policy' is not required delete the existing 'Bucket Policy'.\nb. Click on Save changes\n\nNote: Make sure updating 'Access Control List' or 'Bucket Policy' does not affect S3 bucket data access.. |
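If, after reviewing access requirements, you want to enforce the block at the bucket level from a script, the following boto3 sketch applies the S3 public access block settings. The bucket name is a hypothetical placeholder; confirm nothing legitimate depends on public access before applying it.

```python
# Minimal boto3 sketch: block public ACLs and public bucket policies on an S3 bucket.
import boto3

s3 = boto3.client("s3")

s3.put_public_access_block(
    Bucket="my-example-bucket",                 # hypothetical bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```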
```config from cloud.resource where api.name = 'gcloud-access-approval-project-approval-setting' AND json.rule = enrolledServices[*].cloudProduct does not equal "all"``` | GCP Cloud 'Access Approval' is not enabled
This is applicable to gcp cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where api.name = 'aws-glue-job' AND json.rule = Command.BucketName exists and Command.BucketName contains "aws-glue-assets-" as X; config from cloud.resource where api.name = 'aws-s3api-get-bucket-acl' AND json.rule = bucketName contains "aws-glue-assets-" as Y; filter 'not ($.X.Command.BucketName equals $.Y.bucketName)' ; show X;``` | aws glue shadow
sdcsc
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any(access equals Allow and direction equals Inbound and (sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and (destinationPortRange contains _Port.inRange(3389,3389) or destinationPortRanges[*] contains _Port.inRange(3389,3389) ))] exists``` | Azure Network Security Group allows all traffic on RDP Port 3389
This policy identifies any NSG rule that allows all traffic on RDP port 3389. Review your list of NSG rules to ensure that your resources are not exposed. As a best practice, restrict RDP solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.. |
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-object-storage-bucket' AND json.rule = activity_tracking does not exist or activity_tracking.write_data_events does not equal ignore case "true" or activity_tracking.read_data_events does not equal ignore case "true"``` | IBM Cloud Object Storage bucket is not enabled with IBM Activity Tracker
This policy identifies IBM Cloud Object Storage buckets which have Activity Tracker disabled or not enabled properly. The IBM Cloud Activity Tracker service records user-initiated activities that change the state of a service in IBM Cloud. You can use this service to investigate abnormal activity and critical actions, and to comply with regulatory audit requirements. In addition, you can be alerted about actions as they happen. So, it is recommended to enable Activity tracker to log all read/write data and management events on a bucket.
This is applicable to ibm cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: To configure Activity Tracker on a Cloud Object Storage bucket, please follow the below URL.\nPlease make sure to select 'Track data events' checkbox and select 'read & write' option \nfrom the Activity Tracker dropdown:\nhttps://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-at#at-console-enable. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-cloud-function' AND json.rule = status equals ACTIVE and vpcConnector does not exist``` | GCP Cloud Function not enabled with VPC connector
This policy identifies GCP Cloud Functions that are not configured with a VPC connector. A VPC connector lets the function connect to resources inside a VPC in the same project. Setting up the VPC connector allows you to set up a secure perimeter to guard against data exfiltration and prevents functions from accidentally sending any data to unwanted destinations. It is recommended to configure the GCP Cloud Function with a VPC connector.
Note: For the Cloud Functions function to access the public traffic with Serverless VPC connector, you have to introduce Cloud NAT.
Link: https://cloud.google.com/functions/docs/networking/network-settings#route-egress-to-vpc
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to GCP console\n2. Navigate to 'Cloud Functions' service (Left Panel)\n3. Click on the alerting function\n4. Click on 'EDIT'\n5. Click on 'Runtime, build, connections and security settings’ drop-down to get the detailed view\n6. Click on the 'CONNECTIONS' tab\n7. Under Section 'Egress settings', select a VPC connector from the dropdown\n8. In case VPC connector is not available, select 'Custom' and\n9. Click on 'Create a Serverless VPC Connector', follow the link to create a Serverless VPC connector: https://cloud.google.com/vpc/docs/configure-serverless-vpc-access\n10. Once the Serverless VPC connector is available, select it from the dropdown\n11. Click on 'NEXT'\n12. Click on 'DEPLOY'. |
```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-container-registry' AND json.rule = properties.provisioningState equal ignore case Succeeded and tokens[?any( properties.status contains enabled )] exists``` | Azure Container Registry with repository scoped access token enabled
This policy identifies Azure Container Registries having repository scoped access tokens enabled.
Disable repository-scoped access tokens for your registry to prevent access via tokens. Enhancing security involves disabling local authentication methods, including admin user, repository-scoped access tokens, and anonymous pull. This ensures that container registries rely solely on Microsoft Entra ID identities for authentication.
As a security best practice, it is recommended to disable repository scoped access token for Azure Container Registries.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to your Azure portal \n2. Navigate to 'Container registries' \n3. Select the reported Container Registry \n4. Under 'Repository permissions' select 'Tokens'\n5. Click on the active token and make it inactive by unchecking the 'Active status'\n6. Click on 'Save'\n7. Repeat step 5 & 6 for all the active tokens. |
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-networking-nsg' AND json.rule = (securityRules[?any((((*.destinationPortRange.min == 3389 or *.destinationPortRange.max == 3389) or (*.destinationPortRange.min < 3389 and *.destinationPortRange.max > 3389)) or (protocol equals "all") or ((tcpOptions does not exist) and (udpOptions does not exist) and (protocol does not equal 1))) and (source equals 0.0.0.0/0 and direction equals INGRESS))] exists)``` | OCI Network Security Group allows all traffic on RDP port (3389)
This policy identifies OCI Security groups that allow unrestricted ingress access to port 3389. It is recommended that no security group allows unrestricted ingress access to port 3389. As a best practice, remove unfettered connectivity to remote console services, such as Remote Desktop Protocol (RDP), to reduce server's exposure to risk.
This is applicable to oci cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Under Resources, click Security Rules\n5. If you want to add a rule, click Add Rules\n6. If you want to delete an existing rule, click the Actions icon (three dots), and then click Remove.\n7. If you wanted to edit an existing rule, click the Actions icon (three dots), and then click Edit.. |
```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case "access" and roles[?any( role_id contains "crn:v1:bluemix:public:iam::::role:Administrator" )] exists and resources[?any(tags does not exist and attributes[?any( value equal ignore case "service" and name equal ignore case "serviceType" and operator is member of ("stringEquals", "stringMatch"))] exists and attributes[?any( name equal ignore case "region")] does not exist )] exists and subjects[?any( attributes[?any( name contains "access_group_id")] exists )] exists as X; config from cloud.resource where api.name = 'ibm-iam-access-group-member' as Y; config from cloud.resource where api.name = 'ibm-iam-access-group' as Z; filter '$.X.subjects[*].attributes[*].value contains $.Y.access_group.id and $.Y.access_group.id equal ignore case $.Z.id'; show Z;``` | IBM Cloud Access group with members having administrator role permission for All Identity and Access enabled services
This policy identifies IBM Cloud access groups that have a policy granting the administrator role across all Identity and Access enabled services and that contain users, service IDs, or trusted profiles. This allows all members of the group to have administrative privileges. As a security best practice, it is recommended to grant least-privileged access, such as granting only the permissions required to perform a task, instead of providing excessive permissions.
This is applicable to ibm cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', click on 'Access groups' in the left panel.\n3. Select the access group which is reported in the alert.\n4. Review and remove Users/Service IDs/Trusted profiles from the access group.\nRefer below link for removing the Member from the access group:\nhttps://cloud.ibm.com/docs/account?topic=account-assign-access-resources&interface=ui#removing-access-console\nOR\nTo remove the overly permissible policy from the access group:\n1. Go to 'Access' tab and click on three dots on the right corner of a row for the policy which is having administrator permission on 'All Identity and Access enabled services'.\n2. Click on Remove OR Edit to assign limited permission to the policy.\n3. Review the policy details that you're about to Edit/Remove, and confirm by clicking Save/Remove.. |
```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-active-directory-authorization-policy' AND json.rule = not (allowInvitesFrom equal ignore case adminsAndGuestInviters OR allowInvitesFrom equal ignore case none)``` | Azure Guest User Invite not restricted to users with specific admin role
This policy identifies instances in the Microsoft Entra ID configuration where guest user invitations are not restricted to specific administrative roles.
Allowing anyone in the organization, including guests and non-admins, to invite guest users can lead to unauthorized access and potential data breaches. This unrestricted access poses a significant security risk.
As a best practice, it is recommended to configure guest user invites to specific admin roles. This will ensure that only authorized personnel can invite guests, maintaining tighter control over access to cloud resources.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal and search for 'Microsoft Entra ID'\n2. Select 'Microsoft Entra ID'\n3. Under 'Manage' select 'External Identities'\n4. Select 'External collaboration settings'\n5. Under 'Guest invite settings' section, select 'Only users assigned to specific admin roles can invite guest users'\n6. Select 'Save'. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-vm-list' AND json.rule = powerState equal ignore case "PowerState/running" and (['properties.osProfile'].['linuxConfiguration'] exists and ['properties.osProfile'].['linuxConfiguration'].['disablePasswordAuthentication'] is false)``` | Azure Virtual Machine (Linux) does not authenticate using SSH keys
This policy identifies Azure Virtual Machines that use basic (password) authentication instead of SSH keys. Azure Virtual Machines with basic authentication could allow attackers to brute force and gain unauthorized access, which might lead to potential data leaks. It is recommended to use SSH keys for authentication to avoid brute-force attacks on virtual machines.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To configure existing Azure Virtual machine with SSH key authentication, Follow below URL:\nhttps://learn.microsoft.com/en-us/azure/virtual-machines/extensions/vmaccess#update-ssh-key\n\nIf changes are not reflecting you may need to take backup, You may need to create new virtual machine with SSH key based authentication and delete the reported virtual machine.. |
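For an existing VM, the Azure CLI can push an SSH public key for a given user; fully disabling password authentication may still require updating the OS configuration or recreating the VM. A minimal sketch, assuming resource group `my-rg`, VM `my-vm`, and user `azureuser`:

```bash
# Sketch: add an SSH public key to an existing Linux VM (assumed names)
az vm user update \
  --resource-group my-rg \
  --name my-vm \
  --username azureuser \
  --ssh-key-value "$(cat ~/.ssh/id_rsa.pub)"
```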
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultRootObject is empty``` | dnd_test_add_remove_child_policy_hyperion_policy_ss_finding_2
Description-e736aef6-4ad4-4324-9b5b-75dd70620202
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['SSH_BRUTE_FORCE'].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where api.name = 'alibaba-cloud-action-trail' as X; config from cloud.resource where api.name = 'alibaba-cloud-oss-bucket-info' as Y; filter '$.X.isLogging is true and $.X.ossBucketName equals $.Y.bucket.name and $.Y.cannedACL does not contain Private'; show Y;``` | Alibaba Cloud ActionTrail log OSS bucket is publicly accessible
This policy identifies Object Storage Service (OSS) buckets which are publicly accessible and store ActionTrail log data. These buckets contain sensitive audit data, and only authorized users and applications should have access. As a best practice, make OSS buckets that store ActionTrail log data private and ensure only authorized users have access.
This is applicable to alibaba_cloud cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['INTERNET_EXPOSURE'].
Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Object Storage Service\n3. In the left-side navigation pane, click on the reported bucket\n4. In the 'Basic Settings' tab, In the 'Access Control List (ACL)' Section, Click on 'Configure'\n5. For 'Bucket ACL' field, Choose 'Private' option\n6. Click on 'Save'. |
```config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as X; config from cloud.resource where api.name = 'aws-s3api-get-bucket-acl' AND json.rule = loggingConfiguration.targetBucket does not exist as Y; filter '$.X.s3BucketName equals $.Y.bucketName'; show Y;``` | AWS S3 CloudTrail bucket for which access logging is disabled
This policy identifies S3 CloudTrail buckets for which access logging is disabled. S3 Bucket access logging generates access records for each request made to your S3 bucket. An access log record contains information such as the request type, the resources specified in the request, and the time and date the request was processed. It is recommended that bucket access logging be enabled on the CloudTrail S3 bucket.
This is applicable to aws cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to the AWS Console and navigate to the 'S3' service.\n2. Click on the S3 bucket that was reported.\n3. Click on the 'Properties' tab.\n4. Under the 'Server access logging' section, select 'Enable' option and provide an S3 bucket of your choice in the 'Target bucket'\n5. Click on 'Save Changes'. |
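Server access logging can also be enabled from the AWS CLI. A minimal sketch, assuming the CloudTrail bucket is `my-cloudtrail-bucket` and access logs are delivered to `my-logging-bucket`:

```bash
# Sketch: enable server access logging on the CloudTrail bucket (assumed bucket names)
aws s3api put-bucket-logging \
  --bucket my-cloudtrail-bucket \
  --bucket-logging-status '{
    "LoggingEnabled": {
      "TargetBucket": "my-logging-bucket",
      "TargetPrefix": "cloudtrail-access-logs/"
    }
  }'
```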
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-disk-list' and json.rule = 'osType exists and managedBy exists and (encryptionSettings does not exist or encryptionSettings.enabled == false) and encryption.type is not member of ("EncryptionAtRestWithCustomerKey", "EncryptionAtRestWithPlatformAndCustomerKeys")'``` | Azure VM OS disk is encrypted with the default encryption key instead of ADE/CMK
This policy identifies the OS disks which are encrypted with the default encryption key instead of ADE/CMK. Azure encrypts OS disks by default Server-Side Encryption (SSE) with platform-managed keys [SSE with PMK]. It is recommended to use either SSE with Azure Disk Encryption [SSE with PMK+ADE] or Customer Managed Key [SSE with CMK] which improves on platform-managed keys by giving you control of the encryption keys to meet your compliance need.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['UNENCRYPTED_DATA'].
Mitigation of this issue can be done as follows: To enable SSE with Azure Disk Encryption [SSE with PMK+ADE],\nFollow https://docs.microsoft.com/en-us/azure/security/azure-security-disk-encryption-prerequisites based VM the data disk is assigned.\n\nTo enable SSE with Customer Managed Key [SSE with CMK],\nFollow https://docs.microsoft.com/en-us/azure/virtual-machines/disks-enable-customer-managed-keys-portal. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-machine-learning-compute' AND json.rule = properties.provisioningState equal ignore case "Succeeded" and properties.properties.state equal ignore case "Running" and properties.properties.osImageMetadata.isLatestOsImageVersion is false``` | Azure Machine Learning compute instance not running latest OS Image Version
This policy identifies Azure Machine Learning compute instances not running on the latest available image version.
Running compute instances on outdated image versions increases security risks. Without the latest security patches and updates, these instances are more vulnerable to attacks, which can compromise machine learning models and data.
As a best practice, it is recommended to recreate or update Azure Machine Learning compute instances to the latest image version, ensuring they have the most recent security patches and updates.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To ensure your Azure Machine Learning compute instances are running the latest available image version, follow these remediation steps:\n\n1. Recreate the Compute Instance. This will ensure it is provisioned with the latest VM image, including all recent updates and security patches.\n- Steps:\n 1. Backup Important Data:\n - Store notebooks in the `User files` directory to persist them.\n - Mount data to persist files.\n 2. Re-create the Instance:\n - Delete the existing compute instance.\n - Provision a new compute instance with latest OS image version.\n 3. Restore Data:\n - Restore notebooks and mounted data to the newly created instance.\n\nNote: This will result in the loss of data and customizations stored on the instance's OS and temporary disks.. |
```config from cloud.resource where api.name = 'gcloud-logging-metric' as X; config from cloud.resource where api.name = 'gcloud-monitoring-policies-list' as Y; filter '$.Y.conditions[*].metricThresholdFilter contains $.X.name and ($.X.filter contains "resource.type =" or $.X.filter contains "resource.type=") and ($.X.filter does not contain "resource.type !=" and $.X.filter does not contain "resource.type!=") and $.X.filter contains "gcs_bucket" and ($.X.filter contains "protoPayload.methodName=" or $.X.filter contains "protoPayload.methodName =") and ($.X.filter does not contain "protoPayload.methodName!=" and $.X.filter does not contain "protoPayload.methodName !=") and $.X.filter contains "storage.setIamPermissions"'; show X; count(X) less than 1``` | GCP Log metric filter and alert does not exist for Cloud Storage IAM permission changes
This policy identifies the GCP account which does not have a log metric filter and alert for Cloud Storage IAM permission changes. Monitoring Cloud Storage IAM permission activities will help in reducing time to detect and correct permissions on sensitive Cloud Storage bucket and objects inside the bucket. It is recommended to create a metric filter and alarm to detect activities related to the Cloud Storage IAM permission.
This is applicable to gcp cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Sign in to GCP Console\n2. Navigate to 'Logs-based metrics' under the 'Logging' section.\n3. Click on 'CREATE METRIC'.\n4. Provide 'Metric Type' and 'Details'.\n5. In 'Filter selection', add filter as \nresource.type="gcs_bucket" AND protoPayload.methodName="storage.setIamPermissions"\n6. Click on 'CREATE METRIC'.\n7. Under 'User-defined metrics' section, choose the metric you created in step 6 and click on the kebab menu (Vertical 3 dots) on the right side of the metrics\n8. Click on 'Create alert from metric'; it will navigate to 'Create alerting policy' under the section 'Monitoring'.\n9. Add the metric name created above if not auto-filled in the Monitoring filter. Choose an appropriate value for other alert condition parameters as desired. Then Click on 'NEXT'\n10. Configure all alert trigger settings as desired. Then Click on 'NEXT'\n11. Configure notifications as desired and provide an appropriate name for the alert policy. Then Click on 'NEXT'\n12. Click on 'CREATE POLICY'.. |
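The metric portion of this control can also be created with gcloud, using the same filter the policy checks for; the alerting policy itself would still be configured in Cloud Monitoring. A minimal sketch with an assumed metric name:

```bash
# Sketch: create the log-based metric that the alerting policy will reference (assumed metric name)
gcloud logging metrics create storage-iam-permission-changes \
  --description="Counts Cloud Storage setIamPermissions calls" \
  --log-filter='resource.type="gcs_bucket" AND protoPayload.methodName="storage.setIamPermissions"'
```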
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-application-gateway-waf-policy' AND json.rule = properties.applicationGateways[*].id size greater than 0 and properties.policySettings.state equal ignore case Enabled and properties.policySettings.mode does not equal ignore case Prevention``` | Azure Application Gateway WAF policy is not enabled in prevention mode
This policy identifies the Azure Application Gateway WAF policies that are not enabled in prevention mode.
Azure Application Gateway WAF policies support Prevention and Detection modes. Detection mode monitors and logs all threat alerts to a log file. Detection mode is useful for testing purposes and for initially configuring the WAF, but it does not provide protection; it logs the traffic but does not take any action such as allow or deny. Whereas, in Prevention mode, WAF analyzes incoming traffic to the application gateway and blocks any requests that are determined to be malicious based on a set of rules.
As a best security practice, it is recommended to enable Application Gateway WAF policies with Prevention mode to prevent malicious requests from reaching your application and potentially causing damage.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Navigate to 'Web Application Firewall policies (WAF)' dashboard\n3. Click on the reported WAF policy\n4. In 'Overview' section, Click on 'Switch to prevention mode'.\n\nNOTE: Define managed rules or custom rules properly as per your business requirement prior to transitioning to Prevention mode. This helps avoid unexpectedly blocked traffic.. |
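If the WAF policy is managed from the command line, the mode can be switched with a command along these lines. A sketch assuming resource group `my-rg` and policy `my-waf-policy`; verify the exact argument names against your az CLI version:

```bash
# Sketch: switch an Application Gateway WAF policy to Prevention mode (assumed names)
az network application-gateway waf-policy policy-setting update \
  --resource-group my-rg \
  --policy-name my-waf-policy \
  --state Enabled \
  --mode Prevention
```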
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-server-list' AND json.rule = "$.serverSecurityAlertPolicy.properties.retentionDays does not exist or $.serverSecurityAlertPolicy.properties.state equals Disabled"``` | Azure SQL server Defender setting is set to Off
This policy identifies Azure SQL servers which have the Defender setting set to Off. Azure Defender for SQL provides a new layer of security, which enables customers to detect and respond to potential threats as they occur by providing security alerts on anomalous activities. Users will receive an alert upon suspicious database activities, potential vulnerabilities, SQL injection attacks, as well as anomalous database access patterns. Advanced threat protection alerts provide details of suspicious activity and recommend action on how to investigate and mitigate the threat.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Go to the reported SQL server\n3. Select 'SQL servers', Click on the SQL server instance you wanted to modify\n4. Click on 'Microsoft Defender for Cloud' under 'Security'\n5. Click on 'Enable Microsoft Defender for SQL'. |
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-api-key' as X; count(X) greater than 0``` | Copy of GCP API key is created for a project1
This policy identifies GCP projects where API keys are created. Keys are insecure because they can be viewed publicly, such as from within a browser, or they can be accessed on a device where the key resides. To avoid the security risk in using API keys, it is recommended to use standard authentication flow instead.
Note: There are limited cases where API keys are more appropriate. For example, if there is a mobile application that needs to use the Google Cloud Translation API, but doesn't otherwise need a backend server, API keys are the simplest way to authenticate to that API. If a business requires API keys to be used, then the API keys should be secured using appropriate IAM policies.
This is applicable to gcp cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: Use of API keys is generally considered as less secure authentication mechanism and should be avoided. A secure authentication mechanism should be used. Follow the below mentioned URL to evaluate an alternate, suitable authentication mechanism:\nhttps://cloud.google.com/endpoints/docs/openapi/authentication-method\n\nTo delete an API Key:\n1. Log in to google cloud console\n2. Navigate to section 'Credentials', under 'APIs & Services'.\n3. To delete API Key, go to 'API Keys' section, click the Actions button (three dots) in front of key name.\n4. Click on ‘Delete API key’ button.\n5. In the 'Delete credential' dialog, click 'DELETE' button.\n\nNote: Deleting API keys might break dependent applications. It is recommended to thoroughly review and evaluate the impact of API key before deletion.. |
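API keys can also be enumerated and removed with gcloud via the `services api-keys` command group (availability may depend on your SDK version). A sketch, assuming project `my-project` and a placeholder key ID:

```bash
# Sketch: list and delete API keys in a project (assumed project ID; KEY_ID comes from the list output)
gcloud services api-keys list --project=my-project
gcloud services api-keys delete KEY_ID --project=my-project
```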
```config from cloud.resource where api.name = 'aws-emr-studio' AND json.rule = DefaultS3Location exists and DefaultS3Location contains "aws-emr-studio-" as X; config from cloud.resource where api.name = 'aws-s3api-get-bucket-acl' AND json.rule = bucketName contains "aws-emr-studio-" as Y; filter 'not ($.X.BucketName equals $.Y.bucketName)' ; show X;``` | AWS EMR shadow resource
sdvdsv
This is applicable to aws cloud and is considered a high severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'azure' AND api.name= 'azure-network-nsg-list' AND json.rule = securityRules[?any((sourceAddressPrefix equals Internet or sourceAddressPrefix equals * or sourceAddressPrefix equals 0.0.0.0/0 or sourceAddressPrefix equals ::/0) and protocol equals * and access equals Allow and destinationPortRange contains * and direction equals Inbound)] exists``` | Azure Network Security Group having Inbound rule overly permissive to all traffic on any protocol
This policy identifies Azure Network Security Groups (NSGs) which are overly permissive to all traffic on any protocol. A network security group contains a list of security rules that allow or deny inbound or outbound network traffic based on source or destination IP address, port, and protocol. As a best practice, it is recommended to configure NSGs to restrict traffic from known sources, allowing only authorized protocols and ports.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. Evaluate whether you want to edit the rule and limit access to specific users, hosts, and services only, deny access, or delete the rule completely.\n\n1. Log in to the Azure Portal.\n2. Select 'All services'.\n3. Select 'Network security groups', under NETWORKING.\n4. Select the Network security group you need to modify.\n5. Select 'Inbound security rules' under Settings.\n6. Select the rule you need to modify, and edit it to allow specific IP addresses and Port ranges OR set the 'Action' to 'Deny' OR 'Delete' the rule based on your requirement.\n7. 'Save' your changes.. |
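The same remediation can be scripted with the Azure CLI, either tightening the rule's source and ports or removing the rule. A sketch, assuming resource group `my-rg`, NSG `my-nsg`, and rule `allow-all-inbound`:

```bash
# Sketch: restrict or remove an overly permissive inbound NSG rule (assumed names)
az network nsg rule update \
  --resource-group my-rg \
  --nsg-name my-nsg \
  --name allow-all-inbound \
  --source-address-prefixes 203.0.113.0/24 \
  --destination-port-ranges 443 \
  --protocol Tcp

# Or delete the rule entirely if it is not needed
az network nsg rule delete --resource-group my-rg --nsg-name my-nsg --name allow-all-inbound
```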
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-network-subnet' and json.rule = public_gateway exists``` | IBM Cloud Virtual Private Cloud (VPC) Subnet has public gateways attached
This policy identifies IBM Virtual Private Cloud Subnets that have a public gateway attached. A public gateway enables resources to connect to the internet. After a public gateway is attached, all resources in that subnet can connect to the internet. If the use case does not require external connectivity, it is recommended not to attach any public gateways.
This is applicable to ibm cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Public Gateways'\n3. Select the 'Public Gateway' reported in the alert\n4. From the drop down select Detach\n5. Safely detach the public gateway and then delete the public gateway. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-elasticache-describe-replication-groups' AND json.rule = 'authTokenEnabled is false or transitEncryptionEnabled is false or authTokenEnabled does not exist or transitEncryptionEnabled does not exist'``` | AWS ElastiCache Redis cluster with Redis AUTH feature disabled
This policy identifies ElastiCache Redis clusters which have Redis AUTH feature disabled. Redis AUTH can improve data security by requiring the user to enter a password before they are granted permission to execute Redis commands on a password protected Redis server.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: AWS ElastiCache Redis cluster Redis AUTH password can be set only at the time of creation of the cluster. So to resolve this alert, create a new cluster with Redis AUTH feature enabled, then migrate all required ElastiCache Redis cluster data from the reported ElastiCache Redis cluster to this newly created cluster and delete the reported ElastiCache Redis cluster.\n\nTo create new ElastiCache Redis cluster with Redis AUTH password set, perform the following:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to ElastiCache Dashboard\n4. Click on Redis\n5. Click on 'Create' button\n6. On the 'Create your Amazon ElastiCache cluster' page,\na. Select 'Redis' cache engine type.\nb. Enter a name for the new cache cluster\nc. Select Redis engine version from 'Engine version compatibility' dropdown list.\nNote: As of July 2018, In-transit encryption can be enabled only for AWS ElastiCache clusters with Redis engine version 3.2.6 and 4.0.10.\nd. Click on 'Advanced Redis settings' to expand the cluster advanced settings panel\ne. Select 'Encryption in-transit' checkbox to enable encryption\nNote: Redis AUTH can only be enabled when creating clusters where in-transit encryption is enabled.\nf. Select 'Redis AUTH' checkbox to enable AuthToken password\ng. Type the password you want to enforce in the 'Redis AUTH Token' textbox.\nThe chosen password should meet the 'Passwords must be at least 16 and a maximum of 128 printable characters, restricted to any printable ASCII character except ' ', '"', '/' and '@' signs' criteria. Set the new Redis cluster's other parameters to match the reported Redis cluster's configuration.\nNote: The password set at cluster creation cannot be changed.\n7. Click on 'Create' button to launch your new ElastiCache Redis cluster\n\nTo delete reported ElastiCache Redis cluster, perform the following:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to ElastiCache Dashboard\n4. Click on Redis\n5. Select reported Redis cluster\n6. Click on 'Delete' button\n7. In the 'Delete Cluster' dialog box, if you want a backup for your cluster, select 'Yes' from the 'Create final backup' dropdown menu, provide a name for the cluster backup, then click on 'Delete'.. |
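Creating the replacement cluster with AUTH and in-transit encryption enabled can also be done from the AWS CLI. A minimal sketch; the replication group name, node type, and token are assumptions:

```bash
# Sketch: create a Redis replication group with in-transit encryption and AUTH enabled (assumed values)
aws elasticache create-replication-group \
  --replication-group-id my-redis-auth \
  --replication-group-description "Redis with AUTH" \
  --engine redis \
  --engine-version 6.2 \
  --cache-node-type cache.t3.micro \
  --num-cache-clusters 2 \
  --transit-encryption-enabled \
  --auth-token 'MyStrongAuthTokenAtLeast16Chars'
```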
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' AND json.rule = properties.provisioningState equals Succeeded and diagnosticSettings.value[*].properties.workspaceId does not equal ignore case "/subscriptions/8dff688e-d9b0-477c-b2b0-b0e729fb06bd/resourceGroups/rg-analytics-sh-prd-scus/providers/Microsoft.OperationalInsights/workspaces/log-sh-workspace"``` | test-3
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-list-roles' as X; config from cloud.resource where api.name = 'aws-iam-get-policy-version' as Y; filter "($.X.inlinePolicies[*].policyDocument.Statement[?(@.Effect=='Allow' && @.Resource=='*')].Action any equal *) or ($.X.attachedPolicies[*].policyArn contains $.Y.policyArn and $.Y.document.Statement[?(@.Effect=='Allow' && @.Resource=='*')].Action any equal *)"; show X;``` | AWS IAM Roles with Administrator Access Permissions
This policy identifies AWS IAM roles which have administrator access permissions set. This would allow all users who assume these roles to have administrative privileges. As a security best practice, it is recommended to grant least-privilege access, such as granting only the permissions required to perform a task, instead of providing excessive permissions.
This is applicable to aws cloud and is considered a medium severity issue.
Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE'].
Mitigation of this issue can be done as follows: 1. Log in to AWS console\n2. Navigate to IAM service\n3. Click on Roles\n4. Click on reported IAM role\n5. Under 'Permissions policies' click on 'X' to detach or remove the policy having excessive permissions and assign a limited permission policy as required for a particular role.. |
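The detach step can be scripted with the AWS CLI once the offending policy is identified. A sketch, assuming the reported role is `my-admin-role` and the attached policy is the AWS-managed AdministratorAccess policy:

```bash
# Sketch: find and detach the overly permissive policy from the reported role (assumed role name)
aws iam list-attached-role-policies --role-name my-admin-role
aws iam detach-role-policy \
  --role-name my-admin-role \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# For inline policies, list and delete them instead (assumed inline policy name)
aws iam list-role-policies --role-name my-admin-role
aws iam delete-role-policy --role-name my-admin-role --policy-name inline-admin-policy
```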
```config from cloud.resource where finding.type IN ( 'Host Vulnerability', 'Serverless Vulnerability' , 'Compliance' , 'AWS Inspector Runtime Behavior Analysis' , 'AWS Inspector Security Best Practices' , 'AWS GuardDuty Host' , 'AWS GuardDuty IAM' ) ``` | Hostfindings test
This is applicable to all cloud and is considered a low severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: N/A. |
```config from cloud.resource where api.name = 'aws-mq-broker' AND json.rule = brokerState equal ignore case RUNNING as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' AND json.rule = keyMetadata.keyState equal ignore case Enabled and keyMetadata.keyManager does not equal ignore case CUSTOMER as Y; filter '$.X.encryptionOptions.kmsKeyId equals $.Y.keyMetadata.arn or $.X.encryptionOptions.useAwsOwnedKey is true'; show X;``` | AWS MQ Broker is not encrypted by Customer Managed Key (CMK)
This policy identifies AWS MQ Brokers that are not encrypted by Customer Managed Key (CMK).
AWS MQ Broker messages might contain sensitive information. AWS MQ Broker messages are encrypted by default by an AWS managed key but users can specify CMK to get enhanced security, control over the encryption key, and also comply with any regulatory requirements.
As a security best practice use of CMK to encrypt your MQ Broker is advisable as it gives you full control over the encrypted data.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: AWS MQ Broker encryption option can be done only at the creation of MQ broker. You cannot change the encryption options once it has been created. To resolve this alert create a new MQ broker configuring encryption with CMK key, migrate all data to newly created MQ broker and then delete the reported MQ broker.\n\nTo create a new AWS MQ broker encryption with CMK key,\n1. Log in to the AWS console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated\n3. Go to the AWS MQ broker Dashboard\n4. Click on 'Create brokers'\n5. Select the broker engine type, deployment mode as per your business requirement\n6. Under 'Configure settings', In Additional settings section choose Encryption option choose 'Customer managed CMKs are created and managed by you in AWS Key Management Service (KMS).' based on your business requirement.\n7. Review and Create the MQ broker.\n\nTo delete reported MQ broker, refer following URL:\nFor ActiveMQ Broker: https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/getting-started-activemq.html#delete-broker\nFor RabbitMQ Broker: https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/getting-started-rabbitmq.html#rabbitmq-delete-broker. |
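When creating the replacement broker from the CLI, the CMK is supplied through the encryption options. A sketch with assumed names and a placeholder KMS key ARN; other parameters should mirror the existing broker, and required arguments may vary by engine version:

```bash
# Sketch: create an ActiveMQ broker encrypted with a customer-managed KMS key (assumed values)
aws mq create-broker \
  --broker-name my-cmk-broker \
  --engine-type ACTIVEMQ \
  --engine-version 5.17.6 \
  --deployment-mode SINGLE_INSTANCE \
  --host-instance-type mq.t3.micro \
  --auto-minor-version-upgrade \
  --no-publicly-accessible \
  --users Username=admin,Password='ReplaceWithStrongPassword1' \
  --encryption-options UseAwsOwnedKey=false,KmsKeyId=arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID
```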
```config from cloud.resource where api.name = 'oci-object-storage-bucket' as X; config from cloud.resource where api.name = 'oci-logging-logs' as Y; filter 'not ($.X.name contains $.Y.configuration.source.resource and $.Y.configuration.source.service contains objectstorage and $.Y.configuration.source.category contains write and $.Y.lifecycleState equal ignore case ACTIVE )'; show X;``` | OCI Object Storage Bucket write level logging is disabled
This policy identifies Object Storage buckets that have write-level logging disabled.
Enabling write-level logging for Object Storage provides more visibility into changes to objects in your buckets. Without write-level logging, there is no record of changes made to the bucket. This lack of visibility can lead to undetected data breaches, unauthorized changes, and compliance violations.
As a best practice, it is recommended to enable write-level logging on Object Storage buckets.
This is applicable to oci cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: First, if a log group for holding these logs has not already been created, create a log group by the following steps:\n\n1. Login to the OCI Console\n2. Go to the Log Groups page\n3. Click the 'Create Log Group' button in the middle of the screen\n4. Select the relevant compartment to place these logs\n5. Type a name for the log group in the 'Name' box.\n6. Add an optional description in the 'Description' box\n7. Click the 'Create' button in the lower left hand corner\n\nSecond, enable Object Storage write log logging for reported bucket by the following steps:\n1. Login to the OCI Console\n2. Go to the Logs page\n3. Click the 'Enable Service Log' button in the middle of the screen\n4. Select the relevant resource compartment\n5. Select ‘Object Storage’ from the Service drop-down menu \n6. Select the reported bucket from the ‘Resource’ drop-down menu \n7. Select ‘Write Access Events’ from the ‘Log Category’ drop-down menu \n8. Type a name for your Object Storage write log in the ‘Log Name’ drop-down menu \n9. Click the ‘Enable Log’ button in the lower left hand corner. |
```config from cloud.resource where api.name = 'azure-active-directory-user-registration-details' AND json.rule = isMfaRegistered is false as X; config from cloud.resource where api.name = 'azure-active-directory-user' AND json.rule = accountEnabled is true as Y; filter '$.X.userDisplayName equals $.Y.displayName'; show X;``` | Azure Active Directory MFA is not enabled for user
This policy identifies Azure users for whom AD MFA (Active Directory Multi-Factor Authentication) is not enabled.
Azure AD MFA is a simple best practice that adds an extra layer of protection on top of your user name and password. MFA provides increased security for your Azure account settings and resources. Enabling Azure AD Multi-Factor Authentication using Conditional Access policies is the recommended approach to protect users.
As best practice, it is recommended to enable Azure AD Multi-Factor Authentication for users.
This is applicable to azure cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MFA'].
Mitigation of this issue can be done as follows: To enable per-user Azure AD Multi-Factor Authentication; follow below URL:\nhttps://docs.microsoft.com/en-us/azure/active-directory/authentication/howto-mfa-userstates. |
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = 'kind starts with app and properties.clientCertEnabled equals false'``` | Azure App Service Web app client certificate is disabled
This policy identifies Azure web apps which are not configured with a client certificate. Client certificates allow the app to request a certificate for incoming requests. Only clients that have a valid certificate will be able to reach the app.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to App Services\n3. Click on the reported App\n4. Under Setting section, Click on 'Configuration'\n5. Under 'General Settings' tab, In 'Incoming client certificates', Set 'Client certificate mode' to 'Require'\n6. Click on 'Save'\n\nNote: App Services with the Free sku plan are ideal for testing applications in a managed Azure environment. The client certificates option is not supported for the Free sku plan. We recommend upgrading such reported app services from the Free sku plan as per your requirement.. |
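The client certificate requirement can also be turned on with the Azure CLI. A minimal sketch with assumed resource group and app names:

```bash
# Sketch: require client certificates for incoming requests on a web app (assumed names)
az webapp update \
  --resource-group my-rg \
  --name my-webapp \
  --set clientCertEnabled=true
```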
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-service-bus-namespace' AND json.rule = properties.status equals "Active" and (properties.disableLocalAuth does not exist or properties.disableLocalAuth is false)``` | Bobby Copy of Azure Service bus namespace not configured with Azure Active Directory (Azure AD) authentication
This policy identifies Service bus namespaces that are not configured with Azure Active Directory (Azure AD) authentication and are enabled with local authentication. Azure AD provides superior security and ease of use over shared access signatures (SAS). With Azure AD, there's no need to store the tokens in your code and risk potential security vulnerabilities. It is recommended to configure the Service bus namespaces with Azure AD authentication so that all actions are strongly authenticated.
This is applicable to azure cloud and is considered a medium severity issue.
Sample categories of findings relevant here are [].
Mitigation of this issue can be done as follows: To configure Azure Active Directory (Azure AD) authentication and disable local authentication on an existing Service bus, follow the instructions at the URL below:\nhttps://docs.microsoft.com/en-us/azure/service-bus-messaging/disable-local-authentication. |
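Disabling local (SAS) authentication so that only Azure AD is accepted can also be done from the CLI. A sketch with assumed names; confirm the `--disable-local-auth` flag is available in your az CLI version:

```bash
# Sketch: turn off SAS/local authentication on a Service Bus namespace (assumed names)
az servicebus namespace update \
  --resource-group my-rg \
  --name my-servicebus-namespace \
  --disable-local-auth true
```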
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-storage-buckets-list' AND json.rule = encryption.defaultKmsKeyName does not exist``` | GCP Storage Bucket encryption not configured with Customer-Managed Encryption Key (CMEK)
This policy identifies GCP Storage Buckets that are not configured with a Customer-Managed Encryption key.
GCP Storage Buckets might contain sensitive information. Google Cloud Storage service encrypts all data within the buckets using Google-managed encryption keys by default but users can specify Customer-Managed Keys (CMKs) to get enhanced security, control over the encryption key, and also comply with any regulatory requirements.
As a security best practice, the use of CMK to encrypt your Storage bucket is advisable as it gives you full control over the encrypted data.
This is applicable to gcp cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: To update the GCP storage bucket with customer-managed encryption, follow the below steps:\n\n1. Sign in to the Google Cloud Management Console. Navigate to the Cloud Storage Buckets page.\n2. Click on the name of the bucket where you want to enable customer-managed encryption.\n3. Under the 'Configuration' tab, under the 'Protection' section, select the 'Edit encryption type' option.\n4. A 'Edit encryption' dialogue box will appear. Select the 'Customer-managed encryption key' option.\n5. Under the 'Select a customer-managed key' dropdown, select the KMS key to be used for encryption.\n6. Click on 'SAVE'.\n\nNote: Make sure the storage bucket service account has cloudkms.cryptoKeyEncrypterDecrypter permissions to encrypt or decrypt with the selected key.. |
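Setting a default Cloud KMS key on a bucket can also be done from the command line. A sketch with assumed project, key ring, key, and bucket names; the bucket's service agent needs `cloudkms.cryptoKeyEncrypterDecrypter` on the key first:

```bash
# Sketch: set a customer-managed default encryption key on a bucket (assumed names)
gsutil kms encryption \
  -k projects/my-project/locations/us/keyRings/my-keyring/cryptoKeys/my-key \
  gs://my-bucket

# Verify the configured default key
gsutil kms encryption gs://my-bucket
```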
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = pricings[?any(properties.pricingTier does not equal Standard and (properties.deprecated does not exist or properties.deprecated is false))] exists``` | Azure Microsoft Defender for Cloud Defender plans is set to Off
This policy identifies Azure Microsoft Defender for Cloud which has a Defender setting set to Off. Enabling Azure Defender provides advanced security capabilities such as threat intelligence, anomaly detection, and behavior analytics in Azure Microsoft Defender for Cloud. It is highly recommended to enable Azure Defender for all Azure services.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Select 'Enable all Microsoft Defender for Cloud plans' if not already enabled\n7. On the line in the table for 'Select Defender plan by resource type' Select 'Enable all'.\n8. Select 'Save'. |
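Individual Defender plans can also be switched to the Standard tier per subscription with the Azure CLI. A sketch for a few common resource types (run once per plan you need to enable):

```bash
# Sketch: enable Microsoft Defender (Standard tier) for selected plans on the current subscription
az security pricing create --name VirtualMachines --tier 'standard'
az security pricing create --name SqlServers --tier 'standard'
az security pricing create --name StorageAccounts --tier 'standard'

# List current plan tiers to confirm
az security pricing list --output table
```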
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-security-center-settings' AND json.rule = pricings[?any( name equals VirtualMachines and properties.pricingTier does not equal Standard)] exists``` | Azure Microsoft Defender for Cloud is set to Off for Servers
This policy identifies Azure Microsoft Defender for Cloud (previously known as Azure Security Center and Azure Defender) for which the Defender setting for Servers is set to Off. Enabling Microsoft Defender for Cloud provides the tools needed to harden your resources, track your security posture, protect against cyberattacks, and streamline security management. It is highly recommended to enable Microsoft Defender for Servers.
This is applicable to azure cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Go to 'Microsoft Defender for Cloud'\n3. Select 'Environment Settings'\n4. Click on the subscription name\n5. Select the 'Defender plans'\n6. Select 'Enable all Microsoft Defender for Cloud plans' if not already enabled\n7. On the line in the table for 'Servers' Select 'On' under Plan.\n8. Select 'Save'. |
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-networking-security-list' AND json.rule = (ingressSecurityRules[?any((source equals 0.0.0.0/0) and (((*.destinationPortRange.min == 3389 or *.destinationPortRange.max == 3389) or (*.destinationPortRange.min < 3389 and *.destinationPortRange.max > 3389)) or (protocol equals "all") or ((tcpOptions does not exist) and (udpOptions does not exist) and (protocol does not equal 1))))] exists)``` | OCI security lists allows unrestricted ingress access to port 3389
This policy identifies OCI Security lists that allow unrestricted ingress access to port 3389. It is recommended that no security list allows unrestricted ingress access to port 3389. As a best practice, remove unfettered connectivity to remote console services, such as Remote Desktop Protocol (RDP), to reduce server's exposure to risk.
This is applicable to oci cloud and is considered an informational severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Under Resources, click Ingress Rules.\n5. If you want to add a rule, click Add Ingress Rules\n6. If you want to delete an existing rule, click the Actions icon (three dots), and then click Remove.\n7. If you want to edit an existing rule, click the Actions icon (three dots), and then click Edit.. |
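The ingress rules can also be replaced in bulk with the OCI CLI by supplying the full desired rule set as JSON. A sketch with an assumed security list OCID and a narrowed RDP source CIDR; note that this overwrites all existing ingress rules, so include every rule you want to keep:

```bash
# Sketch: replace the ingress rules on a security list, narrowing RDP to a trusted CIDR (assumed values)
cat > ingress.json <<'EOF'
[
  {
    "protocol": "6",
    "source": "203.0.113.0/24",
    "tcpOptions": { "destinationPortRange": { "min": 3389, "max": 3389 } }
  }
]
EOF

oci network security-list update \
  --security-list-id ocid1.securitylist.oc1..exampleuniqueID \
  --ingress-security-rules file://ingress.json
```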
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-list-server-certificates' AND json.rule = '(_DateTime.ageInDays($.expiration) > -1)'``` | AWS IAM has expired SSL/TLS certificates
This policy identifies expired SSL/TLS certificates. To enable HTTPS connections to your website or application in AWS, you need an SSL/TLS server certificate. You can use ACM or IAM to store and deploy server certificates. Removing expired SSL/TLS certificates eliminates the risk that an invalid certificate will be deployed accidentally to a resource such as AWS Elastic Load Balancer (ELB), which can damage the credibility of the application/website behind the ELB. This policy generates alerts if there are any expired SSL/TLS certificates stored in AWS IAM. As a best practice, it is recommended to delete expired certificates.
This is applicable to aws cloud and is considered a low severity issue.
Sample categories of findings relevant here are ['MISCONFIGURATION'].
Mitigation of this issue can be done as follows: Removing invalid certificates via AWS Management Console is not currently supported. To delete SSL/TLS certificates stored in IAM via the AWS API use the Command Line Interface (CLI).\n\nRemediation CLI:\n1. Run describe-load-balancers command to make sure that the expired server certificate is not currently used by any active load balancer.\n aws elb describe-load-balancers --region <COMPUTE_REGION> --load-balancer-names <ELB_NAME> --query 'LoadBalancerDescriptions[*].ListenerDescriptions[*].Listener.SSLCertificateId'\nThis command output will return the Amazon Resource Name (ARN) for the SSL certificate currently used by the selected ELB:\n [\n [\n "arn:aws:iam::1234567890:server-certificate/MyCertificate"\n ]\n ]\n2. If the load balancer listener using the reported expired certificate is not removed before the certificate, the ELB may continue to use the same certificate and work improperly. To delete the ELB listener that is using the expired SSL certificate, run following command:\n aws elb delete-load-balancer-listeners --region <COMPUTE_REGION> --load-balancer-name <ELB_NAME> --load-balancer-ports 443\n3. Now that is safe to remove the expired SSL/TLS certificate from AWS IAM, To delete it run:\n aws iam delete-server-certificate --server-certificate-name <CERTIFICATE_NAME>. |