```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-sagemaker-notebook-instance' AND json.rule = 'notebookInstanceStatus equals InService and directInternetAccess equals Enabled'```
AWS SageMaker notebook instance configured with direct internet access feature This policy identifies SageMaker notebook instances that are configured with the direct internet access feature. If AWS SageMaker notebook instances are configured with direct internet access, any machine outside the VPC can establish a connection to these instances, which provides an additional avenue for unauthorized access to data and the opportunity for malicious activity. For more details: https://docs.aws.amazon.com/sagemaker/latest/dg/appendix-notebook-and-internet-access.html This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: The direct internet access feature cannot be disabled once an AWS SageMaker notebook instance is created. You need to create a new notebook instance with direct internet access disabled, and migrate all required data from the reported notebook instance to the newly created notebook instance before you delete the reported notebook instance.\n\nTo create a new AWS SageMaker notebook instance,\n1. Log in to AWS console\n2. Navigate to the AWS SageMaker dashboard\n3. Choose Notebook instances and then choose 'Create notebook instance'\n4. On the Create notebook instance page, within the 'Network' section, \nFrom the 'VPC – optional' dropdown list, select the VPC where you want to deploy the new SageMaker notebook instance.\n5. Select the 'Disable - Access the internet through a VPC' button under 'Direct internet access' to disable direct internet access for the new notebook instance.\n6. Choose other parameters as per your requirement and click on the 'Create notebook instance' button\n\nTo delete the reported notebook instance,\n1. Log in to AWS console\n2. Navigate to the AWS SageMaker dashboard\n3. Choose Notebook instances and choose the reported notebook instance\n4. Click on the 'Actions' dropdown menu, select the 'Stop' option, and when the instance stops, select the 'Delete' option.\n5. Within the Delete <notebook-instance-name> dialog box, click the Delete button to confirm the action.
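If you prefer scripting the replacement instance over the console steps, a hedged sketch using the standard AWS CLI follows; the instance name, instance type, role ARN, subnet, and security group are placeholders you must supply.

```
# Create a replacement notebook instance inside a VPC with direct internet access disabled
aws sagemaker create-notebook-instance \
  --notebook-instance-name <new-notebook-name> \
  --instance-type ml.t3.medium \
  --role-arn <execution-role-arn> \
  --subnet-id <subnet-id> \
  --security-group-ids <security-group-id> \
  --direct-internet-access Disabled
```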
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-mysql-server' AND json.rule = 'properties.sslEnforcement does not equal Enabled'```
Azure MySQL Database Server SSL connection is disabled This policy identifies Azure MySQL database servers for which the SSL connection is disabled. SSL connectivity helps to provide a new layer of security by connecting the database server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between the database server and client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and application. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure portal.\n2. Navigate to 'Azure Database for MySQL server'\n3. Click on the reported database, select 'Connection security' from the left panel\n4. In the 'SSL settings' section,\n5. Ensure 'Enforce SSL connection' is set to 'ENABLED'.
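The same change can be scripted; a minimal sketch using the Azure CLI, where the resource group and server name are placeholders:

```
# Enforce SSL connections on the reported Azure Database for MySQL server
az mysql server update \
  --resource-group <resource-group> \
  --name <mysql-server-name> \
  --ssl-enforcement Enabled
```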
```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-iam-get-account-password-policy' AND json.rule='isDefaultPolicy is true or maxPasswordAge !isType Integer or maxPasswordAge < 1 or maxPasswordAge does not exist'```
AWS IAM password policy does not have password expiration period Checks to ensure that the IAM password policy has an expiration period. AWS IAM (Identity & Access Management) allows customers to secure AWS console access. As a security best practice, customers must have strong password policies in place. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Login to the AWS console and navigate to the 'IAM' service.\n2. On the left navigation panel, click on 'Account Settings'\n3. Check 'Enable password expiration' and enter a password expiration period.\n4. Click on 'Apply password policy'.
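Equivalently, the policy can be set from the AWS CLI. Note that update-account-password-policy replaces the whole account password policy, so include every setting you rely on; the values below are illustrative, not prescriptive:

```
# Set a password expiration period (and other common hardening settings) for the account
aws iam update-account-password-policy \
  --minimum-password-length 14 \
  --require-uppercase-characters \
  --require-lowercase-characters \
  --require-numbers \
  --require-symbols \
  --password-reuse-prevention 24 \
  --max-password-age 90
```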
```config from cloud.resource where cloud.type = 'gcp' and api.name = 'gcloud-cloud-spanner-database' AND json.rule = state equal ignore case ready AND enableDropProtection does not exist```
GCP Spanner Database drop protection disabled This policy identifies GCP Spanner Databases with drop protection disabled. Google Cloud Spanner is a scalable, globally distributed, and strongly consistent database service. The Spanner database drop protection feature prevents accidental deletion of databases and configurations. Without drop protection enabled, a user error or malicious action could lead to irreversible data loss and service disruption for all applications relying on that Spanner instance. It is recommended to enable drop protection on Spanner databases to prevent accidental deletion. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable drop protection on a Cloud Spanner database, use the below CLI command:\n\ngcloud spanner databases update <DATABASE_ID> --instance=<INSTANCE_ID> --enable-drop-protection\n\nPlease refer to the URL mentioned below for more details on how to enable drop protection:\nhttps://cloud.google.com/spanner/docs/prevent-database-deletion#enable\n\nPlease refer to the URL mentioned below for more details on the Cloud Spanner update command:\nhttps://cloud.google.com/sdk/gcloud/reference/spanner/databases/update.
```config from cloud.resource where api.name= 'gcloud-compute-instances-list' and json.rule = ['metadata'].items does not exist and (status equals RUNNING and name does not start with "gke-")```
GCP VM Instances without any Custom metadata information This policy identifies VM instances that do not have any custom metadata. Custom metadata can be used for easy identification and searches. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP Console and from Compute, select Compute Engine.\n 2. Select the identified VM instance to see the details.\n 3. On the details page, click on Edit and navigate to the Custom metadata section.\n 4. Add the appropriate Key:Value information and save.
```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case "access" and roles[?any( role_id contains "crn:v1:bluemix:public:iam::::role:Administrator" )] exists and resources[?any( attributes[?any( name equal ignore case "serviceName" and value equal ignore case "logdnaat" and operator is member of ("stringEquals", "stringMatch"))] exists and attributes[?any( name is member of ("region","resource","resourceGroupId","logGroup","resourceType","serviceInstance"))] does not exist )] exists and subjects[?any( attributes[?any( name contains "iam_id" and value contains "iam-ServiceId")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-service-id' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.id'; show Y;```
IBM Cloud Service ID with IAM policies provide administrative privileges for Activity Tracker Service This policy identifies IBM Cloud Service IDs that have a policy with administrator role permission for the Activity Tracker service. If a Service ID that has a policy with admin rights is compromised, the whole service is compromised. As a security best practice, it is recommended to grant the least privileged access, such as granting only the permissions required to perform a task, instead of providing excessive permissions. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', and click on 'Service IDs' in the left panel.\n3. Select the Service ID which is reported and you want to edit access for.\n4. Under the 'Access' tab, go to the 'Access policies' section, click on the three dots on the right corner of a row for the policy that has Administrator permission on the 'IBM Cloud Activity Tracker' service.\n5. Click on Remove OR Edit to assign limited permission to the policy.\n6. Review the policy details that you're about to Edit/Remove, and confirm by clicking Save/Remove.
```config from cloud.resource where api.name = 'ibm-vpc-block-storage-volume' as X; config from cloud.resource where api.name = 'ibm-key-protect-registration' as Y;filter 'not($.Y.resourceCrn equals $.X.crn)' ; show X;```
IBM Cloud Block Storage volume for VPC is not encrypted with BYOK This policy identifies IBM Cloud Block storage volumes that are not encrypted with Bring Your Own Key (BYOK). As a best practice, it is recommended to use BYOK so that no one outside the organization has access to the root key and only authorized identities have access to maintain the lifecycle of the keys. This is applicable to ibm cloud and is considered an informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: A storage volume can be encrypted with BYOK only at the time of creation. Please create a snapshot using the below URL:\nhttps://cloud.ibm.com/docs/vpc?topic=vpc-snapshots-vpc-create&interface=ui#snapshots-vpc-create-from-vol-details\n\nThen create a storage volume from the above-created snapshot with BYOK; refer to the below URL:\nhttps://cloud.ibm.com/docs/vpc?topic=vpc-snapshots-vpc-restore&interface=ui#snapshots-vpc-restore-snaphot-list-ui\n\n1. Under the 'Encryption at rest' section, select 'Key Protect'.\n2. Under 'Encryption service instance' and 'Key name', select the instance and key to be used for encryption.\n3. Click the 'Create block storage volume' button. The side panel closes, and a message indicates the restored volume.\n\nPlease delete the reported block storage volume using the below URL:\nhttps://cloud.ibm.com/docs/vpc?topic=vpc-managing-block-storage&interface=ui#delete.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-sql-db-list' AND json.rule = transparentDataEncryption is false```
Azure SQL database Transparent Data Encryption (TDE) encryption disabled This policy identifies SQL databases in which Transparent Data Encryption (TDE) is disabled. TDE performs real-time encryption and decryption of the database, associated backups, and transaction log files without requiring any changes to the application. It encrypts the storage of an entire database by using a symmetric key called the database encryption key. It is recommended to have TDE encryption on your SQL databases to protect the database from malicious activity. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Login to Azure Portal\n2. Click on SQL databases (Left Panel)\n3. Choose the reported database\n4. Under Security, Click on Transparent data encryption\n5. Set Data encryption to ON\n6. Click on Save.
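For scripted remediation, a minimal sketch with the Azure CLI; the resource group, server, and database names are placeholders:

```
# Enable Transparent Data Encryption on the reported database
az sql db tde set \
  --resource-group <resource-group> \
  --server <sql-server-name> \
  --database <database-name> \
  --status Enabled
```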
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-network-security-group' AND json.rule = rules[?any( remote.cidr_block equals "0.0.0.0/0" and direction equals "inbound" and ( protocol equals "all" or ( protocol equals "tcp" and ( port_max greater than 22 and port_min less than 22 ) or ( port_max equals 22 and port_min equals 22 ))))] exists```
IBM Cloud Security Group allow all traffic on SSH port (22) This policy identifies IBM Cloud Security groups that allow all traffic on SSH port 22. Doing so may allow a bad actor to brute force their way into the system and potentially get access to the entire network. Review your list of security group rules to ensure that your resources are not exposed. As a best practice, restrict SSH solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. If the Security Groups reported indeed need to restrict all traffic, follow the instructions below:\n1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Security Groups'\n3. Select the 'Security Groups' reported in the alert\n4. Go to 'Inbound rules' under the 'Rules' tab\n5. Click on the three dots on the right corner of a row containing a rule that has 'Source type' as 'Any' and 'Value' as 22 (or range containing 22)\n6. Click on 'Delete'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = 'properties.state equals Running and ((config.javaVersion exists and config.javaVersion does not equal 1.8 and config.javaVersion does not equal 11 and config.javaVersion does not equal 17) or (config.linuxFxVersion is not empty and config.linuxFxVersion contains JAVA and (config.linuxFxVersion contains 8 or config.linuxFxVersion contains 11 or config.linuxFxVersion contains 17) and config.linuxFxVersion does not contain 8-jre8 and config.linuxFxVersion does not contain 11-java11 and config.linuxFxVersion does not contain 17-java17) or (config.linuxFxVersion is not empty and config.linuxFxVersion contains JBOSSEAP and config.linuxFxVersion does not contain 7-java8 and config.linuxFxVersion does not contain 7-java11 and config.linuxFxVersion does not contain 7-java17) or (config.linuxFxVersion contains TOMCAT and config.linuxFxVersion does not end with 10.0-jre8 and config.linuxFxVersion does not end with 9.0-jre8 and config.linuxFxVersion does not end with 8.5-jre8 and config.linuxFxVersion does not end with 10.0-java11 and config.linuxFxVersion does not end with 9.0-java11 and config.linuxFxVersion does not end with 8.5-java11 and config.linuxFxVersion does not end with 10.0-java17 and config.linuxFxVersion does not end with 9.0-java17 and config.linuxFxVersion does not end with 8.5-java17))'```
bbaotest2 tested This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where api.name = 'aws-logs-describe-metric-filters' as X; config from cloud.resource where api.name = 'aws-cloudwatch-describe-alarms' as Y; config from cloud.resource where api.name = 'aws-cloudtrail-describe-trails' as Z; filter '(($.Z.cloudWatchLogsLogGroupArn is not empty and $.Z.cloudWatchLogsLogGroupArn contains $.X.logGroupName and $.Z.isMultiRegionTrail is true and $.Z.includeGlobalServiceEvents is true) and (($.X.filterPattern contains "eventName=" or $.X.filterPattern contains "eventName =") and ($.X.filterPattern does not contain "eventName!=" and $.X.filterPattern does not contain "eventName !=") and $.X.filterPattern contains CreateTrail and $.X.filterPattern contains UpdateTrail and $.X.filterPattern contains DeleteTrail and $.X.filterPattern contains StartLogging and $.X.filterPattern contains StopLogging) and ($.X.metricTransformations[*] contains $.Y.metricName))'; show X; count(X) less than 1```
AWS Log metric filter and alarm does not exist for CloudTrail configuration changes This policy identifies the AWS regions which do not have a log metric filter and alarm for CloudTrail configuration changes. Monitoring changes to CloudTrail's configuration will help ensure sustained visibility into activities performed in the AWS account. It is recommended that a metric filter and alarm be established for detecting changes to CloudTrail's configurations. NOTE: This policy will trigger an alert if you have at least one CloudTrail trail that has multi-region enabled, logs all management events in your account, and is not set with the specific log metric filter and alarm. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to AWS Console\n2. Navigate to CloudWatch dashboard\n3. Click on 'Log groups' in the 'Logs' section (Left panel)\n4. Select the log group created for your CloudTrail trail event logs (CloudTrail should be multi-region enabled with all management events captured) and click 'Create Metric Filter' button.\n5. In 'Define Logs Metric Filter' page, add 'Filter pattern' value as \n{ ($.eventName = CreateTrail) || ($.eventName = UpdateTrail) || ($.eventName = DeleteTrail) || ($.eventName = StartLogging) || ($.eventName = StopLogging) }\nand Click on 'Assign Metric'\n6. In 'Create Metric Filter and Assign a Metric' page, Choose Filter Name, Metric Details parameter according to your requirement and click on 'Create Filter'\n7. Click on 'Create Alarm',\n - In Step 1 specify metric details and conditions details as required and click on 'Next'\n - In Step 2 Select an SNS topic either by creating a new topic or use existing SNS topic/ARN and click on 'Next'\n - In Step 3 Select name and description to alarm and click on 'Next'\n - In Step 4 Preview your data entered and click on 'Create Alarm'.
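The console steps above can also be done with the AWS CLI; a hedged sketch follows, where the log group name, metric name, namespace, alarm name, and SNS topic ARN are placeholders of your choosing:

```
# Create the metric filter on the CloudTrail log group
aws logs put-metric-filter \
  --log-group-name <cloudtrail-log-group> \
  --filter-name CloudTrailConfigChanges \
  --filter-pattern '{ ($.eventName = CreateTrail) || ($.eventName = UpdateTrail) || ($.eventName = DeleteTrail) || ($.eventName = StartLogging) || ($.eventName = StopLogging) }' \
  --metric-transformations metricName=CloudTrailConfigChangeCount,metricNamespace=LogMetrics,metricValue=1

# Create an alarm on the resulting metric, notifying an existing SNS topic
aws cloudwatch put-metric-alarm \
  --alarm-name cloudtrail-config-changes \
  --metric-name CloudTrailConfigChangeCount \
  --namespace LogMetrics \
  --statistic Sum \
  --period 300 \
  --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --evaluation-periods 1 \
  --alarm-actions <sns-topic-arn>
```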
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-virtual-server-instance' AND json.rule = status equal ignore case "running" AND network_interfaces[?any( floating_ips is not empty)] exists```
IBM Cloud Virtual Servers for VPC instance have floating IP address This policy identifies IBM Cloud Virtual Servers for VPC instances which have a floating IP assigned. If a virtual server instance has a floating IP address attached, it is reachable from the public internet regardless of whether its subnet is attached to a public gateway. It is recommended not to attach any floating IP to virtual server instances. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Virtual server instances'\n3. Select the 'Virtual server instances' reported in the alert\n4. Under the 'Network Interfaces' tab, click on the edit icon \n5. Under the 'Floating IP' dropdown, select 'Unbind current floating IP'\n6. Click on 'Save'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = defaultRootObject is empty```
dnd_test_create_hyperion_policy_ss_finding_2 Description-abe3365a-9395-4eb7-8d0f-9b3ea0735c7b This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-redshift-describe-clusters' AND json.rule ='loggingStatus.loggingEnabled is false'```
AWS Redshift database does not have audit logging enabled Audit logging is not enabled by default in Amazon Redshift. When you enable logging on your cluster, Amazon Redshift creates and uploads logs to Amazon S3 that capture data from the creation of the cluster to the present time. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to AWS Console.\n2. Go to the Amazon Redshift service\n3. On the left navigation panel, click on Clusters\n4. Click on the reported cluster\n5. Click on the Database tab and choose 'Configure Audit Logging'\n6. On Enable Audit Logging, choose 'Yes'\n7. Create a new S3 bucket or use an existing bucket\n8. Click Save.
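Audit logging can also be enabled from the AWS CLI; a minimal sketch, where the cluster identifier, bucket, and prefix are placeholders and the bucket must already grant Redshift permission to write logs:

```
# Enable audit logging for the reported Redshift cluster, writing to an existing S3 bucket
aws redshift enable-logging \
  --cluster-identifier <cluster-identifier> \
  --bucket-name <s3-bucket-for-audit-logs> \
  --s3-key-prefix <optional-prefix>
```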
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-application-gateway' AND json.rule = ['properties.httpListeners'][*].['properties.protocol'] equals Http```
Azure Application gateway listeners that allow connection requests over HTTP This policy identifies Azure Application gateways that are configured to accept connection requests over HTTP. As a best practice, use the HTTPS protocol to encrypt the communication between the application clients and the application gateways. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal\n2. Select 'All services'\n3. Select 'Application gateways', under NETWORKING\n4. Select the Application gateway that needs to be modified\n5. Select 'Listeners' under Settings\n6. To add an HTTPS listener follow the steps below; if an HTTPS listener is already present, jump to step 10\n7. Click on 'Add listener', enter 'Listener name', 'Frontend IP'\n8. Select 'Protocol' as HTTPS and fill in 'Https Settings' and 'Additional settings' and click on 'Add'\n9. Click on 'Rules' in the left pane and click on 'Request routing rule' and associate the HTTPS listener to a rule \n10. Click on the three dots on the right corner of a row containing 'Protocol' as HTTP\n11. Click on 'Delete'.
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = "permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(1521,1521)"```
Alibaba Cloud Security group allow internet traffic to Oracle DB port (1521) This policy identifies Security groups that allow inbound traffic on Oracle DB port (1521) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network. This is applicable to alibaba_cloud cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In the Inbound tab, select the rule with 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 1521, Click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with a specific IP address range\n7. Click on 'OK'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ec2-describe-route-tables' AND json.rule = "routes[?(@.state == 'active' && @.instanceId)].destinationCidrBlock contains 0.0.0.0/0"```
AWS NAT Gateways are not being utilized for the default route This policy identifies Route Tables which have NAT instances for the default route instead of NAT gateways. It is recommended to use NAT gateways as the AWS managed NAT Gateway provides a scalable and resilient method for allowing outbound internet traffic from your private VPC subnets. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNUSED_PRIVILEGES']. Mitigation of this issue can be done as follows: To create a NAT gateway:\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to VPC Dashboard\n4. In the navigation pane, choose 'NAT Gateways'\n5. Click on 'Create NAT Gateway', Specify the subnet in which to create the NAT gateway, and select the allocation ID of an Elastic IP address to associate with the NAT gateway. When you're done, Click on 'Create a NAT Gateway'. The NAT gateway displays in the console. After a few moments, its status changes to Available, after which it's ready for you to use.\n\nTo update Route Table:\nAfter you've created your NAT gateway, you must update your route tables for your private subnets to point internet traffic to the NAT gateway. We use the most specific route that matches the traffic to determine how to route the traffic.\n1. Sign into the AWS console\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated\n3. Navigate to VPC Dashboard\n4. In the navigation pane, choose 'Route Tables'\n5. Select the reported route table associated with your private subnet \n6. Choose 'Routes' and Click on 'Edit routes'\n7. Replace the current route that points to the NAT instance with a route to the NAT gateway\n8. Click on 'Save routes'.
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ecs-security-group' AND json.rule = "permissions is not empty and permissions[?(@.policy=='Accept' && @.direction=='ingress' && @.sourceCidrIp=='0.0.0.0/0')].portRange contains _Port.inRange(23,23)"```
Alibaba Cloud Security group allow internet traffic to Telnet port (23) This policy identifies Security groups that allow inbound traffic on Telnet port (23) from the public internet. As a best practice, restrict security groups to only allow permitted traffic and limit brute force attacks on your network. This is applicable to alibaba_cloud cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Elastic Compute Service\n3. In the left-side navigation pane, choose Network & Security > Security Groups\n4. Select the reported security group and then click Add Rules in the Actions column\n5. In the Inbound tab, select the rule with 'Action' as Allow, 'Authorization Object' as 0.0.0.0/0 and 'Port Range' value as 23, Click Modify in the Actions column\n6. Replace the value 0.0.0.0/0 with a specific IP address range\n7. Click on 'OK'.
```config from cloud.resource where api.name = 'gcloud-storage-buckets-list' as X; config from cloud.resource where api.name = 'gcloud-vertex-ai-aiplatform-training-pipeline' as Y; filter ' $.Y.trainingTaskOutputDirectory contains $.X.id '; show X;```
GCP Storage Bucket storing GCP Vertex AI training pipeline output model This policy identifies GCS buckets that are used to store the GCP Vertex AI training pipeline output model. GCP Vertex AI training pipeline output models are stored in the Storage bucket. A Vertex AI training pipeline output model is considered sensitive and confidential intellectual property, and its storage location should be checked regularly. The storage location should be as per your organization's security and compliance requirements. It is recommended to monitor, identify, and evaluate the storage location for the GCP Vertex AI training pipeline output model regularly to prevent unauthorized access and AI model theft. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Review and validate that the Vertex AI training pipeline output models are stored in the right Storage buckets. Move and/or delete the model and other related artifacts if they are found in an unexpected location. Review how the Vertex AI training pipeline was configured to output to an unauthorized/unapproved storage bucket.
```config from cloud.resource where api.name = 'aws-iam-list-users' AND json.rule = createDate contains 2018```
Edited_ayiumvbvgu_ui_auto_policies_tests_name lvcskhftle_ui_auto_policies_tests_descr This is applicable to aws cloud and is considered a critical severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'oci' AND api.name = 'oci-networking-security-list' AND json.rule = (ingressSecurityRules[?any((source equals 0.0.0.0/0) and (((*.destinationPortRange.min == 22 or *.destinationPortRange.max == 22) or (*.destinationPortRange.min < 22 and *.destinationPortRange.max > 22)) or (protocol equals "all") or ((tcpOptions does not exist) and (udpOptions does not exist) and (protocol does not equal 1))))] exists)```
OCI Security List allows all traffic on SSH port (22) This policy identifies OCI Security lists that allow unrestricted ingress access to port 22. It is recommended that no security list allows unrestricted ingress access to port 22. As a best practice, remove unfettered connectivity to remote console services, such as Secure Shell (SSH), to reduce a server's exposure to risk. This is applicable to oci cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. Login to the OCI Console\n2. Type the resource reported in the alert into the Search box at the top of the Console.\n3. Click the resource reported in the alert from the Resources submenu\n4. Under Resources, click Ingress Rules.\n5. If you want to add a rule, click Add Ingress Rules\n6. If you want to delete an existing rule, click the Actions icon (three dots), and then click Remove.\n7. If you want to edit an existing rule, click the Actions icon (three dots), and then click Edit.
```config from cloud.resource where cloud.type = 'azure' and api.name = 'azure-container-registry' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.adminUserEnabled is true```
Azure Container Registry with local admin account enabled This policy identifies Azure Container Registries that have the local admin account enabled. Enabling the admin account allows access to the registry through username and password, bypassing Microsoft Entra ID authentication. Disabling the local admin account improves security by enforcing exclusive use of Microsoft Entra ID identities, which provide centralized management, enhanced auditing, and better control over permissions. By relying solely on Microsoft Entra ID for authentication, the risk of unauthorized access through local credentials is mitigated, ensuring stronger protection for your container registry. As a security best practice, it is recommended to disable the local admin account for Azure Container Registries. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to your Azure portal\n2. Navigate to 'Container registries'\n3. Select the reported Container Registry\n4. Under 'Settings' select 'Access Keys'\n5. Ensure that the 'Admin user' box is unchecked.
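A scripted alternative, sketched with the Azure CLI; the registry name is a placeholder:

```
# Disable the local admin account on the reported container registry
az acr update --name <registry-name> --admin-enabled false
```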
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-cloudfront-list-distributions' AND json.rule = viewerCertificate.certificateSource does not contain cloudfront and viewerCertificate.minimumProtocolVersion does not equal TLSv1.2_2021```
AWS CloudFront web distribution using insecure TLS version This policy identifies AWS CloudFront web distributions which are configured with an insecure minimum TLS version for HTTPS communication between viewers and CloudFront. As a best practice, use the recommended TLSv1.2_2021 as the minimum protocol version in your CloudFront distribution security policies. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Navigate to CloudFront Distributions Dashboard\n3. Click on the reported distribution\n4. On 'General' tab, Click on 'Edit' button under 'Settings'\n5. On 'Edit Distribution' page, Set 'Security Policy' to TLSv1.2_2021\n6. Click on 'Save changes'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-es-describe-elasticsearch-domain' AND json.rule = processing is false and domainEndpointOptions.enforceHTTPS is false```
AWS Elasticsearch domain is not configured with HTTPS This policy identifies Elasticsearch domains that are not configured with HTTPS. Amazon Elasticsearch domains allow all traffic to be submitted over HTTPS, ensuring all communications between application and domain are encrypted. It is recommended to enable HTTPS so that all communication between the application and all data access goes across an encrypted communication channel to eliminate man-in-the-middle attacks. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.\n3. Navigate to the Elasticsearch dashboard\n4. Click on reported Elasticsearch domain\n5. Click on 'Actions', from drop-down choose 'Modify encryptions'\n6. In 'Modify encryptions' page, Select 'Require HTTPS for all traffic to the domain'\n7. Click on Submit.
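If you prefer the CLI, a hedged sketch of the same change; the domain name is a placeholder, and newer OpenSearch domains use the analogous aws opensearch update-domain-config command instead:

```
# Require HTTPS for all traffic to the reported Elasticsearch domain
aws es update-elasticsearch-domain-config \
  --domain-name <elasticsearch-domain-name> \
  --domain-endpoint-options EnforceHTTPS=true
```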
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-key-vault-list' AND json.rule = diagnosticSettings.value[*] size equals 0```
Azure Key vaults diagnostics logs are disabled This policy identifies Azure Key vaults which have diagnostic logs disabled. Enabling diagnostic logs gives visibility into the data plane, thus giving the organization the ability to detect reconnaissance, authorization attempts or other malicious activity. It is recommended to enable diagnostic log settings for Azure Key vaults. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal.\n2. Navigate to 'Key vaults', and select the reported key vault from the list\n3. Select 'Diagnostic settings' under the 'Monitoring' section\n4. Click on '+Add diagnostic setting'\n5. Specify a 'Diagnostic settings name',\n6. Under the 'Category details' section, select the type of 'Log' that you want to enable\n7. Under the section 'Destination details',\na. If you select 'Send to Log Analytics', select the 'Subscription' and 'Log Analytics workspace'\nb. If you set 'Archive to storage account', select the 'Subscription', 'Storage account' and set the 'Retention (days)'\nc. If you set 'Stream to an event hub', select the 'Subscription', 'Event hub namespace', 'Event hub name' and set the 'Event hub policy name'\n8. Click on 'Save'.
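A scripted sketch that sends Key Vault AuditEvent logs to a Log Analytics workspace; the setting name, key vault resource ID, and workspace ID are placeholders:

```
# Create a diagnostic setting streaming audit events to Log Analytics
az monitor diagnostic-settings create \
  --name <diagnostic-setting-name> \
  --resource <key-vault-resource-id> \
  --workspace <log-analytics-workspace-id> \
  --logs '[{"category": "AuditEvent", "enabled": true}]'
```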
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = "databaseVersion contains POSTGRES and (settings.databaseFlags[?(@.name=='log_lock_waits')] does not exist or settings.databaseFlags[?(@.name=='log_lock_waits')].value equals off)"```
GCP PostgreSQL instance database flag log_lock_waits is disabled This policy identifies PostgreSQL database instances in which the database flag log_lock_waits is not set. Enabling the log_lock_waits flag can be used to identify poor performance due to locking delays or if a specially-crafted SQL is attempting to starve resources through holding locks for excessive amounts of time. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate to the SQL Instances page\n3. Click on the reported PostgreSQL instance\n4. Click EDIT\nNOTE: If the instance is stopped, You need to START the instance first to edit the configurations, then Click on EDIT.\n5. If the flag has not been set on the instance, \nUnder 'Configuration options', click on 'Add item' in the 'Flags' section, choose the flag 'log_lock_waits' from the drop-down menu and set the value as 'On'\nOR\nIf the flag has been set to off, Under 'Configuration options', In the 'Flags' section choose the flag 'log_lock_waits' and set the value as 'On'\n6. Click Save.
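The flag can also be set with gcloud; a hedged sketch where the instance name is a placeholder. Note that --database-flags replaces all flags on the instance, so include any existing flags in the same call:

```
# Turn on lock-wait logging for the reported Cloud SQL PostgreSQL instance
gcloud sql instances patch <INSTANCE_NAME> \
  --database-flags log_lock_waits=on
```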
```config from cloud.resource where api.name = 'aws-s3api-get-bucket-acl' as X; config from cloud.resource where api.name = 'aws-sagemaker-training-job' as Y; filter '$.Y.OutputDataConfig.bucketName equals $.X.bucketName'; show X;```
AWS S3 bucket used for storing AWS Sagemaker training job output This policy identifies the AWS S3 bucket used for storing AWS Sagemaker training job output. S3 buckets hold the results and artifacts generated from training machine learning models in Sagemaker. Ensuring proper configuration and access control is crucial to maintaining the security and integrity of the training output. Improperly secured S3 buckets used for storing AWS Sagemaker training output can lead to unauthorized access, data breaches, and potential exposure of sensitive model information. It is recommended to implement strict access controls, enable encryption, and audit permissions to secure AWS S3 buckets for AWS Sagemaker training job output and ensure compliance. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To protect the S3 buckets utilized by the Sagemaker training job, please refer to the following link for recommended best practices\nhttps://docs.aws.amazon.com/AmazonS3/latest/userguide/security-best-practices.html.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-server' AND json.rule = 'properties.sslEnforcement contains Disabled'```
Azure PostgreSQL database server with SSL connection disabled This policy identifies PostgreSQL database servers for which SSL enforce status is disabled. SSL connectivity helps to provide a new layer of security by connecting the database server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between the database server and client applications helps protect against "man in the middle" attacks by encrypting the data stream between the server and application. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure console\n2. Navigate to 'Azure Database for PostgreSQL servers' dashboard\n3. Click on the alerted database name\n4. Go to 'Connection security' under 'Settings' block.\n5. In the 'SSL settings' block, for the 'Enforce SSL connection' field, click on 'Enabled' on the toggle button\n6. Click on the 'Save' button from the top menu to save the change.
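The same enforcement can be applied from the Azure CLI; a minimal sketch, with the resource group and server name as placeholders:

```
# Enforce SSL connections on the reported Azure Database for PostgreSQL server
az postgres server update \
  --resource-group <resource-group> \
  --name <postgresql-server-name> \
  --ssl-enforcement Enabled
```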
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-container-describe-clusters' AND json.rule = nodePools[?any(config.shieldedInstanceConfig.enableIntegrityMonitoring does not exist or config.shieldedInstanceConfig.enableIntegrityMonitoring is false)] exists```
GCP Kubernetes cluster shielded GKE node with integrity monitoring disabled This policy identifies GCP Kubernetes cluster shielded GKE nodes that are not enabled with Integrity Monitoring. Integrity Monitoring provides active alerting for Shielded GKE nodes which allows administrators to respond to integrity failures and prevent compromised nodes from being deployed into the cluster. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Note: Once a Node pool is provisioned, it cannot be updated to enable Integrity monitoring. You must create new Node pools within the cluster with Integrity monitoring enabled. You will also need to migrate workloads from existing non-conforming Node pools to the newly created Node pool, then delete the non-conforming pools.\n\nTo create a node pool with Integrity monitoring enabled follow the below steps,\n\n1. Log in to gcloud console\n2. Navigate to service 'Kubernetes Engine'\n3. Select the alerted cluster and click 'ADD NODE POOL'\n4. Ensure that the 'Enable integrity monitoring' checkbox is checked under the 'Shielded options' in section 'Security'\n5. Click on 'CREATE'..
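Creating the replacement node pool can also be scripted; a hedged gcloud sketch where the pool, cluster, and zone names are placeholders, and Secure Boot is optional but commonly enabled alongside integrity monitoring:

```
# Create a new node pool with Shielded GKE node integrity monitoring enabled
gcloud container node-pools create <NEW_POOL_NAME> \
  --cluster <CLUSTER_NAME> \
  --zone <ZONE> \
  --shielded-integrity-monitoring \
  --shielded-secure-boot
```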
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-acm-describe-certificate' AND json.rule = 'type does not equal IMPORTED and (options.certificateTransparencyLoggingPreference equals DISABLED or options.certificateTransparencyLoggingPreference does not exist) and status equals ISSUED and _DateTime.ageInDays($.notAfter) < 1'```
AWS Certificate Manager (ACM) has certificates with Certificate Transparency Logging disabled This policy identifies AWS Certificate Manager certificates in which Certificate Transparency Logging is disabled. AWS Certificate Manager (ACM) is the preferred tool to provision, manage, and deploy your server certificates. Certificate Transparency Logging is used to guard against SSL/TLS certificates that are issued by mistake or by a compromised CA; some browsers require that public certificates issued for your domain also be recorded. This policy generates alerts for certificates which have transparency logging disabled. As a best practice, it is recommended to enable Transparency logging for all certificates. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: You cannot currently use the console to opt out of or into transparency logging. It's recommended to use the command line utility to enable transparency logging.\n\nRemediation CLI:\n1. Use the below command to list ACM certificates\n aws acm list-certificates\n2. Note the 'CertificateArn' of the reported ACM certificate\n3. Use the below command to ENABLE Certificate Transparency Logging\n aws acm update-certificate-options --certificate-arn <certificateARN> --options CertificateTransparencyLoggingPreference=ENABLED\nwhere 'CertificateArn' is the value captured in step 2.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-cognitive-services-account' AND json.rule = properties.provisioningState equal ignore case Succeeded and (properties.disableLocalAuth does not exist or properties.disableLocalAuth is false)```
Azure Cognitive Services account configured with local authentication This policy identifies Azure Cognitive Services accounts that are configured with local authentication methods instead of AD identities. Local authentication allows users to access the service using a local account and password, rather than an Azure Active Directory (Azure AD) account. Disabling local authentication methods improves security by ensuring that Cognitive Services accounts require Active Directory identities exclusively for authentication. It is recommended to disable local authentication methods on your Cognitive Services account, instead use Azure Active Directory identities. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To disable local authentication in Azure AI Services, follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/ai-services/disable-local-auth.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-dns-managed-zone' AND json.rule = 'dnssecConfig.defaultKeySpecs[*].keyType contains keySigning and dnssecConfig.defaultKeySpecs[*].algorithm contains rsasha1'```
GCP Cloud DNS zones using RSASHA1 algorithm for DNSSEC key-signing This policy identifies GCP Cloud DNS zones which are using the RSASHA1 algorithm for DNSSEC key-signing. DNSSEC is a feature of the Domain Name System that authenticates responses to domain name lookups and also prevents attackers from manipulating or poisoning the responses to DNS requests. The algorithm used for key signing should therefore be a recommended one and should not be weak. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Currently, DNSSEC key-signing can be updated using the command line interface only.\n1. If you need to change the settings for a managed zone where it has been enabled, you have to turn DNSSEC off and then re-enable it with different settings. To turn off DNSSEC, run the following command:\ngcloud dns managed-zones update <ZONE_NAME> --dnssec-state off\n2. To update key-signing for a reported managed DNS Zone, run the following command:\ngcloud dns managed-zones update <ZONE_NAME> --dnssec-state on --ksk-algorithm <KSK_ALGORITHM> --ksk-key-length <KSK_KEY_LENGTH> --zsk-algorithm <ZSK_ALGORITHM> --zsk-key-length <ZSK_KEY_LENGTH> --denial-of-existence <DENIAL_OF_EXISTENCE>.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = properties.state equal ignore case "Running" AND kind contains "functionapp" AND kind does not contain "workflowapp" AND kind does not equal "app" AND properties.httpsOnly is false```
Azure Function App doesn't redirect HTTP to HTTPS This policy identifies Azure Function App which doesn't redirect HTTP to HTTPS. Azure Function App can be accessed by anyone using non-secure HTTP links by default. Non-secure HTTP requests can be restricted and all HTTP requests redirected to the secure HTTPS port. It is recommended to enforce HTTPS-only traffic. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Log in to the Azure portal\n2. Navigate to Function App\n3. Click on the reported Function App\n4. Under Setting section, Click on 'TLS/SSL settings'\n5. In 'Protocol Settings', Set 'HTTPS Only' to 'On'.
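The same setting can be flipped with the Azure CLI; a minimal sketch, with the app and resource group names as placeholders:

```
# Force HTTPS-only traffic on the reported Function App
az functionapp update \
  --name <function-app-name> \
  --resource-group <resource-group> \
  --set httpsOnly=true
```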
```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case "access" and roles[?any( role_id contains "crn:v1:bluemix:public:iam::::role:Administrator" )] exists and resources[?any( attributes[?any( name equal ignore case "serviceName" and value equal ignore case "databases-for-postgresql" and operator is member of ("stringEquals", "stringMatch"))] exists and attributes[?any( name is member of ("region","resourceGroupId","serviceInstance"))] does not exist )] exists and subjects[?any( attributes[?any( name contains "iam_id" and value contains "IBMid")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-user' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.iam_id'; show Y;```
IBM Cloud user with IAM policies provide administrative privileges for Databases for PostgreSQL service This policy identifies IBM Cloud users with administrator role permission for Databases for PostgreSQL service. A user has full platform control as an administrator, including the ability to assign other users access policies and modify deployment passwords. If a user with administrator privilege becomes compromised, it may result in a compromised database. As a security best practice, it is advised to provide the least privilege access, such as allowing only the rights necessary to complete a task, instead of excessive permissions. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', click on 'Users' in the left panel.\n3. Select the user which is reported and you want to edit access for.\n4. Go to 'Access' tab and under the 'Access policies' section> Click on three dots on the right corner of a row for the policy which is having Administrator permission on 'Databases for PostgreSQL' service.\n5. Click on Remove OR Edit to assign limited permission to the policy.\n6. Review the policy details that you're about to Edit/Remove, and confirm by clicking Save/Remove..
```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-s3api-get-bucket-acl' AND json.rule = "(publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration does not exist) or ((publicAccessBlockConfiguration does not exist or publicAccessBlockConfiguration.blockPublicAcls is false) and (accountLevelPublicAccessBlockConfiguration does not exist or accountLevelPublicAccessBlockConfiguration.blockPublicAcls is false)) or ((publicAccessBlockConfiguration does not exist or publicAccessBlockConfiguration.ignorePublicAcls is false) and (accountLevelPublicAccessBlockConfiguration does not exist or accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false)) or ((publicAccessBlockConfiguration does not exist or publicAccessBlockConfiguration.blockPublicPolicy is false) and (accountLevelPublicAccessBlockConfiguration does not exist or accountLevelPublicAccessBlockConfiguration.blockPublicPolicy is false)) or ((publicAccessBlockConfiguration does not exist or publicAccessBlockConfiguration.restrictPublicBuckets is false) and (accountLevelPublicAccessBlockConfiguration does not exist or accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false))"```
AWS S3 Buckets Block public access setting disabled This policy identifies AWS S3 buckets which have the 'Block public access' setting disabled. Amazon S3 provides the 'Block public access' setting to manage public access of AWS S3 buckets. Enabling the 'Block public access' setting prevents S3 resource data from accidentally or maliciously becoming publicly accessible. It is highly recommended to enable the 'Block public access' setting for all AWS S3 buckets appropriately. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. Login to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the 'S3' resource reported in the alert\n4. Click on the 'Permissions'\n5. Under 'Block public access' click on 'Edit'\n6. Select 'Block all public access' checkbox\n7. Click on Save\n8. 'Confirm' the changes\n\nNote: Make sure updating the 'Block public access' setting does not affect S3 bucket data access.
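The bucket-level setting can also be applied with the AWS CLI; a minimal sketch, where the bucket name is a placeholder and you should first confirm nothing legitimate depends on public access:

```
# Enable all four Block Public Access settings on the reported bucket
aws s3api put-public-access-block \
  --bucket <bucket-name> \
  --public-access-block-configuration \
    BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```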
```config from cloud.resource where api.name = "aws-ec2-describe-instances" AND json.rule = architecture contains "foo"```
API automation policy kuzde This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where api.name = 'aws-account-management-alternate-contact' group by account as X; filter ' AlternateContactType is not member of ("SECURITY") ' ;```
AWS account security contact information is not set This policy identifies AWS accounts which have not set security contact information. By providing dedicated security-specific contact information, AWS can directly communicate security advisories to the team responsible for handling security-related issues. Failure to specify a security contact in AWS risks missing critical advisories, leading to delayed incident response and increased vulnerability exposure. It is recommended to set security contact information to receive notifications. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Refer to the following link to add or edit the alternate contacts for any AWS account in your organization\n\nhttps://docs.aws.amazon.com/accounts/latest/reference/manage-acct-update-contact-alternate.html.
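Alternatively, the security contact can be set with the AWS CLI; a hedged sketch with illustrative contact details that you would replace with your own:

```
# Register a dedicated SECURITY alternate contact for the account
aws account put-alternate-contact \
  --alternate-contact-type SECURITY \
  --name "Security Team" \
  --title "Security Lead" \
  --email-address security@example.com \
  --phone-number "+1-555-0100"
```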
```config from cloud.resource where cloud.type = 'aws' and api.name='aws-ec2-describe-snapshots' AND json.rule='createVolumePermissions[*].group contains all'```
AWS EBS snapshots are accessible to public This policy identifies EC2 EBS snapshots which are accessible to the public. Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with Amazon EC2 instances in the AWS Cloud. If EBS snapshots are inadvertently shared publicly, any unauthorized user with AWS console access can access the snapshots and the sensitive data they contain. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['INTERNET_EXPOSURE']. Mitigation of this issue can be done as follows: 1. Log in to the AWS console\n2. In the console, select the specific region from the region drop down on the top right corner, for which the alert is generated\n3. Navigate to 'EC2' service.\n4. Under the 'Elastic Block Storage', click on the 'Snapshots'.\n5. For the specific Snapshots, change the value of field 'Property' to 'Private'.\n6. Under the section 'Encryption Details', set the value of 'Encryption Enabled' to 'Yes'.
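Public sharing can also be revoked via the AWS CLI; a minimal sketch, with the snapshot ID and region as placeholders:

```
# Remove the "all" (public) group from the snapshot's createVolumePermission
aws ec2 modify-snapshot-attribute \
  --snapshot-id <snapshot-id> \
  --attribute createVolumePermission \
  --operation-type remove \
  --group-names all \
  --region <region>
```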
```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-ec2-describe-security-groups' AND json.rule = isShared is false and (ipPermissions[?any((ipRanges[*] contains 0.0.0.0/0 or ipv6Ranges[*].cidrIpv6 contains ::/0) and ((toPort == 3389 or fromPort == 3389) or (toPort > 3389 and fromPort < 3389)))] exists)```
AWS Security Group allows all traffic on RDP port (3389) This policy identifies Security groups that allow all traffic on RDP port 3389. Doing so may allow a bad actor to brute force their way into the system and potentially get access to the entire network. Review your list of security group rules to ensure that your resources are not exposed. As a best practice, restrict RDP solely to known static IP addresses. Limit the access list to include known hosts, services, or specific employees only. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Before making any changes, please check the impact to your applications/services. If the Security Group reported indeed needs to restrict all traffic, follow the instructions below:\n1. Log in to the AWS Console\n2. Navigate to the 'VPC' service\n3. Select the 'Security Group' reported in the alert\n4. Click on the 'Inbound Rule'\n5. Remove the rule which has 'Source' value as 0.0.0.0/0 or ::/0 and 'Port Range' value as 3389 (or range containing 3389).
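The offending rule can also be revoked from the AWS CLI; a minimal sketch where the security group ID is a placeholder, and an equivalent ::/0 IPv6 rule, if present, can be revoked with the --ip-permissions form of the same command:

```
# Revoke the open-to-the-world RDP ingress rule
aws ec2 revoke-security-group-ingress \
  --group-id <security-group-id> \
  --protocol tcp \
  --port 3389 \
  --cidr 0.0.0.0/0
```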
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-monitor-log-profiles-list' AND json.rule = isLegacy is true and (properties.retentionPolicy does not exist or properties.retentionPolicy.enabled is false or (properties.retentionPolicy.enabled is true and (properties.retentionPolicy.days does not equal 0 and properties.retentionPolicy.days < 365)))```
Azure Activity Log retention should not be set to less than 365 days This policy identifies Log profiles which have log retention set to less than 365 days. The Log profile controls how your Activity Log is exported and retained. Since the average time to detect a breach is over 200 days, it is recommended to retain your activity log for 365 days or more in order to have time to respond to any incidents. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If a log profile already exists, you first must remove the existing log profile, and then create a new log profile.\nFollow this URL to create a new log profile:\nhttps://learn.microsoft.com/en-us/azure/azure-monitor/essentials/activity-log?tabs=cli#managing-legacy-log-profiles\nMake sure you set retention to 365 or more days.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-ssm-resource-compliance-summary' AND json.rule = Status equals "NON_COMPLIANT" and ComplianceType contains "Patch" and ResourceType contains "ManagedInstance" and (NonCompliantSummary.SeveritySummary.CriticalCount greater than 0 or NonCompliantSummary.SeveritySummary.HighCount greater than 0)```
AWS Systems Manager EC2 instance having NON_COMPLIANT patch compliance status This policy identifies if the AWS Systems Manager patch compliance status is "NON_COMPLIANT" with critical or high severity for managed instances. Instances labeled non-compliant might lack essential patches for security, stability, or meeting standards. Non-compliant instances pose security risks because attackers often target unpatched systems to exploit known weaknesses. As a security best practice, it's recommended to apply any missing patches to the affected instances. This is applicable to aws cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To remediate the non-compliant managed instances please refer to the below URL:\n\nhttps://docs.aws.amazon.com/systems-manager/latest/userguide/patch-manager-compliance-remediation.html.
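Before patching, it can help to enumerate the affected instances. The sketch below assumes boto3 and uses the Systems Manager compliance summaries API; the filter and field names mirror the attributes referenced by the query above, and the region is a placeholder.

```python
# Minimal sketch: list managed instances whose Patch compliance is NON_COMPLIANT,
# with counts of critical and high severity findings. boto3 is assumed.
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

paginator = ssm.get_paginator("list_resource_compliance_summaries")
for page in paginator.paginate(
    Filters=[{"Key": "ComplianceType", "Values": ["Patch"], "Type": "EQUAL"}]
):
    for item in page["ResourceComplianceSummaryItems"]:
        if item["Status"] == "NON_COMPLIANT":
            severity = item["NonCompliantSummary"]["SeveritySummary"]
            print(item["ResourceId"], severity["CriticalCount"], severity["HighCount"])
```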
```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-iam-get-policy-version' AND json.rule = document.Statement[?any(Effect equals Allow and Resource equals * and (Action contains kms:* or Action contains kms:Decrypt or Action contains kms:ReEncryptFrom) and Condition does not exist)] exists```
AWS IAM policy allows decryption actions on all KMS keys This policy identifies IAM policies that allow decryption actions on all KMS keys. Instead of granting permissions for all keys, determine the minimum set of keys that users need to access encrypted data. You should grant to identities only the kms:Decrypt or kms:ReEncryptFrom permissions and only for the keys that are required to perform a task. By adopting the principle of least privilege, you can reduce the risk of unintended disclosure of your data. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['KEYS_AND_SECRETS']. Mitigation of this issue can be done as follows: To allow a user to encrypt and decrypt with any CMK in a specific AWS account; refer following example:\nhttps://docs.aws.amazon.com/kms/latest/developerguide/customer-managed-policies.html#iam-policy-example-encrypt-decrypt-one-account\n\nTo allow a user to encrypt and decrypt with any CMK in a specific AWS account and Region; refer following example:\nhttps://docs.aws.amazon.com/kms/latest/developerguide/customer-managed-policies.html#iam-policy-example-encrypt-decrypt-one-account-one-region\n\nTo allow a user to encrypt and decrypt with specific CMKs; refer following example:\nhttps://docs.aws.amazon.com/kms/latest/developerguide/customer-managed-policies.html#iam-policy-example-encrypt-decrypt-specific-cmks.
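For contrast with a wildcard grant, a least-privilege statement scopes the decryption actions to specific key ARNs. The sketch below builds such a policy document as a Python dict; the account ID and key ID are hypothetical placeholders.

```python
# Minimal sketch: an IAM policy document that allows decryption only on one
# specific KMS key instead of Resource "*". The key ARN is a placeholder.
import json

least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["kms:Decrypt", "kms:ReEncryptFrom"],
            "Resource": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
        }
    ],
}

print(json.dumps(least_privilege_policy, indent=2))
```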
```config from cloud.resource where api.name = 'azure-dns-recordsets' AND json.rule = type contains CNAME and properties.CNAMERecord.cname contains "web.core.windows.net" as X; config from cloud.resource where api.name = 'azure-storage-account-list' AND json.rule = properties.provisioningState equal ignore case Succeeded and properties.primaryEndpoints.web exists as Y; filter 'not ($.Y.properties.primaryEndpoints.web contains $.X.properties.CNAMERecord.cname) '; show X;```
Azure DNS Zone having dangling DNS Record vulnerable to subdomain takeover associated with Azure Storage account blob This policy identifies DNS records within an Azure DNS zone that point to Azure Storage Account blobs that no longer exist. A dangling DNS attack happens when a DNS record points to a cloud resource that has been deleted or is inactive, making the subdomain vulnerable to takeover. An attacker can exploit this by creating a new resource with the same name and taking control of the subdomain to serve malicious content. This allows attackers to host harmful content under your subdomain, which could lead to phishing attacks, data breaches, and damage to your reputation. The risk arises because the DNS record still references a non-existent resource, which unauthorized individuals can re-associate with their own resources. As a security best practice, it is recommended to routinely audit DNS zones and remove or update DNS records pointing to non-existing Azure Storage Account blobs. This is applicable to azure cloud and is considered a high severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the Azure Portal and search for 'DNS zones'\n2. Select 'DNS zones' from the search results\n3. Select the DNS zone associated with the reported DNS record\n4. On the left-hand menu, under 'DNS Management,' select 'Recordsets'\n5. Locate and select the reported DNS record\n6. Update or remove the DNS Record if no longer necessary.
```config from cloud.resource where cloud.type = 'aws' and api.name = 'aws-s3api-get-bucket-acl' AND json.rule = 'acl.grants[*].grantee contains AuthenticatedUsers'```
AWS S3 buckets are accessible to any authenticated user This policy identifies S3 buckets accessible to any authenticated AWS user. Amazon S3 allows customers to store and retrieve any type of content from anywhere on the web. Often, customers have legitimate reasons to expose an S3 bucket publicly, for example to host website content. However, these buckets often contain highly sensitive enterprise data which, if left accessible to anyone with valid AWS credentials, may result in sensitive data leaks. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Login to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the 'S3' resource reported in the alert\n4. Click on 'Permissions'\n5. Under 'Public access', click on 'Any AWS user' and uncheck all items\n6. Click on Save.
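When the grant to the AuthenticatedUsers group comes from the bucket ACL, resetting the ACL can be scripted. Below is a minimal sketch assuming boto3 and a placeholder bucket name; confirm first that no legitimate workflow depends on the existing grants.

```python
# Minimal sketch: reset a bucket ACL to 'private', removing grants to the
# AuthenticatedUsers group. boto3 is assumed; the bucket name is a placeholder.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_acl(Bucket="example-reported-bucket", ACL="private")
```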
```config from cloud.resource where api.name = 'azure-devices-iot-hub-resource' AND json.rule = properties.provisioningState equal ignore case "Succeeded" as X; config from cloud.resource where api.name = 'azure-iot-security-solutions' AND json.rule = properties.status equal ignore case "Enabled" as Y; filter 'not $.Y.properties.iotHubs contains $.X.id'; show X;```
Azure Microsoft Defender for IoT Hub not enabled This policy identifies Azure IoT Hubs without Microsoft Defender for IoT enabled. Azure IoT Hub is a managed service that acts as a central message hub for communication between IoT applications and IoT devices. Without Microsoft Defender for IoT enabled, IoT devices and hubs are more vulnerable to security threats. This increases the risk of unauthorized access, data breaches, and compromised IoT devices, which can lead to operational and security challenges. As a best practice, it is recommended to enable Microsoft Defender for IoT on your Azure IoT Hub. This enhances the security posture of your IoT solutions by providing continuous monitoring, threat detection, and automated response capabilities to protect against cyber threats. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable Microsoft Defender for IoT on Azure IoT Hub follow the below URL:\nhttps://learn.microsoft.com/en-us/azure/defender-for-iot/device-builders/quickstart-onboard-iot-hub#enable-defender-for-iot-on-an-existing-iot-hub.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = "databaseVersion contains POSTGRES and settings.databaseFlags[?(@.name=='log_min_error_statement')] does not exist"```
GCP PostgreSQL instance database flag log_min_error_statement is not set This policy identifies PostgreSQL database instances in which the database flag log_min_error_statement is not set. The log_min_error_statement flag defines the minimum message severity level that is considered an error statement. Messages for error statements are logged with the SQL statement. Valid values include DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1, INFO, NOTICE, WARNING, ERROR, LOG, FATAL, and PANIC. Each severity level includes the subsequent levels. log_min_error_statement flag value changes should only be made in accordance with the organization's logging policy. Proper auditing can help in troubleshooting operational problems and also permits forensic analysis. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: It is recommended to set the 'log_min_error_statement' flag for the PostgreSQL database as per your organization's logging policy.\n\nTo update the database flag of a GCP PostgreSQL instance, please refer to the URL given below and set the log_min_error_statement flag as needed:\nhttps://cloud.google.com/sql/docs/postgres/flags#set_a_database_flag.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-service-fabric-cluster' AND json.rule = properties.provisioningState equal ignore case Succeeded and ((properties.fabricSettings[*].name does not equal ignore case "Security" or properties.fabricSettings[*].parameters[*].name does not equal ignore case "ClusterProtectionLevel") or (properties.fabricSettings[?any(name equal ignore case "Security" and parameters[?any(name equal ignore case "ClusterProtectionLevel" and value equal ignore case "None")] exists )] exists))```
Azure Service Fabric cluster not configured with cluster protection level security This policy identifies Service Fabric clusters that are not configured with cluster protection level security. Service Fabric provides levels of protection for node-to-node communication using a primary cluster certificate. It is recommended to set the protection level to ensure that all node-to-node messages are encrypted and digitally signed. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure portal\n2. Navigate to 'Service Fabric cluster'\n3. Click on the reported Service Fabric cluster\n4. Select 'Custom fabric settings' under 'Settings' from the left panel\n5. Make sure a fabric setting exists in the 'Security' section with the 'ClusterProtectionLevel' property set to 'EncryptAndSign'.\n\nNote: Service Fabric provides three levels of protection (None, Sign and EncryptAndSign) for node-to-node communication using a primary cluster certificate. Set the protection level to ensure that all node-to-node messages are encrypted and digitally signed.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-kubernetes-cluster' AND json.rule = properties.powerState.code equal ignore case Running and properties.apiServerAccessProfile.enablePrivateCluster is false and (properties.apiServerAccessProfile.authorizedIPRanges does not exist or properties.apiServerAccessProfile.authorizedIPRanges is empty)```
Azure AKS cluster configured with overly permissive API server access This policy identifies AKS clusters configured with overly permissive API server access. In Kubernetes, the API server receives requests to perform actions in the cluster such as creating resources or scaling the number of nodes. To enhance cluster security and minimize attacks, the API server should only be accessible from a limited set of IP address ranges; a request made to the API server from an IP address outside these authorized ranges is blocked. It is recommended to configure the AKS cluster with defined IP address ranges that may communicate with the API server. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To configure AKS cluster with defined IP address ranges to communicate with the API server; refer below URL:\nhttps://docs.microsoft.com/en-us/azure/aks/api-server-authorized-ip-ranges#update-disable-and-find-authorized-ip-ranges-using-azure-portal.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = state equals "RUNNABLE" and ipAddresses[?any( type equal ignore case "PRIMARY" )] exists and settings.ipConfiguration.authorizedNetworks is empty```
GCP SQL Instance with public IP address does not have authorized network configured This policy identifies GCP Cloud SQL instances with public IP addresses that do not have an authorized network configured. A SQL instance can be connected securely by making use of the Cloud SQL Proxy or by adding the client's public addresses as an authorized network to the SQL instance. If the client application is connecting directly to a Cloud SQL instance on its public IP address, the client's external IP address needs to be added as an authorized network to allow the secure connection. It is recommended to add authorized networks for your SQL instance to minimize the access vector. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: If a public IP is not needed for the SQL instance, it is recommended to remove the public IP from the instance. Any changes to the public IP should be made according to the organization's needs and policies.\n\nTo remove the public IP from a SQL instance, please refer to the URLs given below:\nFor MySQL: https://cloud.google.com/sql/docs/mysql/configure-ip#disable-public\nFor PostgreSQL: https://cloud.google.com/sql/docs/postgres/configure-ip#disable-public\nFor SQL Server: https://cloud.google.com/sql/docs/sqlserver/configure-ip#disable-public\n\nIf it is deemed that the instance needs a public IP, it is recommended to add restrictive authorized networks to limit allowed public connections to the instance.\n\nTo configure authorized networks for a SQL instance, please refer to the URLs given below:\nFor MySQL: https://cloud.google.com/sql/docs/mysql/authorize-networks\nFor PostgreSQL: https://cloud.google.com/sql/docs/postgres/authorize-networks\nFor SQL Server: https://cloud.google.com/sql/docs/sqlserver/authorize-networks.
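For scripted remediation, an authorized network can be added through the Cloud SQL Admin API. The sketch below uses the generic Google API Python client (google-api-python-client) and assumes application-default credentials; the project ID, instance name, and CIDR range are hypothetical placeholders.

```python
# Minimal sketch: add an authorized network to a Cloud SQL instance via the
# Cloud SQL Admin API. Project, instance, and CIDR values are placeholders.
from googleapiclient import discovery

service = discovery.build("sqladmin", "v1beta4")

body = {
    "settings": {
        "ipConfiguration": {
            "authorizedNetworks": [
                {"name": "office-egress", "value": "203.0.113.0/24"}  # placeholder CIDR
            ]
        }
    }
}

operation = (
    service.instances()
    .patch(project="example-project", instance="example-instance", body=body)
    .execute()
)
print(operation["name"])  # long-running operation name
```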
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-compute-image' AND json.rule = iamPolicy.bindings[?any( members contains "allAuthenticatedUsers" )] exists```
GCP OS Image is publicly accessible This policy identifies GCP OS Images that are publicly accessible. Custom GCP OS images are user-created operating system images tailored to specific needs and configurations. Making these images public can expose sensitive data, proprietary software, and security vulnerabilities. This can lead to unauthorized access, data breaches, and system exploitation, compromising your infrastructure's security and integrity. It is recommended to keep OS images private unless required for organizational needs. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: It is recommended to review and add appropriate, but restrictive roles before revoking access.\n\nTo revoke access from 'allAuthenticatedUsers', follow the below mentioned steps:\n1. Login to the GCP console\n2. Navigate to 'Compute Engine' and then 'Images'\n3. Select the reported image using the check box\n4. Click on the 'PERMISSIONS' tab in the right bar\n5. Filter for 'allAuthenticatedUsers'\n6. Click on the 'Remove principal' button (bin icon)\n7. Select 'Remove allAuthenticatedUsers from all roles on this resource. They may still have access via inherited roles.'\n8. Click 'Remove'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-postgresql-server' AND json.rule = properties.publicNetworkAccess equal ignore case Enabled and firewallRules.value[?any(properties.startIpAddress equals 0.0.0.0 and properties.endIpAddress equals 255.255.255.255)] exists```
Azure PostgreSQL Database Server Firewall rule allow access to all IPV4 address This policy identifies Azure PostgreSQL Database Servers that have a firewall rule allowing access from all IPv4 addresses. A firewall rule with a start IP of 0.0.0.0 and an end IP of 255.255.255.255 allows access to the server from any host on the internet. It is highly recommended not to use this type of firewall rule on any PostgreSQL Database Server. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to Azure Portal\n2. Click on 'All services' in the left navigation\n3. Click on 'Azure Database for PostgreSQL servers' under Databases\n4. Click on the reported server instance\n5. Click on 'Connection security' under Settings\n6. Delete the rule which has 'Start IP' as 0.0.0.0 and 'End IP' as 255.255.255.255 under the 'Firewall rule name' section\n7. Click on 'Save'.
```config from cloud.resource where api.name = 'gcloud-compute-target-https-proxies' as X; config from cloud.resource where api.name = 'gcloud-compute-ssl-policies' as Y; filter " $.X.sslPolicy does not exist or ($.Y.profile equals COMPATIBLE and $.Y.selfLink contains $.X.sslPolicy) or ( ($.Y.profile equals MODERN or $.Y.profile equals CUSTOM) and $.Y.minTlsVersion does not equal TLS_1_2 and $.Y.selfLink contains $.X.sslPolicy ) or ( $.Y.profile equals CUSTOM and ( $.Y.enabledFeatures[*] contains TLS_RSA_WITH_AES_128_GCM_SHA256 or $.Y.enabledFeatures[*] contains TLS_RSA_WITH_AES_256_GCM_SHA384 or $.Y.enabledFeatures[*] contains TLS_RSA_WITH_AES_128_CBC_SHA or $.Y.enabledFeatures[*] contains TLS_RSA_WITH_AES_256_CBC_SHA or $.Y.enabledFeatures[*] contains TLS_RSA_WITH_3DES_EDE_CBC_SHA ) and $.Y.selfLink contains $.X.sslPolicy ) "; show X;```
GCP Load Balancer HTTPS proxy permits SSL policies with weak cipher suites This policy identifies GCP HTTPS Load Balancers that permit SSL policies with weak cipher suites. GCP default SSL policy uses a minimum TLS version of 1.0 and a Compatible profile, which allows the widest range of insecure cipher suites. To prevent usage of insecure features, SSL policies should use at least TLS 1.2 with the MODERN profile; or the RESTRICTED profile, because it effectively requires clients to use TLS 1.2 regardless of the chosen minimum TLS version; or a CUSTOM profile that does not support any of the following features: TLS_RSA_WITH_AES_128_GCM_SHA256 TLS_RSA_WITH_AES_256_GCM_SHA384 TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA TLS_RSA_WITH_3DES_EDE_CBC_SHA This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: The 'GCP default' SSL policy uses a minimum TLS version of 1.0 and a Compatible profile, which allows the broadest range of insecure cipher suites and is not modifiable. If this SSL policy is attached to the target HTTPS Proxy Load Balancer, updating the proxy with a more secured SSL policy is recommended.\n\nTo create a new SSL policy, refer to the following URL:\nhttps://cloud.google.com/load-balancing/docs/use-ssl-policies#creating_ssl_policies\n\nTo modify the existing insecure SSL policy attached to the Target HTTPS Proxy:\n1. Login to GCP Portal\n2. Go to Network services (Left Panel)\n3. Select Load balancing\n4. Click on 'load balancing components view' hyperlink at bottom of page to view target proxies\n5. Go to 'TARGET PROXIES' tab and Click on the reported HTTPS target proxy\n6. Note the 'Load balancer' name.\n7. Click on the hyperlink under 'In use by'\n8. Note the 'External IP address'\n9. Select Load Balancing (Left Panel) and click on the HTTPS load balancer with same name as previously noted 'Load balancer' name.\n10. In frontend section, consider the rule where 'IP:Port' matches the previously noted 'External IP address'.\n11. Click on the 'SSL Policy' of the rule. This will take you to the alert causing SSL policy.\n12. Click on 'EDIT'\n13. Set 'Minimum TLS Version' to TLS 1.2 and set 'Profile' to Modern or Restricted.\n14. Alternatively, if you use the profile 'Custom', make sure that the following features are disabled:\nTLS_RSA_WITH_AES_128_GCM_SHA256\nTLS_RSA_WITH_AES_256_GCM_SHA384\nTLS_RSA_WITH_AES_128_CBC_SHA\nTLS_RSA_WITH_AES_256_CBC_SHA\nTLS_RSA_WITH_3DES_EDE_CBC_SHA\n15. Click on 'Save'.
```config from cloud.resource where api.name = 'aws-emr-describe-cluster' as X; config from cloud.resource where api.name = 'aws-emr-security-configuration' as Y; filter '($.X.status.state does not contain TERMINATING and $.X.status.state does not contain TERMINATED and $.X.status.state does not contain TERMINATED_WITH_ERRORS) and ($.X.securityConfiguration equals $.Y.name) and ($.Y.EncryptionConfiguration.AtRestEncryptionConfiguration.LocalDiskEncryptionConfiguration exists and $.Y.EncryptionConfiguration.AtRestEncryptionConfiguration.LocalDiskEncryptionConfiguration.EncryptionKeyProviderType does not equal Custom)'; show X;```
AWS EMR cluster is not enabled with local disk encryption using Custom key provider This policy identifies AWS EMR clusters that are not enabled with local disk encryption using a Custom key provider. Applications use the local file system on each cluster instance for intermediate data throughout workloads, and data can spill to disk when it overflows memory. With local disk encryption in place, this data at rest is protected. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Login to the AWS Console.\n2. In the console, select the specific region from region drop down on the top right corner, for which the alert is generated.\n3. Navigate to 'EMR' dashboard from 'Services' dropdown.\n4. Go to 'Security configurations', click 'Create'.\n5. On the Create security configuration window,\n6. In 'Name' box, provide a name for the new EMR security configuration.\n7. Under 'Local disk encryption', check the box 'Enable at-rest encryption for local disks'.\n8. Select 'Custom' Key provider type from the 'Key provider type' dropdown list.\n9. Follow the below link for creating the custom key,\n\nhttps://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-data-encryption-options.html\n10. Click on 'Create' button.\n11. On the left menu of EMR dashboard Click 'Clusters'.\n12. Select the EMR cluster for which the alert has been generated and click on the 'Clone' button from the top menu.\n13. In the Cloning popup, choose 'Yes' and Click 'Clone'.\n14. On the Create Cluster page, in the Security Options section, click on 'security configuration'.\n15. From the 'Security configuration' drop down select the name of the security configuration created at step 4 to step 8, click 'Create Cluster'.\n16. Once the new cluster is set up verify its working and terminate the source cluster.\n17. On the left menu of EMR dashboard Click 'Clusters', from the list of clusters select the source cluster which is alerted.\n18. Click on the 'Terminate' button from the top menu.\n19. On the 'Terminate clusters' pop-up, click 'Terminate'.
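A security configuration with custom-key local disk encryption can also be created programmatically. The following sketch assumes boto3; the JSON layout follows the EMR security configuration format, and the configuration name and the S3 path to the custom key provider JAR are hypothetical placeholders.

```python
# Minimal sketch: create an EMR security configuration enabling at-rest local
# disk encryption with a Custom key provider. boto3 is assumed; the name and
# the custom key provider JAR location are placeholders.
import json
import boto3

emr = boto3.client("emr", region_name="us-east-1")

security_configuration = {
    "EncryptionConfiguration": {
        "EnableInTransitEncryption": False,
        "EnableAtRestEncryption": True,
        "AtRestEncryptionConfiguration": {
            "LocalDiskEncryptionConfiguration": {
                "EncryptionKeyProviderType": "Custom",
                "CustomKeyProviderLocation": "s3://example-bucket/keyprovider.jar",
            }
        },
    }
}

emr.create_security_configuration(
    Name="local-disk-custom-key",  # placeholder
    SecurityConfiguration=json.dumps(security_configuration),
)
```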
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-loadbalancer' AND json.rule = profile.family equal ignore case "application" and operating_status equal ignore case "online" and pools[?any( protocol does not equal ignore case "https" )] exists```
IBM Cloud Application Load Balancer for VPC uses HTTP backend pool instead of HTTPS (SSL & TLS) This policy identifies IBM Cloud Application Load Balancers for VPC that use HTTP backend pools instead of HTTPS. An HTTPS pool uses TLS (SSL) to encrypt otherwise plain HTTP requests and responses. It is highly recommended to use application load balancers with HTTPS backend pools for additional security. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Load balancers'\n3. Select the 'Load balancer' reported in the alert\n4. Under the 'Back-end pools' tab, click on the three dots on the right corner of the row containing a back-end pool with a protocol other than HTTPS.\n5. In the 'Edit back-end pool' screen, under the 'Protocol' dropdown, select 'HTTPS'.\n6. Click on 'Save'.
```config from cloud.resource where api.name= 'aws-cloudtrail-describe-trails' AND json.rule = 'isMultiRegionTrail is true and includeGlobalServiceEvents is true' as X; config from cloud.resource where api.name= 'aws-cloudtrail-get-trail-status' AND json.rule = 'status.isLogging equals true' as Y; config from cloud.resource where api.name= 'aws-cloudtrail-get-event-selectors' AND json.rule = eventSelectors[?any( dataResources[?any( type contains "AWS::S3::Object" and values contains "arn:aws:s3")] exists and readWriteType is member of ("All","ReadOnly") and includeManagementEvents is true)] exists as Z; filter '($.X.trailARN equals $.Z.trailARN) and ($.X.name equals $.Y.trail)'; show X; count(X) less than 1```
AWS S3 Buckets with Object-level logging for read events not enabled This policy identifies configurations where CloudTrail object-level logging (data events) for S3 read events is not enabled. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-apigateway-get-stages' AND json.rule = 'clientCertificateId does not exist or clientCertificateId equals None'```
AWS API Gateway endpoints without client certificate authentication API Gateway can generate an SSL certificate and use its public key in the backend to verify that HTTP requests to your backend system are from API Gateway. This allows your HTTP backend to control and accept only requests originating from Amazon API Gateway, even if the backend is publicly accessible. Note: Some backend servers may not support SSL client authentication as API Gateway does and could return an SSL certificate error. For a list of incompatible backend servers, see Known Issues. https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-known-issues.html This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: These instructions assume you already completed Generate a Client Certificate Using the API Gateway Console. If not please generate a client certificate by following below steps and then Configure an API to Use SSL Certificates.\nSteps to Generate a Client Certificate Using the API Gateway Console:\n1. Login to AWS Console\n2. Go to API Gateway console\n3. In the main navigation pane (Left Panel), choose Client Certificates.\n4. From the Client Certificates pane, choose Generate Client Certificate.\n5. Optionally, for Edit, choose to add a descriptive title for the generated certificate and choose Save to save the description. API Gateway generates a new certificate and returns the new certificate GUID, along with the PEM-encoded public key.\n\nSteps to Configure an API to Use SSL Certificates:\n1. Login to AWS Console\n2. Go to API Gateway console\n3. In the API Gateway console, create or open an API for which you want to use the client certificate. Make sure the API has been deployed to a stage (Left Panel).\n4. Choose Stages under the selected API and then choose a stage (Left Panel).\n5. In the Stage Editor panel, select a certificate under the Client Certificate section.\n6. Click Save Changes.
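The same configuration can be applied programmatically. The sketch below assumes boto3; the REST API ID and stage name are hypothetical placeholders, and the generated certificate's ID is attached to the stage via a patch operation.

```python
# Minimal sketch: generate an API Gateway client certificate and attach it to a
# stage so the backend can verify requests originate from API Gateway.
# boto3 is assumed; the REST API ID and stage name are placeholders.
import boto3

apigw = boto3.client("apigateway", region_name="us-east-1")

cert = apigw.generate_client_certificate(
    description="Client cert for backend request verification"
)

apigw.update_stage(
    restApiId="a1b2c3d4e5",  # placeholder
    stageName="prod",        # placeholder
    patchOperations=[
        {
            "op": "replace",
            "path": "/clientCertificateId",
            "value": cert["clientCertificateId"],
        }
    ],
)
```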
```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case "access" and roles[?any( role_id contains "crn:v1:bluemix:public:iam::::role:Administrator" )] exists and resources[?any(tags does not exist and attributes[?any( value equal ignore case "service" and name equal ignore case "serviceType" and operator is member of ("stringEquals", "stringMatch"))] exists and attributes[?any( name equal ignore case "region")] does not exist )] exists and subjects[?any( attributes[?any( name contains "iam_id" and value contains "IBMid")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-user' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.iam_id'; show Y;```
IBM Cloud user with IAM policies provide administrative privileges for all Identity and Access enabled services This policy identifies IBM Cloud users that have an access policy granting the Administrator role across all Identity and Access enabled services. Users with the Administrator role on all Identity and Access enabled services can access all services or resources in the account. If a user with administrator privileges becomes compromised, it may result in compromised resources in the account. As a security best practice, granting least-privilege access, such as granting only the permissions required to perform a task, instead of providing excessive permissions, is recommended. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', click on 'Users' in the left panel.\n3. Select the user which is reported and you want to edit access for.\n4. Go to the 'Access' tab and under the 'Access policies' section, click on the three dots on the right corner of the row for the policy that has Administrator permission on 'All Identity and Access enabled services'\n5. Click on Remove OR Edit to assign limited permission to the policy.\n6. Review the policy details that you're about to Edit/Remove, and confirm by clicking Save/Remove.
```config from cloud.resource where cloud.type = 'alibaba_cloud' AND api.name = 'alibaba-cloud-ram-password-policy' AND json.rule = 'requireSymbols does not exist or requireSymbols is false'```
Alibaba Cloud RAM password policy does not have a symbol This policy identifies Alibaba Cloud accounts that do not have a symbol in the password policy. As a security best practice, configure a strong password policy for secure access to the Alibaba Cloud console. This is applicable to alibaba_cloud cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['WEAK_PASSWORD']. Mitigation of this issue can be done as follows: 1. Log in to Alibaba Cloud Portal\n2. Go to Resource Access Management (RAM) service\n3. In the left-side navigation pane, click on 'Settings'\n4. In the 'Security Settings' tab, in the 'Password Strength Settings' section, click on 'Edit Password Rule'\n5. In the 'Required Elements in Password' field, select 'Symbols'\n6. Click on 'OK'\n7. Click on 'Close'.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-api-key' as X; count(X) greater than 0```
GCP API key is created for a project This policy identifies GCP projects where API keys are created. Keys are insecure because they can be viewed publicly, such as from within a browser, or they can be accessed on a device where the key resides. To avoid the security risk in using API keys, it is recommended to use standard authentication flow instead. Note: There are limited cases where API keys are more appropriate. For example, if there is a mobile application that needs to use the Google Cloud Translation API, but doesn't otherwise need a backend server, API keys are the simplest way to authenticate to that API. If a business requires API keys to be used, then the API keys should be secured using appropriate IAM policies. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: Use of API keys is generally considered as less secure authentication mechanism and should be avoided. A secure authentication mechanism should be used. Follow the below mentioned URL to evaluate an alternate, suitable authentication mechanism:\nhttps://cloud.google.com/endpoints/docs/openapi/authentication-method\n\nTo delete an API Key:\n1. Log in to google cloud console\n2. Navigate to section 'Credentials', under 'APIs & Services'.\n3. To delete API Key, go to 'API Keys' section, click the Actions button (three dots) in front of key name.\n4. Click on ‘Delete API key’ button.\n5. In the 'Delete credential' dialog, click 'DELETE' button.\n\nNote: Deleting API keys might break dependent applications. It is recommended to thoroughly review and evaluate the impact of API key before deletion..
```config from cloud.resource where api.name = 'aws-glue-job' AND json.rule = Command.BucketName exists and Command.BucketName contains "aws-glue-assets-" as X; config from cloud.resource where api.name = 'aws-s3api-get-bucket-acl' AND json.rule = bucketName contains "aws-glue-assets-" as Y; filter 'not ($.X.Command.BucketName equals $.Y.bucketName)' ; show X;```
AWS Glue Job using the shadow resource bucket for script location This policy identifies AWS Glue jobs whose script-location bucket is not managed in the current account, which could indicate the job is using a shadow resource bucket for its script location. A shadow resource bucket is an unauthorized S3 bucket posing security risks. AWS Glue is a service utilized to automate the extraction, transformation, and loading (ETL) processes, streamlining data preparation for analytics and machine learning. When a job is created using the Visual ETL tool, Glue automatically creates an S3 bucket with a predictable name pattern 'aws-glue-assets-accountid-region'. An attacker could create this S3 bucket in any region before the victim uses Glue ETL, causing the victim's Glue service to write files to the attacker-controlled bucket. This vulnerability allows an attacker to inject arbitrary code into the victim's Glue job, resulting in remote code execution (RCE). It is recommended to verify the expected bucket owner, update the AWS Glue job's script location, and enforce the aws:ResourceAccount condition in the AWS Glue job's policy so that the AWS account ID of the S3 bucket used by the job matches your business requirements. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To update the script location for an AWS Glue Job:\n\n1. Sign in to the AWS Management Console and open the AWS Glue Studio console at https://console.aws.amazon.com/gluestudio/.\n2. In the navigation pane, choose 'ETL jobs'.\n3. Select the desired AWS Glue Job and choose 'Edit Job' from the 'Actions' drop-down.\n4. In the 'Job Details' window, under 'Advanced properties', verify that the 'Script path' and 'Script filename' are authorized and managed according to your business requirements.\n5. Move the required script to a new S3 bucket as per your requirements.\n6. In the AWS Glue Studio console, go to the 'Job details' tab and update the 'Script filename' and 'Script path' parameters to reflect the new S3 location.\n7. Choose 'Save'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-rds-db-cluster' AND json.rule = status contains available and deletionProtection is false```
AWS RDS cluster delete protection is disabled This policy identifies RDS clusters for which delete protection is disabled. Enabling delete protection for these RDS clusters prevents irreversible data loss resulting from accidental or malicious operations. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.\n3. Navigate to the Amazon RDS dashboard\n4. Click on the DB clusters\n5. Select the reported DB cluster\n6. Click on the 'Modify' button\n7. In the Modify DB cluster page, in the 'Additional configuration' section, check the box 'Enable deletion protection' for Deletion protection.
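The same setting can be flipped with a single API call. A minimal sketch assuming boto3; the cluster identifier is a hypothetical placeholder.

```python
# Minimal sketch: enable deletion protection on an RDS/Aurora DB cluster.
# boto3 is assumed; the cluster identifier is a placeholder.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.modify_db_cluster(
    DBClusterIdentifier="example-cluster",  # placeholder
    DeletionProtection=True,
    ApplyImmediately=True,
)
```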
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-sqs-get-queue-attributes' AND json.rule = attributes.KmsMasterKeyId does not exist and attributes.SqsManagedSseEnabled is false```
AWS SQS Queue not configured with server side encryption This policy identifies AWS SQS queues which are not configured with server side encryption. Enabling server side encryption would encrypt all messages that are sent to the queue and the messages are stored in encrypted form. Amazon SQS decrypts a message only when it is sent to an authorised consumer. It is recommended to enable server side encryption for AWS SQS queues in order to protect sensitive data in the event of a data breach or malicious users gaining access to the data. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: To configure server side encryption for AWS SQS queue follow below URL as required:\n\nTo configure Amazon SQS key (SSE-SQS) for a queue:\nhttps://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-configure-sqs-sse-queue.html\n\nTo configure AWS Key Management Service key (SSE-KMS) for a queue:\nhttps://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-configure-sse-existing-queue.html.
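Either SSE-SQS or SSE-KMS can be enabled on an existing queue with one attribute update. A minimal sketch assuming boto3; the queue URL and KMS key alias are hypothetical placeholders.

```python
# Minimal sketch: enable server-side encryption on an existing SQS queue.
# boto3 is assumed; the queue URL and KMS key alias are placeholders.
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = "https://sqs.us-east-1.amazonaws.com/111122223333/example-queue"  # placeholder

# Option 1: SQS-managed server-side encryption (SSE-SQS)
sqs.set_queue_attributes(
    QueueUrl=queue_url, Attributes={"SqsManagedSseEnabled": "true"}
)

# Option 2: KMS-based server-side encryption (SSE-KMS) with a customer managed key
# sqs.set_queue_attributes(
#     QueueUrl=queue_url, Attributes={"KmsMasterKeyId": "alias/example-key"}
# )
```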
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-vm-list' AND json.rule = powerState equal ignore case "PowerState/running" and ['properties.storageProfile'].['osDisk'].['osType'] contains "Windows" and ['properties.securityProfile'].['securityType'] equal ignore case "TrustedLaunch" and ['properties.securityProfile'].['uefiSettings'].['secureBootEnabled'] is false```
Azure Virtual Machine (Windows) secure boot feature is disabled This policy identifies Virtual Machines (Windows) that have secure boot feature disabled. Enabling Secure Boot on supported Windows virtual machines provides mitigation against malicious and unauthorised changes to the boot chain. Secure boot helps protect your VMs against boot kits, rootkits, and kernel-level malware. So it is recommended to enable Secure boot for Azure Windows virtual machines. NOTE: This assessment only applies to trusted launch enabled Windows virtual machines. You can't enable trusted launch on existing virtual machines that were initially created without it. To know more, refer https://docs.microsoft.com/azure/virtual-machines/trusted-launch?WT.mc_id=Portal-Microsoft_Azure_Security This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure portal\n2. Navigate to Virtual machines dashboard\n3. Click on the reported Virtual machine\n4. Select 'Configuration' under 'Settings' from left panel \nNOTE: Enabling Secure Boot will trigger an immediate SYSTEM REBOOT.\n5. On the 'Configuration' page, check 'Secure boot' under 'Security type' section\n6. Click 'Save'.
```config from cloud.resource where api.name = 'aws-neptune-db-cluster' AND json.rule = Status equals "available" as X; config from cloud.resource where api.name = 'aws-neptune-db-cluster-parameter-group' AND json.rule = parameters.neptune_enable_audit_log.ParameterValue exists and parameters.neptune_enable_audit_log.ParameterValue equals 0 as Y; filter '($.X.EnabledCloudwatchLogsExports.member does not contain "audit") or $.X.DBClusterParameterGroup equals $.Y.DBClusterParameterGroupName' ; show X;```
AWS Neptune DB cluster does not publish audit logs to CloudWatch Logs This policy identifies Amazon Neptune DB clusters where audit logging is disabled or audit logs are not published to Amazon CloudWatch Logs. Neptune DB integrates with Amazon CloudWatch for performance metric gathering and analysis, supporting CloudWatch Alarms. While Neptune DB provides customizable audit logs for monitoring database operations, these logs are not automatically sent to CloudWatch Logs, limiting centralized monitoring and analysis of database activities. It is recommended to configure the Neptune DB cluster to enable audit logs and publish them to CloudWatch Logs. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To create a custom parameter group if the cluster has only the default parameter group use the following steps: \n\n1. Sign in to the AWS Management Console and open the Amazon Neptune DB console. \n2. In the navigation pane, choose 'Parameter groups'. \n3. Choose 'Create'. The 'Create cluster parameter group' window appears. \n4. In the 'Parameter group family' list, select a 'DB parameter group family'.\n5. In the 'Parameter group type', select 'DB cluster parameter group'.\n6. In the 'New cluster parameter group name', enter the name of the new DB cluster parameter group. \n7. In the Description box, enter a description for the new DB cluster parameter group. \n8. Click 'Create'. \n\nTo modify the custom DB cluster parameter group to enable audit logging, follow the below steps: \n\n1. Sign in to the AWS Management Console and open the Amazon Neptune DB console. \n2. In the navigation pane, choose 'Parameter groups'. \n3. In the list, choose the above-created parameter group or the reported resource custom parameter group that you want to modify. \n4. Change the value of the 'neptune_enable_audit_log' parameter to '1' in the value drop-down and click on the tick mark to enable audit logs.\n\nTo modify an Amazon Neptune DB Cluster to use the custom parameter group, follow the below steps: \n\n1. Sign in to the AWS Management Console and open the Amazon Neptune DB console. \n2. In the navigation pane, choose 'Databases', and then choose the 'DB instance' that you want to modify. \n3. Choose the reported cluster that you want to associate your parameter group with. Choose 'Actions', and then choose 'Modify' to modify your cluster. \n4. Under 'Additional settings', select the above-created cluster parameter group from the DB parameter group dropdown. \n5. Choose 'Continue' and check the summary of modifications. \n6. On the confirmation page, review your changes. If they are correct, choose 'Modify cluster' to save your changes. \n\nTo modify an Amazon Neptune DB cluster to enable exporting logs to CloudWatch, follow the below steps: \n\n1. Sign in to the AWS Management Console and open the Amazon Neptune DB console. \n2. In the navigation pane, choose 'Databases', and then choose the 'DB instance' that you want to modify. \n3. Choose the reported cluster that you want to associate your parameter group with. Choose 'Actions', and then choose 'Modify' to modify your cluster.\n4. Scroll down to the Log exports section, and choose 'Enable' for the 'Audit logs'.\n5. Choose 'Continue'.\n6. Choose 'Modify cluster'.
```config from cloud.resource where api.name = 'gcloud-logging-metric' as X; config from cloud.resource where api.name = 'gcloud-monitoring-policies-list' as Y; filter '($.Y.conditions[*].metricThresholdFilter contains $.X.name) and ($.X.filter contains "resource.type =" or $.X.filter contains "resource.type=") and ($.X.filter does not contain "resource.type !=" and $.X.filter does not contain "resource.type!=") and $.X.filter contains "iam_role" and ($.X.filter contains "protoPayload.methodName=" or $.X.filter contains "protoPayload.methodName =") and ($.X.filter does not contain "protoPayload.methodName!=" and $.X.filter does not contain "protoPayload.methodName !=") and $.X.filter contains "google.iam.admin.v1.CreateRole" and $.X.filter contains "google.iam.admin.v1.DeleteRole" and $.X.filter contains "google.iam.admin.v1.UpdateRole"'; show X; count(X) less than 1```
GCP Log metric filter and alert does not exist for IAM custom role changes This policy identifies GCP accounts that do not have a log metric filter and alert for IAM custom role changes. Monitoring role creation, deletion and updating activities will help in identifying over-privileged roles at early stages. It is recommended to create a metric filter and alarm to detect activities related to the creation, deletion and updating of custom IAM Roles. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to GCP Console\n2. Navigate to 'Logs-based metrics' under the 'Logging' section.\n3. Click on 'CREATE METRIC'.\n4. Provide 'Metric Type' and 'Details'.\n5. In 'Filter selection', add filter as \nresource.type="iam_role" AND protoPayload.methodName = "google.iam.admin.v1.CreateRole" OR protoPayload.methodName="google.iam.admin.v1.DeleteRole" OR protoPayload.methodName="google.iam.admin.v1.UpdateRole"\n6. Click on 'CREATE METRIC'.\n7. Under 'User-defined metrics' section, choose the metric you created in step 6 and click on the kebab menu (Vertical 3 dots) on the right side of the metrics\n8. Click on 'Create alert from metric'; it will navigate to 'Create alerting policy' under the section 'Monitoring'.\n9. Add the metric name created above if not auto-filled in the Monitoring filter. Choose an appropriate value for other alert condition parameters as desired. Then Click on 'NEXT'\n10. Configure all alert trigger settings as desired. Then Click on 'NEXT'\n11. Configure notifications as desired and provide an appropriate name for the alert policy. Then Click on 'NEXT'\n12. Click on 'CREATE POLICY'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-app-service' AND json.rule = properties.state equal ignore case Running and kind contains workflowapp and (config.minTlsVersion equals "1.0" or config.minTlsVersion equals "1.1")```
Azure Logic app using insecure TLS version This policy identifies Azure Logic apps that are using an insecure TLS version. Azure Logic apps configured to use insecure TLS versions are at risk, as they may be vulnerable to security threats due to known vulnerabilities, weaker encryption methods, and support for compromised hash functions. Logic apps using TLS 1.2 or higher secure communication and protect against potential cyber attacks. As a security best practice, it is recommended to configure Logic apps with TLS 1.2 or higher to ensure secure communication. This is applicable to azure cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure portal\n2. Navigate to Logic apps\n3. Click on the reported Logic app\n4. Under the 'Settings' section, click on 'Configuration'\n5. Under the 'General settings' tab, set 'Minimum Inbound TLS Version' to '1.2' or higher.\n6. Click on 'Save'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-storage-account-list' AND json.rule = properties.supportsHttpsTrafficOnly is true and properties.minimumTlsVersion does not equal TLS1_2```
Azure Storage Account using insecure TLS version This policy identifies Azure Storage Accounts that use an insecure TLS version. Azure Storage Accounts use Transport Layer Security (TLS) for communication with client applications. As a security best practice, use the newest TLS version as the minimum TLS version for the Azure Storage Account. Currently, Azure Storage Accounts support TLS 1.2, which resolves the security gaps of its preceding versions. https://docs.microsoft.com/en-us/azure/storage/common/transport-layer-security-configure-minimum-version?tabs=portal This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to Azure Portal\n2. Go to the Storage accounts dashboard and click on the reported storage account\n3. Under the 'Settings' menu, click on 'Configuration'\n4. Under 'Minimum TLS version' select 'Version 1.2' from the drop down\n5. Click on 'Save'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-es-describe-elasticsearch-domain' AND json.rule = processing is false and (nodeToNodeEncryptionOptions.enabled is false or nodeToNodeEncryptionOptions.enabled does not exist)```
AWS OpenSearch node-to-node encryption is disabled This policy identifies AWS OpenSearch domains for which node-to-node encryption is disabled. Each OpenSearch domain resides within a dedicated VPC and, by default, traffic within the VPC is unencrypted. Enabling node-to-node encryption provides an additional security layer by making use of TLS encryption for all communications between Amazon OpenSearch Service instances in a cluster. For more information, please follow the URL given below, https://docs.aws.amazon.com/opensearch-service/latest/developerguide/ntn.html This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: Go to https://aws.amazon.com, and then choose Sign In to the Console\n1. Under Analytics, choose Amazon OpenSearch Service\n2. Choose your domain\n3. Choose Actions, Edit security configuration\n4. Under Encryption section, check Node-to-node encryption\n5. Click Save changes button\n\nFor more details on node-to-node encryption for an Amazon OpenSearch Service Domain, follow below mentioned URL:\nhttps://docs.aws.amazon.com/opensearch-service/latest/developerguide/ntn.html\n\nNote: Node-to-node encryption is supported only from OpenSearch 6.0 or later. To upgrade older versions of AWS OpenSearch please refer to the URL given below,\nhttps://docs.aws.amazon.com/opensearch-service/latest/developerguide/version-migration.html.
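The same change can be made through the domain configuration API. A minimal sketch assuming boto3 and a supported engine version; the domain name is a hypothetical placeholder.

```python
# Minimal sketch: enable node-to-node encryption on an existing OpenSearch
# Service domain. boto3 is assumed; the domain name is a placeholder.
import boto3

opensearch = boto3.client("opensearch", region_name="us-east-1")

opensearch.update_domain_config(
    DomainName="example-domain",  # placeholder
    NodeToNodeEncryptionOptions={"Enabled": True},
)
```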
```config from cloud.resource where api.name = 'gcloud-logging-metric' as X; config from cloud.resource where api.name = 'gcloud-monitoring-policies-list' as Y; filter '($.Y.conditions[*].metricThresholdFilter contains $.X.name) and ($.X.filter contains "protoPayload.methodName =" or $.X.filter contains "protoPayload.methodName=") and ($.X.filter does not contain "protoPayload.methodName !=" and $.X.filter does not contain "protoPayload.methodName!=") and $.X.filter contains "SetIamPolicy" and $.X.filter contains "protoPayload.serviceData.policyDelta.auditConfigDeltas:*"'; show X; count(X) less than 1```
GCP Log metric filter and alert does not exist for Audit Configuration Changes This policy identifies GCP accounts that do not have a log metric filter and alert for Audit Configuration Changes. Configuring a metric filter and alert for Audit Configuration Changes ensures the recommended state of audit configuration and hence that all activities in the project are auditable at any point in time. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to GCP Console\n2. Navigate to 'Logs-based metrics' under the 'Logging' section.\n3. Click on 'CREATE METRIC'.\n4. Provide 'Metric Type' and 'Details'.\n5. In 'Filter selection', add filter as \nprotoPayload.methodName="SetIamPolicy" AND protoPayload.serviceData.policyDelta.auditConfigDeltas:*\n6. Click on 'CREATE METRIC'.\n7. Under 'User-defined metrics' section, choose the metric you created in step 6 and click on the kebab menu (Vertical 3 dots) on the right side of the metrics\n8. Click on 'Create alert from metric'; it will navigate to 'Create alerting policy' under the section 'Monitoring'.\n9. Add the metric name created above if not auto-filled in the Monitoring filter. Choose an appropriate value for other alert condition parameters as desired. Then Click on 'NEXT'\n10. Configure all alert trigger settings as desired. Then Click on 'NEXT'\n11. Configure notifications as desired and provide an appropriate name for the alert policy. Then Click on 'NEXT'\n12. Click on 'CREATE POLICY'.
```config from cloud.resource where cloud.type = 'azure' AND api.name = 'azure-automation-account' AND json.rule = identity does not exist or identity.type equal ignore case "None"```
Azure Automation account is not configured with managed identity This policy identifies Automation accounts that are not configured with a managed identity. A managed identity can be used to authenticate to any service that supports Azure AD authentication, without having credentials in your code. Storing credentials in code increases the threat surface in case of exploitation, and managed identities eliminate the need for developers to manage credentials. As a security best practice, it is recommended to assign a managed identity to your Automation account. This is applicable to azure cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable managed identity on an existing Azure Automation account, follow the below URL:\nhttps://docs.microsoft.com/en-us/azure/automation/quickstarts/enable-managed-identity.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-redshift-describe-clusters' AND json.rule ='loggingStatus.loggingEnabled is false'```
AWS Redshift database does not have audit logging enabled Audit logging is not enabled by default in Amazon Redshift. When you enable logging on your cluster, Amazon Redshift creates and uploads logs to Amazon S3 that capture data from the creation of the cluster to the present time. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Login to AWS Console.\n2. Go to the Amazon Redshift service\n3. On the left navigation panel, click on Clusters\n4. Click on the reported cluster\n5. Click on the Database tab and choose 'Configure Audit Logging'\n6. For Enable Audit Logging, choose 'Yes'\n7. Create a new S3 bucket or use an existing bucket\n8. Click Save.
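Audit logging can also be enabled with one API call. A minimal sketch assuming boto3; the cluster identifier, bucket name, and prefix are hypothetical placeholders, and the bucket must have a policy that allows Redshift to write log files.

```python
# Minimal sketch: enable audit logging on a Redshift cluster to an S3 bucket.
# boto3 is assumed; identifiers and names below are placeholders.
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

redshift.enable_logging(
    ClusterIdentifier="example-cluster",    # placeholder
    BucketName="example-audit-log-bucket",  # placeholder
    S3KeyPrefix="redshift-audit/",          # placeholder
)
```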
```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-lambda-list-functions' AND json.rule = authType equal ignore case NONE```
AWS Lambda function URL AuthType set to NONE This policy identifies AWS Lambda functions whose function URL AuthType is set to NONE. AuthType determines how Lambda authenticates or authorises requests to your function URL. When AuthType is set to NONE, Lambda doesn't perform any authentication before invoking your function. It is highly recommended to set AuthType to AWS_IAM for the Lambda function URL so that requests are authenticated via AWS IAM. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console\n2. Select the region, from the region drop-down, for which the alert is generated\n3. Navigate to the AWS Lambda Dashboard\n4. Click on 'Functions' (left panel)\n5. Select the Lambda function on which the alert is generated\n6. Go to the 'Configuration' tab\n7. Select 'Function URL'\n8. Click on 'Edit'\n9. Set 'Auth type' to 'AWS_IAM'\n10. Click on 'Save'.
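The switch to IAM authentication can also be scripted. A minimal sketch assuming boto3; the function name is a hypothetical placeholder, and after the change callers must sign requests with AWS IAM credentials.

```python
# Minimal sketch: switch a Lambda function URL to IAM authentication.
# boto3 is assumed; the function name is a placeholder.
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

lambda_client.update_function_url_config(
    FunctionName="example-function",  # placeholder
    AuthType="AWS_IAM",
)
```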
```config from cloud.resource where api.name = "aws-ec2-describe-instances" AND json.rule = architecture contains "foo"```
API automation policy xzypd This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where api.name = 'ibm-iam-policy' AND json.rule = type equal ignore case "access" and roles[?any( role_id contains "crn:v1:bluemix:public:iam::::role:Administrator" )] exists and resources[?any( attributes[?any( name equal ignore case "serviceName" and value equal ignore case "logdnaat" and operator is member of ("stringEquals", "stringMatch"))] exists and attributes[?any( name is member of ("region","resource","resourceGroupId","logGroup","resourceType","serviceInstance"))] does not exist )] exists and subjects[?any( attributes[?any( name contains "iam_id" and value contains "IBMid")] exists )] exists as X;config from cloud.resource where api.name = 'ibm-iam-user' as Y; filter '$.X.subjects[*].attributes[*].value contains $.Y.iam_id'; show Y;```
IBM Cloud user with IAM policies provide administrative privileges for Activity Tracker Service This policy identifies IBM Cloud users with an overly permissive Activity Tracker Administrator role. When a user with a policy granting admin rights gets compromised, the whole service gets compromised. As a security best practice, it is recommended to grant the least privilege access, such as granting only the permissions required to perform a task, instead of providing excessive permissions. This is applicable to ibm cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud console.\n2. In the IBM Cloud console, click 'Manage' on the title bar > 'Access (IAM)', click on 'Users' in the left panel.\n3. Select the reported user whose access you want to edit.\n4. Go to the 'Access' tab and, under the 'Access policies' section, click on the three dots on the right corner of the row for the policy that has Administrator permission on the 'IBM Cloud Activity Tracker' service.\n5. Click on Remove OR Edit to assign limited permission to the policy.\n6. Review the policy details that you're about to Edit/Remove, and confirm by clicking Save/Remove..
```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-s3api-get-bucket-acl' AND json.rule = "(publicAccessBlockConfiguration does not exist and accountLevelPublicAccessBlockConfiguration does not exist) or ((publicAccessBlockConfiguration does not exist or publicAccessBlockConfiguration.blockPublicAcls is false) and (accountLevelPublicAccessBlockConfiguration does not exist or accountLevelPublicAccessBlockConfiguration.blockPublicAcls is false)) or ((publicAccessBlockConfiguration does not exist or publicAccessBlockConfiguration.ignorePublicAcls is false) and (accountLevelPublicAccessBlockConfiguration does not exist or accountLevelPublicAccessBlockConfiguration.ignorePublicAcls is false)) or ((publicAccessBlockConfiguration does not exist or publicAccessBlockConfiguration.blockPublicPolicy is false) and (accountLevelPublicAccessBlockConfiguration does not exist or accountLevelPublicAccessBlockConfiguration.blockPublicPolicy is false)) or ((publicAccessBlockConfiguration does not exist or publicAccessBlockConfiguration.restrictPublicBuckets is false) and (accountLevelPublicAccessBlockConfiguration does not exist or accountLevelPublicAccessBlockConfiguration.restrictPublicBuckets is false))"```
AWS S3 Buckets Block public access setting disabled This policy identifies AWS S3 buckets which have 'Block public access' setting disabled. Amazon S3 provides 'Block public access' setting to manage public access of AWS S3 buckets. Enabling 'Block public access' setting prevents S3 resource data from accidentally or maliciously becoming publicly accessible. It is highly recommended to enable 'Block public access' setting for all AWS S3 buckets appropriately. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Login to the AWS Console\n2. Navigate to the 'S3' service\n3. Click on the 'S3' resource reported in the alert\n4. Click on the 'Permissions'\n5. Under 'Block public access' click on 'Edit'\n6. Select 'Block all public access' checkbox\n7. Click on Save\n8. 'Confirm' the changes\n\nNote: Make sure updating 'Block public access' setting does not affect S3 bucket data access..
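For buckets where public access is confirmed to be unneeded, the bucket-level setting can also be applied with boto3; this is a minimal sketch and the bucket name is a placeholder assumption.

```python
import boto3

s3 = boto3.client("s3")

# Assumed placeholder bucket name; verify nothing legitimate relies on public access first.
s3.put_public_access_block(
    Bucket="my-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```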
```config from cloud.resource where cloud.type = 'gcp' and api.name = 'gcloud-projects-get-iam-user' AND json.rule = user contains "[email protected]" and roles[*] contains "roles/editor" as X; config from cloud.resource where api.name = 'gcloud-cloud-function-v2' AND json.rule = status equals ACTIVE and serviceConfig.serviceAccountEmail contains "[email protected]" as Y; filter ' $.X.user equals $.Y.serviceConfig.serviceAccountEmail '; show Y;```
GCP Cloud Run function is using default service account with editor role This policy identifies GCP Cloud Run functions that are using the default service account with the editor role. GCP Compute Engine Default service account is automatically created upon enabling the Compute Engine API. This service account is granted the IAM basic Editor role by default, unless explicitly disabled. Assigning default service account with the editor role to cloud run functions could lead to privilege escalation. Granting minimal access rights helps in promoting a better security posture. Following the principle of least privileges, it is recommended to avoid assigning default service account with the editor role to cloud run functions. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Navigate to the 'Cloud Run functions' service\n3. Click on the name of the cloud run function on which alert is generated\n4. Click 'EDIT' at top\n5. Expand 'Runtime, build, connections and security settings' and select 'RUNTIME' tab\n6. Under 'Runtime service account', select an appropriate 'Service account' using the dropdown\n7. Click 'NEXT' at bottom\n8. Click 'DEPLOY' at bottom.
```config from cloud.resource where cloud.type = 'ibm' AND api.name = 'ibm-vpc-virtual-server-instance' AND json.rule = status equal ignore case "running" AND network_interfaces[?any( allow_ip_spoofing is true )] exists```
IBM Cloud Virtual Servers for VPC instance has interface with IP-spoofing enabled This policy identifies IBM Cloud Virtual Servers for VPC instances which have any interface with IP-spoofing enabled. If any interface has IP-spoofing enabled, there is a chance of bad actors modifying the source address in IP packets to mount a DDoS attack. It is recommended that IP-spoofing be disabled for all interfaces of a virtual server for VPC. This is applicable to ibm cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log in to the IBM Cloud Console\n2. Click on 'Menu Icon' and navigate to 'VPC Infrastructure' and then 'Virtual server instances'\n3. Select the 'Virtual server instance' reported in the alert\n4. Under the 'Network interfaces' tab, click on the edit icon and set 'Allow IP spoofing' to disabled for each network interface.\n5. Click on 'Save'.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-s3api-get-bucket-acl' AND json.rule = 'versioningConfiguration.status!=Enabled'```
Critical - AWS S3 Object Versioning is disabled This policy identifies the S3 buckets which have Object Versioning disabled. S3 Object Versioning is an important capability in protecting your data within a bucket. Once you enable Object Versioning, you cannot remove it; you can suspend Object Versioning at any time on a bucket if you do not wish for it to persist. It is recommended to enable Object Versioning on S3. This is applicable to aws cloud and is considered a critical severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: 1. Log into your AWS Console and select the S3 service.\n2. Choose the reported S3 bucket and click the Properties tab in the upper right frame.\n3. Expand the Versioning option\n4. Click Enable Versioning\n5. Click Save.
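The same change can be made programmatically; a minimal boto3 sketch follows, with the bucket name as a placeholder assumption.

```python
import boto3

s3 = boto3.client("s3")

# Assumed placeholder bucket name.
s3.put_bucket_versioning(
    Bucket="my-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)
```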
```config from cloud.resource where cloud.type ='aws' and api.name = 'aws-iam-get-credential-report' AND json.rule = '(access_key_1_active is true and ((access_key_1_last_used_date != N/A and _DateTime.ageInDays(access_key_1_last_used_date) > 90) or (access_key_1_last_used_date == N/A and access_key_1_last_rotated != N/A and _DateTime.ageInDays(access_key_1_last_rotated) > 90))) or (access_key_2_active is true and ((access_key_2_last_used_date != N/A and _DateTime.ageInDays(access_key_2_last_used_date) > 90) or (access_key_2_last_used_date == N/A and access_key_2_last_rotated != N/A and _DateTime.ageInDays(access_key_2_last_rotated) > 90)))'```
Critical - AWS access keys not used for more than 90 days This policy identifies IAM users for which access keys are not used for more than 90 days. Access keys allow users programmatic access to resources. However, if any access key has not been used in the past 90 days, then that access key needs to be deleted (even if the access key is inactive). This is applicable to aws cloud and is considered a critical severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: To delete the reported AWS user access key, follow the below-mentioned URL:\nhttps://aws.amazon.com/premiumsupport/knowledge-center/delete-access-key/.
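Where many users need to be cleaned up, the check and deletion can be scripted. The sketch below assumes a placeholder user name and a 90-day threshold, and deletes stale keys outright; adjust it to deactivate instead if your process requires a grace period.

```python
from datetime import datetime, timezone

import boto3

iam = boto3.client("iam")
user_name = "example-user"  # assumed placeholder
threshold_days = 90

for key in iam.list_access_keys(UserName=user_name)["AccessKeyMetadata"]:
    last_used = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
    # Fall back to the key creation date if the key has never been used.
    reference = last_used["AccessKeyLastUsed"].get("LastUsedDate", key["CreateDate"])
    age = (datetime.now(timezone.utc) - reference).days
    if age > threshold_days:
        iam.delete_access_key(UserName=user_name, AccessKeyId=key["AccessKeyId"])
        print(f"Deleted {key['AccessKeyId']} (unused for {age} days)")
```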
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-redis-instances-list' AND json.rule = state equal ignore case ready and not(customerManagedKey contains cryptoKeys)```
rgade-config-policy-01-28-2025 rgade-config-policy-01-28-2025 This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-iam-get-policy-version' AND json.rule = document.Statement[?any((Condition.IpAddress.aws:SourceIp contains 0.0.0.0/0 or Condition.IpAddress.aws:SourceIp contains ::/0) and Effect equals Allow and Action anyStartWith sagemaker:)] exists```
AWS SageMaker notebook instance IAM policy overly permissive to all traffic This policy identifies SageMaker notebook instance IAM policies that are overly permissive to all traffic. It is recommended that the SageMaker notebook instances should be granted access restrictions so that only authorized users and applications have access to the service. For more details: https://docs.aws.amazon.com/sagemaker/latest/dg/security_iam_id-based-policy-examples.html This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['PRIVILEGE_ESCALATION']. Mitigation of this issue can be done as follows: 1. Login to AWS console\n2. Go to the IAM service\n3. Click on 'Policies' in the left-hand panel\n4. Search for the policy for which the alert is generated and click on it\n5. Under the Permissions tab, click on Edit policy\n6. Under the Visual editor, for each 'SageMaker' service entry, click to expand and perform the following.\n6.a. Click to expand 'Request conditions'\n6.b. Under 'Source IP', remove the row with the entry '0.0.0.0/0' or '::/0', and add a condition with restrictive IP ranges.\n7. Click on Review policy and Save changes..
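If the policy is managed programmatically, a restricted version can be published as a new default policy version. The sketch below is illustrative only: the policy ARN, the action list, and the corporate CIDR range are placeholder assumptions, and the actual document should mirror your existing policy with the source-IP condition tightened.

```python
import json

import boto3

iam = boto3.client("iam")

# Assumed placeholders: the policy ARN and a corporate CIDR range.
policy_arn = "arn:aws:iam::123456789012:policy/ExampleSageMakerPolicy"
restricted_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sagemaker:*",
            "Resource": "*",
            "Condition": {"IpAddress": {"aws:SourceIp": ["203.0.113.0/24"]}},
        }
    ],
}

# IAM keeps at most five versions per managed policy; delete an old,
# non-default version first if the limit has been reached.
iam.create_policy_version(
    PolicyArn=policy_arn,
    PolicyDocument=json.dumps(restricted_document),
    SetAsDefault=True,
)
```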
```config from cloud.resource where api.name = 'ibm-event-streams-instance' AND json.rule = resource_plan_id is not member of ('ibm.eventstreams.lite', 'ibm.eventstreams.standard' ) as X; config from cloud.resource where api.name = 'ibm-key-protect-registration' as Y;filter 'not($.Y.resourceCrn equals $.X.crn)' ; show X;```
IBM Cloud Event Stream is not encrypted with customer-managed key This policy identifies IBM Cloud Event streams that are not encrypted with a customer-managed key. The customer-managed key allows customers to ensure no one outside their organization has access to the key, and customers will have control over the lifecycle of their customer root keys, where they can create, rotate, and delete those keys. As a security best practice, it is recommended to use a customer-managed key, which provides a significant level of control over the keys when used for encryption. Note: This policy applies to Enterprise plan Event streams only. This is applicable to ibm cloud and is considered an informational severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: An Event stream can be encrypted with customer-managed keys only at the time of creation. Please follow the below instructions to encrypt an event stream with customer-managed keys while creating a new event stream.\n\n1. Log in to the IBM Cloud Console.\n2. Click on 'Catalog' on the title bar.\n3. Select 'Event Streams' from the list of products, and in the create page select the pricing plan as 'Enterprise'.\n4. Under the 'Encryption' section, select a key protect instance under the 'Select a Key Management Service instance' dropdown.\n5. Under the 'Select a disk encryption key' dropdown, select a key other than the Automatic disk encryption key.\n6. Select other configurations as per the requirements.\n7. Click on 'Create'.\n\nMake sure to transfer all the configurations/connections to the newly created Event stream before deleting the non-encrypted Event stream. Delete the vulnerable Event stream using the below instructions:\n\n1. Log in to the IBM Cloud Console.\n2. Go to Menu > 'Resource List'. From the 'Integration' section, select the reported event stream.\n3. Click on the 'Actions' button, then click on 'Delete service'.\n4. Click on 'OK' to confirm..
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-cloud-function-v2' AND json.rule = state equals ACTIVE and serviceConfig.ingressSettings equals ALLOW_ALL```
GCP Cloud Function with overly permissive network ingress settings This policy identifies GCP Cloud Functions that have overly permissive network ingress settings. This includes both Cloud Functions v1 and Cloud Functions v2. Ingress settings control whether resources outside of your Google Cloud project or VPC Service Controls perimeter can invoke a function. With an overly permissive ingress setting, all inbound requests to invoke the function are allowed, both from the public internet and from resources within the same project. Restrictive network ingress settings for cloud functions in GCP minimize the risk of unauthorized access and attacks by limiting inbound traffic to trusted sources. This approach enhances security, prevents malicious activities, and ensures only legitimate traffic reaches your applications. It is recommended to restrict the public traffic and allow traffic from VPC networks in the same project or traffic through the Cloud Load Balancer. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to GCP console\n2. Navigate to 'Cloud Functions' service\n3. Click on the alerting function\n4. Click on 'EDIT'\n5. Click on 'Runtime, build, connections and security settings' drop-down to get the detailed view\n6. Click on the 'CONNECTIONS' tab\n7. In 'Ingress settings', select either 'Allow internal traffic only' or 'Allow internal traffic and traffic from Cloud Load Balancing'\n8. Click on 'NEXT'\n9. Click on 'DEPLOY'.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-storage-buckets-list' AND json.rule = iamConfiguration.uniformBucketLevelAccess.enabled contains false```
GCP cloud storage bucket with uniform bucket-level access disabled This policy identifies GCP storage buckets for which the uniform bucket-level access is disabled. Enabling uniform bucket-level access guarantees that if a Storage bucket is not publicly accessible, no object in the bucket is publicly accessible either. It is recommended that uniform bucket-level access is enabled on Cloud Storage buckets. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the GCP Console\n2. Navigate to 'Storage'\n3. Click on 'Browser' to get the list of storage buckets\n4. Search for the alerted bucket and click on the bucket name\n5. From the top menu go to the 'PERMISSION' tab\n6. Under the section 'Access control' click on 'SWITCH TO UNIFORM'\n7. On the pop-up window select 'uniform'\n8. Click on 'Save'.
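The switch to uniform bucket-level access can also be made with the google-cloud-storage client; this is a minimal sketch with a placeholder bucket name, assuming application default credentials.

```python
from google.cloud import storage

client = storage.Client()

# Assumed placeholder bucket name; application default credentials are used.
bucket = client.get_bucket("my-bucket")

# Turn on uniform bucket-level access so object ACLs no longer apply.
bucket.iam_configuration.uniform_bucket_level_access_enabled = True
bucket.patch()
```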
```config from cloud.resource where cloud.type = 'aws' and api.name = 'aws-msk-cluster' AND json.rule = state equal ignore case active and encryptionInfo.encryptionInTransit.clientBroker contains PLAINTEXT or encryptionInfo.encryptionInTransit.inCluster is false```
AWS MSK cluster encryption in transit is not enabled This policy identifies AWS Managed Streaming for Apache Kafka clusters having in-transit encryption in a disabled state. In-transit encryption secures data while it's being transferred between brokers. Without it, there's a risk of data interception during transit. It is recommended to enable in-transit encryption among brokers within the cluster. This ensures that all data exchanged within the cluster is encrypted, effectively protecting it from potential eavesdropping and unauthorized access. This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable in-transit encryption both within the cluster and client broker communication has to be configured with TLS.\n\nTo enable TLS encryption for client-broker communication, follow the below steps:\n1. Sign in to the AWS Management Console and open the Amazon MSK console at https://console.aws.amazon.com/msk/.\n2. On the navigation menu, choose 'Clusters', and select the MSK cluster for which you want to enable or edit in-transit encryption.\n3. Under the 'Actions' dropdown, select 'Edit security settings'. \n4. Under 'Encryption', please uncheck the 'Plaintext' option and make sure the 'TLS encryption' option is selected for  'Between clients and brokers' encryption configuration.\n5. Click on 'Update' to save changes.\n\nEnabling TLS encryption for within-cluster communication involves creating a new cluster. To create a new cluster, please follow the below steps:\n1. Sign in to the AWS Management Console and open the Amazon MSK console at https://console.aws.amazon.com/msk/.\n2. On the navigation menu, choose 'Clusters', then select 'Create cluster'.\n3. Under the 'Create Cluster' page, please configure the cluster as per the requirements.\n4. At Step 3, under 'Encryption', select 'TLS encryption' for the 'Between clients and brokers' checkbox.\n5. Select 'TLS encryption' for the 'Within the cluster' checkbox.\n6. After providing the required configuration in the remaining steps, Under step 5, click on 'Create cluster'..
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-rds-describe-db-instances' AND json.rule = dbinstanceStatus contains available and dbclusterIdentifier does not exist and deletionProtection is false```
AWS RDS instance delete protection is disabled This policy identifies RDS instances for which delete protection is disabled. Enabling delete protection for these RDS instances prevents irreversible data loss resulting from accidental or malicious operations. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to the AWS Console\n2. In the console, select the specific region from the region drop-down on the top right corner, for which the alert is generated.\n3. Navigate to the Amazon RDS dashboard\n4. Click on the DB instances\n5. Select the reported DB instance\n6. Click on the 'Modify' button\n7. On the Modify DB instance page, in the 'Additional configuration' section, check the box 'Enable deletion protection' for Deletion protection..
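A minimal boto3 sketch of the same change follows; the region and DB instance identifier are placeholder assumptions.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Assumed placeholder instance identifier; enabling deletion protection does not
# require downtime.
rds.modify_db_instance(
    DBInstanceIdentifier="my-db-instance",
    DeletionProtection=True,
    ApplyImmediately=True,
)
```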
```config from cloud.resource where cloud.type ='aws' and api.name = 'aws-iam-get-credential-report' AND json.rule = 'password_enabled is true and (access_key_1_active is true or access_key_2_active is true)'```
AWS IAM user has both Console access and Access Keys This policy identifies IAM users who have both Console access and Access Keys. When an IAM user is created, the Administrator can assign either Console access or Access Keys or both. Ideally the Console access should be assigned to Users and Access Keys for system / API applications, but not both to the same IAM user. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['KEYS_AND_SECRETS']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS console and navigate to the 'IAM' service.\n2. Identify the reported IAM user.\n3. In 'Security credentials' tab check for presence of Access Keys.\n4. Based on the requirement and company policy, either delete the Access Keys or remove the Console access for this IAM user..
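A small boto3 sketch for detecting the combination and deactivating the keys is shown below; the user name is a placeholder assumption, and whether to deactivate keys or remove console access should follow your company policy.

```python
import boto3

iam = boto3.client("iam")
user_name = "example-user"  # assumed placeholder

# A login profile exists only if the user has console access (a password).
try:
    iam.get_login_profile(UserName=user_name)
    has_console_access = True
except iam.exceptions.NoSuchEntityException:
    has_console_access = False

keys = iam.list_access_keys(UserName=user_name)["AccessKeyMetadata"]

# If the user has both console access and access keys, deactivate the keys here;
# alternatively, remove the console password instead, per your policy.
if has_console_access and keys:
    for key in keys:
        iam.update_access_key(
            UserName=user_name, AccessKeyId=key["AccessKeyId"], Status="Inactive"
        )
```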
```config from cloud.resource where api.name = 'aws-cloudwatch-log-group' as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' AND json.rule = keyMetadata.keyManager does not equal CUSTOMER or (keyMetadata.keyManager equals CUSTOMER and keyMetadata.keyState equals Disabled) as Y; filter '($.X.kmsKeyId does not exist ) or ($.X.kmsKeyId exists and $.X.kmsKeyId equals $.Y.keyMetadata.arn)'; show X;```
AWS CloudWatch Log groups not encrypted by Customer Managed Key (CMK) This policy identifies AWS CloudWatch Log groups that are encrypted using the default KMS key instead of CMK (Customer Managed Key) or using a CMK that is disabled. A CloudWatch Log Group is a collection of log streams that share the same retention, monitoring, and access control settings. Encrypting with a Customer Managed Key (CMK) provides additional control over key rotation, management, and access policies compared to the default encryption. As a security best practice, using CMK to encrypt your CloudWatch Log Groups is advisable as it gives you full control over the encrypted data. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To change the encryption key for an AWS CloudWatch Log group:\n\nUse the associate-kms-key command as follows:\n\naws logs associate-kms-key --log-group-name <Loggroup name that is reported> --kms-key-id <KMS CMK KEY ARN>\n\nNote: When using customer-managed CMKs to encrypt AWS CloudWatch Log groups, ensure authorized entities have access to the key and its associated operations..
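Equivalent to the CLI command above, the association can be done with boto3; the log group name and key ARN below are placeholder assumptions, and the key policy must allow the CloudWatch Logs service principal to use the key.

```python
import boto3

logs = boto3.client("logs", region_name="us-east-1")

# Assumed placeholders: log group name and a customer-managed KMS key ARN.
logs.associate_kms_key(
    logGroupName="/aws/example/app",
    kmsKeyId="arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
)
```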
```config from cloud.resource where cloud.type = 'azure' and api.name= 'azure-vm-list' AND json.rule = powerState contains "PowerState/running" and ['properties.networkProfile'].['networkInterfaces'][*].['publicIpAddressId'] exists and ['properties.diagnosticsProfile'].['bootDiagnostics'].['enabled'] is true```
Azure Virtual machine configured with public IP and serial console access This policy identifies Azure Virtual machines with public IP configured with serial console access (via Boot diagnostic setting). The Microsoft Azure serial console feature provides access to a text-based console for virtual machines (VMs) running either Linux or Windows. Serial Console connects to the ttyS0 or COM1 serial port of the VM instance, providing access independent of the network or operating system state. An attacker can leverage a serial-console-enabled virtual machine with an assigned public IP for remote code execution and privilege escalation. It is recommended to restrict public access to the reported virtual machine and disable/restrict serial console access. NOTE: Serial Console can be disabled for an individual Virtual machine instance by boot diagnostics only. This is applicable to azure cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To disable/restrict serial console access on the reported VM instance, follow the below URL:\nhttps://learn.microsoft.com/en-us/troubleshoot/azure/virtual-machines/serial-console-enable-disable.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-projects-get-iam-user' AND json.rule = user contains iam.gserviceaccount.com AND (roles[*] contains admin or roles[*] contains Admin or roles[*] contains roles/editor or roles[*] contains roles/owner)```
GCP IAM Service account has admin privileges This policy identifies service accounts which have admin privileges. Applications use the service account to make requests to a service's Google API so that users aren't directly involved. It is recommended not to use admin access for ServiceAccount. This is applicable to gcp cloud and is considered a medium severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to IAM & admin (left panel)\n3. Choose the reported member and click on the edit icon\n4. Delete the Admin role and provide an appropriate role according to the requirement.\n5. Click Save.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-sql-instances-list' AND json.rule = state equals RUNNABLE and databaseVersion contains POSTGRES and ( settings.databaseFlags[?any( name equals "log_statement" )] does not exist or settings.databaseFlags[?any( name equals "log_statement" and value equals "all" or value equals "none" )] exists)```
GCP PostgreSQL instance database flag log_statement is not set appropriately This policy identifies PostgreSQL database instances in which the database flag log_statement is not set appropriately. If log_statement is not set to an appropriate value, too many or too few statements may be logged. Setting log_statement to align with your organization's security and logging policies facilitates later auditing and review of database activities. It is recommended to choose an appropriate value (ddl or mod) for the flag log_statement. This is applicable to gcp cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate to the SQL Instances page\n3. Click on the reported PostgreSQL instance\n4. Click EDIT\n5. If the flag has not been set on the instance, \nUnder 'Customize your instance', click on 'ADD FLAG' in the 'Flags' section, choose the flag 'log_statement' from the drop-down menu and set the value as ddl or mod\nOR\nIf the flag has been set to other than ddl or mod, under 'Customize your instance', in the 'Flags' section choose the flag 'log_statement' and set the value as ddl or mod\n6. Click on 'DONE' and then 'SAVE'.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-projects-get-iam-user' AND json.rule = '(roles[*] contains roles/iam.serviceAccountActor) or (roles[*] contains roles/iam.serviceAccountUser) or (roles[*] contains roles/iam.serviceAccountTokenCreator)'```
GCP IAM user with service account privileges Checks to ensure that IAM users don't have service account privileges. Adding any user as service account actor will enable these users to have service account privileges. Adding only authorized corporate IAM users as service account actors will make sure that your information is secure. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['HIGH_PRIVILEGED_ROLE']. Mitigation of this issue can be done as follows: 1. Login to GCP Portal\n2. Go to IAM & Admin (Left Panel)\n3. Select IAM \n4. From the list of users, identify the users with Service Account Actor, Service Account User or Service Account Token Creator roles\n5. Remove these user roles by clicking on Delete icon for any unauthorized user.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-msk-cluster' AND json.rule = brokerNodeGroupInfo.connectivityInfo.publicAccess.type does not equal "DISABLED"```
AWS MSK cluster public access is enabled This policy identifies Amazon Managed Streaming for Apache Kafka (Amazon MSK) clusters that are configured with public access enabled. Amazon MSK provides the capability to enable public access to the brokers of MSK clusters. When the AWS MSK cluster is configured for public access, there is a potential risk of data being exposed to the public. To mitigate the risk of unauthorized access and to adhere to compliance requirements, it is advisable to disable public access on the AWS MSK cluster. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Sign in to the AWS Management Console, and open the Amazon MSK console at https://console.aws.amazon.com/msk/home.\n2. In the Navigation panel, select 'Clusters' under the 'MSK Clusters' section.\n3. Click on the cluster that is reported.\n4. Choose the 'Properties' tab.\n5. In the 'Network settings' section, click on the 'Edit' dropdown.\n6. Choose 'Edit public access'.\n7. In the 'Edit public access' dialog, uncheck the 'Public access' checkbox to disable public access.\n8. Click 'Save changes' to apply the changes..
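A minimal boto3 sketch for turning off public access follows; the region and cluster ARN are placeholder assumptions, and the cluster's current version must be fetched first.

```python
import boto3

kafka = boto3.client("kafka", region_name="us-east-1")

# Assumed placeholder cluster ARN; the current cluster version is required
# by the update call.
cluster_arn = "arn:aws:kafka:us-east-1:123456789012:cluster/example/uuid"
current_version = kafka.describe_cluster(ClusterArn=cluster_arn)["ClusterInfo"]["CurrentVersion"]

# Disable public access to the brokers.
kafka.update_connectivity(
    ClusterArn=cluster_arn,
    CurrentVersion=current_version,
    ConnectivityInfo={"PublicAccess": {"Type": "DISABLED"}},
)
```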
```config from cloud.resource where api.name = "aws-ec2-describe-instances" AND json.rule = architecture contains "foo"```
API automation policy pkgmu This is applicable to aws cloud and is considered a medium severity issue. Sample categories of findings relevant here are []. Mitigation of this issue can be done as follows: N/A.
```config from cloud.resource where cloud.type = 'gcp' AND api.name = 'gcloud-vertex-ai-workbench-instance' AND json.rule = state equals "ACTIVE" AND shieldedInstanceConfig.enableSecureBoot is false```
GCP Vertex AI Workbench Instance has Secure Boot disabled This policy identifies GCP Vertex AI Workbench instances with Secure Boot disabled. Secure Boot is a security feature that ensures only trusted, digitally signed software runs during the boot process, protecting against advanced threats such as rootkits and bootkits. By verifying the integrity of the bootloader and operating system, Secure Boot prevents unauthorized software from compromising the system at startup. Without Secure Boot, instances are vulnerable to persistent malware and unauthorized code that could compromise the system deeply. It is recommended to enable Secure Boot for Vertex AI Workbench instances. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Login to the GCP console\n2. Navigate to the 'Vertex AI' service\n3. In the side panel, under 'Notebooks', go to 'Workbench'\n4. Under the 'INSTANCES' tab, select 'VIEW' as 'INSTANCES'\n5. Click on the alerting instance\n6. Under the 'SYSTEM' tab, in front of 'VM details', click on the 'View in Compute Engine' link\n7. Stop the VM by clicking on the 'STOP' button. Click the 'STOP' button on the confirmation dialogue.\n8. Once the VM has been stopped, click on the 'EDIT' button\n9. Under 'Shielded VM', enable 'Turn on Secure Boot'\n10. Click on 'Save'\n11. Click on 'START/RESUME' from the top menu.
```config from cloud.resource where cloud.type = 'aws' AND api.name='aws-rds-describe-db-instances' AND json.rule= 'engine is not member of ("sqlserver-ex") and dbinstanceStatus equals available and dbiResourceId does not equal null' as X; config from cloud.resource where api.name = 'aws-kms-get-key-rotation-status' as Y; filter '$.X.storageEncrypted does not exist or $.X.storageEncrypted is false or ($.X.storageEncrypted is true and $.X.kmsKeyId exists and $.Y.keyMetadata.keyState equals Disabled and $.X.kmsKeyId equals $.Y.keyMetadata.arn)'; show X;```
AWS RDS instance is not encrypted This policy identifies AWS RDS instances which are not encrypted. Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up and manage databases. Amazon allows customers to turn on encryption for RDS which is recommended for compliance and security reasons. This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: Amazon RDS instance can only be encrypted at the time of DB instance creation. So to resolve this alert, create a new DB instance with encryption and then migrate all required DB instance data from the reported DB instance to this newly created DB instance.\nTo create RDS DB instance with encryption, follow the instructions mentioned in below reference link based on your Database vendor:\nhttp://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html.
```config from cloud.resource where cloud.type = 'aws' AND api.name= 'aws-s3api-get-bucket-acl' AND json.rule = 'policyAvailable is true and denyUnencryptedUploadsPolicies[*] is empty and sseAlgorithm equals None'```
AWS S3 buckets do not have server side encryption Customers can protect the data in S3 buckets using the AWS server-side encryption. If the server-side encryption is not turned on for S3 buckets with sensitive data, in the event of a data breach, malicious users can gain access to the data. NOTE: Do NOT enable this policy if you are using 'Server-Side Encryption with Customer-Provided Encryption Keys (SSE-C).' This is applicable to aws cloud and is considered a low severity issue. Sample categories of findings relevant here are ['UNENCRYPTED_DATA']. Mitigation of this issue can be done as follows: 1. Login to the AWS Console and navigate to the 'S3' service\n2. Click on the reported S3 bucket\n3. Click on the 'Properties' tab\n4. Under the 'Default encryption' section, choose encryption option either AES-256 or AWS-KMS based on your requirement.\nFor more information about Server-side encryption,\nDefault encryption:\nhttps://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-encryption.html\nPolicy based encryption:\nhttps://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html.
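Default encryption can also be set with boto3; this sketch uses SSE-S3 with a placeholder bucket name, and SSE-KMS can be used instead by switching the algorithm and supplying a key ID.

```python
import boto3

s3 = boto3.client("s3")

# Assumed placeholder bucket name; SSE-S3 (AES256) is shown. For SSE-KMS, set
# SSEAlgorithm to "aws:kms" and add a "KMSMasterKeyID" entry.
s3.put_bucket_encryption(
    Bucket="my-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)
```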
```config from cloud.resource where api.name = 'gcloud-kms-crypto-keys-list' AND json.rule = primary.state does not equal "ENABLED" as X; config from cloud.resource where api.name = 'gcloud-storage-buckets-list' as Y; filter ' $.X.name equals $.Y.encryption.defaultKmsKeyName'; show Y;```
GCP Storage bucket using a disabled CMEK This policy identifies GCP Storage buckets that are using a disabled CMEK. CMEK (Customer-Managed Encryption Keys) for GCP buckets allows you to use your own encryption keys to secure data stored in Google Cloud Storage. If a CMEK defined for a GCP bucket is disabled, the data in that bucket becomes inaccessible, as the encryption keys are no longer available to decrypt the data. This can lead to data loss and operational disruption. If not properly managed, CMEK can also introduce risks such as accidental key deletion or mismanagement, which could compromise data availability and security. It is recommended to review the state of CMEK and enable it to keep the data in the bucket accessible. This is applicable to gcp cloud and is considered a low severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: 1. Log in to GCP console\n2. Navigate to the Cloud Storage Buckets page\n3. Click on the reported bucket\n4. Go to 'Configuration' tab\n5. Under 'Default encryption key', click on the key name\n6. Select the appropriate key version\n7. Click 'ENABLE' and then click 'ENABLE' in the pop-up.
```config from cloud.resource where cloud.type = 'aws' AND api.name = 'aws-step-functions-statemachine' AND json.rule = loggingConfiguration.level does not equal ignore case "ALL"```
AWS Step Function state machines logging disabled This policy identifies AWS Step Function state machines with logging disabled. AWS Step Functions uses state machines to define and execute workflows that coordinate the components of distributed applications and microservices. Step Functions logs state machine executions to Amazon CloudWatch Logs for debugging and monitoring purposes. It is recommended to enable logging on the Step Function state machine to maintain reliability, availability, and performance. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable logging on the Step Function state machine, follow the below steps:\n\n1. Log into the AWS console and navigate to the Step Function dashboard\n2. On the state machine page, select the reported state machine\n3. Click on 'Edit'\n4. Select 'Config' to edit the configuration\n5. Under the 'Config' tab, under the 'Logging' section, set 'Log level' to 'ALL'\n6. Click on 'Save'..
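A minimal boto3 sketch of the same configuration change follows; the state machine ARN and log group ARN are placeholder assumptions, and the state machine's execution role is assumed to already have permission to deliver logs to CloudWatch Logs.

```python
import boto3

sfn = boto3.client("stepfunctions", region_name="us-east-1")

# Assumed placeholders: state machine ARN and an existing CloudWatch log group ARN.
# The destination log group ARN is expected to end with ":*".
sfn.update_state_machine(
    stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:example",
    loggingConfiguration={
        "level": "ALL",
        "includeExecutionData": False,
        "destinations": [
            {
                "cloudWatchLogsLogGroup": {
                    "logGroupArn": "arn:aws:logs:us-east-1:123456789012:log-group:/aws/vendedlogs/states/example:*"
                }
            }
        ],
    },
)
```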
```config from cloud.resource where api.name = 'aws-connect-instance' AND json.rule = InstanceStatus equals "ACTIVE" and attributes[?any( AttributeType equals "CONTACTFLOW_LOGS" and Value equals "false" )] exists```
AWS Connect instance not configured with contact flow logs This policy identifies Amazon Connect instances configured with CONTACTFLOW_LOGS set to false. Enabling CONTACTFLOW_LOGS in Amazon Connect is crucial, as it allows real-time logging of contact flow executions to CloudWatch. This helps debug, monitor, and optimize customer interactions by tracking steps, conditions, and errors. It is recommended to enable CONTACTFLOW_LOGS to enhance monitoring and ensure adherence to security policies and regulations. This is applicable to aws cloud and is considered an informational severity issue. Sample categories of findings relevant here are ['MISCONFIGURATION']. Mitigation of this issue can be done as follows: To enable logging for an AWS Connect instance, perform the following actions:\n1. Sign in to the AWS console and open the Amazon Connect console at https://console.aws.amazon.com/connect/.\n2. On the instances page, choose the instance alias that is reported.\n3. In the navigation pane, choose 'Flows'.\n4. Navigate to the Flow logs section, select 'Enable Flow logs', and choose 'Save'.\nNote: Logs are generated only for flows that include a 'Set logging behavior' block with logging set to enabled..
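A minimal boto3 sketch follows; the region and instance ID are placeholder assumptions (the instance ID is the Connect instance's UUID, not its alias).

```python
import boto3

connect = boto3.client("connect", region_name="us-east-1")

# Assumed placeholder instance ID; enables contact flow logging for the instance.
connect.update_instance_attribute(
    InstanceId="11111111-2222-3333-4444-555555555555",
    AttributeType="CONTACTFLOW_LOGS",
    Value="true",
)
```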