Dataset columns: Response (string), Instruction (string), Prompt (string)
To send a mail if an ADF pipeline runs for more than 10 minutes, use a Wait activity in parallel with your main pipeline and set its wait time to 600 seconds (10 minutes). After that, get the status of the pipeline run through the REST API using a Web activity. URL: https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DataFactory/factories/{factoryName}/pipelineruns/@{pipeline().RunId}?api-version=2018-06-01 Then, in an If activity, check whether the status of the pipeline run is In Progress with the expression @equals(activity('Web1').output.value.status,'In progress') and, if so, send the mail using a Web activity (Logic App). Refer to this SO answer for more understanding of calling the REST API with a Web activity.
I need to send an email whenever my pipeline runs longer than 10 minutes. For sending the mail I am using a Logic App, but how do I check whether the run time exceeds 10 minutes? How can we check and do this in the pipeline itself? Can anyone kindly help me with this?
How to send mail if an ADF pipeline runs for more than 10 minutes
To address my issue, I added the 'key' as 'my-build' in the cURL command and successfully triggered a 'SUCCESSFUL' build status update for the 'Build key - description' entry with a general description. The update was made to the Bitbucket repository using the API call shown in the command below: curl -X POST -is -u <USER_NAME>:<APP_PASSWORD> \ -H 'Content-Type: application/json' \ https://api.bitbucket.org/2.0/repositories/<USER_NAME>/<REPOSITORY_SLUG>/commit/<COMMIT>/statuses/build \ -d '{ "key": "my-build", "state": "SUCCESSFUL", "name": "Build key - description", "url": "https://<AZURE_URL_REFERENCE>", "description": "A general description" }'
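For anyone scripting this step instead of calling curl directly, here is a minimal sketch of the same request using Python's requests library; the placeholder username, app password, repository slug and commit hash are assumptions and must be replaced with real values.

import requests

# Placeholders below are assumptions; substitute your own values.
USER_NAME = "<USER_NAME>"
APP_PASSWORD = "<APP_PASSWORD>"
REPO_SLUG = "<REPOSITORY_SLUG>"
COMMIT = "<COMMIT>"

url = (
    f"https://api.bitbucket.org/2.0/repositories/"
    f"{USER_NAME}/{REPO_SLUG}/commit/{COMMIT}/statuses/build"
)

payload = {
    "key": "my-build",                 # unique key so later updates overwrite this status
    "state": "SUCCESSFUL",             # e.g. INPROGRESS / SUCCESSFUL / FAILED
    "name": "Build key - description",
    "url": "https://<AZURE_URL_REFERENCE>",
    "description": "A general description",
}

# Basic auth with a Bitbucket app password, same as the curl call above
response = requests.post(url, json=payload, auth=(USER_NAME, APP_PASSWORD))
response.raise_for_status()
print(response.status_code, response.json().get("state"))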
How can I update the build status from Azure Pipeline to Bitbucket with a custom message and custom values? I've tried below API, but they didn't update the build status as expected. My goal is to see the build status reflected in commits and pull requests in Bitbucket.curl -X POST -is -u <USER_NAME>:<APP_PASSWORD> \ -H 'Content-Type: application/json' \ https://api.bitbucket.org/2.0/repositories/<USER_NAME>/<REPOSITORY_SLUG>/commit/<COMMIT>/statuses/build \ -d '{ "state": "SUCCESSFUL", "name": "Build key - description", "url": "https://<AZURE_URL_REFERENCE>", "description": "A general description" }'What Do I Need to Do to Update Bitbucket Build Status from Azure Pipeline with Custom Message and Values
How to Update Bitbucket Build Status from Azure Pipeline with Custom Message and Values?
To upsert multiple tables in ADF, you need to get the keys of the respective tables. Here I am using the primary key as the key to upsert, and I am using the below query to get the list of tables and their primary keys: select C.* FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS T JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE C ON C.CONSTRAINT_NAME=T.CONSTRAINT_NAME; These are my tables (sample1 and sample2). Give the lookup array to a ForEach activity. I have source csv files with the same names as the table names, so I have used dataset parameters for the filename and given an expression like @concat(item().TABLE_NAME,'.csv') for the filename. In the sink as well, I have used dataset parameters for the schema name and table name. The Key columns dynamic content only accepts an array, so give the key column as an array: @createArray(item().COLUMN_NAME). Execute the pipeline and you can see the table values were upserted in both sample1 and sample2.
I have multiple tables, and I want to create a key column for each table that will be used dynamically for upsert. This is a preview of a list of all tables. In all these tables, we need to create key columns that will be used as the key columns for upsert operations.
Azure DataFactory Foreach Copy Upsert, how to create and use key column
I think you're looking for this: https://marketplace.visualstudio.com/items?itemName=Veracode.veracode-vsts-build-extension
I need to use the 'Veracode Update and Scan' task inside an ADO pipeline. Unfortunately the task is not available in the Marketplace. Is the task public, do I need to contact Veracode support, or is there more guidance? I did try to find out whether the task was deprecated by Microsoft but couldn't find anything. There is also nothing related to Veracode in the Microsoft documentation about ADO tasks: https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/reference/?view=azure-pipelines
Is the Veracode Update and Scan task available in the ADO Pipeline marketplace?
According to the docs, the @Aggregation annotation interface exists since version 2.2. So it seems you can't use it with version 1.9.3.RELEASE.
The import org.springframework.data.mongodb.repository.Aggregation; is not resolved for <dependency> <groupId>org.springframework.data</groupId> <artifactId>spring-data-mongodb</artifactId> <version>1.9.3.RELEASE</version> </dependency> I tried <dependency> <groupId>org.springframework.data</groupId> <artifactId>spring-data-mongodb</artifactId> <version>3.4.15</version> </dependency> but my production uses 1.9.3.RELEASE.
spring-data-mongodb dependency for @Aggregation annotation for aggregation pipeline
I had to remove the check from Shallow Fetch in the Pipeline Configuration.
I do get following error when running a release pipeline with maven-release plugin in background:Starting: Reattach detached head ============================================================================== Task : Bash Description : Run a Bash script on macOS, Linux, or Windows Version : 3.227.0 Author : Microsoft Corporation Help : https://docs.microsoft.com/azure/devops/pipelines/tasks/utility/bash ============================================================================== Generating script. Script contents: git checkout main ========================== Starting Command Output =========================== /usr/bin/bash /agent/_work/_temp/b44c1c2b-b722-4d90-a662-570bb11df3cb.sh error: pathspec 'main' did not match any file(s) known to git ##[error]Bash exited with code '1'. Finishing: Reattach detached head
Azure pipeline - error: pathspec 'main' did not match any file(s) known to git
Try this: stages: - my_stage workflow: rules: - if: '$CI_PIPELINE_SOURCE == "merge_request_event"' my_stage_mr: stage: my_stage script: - echo "running only when changes are in resources folder and on merge request" rules: - changes: - src/test/resources/*
I have a problem with my GitLab pipeline - the stage my_stage runs even if I commit files to the src/main/java folder. What I want to achieve is to run it only on a merge request AND only if changes were made to src/test/resources. stages: - my_stage my_stage_mr: stage: my_stage image: myimage-11-slim rules: - if: '$CI_PIPELINE_SOURCE == "merge_request_event"' changes: - src/test/resources/*
Gitlab stage runs despite rules condition forbids
WHY is it that whenever I post a question here after searching extensively for an answer, about seven times in ten I then almost immediately find that answer? So embarrassing...Nevertheless, I DID find an answer - and it's such a good solution that I wanted to share it:https://bgolus.medium.com/rendering-a-sphere-on-a-quad-13c92025570c#bac5
Let's say I want to render a perfect sphere or ellipsoid in Unity 3D - one with a centre point and one or two radii, rather than a mesh. How would I need to go about this? Would I need to make a custom render pipeline, or is there a way to accomplish it with shaders?The reason I ask is that I'd like to replicate the style of the classic adventure game Ecstatica, which combines both traditional 3D objects and mathematical ellipsoids. To get maximum effect the ellipses have to be super-smooth and intersect each other perfectly. They also have to intersect with regular polygonal meshes.n.b. I'm not asking for complex mathematics at the moment, I can probably accomplish that myself, I'm just wondering what my starting point should be?
Rendering perfect spheres and ellipsoids in Unity
One option is to partition your input files by date/time and store each partition in a separate directory. Then set the input file path in your File source -> BigQuery sink pipeline as a macro parameter, and pass in the file path as a runtime argument. For example, you might have a file structure like the following: input_files/2023-08-26/ file1.csv, file2.csv,... input_files/2023-08-27/ fileA.csv, fileB.csv,... For the first run, pass input_files/2023-08-26 as the runtime argument for the file path. On the next run pass input_files/2023-08-27, and so on. A small sketch of computing that daily path is shown below.
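As an illustration only, here is a minimal Python sketch of how a wrapper or scheduler script might compute the dated path before triggering the pipeline; the bucket name and the runtime-argument name are assumptions, and the call that actually starts the pipeline is left out.

from datetime import date, timedelta

BASE_PATH = "gs://my-bucket/input_files"   # hypothetical bucket; adjust to your layout

def partition_path(run_date: date) -> str:
    # Builds e.g. gs://my-bucket/input_files/2023-08-26
    return f"{BASE_PATH}/{run_date.isoformat()}"

# Typical daily run: process yesterday's partition
runtime_args = {"input.path": partition_path(date.today() - timedelta(days=1))}
print(runtime_args)  # pass these as runtime arguments when starting the pipeline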
We have requirement to create incremental pipeline for BigQuery datasets. How can we create incremental pipeline in google cloude data fusion?FYI: We have google csv file as data source.Also want to check records exists in to BigQuery table before insert in google cloud data fusion pipeline.
Incremental pipeline in google cloud data fusion
Finally, I got the solution after days of digging and useless meetings with Microsoft support team.Solution: What I found is that I set deploymentMethod as 'auto' in my pipeline task, which makes my application read-only after deployment on Azure AppService and limits the app size to 29.9 MBs while the actual size was 156 MBs. So, I changed the deploymentMethod to 'zipDeploy' which deploy every file and folder of my application on webapp Azure with actual app size of 156 MBs that fixed the problem.I am sharing this solution so it might save someone's several hours.
My ASP.NET Core MVC application deploys properly using a pipeline to an Azure App Service that has deployment slots, but when I try to access the app in my browser, I receive a "service unavailable" error message. When I stop and start the App Service the error goes away, but the weird part is that deployment from Visual Studio works perfectly without the need to restart the App Service. I have tried stopping and starting the App Service from the pipeline but that also does not work; a manual restart is required after deployment from the pipeline. My CI/CD YAML pipeline is multistage and deploys the app to deployment slots. I am stuck here; can someone please help me out?
Service Unavailable Error on Deployment from Azure DevOps Pipeline
In Settings > Repository > Push Rules make sure that "Reject unverified users" is not checked. I've had this happen before in repositories that have had a lot of team turnover. Then a new person comes on board and no one can figure out why the "new fellow" can't commit.
I have a project with a build and deploy pipeline on GitLab CI/CD. I can run it manually and also on commit but my team members in my organization cannot. They get an error: "The pipeline failed due to the user not being verified" and GitLab told them that they have to use a debit or credit card. Is there any possibility to do this another way?I tried to create a Deploy key in settings -> repository but it didn't help.
Gitlab CI pipeline failed because of user not authorized
I found the workaround. This snippet can be used in the pipeline to achieve the same. # NOTE: Some lines in the output were missing while using this task. # Write terraform show output in default format to a markdown file - task: TerraformTaskV4@4 name: TerraformShow displayName: Terraform Show inputs: provider: 'azurerm' environmentServiceNameAzureRM: $(sctfbackend) command: 'show' commandOptions: 'tfplan -no-color' outputTo: 'file' outputFormat: 'default' fileName: '$(working_dir)/TerraformPlan.md' workingDirectory: $(working_dir) # Display plan in the pipeline build summary - task: Bash@3 displayName: Show plan summary inputs: targetType: 'inline' workingDirectory: '$(working_dir)' script: | ls -la sed -i '1 i\```' TerraformPlan.md echo '```' >> TerraformPlan.md echo "##vso[task.uploadsummary]$(working_dir)/TerraformPlan.md" The first step is to get the plan file using -out, then show the plan with the -no-color flag to avoid the coloring characters in the output and write it to a markdown file. Enclosing the content of that file in a markdown code block provides much cleaner formatting. Then publish the file to the pipeline summary.
Some Terraform extensions like this one provide a feature to publish the Terraform plan to a tab in the pipeline run overview/summary. The Terraform task published by Microsoft DevLabs does not currently support this feature. I'm trying to make use of the pipeline logging command uploadsummary as a workaround for this. # Write terraform show output in default format to a markdown file - task: TerraformTaskV4@4 name: TerraformShow displayName: Terraform Show inputs: provider: 'azurerm' environmentServiceNameAzureRM: $(sctfbackend) command: 'show' commandOptions: 'tfplan -no-color' outputTo: 'file' outputFormat: 'default' fileName: '$(working_dir)/TerraformPlan.md' workingDirectory: $(working_dir) # Display plan in the pipeline build summary - task: Bash@3 displayName: Show plan summary inputs: targetType: 'inline' workingDirectory: '$(working_dir)' script: | echo "##vso[task.uploadsummary]$(working_dir)/TerraformPlan.md" But it reads the input file as Markdown, so the formatting of the output looks ugly in the tab: the # in the plan output is taken as a heading 1 in Markdown, + is taken as a list bullet, etc., as shown below. How can I fix this to get a cleaner Terraform plan tab in the pipeline summary?
How to write terraform plan to a tab in azure devops pipeline summary?
In order to achieve your requirement, you can use two copy activities in a single pipeline to copy data from the two sources to their respective sinks, and add a trigger to that pipeline for two different times. Create a pipeline with two copy activities, one for each set of data you want to copy, and configure each copy activity to copy data from the appropriate source to the appropriate destination. Then create a trigger for the pipeline: click Add trigger in the pipeline and select New/Edit. In the trigger settings, set the type as a "schedule" trigger, set the recurrence to "Every 1 Day", set the start date to the first time you want the pipeline to run, and then add two schedules under "Execute at these times". Click OK. Make sure to publish the pipeline for the trigger to be activated.
I need to run two set of data taken from database in single azure data factory pipeline at 2 different times in a day using single triggeri want to run the pipeline run twice a day, for 2 different data in 2 different times a day
i need to run two set of data in single adf pipeline at 2 different times using single trigger
Please check the notebook here, which shows how to add a custom handler and package it using the torch model archiver. The steps are: create a Torch custom handler to add pre/post-processing logic alongside the prediction logic, then run torch-model-archiver with the custom handler as an additional dependency: torch-model-archiver -f \ --model-name=model \ --version=1.0 \ --serialized-file=$LOCAL_TRAINED_MODEL_DIRECTORY/pytorch_model.bin \ --handler=$PREDICTOR_DIRECTORY/custom_handler.py \ --extra-files ... \ --export-path=$ARCHIVED_MODEL_PATH
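To give an idea of what custom_handler.py might contain, here is a minimal sketch of a TorchServe-style handler built on ts.torch_handler.base_handler.BaseHandler; the preprocessing, the input layout, and the 10-class assumption are placeholders that would have to match the actual model from the question.

import torch
from ts.torch_handler.base_handler import BaseHandler

class CustomHandler(BaseHandler):
    """Hypothetical handler: turns raw request data into tensors and logits into labels."""

    def preprocess(self, data):
        # Each request item arrives under "data" or "body"; shape it into a float tensor.
        rows = [row.get("data") or row.get("body") for row in data]
        return torch.as_tensor(rows, dtype=torch.float32)

    def inference(self, inputs):
        # self.model and self.device are populated by BaseHandler.initialize()
        with torch.no_grad():
            return self.model(inputs.to(self.device))

    def postprocess(self, outputs):
        # Return the predicted class index per request item (assumed 10-class output).
        return outputs.argmax(dim=1).tolist()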
Here is my modelclass NeuralNetwork(nn.Module): def __init__(self): super().__init__() self.flatten = nn.Flatten() self.linear_relu_stack = nn.Sequential( nn.Linear(28 * 28, 512, dtype=torch.float), nn.ReLU(), nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10) ) def forward(self, x): x = self.flatten(x) logits = self.linear_relu_stack(x) return logitsI am following this repo for crate pipeline:https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/training/pytorch_gcs_data_training.ipynbI'm currently working on a project that involves deploying a PyTorch model using the "torch-model-archiver" tool. While I understand the basic process of using "torch-model-archiver" to create a .mar file, I'm facing some challenges with the default handler provided by the tool.I'm not quite sure how to implement this. Could someone please guide me on how to create a custom handler for "torch-model-archiver"? I want to generate a .mar file for my PyTorch model using this custom handler.
Creating a Custom Handler for "torch-model-archiver" to Generate a Model .mar File
I believe there are two problems to be solved and one doesn’t mean you don’t need the other.You can use dbt to go from raw data to relations ready for DS. At that point you can switch to tools and libraries suited for DS and ML.Finally, you will need to deploy that ML model and you will need tools for ML Ops.The reason I keep dbt in the mix is that I presume there are other use cases for the data and SQL along with the other features or dbt like DQ, docs, and lineage are better suited for a wider audience.
I am building a open source data stack for a large-scale batch pipeline. The data is later to be used in a ML model that is updated quarterly.I want to use Airbyte for ingestion and Airflow for generel orchestration.In general, I want to use modern open source software but I ran into some issues in regards to what to choose for data storage and transformation. First, I thought I might take Cassandra and PySpark but I read in several sources that they are not really compatible or it requires some effort. Then I thought about going with something like dbt and Postgres. But dbt seems only a good choice when it comes to Analytics and not for ML data. E.g. data enrichment is not really well done with SQL in dbt. Postgres might be a bad choice because if I end up with huge data volumes, Postgres will slow down and performance decreases.Are there any suggestions in regards to which tools I should use?
open source data stack - Airbyte, Airflow, ?,?
I suggest you don't do any overriding. You can edit the related pagelet models directly at the place where they are delivered by Intershop, that is, directly in the app_sf_responsive... cartridges. Passing call parameters is hard enough already, so don't bother with that overriding complexity. Product tiles are always rendered using waincludes (aka remote includes). That means another webadapter/application-server interaction is necessary to resolve this. There is a hidden feature which allows you to basically copy a query-parameter value (from the top-level request) into such a remote include. Insert this and adapt it to your liking in your component.product.productTile.pagelet2: <callParameterDefinitions xsi:type="pagelet:CallParameterDefinition" name="ParameterInside" type="java.lang.String" from="ParameterFromBrowserURL"/> This should propagate the value out of https://host/INTERSHOP/web/.../pipeline?ParameterFromBrowserURL=AAA into your tile. Make sure you switch ON the page cache and DO NOT test this inside Design View, as in both cases remote includes are treated as local includes (the copy does not happen there).
This question is related to this thread:Cannot Access Pipeline Dictionary Entry of View Pipeline in a ComponentI want to pass a pipeline parameter to a component which is referred to somewhere after a Tag. I don't really understand how this works. Do I have to override a viewcontext or just its interface?The ViewContext ist called via the tag previously mentioned, and I need a pipeline parameter in the component "component.product.productTile.pagelet2" - but I cannot understand what I need to override to pass my parameter along.Also I may have overridden the components in the wrong cartridge. I take it, that my override has to be in the app_sf_responsive_cm? Or do I hav eto override 2 things in different cartridges?Because the viewcontext ist "viewcontext.include.product.pagelet2" which resides in app_sf_responsive. So I am quite confused, really. If somebody could help me that would be great.I tried overriding the "component.product.productTile.pagelet2" and the "viewcontext.include.product.pagelet2" by adding the tag with my parameter as explained on the intershop documentation cookbook.
Intershop - pass call parameters to component
Please check the link below for help on this: https://psspavan96.medium.com/ci-cd-for-node-js-application-using-google-cloud-part-1-5f7466df913d You can configure the DB separately using Cloud SQL and link it to your image in k8s via a Service, ConfigMap and Secrets.
How can I containerize a Node.js application with a MySQL database and then deploy it on GKE using Bitbucket Pipeline and DockerHub? I need guidance on the steps and best practices to achieve this deployment successfully. Thanks in advance for any help from the community!I am facing difficulty while creating the pipeline and deployment.yml file for GKE
Nodejs Deployment on GKE
Maybe you need astartup probe?Sometimes, you have to deal with legacy applications that might require an additional startup time on their first initialization. In such cases, it can be tricky to set up liveness probe parameters without compromising the fast response to deadlocks that motivated such a probe. The trick is to set up a startup probe with the same command, HTTP or TCP check, with a failureThreshold * periodSeconds long enough to cover the worse case startup time.Example:ports: - name: liveness-port containerPort: 8080 hostPort: 8080 livenessProbe: httpGet: path: /healthz port: liveness-port failureThreshold: 1 periodSeconds: 10 startupProbe: httpGet: path: /healthz port: liveness-port failureThreshold: 30 periodSeconds: 10Thanks to the startup probe, the application will have a maximum of 5 minutes (30 * 10 = 300s) to finish its startup. Once the startup probe has succeeded once, the liveness probe takes over to provide a fast response to container deadlocks. If the startup probe never succeeds, the container is killed after 300s and subject to the pod's restartPolicySource:https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-startup-probes
I'm using kubernetes with helm. My github pipeline contains:- name: Verify deployment run: kubectl rollout status deployment/name-api --namespace my-prod --timeout=60sSometime this part fails with:Readiness probe failed: Get “https://...:80/”: dial tcp...:80: connect: connection refusedWhen I then re-run the Action, it often passes.What should I do against this?I've already tried adding the last line below indeployment.yamlin thetemplatesfolder, based on the answer athttps://stackoverflow.com/a/51932875. But that didn't help.spec: template: spec: containers: - name: {{ .Chart.Name }} … readinessProbe: initialDelaySeconds: 5
Kubernetes: Readiness probe failed
Found a hidden undocumented variable that can be used to check for failure of the current job.#!/usr/bin/env bash ## bamboo does not catch some script failures unless this flag is set set -e TASK="CHECK BUILD STATUS" echo ================== START $TASK ================== ## bamboo_jobFailed is a boolean BUILD_STATUS=${bamboo_jobFailed} echo "BUILD_STATUS_failed?: $BUILD_STATUS" text=$"@here Hi all! There will be a delay in the Superset data refresh today, we are looking into it :slightly_smiling_face: Thank you!" if [[ "$BUILD_STATUS" == "false" ]] then echo "the build is passing" else echo "the build has failed" curl -X POST -H 'Content-type: application/json' --data "{\"text\" : \"$text\"}" https://hooks.slack.com/services/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 fi echo =================== END $TASK ===================
I am trying to automate some slack Messages at the end of my daily Bamboo Pipeline runs.What I am trying to figure out is, if there is a way to check for the pipeline status and post a message to Slack reflecting that?i.e. if the pipeline fails I'd want a fail message, and vice versa.I haven't found a way to retrieve the status of the pipeline inside of the scriptAny ideas welcome.Regards
Bamboo Pipeline Slack Messaged depending on status
I tried with your expression like below and I got the same error. Update [dbo].[sample1] set [MaxId]='int(@{variables('MaxRowId')})' where name='Laddu'; Here, your expression will produce a query like the following in the script activity input: Update [dbo].[sample1] set [MaxId]='int(24)' where name='Laddu'; where 'int(24)' is a varchar value, and that is the cause of the error. To resolve this, modify your query as below and convert the string to an integer in the ADF expression using @{int(variables('MaxRowId'))}: Update [dbo].[sample1] set [MaxId]=@{int(variables('MaxRowId'))} where name='Laddu'; This produces the query you can see in the script activity input, and you can see my row was also updated in the table.
I have constructed an ADF pipeline with copy data to create a CSV from an Azure SQL DB query. It needs to run as a delta load each time, so I have the following...Lookup- To get the maximum row ID of the last run, which is stored in a table in the databaseSet Variable- To assign that value to a variable (which is a string as ADF doesn't have an int var)Copy Data- Runs the query and generates the CSV file in a storage blobLookup- To get the maximum row ID of the table I'm pulling the data fromSet Variable- To update the variable value from the ID of the last run to the max ID of this runScript- An update statement to write the max ID back to the DB ready for the next runIt all works up until the update statement, where it fails when trying to convert the ID variable to an integer. (Statement modified slightly for public view)UPDATE [dbo].[DeltaLoads] SET [MaxId] = 'int(@{variables('MaxRowId')})' WHERE [Client] = '<ClientName>' AND [RuntimeVariable] = 'MaxRowId'As the value is numeric and only ever will be numeric, why does the conversion from the string fail? Have I written the expression incorrectly?
Converting a string to an integer failing in Azure Data Factory pipeline
You can try the approach below to achieve your requirement. NOTE: With this method, if you are copying the files from the source location for the first time, you need to copy all the files in the first pipeline run and then modify the pipeline as below before the next scheduled run. First, create a SharePointOnlineListResource dataset to list the files in the site. Specify your list name, or Documents if your files are in the Documents library. Now, use a lookup query to filter the files like below: $select=Name,Created,Path,&$filter=ContentType eq 'Document' This will list all the files along with their creation dates. Now use a Filter activity to filter out the latest files (files created after the last pipeline run). Here, my file creation dates are in UTC-7, so I converted them to UTC by adding 7 hours; you need to do this as per your file dates. Also, I am filtering out the files which were created in the last 60 minutes (since the last pipeline run); you need to give this value as per your pipeline scheduling interval. Items: @activity('Lookup1').output.value Condition: @lessOrEquals(div(sub(ticks(utcnow()),ticks(addToTime(item().Created,7,'Hour'))),600000000),60) You can see the latest files list. Pass this array @activity('Filter1').output.Value to a ForEach activity and iterate through the list. Inside the ForEach, use a copy activity and pass the file path from the above array in each iteration to the source of the copy activity by using dataset parameters.
How can I ignore a file in SharePoint that has previously been processed? The ADF pipeline shouldn't process the file again, without moving it to another archive folder, until a new file is uploaded to the folder. The expectation is that the file should not be moved to another folder but also should not be picked up for processing; if a new file is uploaded, then it should be picked up for processing.
How can I ignore a file that has previously been processed in ADF
Azure Bicep is used to provision infrastructure on Azure. It is not possible to create a pipeline in Azure DevOps using Bicep files.
I use YAML for creating separate pipelines for each environment (PRE-PROD, PROD). Is it possible to create separate pipelines using bicep?
Create separate pipelines using bicep
This is a contrived and silly code sequence. add $t3, $t1, $t2 sub $t3, $t2, $t1 In the above, $t3 is written by add but then overwritten by sub, without the add's $t3 ever being used. In and of itself, this does not create a hazard; the data hazards are also known as RAW hazards, which stands for Read After Write. In the above, there is only Write After Write, for which MIPS pipelines do not suffer a hazard. In the next instruction: and $t4, $t3, $t1 there is one RAW data hazard, on $t3. When we factor in the next instruction: or $t0, $t3, $t4 we have two hazards in one instruction, one on $t3 and one on $t4. Since there are two hazards in one instruction, we have to become problem-statement-reading experts to determine the answer: as it is still only one instruction, if you're counting instructions that have hazards, it counts as one, but if you're counting hazards themselves, it counts as two. With: xor $t1, $t4, $t3 we have a hazard on $t4, but $t3 has made it through the pipeline and back to the register file, if you use the common implementation detail that the register file can read current-cycle-written results. And finally: sw $t4, 4($t0) has a RAW data hazard on $t0. So, I count 5 RAW hazards and 4 instructions that have at least one RAW hazard.
I have been looking at this problem, and upon my own attempt, I found 3 data hazards. However, other sources say otherwise.Find all hazards in the following instructions (MIPS Pipeline):add $t3, $t1, $t2 sub $t3, $t2, $t1 and $t4, $t3, $t1 or $t0, $t3, $t4 xor $t1, $t4, $t3 sw $t4, 4($t0)Not only that, but even chatGPT isnt giving any conclusive explanations:
I am confused as to which instructions in MIPS have a hazard
I have tried the below approach to see if I get the same error as you are getting at your end in the Azure Data Factory pipeline. Below is my pipeline. I tried to reproduce the issue and see if I encounter the error 'failed to update run GlobalRunId(xx,(Runid,yy))'. However, as you can see, the pipeline ran successfully. In my approach I am writing the Spark SQL tables to ADLS and then writing the data to the SQL table. When I ran the pipeline I saw no 'failed to update run GlobalRunId(xx,(Runid,yy))' error. The error 'failed to update run GlobalRunId(xx,(Runid,yy))' that you are getting looks like a Databricks internal error. Also make sure that all linked services and datasets used in the pipeline are valid and accessible; sometimes the issue is with linked services (like expired credentials or token expiry), which will also lead to pipeline run failures. Check whether the pipeline has dependencies on other pipelines or datasets and make sure they are all configured correctly and available. Check the outputs of the activities in your pipeline; if an activity is failing, retry the pipeline run to see if it resolves the issue. As you mentioned the notebook runs successfully although the error states 'failed to update run GlobalRunId(xx,(Runid,yy))', in this kind of scenario please create a support ticket with Databricks to get more details about the error and a resolution.
I'm facing this error during a pipeline run when using azure data factory to call a databricks notebook. The notebook runs successsfully although the 'failed to update run GlobalRunId(xx,(Runid,yy))' message appears along with the databricks notebook status. This is causing ADF to provide a false alert.I tried searching for similar issues but this is really uncommon. Would appreciate any help on this thanks.I checked the databricks logs but there is nothing unusual relating to this error. Was hoping to find something useful
Databricks execution failed with error state: InternalError, error message: failed to update run GlobalRunId(xx155,RunId(yy))
A workaround: since the number in the file name is four digits, using %Y (for example a DatePattern of "file_%Y") works successfully, because the four-digit sequence is matched as if it were a year.
There is a framework used to ingest files into a DataLake in AWS S3, the name is Serverless DataLake Framework aka SDLF, some configuration is needed to move a file through many stages in the S3 repository. The first one is to pass a file from the S3/Landing stage to S3/Raw stage. To do that part of the configuration is the file: source_mappings.json, let me show an example:[ { "SourceId": "ABC123", "Target": { "Location": { "Subdirectory": "domainxxx/systemyyy/filezzz/file_XX%Y%m%d" } }, "Source": { "Location": { "IncludePatterns": ["systemyyy/file_XX*"], "DatePattern": "file_%Y%m%d" } }, "System": "systemyyy" } ]That works successfully because normally the files to ingest comes with a date as part of the name of the file, but I got a file to ingest that has no date as part of the name of the file, instead it has a consecutive number, lets say "file_1084.dat","file_1085.dat",..,"file_1090.dat"..So my question is if anyone have tried this before.. I tried with many other tags like //d{4} or [0-9]{4} or just *, but nothing seems to work..
How to set a numeric value in the source_mappings.json file in a AWS SDLF pipeline?
Here is the same thread: https://learn.microsoft.com/en-us/answers/questions/1329289/what-are-the-possible-situations-that-may-cause-di GetLastError returning 1 means ERROR_INVALID_FUNCTION, "Incorrect function." According to the doc for the DisconnectNamedPipe function: "If the client end of the named pipe is open, the DisconnectNamedPipe function forces that end of the named pipe closed. The client receives an error the next time it attempts to access the pipe. A client that is forced off a pipe by DisconnectNamedPipe must still use the CloseHandle function to close its end of the pipe." As far as I'm concerned, you should try to use the CloseHandle function to close its end of the pipe.
I am encounting an error while trying todisconnecta pipe in my code usingDisconnectNamedPipe:if (this->isPipeClosed()) return true; if (this->status <= CONNECTING_STATE) return true; if (!DisconnectNamedPipe(this->instance)) { auto err = GetLastError(); Util::pushKernelLog(TAG, locateHere(), Util::P_ERROR, "DisconnectNamedPipe failed with {}.", std::to_string(err).c_str()); this->status = ERRROR_PIPE_STATE; return false; }The client log:[ ERROR][AbstractPipe] DisconnectNamedPipe failed with 1.At the same time, the pipe server log like this:What are the possible situations that may causeDisconnectNamedPipeto returnfalseandGetLastError()to return1?
What are the possible situations that may cause DisconnectNamedPipe to return false and GetLastError() to return 1?
Just like the Redis client function 'set', add a parameter 'ex' with the expiry time in seconds: r = redis.Redis() pipe = r.pipeline() for i in range(10): pipe.set('p' + str(i), i, ex=120) pipe.execute()
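As a quick check of the behaviour described above, here is a small sketch using redis-py (a local Redis instance is assumed to be running): the keys carry a TTL after the pipeline executes and stop appearing once it elapses.

import redis

r = redis.Redis()
pipe = r.pipeline()
for i in range(10):
    # ex=120 sets a 120-second expiry on each key as it is written
    pipe.set('p' + str(i), i, ex=120)
pipe.execute()

print(r.ttl('p0'))    # remaining lifetime in seconds, e.g. 120
print(r.get('p0'))    # b'0' while the key is alive; None after it expires
print(r.keys('p*'))   # expired keys no longer appear here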
I need to add multiple key/values in batch mode to Redis with expiry time.My Python code to add multiple key/values to Redis looks like this:r = redis.Redis() pipe = r.pipeline() for i in range(10): pipe.set('p' + str(i), i) pipe.execute()How can I add the keys with expiry time so that when I query for keys, the expired keys should not be included in the result.
How to add keys with expiry to Redis in batch mode?
To me it seems like you have the wrong .NET Version for the Runtime. Try to use this step before:- task: UseDotNet@2 inputs: version: '3.x' includePreviewVersions: true
I am getting this error in a build which was working in my devops pipeline.... does anyone know how to fix this please?
Azure Pipeline Build Issue
It's not clear to me what shell your runner is using here, but I suspect the (immediate) issue is that you have unbalanced quotation marks in your job script steps. For example, in - 'env:NUGET_PATH" restore' the double quote is unmatched.
when i try to run the gitlab pipeline its failed and shows the eval: line 159: unexpected EOF while looking for matching error on gitlab pipeline this errorstages: # List of stages for jobs, and their order of execution - build - test - deploy-uat variables: NUGET_PATH: 'C:\GitLab-Runner\nuget.exe' MSBUILD_PATH: 'C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\MSBuild\Current\Bin\MSBuild.exe' TEST_TOOL_EXE: 'C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\Common7\IDE\Extensions\TestPlatform\vstest.console.exe' build-job: # This job runs in the build stage, which runs first. stage: build script: - echo "$CI_PROJECT_DIR" - 'env:NUGET_PATH" restore' - echo "Clean & Build Solution" - '& "$env:MSBUILD_PATH" .\Solution1.sln /p:Configuration=Release /p:DeployOnBuild=True /p:DeployDefaultTarget=WebPublish /p:WebPublishMethod=FileSystem /p:DeleteExistingFiles=True /p:publishUrl=.\bin\publish /clp:ErrorsOnly' artifacts: expire_in: 2 days paths: - "**/bin" - "**/obj" rules: - if: ($CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "main")can you help me to slove this error
eval: line 159: unexpected EOF while looking for matching error on gitlab pipeline
From the docs, the field device can be set to "cuda:0". As simple as that.
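Applied to the snippet from the question, a minimal sketch would pass device when constructing the pipeline, which places the model and the tokenized batches on the GPU for you; the small sample dataframe below is illustrative only.

import pandas as pd
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

tokenizer_filter = AutoTokenizer.from_pretrained("salesken/query_wellformedness_score")
model_filter = AutoModelForSequenceClassification.from_pretrained(
    "salesken/query_wellformedness_score"
)

# device="cuda:0" (or device=0) makes the pipeline move the model *and* the
# tokenized inputs to the first GPU, avoiding the CPU/GPU tensor mismatch.
filtering = pipeline(
    "text-classification",
    model=model_filter,
    tokenizer=tokenizer_filter,
    batch_size=8,
    device="cuda:0",
)

df = pd.DataFrame({"content": ["is this query well formed", "cats dogs words random"]})
tokenizer_kwargs = {"padding": True, "truncation": True, "max_length": 512}
scores = filtering(df["content"].tolist(), **tokenizer_kwargs)
print(scores)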
Trying to run a simple text classification with a pipeline (needs to be in batch processing) is yielding me a device allocation issue.tokenizer_filter = AutoTokenizer.from_pretrained("salesken/query_wellformedness_score") tokenizer_kwargs = {'padding':True,'truncation':True,'max_length':512} model_filter = AutoModelForSequenceClassification.from_pretrained("salesken/query_wellformedness_score").to(torch.device("cuda")) filtering = pipeline("text-classification", model=model_filter, tokenizer=tokenizer_filter, batch_size=8) scores = filtering(df['content'].tolist(), **tokenizer_kwargs)The simple code above is yielding:Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)Apparently the input is on CPU (as it is a python list of str) and the model on GPU. How to move the input to GPU?
pipeline input is not in cuda device, but its a list[str]
You cannot define the schedule in the .gitlab-ci.yml file. You have to create the schedule using the UI: CI/CD > Schedules > New schedule. In .gitlab-ci.yml you have to specify that the job should run when the pipeline is triggered by a schedule. Example from the documentation: job:on-schedule: rules: - if: $CI_PIPELINE_SOURCE == "schedule" script: - make world The use of rules is preferred over only/except.
I need to configure .gitlab-ci.yml file in order to run a scheduled pipeline for e2e job only. I tried to apply that on gitlab UI, but as I can see nothing is running. So my question is, what should I add on the Yml file to make it works ? Note : I have too many jobs on the yml file but I need to schedule just e2e job.this is my yml file :e2e: stage: test <<: *kubernettes-docker <<: *test-only <<: *npm-cache script: - ls -lrt - docker-compose run e2e sh ./docker/scripts/e2e_entrypoint.sh -d only: refs: - cron when: schedule: - cron: "30 14 * * *" tags: - clusterv2 artifacts: paths: - dist/e2e_output/e2e_output/e2e_output
How to configure Scheduled pipeline in gitlab yml file?
With different steps you can take the output of a previous step and pass it in as an input for the next step. For example, if you have a training step that you have defined like the following: training_step = TrainingStep( name="Training", step_args=train_args, ) you can grab the model artifacts generated by this training job with the following code: model_data=training_step.properties.ModelArtifacts.S3ModelArtifacts A short sketch of wiring that property into a later step is shown below.
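For instance, here is a minimal sketch (not a complete pipeline definition) of feeding that property into a subsequent processing step as an input, so the dependency is resolved by SageMaker Pipelines at run time rather than by you writing to and reading from S3 yourself; the processor object, the container path, and the script name are assumptions.

from sagemaker.processing import ProcessingInput
from sagemaker.workflow.steps import ProcessingStep

# `evaluation_processor` is assumed to be an already-configured processor
# (e.g. a ScriptProcessor/SKLearnProcessor); only the wiring is shown here.
evaluation_step = ProcessingStep(
    name="EvaluateModel",
    processor=evaluation_processor,
    inputs=[
        ProcessingInput(
            # This property reference is resolved at run time by SageMaker Pipelines,
            # so no intermediate value has to be written to your own bucket.
            source=training_step.properties.ModelArtifacts.S3ModelArtifacts,
            destination="/opt/ml/processing/model",
        )
    ],
    code="evaluate.py",  # hypothetical evaluation script
)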
I'm currently working with SageMaker Pipelines and have a question about inter-step communication. In systems like Apache Airflow, there's a concept of XCom, which allows tasks to exchange small data artifacts or messages. I'm wondering if there's an equivalent feature in SageMaker Pipelines that would allow us to pass variables or small data artifacts directly between steps, without writing to and reading from an S3 bucket? Any help would be greatly appreciated.
AWS SageMaker pipeline xcom alternative
include: - project: 'some/project/in-gitlab' file: - '/some/file.yml' stages: - job1 - job2 - job3 - job4 job2: rules: - when: never job4: rules: - when: never This will ensure job2 and job4 don't run. But I would suggest you split the jobs into different files and just include the ones which are needed.
In my .gitlab-ci.yml I have the following:include: - project: 'some/project/in-gitlab' file: - '/some/file.yml'Insome/project/in-gitlab/some/file.ymllets say it has 4 stages:stages: - job1 - job2 - job3 - job4I want to run onlyjob1andjob2in my pipeline. How to run onlyjob1andjob2which are inherited fromsome/project/in-gitlab/some/file.yml?
How to skip certain jobs which are imported from another repository in Gitlab
You're piping the data frame activity_and_calories to mutate(), then you're piping the result of that to max(). max() is receiving a data frame object (the name of which can be represented by a period, "."). It doesn't know that caloriesdif is a column name - it thinks it's another variable in your environment, which doesn't exist, hence the error. max() expects a vector of values, which you could supply using pull(): activity_and_calories %>% mutate(caloriesdif = Calories.x - Calories.y) %>% pull(caloriesdif) %>% max()
I am trying to run the following code and keep getting "Error: object 'caloriesdif' not found".Here is my code:activity <-read.csv("dailyActivity_merged.csv") head(activity) calories <-read.csv("dailyCalories_merged.csv") head(calories) activity_and_calories <- merge(activity, calories, by=c("Id", "ActivityDay")) head(activity_and_calories) activity_and_calories %>% mutate(caloriesdif = Calories.x - Calories.y) %>% max(caloriesdif)I can get it to bring up the caloriesdif column but not what the max value is.I have tried reruning the code and changing the names of the variables but I cannot find the maximum value.
Pipeline issues with a new column not being found
As I understood from the question, you have a bunch of CSV files and your intention is to copy data from these files to Dynamics CRM. If you also have Power Apps, then ADF is not the recommended approach: you can use Dataflows in Power Apps to achieve your task. This has the added advantage that it gives you Power Query by default, so you can easily do the transformations. Please refer to this document for reference. If you are not comfortable with Dataflows, or for some reason you require ADF only, then ADF offers a Power Query option in the pipeline; you can use that and do the transformations as you like.
I need some help with the following issue. I have a bunch of CSV files which, in my pipeline, I read from Azure BLOB storage. Now each CSV file contains a list of records, and all CSV files use the exact same data format/columns. I must decide per CSV line/record that is processed, if I want to create a Dynamics CE contact for it or not, based on some logic. My current pipeline first gets all the CSV files/records into one dataset, and then uses a For-Each loop to process each line/record separately. Inside the For-Each I have a conditional check to see if the current line must be used or not. If the data should be used, then how do I create a contact in Dynamics using the data from this record? It seems the Copy-Data activity only works with a dataset, while inside my For-Each I only have access to the current @item() ???
Azure Data Factory pipeline to process each line in CSV separately and conditionally copy data into Dynamics CRM
In the copy activity, you cannot query the file to get the delta or incremental data. Therefore, you can use a dataflow to filter the delta rows from the source and load them into the sink table. Below are the detailed steps. A Lookup activity is taken, and its dataset has the list of file names and the last copied date (the date at which each file was previously copied). A ForEach activity is taken and Items in its settings is given as @activity('Lookup1').output.value. Inside the ForEach activity, a dataflow activity is taken. In the dataflow, a source transformation is taken and the file name is given dynamically in the source dataset using a dataset parameter. A dataflow parameter named date of string type is created. Then a filter transformation is taken in the dataflow and a condition is given to filter the records which changed after the previous load. Then a sink transformation is taken and the table name for the sink dataset is given dynamically (here I am giving the sink table the same name as the source file). Then you can give the value for the dataflow parameter and the source and sink dataset parameters in the dataflow activity. Finally, you can update the date value in the lookup table using a script activity inside the ForEach activity; this script activity should come after the dataflow activity.
How can we incrementally load files from azure blob storage to sql server or synapse by azure data factory using metadata table or lookup table as we can't query on files as we do on sql tables .In incrementally load files data may be incresing or have new or updated records . I want complete flow as we do it for sql by using metadata table or lookup table. Thanks
How can I create an incremental Azure Data Factory pipeline using metadata table?
You should concatenate each item to self.contenidomail, not the local contenidomail variable (which you just initialized to be empty). class ScrapeadorPipeline: def __init__(self): self.contenidomail = '' def process_item(self, item, spider): adapter = ItemAdapter(item) if adapter.get('SITUACION') == "DESCARGADO": self.contenidomail += str(adapter.get('Nro')) + " " + adapter.get('SITUACION') + "\n" return item def close_spider(self, item): mailer.send(to=["*@msn.com"], subject="Reporte", body=self.contenidomail)
How to send email with specific scraped data after spider ends in Scrapy.I'm trying to send some item data with multiple lines after the end of a spider, but i'm only receiving one line. I think i have to do some iteration but i can't make it work.items.pyimport scrapy class ScrapeadorItem(scrapy.Item): Nro = scrapy.Field() SITUACION = scrapy.Field()pipelines.pyfrom itemadapter import ItemAdapter from scrapy.mail import MailSender mailer = MailSender(mailfrom="*@gmail.com", smtpuser="*@gmail.com", smtphost="smtp.gmail.com", smtpport=587, smtppass="*") class ScrapeadorPipeline: def process_item(self, item, spider): adapter = ItemAdapter(item) contenidomail = "" if adapter.get('SITUACION') == "DESCARGADO": contenidomail = contenidomail + "\n" + str(adapter.get('Nro')) + " " + adapter.get('SITUACION') + "\n" self.contenidomail = contenidomail return item def close_spider(self, item): mailer.send(to=["*@msn.com"], subject="Reporte", body=self.contenidomail)
How to send mail with scraped data?
Please check the paths. Where did you create phpstan-baseline.neon when you ran ./vendor/bin/phpstan analyse -c phpstan.neon --generate-baseline? (You can also pass the path in that command; this matters because the baseline will contain paths relative to the place where you created it.) Then try to use that same path in the pipeline script line: php vendor/bin/phpstan analyse -c phpstan.neon --no-progress
Phpstan analysis fails when I run it as part of the Gitlab pipeline, it passes locally. I'm talking about the analysis and not the running of the command itself. I do not get 'command not found' or 'phpstan.neon' not found errors, but rather the analyzer returns all kinds of errors that I put in the baseline.What I did, locally:./vendor/bin/phpstan analyse -c phpstan.neon // fails./vendor/bin/phpstan analyse -c phpstan.neon --generate-baseline./vendor/bin/phpstan analyse -c phpstan.neon // passesMy phpstan.neon looks like:includes: - phpstan-baseline.neon parameters: level: 5 paths: - srcand my pipeline looks like:... composer job here phpstan: stage: code_quality needs: - composer script: - php vendor/bin/phpstan analyse -c phpstan.neon --no-progressI expected the job to pass, but I get:[ERROR] Found 767 errorsThe errors are a combination of stuff that is wrong with the code itself but also errors caused by the baseline: "Ignored error pattern was not matched in reported errors."What am I doing wrong?
Phpstan analysis fails as part of the Gitlab pipeline
def toggleValues = [ 'VAR1': 'value1', 'VAR2': 'value2', 'VAR3': 'value3' ] toggleValues.findAll{params[it.key] == true}.collect{it.value}
I have several choices in my jenkins pipeline script:`booleanParam[ name: 'VAR1', defaultValue: true, description: ''] booleanParam[ name: 'VAR2', defaultValue: true, description: ''] booleanParam[ name: 'VAR3', defaultValue: true, description: '']`I'm trying to make a map like a ["microservice1": true, "microservice2": true, etc.] if I chose params above (tick the params) So Is there any method to get these variables from runtime? I mean Jenkins makes some map|array with all variables by itself. And I can get it with built-in function or somehow else/ Thanks in advanceI wrote just a simble code like adef getParams() { def myArray = [] if (params.VAR1) {myArray.add("var1")} if (params.VAR2) {myArray.add("var2")} if (params.VAR3) {myArray.add("var3")} return myArray }But I feel it's stupid
Getting variables from Jenkins runtime
You understand right: if job2 fails, the next stage shouldn't run. See the example in the documentation, it is basically the same as your YAML: https://docs.gitlab.com/ee/ci/yaml/#allow_failure Since your gitlab-ci.yml is obviously a simplified version, I don't see anything that goes wrong here, but maybe there's something you left out that affects this. Is job2 a manual job? Does job2 extend other jobs that maybe have "allow_failure: true"? I suggest adding "allow_failure: false" to job2.
I'm running a pipeline on Gitlab with 2 stages. On the first stage there are 2 jobs of which one hasallow_failure: true. This works, because it fails and I see an orange exclamation mark. The other one fails too and I get a red cross.The second stage has 1 job that should not be executed, because a task that may not fail, failed. It is still executed. What am I doing wrong? I was in the understanding that, by default, a job from a second, third, etc stage would never be executed (manually or automatically) until all previous stages have been 100% successful (unless a certain task is allowed to fail).Here is my gitlab-ci.yml:stages: - stage1 - stage2 job1: stage: stage1 script: - <something> allow_failure: true job2: stage: stage1 script: - <something> job3: stage: stage2 script: - <something>
Why is a job in the second stage of my Gitlab pipeline being executed despite a previous job in the first stage failing?
This can give you an idea (I haven't tried this code on my system). Reading Excel data in an Apache Beam Python pipeline can be achieved using the apache_beam.io.fileio module, which provides functionality to read files in a parallel and distributed manner:

import io
import apache_beam as beam
import pandas as pd

def process_excel_file(readable_file):
    # readable_file is a beam.io.fileio.ReadableFile; read its bytes and parse with pandas
    # (reading .xlsx requires openpyxl to be installed)
    df = pd.read_excel(io.BytesIO(readable_file.read()), 'Lista de Gargalos')
    return df.values.tolist()

def read_data_from_excel_file(pipeline, file_pattern):
    return (
        pipeline
        | "MatchFiles" >> beam.io.fileio.MatchFiles(file_pattern)
        | "ReadMatches" >> beam.io.fileio.ReadMatches()
        | "ProcessExcelFile" >> beam.Map(process_excel_file)  # one element = all rows of one file
    )

file_pattern = "gs://nidec-ga-transient/ConcessoesERestricoes.xlsx"
pipeline = beam.Pipeline()

rows = read_data_from_excel_file(pipeline, file_pattern)
rows | "DoSomething" >> beam.Map(print)  # Add more transformations as needed

pipeline.run().wait_until_finish()
I'm trying to read an Excel file in my cloud storage bucket with Apache Beam Python pipeline, but it is not working. I tried to read with Pandas but I can't use the data in Pcollection.Do you know a way to do that?def read_data_from_excel_file(): bucket_name = "nidec-ga-transient" blob_name = "ConcessoesERestricoes.xlsx" storage_client = storage.Client() bucket = storage_client.bucket(bucket_name) blob = bucket.blob(blob_name) data_bytes = blob.download_as_bytes() df = pd.read_excel(data_bytes, 'Lista de Gargalos') return df Pipeline = ( pipeline_load_data | "Importar Dados CloudStorage" >> read_data_from_excel_file() # | "Write_to_BQ" >> beam.io.WriteToBigQuery( # tabela, # schema=table_schema, # write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND, # create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED, # custom_gcs_temp_location = 'gs://ddc-test-262213-staging/[email protected]/temp' ) )I got this error when I ran the code:TypeError: unsupported operand type(s) for >>: 'str' and 'NoneType'
How can I read Excel data with Apache Beam Python pipeline? I get an unsupported operand error
As per my understanding you want the logs of all activities and pipelines in a file. You can log all the pipeline runs, trigger runs, and activity runs in a segregated fashion using Azure Monitor, all in one go. There is no need for any extra pipeline, stored procedures, or Set Variable activities. All the info will be logged in your storage account in yyyy --> MM --> dd --> hh --> mm --> ss folder fashion, in JSON format. But this will cost you more.
I have an ADF pipeline and I want to save the output with Name, Type, Status, and Duration (same as in the pipeline debug output) for each run to Azure Data Lake Storage. The process should be automatic; I don't want to use 'Export to CSV' manually since I don't know when the job will run. Is there any way I can achieve this? I tried many of the available logging options but they all give details of the pipeline run, not of the activities within it. A 'Set Variable' before and after each activity is an option, but that will complicate my flow as I have many activities within it.
Saving log of each Activity in ADF pipeline to ADLS
Go to your EC2 console, Actions, and then Security. Check your IAM role to make sure you have given permission to SSM to connect to your instance.Also, SSM uses temporary SSH keys behind the scenes. I think they're generated "on demand" but you might try uninstalling and reinstalling the SSM agent if the above doesn't help you.
I am trying to access my AWS EC2 console via session manager which was working perfectly fine before, I was testing out some Ubuntu sources list and removed openssh-client(apt-get remove)accidentally and then re-installed it again(apt-get install).Now I am trying to access the EC2 command line via session manager but it says:The version of SSM Agent on the instance supports Session Manager, but the instance is not configured for use with AWS Systems Manager. Verify that the IAM instance profile attached to the instance includes the required permissions.I have not changed any settings or configurations in AWS, just getting the following error message when I try to access EC2 via session manager.clueless right now, what steps to take in order to get back AWS session manager access
AWS EC2 session manager access
I don't believe this is a natively-supported feature. You might be able to use an action stage to make an API call to the Data Fusion instance to get the pipeline's last run, write it to a file, and then use the Argument Setter or GCS Argument Setter plugins to set this as a runtime argument to the pipeline. A rough sketch of such an API call is shown below.
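Purely as an illustration of the "call the instance API" idea, here is a hedged Python sketch; the CDAP-style runs endpoint path, the instance URL format, the namespace and pipeline names, and the use of a bearer token are all assumptions that should be checked against the Data Fusion REST API documentation for your instance.

import requests

# All of these values are placeholders / assumptions.
INSTANCE_ENDPOINT = "https://<your-instance>-dot-<region>.datafusion.googleusercontent.com/api"
NAMESPACE = "default"
PIPELINE = "my_pipeline"
TOKEN = "<access-token>"  # e.g. obtained via `gcloud auth print-access-token`

# Assumed CDAP-style endpoint listing runs of the batch pipeline workflow, newest first.
url = (
    f"{INSTANCE_ENDPOINT}/v3/namespaces/{NAMESPACE}/apps/{PIPELINE}"
    f"/workflows/DataPipelineWorkflow/runs?limit=1"
)

resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()
runs = resp.json()
if runs:
    last_run = runs[0]
    # Typical fields include a status and start/end timestamps (epoch seconds).
    print(last_run.get("status"), last_run.get("start"), last_run.get("end"))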
How do I find last pipeline run timestamp and status and pass it as runtime argument in cloud data fusion pipeline? I need to find this to do incremental records based on last_updated column in the table.I cannot find any macro or runtime argument in cloud data fusion pipeline that tells the pipeline previous run details.
How do I find last pipeline run timestamp and status and pass it as runtime argument in cloud data fusion pipeline?
As I understand it, this is what you're trying to achieve:( F.col("Col1").like("%Text1%") | F.col("Col1").like("%Text2") ) & F.col("Col3").like("%Text3%") & ( (~F.col("Col2").like("%12345%")) | (~F.col("Col2").like("%67890%")) )
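To sanity-check that expression against the two sample rows from the question, a small self-contained PySpark sketch like the following could be used; the local Spark session setup and the sample data are illustrative.

from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.master("local[1]").appName("cond-check").getOrCreate()

df = spark.createDataFrame(
    [
        ("Text1; Random", "Text3; Random", "12345;67890;34567"),
        ("Random; Text2", "Text3", "98765;54321"),
    ],
    ["Col1", "Col3", "Col2"],
)

cond = (
    (F.col("Col1").like("%Text1%") | F.col("Col1").like("%Text2"))
    & F.col("Col3").like("%Text3%")
    & ((~F.col("Col2").like("%12345%")) | (~F.col("Col2").like("%67890%")))
)

# Adds the boolean result as a column; expected False for row 1 and True for row 2.
df.withColumn("result", cond).show(truncate=False)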
I'm trying to achieve the result of the following logic: IF Text.Contains([Col1] = "Text1" OR [Col1] = "Text2")) AND Text.Contains([Col3] = "Text3")) AND (IF not (Text.Contains([Col2] ="12345") OR not Text.Contains([Col2] = "67890")) I wrote the following code in pySpark but I'm not getting the desired results: ( (F.col("Col1").like("%Text1%") | F.col("Col1").like("%Text2")) & ( (F.col("Col3").like("%Text3%")) & (~F.col("Col2").like("%12345%") | (~F.col("Col2").like("%67890%"))) ) ) Will someone please help me correct the code to get the desired results? My input dataset is:
Row 1: Col1 = "Text1; Random", Col3 = "Text3; Random", Col2 = "12345;67890;34567"
Row 2: Col1 = "Random; Text2", Col3 = "Text3", Col2 = "98765;54321"
So the expected output is a Boolean result where False is expected for the 1st row and True is expected for the 2nd row.
Multiple OR and AND conditions in pySpark
(Sorry, I do not have enough reputation to make a comment.) Besides ensuring you have an updated version of scikit-learn, I would do what the error message suggests and access the last step in your pipeline before calling the method, i.e. replace the last line in your snippet with X_train = pd.DataFrame(data = X_train, columns = preprocessor[:-1].get_feature_names_out())
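For reference, with a reasonably recent scikit-learn (1.0+), get_feature_names_out() works directly on a fitted ColumnTransformer; here is a minimal self-contained sketch with toy data and illustrative column names.

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy data with one numeric and one categorical column (illustrative only)
X = pd.DataFrame({"age": [25, 32, None, 41], "city": ["NY", "LA", "NY", None]})

num_pipeline = Pipeline([("Imputer", SimpleImputer(strategy="median")),
                         ("Scaler", StandardScaler())])
cat_pipeline = Pipeline([("Imputer", SimpleImputer(strategy="most_frequent")),
                         ("Encoder", OneHotEncoder(handle_unknown="ignore"))])

preprocessor = ColumnTransformer([
    ("Numerical_Pipeline", num_pipeline, ["age"]),
    ("Categorical_Pipeline", cat_pipeline, ["city"]),
])

X_t = preprocessor.fit_transform(X)
# After fitting, the transformer can report the generated feature names.
cols = preprocessor.get_feature_names_out()
X_t = pd.DataFrame(X_t.toarray() if hasattr(X_t, "toarray") else X_t, columns=cols)
print(X_t.head())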
I am trying pass my dataset (combination of numeric and categorical features) through a pipeline which does the necessary imputations, scaling and one-hot encoding (for categorical features).In return the output would be apandas.DataFramewith the new feature names (after encoding) as headers. The code snippets are as follows:num_pipeline = Pipeline( steps = [ ("Imputer", SimpleImputer(strategy = 'median')), ("Scaler", StandardScaler(with_mean=False)) ] ) cat_pipeline = Pipeline( steps = [ ("Imputer", SimpleImputer(strategy='most_frequent')), ("Encoder", OneHotEncoder(sparse=False)), ("Scaler", StandardScaler(with_mean=False)) ] ) preprocessor = ColumnTransformer( [ ("Numerical_Pipeline", num_pipeline, num_columns), ("Categorical_Pipeline", cat_pipeline, cat_columns) ] ) from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1) X_train = preprocessor.fit_transform(X_train) X_train = pd.DataFrame(data = X_train, columns = preprocessor.get_feature_names_out())However, I am running into an error as this one.Could someone point me in the right direction on how to solve this?
Unable to get feature names after One-Hot Encoding the categorical features
If you have already created an Azure Release Pipeline and configured a Deployment Group targeting a Windows Server, you can launch the .exe with a PowerShell script along the lines of the sketch below. Replace "C:\path\to\your\file.exe" with the actual path to your .exe file, and "/parameter1 value1 /parameter2 value2" with the arguments your executable expects.
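A minimal sketch (the path and the arguments are placeholders, not values from the original answer):

# Launch the executable; add -Wait if the pipeline should block until it exits
Start-Process -FilePath "C:\path\to\your\file.exe" -ArgumentList "/parameter1 value1 /parameter2 value2"
# Start-Process -FilePath "C:\path\to\your\file.exe" -ArgumentList "/parameter1 value1 /parameter2 value2" -Wait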
I am trying to launch the .exe file on deployment group server(windows server) using azure release pipeline with PowerShell script.Start-Process -FilePath "E:\Azure\test.exe"but I am unable to open the exe file on server and not any interface open on windows and no error showing in azure release pipeline. I have also tried with command prompt but nothing happened.I am trying to launch the exe file on deployment group server and open a interface in new window.
Launch the .exe file using azure release pipeline in deployment group windows server using powershell script
I am not sure I understand your issue fully, but from what I take from it, you want to compare the overall performance of two workflows for the same kind of analysis, one written in Nextflow and one in Snakemake.

I think it should be OK to simply give both workflows the same amount of overall resources globally, and let them figure out the resource usage of individual steps. In Snakemake, this is done globally with the snakemake command-line call via the options:

CPUs: --cores 24
memory: --resources mem_mb=64000

For a fair comparison you should also ensure that these resources are actually (fully) available for the workflow runs. Especially make sure that there are no competing processes running on the server(s) you are running the workflows on.

For memory, note that for Snakemake (and maybe also for Nextflow?) it depends on how well a workflow and its rules are written whether the specified limit is actually respected by each step / process.
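To make that concrete, a possible invocation using exactly the options above (the numbers are just example values):

snakemake --cores 24 --resources mem_mb=64000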
I want to compare two pipeline using same data while one is written by snakemake and the other is nextflow. But I met a problem when setting computation resource.For nextflow, I can only set cpus and memory in config file. The threads number can not be set but automatically assigned.In snakemake, I want to set the same computational resource as nextflow. But although I can set cpu by --cores and memory, I can't make the threads setting automatically (its default 1).So I just wonder if simply setting cpus and memory of each is OK?Is there any solution for that?
A question about performance comparison between Snakemake and Nextflow
You can use the individual pipeline variables like pipeline run time, copied rows count, and copy status as they are (without creating a JSON document consisting of all values) and use the Copy activity in ADF to insert the values into the Cosmos DB container. Below is the approach.

Three pipeline variables (pipeline run time, copied rows count, copy status) are taken and their values are set using Set Variable activities. Then a Copy activity is added with a dummy source dataset that has one row. In the Source tab of the Copy activity, new columns are added by clicking +New under Additional columns, and all three columns are set to the values defined by the Set Variable activities. A Cosmos DB sink dataset is then used with insert as the write behavior. In the Mapping section, the schema is imported by giving sample values for the pipeline variables, and all additional columns created in the source are mapped to sink columns. When the pipeline is run, the data is inserted into Cosmos DB.
In ADF, I generate pipeline variables such as the pipeline run time, rows copied count, copy status etc. from various activities. I then use Set Variable to create a JSON string consisting of the above values. Now I want to store this string into an existing CosmosDB as a new record.I cannot use Azure Functions or a Notebook job to insert, as I want it to be possible within ADF without additional dependencies.I have tried using Data Flow, but it is unable to add a new record from values in a pipeline variable. I cannot use a separate data source to union and add into the CosmosDB as the values I want inserted are generated during runtime.
Inserting value stored in a pipeline variable as a new row of sink dataset data
The correct annotation looks something like this; give this a try:

@Library('shared-lib') _

You're missing the underscore. If this does not resolve it, can you share the error logs?
I am not much aware of jenkins groovy coding. I am trying to get a task done on the jenkins side. So please let me know if I am doing it wrong.I generated a groovy script on the agent dynamically using the main pipeline groovy script and in that I am trying to use a jenkins shared library.My dynamically generated groovy file on the agent is as follows:#!usr/bin/groovy @Library('SharedLibrary')If I use that same library on my pipeline script, I am able to load the library and I am able to use the variables declared in my vars folder as well in the shared library.I am getting the following error for that file unable to resolve class Library, unable to find class for annotation.I tried to even use the full qualifier, or.jenkinsci.plugins.workflow.libs.Library and that did not work as well.Any pointers will be very helpful.Thank you, Kartik
Unable to find class for annotation for groovy script file on agent
GenerateProjectSpecificOutputFolder is in the MSBuild source, but it appears there is no official documentation.
I knew a lot of developers use this parameter /p:GenerateProjectSpecificOutputFolder=true to output the solution's projects to different folders. Is there any official documentation to explain the detail of this parameter and other similar parameters? ThanksI tried it in Azaure DevOps pipeline and it works, but need more information about this.
Where is the detailed documentation of MSBuild parameter: GenerateProjectSpecificOutputFolder
Is the %>% pipe operator always feeding the left-hand side (LHS) to the first argument of the right-hand side (RHS)? Even if the first argument is specified again in the RHS call?

No. You've noticed the exception yourself: if the right-hand side uses . as an argument, the left-hand side is not fed in as the first argument. You need to pass it manually.

However, this is not happening in your case because you're not using . by itself, you're using it inside an expression. To avoid the left-hand side being fed as the first argument, you additionally need to use braces:

iris %>% {cor(x = .$Sepal.Length, y = .$Sepal.Width)}

Or:

iris %$% cor(x = Sepal.Length, y = Sepal.Width)

— after all, that's what %$% is there for, as opposed to %>%.

But compare:

iris %>% lm(Sepal.Width ~ Sepal.Length, data = .)

Here, we're passing the left-hand side explicitly as the data argument to lm. By doing so, we prevent it being passed as the first argument to lm.
Is the%>%pipe operator always feeding the left-hand side (LHS) to the first argument of the right-hand side (RHS)? Even if the first argument is specified again in the RHS call?Say I want to specify which variable to use incor():library(magrittr) iris %>% cor(x=.$Sepal.Length, y=.$Sepal.Width)But this fails, it looks like it call something likecor(., x=.$Sepal.Length, y=.$Sepal.Width)?I know I could use insteadiris %$% cor(x=Sepal.Length, y=Sepal.Width)But wanted to find a solution with%>%...
How to apply which() function using pipes in R? [duplicate]
I hope you're doing well. Could you please provide more information about the context of your RNA-seq data? With that information you can design a bioinformatic pipeline that cleans up your raw reads, maps them against the desired reference, and calculates expression data based on the specific conditions of your experiment. Keep in mind that it's important to consider your initial wet-lab conditions when establishing a reasonable pipeline for your data. It is not necessary to use the most up-to-date software, but it is essential to think carefully about the best approach to take full advantage of your data.
I’m a new in bioinfo and rna seq analysis for human and mouse mRNA and miRNA analysis.I used to use script pipelines workflows to do my analysis. The pipelines used fastq, cutadapt, star and bowtie2 and FeaturesCounts as softwares but I just so some people say that Cutadapt, bowtie and Star were obsolete. So I’m just without ground lol.Majority of this script like pipelines of RNA-seq that I found are with this softwares that I already use and articles are just to divergents about the meter.My question is true that it have better tools now? Who are they? Do you guys have a recommendation of pipelines/workflows user friendly and script-like??Thanks for all the help.
rna-seq user friendly pipeline recommendations
The issue was resolved by replacing the Visual Studio Build task with the .NET Core (dotnet build) task.
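If you use YAML pipelines, the replacement could look roughly like this; the SDK version and project glob are assumptions, not values from the original answer:

# Pin the .NET SDK, then build with the dotnet CLI instead of Visual Studio Build
- task: UseDotNet@2
  inputs:
    packageType: 'sdk'
    version: '6.0.x'
- task: DotNetCoreCLI@2
  displayName: 'dotnet build'
  inputs:
    command: 'build'
    projects: '**/*.csproj'
    arguments: '--configuration Release'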
I've a VS solution having various projects, all of them targeting .NET6.0<PropertyGroup> <TargetFramework>net6.0</TargetFramework> <Nullable>enable</Nullable> <ImplicitUsings>enable</ImplicitUsings> </PropertyGroup>It builds in visual studio without any problem, and writes the output to bin\debug\net6.0However, in Azure DevOps pipeline, the build fails with this error##[error]C:\Program Files\dotnet\sdk\6.0.403\Sdks\Microsoft.NET.Sdk\targets\Microsoft.PackageDependencyResolution.targets(267,5): Error NETSDK1005: Assets file 'C:\Agent3\_work\142\s\*****\obj\project.assets.json' doesn't have a target for 'netcoreapp2.1'. Ensure that restore has run and that you have included 'netcoreapp2.1' in the TargetFrameworks for your project. 2023-03-16T08:57:39.9848471Z C:\Program Files\dotnet\sdk\6.0.403\Sdks\Microsoft.NET.Sdk\targets\Microsoft.PackageDependencyResolution.targets(267,5): error NETSDK1005: Assets file 'C:\Agent3\_work\142\s\*****\obj\project.assets.json' doesn't have a target for 'netcoreapp2.1'. Ensure that restore has run and that you have included 'netcoreapp2.1' in the TargetFrameworks for your project. [C:\Agent3\_work\142\s\*****\*****.csproj]How can I find which dependency is expecting netcoreapp2.1?Following tasks are in use in the pipelineNuGet tool installer 5.8.0NuGet restoreVisual Studio Build (Visual Studio 2022 - dev machine also uses VS2022)
C# project with Target framework .NET 6.0 expecting netcoreapp2.1
As far as I know, it is not possible to automate creating an SSH service connection. Normally Microsoft offers the az devops service-endpoint command line to create a service connection for Azure Resource Manager or GitHub, but it does not offer SSH creation. If someone else can show otherwise I will remove my answer.

Original answer: from your project settings, click on Service connections and search for SSH. You can add the parameters for your SSH connection and then, at the end, check "Grant access permission to all pipelines". With that said, there is an example of an Azure pipeline that connects to an SSH server and then copies files into it.
How can I add a new SSH service connection using a build pipeline in Azure DevOps? I would like to specify the username ip and key and grant access to all pipelinesSSH service connection using the pipeline. Didnt get any results to do this!
Add SSH service connection using a build pipeline in azure devops
Your receiver may be waiting for an intra frame that it hasn't yet received. You may add the key-int-max property of x264enc for an affordable setup time, such as 30 frames for 3 s @ 10 fps:

pipeline = "appsrc ! video/x-raw,format=BGR,width= 1920, height= 1080,framerate=10/1 ! queue ! videoscale ! videoconvert ! x264enc tune=zerolatency bitrate=80000 speed-preset=superfast insert-vui=1 key-int-max=30 ! h264parse ! rtph264pay ! udpsink host=127.0.0.1 port=5010";

On the receiver side, you may also add rtpjitterbuffer and adjust its latency property for your case:

gst-launch-1.0 udpsrc port=5010 ! queue ! application/x-rtp,encoding-name=H264 ! rtpjitterbuffer latency=500 ! rtph264depay ! h264parse ! decodebin ! videoconvert ! autovideosink sync=0
I tried to use this pipeline to create a video stream from some .png pictures.

SENDER:

auto rtsp_pipeline = "appsrc ! video/x-raw,format=BGR,width= 1920, height= 1080,framerate=10/1 ! videoscale ! videoconvert ! x264enc tune=zerolatency bitrate=80000 speed-preset=superfast ! rtph264pay ! udpsink host=127.0.0.1 port= 5010";
output_stream = cv::VideoWriter(rtsp_pipeline, cv::CAP_GSTREAMER, 0, frame_rate, cv::Size(1920, 1080), true);
timer_ = this->create_wall_timer(100ms, std::bind(&PipelineSimulator::timer_callback, this));

with this timer_callback:

img_ = cv::imread(this->filenames[current_index]);
output_stream.write(img_);

RECEIVER:

gst-launch-1.0 udpsrc port=5010 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! h264parse ! decodebin ! videoconvert ! autovideosink

The result is a mostly green screen. I don't know where the problem is. I tried changing the color format, but I think it is a buffer problem.
gstreamer green screen Ubuntu 20.04
No, there is no such option (go back to the previous choice) in the AWS Amplify CLI for now. Discussion is open here: https://github.com/aws-amplify/amplify-cli/issues/6367

You might try using the "update" command, for example "amplify update auth", and change the option you would like to edit, but you have to complete the configuration process before you can use "update".
I was wondering if there was a way to go back and edit the previous field while performing auth configuration via AWS Amplify in the command line terminal. By default,enteris to go forward,spaceis to select among several options, andais to toggle all, but how does one go back to editing the previous field?
How to go back to previous item in AWS Amplify CLI auth configuration?
Kinesis Firehose doesn't give you that much control over what the HTTP requests look like, unfortunately, since there are only a handful of configuration options for the HTTP Endpoint destination. Kinesis sends HTTP requests according to their own specification, and expects responses to look a specific way. You can find the specification here.

This specification is not compatible with Elasticsearch's Index API, as you discovered. On a side note, you will probably want to use the Bulk API for performance reasons, since Kinesis will buffer events and deliver multiple ones in one request.

So, you will most likely need something in-between Kinesis and your Elasticsearch cluster to handle the differences in the HTTP requests and responses. This could be a simple nginx proxy, for instance, or it could be something like an API Gateway which invokes a Lambda function. This Lambda function then sends a request to your index's Bulk API and constructs a response for Kinesis according to the HTTP specification; a sketch of such a Lambda is shown below.

I hope that helps.
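A rough Python sketch of the bridging Lambda, assuming the request/response shapes follow the linked Firehose HTTP endpoint specification (base64-encoded records, requestId/timestamp echoed back); the Elasticsearch call itself is omitted:

import base64
import json

def handler(event, context):
    # Firehose delivers a JSON body with base64-encoded records
    body = json.loads(event["body"])
    docs = [json.loads(base64.b64decode(r["data"])) for r in body["records"]]

    # ...build an Elasticsearch _bulk payload from `docs` and POST it to the
    # cluster here (client and endpoint omitted in this sketch)...

    # Firehose expects the requestId and timestamp to be echoed back
    return {
        "statusCode": 200,
        "body": json.dumps({
            "requestId": body["requestId"],
            "timestamp": body["timestamp"],
        }),
    }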
I deployed a ES cluster to AWS EKS and it can be access through application load balancer. I have created an index in the cluster. I am going to deploy a Kinesis firehose to usehttp_endpointas the target to stream data to ES cluster.But how can I configure what http verb firehose uses to put data to ES endpoint? For example, my ES endpoint ishttp://xxxx.com/myindexand I'd like the firehose to sendPOSTrequest to this endpoint. How can I configure it to usePOSTrather thanPUT?
How can I stream data to self hosted elasticsearch cluster from Kinesis firehose?
Yes, there is an AWS Secrets Manager JDBC Library, which is basically a wrapper around common JDBC drivers with support for AWS Secrets. This wrapper tries to connect to the database; if an authentication exception is caught, it will refresh the secrets to provide a valid connection.

Here are the two steps to configure your Spring Boot application.

1 - Add the dependency to your pom.xml

<dependency>
    <groupId>com.amazonaws.secretsmanager</groupId>
    <artifactId>aws-secretsmanager-jdbc</artifactId>
    <version>1.0.7</version>
</dependency>

2 - Set up the database connection in your application.yaml

spring:
  datasource:
    url: jdbc-secretsmanager:mysql://database-host:3306/rotate_db
    username: secret/rotation
    driver-class-name: com.amazonaws.secretsmanager.sql.AWSSecretsManagerMySQLDriver

The username is actually the secret name created in AWS Secrets Manager. Make sure to use the right URL; in this example it is a URL for MySQL.
RequirementRemove DB credentials from Java Code(property files) to AWS SM.Implement autorotation of DB credentials.Problem StatementThough we are able to retrieve DB credentials from AWS SM from our application, but we are facing below issues during auto rotation of passwords:How Java Code will identify that DB passwords are rotated by AWS SMAll the instances of application should be updated with new DB credentials after automatic password rotation from AWS SM.Proposed SolutionSolution1Whenever passwords are rotated, java application won’t be able to connect to DB.At that time, we will get SQL Connection exception (Connection lost exception) in our application.Java Application will catch the exception & then add a mechanism to retrieve the DB secrets again from AWS SM.Set up new Db connection with the updated credentials.Step 3 & 4 would be done for all the instances of the applicationSolution 2We can call refresh method and will set up new DB connection automatically & avoid SQL Connection exception .Is there any way without any db connection issues? we can rotate db password using aws SM
AWS secret manager Password Rotation Without Restarting Spring boot application
To understand tunnel addressing you need to understand the difference between route-based tunnels and policy-based tunnels. Amazon uses route-based tunnels, which means that there is a virtual interface at either end of the tunnel. The "inside tunnel ipv4 cidr" is the address space used by that tunnel interface.
I'm configuring a VPN Site to Site in AWS and I would like to understand some Tunnel Options.What does Inside tunnel IPV4 CIDR means ? I need to allow this range in some place ? And there's some impact if my subnet has higher IP range than this tunnel CIDR ? For example if my lambda’s subnet are /21 and this inside IPV4 CIDR tunnel is /30 ?What should the security groups of the other account allow ? The outside IP address ?
What does 'Inside tunnel IPv4 CIDR' in VPN Site to Site Options mean?
If I were you, I would just implement my own converter that transforms a Step Functions event into an API Gateway event and then call express-serverless in your Lambda.

The package aws-lambda contains TypeScript definitions of many AWS events, including those ones, so try to generate a mock API Gateway event from your own Step Functions event value. From the sources (4.3.9) we have the function:

function getEventSourceNameBasedOnEvent ({ event }) {
  if (event.requestContext && event.requestContext.elb) return 'AWS_ALB'
  if (event.Records) return 'AWS_LAMBDA_EDGE'
  if (event.requestContext) {
    return event.version === '2.0' ? 'AWS_API_GATEWAY_V2' : 'AWS_API_GATEWAY_V1'
  }
  throw new Error('Unable to determine event source based on event.')
}

So probably, to make it work correctly, you have to define a mock requestContext value in your mock event and that should be enough.
I have a lambda setup that usesvendia serveless-expressas a handler. It has been working well to serve REST APIs from the single lambda function with multiple routes.I now have a new requirement, where the same lambda needs to be part of a step function's state machine. But that doesn't seem to be allowed by vendia app, as it always throws the error: "Unable to determine event source based on event" as it expects the event to be api gateway / alb only.So, based on this, it looks like I will need a separate lambda for step, which makes me have duplicate code in multiple lambdas.Is it possible for the lambda to handle the inputs from step and still be a vendia express app? Please let me know if I am trying something that doesn't make sense at all.
Can a lambda that uses vendia serverless-express be a step in a state machine?
Jupyter server emits logs to stdout, so for notebook instances at least you can access the logs from CloudWatch. These would be under the /aws/sagemaker/NotebookInstances log group.

You can follow the steps below to view the kernel-level logs in CloudWatch:

1. Choose Notebook instances.
2. In the list of notebook instances, choose the notebook instance for which you want to view Jupyter logs by selecting the notebook instance name. This will bring you to the details page for that notebook instance.
3. Under Monitor on the notebook instance details page, choose View logs.
4. In the CloudWatch console, choose the log stream for your notebook instance. Its name is in the form NotebookInstanceName/jupyter.log.

Reference: https://docs.aws.amazon.com/sagemaker/latest/dg/jupyter-logs.html
CloudWatch records logs for Sagemaker instance such as Kernel Started, Kernel shutdown, Notebook Saved etc by default. Though, I want to list some custom logs along with these default logs.Please have a look at the picture attached.Sample image of How default logs for a Sagemaker notebook instance look in CloudWatchThe goal is to be able to see some custom logs with these. For example - 'Cell 1 executed!'
Can we generate custom logs for Sagemaker notebook instance in CloudWatch?
I have a pretty dumb reason that caused this AttributeError: after a long import chain through package initialization, the last package that actually triggered this error (for me) was numpy, which I had copied and pasted directly into site-packages. The source I copied it from used a different Python version, so something was probably masked. I re-installed the numpy package for the correct version, and it works well.
I am trying to use confluent_kafka:1.6.0 module in aws lambda function and in theinitfile in line 9 they have the following statement:os.add_dll_directory(libs_dir)which is used to import some dll file that the module need, but I keep getting an error that say:"errorMessage": "module 'os' has no attribute 'add_dll_directory'". Facts:os module does exist.sys exist with version:3.8.8sys.version_info(major=3, minor=8, micro=8, releaselevel='final', serial=0)All my python libraries are in a separate layer from my lambda functionWhen I display function of os, add_dll_library doesn't come back as part of the listAnyone got an idea of what why os does not have that method? and would I fix that in AWS lambda?
module 'os' has no attribute 'add_dll_directory' python 3.8
If you have custom applications you can deploy them classically on a virtual machine; in AWS that is, for example, an EC2 instance. You can also deploy dockerized applications on ECS.

You can connect your backend to an RDS instance via environment variables such as host, password, port... documented here.
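If your NestJS backend happens to use TypeORM, wiring the RDS settings through environment variables could look roughly like this; the ORM, the Postgres driver and the variable names are assumptions, not from the original answer:

// app.module.ts - read RDS connection settings from the environment
import { Module } from '@nestjs/common';
import { TypeOrmModule } from '@nestjs/typeorm';

@Module({
  imports: [
    TypeOrmModule.forRoot({
      type: 'postgres',
      host: process.env.DB_HOST,
      port: parseInt(process.env.DB_PORT ?? '5432', 10),
      username: process.env.DB_USER,
      password: process.env.DB_PASSWORD,
      database: process.env.DB_NAME,
    }),
  ],
})
export class AppModule {}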
This is my first Question here. I develop a fullstack nestJS app with Angular and want to host it on aws now. After i read the manual, they just always talk about "fullstack" in combination with multiple frontends. The Backend Environment from AWS doesnt help me anything, because i wrote my own backend.So, can someone tell me, who i can deploy frontend and backend on aws and connect them with a rds ? Frontend works and i try something like that with the build file:version: 1 frontend: phases: preBuild: commands: - npm ci build: commands: - npm run build artifacts: baseDirectory: dist/apps/frontend-app files: - '**/*' cache: paths: - node_modules/**/* backend: phases: preBuild: commands: - npm ci build: commands: - npm run build-backend artifacts: baseDirectory: dist/apps/api files: - '**/*' cache: paths: - node_modules/**/*Thanks and stay healthy
Amazon AWS Amplify NestJs - host own frontend and backend
I created an npm package to do this in CDK: https://github.com/HarshRohila/cdk-secure-parameter-store

This uses a Lambda-backed Custom Resource.

How does this work? CloudFormation does not have an API to create a secure parameter, but the AWS SDK does. So the idea is to use a CloudFormation Custom Resource with a Lambda attached; that Lambda is called whenever the Custom Resource is created/updated/deleted, and the Lambda can use the AWS SDK to create and delete the parameter.

More discussion about this issue here.
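This is not the linked package's code, just a sketch of the same idea using CDK's built-in AwsCustomResource construct (which provisions a small Lambda under the hood); the parameter name and value are placeholders:

import { AwsCustomResource, AwsCustomResourcePolicy, PhysicalResourceId } from 'aws-cdk-lib/custom-resources';

// Inside a Stack or Construct. Note: a literal value here ends up in the
// synthesized template, so source it appropriately in real use.
new AwsCustomResource(this, 'SecureParameter', {
  onUpdate: { // also runs on create
    service: 'SSM',
    action: 'putParameter',
    parameters: {
      Name: '/my/app/db-password',
      Value: 'super-secret-value',
      Type: 'SecureString',
      Overwrite: true,
    },
    physicalResourceId: PhysicalResourceId.of('/my/app/db-password'),
  },
  onDelete: {
    service: 'SSM',
    action: 'deleteParameter',
    parameters: { Name: '/my/app/db-password' },
  },
  policy: AwsCustomResourcePolicy.fromSdkCalls({
    resources: AwsCustomResourcePolicy.ANY_RESOURCE,
  }),
});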
So I found out that you can't use CloudFormation to insert a parameter that needs to be secured with a KMS Key into Secure Parameter Store. Obviously, you can use the cli, but that has huge drawbacks when it comes to doing multiple insert secure parameters within a pipeline because if one fails in the middle, the other ones to revert back as it would if it was done via CDK and Cloudformation.So the question is, how have others incorporated this type of functionality in a CI/CD pipeline? Manually go to each environment and put it into a Secure Parameter Store?
CDK and automation of inserting secure string parameters into ssm parameter store?
It seems this is possible with this page: https://lightsail.aws.amazon.com/ls/docs/en_us/articles/amazon-lightsail-point-domain-to-container-service
While setting up a new container service in AWS Lightsail I've come across an issue setting up my DNS. As describedhereI am not able to route the traffic for the apex of my domain (querowebdesign.com) to the container service itself using the Lightsail DNS service, so I was looking at using Route 53 to manage my DNS and leverage alias records to provide this capability. However, when attempting to create a Route 53 alias A record I don't seem to be able to route directly to a Lightsail container service.So a couple of questions - firstly, does my approach make sense, and secondly is this possible at all?
Is it possible to connect a Route 53 hosted zone to Lightsail Container Service using the apex domain?
You've most likely run into the WebSocket connection limit. The JavaScript client for Gremlin does not manage a connection pool. The documentation recommends using a single connection per Lambda lifetime and handling retry manually (if the Gremlin client doesn't do it for you).

Neptune Limits - AWS Documentation
I am working on an app that uses AWS Lambda which eventually updates Neptune. I noticed, that in some cases I get a 429 Error from Neptune: Too Many Requests. Well, as descriptive as it might sound, I would love to hear an advice on how to deal with it. What would be the best way to handle that?Although I am using a dead letter queue, I'd rather have it not going this road at the first place.Btw the lambda is triggered by a SQS (standard) queue.Any suggestions?
AWS Neptune access from Lambda - Error 429 "Too Many Requests"
When I started working with NATS I had a similar issue. For me, the best and easiest solution was to do port-forwarding:

kubectl port-forward service/nats 4222:4222

After doing this, you should be able to do:

nats server ping -s nats://localhost:4222
I have deployed NATS (https://nats.io/) into my Kubernetes cluster which is running on AWS and I am trying to expose this service externally.These are the current details of my nats service.NAME TYPE CLUSTER-IP EXTERNAL-IP nats ClusterIP None None Port(s) 4222/TCP,6222/TCP,8222/TCP,7777/TCP,7422/TCP,7522/TCPCurrently, the nats service is a ClusterIP service and when I try to patch it to become a LoadBalancer service with this command:kubectl patch svc nats -p '{"spec": {"type": "LoadBalancer"}}'It leads to this error:The Service "nats" is invalid: spec.clusterIP: Invalid value: "None": may not be set to 'None' for LoadBalancer services.Hence, how can I be actually expose this Nats service externally? Any guidance provided will be greatly appreciated.
How to expose a NATS server externally
In Redshift, you can use the translate function to normalize a string. The translate function takes three arguments: the source string, the characters to replace, and the replacement characters. You can use it to replace all the special characters in your string with their ASCII equivalents and, additionally, replace spaces with "-" characters. For example:

SELECT translate('São Paulo', ' áàãâäéèêëíìîïóòõôöúùûüçÁÀÃÄÉÈÊËÍÌÎÏÓÒÕÖÔÚÙÛÜÇ', '-aaaaaeeeeiiiiooooouuuucAAAAAEEEEIIIIOOOOOUUUUC')

This query would return the string "Sao-Paulo". You can use the lower function to convert the string to lowercase. Here's an example of how you could use these functions together to normalize a string:

SELECT lower(translate('São Paulo', ' áàãâäéèêëíìîïóòõôöúùûüçÁÀÃÄÉÈÊËÍÌÎÏÓÒÕÖÔÚÙÛÜÇ', '-aaaaaeeeeiiiiooooouuuucAAAAAEEEEIIIIOOOOOUUUUC'))

This query would return the string "sao-paulo".
Since my texts are in Portuguese, there are many words with accent and other special characters, like: "coração", "hambúrguer", "São Paulo".Normally, I treat these names in Python with the following function:from unicodedata import normalize def string_normalizer(text): result = normalize("NFKD", text.lower()).encode("ASCII", "ignore").decode("ASCII") return result.replace(" ", "-")This would replace the blank spaces with '-', replace special characters and apply a lowercase convertion. The word "coração" would become "coracao", "São Paulo" would become "Sao Paulo" and so on. Now, I'm not sure what's the best way to do this in Redshift. My solution would be to apply multiple replaces, something like this:replace(replace(replace(lower(column), 'á', 'a'), 'ç', 'c')...Even though this works, it doesn't look like the best solution. Is there an easy way to normalize my string?
What's the best way to 'normalize' a string in Redshift?
BatchGetItem already does its work in parallel: "In order to minimize response latency, BatchGetItem retrieves items in parallel."

Although I don't have benchmarks for you, a single BatchGetItem can get up to 100 items in parallel. BatchGetItem is also a single API call, so performing one API call to get 100 items should be much faster than doing 100 individual API calls using GetItem, due to network latency alone.
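A minimal boto3 sketch of the single-call version; the table name and key shape are placeholders:

import boto3

dynamodb = boto3.client("dynamodb")

# One call fetches up to 100 items (possibly across tables)
response = dynamodb.batch_get_item(
    RequestItems={
        "my-table": {
            "Keys": [{"pk": {"S": f"item-{i}"}} for i in range(100)],
        }
    }
)

items = response["Responses"]["my-table"]
# Keys DynamoDB could not process in this call come back here and should be retried
unprocessed = response.get("UnprocessedKeys", {})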
Is there much difference (in time performance) in usingBatchGetItemvs issuing severalGetItemin parallel?My code will be cleaner if I can useGetItemand just handle the parallelisation myself.However, if there's a definite time performance advantage toBatchGetItemthen I'd certainly use that.
AWS BatchGetItem vs GetItem in parallel
You just need to send two extra headers, 'In-Reply-To' and 'References'. The value of these two headers should be the message ID of the last email in the thread, in the form "<message-id@email.amazonses.com>". Don't forget the '<' and '>'. E.g. "[email protected]"
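A small Python sketch of how those headers can be set with boto3's send_raw_email; the addresses and the message ID are placeholders, not real values:

import boto3
from email.mime.text import MIMEText

# Use the SES message ID returned when you sent the original email,
# wrapped in angle brackets
previous_id = "<0000018c1234abcd-example@email.amazonses.com>"

msg = MIMEText("Reply body")
msg["Subject"] = "Re: original subject"
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["In-Reply-To"] = previous_id
msg["References"] = previous_id

boto3.client("ses").send_raw_email(
    Source=msg["From"],
    Destinations=[msg["To"]],
    RawMessage={"Data": msg.as_string()},
)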
I tried to first go through documentation and aws forums but not able to find any solution.Thisis one big discussion regarding threading in outlook but gives no solution. i am hoping it to be a fairly simple thing but since I am new to aws-ses, its getting hard to find a good solution for this.
How do I create an email thread when using AWS-SES?
I am not duplicating this template, but since I had a similar "Model validation failed" issue: if you're using labels, make sure they follow the Key/Value format (Key: xxx, Value: xxx).
I'm able to create an EKS cluster using Cloudformation. I'm also able to have a "node group" and EC2's + autoscaling. On the other side I can also install a FargateProfile and create "fargate nodes". This works well. But I want to use only fargate (no EC2 nodes etc). Here for I need to host my management pods (in kube-system) also on Fargate. How can I managed this?I tried this:Resources: FargateProfile: Type: AWS::EKS::FargateProfile Properties: ClusterName: eks-test-cluster-001 FargateProfileName: fargate PodExecutionRoleArn: !Sub 'arn:aws:iam::${AWS::AccountId}:role/AmazonEKSFargatePodExecutionRole' Selectors: - Namespace: kube-system - Namespace: default - Namespace: xxx Subnets: !Ref SubnetsBut my management pods remain on the EC2's. Probably I'm missing some labels but is this the way to go? Some labels are generated with a hash so I can't just add them in my fargateProfile.with eksctl it seems possible: Adding the --fargate option in the command above creates a cluster without a node group. However, eksctl creates a pod execution role, a Fargate profile for the default and kube-system namespaces, and it patches the coredns deployment so that it can run on Fargate.but how to do this in CloudFormation?I also tried with the labels but then I got an error in CloudFormation: Modelvalidation failed (#: extraneous key [k8s-app] is not permitted)
Install EKS with Fargate using CloudFormation
You can use SNS Mobile Push notifications on iOS and Android, but as far as I know not on Huawei.

Visit these docs to understand more: https://aws.amazon.com/sns/faqs/#Mobile_push_notifications

SNS Mobile Push lets you use Simple Notification Service (SNS) to deliver push notifications to Apple, Google, Fire OS, and Windows devices, as well as Android devices in China with Baidu Cloud Push. With push notifications, an installed mobile application can notify its users immediately by popping a notification about an event, without opening the application.

AWS Amplify for Flutter: AWS Amplify announced its support for the Flutter framework. AWS Amplify offers a variety of services such as Authentication (including third party/social media login), push notifications, storage, GraphQL, Datastore, REST API, Analytics, PubSub, and more.

Visit this article: https://medium.com/flutter-community/aws-amplify-for-flutter-ed7890f75493
I wonder if maybe anyone has worked with aws push notifications with flutter for ios, android and huawei platforms. If so, what plugins have they used?If you have documentation on this topic, I would appreciate it if you share it.
AWS SNS in Flutter
If you are not using the default VPC for the EC2 instance you created, then you need to attach an Internet Gateway to the VPC you are planning to use for the EC2 instance.

Do the following:

1. Create a VPC and subnets (see "How to Create VPC and Subnets").
2. Create an Internet Gateway (see "How to Create and Attach Internet Gateway").
3. Now create your EC2 instance using the VPC you created. Auto-assign public IP is disabled by default for a non-default VPC (i.e. one created manually), so enable it while creating the instance; select the VPC and subnet you created in step 1 for the Network and Subnet fields.
4. Also ensure that SSH is allowed from anywhere in the security group you create for the EC2 instance.

Once the instance is launched you will be able to access it via SSH (an equivalent CLI sketch follows below):

ssh -i your_certificate.pem username@public_ip

Hope you find it useful.

Note: the message saying "Session Manager setup is incomplete" may still persist, but you will be able to access the box via SSH if you follow the above steps. That message appears because the SSM Agent might not be installed; kindly refer to "Complete the Session Manager Setup".
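If you prefer the CLI, the same gateway/routing setup can be sketched like this; all resource IDs below are placeholders:

# Create an internet gateway and attach it to the VPC
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0

# Route internet-bound traffic from the subnet's route table through the gateway
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0123456789abcdef0

# Let instances launched in the subnet get a public IP automatically
aws ec2 modify-subnet-attribute --subnet-id subnet-0123456789abcdef0 --map-public-ip-on-launch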
I've been using AWS free tier for a while. I can usually create new instances and connect to them with no issues. One week ago I started getting the Session Manager message in bold below while trying to connect.I've did some troubleshooting, tried every step suggest by AWS but still no luck. I can still create instances but can't connect. The problem isn't related to a specific instance, it's impacting any instance I create. I've tried rebooting but no change and the instance/s do not appear in System Manager Session Manager console.We weren't able to connect to your instance. Common reasons for this include:SSM Agent isn't installed on the instance. You can install the agent on both Windows instances and Linux instances. The required IAM instance profile isn't attached to the instance. You can attach a profile using AWS Systems Manager Quick Setup. Session Manager setup is incomplete. For more information, see Session Manager Prerequisites.**
Can't connect to AWS Windows or Linux instances anymore
Firstly, there may not always be an email, as others have said. To call AWS APIs you need to be able to assume a role which has the ability to call ec2:RunInstances. If, for example, you granted this role to a Lambda function, that Lambda could indeed create a new EC2 instance, but it doesn't have an email.

Using Tags

What you want to see is who called ec2:RunInstances for that EC2 instance. If you have enabled cost allocation tags you could use the aws:createdBy tag, as described here. To access the tag from inside the instance, you first need the instance ID, and then query for the tag:

instance_id=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 describe-tags \
  --filters "Name=resource-id,Values=$instance_id" 'Name=key,Values=aws:createdBy' \
  --query 'Tags[].Value' --output text

CloudTrail

If you weren't in the instance, you could search CloudTrail for events of name RunInstances and find where responseElements.instanceSet.items[].instanceId == 'YOUR INSTANCE ID'. I believe this can also be done in AWS Config, if you had it enabled for your instance from its creation date.
How can I find out the email address I used to create an EC2/lightsail instance through SSH?
How do I get email information from ec2 instance?
You can try enabling TLS again but pass -Djdk.tls.client.protocols=TLSv1.2 on the command line to downgrade the TLS version. Or try upgrading the Java version.
My application is written in spring boot and working fine with a self-managed MongoDB server. Now I am trying to connect my same spring boot application with AWS DocumentDB. I started a documentDB cluster and connecting with spring-boot with the following configuration.spring.data.mongodb.uri=mongodb://<user>:<password>@<my-cluster-endpoint>:27017/?replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false spring.data.mongodb.database=mydbI have disabled TLS and it is showingTLS Enabled: Noin cluster detail. I am deploying my spring boot application in an EC2 instance which is running in the same vpc as documentdb is running, I have cross checked it. When running my application I am getting the following error.No server chosen by com.mongodb.client.internal.MongoClientDelegate$1@51a81d99 from cluster description ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE, serverDescriptions=[ServerDescription{address=:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message}, caused by {java.net.SocketTimeoutException: Read timed out}}]}. Waiting for 30000 ms before timing outI am not finding a good source for working with documentdb in spring-boot. Any idea of how to find the exact reason?
AWS DocumentDB with spring boot fails Error : No server chosen by com.mongodb.client.internal.MongoClientDelegate
AFAIK this is not possible at the API Gateway level. One option is to do the mapping at the Lambda integration level.
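A rough Python sketch of what that Lambda-level mapping could look like; the handler name, brand mapping and backend URLs are assumptions, and only a simple GET forward is shown:

import json
import urllib.request

# Hypothetical mapping from the brand-id header to backend hosts
BRAND_ENDPOINTS = {
    "abc": "https://abc.test.com",
    "pqr": "https://pqr.test.com",
}

def handler(event, context):
    # Header names may arrive in different casing depending on the client
    headers = event.get("headers") or {}
    brand_id = headers.get("brand-id") or headers.get("Brand-Id", "")
    base_url = BRAND_ENDPOINTS.get(brand_id)
    if base_url is None:
        return {"statusCode": 400, "body": json.dumps({"error": "unknown brand-id"})}

    # Forward the original path to the selected backend
    path = event.get("path", "/")
    with urllib.request.urlopen(base_url + path) as resp:
        return {"statusCode": resp.status, "body": resp.read().decode()}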
I would like to proxy the incoming requests to different endpoints based on a request header received in the request.In AWS API gateway, I can set up different endpoints as separate stage variables but at integration >> Endpoint URL setting, I would like to pick the stage variable based on the value of request header value.For example:if header value is brand-id: abc then request should be proxied to abc.test.comif header value is brand-id: pqr then request should be proxied to pqr.test.comI'm expecting something like this in "Endpoint URL" value:http://${stageVariables.${method.request.header.brand-id}}/Any help to achieve this would be appreciated.
How to set integration endpoint dynamically based on request header in AWS API Gateway?
For this exact situation, we use https://github.com/kahing/goofys

It's very reliable and, additionally, offers the ability to mount S3 buckets as folders on any device: Windows, Mac, and of course Linux. It works outside of the AWS cloud 'boundary' too, which is great for developer laptops. The downside is that it does /not/ work in a Lambda context, but you can't have everything!
I am building an application where the user can upload images, for this I am using S3 as files storage.In other area of the application there is some process deployed on EC2 that need to use the uploaded images. This process need the images multiple times (it generate some report with it) and it's in part of multiple EC2 - using elastic beanstalk.The process doesn't need all the images at once, but need some subset of it every job it gets (depend the parameters it gets).Every ec2 instance is doing an independent job - they are not sharing file between them but they might need the same uploaded images.What I am doing now is to download all the images from s3 to the EC2 machine because it's need the files locally. I have read that EFS can be mounted to an EC2 and then I can access it like it was a local storage.I did not found any example of uploading directly to EFS with nodejs (or other lang) but I found a way to transfer file from S3 to EFS - "DataSync".https://docs.aws.amazon.com/efs/latest/ug/transfer-data-to-efs.htmlSo I have 3 questions about it:It is true that I can't upload directly to EFS from my application? (nodesjs + express)After I move files to EFS, will I able to use it exactly like it in the local storage of the ec2?Is it a good idea to move file from s3 to efs all the time or there is other solution to the problem I described?
Move files from S3 to AWS EFS on the fly
Assuming that everything else is correct, the connection block should be inside the provisioner, not outside of it:

resource "aws_instance" "ec2_test_instance" {
  ami           = var.instance_test_ami
  instance_type = var.instance_type
  subnet_id     = var.aws_subnet_id
  key_name      = aws_key_pair.deployer.key_name

  provisioner "remote-exec" {
    connection {
      type        = "ssh"
      host        = self.public_ip
      user        = "centos"
      private_key = file("${path.module}/my-key")
    }

    inline = [
      "sudo yum -y install wget, unzip",
      "sudo yum -y install java-1.8.0-openjdk",
    ]
  }
}
getting error "import KeyPair: MissingParameter: The request must contain the parameter PublicKeyMaterial " when I run "terraform apply". what does this error mean.resource "aws_instance" "ec2_test_instance" { ami = var.instance_test_ami instance_type = var.instance_type subnet_id = var.aws_subnet_id key_name = aws_key_pair.deployer.key_name tags = { Name = var.environment_tag } provisioner "local-exec" { command = "echo ${self.public_ip} > public-ip.txt" } provisioner "remote-exec" { connection { type = "ssh" host = self.public_ip user = "centos" private_key = file("${path.module}/my-key") } inline = [ "sudo yum -y install wget, unzip", "sudo yum -y install java-1.8.0-openjdk" ] } }
terraform V12: Error import KeyPair: MissingParameter: The request must contain the parameter PublicKeyMaterial
It can work. However, keep in mind there are quotas set by AWS for the number of groups in a user pool (10,000) and the number of groups a user can belong to (100).https://docs.aws.amazon.com/cognito/latest/developerguide/limits.htmlIt these limits work for your application, then it should be fine.
I'm developing a serverless application which includes a concept of groups which are not pre-defined (so not the classic fixed Admin/Guest...).Those groups are indeed generated freely by the end users of the application whom then can invite other users into these groups, remove them, delete the group etc.Being part of a group(s) allow a user to perform certain operations on entities related to that specific group.I'm now wondering if Cognito User Groups can be used for this purpose given that those groups will be created directly from the application by the end user and potentially an infinite number of groups.The other option is to implement my own authoriser like querying DynamoDB to check if a user is a part of specific group. I can't really find a reference to pick up the best one.Any experience/suggestions will be much appreciated.
Can Cognito User Groups be used for dynamic groups in a serverless app?
There is an ignore_tags argument on the AWS provider. However, for AWS provider v2.70.0 the documentation says it doesn't work on aws_autoscaling_group resources. I am using AWS provider v3.75.0, and the documentation for that version doesn't exclude aws_autoscaling_group resources; I can confirm it works.

My provider now looks like this:

provider "aws" {
  version = "~> 3.75.0"
  region  = var.aws_region

  ignore_tags {
    keys = ["CodeDeployProvisioningDeploymentId"]
  }
}
I'm using Terraform v0.12.25 with provider.aws v2.70.0. I have ASG resource defined in Terraform:resource "aws_autoscaling_group" "web" { name = "CodeDeploy_production_web" max_size = 40 min_size = 1 wait_for_capacity_timeout = "0" health_check_type = "EC2" desired_capacity = 1 launch_configuration = aws_launch_configuration.web.name vpc_zone_identifier = data.aws_subnet_ids.subnets.ids suspended_processes = [] tag { key = "Environment" propagate_at_launch = true value = "production" } tag { key = "Name" propagate_at_launch = true value = "Web_App_production_CD" } tag { key = "CodeDeployProvisioningDeploymentId" propagate_at_launch = true value = "" } lifecycle { ignore_changes = [ desired_capacity, name ] } }I want to ignore changes on tag "CodeDeployProvisioningDeploymentId". I've tried adding it toignore_changesblock but I didn't succeed in making it work. Does anyone know how to do this?
Terraform ignore_changes for CodeDeployProvisioningDeploymentId tag
This isn't a supported option with S3 Batch Operations
I read from this doc "After the first 1000 objects have been processed, S3 Batch Operations examines and monitors the overall failure rate, and will stop the job if the rate exceeds 50%."https://aws.amazon.com/blogs/aws/new-amazon-s3-batch-operations/How to I control this value, to say, 100? 50? Or any custom value for that nature that suits my business case?
How do I set number of objects failed threshold trigger to stop S3 Batch Operations job to custom value other than 1000?
In my understanding this is possible. We are logging the email bounce details in Amazon CloudWatch. For more information please follow this link: https://docs.aws.amazon.com/ses/latest/DeveloperGuide/event-publishing-retrieving-cloudwatch.html
Currently I am using configuration sets with metrics dashboards for monitoring SES sending statistics. But I am wondering if it is possible to log send details from SES directly to Cloudwatch as normal logs, so I could later query them using log insights?
AWS SES logs to Cloudwatch
You can't mix API Gateway mappings between REST APIs and WebSocket APIs under a single custom domain. In other words, you can't use the same domain or subdomain for a REST API and a WebSocket API.

A few things that you should be aware of when creating the custom domain for the WebSocket mapping:

- WebSocket doesn't support an edge-optimized custom domain endpoint.
- The WebSocket security policy allows only TLS 1.2, not TLS 1.0.
- The domain certificate can't be referenced from another region, unlike a REST API custom domain reference.

How to create a custom domain in CloudFormation for a REST API and a WebSocket API:

WebSocket custom domain

ApiGWCustomDomainName:
  Type: 'AWS::ApiGateway::DomainName'
  Properties:
    RegionalCertificateArn: !Ref RegionalCertificateArn
    DomainName: !Ref DomainName
    EndpointConfiguration:
      Types:
        - REGIONAL
    SecurityPolicy: TLS_1_2
AppApiMapping:
  Type: 'AWS::ApiGatewayV2::ApiMapping'
  Properties:
    ApiMappingKey: !Ref BasePath
    DomainName: !Ref ApiGWCustomDomainName
    ApiId: !Ref websocketAPI
    Stage: !Ref Stage

REST API

ApiGWCustomDomainName:
  Type: 'AWS::ApiGateway::DomainName'
  Properties:
    CertificateArn: !Ref CertificateArn
    DomainName: !Ref DomainName
AppApiMapping:
  Type: 'AWS::ApiGateway::BasePathMapping'
  Properties:
    BasePath: !Ref BasePath
    DomainName: !Ref ApiGWCustomDomainName
    RestApiId: !Ref RestApi
    Stage: !Ref ApiStageName
I have created a messaging app using aws WebSocket api and deployed using serverless. The apis are successfully deployed and I am able to test those using wscat. I have other Rest apis in the stack too. I tried mapping my new WebSocket api stack to an existing domain name, but getting the error :Only REGIONAL domain names can be managed through the API Gateway V2 API. For EDGE domain names, please use the API Gateway V1 API. Also note that only REST APIs can be attached to EDGE domain names.I'm stuck and trying to figure out what changes are to be made.I went throughhttps://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-api.htmlandhttps://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.htmlbut couldn't figure out.
Connect aws WebSocket api to custom domain name
What worked for me is to create a zip file in the root directory of your app:

zip ../rails-default.zip -r * .[^.]

https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/ruby-rails-tutorial.html
I am trying to deploy a Rails 6 app on AWS via Elastic Beanstalk.When I runeb deploy, it fails. When I look at the logs, I see this message2020/06/03 14:19:51.457403 [ERROR] rbenv: version `2.7.0' is not installed (set by /var/app/staging/.ruby-version) 2020/06/03 14:19:51.457439 [ERROR] An error occurred during execution of command [app-deploy] - [stage ruby application]. Stop running the command. Error: install dependencies in Gemfile failed with error Command /bin/sh -c bundle config set --local deployment true failed with error exit status 1. Stderr:rbenv: version `2.7.0' is not installed (set by /var/app/staging/.ruby-version)However, when Ieb sshand runruby -vI see that I am runningruby 2.7.1p83 (2020-03-31 revision a0c7c23c9c) [x86_64-linux]So I updated myGemfileand.ruby-versiontoruby 2.7.1to match my AWS environment.When Icd /var/app/stagingandcat .ruby-versionI get2.7.1So why is this deploy failing? I am not requiringruby 2.7.0anywhere in my project.I've made sure togit push, so I know my Gemfile is pushed my to repo. I am going crazy trying to get this Rails App deployed.
How to resolve Rail 6 deploy error on AWS Elastic Beanstalk for ruby 2.7.1
Try using the scp command; you'll just need your .pem key and the public IP of your EC2 instance. Tutorial here.
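A minimal sketch, assuming the EFS file system is already mounted on the instance at /mnt/efs and that the key, user and paths below are placeholders:

# Copy a single file onto the EFS mount via the EC2 instance
scp -i my-key.pem ./local-file.txt ec2-user@<ec2-public-ip>:/mnt/efs/

# Copy a whole directory recursively
scp -i my-key.pem -r ./local-dir ec2-user@<ec2-public-ip>:/mnt/efs/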
I'd like to add files to my efs using command line but can't see any info online. I've mounted my EFS to an EC2 instance. Any idea how to do this?
How to upload local files to AWS Elastic File System (EFS)
It turns out that the instance needed to be rebooted. Normal escape sequences now work.
Using Lightsail, running Ubuntu 16
Using the native web browser terminal SSH

Open a file in vim
Switch to 'insert' mode in vim
Make changes

Documented vim key sequences of escaping don't work. I've tried:
Control [
Escape key
Control C
Caps Lock
Escape 'insert' mode in Amazon AWS Lightsail SSH Terminal vim
Query(), which you've linked to, is only useful with a composite (partition + sort) key, and thus the result set is ordered.

Scan() can be used with just a partition key. So in a sense the results are "un-ordered" within a given partition. For that matter, I don't think it is guaranteed that the logically first partition key will be the first scanned partition.

However, Amazon isn't magic. There's still some physical (and/or logical) order to the data; they may not publish the internals, and the order you see today may not be the one you see tomorrow. But Scan(), by definition, starts at the beginning and goes to the end, and you can't have a beginning or an end without an order in between.
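For reference, a minimal boto3 sketch of how LastEvaluatedKey drives pagination in practice; the table name is a placeholder:

import boto3

table = boto3.resource("dynamodb").Table("my-table")

items = []
kwargs = {}
while True:
    page = table.scan(**kwargs)
    items.extend(page["Items"])
    last_key = page.get("LastEvaluatedKey")
    if last_key is None:
        break  # no more pages
    # Resume the scan from where the previous page stopped
    kwargs["ExclusiveStartKey"] = last_key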
DynamoDB documentation (such ashere) explains how to useLastEvaluatedKeyto paginate through results. I know that it works, but I would like to understand how. As far as I know DynamoDB builds an unordered hash index on the partition key. Shouldn't that mean that if you give it a key, it doesn't know which keys are before or after it -- because it's not ordered? So how does it then know which keys follow theLastEvaluatedKey? How does this index work? Are new items/keys just appended to it? What happens to deleted items/keys?
How does DynamoDB LastEvaluatedKey work internally?
You can import the npm package cdk-constants. It "aims to be an up to date constants library for all things AWS".

While you can't use cdk-constants to programmatically check if a managed policy exists, you can inspect the library to see what managed policies are available.
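For context, this is roughly how managed policies are usually referenced from CDK (TypeScript, inside a construct or stack); the policy name and ARN are placeholders, and neither call verifies existence at synth time, so an invalid name only fails at deploy time:

import * as iam from 'aws-cdk-lib/aws-iam';

// Reference an AWS managed policy by name
const awsManaged = iam.ManagedPolicy.fromAwsManagedPolicyName('AmazonS3ReadOnlyAccess');

// Reference an existing customer managed policy by ARN
const existing = iam.ManagedPolicy.fromManagedPolicyArn(
  this, 'ExistingPolicy', 'arn:aws:iam::123456789012:policy/my-policy');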
I'm setting up infrastructure using AWS CDK. Is there any way I can check if a Managed Policy already exists?
CDK create new managed policy only if doesn't exist
The S3 VPC endpoint is configured via the routing of the VPC. For this reason you will not be able to resolve to it outside of the VPC.

However, you could run a private EC2 instance with a proxy in front to forward traffic to your S3 bucket, then set a resolvable hostname pointing to the EC2 instance itself.

Additional links: https://docs.aws.amazon.com/vpc/latest/userguide/vpce-gateway.html#vpc-endpoints-routing
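A minimal nginx sketch of that proxy idea; the hostname, certificate paths and bucket website endpoint are all placeholders, and TLS is terminated on the proxy since the S3 website endpoint itself is HTTP only:

server {
    listen 443 ssl;
    server_name internal.example.com;

    ssl_certificate     /etc/nginx/certs/internal.example.com.crt;
    ssl_certificate_key /etc/nginx/certs/internal.example.com.key;

    location / {
        # S3 website hosting matches the bucket by Host header
        proxy_set_header Host my-bucket.s3-website-us-east-1.amazonaws.com;
        proxy_pass http://my-bucket.s3-website-us-east-1.amazonaws.com;
    }
}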
I've been looking for a solution to host internal S3 website that's accessible over VPN. Clearly I don't wanna use CLoudFront + WAF to restrict the IP range.I tried setting up the following,Created a VPC Endpoint for S3 servicesCreated a static S3 website bucket with bucket policy restricting access only through VPC Endpoints.Created a private hosted zone and configured an Alias record set to S3 website addressThe above solution exposes HTTP endpoint and I wanna secure it with SSL and I'm looking for options.Have also been looking to setup reverse proxy infront of S3, but couldn't find a clear implementation reference.Does the above solution would work? Or I'm missing something big?
Hosting internal S3 Website that's accessible over VPN and is secured with HTTPS
Upgrading the package worked for me!
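Assuming the gocql driver and Go modules, the upgrade would look something like this:

go get -u github.com/gocql/gocql
go mod tidy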
Getting following error while trying to create session from cql,Error: Consistency level ANY is not supported for this operation. Supported consistency levels are: ONE, LOCAL_QUORUM, LOCAL_ONEI've usedAmazon Managed Apache Cassandra ServiceFollowing is the code for creating sessionclusterConfig := gocql.NewCluster("<HOST:PORT>") clusterConfig.Authenticator = gocql.PasswordAuthenticator{Username: "Username", Password: "Password"} clusterConfig.SslOpts = &gocql.SslOptions{ CaPath: "./AmazonRootCA1.pem", } clusterConfig.Consistency = gocql.LocalQuorum clusterConfig.ConnectTimeout = time.Second * 10 clusterConfig.ProtoVersion = 3 clusterConfig.DisableInitialHostLookup = true clusterConfig.Keyspace = "TestDB" clusterConfig.NumConns = 3 session, err := clusterConfig.CreateSession() if err != nil { fmt.Println("err>", err) } return sessionI am setting consistency level toLocalQuorumbut still its giving above mentioned error. If anybody knows how to resolve please help us out
gocql.createSession: Consistency level ANY is not supported for this operation
This is a very common problem. The way we got around it when reading text/JSON files was to add an extra step in between to cast and set the proper data types. The crawler's data types are a bit iffy sometimes and are based on the data sample available at that point in time.
I have JSON files in an S3 Bucket that may change their schema from time to time. To be able to analyze the data I want to run a glue crawler periodically on them, the analysis in Athena works in general.Problem: My timestamp string is not recognized as timestampThe timestamps currently have the following format2020-04-06T10:37:38+00:00, but I have also tried others, e.g.2020-04-06 10:37:38- I have control over this and can adjust the format.The suggestion to set the serde parameters might not work for my application, I want to have the scheme completely recognized and not have to define each field individually. (AWS Glue: Crawler does not recognize Timestamp columns in CSV format) Manual adjustments in the table are generally not wanted, I would like to deploy Glue automatically within a CloudFormation stack.Do you have an idea what else I can try?
Glue Crawler does not recognize Timestamps
Amazon EKS uses IAM to provide authentication to your Kubernetes cluster through the AWS IAM Authenticator for Kubernetes. You may update your config file referring to the following format:

apiVersion: v1
clusters:
- cluster:
    server: ${server}
    certificate-authority-data: ${cert}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      env:
        - name: "AWS_PROFILE"
          value: "dev"
      args:
        - "token"
        - "-i"
        - "mycluster"

Useful links:
https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
https://github.com/kubernetes-sigs/aws-iam-authenticator#specifying-credentials--using-aws-profiles
I have to setup CI in Microsoft Azure Devops to deploy and manage AWS EKS cluster resources. As a first step, found few kubernetes tasks to make a connection to kubernetes cluster (in my case, it is AWS EKS) but in the task "kubectlapply" task in Azure devops, I can only pass the kube config file or Azure subscription to reach the cluster.In my case, I have the kube config file but I also need to pass the AWS user credentials that is authorized to access the AWS EKS cluster. But there is no such option in the task when adding the New "k8s end point" to provide the AWS credentials that can be used to access the EKS cluster. Because of that, I am seeing the below error while verifying the connection to EKS cluster.During runtime, I can pass the AWS credentials via envrionment variables in the pipeline but can not add the kubeconfig file in the task and SAVE it.Azure and AWS are big players in Cloud and there should be ways to connect to connect AWS resources from any CI platform. Does anyone faced this kind of issues and What is the best approach to connect to AWS first and EKS cluster for deployments in Azure Devops CI.No user credentials found for cluster in KubeConfig content. Make sure that the credentials exist and try again.
How to connect AWS EKS cluster from Azure Devops pipeline - No user credentials found for cluster in KubeConfig content
You can use Transcribe streaming to transcribe live speech; the service supports WebSocket: https://docs.aws.amazon.com/transcribe/latest/dg/websocket-med.html (a rough sketch of the streaming flow follows).
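The question is about Android, so this is not a drop-in answer, but for a sense of the streaming flow here is a minimal sketch using AWS's Python streaming SDK (the amazon-transcribe package). The region, sample rate, and the audio_chunks source are placeholders; on Android you would drive the same streaming/WebSocket API from your own client code.

import asyncio

from amazon_transcribe.client import TranscribeStreamingClient
from amazon_transcribe.handlers import TranscriptResultStreamHandler
from amazon_transcribe.model import TranscriptEvent


class PrintHandler(TranscriptResultStreamHandler):
    async def handle_transcript_event(self, event: TranscriptEvent) -> None:
        # Partial and final results arrive while the audio is still streaming.
        for result in event.transcript.results:
            for alt in result.alternatives:
                print(alt.transcript)


async def transcribe_live(audio_chunks):
    # audio_chunks is assumed to be an async iterator of raw PCM byte chunks,
    # for example fed from the device microphone.
    client = TranscribeStreamingClient(region="us-east-1")
    stream = await client.start_stream_transcription(
        language_code="en-US",
        media_sample_rate_hz=16000,
        media_encoding="pcm",
    )

    async def send_audio():
        async for chunk in audio_chunks:
            await stream.input_stream.send_audio_event(audio_chunk=chunk)
        await stream.input_stream.end_stream()

    handler = PrintHandler(stream.output_stream)
    await asyncio.gather(send_audio(), handler.handle_events())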
I want to convert the user's live speech to text using the AWS Transcribe API. For some reason, there is no proper documentation on how this is done on Android. This is a link that does it in an inefficient manner: Speech to text by AWS service using Java API. In that link, the solution is to get the audio file from the user, store it in S3, convert it using Transcribe (waiting a few minutes for the job to complete), and store the output back in S3. I want to do it without storing the file in S3, converting it, and then storing the output file in S3 again. How can I do that?
How to get live Speech to Text using AWS Transcribe in android?
Your assumption is right: you need to configure on-premises DNS resolution to forward to the internal AWS DNS. I haven't done that before, but Resolving DNS Queries Between VPCs and Your Network - Amazon Route 53 can help you :-) A rough sketch of creating the inbound resolver endpoint follows. Also, you could just open RDS to the public internet, but that's not safe (and not your case, I assume).
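For reference, creating the Route 53 Resolver inbound endpoint that your on-prem DNS servers can forward queries to might look roughly like this with boto3. The subnet and security group IDs are placeholders, and the security group must allow DNS (TCP/UDP port 53) from the on-prem network.

import boto3

resolver = boto3.client("route53resolver", region_name="us-east-1")

response = resolver.create_resolver_endpoint(
    CreatorRequestId="rds-dns-inbound-1",        # any unique idempotency string
    Name="onprem-to-vpc-inbound",
    Direction="INBOUND",
    SecurityGroupIds=["sg-0123456789abcdef0"],   # must allow TCP/UDP 53 from on-prem
    IpAddresses=[
        {"SubnetId": "subnet-0aaa11112222bbbb3"},
        {"SubnetId": "subnet-0ccc44445555dddd6"},
    ],
)

# Point the on-prem DNS server's conditional forwarder for the RDS
# endpoint's domain at the IP addresses this endpoint receives.
print(response["ResolverEndpoint"]["Id"])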
I have a site-to-site VPN connection from my on-prem network to the VPC where RDS resides. I am trying to connect to MySQL using the DNS endpoint RDS provides. I am unable to connect via the DNS endpoint, but I am able to connect using the private IP that the endpoint resolves to. I assume that the DNS is internal to AWS and my on-prem network cannot resolve it. The RDS instance is publicly accessible. How could I connect using the DNS endpoint?
How can I connect to an RDS instance from an on prem network using a site to site VPN connection
Are you using Up (docs)? If you are, can you check the runtime in your Lambda settings and change the runtime there (a sketch of doing it programmatically is below). Thanks!
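If you would rather change it outside the console, flipping the runtime is a one-liner with boto3; the function name here is a placeholder for whatever Up created for your stage.

import boto3

lambda_client = boto3.client("lambda", region_name="eu-west-1")

# Move the function off the retired nodejs8.10 runtime.
lambda_client.update_function_configuration(
    FunctionName="my-app-staging",   # placeholder: the function Up deploys
    Runtime="nodejs12.x",
)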
I get the following error message when my deployment script runs to deploy to AWS Lambda. I've updated the Node version as shown in the screenshot below, which confirms I've changed Node to version 12.x. Not sure why I'm still getting this error message? :-(

error message

Error: deploying: eu-west-1: updating function config: InvalidParameterValueException: The runtime parameter of nodejs8.10 is no longer supported for creating or updating AWS Lambda functions. We recommend you use the new runtime (nodejs12.x) while creating or updating functions.

deployment script

#!/bin/bash
yarn
# deploy
/tmp/up/up deploy staging

possible solution?

I've seen this in the documentation, but having done the above I was under the impression I don't need to do this..?

aws lambda update-function-configuration --function-name --layers arn:aws:lambda::800406105498:layer:nsolid-node-10:6 --runtime provided

Documentation available here -> https://aws.amazon.com/blogs/developer/node-js-6-is-approaching-end-of-life-upgrade-your-aws-lambda-functions-to-the-node-js-10-lts/
How to fix the AWS Lambda nodejs8.10 is no longer supported error
I would be interested in this as well - I have the issue that I need to pass the parameter idp to the /authorize endpoint of the OIDC target, and there is a field called Identifier (optional), but whatever I do, it does not include it.
I'm using AWS Cognito for my SSO and added a federated IdP (PingFederate). Cognito does not have any option to add the additional query parameters that I want for PingFederate (acr_values and prompt). There's no documentation around this in AWS either. Is there a way to force Cognito to send additional query parameters to a federated IdP? I've read that Auth0 has dynamic parameters that can work around this.
AWS Cognito pass additional parameters to OIDC IDP
Check your bucket's Block Public Access setup; it may be configured to block public access regardless of the ACL you send (a quick check is sketched below). Also check the account-level setting, which can block public access to S3 for every bucket: https://aws.amazon.com/s3/features/block-public-access/
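To confirm whether Block Public Access is overriding the ACL, a quick boto3 check along these lines can help; the bucket name is a placeholder, and the account-level setting can be inspected similarly via the s3control client.

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "my-upload-bucket"  # placeholder

try:
    conf = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
    # If BlockPublicAcls / IgnorePublicAcls are True, a 'public-read' ACL set
    # at upload time is rejected or ignored, so objects stay private.
    print(conf)
except ClientError as err:
    if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
        print("No bucket-level Block Public Access configuration is set")
    else:
        raise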
I'm trying to upload files to AWS S3 storage with public access, but despite explicitly configuring public access in the code, the files are uploaded as private instead. I'm using the aws-amplify package in an Angular app. This is the code I'm using:

public onSubmit() {
  this.form.disable();
  const contentType: string = this.imgFile.extension === 'png' ? 'image/png' : 'image/jpeg';
  // Image upload to AWS S3 storage.
  Storage.put(this.imgFile.name, this.imgFile.content, {
    progressCallback: (progress: any) => {
      console.log(`Uploaded: ${progress.loaded}/${progress.total}`);
    },
    contentType: contentType,
    ACL: 'public-read',
    visibility: 'public',
    level: 'public',
  }).then((result: any) => {
    this.img = S3_URL + replaceAll(result.key, ' ', '+');
    // GraphQL API mutation service.
    this.bannerService.createBanner(
      this.img,
      this.form.value.channel,
      this.form.value.trailer,
      this.backgroundColor,
      this.buttonColor,
      this.textColor,
    );
    // Re-enable the form.
    this.form.enable({
      onlySelf: true,
      emitEvent: true,
    });
    // Clean all form fields.
    this.formElement.nativeElement.reset();
  }).catch((err) => {
    console.log('error =>', err);
  });
}

Any idea why S3 is ignoring the public access I'm indicating and storing the files as private? Thanks in advance!
aws-amplify S3 Storage uploads files but puts them as "private" despite explicit public access configuration
My issue was that the audio file being uploaded to S3 was specifying an ACL. I removed that from the S3 upload code and I no longer get the error (a sketch of an ACL-free upload is below). Also, per the docs, if you have "transcribe" in your S3 bucket name, the Transcribe service will have permission to access it. I made that change as well, but you still need to ensure you aren't using an ACL.
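The question's upload code is in Go, but the shape of the fix is the same in any SDK: simply omit the ACL argument and let the bucket/role permissions grant Transcribe access. A boto3 sketch, with the bucket and key as placeholders:

import boto3

s3 = boto3.client("s3")

# No ACL parameter here: Transcribe reads the object through IAM and
# bucket permissions rather than a per-object ACL.
with open("audio.flac", "rb") as audio:
    s3.put_object(
        Bucket="my-transcribe-input",  # placeholder
        Key="audio.flac",
        Body=audio,
    )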
I uploaded a .flac file to an Amazon S3 bucket, but when I try to transcribe the audio using the Amazon Transcribe Golang SDK I get the error below. I tried making the .flac file in the S3 bucket public but still get the same error, so I don't think it's a permission issue. Is there anything that prevents the Transcribe service from accessing the file in the S3 bucket that I'm missing? The API user that is uploading and transcribing has full access to the S3 and Transcribe services.

Example Go code:

jobInput := transcribe.StartTranscriptionJobInput{
    JobExecutionSettings: &transcribe.JobExecutionSettings{
        AllowDeferredExecution: aws.Bool(true),
        DataAccessRoleArn:      aws.String("my-arn"),
    },
    LanguageCode: aws.String("en-US"),
    Media: &transcribe.Media{
        MediaFileUri: aws.String("https://s3.us-east-1.amazonaws.com/{MyBucket}/{MyObjectKey}"),
    },
    Settings: &transcribe.Settings{
        MaxAlternatives:   aws.Int64(2),
        MaxSpeakerLabels:  aws.Int64(2),
        ShowAlternatives:  aws.Bool(true),
        ShowSpeakerLabels: aws.Bool(true),
    },
    TranscriptionJobName: aws.String("jobName"),
}

Amazon Transcribe response:

BadRequestException: The S3 URI that you provided can't be accessed. Make sure that you have read permission and try your request again.
Amazon Transcribe and Golang SDK BadRequestException
I see you have added the handler as handler: public/index.php in the serverless.yml file, but your file name is test.php. It looks like a typing mistake to me.
I'm using the Serverless Framework to deploy my PHP functions on AWS Lambda. I have tried a simple example, but I can see this error in CloudWatch: Handler `/var/task/public/test.hello` doesn't exist. This is my serverless file:

service: symfony-bref

provider:
  name: aws
  region: eu-central-1
  runtime: provided
  environment:
    APP_ENV: prod

plugins:
  - ./vendor/bref/bref

functions:
  api:
    handler: public/index.php
    description: ''
    timeout: 30 # in seconds (API Gateway has a timeout of 30 seconds)
    layers:
      - ${bref:layer.php-73-fpm}
    events:
      - http: 'ANY /'
      - http: 'ANY /{proxy+}'
  S3Handler:
    handler: public/test.hello
    layers:
      - ${bref:layer.php-73}
    events:
      - s3:
          bucket: ${ssm:/symfony-bref/AWS_S3_BUCKET_NAME:1}
          event: s3:ObjectCreated:*
          existing: true

And my function test.php is inside the public folder:

<?php

function hello($eventData) : array
{
    return ["msg" => "hello from PHP " . PHP_VERSION];
}

What can I do for the S3Handler function? The api function is working fine.
Serverless framework, handler doesn't exist