Response | Instruction | Prompt
---|---|---
One method to get a proper ParamMap object is to use CrossValidatorModel.avgMetrics: Array[Double] to find the argmax ParamMap:
implicit class BestParamMapCrossValidatorModel(cvModel: CrossValidatorModel) {
  def bestEstimatorParamMap: ParamMap = {
    cvModel.getEstimatorParamMaps
      .zip(cvModel.avgMetrics)
      .maxBy(_._2)
      ._1
  }
}
When run on the CrossValidatorModel trained in the Pipeline Example you cited, this gives:
scala> println(cvModel.bestEstimatorParamMap)
{
hashingTF_2b0b8ccaeeec-numFeatures: 100,
logreg_950a13184247-regParam: 0.1
}
|
I want to find the parameters of ParamGridBuilder that make the best model in CrossValidator in Spark 1.4.x. In the Pipeline Example in the Spark documentation, they add different parameters (numFeatures, regParam) by using ParamGridBuilder in the Pipeline. Then the following line of code builds the best model:
val cvModel = crossval.fit(training.toDF)
Now, I want to know what the parameters (numFeatures, regParam) from ParamGridBuilder are that produce the best model. I already used the following commands without success:
cvModel.bestModel.extractParamMap().toString()
cvModel.params.toList.mkString("(", ",", ")")
cvModel.estimatorParamMaps.toString()
cvModel.explainParams()
cvModel.getEstimatorParamMaps.mkString("(", ",", ")")
cvModel.toString()
Any help? Thanks in advance.
|
How to extract best parameters from a CrossValidatorModel
|
No, they are not the same. Clojure doesn't really have a need for |> because all function calls are enclosed in lists, like (+ 1 2): there's no magic you could do to make 1 + 2 work in isolation.[1]
-> is for reducing nesting and simplifying common patterns. For example:
(-> x (assoc :name "ted") (dissoc :size) (keys))
expands to
(keys (dissoc (assoc x :name "ted") :size))
The former is often easier to read, because conceptually you're performing a series of operations on x; the former code is "shaped" that way, while the latter needs some mental unraveling to work out.
[1] You can write a macro that sorta makes this work. The idea is to wrap your macro around the entire source tree that you want to transform, and let it look for |> symbols; it can then transform the source into the shape you want. Hiredman has made it possible to write code in a very Haskell-looking way with his functional package.
|
Is the -> operator in Clojure (and what is this operator called in Clojure-speak?) equivalent to the pipeline operator |> in F#? If so, why does it need such a complex macro definition, when (|>) is just defined as
let inline (|>) x f = f x
Or if not, does F#'s pipeline operator exist in Clojure, or how would you define such an operator in Clojure?
|
-> operator in Clojure
|
You are almost there, you just need to add the -i flag to make the pipe work:
-i, --interactive    Keep STDIN open even if not attached
docker exec -i container-name mysqldump [options] database > database.sql.xz
I replaced the pipe with a file redirection, but it will work the same with a pipe. Just make sure not to use the -t option, as this will break it.
Extra: to import the SQL dump back into mysql:
docker exec -i container-name mysql [options] database < database.sql.xz
This little script will detect whether I am running mysql in a pipe or not:
#!/bin/bash
if [ -t 0 ]; then
docker exec -it container-name mysql "$@"
else
docker exec -i container-name mysql "$@"
fi
|
What I am trying to implement is invoking mysqldump in the container and dumping the database into the container's own directory. At first I tried the command below:
$ docker exec container-name mysqldump [options] database | xz > database.sql.xz
That did not work, so I tried another one:
$ docker exec container-name bash -c 'mysqldump [options] database | xz > database.sql.xz'
This time it worked. But that's really lame. Then I tried using docker-py; the cmd option that worked looks like this:
cmd=['bash', '-c', 'mysqldump [options] database | xz > database.sql.xz']
The logger event is as below:
level="info" msg="-job log(exec_start: bash -c mysqldump [options] database | xz > database.sql.xz, fe58e681fec194cde23b9b31e698446b2f9d946fe0c0f2e39c66d6fe68185442, mysql:latest) = OK (0)"
My question: is there a more elegant way to achieve my goal?
|
pipeline in docker exec from command line and from python api
|
Have a look at the following integration: Project -> Settings -> Integrations -> Pipelines emails
|
The goal is to have everyone get a notification for every failed pipeline (at their discretion). Currently, any of us can run a pipeline on this project branch, and the creator of the pipeline gets an email; no one else does. I have tried setting the notification level to watch and custom (with failed pipelines checked) at project, group and global levels without success. The help page regarding notifications says the failed pipeline checkbox for custom notification levels notifies the author of the pipeline (which is the behavior I am experiencing). Is there any way to allow multiple people to get notified of a failed pipeline?
Using GitLab CE v10.0
Have Group (security::internal)
Group has Project (security::internal)
Project has scheduled pipeline (runs nightly)
Pipeline runs integration tests (purposely failing)
Schedule created by me (schedules have to have an owner)
When the automated pipeline runs and fails, I get an email (good)
No one else gets email (bad)
|
Notify all group members of failed pipelines in GitLab
|
Does the piped process continue even if the first process has ended, or is the issue that you have no way of knowing that the first process failed?
If it's the latter, you can look at the PIPESTATUS variable (which is actually a Bash array). That will give you the exit code of the first command:
parse_commands /da/cmd/file | process_commands
temp=("${PIPESTATUS[@]}")
if [ ${temp[0]} -ne 0 ]
then
echo 'parse_commands failed'
elif [ ${temp[1]} -ne 0 ]
then
echo 'parse_commands worked, but process_commands failed'
fi
Otherwise, you'll have to use co-processes.
|
I've grown fond of using a generator-like pattern between functions in my shell scripts. Something like this:
parse_commands /da/cmd/file | process_commands
However, the basic problem with this pattern is that if parse_commands encounters an error, the only way I have found to notify process_commands that it failed is by explicitly telling it (e.g. echo "FILE_NOT_FOUND"). This means that every potentially faulting operation in parse_commands would have to be fenced. Is there no way process_commands can detect that the left side exited with a non-zero exit code?
|
Inform right-hand side of pipeline of left-side failure?
|
It looks like the reason is an update to the Docker image (see the GitHub issue). The latest versions do not allow connecting to the DB without a password from anywhere, so you need to specify a username/password:
definitions:
  services:
    postgres:
      image: postgres:9.6-alpine
      environment:
        POSTGRES_DB: pipelines
        POSTGRES_USER: test_user
        POSTGRES_PASSWORD: test_user_password
Or, if you still don't want to use a password, you can just set the POSTGRES_HOST_AUTH_METHOD=trust environment variable:
definitions:
  services:
    postgres:
      image: postgres:9.6-alpine
      environment:
        POSTGRES_HOST_AUTH_METHOD: trust
|
I'm using a Bitbucket pipeline with PostgreSQL for CI/CD. According to this documentation, the PostgreSQL service has been described in bitbucket-pipelines.yml this way:
definitions:
  services:
    postgres:
      image: postgres:9.6-alpine
It worked just fine until now, but all my latest pipelines failed with the following error:
Error: Database is uninitialized and superuser password is not specified.
You must specify POSTGRES_PASSWORD for the superuser. Use
"-e POSTGRES_PASSWORD=password" to set it in "docker run".
You may also use POSTGRES_HOST_AUTH_METHOD=trust to allow all connections
without a password. This is *not* recommended. See PostgreSQL
documentation about "trust":
https://www.postgresql.org/docs/current/auth-trust.html
How can I fix it? There were no changes in the bitbucket-pipelines.yml file which could be the reason for such an error.
|
CI/CD pipeline with PostgreSQL failed with "Database is uninitialized and superuser password is not specified" error
|
The way I usually do it is with a FeatureUnion, using a FunctionTransformer to pull out the relevant columns.
Important notes:
You have to define your functions with def, since annoyingly you can't use lambda or partial in FunctionTransformer if you want to pickle your model.
You need to initialize FunctionTransformer with validate=False.
Something like this:
from sklearn.pipeline import make_union, make_pipeline
from sklearn.preprocessing import FunctionTransformer
def get_text_cols(df):
    return df[['name', 'fruit']]
def get_num_cols(df):
    return df[['height','age']]
vec = make_union(*[
    make_pipeline(FunctionTransformer(get_text_cols, validate=False), LabelEncoder()),
    make_pipeline(FunctionTransformer(get_num_cols, validate=False), MinMaxScaler())
])
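For what it's worth, a self-contained sketch of the same FeatureUnion idea on a toy DataFrame (the data here is invented for illustration, and OneHotEncoder - which accepts string columns in sklearn >= 0.20 - stands in for LabelEncoder, since LabelEncoder only takes a single 1-D array and would fail on two text columns inside a union):
import pandas as pd
from sklearn.pipeline import make_union, make_pipeline
from sklearn.preprocessing import FunctionTransformer, MinMaxScaler, OneHotEncoder

# Toy frame matching the column names used above (purely illustrative).
df = pd.DataFrame({
    'name': ['ann', 'bob', 'cid'],
    'fruit': ['apple', 'pear', 'apple'],
    'height': [1.6, 1.8, 1.7],
    'age': [30, 40, 50],
})

def get_text_cols(df):
    return df[['name', 'fruit']]

def get_num_cols(df):
    return df[['height', 'age']]

vec = make_union(
    make_pipeline(FunctionTransformer(get_text_cols, validate=False), OneHotEncoder()),
    make_pipeline(FunctionTransformer(get_num_cols, validate=False), MinMaxScaler()),
)

X = vec.fit_transform(df)
print(X.shape)  # 3 rows, one-hot text columns plus 2 scaled numeric columns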
|
I am pretty new to pipelines in sklearn and I am running into this problem: I have a dataset that has a mixture of text and numbers, i.e. certain columns have text only and the rest have integers (or floating point numbers). I was wondering if it is possible to build a pipeline where I can, for example, call LabelEncoder() on the text features and MinMaxScaler() on the numeric columns. The examples I have seen on the web mostly point towards using LabelEncoder() on the entire dataset and not on select columns. Is this possible? If so, any pointers would be greatly appreciated.
|
sklearn pipeline - how to apply different transformations on different columns
|
There is a lot of space given to questions like this in both AMD's and Intel's optimization guides. The validity of advice given in this area has a "half life" - different CPU generations behave differently. For example:
AMD Software Optimization Guide (Sep/2005), section 8.3, pg. 167: Avoid using the REP prefix when performing string operations, especially when copying blocks of memory.
AMD Software Optimization Guide (Apr/2011), section 9.3, pg. 148: Use the REP prefix judiciously when performing string operations.
The Intel Architecture Optimization Manual gives performance comparison figures for various block copy techniques (including rep stosd) in Table 7-2, "Relative Performance of Memory Copy Routines", pg. 7-37f., for different CPUs; again, what's fastest on one might not be fastest on others.
For many cases, recent x86 CPUs (which have the "string" SSE4.2 operations) can do string operations via the SIMD unit; see this investigation.
To follow up on all this (and/or keep yourself updated when things change again, inevitably), read Agner Fog's optimization guides/blogs.
|
I've been writing in x86 assembly lately (for fun) and was wondering whether or not rep-prefixed string instructions actually have a performance edge on modern processors or whether they're just implemented for backwards compatibility. I can understand why Intel would have originally implemented the rep instructions back when processors only ran one instruction at a time, but is there a benefit to using them now? With a loop that compiles to more instructions, there is more to fill up the pipeline and/or be issued out-of-order. Are modern processors built to optimize for these rep-prefixed instructions, or are rep instructions used so rarely in modern code that they're not important to the manufacturers?
|
Performance of x86 rep instructions on modern (pipelined/superscalar) processors
|
You either need to use a scripted pipeline and put the "load" instruction inside the node section (see this question), or, if you are already using a declarative pipeline (which seems to be the case), you can include it in the "environment" section:
environment {
REPO_PATH='/home/manish/Desktop'
APP_NAME='test'
MY_FUN = load 'testfun.groovy'
}
|
When I load another Groovy file in my Jenkinsfile, it shows me the following error:
"Required context class hudson.FilePath is missing
Perhaps you forgot to surround the code with a step that provides this, such as: node"
I made a Groovy file which contains a function and I want to call it in my declarative Jenkinsfile, but it shows an error. My Jenkinsfile:
def myfun = load 'testfun.groovy'
pipeline{
agent any
environment{
REPO_PATH='/home/manish/Desktop'
APP_NAME='test'
}
stages{
stage('calling function'){
steps{
script{
myfun('${REPO_PATH}','${APP_NAME}')
}
}
}
}
}
Result:
org.jenkinsci.plugins.workflow.steps.MissingContextVariableException: Required context class hudson.FilePath is missing
Perhaps you forgot to surround the code with a step that provides this, such as: node
Please suggest the right way to do it.
|
Required context class hudson.FilePath is missing Perhaps you forgot to surround the code with a step that provides this, such as: node
|
If you look at machine.config, web.config and applicationHost.config in IIS 7, you can see that the way static content is served does not change when you switch between classic and integrated pipeline. The only thing that changes is whether requests mapped to asp.net pass through a managed module or the native ISAPI filter module.The only thing that could affect performance is if you modify the default settings for authorization modules and any custom modules you've added to execute when handling requests for static content. And even here the overhead is probably negligible.Therefore a more appropriate benchmark would be IIS 6 vs IIS 7, and I suspect IIS 7 would be the clear winner.
|
With the integrated pipeline, all requests are passed through ASP.NET, including images and CSS, whereas in the classic pipeline only requests for ASPX pages are passed through ASP.NET by default. Could the integrated pipeline negatively affect thread usage? Suppose I request a 500 MB binary file from an IIS server:
With the integrated pipeline, an ASP.NET worker thread would be used for the binary download (right?).
With the classic pipeline, the request is served directly by IIS, so no ASP.NET thread is used.
To me, this favors the classic pipeline, as I would like as many threads as possible to serve ASPX pages. Am I completely off base here?
|
IIS7 Integrated vs Classic Pipeline - which uses more ASP.NET threads?
|
Call the shell from time:
/usr/bin/time -f "%es" bash -c "ls | wc"
Of course, this will include the shell start-up time as well; it shouldn't be too much, but if you're on a system that has a lightweight shell like dash (and it's sufficient to do what you need), then you could use that to minimize the start-up overhead:
/usr/bin/time -f "%es" dash -c "ls | wc"
Another option would be to just time the command you are actually interested in, which is the psql command. time will pass its standard input to the program being executed, so you can run it on just one component of the pipeline:
echo "SELECT * FROM sometable" | /usr/bin/time -f "%es" psql
|
I want to measure the running time of some SQL queries in PostgreSQL. Using the Bash built-in time, I could do the following:
$ time (echo "SELECT * FROM sometable" | psql)
I like GNU time, which provides more formats, but I don't know how to use it with a pipeline. For simplicity, I use ls | wc in the following examples:
$ /usr/bin/time -f "%es" (ls | wc)
-bash: syntax error near unexpected token `('
$ /usr/bin/time -f "%es" "ls | wc"
/usr/bin/time: cannot run ls | wc: No such file or directory
If I do not group the pipe in any way, it does not complain:
$ /usr/bin/time -f "%es" ls | wc
0.00s
But apparently, this only measures the first part of the pipe, as shown in the next example:
$ /usr/bin/time -f "%es" ls | sleep 20
0.00s
So the question is: what is the correct syntax for GNU time with a pipeline?
|
how to use GNU Time with pipeline
|
You can use the following run command to set a system path variable in your actions workflow.
Syntax: echo "{path}" >> $GITHUB_PATH
- run: |
    echo "$AA/BB/bin" >> $GITHUB_PATH
Additionally, if you have downloaded some binaries and are trying to set their path, GitHub uses a special directory called $GITHUB_WORKSPACE as your current directory. You may need to specify this variable in your path in that case.
- run: |
    echo "$GITHUB_WORKSPACE/BB/bin" >> $GITHUB_PATH
|
I was wondering how I can set the system path variables in the GitHub Actions workflow.
export "$PATH:$ANYTHING/SOMETHING:$AA/BB/bin"
|
How to set system path variable in github action workflow
|
There is mention of **fit_params in the documentation of the fit method of Pipeline. You must specify which step of the pipeline you want to apply the parameter to. You can achieve this by following the naming rules in the docs:
For this, it enables setting parameters of the various steps using their names and the parameter name separated by a '__', as in the example below.
So, all that being said, try changing the last line to:
clf.fit(X, Y, **{'ExtraTrees__sample_weight': weights})
Updated link: this is a good example of how to work with parameters in pipelines.
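For reference, a self-contained version of that corrected call, reusing the setup from the question below (the keyword form ExtraTrees__sample_weight=weights is equivalent to the **{...} unpacking):
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures

X = np.random.rand(200, 4)
Y = np.random.rand(200)
weights = [1] * 100 + [2] * 100

pipe = Pipeline([('poly2', PolynomialFeatures(degree=2)),
                 ('ExtraTrees', ExtraTreesRegressor(n_estimators=5, max_depth=3))])

# The step-name__parameter-name prefix routes sample_weight to the
# ExtraTrees step instead of to Pipeline.fit() itself.
pipe.fit(X, Y, **{'ExtraTrees__sample_weight': weights})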
|
I want to apply sample weights and at the same time use a pipeline from sklearn which should do a feature transformation, e.g. polynomial, and then apply a regressor, e.g. ExtraTrees. I am using the following packages in the two examples below:
from sklearn.ensemble import ExtraTreesRegressor
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
Everything works well as long as I separately transform the features and generate and train the model afterwards:
#Feature generation
X = np.random.rand(200,4)
Y = np.random.rand(200)
#Feature transformation
poly = PolynomialFeatures(degree=2)
poly.fit_transform(X)
#Model generation and fit
clf = ExtraTreesRegressor(n_estimators=5, max_depth = 3)
weights = [1]*100 + [2]*100
clf.fit(X,Y, weights)
But doing it in a pipeline does not work:
#Pipeline generation
pipe = Pipeline([('poly2', PolynomialFeatures(degree=2)), ('ExtraTrees', ExtraTreesRegressor(n_estimators=5, max_depth = 3))])
#Feature generation
X = np.random.rand(200,4)
Y = np.random.rand(200)
#Fitting model
clf = pipe
weights = [1]*100 + [2]*100
clf.fit(X,Y, weights)
I get the following error: TypeError: fit() takes at most 3 arguments (4 given)
In this simple example, it is no issue to modify the code, but when I want to run several different tests on my real data in my real code, being able to use pipelines and sample weight
|
sklearn pipeline - Applying sample weights after applying a polynomial feature transformation in a pipeline
|
The bitbucket-pipelines.yml file is just running bash/shell commands on Unix. The script runner looks at the return status code of each command to see if it succeeded (status = 0) or failed (status = non-zero). So you can use various techniques to control this status code:
Add " || true" to the end of your command:
./gradlew -DBASE_URL=qa2 clean BSchrome_win || true
When you add "|| true" to the end of a shell command, it means "ignore any errors, and always return a success code 0". More info: Bash ignoring error for a particular command, https://www.cyberciti.biz/faq/bash-get-exit-code-of-command/
Use the "gradlew --continue" flag:
./gradlew -DBASE_URL=qa2 clean BSchrome_win --continue
The "--continue" flag can be used to prevent a single test failure from stopping the whole task. So if one test or sub-step fails, Gradle will try to continue running the other tests until all are run. However, it may still return an error if an important step failed. More info: Ignore Gradle Build Failure and continue build script?
Move the 2 steps to the after-script section:
after-script:
- cd config/geb # You may need this, if the current working directory is reset. Check with 'pwd'
- cd build/reports
- zip -r testresult.zip BSchrome_winTest
If you move the 2 steps for zip creation to the after-script section, then they will always run, regardless of the success/fail status of the previous step.
|
I am running all my test cases and some of them fail sometimes; the pipeline detects this and fails the step and the build. This blocks the next step from being executed (zipping the report folder). I want to send that zip file as an email attachment. Here is my bitbucket-pipelines.yml file:
custom: # Pipelines that can only be triggered manually
QA2: # The name that is displayed in the list in the Bitbucket Cloud GUI
- step:
image: openjdk:8
caches:
- gradle
size: 2x # double resources available for this step to 8G
script:
- apt-get update
- apt-get install zip
- cd config/geb
- ./gradlew -DBASE_URL=qa2 clean BSchrome_win **# This step fails**
- cd build/reports
- zip -r testresult.zip BSchrome_winTest
after-script: # On test execution completion or build failure, send test report to e-mail lists
- pipe: atlassian/email-notify:0.3.11
variables:
<<: *email-notify-config
TO: '[email protected]'
SUBJECT: "Test result for QA2 environment"
BODY_PLAIN: |
Please find the attached test result report to the email.
ATTACHMENTS: config/geb/build/reports/testresult.zip
The steps
- cd build/reports
and
- zip -r testresult.zip BSchrome_winTest
do not get executed because
- ./gradlew -DBASE_URL=qa2 clean BSchrome_win
failed. I don't want Bitbucket to fail the step and stop the queued steps from executing.
|
How to prevent a step failing in Bitbucket Pipelines?
|
It is possible to break a pipeline with anything that would otherwise break an outside loop or halt script execution altogether (like throwing an exception). The solution is to wrap the pipeline in a loop that you can break if you need to stop the pipeline. For example, the code below will return the first item from the pipeline and then break the pipeline by breaking the outside do-while loop:
do {
Get-ChildItem|% { $_;break }
} while ($false)
This functionality can be wrapped into a function like this, where the last line accomplishes the same thing as above:
function Breakable-Pipeline([ScriptBlock]$ScriptBlock) {
do {
. $ScriptBlock
} while ($false)
}
Breakable-Pipeline { Get-ChildItem|% { $_;break } }
|
I have written a simple PowerShell filter that pushes the current object down the pipeline if its date is between the specified begin and end dates. The objects coming down the pipeline are always in ascending date order, so as soon as the date exceeds the specified end date I know my work is done and I would like to tell the pipeline that the upstream commands can abandon their work so that the pipeline can finish. I am reading some very large log files and I will frequently want to examine just a portion of the log. I am pretty sure this is not possible, but I wanted to ask to be sure.
|
Is it possible to terminate or stop a PowerShell pipeline from within a filter
|
I started from scratch, and the following spider should be run with
scrapy crawl amazon -t csv -o Amazon.csv --loglevel=INFO
so that opening the CSV file with a spreadsheet shows the desired result. Hope this helps :-)
import scrapy
class AmazonItem(scrapy.Item):
rating = scrapy.Field()
date = scrapy.Field()
review = scrapy.Field()
link = scrapy.Field()
class AmazonSpider(scrapy.Spider):
name = "amazon"
allowed_domains = ['amazon.co.uk']
start_urls = ['http://www.amazon.co.uk/product-reviews/B0042EU3A2/' ]
def parse(self, response):
for sel in response.xpath('//table[@id="productReviews"]//tr/td/div'):
item = AmazonItem()
item['rating'] = sel.xpath('./div/span/span/span/text()').extract()
item['date'] = sel.xpath('./div/span/nobr/text()').extract()
item['review'] = sel.xpath('./div[@class="reviewText"]/text()').extract()
item['link'] = sel.xpath('.//a[contains(.,"Permalink")]/@href').extract()
yield item
xpath_Next_Page = './/table[@id="productReviews"]/following::*//span[@class="paging"]/a[contains(.,"Next")]/@href'
if response.xpath(xpath_Next_Page):
url_Next_Page = response.xpath(xpath_Next_Page).extract()[0]
request = scrapy.Request(url_Next_Page, callback=self.parse)
yield request
|
I made the improvement according to the suggestion from alexce below. What I need is like the picture below. However, each row/line should be one review: with date, rating, review text and link. I need to let the item processor process each review of every page. Currently TakeFirst() only takes the first review of the page, so for 10 pages I only have 10 lines/rows, as in the picture below. Spider code is below:
import scrapy
from amazon.items import AmazonItem
class AmazonSpider(scrapy.Spider):
name = "amazon"
allowed_domains = ['amazon.co.uk']
start_urls = [
'http://www.amazon.co.uk/product-reviews/B0042EU3A2/'.format(page) for page in xrange(1,114)
]
def parse(self, response):
for sel in response.xpath('//*[@id="productReviews"]//tr/td[1]'):
item = AmazonItem()
item['rating'] = sel.xpath('div/div[2]/span[1]/span/@title').extract()
item['date'] = sel.xpath('div/div[2]/span[2]/nobr/text()').extract()
item['review'] = sel.xpath('div/div[6]/text()').extract()
item['link'] = sel.xpath('div/div[7]/div[2]/div/div[1]/span[3]/a/@href').extract()
yield item
|
Scrapy pipeline to export csv file in the right format
|
A few ways to do this:
$name = 'c:\temp\aaa.bbb.ccc'
# way 1
$name.Split('.') | Select-Object -Last 1
# way 2
[System.IO.Path]::GetExtension($name)
# or if the dot is not needed
[System.IO.Path]::GetExtension($name).TrimStart('.')
|
This might be weird, but stay with me. I want to get only the last element of a piped result assigned to a variable. I know how I would do this in "regular" code of course, but this must be a one-liner. More specifically, I'm interested in getting the file extension when getting the result from an FTP request ListDirectoryDetails. Since this is done within a string expansion, I can't figure out the proper code. Currently I'm getting the last 3 chars, but that is real nasty:
New-Object PSObject -Property @{
    LastWriteTime = [DateTime]::ParseExact($tempDate, "MMM dd HH:mm", [System.Globalization.CultureInfo]::InvariantCulture)
    Type = $(if([int]$tempSize -eq 0) { "Directory" } else { $tempName.SubString($tempName.length-3,3) })
    Name = $tempName
    Size = [int]$tempSize
}
My idea was doing something similar to
$tempName.Split(".") | ? {$_ -eq $input[$input.Length-1]}
that is, iterate over all elements, but only take the one where the element I'm looking at is the last one of the input array. What am I missing?
|
Get last element of pipeline in powershell
|
I asked andymccurdy, the author of redis-py, on GitHub, and the answer is as below:
If you're using redis-py<=2.9.1, socket_timeout is both the timeout
for socket connection and the timeout for reading/writing to the
socket. I pushed a change recently (465e74d) that introduces a new
option, socket_connect_timeout. This allows you to specify different
timeout values for socket.connect() differently from
socket.send/socket.recv(). This change will be included in 2.10 which
is set to be released later this week.
The redis-py version here is 2.6.7, so it's both the timeout for socket connection and the timeout for reading/writing to the socket.
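As an illustration (not part of the quoted answer), on redis-py 2.10+ the two timeouts can be set separately; a minimal sketch with placeholder connection values:
import redis

# socket_connect_timeout bounds only socket.connect(); socket_timeout bounds
# blocking reads/writes, e.g. waiting for a large pipeline response.
client = redis.StrictRedis(
    host='127.0.0.1',
    port=6379,
    db=0,
    socket_connect_timeout=2,  # fail fast if the server is unreachable
    socket_timeout=10,         # allow slow replies such as big SMEMBERS results
)

pipe = client.pipeline(transaction=False)
pipe.smembers('some-large-set')
print(pipe.execute())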
|
In the code below, is the pipeline timeout 2 seconds?
client = redis.StrictRedis(host=host, port=port, db=0, socket_timeout=2)
pipe = client.pipeline(transaction=False)
for name in namelist:
key = "%s-%s-%s-%s" % (key_sub1, key_sub2, name, key_sub3)
pipe.smembers(key)
pipe.execute()
In Redis, there are a lot of members in the set "key". With the code above it always returns the error below:
error Error while reading from socket: ('timed out',)
If I modify the socket_timeout value to 10, it returns OK. Doesn't the param "socket_timeout" mean the connection timeout? It looks like the response timeout. The redis-py version is 2.6.7.
|
How to set the redis timeout waiting for the response with pipeline in redis-py?
|
Try this:
library(stringr)
df$D <- df$A %>%
{ gsub("\\.","", .) } %>%
str_trim() %>%
{ as.numeric(gsub(",", ".", .)) }
With a pipe, your data is passed as the first argument to the next function, so if you want to use it somewhere else you need to wrap the next line in {} and use . as a data "marker".
|
To clean some messy data I would like to start using pipes (%>%), but I fail to get the R code working if gsub() is not at the beginning of the pipe but occurs later (note: this question is not concerned with proper import, but with data cleaning). Simple example:
df <- cbind.data.frame(A= c("2.187,78 ", "5.491,28 ", "7.000,32 "), B = c("A","B","C"))
Column A contains characters (in this case numbers, but they could also be strings) and needs to be cleaned. The steps are
df$D <- gsub("\\.","",df$A)
df$D <- str_trim(df$D)
df$D <- as.numeric(gsub(",", ".",df$D))
One easily could pipe this:
df$D <- gsub("\\.","",df$A) %>%
str_trim() %>%
as.numeric(gsub(",", ".")) %>%
The problem is the second gsub, because it asks for the input, which is actually the result of the previous line. Please, could anyone explain how to use functions like gsub() further down the pipeline? Thanks a lot!
System: R 3.2.3, Windows
|
R: combine several gsub() function in a pipe
|
Yes and no[1]. If you fetch a pdf it will be stored in memory, but if the pdfs are not big enough to fill up your available memory, that is OK.
You could save the pdf in the spider callback:
def parse_listing(self, response):
# ... extract pdf urls
for url in pdf_urls:
yield Request(url, callback=self.save_pdf)
def save_pdf(self, response):
path = self.get_path(response.url)
with open(path, "wb") as f:
f.write(response.body)
If you choose to do it in a pipeline:
# in the spider
def parse_pdf(self, response):
i = MyItem()
i['body'] = response.body
i['url'] = response.url
# you can add more metadata to the item
return i
# in your pipeline
def process_item(self, item, spider):
path = self.get_path(item['url'])
with open(path, "wb") as f:
f.write(item['body'])
# remove body and add path as reference
del item['body']
item['path'] = path
# let item be processed by other pipelines. ie. db store
return item
[1] Another approach could be to store only the pdfs' URLs and use another process to fetch the documents without buffering them into memory (e.g. wget).
|
I need to save a file (.pdf) but I'm unsure how to do it. I need to save .pdfs and store them in such a way that they are organized in directories much like they are stored on the site I'm scraping them from. From what I can gather I need to make a pipeline, but from what I understand pipelines save "Items", and "items" are just basic data like strings/numbers. Is saving files a proper use of pipelines, or should I save the file in the spider instead?
|
Should I create pipeline to save files with scrapy?
|
When using Where-Object, the condition doesn't have to strictly be related to the objects that are passing through the pipeline. So consider a case where sometimes we want to filter for odd objects, but only if some other condition is met:
$filter = $true
1..10 | ? { (-not $filter) -or ($_ % 2) }
$filter = $false
1..10 | ? { (-not $filter) -or ($_ % 2) }
Is this kind of what you are looking for?
|
I'm trying to use if inside a pipeline. I know that there is the where (alias ?) filter, but what if I want to activate a filter only if a certain condition is satisfied? I mean, for example:
get-something | ? {$_.someone -eq 'somespecific'} | format-table
How to use if inside the pipeline to switch the filter on/off? Is it possible? Does it make sense? Thanks
EDITED to clarify
Without the pipeline it would look like this:
if($filter) {
get-something | ? {$_.someone -eq 'somespecific'}
}
else {
get-something
}
EDIT after riknik's answer
Silly example showing what I was looking for. You have a denormalized table of data stored in a variable $data and you want to perform a kind of "drill-down" data filtering:
function datafilter {
param([switch]$ancestor,
[switch]$parent,
[switch]$child,
[string]$myancestor,
[string]$myparent,
[string]$mychild,
[array]$data=[])
$data |
? { (!$ancestor) -or ($_.ancestor -match $myancestor) } |
? { (!$parent) -or ($_.parent -match $myparent) } |
? { (!$child) -or ($_.child -match $mychild) } |
}
For example, if I want to filter by a specific parent only:
datafilter -parent -myparent 'myparent' -data $mydata
That's a very elegant, performant and simple way to exploit ?. Try to do the same using if and you will understand what I mean.
|
how to use "if" statements inside pipeline
|
Yes, it looks like this is the standard way of achieving this. For example, in the source for sklearn.preprocessing we have:
class FunctionTransformer(BaseEstimator, TransformerMixin)
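A minimal sketch of a custom transformer built this way (the scaling logic is just a placeholder; inheriting from BaseEstimator is what supplies get_params/set_params, which is what RandomizedSearchCV needs):
from sklearn.base import BaseEstimator, TransformerMixin

class MyPipelineTransformer(BaseEstimator, TransformerMixin):
    """Placeholder transformer: scales the input by a constant factor."""

    def __init__(self, factor=1.0):
        # Keep constructor arguments as attributes with the same names so that
        # BaseEstimator's get_params()/set_params() can discover them.
        self.factor = factor

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        return X * self.factor

print(MyPipelineTransformer(factor=2.0).get_params())  # {'factor': 2.0}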
|
I have a pipeline in scikit-learn that uses a custom transformer I define like below:
class MyPipelineTransformer(TransformerMixin):
which defines the functions __init__, fit() and transform(). However, when I use the pipeline inside RandomizedSearchCV, I get the following error:
'MyPipelineTransformer' object has no attribute 'get_params'
I've read online (e.g. the links below)
(Python - sklearn) How to pass parameters to the customize ModelTransformer class by gridsearchcv
http://scikit-learn.org/stable/auto_examples/hetero_feature_union.html
that I could get 'get_params' by inheriting from BaseEstimator, instead of my current code inheriting just from TransformerMixin. But my transformer is not an estimator. Is there any downside to having a non-estimator inherit from BaseEstimator? Or is that the recommended way to get get_params for any transformer (estimator or not) in a pipeline?
|
Sklearn Pipeline - How to inherit get_params in custom Transformer (not Estimator)
|
http://www.bit-tech.net/hardware/cpus/2008/11/03/intel-core-i7-nehalem-architecture-dive/5Penryn's pipeline was a very nippy 14 stages, while in comparison
Nehalem extends this quite considerably to 20-to-24 stages.
|
This question already has an answer here: How should I approach to find number of pipeline stages in my Laptop's CPU [closed] (1 answer). Closed 3 years ago.
How many instructions can it handle at a time?
|
How many pipeline stages does the Intel Core i7 have? [duplicate]
|
Those two are different protocols with different issues. In the case of Go-Back-N, you are correct: the window size can be up to 255. (2^8 - 1 is the last sequence number of packets to send starting from 0, and it's also the maximum window size possible for the Go-Back-N protocol.) However, the Selective Repeat protocol limits the window size to half the sequence number space, since otherwise the receiver cannot distinguish a retransmitted packet that carries the same sequence number as an already ack'ed packet whose ACK was lost and never reached the sender in the previous window. Hence, the window size must be at most half the range of sequence numbers so that consecutive windows cannot contain duplicate sequence numbers. Go-Back-N doesn't have this issue, since the sender pushes n packets up to the window size (at most n-1) and never slides the window until it gets cumulative ACKs up to n. The two protocols therefore have different maximum window sizes. Note: for Go-Back-N, the maximum window size is the maximum number of unique sequence numbers - 1. If the window were equal to the number of unique sequence numbers and all the acknowledgements were lost, the receiver would accept all the retransmitted messages as a separate set of messages and relay them an additional time to its application. To avoid this inconsistency, maximum window size = maximum number of unique sequence numbers - 1. This answer has been updated according to the fact provided in the comment by @noamgot.
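Written out for an m-bit sequence number (m = 8 here), the two bounds are:
\begin{align*}
\text{Go-Back-N:} \quad & W_{\max} = 2^{m} - 1 = 2^{8} - 1 = 255 \\
\text{Selective Repeat:} \quad & W_{\max} = 2^{m-1} = 2^{8}/2 = 128
\end{align*}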
|
The question is: we have a transport protocol that uses pipelining and an 8-bit-long sequence number (0 to 255). What is the maximum window size the sender can use? (How many packets can the sender send out on the net before it must wait for an ACK?)
For Go-Back-N the maximum window size is: w = 2^m - 1 = 255.
For Selective Repeat the maximum window size is: w = (2^m)/2 = 128.
I do not know which is correct and which formula I shall use. Thanks for the help.
|
The realationship between window size and sequence number
|
Your script has DOS line endings.
If you convert the line endings to Unix line endings, it runs fine:
$ tr -d '\r' < raw.php\?i\=VURksJnn > script
$ cat script | bash
Test script
You're not root
End test
$
|
I want to make a Bash script which has to use Wget and run its output with Bash, like this:
wget -q -O - http://pastebin.com/raw.php?i=VURksJnn | bash
The pastebin file is a test script, but this command shows me "Unknown command" (maybe due to new lines) and "Unexpected end of file", and I don't know why. Am I missing something?
|
Wget file and send it to Bash
|
This will do what you are looking for:
tr '[:upper:][:lower:]' '[:lower:][:upper:]'
|
So I was searching around, and using the command tr you can convert from lower case to upper case and vice versa. But is there a way to do both at once? So:
$ tr '[:upper:]' '[:lower:]' or $ tr A-Z a-z
will turn "Hello World ABC" into "hello world abc", but what I want is "hELLO wORLD abc".
|
Unix tr command to convert lower case to upper AND upper to lower case
|
We can use Compose from the functional package to create our own binary operator that does something similar to what you want.
# Define our helper functions
square <- function(x){x^2}
add5 <- function(x){x + 5}
# functional contains Compose
library(functional)
# Define our binary operator
"%|>%" <- Compose
# Create our complexFunction by 'piping' our functions
complexFunction <- square %|>% add5 %|>% as.character
complexFunction(1:5)
#[1] "6" "9" "14" "21" "30"
# previously had this until flodel pointed out
# that the above was sufficient
#"%|>%" <- function(fun1, fun2){ Compose(fun1, fun2) }
I guess we could technically do this without requiring the functional package - but it feels so right using Compose for this task.
"%|>%" <- function(fun1, fun2){
function(x){fun2(fun1(x))}
}
complexFunction <- square %|>% add5 %|>% as.character
complexFunction(1:5)
#[1] "6" "9" "14" "21" "30"
|
Is there a way to write pipelined functions in R where the result of one function passes immediately into the next? I'm coming from F# and really appreciated this ability, but have not found how to do it in R. It should be simple but I can't find how. In F# it would look something like this:
let complexFunction x =
x |> square
|> add 5
|> toString
In this case the input would be squared, then have 5 added to it and then be converted to a string. I want to be able to do something similar in R but don't know how. I've searched for how to do something like this but have not come across anything. I want this for importing data, because I typically have to import it and then filter. Right now I do this in multiple steps and would really like to be able to do something the way you would in F# with pipelines.
|
R Pipelining functions
|
Yes, you can group updates together. For all non-major updates this can look like the following (taken from the renovate docs):
{
"packageRules": [
{
"matchPackagePatterns": [
"*"
],
"matchUpdateTypes": [
"minor",
"patch"
],
"groupName": "all non-major dependencies",
"groupSlug": "all-minor-patch"
}
]
}
|
Renovate updates the packages as soon as there is a new version, but Renovate also creates a separate PR/branch for each update. So if new versions are released for 5 of my packages, Renovate will create 5 branches. This leads to 5 pipelines; 1 PR is merged and the other 4 will rebase and run the pipeline again. So 15 PR pipelines will run, plus the pipeline for the main branch on each merge: 19 pipelines all together. Is it possible to combine, let's say, all minor and patch updates into one branch and PR to avoid the huge amount of PRs? The only thing I found was prConcurrentLimit, which avoids the rebase and rerun of the PR pipelines on each merge, but this will still trigger 10 pipelines. If I can combine everything, there is just 1 PR pipeline and 1 main-branch pipeline, so 2 pipelines in total. That would be awesome.
|
Renovate: Combine all updates to one branch/PR
|
There are at least the following three:
ImagePlay
Adaptive Vision Studio 4.3 Lite
ImprovCV
I personally like ImagePlay the most.
|
I am learning OpenCV (using the Python interface). I'm not really sure what I'm doing, so I keep adding and removing functions (blur, threshold, contours, edge detection) and modifying parameters. What would be very helpful is a UI that allows me to create a pipeline and add/remove functions, and then modify the parameters on the fly to see the effect. Does that exist? I have used Blender in the past and it has a node editor: you can connect the output of one function to the next, and you can either enter or click and drag to change parameters. Unfortunately, the nodes are somewhat limited in Blender, but it would seem to me that having a similar capability using the Python interface for OpenCV would be possible. I just wanted to know if it already exists and where I can get it if it does.
|
OpenCV Pipeline Editor
|
Yes, I must agree that there is a lack of examples for this, but I managed to create a stream on which I sent several insert commands in batch.
You should install the module for the redis stream:
npm install redis-stream
And this is how you use the stream:
var redis = require('redis-stream'),
client = new redis(6379, '127.0.0.1');
// Open stream
var stream = client.stream();
// Example of setting 10000 records
for(var record = 0; record < 10000; record++) {
// Command is an array of arguments:
var command = ['set', 'key' + record, 'value'];
// Send command to stream, but parse it before
stream.redis.write( redis.parse(command) );
}
// Create event when stream is closed
stream.on('close', function () {
console.log('Completed!');
// Here you can create stream for reading results or similar
});
// Close the stream after batch insert
stream.end();
Also, you can create as many streams as you want and open/close them as you want at any time.
There are several examples of using redis-stream in node.js in the redis-stream node module.
|
I have lots of data to insert (SET / INCR) into a Redis DB, so I'm looking for pipelining / mass insertion through node.js. I couldn't find any good example/API for doing so in node.js, so any help would be great!
|
How to pipeline in node.js to redis?
|
You just need to use command substitution:
date ... $(last $USER | ... | awk '...') ...
Bash will evaluate the command/pipeline inside the $(...) and place the result there.
|
So I need to convert a date to a different format. With a bash pipeline, I'm taking the date from the last console login and pulling the relevant bits out with awk, like so:
last $USER | grep console | head -1 | awk '{print $4, $5}'
Which outputs: Aug 08 ($4=Aug, $5=08, in this case.)
Now, I want to take 'Aug 08' and put it into a date command to change the format to a numerical date, which would look something like this:
date -j -f %b\ %d Aug\ 08 +%m-%d
Outputs: 08-08
The question I have is, how do I add that to my pipeline and use the awk variables $4 and $5 where 'Aug 08' is in that date command?
|
How do I use output from awk in another command?
|
From Wikipedia, SIGPIPE is the signal sent to a process when it attempts to write to a pipe without a process connected to the other end.
When you first create p1 using stdout=PIPE, there is one process connected to the pipe, which is your Python process, and you can read the output using p1.stdout.
When you create p2 using stdin=p1.stdout, there are now two processes connected to the pipe p1.stdout.
Generally when you are running processes in a pipeline you want all processes to end when any of the processes ends. For this to happen automatically you need to close p1.stdout, so that p2's stdin is the only remaining read end of that pipe; this way, if p2 exits and p1 writes additional data to stdout, p1 will receive a SIGPIPE since there is no longer any reader attached to that pipe.
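For reference, the documented pattern again, with the reasoning as inline comments (a sketch; dmesg/grep are just the example commands from the docs):
from subprocess import Popen, PIPE

p1 = Popen(["dmesg"], stdout=PIPE)                         # our process holds a read end of p1's pipe
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)  # now p2 holds one too

# Drop our handle so p2 is the only reader left; if p2 exits early,
# p1's next write hits a pipe with no readers and p1 gets SIGPIPE.
p1.stdout.close()

output = p2.communicate()[0]
print(output)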
|
Here is what I can read in the Python subprocess module documentation:
Replacing shell pipeline
output=`dmesg | grep hda`
==>
p1 = Popen(["dmesg"], stdout=PIPE)
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close() # Allow p1 to receive a SIGPIPE if p2 exits.
output = p2.communicate()[0]
The p1.stdout.close() call after starting the p2 is important in order for p1
to receive a SIGPIPE if p2 exits before p1.
I don't really understand why we have to close p1.stdout after having created p2.
When exactly is p1.stdout.close() executed?
What happens when p2 never ends?
What happens when nor p1 or p2 end?
|
closing stdout of piped python subprocess
|
In your example, $str exists inside a subshell, and by default it disappears once the line has finished. A child process cannot modify its parent.
Aside from changing the shell option lastpipe, you can also change the code to avoid using a pipe. In this case, you could use:
read str < <(your command)
# or
str=$(your command)Both of these create subshells too, but$stris assigned to in the parent process.
|
echo hello | read str
echo $str
This read is executed after the pipeline, which means that the output of the echo gets read into str - but because it is after a pipe, the contents of str are now in a subshell that cannot be read by the parent shell. What happens to the contents of str? Does the pipe create a subshell, and then once the content is read into str, does the parent process kill the child process and str is erased, or do the contents of str live on somewhere outside the shell? How do we see what is in the subshells? Can we access subshells from the parent shell?
|
What happens when reading into a variable in a pipeline?
|
You need to wrap the main body of the script with process{}; this will then allow you to process each item on the pipeline. As process will be called for each item, you can even do away with the foreach loop. So your script will read as follows:
param(
[parameter(ValueFromPipeline=$true)]
[string]$pipe
)
process
{
Write-Host "Read in " $pipe
}
You can read about input processing here: Function Input Processing Methods
|
I have a script that I'm trying to add pipeline functionality to. I'm seeing strange behavior, though, where the script seems to only be run against the final object in the pipeline. For example:
param(
[parameter(ValueFromPipeline=$true)]
[string]$pipe
)
foreach ($n in $pipe) {
Write-Host "Read in " $n
}
Dead simple, no? I then run 1..10 | .\test.ps1 and it only outputs the one line Read in 10. Adding to the complexity, the actual script I want to use this in has more parameters:
[CmdletBinding(DefaultParameterSetName="Alias")]
param (
[parameter(Position=0,ParameterSetName="Alias")]
[string]$Alias,
[parameter(ParameterSetName="File")]
[ValidateNotNullOrEmpty()]
[string]$File
<and so on>
)
|
Script only seems to process the last object from the pipeline
|
Sorry, found it just after I posted this. You have to add:
dispatcher.connect(self.spider_opened, signals.spider_opened)
dispatcher.connect(self.spider_closed, signals.spider_closed)
in __init__, otherwise it never receives the signal to call it.
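Putting that together with the pipeline from the question below, a sketch of what the class might look like (this assumes an older Scrapy where scrapy.log and scrapy.xlib.pydispatch are still available; in newer Scrapy you would connect the signals in a from_crawler classmethod via crawler.signals.connect instead):
from scrapy import log, signals
from scrapy.xlib.pydispatch import dispatcher

class MyPipeline(object):
    def __init__(self):
        log.msg("Initializing Pipeline")
        self.conn = None
        self.cur = None
        # Register the handlers so Scrapy actually calls them:
        dispatcher.connect(self.spider_opened, signals.spider_opened)
        dispatcher.connect(self.spider_closed, signals.spider_closed)

    def spider_opened(self, spider):
        log.msg("Pipeline.spider_opened called", level=log.DEBUG)

    def spider_closed(self, spider):
        log.msg("Pipeline.spider_closed called", level=log.DEBUG)

    def process_item(self, item, spider):
        log.msg("Processing item " + item['title'], level=log.DEBUG)
        return item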
|
I am having some trouble with a Scrapy pipeline. My information is being scraped from sites OK and the process_item method is being called correctly. However, the spider_opened and spider_closed methods are not being called.
class MyPipeline(object):
def __init__(self):
log.msg("Initializing Pipeline")
self.conn = None
self.cur = None
def spider_opened(self, spider):
log.msg("Pipeline.spider_opened called", level=log.DEBUG)
def spider_closed(self, spider):
log.msg("Pipeline.spider_closed called", level=log.DEBUG)
def process_item(self, item, spider):
log.msg("Processing item " + item['title'], level=log.DEBUG)
Both the __init__ and process_item logging messages are displayed in the log, but the spider_opened and spider_closed logging messages are not. I need to use the spider_opened and spider_closed methods as I want to use them to open and close a connection to a database, but nothing is showing up in the log for them. If anyone has any suggestions, that would be very useful.
|
Scrapy pipeline spider_opened and spider_closed not being called
|
In this link: https://docs.gitlab.com/ee/ci/yaml/#ruleschanges they write that rules: changes should work exactly like only/except. If you read about only/except, there are a few strange things with it: https://docs.gitlab.com/ee/ci/yaml/#using-onlychanges-without-pipelines-for-merge-requests
When pushing a new branch or a new tag to GitLab, the policy always evaluates to true.
To get around this and only run your job on the development branch, you should be able to combine an if with changes:
deploy_dev_client:
stage: client
tags:
- my tags
script:
- '& cd WebClient'
- 'npm rebuild node-sass'
- 'npm install @angular/[email protected]'
- '& npm run build-release --max_old_space_size=$NODE_MEMORY_SIZE'
rules:
- if: '$CI_COMMIT_REF_NAME == "development"'
changes:
- WebClient/**/*
when: always
(I haven't tested this code, so let me know if it is all wrong!)
|
I'm trying to build a job that is conditionally executed depending on whether files or subdirectories in WebClient are modified in the develop branch, using rules. Only if changes are found in the develop branch should a pipeline be built. Currently what I have in my .gitlab-ci.yml is:
deploy_dev_client:
stage: client
tags:
- my tags
script:
- '& cd WebClient'
- 'npm rebuild node-sass'
- 'npm install @angular/[email protected]'
- '& npm run build-release --max_old_space_size=$NODE_MEMORY_SIZE'
rules:
- changes:
- WebClient/**/*
when: always
- when: never
However, after testing, I realized that the pipeline is executed whenever I push something from my local repo to GitLab, even on other branches. I have tried using only: - develop, however it results in a "yaml invalid" error, maybe due to not being able to use only if rules has already been used. Is there any way I can still use rules to target only the develop branch?
|
How to use rule in gitlab-ci
|
OneHotEncoder doesn't support string features, and with [(d, OneHotEncoder()) for d in dummies] you are applying it to all dummies columns. Use LabelBinarizer instead:
mapper = DataFrameMapper(
[(d, LabelBinarizer()) for d in dummies]
)
An alternative would be to use the LabelEncoder with a second OneHotEncoder step:
mapper = DataFrameMapper(
[(d, LabelEncoder()) for d in dummies]
)
lm = PMMLPipeline([("mapper", mapper),
("onehot", OneHotEncoder()),
("regressor", LinearRegression())])
|
I am trying to one-hot encode the categorical variables of my Pandas dataframe, which includes both categorical and continuous variables. I realise this can be done easily with the pandas .get_dummies() function, but I need to use a pipeline so I can generate a PMML file later on. This is the code to create a mapper. The categorical variables I would like to encode are stored in a list called 'dummies'.
from sklearn_pandas import DataFrameMapper
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import LabelEncoder
mapper = DataFrameMapper(
[(d, LabelEncoder()) for d in dummies] +
[(d, OneHotEncoder()) for d in dummies]
)
And this is the code to create a pipeline, including the mapper and linear regression:
from sklearn2pmml import PMMLPipeline
from sklearn.linear_model import LinearRegression
lm = PMMLPipeline([("mapper", mapper),
("regressor", LinearRegression())])
When I now try to fit (with 'features' being a dataframe and 'targets' a series), it gives the error 'could not convert string to float':
lm.fit(features, targets)
|
How to do Onehotencoding in Sklearn Pipeline
|
I can't think of any good reason to define Array.get and List.nth this way. Given that pipelining is very common in F#, they should have been defined so that the source argument came last.
In the case of List.nth, it doesn't change much, because you can use Seq.nth and the time complexity is still O(n) where n is the length of the list:
[1..100] |> Seq.nth 10
It's not a good idea to use Seq.nth on arrays because you lose random access. To keep the O(1) running time of Array.get, you can define:
[<RequireQualifiedAccess>]
module Array =
/// Get n-th element of an array in O(1) running time
let inline nth index source = Array.get source indexIn general, different argument order can be alleviated by usingflipfunction:let inline flip f x y = f y xYou can use it directly on the functions above:[1..100] |> flip List.nth 10
[|1..100|] |> flip Array.get 10
|
Is there a good reason for a different argument order in the functions getting the N-th element of an Array, List or Seq?
Array.get source index
List .nth source index
Seq  .nth index source
I would like to use the pipe operator, and it seems possible only with Seq:
s |> Seq.nth n
Is there a way to have the same notation with Array or List?
|
Different argument order for getting N-th element of Array, List or Seq
|
Let's assume you want to use PCA and TruncatedSVD as your dimensionality reduction step.
pca = decomposition.PCA()
svd = decomposition.TruncatedSVD()
svm = SVC()
n_components = [20, 40, 64]
You can do this:
pipe = Pipeline(steps=[('reduction', pca), ('svm', svm)])
# Change params_grid -> Instead of dict, make it a list of dict
# In the first element, pass parameters related to pca, and in second related to svd
params_grid = [{
'svm__C': [1, 10, 100, 1000],
'svm__kernel': ['linear', 'rbf'],
'svm__gamma': [0.001, 0.0001],
'reduction': [pca],
'reduction__n_components': n_components,
},
{
'svm__C': [1, 10, 100, 1000],
'svm__kernel': ['linear', 'rbf'],
'svm__gamma': [0.001, 0.0001],
'reduction': [svd],
'reduction__n_components': n_components,
'reduction__algorithm':['randomized']
and now just pass the pipeline object to GridSearchCV:
grd = GridSearchCV(pipe, param_grid = params_grid)
Calling grd.fit() will search the parameters over both elements of the params_grid list, using all values from one at a time.
Please look at my other answer for more details: "Parallel" pipeline to get best model using gridsearch
|
I want to build a Pipeline in sklearn and test different models using GridSearchCV. Just an example (please do not pay attention to which particular models are chosen):
reg = LogisticRegression()
proj1 = PCA(n_components=2)
proj2 = MDS()
proj3 = TSNE()
pipe = [('proj', proj1), ('reg' , reg)]
pipe = Pipeline(pipe)
param_grid = {
'reg__c': [0.01, 0.1, 1],
}
clf = GridSearchCV(pipe, param_grid = param_grid)
Here, if I want to try different models for dimensionality reduction, I need to code different pipelines and compare them manually. Is there an easy way to do it? One solution I came up with is to define my own class derived from the base estimator:
class Projection(BaseEstimator):
def __init__(self, est_name):
if est_name == "MDS":
self.model = MDS()
...
...
def fit_transform(self, X):
return self.model.fit_transform(X)
I think it will work; I just create a Projection object and pass it to the Pipeline, using the names of the estimators as parameters for it. But to me this way is a bit chaotic and not scalable: it makes me define a new class each time I want to compare different models. To continue with this solution, one could implement a class that does the same job with an arbitrary set of models, but that seems overcomplicated to me. What is the most natural and pythonic way to compare different models?
|
Alternate different models in Pipeline for GridSearchCV
|
The InvokeRESTAPI@1 task is a server job task (an agentless job in the classic editor), not a regular task that can run on an agent. You need to put it in a server job in this way:
pool: server
More info you can find here.
|
I have an Azure pipeline which needs to send a REST request to an endpoint. I am trying to use the built-in task InvokeRESTAPI@1 to do this, but it errors when running on Azure DevOps. Script:
---
trigger:
batch: true
branches:
include:
- master
pr:
- master
stages:
- stage: Run_Tests
jobs:
- job: RA001
pool: windows-server
steps:
- task: InvokeRESTAPI@1
displayName: "Run Test"
inputs:
connectionType: 'connectedServiceName'
serviceConnection: 'myconnection'
method: 'PUT'
headers: |
{
"AccessKey":"$(system.MyKey)"
}
urlSuffix: '/api/v3/schedules/uniquenumber/runNow'
waitForCompletion: 'false'
Returns:
Job RA001: Step references task 'InvokeRESTAPI' at version '1.152.1'
which is not valid for the given job target.
|
Step references task InvokeRESTAPI at version 1.152.1 which is not valid for the given job target
|
I had the same problem, and when I looked in the .csproj file for my web application I had the following lines:
<Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" Condition="'$(Solutions.VSVersion)' == '8.0'" />
<Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v9.0\WebApplications\Microsoft.WebApplication.targets" Condition="'$(Solutions.VSVersion)' == '9.0'" />
I guess this happened since I had recently upgraded from VS2008 to VS2010, so it looks like during the conversion process it got all screwed up. To fix it I just replaced those lines with:
<Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" />
After that, everything worked as it should.
|
Deploying a Web Application Project from VS2010 RTM causes an error in MSBuild. It complains that the PipelinePreDeployCopyAllFilesToOneFolder target cannot be found. Is there any way to further diagnose this? Thank you.
|
MSBuild target PipelinePreDeployCopyAllFilesToOneFolder cannot be found when deploying
|
Instead of simply running chmod on your local system, it's better to run git update-index --chmod=+x path/to/file. This adds the executable flag to the file in Git and should ensure that the script can be executed inside a GitLab pipeline. See also this question.
|
I am running a simple shell script with GitLab CI/CD and I am getting Permission denied. Kindly suggest. When I do chmod +x test.sh it says operation not permitted.
stages:
- build
build:
stage: build
script:
- ls
- ./test.shShelltest.shecho HiError:
|
Gitlab Shell Script Permission Denied
|
Overall, no. Check out things likeAirflowfor this. Job objects give you a pretty simple way to run a container until it completes, that's about it. The parallelism is in that you can run multiple copies, it's not a full workflow management system :)
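As an illustration only (not from the answer above): a minimal Airflow DAG that chains three jobs so each one starts only after the previous one succeeds. The DAG id, task ids and commands are made up, and in a Kubernetes setup you would typically swap BashOperator for something like KubernetesPodOperator; this sketch assumes Airflow 2.x.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="sequential_batch_jobs",   # hypothetical name
    start_date=datetime(2023, 1, 1),
    schedule_interval=None,           # trigger manually
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo extract")
    transform = BashOperator(task_id="transform", bash_command="echo transform")
    load = BashOperator(task_id="load", bash_command="echo load")

    # ">>" declares the dependency chain: extract, then transform, then load
    extract >> transform >> load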
|
Reading the Kubernetes "Run to Completion" documentation, it says that jobs can be run in parallel, but is it possible to chain together a series of jobs that should be run in sequential order (parallel and/or non-parallel).https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/Or is it up to the user to keep track of which jobs have finished and triggering the next job using a PubSub messaging service?
|
Kubernetes can analytical jobs be chained together in a workflow?
|
The error is because you are using a single underscore between the estimator name and its parameter when using it in a pipeline. It should be two underscores.From thedocumentation of Pipeline.fit(), we see that the correct way of supplying params in fit:Parameters passed to the fit method of each step, where each parameter name is prefixed such that parameter p for step s has key s__p.So in your case, the correct usage is:pipe.fit(X_train, y_train, classifier__eval_metric='auc')(Notice the two underscores between name and param)
|
I'm trying to useXGBoost, and optimize theeval_metricasauc(as describedhere).This works fine when using the classifier directly, but fails when I'm trying to use it as apipeline.What is the correct way to pass a.fitargument to the sklearn pipeline?Example:from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import load_iris
from xgboost import XGBClassifier
import xgboost
import sklearn
print('sklearn version: %s' % sklearn.__version__)
print('xgboost version: %s' % xgboost.__version__)
X, y = load_iris(return_X_y=True)
# Without using the pipeline:
xgb = XGBClassifier()
xgb.fit(X, y, eval_metric='auc') # works fine
# Making a pipeline with this classifier and a scaler:
pipe = Pipeline([('scaler', StandardScaler()), ('classifier', XGBClassifier())])
# using the pipeline, but not optimizing for 'auc':
pipe.fit(X, y) # works fine
# however this does not work (even after correcting the underscores):
pipe.fit(X, y, classifier__eval_metric='auc') # failsThe error:TypeError: before_fit() got an unexpected keyword argument 'classifier__eval_metric'Regarding the version of xgboost:xgboost.__version__shows 0.6pip3 freeze | grep xgboostshowsxgboost==0.6a2.
|
How to optimize a sklearn pipeline, using XGboost, for a different `eval_metric`?
|
According toabout_Functions:After the function receives all the objects in the pipeline, the End
statement list runs one time. If no Begin, Process, or End keywords
are used, all the statements are treated like an End statement list.Thus you just need to omit theelseblock. Then all objects in the pipeline are processed, but due to theifclause the actual processing is only being done until the time limit is hit.
|
What I'm trying to do is get a function to stop the pipeline feed when a time limit has been reached. I've created a test function as follows :function Test-PipelineStuff
{
[cmdletbinding()]
Param(
[Parameter(ValueFromPipeLIne=$true)][int]$Foo,
[Parameter(ValueFromPipeLIne=$true)][int]$MaxMins
)
begin {
"THE START"
$StartTime = Get-Date
$StopTime = (get-date).AddMinutes($MaxMins)
"Stop time is: $StopTime"
}
process
{
$currTime = Get-Date
if( $currTime -lt $StopTime ){
"Processing $Foo"
}
else{
continue;
}
}
end { "THE END" }
}This will certainly stop the pipeline, but it never calls my "end{}" block, which, in this case is vital. Does anyone know why my "end{}" block isn't being called when I stop the pipeline using "continue"? Behaviour seems to be the same if I throw a PipelineStoppedException.
|
Stopping PowerShell pipeline, ensure end is called
|
There's currently no way to do this.
Here is whatthe documentationsays:
|
I have a task that creates a cache- task: Cache@2
inputs:
key: 'sonarCache'
path: $(SONAR_CACHE)
cacheHitVar: CACHE_RESTORED
displayName: Cache Sonar packagesHowever, the cache got corrupted. So how do I run this pipeline while telling it to ignore any existing cache ?For some reason, I cannot change the cache keysonarCache
|
How to delete an Azure Pipeline cache without changing cache key
|
.can be used as a valid object name (a syntactically valid name) and documented here:A syntactically valid name consists of letters, numbers and the dot or
underline characters and starts with a letter or the dot not followed
by a number." (from manual ofmake.names).The single dot satisfies "the dot not followed by a number."
|
In trying to understand the base R "Bizarro pipe" as described in the Win Vector blog, I confirmed that simple examples produce pipelike behavior in R with no packages installed. For example:> 2 ->.; exp(.)
[1] 7.389056I found that the dot is used as an operator in plyr and magrittr. I spent a couple of hours looking in base R for every synonym I could think of for dot operator, with every help tool I knew of; I even ran some ridiculous regex searches. Finally, in desperation, I tried this:>. <- 27
>.
[1] 27So far, I have failed to disconfirm that a naked dot, without even a ` ` to its name, is a valid variable name in R. But I am still hoping that this is merely a side-effect from some more sensible behavior, documented somewhere.Is it? And if so, where?I acknowledge that in its first appearance in the Win Vector blog, the authors identified it as a joke.
|
I see it, but I don't believe it. Legal names in R, piping operations, and the dot
|
This works too:# comment here
ls -t $HOME/.vnc/*.pid |
#comment here
xargs -n1 |
#another comment
...based onhttps://stackoverflow.com/a/5100821/1019205.
it comes down tos/|//;s!\!|!.
|
I've found that it's quite powerful to create long pipelines in bash scripts, but the main drawback that I see is that there doesn't seem to be a way to insert comments.As an example, is there a good way to add comments to this script?#find all my VNC sessions
ls -t $HOME/.vnc/*.pid \
| xargs -n1 \
| sed 's|\.pid$||; s|^.*\.vnc/||g' \
| xargs -P50 --replace vncconfig -display {} -get desktop \
| grep "($USER)" \
| awk '{print $1}' \
| xargs -n1 xdpyinfo -display \
| egrep "^name|dimensions|depths"
|
bash: comment a long pipeline
|
You can see how pipeline order works with a simple bit of script:function a {begin {Write-Host 'begin a'} process {Write-Host "process a: $_"; $_} end {Write-Host 'end a'}}
function b {begin {Write-Host 'begin b'} process {Write-Host "process b: $_"; $_} end {Write-Host 'end b'}}
function c { Write-Host 'c' }
1..3 | a | b | cOutputs:begin a
begin b
process a: 1
process b: 1
process a: 2
process b: 2
process a: 3
process b: 3
end a
end b
c
|
I understand that PowerShell piping works by taking the output of one cmdlet and passing it to another cmdlet as input. But how does it go about doing this?Does the first cmdlet finish and then pass all the output variables across at once, which are then processed by the next cmdlet?Or is each output from the first cmdlet taken one at a time and then run it through all of the remaining piped cmdlet’s?
|
How does the PowerShell Pipeline Concept work?
|
Your problem is that thewhileloop runs in a subshell because it is the second command in a pipeline, so any changes made in that loop are not available after the loop exits.You have a few options. I often use{and}forcommand grouping:nm "$@" |
{
while read line
do
…
done
for j in "${array[@]}"
do
echo "$j"
done
}Inbash, you can also useprocess substitution:while read line
do
…
done < <(nm "$@")Bash also supports an optionshopt -s lastpipewhich runs the last process in a pipeline in the current shell environment so that changes made by the last part of the pipeline are available to the rest of the script. Then you do not need the command grouping notation or process substitution.Also, it is better to use$(…)in place of back-quotes`…`(and not just because it is hard work getting back quotes into markdown text!).Your line:element="`echo \"$line\" | sed -n \"s/^U \([0-9a-zA-Z_]*\).*/$file:\1/p\"`"could be written:element="$(echo "$line" | sed -n "s/^U \([0-9a-zA-Z_]*\).*/$file:\1/p")"or even:element=$(echo "$line" | sed -n "s/^U \([0-9a-zA-Z_]*\).*/$file:\1/p")It really helps when you need to nest command substitution operations. For example, to list thelibdirectory adjacent to wheregccis found:ls -l $(dirname $(dirname $(which gcc)))/libvsls -l `dirname \`dirname \\\`which gcc\\\`\``/libI know which I find easier!
|
I have the following problem. Let´s assume that$@contains only valid files. Variablefilecontains the name of the current file (the file I'm currently "on"). Then variableelementcontains data in the formatfile:function.Now, when variable element is not empty, it should be put into the array. And that's the problem. If I echoelement, it contains exactly what I want, although it is not stored in array, so for cycle doesn't print out anything.I have written two ways I try to insert element into array, but neither works. Can you tell me, What am I doing wrong, please?I'm using Linux Mint 16.#!/bin/bash
nm $@ | while read line
do
pattern="`echo \"$line\" | sed -n \"s/^\(.*\):$/\1/p\"`"
if [ -n "$pattern" ]; then
file="$pattern"
fi
element="`echo \"$line\" | sed -n \"s/^U \([0-9a-zA-Z_]*\).*/$file:\1/p\"`"
if [ -n "$element" ]; then
array+=("$element")
#array[$[${#array[@]}+1]]="$element"
echo element - "$element"
fi
done
for j in "${array[@]}"
do
echo "$j"
done
|
Unable to add element to array in bash
|
You need to look at the pipeline object. imbalanced-learn has aPipelinewhich extends the scikit-learn Pipeline, to adapt for the fit_sample() and sample() methods in addition to fit_predict(), fit_transform() and predict() methods of scikit-learn.Have a look at this example here:https://imbalanced-learn.org/stable/auto_examples/pipeline/plot_pipeline_classification.htmlFor your code, you would want to do this:from imblearn.pipeline import make_pipeline, Pipeline
smote_enn = SMOTEENN(smote = sm)
clf_rf = RandomForestClassifier(n_estimators=25, random_state=1)
pipeline = make_pipeline(smote_enn, clf_rf)
OR
pipeline = Pipeline([('smote_enn', smote_enn),
('clf_rf', clf_rf)])Then you can pass thispipelineobject to GridSearchCV, RandomizedSearchCV or other cross validation tools in the scikit-learn as a regular object.kf = StratifiedKFold(n_splits=n_splits)
random_search = RandomizedSearchCV(pipeline, param_distributions=param_dist,
n_iter=1000,
cv = kf)
|
I'm relatively new to Python. Can you help me improve my implementation of SMOTE to a proper pipeline? What I want is to apply the over and under sampling on the training set of every k-fold iteration so that the model is trained on a balanced data set and evaluated on the imbalanced left out piece. The problem is that when I do that I cannot use the familiarsklearninterface for evaluation and grid search.Is it possible to make something similar tomodel_selection.RandomizedSearchCV. My take on this:df = pd.read_csv("Imbalanced_data.csv") #Load the data set
X = df.iloc[:,0:64]
X = X.values
y = df.iloc[:,64]
y = y.values
n_splits = 2
n_measures = 2 #Recall and AUC
kf = StratifiedKFold(n_splits=n_splits) #Stratified because we need balanced samples
kf.get_n_splits(X)
clf_rf = RandomForestClassifier(n_estimators=25, random_state=1)
s =(n_splits,n_measures)
scores = np.zeros(s)
for train_index, test_index in kf.split(X,y):
print("TRAIN:", train_index, "TEST:", test_index)
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
sm = SMOTE(ratio = 'auto',k_neighbors = 5, n_jobs = -1)
smote_enn = SMOTEENN(smote = sm)
x_train_res, y_train_res = smote_enn.fit_sample(X_train, y_train)
clf_rf.fit(x_train_res, y_train_res)
y_pred = clf_rf.predict(X_test,y_test)
scores[test_index,1] = recall_score(y_test, y_pred)
scores[test_index,2] = auc(y_test, y_pred)
|
How to implement SMOTE in cross validation and GridSearchCV
|
To use the pipeline-level variable in thedisplayNameof a job or a stage, you should use the expression '${{ variables.varName }}'. Because thedisplayNameof jobs and stages are set at compile time.The expression '${{ variables.varName }}' is called template expression which can be used to get the value of variable at compile time, before runtime starts.The macro syntax '$(varName)' can get the variable value during runtime before a task runs. So, you can use it in thedisplayNameand the input of a task.For more details, you can seethis document.Below is an example as reference.azure-pipelines.ymlvariables:
Number_Version: 1.1.0
jobs:
- job: Display_Version
displayName: 'Job Name - Update version ${{ variables.Number_Version }}'
pool:
vmImage: ubuntu-latest
steps:
- task: Bash@3
displayName: 'Task Name - Update version $(Number_Version)'
inputs:
targetType: inline
script: echo "Input of task - Update version $(Number_Version)"Result
|
- job: Display_Version
displayName: Update version $(Number_Version)
steps:
....I am trying to display a variable which is the variables of the pipeline, and it does not display it ...
Can anyone explain to me why?
|
azure devops , displayname with variable
|
Instead of trigger the build with your PowerShell script, you can install theTrigger Build Taskextension and use it. there you have an optionWait till the triggered builds are finished before build continues:In YAML:- task: TriggerBuild@3
displayName: 'Trigger a new build of Test'
inputs:
buildDefinition: Test
waitForQueuedBuildsToFinish: true
waitForQueuedBuildsToFinishRefreshTime: 60If this option is enabled, the script will wait until the all the queued builds are finished. Note: This can take a while depending on your builds and your build will not continue. If you only have one build agent you will even end up in a deadlock situation!
|
I had a question regarding Azure DevOps pipelines and tasks and was wondering if anyone could help.I have a pipeline with a task that runs a PowerShell script. This script kicks off a separate pipeline, but once that script is run, the original task returns a "pass" (as expected) and the next task in the original pipeline begins to run.Ideally, I would like the next task in pipeline 1 to wait until the pipeline that was kicked off by the script is complete (and returns a pass). Does anyone know of a way this can be achieved? The steps are using YAML. So far I have seen conditions to wait for other steps in the same pipeline, but nothing to stop a step from running until a completely separate pipeline is completed (and passes successfully).Hopefully I am making sense. I can provide screenshots if that would help as well!
|
Azure DevOps pipeline task to wait to run for another pipeline to complete
|
You can't really parallelize reading from or writing to files; these will be your bottleneck, ultimately. Are yousureyour bottleneck here is CPU, and not I/O?Since your processing contains no dependencies (according to you), it's trivially simple to usePython's multiprocessing.Pool class.There are a couple ways to write this, but the easier w.r.t. debugging is to find independent critical paths (slowest part of the code), which we will make run parallel. Let's presume it's process_item.…And that's it, actually. Code:import multiprocessing
p = multiprocessing.Pool() # use all available CPUs
input = open("input.txt")
x = (process_line(line) for line in input)
y = p.imap(process_item, x)
z = (generate_output_line(item) + "\n" for item in y)
output = open("output.txt", "w")
output.writelines(z)I haven't tested it, but this is the basic idea. Pool's imap method makes sure results are returned in the right order.
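For illustration (not part of the original answer): a self-contained variant of the same idea, with placeholder implementations for the three functions (they are only named in the question) and the __main__ guard that multiprocessing needs on platforms that spawn worker processes.
import multiprocessing

def process_line(line):            # placeholder: the real function comes from the question
    return line.strip()

def process_item(item):            # placeholder for the CPU-heavy step
    return item.upper()

def generate_output_line(item):    # placeholder
    return item

if __name__ == "__main__":
    with multiprocessing.Pool() as p, \
         open("input.txt") as infile, \
         open("output.txt", "w") as outfile:
        x = (process_line(line) for line in infile)
        y = p.imap(process_item, x, chunksize=100)  # parallel, lazy, keeps input order
        outfile.writelines(generate_output_line(item) + "\n" for item in y)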
|
Suppose I have some Python code like the following:input = open("input.txt")
x = (process_line(line) for line in input)
y = (process_item(item) for item in x)
z = (generate_output_line(item) + "\n" for item in y)
output = open("output.txt", "w")
output.writelines(z)This code reads each line from the input file, runs it through several functions, and writes the output to the output file. NowIknow that the functionsprocess_line,process_item, andgenerate_output_linewill never interfere with each other, and let's assume that the input and output files are on separate disks, so that reading and writing will not interfere with each other.But Python probably doesn't know any of this. My understanding is that Python will read one line, apply each function in turn, and write the result to the output, and then it will read the second line onlyaftersending the first line to the output, so that the second line does not enter the pipeline until the first one has exited. Do I understand correctly how this program will flow? If this is how it works, is there any easy way to make it so that multiple lines can be in the pipeline at once, so that the program is reading, writing, and processing each step in parallel?
|
How can I parallelize a pipeline of generators/iterators in Python?
|
It is basically because "geometry shader" was a pretty stupid choice of words on Microsoft's part. It should have been called"primitive shader."Geometry shaders make the primitive assembly stage programmable, and you cannot assemble primitives before you have an input stream of vertices computed. There is some overlap in functionality since you can take one input primitive type and spit out a completely different type (often requiring the calculation of extra vertices).These extra emitted vertices do not require a trip backwards in the pipeline to the vertex shader stage - they are completely calculated during an invocation of the geometry shader. This concept should not be too foreign, because tessellation control and evaluation shaders also look very much like vertex shaders in form and function.There are a lot of stages of vertex transform, and what we call vertex shaders are just the tip of the iceberg. In a modern application you can expect the output of a vertex shader to go through multiple additional stages before you have a finalized vertex for rasterization and pixel shading (which is also poorly named).
|
In both theOpenGLandDirect3Drendering pipelines, the geometry shader is processed after the vertex shader and before the fragment/pixel shader. Now obviously processing the geometry shader after the fragment/pixel shader makes no sense, but what I'm wondering is why not put it before the vertex shader?From a software/high-level perspective, at least, it seems to make more sense that way: first you run the geometry shader to create all the vertices you want (and dump any data only relevant to the geometry shader), then you run the vertex shader on all the vertices thus created. There's an obvious drawback in that the vertex shader now has to be run on each of the newly-created vertices, but any logic that needs to be done there would, in the current pipelines, need to be run for each vertex in the geometry shader, presumably; so there's not much of a performance hit there.I'm assuming, since the geometry shader is in this position in both pipelines, that there's either a hardware reason, or a non-obvious pipeline reason that it makes more sense.(I am aware that polygon linking needs to take place before running a geometry shader (possibly not if it takes single points as inputs?) but I also know it needs to run after the geometry shader as well, so wouldn't it still make sense to run the vertex shader between those stages?)
|
Why is the geometry shader processed after the vertex shader?
|
The only way I got it to work was to set☑️ Skipped pipelines are considered successfulinSetttings > General > Merge requests > Merge Checksandmarking the manual step as "allow_failure"upload:
stage: 'upload'
rules:
# Only allow uploads for a pipeline source whitelisted here.
# See: https://docs.gitlab.com/ee/ci/jobs/job_control.html#common-if-clauses-for-rules
- if: $CI_COMMIT_BRANCH
when: 'manual'
allow_failure: trueAfter this clicking theMerge when Pipeline succeedsbutton …… will merge the MR without any manual interaction:
|
I have a pipeline with 3 stages:build,deploy-testanddeploy-prod. I want stages to have following behavior:always runbuildrundeploy-testautomatically when onmasteror manually when on other branchesrundeploy-prodmanually, only available onmasterbranchMy pipeline configuration seems to achieve that but I have a problem when trying to merge branches into master. I don't want to executedeploy-teststage on every branch before doing merge. Right now I am required to do that as the merge button is disabled with a messagePipeline blocked. The pipeline for this merge request requires a manual action to proceed. The settingPipelines must succeedin project is disabled.I tried adding additional rule to preventdeploy-teststage from running in merge requests but it didn't change anything:rules:
- if: '$CI_MERGE_REQUEST_ID'
when: never
- if: '$CI_COMMIT_BRANCH == "master"'
when: on_success
- when: manualFull pipeline configuration:stages:
- build
- deploy-test
- deploy-prod
build:
stage: build
script:
- echo "build"
deploy-test:
stage: deploy-test
script:
- echo "deploy-test"
rules:
- if: '$CI_COMMIT_BRANCH == "master"'
when: on_success
- when: manual
deploy-prod:
stage: deploy-prod
script:
- echo "deploy-prod"
only:
- master
|
Accept merge request without running manual stages
|
We developed PipeGraph, an extension to Scikit-Learn Pipeline that allows you to get intermediate data, build graph like workflows, and in particular, solve this problem (see the examples in the gallery athttp://mcasl.github.io/PipeGraph)
|
I am using aPipelinein scikit learn to group some preprocessing together with aOneClassSVMas the final classifier. To compute reasonable metrics, I need a post-processing which transforms the -1,1 output of theOneClassSVMto 0 and 1. Is there any structured way to add such post-processing to aPipeline?
Transformers cannot be used after the final estimator.
|
Post-process classifier output in scikit learn Pipeline
|
Short-circuiting boolean expressions are exactly equivalent to some set of nested ifs, so are as efficient as that would be.If b doesn't have side-effects, it can still be executed in parallel with a (for any value of "in parallel", including pipelining).If b has side effects which the CPU architecture can't cancel when branch prediction fails then yes, this might require delays which wouldn't be there if both sides were always evaluated. So it's something to look at if you do ever find that short-circuiting operators are creating a performance bottleneck in your code, but not worth worrying about otherwise.But short-circuiting is used for control flow as much as to save unnecessary work. It's common among languages I've used, for example the Perl idiom:open($filename) or die("couldn't open file");the shell idiom:do_something || echo "that failed"or the C/C++/Java/etc idiom:if ((obj != 0) && (obj->ready)) { do_something; } // not -> in Java of course.In all these cases you need short-circuiting, so that the RHS is only evaluated if the LHS dictates that it should be. In such cases there's no point comparing performance with alternative code that's wrong!
|
boolean a = false, b = true;
if ( a && b ) { ... };In most languages,bwill not get evaluated becauseais false soa && bcannot be true. My question is, wouldn't short circuiting be slower in terms of architecture? In a pipeline, do you just stall while waiting to get the result of a to determine if b should be evaluated or not? Would it be better to do nested ifs instead? Does that even help?Also, does anyone know what short-circuit evaluation is typically called? This question arose after I found out that my programming friend had never heard of short-circuit evaluation and stated that it is not common, nor found in many languages, and is inefficient in pipeline. I am not certain about the last one, so asking you folks!Okay, I think a different example to perhaps explain where my friend might be coming from. He believes that since evaluating a statement like the following in parallel:(a) if ( ( a != null ) && ( a.equals(b) ) ) { ... }will crash the system, an architecture that doesn't have short-circuiting (and thereby not allowing statements like the above) would be faster in processing statements like these:(b) if ( ( a == 4 ) && ( b == 5 ) )since if it couldn't do (a) in parallel, it can't do (b) in parallel. In this case, a language that allows short-circuiting is slower than one that does not.I don't know if that's true or not.Thanks
|
Benefits of using short-circuit evaluation
|
When mixing pipeline operators and curried arguments be aware of the order you pass arguments with.let size = 4
let photosInMB_pipeforward =
size, @"C:\Users\chrsmith\Pictures\"
||> filesUnderFolder
|> Seq.map fileInfo
|> Seq.map fileSize
|> Seq.fold (+) 0L
|> bytesToMBThink about it as if the compiler is putting parentheses around the function and its parameters like this.@"C:\Users\chrsmith\Pictures\" |> filesUnderFolder sizebecomes@"C:\Users\chrsmith\Pictures\" |> (filesUnderFolder size)or(filesUnderFolder size) @"C:\Users\chrsmith\Pictures\"Out of order examplelet print2 x y = printfn "%A - %A" x y;;
(1, 2) ||> print2;;
1 - 2
1 |> print2 2;;
2 - 1With three argumentslet print3 x y z = printfn "%A - %A - %A" x y z;;
(1, 2, 3) |||> print3;;
1 - 2 - 3
(2, 3) ||> print3 1;;
1 - 2 - 3
3 |> print3 1 2;;
1 - 2 - 3Definitionslet inline (|>) x f = f x
let inline (||>) (x1,x2) f = f x1 x2
let inline (|||>) (x1,x2,x3) f = f x1 x2 x3
|
Is piping parameter into line is working only for functions that accept one parameter?
If we look at the example atChris Smiths' page,// Using the Pipe-Forward operator (|>)
let photosInMB_pipeforward =
@"C:\Users\chrsmith\Pictures\"
|> filesUnderFolder
|> Seq.map fileInfo
|> Seq.map fileSize
|> Seq.fold (+) 0L
|> bytesToMBwhere his filesUnderFolder function was expecting only rootFolder parameter,
what if the function was expecting two parameters, i.e.let filesUnderFolder size rootFolderThen this does not work:// Using the Pipe-Forward operator (|>)
let size= 4
let photosInMB_pipeforward =
@"C:\Users\chrsmith\Pictures\"
|> filesUnderFolder size
|> Seq.map fileInfo
|> Seq.map fileSize
|> Seq.fold (+) 0L
|> bytesToMBSince I can definelet inline (>>) f g x y = g(f x y)I think I should be able to use pipeline operator with functions having multiple input parameters, right? What am I missing?
|
Piping another parameter into the line in F#
|
Use"Bitbucket trigger pipeline" pipein your pipeline's final step.
You can easily setup this pipe:script:
- pipe: atlassian/trigger-pipeline:4.1.5
variables:
BITBUCKET_USERNAME: $BITBUCKET_USERNAME
BITBUCKET_APP_PASSWORD: $BITBUCKET_APP_PASSWORD
REPOSITORY: 'your-awesome-repo'Where variables:$BITBUCKET_USERNAME - Bitbucket user that will trigger the pipeline. Note, that this should be an account name, not the email.$BITBUCKET_APP_PASSWORD -Bitbucket app passwordof the user that will trigger the pipeline. Remember to check thePipelines WriteandRepositories Readpermissions when generating the app password.This pipe will trigger the branch pipeline for master in your-awesome-repo. This pipeline will continue, without waiting for the triggered pipeline to complete.
|
I has the two repositories in bitbucket pipelines, both with pipelines enable.How to execute the pipeline after the other pipeline complete?
|
Can I execute a pipeline from other pipeline in bitbucket pipelines?
|
I was able to locate a docker image that has the changes required to pass builds. For those that run into this issue and need a quick fix until Sam gets his docker image updated seebitnami/git
|
I have a staging, and a production server setup on Bitbucket Pipelines running a yaml script with the following;image: samueldebruyn/debian-git
name: Staging - Upload FTP
script:
- apt-get update
- apt-get -qq install git-ftp
- git ftp push --user $FTP_USERNAME --passwd $FTP_PASSWORD -v ftp://$FTP_HOST/$FTP_STAGING_PATH
- echo "Completed upload"This script has been working great, and widely used in same format online for others using pipelines.I submitted to my staging server literally 5-10 minutes before Debian 11 was released with successful builds, then post Debian 11 Release all subsequent releases Ive pushed to staging, or production result in a failure build with the following error...Ign:1 http://security.debian.org/debian-security stable/updates InRelease
Get:2 http://deb.debian.org/debian stable InRelease [113 kB]
Err:3 http://security.debian.org/debian-security stable/updates Release
404 Not Found [IP: 151.101.250.132 80]
Get:4 http://deb.debian.org/debian stable-updates InRelease [40.1 kB]
Get:5 http://deb.debian.org/debian stable/main amd64 Packages [8178 kB]
Reading package lists...
E: The repository 'http://security.debian.org/debian-security stable/updates Release' does not have a Release file.Am I missing something, or did Debian 11 just break a lot of pipelines?!or issamueldebruyn/debian-gitout of date now?
|
Debian 11 Update broke samueldebruyn/debian-git?
|
Actually two things needed to be done:config.assets.precompile += %w( *.js *.css )as describedhere, andconfig.serve_static_assets = truefor local production testing when usingrails s. Of course usingrake assets:precompileas well - however in my case, without config.assets.precompile this would have no effect since the manifest didn't contain any reference to my javascript file.HTH.
|
theRails Guidessays:If there are missing precompiled files in production you will get an
Sprockets::Helpers::RailsHelper::AssetPaths::AssetNotPrecompiledError
exception indicating the name of the missing file(s).I do execute:bundle exec rake assets:precompilehowever I don't get any error, and my javascript file is missing in themanifest.yml. Also it's not appearing inpublic/assets, so the problem is only on production.I have in theapplication.js//= require formalize/jquery-formalizeWhat am I missing?Thanks.
|
Rails javascript asset missing after precompile
|
Gitlab 13.9 introduced a!reference-tag which makes this possible;.setup:
script:
- echo creating environment
test:
script:
- !reference [.setup, script]
- echo running my own command
|
How to append thescriptsection in onestagein the .gitlab-ci.yml file?e.g in this examplestages:
- stage1_name
.a:
script:
- echo "String 1"
.b:
script:
- echo "String 2"
stage1_name:
stage: stage1_name
extends: .a
extends: .b
script:
- echo "String 3"how to get as output:String 1
String 2
String 3instead of:String 3
|
Append the script on one stage .gitlab-ci.yml
|
Disclaimer: I'm no expert, just putting together the pieces...The answer (as of Janurary 2019) seems to be: there is no official support for this.System.IO.Pipelines was created primarily fornetworking use cases. In fact, the pipelines code released in 2.1 hadno support for any endpoints:Here we need a bit of caveat and disclaimer: the pipelines released in .NET Core 2.1 do not include any endpoint implementations.There is aproposed design for an APIfor generic stream adapter but that is part of the .NET Core 3.0 Milestone.There evenseems to be some reticenceto implementing file-based pipelines access (AKA aFileStreampipelines equivalent). This is particularly disappointing since I too was hoping for pipelines powered file I/O.I think your best bet at the moment is using theUsePipe()methods inhttps://github.com/AArnott/Nerdbank.StreamsUpdate:Here's another example I just foundhttps://github.com/tulis/system-io-pipelines-demo/tree/master/src/SystemIoPipelinesDemo/SystemIoPipelinesDemoUpdate:I had a go at making a Pipeline based file reader. You can read all about it here:https://github.com/atruskie/Pipelines.File.UnofficialEssentially, from a performance perspective, using a pipeline stream adapter like Nerdbank.Streams is a good way to go!
|
I consider replacing Stream-based IO in our application with System.IO.Pipelines to avoid unnecessary memory allocation (considered first RecyclableMemoryStream but it seems to be discontinued). But in some places I still have to use Stream because of the interface imposed by an external library. So my PipeWriter will need wrap its data in a Stream.I didn't find much on this topic, but found a suggestion to use decorator pattern (C# Stream Pipe (Stream Spy)) in an answer to a different question. I am not sure it would be a right to hide Pipelines behind a Stream wrapper but can't find anything else that will let me piped data to a stream. Am I missing something?UPDATE. Here's an example using SSH.NET open source library to upload a file to an FTP server (https://gist.github.com/DavidDeSloovere/96f3a827b54f20d52bcfda4fe7a16a0b):using (var fileStream = new FileStream(uploadfile, FileMode.Open))
{
Console.WriteLine("Uploading {0} ({1:N0} bytes)", uploadfile, fileStream.Length);
client.BufferSize = 4 * 1024; // bypass Payload error large files
client.UploadFile(fileStream, Path.GetFileName(uploadfile));
}Note that we open a FileStream to read a file and then pass a Stream reference to an SftpClient. Can I use System.IO.Pipelines here to reduce memory allocation? I will still need to provide a Stream for SftpClient.
|
Using System.IO.Pipelines together with Stream
|
With the configuration fileI use Spray 1.2.0 in an Akka system. Inside my actor, I import the existing Akka system so I can use the default Akka configuration file.implicit val system = context.system
import context.dispatcher
val pipeline: HttpRequest => Future[HttpResponse] = sendReceiveNow you can change the configuration inapplication.conf.spray.can.host-connector {
max-connections = 10
max-retries = 3
max-redirects = 0
pipelining = off
idle-timeout = 30 s
client = ${spray.can.client}
}In codeIt is possible to change the settings in code using the HostConnectorSetup, but you have to define all parameters. (Based on thespray usage example.)val pipeline: Future[SendReceive] =
for (
Http.HostConnectorInfo(connector, _) <-
IO(Http) ? Http.HostConnectorSetup("www.spray.io", port = 80, settings = Some(new HostConnectorSettings(maxConnections = 3, maxRetries = 3, maxRedirects = 0, pipelining = false, idleTimeout = 5 seconds, connectionSettings = ClientConnectionSettings(...))))
) yield sendReceive(connector)
|
When using spray's pipelining to make an HTTP request like this:val urlpipeline = sendReceive ~> unmarshal[String]
urlpipeline { Get(url) }is there a way to specify a timeout for the request and the number of times it should retry for that specific request?All the documentation I've found only references doing in a config (and even then I can't seem to get it to work).thx
|
Can I set a timeout and number of retries on a specific pipeline request?
|
I know it is a bit of an old post, but I was looking for the same and found that it is available now sinceGitLab 13.1.The text for a badge can be customized to differentiate between multiple coverage jobs that run in the same pipeline. Customize the badge text and width by adding the key_text=custom_text and key_width=custom_key_width parameters to the URL:https://gitlab.com/gitlab-org/gitlab/badges/main/coverage.svg?job=karma&key_text=Frontend+Coverage&key_width=130The example is for the Coverage badge but this also works for Pipelines, so in your case:https://gitlab.com/my-group/my-repository/badges/master/pipeline.svg?key_text=master&key_width=50
https://gitlab.com/my-group/my-repository/badges/dev/pipeline.svg?key_text=dev&key_width=50(Found this viahttps://microfluidics.utoronto.ca/gitlab/help/ci/pipelines/settings.md#custom-badge-text)
|
As the standard pipeline badge from GitLab looks like thisyou can tell pretty well that those are not really distinguishable.Is there a way to change thepipelinetext manually or programmatically to something else for each badge?Btw, the badges were added with those linkshttps://gitlab.com/my-group/my-repository/badges/master/pipeline.svg
https://gitlab.com/my-group/my-repository/badges/dev/pipeline.svgAdditional facts:The pipeline runs locally on my computerMy repo is private
|
How to change pipeline badge name
|
One way to choose how to divide the blocks is to decide which parts you want to scale independently of the others. A good starting point is to divide the CPU-bound portions from the I/O-bound portions. I'd consider combining the last two blocks, since they are both I/O-bound (presumably to thesamedatabase).
|
I try to create well-designed TPL dataflow pipeline with optimal using of system resources. My project is a HTML parser that adds parsed values into SQL Server DB. I already have all methods of my future pipeline, and now my question is what is the optimal way to place them in Dataflow blocks, and how much blocks i should use? Some of methods are CPU-bound, and some of them - I/O-bound(loading from Internet, SQL Server DB queries). For now I think that placing each I/O operation in separate block is the right way like on this scheme:What are the basic rules of designing pipelines in that case?
|
TPL Dataflow pipeline design basics
|
If you want to both fit and transform the data on intermediate steps of the pipeline, then it makes no sense to reuse the same pipeline and it is better to use a new one as you specified, because callingfit()will forget all about previously learnt data.However, if you only want totransform()and see the intermediate data on an already fitted pipeline, then it is possible by accessing thenamed_stepsattribute.new_pipe = sklearn.pipeline.Pipeline([('transformer1',
                                       old_pipe.named_steps['transformer1']),
                                      ('transformer2',
                                       old_pipe.named_steps['transformer2'])])Or directly using the inner variablesteps(a list of (name, estimator) pairs) like:transformer_steps = old_pipe.steps
new_pipe = sklearn.pipeline.Pipeline([('transformer1', transformer_steps[0][1]),
                                      ('transformer2', transformer_steps[1][1])])And then calling thenew_pipe.transform().Update:If you have version 0.18 or above, then you can set the non-required estimator inside the pipeline toNoneto get the result in the same pipeline. It's discussed inthis issue at scikit-learn github.Usage for the above in your case:pipe.set_params(clusterer=None)
pipe.transform(df)But be aware to maybe store the fittedclusterersomewhere else to do so, else you need to fit the whole pipeline again when wanting to use that functionality.
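Purely as a sketch of the approach described above (not the asker's actual transformers): build a pipeline, fit it once, keep a reference to the fitted clusterer, then disable the final step and call transform() to get the intermediate data for the silhouette score. StandardScaler/PCA/KMeans are stand-ins for transformer1/transformer2/clusterer, and setting a step to None assumes scikit-learn 0.18+.
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.pipeline import Pipeline
from sklearn.metrics import silhouette_score

X, _ = load_iris(return_X_y=True)
pipe = Pipeline([('transformer1', StandardScaler()),
                 ('transformer2', PCA(n_components=2)),
                 ('clusterer', KMeans(n_clusters=3, n_init=10, random_state=0))])

labels = pipe.fit_predict(X)                       # fit everything, get cluster labels
fitted_clusterer = pipe.named_steps['clusterer']   # keep a reference, as warned above

pipe.set_params(clusterer=None)                    # or 'passthrough' in newer versions
X_transformed = pipe.transform(X)                  # only the fitted transformers run
print(silhouette_score(X_transformed, labels))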
|
I am using asklearn.pipeline.Pipelineobject for my clustering.pipe = sklearn.pipeline.Pipeline([('transformer1': transformer1),
('transformer2': transformer2),
('clusterer': clusterer)])Then I am evaluating the result by using the silhouette score.sil = preprocessing.silhouette_score(X, y)I'm wondering how I can get theXor the transformed data from the pipeline as it only returns theclusterer.fit_predict(X).I understand that I can do this by just splitting the pipeline aspipe = sklearn.pipeline.Pipeline([('transformer1': transformer1),
('transformer2': transformer2)])
X = pipe.fit_transform(data)
res = clusterer.fit_predict(X)
sil = preprocessing.silhouette_score(X, res)but I would like to just do it all in one pipeline.
|
getting transformer results from sklearn.pipeline.Pipeline
|
This is the Windows version of afork bomb.%0is the name of the currently executing batch file. A batch file that contains just this line:%0|%0Is going to recursively execute itself forever, quickly creating many processes and slowing the system down.This is not a bug in windows, it is just a very stupid thing to do in a batch file.
|
If you run a .bat or .cmd file with%0|%0inside, your computer starts to use a lot of memory and after several minutes, is restarted. Why does this code block your Windows? And what does this code programmatically do? Could it be considered a "bug"?
|
Batch fork bomb? [duplicate]
|
Checktransformersversion. Make sure you are on latest. Pipelines were introduced quite recently, you may have older version.
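A quick way to check, offered only as a sketch (the exact release that introduced pipelines is from memory, around v2.3):
import transformers
print(transformers.__version__)   # pipeline() only exists in reasonably recent releases

# If the version is too old, upgrade inside the same environment, e.g.
#   pip install --upgrade transformers
# and then the import should work:
from transformers import pipeline
nlp = pipeline("sentiment-analysis")     # downloads a default model on first use
print(nlp("This finally works"))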
|
I have installedpytorchwithcondaandtransformerswithpip.I canimport transformerswithout a problem but when I try toimport pipeline from transformersI get an exception:from transformers import pipeline
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-4-69a9fd07ccac> in <module>
----> 1 from transformers import pipeline
ImportError: cannot import name 'pipeline' from 'transformers' (C:\Users\Alienware\Anaconda3\envs\tf2\lib\site-packages\transformers\__init__.py)This is a view of the directory where it searches for theinit.py file:What is causing the problem and how can I resolve it?
|
Can not import pipeline from transformers
|
I found the answer now. My problem is that I have an old version (2.4) of firebird. I upgraded to version 2.9 - and everything works fine. So thank you so much for your assistance. You have all guided me in the right direction.
|
I'm trying to develop a backup with a firebird database using the firebird package but it gives me an error.FbConnectionStringBuilder cs = new FbConnectionStringBuilder();
cs.UserID = "SYSDBA";
cs.Password = "masterkey";
cs.Database = "C:\\Develop\\Database\\DB\\Database.fdb";
FbBackup backupSvc = new FbBackup();
backupSvc.ConnectionString = cs.ToString();
backupSvc.BackupFiles.Add(new FbBackupFile(@"C:\\Develop\\Database\\DB\\Database.fbk", 2048));
backupSvc.Verbose = true;
backupSvc.Options = FbBackupFlags.IgnoreLimbo;
backupSvc.ServiceOutput += new ServiceOutputEventHandler(ServiceOutput);
backupSvc.Execute();I cant figure out why I can't compile the following statement:backupSvc.ServiceOutput += new ServiceOutputEventHandler(ServiceOutput);The errors are:Error CS0246 The type or namespace name 'ServiceOutputEventHandler'
could not be found (are you missing a using directive or an assembly
reference?)andError CS0103 The name 'ServiceOutput' does not exist in the current
contextIs there anyone who can help?
|
Problems with making backup with Firebird package
|
This works with ignoring 1.jpeg in the root directory of the application. I added some additional excludes for reference.$application = 'Default web site/SendSMS_Img'
$workingDir = 'desired FilePath'
$fileIgnoreList = @('\\1.jpeg','\\*.aspx','\\foo.bar','\\*.publishproj')
$folderIgnoreList = @('\\bin','\\images','\\foo','\\bar')
Add-PSSnapin WDeploySnapin3.0
Backup-WDApp $application -SkipFileList $fileIgnoreList -SkipFolderList $folderIgnoreList -Output $workingDir
|
I want to useBackup-WDAppweb deploy powershell cmdlet to backup an iisApp to my windows machine from another windows machine. I want the backup to be saved in a location mentioned by me.Please add how to mention folder location to the following cmdlet$list = @('\\Default Web Site\\SendSMS_Img\\1.jpeg')
Backup-WDApp "Default web site/SendSMS_Img"
-SourcePublishSettings "D:\Web Deploy\SendSMS.publishsettings"
@{encryptPassword='pass@123'} –SkipFileList $listAlso, I want the backup to skip a jpeg file which is inC:\Users\username\System.Collections.Hashtable\MachineName_IisApp_Default web site_SendSMS_Img_20130813172754.zip\Content\Default web site\SendSMS_Img. My cmdlet isn't skipping the jpeg file. I guess the regular exp is wrong.
|
Web deploy powershell cmdlet Backup
|
You forgot to open a bracket "(". Try this:table_types=($($MYSQL -u $DB_USER -p$DB_PASS -e "show table status from $DB" | awk '({ if ($2 == "MyISAM" || $2 == "InnoDB") print $1,$2}'))
|
i am using a script in my macbook pro to backup Mysql databases of my development apps, and its working fine.
But i tried to use it in my ubuntu server test and gave this error :Syntax error: "(" unexpected (expecting "done")On this line of code:table_types=($($MYSQL -u $DB_USER -p$DB_PASS -e
"show table status from $DB" | awk '{ if ($2 == "MyISAM" || $2 == "InnoDB") print $1,$2}'))Can someone help me?
|
Syntax error in mysql backup script
|
According to thedocumentation, you can't runsvnadminfrom a remote machine:Since svnadmin works via direct repository access (and thus can only be used on the machine that holds the repository), it refers to the repository with a path, not a URL.
|
I try to create a backup of my SVN repository, located on Linux server, from Windows command-line Subversion Client:C:\project>svnadmin hotcopy svn://"URL_of_my_SVN_repository"/ C:/BACKUPand receive following error:
svnadmin: E205000: 'svn://"URL_of_my_SVN_repository"/' is a URL when it should be a local pathHow I can solve it? I need to initiate a backup my SVN repository from Windows PC (due to our network policy I have access to the Linux server port 3690 (SVN) only).
|
Backup of SVN repository, located on Linux server, to Windows Client
|
In the codeplex projectExtreme T-SQL ScriptI have written T-SQL procedures to script the content of tables. I just abandoned its use myself in favor ofssms tools pack, but the latter is not an option for you.When using these procedures in SSMS or VS the main problem is that Microsoft has limits on max column width and max length of output from Print statements.I can't predict which such limits exist when using mylittleadmin.
It depends on which datatypes and which varchar length you are using. Writing scripts that handle special needs is possible.Further, you need something to script the database objects first, and it might be difficult to find something for that, as most people just use SSMS for this purpose. sp_helptext might help to script procedures.
|
How can I move a db from one server to another (I only have access to the database with mylittleadmin). Like the title says, I guess the "easiest" way would be by generating SQL with a stored procedure.I'm using SQL Server 2008 on both servers.
|
Stored Procedure to generate insert and create SQL for database
|
(EDIT: Looks like others, includingJeff Atwood, have created tools to do this.)Ok, figured out that it's hard to recursively search subfolders and delete those folders matching a pattern using a batch file andcmd. Luckily, PowerShell (which is installed on Windows 7 by default, IIRC) can do it (kudos):get-childitem C:\Projects\* -include TestResults -recurse | remove-item -recurse -forceThat was based off of example 4 of theRemove-Itemhelp entry. It searches the path, recursively, for anything (file or folder) named "TestResults" (could put a wildcard in there if wanted) and pipes the results to theRemove-Itemcommand, which deletes them. To test it out, just remove the pipe toRemove-Item.How do we remove more than just one folder per statement? Input the list of folders in PowerShell'sarray syntax:get-childitem C:\Projects\* -include bin,obj,TestResults -recurse | remove-item -recurse -forceYou can run this from the command prompt or similar like so (reference)powershell.exe -noexit get-childitem C:\Projects\* -include bin,obj,TestResults -recurse | remove-item -recurseObvious DisclaimerDon't use this method if you keep necessary files inside folders or files namedbin,obj, etc.
|
I want to do a backup of my Projects folder. It mostly contains sub-folders for .NET projects (VS2005, VS08, and VS10).What should I delete before doing a backup (e.g.TestResults,bin,obj); if there is something I shouldn't delete but should ignore, what might that be; and, most importantly, how best to automate the whole process (I'm thinking batch file or better)?What gotchas do I need to be aware of?
|
Clean up .net project folders before performing backup
|
Login to the Netbackup master server and perform the following steps./usr/openv/netbackup/bin/admincmd/bpnbaz -setupExAudit/usr/openv/netbackup/bin/admincmd/bpnbaz -AddUser domainType:master_server:User/usr/openv/netbackup/bin/bpnbat -addUser User User_pwd master_server/usr/openv/netbackup/bin/admincmd/bpnbaz -listusers
Number of users with NetBackup Administrator Privileges: 1Domain Type : domainType
Domain : master_server
Username : UserOperation completed successfully./usr/openv/netbackup/bin/admincmd/bpnbaz -DisableExAuditRestart Netbackup servicesDomain Type:Unixpwd - UNIX Password file on the Authentication serverWINDOWS - Primary Domain Controller or Active Directory
|
I have account in Netbackup Java console and commandline with all permissions, but in REST api, I can only access part of resources like /appdetails. To other path like /admin/jobs, I got following error:{"errorCode":8000,"errorMessage":"User does not have permission(s) to perform the requested operation."}Does anyone know how to fix this? Thank you.
|
How to grant permission in RESTful API to user in Netbackup 8.1.1?
|
I wouldhighlyrecommend some form of version control, likegitormercurial. This will give you the file history that you want (bonus: branching) and will also provide an easy way to clone and share history between different repositories.GitYou can use a plugin likevim-fugitiveto handle staging, commits, and many other git functions.Vimcastsprovides a nice set ofscreencast tutorialsfor fugitive:
A complement to command line git
Working with the git index
Resolving merge conflicts with vimdiff
Browsing the git object database
Exploring the history of a git repository
Persistent UndoTo go along with version control, Vim also provides persistent undo where it saves Vim's undo history to a file. See:h persistent-undo.Simply set'undofile'in yourvimrc:set undofileYou may want to set'undodir'to a different location, e.g.set undodir=~/.local/vim/undo. Note:'undodir'must exist.It should be noted persistent undo isnotversion control and should not be treated as such. For example undo history can easily be messed up by editing a file in a different editor.Plugins likeGundoandundotreemay help in navigating deep in the past or complicated undo histories.
|
I want to create a write function in Vim that does the following:Copies the file before it was edited to a directory (lets call this back_up_before_last_change)Writes the new edits to the current fileCopies the updated file to a directory (lets call this back_up)This way I could look an older change and if the file gets accidentally deleted I would always have a most recent backup. Would anyone know how to do this or recommend me some resource so I could figure it out for myself. Thank you
|
Custom Vim Write Function To Make a Backup Copy
|
You could usesplit. For example, this:split -b500k $DESTFILE ${DESTFILE}-will split$DESTFILEinto 500 KB pieces called:${DESTFILE}-aa
${DESTFILE}-ab
${DESTFILE}-ac
...Then you could loop through them with something like:for x in ${DESTFILE}-*
do
dropboxUpload $x
done
|
this a part of a .sh script I need to edit to make some backups and upload them on Dropbox but I need to split that backup in smaller parts.NOW=$(date +"%Y.%m.%d")
DESTFILE="$BACKUP_DST/$NOW.tgz"
# Backup mysql.
mysqldump -u $MYSQL_USER -h $MYSQL_SERVER -p$MYSQL_PASS --all-databases > "$NOW-Databases.sql"
tar cfz "$DESTFILE" "$NOW-Databases.sql"And then the function to upload the backup on DropBox....dropboxUpload "$DESTFILE"How can I split the .tar file in smaller parts (for example of 100 or 200mb size) and get the name and the number of those files to upload them with the dropboxUpload function?
|
Bash script - splitting .TAR file in smaller parts
|
The preferred option would be to useOracle Data Guard. First, you would instantiate the new production database as a physical standby for the current database. Then, when you wanted to move to the new database, you'd simply issue a switchover from the primary to the standby. You may want to follow that up by instantiating a physical standby for the new production database on the backup server.If you don't have the enterprise edition, you can do essentially the same thing manually. Assuming the database is in ARCHIVELOG mode, you can run a backup of the current production database while it is up, restore that backup to the production server, and then apply archived logs from the current production database to get the backup close to synchronized. When you're ready to do the switchover, you'd need to shut down the current production database, copy the last archived logs to the backup, apply the archived logs, and then bring up the backup as the new production database.
|
Let's say you have an Oracle database running on production-backup. You want to move back to production (which has no data at this time). To export, import, index, and run statistics collection takes 4 hours. So, if you stop production-backup, you are down 4 hours while you migrate back to production. Part of the long import time is that there is a bunch of historical data in there not immediately needed for operations. How would you migrate your data from production-backup to production to minimize downtime so that you aren't down for 4 hours?
|
Oracle - Reducing downtime while moving from one database to another
|
jos_ja_website_activityis not a core table, it must have been added by a third-party extension. My guess is that you could empty it without serious consequences. However, you might want to find out which extension is generating this log. The 'JA' in the table name may stand for JoomlArt, which is a popular commercial extension club. I'd start by looking for a plugin, template, or component in the system done by JoomlArt and seeing if there's a control panel where you can turn off logging.
|
I was wondering how important the information in the jos_ja_website_activity table in Joomla is? The reason I ask is that it has 3 million records and when my external backup system is backing up the DB, it takes ages to backup. While it's taking it's time to back up, Joomla cannot access the table and seeing as it accesses the table for every page load, my site then goes down while the backup is doing it's work.So my question really is if i empty the table is this going to affect my system at all? Also is there a way to stop it from logging information to that table or is it essential that it does?Thanks for your help!
|
How important is the jos_ja_website_activity table in Joomla?
|
When you connect with ssh, a profile script is executed and that sets a lot of environment variables. I don't know anything about Ansible, but it would appear that something about the way it connects does not invoke the profile. It's the same thing as when a script is run by cron. Bottom line is all scripts should take care of setting the necessary environment variables for themselves, instead of depending on inheriting them.
|
I use an Ansible controller machine to connect to a server in order to run a script that takes an Oracle database backup, and I get the following strange result:
i) when I use SSH, everything works fine and I can run a script that makes a database backup
ii) when I use Ansible, with the same credentials, the script fails. I found that, although Ansible uses the same user, some env variables like $PATH, $ORACLE_HOME etc. are different compared to the SSH connection. Do you have any idea what I am doing wrong?
|
Ansible fails to run a script for a backup of an Oracle database
|
Specify the "Overwrite the existing database (WITH REPLACE)" option:ShareFollowansweredAug 8, 2020 at 12:49Dan GuzmanDan Guzman44.7k33 gold badges4848 silver badges7474 bronze badges2Thanks Dan. I checked this option and I even installed the database with the same name but again I am getting the same error when trying to restore the database even with the same name. Do you know how this can be resolved? Thank you in advance–velAug 8, 2020 at 12:57Personally, I use T-SQL RESTORE. You could try scripting the command to a new query window. Verify theREPLACEoption is in included and execute it.–Dan GuzmanAug 8, 2020 at 13:04Add a comment|
|
I have database backup A.bak
I want to restore that backup file, using the SQL Server Management Studio, into database B which has all the same tables/columns but just different name.
If I try to do the restore, I get the error: "The backup set holds a backup of a database other than the existing". How can I resolve this? I tried renaming the .bak file, but it didn't work.
|
RESTORE database with different name Fails - "The backup set holds a backup of a database other than the existing"
|
500,000 records really is not very many records. Before you start taking drastic actions (such as limiting the ability of users to seamlessly see all the data at once), you should consider the basics for improving performance:
Indexes to improve standard query performance.
Partitioning to limit the portions of tables that need to be accessed.
Full-text indexing to improve MATCH() queries.
Optimization of SQL queries.
In general, these are sufficient for databases that are orders of magnitude larger than the volume you are dealing with. They may not all apply to your particular situation, but you should exhaust the lower-hanging fruit for performance optimization before changing your physical data model for a problem that might never occur.
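A few minimal sketches of those basics; table and column names are placeholders:

-- Ordinary B-tree index for common lookups
CREATE INDEX idx_records_created_at ON records (created_at);

-- Full-text index plus a MATCH() search
CREATE FULLTEXT INDEX ft_records_title_body ON records (title, body);
SELECT id, title
FROM records
WHERE MATCH(title, body) AGAINST ('search terms' IN NATURAL LANGUAGE MODE);

-- Range partitioning by year so queries on recent data can skip old partitions
ALTER TABLE records
PARTITION BY RANGE (YEAR(created_at)) (
  PARTITION p2015 VALUES LESS THAN (2016),
  PARTITION p2016 VALUES LESS THAN (2017),
  PARTITION pmax  VALUES LESS THAN MAXVALUE
);

Note that MySQL partitioning requires the partition column to be part of every unique key, so check your schema before applying the last one.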
|
I have a single database, most of the tables are connected in some way.
It consists of over 500000 records.
I need to implement live search, but the number of records bothers me. The database will grow, and live search across millions of records will surely cause problems. So I need to move old records (let's assume a date field is present) to another database and only keep fresh ones available for search. Old records won't be used anymore, that's for sure, but I still need to keep them. Any ideas how that could be implemented in MySQL?
|
Separate single database into active and archive
|
No, there isn't any standard tool that does this out of the box. But it's pretty simple to code up, and there are a few projects to do it:
https://unix.stackexchange.com/questions/18628/generating-sets-of-files-that-fit-on-a-given-media-size-for-tar-t
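A rough bash sketch of the folder-splitting idea; the paths and the 4 GiB limit are placeholders, files larger than the limit are not handled, and you should test on a copy of your data first:

#!/bin/bash
SRC=/mnt/nas/data
DEST=/mnt/staging
LIMIT=$((4 * 1024 * 1024 * 1024))   # 4 GiB in bytes

disc=1
used=0
mkdir -p "$DEST/DVD-$disc"

# Walk all files; -print0 / read -d '' keeps odd filenames intact.
find "$SRC" -type f -print0 | while IFS= read -r -d '' f; do
    size=$(stat -c %s "$f")
    if (( used + size > LIMIT )); then
        disc=$((disc + 1))
        used=0
        mkdir -p "$DEST/DVD-$disc"
    fi
    # Recreate the relative directory structure inside the DVD folder.
    rel=${f#"$SRC"/}
    mkdir -p "$DEST/DVD-$disc/$(dirname "$rel")"
    cp -p "$f" "$DEST/DVD-$disc/$rel"
    used=$((used + size))
done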
|
I want to back up my NAS onto multiple DVDs. What I had in mind was a script that does the following:
-Create a folder for each DVD
-Copy the files and file structure into the DVD folders
-Stop / go to the next DVD folder when the first DVD folder is full, i.e. the trigger is 4 GByte (which calculates easily for the example)
I have a data source with 10 GB of data, so this will be 3 DVDs. So the script first creates three folders: DVD-1, DVD-2 and DVD-3. Next, the copy starts to copy 4 GB into the DVD-1 folder. After that, the remaining files must go into DVD-2 and DVD-3. As far as I know, rsync and cp don't bother about calculating this. I know it is an option to do this using archives like zip, tar or gz, but at first I want to try it with unpacked files. Is all of the above possible with standard Linux bash commands, or is it insane?
|
Backup files in pre-folders of a certain size
|
I don't think you will find this kind of information directly on a website. Best is to look for the info provider by provider. A simple Google search with the key words "provider name" backup policy will bring up this info; I got the CloudBees policy in the first result, and the same works for the other providers. Keep in mind that PaaS providers don't usually run their own offsite data centres; they typically build on IaaS such as Amazon, so the backup policy of the underlying infrastructure matters as much as the provider's own.
|
I am worried that a PaaS provider may not have its own offsite data centre for backing up our data. Is there any information about how many PaaS providers do not have an offsite location for backing up data?
|
Is there any PaaS cloud service provider that do not have their own offsite data center?
|
Try exporting the MongoDB collection to a JSON file using this command:
mongoexport --host "localhost:27017" --db db_Name --collection coll_Name --out "path/to/any_file_Name.json"
And to import this JSON on the Mac, type:
mongoimport --db db_Name --collection coll_Name --file "path/to/any_file_Name.json"
|
I am new to MongoDB. I have a MongoDB database of 2 GB on a Windows machine. I want to back it up and restore it on my macOS machine. I tried copying and pasting the /data/db folders from Windows to my Mac machine, but it didn't work; the database is not showing up on the Mac after doing this. I guess I'm missing something major. Can someone assist me with this? Thanks in advance!
|
How to backup mongodb from windows machine to mac os?
|
Why not keep your project folder in Dropbox itself? Note that an internet connection is not required while you are editing; you can work offline, and the next time you connect, Dropbox will sync the changes.
|
I'm using Eclipse Java EE IDE for Web Developers and want something like this:
Eclipse should add the current project (or open projects) to an archive and copy the archive to a specified folder on its (Eclipse's) exit.
Is it possible with Eclipse built-in functionality - scripting, or plugin?
I don't want to use bat-files, Script Host or other external tools.
The overall idea is to have an on-exit backup of my project shared between home and workplace using Dropbox, e.g. automatically sending the project backup to a Dropbox folder.
But current task is to realize local backup.
|
Eclipse project backup
|
If you are doing a backup with data, why don't you just right-click the database, select Tasks > Back Up, and back up to a file? Other than that, it would be hard to tell you what the error is without knowing what error you are getting.
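In T-SQL, a native full backup (with data) is a one-liner; the path and database name are placeholders, and WITH COMPRESSION needs SQL Server 2008 or later:

BACKUP DATABASE MyDatabase
TO DISK = N'C:\Backups\MyDatabase.bak'
WITH INIT, COMPRESSION;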
|
I tried backing up my database using the SSMS Generate Scripts option. This produces a rather large file that SSMS won't run (memory limit). I then tried running this script using sqlcmd, but I get a syntax error.
I read that sqlcmd mode is different, and I do not wish to manually remove the errors (there are potentially a lot of them). Is there a way to generate the script so it will obey the rules of sqlcmd, or vice versa?
|
After using SSMS's Generate Scripts sqlcmd won't run it
|
The only way is to write a script that takes the data from the old db and inserts it into the new db. Or you can connect to the two databases and run some INSERT ... SELECT queries, something like
INSERT INTO new_db.new_table SELECT old.field1, ... FROM old_db.old_table AS old
or, with an explicit column list,
INSERT INTO new_db.new_table (field_1, field_2) SELECT old.field_1, old.field_2 FROM old_db.old_table AS old
Either way, it is a manual process, although it can be automated to some extent with a script.
|
I recently rewrote a Java EE web application (running on a MySQL database) in Rails 3.1. The problem now is that the database model of the new application is not the same as the old one, because I added, removed and renamed some attributes. The database table names are also different. Is there a way of migrating this data? The only way I can imagine doing this is writing a stored procedure with many ALTER TABLE and CREATE TABLE statements to update the database to the new model. Thanks in advance.
Solution: I finally used INSERT ... SELECT statements in a MySQL stored procedure to migrate the data: INSERT INTO new_schema.new_table SELECT ... FROM old_schema.old_table. I am now considering making a Rake task to call that procedure and do other things.
|
Migrate data from old MySQL database
|
Before you make changes to that row, do a SELECT * FROM <table> WHERE <partition_key> = ?, specifying the partition key.
Once you are done with the changes, use the above output and insert the row back using:
INSERT INTO KeyspaceName.TableName (ColumnName1, ColumnName2, ColumnName3, ...) VALUES (Column1Value, Column2Value, Column3Value, ...)
|
I am making changes to a row in Cassandra and want to restore it to its previous state later on. Using an older DBeaver Enterprise version (4.0.5), I export as insert / CSV / JSON, but the map columns of the table are not exported properly, and inserting the exported data fails. Please suggest how to back up the row and restore it. Since the data is large, it is difficult to construct the insert statement manually.
|
Backup a select with a where clause result of cassandra for restoring later
|
You can't delete a Recovery Services vault that has servers registered in it, or that holds backup data. To gracefully delete a vault, unregister the servers it contains, remove the vault data, and then delete the vault.
If you try to delete a vault that still has dependencies, an error message is issued, and you will need to manually remove the vault dependencies, including:
Backed up items
Protected servers
Backup management servers (Azure Backup Server, DPM)
Refer to this article for detailed info: https://learn.microsoft.com/en-us/azure/backup/backup-azure-delete-vault
Note: You can use the Cloud Shell available in the portal to achieve this. Please select PowerShell after you launch Cloud Shell. Kindly let us know if the above helps or you need further assistance on this issue.
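A rough Cloud Shell sketch using the Az.RecoveryServices module; the resource group and vault names are placeholders, MARS items appear as Windows containers with backup management type MAB, and you should verify the output of each step before deleting anything:

$vault = Get-AzRecoveryServicesVault -ResourceGroupName "my-rg" -Name "my-vault"
Set-AzRecoveryServicesVaultContext -Vault $vault

# List MARS containers still registered in the vault
$containers = Get-AzRecoveryServicesBackupContainer -ContainerType Windows -BackupManagementType MAB

# Unregistering a container removes its backed-up data
foreach ($c in $containers) {
    Unregister-AzRecoveryServicesBackupContainer -Container $c
}

Remove-AzRecoveryServicesVault -Vault $vault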
|
I used the Azure Backup client (MARS) to back up a server he had. The server no longer exists. In the Azure portal I am unable to delete the vault because the resource group contains backup items. I tried using PowerShell, but Az.RecoveryServices is not meant to be used for the MARS BackupManagementType. You can run Get-AzureRmRecoveryServicesBackupContainer, but then Get-AzureRmRecoveryServicesBackupItem fails because there is no WorkLoadType for MARS. So I can't delete the backup items from the portal, I can't delete backup items using PowerShell, and the server no longer exists, so I can't use the MARS agent to delete the items.
|
Azure back up unable to delete backup items
|
Did you try with:
mysql --user=root --password=xxxx --database=databasename < /home/ec2-user/backup/databasebackup_$today
Hope it helps.
|
I make daily backups by running the following cronjob:
mysqldump --user root --password=xxxx databasename > /home/ec2-user/backup/databasebackup_$today
How would I go about simply restoring from a backup, completely overwriting whatever there might be?
|
How to restore mysql database backup from terminal?
|
Write a service to constantly check the folder size. The code below calculates a folder's size; attach it to a service that executes it based on events or however you need.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class PathSize {

    public static void main(String[] args) {
        // Point this at the folder you want to watch.
        Path path = Paths.get(".");
        long size = calculateSize(path);
        System.out.println(size);
    }

    // Recursively sums the sizes of all regular files under the given path.
    public static long calculateSize(Path path) {
        try {
            if (Files.isRegularFile(path)) {
                return Files.size(path);
            }
            // Files.list keeps a directory handle open, so close the stream when done.
            try (Stream<Path> children = Files.list(path)) {
                return children.mapToLong(PathSize::calculateSize).sum();
            }
        } catch (IOException e) {
            // Unreadable entries simply contribute nothing to the total.
            return 0L;
        }
    }
}
|
Hello mates, I want to ask if there's an Android app that can back up a single folder in storage once the folder is updated, fully automatically. I mean, if I add a picture or file to a folder, I want it to be detected and backed up automatically without any user interference. Is there an app already out there that can do this? And if I should create one myself, would it be a script I put in the init folder, or an APK? Sorry for the newbie question, guys.
|
android backup single folder once it's updated
|
I don't know if there is any wrong answer here, but it depends mostly on how this backup table will be used. What I would do: back everything up into a single table, but with an extra field named RunMonth (or something similar) that indicates the month those 50,000 records belong to. That makes querying the historical data much easier than having it spread across separate tables. I hope that makes sense.
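A minimal sketch of that single-table approach; the table and column names are placeholders:

CREATE TABLE dbo.MonthlyOutputArchive (
    RunMonth   char(7)      NOT NULL,  -- e.g. '2011-07'
    SomeColumn varchar(100) NULL       -- ... same columns as the output table ...
);

INSERT INTO dbo.MonthlyOutputArchive (RunMonth, SomeColumn)
SELECT '2011-07', SomeColumn
FROM dbo.MonthlyOutput;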
|
In SQL Server 2005, I have a requirement to back up the output table after monthly processing; the output table contains 50,000 rows. Which is the better way to back up the table each month: appending everything to one backup table, or creating a separate table per month? Which method saves space and gives the best performance for managing the data in SQL Server 2005?
|
Which is the best way to take backup of all table data either merging one or separate in SQL Server?
|
Take a mysqldump of your MySQL database in its clean starting state, then schedule a cronjob that loads that dump back in every hour.
For loading it, you can use:
mysql -h<db_hostname> -u<db_username> -p<db_password> <databasename> < <mysqldump_file.sql>
If your server is Linux, schedule the cron job using sudo crontab -e.
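A rough sketch of the crontab entry (credentials and the dump path are placeholders); it runs at the top of every hour and reloads the clean dump:

0 * * * * mysql -hlocalhost -udemo_user -pdemo_pass demo_db < /path/to/clean_demo_db.sql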
|
I have a WordPress demo site to let visitors try out some live plugins. What I want is for the connected MySQL database to refresh every hour, meaning: going back to the clean starting state. So every hour the demo site will be clean and fresh again, without any data from demo users. Is there a query via PHP that should be run, or how can I manage this? Hope you understand my question. Many thanks in advance!
UPDATE: I have created a cronjob and the job does get triggered, because I'm receiving an email every time it runs :) But why is my SQL file not executing? Any ideas?
<?php
exec("mysql -u DB-USER -pDB-PASS DB-NAME < /home/users/XXXX/XXXX/yw-cronjobs/clean_demo_db.sql");
$to = '[email protected]';
$subject = 'Demo Cron Executed';
$message = 'Cron job to reset demo has been executed';
$headers = 'From:[email protected]' . "\r\n" .
'Reply-To:[email protected]' . "\r\n" .
'X-Mailer: PHP/' . phpversion();
mail($to, $subject, $message, $headers);
?>
When I import the clean_demo_db.sql file in phpMyAdmin it does what it should... Am I missing something here?
|
How to set MySQL database to 'Starting state' every hour
|
You can try this:

SELECT TEXT
FROM ALL_SOURCE
WHERE NAME LIKE '%abc%'
  AND TYPE = 'PACKAGE BODY'
ORDER BY LINE

The distinct TYPE values are: PROCEDURE, PACKAGE, PACKAGE BODY, TYPE BODY, TRIGGER, FUNCTION, TYPE.
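If the goal is .prc files on disk rather than query output, a rough SQL*Plus sketch using DBMS_METADATA (the schema, object name, and spool path are placeholders):

SET LONG 100000 PAGESIZE 0 LINESIZE 200 TRIMSPOOL ON FEEDBACK OFF
SPOOL /backup/abc_proc.prc
SELECT DBMS_METADATA.GET_DDL('PROCEDURE', 'ABC_PROC', 'TEST') FROM DUAL;
SPOOL OFF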
|
I want to save selected procedures and functions for backup purposes in .prc file format. As of now I am doing this manually with the help of PL/SQL Developer. I have found some solutions, but none worked for me. Here is an example:
expdp schemas=Test, include= procedure like 'abc%';
and here is the error while executing the above script:
SP2-0734: unknown command beginning "expdp sche..." - rest of line ignored.
Please help me if there is any way to automate this manual effort.
|
How I can take backup of selected Procedures or Functions from Oracle database
|
You would do this using RESTORE: https://msdn.microsoft.com/en-us/library/ms186858.aspx
Note that the first parameter is the NAME of the database you want this to be.
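A minimal T-SQL sketch of restoring the .bak under a new name without touching the existing database; the logical file names and paths below are placeholders, so check them with RESTORE FILELISTONLY first:

RESTORE FILELISTONLY FROM DISK = N'C:\Backups\Friend.bak';

RESTORE DATABASE FriendCopy
FROM DISK = N'C:\Backups\Friend.bak'
WITH MOVE 'Friend_Data' TO N'C:\Data\FriendCopy.mdf',
     MOVE 'Friend_Log'  TO N'C:\Data\FriendCopy_log.ldf';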
|
My friend and I have the same database; the difference is the data in it. The goal is to use his database, which was created as a backup file in ".bak" format, under another database name. How can I restore his database as a new database with a new name without affecting my current database? Thank you!
|
Using the backup file without affecting my database
|
VSS operates at the volume level and synthesizes a new (old) volume based on copy-on-write pieces. This is at a wholly different level than shared folders, which are materialized by the sharing services above the file system.
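You can see this per-volume bookkeeping from an elevated command prompt; both commands report by volume, not by share:

vssadmin list shadows
vssadmin list shadowstorage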
|
I'm trying to gain a better understanding of Shadow Copies through VSS. According to these TechNet articles:
https://technet.microsoft.com/en-us/library/ee923636.aspx
https://technet.microsoft.com/en-us/library/cc771305.aspx
the name of the service is "Shadow Copies of Shared Folders". When configuring VSS (right click on drive --> Configure Shadow Copies ...), the dialog box contains the following text in the tab:
Shadow copies allow users to view the contents of shared folders as the contents existed at previous points in time. For information on Shadow Copies, click here.
The hyperlink opens a local help file that mirrors the TechNet articles. Here's the kicker: non-shared folders are also being copied. The D: drive of my server (Windows Server 2008 R2) has Shadow Copies enabled. In the root of the drive, I have 6 folders. Only one of them is shared, but ALL of them have "previous versions" available. Is this expected behavior? Or is there some other configuration which is causing non-shared folders to have Shadow Copies made?
|
Shadow Copies (VSS) and Non-Shared Folders
|
As noted in a comment, you can back up the RPM database, but that is only one part of replicating your configuration to another server:
RPM's database records almost all of the information regarding the packages you have installed. Using the database, you could in principle write a script that uses cpio or pax to append all of the files known to the RPM database to a suitably large archive area. rpm -qa gives a list of packages, and rpm -ql package gives a list of files for the given package.
However, RPM's database does not necessarily record files created by package %pre and %post scripts.
Likewise, it does not record working data (such as a MySQL database) that may be in /var/lib.
To handle those last two cases, you are going to have to do some analysis of your system to ensure that you do not leave something behind. rpm -qf pathname can tell you who owns a given file or directory. What you have to do is check for cases where RPM does not know who owns it.
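A rough bash sketch of the package-list approach; it does not capture files created by %pre/%post scripts, data under /var/lib, or software installed outside RPM, which you would copy separately:

# On the old server: record installed package names and the repo definitions
rpm -qa --qf '%{NAME}\n' | sort > /root/package-list.txt
cp -a /etc/yum.repos.d /root/yum.repos.d.backup

# On the fresh server, after copying those two items over:
yum -y install $(cat /root/package-list.txt)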
|
Is there a way to back up all installed applications/RPMs/packages (and even repositories) as-is (with the same exact versions/patches) in one script that can re-install them on a fresh bare-bones server of the same specs?
Note: I can't do any image or CloneZilla tricks.
Note: there is some 3rd-party software that is not offered by any repos; the solution should contain a backup of these packages (preferably all).
thanks!
|
CentOS6 - Backup all RPMs and installed programs
|