Add this line in your cron: * * * * * php /var/www/my.app/artisan schedule:run >> /dev/null 2>&1
I have created a scheduled task and it runs smoothly with the command: php artisan schedule:run. I am also able to run it with: php /var/www/my.app/artisan schedule:run >> /dev/null 2>&1. But when I try to run * * * * * php /var/www/my.app/artisan schedule:run >> /dev/null 2>&1 as described in the Laravel docs here: https://laravel.com/docs/5.5/scheduling - nothing happens. If I remove >> /dev/null 2>&1 (which is there to hide output), I get: Command not found. Thanks for any help. Just in case, I used this command to add the line to cron: (crontab -l ; echo "* * * * * php /var/www/bmon.app/artisan schedule:run >> /dev/null 2>&1") | crontab - Nice description here: https://askubuntu.com/questions/408611/how-to-remove-or-delete-single-cron-job-using-linux-command
Laravel scheduled task is not running with Cron, but works with artisan
You will need to use full paths in your cron jobs; I see you left them out of mysqldump and also of aws for the connection URL. I would run whereis mysqldump and whereis aws to find which full path you need to use.
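A minimal sketch of what the corrected entry might look like - the binary locations and the schedule below are assumptions (confirm the paths with whereis on your own machine), and dbuser, DBPASS, dbname and mybucket are placeholders:
# Assumed paths - verify with `whereis mysqldump` and `whereis aws` before using
0 2 * * * /usr/bin/mysqldump -u dbuser -pDBPASS dbname > /home/db_backup.sql && /usr/local/bin/aws s3 cp /home/db_backup.sql s3://mybucket/db_backup.sql --profile backupprofile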
I have a database backup command that takes a MySQL dump and then uploads that dump file to AWS S3. When I run the command as a normal user it works perfectly, but when I use the same command in a cron job it fails. I have checked the syslog and there is no error message saying there was a problem after the job; there is only a line saying the job is run, and then it goes on to run the next cron job. The command is as follows (I have removed the sensitive parts): mysqldump -u {{ db_user }} -p{{ db_password }} {{ db_name }} > /home/db_backup.sql | aws s3 cp /home/db_backup.sql s3://{{ s3_url }}/$(date --iso-8601=seconds)_db.sql --profile backupprofile When this command is run by a normal user there is a warning output not to use the mysql password on the command line, but this is essential for the command to work without interaction. There is also a second line of output for the S3 upload to say that it worked. Could these outputs be affecting the cron job in some way?
Crontab command not running
The first thing to do is check your paths. Your cron environment has a minimal setup - it is not an emulation of your user BASH environment. You may want to use the full path for 'service', e.g. STATUS=$(/usr/sbin/service nagios status). For a more detailed description of cron BASH environments: https://serverfault.com/questions/698577/why-is-the-cron-env-different-from-the-users-env
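As a rough illustration (the PATH value and the script path below are examples, not recommendations), you can also give cron an explicit PATH at the top of the crontab instead of hard-coding every binary:
# Example crontab - cron will not inherit your login shell's PATH
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
* * * * * /path/to/nagios_check.sh >> /var/log/nagios/nagios_check.log 2>&1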
I'm relatively new to bash scripting, but I've written a small script that is supposed to check the status of a service and restart it if it isn't running. The setup in cron is fine and it is running; the issue I'm having is the setting of the variable "STATUS" as shown in the code below. When I run the script from a prompt it runs fine, but when it runs via cron the STATUS variable doesn't get set. Can anyone tell me what's going on here? Thanks!
#!/bin/bash
STATUS=$(service nagios status)
DATE=$(date)
if [ "$STATUS" == "No lock file found in /var/run/nagios.pid" ]
then
  service nagios start
  echo "$DATE - Stopped - $STATUS" >> /var/log/nagios/nagios_check.log
elif [ "$STATUS" == "nagios is not running" ]
then
  service nagios start
  echo "$DATE - Stopped - $STATUS" >> /var/log/nagios/nagios_check.log
else
  echo "$DATE - Running - $STATUS" >> /var/log/nagios/nagios_check.log
fi
Bash Script Result Different via Cron
You are not checking that your commands are failing. You are not checking for errors. You are also not sending the error output to your log file. This document describes the return value from AWS commands: AWS CLI Return Codes. Modify both your commands to check for failure. Notice the change for the output of pg_dump and the addition of sending the error output to standard output.
pg_dump -w -c -U $POSTGRES_USER $POSTGRES_DB -f "${BACKUP_PATH}/$(date '+%Y-%m-%dT%H:%M').sql" 2>&1
if [ $? -eq 0 ]
then
  echo "$(date) Database dump created"
else
  echo "$(date) Database dump FAILED"
fi
/usr/local/bin/aws s3 sync $BACKUP_PATH $S3_URL 2>&1
if [ $? -eq 0 ]
then
  echo "$(date) Syncing completed"
else
  echo "$(date) Syncing FAILED"
fi
Now review the log file again for error messages and to determine which command failed.
Here is my cron jobs list:
root@b03fbed2b08d:~# crontab -l
*/3 * * * * S3_BACKUPS_BUCKET=rake-backups /root/dump_psql.sh > /root/logs/cron-2018-01-28T02:24.log
Content of dump_psql.sh:
#!/bin/sh
BACKUP_PATH="/root/backups"
S3_URL="s3://${S3_BACKUPS_BUCKET}"
echo "$(date) Dumping ${POSTGRES_DB} database"
DB_NAME=rake
USER=dbroot
pg_dump -w -c -U $POSTGRES_USER $POSTGRES_DB > "${BACKUP_PATH}/$(date '+%Y-%m-%dT%H:%M').sql"
echo "$(date) Database dump created"
echo "$(date) Syncing ${BACKUP_PATH} folder with ${S3_URL} as $(whoami)"
/usr/local/bin/aws s3 sync $BACKUP_PATH $S3_URL
echo "$(date) Syncing completed"
When I call this script manually it works fine and gives this output:
Sun Jan 28 02:51:46 UTC 2018 Dumping rake database
Sun Jan 28 02:51:47 UTC 2018 Database dump created
Sun Jan 28 02:51:47 UTC 2018 Syncing /root/backups folder with s3://rake-backups as root
upload: backups/2018-01-28T02:48.sql to s3://rake-backups/2018-01-28T02:48.sql
upload: backups/2018-01-28T02:51.sql to s3://rake-backups/2018-01-28T02:51.sql
Sun Jan 28 02:51:48 UTC 2018 Syncing completed
But the cron job output looks like this:
Sun Jan 28 02:48:01 UTC 2018 Dumping database
Sun Jan 28 02:48:01 UTC 2018 Database dump created
Sun Jan 28 02:48:01 UTC 2018 Syncing /root/backups folder with s3://rake-backups as root
Sun Jan 28 02:48:05 UTC 2018 Syncing completed
E.g. aws sync has no output (environment variables are in place) and the script has no effect - backups are not in the bucket. Where am I wrong?
Why does the aws cli command have no effect when called from cron?
Since I got this working eventually, I am going to answer my own question here. I did the following steps to get the script running from startup: changed the type of the script from shell to bash (extension .bash); changed the shebang statement to be #!/bin/bash; in Startup Applications, gave the command bash path/to/script to run the script. Basically, when I changed the shell type from sh to bash, the script started running as soon as the system boots up. Note, in case this helps someone: my intention in having run_roscore.bash as a separate script was to run roscore as a background process. One can run it directly from a single script (which is also running the detection node) by having roscore & as a command before the rosnode starts. This command will fire up the master as a background process and leave the same terminal open for the following commands to be executed.
I am working with Ubuntu 16.04 and I have two shell scripts: run_roscore.sh - this one fires up a roscore in one terminal; run_detection_node.sh - this one starts an object detection node in another terminal and should start up once run_roscore.sh has initialized the roscore. I need both scripts to execute as soon as the system boots up. I made both scripts executable and then added the following command to cron: @reboot /path/to/run_roscore.sh; /path/to/run_detection_node.sh, but it is not running. I have also tried adding both scripts to the Startup Applications, using this command for roscore: sh /path/to/run_roscore.sh and the following command for the detection node: sh /path/to/run_detection_node.sh. And it still does not work. How do I get these scripts to run? EDIT: I used the following command to see the system log for the CRON process: grep CRON /var/log/syslog and got the following output: CRON[570]: (CRON) info (No MTA installed, discarding output). So I installed an MTA and then the syslog shows: CRON[597]: (nvidia) CMD (/path/to/run_roscore.sh; /path/to/run_detection_node.sh). I am still not able to see the output (which is supposed to be a camera stream with detections, as I see it when I run the scripts directly in a terminal). How should I proceed?
How to run two shell scripts at startup?
In crontab, type full paths:
* * * * * echo $(/usr/share/rvm/rubies/ruby-2.3.3/bin/ruby -v) >> 123.rb
* * * * * echo "123" >> 123.rb
should work. Or, add a $PATH variable. In the console run echo $PATH, copy the value, and in the crontab file add:
PATH="<copied paths>:/usr/share/rvm/rubies/ruby-2.3.3/bin/"
* * * * * echo $(ruby -v) >> 123.rb
* * * * * echo "123" >> 123.rb
For some reason, I can't use ruby in crontab. In the console, if I run echo $(ruby -v), it returns the version of ruby perfectly. However, I put the following code in crontab:
* * * * * echo $(ruby -v) >> 123.rb
* * * * * echo "123" >> 123.rb
Then I noticed that 123.rb only contains a lot of 123. So I checked the path of ruby:
root@myserver:~# which ruby
/usr/share/rvm/rubies/ruby-2.3.3/bin/ruby
Basically, I am the only user who can access this server, so I install all applications as root. Should I, and how do I, add some link to enable the permission for root?
Why can't crontab use ruby?
Question 1: My Node app is hosted in AWS, so do I need to run any cron function in an AWS EC2 instance? There is no need to run it in EC2 as well as in code; you can choose one based on the interval. Question 2: Is the way I am using my cron job inside Node.js correct, or is running it in the OS recommended, and why? As long as the scheduled work is not a CPU-bound thing like cryptography, you can stick with one Node process, at least to start. Since you are requiring request, I think you might be making an HTTP request, which is I/O, which means this will be fine. For what it is worth, it's just simpler to have one thing to install/launch/start/stop/upgrade/connect-a-debugger than to deal with an app server as well as a separate cron-managed process. It also depends on how strictly you have to adhere to that minute interval and whether your Node script is doing anything else in the meantime. Just executing once a minute via cron is much more straightforward and in my opinion conforms more to the Unix philosophy. Question 3: Do I need to restart my job if I am using a Node.js cron job? If you build and run this in Node, you will have to manage the lifecycle of the app and make sure it's running, recover from crashes, etc. Thanks
I am using the Node.js cron job package:
var schedule = require('node-schedule');
var j = schedule.scheduleJob('*/1 * * * *', function(){
  console.log('The answer to life, the universe, and everything!');
});
Bear with me, I have multiple questions. Question 1: My Node app is hosted in AWS, so do I need to run any cron function in an AWS EC2 instance? Question 2: Is the way I am using my cron job inside Node.js correct, or is running it in the OS recommended, and why? Question 3: Do I need to restart my job if I am using a Node.js cron job?
Do I need to write a cron job in Node.js or AWS?
Cron doesn't offer that level of scheduling flexibility, so you have to make your script smarter. Make your cron job run twice a day, leaving some log file or other artifact that shows it has run. Then have it check whether it's already run that day, and finally also check the week number to see if it's OK for it to run the second time.
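A minimal sketch of that idea, assuming GNU date is available; the twice-a-day cron slots and the script paths are illustrative placeholders:
#!/bin/bash
# job_wrapper.sh - illustrative; schedule it twice a day, e.g.: 0 6,18 * * * /path/to/job_wrapper.sh
week=$(date +%V)   # ISO week number, zero-padded (01-53)
hour=$(date +%H)
if [ "$week" = "08" ] || [ "$week" = "09" ]; then
    /path/to/real_job.sh   # weeks 8 and 9: run in both daily slots
elif [ "$hour" -lt 12 ]; then
    /path/to/real_job.sh   # other weeks: only the morning slot actually runs the job
fi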
I would like to set a cron job on certain week numbers. The reason is that I have a script that should run once a day except in week numbers 8 and 9, where it should run twice a day. How can I set a cron job based on week numbers?
Crontab on week numbers
The problem is that cron treats % as newlines. You need to escape them. From the crontab POSIX manpage: "Percent-signs (%) in the command, unless escaped with backslash \, will be changed into newline characters, and all data after the first % will be sent to the command as standard input."
* * * * * date +\%Y\%m\%d\%H\%M\%S >> /home/user/time2.txt
I'm trying to append the current date and time to a log file every minute using cron. I want the date and time to be formatted in a specific way. This works:
* * * * * date >> /home/user/time1.txt
This doesn't:
* * * * * date +%Y%m%d%H%M%S >> /home/user/time2.txt
Any insight is much appreciated!
cron task not writing to file
The high CPU problem is caused because the worker loads the complete framework every time it checks for a job in the queue. You can use: php artisan queue:work --daemon - in your case: /usr/local/bin/php /home/electro/public_html/artisan queue:work --daemon This will load the framework once, and the checking/processing of jobs happens inside a while loop, which lets the CPU breathe easy.
How to run Laravel queue:work in shared hosting without overlapping: I am using this command in a cron job, but it is using too much CPU due to the overlapping command. What is the best way to do this? /usr/local/bin/php /home/electro/public_html/artisan queue:work
How to run Laravel queue:work in shared hosting without overlapping
I think it should be: * * * * * php /home/isnap/test/api/local/artisan command:trending_posts_view && /tmp/myscript.sh (the space between local and artisan replaced with a slash)
I am trying to run my Laravel command through a cron job. But after I put my Laravel command into crontab, I did not see the cron job working, because my database is not updated. Below is my crontab file:
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/lib/jvm/java-8-oracle/bi$
SHELL=/bin/bash
[email protected]
* * * * * php /home/isnap/test/api/local artisan command:trending_posts_view && /tmp/myscript.sh
Help me solve this issue.
Cron job not working using crontab linux
Here are some options on AWS ... Launch a t2.nano EC2 instance to run a script that issues GET, then sleeps for 1 second, and repeats. You can't use cron (it doesn't support every second). This costs about 13 cents per day. If you are going to do this for months/years, then reduce the cost by using Reserved Instances. If you can tolerate periods where the GET requests don't happen, then reduce the cost even further by using Spot instances. That said, why do you need to issue a GET request every second? Perhaps there is a better solution here.
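A rough sketch of the loop such an instance could run (the URL is a placeholder, and note the real interval is 1 second plus however long each request takes):
#!/bin/bash
# Fire a GET roughly every second; https://example.com/endpoint is a placeholder
while true; do
    curl -s -o /dev/null "https://example.com/endpoint"
    sleep 1
done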
I am trying to set up a function which will run somewhere on a server. It is a simple GET request and I want to trigger it every second. I tried Google Cloud Functions and AWS; neither has a straightforward solution to run it every second (every 1 minute only). Could you please suggest a service, or combination of services, that will allow me to do it (preferably not costly)?
Where and how to set up a function which is doing GET request every second?
You should check whether the path of the php program is correct. You can check the full path of the php program in terminal output with this command: which php. To create a cron job, you should always use the full path of the program. The crontab default environment is not like the logged-in user's; the program may not be found if its path is not defined in the crontab environment. The crontab syntax is composed of two parts: the datetime at which to execute and the command to be executed. A user is not required before the command.
* * * * * command to be executed
- - - - -
| | | | |
| | | | +----- day of week (0 - 6) (Sunday=0)
| | | +------- month (1 - 12)
| | +--------- day of month (1 - 31)
| +----------- hour (0 - 23)
+------------- min (0 - 59)
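For illustration only: if which php reports /usr/local/bin/php (the path the question's first attempts already use), one working entry could look like the line below; the log redirection is an optional addition to capture errors while debugging:
*/1 * * * * /usr/local/bin/php /apps/mautic/htdocs/app/console mautic:segments:update >> /tmp/mautic_segments.log 2>&1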
Hi guys, I have the following problem (sorry for my English): I want to execute a command in a Mautic cron job. I put in the following commands:
*/1 * * * * /usr/local/bin/php /apps/mautic/htdocs/app/console mautic:segments:update
*/1 * * * * /usr/local/bin/php /apps/mautic/htdocs/app/console mautic:campaigns:update
*/1 * * * * /usr/local/bin/php /apps/mautic/htdocs/app/console mautic:campaigns:trigger
I tried a lot of things but none of them work, like:
*/1 * * * * root /usr/local/bin/php /apps/mautic/htdocs/app/console mautic:segments:update
or
*/1 * * * * bitnami php /apps/mautic/htdocs/app/console mautic:segments:update
But I don't know what fails - whether it's the user name, the route to php, the command, or whether they don't have permission. If I run this manually it works perfectly: php /apps/mautic/htdocs/app/console mautic:segments:update Thank you so much.
These commands don't run in a Mautic cron job
Perl would be a suitable tool to align text fields. Try the following code:
#!/bin/bash
echo "$EMAILBODY" | perl -e '
format STDOUT =
@>>>>>>> @<<<<<<<<<<<<<<<<<< @>>>>>>> @<<<<<<<<<<<<<< @<<<<<<<< @<<<<<<<<<<<<<<<<<< @<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
$id, $type, $site, $city, $prov, $start, $elasp
.
while (<>) {
    split(/,/, $_);
    grep {s/^"//, s/"$//} @_;
    ($id, $type, $site, $city, $prov, $start, $elasp) = @_;
    write;
}'
The result will look like:
Alert ID Type IOL Site City Province Alert Start Time Elasped Time since first alert
4 Trasaction Alert 99196 NANAIMO BC 2015-06-30 15:11:00 867 23:00
6 Communication Alert 88395 GRANDE PRAIRIE AB 2015-07-01 15:23:39 866 22:48
7 Communication Alert 88433 HINTON AB 2015-07-01 15:23:39 866 22:48
8 Communication Alert 88484 LAC LA BICHE AB 2015-07-01 15:23:39 866 22:48
11 Transation Alert 88395 GRANDE PRAIRIE AB 2015-07-02 16:40:59 865 22:53
Note that the script above assumes each field does not contain commas. In such a case, the Text::CSV module will work. Hope this helps.
I have a cron job which, when run, gives me a CSV file. I have another cron job where I am trying to display the CSV data directly in an email body, but the output in the email is distorted and looks ugly. What changes should I make for the email format to be similar to the CSV format? I am using the below bash commands in my cron:
EMAILBODY="$(${RUNDIR}/${DEV1} ${FILELOCATION} ${FILENAME})"
echo "$EMAILBODY" | mailx -r ${SENDER} -s "Alerts Email for $(date +%Y-%m-%d_%H:%M)" [email protected]
I tried the below change to the script:
echo "$EMAILBODY" | column -s, -t | mailx -r ${SENDER} -s "Alerts Email for $(date +%Y-%m-%d_%H:%M)" [email protected]
and now the format in the email is a little better than before. But how can I align the data perfectly with the headers?
Convert CSV format as text in email body
To run the cron every minute and save the current environment it uses to a file, this could be used:
* * * * * env > ~/cronenv
Next, you can start a shell as it would run within cron by doing:
env - `cat ~/cronenv` /bin/sh
Here you could try something like:
su - admiir -c "/u01/users/admiir/test.sh > /u01/app/iir/InformaticaIR/iirlog/crontab_launchsh.log"
You can omit the sudo su and only use su. Once your script is working you could then update your cron with:
* * * * * su - admiir -c "/path/to/test.sh > /path/to/out.txt"
You could also run the cron as the specific user by doing:
sudo crontab -u username -e
I have created a test.sh shell script which I have scheduled using crontab -e to execute every 1 minute, redirecting output to a file.
test.sh:
echo "Printing all Environment Var"
env
echo "Bye Bye"
Below is what my crontab looks like:
#crontab
0,1,2,3,4,5,6,7,8,9,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59 * * * * sudo su - admiir -c /u01/users/admiir/test.sh > /u01/app/iir/InformaticaIR/iirlog/crontab_launchsh.log
When I run ls -ltr the timestamp is getting updated, but nothing is getting printed in the output file.
Cronjob not redirecting output
Use /etc/cron.d as suggested by @Jeff Richards. Files in /etc/cron.d do not need the crontab command to be updated.
#!/bin/bash
set -uex
START_TIME=$(date +%s)
# turn off rsync by deleting cron
sudo rm /etc/cron.d/www.site.com_sync
# deploy to production node 1
cd /var/www/www.site.com/public_html/
npm run prod
service varnish restart
END_TIME=$(date +%s)
# turn on rsync by making the cron again
echo '* * * * * root /root/scripts/sync.sh >>/dev/null 2>&1' | sudo tee /etc/cron.d/www.site.com_sync
Currently this is our scenario:
SSH into Node 1
sudo crontab -e
Change this: * * * * * /rsync.sh >>/dev/null 2>&1 to: #* * * * * /rsync.sh >>/dev/null 2>&1
cd /var/www/www.site.com/public_html/
npm run prod
Wait for npm success
sudo crontab -e
Change this: #* * * * * /rsync.sh >>/dev/null 2>&1 to: * * * * * /rsync.sh >>/dev/null 2>&1
Exit SSH
So each time we are deploying we are adding or removing a comment from the crontab, appending or removing the # at the start. It is time consuming, so I wrote this script. I only have one cron line. (I am not a bash expert.)
#!/bin/bash
START_TIME=`date +%s`
# turn off rsync by deleting cron (from root user)
crontab -e -u root
# deploy to production node 1
npm run prod
sudo service varnish restart
END_TIME=`date +%s`
# turn on rsync by making the cron again
crontab -e -u root | { cat; echo "* * * * * /root/scripts/sync.sh >>/dev/null 2>&1"; } | crontab -
echo -e ""
This is not working, as it is just not adding a line nor finding and removing the code as I want. Can anyone help? I get this error:
no crontab for root - using an empty one
Vim: Warning: Output is not to a terminal
and it hangs.
How can I open crontab from Bash and add a comment to a cron?
It depends on which type of scaling you have selected: https://cloud.google.com/appengine/docs/standard/java/an-overview-of-app-engine Requests on Basic & Manual scaling can run indefinitely; Automatic scaling has a 60-second deadline for HTTP requests and 10 minutes for task queue requests. If you're not sure which type of scaling you have, you probably have Automatic. You could set up a micro-service with Basic scaling specifically for tasks like this, so that your primary service can stay on Automatic scaling. You could also split up your cron task into several tasks, and then daisy-chain them using push queues (i.e. your cron task launches, does some work, and then launches task2 and dies; task2 launches, does some work, launches task3 and dies; etc.).
I am using Node.js on Google App Engine with an endpoint for a cron job. When the REST endpoint is called, I want to proceed with my cron job after returning the response back to the caller. The cron task will continue for about an hour. Will GAE terminate the task if it runs for an hour or more? I suppose GAE should not kill my Node.js server process, because that way my application would stop. I want to know if there is any possibility of the task ending prematurely due to some restrictions on GAE.
run tasks on google app engine that runs my app server
Provide an absolute path. In
df -h | awk 'NR!=1{print $1, $4, $5}' >> availability.txt
use an absolute path for availability.txt:
df -h | awk 'NR!=1{print $1, $4, $5}' >> /tmp/availability.txt
The path from which the script is executed plays a role in where availability.txt is created.
I just created a local cron job on Linux Mint. The cron contains the following:
*/5 * * * * /home/claudio/crons/autoremove.sh
and the .sh file contains the following:
#!/usr/bin/env bash
apt-get autoremove -y
df -h | awk 'NR!=1{print $1, $4, $5}' >> availability.txt
From what I understand, it should run autoremove every 5 minutes and update the availability.txt file with the content of df -h. But it is not working: I've set up the crontab, but every 5 minutes the cron does not run, because the availability.txt file is not created. Any idea why the script is not running?
Local cron not running every 5 minutes
Good question! Both of these solutions are quite feasible, but it's probably going to be easier to write a script in Python (solution #2). Bash scripts are great, but if you make a bash script here you'd need to write another script that is passed the results of all your other scripts. It would look something like this:
## results.sh
first_result=$(python script1.py)
second_result=$(python script2.py)
python email_results.py $first_result $second_result
With this methodology it will be difficult to time the scripts, and it is generally a little unwieldy. If you used Python, you could use time.time() to time things and it would generally be a little neater.
## python
import time
import script1
start = time.time()
result = script1()
end = time.time()
time_elapsed = end - start
email_results(result, time_elapsed)
Hopefully this helps! Good luck!
I have a few independent projects written in Python that I would like to have executed daily. I'm going to use crontab on an Ubuntu server, but I would like to write a script to manage these projects and at the end send a report with information on which scripts failed, what errors they produced, whether they were successful, time to execute, etc. I have 2 ideas; please help me decide which one is better, or provide a better solution. 1: crontab will execute a bash file, and this bash file will launch each script and calculate the time they took to run. 2: crontab will execute a Python script which will execute all the other scripts and calculate the time they take to run, etc. Sorry, English is not my main language.
Daily python task crontab
It's hard to give a good answer without knowing more about what service (1) has to do when it is 'active'. It sounds like you want cron to launch a task every minute. You can use cron in conjunction with push queues: https://cloud.google.com/appengine/docs/standard/go/taskqueue/push/ When creating a push queue task, you can set the property delay before adding it to the queue: https://cloud.google.com/appengine/docs/standard/go/taskqueue/reference#Task (For me in Python it is called countdown: https://cloud.google.com/appengine/docs/standard/python/refdocs/google.appengine.api.taskqueue.taskqueue#google.appengine.api.taskqueue.taskqueue.add) You could have a cron job that fires every 24 hrs. That cron job would load up your push queue with tasks whose delays are staggered: the delay of the first one is 1 min, the delay of the second one is 2 min, etc.
I am trying to figure out how to run a service (1) when it does not receive any calls. I want to use a microservices architecture. Basically I want to run this service (1) while the other service (2) is receiving calls and all the data. As the service (1) I mentioned is not receiving calls, it would not have to spawn new instances, and I would want only service (2) to scale. I have noticed scheduling jobs with cron.yaml, but the number of calls is limited. I need to get this service (1) to be active every 1 min while service (2) is active.
Continuously running service in Google Cloud Engine
Can't you do all of the work in your PHP file and MySQL database? Add a column to each post row called something like "time_added". Run the cron every hour. Query the database and check for posts older than 24 hours that don't have the correct status set to say that you already did something with them after 24 hours. Query the database and check for posts older than 1 hour that don't have the status set to say that you have already done something with them after 1 hour or 24 hours. Update the status of the posts.
I am new to cron jobs but used to PHP. In my application, I need to run a script/event such that if an item is posted in the MySQL db, after an hour its status will change from private to public, and after 24 hours its status will change from public to expired. I tried to use MySQL scheduled events; it works well, but almost all shared hostings do not support it. My previous code:
if($query==true){
  mysqli_query($con,"CREATE EVENT ikiraka.act$id ON SCHEDULE AT (CURRENT_TIMESTAMP + INTERVAL 1 MINUTE) DO UPDATE `ibirakas` set `status`='public' WHERE `id`='$id'") or die(mysqli_error($con));
  header("location:view-post.php?kiraka_id=$id");
}
The cron job I am acquainted with:
10 * * * * /usr/bin/php /www/virtual/username/cron.php > /dev/null 2>&1
You can see what I am trying to do. Cron jobs normally run a specific task at a specific time. How can I trigger a job rather than a normal routine, and how can I set it to run after 1 hour, and another after 24 hours, for a specific item rather than everything in the db? Your help is appreciated.
php - cron jobs to be triggered rather than at specific times and not repetitive
The cron environment will usually differ from the environment you have in an interactive shell. In this case, you should check the DISPLAY environment variable, which many X utilities use to figure out which session to connect to. If it's not set, feh will probably fail in just the way you described. Missing environment variables can be set directly in the command line you're using in the crontab, or you can write a wrapper script that sets up the environment and then calls feh, and then call the wrapper from cron.
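A minimal sketch of the wrapper-script idea, assuming a single local X session on display :0 (the display number varies between setups):
#!/bin/bash
# wallpaper.sh - give the cron job the environment an X client expects, then call feh
export DISPLAY=:0
/usr/bin/feh --recursive --randomize --bg-fill /home/aaron/Pictures/wallpapers/minimalist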
I am trying to create a method to change my desktop background randomly. I am using crontab to handle the change every 10 minutes.
The crontab:
*/10 * * * * /usr/bin/feh --recursive --randomize --bg-fill /home/aaron/Pictures/wallpapers/minimalist 2>&1
The syslog:
syslog:Oct 20 09:20:01 skull-nuc CRON[19895]: (aaron) CMD (/usr/bin/feh --recursive --randomize --bg-fill /home/aaron/Pictures/wallpapers/minimalist 2>&1)
syslog:Oct 20 09:30:01 skull-nuc CRON[20449]: (aaron) CMD (/usr/bin/feh --recursive --randomize --bg-fill /home/aaron/Pictures/wallpapers/minimalist 2>&1)
Troubleshooting: first I changed my shell to sh and tested the command. It works. I tested the command in bash. It works. I allow it to run from cron and nothing happens, and no error is produced. It just runs every ten minutes and my background only changes when I do it manually. I have verified: the script works alone, the script works from sh, the cron service is running, and cron is running the command with no discernible output. I am unsure what else to do.
Running command from cron not working with no error [duplicate]
Assuming you are exposing an HTTP service, you can use a mix of cron in Docker and curl with cron: configure a cron inside your Docker container to send a curl request periodically, invoking your microservice.
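A rough sketch of that idea, assuming the microservice listens on port 8080 inside the container and exposes an endpoint to be hit every 5 minutes (both the port and the path are placeholders):
# crontab entry inside the container
*/5 * * * * curl -s -o /dev/null http://localhost:8080/tasks/run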
I have a Docker container in OpenShift; this container has a Spring Boot microservice that I want to execute only every X minutes. How can I do it using OpenShift? I don't know how to create a cron job or similar to launch this microservice every X minutes. Thanks!
How do I run a cron job in OpenShift?
As per the comments, I believe the problem is in how you are calling the python script in the crontab. Run the exact command you've given crontab and fix any problems it returns.
I know this question has been asked before, but I still haven't been able to get it to work. My crontab file just has this:
0 5 * * * /home/harry/my_env/bin/python /home/harry/compile_stats/process_tonight.py
Here's what my process_tonight.py looks like:
import datetime
import sys
sys.path.append('/home/harry/compile_stats/')
import compile # Module in above path
print("Processing last night\n")
date = str(datetime.datetime.today().year) + "-" + str(datetime.datetime.today().month) + "-" + str(datetime.datetime.today().day-1)
compile.process(date, date)
This file works perfectly fine when I just run it regularly from the command line, but doesn't work when I schedule it. I also looked at my /var/log/syslog file, and the task I'm looking to run isn't showing up there. Any ideas? EDIT: The time it's set to run in my example (5 A.M.) is just a random time to put in. It's not running for any time I put in there. EDIT 2: As per user speedyturkey, I simplified my Python script to better diagnose the problem:
import datetime
#import sys
#sys.path.append('/home/harry/compile_stats/')
#import compile # Module in above path
print("Processing last night\n")
date = str(datetime.datetime.today().year) + "-" + str(datetime.datetime.today().month) + "-" + str(datetime.datetime.today().day-1)
#compile.process(date, date)
Nothing is happening, so I guess the problem isn't with the import.
Crontab Python Script not running
You are creating a date-stamped backup file, but attempting to copy a static file name. Try changing the copy command to: aws s3 cp $DESDIR/$FILENAME s3://s3backup
So I am trying to automate backups to S3 buckets through Linux. The script I am trying to run is:
TIME=`date +%b-%d-%y`
FILENAME=backup-$TIME.tar.gz
SRCDIR=/opt/nexus
DESDIR=/usr/local/backup
tar -cpzf $DESDIR/$FILENAME $SRCDIR
aws s3 cp /usr/local/backup/backup.tar.gz s3://s3backup
The cronjob to run that script is:
44 11 * * * ./backup.sh
However, whenever I try to run the backup script (by updating the cronjob) it does not seem to be working at all. Any ideas why it will not work?
Cronjob is not running in Linux
crontab -l 2>/dev/null | cat - <(echo "your new crontab entry here") | crontab -
Explanation:
crontab -l outputs the current crontab to stdout.
2>/dev/null (optional) suppresses error messages from crontab -l. You'll get an error message if there is no crontab entry for the user, but that's not a problem.
cat - <(echo "your crontab entry here"): the - takes the input from the pipe (crontab -l) and uses it as the first thing to cat. Then the rest appends your new crontab entry to stdout. The <() syntax takes the output of the command inside and stores it in a temporary file.
crontab -: this sets the crontab entry from stdin (which, thanks to the pipe, is all the stdout from the previous commands).
Edit: it looks like you'll need to wrap the command with bash -c in order to get the pipes to work; see this stackoverflow entry. Or, you can send a series of commands to paramiko - just beware of concurrency:
crontab -l > /tmp/current.cron
echo "your crontab entry here" >> /tmp/current.cron
crontab /tmp/current.cron
Another alternative is:
crontab <(cat <(crontab -l 2>/dev/null) <(echo "your new crontab entry"))
I have a crontab entry in my Python code describing which script should be scheduled on the remote UNIX server at the specified time. I am writing a Python script which will connect over ssh using Paramiko, go to the specified crontab file path on the remote server -> open the crontab file -> add the crontab entry specified in the Python script at the end of the file (on a new line) -> save & exit the crontab file. Please let me know how I can achieve this. P.S.: I already know how to connect to the server using Paramiko; I'm just stuck at the file handling part on the remote server.
How to edit crontab file present in the remote UNIX server using Python
I reached my quota. I found out after checking all the logs in the logbook, where I found the following message:
severity: "DEBUG"
textPayload: "Billing account not configured. External network is not accessible and quotas are severely limited. Configure billing account to remove these restrictions"
My Google Cloud Functions function should be repeated every 5 minutes. It only works during specific periods of the day (without my specifying that behavior in my settings). This is the code triggered by the cron job:
exports.fivemins_job = functions.pubsub.topic('fivemins-tick').onPublish((event) => {
  console.log("This job is ran every 5 minutes!")
});
cron.yaml:
cron:
- description: Push a "tick" onto pubsub every 5 minute
  url: /publish/fivemins-tick
  schedule: every 5 mins
package.json:
{
  "name": "functions",
  "description": "Cloud Functions for Firebase",
  "dependencies": {
    "@google-cloud/storage": "^0.4.0",
    "child-process-promise": "^2.2.0",
    "firebase-admin": "^4.1.2",
    "firebase-functions": "^0.5"
  },
  "private": true
}
I would like to have it running all day and night. Any more info I should provide?
Repetitive cloud function only works on specific periods during the day
You can write a bash script which executes your PHP script every 2 seconds for a defined number of iterations. And this bash script can be executed via a cronjob. Example:
#!/bin/bash
count=10
for i in `seq 1 $count`; do
  /bin/php /path/to/script.php &
  sleep 2
done
I have one PHP file that I want to run every 2 seconds for 1 minute, because on my server I can set the minimum cron interval to 1 minute only. So I made this script:
<?php
$start = microtime(true);
set_time_limit(60);
for ($i = 0; $i < 59; ++$i) {
    shell_exec('/usr/local/bin/php /usr/local/www/my_file.php');
    time_sleep_until($start + $i + 2);
}
?>
and the second option is:
<?php
for ($i = 0; $i <= 59; $i+=2) {
    shell_exec('/usr/local/bin/php /usr/local/www/my_file.php');
    sleep(2);
}
?>
But both of them are not working, because my script's execution time is 50 to 60 seconds, so it's not running every 2 seconds but every 50 to 60 seconds. Is there any solution to start a new script execution every 2 seconds? I don't have any idea; please help me.
run php script exactly every 2 second
Your example is incorrect; it seems to be a cross between simple routes and extended routes. To be able to use self.uri_for('home') you need to use named routes, i.e. extended routes:
app = webapp2.WSGIApplication([
    webapp2.Route(r'/', handler=HomePage, name='home'),
])
With that in place, self.uri_for('home') should work, assuming self is a webapp2.RequestHandler instance. The workaround just looks ugly, but that is pretty much what uri_for does under the hood as well:
def uri_for(self, _name, *args, **kwargs):
    """Returns a URI for a named :class:`Route`.

    .. seealso:: :meth:`Router.build`.
    """
    return self.app.router.build(self.request, _name, args, kwargs)
So I have the routes defined for my app inside main.py, something like:
app = webapp2.WSGIApplication([
    webapp2.Route('/', handler=HomePage, name="home")
])
Inside the cron job I can't seem to access the routes of the app; for example this doesn't work: self.uri_for('home') I found a snippet somewhere online that fixes it, but it's ugly to use: cls.app.router.add(r) where r would be an array of routes. Is there a way to have access to the app's routes inside an App Engine cron job?
How to access app's routes inside app engine cron job?
You could use some kind of a lock file. First file: at the beginning of the first script, create an empty file on the disk called lockfile.txt (or any other name); at the end, remove the file from the disk. Second file: check if the file called lockfile.txt exists; if not, run the code.
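A minimal sketch of that lock-file idea, with placeholder lock-file and wrapper-script names:
# update_wrapper.sh - run by the first cron job
touch /tmp/lockfile.txt
php do_update.php
rm -f /tmp/lockfile.txt

# insert_wrapper.sh - run by the second cron job
if [ ! -f /tmp/lockfile.txt ]; then
    php insert_process.php
fi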
I have a cron job that does some updates. After completion of that process I need to run another cron job to insert the updated records into another table. For this I am trying this script:
$output = shell_exec('ps -C php -f');
if (strpos($output, "php do_update.php")===false) {
    shell_exec('php insert_process.php');
}
But it is not doing anything, and I am getting "ps is not an internal/external command". So how can I find out whether the first cron job's execution is complete or not? Once that has executed, I will run the second cron for inserting data. Any help would be greatly appreciated.
How to find the first cron job process complete or still running?
I'm guessing you have the AWS credentials set up as environment variables in your EC2 user's account. The cron job won't have access to those environment variables, which is why you need to move them to ~/.aws/credentials. However, the much better option is to assign the permissions to the EC2 instance directly via an IAM role.
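If you stay with the credentials-file route rather than an IAM role, one approach (details depend on your setup) is to run aws configure as the same user that owns the crontab, so the keys land in that user's ~/.aws/credentials where the cron-launched CLI can find them:
# Run this interactively as the crontab's user; it writes ~/.aws/credentials and ~/.aws/config
aws configure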
I need to copy the logs present in an EC2 instance to an S3 bucket periodically, so I am using the Amazon CLI and crontab to schedule it. In crontab -e, I added the lines below:
* * * * * aws --version >> /tmp/out.txt 2&>1
* * * * * aws s3 cp log_file_path s3://bucket >> /tmp/out.txt 2&>1
The first statement I just used to check that the aws cli works fine, as I am new to this; it redirects the aws version to a file every minute. The first command works fine but the second doesn't. If I run the aws s3 cp command standalone, then it runs fine, i.e. it copies the log file to the s3 bucket, but it doesn't work with cron as mentioned above. Through the logs I got to know that I get the below error: Upload failed.... An error occurred (AccessDenied) when calling the PutObject operation: Access Denied. Can someone please point out how to make it work?
AWS - Copying EC2 logs to S3 through cron
To add something to the end of the file:
echo "USERNAME_DATA=’SUPERADMIN’" >> /usr/local/data/conf/info.conf
To configure a cron to do this automatically every day, you just need to edit a crontab file. To do this, execute crontab -e and then add a line inside the opened file:
23 17 * * * echo "USERNAME_DATA=’SUPERADMIN’" >> /usr/local/data/conf/info.conf
In this example it will be executed at 17:23 every day; it's easy to set a different schedule: https://corenominal.org/2016/05/12/howto-setup-a-crontab-file/
There is a file located at a specific location, and it reverts to normal every day, so I have to edit it again. I want to create a cron job so that the file gets edited every day; I just need to add a line to the end of it. The file is located at /usr/local/data/conf/info.conf and I want to add this line at the end of the file every day: USERNAME_DATA=’SUPERADMIN’ How can I set up a cron job to append this line to the file?
Create a cron job to append a line to a file every day
There is a typo in "0 0/10 * * *?" - it should be "0 0/10 * * * ?" Here is a useful resource: CronMaker is a utility which helps you build cron expressions. CronMaker uses the Quartz open source scheduler.
I'm trying to run a function every 10 minutes. According to the documentation, it is stated that e.g. ("0 0/5 * * *?") runs every 5 minutes after the program starts, but when I change 5 to 10 ("0 0/10 * * *?"), the function does not run every 10 minutes, e.g. (10:10 - 10:20 - 10:30). Is it really me who misunderstands cron expressions, or is the syntax wrong?
Scheduled Cron Expressions does not work as expected
Use command substitution, like this:
curl -X PUT "https://api.cloudflare.com/client/v4/zones/2wertyh/dns_records/23ertghj" \
  -H "X-Auth-Email:[email protected]" \
  -H "X-Auth-Key: 123ertgyh" \
  -H "Content-Type: application/json" \
  --data '{"type":"A","name":"qwsdfg.com.br","content":"'"$(curl ipinfo.io/ip)"'","ttl":1800,"proxied":false}'
The string argument for --data is composed of three concatenated parts, 'beginning'"$(curl ...)"'ending' (for more details see this answer).
I have one cron task to update my DDNS with my current IP address, and it does so through a cURL call. The problem is that one of the parameters to pass in the call is the CURRENT IP, and in order to discover it I need to do another cURL call. I would like to know if it is possible to nest the two cURL calls in one single script, so my cron task avoids extra scripts. Example: to get my current IP I use curl ipinfo.io/ip; to update my DDNS I need to do:
curl -X PUT "https://api.cloudflare.com/client/v4/zones/2wertyh/dns_records/23ertghj" \
  -H "X-Auth-Email:[email protected]" \
  -H "X-Auth-Key: 123ertgyh" \
  -H "Content-Type: application/json" \
  --data '{"type":"A","name":"qwsdfg.com.br","content":"MY-CURRENT-IP","ttl":1800,"proxied":false}'
How can I fit these two calls together to make my cron task?
Nested cURL call
It looks like you are trying to run your script using a directory, "/usr/local/bin". Presumably you either want to run the script using python:
python -q /home/bdmweath/public_html/scripts/my_script.py
or make the script executable and run it directly:
chmod +x my_script.py
...
/home/bdmweath/public_html/scripts/my_script.py
I'm new to the whole website world, so I apologize beforehand if this is a duplicate question of some kind. Important note: I am well aware this specific script won't update the graph. It's just a representation of the script's file path and the output that I want. The graph will update when my script is in place and the cron job is run properly. I have a script that I'm running, say this one:
#!/usr/bin/env python
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import numpy as np
t = np.arange(0.0, 2.0, 0.01)
s = 1 + np.sin(2*np.pi*t)
plt.plot(t, s)
plt.xlabel('time (s)')
plt.title('Sine Wave')
plt.grid(True)
plt.savefig("home/bdmweath/public_html/images/script_output/test.png")
I'm having the script run once an hour, and I have the email set up and it's sending emails. The email I'm getting is:
/usr/local/cpanel/bin/jailshell: /usr/local/bin: is a directory
The cron job I'm running is:
/usr/local/bin -q home/bdmweath/public_html/scripts/my_script.py
HTML code:
<div class = 'class_name'>
  <h2>Header Text</h2>
  <a href = 'images/script_output/test.png'><img src='images/script_output/test.png' style="width:20%;height:20%;"></a>
</div>
Can anyone explain what is going on and why it won't update?
Cron Job - Python matplotlib script
'>>' redirects output to a file, appending the redirected output at the end. So it should be in /var/log/test.log. Your error is probably in the cron settings. This link really helped me in the beginning.
Guys, I am testing whether cron runs correctly in my Docker Debian container. I've set up crontab:
* * * * * /bin/echo "it works!" >> test.log
But I can't find this file anywhere. I tried
* * * * * /bin/echo "it works!" >> /var/log/test.log
with no luck either. What path does the ">>" redirection write to?
What is crontab directory path?
Scheduling syntax is for recurring jobs - those that happen over and over again, until the end of time. For single-shot jobs, you enqueue them with an optional delay (which is what you need here). So you can just enqueue the whole batch of your jobs all at once, with 1-hour increments:
1.upto(50).each do |x|
  Resque.enqueue_in(x.hours, SomeJob)
end
I am using Resque for job scheduling. I want to run a job every 1 hour and for N times only. I also want to pass a count into the job as an argument. For example:
i=0
50.times do
  every 1.hour, roles: [:whenever_cron] do
    runner "Resque.enqueue(SomeJob, i+1)"
  end
end
How can I do this? Note: I don't want to run 50 jobs every hour; I want to run 1 job 50 times.
Run background-job every hour for n number of times(ruby on rails)
Yes, this should be possible. You can invoke the flow from an Azure Function. You will have to call the Logic App endpoint workflow from the Azure Function. To start with, look at this blog post: Invoking Flows from another Logic App. In your Azure Function you can use HttpWebRequest or other mechanisms to call the Logic App endpoint.
The time triggers within Logic Apps aren't really specific enough to meet my scheduling requirements, but using the scheduler looks overly expensive for our needs. I see that Functions can use CRON for timed triggering, so I wondered if Functions can actually be used to call Logic Apps, and hence have the Logic Apps triggered by the CRON time schedules?
Can you use an Azure Function to trigger an Azure Logic App?
There is nothing like this in the PrestaShop core or modules, but things can be done simply anyway: call a function in the module constructor, so it will be executed each time:
$this->mySuperCron();
Then store a time and just check the time before executing your request:
private function mySuperCron()
{
    $check_time = strtotime('now - 20 minutes');
    if ( (int) Configuration::get('MYSUPERMODULETIMER') < (int) $check_time ) {
        // Make your cron here by either calling functions here or do it with file_get_contents / curl or echo an ajax which will be executed in backoffice
        Configuration::updateValue('MYSUPERMODULETIMER', (int) $check_time);
    }
}
I am making a PrestaShop module in which I need to run a cron job every 20 minutes. I didn't find any hook for that. All I found is the "Cron task manager" module, but I don't want to use a module for that.
Which hook should be used to run a cron job in prestashop?
As deagh mentioned in the comments, you can use time to measure the execution of a command. You will need to wrap the command in parentheses in order for it to be treated as one command for the stdout:
(time wget -q https://anydomain.com/sendmail.php) &> /tmp/time.sendmail.log
This will log the output in the tmp file.
I am using an Ubuntu server and I am sending mails through a cronjob. There is a little problem: I want to know the execution time of the URL which is executed via the cronjob. For example:
*/5 * * * * wget -q https://anydomain.com/sendmail.php
Please tell me how I can find the execution time of sendmail.php. I even tried to read the logs of the cronjob, but I didn't find a proper answer.
URL execution time on linux server
Your issue arises not from Sidekiq but from Rails 3.2.13. #<=> does not handle the 'undefined method to_datetime' case. It was fixed in later versions of Rails; for example, in Rails 3.2.22.5:
def <=>(other)
  if other.kind_of?(Infinity)
    super
  elsif other.respond_to? :to_datetime
    super other.to_datetime
  else
    nil
  end
end
Therefore, the simplest way to solve your issue is to update your Rails version. If that is not an option, paste your code or rewrite #<=>.
I start Sidekiq with the command:
bundle exec sidekiq -e production -P /path/to/pid/file/tmp/pids/sidekiq.pid -L /path/to/log/file/shared/log/sidekiq.log --daemon
In the log there is an error:
2017-06-29T06:59:44.776Z 16181 TID-1jr7pg ERROR: CRON JOB: undefined method `to_datetime' for #<EtOrbi::EoTime:0x0000000a933848>
2017-06-29T06:59:44.776Z 16181 TID-1jr7pg ERROR: CRON JOB: /home/user/.rvm/gems/ruby-2.0.0-p247@script-admin/gems/activesupport-3.2.13/lib/active_support/core_ext/date_time/calculations.rb:141:in `<=>'
The error occurs while executing the method /home/user/.rvm/gems/ruby-2.0.0-p247@script-admin/gems/activesupport-3.2.13/lib/active_support/core_ext/date_time/calculations.rb:141:in <=>:
def <=> (other)
  super other.kind_of?(Infinity) ? other : other.to_datetime
end
What can be done about the problem? UPD: I updated the Rails version to 3.2.22.5 and there is a new error:
ERROR: CRON JOB: comparison of Time with EtOrbi::EoTime failed
ERROR: CRON JOB: /home/user/.rvm/gems/ruby-2.0.0-p247@script-admin/gems/sidekiq-cron-0.3.1/lib/sidekiq/cron/job.rb:434:in `<'
in this place:
def not_enqueued_after?(time)
  @last_enqueue_time.nil? || @last_enqueue_time < last_time(time)
end
Error "undefined method `to_datetime'" in sidekiq
Your script probably does not have execute permission. You can add it with: chmod +x /home/ubuntu/abc/abc/dev_cron.sh
I am trying to run a script from crontab, but it keeps telling me permission denied, even after I added a username. I got the error message in /var/mail/ubuntu. At first I had the crontab set up like this - crontab -e shows:
* * * * * /home/ubuntu/abc/abc/dev_cron.sh
I would get the below error message in /var/mail/ubuntu:
/bin/sh: 1: /home/ubuntu/abc/abc/dev_cron.sh: Permission denied
Then I changed the crontab -e to:
* * * * * ubuntu /home/ubuntu/abc/abc/dev_cron.sh
as I had read some other posts saying that where I typed ubuntu is the username, but then I would still get this error message:
/bin/sh: 1: ubuntu: not found
Then I tried changing ubuntu to sudo, and I would get this error message:
sudo: /home/ubuntu/abc/abc/dev_cron.sh: command not found
I have used ls -l and saw that the file dev_cron.sh does belong to ubuntu. Can someone please give me a hand - what am I doing wrong here? Thanks in advance.
Unable to setup crontab script
Nailed it! Instead of using 1 * * * * ./hello.py in crontab to set the cron running per minute, I rewrote the statement as 1 * * * * /usr/bin/python3 hello.py. This solved the problem!
I want to run a hello.py file which contains print("Hello World") using crontab. For that, my hello.py has this code:
#! /usr/bin/python3
print('Hello, world!')
And, in the same folder, I have used the crontab -e command to open crontab, and in order to execute this file every minute I have written:
1 * * * * ./hello.py
I have also set permissions for the file to be executable using chmod a+x hello.py. When I run /usr/bin/python3 hello.py it runs perfectly. Also, when I use only ./hello.py, the file runs. Why is it still not executed using crontab?
Why is my crontab -e not running my python script?
Yes, correct. But if you are covering the entire day, just use: schedule: every 10 minutes
I need to create a scheduled job; I am using Google App Engine. The requirement is that the cron job will execute every 10 minutes, like 0,10,20,30,40,50,60 of each hour. I read the documentation on the Google site at: https://cloud.google.com/appengine/docs/standard/php/config/cronref#schedule_format This is my config: schedule: every 10 minutes from 00:00 to 23:50 Is it correct for the requirement?
cron job on Google App Engine
You can cd in crontab the way you do it. Or you can call os.chdir() in your script. In the latter case you can write the directory in the script or pass it as a command line argument: /python_path/python script.py /folder/folder.
I need some help running a Python script from crontab. The script looks for subfolders from the current path and does something to them; it also extracts a zip file located in the same folder as the script into each found subfolder. When I go with cd /folder/folder and then python script.py, all is good. But when I run it with crontab, it runs in the user's home folder and not where the script is placed. To overcome this I placed in crontab something like this:
* * * * * cd /folder_of_script/ && /python_path/python script.py >> log.txt
and it works as needed but feels weird. Is there a better way of achieving this?
Running python script from crontab?
I don't know why you want it, but the following should work: H 17,18,19 * * 1-5
I want to know how to build periodically on Jenkins. For now, I have H 17 * * 1-5. My job builds at 17h every weekday, but I want it to build 3-4 times in the same night, for example: 1. 17h 2. 18h 3. 19h. I'm not sure I understand the syntax of cron. Can someone help me? Thanks.
How to build periodically on Jenkins?
First of all, you can add a cron only on a server, not on localhost. To add a cron in cPanel, follow the steps below. Step 1: First create a cron function which the server is going to hit, and get the full path, like http://fullpath. Step 2: Then go to cron jobs in cPanel and set the time when the cron will hit that route. To set the time you have to follow this: Minutes represents the minutes of a given hour, 0-59 respectively. Hours represents the hours of a given day, 0-23 respectively. Days represents the days of a given month, 1-31 respectively. Months represents the months of a given year, 1-12 respectively. Day of the Week represents the day of the week, Sunday through Saturday, numerically, as 0-6 respectively. Step 3: Then write the cron command, like curl http://fullpath. The example cron job above is set for 1 second, meaning the cron will hit that route every second. Like this you can set your cron on cPanel.
I am using Laravel 5.4 on Windows. The documentation does not say how to add cron entries to a server. I searched on YouTube, but didn't find any useful video. I need to learn how to add cron entries both on localhost and in cPanel.
Add cron entry to XAMPP server
No, the trigger can have only one schedule. One of the main reasons this is done is to prevent a situation where it is not clear to the scheduler how to resolve competition between conditions. Imagine you have a job with 2 intersecting schedules: let's say you want to run the job every 15 mins and every hour, and it takes up to 10 mins to execute. In this case, you would need to specify how you want to handle scenarios where: a job is executing, but the scheduler fires a new execution; a job should be fired by both schedules. To allow handling such cases, the trigger has attributes like Priority and Misfire Instructions.
I am trying to create a Windows service that executes twice a day, and I was successfully able to create it using two triggers added to a single job.
var job = JobBuilder.Create<Job>().StoreDurably().WithIdentity("Report_Name", "Report_Group").Build();
scheduler.AddJob(job, true);
var trigger_1 = TriggerBuilder.Create()
    .WithIdentity("Report_Name_1", "Report_Group_A")
    .StartNow()
    .WithCronSchedule(string.Format("0 {0} {1} ? * *", Utility.Schedule_StartTime_1.Minute, Utility.Schedule_StartTime_1.Hour)) //0 Min hour
    .ForJob(job)
    .Build();
var trigger_2 = TriggerBuilder.Create()
    .WithIdentity("Report_Name_2", "Report_Group_B")
    .StartNow()
    .WithCronSchedule(string.Format("0 {0} {1} ? * *", Utility.Schedule_StartTime_2.Minute, Utility.Schedule_StartTime_2.Hour)) //0 Min hour
    .ForJob(job)
    .Build();
scheduler.ScheduleJob(trigger_1);
scheduler.ScheduleJob(trigger_2);
scheduler.Start();
Can I use a single trigger to add multiple cron schedules?
Quartz scheduler with multiple cron schedules in a single trigger
I believe what you want is ScheduledExecutorService with its schedule method that accepts a delay parameter. That delay can be specified in different TimeUnit amounts such as nanoseconds, seconds, hours, etc. The idea is to calculate the difference between the desired date of execution and the current time in some time unit (seconds, for example). Here is a snippet that should give you a clue (Java 8).
public class App {
    public static void main(String[] args) {
        final Runnable jobToExecute = () -> System.out.println("Doing something on " + new Date());
        ScheduledExecutorService executorService = new ScheduledThreadPoolExecutor(1);
        ScheduledFuture future = executorService.schedule(jobToExecute, diffInSeconds(LocalDateTime.of(2017, 5, 30, 23, 54, 00)), TimeUnit.SECONDS);
    }

    private static long diffInSeconds(LocalDateTime dateTime) {
        return dateTime.toEpochSecond(ZoneOffset.UTC) - LocalDateTime.now().toEpochSecond(ZoneOffset.UTC);
    }
}
You can track the completion status of your job via the ScheduledFuture object returned by the ScheduledExecutorService::schedule method.
I have a Backend in Java. I want to create jobs that run only once on a certain date. I have seen examples in .Net I do not know if in my Java backend it's possible?This is .Net jobshttps://www.quartz-scheduler.net/for example...createJob(Date date) { ... start(date) { ... myMethod();usecreateJob(myDate); //30-05-2017 15:25
How to make a job run only once at a given date in Java
You can do this automatically by using a message queue system like RabbitMQ: every time a new feed is inserted into the database, you send a message to RabbitMQ, and a consumer then does the work that your cron job would otherwise do. But that may be an overly heavy solution for this case.
I'm wondering if it is possible to automatically create a cron job every time I get a link to an RSS feed on my website, to keep track of every post inside those RSS feeds. And I'm also wondering if there is a better solution than creating a cron job for each RSS feed. (I already know how to get posts from a feed; I just want to know if and how it's possible to do this with cron jobs.) Currently I have this table in the DB:
---------------------
| FEED_RSS          |
|-------------------|
| id | title | link |
|-------------------|
| 1  | feed1 | link1 |
| 2  | feed2 | link2 |
| 3  | feed3 | link3 |
| 4  | feed4 | link4 |
Those values are recorded from a text input filled in by users. So, is creating multiple cron jobs a good solution? And first of all, can I do that dynamically for each record of the database? If yes, how should I do that?
The best way to create cronjob automatically? (If it is possible)
Not sure why, but this worked. Rather than calling notify-send directly from the crontab, I added it to a script:
* * * * * export DISPLAY=:0.0 && /bin/sh /home/notifyCustom.sh
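For reference, a minimal sketch of what /home/notifyCustom.sh could contain (the script path and the test message are just the ones from this question; adapt them to your setup):
#!/bin/sh
export DISPLAY=:0.0
/usr/bin/notify-send "Hello world!"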
* * * * * export DISPLAY=:0.0 && usr/bin/notify-send "Hello world!"I added the above command to the crontab, notify-send is working from terminal but not from the cron. Also checked the logs, its working every minute but notifications are not being displayed.
notify-send not working from cron
According to the CronTrigger tutorial (http://www.quartz-scheduler.org/documentation/quartz-2.x/tutorials/tutorial-lesson-06.html): When using the ‘L’ option, it is important not to specify lists, or ranges of values, as you’ll get confusing/unexpected results. What you can do is to use 2 separate triggers that execute the same command:
0 0 0 15 * ? *
and
0 0 0 L * ? *
Hope this helps your problem.
Hi, I tried to set a Quartz job that will run on the 15th and the last day of the month. This is not working:
0 0 0 15,L * ? *
What did I do wrong?
quartz cron that will run on 15 and the last day of month
A Cron expression will perform all combinations, so you'll need to define two separate expressions.
0 24 13 ? * MON,WED *
1. Monday, May 1, 2017 1:24 PM
2. Wednesday, May 3, 2017 1:24 PM
0 34 15 ? * MON,WED *
1. Monday, May 1, 2017 3:34 PM
2. Wednesday, May 3, 2017 3:34 PM
In this blog post you can find an example of how to add multiple triggers with quartz-scheduler.
I just checked on http://www.cronmaker.com/ and tried to create a cron expression for the following scenario: run the job at two specific times, 1:24 PM and 3:34 PM, on Monday and Wednesday. I generated the following expression for that:
0 24,34 12,13 ? * MON,WED *
1. Monday, May 1, 2017 12:24 PM
2. Monday, May 1, 2017 12:34 PM
3. Monday, May 1, 2017 1:24 PM
4. Monday, May 1, 2017 1:34 PM
But I got the result above. The problem is that it runs 4 times a day, and I want it to run only two times. Is it possible to make a cron expression for this scenario?
Cron Expression : Run on specific time (1:30 pm , 15:24 pm) on MON and WED only
There are a couple of things that can keep files within your /etc/cron* directories from running (e.g. /etc/cron.daily):
Permissions. Make sure the files are executable (e.g. mode 0755), otherwise run-parts will skip them.
The filename must meet certain conditions. From the documentation: "...they must be entirely made up of letters, digits and can only contain the special signs, underscores ('_') and hyphens ('-'). Any file that does not conform to these requirements will not be executed by run-parts." Note that this excludes names containing a dot, such as backup.sh.
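Concretely (reusing the paths from the question; adjust as needed), renaming the script so its name has no dot and making it executable should let run-parts pick it up:
sudo cp /path/to/file/backup.sh /etc/cron.daily/backup
sudo chmod 755 /etc/cron.daily/backup
run-parts --test /etc/cron.daily
The last command should now list the new backup entry alongside apache2, apt and the others.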
I'm trying to add a daily cron job to backup a database. I'm able to do it manually by running sh /path/to/file/backup.sh, but when I place the file in the cron.daily directory, it doesn't run daily. To try and diagnose it, I created a test file in cron.daily called test just to see if it would run. When I ran run-parts --test /etc/cron.daily, I got the output:
/etc/cron.daily/apache2
/etc/cron.daily/apt
/etc/cron.daily/bsdmainutils
/etc/cron.daily/dpkg
/etc/cron.daily/etckeeper
/etc/cron.daily/logrotate, etc.
So then I tried copying the content of logrotate to a new file, atest, then ran run-parts again but with the same results. atest:
#!/bin/sh
test -x /usr/sbin/logrotate || exit 0
/usr/sbin/logrotate /etc/logrotate.conf
Is there something special I need to do to get cron to recognize a newly added task in cron.daily? This isn't unique to cron.daily; I've tried monthly, weekly, and hourly as well with the same results. I've also tried restarting cron without success. I'm running this on Debian 7.2.
Newly added jobs not running in cron.daily
Try this.
For weekdays (Monday to Friday):
*/30 08-17 * * 1-5 /path_of_file
For the weekend (Saturday, Sunday):
*/30 08-17 * * 0,6 /path_of_file
*/30 means every half hour, 08-17 covers the hours from 8 AM through the 5 PM hour, 1-5 is Monday to Friday, and 0,6 is Sunday and Saturday.
How can I set a cron job to run every 30 minutes between 8 AM and 5 PM on weekdays? Here is my code, but it seems it is not working:
*/30 8-16 * * 1,2,3,4,5 cd /root/Desktop; ./script.sh
Please help me to solve this.
How to set a cron job to run at an exact date and time frame?
I have one file with 2 functions, file1.py:
def something():
    print ('something')

def somethingElse():
    print ('something else')
and another file, file2.py:
import file1
file1.something()
You can then set up the cron job on file2.py.
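A minimal crontab entry to drive that wrapper every 15 minutes could then look like this (the interpreter path is the one from the question; the location of file2.py is a placeholder):
*/15 * * * * /usr/bin/python /path/to/my/file2.py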
There is .py file with multipe methods in it. I want to run a specific method from that file every 15 minutes.I can edit crontab on server and something like following :*/15 * * * * /usr/bin/python /path/to/my/file.pyBut this will run entire python file. How do I run only method of that file ?
Run only method from .py file using crontab
Cron uses a different shell (/bin/sh) from your login shell (/bin/bash). The bash shell has various different files that it uses to set things up (man bash will give you the full details). The best thing to do is not to try to get them to be the same (why does cron need PS1, etc.?), but rather to create a script that has everything that you need in a controlled way and have cron use that. If the environment that you want is in /home/me/setupenv.sh, then add the following to the cron script and it will run it:
. /home/me/setupenv.sh
Don't forget the leading dot, otherwise it will run the script in a different environment and the changes will be lost when the script ends.
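Putting that together, a sketch of the wrapper approach (the file names reuse the example above; real_work.sh and the 2 a.m. schedule are hypothetical placeholders):
#!/bin/sh
# /home/me/myjob.sh - the script cron actually calls
. /home/me/setupenv.sh
exec /home/me/real_work.sh
and the matching crontab line:
0 2 * * * /home/me/myjob.sh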
In bash login shell, there are dozens of environment variables exists, such as follow:HOSTNAME=myhost TERM=screen SHELL=/bin/bash HISTSIZE=1000 SSH_TTY=/dev/pts/20 LC_ALL=en_US.UTF-8 USER=user LD_LIBRARY_PATH=$:/usr:/usr/lib:/usr/local/lib:/lib:/usr/local/lib64 DRC_ROOT=/home/ds PATH=/usr/local/mysql/bin:/usr/local/mysql/bin:/usr/local/mysql/bin:/usr/local/mysql/bin:/usr/local/mysql/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin MAIL=/var/spool/mail/user PWD=/data/user JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera LANG=en_US.UTF-8 TMUX_PANE=%135 PS1=(dbrt_env) \[\e[37m\][\[\e[32m\]\u\[\e[32m\](\[\e[36m\]\[\e[37m\])\[\e[35m\]@\[\e[0m\]\h \[\e[33m\]\W\[\e[0m\]]\$ HISTCONTROL=ignoredups SHLVL=2 HOME=/data/user LOGNAME=user REALUSERNAME= CVS_RSH=ssh HISTTIMEFORMAT=%F %T G_BROKEN_FILENAMES=1 _=/bin/env .....but in crontab job, the envrioment variables is really few:SHELL=/bin/sh USER=user PATH=/usr/bin:/bin PWD=/data/user LANG=en_US.UTF-8 SHLVL=1 HOME=/data/user LOGNAME=user _=/usr/bin/envwhat's the designing intent of the difference?Why not make them the same?
Why is the login shell environment variables different from the cron environment variables?
There are many ways to approach this. Personally I would queue the emails on a schedule rather than adding them to the queue for later.So you run a scheduled task once a day (or hour, or minute) which runs a query to select which users require an email, then using that result set, you add a job to the queue for each result.This way, if a user unsubscribes, you don't have to worry about removing already queued jobs.Laravel offers quite a nice interface for creating scheduled jobs (https://laravel.com/docs/5.4/scheduling) which can then be called via a cronjob.
I want to send emails to various users based on the schedules they have set.I read aboutbeanstalkd,queuesandDelayed Message Queueingand for now it looks like fitting in:$when = Carbon::now()->addMinutes($minutes); // i can calculate minutes at this moment \Mail::to($user)->later($when, new \App\Mail\TestMail);But i'm not quite sure on few things:User can cancel a future schedule. In that case how do i cancel an email that's suppose to send in future. Can i set condition somewhere that gets checked before sending the actual email? Triedreturn falseonhandlemethod of\App\Mail\TestMailand it started throwing errorAm i using the right approach. I also read aboutSchedulerbut i don't get how i am going to cancel future emails(if they need to be)
Laravel - Send Mail in Future based on condition
Try this expression:
0 0/20 6-17 * * ?
It fires every 20 minutes from 6 AM to 5:40 PM (06:00 to 17:40).
I'm trying to write a Spring cron expression to have my code execute after a fixed interval of time and between a given interval of time. I would like the code to be executed after every 20 minutes and between 6.00am to 6.00pm that is during day time.Following is the expression for running the code every 20 min but i am not getting how to restrict it to run between a given interval of time (Can i restrict the schedular in cron expression or i will have to implement the logic in the code that is java class).<task:scheduled-tasks> <task:scheduled ref="commonSchedulerForSms" method="sendCommonSmsReport" cron="0 0/20 * * * ?" /> </task:scheduled-tasks>I am working on Spring VERSION 3.0, Servlet version 2.5 and Java version 1.6.Thanks in Advance.
How to set scheduler task in spring to run after a fixed interval of time and between a given interval of time
Suppose you want to run your bash script every day at 12:15 AM. Then add an entry to your crontab (crontab -e) like this:
15 0 * * * /home/your_bash_script.sh
(If you edit /etc/crontab instead, also add the user to run as before the command.) Just for additional information, the time fields in a cron entry are:
* * * * * <your-bash-script-path>
| | | | |
| | | | +---- Day of the Week (range: 0-6, 0 standing for Sunday)
| | | +------ Month of the Year (range: 1-12)
| | +-------- Day of the Month (range: 1-31)
| +---------- Hour (range: 0-23)
+------------ Minute (range: 0-59)
I have a script I want to run every day, so I have to use a crontab. How can I run the script using crontab? UPDATE: I use Ubuntu.
MongoDB - Run script with Crontab
Try this in your Python file:
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "django_project.settings")
import django
django.setup()
I also suggest you move your my_script file to your project root directory, where the manage.py file is. If that is not working, try like this:
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "django_project.django_project.settings")
import django
django.setup()
Well, I have a project structure like this one:my_project |-scripts | |- my_script.py | |-django_project |- myApp | |- models.py | |- ... |- django_project |- settings.py |- ...I run Django inside a virtualenv and inmy_script.pyI have to use some ofmyApp.modelsSo, here is how I did:my_script.py:#!/usr/bin/env python import django django.setup() from myApp.models import foo # do thingsSince I am inside a virtualenv, to makedjango.setup()work properly I set in my virtualenv ($VIRTUAL_ENV/bin/postactivate):export DJANGO_SETTINGS_MODULE = django_project.settingsand I addeddjango_projectto the path:$ workon my_virtualenv $ python -c "import sys; print sys.path" ['', '/my_project/django_project', ...]And that's all.If I activate my virtualenv and then I runmy_script.pyall works fine.But If I schedule a similarcronjob:00 00 * * * /.../.virtualenvs/my_virtualenv/bin/python /.../my_project/scripts/my_script.py >> /.../test/test.log 2>&1I get this error:django.core.exceptions.ImproperlyConfigured: Requested setting LOGGING_CONFIG, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.It seems likemy_virtualenvactivation settings are not properly loaded.Why does this happen, and how can I fix?
Cron job does not load virtualenv env var for working with django.setup()
Note that the pattern match test applies to the whole file name, starting from one of the start points named on the command line. It would only make sense to use an absolute path name here if the relevant start point is also an absolute path. This means that this command will never match anything:
find bar -path /foo/bar/myfile -print
You need to use the absolute path as the search base too, i.e. exchange the first . (the starting point of the search) for the same absolute path you use in the -path arguments.
find /usr -path "/usr/src/linux*" -prune -o -path "/usr/inclu*" -prune -o -name "*.txt" -print
This will list all *.txt files except the contents of any directory starting with /usr/src/linux* or /usr/inclu*.
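Applied to the command from the question, a sketch would be (the base directory is a placeholder for wherever the script actually searches):
/bin/find /path/to/base -not \( -path "/path/to/base/Ready" -prune \) -not \( -path "/path/to/base/Loading" -prune \) -not \( -path "/path/to/base/Backups" -prune \) -name "*.txt"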
I'm running a script to look through some folders for a file type, but I need to prune a few folders. The script works when I run it via PuTTy using relative filepaths, but when I add in the absolute file paths so I can run it as a cron task, it doesn't prune correctly.Here's my command:/bin/find . -not \( -path "./Ready" -prune \) -not \( -path "./Loading" -prune \) -not \( -path "./Backups" -prune \) -name "*.txt"However, when I replace "./" with the full path, it returns results for files in the folders it shouldn't be searching.Any ideas? Thanks in advance.
Bash: Prune find results using absolute paths
Read up on DirectoryIterator and Classes. Something like this should work (just the structure):
// main.php
<?php
require_once('jobHandler.php');
foreach (new DirectoryIterator('your/folder') as $fileInfo) {
    // Skip dot entries and anything that is not a folder
    if ($fileInfo->isDot() || ! $fileInfo->isDir()) continue;
    $worker = new jobHandler($fileInfo->getPathname());
    $worker->run();
}

// jobHandler.php
<?php
class jobHandler {
    public function __construct($folder) {
        // validation here
        $this->folder = $folder;
    }
    public function run() {
        echo $this->folder . PHP_EOL; // Do your work here.
    }
    private $folder = null;
}
And you can store job configuration in the customer folders.
I have set up a cronjob 'cron_parent.php' in my root directory. It works like this: check which customer folders exist (i.e. subdomains), then using foreach(), include the cron_child.php for each relevant subdomain. The actual work is done in each subdomain's cron_child.php. All subdomains contain identical php files and functions. So of course, I ran into trouble when using this on more than one subdomain, because:
Fatal error: Cannot redeclare fserror() (previously declared in /home/***hidden***/public_html/demo/dbconfig.php:28) in /home/***hidden***/public_html/dev/dbconfig.php on line 44
I realize that include() is probably not the right option here. Is there a way to run the cron_child.php's 'detached' from cron_parent.php? Edit: Added code from cron_parent.php:
<?php
date_default_timezone_set("Europe/Oslo");
header('Content-Type: text/html; charset=utf-8');
$root = dirname(dirname(__FILE__));
$directories = glob($root . '/*' , GLOB_ONLYDIR);
foreach($directories as $path) {
    if(is_file("$path/cron_child.php")) {
        include("$path/cron_child.php");
    }
}
For testing purposes, cron_child.php only contains this right now:
<?php
include "functions.php";
return false;
One cronjob for multiple subdomains
I see a couple of potential problems with this:
Is /home/tomato the home directory of the user that is running the cron job? If not, then you'll need to cd /home/tomato/bizzz.
You are also creating the directory bizzz and then running docker-compose up with nothing in it...?
Also, docker-compose may not be in the PATH for cron. For example, one way to ensure that it is, is to add PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin to the beginning of your cron job.
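A sketch of cron1.sh with those points addressed (it assumes docker-compose lives in /usr/local/bin and that /home/tomato/bizzz already contains a docker-compose.yml; both are assumptions, not facts from the question):
#!/bin/sh
PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin
cd /home/tomato/bizzz || exit 1
docker-compose up -d
Running with -d keeps the containers in the background so the cron job itself can finish.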
I am trying to run Docker container every day using a cron job.* * * * * /Desktop/cron1.shThis is my cron1.sh file:#!/bin/sh mkdir /home/tomato/bizzz #working cd bizzz #not working docker-compose up #not working
How to run "docker-compose up" command in .sh file?
It was actually due to TypeError in javascript. While your application is running with too many logs, you may never dig deep in your log files to look for a TypeError. During the TypeError, the node server doesn't crash but the request made on the port will not be served unless handled using some exception handler or other mechanism, which will result into too many requests being queued up unserved eventually causing the nodejs server port to stop listening for any new requests. So better look into your log files and search for any TypeError.
My NodeJS server stops listening to requests after some random interval of time(days). My node server is running on 3 load balancers with clusters on 4 nodes each. PM2 logs show that internal cron is still running and I don't think any request is left open that doen't responds.These are the logs from production server while hitting from inside:[root@app_inst_1 ~]# curl localhost:3000 curl: (7) couldn't connect to hostPM2 logs:0|server | No records found to reconcile 0|server | undefinedAfter pm2 restart:[root@app_inst_1 ~]# curl localhost:3000 <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"> <title>Redirect URL</title> <script type="text/javascript"> function postResponse(data) { document.write(data); CitrusResponse.loadWalletResponse(data); } var url = window.location.href; var index = url.indexOf("#"); if(index != -1){ var queryString = url.substring(index + 1); postResponse("#"+queryString); } </script> </head> <body> </body>
NodeJS server stops listening on port after sometime
Your script must be in the management/commands/ folder of the app. Script example:
# -*- coding: utf-8 -*-
# example.py
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    def handle(self, *args, **options):
        print("Hello world!")
To run the script:
./manage.py example
Here you have the documentation: https://docs.djangoproject.com/en/1.10/howto/custom-management-commands/
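A crontab entry to run such a management command could then be a sketch like this (the project path, interpreter and hourly schedule are placeholders, not values from the question):
0 * * * * cd /path/to/my_project && /usr/bin/python3 manage.py example >> /tmp/example_cron.log 2>&1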
I have problem with executing script in Django application. This script must serve for job in crontab. So ill provide example of my script:Specification: Python:3.5.x Django:1.10.5my_script.pyclass SayHello(object): def print_args(self, arg1, arg2): print (arg1, arg2) if __name__ == "__main__": foo = SayHello() foo.print_args(sys.argv[1], sys.argv[2])But the main problem is when i want to include models in this script i got error:ImportError: No module named "app"Folder structure:say_hello (main folder)->init.py-> my_script.pyHow to run script but don't get errors from import statement into this script. Any advice would be great.
Django execute class in script
This should do*/15 * * * * /command/to/execute >/dev/null 2>&1 5,20,35,50 * * * * /command/to/execute >/dev/null 2>&1 10,25,40,55 * * * * /command/to/execute >/dev/null 2>&1
So what I want to do is have 3 cronjobs running. But I want it to be like this:1st cronjob: Starts :00 every hour, and runs every 15 minute. So it will be :00, :15, :30, :45, :002nd cronjob: Starts :05 every hour, and runs every 15 minutes. So it will be: :05, :20, :35, :50, :053rd cronjob: Starts :10 every hour, and runs every 15 minutes. So it will be: :10, :25, :40, :55, :10What is the correct syntax for these 3? I only find out how to start them at :00, :05, :10 but how do i make them run every 15 minutes? Constantly, 24/7
3 cronjobs, 15 minute intervals, 5 minutes apart
I think you are scheduling the command for 12:16 :) When you run sudo crontab -e, you can see the following comment right before where you are supposed to write:
# m h dom mon dow command
That means minutes first, then hours. Maybe try:
12 16 * * * <command>
Finally, you could simplify the command to the following:
/home/<your username>/web/im2txt/im2txt/train.sh
TL;DR:
12 16 * * * /home/<your username>/web/im2txt/im2txt/train.sh
If I runcrontab -l 16 12 * * * cd ~/web/im2txt/im2txt && ./train.shI have also triedcd ~/web/im2txt/im2txt && ./train.shIt works.I'm waiting till my osx system-clock is 16:12 (I set it up at 16:11) I have tried it with the terminal open and closed. Nothing happens, no error no nothing. The shell-command outputs data to the terminal when you run it normally.What Can I do?
Crontab nothing happens?
I recently began using Amazon's linux distro on ec2 instances and after trying all kinds of things for cron all I needed was:
sudo service crond start
crontab -e
This allowed me to set a cron job as "ec2-user" without specifying the user or env variables or anything. For example, this single line worked:
0 12 * * * python3 /home/ec2-user/example.py
I also posted this here.
I have a simple Python script on EC2 to write a text to a file:with open("/home/ec2-user/python/test.txt", "a") as f: f.write("test\n\n")That python script is here:/home/ec2-user/python/write_file.pyWhen I run that script manually ('python /home/ec2-user/python/write_file.py') - new text is being written to a file ('/home/ec2-user/python/test.txt').When schedule same script using Cronjob - no data is being added to a file. My Cronjob looks like this:* * * * * python /home/ec2-user/python/write_file.pyI verify Cronjob is running and suspecting some ENV parameters are not the same during Cronjob execution (or something else is happening). What could be the case and how to fix it in a simple way?Thanks.
Running cronjob (Python write file) on EC2
You can set the path or use the full path of qsub as @Jens mentioned. However, this situation typically also means that your login shell is sourcing a file that is setting a bunch of environment variables for you (including SGE_ROOT). When your cronjob is run, that file is not being sourced. So in addition to fixing your path (or providing a full path for qsub), you also need to find that file; then at the top of your script you need to source that file (or else go through and manually set each relevant environment variable). On my system, that file is at /u/local/etc/profile.d/sge.sh (so I just put . /u/local/etc/profile.d/sge.sh at the top of my script), but the location of the file varies from setup to setup. You just need to hunt down which file is setting SGE_ROOT when you log in (as well as the several other relevant environment variables such as SGE_ARCH). (If you have a particularly hard time finding which file it is, you may find this answer useful: https://unix.stackexchange.com/a/154971/157777.)
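A sketch of what the top of the cron-driven script could then look like (the sge.sh path is the one from my system mentioned above, and job_script.sh is a hypothetical placeholder for whatever you actually submit):
#!/bin/sh
# Load the SGE environment (sets SGE_ROOT, SGE_ARCH and related variables)
. /u/local/etc/profile.d/sge.sh
qsub /path/to/job_script.sh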
My qsub resides in SGE. So while running sh script through cronjob, I am getting error: qsub: command not found. Currently set path is: PATH=/usr/bin:/bin
How to set PATH of SGE in cronjob
You should save the admin user's checkbox value on your server, for example in a MySQL table. Then the file that crontab executes should check this value first: depending on it, the script can either exit immediately or continue and run the queries.
I have two mysqli query which I want run on different time. I have read how can I do it fromhereBut I want only run if admin have set it automated from checkbox value. Anyone can please suggest me how can I do it in cpanel ?
Cron Job in Cpanel with PHP
The problem is you didn't chmod +x your scripts. That's needed to make them executable.
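For the files named in the question that would be something like the following (assuming they ended up under /etc/cron.hourly, as the crontab line in the question suggests):
sudo chmod +x /etc/cron.hourly/test-h /etc/cron.hourly/test-h2 /etc/cron.hourly/test-h3.sh
Note that run-parts may still skip test-h3.sh because of the dot in its name, so the other two are the better test cases.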
I use Let's Encrypt for getting certificates and I want to set up renewal for the certificates. So, I decided to check if cron works fine. I created three files in the daily/hourly folder:
test-h:
/sbin/ifconfig > /home/bitnami/ipt
test-h2:
#!/bin/bash
/sbin/ifconfig > /home/bitnami/ipt2
test-h3.sh:
#!/bin/bash
/sbin/ifconfig > /home/bitnami/ipt3
But I don't see my files in the home directory. How do I properly use cron.daily? PS. The cron service is started, I checked. I also restarted it just to make sure the changes were applied. The crontab file contains a record for cron.hourly:
17 * * * * root cd / && run-parts --report /etc/cron.hourly
I am not a Linux guy, so if possible please give me a detailed answer.
Run cron job hourly
This problem could be caused either by 1. the server configuration or by 2. your Python code. For point 2, to be sure to exclude your code as the source of the error, try this:
import sys
import MySQLdb

def dbconnect():
    try:
        db = MySQLdb.connect(
            host='localhost',
            user='root',
            passwd='XXX',
            db='myDB'
        )
    except Exception as e:
        sys.exit(e)
    return db

print dbconnect()
This code runs with cron on a stock RHEL server:
* * * * * root /var/www/html/myApp/stopClock/stopClock.py
If this does not work, and you get the same error, the problem is your server cron config:
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/var/www/html/myApp
This is what I use. And of course the shebang in my first line of code:
#!/usr/bin/env python
I have a script that connects to a MySQL DB viapyMySQL.It works like a charm when I execute it manually from the console, but gives this output when I run this cronjob:@reboot sudo python3 /var/www/html/ls/src/AppBundle/Command/crawl.py true > /tmp/listener.log 2>&1Result:Traceback (most recent call last): File "/usr/local/lib/python3.5/dist-packages/pymysql/connections.py", line 890, in connect (self.host, self.port), self.connect_timeout) File "/usr/lib/python3.5/socket.py", line 711, in create_connection raise err File "/usr/lib/python3.5/socket.py", line 702, in create_connection sock.connect(sa) ConnectionRefusedError: [Errno 111] Connection refusedWhy is that? I followed all of thishttps://stackoverflow.com/a/15684341/1092632w/out any success.Any hint appreciated!EditThings I tried:connect = pymysql.connect(host=constants.HOST, user=constants.USERNAME, passwd=constants.PASSWORD, db=constants.DATABASE, charset='utf8mb4', port=constants.PORT, unix_socket=constants.SOCKET, cursorclass=pymysql.cursors.DictCursor)constants.py# MySQL # HOST = '127.0.0.1' // Localhost, 127.0.0.1 and public IP of Server (having bind to 0.0.0.0 USERNAME = 'admin' PASSWORD = 'XXX' DATABASE = 'ls_base' PORT = '3306' // With and without '' SOCKET = '/var/run/mysqld/mysqld.sock' // File exists
PyMySQL does not connect from Cron but from Console
Double check the server timezone. It probably runs, just at a different timezone from yours. Better use this example to check if it's working properly; it prints every second, so a timezone difference doesn't matter.
var CronJob = require('cron').CronJob;
new CronJob('* * * * * *', function() {
  console.log('You will see this message every second');
}, null, true, 'America/Los_Angeles');
I am using this cron job package:https://www.npmjs.com/package/cronIt works fine on my laptop (executes events at certain times properly). However, the cron jobs won't run on my AWS Ubuntu server. Does anyone know if there's additional configuration I need to make it work on AWS Ubuntu? Here's my code:var CronJob = require('cron').CronJob; //Server app.listen(process.env.APP_PORT, function() { var job = new CronJob('0 35 0 * * *', function() { console.log('job runningggg'); }, function () { console.log('job done!'); }, true ); job.start();
cron job not running on aws + ubuntu + node.js
From the docs: "You can use gcloud app deploy cron.yaml to upload cron jobs."
I am trying to have a php script in my google app engine by run every 1 minute using theCron.yamlprocedure.My cron.yaml file looks like belowcron: - description: "ping apple apns service" url: /folder/ios_push_notification schedule: every 1 minutesand myapp.yamlfile is as follows:runtime: php55 api_version: 1 handlers: - url: /(.+\.php)$ script: \1 - url: /folder/ios_push_notification script: ios_push_notification.phpI upload all my files using the google cloud shell downloaded to my computer with thegcloud app deploycommand.To clarify, all my files work. I can go to the script through the url on a browser. The only part that isn't working is the script isn't being run every 15 seconds.
Cron.yaml not creating re-occuring task
You will need to source the HAWQ environment in your script if you run it from cron.
#!/bin/bash
# Source hawq binaries
. /usr/local/hawq/greenplum_path.sh  # Change to your exact binaries location
# Location to place backup.
backup_dir="/home/backup/"
When I am trying to backup PIVOTAL HAWQ database using shell script.Getting error :/home/gpadmin/backup_db.sh: line 12: pg_dump: command not foundInput shell script:backup_db.sh#!/bin/bash # Location to place backup. backup_dir="/home/backup/" #String to append at the name of the backup files backup_date=`date +%d-%m-%Y` #Numbers of days we want to keep copy databases number_of_days=7 databases=(prod test gpadmin) for i in ${databases[@]}; do if [ "$i" != "template0" ] && [ "$i" != "template1" ]; then echo Dumping $i to $backup_dir$i\_$backup_date pg_dump $i|gzip > $backup_dir$i\_$backup_date.gz fi done find $backup_dir -type f -prune -mtime +$number_of_days -exec rm -f {} \;CRONTAB :ENTRY FOR SHELL SCRIPT - */5 * * * * /home/gpadmin/backup_db.sh > /tmp/bkp.logWhen running the shell manually dumping the data. But at the same time not working via crontab which runs every 5 minute.Any help on it would be much appreciated.
PIVOTAL HAWQ Backup - shell script error
The error pointed out in the comment section is resolved by using the split() function in the following code:
from datetime import datetime, timedelta
from pandas import DataFrame
import pandas as pd
from io import StringIO

starttime_str = str(datetime.today()).replace("-", ":")
arr = starttime_str.split(':')
starttime = datetime(int(arr[0]), int(arr[1]), int(arr[2].split(' ')[0]), int(arr[2].split(' ')[1]), int(arr[3]), int(float(arr[4])))
I am trying to edit my python script so that it will pull data from a database daily. Currently my script asks for a manual (raw) input of the date in the format: YYY:MM:DD:HH:MM:SS.Butwhat I really need is for the start time to be calculated based on the current date.This is what my script looks like now:from datetime import datetime, timedelta from pandas import DataFrame import pandas as pd from io import StringIO starttime_str = raw_input("enter time:") # example 2016:10:18:00:00:00 arr = starttime_str.split(':') starttime = datetime(int(arr[0]), int(arr[1]), int(arr[2]), int(arr[3]), int(arr[4]), int(arr[5]))But what I really need for thestarttime_strto be the computer's date. So perhaps I should write a function to find the date and then calculate the time stamp? I will update the cron entry so that the script runs daily & automatically. I know I should use something likepd.tslib.Timestamp.now()but I don't need the whole string.
How to write function to calculate current date as time input for python script
logger -s sends a copy of the message to stderr, not stdout. Also, you can pass the message as an argument, rather than via stdin. Try this:
logger -s "Net script" 2>> /Library/Logs/netlog.log
I have a bash script, that runs just fine from the command line. After adding it to the root users crontab (sudo crontab -e), I find it does not run. Here is the cron task:0,15,30,45 * * * * /Users/lorenzot/Documents/scripts/restart-net.shHere is the script:#!/bin/bash echo "Net script" | logger -s >> /Library/Logs/netlog.log # Ping twice just to be sure /sbin/ping -c 2 8.8.8.8 /sbin/ping -c 2 8.8.8.8 if [ $? -ge 1 ]; then echo "Network down :(" ifconfig en1 down ifconfig en1 up exit 1 else echo "Network up! :)" exit 0 fiThe script is owned by root and of course, it is executable (766) and it does exist at the correct path.I'm not seeing an entry in the log file, but I'm not sure if this is the correct way of writing to a log file. I've tried a few different variations including:syslog -s -k Facility com.apple.console \ Level Error \ Sender restartscript \ Message "Restart network script run"But nothing is written to any log. Nevertheless, I would expect to see a log entry for the cron task having executed. Any ideas? Thanks
OSX bash script does not run from cron
Rather than using wget to fetch your file from a web server, you should execute the command directly from the command line by invoking PHP on your codeigniter's index.php file:php /path/to/your/project/html/index.php cron_jobs/cron_job_TESTIf you execute your codeigniter app this way, you should be able to see the output and also the errors that might bubble up during its execution.I'd also suggest writing a log file from your cron method so you can output values and, if you are clever, be sure which branches of your code are executing.EDIT: you might also alter your cron job, whichever method you choose, to route the output of the cron job into a file so you might have some clue about what went down:wget `http://www.domain/APP/index.php?/cron_jobs/cron_job_TEST` > /path/to/file.txtEDIT: I think you can also route both stderr and stdout to the same file like sowget `http://www.domain/APP/index.php?/cron_jobs/cron_job_TEST` &> /path/to/file.txtObviously, the user that owns this cron job must have permission to write that output file if the file already exists or must have permission to write the directory if the file does not exist.Once the cron job has run, you might be able to look in this file and get a better idea of what transpired.
My cron job is triggering properly from a godaddy shared server:wget `http://www.domain/APP/index.php?/cron_jobs/cron_job_TEST`There are no errors returned in the cron result:HTTP request sent, awaiting response... 200 OK Length: unspecified [text/html] Saving to: `index.php?%2Fcron_jobs%2Fcron_job_TEST'I set the test to write to a text file - and also to write to a database table.When I test this - by visiting the URL -http://www.domain/APP/index.php?/cron_jobs/cron_job_TESTThe text file and table are written to properly.But when the cron runs - there is no result to the text file or the database table.CONTROLLER/FUNTION 'cron_jobs/cron_job_TEST':// CRON_1 public function cron_job_TEST(){ file_put_contents("test_cron_job.txt", "cron_job_TEST-> ".date('l jS \of F Y h:i:s A') . "\n", FILE_APPEND); $this->db->query("INSERT INTO `TEST_CRON` (`ID`, `DATE`) VALUES ('', NOW());"); }// END CRON_1I have also tried the/web/cgi-bin/php5 $HOME/html/methods without luck. I thought the point of the wget was that it would behave just as though you are visiting the URL.What could be going wrong here?How do I debug this?
How to debug cron job - codeigniter 3
For this kind of job, you should use Laravel Task Scheduling. This gives you the option to execute a script every hour (or any other interval you choose).
$schedule->call(function () {
    // Your code
})->hourly();
However, you need to be able to add a cron job on the server where your website is hosted.
I read here :https://laravel.com/docs/5.3/queues#running-the-queue-workerI wanted to create a cron job using the tutorial, but I am confused how to implement itMy case is like this :For example, I have table order. Table order has fields like this :The status, int (10)checkout_at, datetimecanceled_at, datetimeI only mention a few fieldnote:status = 1 -> Receivedstatus = 2 -> Canceledstatus = 3 -> Waitng For PaymentI want to make logic like this:If the buyer did not paid the order in 2 hour after checkout_at, change the status to "canceled" and insert value "canceled_at"I create a function like this :public function cron_job() { $users = DB::table('orders') ->select('*') ->first(); $checkout_at = $users->checkout_at; $after = strtotime("+2 hours", strtotime($checkout_at)); if($checkout_at > $after) { DB::table('orders') ->where('id', $users->id) ->update(['status ' => 2, 'canceled_at' => date("Y-m-d H:i:s")]); } }How do I create a cron job to call the function by implementing the above tutorial?
How can I do cron job with Queue Worker? (laravel 5.3)
Add this to your crontab:
0 1 * * * /home/metrics.sh
Change the location to wherever your metrics.sh actually lives.
I have a ridiculously simple shell script, nothing more than a few instructions to run some php files ...#!/bin/bash clear cd /home/************** // Just for privacy here php cron-cpt.php php cron-lvt.php php cron-plots.php php cron-m.php php cron-a.phpThe script is called metrics.sh which is chmod'd and just sits in my local binary folder.If I run the script from the command line, it works perfectly.If I add the same script to the cron tab to run once a day, it runs over and over. I assumed the cron was the same as invoking it manually from the command line?I'm using the same user to invoke in cron as logged on cmd line and have tried as root and a standard user, but the same results prevail.Google has not been helpful with this. Any suggestions?
Shell script runs php files over and over again
You can do it in many ways.
Just before the cron line, set the PATH explicitly (crontab environment lines are not expanded, so list the full value rather than referencing $PATH):
PATH=/usr/bin:/bin:/full/path/to/oracle/bin
Or on the cron line itself:
00 15 * * * PATH=$PATH:/full/path/to/oracle/bin /u01/test.sh
Or let your script test.sh source another shared script that sets up your Oracle environment:
source /path/to/oracle_env.sh
I prefer the third method because it is very flexible and it helps keep the crontab uncluttered. .bash_profile should be meant for interactive shells only - it is not good to share it with scheduled scripts, especially in production.
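A sketch of that third option (the ORACLE_HOME value is only an example; use the path of your own installation):
# /path/to/oracle_env.sh
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH
and at the top of test.sh:
#!/bin/bash
source /path/to/oracle_env.sh
sqlplus /nolog @/u01/conectar.sql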
This is the case, I made 3 files to execute backup database command inrmantest.sh:#!/bin/bash sqlplus /nolog @/u01/conectar.sqlconectar.sql:connect sys/manager as sysdba ho rman target mydatabase/mypassword @/u01/backup.shbackup.sh:#!/bin/bash RUN {backup database;}and then I did all thechmod u+xfor the files to make them executable, thenexport EDITOR=nanoto change the cron editor.when I go tocrontab -e i put00 15 * * * /u01/test.shIf I clic this test.sh manually, the operation runs normally, but then in the crontab I get the "you got a mail" thing with this messageFrom[email protected]Thu Dec 22 16:20:01 2016 Return-Path: X-Original-To: oracle Delivered-To:[email protected]Received: by localhost.localdomain (Postfix, from userid 500) id 956CD41D4B; Thu, 22 Dec 2016 16:20:01 -0400 (AST) From:[email protected](Cron Daemon) To:[email protected]Subject: Cron /u01/test.sh Content-Type: text/plain; charset=UTF-8 Auto-Submitted: auto-generated X-Cron-Env: X-Cron-Env: X-Cron-Env: X-Cron-Env: X-Cron-Env: Message-Id: <[email protected]> Date: Thu, 22 Dec 2016 16:20:01 -0400 (AST) /u01/test.sh: line 3: sqlplus: command not found"Please can you remake the script or the crontab for me? If you can answer with the exactly modifications I would appreciate it, I'm not an expert in this environment so a generalknowledge neededanswer will leave me the same, thanks.
Sqlplus command not found when running from crontab
Here is a link to the post that solved it for me: http://support.hostgator.com/articles/how-to-replace-wordpress-cron-with-a-real-cron-job
Takeaways: You seem to have prepared everything correctly - your code schedules the execution of the do_this_hourly function every hour. However, due to define('DISABLE_WP_CRON', 'true'); in wp-config.php the function is only scheduled and never executed unless you make a request to http://yourwebsite.com/wp-cron.php?doing_wp_cron yourself. The only thing left to do is to set up a system cron. If you are on a Unix-based system (Linux/Mac), then try crontab -e from the command line on your server, and add a line like this (here scheduled every 15 minutes):
*/15 * * * * wget -q -O - http://yourwebsite.com/wp-cron.php?doing_wp_cron >/dev/null 2>&1
Don't forget to replace http://yourwebsite.com with whatever your website domain is. Good luck!
I have donedefine('DISABLE_WP_CRON', 'true');inwp-config.php.I set up the cron path but I actually do not know how to write code so that calling that URL will execute my code. I want to write real cron job over WordPress cron job.I tried with this infunction.phpbut it did not work:if (! wp_next_scheduled ( 'my_hourly_event' )) { wp_schedule_event(time(), 'hourly', 'my_hourly_event'); } add_action('my_hourly_event', 'do_this_hourly'); function do_this_hourly() { // do something every hour }
How to write real cron job in wordpress?
To use Google's cron scheduler, you will have to pay for the app engine running 24x7. Whereas Azure Scheduler is a true microservice and you only pay based on number of jobs/job collections, not the underlying resources consumed.
I want to trigger a https endpoint every 1 minute I was using cron-job.org but it is not that reliable and goes down often. I have looked at 2 options Microsoft azure scheduler and Google app engine cron scheduler. Microsoft schedulerpricingis very clear, however, I dont understand how to setup googlecron joband pricing to run the cron job every minute.
Google app engine cron job scheduling setup and pricing
#!/bin/sh
cd /path/of/the/folder/with/scripts
/usr/local/bin/aws ec2 stop-instances --instance-ids <id-of-the-instance>
Include the full path of aws (/usr/local/bin/aws) and it works.
Before I proceed, please let me tell that I tried all methods mentioned at stackoverflow and other forums but nothing worked on my CentOS 6.8 server.Here is what I have written in crontab00 5 * * * /usr/bin/aws /var/www/html/james/crons/s3_downloader.shAnd s3_downloader.sh file full content is:#!/bin/bash aws s3 sync "s3://my_bucket/my_folder/" "/var/www/html/james/downloads/my_folder/";But nothing is working when cron tab runs it. However everything works fine when I run it via command line screen on server.My server has installed the AWS at path (using ROOT user):/usr/bin/aws(usingwhich aws)Here is the methods I have tried (but nothing worked for me):-->Changed the path for aws in file contents:#!/usr/bin/aws aws s3 sync "s3://my_bucket/my_folder/" "/var/www/html/james/downloads/my_folder/";--> Did export settings on ROOT consoleexport AWS_CONFIG_FILE="/root/.aws/config" export AWS_ACCESS_KEY_ID=XXXX export AWS_SECRET_ACCESS_KEY=YYYYEdit: When I logged the response from crontab to a logusage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters] To see help text, you can run: aws help aws <command> help aws <command> <subcommand> help aws: error: argument command: Invalid choice, valid choices are:Here is full response:http://pastebin.com/XAKQUVzTEdit 2After more debugging, I can see the error coming out (in cron log) is:env: /var/www/html/james/crons/s3_downloader.sh: Permission denied
AWS commands not getting executed on CRONTAB
With more research, I found a solution, but I'm not proud of it:
0 1 19-25 * 5
Hope it will help. Thanks.
I'm looking for a CRON expression which can find the first Friday after the third Monday of the month.It's easy to find the third Monday (5 9 * * 1 [date +\%d-le 7 ]) but i want the first friday after.Thanks for your help.
Cron Expression : First Friday after the third Monday of the month
Your syntax is incorrect. Please use the following code:
#every minute
* * * * * wget -O - -q "http://example.com/cron/test1.php" >/dev/null 2>&1
#every 15 minutes
*/15 * * * * wget -O - -q "http://example.com/cron/test2.php" >/dev/null 2>&1
You can use online crontab generators like http://www.crontab-generator.org/
In my cron job file I have two cronjobs defined:#Yo1 MAILTO="[email protected]" *1****wget -O - -q "http://example.com/cron/test1.php">/dev/null 2>&1 #Yo1 MAILTO="[email protected]" *15****wget -O - -q "http://example.com/cron/test2.php">/dev/null 2>&1The PHP files are simple just sending mails with different subjects.The issue is that both cronjobs are running on the same time every minute, but as you can see I want them to run on different times. First - every minute, second - every 15 minutes.Can you help me with this. I can't figure out whats wrong.
Cron Jobs Run At Same Time
The sequence is minute, hour, day of month, month, day of week, [user], command, so you have to put:
*/5 0,20,21,22,23 * * * user /path/to/command
I want to have cron run a task every 5th minute during hours 20-24, but I also need it to run once at 00:15:00. How do I accomplish this?
* */5 20,21,22,23,24 * * *
Crontab schedule every hour + custom rule
Using the full path is definitely better than first using cd. To get the result of the cron job, you could just redirect the output to a file like this:
59 * * * * /home/sansal/Scripts/usbreset /dev/bus/usb/002/003 >> /home/sansal/usbreset.log 2>&1
I need to run a task every hour . I first change directory to the path where script is and then operate that script. So I try to use a cron job as :59 * * * * cd /home/sansal/Scripts && sudo ./usbreset /dev/bus/usb/002/003I added that line to crontab. But I cant make sure if it is true. And I dont see any output in terminal about that.
Run two commands in cron
What you can do is add the following line to your .bashrc file in your home directory:
export PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:$HOME/anaconda/bin
Then you can have the following entry in crontab:
* * * * * source ~/.bashrc; sh run_example.sh
This line will execute your .bashrc file first, which will set the PATH value, and then it will execute run_example.sh. Alternatively, you can set the PATH in run_example.sh only:
export PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:$HOME/anaconda/bin
rm test.txt
~/Desktop/spark-2.0.0/bin/spark-submit \
    --master local[8] \
    --driver-memory 4g \
    --executor-memory 4g \
    example.py
First, I assume that we haveSPARK_HOMEset up, in my case it's at~/Desktop/spark-2.0.0. Basically, I want to run my PySpark script using Cronjob (e.g.crontab -e). My question is how to add environment path to make Spark script works with Cronjob. Here is my sample script,example.pyimport os from pyspark import SparkConf, SparkContext # Configure the environment if 'SPARK_HOME' not in os.environ: os.environ['SPARK_HOME'] = '~/Desktop/spark-2.0.0' conf = SparkConf().setAppName('example').setMaster('local[8]') sc = SparkContext(conf=conf) if __name__ == '__main__': ls = range(100) ls_rdd = sc.parallelize(ls, numSlices=10) ls_out = ls_rdd.map(lambda x: x+1).collect() f = open('test.txt', 'w') for item in ls_out: f.write("%s\n" % item) # save list to test.txtMy bash script inrun_example.shis as followsrm test.txt ~/Desktop/spark-2.0.0/bin/spark-submit \ --master local[8] \ --driver-memory 4g \ --executor-memory 4g \ example.pyHere, I want to runrun_example.shevery minutes usingcrontab. However, I don't know how to custom path when I runcrontab -e. So far, I only see thisGitbook link. I have something like this in my Cronjob editor that doesn't run my code yet.#!/bin/bash # add path to cron (this line is the one I don't know) PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:$HOME/anaconda/bin # run script every minutes * * * * * source run_example.shThanks in advance!
Running PySpark using Cronjob (crontab)
Got it working! Use the path:
import sys
sys.path.insert(0, 'lib')
Additionally, you need to add protobuf to the requirements:
protobuf==3.1.0.post1
Create __init__.py in the google folder:
# this is a namespace package
try:
    import pkg_resources
    pkg_resources.declare_namespace(__name__)
except ImportError:
    import pkgutil
    __path__ = pkgutil.extend_path(__path__, __name__)
Also use pip install -t lib --upgrade protobuf. gcloud==0.18.1 was used. Sorry for the late post.
app-engine fails to import gcloud used gcloud app deploy app.yaml \cron.yaml to deploy on google app engineopened on browser and get:Traceback (most recent call last): File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 240, in Handle handler = _config_handle.add_wsgi_middleware(self._LoadHandler()) File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 299, in _LoadHandler handler, path, err = LoadObject(self._handler) File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 85, in LoadObject obj = __import__(path[0]) File "/base/data/home/apps/s~gcp-project-01/20160916t160552.395688991947248655/main.py", line 18, in <module> import update_datastore as ud File "/base/data/home/apps/s~vehicle-monitors-api/20160916t160552.395688991947248655/update_datastore.py", line 20, in <module> from gcloud import datastore, logging ImportError: No module named gcloudThe app.yaml file:runtime: python27 api_version: 1 threadsafe: true handlers: - url: / script: main login: adminThe cron.yaml file:cron: - description: run main app url: / target: main schedule: every 2 minutesthe requirements.txt file:gcloud==0.14.0
google-app-engine fails to run Cron job and gives an ImportError: No module named gcloud
I solved the issue by creating a PHP file that is loaded on reboot; it then does its work and redirects back to where it needs to go.
Basic information about my system: I have a music system where people can schedule songs to start and end at a specific time.OS: Arch linuxIt sets two crons at the moment. One lets say at 1.50 (start time with a command like "play etc") and another set at 3.20 (end time with a command like "end etc").My setup works perfectly and i can end delete schedules etc etc but i now noticed an issue! If i set the above times and turn the system off (My system is a raspberry pi) and turn back on at lets say 2.00 and i missed the 1.50 deadline, the music doesnt start (obviously) and i want to try make it so no matter what time i turn it on within a range lets say: 1.50 - 3.20 it will start the play command. But it will run the command once!I looked around and the commands i got was like:0 1.50-3.20/2 * * * your_command.shBut thats to run every 2 hours. I want it to run once only between these times? Thanks!
How do i activate cron command once within specific time frame?
Try installing the cronR package.Once you do this, you should be able to navigate to tools > Addins where you can execute the package. It will bring up a scheduler that will allow you to schedule a time for your script to run.If you have permission issues, go to System preferences > security > Privacy. Click the 'full disk access' and grant RStudio / R access. This should allow you to schedule jobs to run in the future.
I would like to set up a crontab thatruns the emailSender.R script daily at 5pm Monday to Friday.The script of emailSender.R is as follows:library(rmarkdown) rmarkdown::render("htmlmarkdown.Rmd") library(gmailR) gmailR::gmail( to =c("[email protected]"), subject = "Subject", message = "Message", username = "[email protected]", password = "password", attachment = "htmlmarkdown.html" )I then open up terminal to set up the crontab by first typing crontab -e.Then a window pops up where I try to set up my cronjob using the following code.0 17 * * * Rscript /Users/username/emailSender.RUnfortunately, emailSender.R doesn't run as scheduled.Would greatly appreciate any help on getting a crontab to schedule my R scriptEDIT: After going back to my terminal and typing Rscript I am prompted:-bash: Rscript: command not foundPerhaps I have to set Rscript in my PATH before cron can set-up the task. Unsure how to do that despite searching extensively.
Running R script daily using crontab in OSX
First of all, test is a bad name for a function, as almost all shells have a test builtin and there is also an external test command available on almost all systems. Now, when you run something in cron, unlike the start of a login and/or interactive shell session, no session startup script is read, hence the function defined in ~/.bash_profile (sourced while starting a login session) is not available. Note that many systems do not use bash to run cron scripts; for example, Ubuntu uses dash. Anyway, the test you are executing in cron is presumably that shell's builtin test command, which will return exit code 1 without any argument.
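One way around this, sketched under the assumption that the function really is defined in ~/.bash_profile as described, is to have cron start bash as a login shell so that file gets read before the function is called (the log file name here is a simplified placeholder):
* * * * * /bin/bash -l -c 'test >> "$HOME/test_cron.log" 2>&1'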
I have a simple python script,test.py, which prints the date and time and then raises an error.I have a bash function defined in.bash_profileand namedtest(), which calls the script with$ python3 ~/test.pyFinally, I have a cron line set to call thetest()bash function once a minute for testing withtest >> ~/$(date +\%Y-\%m-\%d_\%H:\%M:\%S).log 2>&1When I run the python script or the bash function, I correctly get both the print and the error to the terminal. When cron calls the python script directory, it logs correctly. But when cron calls the bash function, nothing is written to the log file.QuestionHow do I correctly direct the output of the python script to the log file when cron calls the bash function?
Directing output from cron (calls) bash (calls) python3
The program executed by cron does not have an active window, so you would need to explicitly specify which window you want the keystroke to be sent to using the --window option. You can get the window id of your currently active window with xdotool getactivewindow and then use that number in an xdotool command. Or you can use xdotool search with various options to find the window you want to direct the keystroke to. Read man xdotool for the various search options. (You can do that in a single command: xdotool search --name Foo key F5 will send F5 to a window with Foo in its name.) But that will only work if the indicated window accepts the events, and many windows don't.
I type crontab -e; my crontab looks like:
*/1 * * * * /home/sara/Desktop/kioskscripts/reloadpage.sh >> /home/sara/Desktop/kioskscripts/logfile.log
The logfile is created in /kioskscripts but remains empty. reloadpage.sh looks like this:
#!/bin/bash
sleep 5
/usr/bin/xdotool key F5
sh reloadpage.sh works as expected and simulates F5 being pressed 5 seconds after execution.
Bash script executes fine on its own, but not with cron
I think this should work:
0/10 17-19 * * * <cmd>
or:
0/10 17,18,19 * * * <cmd>
I want to send emails to clients every day between 17:00 and 20:00. I want to run my command every 10 minutes in this period. So the script will be executed 6 times per hour, that's a total of 18 times. Is this possible with the crontab? How should I write the syntax?
Using crontab to execute script between 17:00–20:00 for every 10 minutes
Look at the error message you are getting:
no display name and no $DISPLAY environment variable
You are attempting to run something that requires an X11 display, which isn't going to be available from within cron's context (and likely not via plink either, unless you are running an X11 display server locally and have enabled X11 forwarding). Typically, if you have something that needs access to the display you need to run it from within an existing desktop session. There are ways to work around this; for some thoughts on that topic see:
https://unix.stackexchange.com/questions/25684/how-to-access-x-display-from-a-cron-job-when-using-gdm3
https://unix.stackexchange.com/questions/10121/open-a-window-on-a-remote-x-display-why-cannot-open-display
I have written ascript.py, which opens a tk window and draws with turtle in the canvas the window contains. I want to start this script via a plink using:plink.exe -pw raspberry pi@pi-fisch00 python /home/pi/script.pyBut I always receive an error:script.py line 32, in <module> root = Tk() no display name and no $DISPLAY environment variableI think the same error is causing that the crontab is not executing thescript.py.My entry in the crontab:*/1 * * * * python /home/pi/script.pyThe syntax should be right, because other scripts are working and if I putpython /home/pi/script.pyin the cmd manually everything is fine. Thescript.pygets executed. How can I fix this and let the crontab execute thescript.py? Why can't I execute thescript.pyvia plink?
Starting a Python script on a raspberry via plink (not responding crontab)
You could easily create an entry in the fstab using the UUID or the label of the hard drive partition instead of the assigned path. You can get the UUID by running one of the following commands:
sudo blkid
ls -l /dev/disk/by-uuid/
Then, depending on the partition type, you can add an entry to /etc/fstab; remember to change the line according to your UUID and to change ext2 to your partition type:
UUID=30fcb748-ad1e-4228-af2f-951e8e7b56df /media/HDD ext2 defaults,nofail 0 2
Then, you could just mount all drives in the fstab:
sudo mount -a
Unlike desktop versions of Ubuntu, in Ubuntu Server 14.04 I cannot do cd /media/HDD. I have to create a directory and then mount the external drive there in order to work with an external drive or USB:
$ sudo fdisk -l
$ sudo mount /dev/sdb1 ~/directory
But the problem here is that the external drive is not always the same. Sometimes it becomes /dev/sdc1, /dev/sdd1, /dev/sde1 etc. So it's unreliable to keep the HDD as a backup option. I am keeping a backup using...
$ sudo vi /etc/crontab
Bottom line is, how do I plug an external HDD into Ubuntu Server and do a backup without problems? For example (which is not working):
cp -rv /var/www/backup /media/HDDname
Or any other solutions for my problems?
How to copy directory into external driver in Ubuntu server 14.04?
The crontab isn't running every 15 minutes; it's running at minute 15 of every hour. If you'd like it to run every 15 minutes, change the crontab to:
0,15,30,45 * * * * /usr/bin/java -jar xxxxxx.jar >> /var/log/cron.log
I have batch program and i do get some data from one server and update the data in my database and i want to trigger my batch program for every 15 mins. For that i use the crontab concept, i just open the crontab with the commandcrontab -e //i add the command in that crontab 15 * * * * /usr/bin/java -jar xxxxxx.jar >> /var/log/cron.logfinally after that my batch program is not running and i did not get log in cron.log. whether it will automatically run the batch program or we have to trigger it
crontab does not get the log in log file
The minutes field only goes up to 59, so you can't have an interval of 120 there. 120 minutes is two hours, so you want:
0 0 0/2 * * ?
Hi can someone help me in creating a quartz expression which triggers every 120 min 7 days a week?I tried something like<0 0/120 * * * ?* MON-SUN>but its not working.
Hi can someone help me in creating a quartz expression which triggers every 120 min 7 days a week?