Response | Instruction | Prompt
---|---|---
You can add file redirection for both stdout and stderr onto either the command shown or the actual command in the crontab, like command > /tmp/log.txt 2>&1.
If you want several users to receive this log, you could insert a MAILTO=nameofmailinglist line at the top of your cron file.
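A minimal crontab sketch combining both ideas (the schedule, command, and list address are illustrative, not from the question):
MAILTO=nameofmailinglist@example.com
0 2 * * * /usr/local/bin/nightly-job > /tmp/log.txt 2>&1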
|
I know that cron's default behavior is to send normal and error output to the crontab owner's local mailbox.
Are there other ways to get these results (for example, sending them by email to a group of people, storing them somewhere, and so on)?
|
What are options available to get cron's results and how to set them up?
|
Use e.g. 0 0 3,15 * * ?. That'll run a job at 3am and 3pm - twice a day, with 12 hours between.
You could also use 0 0 0/12 * * ?, which means every 12 hours. Here are some examples.
|
I need a cron-expression (0 0/60 * * * ?) to fire application every 12 hours (twice a day).
|
Cron Expression to execute cron triggers for 12 hours of a day?
|
apt-get source cron
|
I have searched for the source of cron in order to modify/extend it. I can't locate it (I thought it would be in coreutils).
Does anyone know where I can get the sources for cron?
BTW, I am running on Linux (Ubuntu 10.0.4).
|
Where can I download the source for cron utility?
|
Change your cron job to run a shell script. Inside the shell script, set PYTHONPATH and then call the python program.
Change your cron job to this:
/path/to/my_shell_script.sh
Contents of my_shell_script.sh:
export PYTHONPATH=something
python /path/to/py/python/program.py
If you don't want to have a separate shell script, you can cram it all into the cron entry, although it can get very long:
PYTHONPATH=something python /path/to/py/python/program.py
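For example, a complete entry along those lines might look like this (the schedule and the PYTHONPATH value are illustrative; many cron implementations also accept a PYTHONPATH=... assignment on its own line at the top of the crontab):
*/5 * * * * PYTHONPATH=/home/user/lib python /path/to/py/python/program.py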
|
I had issues running my python script on shared hosting (bluehost), and with the help of other SO threads I was able to set PYTHONPATH and run the script with no issues.
Now I need to run the script via a cron job. The cron jobs in shared hosting environment are just one line which I can call the script, but can't figure out how to set PYTHONPATH before calling the script.
Example:
python /path/to/my/script.py
I am sure this issue should be common but I couldn't find any answer in other threads.
Any idea how to set PYTHONPATH for the cron jobs?
Also the codebase is developed in a local environment and the server gets a copy through git pull. So my preferred solution is not to change the source code for the server. It's ok to call another script from cron job which calls the main script and set the variables there, but changing the main script I prefer not to happen so that I don't need to maintain two versions of the code one for local and one for the server.
|
Set PYTHONPATH for cron jobs in shared hosting
|
I added a notification object to my JSON.
I found out that remoteMessage.getNotification().getBody() was returning null; that's why it didn't receive any notification sent by my cron.
Edit
Here's my json object
$message = array(
    'registration_ids' => $registrationIDs,
    'notification' => array(
        "title" => $id,
        "body" => $messageText,
        "icon" => "name_of_icon"
    ),
    'data' => array(
        "message" => $messageText,
        "id" => $id,
    ),
);
|
After migrating to Firebase, I tested sending a notification using the Firebase console and it works fine, but I need a daily notification at a specific time, so instead of using the Firebase console I use my former cron job to send the notification daily. I changed https://android.googleapis.com/gcm/send to https://fcm.googleapis.com/fcm/send but my device doesn't receive any notification.
Is there any way to solve this? Or did I miss anything? My cron job is working fine for my devices that are still using GCM.
Here's my code
function sendNotificationFCM($apiKey, $registrationIDs, $messageText, $id) {
    $headers = array(
        'Content-Type:application/json',
        'Authorization:key=' . $apiKey
    );
    $message = array(
        'registration_ids' => $registrationIDs,
        'data' => array(
            "message" => $messageText,
            "id" => $id,
        ),
    );
    $ch = curl_init();
    curl_setopt_array($ch, array(
        CURLOPT_URL => 'https://fcm.googleapis.com/fcm/send',
        CURLOPT_HTTPHEADER => $headers,
        CURLOPT_POST => true,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_POSTFIELDS => json_encode($message)
    ));
    $response = curl_exec($ch);
    curl_close($ch);
    return $response;
}
|
How to implement firebase cloud messaging in server side?
|
Having started crond with supervisor, your cron jobs should be executed. Here are the troubleshooting steps you can take to make sure cron is running:
Is the cron daemon running in the container? Log in to the container and run ps a | grep cron to find out. Use docker exec -ti CONTAINERID /bin/bash to log in to the container.
Is supervisord running?
In my setup for instance, the following supervisor configuration works without a problem. The image is ubuntu:14.04. I have CMD ["/usr/bin/supervisord"] in the Dockerfile.
[supervisord]
nodaemon=true
[program:crond]
command = /usr/sbin/cron
user = root
autostart = true
Try another simple cron job to find out whether the problem is your cron entry or the cron daemon. Add this when logged in to the container with crontab -e:
* * * * * echo "hi there" >> /tmp/test
Check the container logs for any further information on cron:
docker logs CONTAINERID | grep -i cron
These are just a few troubleshooting tips you can follow.
|
While searching around this issue I found that cron -f should start the service.
So I have:
RUN apt-get install -qq -y git cron
Next I have:
CMD cron -f && crontab -l > pullCron && echo "* * * * * git -C ${HOMEDIR} pull" >> pullCron && crontab pullCron && rm pullCron
My dockerfile deploys without errors but the cron doesn't run. What can I do to start the cron service with an added line?
PS:
I know that the git function in my cron should actually be a hook, but for me (and probably for others) this is about learning how to set crons with Docker :-)
PPS:
Complete Dockerfile (UPDATED):
RUN apt-get update && apt-get upgrade -y
RUN mkdir -p /var/log/supervisor
RUN apt-get install -qq -y nginx git supervisor cron wget
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
RUN wget -O ./supervisord.conf https://raw.githubusercontent.com/..../supervisord.conf
RUN mv ./supervisord.conf /etc/supervisor/conf.d/supervisord.conf
RUN apt-get install software-properties-common -y && apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0x5a16e7281be7a449 && add-apt-repository 'deb http://dl.hhvm.com/ubuntu utopic main' && apt-get update && apt-get install hhvm -y
RUN cd ${HOMEDIR} && git clone ${GITDIR} && mv ./tybalt/* ./ && rm -r ./tybalt && git init
RUN echo "* * * * * 'cd ${HOMEDIR} && /usr/bin/git pull origin master'" >> pullCron && crontab pullCron && rm pullCron
EXPOSE 80
CMD ["/usr/bin/supervisord"]
PPPS:
Supervisord.conf:
[supervisord]
autostart=true
autorestart=true
nodaemon=true
[program:nginx]
command=/usr/sbin/nginx -c /etc/nginx/nginx.conf
[program:cron]
command = cron -f -L 15
autostart=true
autorestart=true
|
Why doesn't the cron service in Dockerfile run?
|
If you check in crontab.guru, both of these are almost equivalent:
* * * * *
* 1/0 * * *
This is because X/Y means: starting from X, every Y. That is, all X + Yn. So if you say */2 it will do every 2 hours.
In this case: 1/0 means "starting from 1, every hour", so it matches from 1 to 23, whereas * matches from 0 to 23.
Following your question, */6 matches every 6 hours, so it will run precisely at hours 0, 6, 12 and 18.
Regarding your question about what the 6th parameter ? does, I read that:
I believe that's processed by the CronExpression class which has six
constants: minute, hour, day, month, weekday, year. Cron uses minute,
hour, day, month, weekday. The addition of the year for the yearly()
method seems to be the reason for the extra *.
So instead of having the common syntax
+---------------- minute (0 - 59)
| +------------- hour (0 - 23)
| | +---------- day of month (1 - 31)
| | | +------- month (1 - 12)
| | | | +---- day of week (0 - 6) (Sunday=0 or 7)
| | | | |
* * * * * command to be executed
With Java you have an extra parameter at the end.
This last parameter can have a value as well, but in your case it is ?. From what I read in crontab.guru, it means:
? blank (non-standard)
So I would schedule it normally with the 5 usual parameters and then add ? at the end so that it runs in all years.
|
I am very new to Java. As my first project, I am going to work with a cron job scheduler. I want some clarification on scheduling. I have code which will run every hour:
CronTrigger ct = new CronTrigger("cronTrigger", "group2", "0 1/0 * * * ?");
I have read the documents about scheduling, but I got confused.
In one document I read the following:
("0 0 * * * ?")
1st 0 indicates seconds
2nd indicates minutes
3rd hour
4th which day of the month
5th which month.
In some documents I read that the 1st indicates minutes, the 2nd hours, etc.
Can anyone please explain this (0 1/0 * * * ?) to me, and also what (1/0) means?
And I want to run a job every six hours.
If I give it like this (0 */6 * * * ?), will it run every six hours?
|
Cron Job sixth parameter in Java
|
cron can start jobs easily enough, but if you need to run something from 6am to 11.30pm, you'll need to issue a start command at 6am, and a stop command at 11.30pm.
Something like this:
## start the job (6am)
0 6 * * * /usr/bin/start-my-job
## stop the job (11.30pm)
30 23 * * * /usr/bin/stop-my-job
Edit: I think I see what you're asking for now. Try this:
## every three minutes between 6am and 11.30pm
*/3 6-22 * * * my-command
0-30/3 23 * * * my-command
Edit: Okay, if you want 6pm until midday the following day, you need:
## every three minutes between midnight and midday
*/3 0-11 * * * my-command
## every three minutes between 6pm and midnight
*/3 18-23 * * * my-command
|
Could you please help me run my cron job daily from 6 am to 11:30 pm, in the following format:
command path
Thanks
|
Running a Cron job daily from 6 am to 11:30 pm [closed]
|
Your package can simply put a file in /etc/cron.d/
The text file should contain something like this, to run a command every 10 minutes:
*/10 * * * * root /path/to/command
Google 'cron format' for more info, and yes, this belongs in askubuntu or superuser.
You need to add the username (root) to the line, as shown above. Apparently this is necessary for files in cron.d, but I can't find a definitive document.
cron should pick this new job up automatically.
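Putting it together, a minimal sketch of the file your package could install (the SHELL/PATH lines and the command path are illustrative):
# /etc/cron.d/mypackage
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
*/10 * * * * root /path/to/command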
|
I need to install some cron jobs with my Ubuntu installation package. The ones that run every day or hour are easy: I can just create a symlink from /etc/cron.daily to my script.
However, I also have a script that I would like to run every 10 minutes. There is no such thing as /etc/cron.minutely. Also I am not sure how to edit crontab without using the interactive editor (crontab -e). What is the best way to go about this?
|
Add 10 minute cron job to Ubuntu package [closed]
|
No, there's no such direct privilege in PHP.
But (on a dedicated server) you can write a PHP script to read the /etc/crontab file and parse it to check whether a specific cron job exists on the server.
|
Is there a way to know if a cronjob already exists with php?
I want it to work on most hosting, even shared hosting.
|
How to check if cronjob exists with PHP?
|
I suggest you read Cron and Crontab usage and examples.
And you can run this:
➜ ( printf -- '0 4 8-14 * * test $(date +\%u) -eq 7 && echo "2nd Sunday"' ) | crontab
➜ crontab -l
0 4 8-14 * * test $(date +\%u) -eq 7 && echo "2nd Sunday"
Or
#!/bin/bash
cronjob="* * * * * /path/to/command"
(crontab -u userhere -l; echo "$cronjob" ) | crontab -u userhere -
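If the script may run more than once, a hedged idempotent variant filters out an existing identical entry before re-adding it (the grep filter is the only addition to the snippet above):
#!/bin/bash
cronjob="* * * * * /path/to/command"
( crontab -u userhere -l 2>/dev/null | grep -Fv "$cronjob"; echo "$cronjob" ) | crontab -u userhere -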
Hope this helps.
|
I tried the below command and crontab stopped running any jobs:
echo "@reboot /bin/echo 'test' > /home/user/test.sh"| crontab -
What is the correct way to script adding a job to crontab in linux?
|
How to add a crontab job to crontab using a bash script?
|
I managed to find a simple way to do it. I copy the crontab file to the server and then update the crontab with the shell module if the file changed.
The crontab task:
---
- name: Ensure crontab file is up-to-date.
  copy: src=tasks/crontab/files/{{ file }} dest={{ home }}/cronfile
  register: result

- name: Ensure crontab file is active.
  shell: crontab cronfile
  when: result|changed
In my playbook:
- include: tasks/crontab/main.yml file=backend.cron
|
I have a crontab containing around 80 entries on a server. And I would like to manage that crontab using Ansible.
Ideally I would copy the server's crontab to my Ansible directory and create an Ansible task to ensure that crontab is set on the server.
But the cron module only seems to manage individual cron entries and not whole crontab files.
Manually migrating the crontab to Ansible tasks is tedious. And even if I find or make a tool that does it automatically, I feel the YAML file will be far less readable than the crontab file.
Any idea how I can handle that big crontab using Ansible?
|
Manage whole crontab files in Ansible
|
Yes, crontab lines can take arguments, as the man page says.
Most likely something goes wrong while calling that command, stemming from the change of environment between your console and the not-a-console cron environment.
It's usually best to add logging to your cron line to capture the output of what's happening:
*/5 * * * * sh /home/adhikarisubir/test/basic_unix/trace_bkp.sh 2 /home/adhikarisubir/test/basic_unix /home/adhikarisubir/test_bkp >> /home/adhikarisubir/test/basic_unix/cron.log 2>&1
Then read that log and you will see where it goes wrong.
|
I have written a script which moves .trc files from a source directory to a backup directory. It takes the age (how much older, in minutes), the source path, and the backup path as command line arguments. When I invoke the script from sh, it works fine. But in crontab it is not working, which left me wondering whether crontab allows passing command line arguments or not. My sh command is:
sh trace_bkp.sh 2 /home/adhikarisubir/test/basic_unix /home/adhikarisubir/test_bkp
where 2 means files older than 2 minutes, the next argument is the source path, and the last one is the target path. I set the same in crontab as:
*/5 * * * * sh /home/adhikarisubir/test/basic_unix/trace_bkp.sh 2 /home/adhikarisubir/test/basic_unix /home/adhikarisubir/test_bkp
|
Does crontab take command line arguments? [closed]
|
Thank you Stephen, it was simpler than I thought. I checked the logs:
tail -f /var/log/syslog | grep cron -i
And I found this
ERROR (Missing newline before EOF, this crontab file will be ignored)
Adding a newline to the end of my crontab files fixed the problem. On top of that, I had a carriage return character from Windows that was causing bash to choke.
|
I can't seem to get anything inside of the cron.d folder working. I need to be able to drop cron files inside, or at least just get one file working that I can edit. Currently the folder has a "php5" file that already works, but my other files won't run. I made the file the same permissions as the "php5" file (644 root:root).
This is my current cron file under /etc/cron.d/mycron
* * * * * root /usr/bin/php /var/www/private/cron/checkstatus.php
Is there some kind of magically hidden file I need to add my cron file to?
Running debian 7.5.0 minimal server install.
|
/etc/cron.d Not working under debian
|
There are a lot of components to this problem. I'm ignoring the MTA error because that's just about an email notification when your cron job finishes. I'll also assume you have permissions set properly and that your script runs fine when run manually in the shell.
The biggest issue is that a cron command isn't run the same way as a command from the terminal "shell". You have to specify that your script gets run using bash. Change your cron job from:
*/5 * * * * /home/group_name/path/to/script/run.sh
to:
*/5 * * * * bash /home/group_name/path/to/script/run.sh
This question has more information and additional options for solving the problem.
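One quick way to see the environment differences cron introduces is to capture cron's environment and diff it against your interactive shell's (the output path is illustrative):
* * * * * env > /tmp/cron-env.txt
# then, from a terminal: diff <(sort /tmp/cron-env.txt) <(env | sort)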
|
I am trying to run a shell script via crontab which runs python3 scripts.
The crontab is for a user group. It runs the shell script, but not the python3 scripts inside it. I have tried to debug it; it might be a permission issue or a path problem, but I can't figure it out.
This is the crontab line:
*/5 * * * * /home/group_name/path/to/script/run.sh
As I said, the cron job is executed, or at least that's what I think, since when I run sudo grep CRON /var/log/syslog I get lines like:
Feb 16 20:35:01 ip-**-**-*-*** CRON[4947]: (group_name) CMD (/home/group_name/path/to/script/run.sh)
Right below, I also get a line which might have something to do with the problem:
Feb 16 20:35:01 ip-**-**-*-*** CRON[4946]: (CRON) info (No MTA installed, discarding output)
Finally, run.sh looks like this:
#!/bin/bash
# get path to script and path to script directory
SCRIPT=$(readlink -f "$0")
SCRIPTPATH=$(dirname "$SCRIPT")
echo "set directory"
cd "$SCRIPTPATH"
echo "run first script"
/usr/bin/python3 ./first_script.py > ./log1.txt
However, when the cron job executes, nothing happens; when I run the script manually, the changes to the database happen as expected. The group has the same rights as I have. The shell file can be executed by me and the group, and the python files can't be executed by me, so I don't know why the group would need that.
PS: I want to execute the python scripts from a shell script since we have a lot of scripts, sometimes with a lot of arguments, so the crontab would otherwise become overpopulated, and some scripts have to be executed in a certain order.
EDIT:
Adding exec >> /tmp/output 2>&1 right after #!/bin/bash writes the echoes to /tmp/output whenever I run the script manually, but not when cron runs it, not even the echo before any python script runs.
Running one of the python scripts directly from cron works; however, even if I copy-paste into the shell file the exact same line that works in cron, nothing happens.
|
Using python3 in shell script in crontab
|
Great answers were already provided by pah and Sameer Naik, but I ended up going with an even simpler solution using AppleScript, inspired by an SO answer to a similar question.
0 23 * * * osascript -e 'tell application "Terminal" to do script "echo \"Hello, world\"!"'
|
I scheduled a cron job on a Mac to open Terminal every day at 11PM as follows:
0 23 * * * open -a Terminal
That works great! But what I would like is to not only open Terminal, but also to run a simple command within it. From looking online, it looks as if cron commands can be chained with &&:
0 23 * * * open -a Terminal && echo 'Hello, world!'
However, this modified cron job only opens Terminal without running the second command there. Any thoughts on how I can get the cron job to do both?
|
Schedule cron job to open Terminal and run command sequentially
|
I solved it using a separate trigger that only fires (an hour early) on the beginning date of DST for the configurations that happen between 2am and 3am Eastern.
Seems kludgey, but it works...
|
I have a quartz cron trigger that looks like so:
<bean id="batchProcessCronTrigger" class="org.springframework.scheduling.quartz.CronTriggerBean">
<property name="jobDetail" ref="batchProcessJobDetail" />
<property name="cronExpression" value="0 30 2 * * ?" />
</bean>
How should I solve this, if I have several configurations that happen within the 2-3am period? Is there an accepted best practice?
Relevant link: http://www.quartz-scheduler.org/docs/faq.html#FAQ-daylightSavings
Basically it says "Deal with it." But my question is how!
|
Ways to deal with Daylight Savings time with Quartz Cron Trigger
|
To schedule jobs from the db server we'll need to enable trust authentication in pg_hba.conf for the user running the cron job.
We'll also need to either run UPDATE cron.job SET nodename = '' to make pg_cron connect via a local (unix domain) socket or add host all all 127.0.0.1/32 in pg_hba.conf
to allow access to the pg_cron background worker via a local TCP connection.
As a basic sanity check to see if logging is enabled, we run SELECT cron.schedule('* * * * *', 'SELECT 1') which will run SELECT 1 at the start of every minute and should show up in the regular postgres log.
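A shell-level version of that sanity check might look like this (the database name is from the question; the superuser name and log path are assumptions for a stock Ubuntu install):
sudo -u postgres psql -d our_db_name -c "SELECT cron.schedule('* * * * *', 'SELECT 1');"
tail -f /var/log/postgresql/postgresql-9.6-main.log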
|
We're trying to configure periodic jobs in Postgresql.
To do this, we have installed the citusdata pg_cron project on a Linux machine with postgres 9.6 running.
System information
OS: Linux pg 4.4.0-72-generic #93-Ubuntu SMP
PG: Postgres 9.6.3 installed from repo 'deb http://apt.postgresql.org/pub/repos/apt/ xenial-pgdg main'
Citusdata pg_cron project
https://github.com/citusdata/pg_cron
Following the instructions in the pg_cron repository, we set in postgresql.conf
the configuration below
shared_preload_libraries = 'pg_cron'
cron.database_name = 'our db_name'
Then, on db_name, we created the EXTENSION pg_cron
CREATE EXTENSION pg_cron;
and we scheduled our first postgres job:
SELECT cron.schedule('45 12 * * *', $$CREATE TABLE testCron AS Select 'Test Cron' as Cron$$);
So, jobid 1 is created and listed in table cron.job.
We expect that at 12:45 the command of the scheduled job will be launched.
But nothing happens.
testCron table is not created and we have no trace in any logs.
We have also defined LOG_FILE in /usr/src/pg_cron/include/pathnames.h to enable logging.
But after re-compiling the project and restarting the postgres service, we still found no log output for pg_cron.
Can someone help us?
How can we enable logs for pg_cron to check scheduling result?
Thanks in advance!
|
Pg_cron crontab log
|
Found the answer here
Ignoring the repeatInterval and setting repeatCount = 0 does what I wanted.
|
I am trying to integrate a Quartz job in my spring application. I got this example from here. The example shows jobs executing at repeated intervals using a simpletrigger and at a specific time using a crontrigger.
My requirement is to run the job only once on application startup. I removed the property repeatInterval, but the application throws an exception :
org.quartz.SchedulerException: Repeat Interval cannot be zero
Is there any way to schedule a job just once?
Thanks..
|
Quartz one time job on application startup
|
You need to put the python file at the end again:
./bin/spark-submit --jars spark-cassandra-connector-2.0.0-M2-s_2.11.jar --py-files example.py example.py
A comment from Jamie notes: "The spark-submit command actually expects a jar file as the last argument which holds the compiled code. eg. spark-submit --jars spark-nlp.jar --class Main app.jar" github.com/JohnSnowLabs/spark-nlp/issues/35
|
I have been trying to execute a .py script with pyspark, but I keep getting this error:
11:55 $ ./bin/spark-submit --jars spark-cassandra-connector-2.0.0-M2-s_2.11.jar --py-files example.py
Exception in thread "main" java.lang.IllegalArgumentException: Missing application resource.
at org.apache.spark.launcher.CommandBuilderUtils.checkArgument(CommandBuilderUtils.java:241)
at org.apache.spark.launcher.SparkSubmitCommandBuilder.buildSparkSubmitArgs(SparkSubmitCommandBuilder.java:160)
at org.apache.spark.launcher.SparkSubmitCommandBuilder.buildSparkSubmitCommand(SparkSubmitCommandBuilder.java:276)
at org.apache.spark.launcher.SparkSubmitCommandBuilder.buildCommand(SparkSubmitCommandBuilder.java:151)
at org.apache.spark.launcher.Main.main(Main.java:86)
I can easily execute it by doing this:
11:57 $ pyspark --jars spark-cassandra-connector-2.0.0-M2-s_2.11.jar
then paste the code block by block into the IPython interactive shell. But I want to put the script in a cronjob so that it can be executed automatically. I need a command to put in the cronjob, and spark-submit is not working. Any ideas?
|
Missing application resource while running script in pyspark
|
When you set up a GitHub Actions workflow with a schedule, you're essentially requesting GitHub to schedule that workflow for you. There is no guarantee that the workflow will run exactly at that time.
In a discussion in the GitHub Support Community (No assurance on scheduled jobs?), Github partner @brightran said that many times, there may be a delay when triggering the scheduled workflow:
Generally, the delay time is about 3 to 10 minutes. Sometimes, it may
be more, even dozens of minutes, or more than one hour.
He also said that if the delay is too long, the scheduled workflow may not be triggered at all that day. Therefore, it's not recommended to use GitHub Actions scheduled workflows for production tasks that require an execution guarantee.
|
While I have used GitHub Actions (with push triggers), I'm fairly new to scheduling them at specified times. Simply: I have a simple cronjob running on GitHub Actions with the following trigger:
on:
schedule:
- cron: "0 0 * * *"
This should run at UTC 0h daily, but what I'm seeing in the logs is that it starts at least 1 hour later, between 01:04-01:11 UTC. I understand that GitHub Actions scheduling can be delayed by minutes, but it seems odd that it's delayed by more than an hour in a fairly consistent manner for a week and a half now.
Does anyone have an idea how to fix this? I know this is small, but it's kind of nagging, and something I wanted to understand should I need events to happen at a specified time.
|
GitHub Actions cronjob trigger seems to trigger an hour later
|
A quick and dirty workaround could be for the script to update a row in a database with a date column set to CURRENT_TIMESTAMP. Have a second cron script check whether the timestamp of this row is recent.
|
I have a script that listens to a jabber server and responds accordingly. Though it's not supposed to stop, last night it did. Now I want to run a cron job every minute to check if the script is running, and start it if not.
The question is, how do I check if a particular script is still running?
Some solutions have been posted here, but those are all for Linux, while I am looking for a Windows solution. Any ideas please? Thanks.
|
Check if a php script is still running
|
Have a look in /etc/rsyslog.d/; there you can change the log level of cron.
#cron.* /var/log/cron.log
Removing the # gives maximum logging:
cron.* /var/log/cron.log
For error logging only:
cron.err /var/log/cron.log
A comment from Etan Reisner notes: This is for things that log to syslog using the cron facility. Which, while it might impact cron jobs, isn't a direct answer to the question (unless the implication is that cron logs stdout and stderr of jobs to syslog using the cron facility, in which case you should explicitly mention that).
A comment from hanshenrik adds: you probably need to run sudo service crond reload after changing this.
|
Does anybody know where the STDOUT and STDERR of a normal crontab job go in CentOS?
I checked the /var/log/cron file, but it only records the time and command of each executed cron job; no STDOUT or STDERR content is found there.
|
Where is the STDOUT and STDERR output of a crontab job
|
What do you use to manage ECS? Terraform, some other IaC, or manual setup?
So far I got this working just fine with Terraform, having ECS (EC2), an ASG for it (1,1,1 for min,max,desired), and a setting that allows killing the current task before deploying a new one, so there is RAM to do so: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecs_service#deployment_minimum_healthy_percent
Another answer I've read (AWS ECS. How to ensure only one instance of a task is running?) suggests that you may not be using the ECS "service" functionality to its potential, and per https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-scheduled-scaling.html you are encouraged to do so.
A comment from Mark Sowul notes: I think you are confusing ECS "services" (always running) with ECS "scheduled tasks" (periodic tasks, not running all the time). Hence 'desired = 1' is not applicable.
|
Just like cron, AWS ECS Scheduled Tasks do not provide any built-in solution to prevent multiple instances of a scheduled task from running at the same time.
One common way to solve this problem for cron is to create lock files.
An example that I found is https://github.com/bgentry/lock-smith. However, it says it is still in beta quality and has not been updated for several years.
Are there any other established solutions/utilities to solve this problem for ECS?
|
How to ensure only one instance of an AWS ECS scheduled task run at the same time?
|
Try creating a file like message.sh that runs your .py file:
#!/bin/sh
python path/to/python_script.py
and make this file executable with chmod a+x message.sh. Then schedule the wrapper:
*/1 11-17 * * 1-7 path/to/message.sh 2>&1
|
I tried running a Python script using cronjob but I get the following error:
cron[44405]: no path for address 0x10ff7a000
in grep cron /var/log/system.log
When I ran the script without using cronjob it worked:
/usr/bin/python /Users/anuj/Desktop/message.py
I tried adding the cron job using sudo crontab. This is the cron entry:
*/1 11-17 * * 1-7 /usr/bin/python /Users/anuj/Desktop/message.py
Both paths are correct for root mode and user mode as I am running cron with sudo.
|
cronjob error in OSX: "no path for address"
|
Should be pretty straightforward:
- Back up your database using mysqldump:
mysqldump -u [uname] -p[pass] myfleet | gzip -9 > myfleet.sql.gz
- Upload your dump file to S3 using a command line client (e.g. s3cmd, http://s3tools.org/s3cmd):
s3cmd put myfleet.sql.gz s3://<bucketname>/myfleet.sql.gz
Just add this to your cron job (you might want to use some kind of numbering scheme for the dump files, in case you want to keep several versions).
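Combined into a single crontab entry, it might look like this (the schedule, credentials, and dump path are illustrative; note the escaped % signs, which cron would otherwise treat as newlines):
0 0 * * * F=/tmp/myfleet-$(date +\%F).sql.gz; mysqldump -u uname -ppass myfleet | gzip -9 > $F && s3cmd put $F s3://fleetBucket/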
|
I have got one server at Rackspace and I'm already running a cron job every night to process something (some account-related operation that sends me an email every midnight). My application is in Groovy on Grails. Now I want to take a backup of the MySQL database (called myfleet) every midnight and put that file in Amazon S3. How can I do that? Do I need to write any Java or Groovy file to process that, or can it be done from the Linux box itself? I already have an account in Amazon S3 (the bucket name is fleetBucket).
|
How to take MySQL Database backup and put it in Amazon s3 every Night by using Cron tab?
|
You need to open a screen session. You can do this by typing screen at the bash prompt. Once you have fired off your command, type CTRL+A followed by d. This will detach your screen session. However, the script that you ran will continue to run.
Now you can disconnect from your SSH session. Once you log back in, you can resume your session by typing screen -r. This assumes that you have only one detached session. In case you have more than one, you will have to type the pid of the session you want to reattach to.
To check the pid of the session, you can do:
ps aux | grep screen
This will list out the currently running screen processes. A sample output is:
dhanush 1327 0.0 0.0 2443312 56 s000 S+ 3Sep15 0:00.99 screen -r server -p server
dhanush 79037 0.0 0.0 2432784 600 s008 S+ 5:03PM 0:00.00 grep screen
Here the pid for the relevant screen process is 1327.
The second line belongs to the grep command that searched for the keyword screen itself.
tmux is another alternative for the same.
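If the requirement is strictly "keep running after I disconnect, then stop after 10 hours", a hedged alternative to screen is nohup combined with coreutils timeout (the script and log paths are illustrative):
nohup timeout 10h /path/to/long_script.sh > /tmp/long_script.log 2>&1 &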
|
I have a script that will take around 10-15 hours to complete. I want to run it on an EC2 instance and stop the process for good after, say, 10 hours.
Approaches:
CRONTAB - if I make a cronjob for the process, how do I ensure it runs only once in its lifetime and is removed after 10 hours?
|
How to make a process run on AWS EC2 even after closing the local machine?
|
I worked on a project that tried to use DelayedJob to schedule future items. It sucked.
Instead I recommend you use the whenever gem:
http://github.com/javan/whenever
Whenever is a Ruby gem that provides a
clear syntax for defining cron jobs.
It outputs valid cron syntax and can
even write your crontab file for you.
It is designed to work well with Rails
applications and can be deployed with
Capistrano. Whenever works fine
independently as well.
Code looks like this (from GitHub):
every 3.hours do
runner "MyModel.some_process"
rake "my:rake:task"
command "/usr/bin/my_great_command"
end
every 1.day, :at => '4:30 am' do
runner "MyModel.task_to_run_at_four_thirty_in_the_morning"
end
every :hour do # Many shortcuts available: :hour, :day, :month, :year, :reboot
runner "SomeModel.ladeeda"
end
every :sunday, :at => '12pm' do # Use any day of the week or :weekend, :weekday
runner "Task.do_something_great"
end
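Once the schedule file is written, the gem's CLI installs it into your crontab (run from the application root):
whenever --update-crontab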
Here's a RailsCast video on how to use it.
And the corresponding ASCIICast.
A comment asked why/how delayed_job sucked; cpuguy83 replied: I found that DJ workers were freezing constantly, and they were hard to keep running. I have since switched to Sidekiq and never looked back. Works absolutely fantastically.
|
delayed_job is at http://github.com/collectiveidea/delayed_job
Can delayed_job do cron tasks, such as running a script every night at 1am, or running a script every hour?
If not, what are suitable gems that can do that? And can it be monitored remotely using a browser, with logging of success and error?
|
Is Rails's "delayed_job" for cron task really?
|
To execute a script every day at 23:59:00, use the following:
59 23 * * * root /path_to_file_from_root
Seconds cannot be defined using Cron, but this should do for you.
To execute the script at 23:59:59, use the PHP sleep() function to delay the execution of the script by 59 seconds. I would advise 58 seconds though, just to make sure the script doesn't delay until after midnight.
This is very basic, you could make it a little more complex and run tests to ensure that the script is always run at 23:59:59 by retrieving the time and delaying appropriately. This should not be necessary though.
<?php
// Any work that the script can do without altering the database, do here
// Delay the script by 58 seconds
sleep(58);
// Carry on with the rest of the script here, database updates etc
?>
|
I am about to trigger a call to a PHP file via curl on a schedule. I am thinking of having the script executed every day at 23:59:59, or simply a minute before the day turns over to tomorrow. What is the best approach for this? I am still quite confused by the cron settings.
I need to ensure that I run at exactly a second before the next day.
|
cron job minute before tomorrow
|
When running from cron, your script runs in a different working directory. This means the filename it is trying to open is relative to a different directory.
You should use the absolute path to the config file in your script.
A common way of doing it in python, assuming the config file resides in the same directory as the script you're running, is using the __file__ builtin:
config_file = os.path.join(os.path.dirname(__file__), 'config_test.ini')
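An alternative sketch fixes cron's working directory instead of the code, using the paths from the question:
* * * * * cd /home/pi/s0 && python3 testconfig.py >> /tmp/cron_debug_log.log 2>&1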
|
I have a simple python3 script that works when I run it from the console:
import configparser
file = 'config_test.ini'
config = configparser.ConfigParser()
config.read(file)
for key in config['test_section']: print(key)
the called ini-file looks like this:
[test_section]
testkey1 = 5
testkey2 = 42878180
testkey3 = WR50MS10:1100012307
testkey4 = WR50MS04:1100012010
testkex5 = 192.168.200.168
The script runs fine and returns the five keys of the ini-file.
Now I configured it as a cronjob to run every minute (running on Raspbian on a Raspberry Pi) via:
* * * * * python3 /home/pi/s0/testconfig.py >> /tmp/cron_debug_log.log 2>&1
and the log looks like this:
Traceback (most recent call last):
File "/home/pi/s0/testconfig.py", line 7, in <module>
for key in config['test_section']: print(key)
File "/usr/lib/python3.2/configparser.py", line 941, in __getitem__
raise KeyError(key)
KeyError: 'test_section'
Does anyone have an idea what I did wrong?
Kind regards
|
Python3: configparser KeyError when run as Cronjob
|
The Laravel scheduler assumes you have a cronjob running every minute. The scheduler is only useful if you want to have multiple tasks.
Normally you have one single cronjob configured in cPanel, and you can set the scheduler to everyWeek() and have another task that would be everyDay(), without having to add or change the cronjobs in your cPanel.
Laravel will automagically know if the task has already been run.
https://laravel.com/docs/5.4/scheduling
This Cron will call the Laravel command scheduler every minute. When
the schedule:run command is executed, Laravel will evaluate your
scheduled tasks and runs the tasks that are due.
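For reference, the docs' suggested entry has this shape (using the project path from the question; a sketch, not a guaranteed fix):
* * * * * cd /home/rain && php artisan schedule:run >> /dev/null 2>&1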
|
I have the following function:
protected function schedule(Schedule $schedule)
{
$schedule->command('email:users')->everyMinute();
}
when I run the command
artisan schedule:run
it sends an email, but when I add the following command to cPanel as a cron job, it doesn't send any email. cPanel is supposed to email me a notification when the cron job is run, but I haven't received a single email.
php /home/rain/artisan schedule:run 1>> /dev/null 2>&1
Where am I going wrong?
Also, when I run the command artisan schedule:run, it runs only once. I am very curious why I have to add ->everyMinute() if it is not going to run every minute. If I want to send it weekly, I can set up the cron job accordingly. Why do I have to add ->weekly() in the function if the cron job is sending it weekly?
|
Setting up a Laravel cron job in cPanel
|
I don't think you need a lint for crontab. There are 5 space-separated fields, then the command to run and its args finish off the line.
Also, on Ubuntu at least, crontab won't let you save a bum file. I just tried a few things and it barfed on all of them. I guess that means that crontab is its own 'lint for cron'.
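One way to lean on that built-in validation without the interactive editor is to load a candidate file directly; crontab refuses to install a malformed one (file names are illustrative):
crontab -l > backup.cron
crontab candidate.cron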
|
Is there anything like lint for crontab? I'd like to know that I've got all my spaces and stars sorted out without waiting for something to not work.
|
Is there a lint like program for crontab?
|
If you use Spring Boot, you can use RandomValuePropertySource:
@Scheduled(cron="0 ${random.int[0,30]} 4 * * ?")
Note that the random value is calculated at Spring context initialization time (i.e. usually at app startup), afterwards the same time will be used every day until restart.
It is not completely random but suits the OP's problem of avoiding an activity peak.
|
As stated in the title of the question, I need to set up a Spring Scheduler that will run a method to load something from the database into memory, every day around 4AM.
The thing is that I have multiple instances of this server, and I don't want them all to start executing at the same time because it would slow down the DB. So I want the time to be a random minute somewhere between 4:00AM and 4:30AM.
So let's say one instance will start every day at 4:03AM, another at 4:09AM, a third at 4:21AM, etc. The execution of the query lasts one minute.
Is this possible to do with a cron expression, but without using bash's $RANDOM (because I don't think I have it), or maybe I need to inject this random value some other way into
@Scheduled(cron="* randomMinuteValue 4 * * *")
|
Spring @Scheduled to be started every day at a random minute between 4:00AM and 4:30AM
|
Using cron seems to add another entry point into your application, while Quartz would integrate into it. So you would be forced to deal with some inter-process communication if you wanted to pass some information to/from the process invoked from cron. In Quartz you simply (hehe) run multiple threads.
cron is platform dependent, Quartz is not.
Quartz may allow you to reliably make sure a task is run at the given time or some time after if the server was down for some time. Pure cron wouldn't do it for you (unless you handle it manually).
Quartz has a more flexible language for expressing occurrences (when the tasks should be fired).
Consider the memory footprint. If your single tasks share nothing or little, then it might be better to run them from the operating system as a separate process. If they share a lot of information, it's better to have them as threads within one process.
Not quite sure how you could handle clustering in the cron approach. Quartz might be used with Terracotta following the scaling-out pattern (I haven't tried it, but I believe it's doable).
|
I already asked a separate question on how to create a time-triggered event in Java, and I was introduced to Quartz.
At the same time, I also googled it, and people are saying cron in Unix is a neat solution.
Which one is better? What are the pros and cons?
Some specification of the system:
* Unix OS
* program written in Java
* I have a task queue with 1000+ entries, for each timestamp, up to 500 tasks might be triggered.
|
Time triggered job Cron or Quartz?
|
To create the cron job, add the following to your cron file to reindex every day at 6am
0 6 * * * php -f /shell/indexer.php reindexall
Note: If you get an error telling you you’re out of memory similar to:
PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 7680 bytes) in …/app/code/core/Mage/Index/Model/Indexer.php on line 163
Try commenting out php_value memory_limit and php_value max_execution_time in your .htaccess file.
A comment from Zak explains: You can access your crontab at the shell prompt by typing crontab -e, then simply insert the line provided above.
A comment from Luke Barker adds: you can also use your cPanel (or whatever your server has) to set your cron if you need to; it is a bit simpler and tells you if your syntax is wrong.
|
How do I set up a cron job to clear the cache and re-index in Magento? I don't know how to set the cron for re-indexing, but I saw somewhere that a cron runs by default every day in Magento. Still, I am facing re-indexing issues on my site, and I need to clear the cache as well. On the Magento website here
they said that log cleaning and reindexing are commented out in the code, so in which file can I un-comment them to set up a cron job for log cleaning and reindexing?
thanks,
murali.
|
how to set cron job for reindex
|
That error is usually due to a pam security issue.
It has been fixed in debian recently: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=726661 and in Ubuntu Wily (15.10).
As a workaround you can try commenting out the module pam_loginuid.so inside /etc/pam.d/cron and restarting cron (or the docker container).
You can use something similar to this in your Dockerfile:
RUN sed -i '/session required pam_loginuid.so/c\#session required pam_loginuid.so' /etc/pam.d/cron
|
I am running docker on Amazon Linux. I have set up a cron job for a specific action. It returns an error stating Cannot make/remove an entry for the specified session.
Docker Version : 1.12.6 (Client and Server)
API Version : 1.24 (Client and Server)
|
Cannot make/remove an entry for the specified session - cron
|
You have to escape the % character. man 5 crontab says:
Percent-signs (%) in the command, unless escaped with backslash (\),
will be changed into newline characters, and all data after the first %
will be sent to the command as standard input.
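Applied to the command from the question, every % gets a backslash (the schedule field is illustrative, since the question doesn't show it):
0 3 * * * expr `date +\%W` \% 2 > /dev/null && curl https://mysite.com/myscript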
A comment from Vacilando notes: This tiny piece of knowledge helped me enormously. I could not fathom why date +%Y-%m-%d_%H-%M-%S worked perfectly in a script but not in a (daisy-chained crontab) command line.
|
I can see that this is a "usual" error, but I cannot find a solution in my case...
Running a crontab job with:
expr `date +%W` % 2 > /dev/null && curl https://mysite.com/myscript
It causes errors:
/bin/sh: -c: line 0: unexpected EOF while looking for matching ``'
/bin/sh: -c: line 1: syntax error: unexpected end of file
Can you help me avoid them? Thank you very much in advance!
|
Running cron job creates an error unexpected EOF while looking for matching ``'
|
I don't think cron kills any processes. However, cron isn't really suitable for long running processes. What may be happening here is that your script tramples all over itself when it is executed multiple times. For example, both PHP processes may be trying to write to the same file at the same time.
First, make sure you not only look in the php error log but also try to capture output from the PHP file itself. E.g:
*/1 * * * * php /path/to/convert.php >> /var/log/convert.log 2>&1
You could also use a simplistic lockfile to ensure that convert.php isn't executed multiple times. Something like:
if (file_exists('/tmp/convert.lock')) {
exit();
}
touch('/tmp/convert.lock');
// convert here
unlink('/tmp/convert.lock');
|
I have a cron job that executes a PHP script. The cron is set up to run every minute; this is done only for testing purposes. The PHP script it is executing is designed to convert videos uploaded to the server by users to a flash format (e.g. .flv). The script executes fine when run manually via the command line, however when executed via cron it starts fine but after one minute it just stops.
It seems that when the next cron is executed it "kills" the last cron execution.
I added the following PHP function:
ignore_user_abort(true);
In hopes that it would not abort the last execution, I tested setting the cron to run every 5 minutes, which works fine; however, a conversion of a video may take over 5 minutes, so I need to figure out why it's stopping when another cron is executed.
Any help would be appreciated.
Thank you!
EDIT:
My cron looks like:
*/1 * * * * php /path_to_file/convert.php
|
Does a cron job kill last cron execution?
|
The crontab entry should look like this :
* * 1 * * cmd_to_run
The columns mean
every minute
every hour
1st day of month
every month
any day of week, and then the command.
I'm not sure about the cPanel admin side.
|
I need to create a cron job that will run every minute on the 1st day of each month. I will create it from cPanel.
Any help is appreciated. Thanks
|
Cron job to run every 1st day of the month [closed]
|
The cron module expects the job name to be unique. Change it to:
name="Send emails {{ item }}"
See: cron – Manage cron.d and crontab entries
|
This is my playbook. Pretty simple. The problem is with the "with_items": it iterates over all the items, but it only writes the last item to the crontab file. I think it is overwriting it. Why is this happening?
- name: Create cron jobs to send emails
  cron: name="Send emails"
        state=present
        special_time=daily
        job="/home/testuser/deployments/{{ item }}/artisan --env={{ item }} send:healthemail"
  with_items:
    - london
    - toronto
    - vancouver
|
Ansible with_items keeps overwriting last line of loop
|
You can use Scheduler.getTriggersOfJob. This method returns all triggers for a given jobName and groupName in a Trigger[].
Then analyse the content of this array: test whether the Trigger is a CronTrigger and cast it to get the CronTrigger instance. Then the getCronExpression() method returns what you are looking for.
Here is a code sample:
Trigger[] triggers = // ... (getTriggersOfJob)
for (Trigger trigger : triggers) {
if (trigger instanceof CronTrigger) {
CronTrigger cronTrigger = (CronTrigger) trigger;
String cronExpr = cronTrigger.getCronExpression();
}
}
|
I'm using Quartz Scheduler v.1.8.0.
How do I get the cron expression which was assigned/attached to a Job and scheduled using CronTrigger? I have the job name and group name in this case. Though many Triggers can point to the same Job, in my case it is only one.
There is a method available in Scheduler class, Scheduler.getTriggersOfJob(jobName, groupName), but it returns only Trigger array.
Example cronexpression: 0 /5 10-20 * * ?
NOTE: Class CronTrigger extends Trigger
|
How to get cron expression given job name and group name?
|
In some cases (in particular multi-service apps from my experience) simply uploading the app/service may not update the cron configuration automatically. Most likely because the cron config is not a service-level config, it's an app-level one, independent from a particular service.
Which is why there are commands specifically for deploying just the cron configuration. From Uploading cron jobs:
Option 2: Upload only your cron updates
To update just the cron configuration without uploading the rest of
the application, run the following command:
./appengine-java-sdk/bin/appcfg.sh update_cron [YOUR_APP_DIR]
And right below that you have Deleting all cron jobs - basically uploading an empty cron config file (as opposed to just deleting the file):
To delete all cron jobs:
Edit the contents of the cron.xml file to:
<?xml version="1.0" encoding="UTF-8"?>
<cronentries/>
Upload the cron.xml file to App Engine.
|
I deployed an application to my Google App Engine that uses a CRON job. I followed this Tutorial. It works fine, I could confirm it in my GAE console. In my Stackdriver logs I can also see that the CRON job is running.
But all changes that I made to my cron.xml file did not apply after I deployed my application again. I even deleted the cron.xml file and deployed my application again - no effect. I do not want that CRON job to exceed any quota.
Is there a way to cancel /disable / delete a CRON job from the GAE console?
Am I doing anything wrong to cancel the CRON job by modifying and deleting the cron.xml file?
|
How to cancel or stop Google App Engine Cron Job
|
Even with the execution wrapped in a shell script, it's likely that the script run from cron doesn't have the same environment set as when you run it from the command line.
Try prefacing the execution of the shell script in cron with setting NODE_PATH and PATH
(if you need these values, on a command line type: echo $NODE_PATH and echo $PATH)
So, your cron entry would look like:
*/1 * * * * NODE_PATH=/usr/local/lib/node_modules PATH=/opt/local/bin:ABC:XYZ /home/bryce/scripts/wudu/exe.sh
Just make sure to substitute the actual values for NODE_PATH & PATH with what you get from the echo commands that you first did.
|
I want my server to execute a node script every minute. The program executes perfectly if I execute the file manually (./main.js), so I'm pretty sure it's not the problem. But when I hand it over to cron to execute, nothing happens.
Here's the line from the cron file.
*/1 * * * * /home/bryce/scripts/wudu/main.js
And here's a sample log:
Oct 11 15:21:01 server CROND[2564]: (root) CMD (/home/bryce/scripts/wudu/main.js)
The executable: home/bryce/scripts/wudu/main.js
#!/usr/bin/env node
var program = require('commander');
var v = require('./cli/validation');
var a = require('./cli/actions');
program
.version('0.0.1')
.option('-u, --url', 'Register url')
.option('-s, --selector', 'Register selector')
.option('-p, --pagination', 'Register pagination')
.option('-i, --index', 'Pass an index, to destroy')
.parse(process.argv);
var args = process.argv.slice(2),
mode = v.mode(args[0]),
options = v.hasArgs(mode, program);
a.init(mode, options);
Any idea why I'm getting radio silence? Is there somewhere else I should be looking to debug?
UPDATE:
I believe the problem has to do with my relative filepaths, and main.js being executed from outside its own directory.
So now, I've placed exe.sh in the wudu directory. It looks like this:
#!/bin/bash
cd ${0%/*}
./main.js mail
exit
Now, I've set cron to execute this file every minute. I tried executing this file from other folders, and it works as expected. But again, cron isn't picking it up.
|
Why won't cron execute my node.js script?
|
Assuming you'd like the work done ASAP, don't use cron. Cron is good for things that need to happen at specific times. It's often abused to simulate a background process that would ideally process work as soon as work appears. You should probably write a daemon that runs continuously. (Note: you could also look at a message/work-queue type system, there are nice libraries out there to do this too)
You can write a daemon from scratch using the pcntl functions (since you don't care about multiple worker processes, it's super-easy to get a process running in the background.), or cheat and just make a script that runs forever and run it via screen, or leverage some solid library code like PEAR's System:Daemon or nanoserv
Once the daemonization stuff is taken care of, all you really care about is having a loop that runs forever. You'll want to take care that your script doesn't leak memory, or consume too many resources.
Generally, you can do something like:
<?PHP
// some setup code
while(true){
$todo = figureOutIfIHaveWorkToDo();
foreach($todo as $something){
//do stuff with $something
//remember to clean up resources so you don't leak memory!
usleep(/*some integer*/);
}
usleep(/* some other integer */);
}
And it'll work pretty well.
|
I have a php script run as a cron job that executes a set of simple tasks that loops for each user in the database and takes about 30 mins to complete. This process starts over every hour and needs to be as fast and efficient as possible. The problem Im having, is like with any server script, execution time varies and I need to figure out the best cron time settings.
If I run cron every minute, I need to stop the last loop of the script 20 seconds before the end of the minute to make sure that the current loop finishes in time. Over the course of the hour this adds up to a lot of wasted time.
I'm wondering if it's a bad idea to simply remove the PHP execution time limit, run the script once an hour, and let it run to completion... is this a bad idea?
|
Cron job for php script that requires VERY long execution time
|
Either create a locking mechanism so the scripts won't overlap. This is quite simple: since the scripts only run every minute, a simple .lock file would suffice:
<?php
if (file_exists("foo.lock")) exit(0);
file_put_contents("foo.lock", getmypid());
do_stuff_here();
unlink("foo.lock");
?>
This will make sure scripts don't run in parallel; you just have to make sure the .lock file is deleted when the program exits, so you should have a single point of exit (except for the exit at the beginning).
A good alternative - as Brian Roach suggested - is a dedicated server process that runs all the time and keeps the connection to the IMAP server up. This reduces overhead a lot and is not much harder than writing a normal php script:
<?php
connect();
while (is_world_not_invaded_by_aliens())
{
    get_mails();
    get_images();
    sleep(time_to_next_check());
}
disconnect();
?>
|
I'm trying to figure out the most efficient way to run a pretty hefty PHP task thousands of times a day. It needs to make an IMAP connection to Gmail, loop over the emails, save this info to the database and save images locally.

Running this task every so often using a cron isn't that big of a deal, but I need to run it every minute, and I know eventually the crons will start running on top of each other and cause memory issues.

What is the next step up when you need to run a task efficiently multiple times a minute? I've been reading about beanstalk & pheanstalk and I'm not entirely sure if they will do what I need. Thoughts?
|
What do I use when a cron job isn't enough? (php)
|
There are several possibilities:

1) Add the script(s) to the crontab of root. To do this you have to become root with sudo su - or su -, then add the cron jobs using crontab -e.

2) Allow a non-root user to use a crontab and add the cron job to that user's crontab using crontab -e, then set the set-uid flag of your script and change its ownership to root so it executes as root: chmod +s scriptname; chown root scriptname. Note that most Linux kernels ignore the set-uid bit on interpreted scripts, so this only works reliably for compiled binaries; a small set-uid wrapper is the usual workaround.
|
I have a few bash scripts which are added as cron jobs with specified timing, but they need to be executed as the root user. I am trying to run those cron jobs, but they need root permission, and I am running them on an Ubuntu EC2 instance where the root user is restricted. What would be the workaround to run those scripts as the root user?
Thanks
|
How to run the cron job as a user instead of root user [closed]
|
There are a few things that you can do:

make sure that your script uses a permanent connection; this way you won't lose time connecting and disconnecting from the database server.

implement a logging mechanism, so you can identify which parts of the script run slowly; logging the time spent on each database query would be a good idea.

try to optimize your database as much as possible; you should use EXPLAIN on slow queries and create the needed indexes.
|
I have a few cron jobs that summarize and validate data for my site. Some of them have processes that need to be run in the background.

Example:

cronjob1.php executes cronjob2.php using exec.

cronjob2.php runs cronjob3.php using exec; cronjob3 needs to complete, then cronjob2, and then cronjob1 finishes.

I currently have an issue where cronjob1.php takes 2 hours to finish.

Is there a better way to run this so it runs faster?
Thank You
|
running processes in background php
|
I found a way to make HTTP requests from AWS Lambda.

Courtesy of https://stackoverflow.com/a/58994120/129202 - that answer has a couple of other solutions too that could be worth checking out. The one below worked for me.
import urllib.request
import json

res = urllib.request.urlopen(urllib.request.Request(
    url='http://asdfast.beobit.net/api/',
    headers={'Accept': 'application/json'},
    method='GET'),
    timeout=5)
print(res.status)
print(res.reason)
print(json.loads(res.read()))
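If the goal is a cron-style trigger, the same request can be wrapped in a handler and scheduled (e.g. via a CloudWatch Events/EventBridge rule). A minimal sketch; the URL here is a placeholder for the PHP file you want to hit:

import urllib.request

# Hypothetical endpoint - replace with the PHP file you need to trigger.
URL = 'https://example.com/cron.php'

def lambda_handler(event, context):
    # Fire the GET request; urlopen raises on network errors and 4xx/5xx.
    res = urllib.request.urlopen(
        urllib.request.Request(url=URL, method='GET'), timeout=5)
    return {'status': res.status}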
|
I want to use AWS lambda to make a cronjob-style HTTP request. Unfortunately I don't know Python.
I found they had a "Canary" function which seems to be similar to what I want. How do I simplify this to simply make the HTTP request? I just need to trigger a PHP file.
from __future__ import print_function
from datetime import datetime
from urllib2 import urlopen

SITE = 'https://www.amazon.com/'  # URL of the site to check
EXPECTED = 'Online Shopping'  # String expected to be on the page

def validate(res):
    '''Return False to trigger the canary

    Currently this simply checks whether the EXPECTED string is present.
    However, you could modify this to perform any number of arbitrary
    checks on the contents of SITE.
    '''
    return EXPECTED in res

def lambda_handler(event, context):
    print('Checking {} at {}...'.format(SITE, event['time']))
    try:
        if not validate(urlopen(SITE).read()):
            raise Exception('Validation failed')
    except:
        print('Check failed!')
        raise
    else:
        print('Check passed!')
        return event['time']
    finally:
        print('Check complete at {}'.format(str(datetime.now())))
|
Lambda function to make simple HTTP request
|
twiceDaily accepts its parameters as hours.

The default is public function twiceDaily($first = 1, $second = 13), which means the task will execute daily at 1:00 and 13:00.

You are trying to run the task at 23:27 and 23:28, which is technically not possible using the twiceDaily method, because it does not accept minutes as a parameter.

Solution

Change twiceDaily() to <command>->cron('27,28 23 * * *');.

It will run your command at 23:27 and 23:28.

Or, if you want to run the command twice a day at different hours and minutes, you should use two separate commands with dailyAt() (two dailyAt() calls on one command will override each other and will not work as you want).
|
I'm trying to schedule a command twice a day:
Here is my code:
protected function schedule(Schedule $schedule)
{
    $morningCarbonHour = Carbon::now();
    $morningCarbonHour->hour = 23;
    $morningCarbonHour->minute = 27;
    $morningCarbonHour->second = 00;
    $hourIni = $morningCarbonHour->format('H:i');

    $nightCarbonHour = Carbon::now();
    $nightCarbonHour->hour = 23;
    $nightCarbonHour->minute = 28;
    $nightCarbonHour->second = 00;
    $hourFin = $nightCarbonHour->format('H:i');

    $schedule->command('check:time')
        ->twiceDaily($hourIni, $hourFin)
        ->timezone('America/Mexico_City');
}
I get this error message:
[2016-05-31 23:29:01] production.ERROR: exception 'InvalidArgumentException' with message 'Invalid CRON field value 23:27,23:28 at position 1' in /home/forge/myproject/vendor/mtdowling/cron-expression/src/Cron/CronExpression.php:147
Stack trace:
I don't really know why. Any idea?
|
scheduling a task twice a day with Laravel
|
On some Macs, sysctl is located in /sbin/ instead of /usr/sbin/. You should add /sbin to your PATH variable
|
I have a python script, script.py, and am using cron to run this script periodically. The script runs as expected, but once the cron job finishes, I get the following error in /var/mail/[myusername]:
sh: sysctl Command Not Found
The following is the cron job:
0 14 * * * PATH=$PATH:/usr/sbin PYTHONPATH=/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/ /usr/bin/python2.7 ~/.../script.py
I was told to include both PATH and PYTHONPATH in the task (without them, Python wouldn't recognize several modules I had imported and installed), so at this point I'm not sure what the problem could be.
|
"sh: sysctl Command not Found " for Mac OS X running a cron job
|
As crontab tag wiki says, percent characters are problematic in crontabs; % gets converted to a newline:
[..] > ~/state/app-state-$(hostname)-$(date +%F).json
would run command as
[..] > ~/state/app-state-$(hostname)-$(date +\nF).json
"Escaping" percent characters is possible, but the escape character ends up in the executed command. A job like the following would run the command with \ in front of the percent character:

0 5 * * * ~/bin/app_state.sh host-arg 9200 > ~/state/app-state-$(hostname)-$(date +\%F).json
One way to circumvent this problem is to have the command in a script and execute that as a cron job:
/usr/local/bin/app_state_cron.sh:
#!/bin/sh
~/bin/app_state.sh host-arg 9200 > ~/state/app-state-$(hostname)-$(date +%F).json
And in crontab:
0 5 * * * /bin/sh /usr/local/bin/app_state_cron.sh
|
I have the following user crontab entry on a RHEL 6 machine (sensitive values have been replaced):
[email protected]
0 5 * * * ~/bin/app_state.sh host-arg 9200 > ~/state/app-state-$(hostname)-$(date +%F).json
Which produces this entry in /var/log/cron:
Apr 23 05:00:08 host CROND[13901]: (dbjobs) CMD (~/bin/app_state.sh host-arg 9200 > ~/state/app-state-$(hostname)-$(date +)
But no file.
After changing the statement to:
43 5 * * * ~/bin/app_state.sh host-arg 9200 > ~/state/app-state-static.json
I get a better log entry and the file is created at ~/state/app-state-static.json
I'm sure there's some issue with not escaping the +%F, but I can't for the life of me find details of how I should be escaping it. I could wrap the filename generation inside another shell script, but this is easier to read for people coming looking for the file.
|
Special escaping for crontab [duplicate]
|
You could wrap your script in a
while True:
...
block, or with a bash script:
while true ; do
yourpythonscript.py
done
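A slightly fuller Python sketch of the same idea, with basic error handling and a pause so a failing or empty run doesn't turn into a busy loop (collect_and_insert is a placeholder for your existing logic):

import time
import traceback

def collect_and_insert():
    pass  # your existing collect-and-insert logic goes here

while True:
    try:
        collect_and_insert()
    except Exception:
        traceback.print_exc()  # log the failure and keep the loop alive
    time.sleep(1)  # small pause between runs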
|
I have a python script that collects data and inserts it into a database based on the last time the database was updated. I want this script to keep running and never stop, starting up again as soon as it finishes. What would be the best way to do this?

I considered using a cronjob and creating a lockfile and just having the script run every minute, but I feel like there may be a more effective way.

The script is currently written in Python 2.7 on an Ubuntu OS.

Thanks for the help!
|
How to restart a python script after it finishes
|
Thanks all for your answers. However, the solution that worked for me came from this site: Howto: Zend Framework Cron. The original link is dead, but a copy can be found on the Internet Archive.

I am posting an excerpt of the code here, but please note this is not my solution; all credit goes to the original author.
The trick with cronjobs is that you do not want to load the whole View
part of ZF, we don't need any kind of HTML output! To get this to
work, I defined a new constant in the cronjob.php which I will check
for in the index.php.
cronjob.php
define("_CRONJOB_",true);
require('/var/www/vhosts/domain.com/public/index.php');
// rest of your code goes here, you can use all Zend components now!
index.php
date_default_timezone_set('Europe/Amsterdam');

// Define path to application directory
defined('APPLICATION_PATH')
    || define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));

// Define application environment
defined('APPLICATION_ENV')
    || define('APPLICATION_ENV', (getenv('APPLICATION_ENV') ? getenv('APPLICATION_ENV') : 'production'));

// Ensure library/ is on include_path
set_include_path(implode(PATH_SEPARATOR, array(
    realpath(APPLICATION_PATH . '/../library'),
    get_include_path(),
)));

/** Zend_Application */
require_once 'Zend/Application.php';

// Create application, bootstrap, and run
$application = new Zend_Application(
    APPLICATION_ENV,
    APPLICATION_PATH . '/configs/application.ini'
);
$application->bootstrap();

/** Cronjobs don't need all the extras **/
if (!defined('_CRONJOB_') || _CRONJOB_ == false)
{
    $application->bootstrap()->run();
}
|
I have a Zend Framework project for which I need to run a script periodically to upload the contents of a folder, and another to download. The script itself is ready, but I am struggling to figure out where or how to set it up to run. I have tried lynx and curl so far. I first had an error about the specified controller being wrong, and I fixed that, but now I just get a blank screen when I run the script and the file(s) are not uploaded.

For a Zend Framework project, how do I set up a script to be run by cron?
EDIT:
My project structure looks like this:
mydomain.com
application
library
logs
public
index.php
scripts
cronjob.php
tests
cronjob.php is the script I need to run; its first few lines are:
<?php
define("_CRONJOB_",true);
require('/var/www/remotedomain.info/public/index.php');
I also modified my index.php file like below:
// Create application, bootstrap, and run
$application = new Zend_Application(
APPLICATION_ENV,
APPLICATION_PATH . '/configs/application.ini'
);
$application->bootstrap();
/** Cronjobs don't need all the extra's **/
if(!defined('_CRONJOB_') || _CRONJOB_ == false)
{
$application->bootstrap()->run();
}
However, now when I try to run the script, I get the message:

Message: Invalid controller specified (scripts).

Does it mean that I need to create a controller for this purpose? The scripts folder is outside the application folder. How do I fix this?
|
How do I setup a cron job a script that is part of a zend framework project
|
To save attachments as files, you need to parse the structure of the message and extract all parts whose content disposition marks them as attachments. You should wrap that into classes of their own so you have easy access and can handle errors more gracefully over time; email parsing can be fragile:
$savedir = __DIR__ . '/imap-dump/';

$inbox = new IMAPMailbox($hostname, $username, $password);
$emails = $inbox->search('ALL');
if ($emails) {
    rsort($emails);
    foreach ($emails as $email) {
        foreach ($email->getAttachments() as $attachment) {
            $savepath = $savedir . $attachment->getFilename();
            file_put_contents($savepath, $attachment);
        }
    }
}
The code of these classes is more or less wrapping the imap_... functions, but for the attachment classes, it's doing the parsing of the structures as well. You find the code on github. Hope this is helpful.
|
I am developing a site in which users can mail tickets, attaching any type of file, to a specific mail id. I need to add the mail subject, content and attachments to the database. I am doing this using cron. Everything works perfectly except the attachments. I have seen some posts which create download links, but since I am using cron I can't do it manually.
$hostname = '{xxxx.net:143/novalidate-cert}INBOX';
$username = '[email protected]';
$password = 'zzzz';

/* try to connect */
$inbox = imap_open($hostname, $username, $password) or die('Cannot connect to : ' . imap_last_error());
$emails = imap_search($inbox, 'ALL');
if ($emails) {
    $output = '';
    rsort($emails);
    foreach ($emails as $email_number) {
        $structure = imap_fetchstructure($inbox, $email_number);
        $name = $structure->parts[1]->dparameters[0]->value; // name of the file
        $type = $structure->parts[1]->type; // type of the file
    }
}
I am able to get the type and name of the files but don't know how to proceed further.

Can anyone please help me? Thank you.
|
how to download mails attachment to a specific folder using IMAP and php
|
It looks like you are searching for the log file in the wrong folder.

Try this:

* * * * * cd /path/to ; ./script.php arg1 arg2 >> logfile.log

Then look for your log file in the /path/to folder.

It can also be a write permission problem.

Also, check your script for errors.

Your crontab command seems OK.
|
Example:
* * * * * /usr/bin/php /full/path/to/script.php arg1 arg2 > /full/path/to/logfile.log
The script runs and accesses the arguments just fine, but the results are never printed to the logfile.log. Also, my logfile.log is chmod 777, so I know it has write access.
Can you fix my syntax?
|
How can I run a cron job with arguments and pass results to a log?
|
Try this
php -q /path/to/cron.php
From here: http://www.php.net/manual/en/features.commandline.php#24970
|
I have a PHP script that is called via a cron job, with the results sent to my email address:
"php /path/to/cron.php"
I only echo errors; otherwise nothing is output. This way I can get an error report when things go wrong. The problem is, I receive an email with every cron execution that only has the HTTP headers in it:
X-Powered-By: PHP/5.2.10
Content-type: text/html
This is obviously a pain, receiving multiple emails every few minutes. All I'd like to see are emails for cron jobs where I've echo'd something.
I want to keep the email being generated by the cron job if possible (instead of sending the email in-script). And I don't want to run it via wget, because my host counts that against my bandwidth.
All my searching has only shown me how to set headers, not remove/suppress the default ones. Am I going about this wrong? Has anybody else seen this?
Thanks
|
PHP CRON job, not output HTTP headers
|
You can disable the gzip version of the HTMLEditor. I've seen this happen before. Try adding the following to your config/config.yml:
HTMLEditorField:
  use_gzip: false

After that, do a full flush and try again.

Another option is that the JavaScript is not syncing correctly. For that, you'll need to change the way the ?m=12345 parameter is built. By default, it's built from the timestamp.

I'll see if I can dig up the md5-based one, which might otherwise solve your problem.

Edit:
Here ya go, try creating this somewhere in your project, and add the following to _config.php
Requirements::set_backend(new MysiteRequirementsBackend());
https://gist.github.com/Firesphere/794dc0b5a8508cd4c192a1fc88271bbf
Actual work is by one of my colleagues, when we ran into the same issue.
|
I've been struggling to put SilverStripe behind a load balancer. I've been fixing multiple problems with rsyncing the instances and using shared storage, and have almost got it stable; however, I've found another issue which breaks the CMS.
Specifically when you try to add a link in the CMS in the TinyMCE editor, when the pop-up screen shows to select the page/file the JavaScript throws an exception that tinyMCE.activeEditor returns null.
I've rsynced the cache directory silverstripe-cache between the two servers and still there is a discrepancy between the m=timestamp of only a few seconds, but I'm guessing this is enough to cause tiny_mce_gzip.php to be forced to load again.
I have a shared redis cache for session storage, shared db, have rsynced the cache directory and use CodeDeploy to deploy the app so it should all be in sync. What other storage areas could cause the different m timestamp? Has anyone had success with SilverStripe CMS being used behind a load balancer without sticky sessions?
|
Silverstripe TinyMCE crashes behind load balancer
|
You can make any service you want implement Symfony\Component\HttpKernel\CacheClearer\CacheClearerInterface so that it is picked up by Symfony's native cache:clear:

use Symfony\Component\HttpKernel\CacheClearer\CacheClearerInterface;
use Symfony\Component\Cache\Adapter\AdapterInterface;

final class SomeCacheClearer implements CacheClearerInterface
{
    private $cache;

    public function __construct(AdapterInterface $filesystemAdapter)
    {
        $this->cache = $filesystemAdapter;
    }

    public function clear($cacheDir)
    {
        $this->cache->clear();
    }
}
Config is done by the tag kernel.cache_clearer
cache_provider.clearer:
    class: AppBundle\Path\SomeCacheClearer
    arguments:
        - '@my_app_cache'
    tags:
        - { name: kernel.cache_clearer }
Edit: a note about the cache as a service.

Your cache service could be defined like this:

my_app_cache:
    class: Symfony\Component\Cache\Adapter\FilesystemAdapter

Or, if you want to specify a namespace and TTL:

my_app_cache:
    class: Symfony\Component\Cache\Adapter\FilesystemAdapter
    arguments:
        - 'my_cache_namespace'
        - 30 # cache ttl (in seconds)

So you should not use

$cache = new FilesystemAdapter();

but service injection:

$cache = $this->get('my_app_cache');
|
Is there a way to clear ALL cached data from the Symfony Cache component?

Here http://symfony.com/doc/current/components/cache/cache_pools.html at the bottom is: (I need a console command)

$cacheIsEmpty = $cache->clear();

and the command:

bin/console cache:clear

leaves this cache untouched.

I am looking for a console command which I can call in a *.sh script on every deploy.

EDIT (example):
Default input options:
$cache = new FilesystemAdapter();
$defaultInputOptions = $cache->getItem('mainFilter.defaultInputOptions');
if (!$defaultInputOptions->isHit()) {
    // collect data, format etc.
    $expiration = new \DateInterval('P1D');
    $defaultInputOptions->expiresAfter($expiration);
    $cache->save($defaultInputOptions);
} else {
    return $defaultInputOptions->get();
}
But if I change something in 'collect data, format etc.' on my machine and then deploy (git pull, composer install, bin/console cache:clear...), the new version on the server still has a valid cache (1 day) and takes data from it...
|
Symfony clear cache component after deploy
|
Image URLs generated using Wagtail's generate_signature function are not cached by the browser automatically. This is because, by default, there is no Cache-Control setting in the response headers for these URLs.

There is a work-around, however (undocumented, reference). To add a Cache-Control header to the responses for image URLs created using generate_signature, you need to decorate your images URL with a cache-control max-age, as follows:
from django.views.decorators.cache import cache_control
...
from wagtail.wagtailimages import urls as wagtailimages_urls
from wagtail.utils.urlpatterns import decorate_urlpatterns

# attach cache-control parameter to your /images/* URL
urlpatterns += decorate_urlpatterns(
    [url(r'^images/', include(wagtailimages_urls))],
    cache_control(max_age=1209600)
)
All requests to any image URL at /images will now have the cache-control: max-age=1209600 parameter in the response header and will be cached by the browser.
|
I'm loading a lot of thumbnails on a page via an API endpoint outside of Wagtail, using the technique shown in Generating dynamic image URLs in Python. That seems to work at first, but on closer inspection with Webkit Inspector, it appears that all of the thumbnails are being generated on every page load, not served from cache.
The docs say "the rendition is generated on the first call and subsequent calls are served from a cache."
But in Inspector, I see that every thumbnail generates a 200, not a 304, and they only show up when I select "All" (not Image) in the Network tab. Inspector shows that the calls are of type "document" (not image).
The code I'm using:
image = s.main_image()
filter_spec = 'fill-300x186|jpegquality-60'
signature = generate_signature(image.id, filter_spec)
url = reverse('wagtailimages_serve', args=(signature, image.id, filter_spec))
url += image.file.name[len('original_images/'):]
shop['img_url'] = url
and an example image URL is:
/images/OGJXq3f3oz0AAzD9vFo-HE24Sz8=/414/fill-300x186%7Cjpegquality-60/ceram_marhc_2920120329_0247_1_Sia8Kgl.jpg
Ideas?
Update: While the accepted answer works, it turns out we were overcomplicating this. A better approach is not to use the custom signature and url generation routine. Instead, just use Wagtail's get_rendition() method:
image = s.main_image()
shop['img_url'] = image.get_rendition('fill-300x186|jpegquality-60').url
and don't use the URL decorator at all. The images are generated and stored on first access, and return 304 on subsequent accesses just fine.
|
Wagtail: Dynamic image generation and caching
|
Assuming that by caching you mean not repeating the same request during the life cycle of that page load, you could store the promise in a variable and return the same promise each time.

The first time a specific path is requested, a new request will be made; on subsequent requests the stored promise is returned.
var promises = {};

loadTplAsync: function(path) {
    // create a new promise if one doesn't already exist for this path
    if (!promises[path]) {
        promises[path] = Q.Promise(function(resolve, reject) {
            var xhr = new XMLHttpRequest();
            xhr.open("GET", path, true);
            xhr.onload = () => {
                if (xhr.readyState === 4) {
                    if (xhr.status === 200) {
                        resolve(_.template(xhr.responseText));
                    } else {
                        reject(xhr.responseText);
                    }
                }
            };
            xhr.onerror = error => reject(error);
            xhr.send(null);
        });
    }
    // return the stored promise
    return promises[path];
}
Note this is not a persistent cache and new requests would be made on subsequent page loads
|
I have a function that loads my html templates asynchronously:
loadTplAsync: function(path) {
    return Q.Promise(function(resolve, reject) {
        var xhr = new XMLHttpRequest();
        xhr.open("GET", path, true);
        xhr.onload = () => {
            if (xhr.readyState === 4) {
                if (xhr.status === 200) {
                    resolve(_.template(xhr.responseText));
                } else {
                    reject(xhr.responseText);
                }
            }
        };
        xhr.onerror = error => reject(error);
        xhr.send(null);
    });
}
How can I extend this function so that the responses are cached by the browser?
|
How to cache XMLHttpRequest response in Javascript?
|
In that example the length is constant, so the larger the stride, the fewer elements you go through.

The interesting phenomenon is that this doesn't apply below a cache line, and that's because you can't fetch part of a line. So below a stride of 16 elements, you pay the same penalty of fetching all the cache lines. Above 16, you start skipping some lines; above 32 for example (128 B), you fetch every other line, hence roughly half the time (assuming your execution time is dominated by memory latency).
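A rough way to observe this yourself is to time strided sums over a buffer much larger than the caches; a sketch using numpy (absolute numbers vary by machine, and numpy's strided-access overhead blurs the picture, but the trend should be visible):

import time
import numpy as np

a = np.ones(1 << 24, dtype=np.int32)  # 64 MiB, far larger than the caches
for stride in (1, 4, 16, 32, 64, 128):
    t0 = time.perf_counter()
    total = a[::stride].sum()  # touches every stride-th 4-byte element
    dt = time.perf_counter() - t0
    print("stride %4d: %.1f ms" % (stride, dt * 1e3))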
|
I was learning about cache lines and the effect of loop stride on the cache. I came across this page which shows the execution time of a loop vs. the loop stride. According to the benchmark, increasing the loop stride decreases the execution time, which is very confusing to me. As I understand it, if the cache line is 64 bytes, and let's assume in the first case the loop stride is just 1, meaning the loop goes over the array elements sequentially, then that should have the least execution time, because 16 integers (4 bytes x 16 = 64 bytes) are loaded into the cache at once. The execution time should be lowest up to a stride of 16, because all 16 elements sit in the same cache line. When the stride is increased above 16, that should increase the execution time, because the next array element won't be in the cached line, but the graph on the page shows the complete opposite.
|
Loop stride and cache line
|
System.Runtime.Caching is not available in the current UWP SDK release. Depending on what type of caching you need, there are several options:
UI caching:
Page.NavigationCacheMode: remembers the rendering of the page on the backstack (including scrolling position, data on screen, ...).
UIElement.CacheMode: rendering the content of a UIElement as a bitmap (mainly for complex renderings).
'Real' data caching, using 3rd party libraries like:
Akavache: asynchronous key-value store based on SQLite, with expiration rules.
Save the data in JSON/XML format to the disk yourself.
Update on comment:
You can clear the NavigationCacheMode by setting it to Disabled. Note that you can't pass a parameter to GoBack() to tell your previous page to clear the cache, so you'll have to add some sort of event messaging (e.g. the Prism EventAggregator) or a global variable to track that as well.
If you want to change the value of NavigationCacheMode programmatically to Enabled or Required, you can only set these values in the constructor for the page.
If you change the value of NavigationCacheMode from Required or Enabled to Disabled, the page is flushed from the cache.
But since you're talking about JSON data from a web call, I'd go for Akavache.
|
I was looking for implementing caching in my UWP app, but I couldn't find the System.Runtime.Caching, I looked at msdn https://msdn.microsoft.com/en-us/library/mt185505.aspx couldn't find this reference. Is this supported on UWP? if not what is the alternative? I looked at other similar questions on the stackoverflow but couldn't find any viable answer on No System.Runtime.Caching available?
|
No system.runtime.caching in UWP
|
You can use

Do ^%G

to examine globals, and

Do ^%GSIZE

to get a quick size of the globals. To work through the nodes of a global programmatically, you can typically use $Query or $Order; and to list all of the globals in a database you can use the %SYS.GlobalQuery class, as long as you are also familiar with creating result sets/SQL statements.
|
Using only the Cache terminal, what utility function or global do I use, or where do I look, to find a list of all the globals which exist in a Cache database?

Again, using only the Cache terminal, what utility function or global do I use to find a list of all of the nodes of these globals as well?

This site does not use any of the advanced Cache features such as CSP, SQL, VB or object scripting.
Thanks
|
InterSystems Cache, where to find global definitions
|
You are correct in your observations. Transformations on RDDs are lazy, so caching will happen after the first time the RDD is actually computed.
If you call an action on your parent RDD, it should be computed and cached. Then your subsequent operations will operate on the cached data.
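As a minimal PySpark sketch of that idea (the names simply mirror the pseudocode in the question), materialize the parent once with an action before deriving the two children:

father = grandFather.repartition(n).mapPartitions(mapping).cache()
father.count()  # action: computes the parent once and fills the cache

son1 = father.filter(filtering_logic).map(take_only_key).distinct()
son2 = father.filter(filtering_logic2).map(take_only_key).distinct()
result = son1.subtract(son2)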
|
I'd like your help in understanding whether my reasoning is correct or I'm missing some points in my Spark job.
I currently have two rdds that I want to subtract.
Both the rdds are built as different transformations on the same father RDD.
First of all, the father RDD is cached after it is obtained:
val fatherRdd = grandFather.repartition(n).mapPartitions(mapping).cache
Then the two rdds are transformed.
One is (pseudocode):
son1 = fatherRdd.filter(filtering_logic).map(take_only_key).distinct

The other one is:

son2 = fatherRdd.filter(filtering_logic2).map(take_only_key).distinct
The two sons are then subtracted to obtain only the keys in son1:
son1.subtract(son2)
I would expect the sequence of the transformations to be the following:
mapPartitions
repartition
caching
Then, starting from cached data, map filter map distinct on both rdds and then subtracting.
This is not happening: what I see is the two distinct operations running in parallel, apparently not exploiting the benefits of caching (there are no skipped tasks) and taking almost the same computation time.

Below is the image of the DAG taken from the Spark UI.
Do you have any suggestions for me?
|
Spark: understanding the DAG and forcing transformations
|
Finally I found a workaround using middleware.
Create a middleware in server/middleware folder:
// cache.js
module.exports = function () {
    return function cacheImages(req, res, next) {
        // Check if download file:
        if (req.originalUrl.includes('/api/files/') && req.originalUrl.includes('/download/')) {
            console.log("Here at the middle ware");
            console.log(req.originalUrl);
            res.set('Cache-Control', 'max-age=315360000');
        }
        next();
    }
}
and add this middleware in server/middleware.json config file:
...
"initial": {
"./middleware/cache": {}
}
...
Hope this helps! :)
|
I'm using LoopBack as a backend API, and I'm also using the Storage component as a CDN to upload and download image and sound files for my website.

My website uses a lot of images from there, but the image files are not cache-enabled.

I want to enable caching by adding a "Cache-Control: max-age=2678400" header to the files but don't know how to do it. Can someone help me or suggest a better solution? I'd really appreciate it.
Thank you!
|
Enable caching for download method of Loopback Storage Component?
|
From Django’s cache framework:
There are a few other ways to control cache parameters. For example, HTTP allows applications to do the following:
Define the maximum time a page should be cached.
Specify whether a cache should always check for newer versions, only delivering the cached content when there are no changes. (Some caches might deliver cached content even if the server page changed, simply because the cache copy isn’t yet expired.)
In Django, use the cache_control view decorator to specify these cache parameters. In this example, cache_control tells caches to revalidate the cache on every access and to store cached versions for, at most, 3,600 seconds:
from django.views.decorators.cache import cache_control

@cache_control(must_revalidate=True, max_age=3600)
def my_view(request):
    # ...
If the page you're caching varies frequently and you want to present those changes immediately, without waiting for the cache TTL (and your cache does not detect or check for changes automatically), use cache_control.
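If you need an entry gone the moment the underlying data changes, rather than waiting for the TTL, you can also clear the cache yourself from a signal handler. A hedged sketch using Django's low-level cache API; the model name is a placeholder, and cache.clear() is the blunt option that flushes everything:

from django.core.cache import cache
from django.db.models.signals import post_delete, post_save
from django.dispatch import receiver

from myapp.models import SomeModel  # hypothetical model behind the cached view

@receiver([post_save, post_delete], sender=SomeModel)
def invalidate_on_change(sender, **kwargs):
    # Blunt but simple: flush the cache so the next request re-renders.
    cache.clear()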
|
I'm using redis for caching in a django app. I'm also using Django Rest Framework, and here is my problem.
I'm using the cache system like this:
from django.views.decorators.cache import cache_page

urlpatterns = [
    ...
    url(r'^some_url/$', cache_page(CACHE_TTL)(SomeView.as_view())),
    ...
]
Here, SomeView is a class that inherits from APIView.
Now imagine we make a request to this url and we receive a json object containing one instance of whatever this url returns.
Then we proceed to delete that object (using Django's admin interface) and make the request again. The expected result is an empty JSON object, but what I receive is the same object, unchanged. The same happens if a new object is added: the response still contains only the one object.
After some time (the TTL of the request in cache) the result is correct.
So, how can I tell Django that a cache entry is no longer valid?
|
Update cache if content changes before cache TTL expires
|
The antiCache parameter is added only in Ajax responses. If you add the image in the Ajax response (https://github.com/apache/wicket/blob/70606d73e9165d37c1d8b7c7820279fb4be18770/wicket-core/src/main/java/org/apache/wicket/markup/html/image/Image.java#L543) then Wicket assumes that it has to be repainted, so it modifies the url.
You can override this method and suppress this behavior.
|
I have png files both in the file system and in a database. So I tried to use
ByteArrayResource
which displayed the image only once. A browser refresh only showed an image placeholder. The image url had a parameter appended:
&antiCache=123456789
So ByteArrayResource looks to me like it can only be used once and has to be reloaded even when the page only gets refreshed. Next I tried
PackageResource
which displayed the image in the browser (even after a refresh) but also rendered the "antiCache" parameter. This happened even after explicitly calling
setCachingEnable( true );
Also "PackageResource" cannot use my png data from the database.
|
How can I make an image cacheable with Wicket 7?
|
It's not the web page itself you should be concerned about, but the mp4 file which is downloaded and cached separately.
Ensure the response headers of the mp4 file prevent browser caching.
Cache-Control: no-cache
|
I need to disable caching for single files in all browsers.
I have a website that generates small video clips. There is a preview stage where the results can be watched.
An mp4 called preview.mp4 is displayed. When the user goes back, edits the video and wants to preview it again, the old preview.mp4 is being displayed even though the file on the server is changed.
How can I prevent the caching of this video file? Or what are the other ways to fix it?
Note: it's a single-page application, so I don't reload any HTML files, only PHP content. Hence the headers I set are not useful in this scenario:
<meta http-equiv="X-UA-Compatible" content="IE=Edge"/>
<meta http-equiv="cache-control" content="no-store" />
Thanks.
|
Prevent Browser File Caching
|
Yes, this is expected behavior. First of all, the marshaller is global in Ignite, as is the metadata, so destroying the cache does not affect this. Second, the binary format allows you to change the schema dynamically, but the changes have to be compatible: you can add and/or remove fields, but not change their types, because in that case a client that uses the older schema would not be able to deserialize the object.
|
I've created the binary type with the name 'SomeType' and fields:
f1:string
f2:string
And cache based on this type (via CacheConfiguration.setQueryEntities).
Now I want to change f1 from string to int. But I don't want to change the name of the type.
So when I try

ignite.destroyCache(cacheName)

and then create a new cache (with the same name and binary type), I get an exception while populating the cache:
org.apache.ignite.binary.BinaryObjectException: Wrong value has been set [typeName=SomeType, fieldName=f1, fieldType=String, assignedValueType=int]
As I understand from http://apache-ignite-users.70518.x6.nabble.com/Ignite-client-reads-old-metadata-even-after-cache-is-destroyed-and-recreated-td5800.html it's an expected behaviour.
But how can I refresh my binary type metadata without creating a new type?
|
Apache Ignite binary type invalidation
|
This is a race condition. When TYPO3 renders a page, it first creates a page cache entry stating "The page is being generated." All other processes see this cache entry and stop rendering.

Once the first process finishes, it replaces this entry with the real cache content. The placeholder entry times out after some time (in case the process crashed, e.g. because of a max execution timeout or memory limit).

This avoids a huge server load after clearing caches on busy (and even not so busy) sites.

A race condition on a dev server can still happen when the browser reloads the page in the background, the developer accidentally reloads twice, etc.
|
For multiple TYPO3 7 LTS installations we occasionally get the "Page is being generated." screen for the first couple of clicks through the site after clearing all the page cache.

At first I thought it was caused by multiple requests on the live environment creating a race condition while the caches filled up again. But we also have the problem on our local dev environments, where the developer is the only person accessing the site, so a race condition would be strange here.

Edit:

The issue was that the page was being called twice. We had a "bigtarget" JavaScript in place that called the page twice: the first call built the cache, but that request was cancelled by the browser in favour of the second request, and the second request then got the error message.

After fixing the JS, users don't run into the message that often anymore.
|
"Page is being generated." message after clearing page cache TYPO3 7.6
|
You have at least two options here:
Either you warm your cache from inside the application
Or you do it outside
Either approach has its own properties, and from there you can derive a solution.

Warming inside the application

You have all the infrastructure you need inside your application to perform cache warming. If cache warming is idempotent, it can be fine to have every instance do it. If you don't want all of your application instances to deal with cache warming, then you need to make a decision:
What will other instances do while the cache is warming?
I have two answers to that question:
Ignore and continue startup
Wait until the cache is warmed
In both cases, you can use Redis CAS (compare-and-swap) with timeouts to create an expiring, distributed lock, so that the fastest-starting instance does the warming and releases the lock once it is done.

You could also tackle the challenge through your deployment process: deploy a first instance to your servers that performs the cache warming, wait until that application has finished its job, and then you're good to go for the other instances.
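As an illustration of that expiring lock, a minimal sketch with redis-py (key names, timings and the warming routine are placeholders; SET NX EX gives you the atomic set-if-absent with expiry mentioned above):

import redis

r = redis.Redis(host='localhost', port=6379)

def warm_cache_once():
    # Only the first instance to set the key gets to warm the cache;
    # the 10-minute expiry guards against a warmer that crashed mid-way.
    if r.set('cache:warming-lock', 'warming', nx=True, ex=600):
        populate_cache()          # hypothetical warming routine
        r.set('cache:warmed', 1)  # flag that waiting instances can poll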
Warming outside the application
With cache warming outside the application, you don't run into concurrency issues, but it requires some effort on the operations and development side. You need someone (or some process) to run the cache warming before your application starts/deploys. You also need to build the piece of code that accesses your data and puts it into the cache.

Building leader patterns would also work, but requires additional code/components. If you can, keep it simple.
HTH, Mark
|
I have a web application (a Spring-based WAR) which is deployed on a Tomcat web server. This web application is served by several server instances, each running an instance of Tomcat. I intend to cache some data in a Redis datastore, and all application instances contact this datastore to read data. As a pre-step, I want some of the data to be cached in Redis when the application starts.
If I do it via the web app, all of the nodes will try to initialize the cache. Making one of the instances the leader is one option; are there any better solutions to this?

Restarting here means stopping Tomcat and then starting it again. It could happen for several reasons: deploying a new version of the web app, a server (machine) restart, or a new server being added to the pool. It's unlikely that all of the Tomcat instances would be started at the same time, but some of them may be started at nearly the same time.

The cache server is independent of the web app, but it could also crash and lose its data. I will maintain the "last read" timestamp in the cache as well.
|
Distributed Cache Warmup
|
I'm not sure there is a way to store custom data structures permanently on executors. My suggestion here is to use an external caching system (like Redis, Memcached or even ZooKeeper in some cases). You can then connect to that system using methods like foreachPartition or mapPartitions while processing the RDD/DataFrame, to reduce the number of connections to one per partition.

The reason this works well is that, for example, both Redis and Memcached are in-memory stores, so there is no overhead of spilling data to disk.

The two other ways to distribute state across executors are accumulators and broadcast variables. All executors can write into an accumulator, but it can only be read by the driver. A broadcast variable is written once on the driver and then distributed to the executors as a read-only data structure. Neither fits your case, so the solution described above is the only practical way that I can see here.
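For illustration, a hedged PySpark sketch of the one-connection-per-partition pattern with redis-py (the host, RDD and fallback computation are placeholders):

import redis

def lookup_partition(keys):
    # One connection per partition instead of one per record.
    r = redis.Redis(host='cache-host', port=6379)
    for key in keys:
        value = r.get(key)  # shared state, visible to all executors
        if value is None:
            value = b'computed-value'  # placeholder fallback computation
            r.set(key, value)          # update the shared state
        yield (key, value)

result = rdd.mapPartitions(lookup_partition)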
|
I want to maintain a cache (HashMap) in Spark Executors memory (long lived cache) so that all tasks running on the executor (at different times) can do a lookup there and also be able to update the cache.
Is this possible in Spark streaming?
|
In Spark Streaming, can we store data (hashmap) in Executor memory
|
The way I got them to cache was to write all image URLs into the JSON like this:

.setValue(url!.absoluteString)

and then read those back as URLs.
|
I have a little problem. I got caching to work with the following URL:
let URL = NSURL(string: "https://raw.githubusercontent.com/onevcat/Kingfisher/master/images/kingfisher-\(indexPath.row + 1).jpg")!
But can't get it to work like this with this URL:
FIRStorage.storage().reference().child("\(productImageref!).png").downloadURLWithCompletion({ (url, error) in
    if error != nil {
        print(error)
        return
    } else {
        cell.snusProductImageView.kf_setImageWithURL(url, placeholderImage: nil,
            optionsInfo: [.Transition(ImageTransition.Fade(1))],
            progressBlock: { receivedSize, totalSize in
                print("\(indexPath.row + 1): \(receivedSize)/\(totalSize)")
            },
            completionHandler: { image, error, cacheType, imageURL in
                print("\(indexPath.row + 1): Finished")
            })
    }
})
What am I doing wrong here? Can you point me in the right direction? For caching I use the 3rd-party library Kingfisher.
Edit: Firebase guy Mike McDonald's quote
"The Github one has Cache-Control: max-age=300 while Firebase Storage
doesn't have cache control set by default (you can set it when you
upload the file, or change it by updating metadata), so I assume
that's why KingFisher isn't caching it."
KingFisher owner quotes.
|
Images not caching with Firebase url but caching with other
|
This is because an in-memory cache doesn't add any serialization overhead and just stores your object instances in memory. When you use any of the other caching providers, your values are serialized first, then sent to the remote caching provider, and deserialized when retrieved, so the same object instances are never reused.
If you plan on mutating cached values you'll need to clone the instances before mutating them, if you don't want to manually implement ICloneable you can serialize and deserialize them with:
var clone = TypeSerializer.Clone(obj);
|
I have a service that pulls statistics for a sales region. The service computes the stats for ALL regions and then caches that collection, then returns only the region requested.
public object Any(RegionTotals request)
{
    string cacheKey = "urn:RegionTotals";

    //make sure master list is in the cache...
    base.Request.ToOptimizedResultUsingCache<RegionTotals>(
        base.Cache, cacheKey, CacheExpiryTime.DailyLoad(), () =>
        {
            return RegionTotalsFactory.GetObject();
        });

    //then retrieve them. This is all teams
    RegionTotals tots = base.Cache.Get<RegionTotals>(cacheKey);

    //remove all except requested
    tots.Records.RemoveAll(o => o.RegionID != request.RegionID);

    return tots;
}
What I'm finding is that when I use a MemoryCacheClient (as part of a StaticAppHost that I use for unit tests), the line tots.Records.RemoveAll(...) actually affects the object in the cache. This means that I get the cached object, delete rows from it, and then the cache no longer contains all regions; therefore, subsequent calls to this service for any other region return no records. If I use my normal cache, Cache.Get() of course makes a new copy of the cached object, and removing records from that copy doesn't affect the cache.
|
MemoryCacheClient works differently than others - reference retained
|
You need some sort of splash screen to entertain the user while the app is loading. But there's a catch: as the app is still being loaded, how can it run any code to show a splash screen? In fact it can't really, but luckily there's a solution: during the loading time, the window manager draws a placeholder UI for your app using elements from your theme, such as the background and status bar color. Setting the theme properly allows you to, for example, show a static image instantly (no code is loaded yet, so no fancy animation at that point). The key is to create a custom theme that overrides the android:windowBackground attribute, and once your app is loaded and starts running, you replace that theme with your standard one before calling super.onCreate() in your Activity.
Here's Google+ post of Ian Lake that describes this technique in details: Use cold start time effectively with a branded launch theme.
|
My Android app's cache size is around 40 MB when I load the app onto a device, and it shows a blank screen while launching the app for the first time. I have no idea how to resolve this challenge; please help.

These are the libraries I am using:
compile(group: 'com.microsoft.azure', name: 'azure-notifications-handler', version: '1.0.1', ext: 'jar')
compile 'com.android.support:appcompat-v7:23.4.0'
compile 'com.android.support:design:23.4.0'
compile 'com.android.support:cardview-v7:23.4.0'
compile 'com.google.code.gson:gson:2.6.2'
compile 'com.android.support:support-v4:23.4.0'
compile 'com.google.android.gms:play-services:8.4.0'
compile 'com.squareup.picasso:picasso:2.5.2'
compile 'com.microsoft.azure.android:azure-storage-android:0.6.0@aar'
compile 'com.microsoft.azure:azure-mobile-services-android-sdk:2.0.3'
compile 'com.android.support:multidex:1.0.1'
compile 'com.android.support:recyclerview-v7:22.2.0'
compile 'com.mcxiaoke.volley:library-aar:1.0.0'
|
How to minimize android app cache size
|
I had a similar problem and django-cache-machine worked like a charm. It uses Django's caching features to cache the results of your queries. It is very easy to set up (assuming you have already configured Django's cache backend):
pip install django-cache-machine
Then in the model you want to cache:
from caching.base import CachingManager, CachingMixin

class MyModel(CachingMixin, models.Model):
    objects = CachingManager()
And that's it, your queries will be cached.
|
I have a Django web application that is currently live and receiving a lot of queries. I am looking for ways to optimize its performance and one area that can be improved is how it interacts with its database.
In its current state, each request to a particular view loads an entire database table into a pandas dataframe, against which queries are done. This table consists of over 55,000 rows of text data (co-ordinates mostly).
To avoid needless queries, I have been advised to cache the table in memory, loading it the first time it is accessed. This will remove some overhead on the DB side of things. I've never used this feature of Django before, so I am a bit lost.
The Django manual does not seem to have a concrete implementation of what I want to do. Would it be a good idea to just store the entire table in memory or would storing it in a file be a better idea?
|
Caching a static Database table in Django
|
A cache line has a certain size (for example 64 bytes), and the processor reads and writes complete cache lines. Compare the number of bytes that are actually processed with the number of bytes that are read and written: with column-major iteration, each 4-byte access drags in a whole 64-byte line.
|
In C you're told to iterate through a matrix in row-major order, since that's how arrays are stored under the hood, and row-major iteration uses the whole cache line, which leads to fewer cache misses. And indeed, I do see a massive performance difference between row-major and column-major iteration on my machine. Test code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <sys/resource.h>

int getTime()
{
    struct timespec tsi;
    clock_gettime(CLOCK_MONOTONIC, &tsi);
    double elaps_s = tsi.tv_sec;
    long elaps_ns = tsi.tv_nsec;
    return (int) ((elaps_s + ((double)elaps_ns) / 1.0e9) * 1.0e3);
}

#define N 1000000
#define M 100

void main()
{
    int *src = malloc(sizeof(int) * N * M);
    int **arr = malloc(sizeof(int*) * N);
    for(int i = 0; i < N; ++i)
        arr[i] = &src[i * M];

    for(int i = 0; i < N; ++i)
        for(int j = 0; j < M; ++j)
            arr[i][j] = 1;

    int total = 0;
    int pre = getTime();

    for(int j = 0; j < M; ++j)
        for(int i = 0; i < N; ++i)
            total += arr[i][j];
    /*
    for(int i = 0; i < N; ++i)
        for(int j = 0; j < M; ++j)
            total += arr[i][j];
    */
    int post = getTime();
    printf("Result: %d, took: %d ms\n", total, post - pre);
}
However, modern memory systems have prefetchers which can predict strided accesses, and when you iterate through a column you are following a very regular pattern. Shouldn't this allow column-major iteration to perform similarly to row-major iteration?
|
Why does loop order matter when there's strided prefetching?
|
@Jaiwo99 is correct.
Spring's Cache Abstraction does not deal with the particular semantics and "low-level" details of "managing" a cache's contents (such as size or, similarly, eviction/expiration). This is due in large part to the fact that these low-level management details vary greatly from one caching provider to the next.

For instance, some caching providers/implementations are highly distributed, with different policies for consistency and redundancy and different mechanisms to control latency, and so on. As such, it would be very difficult to provide a consistent abstraction on top of these features, given that some providers don't even implement said features or have very different "consistency" policies, etc.
Anyway, this section in the Spring Reference Guide probably sums it up best...
8.7. How can I Set the TTL/TTI/Eviction policy/XXX feature?
Directly through your cache provider. The cache abstraction is an abstraction, not a cache implementation. The solution you use might support various data policies and different topologies that other solutions do not support (for example, the JDK ConcurrentHashMap — exposing that in the cache abstraction would be useless because there would no backing support). Such functionality should be controlled directly through the backing cache (when configuring it) or through its native API.
https://docs.spring.io/spring/docs/current/spring-framework-reference/integration.html#cache-specific-config
Cheers!
|
I want to configure my cache size. I am using @EnableCaching. Here is my cached repository.
VendorRepository
public interface VendorRepository extends Repository<Vendor, Long> {

    @Cacheable("vendorByUsername")
    Vendor getVendorByUsername(String username);

    @CacheEvict(value = {"vendorByUsername", "vendor", "vendors"}, allEntries = true)
    Vendor save(Vendor vendor);

    @Cacheable("vendor")
    Vendor findOne(Long id);

    @Cacheable("vendors")
    List<Vendor> findAll();
}
It is working well right now, but I want to set a maximum cache size. How can I configure this in my main config file?
|
Spring Boot Cachable Cache Size
|
Here's a code snipped that implements JB Nizeth's answer (Java 8):
long millisUntilMidnight = Duration
        .between(LocalDateTime.now(), LocalDateTime.of(LocalDate.now().plusDays(1), LocalTime.MIDNIGHT))
        .toMillis();

Executors.newSingleThreadScheduledExecutor()
        .scheduleAtFixedRate(() -> cache.invalidateAll(), millisUntilMidnight,
                TimeUnit.DAYS.toMillis(1), TimeUnit.MILLISECONDS);
|
I need my cache to be refreshed every day at a specific time, in my case at midnight. Is there a way I can do this with a Guava LoadingCache?

So far I have only got the cache to be renewed after a day, with the following code:

private final LoadingCache<String, Long> cache = CacheBuilder.newBuilder()
        .refreshAfterWrite(1, TimeUnit.DAYS)
        .build(new CacheLoader<String, Long>() {
            public Long load(String key) {
                return getMyData("load", key);
            }
        });
|
Refresh Guava LoadingCache everyday at a specific time
|
You're using the Basic tier, which means it's not persistent and has no replication or failover. To prevent issues like this, you'd need either Standard or Premium tier.
|
I have an Azure Redis Cache that I've been happily using, but all of a sudden everything was dropped from its store and it was empty. Has anyone experienced this before?
As an overview of the setup:
Basic 1GB tier.
Oldest data in there was 2 days ago.
Max-memory policy = volatile-lru
Data persistence = N/A due to package level
I can re-add data to the empty cache, but I can't understand where the previous data could have gone.

Could this be down to a simple server-side hiccup causing the data to drop?
|
Azure Redis Cache suddenly empty?
|
You could bind an arbitrary container / empty content to the popup and instead listen for the popupopen event on the relevant Map, Marker or Path.
scope.on('popupopen', function(ev){
var src = 'image.jpg?v=' + Date.now();
ev.popup.setContent('<img src="'+ src +'"/>');
});
|
I'm using a Leaflet map with popups which load regularly-updated images. However, the images end up cached. I thought I'd fix this by appending Date.now(), but this only adds the timestamp from when the page loads rather than when the popup opens.
.bindPopup('<img src="image.jpg?'+ Date.now()+'" width="260" height="196" border="0"><br>Location One').addTo(map),
I've tried putting the date now in a separate function...
function foo () {
setInterval(Date.now(), 10000)
}
and calling that function from the pop up:
.bindPopup('<img src="image.jpg?'+ foo() +'" width="260" height="196" border="0"><br>Location One').addTo(map),
however that just loads: "image.jpg?undefined".
How can I get the cache busting timestamp to update?
(At the moment I'm just using a meta refresh to update the whole page which isn't very elegant and reloads the page just when you've got to the location on the map you want...)
|
Cache Busting in Leaflet marker popups
|
#include <string.h>  /* memcpy, memmove */

/* M is the number of rows; N is the number of columns. */
void matrix_shift(int M, int N, int A[M][N]) {
size_t rowbytes = N * sizeof(int);
int temprow[N];
memcpy(temprow, A, rowbytes); // store first row
memmove(A, A + 1, (M-1) * rowbytes); // shift up
memcpy(A + (M-1), temprow, rowbytes); // replace last row
}
This keeps it simple and relies on routines which should be highly optimized on any common platform. There is one extra row copied, but this is a minor inefficiency in the stated case of a square matrix.
|
I want to shift the first row of a 2D square matrix to the last row. So if I have a matrix like A, I want to get B.
I can do this using two simple for loops. E.g.
void shift(int M, int N, int A[M][N]){
int i, j,temp;
for (i = 1; i < M; i++){
for (j = 0; j < N; j++){
temp=A[i][j];
A[i][j]=A[i-1][j];
A[i-1][j]=temp;
}
}
}
But I want to get as few cache misses as possible. Any tips on how to do that?
|
Cache friendly matrix shift function
|
You can't store a nested structure like this in Redis directly. Instead, store a reference to the list inside the hash value and follow it.
Here is an example:
I have a hash containing NAME and Urls, where Urls holds the name of a list key.
hset("publisher","NAME","Domain");
hset("publisher","Urls","UrlsList");
When you read Urls with hget("publisher","Urls"), do an lrange("UrlsList",0,-1); this will fetch all values in that list.
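As an illustration only (not part of the original answer), the same pattern with the Jedis client for Java; the host, port and key names are just examples:

import java.util.List;
import redis.clients.jedis.Jedis;

public class PublisherCacheDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Store the scalar field and a reference to the list's key.
            jedis.hset("publisher", "NAME", "Domain");
            jedis.hset("publisher", "Urls", "UrlsList");
            // The list itself lives under its own key.
            jedis.rpush("UrlsList", "url1", "url2");
            // Follow the reference to fetch all URLs.
            String listKey = jedis.hget("publisher", "Urls");
            List<String> urls = jedis.lrange(listKey, 0, -1);
            System.out.println(urls); // [url1, url2]
        }
    }
}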
|
I am trying to save a list as a value in a set for specific keys but could not find a way. Is this possible in Redis? I am not sure whether Redis can store data like this; if not, please correct me and suggest how to do it.
I want to store sample data in a format like the one below:
publisher
{ NAME : Domain,
//list
Urls : {
url1,
url2,
}
}
......................
.....................
|
How to save list as a value in set in Redis
|
It is common to pass the API version in the URI path (check out this question too). I'd suggest using the second option, though rewritten as /api/1.2.2/tilemenus, which looks more similar to how APIs on a number of popular websites operate.
|
We have a mobile app which calls a REST API to get the list of tiles to be displayed on the mobile primary screen. The authentication mechanism is AUTH Token using which we uniquely identify a user. The menu keeps changing depending on the version of the app. For this we have two approaches.
/api/tilemenus (Pass auth header only and not version)
Retrieve auth header and lookup the version of the app in the db table (We also store the user version in our database and update it whenever user upgrades the app) and return the data accordingly.
/api/tilemenus/1.2.2 (Pass auth header and version as well since client knows its version itself)
Here, no DB lookup is required since version is getting passed in REST request itself.
Which approach is better? I think approach 2 is better, since we can pass the caching headers to cache this API for each version. For approach 1, there is no implicit way to discard this caching when the user upgrades the app.
|
Nginx Caching of REST API
|
You can prevent caching of templates in AngularJS by using the cache: false parameter where you have defined the states for your different URLs.
$stateProvider.state('login', {
url: '/yourState',
cache: false,
templateUrl: 'yourTemplate.html',
controller: 'yourController as yourCtrlObj'
});
On setting cache to false in the state parameters, the corresponding template will be reloaded each time the state loads. The example above uses the controllerAs syntax in AngularJS.
|
I'm working on an AngularJS app. It's a single-page app and we are using routing to switch views. All the files (stylesheets and JavaScript) get loaded only once, when the index page is hit for the first time; if I then update one of the stylesheets or JavaScript files, the browser takes the older version of the file and not the updated one.
I want to force the client browser to take the latest file whenever a server file is updated, without taking it from cache and without refreshing the page (index.html) itself.
Any help would be appreciated.
|
Prevent caching in angular js application
|
Well, the problem is that a $_SESSION value can be set and used on the server side only.
If you want this content in client-side JavaScript, you will have to send a request to the PHP server for the dataCache value and then set it in local storage.
You may use an ajax call like:
$.ajax({url: "getDataCache.php", success: function(result){
localStorage.setItem('dataCache', result);
}});
In getDataCache.php you need to do something like this:
echo json_encode($_SESSION['dataCache']);
After that
if(localStorage.getItem("dataCache")) {
data = JSON.parse(localStorage.getItem("dataCache"));
will work
A good article on this issue http://www.devshed.com/c/a/php/html5-client-side-cache-in-php/
hope it helps :)
|
According to this answer, I set up a cache for my JSON data:
session_start();
if(isset($_SESSION['dataCache'])) {
echo json_encode($_SESSION['dataCache']);
} else {
$file = 'data.json';
if (!is_file($file) || !is_readable($file)) {
die("File not accessible.");
}
$contents = file_get_contents($file);
$_SESSION['dataCache'] = json_decode($contents, true);
echo $contents;
}
Now I want to read this cache from JavaScript with this code:
if(localStorage.getItem("dataCache")) {
data = JSON.parse(localStorage.getItem("dataCache"));
But, the problem is localStorage.getItem("dataCache") returns null.
How do I read, from JavaScript, a cache that was created in a PHP session?
|
JQuery how to read json cache?
|
We implemented our own cache that just drops the data on the floor:
namespace AppBundle\Factory;
use Google\Auth\CacheInterface;
class NullGoogleCache implements CacheInterface
{
public function get($key, $expiration = false)
{
return false;
}
public function set($key, $value)
{
//do nothing
}
public function delete($key)
{
//do nothing
}
}
|
I am developing a web app that fetches and displays google analytics data for users that are not technical enough to do this themselves.
To do this, I:
1) have users log in with OAuth
2) store the access token
3) create a Google_Client and give it this access token
4) use this Google_Client to fetch the analytics data
This works no problem for the first user. However, it fails with an 'Access Denied' response for the second user. Following through the PHP code, I discovered that this is because the Google API Client caches the original access token (in the file system at /var/tmp/google-api-php-client), and uses this one instead of the fresh access token I have provided.
How do I prevent the Google API Client from caching the access token in the file system?
(Background information on the cache the Google_Client is using: when you provide an access token, it stores this with a key derived from the token scope. As the scope remains the same when the access token changes, the Google_Client does not create a new cache entry for each access token.)
|
PHP Google API Client caching access token
|
===== Update =====
Alternatively, you can disable the SPEAK UI extender.
Go to the /website/App_Config/Include/Sitecore.MvcExperienceEditor.config file, enable the Sheer UI ribbon and disable the SPEAK UI ribbon.
<!-- The SheerUI-based Experience Editor ribbon. -->
<processor type="Sitecore.Mvc.ExperienceEditor.Pipelines.RenderPageExtenders.RenderPageEditorExtender, Sitecore.Mvc.ExperienceEditor"></processor>
<processor type="Sitecore.Mvc.ExperienceEditor.Pipelines.RenderPageExtenders.RenderPreviewExtender, Sitecore.Mvc.ExperienceEditor"></processor>
<processor type="Sitecore.Mvc.ExperienceEditor.Pipelines.RenderPageExtenders.RenderDebugExtender, Sitecore.Mvc.ExperienceEditor"></processor>
<!-- The SPEAK-based Experience Editor ribbon
<processor type="Sitecore.Mvc.ExperienceEditor.Pipelines.RenderPageExtenders.SpeakRibbon.RenderPageEditorSpeakExtender, Sitecore.Mvc.ExperienceEditor"></processor>
-->
Even if you disable the SPEAK-based ribbon, there will be no UI changes.
|
Recently, I have installed Sitecore 8.0.
The problem is slowness. Whenever I reload page(s) in Edit Mode, it is too slow.
Below is the Console in Chrome's developer tools; the final message is "ApplicationCache is not declared".
It seems like I need to declare ApplicationCache in Sitecore. How can I do that?
=== Updated ===
By enabling the Application Cache, I could see caching working correctly. However, after the cache loads, SPEAK still calls all the SPEAK tool ribbons.
Why? How can I make Sitecore stop loading these?
|
Sitecore ApplicationCache
|
Ephemeral mode will not use any cache.
NSURLSessionConfiguration *ephemeralConfigObject = [NSURLSessionConfiguration ephemeralSessionConfiguration];
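For completeness, a hedged sketch of putting it to use (standard Foundation APIs):

NSURLSession *session = [NSURLSession sessionWithConfiguration:ephemeralConfigObject];

// If you keep the default configuration instead, you can still drop any stored responses:
[[NSURLCache sharedURLCache] removeAllCachedResponses];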
|
I'm using NSURLSession to perform an HTTP Post NSMutableURLRequest, using the dataTaskWithRequest:completionHandler: method. When I perform a request for the first time, things take a reasonable amount of time to complete and show some feedback in the completion handler. On the next time that same request is fired, it happens almost instantaneously with very little time in between, which leads me to believe that the system is caching the contents of this data task.
As I don't need to view the returned data, is NSURLSession the best way to do this? It needs to work well with WatchKit, which NSURLSession does, which is why I chose it in the first place. I would preferably like to find a way to just clear the cache after each request. If need be, I could switch to NSURLConnection, but this would be best. Thanks!
|
Clear NSURLSession Cache
|
Looks like it is used as part of the tagging feature for caching.
https://github.com/laravel/framework/blob/5.1/src/Illuminate/Cache/RedisTaggedCache.php
The hash is a unique namespace that changes when any of the tags are flushed.
|
We're using Redis caching with Laravel. Sometimes we store objects with keys such as:
Product-4151-Details
Category-4123-Products
When we run redis-cli keys * we get keys such as the following:
laravel:af6e03943c3803e85bbf455fa26:Category-4123-Products
laravel:af6e03943c3803e85bbf455fa26:Product-4151-Details
We have thousands of these keys (we cache a lot), and these hashes are often duplicated multiple times. What are these hashes, what do they mean, and why are they sometimes duplicated? (When I refer to the hashes I'm referring to this part of the key: af6e03943c3803e85bbf455fa26). The laravel portion is the cache prefix we have set up in our cache.php file.
|
Laravel caching with Redis - what does this key mean?
|
https://github.com/ben-manes/caffeine
Caffeine provides exactly the behavior I want out of the box using refreshAfterWrite:
LoadingCache<K, V> cache = Caffeine.newBuilder()
    .refreshAfterWrite(expireTime, timeUnit)
    .maximumSize(maxCountOfItems)
    .build(k -> loader.load(k));
I'd like to have a cache that works like this:
A. If request is not cached: load and return results.
B. If request is cached, has not expired: return results.
C. If request is cached, has expired: return old results immediately, start to reload results (async)
D. If request is cached, has expired, reload is already running: return old results immediately.
E. If reloading fails (Exception): continue to return previous successful load results to requests.
(After a failed reload (case E), next request is handled following case C.)
(If case A ends in Exception, Exception is thrown)
Does anyone know an existing implementation, or will I have to implement it myself?
|
Is it possible to configure Guava Cache (or other library) behaviour to be: If time to reload, return previous entry, reload in background (see specs)
|
You need not worry about cached memory, as the system will reclaim it when required.
However, if you still want to do something about it, you can call finish() in your onStop() method.
There is also a great answer on this topic by CommonsWare:
"cached background processes" usually refers to processes that do not
have a foreground activity and do not have a running service. These
processes are kept in memory simply because we have enough memory to
do so, and therefore, as you note, the user can switch back to these
processes quickly. As Android starts to need more system RAM for yet
other processes, the "cached background processes" tend to be the
processes that get terminated to free up system RAM
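To make the first suggestion concrete, here is a hedged sketch (standard Android APIs; myWebViews is a hypothetical field holding the four WebViews, and whether finish() is appropriate depends on your navigation flow):

@Override
protected void onStop() {
    super.onStop();
    // Drop each WebView's cache so the backgrounded process holds less memory.
    for (WebView webView : myWebViews) {
        webView.clearCache(true); // true also removes the disk cache files
    }
    // Optionally finish the activity so only a bare cached process remains.
    finish();
}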
|
I've developed a simple application that loads four mobile webviews side by side.
On a fresh install the app fully opens and loads these pages in under 0.5 seconds.
However, if I minimize this app, for some reason its "cached background process" is over 200 MB, sometimes 250. This seems completely unnecessary, as the app loads lightning fast on a fresh install.
How can I clear this cache when the app is minimized (onBackPressed, etc.)?
|
Programmatically clear cached background process
|
Your PHP code needs to return a 304 Not Modified response when the browser asks whether its cached copy of the image is still valid. Put an if statement at the top of your script to handle that conditional request before sending the image again. (Note also that the standard response header name is ETag, not E-Tag; the browser only sends If-None-Match back if it recognized the header.)
You are always sending the image; that's why the browser always shows a 200 response.
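A minimal sketch of that check (illustrative, not from the original answer; it reuses $path and the Phalcon $app from the question and reads the standard If-None-Match / If-Modified-Since request headers):

$etag = md5(filemtime($path));
$lastModified = gmdate('D, d M Y H:i:s', filemtime($path)) . ' GMT';
$ifNoneMatch = isset($_SERVER['HTTP_IF_NONE_MATCH']) ? trim($_SERVER['HTTP_IF_NONE_MATCH']) : null;
$ifModifiedSince = isset($_SERVER['HTTP_IF_MODIFIED_SINCE']) ? $_SERVER['HTTP_IF_MODIFIED_SINCE'] : null;
if ($ifNoneMatch === $etag || $ifModifiedSince === $lastModified) {
    // The browser's copy is still valid: answer 304 and send no body.
    $app->response->setStatusCode(304, 'Not Modified');
    $app->response->sendHeaders();
    exit;
}
// ...otherwise fall through to the existing code that sends the image.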
|
I'm writing a web service that generates image thumbnails with Phalcon.
I am trying to add HTTP caching to it.
This is my code :
$seconds = 43200;
$expireDate = new DateTime();
$expireDate->modify("+ $seconds seconds");
$finfo = new finfo(FILEINFO_MIME_TYPE);
$app->response->setHeader('Content-Type', 'Content-type: ' . $finfo->buffer($data));
$app->response->setExpires($expireDate);
$app->response->setHeader('Pragma', 'cache');
$app->response->setHeader('Cache-Control', "private, max-age=$seconds");
$app->response->setHeader('E-Tag', md5(filemtime($path)));
$app->response->setHeader('Last-Modified', gmdate('D, d M Y H:i:s', filemtime($path)).' GMT');
$app->response->sendHeaders();
echo $data;
The image is correctly displayed, but when you refresh it, the HTTP code is always 200. I tried an image on another website and got 200, 304, 304, 304...
This is my raw response header :
HTTP/1.1 200 OK
Date: Thu, 27 Aug 2015 18:38:41 GMT
Server: Apache/2.4.10 (Debian)
Expires: Fri, 28 Aug 2015 06:38:41 GMT
Pragma: cache
Cache-Control: private, max-age=43200
E-Tag: 501a8d62f276eb5b165b8a709bf4e5b4
Last-Modified: Sun, 05 Jul 2015 20:34:14 GMT
Keep-Alive: timeout=5, max=90
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: image/jpeg
Does anyone see what I'm doing wrong?
Thanks in advance.
|
Phalcon, HTTP Cache of a generated image
|
You can use .diskCacheStrategy() to manually control whether and how an individual request is cached on disk and skipMemoryCache() to control whether an individual request is cached in memory.
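For example, per-request control looks something like this (a hedged sketch against the Glide 3.x API that was current for this question; url and imageView come from your own code):

import com.bumptech.glide.Glide;
import com.bumptech.glide.load.engine.DiskCacheStrategy;

// Memory only: skip the disk cache for this request.
Glide.with(context)
    .load(url)
    .diskCacheStrategy(DiskCacheStrategy.NONE)
    .into(imageView);

// Disk only: skip the memory cache, keep the original source on disk.
Glide.with(context)
    .load(url)
    .skipMemoryCache(true)
    .diskCacheStrategy(DiskCacheStrategy.SOURCE)
    .into(imageView);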
|
I'm working on an app which has images that are used quite often; however, there are others that are downloaded and used only once.
Does Glide have any way of deciding on the fly which images should be stored only on disk or only in memory?
As far as I've seen, caching depends on the global configuration, but I'd like to be able to say myself which ones should go to disk and which ones shouldn't.
|
Android Glide alternate cache in memory or disk
|
You can't control the caching of resources served from a third party; .htaccess controls caching only for resources served out of your own boxes.
If you really need a longer lifetime, the workaround some people use is to host the script on your own server and update the file manually when it breaks. Otherwise, trust that Google knows what it is doing: that low 30-minute expiration is by design.
|
So I have tested my page via Google's PageSpeed Insights.
And it is currently telling me to:
Leverage browser caching for the following cacheable resources:
http://maps.google.com/maps/api/js?sensor=false&language=en (30
minutes)
It's rather ironic, as it's a Google resource from a Google server, but it's always good to know how to do things. I've tried to read about how to do this via a link Google provided on the test page; however, it didn't really give an example of how to cache this external resource. I've tried reading as much as I can and adding bits to my .htaccess file, but nothing seems to work.
So I guess my question firstly is: is it even possible via the .htaccess file to cache this resource?
And if so, what code would I need to put in there to get it to cache the resource?
Thank you in advance for any help.
|
How to Leverage Browser Caching using .htaccess ? (Google Maps api)
|
Neo4j has (up to 2.2.x) a two-layered cache architecture. With cache_type=none you switch off just the object cache. To disable the page cache, you can use dbms.pagecache.memory=0. However, if all caches are disabled you basically measure the speed of your IO subsystem, since every query goes down to the bare metal and reads from disk.
I recommend a different approach: enable both caches and run the queries you want to compare multiple times to warm the caches up. Take measurements on a warmed cache, since that is much closer to a real production scenario.
On a side note: in Neo4j 2.3 the object cache will go away and we will just have the page cache. (In later releases the page cache setting was renamed to dbms.memory.pagecache.size.)
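For reference, the relevant conf/neo4j.properties lines on 2.2.x would look something like this (a sketch using the setting names above; the size values are arbitrary examples, so verify against your version):

# Object cache: soft/weak/strong enable it, none disables it
cache_type=soft
# Page cache size; 0 effectively disables it (useful only for experiments)
dbms.pagecache.memory=2g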
|
I've been trying to compare queries performance in neo4j.
In order to make the queries more efficient, I added index, analysed the result using profile, and tried doing the same while using the USING INDEX.
On most queries, DB hits were much better using the second option (with the USING INDEX hint) and rows were the same or fewer, but the timing does not seem reliable: on several queries, adding USING INDEX was slower despite the better profile figures (db hits & rows), and times got much better simply by re-executing a query.
In order to stop the cache interfering, I went to the properties file, changed the cache_type in neo4j.properties to none and restarted Neo4j, but it still seems like the results of the same query come back faster each time (up to a certain point).
What would be the best way to test this?
|
How to compare performance on neo4j queries without cache?
|
Try separating the entries with a semicolon:
SqlDependency = "[database]:[table1];[database]:[table2]"
If changes to the second table only seem to be picked up after several page refreshes, check the pollTime configured for the SQL cache dependency in web.config; reducing the polling interval fixes that delay.
|
I am using outputcache sqldependency and it works absolutely fine, but now I want to depend on multiple tables.
[OutputCache(Duration = 600, SqlDependency = "db:table1")]
My question: does SqlDependency support relying on multiple tables? If yes, what is the syntax?
I tried the following syntax, but it did not work; it considers table1,table2 to be the name of one table.
[OutputCache(Duration = 600, SqlDependency = "db:table1,table2")]
Thanks in advance.
|
Multiple tables in outputcache attribute sqldependency mvc
|
When you set a particular cache key and the item you are setting is larger than the size allotted for a cached item, it fails silently and your key gets set to None. (I know this because I have been bitten by it.)
Memcached uses pickle to cache objects, so at some point new_cache is getting pickled and it's simply larger than the size allotted for cached items.
The memcached default size is 1MB, and you can increase it, but the bigger issue, which seems a bit odd, is that you are using the same key over and over again and your single cached item just gets bigger and bigger.
Wouldn't a better strategy be to set new items in the cache and to be sure that those items are small enough to be cached?
Anyway, if you want to see how large your item is growing, so you can test whether or not it's going to go into the cache, you can do some of the following:
>>> import pickle
>>> some_object = [1, 2, 3]
>>> len(pickle.dumps(some_object, -1))
22
>>> new_object = list(range(1000000))
>>> len(pickle.dumps(new_object, -1))
4871352 # Wow, that got pretty big!
Note that this can grow a lot larger if you are pickling Django model instances, in which case it's probably recommended just to pickle the values you want from the instance.
For more reading, see this other answer:
How to get the size of a python object in bytes on Google AppEngine?
|
I had my django application configured with memcached and everything was working smoothly.
I am trying to populate the cache over time, adding to it as new data comes in from external APIs. Here is the gist of what I have going on:
main view
api_query, more_results = apiQuery(**params)
cache_key = "mystring"
cache.set(cache_key, data_list, 600)
if more_results:
t = Thread(target = apiMoreResultsQuery, args = (param1, param2, param3))
t.daemon = True
t.start()
more results function
cache_key = "mystring"
my_cache = cache.get(cache_key)
api_query, more_results = apiQuery(**params)
new_cache = my_cache + api_query
cache.set(cache_key, new_cache, 600)
if more_results:
apiMoreResultsQuery(param1, param2, param3)
This method works for several iterations through the apiMoreResultsQuery but at some point the cache returns None causing the whole loop to crash. I've tried increasing the cache expiration but that didn't change anything. Why would the cache be vanishing all of a sudden?
For clarification, I am running apiMoreResultsQuery in a distinct thread because I need to return a response from the initial call faster than the full data set will populate, so I want to keep the populating going in the background while a response can still be returned.
|
Django Memcached Cache Disappears
|
From the documentation:
"The statement must not contain subqueries, outer joins, or self-joins."
IN usually maps to a semi-join. Damien_The_Unbeliever notes that:
IN() is defined as either a subquery or a sequence of expressions - the one in the OP's question is a subquery.
Therefore it probably is not supported.
Can you express the IN as an inner join? This preserves the semantics if the inner query produces at most one row for each outer row. If there are more, rows will be duplicated. That does not matter for the notification, though; it should fire under exactly the same circumstances.
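To illustrate with the query from the question below, the subquery could be folded into an inner join along these lines (a sketch; it preserves the results on the assumption that each Navigations row matches at most one NavigationGroups row):

SELECT n.PublishedAt
FROM dbo.Navigations n
INNER JOIN dbo.NavigationGroups ng ON ng.Id = n.NavigationGroupId
WHERE ng.SiteId = 1
  AND ng.IsActive = 1
  AND ng.[Type] = 'Secondary'
  AND n.IsActive = 1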
|
Does SqlDependency work with the IN operator?
My Command is:
SELECT n.PublishedAt
FROM dbo.Navigations n
WHERE NavigationGroupId IN (SELECT ng.Id FROM dbo.NavigationGroups ng
WHERE ng.SiteId = 1
AND ng.IsActive = 1
AND ng.[Type]= 'Secondary')
AND n.IsActive = 1
I had a look at the MSDN documentation, but nothing is mentioned about the IN operator.
Thanks in advance.
|
IN Operator with SqlDependency is not working
|
The answer to this question is a moving target. As time progresses and PATCH either becomes more or less popular, the systems in the network may or may not support it.
Generally, the only network entities that care about HTTP verbs are OSI Layer 3 (IP) and up devices (firewalls, proxies). Some of those are 'dumb' in the sense that they do not inspect OSI Layer 4 (TCP) and above. Others are 'smart' and can do protocol-level enforcement; for example, they will prevent you from opening port 80 and sending SMTP messages.
Even if a device is 'smart', it can still be configured to allow or disallow uncommon HTTP verbs like PATCH. So now we must factor in the security posture of the organization hosting the device. Places with open wifi like Starbucks and airports may be quite draconian and lock down security; the same goes for some corporations, especially those dealing with sensitive data (financial, personal info).
The upshot is that, depending on the demographic of your users, PATCH might be problematic if you do not have a fallback mechanism. I would consider restricted users in the following domains more likely to have issues: sensitive corporate environments, schools, military organizations.
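One widely used fallback convention (hedged: it is a convention, not a standard, and your server must be written to honor it; the URL and body below are illustrative) is to tunnel PATCH through POST with an override header:

POST /api/things/42 HTTP/1.1
Host: example.com
X-HTTP-Method-Override: PATCH
Content-Type: application/json

{ "name": "new value" }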
|
Suppose my server exposes an HTTP-based API that uses the PATCH method introduced by RFC 5789. Is it possible that clients (browsers or otherwise) behind corporate firewalls, proxies, caches, parental controls filters and the like will encounter any problems using this method? If so, how likely is this?
Given that PATCH was not part of the original HTTP specs, but introduced later on, I could imagine that some programs will simply reject such requests because of the "invalid" method. On the other hand, I hope that such software simply passes through everything and at most apply some restrictions to certain HTTP methods such as POST (e.g. not caching its results).
Note that I do not ask about PATCH support on the server side or within the browser, but only about components between client and server that I neither know nor control. Also, the question whether or not PATCH in itself is a good idea for APIs is out of scope for this question.
|
Can the HTTP method "PATCH" be safely used across proxies etc.?
|