Response | Instruction | Prompt
---|---|---|
No such feature exists. http://bugs.mysql.com takes "feature requests".
Such a feature would necessarily involve MySQL; it cannot be done entirely in the OS's filesystem. This is because a running MySQL server caches information in RAM that the filesystem does not know about, and because the information about a table/db/proc/trigger/etc. is not located entirely in a single file; extra info exists in other, more general, files.
With MyISAM, your goal was partially possible in the filesystem. A MyISAM table was composed of 3 files: .frm, .MYD, .MYI. Still, MySQL would need to flush something to forget that it knows about the table before the filesystem could move the 3 files somewhere else. MyISAM is going away, so don't even think about using that engine.
In InnoDB, a table is composed of a .ibd file (if using file_per_table) plus a .frm file, plus some info in the communal ibdata1 file. If the table is PARTITIONed, the layout is more complex.
In version 8.0, most of the previous paragraph will become incorrect -- a major change is occurring.
"Transactions" are a way of undoing writes to a table...
BEGIN;
INSERT/UPDATE/DELETE/etc...
if ( change-mind )
then ROLLBACK;
else COMMIT;
Effectively, the undo log acts as a recycle bin -- but only at the record level, and only until you execute COMMIT.
MySQL 8.0 will add the ability to have DDL statements (eg, DROP TABLE) in a transaction. But, again, it is only until COMMIT.
Think of COMMIT as flushing the recycle bin.
|
I'm using MariaDB and I'm wondering if there is a way to install a 'recycle bin' on my server, so that if someone deletes a table or anything it gets shifted to the recycle bin and restoring it is easy.
I'm not talking about mounting things to restore it and all that stuff, but literally a 'safe place' where it gets stored (I have more than enough space) until I decide to delete it or just keep it there for 24 hours.
Any thoughts?
|
MySQL/MariaDB 'recycle bin'
|
Have you searched for a backup package? Here is one I am currently using; it works fine for me for local backups and has a scheduling feature as well. You can explore more:
https://github.com/spatie/laravel-backup
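As a minimal sketch (the application path is only a placeholder, and this assumes the package's documented backup:run artisan command), you could trigger it from a plain cron entry on the host instead of Laravel's scheduler:
# Hypothetical crontab entry: run spatie/laravel-backup every day at 01:00
0 1 * * * cd /path/to/your/laravel/app && php artisan backup:run >> /dev/null 2>&1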
|
I currently own an application developed in Laravel 5.5 deployed in a shared hosting service.
I need to implement an automated backup system with these features:
Run automatically (cron) every day (ideally via Laravel's scheduler)
Include (laravel files, uploaded files (storage / app) and database)
Copy the backup to the cloud as a final step (Dropbox or Google Drive, for example)
Is there any recommendation on how to do this for a Laravel application?
I guess there are different alternatives
From a unix script (I'm not an expert)
An application in PHP
The option I think would be ideal is to add the code within the same Laravel application and use Laravel's own scheduler (cron),
but I do not know if it's possible from Laravel to:
Compress files
Backup the database
Upload files to the cloud
I accept ideas and recommendations, thank you!
|
Laravel Backup to Cloud programming
|
You need to send the command by ssh. The modified version of your script should do the task.
#! /bin/bash
TIMESTAMP=$(date +"%Y-%m-%d")
PAST=$(date +"%Y-%m-%d" -d "30 days ago")
BACKUP_DIR="/volume1/dir/dir2/$TIMESTAMP"
PAST_DIR="/volume1/dir/di2/$PAST"
HOST="myip"
MYSQL_USER="username"
MYSQL=/usr/bin/mysql
MYSQL_PASSWORD="password"
MYSQLDUMP=/usr/bin/mysqldump
SSH_KEY=""
SSH_USER=""
SSH_HOST=""
REMOTE_COMMAND="$MYSQL -h $HOST --user=$MYSQL_USER -p$MYSQL_PASSWORD -s -e 'SHOW DATABASES' | grep -v 'information_schema'"
databases=$(ssh -i $SSH_KEY ${SSH_USER}@${SSH_HOST} "$REMOTE_COMMAND")
created=0
for db in $databases; do
DUMP="$BACKUP_DIR/$db.sql"
# To save backup on remote machine
case $created in
0) REMOTE_COMMAND="rm -rv $PAST_DIR; mkdir -p $BACKUP_DIR"; created=1
ssh -i $SSH_KEY ${SSH_USER}@${SSH_HOST} "$REMOTE_COMMAND" ;;
*) ;;
esac
REMOTE_COMMAND="$MYSQLDUMP --force --opt -h $HOST --user=$MYSQL_USER -p$MYSQL_PASSWORD --databases $db | gzip -9 > $DUMP.gz"
ssh -i $SSH_KEY ${SSH_USER}@${SSH_HOST} "$REMOTE_COMMAND"
# To save backup on local machine
# case $created in
# 0) rm -rv $PAST_DIR; mkdir -p $BACKUP_DIR; created=1 ;;
# *) ;;
# esac
# REMOTE_COMMAND="$MYSQLDUMP --force --opt -h $HOST --user=$MYSQL_USER -p$MYSQL_PASSWORD --databases $db"
# ssh -i $SSH_KEY ${SSH_USER}@${SSH_HOST} "$REMOTE_COMMAND" | gzip -9 > $DUMP.gz
done
NOTE: you need to set the SSH_KEY (if you have SSH key-based authentication), SSH_USER, and SSH_HOST variables. I also provided a second (commented-out) variant in case you want to save the backup on your local machine, from where you started the command. Also note the -s flag with the mysql command (the SHOW DATABASES part), which prevents it from printing the column name (here, Database).
|
I'm using a shell script to back up my databases from a remote server to my Synology NAS. The script was working fine, but when I set the bind address in the MySQL config on the remote server to 'localhost', the script isn't working anymore. The script needs SSH authentication.
The question: how can I create the SSH connection / credentials in this script?
Below is the script I use:
#! /bin/bash
TIMESTAMP=$(date +"%Y-%m-%d")
PAST=$(date +"%Y-%m-%d" -d "30 days ago")
BACKUP_DIR="/volume1/dir/dir2/$TIMESTAMP"
PAST_DIR="/volume1/dir/di2/$PAST"
HOST="myip"
MYSQL_USER="username"
MYSQL=/usr/bin/mysql
MYSQL_PASSWORD="password"
MYSQLDUMP=/usr/bin/mysqldump
rm -rv $PAST_DIR
mkdir -p $BACKUP_DIR
databases=`$MYSQL -h $HOST --user=$MYSQL_USER -p$MYSQL_PASSWORD -e "SHOW DATABASES;" | grep -Ev "(Database|information_schema)"`
for db in $databases; do
$MYSQLDUMP --force --opt -h $HOST --user=$MYSQL_USER -p$MYSQL_PASSWORD --databases $db | gzip > "$BACKUP_DIR/$db.sql.gz"
done
|
Backup Mysql database with shell script - bind-address set to localhost
|
In Storage Volume Gateway Cached mode, data is written to S3 and cached locally for frequently accessed files.
Cached volumes let you use Amazon Simple Storage Service (Amazon S3) as your primary data storage while retaining frequently accessed data locally in your storage gateway. Cached volumes minimize the need to scale your on-premises storage infrastructure, while still providing your applications with low-latency access to their frequently accessed data. You can create storage volumes up to 32 TiB in size and attach to them as iSCSI devices from your on-premises application servers. Your gateway stores data that you write to these volumes in Amazon S3 and retains recently read data in your on-premises storage gateway's cache and upload buffer storage.
Cached volumes can range from 1 GiB to 32 TiB in size and must be rounded to the nearest GiB. Each gateway configured for cached volumes can support up to 32 volumes for a total maximum storage volume of 1,024 TiB (1 PiB).
In the cached volumes solution, AWS Storage Gateway stores all your on-premises application data in a storage volume in Amazon S3.
Cached Volume Architecture
|
|
The AWS documentation clearly states for Gateway Stored Volumes: "This data is asynchronously backed up to S3 in the form of Amazon EBS snapshots."
But there is no mention of how Storage Gateway Cached volumes data is replicated - sync/async snapshots?
The documentation reads
"Cached volumes let you use Amazon Simple Storage Service (Amazon S3) as your primary data storage while retaining frequently accessed data locally in your storage gateway."
"In the cached volumes solution, AWS Storage Gateway stores all your on-premises application data in a storage volume in Amazon S3. "
Can someone explain?
Thanks
|
AWS Storage Volume Gateway - Cached volumes
|
rsync manual says
INCLUDE/EXCLUDE PATTERN RULES
...
o use ’**’ to match anything, including slashes.
so you can do
rsync -a --exclude='Folder2/**.tar' Source/ Target
Note this is different from bash's globstar option where you would use Folder/**/*.tar.
|
I backup my data with rsync and would like to exclude a specific filetype in only one directory (and its subdirectories). For example I have:
$ ls Source/
Folder1/a.tar
Folder1/b.dat
Folder2/c.tar
Folder2/d.dat
Folder2/Subfolder3/e.tar
Folder2/Subfolder3/f.dat
Folder2/Subfolder3/g.pdf
Now I would like to sync all files except for the .tar files in Folder2 and its subfolder. At the end it should look like this:
$ ls Target/
Folder1/a.tar
Folder1/b.dat
Folder2/d.dat
Folder2/Subfolder3/f.dat
Folder2/Subfolder3/g.pdf
Does someone know how to do that? I played around with the --exclude option, but without luck.
|
rsync: Exclude specific filetype in only one directory
|
I'm skeptical that there is a holistic, multi-machine, multi-container, application/container-agnostic custom solution. From my point of view a lot of orchestration activity is necessary in the first place, and I wonder if you wouldn't use something like Kubernetes anyway, which - supposedly - comes with its own backup solution.
For a single-machine, multi-container setup I suggest storing your containers' data, configuration, and eventual build scripts within one directory tree (e.g. /docker/) and using a standard file-based backup program to back up that root directory; see the sketch below.
Use docker-compose to manage your containers. This lets you store the configuration and even build options in a file (or files). I have an individual compose file for each service, but a single one would also work.
Have a subdirectory for each service. Bind-mount the container's volumes (bind-mount directories) there. If you need to adapt the build process more thoroughly you can easily store scripts, sources, Dockerfiles, etc. in there as well.
Since containers are supposed to be ephemeral, all persistent data should live in bind mounts and therefore in the main docker directory.
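A minimal sketch of that layout and a matching file-based backup command (the paths and the service name are just assumptions):
# Hypothetical layout: everything for one service lives under /docker/<service>/
#   /docker/myservice/docker-compose.yml   <- container configuration
#   /docker/myservice/data/                <- bind-mounted into the container
# A plain file-based backup of the whole tree then captures config and data together:
tar --create --gzip --file /backups/docker-$(date +%F).tar.gz /docker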
Thank you! I agree with, and am usually already doing, pretty much all of what you write. The one problem that I haven't solved is how to deal with containers whose state cannot be backed up by just copying the data volume (e.g. databases)
– Krumelur
Sep 19, 2017 at 20:32
|
|
I want to take a holistic approach backing up multiple machines running multiple Docker containers. Some might run, for example, Postgres databases. I want to back up this system, without having to have specific backup commands for different types of volumes.
It is fine to have a custom external script that sends e.g. signals to containers or runs Docker commands, but I strongly want to avoid anything specific to a certain image or type of image. In the example of Postgres, the documentation suggests running postgres-specific commands to backup databases, which goes against the design goals for the backup solution I am trying to create.
It is OK if I have to impose restrictions on the Docker images, as long as it is reasonably easy to implement by starting from existing Docker images and extending.
Any thoughts on how to solve this?
I just want to stress that I am not looking for a solution for how to back up Postgres databases under Docker, there are already many answers explaining how to do so. I am specifically looking for a way to back up any volume, without having to know what it is or having to run specific commands for its data.
(I considered whether this question belonged on SO or Serverfault, but I believe this is a problem to be solved by developers, hence it belongs here. Happy to move it if consensus is otherwise)
EDIT: To clarify, I want to do something similar to what is explained in this question
How to deal with persistent storage (e.g. databases) in docker
but using the approach in the accepted answer is not going to work with Postgres (and I am sure other database containers) according to documentation.
|
Backup-friendly Docker volumes
|
You could use the dd command and stream the output to S3.
From within the instance:
$ dd if=/dev/xvda bs=1M status=progress | aws s3 cp - s3://your-bucket-name/root_device.img
Substitute /dev/xvda with the device you want to back up.
|
|
I have a running instance in EC2. Its "Root Device Type" is Instance-store (not EBS).
And I'd like to back it up manually into S3.
Is it possible?
Thank you!
|
Is it possible to backup an AWS EC2 "instance-store" type instance into S3?
|
This script will do the trick. You could name it vimBackup or anything else; just do not use an existing command's name. Then copy the script into one of the directories in your $PATH variable, or append a custom script folder containing this script to $PATH. Then you will be able to use it without typing its full path.
#!/bin/bash
[[ $1 == "" ]] && echo "Command expects a file path" && exit 1
[[ ! $# == 1 ]] && echo "Command expects only one parameter" && exit 1
[[ -w $1 ]] && cp "$1" "$1_$(date +%Y-%m-%d_%H:%M)" # if the file exists and is writable, make a timestamped copy
vim "$1"
|
Is there a way to configure vim so that, instead of creating a temporary .swp, every time a save is made it automatically creates a file containing the previous save, named with the date and time of the saving? Suppose I'm editing the file name_of_the_file.txt; it would create e.g. name_of_the_file-05-17-2017-11:20.txt in a folder, let's say ~/.vim-bckp/name_of_the_file/. Or, better, a custom command for saving that does the above and avoids flooding the HDD with minor changes.
|
How to configure vim to save extra copies
|
The -user and -pas[sword] parameters should come before the path to the files:
gbak -r -p 4096 -o -user sysdba -pas masterkey e:\mybackup.fbk localhost:e:\bddados.fdb
gbak documentation
|
I previously asked how to make a backup of a Firebird database in
I need to backup or clone one remote firebird database or export it to Sql server
Now the backup is complete, but when I try to restore it to Firebird on my computer, I get an error.
I use this command:
gbak -r -p 4096 -o e:\mybackup.fbk localhost:e:\bddados.fdb -user sysdba -pas masterkey
The error I receive is
gbak: ERROR:Your user name and password are not defined. Ask your database administrator to set up a Firebird login. gbak:Exiting before completion due to errors
But I tested my Firebird locally with this user and password and it's OK. Does the backup need the password specified in the command that generated it, or do I need to use the same one as the old database?
|
recover backup of firebird fail
|
You can use the GBAK command to back up a remote database to the local hard disk.
Here's the GBAK command:
gbak -b -v 192.168.0.20:/dbases/mydb.fdb C:\mybackup.fbk -user SYSDBA -pass 123456
|
I have a legacy system using a remote Firebird 2.5 database. I need to clone this database for backup. I do not have access to the file system of the server; I can only access it with a connection string.
How can I do this?
|
I need to backup or clone one remote firebird database or export it to Sql server
|
You can back up your cloud data to some local storage; CloudBerry has a "Cloud to Local" option.
There is also an ability to make Cloud to Cloud migration
– Alex
May 30, 2017 at 14:15
|
|
I pull data into Google BigQuery tables and also generate some new datasets based on these data daily.
I would save these original data and generated datasets in Google Cloud Storage for two purposes:
They are the backup copy of my Google BigQuery data.
Also, some of these datasets saved in Google Cloud Storage would be dumped and loaded into AWS Elasticsearch (so they are also the backup copy for AWS Elasticsearch).
BigQuery or AWS Elasticsearch may only keep 2 months to 1 year of data. So for data older than that, I only have one copy, on Google Cloud Storage. (I need to have some backup options, such as 1 month of snapshots for Google Cloud Storage, which I can go back to if needed.)
My question is:
How could I keep a backup or snapshot of Google Cloud Storage data to prevent data loss in Google Cloud Storage, such that I can trace back at least 7 days or 1 month of the data in Google Cloud Storage?
That way, in the case of data loss (accidentally deleted data etc.), I can go back a few days to get the data back.
Thanks!
|
Backup options or snapshots of Google Cloud Storage data?
|
Based on your brief description, I can only answer in general terms.
If you want to run exactly the same website on another computer, you will need to back up all the related items, including the database and the installed plugins, and then restore them on your new computer.
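As a minimal sketch (the paths assume a default XAMPP install and a shell such as Git Bash, so they are only assumptions; the database password will be prompted for), backing up the databases and the site files could look like this:
# Dump all MySQL databases that XAMPP is hosting (mysqldump ships with XAMPP under mysql/bin)
/c/xampp/mysql/bin/mysqldump -u root -p --all-databases > all_databases_backup.sql
# Archive the web root with both the Joomla and WordPress sites
zip -r htdocs_backup.zip /c/xampp/htdocs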
|
I have one Joomla site and one WordPress site in it. My laptop hard drive is failing. I have zipped my htdocs folder but that's about it
* excellent answers, just exported my database right before a crash. *
Thank you so much
|
How to backup XAMPP content? I need to run it on another laptop soon
|
Please follow the steps provided in the links below:
Secure Your Mongodb Database
Backup of your Mongodb Database on server
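To complement the links, here is a minimal sketch of a plain cron entry (rather than a Node.js job) using mongodump; the user name, password, and paths are only placeholder assumptions:
# Crontab entry: dump the MongoDB server every night at 02:00 into a dated folder
# (the % characters must be escaped in crontab)
0 2 * * * /usr/bin/mongodump --host localhost --port 27017 -u backupUser -p 'secret' --authenticationDatabase admin --out /var/backups/mongo/$(date +\%F)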
|
How to secure MongoDB database with authentication?
How can I automate backups of a MongoDB database using a Node.js cron job and store them on the server?
|
Authenticate Mongodb and Auto Backup of Database on Server Using Cron Job
|
I hope it helps you.
<?php
$DBUSER="";
$DBPASSWD="";
$DATABASE="";
$filename = "backup-" . date("d-m-Y") . ".sql.gz";
$mime = "application/x-gzip";
// Send the dump to the browser as a downloadable gzip attachment
header( "Content-Type: " . $mime );
header( 'Content-Disposition: attachment; filename="' . $filename . '"' );
// Stream mysqldump output through gzip straight to the client
$cmd = "mysqldump -u $DBUSER --password=$DBPASSWD $DATABASE | gzip --best";
passthru( $cmd );
// Do not echo anything after passthru(), or it will be appended to the downloaded file
exit(0);
?>
|
How can I create a MySQL database backup using a one-file PHP script? system() is disabled.
I have no cPanel account details; from the site admin panel I can only read and write PHP files.
|
How to create mysql database backup using one file php script (system() is disabled)
|
A here document is treated as a double-quoted string, so parameter expansions and command substitutions are evaluated before the command reads from them. Quote any part of the delimiter to have the here document treated as a single-quoted string.
cat <<\EOT >> ~/.bashrc
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# create a file backup in ~/filebackup/ with timestamp
filebackup () { cp "${@}" ~/"filebackup/${@}_$(date +%Y-%m-%d_%H:%M:%S).bk"; }
EOT
By any part, I mean any of the following would work just as well:
'EOT'
E\OT
"E"OT
et cetera.
|
This question already has answers here:
How to avoid heredoc expanding variables? [duplicate]
(2 answers)
Closed 7 years ago.
I am attempting to create a bash script that will allow me to install the same bash function across multiple machines. This particular function creates a copy of a file with a timestamp in a backup directory:
filebackup () { cp "${@}" ~/"filebackup/${@}_$(date +%Y-%m-%d_%H:%M:%S).bk"; }
Here is my bash script:
cat <<EOT >> ~/.bashrc
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# create a file backup in ~/filebackup/ with timestamp
filebackup () { cp "${@}" ~/"filebackup/${@}_$(date +%Y-%m-%d_%H:%M:%S).bk"; }
EOT
source ~/.bashrc
When I execute the script, however, the ${@} are missing and the $(date +%Y-%m-%d_%H:%M:%S) has been evaluated. Here is what has been appended to the .bashrc file:
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# create a file backup in ~/filebackup/ with timestamp
filebackup () { cp "" ~/"filebackup/_2017-01-05_12:07:56.bk"; }
How can I ensure that the function is copied literally into the file?
|
How to install a bash function containing variables using a bash script? [duplicate]
|
If you are certain that the process gets killed due to consuming too much memory, then you can try increasing the swappiness value in /proc/sys/vm/swappiness. By increasing swappiness you might be able to get away from this scenario. You can also try tuning oom_kill_allocating_task; the default is 0, which tries to find the rogue memory-hogging task and kill it. If you change it to 1, the oom_killer will kill the calling task instead.
If none of the above works, then you can try oom_score_adj under /proc/$pid/oom_score_adj. oom_score_adj accepts values in the range -1000 to 1000; the lower the value, the less likely the process is to be killed by the oom_killer. If you set this value to -1000 it disables OOM killing for that process. But you should know exactly what you are doing.
Hope this will give you some idea.
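A minimal sketch of that last option (run as root; the process match string is just an assumption based on your tar command line):
# Make the running backup much less attractive to the OOM killer.
# A value of -1000 would disable OOM killing for it entirely; use with care.
for pid in $(pgrep -f "tar --create"); do
    echo -500 > /proc/"$pid"/oom_score_adj
done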
|
|
How do you prevent a long-running memory-intensive tar-based backup script from getting killed?
I have a cron job that runs daily a command like:
tar --create --verbose --preserve-permissions --gzip --file "{backup_fn}" {excludes} / 2> /var/log/backup.log
It writes to an external USB drive. Normally the file generated is 100GB, but after I upgraded to Ubuntu 16, now the log file shows the process gets killed about 25% of the way through, presumably because it's consuming a lot of memory and/or putting the system under too much load.
How do I tell the kernel not to kill this process, or tweak it so it doesn't consume so many resources that it needs to be killed?
|
How to prevent long-running backup job from being killed
|
Unfortunately, it's not possible to do that.
Can I migrate a Backup vault to a Recovery Services vault?
Unfortunately no, at this time you can't migrate the contents of a
Backup vault to a Recovery Services vault. We are working on adding
this functionality, but it is not currently available.
https://learn.microsoft.com/en-us/azure/backup/backup-azure-backup-faq
It's better to wait for a while.
|
I've setup an Azure Backup vault some time ago and made backups of my systems to it. The backup vault is of type 'Backup vault (classic)'
Now there is a new kind of Azure Backup vault that enables alerts among other options. I need to make use of those options.
How can I migrate the classic vault (which contains a lot of historical information) to the new vault type (Recovery Services Vault)?
I cannot find any option in the portal, nor can I find a Powershell script to execute the migration.
|
How to migrate an Azure 'Backup vault (classic)' to a 'Recovery Services vault'?
|
It depends, if it's just a small DB, the bpipe plugin is probably the easiest to get started. There's also the Bareos mysql-python plugin, which is more flexible than bpipe, but like bpipe it makes use of mysqldump. So both of them only do full backups. The third option is the newer Bareos MySQL / MariaDB Percona xtrabackup Plugin, which can also do incremental backups of InnoDB tables.
All Bareos plugins for backing up MySQL/MariaDB are documented at
http://doc.bareos.org/master/html/bareos-manual-main-reference.html#BackupOfAMySQLDatabase
|
What plugin should I use for that?
Can you show me your config from bareos-fd.conf and the FileSet section from bareos-dir.conf?
Thank you!
|
How can I backup mysql database with bareos?
|
Kernel.system calls the given shell command. When it fails, it returns a false value.
In your case that means git clone --bare #{path_to_bundle} #{project.repository.path_to_repo} > /dev/null 2>&1 fails.
You can check why it fails by executing this command on the command line by hand, without > /dev/null 2>&1.
To get the command, you can add a debug print before the call:
if Kernel.system(pp("git clone --bare #{path_to_bundle} #{project.repository.path_to_repo} > /dev/null 2>&1"))
|
I am trying to make a backup restore of GitLab and it kind of works, but the command line always says that the restore of the repositories failed. I think I found the conditional statement in the code which is responsible for the [failed] message. Does someone have a clue what this is doing, or a direction in which I should go to find my mistake?
if Kernel.system("git clone --bare #{path_to_bundle} #{project.repository.path_to_repo} > /dev/null 2>&1")
puts "[DONE]".green
else
puts "[FAILED]".red
end
|
What does this Ruby code mean?
|
If you want to have a "backup" of some sort that is in-sync with the data in the cluster, consider building two clusters. Whatever indexing, updating, deleting operations the "main" cluster has, you need to mirror those operations on the "backup" cluster as well. There is no other way.
|
I use the snapshot method to back up my Elasticsearch nodes; it works as follows:
PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true
but after new data is added to Elasticsearch, it's not contained in the snapshot, so we need to run it periodically. But there will be data loss if something goes wrong between two snapshots. Is there any way to handle that?
Is there any continuous backup method for Elasticsearch?
|
Is there any continuous backup method for Elasticsearch?
|
There is a quirk with bucket permissions, where you need to specify the bucket itself and its keys separately, using the /* wildcard specification. Additionally, even for a write operation, a List action may be required.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1465916250000",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:PutObjectAcl",
"s3:ListBucket",
"s3:GetBucketLocation"
],
"Resource": [
"arn:aws:s3:::atlas-nas-backups",
"arn:aws:s3:::atlas-nas-backups/*"
]
}
]
}
I also added the "s3:GetBucketLocation" and "s3:ListBucket" actions. As previously noted, even if you are only writing objects, the service may want to list the item and get the location (region) of the bucket. You may not need these last two, but just wanted to show you them just in case.
|
|
I have a NAS which supports backup of files to AWS S3. I have created a user under IAM in the AWS console and I have tried to generate a policy which only allows this user access to a specific S3 bucket with read/write permissions. The following is the policy I have generated:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1465916250000",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": [
"arn:aws:s3:::atlas-nas-backups"
]
}
]
}
However, when I run this through the policy simulator against all actions for S3, each one fails. What am I missing that prevents this user from writing objects to the bucket? I don't want this user to have access to any other AWS resources, only the ability to back up files to a specific bucket.
|
AWS IAM Policy to allow user access to specific S3 bucket for backup
|
You can simply use the SHOW DATABASES command. Read all the databases in a for loop like this and call mysqldump for each:
databases=`mysql --user=$DB_USER --password=$DB_PASS -e "SHOW DATABASES;" | tr -d "| " | grep -v Database`
for db in $databases; do
echo "Dumping database: $db"
mysqldump --force --opt --user=$DB_USER --password=$DB_PASS --databases $db > $BACKUP_DIR/`date +%Y%m%d`.$db.sql
done
|
I have a simple shell script which backs up the whole www directory and a single database once a week. It is working perfectly. Now I want to extend it a little and make it back up all databases, not just one. How can I do this?
This is the current script
#!/bin/bash
NOW=$(date +"%Y-%m-%d-%H%M")
FILE="backup.$NOW.tar"
BACKUP_DIR="/home/user/backups/"
WWW_DIR="/var/www/"
DB_USER="dbuser"
DB_PASS="dbpass"
DB_NAME="dbName"
DB_FILE="backup.$NOW.sql"
WWW_TRANSFORM='s,^home/user/www,'
DB_TRANSFORM='s,^home/user/backups,database,'
tar -cf $BACKUP_DIR/$FILE --transform $WWW_TRANSFORM $WWW_DIR
mysqldump -u$DB_USER -p$DB_PASS $DB_NAME > $BACKUP_DIR/$DB_FILE
tar --append --file=$BACKUP_DIR/$FILE --transform $DB_TRANSFORM $BACKUP_DIR/$DB_FILE
rm $BACKUP_DIR/$DB_FILE
gzip -9 $BACKUP_DIR/$FILE
echo 'backup finished', $FILE
|
Shell script to backup all databases and www directory
|
It is perfectly safe.
The compression provided by Windows is transparent. This means that from the perspective of any application, those are just normal files which can be read from and written to like any other file. Internally, Windows compresses and uncompresses the data on the fly one layer below what is visible to applications.
Therefore, 7-Zip makes no distinction between files already compressed by Windows and others. However, it probably won't make much sense to keep using Windows' compression if you are going to 7-Zip the files anyway. Skipping the manual decompression step will probably be a lot faster as well, because even when you don't manually uncompress the files, Windows implicitly has to uncompress the data when 7-Zip wants to read it.
|
I've been searching on the Internet for last week but I haven't found anything. I have such a problem:
We have daily SQL backups in company starting from 2012 up to today. All those files are "compressed" using standard disk compression.
It's about ~500 files per month, about 24,000 files in total. It's a huge amount of disk space (every day costs about 9 GB now, 6 GB in 2012). And it's a backup, so we keep those files in three different places.
I want to 7z those files because, for example, if a database is 2 GB, after the Windows disk compression feature it takes ~1200 MB. When I 7z this file, it takes 200 MB.
I've made a simple batch script that decompresses those files, packs them, and deletes them. After one day it has done one month. Decompressing costs a lot of time.
I see that zipping SQL .bak files is very common and safe. But now the real question is: do I have to decompress each file and then pack it, or is it safe to 7z the "compressed" files directly?
Here is the script:
echo off
FOR /R %%i IN (*.bak) DO (
echo Decompressing %%i
compact /u "%%i" >> log.txt
ping 127.0.0.1 -n 2 > nul
echo Zipping %%i
"C:\Program Files\7-Zip\7z.exe" a "%%i.7z" "%%i" >> log.txt
echo Deleting %%i
del "%%i" >> log.txt
echo _____
)
|
Is it bad to 7zip SQL backups compressed using NTFS compression?
|
When detaching a LocalDB database,
we have to run an ALTER DATABASE ... SET SINGLE_USER WITH ROLLBACK IMMEDIATE command first to terminate all the incomplete transactions.
To put it simply: before we close a restaurant, we have to announce to the customers in the restaurant, 'This restaurant will be closed very soon, please finish eating and leave before closing time.'
If you need to re-attach the LocalDB database to the same computer (same LocalDB server),
some activities like these have to be avoided to prevent the ghost (bug?):
1) Trying to open the detached LocalDB database programmatically in code.
2) It seems that querying for the name of the detached LocalDB database (a SELECT COUNT on the database name against the master database) also reminds the LocalDB server of its existence.
The strange thing, which should be fixed as a bug, is this:
if we detach a LocalDB database from the master DB, I think it should not be possible to open the detached database programmatically in code. However, code like SqlConnection.Open(); runs and passes without any exception (error), and immediately the full-path ghost is created.
It seems the name of the detached database is deleted from the master DB, but the server still connects to the detached database through the physical path in the provided connection string.
And to decide whether some LocalDB database needs to be attached, or to check whether it's detached or not, I've developed my own solution (simple code) to do this.
Hope my experience helps someone else.
|
My development environment is C#, SQL Server 2014 LocalDB, SQL Server 2012 Express, Windows 10, Visual Studio 2015.
When users of my application need to move their localDB (.mdf) file to another place, another computer (LocalDB server), detaching from computer A and attaching to computer B and then, we can run BACKUP database command successfully.
However, in case users mistakenly detached the database, or changed their mind and want to keep using it on computer A, my application has to be able to re-attach the detached LocalDB database file (.mdf) to the same computer (same LocalDB server).
When I run the BACKUP DATABASE command after my application has re-attached the database file to the same computer successfully, an error message shows up:
Unable to open physical file, The process cannot access the dbfile because the dbfile is in use by another process
BACKUP DATABASE terminated abnormally
So I opened Microsoft SQL Server Management Studio and can see 2 database entries with specific names: the first is greendb.mdf (name only), the second is c:\users\kay\appdata\greendb.mdf (with the full path).
I think the c:\users\kay\appdata\greendb.mdf (with full path) entry is created when the database is detached. And when I click it through Security - Login - kay - User Mapping, unlike other databases which show their permissions inside, the detached database with the full path doesn't show its permissions and shows an error message like:
Unable to cast 'System.DBNull' object to 'System.String' (Microsoft.SqlServer.Smo)
It seems Microsoft LocalDB Server still recognizes the detached database with full path and is confused with newly attached database (only name without full path).
Any excellent ideas will be highly appreciated !
Thank you so much !
|
SQL Server LocalDB: after detach and re-attach database to same computer(machine, same path), cannot backup database
|
Basically I follow the approach as described in the question. I have added the following files and folders to the Visual Studio project and then later to version control (I have just expanded the more interesting folders which are not part of the project file by default, but needed when you redeploy the solution from scratch):
As described the backend is hosted on Azure SQL.
Open Live Writer makes it very easy to host article content on another ftp server.
By following this approach it is very easy to redeploy the complete solution, e.g. for umbraco upgrades or major changes on the site.
|
I did the following steps:
I have created a new Umbraco instance by using the nuget package and visual studio.
I have deployed to Azure, using Azure DB as backend.
Installed the articulate package.
Added my project to version control (including App_Plugins folder, articulate dlls and so on).
I am able to delete the Umbraco installation and I can restore it completely from version control, including Articulate.
Now I am starting to add content, articles, pictures and so on.
I think I do not need to back up the whole folder on the web server. I am doing regular backups of my Azure DB, and I need some folders which are also filled with new content, like:
media (filling with pictures which I am adding to my articles)
App_Plugins (keeping installed packages in umbraco)
App_Data/packages (file directory for installed packages)
App_Data/umbraco.config (keeping some content for Articulate)
So, is this everything I need to be able to restore the whole system by using the version control part, azure db backup and the listed folders?
|
Which Umbraco folders do I need to backup after deploying from VS and adding to version control?
|
It appears that you are referring to the Instance Store SSD volume that is provided as part of an m3.large Amazon EC2 instance.
Instance Store volumes are temporary (aka "ephemeral") and the content is lost when the instance is Stopped, Terminated or fails. Therefore, it is recommended only for temporary files and swap files. Be sure to copy off any data you wish to keep before the instance is Stopped.
Instance Store volumes are not the same as Elastic Block Store (EBS) volumes. While EBS provides a snapshot capability, this is not available for Instance Store volumes.
Instead, you must copy off any data you wish to keep via normal filesystem commands, or run traditional backup software. There is no snapshot-like capability available for Instance Store volumes.
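As a minimal sketch of that last point (the mount point and bucket name are only placeholder assumptions, and the AWS CLI must be configured on the instance), you could stream an archive of the instance-store data straight to S3:
# Archive the data on the instance-store volume and stream it to S3 without a local temp file
tar -czf - /mnt/instance-store/data | aws s3 cp - s3://your-bucket-name/instance-store-backup.tar.gz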
|
I have an m3.large instance. I can find the other EBS volumes associated with that instance in the Volumes section,
but I am not able to find my 32 GB SSD disk.
How can we take backup of this SSD?
|
How to take a backup of an SSD AWS [ec2]?
|
The single quotes around the name of the directory to be excluded were causing the problem (explained in this answer).
I also stored all the options in an array, as explained here.
Removing the single quotes, storing the options in an array, and double-quoting the variables, as suggested by @Cyrus in the comments, solved the problem.
Also I had to change #!/bin/sh to #!/bin/bash.
Updated script:
#!/bin/bash
DRY_RUN=""
if [ "$1" = "-n" ]; then
DRY_RUN="n"
fi
OPTS=( "-a""$DRY_RUN""v" "--delete" "--delete-excluded" "--exclude=/bin/" )
SRC="/home/vikram/Documents/sem4/oop/lab/java_assignments/student_information_system/"
DEST="/home/vikram/Documents/sem4/oop/lab/java_assignments/student_information_system_backup"
echo "rsync ${OPTS[@]} $SRC $DEST"
rsync "${OPTS[@]}" "$SRC" "$DEST"
|
I have written a bash script to backup my project directory but the exclude option isn't working.
backup.sh
#!/bin/sh
DRY_RUN=""
if [ $1="-n" ]; then
DRY_RUN="n"
fi
OPTIONS="-a"$DRY_RUN"v --delete --delete-excluded --exclude='/bin/'"
SOURCE="/home/vikram/Documents/sem4/oop/lab/java_assignments/student_information_system/"
DEST="/home/vikram/Documents/sem4/oop/lab/java_assignments/student_information_system_backup"
rsync $OPTIONS $SOURCE $DEST
When I am executing the command separately on the terminal, it works.
vikram:student_information_system$ rsync -anv --delete --delete-excluded --exclude='/bin/' /home/vikram/Documents/sem4/oop/lab/java_assignments/student_information_system/ /home/vikram/Documents/sem4/oop/lab/java_assignments/student_information_system_backup
sending incremental file list
deleting bin/student_information_system/model/StudentTest.class
deleting bin/student_information_system/model/Student.class
deleting bin/student_information_system/model/
deleting bin/student_information_system/
deleting bin/
./
.backup.sh.swp
backup.sh
backup.sh~
sent 507 bytes received 228 bytes 1,470.00 bytes/sec
total size is 16,033 speedup is 21.81 (DRY RUN)
vikram:student_information_system$
|
Not being able to exclude directory from rsync in a bash script
|
please refer to this http://orientdb.com/docs/last/Console-Command-Export.html
You can even export only classes you need.
|
I have a big (?) DB (more than 3 GB). I would like to copy only the DB structure, plus the data for one class, to another server.
Is there an easy way to do it, other than a backup + restore of the whole DB and then emptying all the classes that I don't need?
Thanks
|
OrientDB Backup structure and class value
|
On computer "STUDENT-PC2" locate the file share that is destination for backups. In file share permissions add account named as your SQL machine plus dollar sign, like this: "SERVER$" - type that instead of username. Tick write+read permissions for that artificial "user", and hit OK to close. On the file system level configure permissions the same (write+read) or let Everyone write at first to test and then reduce.
In summary:
if both computers are not part of the same domain, the other computer (STUDENT-PC2) sees all local accounts from "SERVER" as "SERVER$" account, no matter what they real name is.
you need to set permissions on destination (STUDENT-PC2) for account "SERVER$" at two levels: file share permission level, and ntfs file system permission level.
|
|
I am currently trying to take a backup of a database to a network UNC share, but it's giving me an error.
I have two PCs connected in a simple NETWORK, not a domain.
From both pcs, I can easily create and edit files on either one.
One PC on which SQL SERVER is running and database files are located, is named SERVER.
Another PC on which I want to take backup, is named STUDENT-PC2. On this pc, drive d: is a shared drive and I set the full permission for this folder for Everyone, IUSER,NETWORK,NETWORK SERVICE
When I run following command from SQL SERVER MANAGEMENT STUDIO on SERVER, it throws me error as below.
I am running sql server service as NETWORK SERVICE
COMMAND
backup database dpmt to disk='\\STUDENT-PC2\d\DPMT_BACKUP_17032016_102719.Bak'
ERROR
Msg 3201, Level 16, State 1, Line 1
Cannot open backup device '\\STUDENT-PC2\d\DPMT_BACKUP_17032016_102719.Bak'. Operating system error 5(Access is denied.).
Msg 3013, Level 16, State 1, Line 1
BACKUP DATABASE is terminating abnormally.
|
SQL Server backup fails with network UNC share
|
To the best of my knowledge, you cannot. The file system in your running container only exists for the duration of the run. Without mounting a volume, you have no way to allow a second container access to the backup.
For future backups, you could create a second volume-only container that mounts /var/lib/postgresql/backup.
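A minimal sketch of that idea, reusing the names from your own commands (the new container name bevbackup is just an assumption):
# Data-only container that owns the backup directory as a volume
docker create -v /var/lib/postgresql/backup --name bevbackup mdillon/postgis /bin/true
# Run the database container with both volume containers attached;
# a newer image version can later be started the same way and will still see the backups
docker run --name bevaddress -e POSTGRES_USER=bevsu -e POSTGRES_DB=bevaddress -P -d --volumes-from bevdata --volumes-from bevbackup mdillon/postgis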
|
I have a data only postgresql container
docker create -v /var/lib/postgresql/data --name bevdata mdillon/postgis /bin/true
I have a running Postgis container
docker run --name bevaddress -e POSTGRES_USER=bevsu -e POSTGRES_DB=bevaddress -P -d --volumes-from bevdata mdillon/postgis
I have made a backup of that database into the bavaddress container into directory /var/lib/postgresql/backup
I think this means that the backup data is in container bevaddress (the running process) and NOT the data only container bevdata which I think is good.
Now if I docker pull mdillon/postgis to a new version, how can I attach the folder /var/lib/postgresql/backup of container bevaddress so that a new instance and version of mdillon/postgis can access that folder to restore the database?
|
Migrate data from a data only postgresql docker volume
|
The database will have occurrences of the old URL within it. You need to search for the old URL and replace it with the new URL. However, you can't simply open a text editor and do a find-replace, as the WordPress DB uses serialized arrays. To account for the serialized data, use this tool: https://interconnectit.com/products/search-and-replace-for-wordpress-databases/. You upload this tool to your server and run it by navigating to its location in your browser. So if you are moving from http://www.oldurl.com to http://www.newurl.com, do find 'oldurl.com', replace 'newurl.com'. When you're done, delete the tool, as it's a huge security risk.
If you do this you shouldn't need to define the URLs in your wp-config.php. The rest of your steps are fine.
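As an alternative sketch (this swaps in a different tool than the one linked above, and assumes WP-CLI is available on the new host), WP-CLI's search-replace command also handles serialized data:
# Replace the old URL with the new one across all tables, leaving GUIDs untouched
wp search-replace 'http://www.oldurl.com' 'http://www.newurl.com' --skip-columns=guid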
|
This is my first WP website, in the past I've only dealt with normal html websites which are pretty easy to move around between new hosts and domains.
I'm not using a plug-in to backup and restore, here's what I have so far:
I backed up all the site files via FTP.
I backed up the database by using phpMyAdmin and exported my WP site database as an SQL file using the quick method.
This is what I want to do to move the website to the new domain and host:
Upload files to the new server
Create new DB and import the site db there using phpmyadmin.
Edit the wp-config.php file with the new server's database details (name, user).
Now what I'm stuck with is dealing with the URL, would it be enough to just add these lines in the wp-config.php file? :
define('WP_HOME','http://example.com');
define('WP_SITEURL','http://example.com');
replacing example with the new website url, as mentioned here http://codex.wordpress.org/Changing_The_Site_URL
Any help to properly move the website would be much appreciated and would be great if you could let me know if I'm doing something wrong, like for example if I should install a fresh WP first on the new host before moving files and DB to it.
|
How to adjust the URL settings when moving a WP site manually to a new host and domain?
|
Method 1
rdiff-backup foo /media/bar/foo
... just saying. ;-)
Method 2
Create an include list like this:
/home/me/foo
/home/me/other-foo
- /**
Then backup like this:
rdiff-backup --include-globbing-filelist include-list / /media/bar
In other words, tell rdiff-backup to backup everything, but then exclude everything you don't explicitly mention with the catch-all - /** rule at the foot of the include file.
My example starts at the root directory, but you could start at any level you like:
/foo
/other-foo
- /**
and
rdiff-backup --include-globbing-filelist include-list /home/me /media/bar
I like to start at root because a) it gives me maximum freedom to include and exclude stuff later, and b) I'm backing up some files from /etc anyway.
|
I have an issue with a backup procedure using rdiff-backup. Let's say that I want to make a backup in the following way:
rdiff-backup foo /media/bar
When I do that, all the contents of "foo" are stored in "/media/bar/", but not "foo" itself. This is a problem for me because I want to back up multiple directories with the --include-globbing-filelist include-list option; if I do that, all the contents of the folders listed in include-list will end up mixed together in the destination folder.
With rsync, if I do:
rsync -a foo /media/bar
"foo" and all of his contents will be transferred to "/media/bar" instead of his contents only.
So, is there any way to back up "foo" itself instead of only its contents?
|
rdiff-backup: Backup whole folder instead of its contents
|
Every time you apply a change in Kura, the changes are saved in a snapshot file. Each file is appended with a timestamp to denote the most recent change. These files are stored on disk at /opt/eclipse/kura/data/snapshots. If you have the latest snapshot backed up, you can:
Reinstall and start Kura
Open the Kura Web UI and navigate to Settings -> Snapshots
Use the 'Upload and Apply' button to upload your backed up file
Note: The snapshot files are encrypted on disk, to view them in plain text you must use the Kura Web UI to download the file. Also, you cannot manually copy the saved snapshot file to the new installation. You must use the Kura web UI to upload your file.
Thanks,
--Dave
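As a minimal sketch of keeping such a backup off the SD card (the destination host and path are just assumptions), you could copy the snapshot directory to another machine regularly:
# Copy the Kura snapshot files to another machine; restore later via the Kura Web UI
scp -r /opt/eclipse/kura/data/snapshots user@backup-host:/backups/kura/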
|
I have installed Kura on my Raspberry Pi, but my microSD card was corrupted last week and I had to re-install and re-configure Kura again after reformatting the SD card.
I want to be able to back up my work. Is it possible to copy the Kura files to another location, so that in case the SD card gets corrupted I can have it working again quickly without the need to re-install and reconfigure it from zero?
Thanks in advance for your help!
|
How can we back up KURA installation and configuration on a Raspberry Pi
|
Why not deploy a second container that is linked to the PostgreSQL one that does the backups?
It can contain a crontab within, together with instructions on how to upload the backup to Amazon S3, or some other secure storage in the cloud that will not fail even in case of an atomic war :)
Here's some basic information on linking containers: https://docs.docker.com/userguide/dockerlinks/
You can also use Docker Compose in order to deploy a fleet of containers (at least 2, in your case). If your "backup container" uploads stuff to the cloud, make sure you don't put your secrets (such as AWS keys) into the image at build time. Put them into the container at run-time. Here's more information on managing secrets using Docker.
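A minimal sketch of what the backup container's crontab could run (the link alias db, the database name, the bucket, and passing the password via the environment are all assumptions):
# Nightly at 03:00: dump the linked postgres container, compress, and stream to S3
# (PGPASSWORD and AWS credentials are expected to be injected at run-time, not baked into the image)
0 3 * * * pg_dump -h db -U postgres mydb | gzip | aws s3 cp - s3://your-bucket/backups/mydb-$(date +\%F).sql.gz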
|
|
I would like to deploy an application using docker and would like to use a postgresql container to hold my data.
However I am worried about losing data, so I need back-ups.
I know I could run a cron job on the host to dump the data out from the container, however this approach is not containerized and when I deploy to a new location, I have to remember to add the cronjob.
What is a good , preferably containerized, approach to implement rotating data backups from a postgresql docker container?
|
Cyclic backups of a docker postgresql container
|
It looks like a timeout of the RUN command in udev.
Instead of running the backup script (which normally takes a long time to complete) directly from udev, you can run it from a separate process activated by udev.
For example you can use at command:
ACTION=="add", KERNEL=="sd*", ENV{ID_FS_UUID_ENC}=="f91b8373-6349-4de3-86e1-6a2557f2c3f7", RUN+="/home/steve/backup/backup_at.sh"
backup_at.sh:
#!/bin/sh
echo /home/steve/backup/backup.sh | at now
Or you can try to run it in background:
ACTION=="add", KERNEL=="sd*", ENV{ID_FS_UUID_ENC}=="f91b8373-6349-4de3-86e1-6a2557f2c3f7", RUN+="/home/steve/backup/backup.sh &"
but I have not checked this method.
From http://lists.freedesktop.org/archives/systemd-devel/2012-November/007390.html:
It's completely wrong to launch any long running task from a udev rule
and you should expect that it will be killed. If you need to launch a
process from a udev rule, use ENV{SYSTEMD_WANTS} to activate a
service.
|
Problem: I have a simple custom backup script that is set to run whenever my backup drive is detected, this is done via udev. All is well until about halfway down through the script it seems to hang after the rsync command. My code is below:
#!/bin/bash
#Mount the Backup Drive
wall "backup is starting"
mount -U f91b8373-6349-4de3-86e1-6a2557f2c3f7 /media/backupdrive
#Get updated package-list
mv /media/backupdrive/package-selections /media/backupdrive/package-selections.old
dpkg --get-selections >/media/backupdrive/package-selections
wall "pacakge list updated"
#Run Backup
mv /home/user/backup/rsync.log /home/user/backup/rsync.log.old
rsync --log-file=/media/backupdrive/backup/rsync.log -ravzX --delete --exclude /var/tmp --exclude /var/lock --exclude /var/run /home /etc /var /usr /media/backupdrive/backup
wall "rsync complete"
#Sync changes to disk and unmount
sync
cp /media/backupdrive/backup/rsync.log /home/user/backup/rsync.log
umount /media/backupdrive
wall "Backup is complete, the logfile can be viewed at /home/user/backup/rsync.log"
Question: What am I doing wrong here, why is the script not continuing after the rsync?
PS - The wall commands are not important to the script I placed them in at various points to troubleshoot, yes I'm new to this :)
Edit - I have tried removing the "z" option as was mentioned on a similar question, however this has made no difference
|
Rsync script does not continue after sync
|
Exporting data is done via COPY TO. Exports can go to the local filesystem or Amazon S3.
Every node containing data will export in parallel. You can export to network-attached storage or S3, or simply copy the exported files to another host using scp.
|
I have seen docs on how to import data but how do I do backups?
Is there tooling, is there an API, or is it safe to copy the files on the file system?
|
Backing up multi-node crate cluster
|
You have to add another /1024 and change the column name to "GB Bkp":
SELECT substr(entity,1,20) AS "Node", CAST(sum(bytes/1024/1024/1024) AS decimal(8,2)) AS "GB Bkp" FROM summary WHERE activity='BACKUP' AND start_time>=current_timestamp - 24 hours GROUP BY entity order by 2 desc
Hope this will help you
|
I am working with IBM Tivoli Storage Manager and I have to run a daily report of how much data has been backed up.
The command below gives me the result in megabytes, which is OK. But to save time I would like to have the result in gigabytes, as my backups are on average bigger than one gigabyte.
I have tried a few variations, but it didn't work. I know very little SQL, and TSM uses similar commands. Could someone help me with it?
SELECT substr(entity,1,20) AS "Node", CAST(sum(bytes/1024/1024) AS decimal(8,2)) AS "MB Bkp" FROM summary WHERE activity='BACKUP' AND start_time>=current_timestamp - 24 hours GROUP BY entity order by 2 desc
The Result is:
Node MB Bkp
--------------------- -----------
SRWLON0xxxx 510298.00
SRWLON0xxxx 18999.00
SRWLON0xxxx 18960.00
SRWLON0xxxx 9023.00
SVWLON0xxxx 7581.00
SRWLON0xxxx 6436.00
Thank you in advance.
|
IBM TSM - Select command to view size of Daily Backup
|
For importing you do not need mysqldump; rather use:
mysql -u root -pxxxxxx test < test2backup.sql
|
I'm trying to back up and restore a MySQL database using mysqldump. With my commands I am making a backup file and then restoring it, but the database I'm restoring to shows no change.
This creates the backup folder in the same directory as the mysqldump.exe file:
In the windows cmd: mysqldump -u root -pxxxxxx test2 > test2backup.sql
Restoring with that file: mysqldump -u root -pxxxxxx test < test2backup.sql
test is an empty database. test2 is a database with tables and data. Running this should fill test with test2's data using the test2backup.sql file, should it not?
|
MySQLdump is executing, but not actually working
|
If you copy/paste your whole directory from one machine to the other (including hidden files like .git, ...), you will be keeping:
stashes (that's the most important thing)
all your history, including all local branches, ...
notes
Git always stores the data of one repository inside the .git directory, and nowhere else.
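A minimal sketch of moving the repository that way (the paths are just placeholders); tar preserves the hidden .git directory, which a copy of only the visible files would miss:
# On the old machine: archive the whole working directory, including .git
tar -czf myrepo-backup.tar.gz -C /path/to myrepo
# On the new machine: unpack it and check the state
tar -xzf myrepo-backup.tar.gz -C /path/to/restore
cd /path/to/restore/myrepo && git status && git stash list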
Yeah I was trying that but I asked because I got the result that I explained in the EDIT of my original question. There are effects that I cannot explain; and that doesn't assure me that everything went wrong; I suspect something else could have changed.
– Kamafeather
Jun 5, 2015 at 9:34
|
|
As in the title, how can I backup completely a local Git repository and its state, restore it on another machine, and have the new repository to be in the exact state as the one on the previous machine?
I mainly care about not losing local stuff, like:
stashes (that's the most important thing)
reflog (I want to keep my operations history)
notes
all the rest (possibly)
EDIT
I already tried to compress my local directory, including hidden files like .git, and restore it on the other machine.
What I get, after that, with git status is:
a HUGE list of changed files (and I don't recognize the changes)
the repository is in a detached HEAD
If everything is in the .git folder, why do I get a detached HEAD, and why don't I get the same state as the old repository?
And what are all these modified/deleted/typechanged files?
It seems that .git is not really taking everything from the old repository.
|
Backup local Git repository (notes, stashes, reflog included) and restore it on another machine
|
If you backed up the database, your best bet is to restore from the .bak file. While it is possible to restore an .mdf without the corresponding log, it is in no way a sure bet; it really depends on the state of the database. You can try the options in Attaching an MDF file without LDF file, and if that doesn't work you may need to use EMERGENCY mode (keep in mind that this is a last resort):
USE [master]
GO
ALTER DATABASE [MyDatabase] SET EMERGENCY
GO
ALTER DATABASE [MyDatabase] SET SINGLE_USER
GO
DBCC CHECKDB ([MyDatabase], REPAIR_ALLOW_DATA_LOSS)
GO
ALTER DATABASE [MyDatabase] SET MULTI_USER
GO
ALTER DATABASE [MyDatabase] SET ONLINE
GO
|
|
I backed up a database 2 days ago (but I only have the .mdf file and not the .ldf file).
I want to now create a different database on the same server, using that .mdf file (so I can compare data between now and 2 days ago). Is this possible without having the .ldf file from 2 days ago, but having the current .ldf file? If I can use the current .ldf file, should I use a copy of the file as it is referenced by the current database?
Or should I forget about the current .ldf file and try to restore without it, per Attaching an MDF file without LDF file?
|
Attach database using old .MDF file but current .LDF file
|
The problem is that you're trying to sync databases bound up in large files, i.e. you're not syncing only the values that changed, but the whole database every time.
If you've got a "last modified" column in each table you edit (or at least in the tables of the strong entities), you could run a SQL export of all the modified entities (and all the related tables) and import them on the other (backup?) drive.
Otherwise no, you'll have to back up the whole SQLite files every time.
|
|
I'm using rsync to sync SQLite DB files between two disks (in the same machine, say /dev/sdb and /dev/sdc). On each program start, I must run rsync to back up the two disks like this:
$ rsync -rtv /path/to/sqlite_db/ /path/to/sqlite_db_bkup/
Each time, there may be 500G sqlite DB files to be synced, and this will takes about hours to complete, is there some option that can make the rsync more faster? Or is there any other tools to backup so many DB files in a short time?
How to make rsync faster?
waitfor sends signals and waits for signals, and it can be used directly from batch files.
See waitfor /?.
It seems like a good fit for a batch solution to me.
I have a long backup job running on one server that needs to run before another backup job on another server. Is there any way I could have Server A signal Server B to start? These backup jobs take a long time and need to be done on weekend days when there's no one around. How would I go about having unsupervised server A (which would finish its backup job late on a Saturday night) signal unsupervised server B to start its job? Could I do this with a .bat file or script?
Thanks,
Eoghan
Start running a batch file when it receives a signal
A dictionary backup will backup the definitions of all objects in the database, included the dictionary entries for stored procedures. If you specify a single object, then only that object's definition will be archived.
/* Dictionary Backup - Object Definitions only */
ARCHIVE DICTIONARY TABLES
(DBNAME.TABLENAME1),
(DBNAME.TABLENAME2),
(...)
RELEASE LOCK,
FILE=NVDSID1;
/* Data Backup - Object Definitions and Data */
ARCHIVE DATA TABLES
(DBNAME.TABLENAME1),
(DBNAME.TABLENAME2),
(...)
RELEASE LOCK,
FILE=NVDSID2;
I need to back up tables and be able to control what to store: the full table or just its structure. Unfortunately, I haven't figured it out. I looked at the official site, then tried the full guide, but it is so full of unnecessary information.
So far I know how to do it the default way:
logon ZZZZ/YYYY,XXXX;
ARCHIVE DATA TABLE
(DATABASENAME.TABLENAME1),
(DATABASENAME.TABLENAME2),
(DATABASENAME.TABLENAME3),
RELEASE LOCK,
FILE=NVDSID1;
Example for a Restore of tables:
--------------------------------
logon ZZZZ/YYYY,XXXX;
COPY DATA TABLES
(DATABASENAME.TABLENAME11) (FROM(DATABASENAME.TABLENAME1)),
(DATABASENAME.TABLENAME12) (FROM(DATABASENAME.TABLENAME2)),
(DATABASENAME.TABLENAME13) (FROM(DATABASENAME.TABLENAME3)),
RELEASE LOCK,
FILE=NVDSID1;
But how can I specify what to dump as I asked before? And one more question: how to backup and restore views and procedures?
How to use arcmain to backup table with all his rows and without them(just structure)?
On AWS S3 you can configure event notifications (e.g. s3:ObjectCreated:*) to request a notification when an object is created. It supports the SNS, SQS and Lambda services, so you can have an application that listens for the event and updates the statistics. You may also want to add a timestamp as part of the statistic; then just "query" the result for a certain period of time and you will get your delta.
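A rough sketch of that setup with the AWS CLI (the bucket name and Lambda ARN are placeholders, and the Lambda invoke permission for S3 has to be granted separately):
cat > notification.json <<'EOF'
{
  "LambdaFunctionConfigurations": [
    {
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:update-backup-stats",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}
EOF
# attach the notification configuration to the bucket
aws s3api put-bucket-notification-configuration \
    --bucket my-backup-bucket \
    --notification-configuration file://notification.json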
I was going to recommend this :).
– Max
Mar 7, 2015 at 18:51
Trying to sync a large (millions of files) S3 bucket from cloud to local storage seems to be troublesome process for most S3 tools, as virtually everything I've seen so far uses GET Bucket operation, patiently getting the whole list of files in bucket, then diffing it against a list local of files, then performing the actual file transfer.
This looks far from optimal. For example, if one could list files in a bucket that were created/changed since a given date, this could be done quickly, as the list of files to be transferred would include just a handful, not millions.
However, given that answer to this question is still true, it's not possible to do so in S3 API.
Are there any other approaches to do periodic incremental backups of a given large S3 bucket?
Amazon S3 sync millions of files to local for incremental backup
Edit, since the setup was not clear enough for me from the original question.
Based on the update of the question the situation is, that you need to pull the data on the backup server from the windows system via ftp. In this case you could adapt the script you find yourself (see comment) or use a similar idea like:
Use cp -lr to clone the previous backup with hard links.
Use lftp --mirror to overwrite this copy with anything which got updated on the remote system.
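A minimal sketch of those two steps (paths, dates and credentials are placeholders):
TODAY=$(date +%F)
YESTERDAY=$(date -d yesterday +%F)
cp -lr "/backups/$YESTERDAY" "/backups/$TODAY"      # clone yesterday's backup with hard links
lftp -u backupuser,secret ftp://windows-server \
     -e "mirror --only-newer /share /backups/$TODAY; quit"   # overwrite only what changed remotely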
But I assumed initially that you need to push the data from the windows system to the backup server, that is the FTP server is on the backup system. This case can not handled this way (original answer follows):
Since FTP has no idea of links at all, any transfer will only result in new or overwritten files. The only way would be to use the SITE command to issue site-specific commands and deal with hard links that way. But site-specific commands are usually restricted heavily, so that you can do something like change permissions but not do anything with hard links.
And even if you could support hard links with SITE, you would have to implement the logic which decides when to use such links. With rsync this logic is built into the rsync server and executed on the server side. With FTP you have to build all the logic on the client side, which means that you would have to download a file to compare it with a local file and then decide if you need to upload the new file or if a hard link to an existing file could be used.
Usually I use rsync based backup.
But now I have to make backup script from Windows server to linux.
So, there is no rsync - only FTP.
I like ideas of hard links using to save disk space and incremental backup to minimize traffic.
Is there any similar backup script for ftp instead of rsync?
UPDATE:
I need to backup Windows server through FTP. Backup script executes at Linux backup server.
SOLUTION:
I found this useful script to backup through FTP with hard links and incremental feature.
Note for Ubuntu users: there is no md5 command in Ubuntu. Use md5sum instead.
# filehash1="$(md5 -q "$curfile"".gz")"
# filehash2="$(md5 -q "$mysqltmpfile")"
filehash1="$(md5sum "$curfile"".gz" | awk '{ print $1 }')"
filehash2="$(md5sum "$mysqltmpfile" | awk '{ print $1 }')"
FTP backup script with hard links using
The short answer is that RESTORE DATABASE will produce a target database that occupies about as much disk space as the source database did when it was backed up.
On its own, the size of a DB2 backup image is not a reliable indicator of how big the target database will be. For one thing, DB2 provides the option to compress the data being backed up, which can make the backup image significantly smaller than the DB2 object data it contains.
As you correctly point out, the backup image only contains non-empty extents (blocks of contiguous pages), but the RESTORE DATABASE command will recreate each tablespace container to its original size (including empty pages) unless you specify different container locations and sizes via the REDIRECT parameter.
The 302GB of capacity you're seeing is from GET_DBSIZE_INFO and similar utilities, and is quite often larger than the total storage the database currently occupies. This is because DB2's capacity calculation includes not only unused pages in DMS tablespaces, but also any free space on volumes or drives that are used by an SMS tablespace (most DB2 LUW databases contain at least one SMS tablespace).
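For reference, a hedged sketch of a redirected restore where the containers are resized (database names, timestamp, tablespace id and paths are all made up):
db2 "RESTORE DATABASE TESTDB FROM /backups TAKEN AT 20150218120000 INTO TESTDB2 REDIRECT"
db2 "SET TABLESPACE CONTAINERS FOR 2 USING (FILE '/db2data/testdb2/ts2.dat' 25600)"   # new, smaller container
db2 "RESTORE DATABASE TESTDB CONTINUE"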
I have database TESTDB with following details:
Database size: 3.2GB
Database Capacity: 302 GB
One of its tablespaces has its HWM too high due to an SMP extent, so it is not letting me reduce the high water mark.
My backup size is around 3.2 GB (As backups contains only used pages)
If I restore this database backup image via a redirected restore, what will be the newly restored database's size?
Will it be around 3.2 GB or around 302 GB?
What affects DB2 restored database size?
Solution 1: If you are searching for a free tool to automatically back up MySQL, then you should check out AutoMySQLBackup.
Features:
Email notification of backups
Backup Compression and Encryption
Configurable backup rotation
Incremental database backups
Solution 2:
Another Solution will be Cron Jobs
5 0 * * * /path/to/mysqldump ... > /path/to/backup/mydata_$( date +"%Y_%m_%d" ).sql
Read man date
How to (Cron)
Introduction to cron,covers the basics of what cron does,
and how to use it.
Solution 3:
For Windows machines, go for this link: windows auto backup
Solution 4: In Windows I would prefer Task Scheduler; it looks something like this:
schtasks /create /sc daily /st 08:20 /ru SYSTEM /tn MySQL_backup /tr "\"C:\My Project\MySQL\MySQL Server 5.1\bin\dump.exe\" -B <DataBase_NAME> -u <USER_NAME> -p<PASSWORD> -r C:\MySQL_backup\<DataBase_NAME>_%date:~0,2%.sql
I would prefer Cron Jobs. Hope this helps you.
I'm trying to back up MySQL automatically at a specified time every day. I searched for a free tool but didn't find anything!
Is there any free tool or application for backing up a MySQL database from my domain?
Application for backup a MYSQL
Perhaps I'm misunderstanding your doubt. You seem to think that $file will change its value if it is used a second time.
Bash variables don't change their values. $date and $file are set once in your script, and then they keep the same value until they disappear at the end of the script (unless they are reassigned). So you can just use $file again, and it will have the same value.
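A minimal illustration: $file is computed once, so the same value can be reused for the dump and for the copy (the offsite destination path is hypothetical):
date=$( date '+%Y-%m-%d_%H-%M-%S' )
file="mysql_dump_$date"
mysqldump -h localhost -u 'USERNAME' -pPASSWD 'DBNAME' > "/backup/$file"
cp "/backup/$file" "/mnt/offsite/$file"   # same $file, so same timestamp in the name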
I run the following script:
#!/bin/bash
{
date=$( date '+%Y-%m-%d_%H-%M-%S' )
file="mysql_dump_$date"
echo '###Start DB Backup '$date'###'
mysqldump -h localhost -u 'USERNAME' -pPASSWD 'DBNAME' > /backup/$file
echo '###DB Backuo Finished '$date'###'
} >> /backup/backup.log
As you can see in the mysqldump line, the dump will be named mysql_dump_<date>.
I would like to cp the file somewhere else but don't know how to refer to the $file that is used in the mysqldump line. Obviously, I could just use $file, but if the DB backup starts at 10:49:59 and the cp starts at 10:50:00, I worry it won't have the same timestamp, wouldn't be able to find the file, and the copy would fail.
Any ideas?
How can I copy variable file from bash script
You can use this command to export a Couchbase bucket to SQLite-based backup files:
> cbbackup http://<host>:8091 <backup_folder> -u <user> -p <pass> -b <bucket>
I want to be able to make a backup of the data in my application, and potentially share this backup between Android devices. I am using Couchbase. I have found many resources regarding exporting an SQLite database to XML, but none for Couchbase.
Does anyone know how to do this?
Android: How can I share the contents of a Couchbase database between devices? (And/or how to export Couchbase to XML)
For a backup, you might just need to remember the passphrase and the options you used to set up the encrypted folder, so everything in the example page you linked:
To see the files again, just mount the directory with ecryptfs
filesystem.
# mount -t ecryptfs /home/sk/unixmen/ /home/sk/unixmen/
Select key type to use for newly created files:
1) tspi
2) passphrase
Selection: 2 <---- Type 2 and press enter
Passphrase: <---- Enter the passphrase
Select cipher:
1) aes: blocksize = 16; min keysize = 16; max keysize = 32
2) blowfish: blocksize = 8; min keysize = 16; max keysize = 56
3) des3_ede: blocksize = 8; min keysize = 24; max keysize = 24
4) twofish: blocksize = 16; min keysize = 16; max keysize = 32
5) cast6: blocksize = 16; min keysize = 16; max keysize = 32
6) cast5: blocksize = 8; min keysize = 5; max keysize = 16
Selection [aes]: <---- Press Enter
Select key bytes:
1) 16
2) 32
3) 24
Selection [16]: <---- Press Enter
Enable plaintext passthrough (y/n) [n]: <---- Press Enter
Enable filename encryption (y/n) [n]: <---- Press Enter
Attempting to mount with the following options:
ecryptfs_unlink_sigs
ecryptfs_key_bytes=16
ecryptfs_cipher=aes
ecryptfs_sig=5c116acdf1d0dd89
Mounted eCryptfs
The ecryptfs_sig is derived from the passphrase, so is really just to verify you've entered the right passphrase, not really essential to the mount command.
I can't say I like the "Add your passphrase in this file" part of the automatic mount section; it detracts from the security by having the passphrase in plain text. Your system can use eCryptfs & PAM to automatically mount encrypted folders on login, using your login passphrase to "wrap"/encrypt the eCryptfs key. See man ecryptfs & the man pages for its tools, like ecryptfs-setup-private
I'm using ecryptfs to backup the entire contents of my Ubuntu box to an external hard drive enclosure. I've followed this guide and have things properly backing-up and encrypted as I want.
That's all well and good until I have to actually use the encrypted backup, and that's got me wondering. In the event that I lose my entire primary hard drive, what files/info should I readily have access to in order to de-crypt my backup? Besides the options used to setup the initial encryption, are these the only two things I need:?
passphrase
sig key
securely restoring an ecryptfs encrypted backup
As you pointed out in your question - being that your Windows machine is behind a NAT router, it may be simpler for your windows machine to 'pull' files from your Debian VM, as opposed to your Debian VM 'pushing' files to your Windows machine. Pushing files from your Debian VM to your Windows machine would require you to setup some type of server on your Windows machine that would listen for incoming connections from your Debian VM on some designated port, it would required that you setup a port-forwarding rule on your NAT router, and it would require you to setup a dynamic DNS hostname that would change whenever your router's public IP changes. And, since you would be opening a port up to the public, it would also require you to take into account security considerations to make sure that nothing gets compromised.
So, pulling files to your Windows machine from your Debian VM would be simpler. One way to do this would be to install Cygwin and use rsync, as you mentioned. Another solution may be to install putty on the windows machine, then use pscp on the windows machine to copy files from the remote debian host to the windows machine. The pscp command can be scripted using a DOS batch script, Powershell, or any number of other windows scripting tools. See http://the.earth.li/~sgtatham/putty/0.60/htmldoc/Chapter5.html for more info.
I have a Debian server (VPS) and a Windows server (at home). I would like to backup periodically some paths of my Debian to My Windows server. My WS act as NAS and I use it for my all backup.
Firstly I started to configure a cron task with rsync on my Debian but as there is no native ssh server on Windows server it may not be the best solution. Then I was wondering if it would not be better to use my windows server to pull data from my debian to windows.
Here is the only link I found that make things on this way: http://troy.jdmz.net/rsync/
(server pull from client)
Also, my windows server is at my home, and that brings one constraint: I may move house, so my IP changes too, along with the router configuration. I would like to just plug the windows server in and let it continue to work normally.
What do you guys think about all of that? Is it an elegant solution to do it this way?
Do I have to install cygwin with rsync ? Is it possible to set a periodic task on my windows server ?
Thanks in advance.
Backup from linux to Windows Server
Check the owner of all files under /data/data/com.my.game. It is possibly root and it should probably be com.my.game (or more precisely the userId of the app com.my.game, but most of the time they are the same).
Recently my Nexus 7 died somehow and had to reflash the factory android image. Luckily, my bootloader was already unlocked so I could save the important data using adb from recovery mode.
I also backuped the data of a game (at least I think so), simply saving
/data/data/com.my.game
I reflashed the factory image, the tablet runs well now. I also reinstalled the apps, including my game, then rooted the tablet so I could restore saved data to /data/data. Then I pushed the saves back to the folder (I had to be tricky because adb push did not work directly to /data, so I pushed it to /sdcard then copied it with adb shell as root).
Everything seems to be OK but unfortunately the game refuses to use the saved data, it simply crashes. When I restart my tablet, it runs again but without any saves, so I guess it deletes all saved data.
Could you give me any advice how to make my game work with the saved data?
Android app data recovery
Meteor actually supports this out of the box. I guess I was searching with the wrong terms. Check out the link below for more information.
How can Meteor apps work offline?
This question already has answers here:
How can Meteor apps work offline?
(4 answers)
Closed 9 years ago.
I am planning to create a web application using Node.js and the Meteor framework with MongoDB. This application will be critical for business operations, so ideally it should be able to handle network failure. Is this possible? Or is my only option here to create a stand-alone application? The application will probably be run on either a PC or a tablet.
Are there any existing solution for this?
One Idea I have is, is it possible to have a local cache of the user's database on the machine. When the network is up, this cache might not be used but continually updated. But when the network failed, then the connection will be hand off to this database so operation can continue as usual. When the network is back up, this database will sync with the our server and back to normal mode.
In case of a PC, we might be able to run a local server manually to get the webpage backup. I couldn't think of a solution for the tablet though.
Business Contuinity for Node.js Web Application during Network Failure [duplicate]
PSDrive is a feature for PowerShell cmdlets, not for external commands. Change this line:
robocopy "\\localhost\C$\nova5" "$TargetPath" /e
I am just starting with PowerShell, so please be kind.
All I want to do is backup my directories and files from my laptop to the desktop computer, i.e. "server", using PowerShell and robocopy. I am the administrator to both machines (Windows 7).
This fails with access denied on the "server", i.e., desktop, despite the permissions being set for "Everybody" to do everything.
Any help (or better way) is really appreciated! Thanks.
$cred=get-credential
$sourcepath = ("\\localhost\C$\nova5");
$TargetPath = ("\\library\E$\nova5");
New-PSDrive -Name source -PSProvider FileSystem -Root $SourcePath
New-PSDrive -Name target -PSProvider FileSystem -Root $TargetPath -Credential $cred
robocopy source target /e;
return;
How to use PowerShell and robocopy to backup files to a server?
If you are not using explicit transactions, SQLite will automatically use a transaction around each SQL statement.
To ensure that the database files cannot be accessed by another database connection while you are doing the backup, open an exclusive transaction around the backup.
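Not the exclusive-transaction approach itself, but a related route worth knowing: from a shell, the sqlite3 CLI's .backup command uses SQLite's online backup API, which takes the locks it needs while copying (the paths here are placeholders):
sqlite3 /data/data/com.example.app/databases/app.db \
        ".backup '/sdcard/backups/app-$(date +%F).db'"   # consistent copy, locks handled for you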
I am trying to implement a service to backup the SQLite database of my Android app. I am planning to both schedule this service for frequent backups (every day for example), and add an option to launch it immediately.
My problem is that the service might start while the application is running, or the user might start the application while the backup is in progress. And they may write to the database while I am copying it.
Is there any way to make sure that the copy and write will not run concurrently, without adding synchronization locks to all my queries ?
Thanks !
Locking SQLite database file during backup
1F A0 is associated with .tar-based zip files, as found in the Wikipedia List of file signatures.
I have, what I believe to be, a FoxPro Backup file with file extension .02A.
The first seven characters of this 150MB file are ' !Pƒõ' in hex: 1F A0 21 50 83 9D F5.
Who knows what kind of file this is exactly and how do I get to the contents?
FoxPro 'Zipped' backup
@ECHO Off
SETLOCAL enabledelayedexpansion
SET "sourcedir=u:\sourcedir"
SET /a month=99
SET /a year=99
PUSHD "%sourcedir%"
FOR /f "skip=4tokens=1,2,3,5,*delims=/- " %%a IN (
'dir /tc /a-d /-c /od "*" '
) DO (
IF "%%d"=="" GOTO done
IF %%b-%%c neq !month!-!year! (
ECHO(leave "%%e" ".\x\"
SET month=%%b
SET year=%%c
) ELSE (
ECHO(MOVE "%%e" ".\x\"
)
)
:done
POPD
GOTO :EOF
You would need to change the setting of sourcedir to suit your circumstances.
You don't indicate what your date format and separator are. I use dd/mm/yy. If you use mm/dd/yy then substitute %%a for %%b in the action part of the for loop (ie. the part after the do).
The required MOVE commands are merely ECHOed for testing purposes. After you've verified that the commands are correct, change ECHO(MOVE to MOVE to actually move the files. Append >nul to suppress the report messages.
The /tc switch on the dir statement explicitly selects the created date, as requested. This term is often used to be synonymous with last-write date, which is the common date reported by dir. If you actually want last-write date, simply omit the /tc switch.
Our backup system is running out of space and we'd like to keep every first backup of the month (creation date) and move the other files to subfolder x.
Is there an easy script for this, as I really have no clue on how to do this.
.bat or .cmd to move all files except first of month
IIRC, this is how the backup script works. Time values are always rendered in UTC, not server local time. This allows for unambiguous ordering of backup files.
If you look at the source of the repozo script used for backup you can see that the date portion of the filename always uses time.gmtime() so this is not something you can change.
I am working on a development Plone 4.3.3 site with Zope 2.13.22 on Debian and noticed when I ran my backup using collective.recipe.backup that the backup time in the file name is 6 hours ahead of my system.
Example:
Backup name = 2014-08-06-17-08-15.fsz
System time (and write time according to properties) = 2014-08-06 11:08:15
I have checked multiple areas of Plone and they all match my system time.
My buildout.cfg contains the correct Time Zone information.
Any ideas as to what might be causing this or how to correct it? Thank you in advance.
Plone backup time stamps don't match system
The mysqlbackup utility isn't available in the Community installation!
I'll try mysqldump instead:
first I list all the InnoDB tables, then I execute mysqldump on each of these tables (in a loop).
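A minimal sketch of that loop (credentials and database name are placeholders):
#!/bin/bash
DB=mydatabase
TABLES=$(mysql -N -u root -p"$PASS" -e \
  "SELECT table_name FROM information_schema.tables
   WHERE table_schema='$DB' AND engine='InnoDB';")
for t in $TABLES; do
    mysqldump -u root -p"$PASS" "$DB" "$t" > "/backup/${DB}_${t}.sql"   # one dump file per InnoDB table
done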
How can I back up only the InnoDB tables?
I'm looking for a command-line solution.
Thanks for your feedback.
MySQL backup only innodb tables
The TFS Backup wizard is the preferred way to do backups because it uses Marked Transactions to ensure a consistent backup set. This is important because TFS data spans several databases (Collection + Config), and you need a way to ensure that the backups are consistent (from the same moment in time for all/both databases).
You can do this manually but it's tedious to setup. You can see the instructions here: http://msdn.microsoft.com/en-us/library/ms253070.aspx
For various reasons, our support are backing up the TFS databases with their own tool, rather than using the built in TFS Scheduled Backup tool (we're using 2012 update 3).
I know the report encryption key is one thing that is also backed up using the tool.
Is there anything else the TFS tool backs up that I haven't mentioned?
What does the TFS Scheduled Backup Tool do that a database backup doesn't?
Data that the user created or that is private to the user and that must not just disappear should be in the Documents directory. Also, data which is valuable to the user and that cannot be recreated should be in the Documents directory. Files in the Documents directory are automatically backed up to iCloud.
Data that is downloaded from the internet should be saved into the Caches directory, because it can easily be redownloaded again, and so it makes no sense to save them to iCloud. The data would just occupy space in the iCloud. If the user needs to restore his device, the app can just redownload the data from wherever it was downloaded from in the first place. The Caches directory can be wiped by the operating system if the amount of free space on the hard disk becomes very low.
There is a special case, that is files saved in the Documents directory with the NSURLIsExcludedFromBackupKey flag set. This is intended for data which is valuable to the user and that should be available at all time, even when no internet connection is present, but that - on the other hand - can easily be redownloaded or updated whenever needed.
Contained in the iCloud backup is the contents of the Documents directory excluding the files that have the NSURLIsExcludedFromBackupKey flag set. All other directories that lie inside the app container are not backed up.
The "Documents and Data" in the settings refer to the size of the Documents folder. The calculation doesn't take into account if anything was excluded from the iCloud backup. The size does not refer to the size of the data in the iCloud backup. However, if the Settings.app says "30MB", you can be sure that the iCloud backup will at least not be greater than 30MB.
My app was rejected by Apple because they said it doesn't follow the "iOS Data Storage Guidelines."
I have updated the app with the following code to flag downloaded content so that it isn't backed up to the cloud...
- (BOOL)addSkipBackupAttributeToItemAtPath:(NSString *)filePathString {
NSURL *fileURL = [NSURL fileURLWithPath:filePathString];
NSLog(@"Going to add skip attribute to " @"%@", filePathString);
assert([[NSFileManager defaultManager] fileExistsAtPath: [fileURL path]]);
NSError *error = nil;
BOOL success = [fileURL setResourceValue:[NSNumber numberWithBool: YES]
forKey: NSURLIsExcludedFromBackupKey
error: &error];
NSLog(@"Added skip attribute to " @"%@", filePathString);
return success;
}
However, when I check Settings -> General -> Usage -> MyApp, it displays "Documents and Data" as > 30Mb. This is correct from the point of view that 30Mb of content has been downloaded to the app, but does this also mean that these 30Mb will be backed up to the cloud?
Put in another way, if I flag files so they are not to be backed up, would they be included or excluded from the "Documents and Data" value?
Cheers!
How to understand iOS "Documents and data" (related to cloud backup)
proxy_interfaces = 1.2.3.4
because 1.2.3.4 is the proxy
When mail1 goes offline, mail will go to mail2 and you will have to fetch it back (e.g. with fetchmail).
I have 2 mail server
mail1.domain.com 1.2.3.4
mail2.domain.com 1.2.3.5
I want mail2 to be proxy for mail1 and mail2 to be backup mx for mail1
Is this possible ?
I found in the postfix man the following :
proxy_interfaces (default: empty)
The network interface addresses that this mail system receives mail on by way of a proxy or network address translation unit. [...] You must specify your "outside" proxy/NAT addresses when your system is a backup MX host for other domains, otherwise mail delivery loops will happen when the primary MX host is down.
Example:
proxy_interfaces = 1.2.3.4
is this the setting I must supply to mail2 main.cf ?
Or do I need to say:
proxy_interfaces = 1.2.3.5 ?
I do not quite get this.
What happens when mail1 goes offline?
Second question:
How do I transport the mails from the backup MX to the main MX when the main MX comes online again?
postfix backupmx and proxy_interfaces
You can do the following, I'm not sure how this will behave with your recurse parameter but assume it is working correctly so I have left it in the example:
Get-ChildItem –Path “d:\Backup\hl” –Recurse | Sort-Object LastWriteTime –Descending | Select-Object -Skip 31 | Remove-Item -Force -Recurse
Thanks, I have tested this with -whatif and it says it will delete 16 folders which it shouldn't be doing. There are 31 folders with 1 file inside each folder. By running the command it theoretically shouldn't delete anything as there is already 31 files?
– user3652273
May 19, 2014 at 11:31
I'm guessing the problem is the 1 file inside each of the 31 folders?
– user3652273
May 19, 2014 at 11:42
Yes I guess so. That's what I meant about using the recurse parameter, likely to have unexpected results. Look at the other answer as he has edited to exclude directories.
– arco444
May 19, 2014 at 11:51
I'm using this powershell command to keep only backup files that are 31 days old.
Get-ChildItem –Path “d:\Backup\hl” –Recurse | Where-Object{$_.LastWriteTime –lt (Get-Date).AddDays(-31)} | Remove-Item -Force -Recurse
My question is: if the daily backup were to fail and I checked the backup folder, e.g. after a month, the PowerShell script would delete all or most of the backups because they are older than 31 days.
Is it possible to change the PowerShell command to keep the last 31 files based on LastWriteTime, rather than deleting anything older than 31 days?
Thanks
powershell keep last 31 files
Try something like this:
restore verifyonly from disk = 'D:\sample.bak';
Is there a short T-SQL query or command line option to check the integrity of a SQL Server backup created with "compressed" option?
I need it to make sure I downloaded it correctly from network.
How to check the integrity of a SQL Server compressed backup?
One thing you could do is just separately download the version of Cassandra compatible with the one that packaged with your Titan version. I routinely do that to get nodetool and the cassandra-cli.
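A hedged sketch of what that looks like once a matching tarball is unpacked (version, host and keyspace are placeholders):
apache-cassandra-x.y.z/bin/nodetool -h 127.0.0.1 snapshot titan
# snapshots end up under <data_dir>/<keyspace>/<columnfamily>/snapshots/<tag>/
# and can later be loaded back with the sstableloader from the same download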
I have the Titan server, with Cassandra, installed here, with multiple keyspaces configured.
I've read many threads about how to back up and restore a keyspace, but all talk about using sstableloader.
However, I didn't find this tool, since the Titan installation I've used came with Cassandra, and there is not an exclusive bin folder for Cassandra on it.
I wonder how do I backup and restore a keyspace with these conditions.
Thanks in advance.
How to backup and restore data in Titan-Server (with Cassandra and Elastic Search) without sstableloader
Eclipse has Replace With > Local History for individual files (and Restore from Local History for deleted files).
By default the local history is only kept for a few days, you can configure this in Preferences > Workspace > Local History.
For anything more complex you can use one of the many source control systems supported by Eclipse such as SVN or Git. This is worth doing just for the extra backup and does not require a separate server.
I've heard that Eclipse has a few fancy backup options.
I've been working on an Eclipse project (using Pydev), and I would like to somehow switch back to the way the project was a few days ago. Is there such an option?
What I've found till now was only the backup files (specifically what I changed and when), but I don't know how to get to the exact state the whole project was on a specific date.
Please help :-(
Eclipse Backup - how to get back to a specific date
You can't roll back, but if the database is in the full recovery model, then you can restore it to another server with STOPAT and recover the deleted rows from there.
Yesterday I wrote some code module, that wrote wrong data in almost 400 existing records in important database on SQL Server 2008. I didn't make backup of this database (my mistake). So the question is how do I rollback these 400 transactions? Is there any way to do this? Thanks.
Is there any way to rollback transactions in SQL Server?
I think what you're seeing might be a difference in disk usage based on the filesystem. Remember, du doesn't really show file sizes, but rather an "estimate" of "file space usage". ls or stat are accurate depictions of file size.
Don't use file size as a checksum. If you want to make sure 2 files are exactly the same, use a real checksum - or a few (md5/sha come to mind). If you think you might be seeing a hash collision (extremely unlikely), use 2 checksums. The likelihood of having 2 hash collisions with different checksums on the same input data is infinitesimal.
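One way to checksum both trees and compare them (the host and paths are the ones from the question, adjust as needed):
(cd /backup_source   && find . -type f -exec md5sum {} + | sort -k 2) > /tmp/src.md5
ssh [email protected] \
    '(cd /backup_dest && find . -type f -exec md5sum {} + | sort -k 2)' > /tmp/dst.md5
diff /tmp/src.md5 /tmp/dst.md5 && echo "trees match"   # empty diff means identical contents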
I am using rsync to backup a folder regularly to another server, like this, creating a duplicate failsafe version.
rsync --partial --progress -avzl -e ssh /backup_source [email protected]:/backup_dest/ >> /backup.log
I understand it uses compression when transferring the files. I've noticed some unusual differences in the destination folder's storage usage. Depending on the command used on the destination folder, I get:
ls -lart: returns identical list of files with filesize numbers matching between src/dest
du: returned folder size on destination is anywhere from 20-50% of the same du results on the source folder.
If I run "du [filename]" comparison on the same file on the source/destination, the destination is once again 20-50% the size. The contents are often text, and appear to be the same and entirely intact.
How can I account for this file size difference? Is there some sort of compression carrying over to the destination file? Yet how can the file appear identical in contents but take up less space? Confused.
EDIT:
md5sum comparison of a couple files returns the same result, which is a good sign. Still curious about "du" though. Or a more reliable way to compare file size of a directory structure I suppose.
rsync backup with compression, file size differences
Your fastest solution is probably to just do this via the file system.
Stop the server and make a local copy of your entire database cluster, i.e. everything under $PGDATA, inclusive. Start the server and do your mangling. When you need to refresh your database, stop the server and copy the files back in from your backup location. Note that this affects the entire cluster, so you cannot do this if other databases in the same cluster are in production use: everything is frozen in the state it was in when you first made the backup.
The alternative is to use pg_dump in binary mode, but probably quite a bit slower than the manual method. It is the only solution if other databases in the cluster need to be preserved.
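A rough sketch of the file-level route (the data directory path and service commands vary by distribution and version, so treat these as placeholders):
sudo service postgresql stop
sudo cp -a /var/lib/postgresql/9.3/main /var/lib/postgresql/9.3/main.clean   # snapshot the whole cluster
sudo service postgresql start
# ... run the destructive tests ...
sudo service postgresql stop
sudo rm -rf /var/lib/postgresql/9.3/main
sudo cp -a /var/lib/postgresql/9.3/main.clean /var/lib/postgresql/9.3/main   # roll back to the snapshot
sudo service postgresql start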
This is a general Postgres backup and restore method question, based on the following use case for a non production server (i.e. a local testing server).
I have a ~20gb database that I will mangle during the testing of a php script that will result in the need to drop it and recreate it quite often.
Running dumped SQL to restore it takes quite a lot of time, and I'm on a tight deadline, so I wondered if there was a method whereby I could speed up the process. I thought the following may work:
Create and populate the database initially
Copy its data files to a secondary location
mangle the database with my testing.
delete the data files and the copy the copies back restoring the original state.
But I don't know where to start or if there's some internal stuff happening that would prevent this from working.
Is the above possible, if so how is it achieved?
This isn't a closed question, if there are faster alternatives to what I'm asking for, please enlighten me. I'm open to suggestions.
Thanks.
|
Is it possible to restore a Postgres database by simply swapping out some files for speed?
|
I've used the migration tool of Plesk (no need to use the backup files anymore)!
I'm trying to perform data migration from server to server.
My hosting provider allow me a personal FTP to backup all my server data everyday.
I just installed the new server with a Plesk 11 panel manager as the old one (same version).
Now I want to use personal FTP to transfer data (databases, websites, domaines configuration) and get it in the new server.
I did the first part (transfer from the personal FTP with ncftp) with success.
Now I have a .tar file (15 GB) containing all my data from the old server. I put it in /var/lib/psa/dump, but I can't find it in the Plesk backup manager to perform the restore action.
Can I do it otherwise? Or am I missing something here?
Thank you very much.
Update: I've used the migration tool of Plesk (no need to use the backup files anymore) :)
Restore plesk 11 via personal FTP
You can invoke the backup directly by passing in the URI of your embedded eXist instance. For example:
import org.exist.backup.Backup;
//omitted for brevity
final Backup backup = new Backup("admin", "adminPass", "xmldb:exist:///db");
backup.backup(false, null);
You can use any collection path instead of just /db. Also if you are running this from within a Swing application you can use:
backup.backup(true, frame);
To have a backup dialog appear.
Hope that helps.
I'm attempting to use eXist-db in embedded mode in a Java program to produce an interactive fiction game.
Is there any information on invoking backups and restores from within my own java application, so as to initially load the story and all files, and then to perform a save/restore function?
Also, any suggestions on how to format my xml for such use would be appreciated.
eXist-db embedded mode backup / restore
To get the backup of the package, do take a look at Get Package operation in Service Management API. This operation takes a backup of package and config file and stores in a blob container of your choice.
I want to take a backup of the code of the cloud service that is currently running. Is there any way to take a backup of the current deployment?
Is there any way to get backup of running code of azure service
I think you'll want to use generate to get the LATEST timestamp rather than exec inside of a custom type. Something like this perhaps (note that you need to change the format of the uri for the download as well):
$latest_file = generate(
'/usr/bin/curl',
'-s',
'http://myaws.com/LATEST'
)
define download ($uri, $timeout = 300) {
exec {
"download $uri":
path => '/usr/bin',
command => "wget --timestamping -q '$uri' -O $name",
creates => $name,
timeout => $timeout
}
}
download {
"$data_file":
uri => "http://myaws.com/${latest_file}/mycompany-data-${latest_file}.tgz",
timeout => 900;
}
I have a backup script that stores the latest backup timestamp in an url like this http://myaws.com/LATEST, the file contains only a string representing the timestamp, for instance, "201402230400". The same script store the real backups in http://myaws.com/201402230400/mycompany-dump-201402230400.gz and http://myaws.com/201402230400/mycompany-data-201402230400.tgz.
The thing is I'm creating a puppet class that will read those urls and restore the files in my new VM based on the LATEST timestamp value. What I'm missing is how can I build a url from a content store in a file?
define download ($uri, $timeout = 300) {
exec {
"download $uri":
path => '/usr/bin',
command => "wget --timestamping -q '$uri' -O $name",
creates => $name,
timeout => $timeout
}
}
download {
"$latest_file":
uri => "http://myaws.com/LATEST",
timeout => 900;
}
download {
"$data_file":
uri => "http://myaws.com/file($latest_file)/mycompany-data-file($latest_file).tgz",
timeout => 900;
}
The call file($latest_file) is not working as expected. What am I doing wrong?
How can I build an url from a file content on puppet?
To move all the files except current date you can do:
cd /source
dt=$(date '+%Y%m%d')
for f in record_*; do
[[ "$f" != *"$dt" ]] && mv "$f" /target
done
+1 Nice. I might be tempted to go for "for f in record_*" to be on the safe side in case OP has other stuff in there.
– Mark Setchell
Feb 12, 2014 at 9:55
Sure thing, if OP has other files too then record_* is safer (edited in my answer).
– anubhava
Feb 12, 2014 at 10:28
I wish to move files contained in a directory but I do not wish to move the file with the current date. eg. Current date is February 12, 2014 translates to 20140212
When I list files in source directory: /source
record_20140209
record_20140210
record_20140211
record_20140212
I wish to move them to a target directory: /target
all files except the file with current date which is record_20140212.
So in the above list, the ones below should be moved to /target directory.
record_20140209
record_20140210
record_20140211
Any ideas would be of great help. I wish to write the script using bash script but php will also be fine. My OS is Centos 5.Let me know if you have questions.
Thanks!
Moving files in a directory except file with current date
rsync will return a non-zero result if it fails. So something like this:
backup() {
$RSYNC -avxz --bwlimit=2000 [email protected]:/backup $BackupDIR/
}
backup
while [ $? -ne 0 ]; do
backup
done
Once a week, I download with rsync remote server backup in my local network.
I created a bash script to do this and I setup crontab to start once a week.
The problem is: if for any reason the Internet connection drops during the night, rsync stops the synchronization.
I would like rsync to retry from the beginning when the connection is lost.
How do I fix it?
this is the script:
#!/bin/bash
EMAIL="[email protected]"
MAIL="$(which mail)"
RSYNC="$(which rsync)"
DATA="$(date +"%d-%m-%Y")"
BackupDIR="/media/back_up/Remote_repository"
$RSYNC -avxz --bwlimit=2000 [email protected]:/backup $BackupDIR/
echo "Backup Done" | $MAIL -s "Backup of $DATA synchronized on local network" $EMAIL
|
restart rsync in case of lack of connection
|
Parse
Parse handles everything you need to store data securely and
efficiently in the cloud. Store basic data types, locations, photos,
and query across them in just a few lines of code.
Pricing: Free < 15 million request a month otherwise 199$ month
https://parse.com/products/data
Stackmob
The StackMob platform lets you build applications using data from any
cloud or behind the firewall, so you can get to market faster than
ever. The platform has the features to build any application, but the
platform is not an island; we’ve engineered it to work with countless
third party services and data sources.
Pricing: Free, pro plans with extra features
https://www.stackmob.com/product/
Kinvey
User management: Add any number of users, using the authentication
systems of your choice. You can use Kinvey, Facebook, Twitter, Google,
LinkedIn and any other OAuth2 authentication provider.
Data storage: Instantly and securely sync data between your app and Kinvey backend.
Your data is stored on the device or browser, ensuring content remains
available even when your user is offline.
Pricing: Free to start, pay per 100(0) users. Upgrade for extra storage space or "free users"
http://www.kinvey.com/developer/features
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking us to recommend or find a tool, library or favorite off-site resource are off-topic for Stack Overflow as they tend to attract opinionated answers and spam. Instead, describe the problem and what has been done so far to solve it.
Closed 10 years ago.
I want to use a database on a web service which supports user management, public tables and private tables.
My use case is, that I want users to save their data in this public table and to share some data with all other app users.
I would solve that with a web page with php and reading and writing JSON objects to a database.
But before that, I wanted to know if there exists a service that I could use for this... so that I don't have to write a complete homepage.
servers for android app to save and share user data - does their exist a service? [closed]
crontab has a very small environment, so you have to indicate full paths.
10 10 * * 0 /path/of/php /home/your_user/v2/symfony/symfony cc; tar -czf /home/your_user/backups/schroeder/v2/v2_site_backup_`date +%Y-%m-%d_%H-%M`.tgz /home/your_user/v2/symfony | tee -a /home/your_user/log/weekly_backup.log
^^^^^^^^^^^^ ^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^
instead of
10 10 * * 0 php ~/v2/symfony/symfony cc; tar -czf ~/backups/schroeder/v2/v2_site_backup_`date +%Y-%m-%d_%H-%M`.tgz ~/v2/symfony | tee -a ~/log/weekly_backup.log
That is, change all ~ for /home/your_user/.
Also! Escape each % as read here:
date +%Y-%m-%d_%H-%M
has to be
date +\%Y-\%m-\%d_\%H-\%M
I have a backup cron job that is supposed to run once a week. It works perfectly fine when I execute it on the command line, but it never executes in cron. My other cron tasks execute without issue. I assume it must have something to do with my options or incorrect syntax. Here is the cron entry in question:
10 10 * * 0 php ~/v2/symfony/symfony cc; tar -czf ~/backups/schroeder/v2/v2_site_backup_`date +%Y-%m-%d_%H-%M`.tgz ~/v2/symfony | tee -a ~/log/weekly_backup.log
SOLUTION
Turns out I needed to escape the % when inputting it my cron list. So the command that works is:
10 10 * * 0 php ~/v2/symfony/symfony cc; tar -czf ~/backups/schroeder/v2/v2_site_backup_`date +\%Y-\%m-\%d_\%H-\%M`.tgz ~/v2/symfony | tee -a ~/log/weekly_backup.log
Ubuntu cron not executing, but works fine on command line
You could access each database that might need to be dumped and ask for the last modified time. It's available through the information_schema database:
SELECT UPDATE_TIME
FROM information_schema.tables
WHERE TABLE_SCHEMA = 'dbname'
AND TABLE_NAME = 'tabname'
Also see here.
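A sketch of how that check could gate the dump (credentials and names are placeholders; note that UPDATE_TIME can be NULL for InnoDB tables on older MySQL versions, so verify what your server actually reports before relying on it):
STAMP=/backup/.last_update_time
LAST=$(mysql -N -u backup -p"$PASS" -e \
  "SELECT MAX(UPDATE_TIME) FROM information_schema.tables WHERE table_schema='dbname';")
if [ ! -f "$STAMP" ] || [ "$LAST" != "$(cat "$STAMP")" ]; then
    mysqldump -u backup -p"$PASS" dbname > /backup/dbname.sql
    echo "$LAST" > "$STAMP"
    rsync -az /backup/dbname.sql remote:/backup/    # only ship the dump when something changed
fi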
I am trying to backup my database over the network with mysqldump and rsync.
I wanted to ask if there is any way to know if the database has been modified since the last time I did my old dump, before doing a new dump or update the old one.
Thank you.
Backup with mysqldump & rsync
I don't believe any of the existing components will do what you want out of the box, but you can always run a script as part of a data pipeline. I've used it that way to run a script that grabs files from an external FTP and then loads them into an S3 bucket every hour.
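The kind of script such an activity could run, as a sketch (hostnames, credentials, paths and the bucket are placeholders):
#!/bin/bash
set -e
wget -r -nH -P /tmp/logs "ftp://user:[email protected]/logs/"            # pull from the external server
aws s3 cp /tmp/logs "s3://my-backup-bucket/logs/$(date +%F)/" --recursive   # push the files into S3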
Could you please go into more detail on this solution? Currently I am writing a script which uses "staging" and copies the files to the Output bucket using the environment varible ${OUTPUT1_STAGING_DIR} in the bash script. This sadly does not work, as I get this error message: "taging local files to S3 failed. The request signature we calculated does not match the signature you provided. Check your key and signing method." Thank you very much!
– Biffy
Nov 20, 2013 at 15:30
@Biffy, I'm not sure what the problem is there, but I'd post it as a separate question here on SO so that you get the best chance of people seeing it and answering it.
– Gordon Seidoh Worley
Nov 20, 2013 at 16:23
what did you write the script in? Did you just add it as a ShellCommandActivity script that pulled from ftp and copied to s3? Any examples?
– MonkeyBonkey
Aug 12, 2014 at 22:26
I don't know there's any existing component in data pipeline allowing you to run script on an external server either. You can use ShellCommandActivity on one EC2 instance you created inside the pipeline, then how to access your external server from this script is something you have to design (like through FTP). To backup to S3 you can install s3cmd tool in the EC2 instance.
– piggybox
Oct 29, 2014 at 23:49
I am trying to move some Logfiles, which are located on an external Webserver to an Amazon S3 bucket. This should happen every 7 days without manually activating it. Additionally I'd like it to be "failsafe", so it probably would be best if the copying operation would be done in the Amazon Cloud. I have already read something about the AWS Data Pipelining solution but I couldn't find anything on how to get it to work with an external (that means not hosted by Amazon) data source, let alone downloading a file from a webserver and then processing it.
Has somebody got experience with a similar problem and any advice for me where to start?
Thank you!
|
Backup from external Datasource to AWS S3 (using Data Pipelining)?
|
It's fine to do this - I have done it myself, but not on OSX.
The Dropbox client will index the files that it finds on your computer and compare them to the ones which are already in your account (on the server). I believe that it uses some kind of hash function to do this - the client creates a small hash value for each file and then this value is compared to the value on the server. If the value is the same then the client assumes that the file is the same and it does not need to be re-uploaded. However, if you have thousands of files, this can take some time.
Source: https://www.dropbox.com/help/1941/en - "The application will index the files and see that they are the same files in your account."
If you want to do it, when you install Dropbox again, you should sign-in to your account, let it create the Dropbox folder and then click "Pause Syncing" so that it doesn't start downloading everything. Then you should copy the backed-up Dropbox files into the new Dropbox folder and resume syncing.
I am about to install Maverick and before I do that I am going to reformat my macbook air. I use dropbox and have about 15gb of (small) files on it (mainly documents/ebooks).
My question is: is it possible to back up my Dropbox folder now, reformat my SSD, and install Dropbox again, after which I replace the Dropbox folder with my backup, without getting Dropbox confused? (It might think they are new files, so Dropbox could upload them and/or download the same files again.)
Does anyone have any experience with this?
Replacing a empty dropbox's (fresh install) folder with a previously (uptodate) dropbox folder. Is it possible?
I found that I needed to run the command using the manage.py file:
python manage.py dbbackup
It was not very explicit in the documentation.
I am trying to use the django-dbbackup tool to backup my PostgreSQL database. I have setup everything as written in the documentation. However, when I run in the python shell:
import dbbackup
dbbackup
I get:
module 'dbbackup' from
'/Users/poiuytrez/.virtualenvs/ariseio/lib/python2.7/site-packages/dbbackup/__init__.pyc'
I am not sure on how to use the tool.
|
How to use django-dbbackup?
|
1
Your WordPress pages are stored in the database. If you want to back up your WordPress installation, you will have to back up your FTP files AND your database.
Have a look at the docs for more information on Wordpress database structure.
Note you can also use plugins such as BackUpWordpress or other ones listed here to do this automatically.
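For a manual one-shot backup, a minimal sketch is just the FTP files plus a dump of the database (the credentials come from wp-config.php; the names here are placeholders):
mysqldump -h localhost -u wp_user -p'secret' wp_db > wp_db_$(date +%F).sql
tar -czf wordpress_backup_$(date +%F).tar.gz /path/to/wordpress wp_db_$(date +%F).sql
Restoring is the reverse: copy the files back over FTP and feed the dump to mysql.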
Those backup systems are good, but I need systems that can backup and restore; not only backup! Can you help me with that?
– The Quantum Physicist
Oct 5, 2013 at 17:06
Have a look at Wordpress plugin directory for this. It seems, there are many plugins that suit your needs.
– Agate
Oct 6, 2013 at 10:01
I have installed wordpress on my website and it's working fine (so I downloaded it, extracted it in the ftp-server of my page, and initialized a database).
I'm doing this because I want to be able to backup everything in one-click. The plan was to copy the wordpress folder back to my computer in order to do a backup, but I can't find the individual pages I have in my Blog.
The question is: where do those pages lie? And how can I reach them? Are there better techniques for backing up stuff from WordPress completely?
How to access individual pages in wordpress
You should be able to specify indexes=N and constraints=N so those are skipped.
You can get the available options for imp using
imp help=y
There is an option DATA_ONLY=Y, but I'm not sure it exists in your Oracle version.
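A hedged example of the full command line (ignore=y makes imp skip the "object already exists" errors and load the rows into the existing tables):
imp username/password@orcl file=d:\backup.dmp full=y ignore=y rows=y indexes=n constraints=n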
I have a dmp file for an Oracle database (version "oracle orahome 81") and I want to import just the data, not the tables, views, etc. I exported the dmp file on pc1 and want to import the data on pc2, but pc2 already has the old database, so I don't want to import the metadata because it already exists there. When I use this command:
imp username/password@orcl file=d:\backup.dmp full=y
the error shows that the database objects already exist, so I cannot import the new data on pc2.
How can I solve this problem?
imp/exp database in oracle
You can transfer data using iCloud, but this is NOT really secure! Saving this data ENCRYPTED on your own server and letting the app read it from there would be a solution.
Btw, you shouldn't save sensitive data unencrypted in the keychain. The keychain can be read quite easily after a jailbreak.
For more information about handling sensitive data you may read this book:
Hacking and securing iOS Applications
I thought about storing important and sensitive information in iOS' keychain. But now I read that the keychain is only restored if the backup is encrypted in iTunes (don't know about iCloud backups). This is especially a problem when users buy a new iPhone/iPad and restore them from a backup. The information stored in the keychain by the old device will be lost.
Is there any (secure) possibility to transfer the data to new devices or on restores independently of the backup settings?
iOS: keychain on new devices or on restores
Thanks to the both of you. As it turns out, we actually had a backup server where both the daily and weekly backup folders were located.
I am a beginner with cPanel and databases and am having trouble finding the /home/cpbackuptmp/cpbackup/daily folder. I just took over a wordpress site and I was moved to a list where I get emails saying that "[cpbackup] Backup complete on [my site]". In the email it says that the files were backed up to the above location. Where do I go to access this folder? Can I find it through FTP or through the cPanel interface. I do not see it anywhere. Am I looking in the wrong place?
Thanks. I have been looking all day and have not found a good description of where to find the files.
How to access your /home/cpbackuptmp/cpbackup/daily folder cPanel
So this script is generating a file with all of the SQL commands to recreate the current database.
So once the script is done executing you could "import" the file it creates in phpMyAdmin, and it will drop the tables if they exist and then insert all of the data that was in the tables at the time of the backup.
It will not modify anything from the current database
For example, this is what the "export" function in phpmyadmin will create for a test table:
--
-- Database: `test`
--
-- --------------------------------------------------------
--
-- Table structure for table `test_table`
--
DROP TABLE IF EXISTS `test_table`;
CREATE TABLE IF NOT EXISTS `test_table` (
`id` int(11) NOT NULL,
`num` int(11) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
--
-- Dumping data for table `test_table`
--
INSERT INTO `test_table` (`id`, `num`) VALUES
(1, 23),
(2, 45);
Just a serious of SQL statements to recreate a database.
I need a script to back up a MYSQL database that I don't have cpanel, shell, or phpmyadmin access to.
I'm just concerned about the DROP TABLE part of this script and why it would be needed. I do not want to modify the database at all, I just want a backup.
Here is my code:
backup_tables('localhost','username','password','blog');
/* backup the db OR just a table */
function backup_tables($host,$user,$pass,$name,$tables = '*')
{
$link = mysql_connect($host,$user,$pass);
mysql_select_db($name,$link);
//get all of the tables
if($tables == '*')
{
$tables = array();
$result = mysql_query('SHOW TABLES');
while($row = mysql_fetch_row($result))
{
$tables[] = $row[0];
}
}
else
{
$tables = is_array($tables) ? $tables : explode(',',$tables);
}
//cycle through
foreach($tables as $table)
{
$result = mysql_query('SELECT * FROM '.$table);
$num_fields = mysql_num_fields($result);
$return.= 'DROP TABLE '.$table.';';
$row2 = mysql_fetch_row(mysql_query('SHOW CREATE TABLE '.$table));
$return.= "\n\n".$row2[1].";\n\n";
for ($i = 0; $i < $num_fields; $i++)
{
while($row = mysql_fetch_row($result))
{
$return.= 'INSERT INTO '.$table.' VALUES(';
for($j=0; $j<$num_fields; $j++)
{
$row[$j] = addslashes($row[$j]);
$row[$j] = ereg_replace("\n","\\n",$row[$j]);
if (isset($row[$j])) { $return.= '"'.$row[$j].'"' ; } else { $return.= '""'; }
if ($j<($num_fields-1)) { $return.= ','; }
}
$return.= ");\n";
}
}
$return.="\n\n\n";
}
//save file
$handle = fopen('db-backup-'.time().'-'.(md5(implode(',',$tables))).'.sql','w+');
fwrite($handle,$return);
fclose($handle);
}
|
What do the "drop table" and "create table" options in this backup script do?
|
Custom made SQL request
You can find the definition of sp_helpindex using the sp_helptext command like this:
use sybsystemprocs
go
sp_helptext sp_helpindex
go
This will give you the definition of the stored procedure. Then you can extract the SQL request you need (which could be tricky; it might amount to writing another stored procedure).
Shell script to process output of sp_helpindex
On the other hand, it seems this is just a shell problem once you can call your SQL server.
For example, using the sqsh program (an isql-like program), you can have a file myindexes.sql containing:
use databaname
go
sp_helpindex tablename
go
Then the command
sqsh -U username -P password -S SYBASESERVER -i myindexes.sql -h > myindexes.txt
will give you the sp_helpindex output that you can process.
In the sqsh output, line 3 contains the index name and keys, and line 4 the description.
I use:
#!/bin/bash
# Call sqh command : output in myindex.txt
sqsh -U username -P password -S SYBASESERVER -i myindexes.sql -h > myindex.txt
# Then process the output
INAME=`sed '3!d' myindex.txt | tr -s ' ' | cut -d ' ' -f 2`
IKEYS=`sed '3!d' myindex.txt | tr -s ' ' | cut -d ' ' -f 3`
IDESC=`sed '4!d' myindex.txt | tr -s ' ' | cut -f 2`
# print out the values
echo "$INAME $IKEYS $IDESC"
# Clean up the files
rm myindex.txt
Hope this helps a little.
|
I want to create an index for a specific table from a Unix shell script, so I need the index name, index keys, and index description for that particular table.
"sp_helpindexes" gives all these details along with some unwanted lines, but I need an alternative way to get only the index name, keys and description.
Can anyone please help me with this?
|
How to get index name, key and description from sybase table
|
I don't think there's much to be gained by trying to do something more advanced than iterating over the 26 drive letters.
Before attempting to check whether or not the marker path exists you could add a call to GetDriveType and compare the return value against DRIVE_REMOVABLE. This will make sure that your code doesn't spin up the CD/DVD drive, or hit the network in the case of a mapped share.
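A minimal sketch of that approach is below; the marker file name "backup.marker" is just an assumption, not something your app necessarily uses:
#include <windows.h>
#include <iostream>
#include <string>

int main()
{
    // Walk A: through Z:, keep only removable drives, then look for the marker file.
    for (char letter = 'A'; letter <= 'Z'; ++letter)
    {
        std::string root(1, letter);
        root += ":\\";
        if (GetDriveTypeA(root.c_str()) != DRIVE_REMOVABLE)
            continue; // skips fixed disks, CD/DVD drives and mapped network shares
        std::string marker = root + "backup.marker";
        if (GetFileAttributesA(marker.c_str()) != INVALID_FILE_ATTRIBUTES)
        {
            std::cout << "Backup drive found at " << root << std::endl;
            break;
        }
    }
    return 0;
}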
|
I'm using Borland C++ Builder for a project which connects to a database.
There is a configuration which does backups to a USB drive. The problem with the current approach is that the drive is manually configured by the end user and sometimes things get messed up. E.g. people move the USB drive to a different port, it gets a different letter, and then the backup process no longer works. As a side note, we have other "better" processes for backing up to the cloud, etc., however some locations don't have internet access and aren't running on a RAID... so backing up to USB gives them a saving grace from a HD crash.
I'm hoping to do some coding to help remove this issue. I'm hoping to get a handle to the OS (Windows 8/7/XP) and be able to identify the drives on the machine. Once I have those, I can then iterate through them and check for a path location (e.g. a file marker, so if the file exists, I know it's the USB we supplied). Then once I have that, I can do the backup.
As a worst-case scenario, I will be able to iterate through all 26 letters to test each drive. However I'm using this as a learning opportunity and hoping to get a handle to the OS to reduce the number of checks/fails I can run into. Besides, I'm curious if anyone has a better approach :)
|
C++ Builder - Handle to OS to Iterate Through All Drives
|
"However, I don't know what it means to cherry pick items.. How do I do that? .. manually?"
It's a manual process, yes. You can for example use dblink[0] to connect from one DB to the other and pull in the relevant records.
[0] http://www.postgresql.org/docs/9.1/static/dblink.html
Here's a full example on how to use dblink on heroku:
https://gist.github.com/hgmnz/5100682
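As a rough sketch of the cherry-picking itself (the table and column names are made up, and the connection string is a placeholder; adjust both to your own schema), you could pull a single student and their grades across like this:
-- Run against the live database; "restored_backup" is a restored copy of the backup
INSERT INTO students
SELECT * FROM dblink('dbname=restored_backup host=... user=... password=...',
                     'SELECT id, name FROM students WHERE id = 42')
       AS t(id integer, name text);

INSERT INTO grades
SELECT * FROM dblink('dbname=restored_backup host=... user=... password=...',
                     'SELECT id, student_id, grade FROM grades WHERE student_id = 42')
       AS t(id integer, student_id integer, grade text);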
|
I would like to use Heroku PG Backups to be able to restore my data in case anything was deleted by mistake. My question is: what if I want to restore just a certain record as well as its associations, like a student and his grades, for example?
I found a similar question: How do I restore three items from a backup made using Heroku PG Backups?
However, I don't know what it means to cherry pick items.. How do I do that? .. manually?
Thank you,
|
Heroku PG Backups, how to restore certain records only
|
First, backups can contain more than one backup set, so are you sure you are overwriting it and not just appending another set within the same file? Otherwise you need to add the time of day to your filename. Hope this helps. Also, if it's just one or two databases, there is an excellent (free for limited use) app; google SQLBackupAndFTP. HTH
|
My code, for an agent job on SQL Server 2008, generates the backup file, but it keeps OVERWRITING the first .bak file every time the agent job is triggered!?
How can I make each backup with a different, time-related name,
e.g.:
testDB201313328.bak
and after 1 minute create a file with the name:
testDB201313329.bak
instead of overwriting the first one?
USE msdb ;
GO
DECLARE @fileName VARCHAR(90);
DECLARE @db_name VARCHAR(20);
DECLARE @fileDate VARCHAR(20);
DECLARE @commandtxt VARCHAR(100);
SET @fileName = 'C:\Test_Backups\';
SET @db_name = 'testDB';
SET @fileDate = CONVERT(VARCHAR(8), GETDATE(),112) + convert (varchar(4),DATEPART(HOUR, GETDATE())) + convert ( varchar(4) ,DATEPART(MINUTE, GETDATE())) + convert ( varchar(4) ,DATEPART(SECOND, GETDATE()));
SET @fileName = @fileName + @db_name + RTRIM(@fileDate) + '.bak';
SET @commandtxt = 'BACKUP LOG testDB TO DISK =''' + @fileName + ''' WITH INIT'
-- add a job
EXEC dbo.sp_add_job
@job_name = N'LogBackup',
@description =N'Log Backup on weekdays every 15 minutes from 8am till 6pm' ;
-- add job steps to job
EXEC sp_add_jobstep
@job_name = N'LogBackup',
@step_name = N'Weekdays_Log_Backup',
@subsystem = N'TSQL',
@command = @commandtxt ,
@on_success_action = 1,
@retry_attempts = 5,
@retry_interval = 1 ;
GO
...
|
Backup database with different name time-related
|
The key you pass to SharedPreferencesBackupHelper's constructor isn't the key for a Preference inside your SharedPreferences: it's the name of the SharedPreferences file. That is, it's the String you pass to Context.getSharedPreferences(String,int). If you create your SharedPreferences file by calling Activity.getPreferences(int), you should pass the class name of that Activity.
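For illustration, if the app created its preferences with getSharedPreferences("my_list_prefs", MODE_PRIVATE), a minimal backup agent would look roughly like this (the file and key names here are made up):
import android.app.backup.BackupAgentHelper;
import android.app.backup.SharedPreferencesBackupHelper;

public class MyBackupAgent extends BackupAgentHelper {
    // Name of the SharedPreferences file, not of any preference key stored inside it
    static final String PREFS_FILE = "my_list_prefs";
    static final String PREFS_BACKUP_KEY = "prefs";

    @Override
    public void onCreate() {
        SharedPreferencesBackupHelper helper =
                new SharedPreferencesBackupHelper(this, PREFS_FILE);
        addHelper(PREFS_BACKUP_KEY, helper);
    }
}
Because the helper backs up the whole file, every key in it (including dynamically built ones like "5KEY_FOR_THIS") is covered automatically; the agent class also has to be referenced via android:backupAgent in the manifest.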
|
I'm trying to back up my app data using SharedPreferencesBackupHelper. As I understand it you first start by calling
SharedPreferencesBackupHelper helper = new SharedPreferencesBackupHelper(this, "KEY1", "KEY2");
My problem is that I'm doing a list application and there I back up the data for each list item using a separate key. That is a String combined with an int. It looks something like this:
spEdit.putString(Integer.toString(5) + "KEY_FOR_THIS", "value");
The 5 in the example can of course change and can be any number depending on how many items the user has added. Is there some good way to do this with a for loop for example?
|
Android SharedPreferences backup
|
What you're looking for is revision control. This works independent of the language you're dealing with, since all the VCS is concerned with is the state of the software at a particular snapshot in time.
Some recommendations:
Subversion
Git
Mercurial
IntelliJ IDEA also comes with a built-in local revision system, which allows you to visit a particular file's history. It'd still be preferable to use either Git or Subversion.
There are also sites where you can host your project to better preserve it, such as GitHub or Google Code. GitHub uses...Git, but Google Code will allow you to use a few others, such as Subversion and Mercurial.
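As a rough illustration with Git (the path and commit messages are placeholders), snapshotting the project before each risky change looks like this:
# One-time setup inside the project folder
cd ~/projects/MyJavaApp
git init
git add .
git commit -m "Known-good state"

# ...experiment; when something works, record it:
git add .
git commit -m "Describe what changed"

# If an experiment breaks things, discard the uncommitted edits:
git checkout -- .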
|
I'm currently using Java and I'm looking for a program that saves a new version of what I'm doing each time I compile. I don't mind if it doesn't run; I can go in and edit the class name to make it match the .java name afterwards. As I'm a beginner, I keep getting caught out by overextending myself and then breaking the project I'm working on irreparably. I'm just looking for a way to go back to a safe state.
I'm sure there are programs for this, but because I don't know the collective noun for them, finding one is next to impossible.
All help is much appreciated.
|
Backing up Java?
|
Can you try using the full path names for mysqldump and mysql inside your script?
So:
if the output of "which mysql" is /usr/local/mysql/bin/mysql
and
if the output of "which mysqldump" is /usr/local/mysql/bin/mysqldump
Modify your script to:
for db in $(echo "SHOW DATABASES;" | /usr/local/mysql/bin/mysql --user=$MySQLuser --password=$MySQLpass | grep -v -e "Database" -e "information_schema")
do
/usr/local/mysql/bin/mysqldump --skip-lock-tables --ignore-table=log.log --user="$MySQLuser" --password="$MySQLpass" $db >$BACKUPD/$ROK/$MIESIAC/$DZIEN/$db.sql
done
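Alternatively, since cron runs jobs with a very minimal environment (often just /usr/bin:/bin), a sketch of the same fix is to set PATH explicitly near the top of the script; the path below assumes the MySQL binaries really do live under /usr/local/mysql/bin:
# Make cron see the same binaries as an interactive shell
PATH=/usr/local/mysql/bin:/usr/local/bin:/usr/bin:/bin
export PATH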
|
First of all, I'm saying that it doesn't work properly with the crontab because when I run the script manually it works fine.
The problem is that when I run the backup script from the cronjob and it gets to tarring up the MySQL dump, the tar archive is only 16 bytes (and it's empty, so it looks like there were no files to pack into the archive). The strange thing is that when I run the script manually, it runs for almost 5 minutes and the tar package is ~1.8GB.
Here is my bash code:
#!/usr/local/bin/bash
# Configuration
BACKUPD="/backup/mysql"
MySQLuser='root'
MySQLpass='xxxx'
# End configuration
ROK=`date +%Y`
MIESIAC=`date +%m`
DZIEN=`date +%d`
GIM=`date +%H-%M`
if [ -d $BACKUPD/$ROK/$MIESIAC/$DZIEN ]
then
echo
else
mkdir -p $BACKUPD/$ROK/$MIESIAC/$DZIEN
fi
for db in $(echo "SHOW DATABASES;" | mysql --user=$MySQLuser --password=$MySQLpass | grep -v -e "Database" -e "information_schema")
do
mysqldump --skip-lock-tables --ignore-table=log.log --user="$MySQLuser" --password="$MySQLpass" $db >$BACKUPD/$ROK/$MIESIAC/$DZIEN/$db.sql
done
cd $BACKUPD/$ROK/$MIESIAC/$DZIEN && tar jcPf $BACKUPD/$ROK/$MIESIAC/$DZIEN/mysql-$GIM.tar.bz2 *.sql && rm -rf *.sql
Where is the problem? Has anyone experienced a problem like this before?
Regards.
|
TAR doesn't work properly with the crontab
|
Sorry for the late reply. I found a quite easy and simple backup option built into CodeIgniter. Hope this helps someone.
$this->load->library('zip');
$path='C:\\xampp\\htdocs\\CodeIgniter\\';
$this->zip->read_dir($path);
$this->zip->download('my_backup.zip');
I used the code directly from the view and then just called it using the controller.
~muttalebm
|
I am trying to create a web app using CodeIgniter which will be used over a home or office network. Now I'm looking for a backup option that can be run from the web portal. For example, in my htdocs folder I have: App1, App2 etc.
I want to back up and download the App1 folder directly from the web app, from any client machine that is connected to the server. Is it possible? If yes, can you please let me know how?
~muttalebm
|
Application Folder backup using Codeigniter
|
It's much more likely that your EC2 instance will be down than S3 will be down. For one, you have a single instance running on a single host with a single network connection in a single availability zone. Past that, on a platform level, EC2 (particularly involving EBS) has had several protracted outages, whereas S3 has not had a significant availability event since 2008.
S3 is a distributed system spread all across your region of choice. Operating at the object level with eventual consistency guarantees is frankly a lot simpler than the problems addressed by EBS and EC2, all of which add additional consistency guarantees (and thus ways to fail) by design.
I generally make upload processes treat S3 as a backing store -- upload to S3 directly, or upload via an EC2 instance in a write-through fashion -- and accept that if S3 is down, then I can't handle uploads. Doing it this way introduces a failure mode where your app is running but S3 is not, but it significantly reduces the potential for data loss, which is usually a more serious problem than unavailability. This also allows you to simultaneously handle uploads via different EC2 instances in different availability zones, hedging against EC2 failures, as well as via instance-store instances, hedging against EBS failures.
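A minimal sketch of the direct-to-S3 idea, using the boto3 library (the bucket name and key layout are made up):
import boto3

s3 = boto3.client("s3")

def handle_upload(local_path, key):
    # Push the file straight to S3 and only report success once S3 has it;
    # if S3 is unavailable, the upload fails loudly instead of risking data loss.
    s3.upload_file(local_path, "my-ugc-bucket", key)
    return "https://my-ugc-bucket.s3.amazonaws.com/" + key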
|
Currently, we are uploading all of our user-generated-content to a medium-size EC2 Instance, and then from there we run a cron job to sync all of the uploaded content to S3. We have some code that runs on the backend (every time you need to access any uploaded file) that checks to see whether or not the resource has been moved to S3, or if it is just available on our uploads instance.
This seems a little wasteful, but it does provide redundancy -- if S3 is down, we have some javascript code in place that forces the files to be served from our upload box. The actual file uploads are stored in EBS, not on the instance.
We've got about 150GB worth of files in the S3 bucket right now, which makes performing a separate backup of the S3 bucket extremely time consuming and nearly impossible to run on any sort of regular basis.
So, my question is, is this even necessary? Can anyone point me to some uptime statistics between S3 and EC2? Does it ever happen that S3 is down, but EC2 is available? It seems like it might be simpler to just upload everything directly to S3 and trust that it is up.... On the other hand, we could just store everything in EBS and forget S3 completely, which seems like it makes more sense.
|
Is Amazon S3 ever unavailable independent of EC2?
|
I think that Packages/User is the one in which you are supposed to put settings (according to Sublime's official and unofficial documentation). However, some people put them in the other folders from time to time.
The Dropbox advice may be a hedge against poor practice.
|
How to save/restore Sublime Text 2 configs/plugins to migrate to another computer? states that, to backup a Sublime Text 2 installation, a user should preserve the ~/Packages/User directory (from the user's local data folder on whatever OS they're using).
However, http://andrew.hedges.name/blog/2012/01/19/sublime-text-2-more-sublime-with-a-drop-of-dropbox and most other walkthroughs for using Dropbox to sync Sublime's settings specify three directories: ~/Packages, ~/Installed Packages and ~/Pristine Packages.
What is the functional difference between backing up just ~/Packages/User, and the other 3 directories?
|
What is stored in Packages/User directory?
|
I'm going to assume your source name is "image.jpeg" and your destination has the appended suffix.
I recommend putting a dot before the appended suffix to make it clear where the original name ends and the suffix begins. Your original name could already have a number at the end.
Here is a crude but very effective brute force method that supports up to 100 copies. Obviously the upper limit can easily be increased.
call :backup "c:\image.jpeg"
exit /b
:backup
for /l %%N in (1 1 100) do (
if not exist "G:\backup\%~n1.%%N.%~x1" (
echo F|xcopy %1 "G:\backup\%~n1.%%N.%~x1" >nul
)
exit /b
)
But there is a potential problem. Suppose image.1.jpeg and image.2.jpeg already exist, but then you delete image.1.jpeg. The next time you back up, it will re-create image.1.jpeg and then you might think that image.2.jpeg is the most recent backup.
The following can be used to always create a new backup with the number suffix 1 greater than the largest existing suffix, even if there are holes in the numbering.
@echo off
call :backup "c:\image.jpeg"
exit /b
:backup
setlocal disableDelayedExpansion
set /a n=0
for /f "eol=: delims=" %%A in (
'dir /b "g:\backup\%~n1.*%~x1"^|findstr /rec:"\.[0-9][0-9]*\%~x1"'
) do for %%B in ("%%~nA") do (
setlocal enableDelayedExpansion
set "n2=%%~xB"
set "n2=!n2:~1!"
if !n2! gtr !n! (
for %%N in (!n2!) do (
endlocal
set "n=%%N"
)
) else endlocal
)
set /a n+=1
echo F|xcopy %1 "g:\backup\%~n1.%n%%~x1" >nul
|
I want to copy a specific file from my PC to a USB drive.
My code:
xcopy /H /Y /C /R "C:\image1.jpeg" "G:\backup\image.jpeg"
I want to do the following:
if G:\backup\image1.jpeg exists, copy image.jpeg as image2.jpeg (or under another name),
if image2.jpeg exists, copy it as image3.jpeg, and so on.
Is it possible to do this?
|
Copy file as another name if file exist
|
I'd guess that you want this:
system($x, 'a', 'library.7z', $library, '-r')
That would give you the same effect as this at the command prompt:
C:\Program Files\7-Zip\7z.exe a library.7z "C:\Users\maste_000\Documents\Calibre Library" -r
Assuming, of course, that I haven't completely forgotten how quoting works with the Windows command line.
The space in $library won't matter when you use the multi-argument form of system; part of the reason that you use this version of system is to avoid all the weird quoting that you need when the shell is in the way. The main thing is that you probably need to separate 'a' and 'library.7z' so that they are two arguments rather than a single argument with a space as the second character; without that separation you'd be saying this:
C:\Program Files\7-Zip\7z.exe "a library.7z" "C:\Users\maste_000\Documents\Calibre Library" -r
and that doesn't look right.
|
I'm writing a ruby script that will automate backing up my laptop to my ssh server. I seem to be stuck on a system call though.
$x = "C:\\Program Files\\7-Zip\\7z.exe"
###calibre library###
$library = "C:\\Users\\maste_000\\Documents\\Calibre Library"
system($x, "a library.7z", $library, " -r")
I'm trying to call the 7z exe and create a file called library.7z from the directory pointed to by $library. No matter how I arrange this, I keep getting a 7z command error. I'm assuming this has something to do with how I'm performing the system call.
|
Stuck on a system call with variables
|
Why don't you use regular site collection backups? They include permissions and web parts:
http://technet.microsoft.com/en-us/library/ff607901.aspx
Alternatively you can make regular content database backups:
http://msdn.microsoft.com/en-us/library/ms191304(v=sql.105).aspx
It will also contain all permissions and web parts.
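For example, a site collection backup can be taken from the command line; the URL and paths below are placeholders, and which tool applies depends on your SharePoint version:
# SharePoint 2010 or later (SharePoint Management Shell):
Backup-SPSite -Identity http://yourserver/sites/yoursite -Path C:\Backups\yoursite.bak

# SharePoint 2007 (stsadm from a command prompt on the server):
stsadm -o backup -url http://yourserver/sites/yoursite -filename C:\Backups\yoursite.bak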
|
I want to back up all the user permissions and web parts on my SharePoint, so I can restore them at any time, because a few weeks ago all of the user permissions and web parts on my SharePoint disappeared mysteriously. I've backed up all the content, but not the user permissions and web parts. Can it be done without using 3rd-party software?
|
How to backup users permissions and web parts on sharepoint? [closed]
|
The backup script you've quoted is very old, and uses old techniques -- techniques that were old even for Oracle 9i. Even in that version, RMAN (Recovery Manager) was available and the use of RMAN is to be preferred in all cases.
See Tim Hall's excellent site for an overview of how to use this: http://www.oracle-base.com/articles/9i/recovery-manager-9i.php
Oracle's RMAN Backup Concepts guide is here: http://docs.oracle.com/cd/B28359_01/backup.111/b28270/rcmcncpt.htm
Even easier, if you have dbconsole installed, you can configure backups via a webgui.
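For orientation, a minimal RMAN session looks something like this (the database must be in ARCHIVELOG mode for online backups, and the OS-authenticated connection shown here is just one option):
rman target /

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
RMAN> LIST BACKUP SUMMARY;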
|
I'm currently learning what a hot backup is from an Oracle guide which presents a generic script for hot backup. I don't understand some points of this script:
PROMPT Path to destination directory:
ACCEPT repertory
PROMPT Path for first file
ACCEPT file
PROMPT Path for second file
ACCEPT spool
SPOOL &file
PROMPT spool &spool ;;
PROMPT archive log list ;;
I don't get what the first and second files are, and what does spool mean? I assume this is some hot-backup vocabulary, but I didn't find any explanation. Any clue would be appreciated. Thanks.
EDIT: Source : http://oracle.developpez.com/guide/sauvegarde/generalites/#L3.2
|
Hot backup DB script understanding
|