Response
stringlengths
8
2k
Instruction
stringlengths
18
2k
Prompt
stringlengths
14
160
0 The problem is the how you deal with the variable. xxxxx is not a valid name, it should be preceded by two percentages: %%xxxxx @ECHO OFF FOR %%xxxx IN (OPNACT BLOGS SNCOMM DOGEAR FILES FORUM HOMEPAGE PEOPLEDB WIKIS) DO ( DB2 CONNECT TO %%xxxx DB2 QUIESCE DATABASE IMMEDIATE FORCE CONNECTIONS DB2 CONNECT RESET DB2 BACKUP DATABASE %%xxxx TO "C:\Backup\DB2" WITH 2 BUFFERS BUFFER 1024 PARALLELISM 3 WITHOUT PROMPTING DB2 CONNECT TO %%xxxx DB2 UNQUIESCE DATABASE DB2 CONNECT RESET ) For more information you can check this question What is the difference between % and %% in a cmd file? Share Improve this answer Follow answered Feb 10, 2015 at 11:24 AngocAAngocA 7,68566 gold badges4040 silver badges5656 bronze badges 2 hi AngocA , i try like this too , but i get same error saying -> %xxxx was unexpected at this time. – Archangle Feb 12, 2015 at 5:05 This is not a DB2 error, but a batch problem. Have you tried with just one 'x': FOR %%x IN (OPNACT BLOGS SNCOMM DOGEAR FILES FORUM HOMEPAGE PEOPLEDB WIKIS) DO (echo %%x) Once it shows the values with the echo command, you can replace that with the db2 commands. – AngocA Feb 13, 2015 at 13:54 Add a comment  | 
I am trying to do a offline backup for my DB2(10.1.0) using script and schedule it. db2backup.bat @ECHO OFF FOR xxxx IN (OPNACT BLOGS SNCOMM DOGEAR FILES FORUM HOMEPAGE PEOPLEDB WIKIS) DO ( DB2 CONNECT TO xxxx DB2 QUIESCE DATABASE IMMEDIATE FORCE CONNECTIONS DB2 CONNECT RESET DB2 BACKUP DATABASE xxxx TO "C:\Backup\DB2" WITH 2 BUFFERS BUFFER 1024 PARALLELISM 3 WITHOUT PROMPTING DB2 CONNECT TO xxxx DB2 UNQUIESCE DATABASE DB2 CONNECT RESET ) But when i try to run it, DB2CMD /c /w /i C:\Backup\db2backup.bat I am getting a error , "xxxx was unexpected at this time." so why i am getting this?how can i avoid it ? Many Thanks for your input !!.
Offline Backup script for DB2
class BaseUploader < CarrierWave::Uploader::Base # Override the filename of the uploaded files: def filename return unless original_filename if model && model.read_attribute(mounted_as).present? && model.changed.blank? model.read_attribute(mounted_as) else ext = File.extname(original_filename) base = File.basename(original_filename, ext) "#{base}_#{token}#{ext}" end end # override method to avoid deletion of file def remove!; end protected def token var = :"@#{mounted_as}_token" model.instance_variable_get(var) or model.instance_variable_set(var, Time.now.to_i) end end I have used this approach to create unique filename every time a file is uploaded hence avoid the possibility of overwriting the previous file with same name. Also, I have configured CarrierWave not to remove previously stored files on update. So, now if I restore a DB backup, the images will be there. Snippet inspired from create random and unique filenames.
In Rails app I would like to take backups of MySQL database along with CarrierWave uploads stored in Amazon S3. I have looked into S3 object versioning but couldn't find any support in CarrierWave for it. Has anyone done this before? Or any ideas?
Backup MySQL DB along with CarrierWave uploads stored in S3
0 You will need to shrink your log files regularly. Seeing as how you have a backup strategy including FULL,DIFFERENTIAL and TRANSACTION LOG backups, you will do well to have a SQL job run on a schedule to shrink the log file in question to a bare minimum. I run a stored proc. with the shrink command regularly to DBCC SHRINKFILE helps you shrink your log files to a set size. DBCC SHRINKFILE (DBNAME_Log, 1); will shrink it down to 1MB for example. Share Improve this answer Follow answered Feb 4, 2015 at 15:45 GVashistGVashist 42744 silver badges1616 bronze badges 2 Thanks for the reply. As i said below, I thought it was a bad practice to shrink my LDF, due to the potentially excessive fragmentation, have you heard about that ? – Sylvain Pauly Feb 4, 2015 at 15:54 I suppose that point might be irrelevant based on your recent comment. When you're planning for a DB deployment, its better you take into account log file growth too. – GVashist Feb 4, 2015 at 16:16 Add a comment  | 
I'm facing a popular issue, i'm afraid : my transaction log is growing and growing again on SQL SERVER. But I can't find the answers on the web. I have a daily full backup, differential backup every hour and transaction log every 15 minutes. And they work fine, but what about that ldf file ? Is up to 100Go in 2-3 month, my database is about 15Go. I perform some maintenance task the weekend : index rebuilding or reorganization if the fragmentation is below 30%. Plus i'm running a recompute of my small datawarehouse (15 millions rows). There is some task every night but nothing that big. But, i dont know, why my transaction log is not truncate after the log backup ? When I check the use of the LDF file, only 1.7% is used when I write this post. Any idea ? Thanks a lot. Sorry for my poor english by the way... EDIT : I have 119 VLF file and no one is used.
Transaction log growing despite regular transaction log backup
0 Actually, backups are configured on database level. When you backup all of your databases, the database server is considered backed up. More information on the topic: Azure SQL Database Backup and Restore Share Improve this answer Follow answered Feb 3, 2015 at 12:44 ZGFubnkZGFubnk 50044 silver badges1313 bronze badges Add a comment  | 
We use Azure as infrastructure for our app and its SQL DBs. Currently Azure provides automatic backups for all tiers (Basic to Premium), but these settings are individual per DB. How can I set backup for the entire server, with all DBs inside?
Backup and restore ENTIRE Azure SQL server (not individual DBs)
0 I would just create PHP scripts to do it. I would create on script per table and run them from the command line. There are drivers for both MSSQL and MySQL. Since PHP is loosely typed this should be a breeze. You can get the create and insert statements right from SQL Server Management Studio. Share Improve this answer Follow answered Jan 21, 2015 at 21:25 itsbenitsben 1,02711 gold badge66 silver badges1111 bronze badges Add a comment  | 
It is a possible of duplicate question may be but please suggest me some work around I got this requirement. I have a MSSQL Database with almost 30 Tables with millions of records. Now i need to take Replica of the same in MySql. The Solution which i am thinking are (This may contains loopholes and it may not be good) Sol 1 :- I created a linked server of MySql and by using trigger in MSSQL Tables and insert in MySQl For the existing records by using import wizard of workbench import the data. Sol 2 :- Using SymmetricDS. What is the best way to achieve this. I am very new to Database Administration stuffs. Please help me in this regard. Note :- After we replicated to MySql it should be in sync with MSSQL. UPDATE :- If anyone knows as the way dan b said how to do it via SQL Server Replication Using ODBC please give some reference. I tried this steps here. In the second step if i click new publication under Replication SSMS i got this error SQL Server replication requires the actual server name to make a connection to the server. Connections through a server alias, IP address, or any other alternative name are not suppported. Specify the actual server name, 'USER3-PC'. (Replication.Utilities) I installed SQL 2008 R2 Express. And this i tried in my local machine.
Replicating MSSQL Database to MySQl Through MSSQL Replication
0 Alright, since no one has answered, it's actually pretty easy. I've literally just done this with my sisters Windows Phone 8.1 (Nokia Lumia 909) - use MicroUSB cable, make sure the phone is unlocked (i.e. you can see the "Live Tiles") and Ubuntu should mount it automagically. It'll show "Phone" as a drive, go into that and you'll see the following folders: Documents, Downloads, Music, Pictures, Ringtones, and Videos. Share Improve this answer Follow answered Jan 18, 2015 at 13:51 James GrovesJames Groves 7111 silver badge44 bronze badges Add a comment  | 
i know this question is not typical for stackoverflow, but maybe some of you are willing to help me managing my backup strategy. i want to upload certain folders from my android device, windows 8.1 tablet, windows 7 notebook and ubuntu on my workplace. the best strategy for me would be the following as i figured out: android folders should be backed up with the condition: 2 hours after i plugged the phone in for charging same condition for my windows 8.1 tablet: backup the folder 2 hours after i plugged in for charging my windows 7 notebook should backup: every time after restart same condition for ubuntu: backup every time after restart i appreciate every idea that brings me further to mange this, especially for apps that could help me on windows and ubuntuu. google didnt help me a lot yet. Best regards and sry for my bad german english
backup Android, Windows 7, Windows 8.1 and Ubuntuu Folders to the same server
0 That doesn't seem possible as rsync need to go in each folder to see what has changed. The best option remains the --update one, in order to not transfer all files, and skip any files which exist on the destination and have a modified time that is newer than the source file. Another option: For text files, a source control system like Git would indeed ignore dir2, as it would detect its associated SHA1 hasn't change (no need to go in that folder). Share Improve this answer Follow answered Oct 11, 2014 at 7:11 VonCVonC 1.3m539539 gold badges4.6k4.6k silver badges5.4k5.4k bronze badges Add a comment  | 
I am developing my backup pet project and can't find a rsync feature I desperately need. Imagine this: parent1/ dir1/ file1 parent2/ dir2/ file2 If I move "dir1" to "parent2" like this: parent1/ parent2/ dir1/ file1 dir2/ file2 it leaves no other option than to recursively sync parent2/, otherwise file1 will be omitted. What I want to do is force rsync to recursively sync newly created dir1, but omit dir2 Any ideas?
Rsync recursively only new directories
Thank you every one for responding and helping me on this. It turns out to be a log file corruption. Below steps solved my issue Stop mirroring Switching the database to the Simple recovery model Performing a checkpoint (which should clear the active log as long as nothing else requires the log to be kept active) Switching back to the Full recovery model Reestablishing the log backup chain by performing a full backup Start mirroring http://sqlmag.com/blog/transaction-log-corruption-and-backups
I have a Sql Server 2008 Standard version. Mirroring is set up on the server in full safety mode. Its been working fine till today. The transaction log back-up fails every-time with an error "Error: 2014-09-25 08:34:33.17 Code: 0xC002F210 Source: JuneDB Log Backup Execute SQL Task Description: Executing the query "BACKUP LOG [JuneDB] TO DISK = N'H:\BKs\Hou..." failed with the following error: "Read on "E:\LDFs\JuneDB.ldf" failed: 1(Incorrect function.) BACKUP LOG is terminating abnormally.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly" I am using a Maintenance plan for taking backups. The drive also contains log files of 5 other databases and their log backups are fine. This problem started after successfully completing rebuild indexes maintenance plan. Full backups do not have any problem. I am not able to identify why reading the log file of this one database is erroring out. How am I supposed to proceed on this issue. Things I tried Ran DBCC CHECKDB([JuneDB]) WITH NO_INFOMSGS returned no error messages Ran a query to take transaction backup instead of using a Maintenance plan. It gave same error Edit Update I just noticed at 4:30 AM we ran a maintenance plan to rebuild all indexes. Looking at the error log, I started getting errors for Transaction log backups after 4:30 Am. I am not sure how rebuild indexes could possibly cause the transaction log backups to fail but they sure seem related
Sql Server Transaction Log Backup Fails
0 I was able to achieve this by creating a wrapper for my script that only outputs the file name to stdout and then using that on ansible: - name: Send backup script copy: src={{ item }} dest=/tmp owner=root group=root mode=744 with_items: - backup.sh - backup_wraper.sh - name: Exec the script command: /tmp/backup_wraper.sh register: backup_path - name: Get the file fetch: src=/root/{{ backup_path.stdout }} dest=/tmp/{{ backup_path.stdout }} flat=yes Share Improve this answer Follow answered Sep 11, 2014 at 13:42 Ignacio VeronaIgnacio Verona 65522 gold badges88 silver badges2222 bronze badges 1 1 Another option would have been to capture the output in a variable and pipe it to sed using the command module to get just the filename. Ansible also has a regex filter docs.ansible.com/playbooks_variables.html#other-useful-filters – jarv Sep 11, 2014 at 14:20 Add a comment  | 
I've a script for backup on my host. It's creating a file, that is specified on it's output: ******************************************************************************** Configuration files backup successfully. Backup file is put to /root/backup_201409111318.tar. ******************************************************************************** How can I copy that file to the ansible server, to be able to later restore it? Is there any way to parse the output of a shell/command task and then do a fetch over that file? Maybe using the script module? It's important to note that I can not just "fetch" files from the server (instead of using the backup script) because the script is performing some additional tasks to create the backup. Thanks in advance, Ignacio.
Ansible: Get the name of a file created by a script in the format name-<date>.tar
Try and break your code to a simpler version and work up to find the problem. I'd imagine its a problem with the file your trying to backup to: Check the path to file is correct. Original Code: $backup1 = fopen('auth_back_'.date('j-M-Y').'.sql','w+'); $CerereSQL = "SELECT * INTO OUTFILE '$backup1' FROM `angajati`,`concedii`"; $result = mysqli_query($con1,$CerereSQL); Simplified Code: $backupFile1 = 'auth_back.sql'; $CerereSQL = "SELECT * INTO OUTFILE '$backupFile1' FROM angajati"; if(mysqli_query($con1,$CerereSQL)) { echo 'Database backed up'; } else { echo 'There was an error backing that database up'; }
I want to create a script,to create a backup for every table of a database.Until now i have this: Connection to database: <?php $AdresaBD="localhost"; $UtilizatorBD="root"; $NumeBD="auth"; $NumeBD1="auth1"; $ParolaBD=""; $con1=mysqli_connect($AdresaBD,$UtilizatorBD,$ParolaBD,$NumeBD); $con2=mysqli_connect($AdresaBD,$UtilizatorBD,$ParolaBD,$NumeBD1); if( mysqli_connect_errno() ) { echo 'Nu ma pot conecta la baza de date!'.mysqli_connect_errno(); } ?> The script: <?php //auth $backup1 = fopen('auth_back_'.date('j-M-Y').'.sql','w+'); $CerereSQL = "SELECT * INTO OUTFILE '$backup1' FROM `angajati`,`concedii`"; $result = mysqli_query($con1,$CerereSQL); //auth1 $backup2 = fopen('auth1_back_'.date('j-M-Y').'.sql','w+'); $CerereSQL = "SELECT * INTO OUTFILE '$backup2' FROM `contact`,`persoane_active`"; $result = mysqli_query($con2,$CerereSQL); ?> (For example,i will choose just a database to explain)I have tables angajati,concedii in my database auth , this script should create a backup file sql with those tables,when i test the files to see if they actually have the data inside of them,in other words when i try to import them,it says succes but the data doesnt show up.That's because the files are blank,no matter what i do.
Creating a backup script for database in php
Would be awesome to see a solution that uses xargs or find's -exec. But here is how can do this with a shell loop and find: Note, this recursively backs up files in sub directories. For .php files: find . -iname '*.php' -type f -print0 | while read -d $'\0' file; do cp "$file" "$file.bak"; done For all files: find . -type f -print0 | while read -d $'\0' file; do cp "$file" "$file.bak"; done For all files that have an extension: find . -iname '*.*' -type f -print0 | while read -d $'\0' file; do cp "$file" "$file.bak"; done
I am not sure how to word this question to find the solution easily online, so after much searching I thought I would ask here. I access my website's files using bitvise ssh client and I use command lines for various grep and sed functions that I've been recently taught, but I can't seem to find a simple way to do this: What is the command line to make a backup copy (.bak) of EVERY file that ends in .php? I am looking for the command to instantly make a backup of every php file at once, so when I go into my files I see things like... index.php index.php.bak For every php file. Also, what is the command line to do this for EVERY file at once, regardless of extension?
Recursively copy/backup all .php files to .php.bak files and keep them in their current paths
Depending on the kind of reformat, tools (like for instance Yodot Mac Data Recovery not free, but there are others) might be able to recover those files. If the Time Machine partition wasn't erased, the git repo might be there too. But other than that, a git repo is just a collection of files, and wouldn't be any more or less recoverable than any other group of files.
I will try to explain my situation the best I can. I had to format my computer (Mac mini / running mavericks). Later, while I'm resorting the backup I realized my last project isn't there. Does someone know how can I recover the project ? I was using git on my computer but I didn't push the repo (sadly). I don't know if that gives me any kind of advantage to recover the files. I'm really lost here... don't know what to do! Thanks
Git Repo recovery
0 robocopy /? says /S :: copy Subdirectories, but not empty ones. Share Improve this answer Follow edited Aug 18, 2014 at 2:15 Felipe Oriani 38.3k1919 gold badges135135 silver badges198198 bronze badges answered Aug 18, 2014 at 1:47 NoodlesNoodles 1,98111 gold badge1111 silver badges44 bronze badges 2 Is /copyall adding the /e, if so, how do i remove, conflict between /s and /e – user3795654 Aug 18, 2014 at 5:42 /mir is creating a mirror backup including empty folders. – foxidrive Aug 18, 2014 at 6:10 Add a comment  | 
I'm looking to modify a robocopy script that is taking way too long to complete. The directory it is copying has thousand's of empty folders, which I'm told i cannot get rid of. robocopy script switches are this: robocopy /copyall /sec /mir /r:1 /w:1 /mt:24 The log file produces this: robocopy /S /E /COPY:DATS /PURGE /MIR /MT:24 /R:1 /W:1 I think it will improve the time it takes to backup this directory if T can remove the /e switch. I assume this comes from /copyall. Question is how can I still use /copyall but remove the /e? Is it as simple as manually adding the below switches, and removing /e? Or is there a better way? /S /COPY:DATS /PURGE /MIR /MT:24 /R:1 /W:1 Below are the results from yesterday. Going from a file/print server to a NAS box across a 10GB link. ------------------------------------------------------------------------------ Total Copied Skipped Mismatch FAILED Extras Dirs : 188548 35 188513 0 0 0 Files : 1144788 1633 1143155 0 0 26 Bytes : 397.981 g 1.033 g 396.948 g 0 0 7.88 m Times : 0:57:10 0:00:44 0:00:00 0:56:18 Speed : 29329947 Bytes/sec. Speed : 1678.273 MegaBytes/min.
robocopy /copyall but not empty folders (/e)
0 You can use tar for creating backups for a full system backup tar -cvpzf backup.tar.gz --exclude=/backup.tar.gz / for a single folder tar -cvpzf backup.tar.gz --exclude=/backup.tar.gz /your/folder to create a gzipped tar file of your whole system. You might need additional excludes like --exclude=/proc --exclude=/sys --exclude=/dev/pts. If you are outside of the single folder you want to backup the --exclude=/backup.tar.gz isn't needed. More details for example here (you can do it over network, split the archive etc.). Share Improve this answer Follow edited Aug 14, 2014 at 17:12 answered Aug 14, 2014 at 12:12 TrudbertTrudbert 3,1781515 silver badges1414 bronze badges Add a comment  | 
I've already done a backup of my database, using mysqldump like this: mysqldump -h localhost -u dbUsername -p dbDatabase > backup.sql After that the file is in a location outside public access, in my server, ready for download. How may I do something like that for files? I've tried to google it, but I get all kind of results, but that. I need to tell the server running ubuntu to backup all files inside folder X, and put them into a zip file. Thanks for your help.
How to do a backup of files using the terminal?
0 There are a few problems with your code. Let's open the file in the normal way, first: open my $fh, '<', 'backup.log' or die "Couldn't open `backup.log': $!"; my %backup; my $current_backup; while (<$fh>) { # when we see a new date... # set up a new hash ref for its details if (/^\w{3} \w{3} \d+ [0-9:]+ 20\d\d/) { chomp; $backup{$_} = {}; $current_backup = $_; } # look for other known types of lines if ($current_backup) { if (/^(\w+) Backup /) { if ($backup{$current_backup}{type}) { delete $backup{$current_backup}; $current_backup = ''; } else { $backup{$current_backup}{type} = $1; } } elsif (/^STATUS: (\w+)/) { $backup{$current_backup}{status} = $1; } elsif (/^Backup Completed: (.*)/) { $backup{$current_backup}{completed} = $1; } else { } } } close $fh; But even then, I don't think you actually want to skip the entry if it isn't normal. Share Improve this answer Follow answered Aug 7, 2014 at 21:55 JoshSNJoshSN 1133 bronze badges 1 Thanks for your input, I'll give your solution a try. I'm actually using my $stdout = $ssh->capture("cat path to log file"); to open the file. As I'm running the script remotely via SSH. I'd like to skip the entry if it isn't normal, the most important part is if the STATUS: was a success or a failure. – Ko_Na_H Aug 7, 2014 at 23:06 Add a comment  | 
I'm relativity new to perl and I'm trying to split and parse out data from a log file. The log file contains information of when a backup was and wither it was successful or not. However at one point in the log file an entry repeats itself and is causing issues parsing the data. How can I skip the entry if it doesn't parse? Normal entry on top and the problematic entry below. > $VAR1 = 'Thu Jul 31 00:35:00 2014'; > $VAR2 = 'Daily Backup for (Wed) Jul. 30, 2014 > STATUS: Successful Thu Jul 31 00:37:22 2014'; > VAR3 = 'Backup Completed: Thu Jul 31 00:40:07 2014 > $VAR1 = 'Fri May 16 00:35:00 2014'; > $VAR2 = 'Daily Backup for (Thu) May. 15, 2014 > STATUS: Successful Fri May 16 00:37:43 2014'; > $VAR3 = 'Daily Backup for (Thu) May. 15, 2014 > STATUS: Successful Fri May 16 00:39:54 2014'; > $VAR4 = 'Backup Completed: Fri May 16 00:42:37 2014 my $stdout = ("cat backup.log"); my @lines = split(/Backup Started: /, $stdout); shift @lines; foreach(@lines) { my @backupstarted = split(/\n\n/,$_); my $start = $backupstarted[0]; my @types = split(/ Backup /, $backuptype); my $type = $types[0]; my @statuses = split(/ /, $backupstatus); $statuses[1] =~ s/\://g; my $status = $statuses[1]; my @enddate = split(/ /, $backup); my $end = $enddate[0];
How do I skip a line in perl if an error occurs?
ez-vcard can parse vCard files (disclaimer: I am the author xD ) Reader reader = ... List<VCard> vcards = Ezvcard.parse(reader).all(); reader.close();
I am working in a project that will backup all contacts as a .vcf file in sdcard. At this time, I am able to get all information of a contact ( including number, emails, birthday etc.... ). But I want to get specific information from contacts. (ex: just number). How can I do it? I am trying to modify these codes... but cannot solve my problem. Please help.
How to save speicific data of a contact in .vcf file
0 The problem is due to Security reason, you can't email data from data folder, SO before emailing export it to SD card and then send, It should work. Happy Coding !!! Share Improve this answer Follow answered Aug 2, 2014 at 12:07 Ayush GhoshAyush Ghosh 48722 silver badges1010 bronze badges 2 i have tried it. But when the gmail app opens, it says invalid file format. It shows this error <qeglDrvAPI_eglTerminate:2531>: <<<< Reset Blob Cache Funcs >>>> – Gurjas Aug 2, 2014 at 12:19 yes got the solution. Actually i didn't realized that i was using the pathname as String. While, it should have been in file format. Anyways thanks for your answer. – Gurjas Aug 6, 2014 at 3:50 Add a comment  | 
I have created the backup file for the SQLite Database i have used in my application. All i want is to send this backup file through email. I have implemented the file sending Intent but when they open, it says, you can only send files like (Image, Coarse Location etc.) String pathname = Environment.getExternalStorageDirectory().getAbsolutePath(); String filename = "/Android/data/<package-name>/databases/hello.db"; File file=new File(pathname, filename); Intent i = new Intent(Intent.ACTION_SEND); i.putExtra(Intent.EXTRA_SUBJECT, "Database Backup"); i.putExtra(Intent.EXTRA_TEXT, "Hey there, database successfully sent."); i.putExtra(Intent.EXTRA_STREAM, Uri.fromFile(file));`i.setType("text/plain"); startActivity(Intent.createChooser(i, "Your email id"));
Email database backup file from application - android
0 If your updating your phone's internal system, you may want to backup individual files instead of the entire phone data in a single file. For that purpose, I use adb pull with a little trick. First, I list what I want into a file named folders.txt, then I run this batch below: # this is go.sh while read p; do file=$1$p if [ -e "$file" ] then echo Already exists: "$file" else echo Reading: "$file" adb pull "$file" fi done < folders.txt Example: chmod +x ./go.sh # make it executable ls sdcard > folders.txt # get the list of desired files, and edit it if needed ./go.sh sdcard # run this, as many times you need Share Improve this answer Follow answered Jul 26, 2021 at 3:10 Murilo PerroneMurilo Perrone 49455 silver badges1010 bronze badges Add a comment  | 
I'm interested in using a general backup command like: adb backup -f at_all_app.ab -noapk com.at_all_app on an android 4.1 mobile to backup an app (an 'at all' app) to the mobils SD Card. I try to use this command in the android terminal (shell) to backup something to SD Card but, how wonder, unable to connect for backup was the system answer. The reason for this will be, that there exists no adb-client-server pair, i guess. Normally you have a server client pair form the PC to the android mobil. Is there any way/idea to implement a code to backup a foreign installed package (not the apk) on an android phone SD Card without a PC like the adb command I wrote above? Any suggestions are welcome.
How to use a adb like backup command on android phone
I do not believe this is possible without custom plugin. If you read the vim help carefully, it says that the backup file will be created in the first directory in the list where this is possible. So the behavior you are seeing is by design. *'backupdir'* *'bdir'* 'backupdir' 'bdir' string (default for Amiga: ".,t:", for MS-DOS and Win32: ".,c:/tmp,c:/temp" for Unix: ".,~/tmp,~/") global {not in Vi} List of directories for the backup file, separated with commas. - The backup file will be created in the first directory in the list where this is possible. The directory must exist, Vim will not create it for you. If you really want to be able to backup to multiple directories, I would suggest writing a function to do this, and attaching it to BufWritePre.
I would like to create backup file in several directories after :w in vim, if statement is true. Vim :help says, that you need to put commas between directories and nothing else. But it's not working for me. It reads only the first directory. I tried different ways, such as usingset backupdir+=, or ~/. instead of .. set backup set nowritebackup set backupdir=~/Dropbox if expand("%")==".vimrc" set backupdir=.,~/.vim/backUpDir/,~/Dropbox endif In .vimrc expand returns :echo expand("%")==".vimrc" 1 vim --version VIM - Vi IMproved 7.4 MacOS X (unix) version
.vimrc backupdir several directories
0 The autobackup will run on the primary database -- you can only capture backups on a follower manually. Share Improve this answer Follow answered Jun 17, 2014 at 16:45 rdeggesrdegges 33.2k2121 gold badges8686 silver badges109109 bronze badges Add a comment  | 
From pgbackups documentation: Note that capturing a backup does add some load on your database for the duration of the backup. How this impacts your application will vary with the size of your database and the nature of the app. Consider taking backups on a follower if there is a significant impact from running them on the master. I know I can create a manual backup using the command heroku pgbackups:capture FOLLOWER_DATABASE_URL But when I add the pgbackups addon through the website https://addons.heroku.com/pgbackups it comes with autoback that I don't know how to turn off. When installing the addon, it asks me which app to add it to, but not which database. I have no idea when the automatic backup will run, nor do I know which database it will run on, the primary or the follower.
How to setup auto backup on a heroku pg follower?
I believe it is hanging waiting for you to enter a password, from the mysqldump documentation (http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_password) " If you use the short option form (-p), you cannot have a space between the option and the password." Try: executeCmd ="mysqldump -u root -proot mydb--add-drop-database -r D:\\database\\backup.sql";
Her is the code I use to back up my database. I succesfully get a "backup.sql" file when I run this but when I check it,it's empty. There's no data so meaning I got no backup but an empty sql file instead. Furthurmore, when I run this program, the system hangs and I have to end the process through task manager. public void Backupdbtosql() throws IOException, InterruptedException { String executeCmd = ""; executeCmd ="mysqldump -u root -p root mydb--add-drop-database -r D:\\database\\backup.sql"; Process runtimeProcess =Runtime.getRuntime().exec(executeCmd); int processComplete = runtimeProcess.waitFor(); if(processComplete == 0){ System.out.println("Backup taken successfully"); } else { out.println("Could not take mysql backup"); } }
MySQL database backup creates an empty file
I don't know why but in OSX 10.10 it works! For non-technical folks: open terminal type cd ~/Library/Messages/Archive type open . this opens the Archive folder for you. You can then copy the content.
Is there any way to store a iMessage conversation which i wrote on a Mac? I've found lot's of programs which allow to do this with conversations on an iPhone but not on a Mac. I tried the approach with the ~/Library/Messages/Archive - but there ain't no Archive folder - just the chat.db and a folder called /Attachments. Any suggestions?
How can i save iMessage conversations on OSX?
A direct adaptation of your command: find /root/ -name 'wallet.dat' -execdir bash -c 'echo cp "$0" "/home/backup/${PWD##*/}-${0#\./}"' {} \; Explanation. Each found file will be set as the 0-th positional argument for bash, with execdir, the bash process will be executed from the directory containing the found file, hence $PWD will expand to the containing directory, ${PWD##*/} expands to the current directory, after removing everything up to and including the last slash (so, very likely, the basename of the containing directory), ${0#./} expands to the name of the file found after removing the period and slash. Remove the echo if you're happy with this.
I want to create a backup script for my CryptoCurrency Wallets using bash. All wallets are in a subfolder of /home. find /root/ -name 'wallet.dat' -exec cp {} /home/backup \; This command copies the files; however, I want to do the following: The wallets are always in a structure like this: /home/<coinname>/.<coinname>/wallet.dat. And I want the backuped file to be named <coinname>-wallet.dat so the folder /home/backup has following files: bitcoin-wallet.dat dogecoin-wallet.dat and so on. Is there an easy way to do that?
Backing up specific files and renaming them via Bash
0 NO – AWS RDS does not give a direct/automatic option of creating a .bak file which you can get in to your local system and audit. YES – There are ways to do this. If your database is : small – Generate script from SSMS, with all schema and data. this will create a .sql file, then zip it, and get it locally for you. large -- use bcp command to export data from the database Share Improve this answer Follow answered Apr 10, 2014 at 13:21 Ratan SharmaRatan Sharma 18711 silver badge44 bronze badges 1 what is bcp ? any link / documentation ? – maumau Apr 10, 2014 at 14:48 Add a comment  | 
we need make regular backup and bring it localy for auditing porpouse. How we can have ms sql server backup from an aws rds ms sql server database ? Any automatic way to do it?
Ms sql server, backup and bring it local from aws rds ms sql server database
0 You could look for the file being closed and archive it. The phi notify library allows you to watch given files or directories for a number of events, including CLOSE-WRITE which allows you to detect those files which have closed with changes. Share Improve this answer Follow answered Apr 8, 2014 at 20:12 Tony Suffolk 66Tony Suffolk 66 9,50833 gold badges3030 silver badges3434 bronze badges Add a comment  | 
I'm writing a Python-based service that scans a specified drive for files changes and backs them up to a storage service. My concern is handling files which are open and being actively written to (primarily database files). I will be running this cross-platform so Windows/Linux/OSX. I do not want to have to tinker with volume shadow copy services. I am perfectly happy with throwing a notice to the user/log that a file had to be skipped or even retrying a copy operation x number of times in the event of an intermittent write lock on a small document or similar type of file. Successfully copying out a file in an inconsistent state and not failing would certainly be a Bad Thing(TM). The users of this service will be able to specify the path(s) they want backed-up so I have to be able to determine at runtime what to skip. I am thinking I could just identify any file which has a read/write handle and try to obtain exclusive access to it during the archival process, but I think this might be too intrusive(?) if the user was actively using the system. Ideas?
How to detect 'live' files during filesystem backup
0 Assuming that you have installed Centos, you obviously have crond tool. Put your routines into the cron, and it will execute any script at the specified time: su #login as root crontab -e This will run the FTP upload every day at hh:mm: mm hh * * * curl --upload-file testfile.zip ftp://user:[email protected]/ But i find it more useful to use direct filesystem access for creating backups (you need to configure ssh public key access before): mm hh * * day_of_week_number rsync -avh -e --updates --delete /source remote.host:/dest Share Improve this answer Follow edited Mar 31, 2014 at 9:13 answered Mar 30, 2014 at 21:02 Vitaly IsaevVitaly Isaev 5,51966 gold badges4545 silver badges6565 bronze badges Add a comment  | 
I have Linux Centos 6.5, and I have tried different backup scripts, but I have failed each time. I only have a small amount of experience with Linux, I've only used it to set up a server etc. so I don't know how to do proper backups. I have a 100GB FTP server connected to my Linux server that I can use for backups. I need a script that takes a weekly backup and also a daily incremental backup. I only need to backup certain directories, e.g. /home, /etc and so on. It should also automatically execute every week/day and take a backup and put that backup on the FTP server. Is there anyone who has a proper and working script for this?
Full Weekly Backup & Daily Incremental Backup
I eventually went for REM compress folders to zips for /d %%x in ("C:\Logs\*.*") do start "C:\Program Files (x86)\7-Zip\7z.exe" /b /low /wait "C:\Program Files (x86)\7-Zip\7z.exe" a -tzip "%%x.zip" "%%x\" REM delete the original folders used to create zips for /d %%F in ("C:\Logs\*") do rd /s /q "%%F"
I need to run a script to compress all folders within a folder that is 2 levels within a directory structure. I want the script to compress all folders within the log folder and then delete the original folder, thus saving loads on space. To illustrate the folders I wish to compress, see below: Drive Location-->machinename-->logtype-->folders_i_want_to_compress Within folder 2 there are folders with dates in the format yyyymmdd and it is these that I wish to compress as zip files. I cannot work out how to create a script to do this but I did find a similar script here:7-Zip compress files within a folder then delete files ... that looks like this: REM Usage: ZipFilesRecursively.bat "C:\My Files" for /R "%~f1" %%F in (*) do ( 7z a -mx9 "%%~dpnxF.7z" "%%F" if exist "%%~dpnxF.7z" del "%%F" ) But this is only for files. I cannot work how to change this so that it works for folders rather than files although I believe the start of it would be to use for with the /D switch rather than the /R. As the path to the folders is based partly on the machine name, I need to feed this into the script, so I was planning on using a call from another batch file to then run the compression/deletion script. The call script will look something like this Call zipdeletetest4 machinename1 Call zipdeletetest4 machinename2 Call zipdeletetest4 machinename3 Call zipdeletetest4 machinename4 Can anyone help with this?
Use 7-Zip to Compress folders within a directory and then delete the source folder used to create the .zip file
0 This can't be a rsync problem, there should be something else going on. rsync just does a binary copy from source to destination, the most probable explanation is a simple user error (e.g. you copied from the wrong source directory, source files where already without EXIF data, and so on). For normal copies on reliable hardware, rsync is without doubt the best tool for the job, especially considering the huge amount of filesystems it has to cope with. There are some corner cases where rsync may not behave as it should, at least with default parameters. For example, right now I'm investigating on an issue where, copying to a "not-so-reliable" USB drive, rsync continued to copy happily even when the drive disconnected from USB and the device disappeared. Share Improve this answer Follow edited Apr 13, 2017 at 12:22 CommunityBot 111 silver badge answered Dec 20, 2014 at 14:15 AvioAvio 2,71166 gold badges3030 silver badges5050 bronze badges Add a comment  | 
Trying to set up a simple backup solution for my wife's computer. Have a volume on my server upstairs mounted locally using OSX automount, so it should just be a simple rsync -a sourceDir targetDir When I look at the files it syncs over though, all metadata is lost on jpg files. The created date is preserved on the file and the modified date ends up being the timestamp when the rsync runs, but I can't imagine why EXIF data (Device, exposure etc) would disappear when it should just be a straight file copy. Hoping someone has run into this before and can shed some light on it.
Rsync seems to erase EXIF data from photos
Codeigniter Database Utillity Class doesn't allow you to make a Incremental backup of the database. If you need that you should create a custom library for that. Library should be able to run sql commands. Please read for details about Incremental backup in mysql: http://dev.mysql.com/doc/mysql-enterprise-backup/3.7/en/mysqlbackup.incremental.html
I am using this function for getting full backup of my mysql db. function backup() { date_default_timezone_set("Asia/Kolkata"); $date = date("d-m-Y, g:i A"); $folder = "application/backup"; $prefs = array( 'tables' => array('table1', 'table2') ); $backup =& $this->dbutil->backup($prefs); write_file("application/backup/backup $date.sql.gz", $backup); } But I need to get the Incremental backup of my mysql database hourly in codeigniter. What are the changes required to this code for getting the incremental backup?.
incremental backup in codeigniter
0 Try duplicity along with Dropbox. So you copy your data to the local Dropbox directory. When you have internet connection the dropbox client will sync your backup Share Improve this answer Follow answered Feb 12, 2014 at 13:02 akuzminskyakuzminsky 2,2301616 silver badges2222 bronze badges 2 dropbox isn't really useful because there space limitations and the fact that the backup is stored also on my disc that i actually would like to backup. the possibility to configure a temporary directory to permanently store not yet uploaded files would be a great solution. but this is not implemented right now, isn't it? – arbyter Feb 12, 2014 at 17:18 Well, I don't know all details that's why can suggest only direction to go, but not the ready recipe. Sure, there is free space limit (2G) if you need to backup more that would be additional cost. Technically there are no problems, if you backup let's say only $HOME you can put your Dropbox directory in /var or elsewhere. May be I overcomplicate things and simple shell script can do the job. – akuzminsky Feb 12, 2014 at 18:19 Add a comment  | 
I would like to use duplicity as a second and primarily as a remote backup for my macbook air. I would like to setup the backup as a regularly cronjob. I am traveling a lot so i can not ensure a fast or even an internet connection to my remote backup space at all. Has anyone an idea how to to create regularly backups and upload them only, if an internet connection is detected, with duplicity?
Creating continuously backups with duplicity, uploading them later
Answering each of your questions in order, then: Several options, the most common of which would be one of wget http://mywebsite.com/dump.php or curl http://mywebsite.com/dump.php. Since you have ssh access to the server, you can very easily use rsync to grab a snapshot of the files on-disk with e. g. rsync -essh --delete --stats -zav [email protected]:/path/to/files/ /path/to/local/backup. Once you have the snapshot from rsync, you can make a compressed, dated copy with cd /path/to/local/backup; tar cvf /path/to/archives/website-$(date +%Y-%m-%d).tgz * find /path/to/archives -mtime +120 -type f -exec rm -f '{}' \; will remove all backups older than 120 days.
I'm brand new to shell scripting and have been searching for examples on how to create a backup script for my website but I'm unable find something or at least something I understand. I have a Synology Diskstation server that I'd like to use to automatically (through its scheduler) take backups of my website. I currently am doing this via Automator on my Mac in conjunction with the Transmit FTP program, but making this a command line process is where I struggle. This is what I'm looking to do in a script: 1) Open a URL without a browser (this URL creates a mysql dump of the databases on the server to be downloaded later). example url would be http://mywebsite.com/dump.php 2) Use FTP to download all files from the server. (Currently Transmit FTP handles this as a sync function and only downloads files where the remote file date is newer than the local file. It also will remove any local files that don't exist on the remote server). 3) Create a compressed archive of the files from step 2, named as website_CURRENT-DATE 4) Move archive from step 3 to a specific folder and delete any file in this specific folder that's older than 120 Days. Right now I don't know how to do step 1, or the synchronization in step 2 (I see how I can use wget to download the whole site, but that seems as though it will download everything each time it runs, even if its not been changed). Steps 3 and 4 are probably easy to find via searching, but I haven't searched for that yet since I can't get past step 1. Thanks! Also FYI my web-host doesn't do these types of backups, so that's why I like to do my own.
Bash/Shell Script for automatic backup of website
0 You could use the export function from the open-source program "MySQL Workbench" and then import it on your laptop. MySQL Workbench: http://www.mysql.com/products/workbench/ And how to use MySQL Workbench to export & import: https://www.beastnode.com/portal/knowledgebase/48/MySQL-Workbench-Backup-and-Import-your-MySQL-Database.html Share Improve this answer Follow answered Jan 13, 2014 at 11:30 RononDexRononDex 4,1832424 silver badges4040 bronze badges 0 Add a comment  | 
I am developing with netbeans. I am using MySql database. I want to move my project on my another laptop. But when i copy my project to removable drive i don't get database files with project. Please tell me how to move whole project along with database files. I am copying from hard disk, is that a problem???
Project backup along with database to removable drive
0 I don't know if it is feasible for you, but you can add another member to replica set. This member would be hidden, so it would not be used for queries or writing operations. You can stop this server every day for make your database backups. Share Improve this answer Follow answered Dec 17, 2013 at 7:51 rubenfarubenfa 85111 gold badge77 silver badges2424 bronze badges Add a comment  | 
There is a Replica set (primary, secondary, arbiter) with 300GB data. i want to make daily backup without lock. The Replica is placedWe use Windows 2008R2, so seems not possible to use lvm tools. If i want to make folder copy on secondary, it needed to shut down mongod first (because its not possible copy mongod.lock while mongod is running). What is the best solution to make fastest daily backup
How to Backup from mongoDB without locking tables
Looks like it: http://dev.mysql.com/doc/refman/4.1/en/copying-databases.html though I'd probably stop the engine first...
I currently have a database with 3 MyISAM tables containing very large number of rows (~400,000,000). Even though the rows are not complex and consist of maybe 3 or 4 integer fields, I would like to be able to most effectively backup the database and restore in case of failure. I have tried using mysqldump, but when I recently restored the database it took a really long time (about 14 hours). My data is not mission critical in that it is updated only about once a week, but still I would not like to wait that long if I had to restore it. Since I am using MyISAM tables, is it possible to just copy the .MYD, .MYI, and .FRM files for each table, and, in case I needed to restore the database, just copy these individual tables' files back to where they were? Would that work? Or would I need to copy additional files/data or perform any additional tasks for restoring? Thanks in advance, Tim
MySQL MyISAM tables - Is it possible to just copy the .MYD, .MYI and .FRM files for backup?
Take a look at the "piecemeal backup and restore" - you will find it very useful for your scenario, which would benefit from different backup schedules for different filegroups/partitions. Here are a couple of articles to get you started: http://msdn.microsoft.com/en-us/library/ms177425(v=sql.120).aspx http://msdn.microsoft.com/en-us/library/dn387567(v=sql.120).aspx
We have to design an SQL Server 2008 R2 database storing many varbinary blobs. Each blob will have around 40K and there will be around 700.000 additional entries a day. The maximum size of the database estimated is 25 TB (30 months). The blobs will never change. They will only be stored and retrieved. The blobs will be either deleted the same day they are added, or only during cleanup after 30 months. In between there will be no change. Of course we will need table partitioning, but the general question is, what do we need to consider during implementation for a functioning backup (to tape) and restore strategy? Thanks for any recommendations!
Huge sql server database with varbinary entries
0 So I found that you have to check several things. List item Run TFS admin tool with the service account credentials or login to the server with the TFS service account. List item Make sure all your database names are [TFSServerName. Assuming it's installed on the current server. Mine were defaulted to longer domain names... like ServerName.xya.ado.com I changed it just ServerName After it worked fine. Share Improve this answer Follow answered Apr 8, 2016 at 19:09 Steve ColemanSteve Coleman 2,00722 gold badges1818 silver badges2828 bronze badges Add a comment  | 
I'm at a loss with trying to find a solution to this issue. When I am trying to create a backup plan in TFS 2012 I get "TF400998: The current user failed to retrieve the SQL Server service account information. Please make sure you have permissions to retrieve this information." My TFS server and SQL server are on separate VM's on the same domain. I login to my TFS server with my domain account which has admin rights to the machine which is also the same account I setup TFS with initially. I connect to my SQL server using my local machine using the same domain account and that account is in the sysadmin role. Everything seems to be working fine except for this. All the information I have read seems to point to the SQL service account being a local account, but I'm using my domain account for everything. Below is a link the closet post to some real answers, but I've tried everything I can think of, please help! Thank you! TFS 2010 Backup fails with "The current username failed to retrieve MSSQL Server service account. "
Error "TF400998: The current user failed to retrieve the SQL Server service account information" when trying to configure backup plan in TFS 2012
0 The issue isn't so much about the --delete option (which I would use to keep a consistent image of the repos on both sides), and more the risk of corruption when copying so many files. One solution would be to have a job updating incrementally local (to the server) bundles. A bundle is like a git repo, but condensed in one file. There, your rsync would have to copy over only one file per repo, but that wouldn't backup local config and local hooks of each repos. The other solution is to use an rsync-like solution, able to backup incrementally a huge volume of data: bup (presented in Git Minutes #24). Share Improve this answer Follow edited May 23, 2017 at 12:28 CommunityBot 111 silver badge answered Nov 1, 2013 at 10:50 VonCVonC 1.3m539539 gold badges4.6k4.6k silver badges5.4k5.4k bronze badges Add a comment  | 
I have a huge number of git repositories on an on-site linux server that I need to back up daily to an off-site windows server. Because there are so many files, I want to use rsync instead of plain copy to save time and network bandwidth. (I will use rsync after mounting the windows destination drive.) I also want to avoid a solution that tars up all the files or otherwise compresses them into one big file. This is because the offsite-server is further replicated to other offsite servers, and if I uses a big tar file, instead of just having to copy the 1 or 2 small files that changed, the entire big tar file will have to be copied. (*1) My question is, should I or should I not use the delete flag (--delete) on rsync? The delete flag on rsync will delete files found on the destination but not on the source. I would ideally like to leave the delete flag off, because if a wanted file accidentally got deleted from the source, it would also get deleted on the destination. By no using the delete flag, we risk having the destination be a superset of all the files we want. Is this a problem? Would this somehow corrupt the git repositories on the destination? If it makes a difference in your answer, we only allow fast-forward commits to be pushed to our on-site git server. Maybe it's the case that git will never delete files in the .git directory if only fast-forward commits are done? (*1) Edit 1: Add note about wanting to avoid solutions that compress multiple files into one. Thus git bundle wouldn't be a wanted solution. If rsync isn't the way to go, I'd love to know of any alternative approach you recommend.
Use Delete Flag when Using Rsync to Backup Multiple Git Repos?
Half an answer: it's easy to set the commit date (and committer, and author and author date) with git commit: --author='A U Thor <[email protected]>', --date=<date> set environment variables: GIT_AUTHOR_NAME: this is the A U Thor part above GIT_AUTHOR_EMAIL: this is the [email protected] part above GIT_AUTHOR_DATE: this is the <date> part above GIT_COMMITTER_NAME, --author='A U Thor <[email protected]>'0, --author='A U Thor <[email protected]>'1: these are the "committer" variants of the three AUTHOR settings. These are documented (a bit awkwardly I think) in the manual pages for --author='A U Thor <[email protected]>'2, --author='A U Thor <[email protected]>'3, and --author='A U Thor <[email protected]>'4. See the one for --author='A U Thor <[email protected]>'5 for date formats. If you're unpacking things manually, there's no need to move the repo directory around, as all the git commands respect the --author='A U Thor <[email protected]>'6 environment variable, so you can point to --author='A U Thor <[email protected]>'7 or whatever. As for an automated importer, this question, Is there an opposite command to git archive for importing zip files, is not really answered and is more than two years old, so maybe not, but I don't know. I imagine it would be pretty easy to write a Python program to read zip files and dump them into --author='A U Thor <[email protected]>'8, which would be the obvious solution here. Python already has a built-in zip-file reader.
I have quite recently started to use Git and think it is great. Earlier I did my backups by creating a zip-package of all code and named it with the current date (e.g. "MyAndroidApp -1- 2013-03-10.zip"). That method resulted in that I stored many duplicates of a lot code. Now I want to take these backups and create Git repository out of them. I can't find if it is possible to set the date and time for the commit-stamp, is this possible and how? Further more is there a simple way to commit the code from the zip-archives or do I have to unzip all of them manually, move the .git-folder and commit the code for each one?
How to construct a Git repository out of old code backups
0 One option might be to use the windows Volume Shadow Service to make a snapshot of your C drive. If SQL Server is properly set up, that should also ensure that the database files are in a consistent state. You can then just copy the necessary files over. You may also want to have a look at my greenclone program ( open source and compiled binaries available through https://bitbucket.org/bilkusg/greenclone ) which can take a shadow copy and duplicate any desired directory hierarchy on windows. If you adopt this approach, do please test restoring your data somehow before relying on it for a production system! Share Improve this answer Follow answered Oct 14, 2013 at 13:21 bilkusgbilkusg 12611 silver badge77 bronze badges Add a comment  | 
Background: I m using a SharePoint 2010, and now i have to change my window. what i want is to take a backup of all databases of my SharePoint instance of SQ L server 2008. I have searched, but all methods are proper traditional, that is taking backup from sq l management studio. Problem: Actually i have not much time to backup each database, what i want is to copy that folder(contains SQ L databases) from C:\ . i want a copy of all databases saved. Question Is there any way to get all backups on one copy past ?
backup Sql Databases from file system
Making file system copies of the /sitecore/data/indexes directory will work just fine, but you need to be careful about how you're backing it up. If you try to take a backup while the site is running, you'll get a bad backup due to the way Lucene manages locking on the index's files. Make sure all aspects of your sitecore instance are offline before taking the backup. If this is not possible for you (which it sounds like it isn't because you're on a production environment), you have two options: 1) Make a staging environment where content is initially entered before being published to production and take your offline backups from there, or 2) Modify Velir's Lucene Index Refresher to make backups for you.
Sitecore.NET 6.6.0 (rev. 130404) Our production setup contains a separate web server and database server. Web server hosts the sitecore website as well as the sitecore data folder (including indexes). Database server (obviously) hosts the sitecore databases. In managing DB backups, taking SQL DB backups is not enough, we also have to include Lucene indexes in our backups. Otherwise, in an emergency situation, even if we have the SQL DBs, the website won't function because it depends on Lucene indexes for content searching. Rebuilding indexes is also not an option for us. Indexes based on web database will take an hour or two to rebuild. The ones based on Master database will take more than 40 hours to rebuild due to the large no. of content items in the master database. What are the usual practices involved in taking DB backups in this kind of a setup?
Sitecore - Managing Lucene indexes and database backups
Your filename contains characters ((, )) that would typically need to be escaped. You need to quote the variable. Say: curl --upload-file "$ARCNAME" ftp://$WEBDAVUSER:$WEBDAVPASS@$WEBDAVURL
Help me please. I have got a bug in my backup script. The file uploaded via curl to FTP is 0 bytes in size, and the file name gets cut off: "siteru=2013-09-27(17". Why? When the script is executed there are no errors, and the upload to FTP runs all the way to 100%. #!/bin/bash # #ver 1.0 #2013-09-09 # DBHOST="mysql-host" DBUSER="mysql-user" DBPASS="mysql-pass" DBNAME="mysql-db" DBARC=$DBNAME.sql.gz # WEBDAVURL="ftp-url" WEBDAVUSER="ftp-usr" WEBDAVPASS="ftp-pass" # SCRIPTDIR="/home/site/site.com/docs/backup/" SCRDIR="/home/site/site.com/docs/" SCREXCLUDE="backup" SCRARC="site-com.tar.gz" # ARCNAME="sitecom"=$(date '+%F(%H:%M)')".tar" MAXARC="20" # cd $SCRDIR # tar cfz $SCRIPTDIR$SCRARC --exclude=$SCREXCLUDE * # cd $SCRIPTDIR # mysqldump -h$DBHOST -u$DBUSER -p$DBPASS $DBNAME | gzip > $DBARC # tar cf $SCRIPTDIR$ARCNAME $SCRARC $DBARC # curl --upload-file $ARCNAME ftp://$WEBDAVUSER:$WEBDAVPASS@$WEBDAVURL # rm *.gz # ls -t *.tar | tail -n+$MAXARC | xargs rm -f
File size 0 bytes when uploaded curl ftp
If /dev/sda1 is mounted as your root filesystem, doing a recursive copy on it would also include the other filesystems mounted under its directories. You can mount it again on another directory, e.g. /mnt/system, and then do a recursive copy from there. I suggest using cp -a rather than just -r as well. (In the comment thread the author spells this out as: mkdir -p /mnt/system; mount /dev/sda1 /mnt/system; cd /mnt/system; cp -a * /somewhere.)
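A minimal sketch of that approach, assuming the same device and destination as in the question (/dev/sda1 as the root device, /mnt/Backup on the backup disk); the mount point /mnt/system is an arbitrary name chosen here:
#!/bin/bash
# Back up the root filesystem by mounting its block device a second time,
# so nothing mounted on top of / (proc, sys, the backup disk itself) is copied.
set -e
SRC_DEV=/dev/sda1                      # device holding the root filesystem
DEST=/mnt/Backup/Bkp_$(date +%m_%d_%y) # destination folder on the backup disk

mkdir -p /mnt/system "$DEST"
mount "$SRC_DEV" /mnt/system           # second view of the same filesystem
cp -a /mnt/system/. "$DEST"            # -a preserves permissions, owners, links; /. includes dotfiles
umount /mnt/system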
I am trying to create a bash script that backs up the whole of /dev/sda1 to /mnt/Backup: /dev/sda1 457G 3.5G 431G 1% / /dev/sdb1 2.8T 3.0G 2.8T 1% /mnt/Backup The script that I have is: START=$(date +%D) FOLDER_NAME=`echo $START | tr -s '/' | tr '/' '_'` SOURCE_PATH='/media /bin /boot /cdrom /dev /etc /home /lib /opt /proc /root /run /sbin /selinux /srv /sys /tmp /usr /var' SOURCE_PATH='/' FOLDER_PATH='/mnt/Backup' BACKUP_PATH=$FOLDER_PATH/Bkp_$FOLDER_NAME mkdir -p '$BACKUP_PATH' cp -r $SOURCE_PATH $BACKUP_PATH As you can see above, in the source path I have tried naming all the folders I wanted to back up, but when I run with that path I get an error: this is not a directory. Then I tried the source path "/" as set below it, and the copy starts but gets stuck at cp: reading `/proc/sysrq-trigger': Input/output error cp: failed to extend `/mnt/Backup/Bkp_09_14_13/proc/sysrq-trigger': Input/output error The question is: how can I change my script to successfully back up sda1 to sdb1? Thanks in advance for your help.
Linux Backup Bash
There is a project, TAwarehouse, which can be used for doing backups. I don't have much knowledge about it. You can find more details at these links: backup-hadoop-and-hive, README.
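The linked project aside, a common do-it-yourself approach is to copy the warehouse files and dump the metastore separately. A rough sketch, assuming default warehouse paths and a MySQL-backed metastore; the backup cluster address, user and database names are placeholders:
#!/bin/bash
# 1) Copy the Hive warehouse data to a backup location on HDFS (or another cluster).
hadoop distcp /user/hive/warehouse hdfs://backup-cluster/backups/hive_warehouse_$(date +%F)

# 2) Dump the Hive metastore (table definitions, partitions) from its MySQL database.
mysqldump -u hiveuser -p'secret' metastore | gzip > metastore_$(date +%F).sql.gz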
Is there a way to backup an entire DB on HIVE like we do on MySQL using mysqldump? Thanks, Vivek
How to Backup Database on Hive?
For anyone else looking for a solution to the same problem: I could not find a way to pass RAILS_ENV to the gem, so the workaround was to deploy a static configuration file for each of the dev environments, hard-code the environment (development, or whichever Rails environment applies) in that file, and then link to that file using Capistrano.
I am using the piece of code provided in this link. For some reason I am not able to get the correct RAILS_ENV = ENV['RAILS_ENV'] || 'development' no matter what I do. What may be the reason? What is the better way to get the Rails environment in this case?
finding RAILS_ENV when using backup_gem
#!/bin/bash
shopt -s extglob
OLD=$(exec date -d "now - 7 days" '+%s')
cd /backup || exit 1  ## If necessary.
while read DIR; do
    if read DATE < <(exec date -d "${DIR#*folder_}" '+%s') && [[ $DATE == +([[:digit:]]) && DATE -lt OLD ]]; then
        echo "Removing $DIR."  ## Just an example message. Or we could just exclude this and add -v option to rm.
        rm -ir "$DIR"  ## Change to -fr to skip confirmation.
    fi
done < <(exec find -maxdepth 1 -type d -name 'folder_*')
exit 0
We could actually use more careful approaches like -rd $'\0', -print0 and IFS= but I don't think they are really necessary this time.
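If the directory timestamps can be trusted (i.e. the folders were not touched after they were created), a simpler variant keyed on modification time rather than on the date embedded in the name would be:
# delete backup folders whose contents were last modified more than 7 days ago
find /backup -maxdepth 1 -type d -name 'folder_*' -mtime +7 -exec rm -rf {} +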
I have a bash script that rsyncs files onto my NAS into a directory created like this: mkdir /backup/folder_`date +%F` How would I go about writing a cleanup script that removes directories older than 7 days, based upon the date in the directory's name?
Removing old folders in bash backup script
You can try to clear your cache: delete all of the files in the var/cache/ directory under your Magento installation root. There's a chance that the old configuration is cached.
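A quick sketch of clearing the cache from the shell, assuming a standard Magento 1.x layout where /home/ziezap.nl/public_html is the store root (taken from the error messages in the question):
# remove Magento's file-based caches; they are rebuilt automatically on the next request
rm -rf /home/ziezap.nl/public_html/var/cache/*
rm -rf /home/ziezap.nl/public_html/var/full_page_cache/*   # only present on some installs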
After failing to upgrade my Magento store from 1.4.1.1 to 1.7.02 I decided to go back to the backup I made before upgrading. Unfortunately this gives a few errors when trying to access my website: Notice: Trying to get property of non-object in /home/ziezap.nl/public_html/app/code/core/Mage/Core/Model/Config.php on line 1125 Notice: Trying to get property of non-object in /home/ziezap.nl/public_html/app/code/core/Mage/Core/Model/Config.php on line 1125 Notice: Trying to get property of non-object in /home/ziezap.nl/public_html/app/code/core/Mage/Core/Model/Config.php on line 1125 Fatal error: Call to a member function getIdFieldName() on a non-object in /home/ziezap.nl/public_html/app/code/core/Mage/Core/Model/Abstract.php on line 151 Any ideas how to solve this?
PHP errors on Magento store after restoring backup
Make sure your backup is safe; so long as we have that, we can start again.
1. Reinstall the PostgreSQL server software (check the package names): apt-get install postgresql-8.4 postgresql-client-8.4 postgresql-contrib-8.4
2. Stop the server: /etc/init.d/postgresql stop
3. Restore all your data files, making sure the ownership is correct: cd /var/lib/postgresql/8.4/ mv main main.OLD cp -a /path/to/backup/main .
4. Start the server: /etc/init.d/postgresql start
Check the logs (/var/log/postgresql/...) - if your backup occurred while the database was idle you are probably in luck. Note that you need everything in .../main/ - the database files are in main/base, but the transaction logs and other assorted bits and pieces are needed too. If you get problems, check your permissions, and check your postgresql.conf file (restore that from backup too if you have it, and pg_hba.conf etc. too). There might be some other packages you need to install as well if you were using pl/perl or some such earlier. Now, if you get errors complaining about missing log files or bad blocks, that means the backup happened while the database was writing to disk and there may be corruption. However, let's be optimistic and hope for the best. If it works, check everything looks OK and take a pg_dump of any databases you want straight away.
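For that last step, a sketch of taking proper dumps as soon as the cluster is up again (run as the postgres OS user; paths and the database name are placeholders):
# dump everything (all databases, roles, tablespaces) in one go
sudo -u postgres pg_dumpall | gzip > /path/to/safe/location/all_dbs_$(date +%F).sql.gz

# or dump a single database in custom format, which pg_restore can restore selectively
sudo -u postgres pg_dump -Fc mydb > /path/to/safe/location/mydb_$(date +%F).dump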
I have a big problem - I managed to accidentally uninstall the whole PostgreSQL DBMS from my hard drive. I also lost my database and haven't made any dumps of the containing data. I do, however, have a backup of all files from the server. Is it possible to somehow restore the database from these files? The OS I am using is Debian 6, and the DBMS version is PostgreSQL 8.4. If it is indeed possible, then how should I go about achieving this? ps. Sorry for my English.
Restore postgresql from files [closed]
If your client sites are not going to invest in some kind of infrastructure to support some sort of fail safe for MSMQ, then you may be able to leverage the Audit/Gateway feature of NSB. With this turned on you could have the client messages audited and stored over to some infrastructure that you manage. You would have to work out the details on how to restore the messages, but at least you would have them offsite somewhere.
I am looking to set up a system that consist of various autonomous services communicating via NServiceBus. This system will be deployed in various configurations (some services may be excluded, services will be setup differently) at client locations. These locations will often be large warehouses and will not have a massive IT infrastructure. The services may all sit on one machine, or on many different ones; this may be different for every site. Most sites will have one active DB server (usually SQL server) and a back-up server. Sites will make scheduled back-ups of the DB at various intervals - let's say daily. Each service has it's own data store (this could be a truly separate database or segregated tables in a shared schema). Each service of course also has it's own message queues. Although the services are autonomous they do locally store (referentially in their own DB) information from other services as a type of local read-only cache, this data is derived from received messages. Here is the question: How do I make a meaningful (i.e. consistent and restorable) back-up of this system? I have read the following related answer by Udi Dahan on this subject: http://tech.groups.yahoo.com/group/nservicebus/message/12815 My problem with this answer is this: There is no data center, no SAN; no snapshots. There are "normal" sysadmins that are used to backing up DBs on-site and/or off-site.
NServiceBus Database Backup
Have a try with obnam 1.5 or later; these work fine for me. You should also make sure that obnam has the right to access those paths (i.e. that the user executing obnam has read access to them).
Using obnam 1.4 how do I get both /etc and /var in the same root to enable me to back them up please? I've tried 'root = /etc, /var' which backs up /etc but not /var, and I've googled without success. This is to be used on Debian 7, under Linux.
How to use 'root = ' in Obnam?
The problem was in the method used for building the URL from the file path. I've now changed to +fileURLWithPath:isDirectory: and everything works great. The size for my application in iCloud storage is shown as 0.9 KB (previously it was 50.7 MB).
I have found several similar questions: link1 link2 I have added code as described in link2: I call this method after picture download and saving in directory. -(BOOL)addSkipBackupAttributeToItemAtURL:(NSURL *)URL { assert([[NSFileManager defaultManager] fileExistsAtPath: [URL path]]); if (&NSURLIsExcludedFromBackupKey == NULL) { // Use iOS 5.0.1 mechanism const char *filePath = [[URL path] fileSystemRepresentation]; const char *attrName = "com.apple.MobileBackup"; u_int8_t attrValue = 1; int result = setxattr(filePath, attrName, &attrValue, sizeof(attrValue), 0, 0); return result == 0; } else { // Use NSURLIsExcludedFromBackupKey mechanism, iOS 5.1+ NSError *error = nil; BOOL success = [URL setResourceValue:[NSNumber numberWithBool:YES] forKey:NSURLIsExcludedFromBackupKey error:&error]; if(!success) { NSLog(@"Error excluding %@ from backup %@", [URL lastPathComponent], error); } else { NSLog(@"Path is %@:", [URL path]); } //Check your error and take appropriate action return success; } } When checking in iCloud the size of application is still 50.7 Mb (I'm testing on iPhone with 5.1.1 iOS version), so using of flag didn't have any effect, although success has a "YES" value. Please tell me what I'm doing wrong?
Application size won't change in iCloud after do not backup flag is set
I'm no expert at this, as I just started using the Data Backup API. But I believe that if you declare an attribute twice, it will only use one of them. So in this case, you would have only actually registered .SQLBackupAgent. What I'd do is have one BackupAgent class, for example .MultiBackupAgent. That class would then initialize and call the methods of your two backup agents.
In my Android backup, I want to back up the SharedPreferences and some data stored in a SQL database. Is it possible to register two backup agents in the Android manifest (one for each), or do I have to implement my own custom agent which stores both? If it is possible: <application android:backupAgent=".SharedPrefBackupAgentHelper" android:backupAgent=".SQLBackupAgent" />
Am i allowed to register more than one Backup Agent?
You should first test that script under your own account, where git config user.name and user.email must be set properly. Then you should register your script to cron, making sure it is executed as you, not as root, each user having his/her own crontab. Run cron jobs as a different user: su - <user> -c <command or script> The OP Yulong Tian confirms in the comments: I have solved my problem by changing the user (not root as before) in crontab.
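A sketch of what that looks like in practice, reusing the script path and log path from the question; the name and e-mail values are of course placeholders:
# one-time setup, run as the non-root user that owns the git working copy
git config --global user.name  "Backup Bot"
git config --global user.email "[email protected]"

# entry in that user's own crontab (edit with `crontab -e` while logged in as that user);
# note there is no user field here, unlike /etc/crontab or /etc/cron.d files
*/2 * * * * /var/backWiki.sh > /home/xxx/Tmp/4.txt 2>&1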
I wrote a script to auto-backup a website, and this script pushes resources to GitHub. I added an entry in crontab to let it run automatically. However, I don't know why the resources can't be pushed. I can see from the heads in .git that it has been modified (which means the commit succeeded). I guess the problem is the incorrect use of the username. The information below is the output of the auto-backup script. Committer: root <root@xxxx.(none)> Your name and email address were configured automatically based on your username and hostname. Please check that they are accurate. You can suppress this message by setting them explicitly: git config --global user.name "Your Name" git config --global user.email [email protected] After doing this, you may fix the identity used for this commit with: git commit --amend --reset-author 1 file changed, 14 insertions(+), 14 deletions(-) How can I deal with it? The command in crontab: */2 * * * * root /var/backWiki.sh >/home/xxx/Tmp/4.txt Here is the main part of this shell script: git pull originTyl master git add -A echo '2' git commit -a -m $nowtime echo '1' git push originTyl master echo '3' originTyl means: `[remote "originTyl"]` `url = https://accoutName:[email protected]/xxxx/xxxx.git` `fetch = +refs/heads/*:refs/remotes/originTyl/*`
Auto-backup shell github
Most likely, it's because the settings.php file inside sites/default does not exist in your local version (it was not copied during the backup). Make sure that your backup contains ALL the contents of the folder sites/default. There is a probability they were not copied due to Drupal's default permissions for the sites/default folder. If that's the case, next time you do a backup it would be better to compress the whole Drupal folder and download it as one file. (The asker later confirmed the actual cause in the comments: the database user in settings.php had host "localhost" while the user that had access to the tables had host "%"; changing this to "localhost" solved the problem.)
I recently set up WAMP server on my Windows 7 machine. I copied the code from my live Drupal 7 site to my local folder, and imported my database. However, when I try to access my local site, I get the Drupal install page. I'm not sure how Drupal tells whether it's a fresh install or not, so I'm not sure how to debug this. Any ideas?
After copying live drupal site to wamp server, local site still acts like fresh install
I believe they are playing directly with the index structures, not relying on SQL - an advantage of having access to MySQL's source code. It should be possible to offer such an option through SQL, per connection, but with multiple users connecting through intermediate (web) servers it would be more complicated, if possible at all.
From the mk-archiver help, we can see there is an option to optimize "seek-then-scan". Any idea how do they do this? What I'm really looking for is, if I do have a table with one PKey, and queries SELECT col1,col2 FROM tbl LIMIT 1,10; SELECT col1,col2 FROM tbl LIMIT 11,20; ... SELECT col1,col2 FROM tbl LIMIT m,n; Any way to do this in an optimized way, given m and n are very large values and each select query is initiated in parallel from multiple machines? (will address host/network choking later) How do others tackle the situation if the table doesn't have a PKey? *Using MySQL The default ascending-index optimization causes mk-archiver to optimize repeated SELECT queries so they seek into the index where the previous query ended, then scan along it, rather than scanning from the beginning of the table every time. This is enabled by default because it is generally a good strategy for repeated accesses.
MySQL seek-then-scan optimization for Limit Offset
Quite simply, you will not be able to. BBM doesn't allow access for security reasons, and should be tied to a BlackBerry account anyway. I should think that it is a similar situation for Password Keeper.
I'm developing an application on BlackBerry to back up its data: BBM chats, memos, tasks, calendar notes, Password Keeper data, etc., which can be synchronized with other BlackBerry phones (in case a user purchases a new BlackBerry device). How can I proceed? Please give me some ideas/code to back up the above data. Also, I've come to know that BBM doesn't allow its chats to be accessed. And how can we access Password Keeper data, and how do we back up all of this data? Thanks in advance.
backup blackberry application data programmatically
Well, the problem was with Postgres - you have to add your IP in pg_hba.conf => host all ip trust/md5 (depending on the version). And thanks to a_horse_with_no_name for editing it into a proper format.
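For reference, a sketch of what that change can look like on a Debian-style install; the PostgreSQL version in the path, the client IP and the reload command are assumptions to adapt:
# allow the backup client's IP to connect to all databases with password auth
echo "host    all    all    203.0.113.10/32    md5" | sudo tee -a /etc/postgresql/9.1/main/pg_hba.conf

# make PostgreSQL re-read pg_hba.conf
sudo /etc/init.d/postgresql reload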
My my_backup.rb=> database PostgreSQL do |db| db.name = "xxxxx" db.username = "postgres" db.password = "*********" db.host = "localhost" db.port = 5432 end store_with SCP do |server| server.username = "username" server.password = "password" server.ip = "xxx.xxx.xxx.xxx" server.port = 300 server.path = "~/backups/" server.keep = 5 #server.passive_mode = false end And having this error=> CleanerError: Cleanup Warning The temporary backup folder '/home/ilfs/Backup/.tmp' appears to contain the package files from the previous backup! /home/ilfs/Backup/.tmp/2012.11.26.17.34.07.my_backup.tar These files will now be removed. Please check the log for messages and/or your notifications concerning this backup: 'Description for my_backup (my_backup)' The temporary files which had to be removed should not have existed. Performing Backup for 'Description for my_backup (my_backup)'! [ backup 3.0.27 : ruby 1.8.7 (2012-02-08 MBARI 8/0x6770 on patchlevel 358) [x86_64-linux], MBARI 0x6770, Ruby Enterprise Edition 2012.02 ] Database::PostgreSQL started dumping and archiving 'ces_dev'. Using Compressor::Bzip2 for compression. Command: '/bin/bzip2' Ext: '.bz2' Database::PostgreSQL Complete! Packaging the backup files... Splitter configured with a chunk size of 250MB. Packaging Complete! Cleaning up the temporary files... ModelError: Backup for Description for my_backup (my_backup) Failed! An Error occured which has caused this Backup to abort before completion. Reason: OpenSSL::PKey::PKeyError not a public key "/home/sumanta/.ssh/id_rsa.pub" can anybody please help with a quick reply?
Trying to have postgres database backup by backup rubygem nad ruby on rails
Good news: you can use normal Java I/O facilities on Android, as well as Android-specific ones. You will need to decide whether to use external storage (SD card) or internal storage, or maybe shared preferences. For the Java I/O, you'll need to know how to associate Java streams with files. For things such as shared preferences, there are specific APIs. You could start from a good overview here. The concepts are there, as well as the keywords to google further: http://www.vogella.com/articles/AndroidFileBasedPersistence/article.html (Note that Environment.getExternalStorageDirectory() is the right approach, while hardcoded paths such as /sdcard or /mnt/sdcard, which you can find in some top-ranked examples, are wrong.)
maybe my question is a little stupid but I couldn't find this answer. What I'm doing is a backup of the APNs list. I want to store the APNs list in a file. I already have the list of APNs but I want to store this list not as database but as a file. Also this file needs to be read by my device to load all the list eventually. Any one can guide me in the right direction? Thanks!
Can I store a Content Provider in a file? In Android
Have a look at SSMS Tools Pack. It's free for versions less than 2012 and lets you easily run queries against multiple target databases. Disclaimer: I have no stake in this tool other than as a user of it for the last few years.
I've built a backup query in SSMSE 2005 and saved as a text query. Is there a way to loop through DB and write each as a unique file? What I've done is copy/paste and find/replace database_name and run both in a single pass. That works, but would like either: Run a single loop script which backs up each DB individually OR Batch backup from the SSMSE GUI BACKUP DATABASE [ION_Data_Nov08] TO DISK = N'C:\Documents and Settings\DB.control\Desktop\ION Database Backups\121025_0700\ION_Data_Nov08' WITH noformat, init, name = N'ION_Data_Nov08-Full Database Backup', skip, norewind, nounload, stats = 10 go DECLARE @backupSetId AS INT SELECT @backupSetId = position FROM msdb..backupset WHERE database_name = N'ION_Data_Nov08' AND backup_set_id = (SELECT Max(backup_set_id) FROM msdb..backupset WHERE database_name = N'ION_Data_Nov08') IF @backupSetId IS NULL BEGIN RAISERROR( N'Verify failed. Backup information for database ''ION_Data_Nov08'' not found.', 16,1) END RESTORE verifyonly FROM DISK = N'C:\Documents and Settings\DB.control\Desktop\ION Database Backups\121025_0700\ION_Data_Nov08' WITH FILE = @backupSetId, nounload, norewind go
SQL backup loop
Have you tried contacting support about providing you with either a VMDK or a VHD file? Why are you so sure they won't give it to you? If they don't, you could do a full system backup with either Windows Backup or any imaging software. Get that backup locally. Restore the backup locally on a virtual machine. Run sysprep on that local VM. Get the VHD, upload it to Azure and finally create a Windows Azure VM from that VHD. (In the comments the asker notes that Hostway will definitely not hand over the VM file and asks whether restoring directly from the backup file on an Azure VM would work; the answerer replies that it might, but he is not an expert on how Windows Backup works.)
I want to move an existing Server 2008 instance from Rackspace/Hostway to Azure. Can I do a full OS/data backup, copy the backup file to the Azure server, and then restore from the backup file? How do you suggest I migrate this server to Azure? Hostway will not let me get a copy of the VMDK file.
Azure Backup Restore
I'm assuming you have SQL Express, and thus no SQL Agent; maybe that is the reason you want to export using Entity Framework? You can always do it with SQL. I just found a link which explains how you can do it, based on the assumption that you don't have a SQL Agent and want to make backups: http://www.winblogs.net/index.php/2009/11/22/automating-backup-of-databases-in-sql-20052008-express/ PS: This question has been asked before, in another way; take a look at the following answer: https://stackoverflow.com/a/9184833/578552
How do I back up/restore a database using Entity Framework? Is it possible with Entity Framework 4.0? I am using C# 4.0, WPF and EF 4.0.
backup/restore from database with using entity framework
I don't know what you mean by "only one planned backup" - could you explain this? On the other hand, why not do an rsync and delete the oldest backups if needed… This is how I do it:
#!/bin/bash
date=`/bin/date "+%Y-%m-%dT%H_%M_%S"`
HOME=/root
/bin/echo -e "\n\n# Backup from $date\n" >> /var/log/backup.log
/usr/bin/rsync -axzP \
    --delete \
    --delete-excluded \
    --exclude-from=$HOME/.rsync/exclude \
    --link-dest=/COREBACKUP/CurrentBackup \
    / /COREBACKUP/Backups/incomplete_back-$date >> /var/log/backup.log 2>&1 \
    && mv /COREBACKUP/Backups/incomplete_back-$date /COREBACKUP/Backups/back-$date \
    && rm -f /COREBACKUP/CurrentBackup \
    && ln -s /COREBACKUP/Backups/back-$date /COREBACKUP/CurrentBackup \
    && echo `/bin/date "+%Y-%m-%d - %H:%M:%S"` > /var/log/lastbackup.log 2>&1
This script is called every day via cron, and it makes a full backup of "/" excluding everything listed in $HOME/.rsync/exclude. The backups are stored in /COREBACKUP/Backups/back-$date, and the latest backup is available at /COREBACKUP/CurrentBackup. It works fine, although it could have been written in a more user-friendly way ;-) (From the comment thread: the asker explains that "only one planned backup" means Plesk lets him schedule only a single backup and he cannot choose a different target folder per run, and that he would like to keep the Plesk-generated backup files so restores stay easy via Plesk; another commenter asks why separate folders matter at all, suggesting a simple daily backup that keeps only the 7 latest copies.)
I have a server (CentOS) with Plesk installed and I need to plan some backups for each day. Plesk allows only one planned backup, so I created this solution: 1) create a backup every night inside a folder; 2) launch a script that reads the day from the title of a txt file inside the folder (launched every night via crontab); 3) move the backup file into the correct directory (based on the name of the day); 4) change the name of the day in the title of the txt. This is my script (not tested right now): BACKUPNAME="backupname" cd /backup/daily find . -type f | while IFS= read filename; do case "${filename,,*}" in mon.txt) mv $BACKUPNAME ../mon mv mon.txt tue.txt;; tue.txt) mv $BACKUPNAME ../tue mv tue.txt wed.txt;; wed.txt) mv $BACKUPNAME ../wed mv wed.txt thu.txt;; thu.txt) mv $BACKUPNAME ../thu mv thu.txt fri.txt;; fri.txt) mv $BACKUPNAME ../fri mv fri.txt sat.txt;; sat.txt) mv $BACKUPNAME ../sat mv sat.txt sun.txt;; sun.txt) mv $BACKUPNAME ../sun mv sun.txt mon.txt;; * : ;; #nothing esac done Do you think it is a good/stable solution? Thanks!
Suggestion about a BASH script for backups
I would say that it is safe to assume the two are not related. As per the Google Blobstore Java API Overview: "Note: Blobs as defined by the Blobstore service are not related to blob property values used by the datastore."
Google's documentation states the following on their help page for Backup/Restore, Copy and Delete Data: Note: Blob data is not backed up by this backup feature! https://developers.google.com/appengine/docs/adminconsole/datastoreadmin#Backup_And_Restore I did a simple backup/restore with an entity type in my application that contains a Blob field. After I backed up the entity, I removed the data that was stored in the Blob field. When I restored the entity it had that data once again. Is it safe to infer that the warning in the documentation refers to data in the Blobstore and not Blob fields of entities stored in the normal data store?
GAE backup, Blob fields vs. Blobstore
dbcc sqlperf(logspace) shows the approximate usage of log space. Usually this value is higher (but never lower) than the actual free space. This article describes briefly why "log space used" is not reset to "0" after a log backup: http://support.microsoft.com/kb/281879. So your log space used may rise up to 25% and then fall back to, e.g., 5%. You should not worry about an increasing "log space used" value. As soon as you have regular log backups, the transaction log will have enough free space (except in the situation where one single transaction spans 100 MB of log, which I guess is not a very common situation).
I have been tasked with administering some MS SQL Server (Express) databases. My primary DBA experience has been with DB2 LUW and Oracle Database, and I have a hard time understanding the way SQL Server is handling its transaction log. I have a database set to "full" recovery model and a pre-allocated transaction log file of 100 MB. Using the Windows scheduler, I run daily full backups at 00:00 and log backups every 2 hours, starting at 02:00. It was my understanding that running the log backups will automatically free up transaction log space, but this does not seem to be the case -- the output of "dbcc sqlperf(logspace)" shows that the "log space used%" figure is continually (if slowly) rising. It is currently at 5%, so I'm not yet in a state of panic, but I am confused about the way SQL server is doing this. The TSQL statement I use for full backups: BACKUP DATABASE [xxx] TO DISK = N'E:\backups\xxx.bak' WITH DESCRIPTION = N'blah', RETAINDAYS = 7, NOFORMAT, NOINIT, NAME = N'blah', NOSKIP, REWIND, NOUNLOAD, STATS = 10, CHECKSUM The TSQL statement for log backups is: BACKUP LOG [xxx] TO DISK = N'E:\backups\xxx.trn' WITH DESCRIPTION = N'blah', RETAINDAYS = 7, NOFORMAT, NOINIT, NAME = N'blah', NOSKIP, REWIND, NOUNLOAD, STATS = 10, CHECKSUM So, what do I have to do in order to free up log space? Thanks in advance for your input.
MS SQL Server Log Utilization never decreases [closed]
First of all, that 'index.php' in your URLs is not pretty - enable URL rewrites in the admin configuration to fix that. What you might want to do to reset all your URLs and get them back to what they should be is empty the core_url_rewrite table. You will need to reindex after that; however, the URLs should then all be clean and work properly with no mystery product IDs tacked onto the end. You will need to clear down core_url_rewrite with phpMyAdmin or command-line mysql. Hope that helps.
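A sketch of doing that from the command line, assuming a Magento 1.x install and placeholder database credentials; the reindex can equally be run from System > Index Management in the admin:
# empty the URL rewrite table (Magento will regenerate the entries)
mysql -u dbuser -p magento_db -e "TRUNCATE TABLE core_url_rewrite;"

# rebuild the indexes from the Magento root
cd /path/to/magento && php shell/indexer.php --reindexall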
I moved my Magento site from GoDaddy to HostGator. I copied all the files to HostGator, then imported the database from GoDaddy to HostGator. But the new site has a few problems; for example, some item pages are missing, like http://119.18.58.85/~homehero/index.php/shop/seating/union-jack-arm-chair (HostGator site), while the same link works on the GoDaddy site: http://homehero.in/index.php/shop/seating/union-jack-arm-chair.
Magento Backup Error
Your tar command has stdout as its file output (-f -). Is this really what you want to do? I think you would rather back up to a real file or pipe your command somewhere. (In the comments the asker points out that -f - is the default for BackupPC tar backups, since BackupPC reads the tar data through its connection, and that the same command line works when backing up the BackupPC host itself as: $tarPath -c -v -f - -C $shareName+ --totals.)
Ubuntu 11.10 Server; BackupPC 3.2.1, running the cgi-bin version, installed as user www-data to avoid issues with Perl and Apache2 under user backuppc. ssh works fine to the servers on the LAN being backed up. The following command line is reported in the failure log: /usr/bin/ssh -q -x -n -l root 192.168.1.70 env LC_ALL=C /bin/tar -c -v -f - -C / --totals . The log indicates: full backup started for directory / Xfer PIDs are now 26168,26167 Tar exited with error 65280 () status tarExtract: Done: 0 errors, 0 filesExist, 0 sizeExist, 0 sizeExistComp, 0 filesTotal, 0 sizeTotal Got fatal error during xfer (No files dumped for share /) Backup aborted (No files dumped for share /) Not saving this as a partial backup since it has fewer files than the prior one (got 0 and 0 files versus 0) Running /usr/bin/ssh -q -x -n -l root 192.168.1.70 env LC_ALL=C /bin/tar -c -v -f - -C / --totals . at the command line of the BackupPC server, I get a connection and see a bunch of tar output on the tty. The error 65280 seems odd. There are no further hints in the logs. Any experts out there care to exchange some wisdom on this one? Frustrating :)
backuppc method tar no files dumped for share
If you mount the SD card read-only, no, you don't need to umount it. But if it is mounted read-write, then if even one write operation happens you're very likely to get an inconsistent filesystem in the image. So either umount it or remount it read-only. (In the comment thread the asker reports that dmesg shows the card as RW but umount says it is not mounted, and that the device does not appear in /proc/mounts even though all his changes persist; the answerer replies that how it works depends on the system, and that the card may only be mounted while writes are taking place.)
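A sketch of what the remount-then-image sequence could look like on the BeagleBone, reusing the device and destination from the question; note that remounting a live root filesystem read-only may be refused if files are open for writing:
#!/bin/bash
# quiesce writes while the image is taken
mount -o remount,ro /

# stream a compressed image of the whole card to the backup host
dd if=/dev/mmcblk0 bs=4M | gzip -c | ssh [email protected] "cat > /volume1/homes/admin/test.img.gz"

# allow writes again
mount -o remount,rw /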
I am using this command to back up my 4 GB MMC card on BeagleBone Black which is running Ångström Linux: beaglebone:/# dd if=/dev/mmcblk0 | ssh [email protected] "dd of=/volume1/homes/admin/test.img" It seems to be working great. But I'm wondering, do I need to unmount the SD card? What if things change the SD card during the backup? How do I avoid this or avoid getting a corrupt image?
Do I have to unmount /dev/mmcblk0 when doing dd over SSH?
Robocopy gives you two options for specifying dates (for maxage, minage, maxlad, minlad) - either relative (n<1900) or fixed dates (otherwise, treated as yyyymmdd). Full syntax here. You want to include files created or accessed on given date so you have to use min/max lad (last access date) and fixed dates, so let's specify your criteria using robocopy syntax: 1. Exclude files unused since yesterday: use /maxlad (today_date - 1 day) 2. Exclude files used today: use /minlad (today_date) Put those together: robocopy source_dir destination_dir file_spec /maxlad:%today_minus_1% /minlad:%today% today and today_minus_1 vars must be dates in yyyymmdd format (eg.20120710) - how to get them? Well, if you're constrained to pure batch you would have to find scripts to do the math for you, there are some available (eg. here) or write it yourself. If you can use powershell it's pretty simple: for /f %y in ('powershell get-date ^(get-date^).adddays^(-1^) -uformat %Y%m%d') do set today_minus_1=%y gets you first, and for /f %t in ('powershell get-date -uformat %Y%m%d') do set today=%t second variable. So to summarize: set your date vars and then run robocopy (using them in excludes). It's worth to use /L option when checking if it works properly, as actual copy will re-set access timestamps on your files! All vars and commands given as if executed directly from cmd line. If used in batch you will need to add some % (and probably wise to use setlocal) Note: version I use (XP010) does not allow to use negative numbers or space as in your example
I can't figure out how to write a script that backs up only files that were created or modified the previous day. So if I start the script on 25.07 at 15:30, it should back up files between 24.07 00:00 and 25.07 00:00. If possible, the preferred way is by using robocopy. I know about the /maxage:-1 switch, but it works on files that are 1 day old counting from the time the script was started (the problem is that it also includes files from the current day). set source="C:\Folder1" set destination="F:\Folder2" robocopy %source% %destination% /z /MAXAGE: -1
windows batch script for backup
There is no way, no process, no tools, no "hack", no workaround to do this. It just won't work. If you've "upped" a SQL Server database to the 2008 R2 level, there's no way back to 2005. None. Zilch. Nada. Period. If you need to work with your client's SQL Server 2005 database and you need to be able to return it to your client, you either need to install a SQL Server 2005 version on your machine (Express will be fine, as long as you don't exceed the 4 GB size limit), or you need to "ship back" all changes/modifications as SQL scripts which can be executed against the SQL Server 2005 version. (A commenter adds that there was a database format version bump in SP2 for SQL Server 2005, from 611 to 612, so if the customer is on SP1 or earlier you should match that too; otherwise you may upgrade the database format to the SP2 format and the customer won't be able to use it without applying SP2.)
My goal is to work with a client's 2005 database using SQL Server 2008 R2 Express. When I am finished I would like to restore the database in it's 2005 format. Could someone post the process for backing up a 2005 database in 2008 R2 and or the link. Thank you, Mark
Can I backup a SQL Server 2005 database using SQL Server 2008 R2 Express
Was looking for this too, this is the best I could find: http://comments.gmane.org/gmane.comp.calendars.pimlical/21657 Also, ended up using this app to save a few calendars. https://android.stackexchange.com/questions/25210/how-to-backup-the-android-calendar-file-is-there-such-a-file
Does anybody know which files I need to save in order to have a simple backup? The reason for my question is that my Galaxy Android smartphone lost all calendar data without any interaction. I found tons of apps, tons of sync stuff, but NO simple list of files to save. I do NOT want to save my data remotely on Google.
How to backup android the calendar file?
The following shell script will select all tables starting with 'm' and dump each one to the current directory in a file called database.table.sql (for example: test.employees.sql):
DB="test"
TABLES=`mysql -uroot -BN -e "SHOW TABLES FROM $DB LIKE 'm%'"`
for TABLE in $TABLES; do
    mysqldump -uroot $DB $TABLE > $DB.$TABLE.sql;
done
Note that to reduce the size of the backup produced by mysqldump, you can compress it:
shell> mysqldump -u root newspress > /tmp/newspress.sql
shell> gzip /tmp/newspress.sql
A 2 GB dump will be reduced to a considerably smaller size.
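A variant of the same idea, adapted to the question's J-prefixed tables, that writes one compressed dump for all matching tables instead of one file per table (sketch only; password handling is left out):
#!/bin/bash
DB="newspress"
# list all tables whose names start with J
TABLES=$(mysql -uroot -BN -e "SHOW TABLES FROM $DB LIKE 'J%'")

# dump just those tables into a single gzip'd file
# ($TABLES is intentionally unquoted so each table name becomes its own argument)
mysqldump -uroot "$DB" $TABLES | gzip > /tmp/${DB}_J_tables.sql.gz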
I have bulk data of 2 GB on a MySQL server and I want to get a backup of it. I tried using mysqldump -u root newspress > /tmp/newspress.sql but downloading it from the server to my local machine takes a very long time. So I want to get only the specific tables in the database that start with J. For example: Jobseeker, Jobs, Joncategory... etc. How do I do it?
How to get mysql back up tables starting with specific letter?
I had the same problem. I used the following procedure to accomplish the task. First, to move to MSSQL 2005: 1) attach the MSDE database file to MSSQL 2005; 2) run SSMS, right-click the attached database and select PROPERTIES; 3) select the OPTIONS page in the dialog; 4) in the 'COMPATIBILITY LEVEL' dropdown select 90; 5) detach the database file. Now, to upgrade to MSSQL 2012: 1) attach the MSSQL 2005 database file to MSSQL 2012; 2) run SSMS, right-click the attached database and select PROPERTIES; 3) select the OPTIONS page in the dialog; 4) in the 'COMPATIBILITY LEVEL' dropdown select 110; 5) detach the database file. Your MDF file is now a 2012 (level 110) database file. Hope this helps...
I'm trying to copy a database from ye old MSDE to SQL Server 2012 Express, since Microsoft decided not to make MSDE compatible with Windows 7. Lo and behold, when I try the osql restore from disk command, I get the message that 8.0.2055 backups are not compatible with SQL Server 2012. How can I transfer the database and information in it? Every single Google result has either assumed I was trying to downgrade rather than upgrade, or assumed it was an xp-xp transfer.
MSDE .BAK not compatible with SQL Server 2012?
Some useful comments from Eric Hammond in another thread: After you initiate the creation of the snapshot, your application/database is free to use the file system on the volume, but if you have a lot of writes, you could experience high iowait, sometimes enough to create a noticeable slowdown of your application. The reason for this is that the background snapshot process needs to copy a block to S3 before it will allow a write to that block on the active volume. So from my understanding of Oracle's somewhat more robust ALTER DATABASE BEGIN BACKUP operation I will prefer to wait until the snapshot is complete (not pending) to issue the closing ALTER DATABASE END BACKUP.
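A sketch of that sequence with the current AWS CLI rather than the old ec2-* tools (the volume ID is a placeholder, and the ALTER DATABASE statements are assumed to be issued through sqlplus or an equivalent client):
#!/bin/bash
# 1) put the Oracle database into hot-backup mode first:  ALTER DATABASE BEGIN BACKUP;

# 2) request the snapshot of the EBS volume holding the datafiles
SNAP_ID=$(aws ec2 create-snapshot \
            --volume-id vol-0123456789abcdef0 \
            --description "oracle hot backup $(date +%F)" \
            --query SnapshotId --output text)

# 3) wait until the snapshot is no longer 'pending' before leaving backup mode
aws ec2 wait snapshot-completed --snapshot-ids "$SNAP_ID"

# 4) end backup mode:  ALTER DATABASE END BACKUP;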
No luck asking this question on the AWS forum, so will try my luck here: My rough understanding of the sequence of events during an EBS snapshot: sync (??) < 1s take snapshot < 1s (atomic?) copy to S3 the snapshot or any incremental differences from the previous snapshot of this volume (if any) < 1hr (hopefully) Please add any additional steps here, most importantly I'm asking about the actual snapshot event #2 above: Can I rely on this to be a short event (< 1s)? Is it an atomic operation within the block device? How do I know for certain when it is complete (when the ec2-create-snapshot command returns success)? What does the pending state refer to (just the copy process)? In short, can I safely do: ALTER DATABASE BEGIN BACKUP ec2-create-snapshot ALTER DATABASE END BACKUP or do I have to wait until the snapshot process is completely available (not pending) to END BACKUP?
Anatomy of an EBS snapshot for Oracle db begin/end backup
The following links have more details, but basically you need to first back up and download the database used on your dev Drupal site and upload it into your copy of XAMPP. After that you need to edit your configuration, and maybe your .htaccess (if you are not using the default one that comes with Drupal), for your local PC. http://learnbythedrop.com/drop/95 http://drupal.org/node/350271
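A sketch of the database part, with placeholder credentials and database names (on a default XAMPP install the MySQL root user usually has an empty password):
# on the dev server: dump the Drupal database
mysqldump -u dbuser -p drupal_db > drupal_db.sql

# on the local XAMPP machine: create an empty database and import the dump
mysql -u root -e "CREATE DATABASE drupal_db"
mysql -u root drupal_db < drupal_db.sql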
I am beginner in Drupal, I have downloaded site file from the dev server using Git and while I am trying to run these files in the htdocs(xampp) by localhost. I got the below error Server error! The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there was an error in a CGI script. If you think this is a server error, please contact the webmaster. Error 500 localhost 05/30/12 14:30:17 Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/1.0.0e PHP/5.3.8 mod_perl/2.0.4 Perl/v5.10.1
Error 500 during uploading a backup file into Drupal on pc
A small test shows it does work; you must have some other problem... Edit: in a network environment you should add /z.
I try this command line: xcopy e:\myfolder /EXCLUDE:excludeList.txt \\192.168.158.15\public\comp\myfolder /E/I/D/R/H/Y The command copies all files every time, even unchanged files. I use /d, which is supposed to copy just the newer files.
xcopy /d copy all files every time, even unchanged files
I recommend you use Perst as the local database solution for your Windows Phone application. It can be imported or exported as XML, which you can upload/download to/from SkyDrive or another cloud system. Home page of Perst: http://www.mcobject.com/perst/ (In the comments the asker notes that this would require changing the code base and migrating existing users to the DB, and asks how Perst handles BitmapImages - the answer being that Perst is an object-oriented database, so you can store anything you want - but finally points out that Perst is GPLv3, so using it would require releasing the whole application's source code, and the commercial licence is too expensive for his limited customer base.)
I am developing a Windows Phone 7.1 application. The app serializes objects to JSON and saves them to the IsolatedStorageSettings file. The objects also have images that the user may capture with a camera. These images are saved to Isolated Storage as a jpeg file with the "Extensions.SaveJpeg" method. Images are referenced by a unique ID from the object JSON so they can be loaded from the storage with the object itself or loaded only when needed. Now that I have this up and running, I would like to create a backup to SkyDrive functionality with recovery. What I want to ask is how can I simply backup the Isolated Storage as whole, and recover as whole? I've been thinking if there is a way to (1) generate a zip file containing the whole Isolated Storage, (2) upload that to SkyDrive, (3) downloading from SkyDrive and (4) unzipping it replacing any existing files in the storage. The steps (2) and (3) I know how to do (instructions found easily by google). I can also do step (1) but with many lines of code. I am seeking for a simple solution to zip the whole storage and recover from it.
Backup Isolated Storage as whole and recover
I had a similar issue migrating SPF 2010 to a different server. Solution: a database upgrade on the source server. How: open the SharePoint PowerShell and type the Upgrade-SPContentDatabase command, then hit R (maybe Y) when prompted. Cheers. (In the comments another user suggests that if you are restoring a SharePoint 2010 portal on a different server and hit this issue, you can apply SharePoint 2010 SP1 on the target SharePoint server and then restore the portal backup; alternatively, save the entire site as a template with content, upload that template to the second server's solution gallery and create a new site based on it.)
I have created a SharePoint 2010 site collection backup through PowerShell, using the command Backup-SPSite "http://sitename:85" -path "C:\backup.bak" -Force and I am restoring this backup on the same SharePoint 2010 server/machine on a different port, using the command Restore-SPSite "http://sitename:81" -path "C:\backup.bak" -Force It throws this error: Restore-SPSite : Your backup is from a different version of Microsoft SharePoint Foundation and cannot be restored to a server running the current version. The backup file should be restored to a server with version '4.0.145.0' or later. At line:1 char:15 + Restore-SPSite <<<< "http://sitename:81" -path "C:\backup.bak" -Force + CategoryInfo : InvalidData: (Microsoft.Share...dletRestoreSite: SPCmdletRestoreSite) [Restore-SPSite], SPException + FullyQualifiedErrorId : Microsoft.SharePoint.PowerShell.SPCmdletRestoreSite I am amazed: I created the backup and am restoring it on the same SharePoint server, so why is it asking for a different version of Microsoft SharePoint Foundation? Any kind of help will be appreciated.
SharePoint 2010 Restore-SPSite problems
I have also only seen the UIDocument and Core Data examples. What comes to mind is to transform your pics and docs into Core Data blobs and store them with core data anyway. This could also be very efficient. Alternatively, you could check out the Dropbox APIs.
I read the Apple documentation and some other links and found there are examples of using iCloud only with either UIDocument or Core Data. I have a folder created in the Documents directory named "backUPFolder", and it contains some images and other files. I want to ask if it is possible to move this backUPFolder to iCloud with all the data existing in this folder as it is. If it is possible, please provide me some useful link or suggest an approach which I can follow. My requirement is to take a backup of my data on iCloud. Please help me, I am stuck here. Any suggestions would be highly appreciated. Thanks in advance!
Is iCloud only meant for UIDocument and CoreData(How to Take Back up of any folder with its data on iCloud)
I know that the question is old and might not be helpful for the OP, but for others: MozBackup just creates a zip archive containing your original profile folder. So, you can open the .pcv file it creates as an archive (possibly changing its extension from .pcv to .zip first) and copy its contents into the profile folder of the new installation. On Ubuntu, I think this is something like ~/.mozilla/firefox/*.default/ . A recommended (and detailed) way to migrate to Linux can be found at http://kb.mozillazine.org/Moving_from_Windows_to_Linux .
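A sketch of that on Ubuntu; the profile directory name (the random xxxxxxxx prefix) has to be replaced with whatever ls shows on your machine, and Firefox should be closed first:
# unpack the MozBackup archive straight into the profile directory
# (unzip inspects the file contents, so the .pcv extension does not matter)
unzip Firefox_backup.pcv -d ~/.mozilla/firefox/xxxxxxxx.default/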
I recently switched over to Ubuntu and removed Windows 7. I backed up my Firefox data using MozBackup, which apparently does not work on Linux. I tried Wine and it still did not work. Now, regarding the question: I would like to know if there is any chance to "convert" a MozBackup backup file into a FEBE backup file.
Can a MozBackup backup file be converted into a FEBE file?
For a Linux host, if they provide shell access and cron jobs, then you can automate most things. Your host, Arvixe, seems to provide both. As far as I know, every host that allows cron jobs allows at least one a day, although I seem to recall that a few do not allow more than one a day. So you should be able to back up a MySQL database by running a cron job that calls mysqldump. Something along these lines for MySQL should work (details might vary, depending on how your host has set up their servers; you might access your database's hostname through a subdomain, for example): mysqldump --opt --user=YourUserName -p --host=YourHost.com database_name > backup.sql SQL Server has similar features. That will leave you with a file (a really big file?) on their server. You should think hard about getting it off their server, since their reliability on backups is, umm, suboptimal. You can copy it to your local server using sftp or scp. Copying can be automated, too, with simple shell scripts and local cron jobs.
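Putting those pieces together, a sketch of a daily job; all names, hosts and paths are placeholders, and the password would normally come from a protected option file rather than the command line:
# crontab entry on the hosting account: run the backup script every night at 02:30
30 2 * * * $HOME/bin/db_backup.sh

# $HOME/bin/db_backup.sh
#!/bin/sh
STAMP=$(date +%F)
mysqldump --opt --user=YourUserName --host=YourHost.com --password="$DB_PASS" database_name \
    | gzip > "$HOME/backups/database_name_$STAMP.sql.gz"

# push the dump off the host so it survives the host losing its own backups
scp "$HOME/backups/database_name_$STAMP.sql.gz" backupuser@your-own-box:/backups/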
Hello, this is a two-part question. 1) Can someone recommend a way to automatically back up SQL Server and MySQL databases? Preferably an online service (free or paid). It would be nice if the software or website could handle both SQL Server and MySQL. I am currently hosting my .NET MVC 3.0 application on Arvixe. We are in the final stages of development and ran into an issue that corrupted the DB a week or so ago. We were lucky that the DB for now just contained test data, but I gave Arvixe a call to see if they could restore a previous backup. It was only after this problem that they came clean and said that there had been no backup of any files for anyone on my server for more than 6 months because of some issue they were having. We never received any warning about this. This brings me to my next question: 2) Could you provide me with some alternatives for .NET hosting? Monthly or yearly price is not an issue; I'm just looking for the best out there (shared hosting, not ready for a private server yet). I notice I am getting session timeouts all the time while logged into the application, but I see it happening even in the control panel, so I am not sure if the issue might be bigger than just their backup procedures. All help is much appreciated.
sql and mysql automatic backup and .net hosting
When you uninstall the app, the backed-up data gets removed. Look in the logs for BackupManagerService: Removing backed-up knowledge of <app package>. It seems that the backup/restore process can vary by manufacturer and device. The steps in the Testing Backup and Restore document may simply work by uninstalling and reinstalling on a Nexus device, but I would not expect the same behavior and consistency on every device. See also this answer: https://stackoverflow.com/a/13648673/1598308
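For completeness, the usual way to drive this while testing (commands from the stock bmgr tool; behaviour can still differ on OEM builds, as noted above):
# check that the backup transport is enabled on the device/emulator
adb shell bmgr enabled

# force any pending backups to run now
adb shell bmgr run

# ask the transport to restore a specific package
adb shell bmgr restore com.example.android.backuprestore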
I've created a sample project using BackupRestore. I went to register for a key at Android Backup Service. I got the following: Your key is: AEdPqrEAAAAIW4p30C1GTNjzBOqWrb0clI7_OCWxm3ddIgkKhw This key is good for the app with the package name: com.example.android.backuprestore Provide this key in your AndroidManifest.xml file with the following element, placed inside the <application> element: <meta-data android:name="com.google.android.backup.api_key" android:value="AEdPqrEAAAAIW4p30C1GTNjzBOqWrb0clI7_OCWxm3ddIgkKhw" /> When I launch the app and choose "Bacon" + "Tomato", I can see pending backups using dumpsys backup. So I force run it (bgmr run => pendings disappear) and uninstall the app. When I restore it, logcat tells me "No restore data available" and of course, the settings aren't displayed with the correct info. Any ideas what I could be doing wrong ?
Android BackupRestore example not working on Android 2.3 Nexus One
Like you commented yourself, it would certainly be worth trying to reuse your existing configuration. The configuration is typically some site preferences and database settings, so make sure that your database is set up in the same way as before. Regarding your configuration problems: maybe PHP is filtering away the errors. You can check this by searching for error_reporting in your php.ini.
– answered Feb 18, 2012 by Jens Nyman
So, I transferred all of MediaWiki's files from the old server to the new one, and also the MySQL database. Now all I get is a blank screen when I access the wiki. Any ideas? – bluewhale Feb 19, 2012
Again, a blank screen is often caused by bad error_reporting. Also, the existing configuration might expect URL rewriting: mediawiki.org/wiki/… – Jens Nyman Feb 19, 2012
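If PHP is indeed swallowing the errors, a quick way to surface them while debugging is to turn error display on, either in php.ini or temporarily at the top of the entry script. This is only a debugging sketch, not a production setting:

// temporary, for diagnosing the blank screen only
error_reporting( E_ALL );
ini_set( 'display_errors', '1' );

or, in php.ini:

display_errors = On
error_reporting = E_ALL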
So, in college I had a Debian server which used to host a wiki running MediaWiki version 1.9. This server stopped working, and all I have now is its HD. I want to transfer this wiki to a new server, which also runs Debian, but I can't do that with Debian's current stable version of MediaWiki, 1.15, because it is not possible to transfer a wiki to a different version of MediaWiki. So my idea is to install MediaWiki 1.9 on the new server and then move the wiki over. But I am having problems installing it. When I go to http://my_hostname/config/index.php to configure the new wiki, so that I can transfer the other one, I get the following message, and nothing happens:
Checking environment... Please include all of the lines below when reporting installation problems. PHP 5.3.3-7+squeeze8 installed"
I really don't have a clue what is wrong. ANY help would be greatly appreciated!
Trying to transfer older version of mediawiki to new server
Heroku fully manages backing up the database daily for you without you having to do anything. All backups (whether manual or automatic) are available for you to export. You can get a list of those by typing this at the command line: $ heroku pgbackups --app your_app_name_goes_here You can find more information on restoring or exporting to your local machine here: http://devcenter.heroku.com/articles/pgbackups
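For reference, the day-to-day commands of the (legacy) pgbackups CLI looked roughly like the sketch below; plan names and flags changed over time, so treat these as illustrative rather than exact:

heroku pgbackups:capture --app your_app_name_goes_here   # take a backup right now
heroku pgbackups --app your_app_name_goes_here           # list the stored backups
heroku pgbackups:url b001 --app your_app_name_goes_here  # temporary download URL for backup b001
curl -o latest.dump "$(heroku pgbackups:url --app your_app_name_goes_here)"  # pull a copy locally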
I am using the pgbackups addon from Heroku and trying to figure out what the best option is for me: http://addons.heroku.com/pgbackups
I basically want backups to happen on a daily basis: automatic daily backups that retain 7 daily backups and 5 weekly backups. The key question I have from all this is... if I add this addon, does Heroku itself take backups of my DB and store them appropriately? Or do I get some sort of access to a dump of my DB, which I can then store on my local machine and Dropbox?
Is the pgbackups addon for Heroku fully managed?
SQL Server is notorious for being delicate with network shares (this is supposed to have improved with SQL Server 2012). If there is any loss of transmission or even a minimal timeout, SQL Server will abort the backup. Your best bet is to back up to a local disk and then transfer the backup files to the network share outside of SQL Server. You can automate that as well, so you don't have to do these manual steps.
– answered Nov 22, 2011 by user596075
I have tried that, and when I copy the file it goes for almost 1 hour, then I get an error message saying that a process has the file locked. – Gary Mazzone Nov 22, 2011
@GaryMazzone what process has a lock on the file?? I'm guessing you're waiting until after SQL Server has finished the backup? If so, you may have another process that has a handle on it. Run Process Monitor and see what has a handle on the file to further troubleshoot this. – user596075 Nov 22, 2011
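A rough sketch of the two-step approach follows; the paths and database name are placeholders, and the copy step can be run from a SQL Agent CmdExec step, Task Scheduler, or whatever you already use:

-- step 1: back up to a disk that is local to the SQL Server machine
BACKUP DATABASE [YourDatabase]
TO DISK = N'D:\LocalBackups\YourDatabase.bak'
WITH INIT, STATS = 10;

-- step 2 (outside SQL Server): move the finished file to the share, e.g. from a scheduled batch file
-- robocopy D:\LocalBackups \\backupserver\sqlbackups YourDatabase.bak /R:3 /W:30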
I'm using SQL Server backup to a UNC path. I'm getting the following error on a fairly regular basis now:
BackupDiskFile::RequestDurableMedia: failure on backup device 'the unc path\filename here' Operating system error 64 (The specified network name is no longer available.).
Is there any way to get this to work consistently? I have asked the network group and they say there is no network issue here. I saw one write-up saying to set the SQL Server memory to a fixed amount (there is 32 GB on the server, and this is a SharePoint database). Any suggestions would be appreciated. Thanks, Gary
SQL Server backup failing with Error 64
It looks like the security context that you are executing under on the client PC is different from the one on your developer PC. Verify that your client PC credentials have access to the Cafeteria database; otherwise you will get that same message saying it couldn't be found (because it doesn't have access to it). My guess is that the client's SQL Server login doesn't have a mapped database user in the Cafeteria_Vernier_db database.
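If a missing database user turns out to be the cause, the mapping can be created along these lines; the login name is purely illustrative, so substitute whatever account the client application actually connects with:

-- run against the SQLEXPRESS instance on the client PC
USE [Cafeteria_Vernier_db];
CREATE USER [CLIENTPC\AppUser] FOR LOGIN [CLIENTPC\AppUser];
-- db_backupoperator is enough for BACKUP DATABASE; sp_addrolemember also works on older versions
EXEC sp_addrolemember 'db_backupoperator', 'CLIENTPC\AppUser';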
I am using this code for backup database from .mdf file. Backup databaseBackup = new Backup(); databaseBackup.Action = BackupActionType.Database; databaseBackup.Database = CvVariables.Catalog; databaseBackup.Devices.Add(new BackupDeviceItem(new NecessaryFunction().MsSqlBackupFileName(this.backupTextboxPath.Text), DeviceType.File)); Server databaseServer = new Server(@".\SQLEXPRESS"); MessageBox.Show(databaseServer.ToString()); databaseBackup.SqlBackup(databaseServer); On my developer PC this code works fine. But on my client`s PC it throw this exception: Backup Failed for Server 'xxxxx/SQLEXPRESS' Microsoft.SqlServer.Management.Common.ExecutionFailureException: An exception occurred while executing a Transact-SQL statement or batch. ---> System.Data.SqlClient.SqlException: Database 'Cafeteria_Vernier_db' does not exist. Make sure that the name is entered correctly. BACKUP DATABASE is terminating abnormally. at Microsoft.SqlServer.Management.Common.ConnectionManager.ExecuteTSql(ExecuteTSqlAction action, Object execObject, DataSet fillDataSet, Boolean catchException) at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand, ExecutionTypes executionType) --- End of inner exception stack trace --- at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(String sqlCommand, ExecutionTypes executionType) at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteNonQuery(StringCollection sqlCommands, ExecutionTypes executionType) at Microsoft.SqlServer.Management.Smo.ExecutionManager.ExecuteNonQuery(StringCollection queries) at Microsoft.SqlServer.Management.Smo.BackupRestoreBase.ExecuteSql(Server server, StringCollection queries) at Microsoft.SqlServer.Management.Smo.Backup.SqlBackup(Server srv) What am I doing wrong?
Backup Failed for Server 'xxxxx/SQLEXPRESS'
You can use a full export (exp) or a Data Pump export (expdp) to get the schema, including grants.
export: ROWS=N GRANTS=Y
export data pump: CONTENT=METADATA_ONLY
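For illustration, the two variants might look roughly like this; usernames, passwords and file names are placeholders:

# classic export: structure and grants, no rows
exp system/password OWNER=sch_1 ROWS=N GRANTS=Y FILE=sch_1_ddl.dmp LOG=sch_1_ddl.log

# Data Pump: metadata only (object grants are part of the metadata)
expdp system/password SCHEMAS=sch_1 CONTENT=METADATA_ONLY DIRECTORY=DATA_PUMP_DIR DUMPFILE=sch_1_meta.dmp LOGFILE=sch_1_meta.log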
When taking a backup of a schema, will it also copy the permissions granted on a table in that schema? Consider a schema sch_1 that has a table test_table, on which read access is granted to a user tst_usr. So, if I take a backup of sch_1, will it copy the schema along with the access granted to tst_usr on test_table?
Schema backup in oracle
Based on your comment, it sounds like you want to be able to switch service endpoints mid-call if the primary service is offline. I don't think there's any way to do that - at least not elegantly. Once a communication channel is established, it's pretty much set until it is closed (or aborted). There's no way to switch it from one endpoint to a different (backup) endpoint - you couldn't even do it by creating a new channel, because the proxy would still be using the primary endpoint. Based on my understanding of WCF, about the closest you could come would be for the client to detect that the primary service was not responding (most likely through a timeout), and then it could switch to a proxy configured for the secondary/backup service. Now, you might be able to do some checking within IClientMessageInspector.BeforeSendRequest to see if the service is responsive, and if it does not respond, try to generate a new proxy with the backup service endpoint and send the message there... BUT I don't know if that would work, and even if it did, it strikes me as a bit of a kludge. The simplest solution is for the client to switch to the alternate service endpoint if the primary endpoint is down, IMO.
Is it possible (using a behavior and IClientMessageInspector.BeforeSendRequest) to change the communication channel before sending a message? I need to change this because I have a backup/primary strategy for my proxy.
WCF Keep Alive and Backup Strategy
Usually, binary content (exe, dll, images, ...) that doesn't benefit from version control features (diff, labels, merges, ...) isn't put under version control. However: if those images don't change much (i.e. the same image doesn't get modified over and over), and if their number is limited (i.e. you don't upload thousands of images a day to webroot/uploads/*), you can consider adding them to your SVN repo (since it is a centralized repo, you don't have to worry about cloning the full history of the repo like with a DVCS).
I have the code of a website in a Subversion repository. The admin of the site can upload images via a CMS. These images go to different directories inside "webroot/uploads/". This directory is part of the repository, too. I have a cron task to back up the repository periodically (via svnadmin dump), but the images uploaded by the user aren't in the backup because they aren't in the repository. At the moment only the admin of the CMS can upload images, not any other user of the site. I'm thinking about doing a backup of "webroot/uploads/*" with tar and gzip. A different idea is to somehow automatically include the uploaded images in the repository. One more advantage of this is that I would receive all the images on my development computer when I update from the repository. What do you think is the best way? Thank you!
Backup of images that aren't in the repository
I had this same requirement a few months back. Here is my approach. I wrote an application on my local machine and, using Windows Task Scheduler, had it run every day. The application would make a connection to the database server where my database was residing and issue a BACKUP command. This saves the .bak file on the database server, and my application would then authenticate over FTP and download that file to my local machine. You can also have automatic file cleanup handled in that application, but that isn't a core necessity.
– answered Oct 6, 2011 by user596075
Interesting, could you point me in some direction, maybe some online resources or tutorials? Thanks for the answer. – GibboK Oct 6, 2011
I would like your opinions and thoughts on best practices/strategies and tools for backing up a website together with its database. I have posted some ideas here, but I'm not sure what really works in a real-world scenario and what is feasible (I have limited experience with databases). My website and DB are hosted on an external Server A managed by a hosting company; I can access my database using MS Management Studio and my files using FTP. The hosting provider offers disaster recovery for the DB, but I also need regular backups.
Solutions:
01 - A piece of software running on Server A automatically backs up my DB and files at regular intervals and sends the backups to a different Server B (using FTP).
02 - A piece of software running on my local PC that manually or automatically backs up my DB and files and saves them on my local computer or to another Server B.
My questions: What other solutions are available? Am I missing something? What software (open source or not) is designed for this job? What is your experience? I really appreciate your time on this. Please let me know, thanks.
Notes: I would need regular weekly backups.
BackUp Strategies and Tools for MS SQL 2008
I don't think there is any difference between the default apps (those installed along with the Android OS) and apps installed later by the user. As far as I know there is no way you can tell them apart. Ask or search in the Android Developers Google group to get confirmation.
EDIT: The following code gets the names of all installed apps. Similarly, you can get the PackageInfo as well.
private List<String> installedapplist = new ArrayList<String>();

private void getInstalledApps() {
    PackageManager packageManager = this.getPackageManager();
    // 0 = no extra flags; returns every installed application, system and user alike
    List<ApplicationInfo> applist = packageManager.getInstalledApplications(0);
    for (ApplicationInfo pk : applist) {
        String appname = packageManager.getApplicationLabel(pk).toString();
        installedapplist.add(appname);
    }
}
– answered Aug 25, 2011 by Ron (edited Aug 25, 2011)
I'm trying to fetch the default applications (video, images, contacts, SMS) for a backup process. How can I do this? Is there any default method to do this?
android : how to fetch default application for backup process?
I think you should look into the existing HDD image creators (quick googling gave me XXClone, but you may need something else). You basically need to:
1. create an image of the entire PC's storage (i.e. all logical disks)
2. upload the image to the internet area
A relatively small program may perform tasks 1 and 2 for you, provided that you bundle it with an existing image creator.
Closed. This question is opinion-based. It is not currently accepting answers.
I have a request to do something that I am not sure is possible. The request is to create a website that will back up someone's PC, either through the internet or by having them download a small software package that backs up their PC and sends the backup to a remote server, kind of like Carbonite or Iron Mountain. Does anyone have any idea where I need to start with this, or what I should look into in order to code it?
VS 2010 pc backup coding [closed]
The error "Exiting with failure status due to previous errors" means exactly that. There was an earlier problem which, while not fatal to the running of the program, is reason enough to exit with a failure code. Given that you're backing up from the root level, this is almost certainly because some file could not be backed up for some reason. Leave off the verbose flag -v and you should hopefully be able to see the problem better.
– answered Sep 8, 2011 by paxdiablo
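One way to see exactly which files tar is complaining about is to drop -v and keep only the error stream, for example (a sketch, trim the exclude list to your own):

sudo tar -cpzf "/media/TOSHIBA EXT/backup.tar.gz" \
  --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/mnt \
  --exclude=/media --exclude=/lost+found / 2> /tmp/tar-errors.log
less /tmp/tar-errors.log   # typically "file changed as we read it", sockets, or permission warnings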
I've been trying to back up my Ubuntu 11.04 system with the following tar command
sudo tar -cvpzf /media/TOSHIBA\ EXT/backup.tar.gz --exclude=/backup.tar.gz --exclude=/lost+found --exclude=/proc --exclude=/sys --exclude=/mnt --exclude=/media --exclude=/dev --exclude=/home/manuzhang/Music --exclude=/home/manuzhang/Videos --exclude=/home/manuzhang/Pictures --exclude=/home/.aMule /
Every time there is a failure message:
tar: Exiting with failure status due to previous errors
The procedure exited while packaging the directory /sbin several times. Finally I excluded it, but then it exited in /root. So what caused the problem? Does anyone have similar experiences? Many thanks!
Errors when backing up Ubuntu 11.04 with tar
First question: how much space do you have at the destination?
mkdir -p /storage/backups/`date +\%Y-\%m-\%d`-`date +\%A`/$host/$username
rsync -avz /storage/backups/`date --date=yesterday +\%Y-\%m-\%d`-`date --date=yesterday +\%A`/$host/$username/ /storage/backups/`date +\%Y-\%m-\%d`-`date +\%A`/$host/$username/
rsync -avz -e ssh --delete --exclude='....' /home/username/ /storage/backups/`date +\%Y-\%m-\%d`-`date +\%A`/$host/$username/
This will keep the backups nice and neat. Just use the include/exclude options of rsync to only sync the file types you want to back up, or use the approach I described in this post: Local rsync inclusion/exclusion
– answered Jun 13, 2012 by Paperghost (edited May 23, 2017)
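For the "only certain file types" part of the question, one possible rsync filter looks like this (a sketch, run it with --dry-run first to check what would be copied):

rsync -avz --dry-run \
  --include='*/' --include='*.jpg' --include='*.nb' --include='*.pdf' \
  --exclude='*' \
  /home/username/ /storage/backups/selective/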
What I want to do is back up my personal data (on a Linux box), possibly using rsync. I want to include only certain files (jpg, nb, pdf, ...) and exclude everything else. This is easily possible with rsync. What I also want to do is apply certain operations to those files I back up, since I need to save some disk space. So my idea is to resize my jpgs, zip my pdfs, ...
What do you think is the best approach to solve this (as easy as possible)?
Kind Regards, André
Performing a backup of a directory tree and applying certain operations on files
Save the key pair into a password-protected key store.
– answered Feb 24, 2014 by divanov
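If the pair lives in plain PEM files, one hedged example of doing that with OpenSSL (file names are placeholders) is to re-encrypt the private key under a passphrase before copying it anywhere:

# rewrites the private key encrypted with AES-256; you are prompted for the passphrase
openssl rsa -aes256 -in private_key.pem -out private_key.enc.pem
# the public key needs no protection, but keep a copy next to it
cp public_key.pem /path/to/offsite/backup/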
I was wondering what the best way is to make a backup of RSA key pairs. I have them on the same server I use to make backups (they are used for encryption), so right now this is a SPOF. I can't find any good solution by googling around.
Backups of RSA public and private keys
If your backup is larger than the MDF, it means you have a lot of log activity recorded too. SQL Server notes data changes that happen during a full backup and does a "mini" log backup to capture them. I'd say that you need to change your index maintenance and backup timings.
– answered Jun 18, 2011 by gbn
Thank you. We reorganize our indexes every week, should this be done more often? What do you mean about backup timings? We back up our transaction log every hour and do a full backup every night. – Kristina Jun 18, 2011
No need to rebuild indexes unless you have a lot of data changes: sounds like you have. You can do it smarter sqlfool.com/2011/06/index-defrag-script-v4-1. And the backup size growing then shrinking sounds like a smaller backup after index maintenance. I guess I'm trying to say don't worry about it. I would make sure that no other jobs or actions occur during your full backup – gbn Jun 19, 2011
We run SQL Server 2005 and have a database that's about 100 GB (the MDF is 100 GB and the LDF is 34 GB). Our maintenance plan takes a full database backup every night. The backup size is usually around 95-100 GB, but it suddenly grew to 120 GB, then 124 GB, then 130 GB, then back to 100 GB over 4 consecutive days. Does anyone know what could cause this? I don't believe we added and then removed that much data in such a short period of time.
Why would a nightly full backup of our SQL Server database grow 30 GB over night and then shrink again the next day?
It will work fine, but it is rather inefficient. I've done it with Time Capsule and it just works. You can also create a "bare" repository on the stick and regularly push your work to it. This is a lot smaller than the original, but it will not have all your branches (unless you push them all, of course). If you have a second computer then you can push/fetch the project back and forth, and you always have 2 copies. If you fetch from the remote PC you get all the branches. Sites like GitHub and Gitorious allow you to push your repository to the cloud and also serve very effectively as a backup. This is what is great about distributed version control: so much flexibility.
– answered Jun 10, 2011 by Peter Tillemans
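A minimal sketch of the bare-repository-on-the-stick idea; the volume path is a placeholder for wherever the USB drive is mounted:

# one-time setup: create a bare repository on the stick and register it as a remote
git init --bare /Volumes/USB_STICK/myproject.git
cd ~/Projects/MyProject
git remote add usbbackup /Volumes/USB_STICK/myproject.git

# each time you want a backup
git push usbbackup --all    # every branch
git push usbbackup --tags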
What is the simplest way to back up my project? Can I just copy the Xcode project directory (with the hidden .git folder) to a USB stick and copy it back when needed?
Xcode 4 / Git - backup
For a full system backup, you can use Nandroid. But this tool is a bit advanced. It requires rooting the phone, and some other tweaks. Basically, this is very much used by people who install alternative ROMs. However, after backing up with Nandroid, you still need to backup the SDCard. Using cp to copy the /data folder could maybe work to a certain extent, but I wouldn't recommend that because it could break if you restore after having upgraded some apps or for some other reason. A simple backup of the SDCard (no need for adb, just plug with USB and copy) will normally contain your pictures and other files. But the contacts are stored in internal memory, somewhere under /data. You can certainly find some apps on the market that are dedicated to backup. Don't skip this. That said, the Android framework does provide backup to cloud storage, but this requires applications to support it explicitly and I am not sure that many apps do.
Closed. This question is off-topic. It is not currently accepting answers.
Hi, can someone explain in a theoretical sense how to back up an Android installation? I am new to Android (a recent iPhone convert) and have installed the SDK. From the ADB shell, can I basically just 'cp /' to another location and it's all backed up? Would all of my contacts plus app data plus pictures and everything else get transferred? Or is there a recommended method for creating a system image? If I 'cp /' and wanted to reinstall from a backup, would I just copy my backup back to the file system? Any guides on doing this would be appreciated as well :) Thanks.
Specifics on backing up Android OS [closed]
For the first part you can set up a CRON job with this command:
mysqldump db_name > path_to_file.text
Replace the placeholders with your database name and the path to the file you want to dump to. As for the e-mail part, I'm not sure.
– answered May 4, 2011 by Andrew
It runs on a virtual host, I'm not sure I am able to do that. I'd just run a script with CRON and have it do what I need. I'm looking for some backup software or script I could install, instead of reinventing the wheel. – kingmaple May 4, 2011
"I'm not sure I am able to do that". What is it you're not sure you're able to do? Run a MySQL command? Is your MySQL database on the same virtual host as your site? If so, then running that code in CRON should work. Or are you trying NOT to use CRON? – Andrew May 4, 2011
I do not want to reinvent the wheel. I need a script or solution that, when queried, creates a dump of one or more databases, attaches them to an e-mail and sends them to a specific e-mail address, and potentially handles various problems that might occur. I'm sure there is something like that already out there. – kingmaple May 4, 2011
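If the host provides shell access and a mail client such as mutt (an assumption, many shared hosts don't), the whole thing can be a small cron-driven script; everything below is a placeholder sketch rather than a finished tool:

#!/bin/sh
# dump one database, compress it, and mail it as an attachment
DUMP=/tmp/db_backup_$(date +%Y%m%d).sql.gz
mysqldump --user=YourUserName --password='YourPassword' database_name | gzip > "$DUMP"
echo "Nightly dump attached" | mutt -s "MySQL backup $(date +%Y-%m-%d)" -a "$DUMP" -- [email protected]
rm -f "$DUMP"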
Can anyone recommend a reliable and simple PHP-based MySQL database backup system that can run on the same virtual host, creates a dump file of a specific database and sends it as an attachment to an e-mail address, with a frequency I can adjust (or driven by CRON)? Thanks!
PHP and CRON based MySQL database dump-to-email backup system?
ls -l / | grep home
The output will be something like this:
lrwxr-xr-x 1 root wheel 8 Mar 30 14:13 home -> usr/home
In my case the owner is root, and the root user's primary group is wheel, so we add the www-data user to the wheel group so it can list the files in there:
usermod -a -G wheel www-data
You can download some files because they are located in directories owned by the www-data user; where you can't, www-data has no permission on them.
I am trying to back up all the files on our server using some SSH commands via PHP, and I have a script working to some extent. The problem is that only some of the folders actually contain any files, although the folder structure seems to be correct. This is the script I am using:
<?php
$output = `cd /
ls -al
tar -cf /home/b/a/backup/web/public_html/archive.tar home/*`;
echo "<pre>$output</pre>";
?>
I can't even view the files via SSH commands; an example of this is the test account. If I use the following command I am unable to view the website files.
<?php
$output = `cd /home/t/e/test/
ls -alRh`;
echo "<pre>$output</pre>";
?>
But if I use the same commands on a different account I am able to see and download the website files. Is this a permission problem, or am I missing something in my script? Thanks
SSH backup via PHP problem
You could replicate specific customers manually, and with an FK constraint on the address table replication will fail to insert/update those records. For replicating only specified tables in the db, see http://dev.mysql.com/doc/refman/5.1/en/replication-options-slave.html#option_mysqld_replicate-do-table . Use this parameter to silently skip errors during replication: http://dev.mysql.com/doc/refman/5.1/en/replication-options-slave.html#sysvar_slave_skip_errors .
– answered Sep 17, 2011 by georgepsarakis
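For illustration, the relevant lines in the replica's my.cnf could look like this; the table names follow the example in the question, and error 1452 (foreign key constraint fails) should only be skipped if you accept silently dropped rows:

[mysqld]
# replicate only the tables you want in the partial copy
replicate-do-table = mydb.customer
replicate-do-table = mydb.address
# keep replicating past FK violations instead of stopping
slave-skip-errors = 1452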
Is there an easy way to back up and restore partial data from a MySQL database while maintaining the FK constraints? Say I have 2 tables:

| CustomerId | CustomerName |
------------------------------
|         12 | Bon Jovi     |
|         13 | Seal         |

and

| AddressId | CustomerId | City   |
-----------------------------------
|         1 |         12 | London |
|         2 |         13 | Paris  |

The backup would only take customer 12 and address 1. My goal is to take a large database from a production server and replicate it locally, but with partial data. Due to a fairly complicated schema, a custom query is not an option. Also, I can't rely on the existence of a main table from which one would get the related rows. Thanks
Mysql - backup partial data
Yes, because when you restore a database the data is written from scratch into your tables and you have no unused space in the data store; it's like running OPTIMIZE TABLE on the tables of your database. Your data and indexes are re-created. Unused space is normal when you do a lot of inserts/deletes, and it is typically present in the wp_options table, which is used by plugins to save their options.
– answered Mar 3, 2011 by keatch
Thanks for your answer. I just want to confirm that when you said yes, you meant it is normal that the numbers differ for these parameters and that the backup is OK. Is this the case? – semyou Mar 4, 2011
Yes, it's OK! As in my reply, you are "rebuilding" your tables from scratch! – keatch Mar 4, 2011
I created a backup of my MySQL 5.x db using the wp-db-backup plugin. I noticed that the rows and auto-increment counters all look like the original, except the data length and index length, which show smaller values. Do you know if this is a sign that the backup is not good? Thanks,
Is it ok if index length and data length are shorter after a backup