I'm a little unclear on the goal, but if you don't want to change file names then I'd suggest either putting the backup file in a subfolder named after the date, or keeping the current backup in the application folder and moving previous versions into subfolders named after the date.
I've got a small problem, which is the following: in my C# application I have a function named MakeBackup() which, as the name says, makes a backup copy of a txt file. My application also saves some information in a file called settings.txt, so when I launch my application, it checks if this file exists and, if so, asks you to make a backup. Let's assume the user already has a settings.txt file: the application will try to make a backup of that file, and then it will save its own settings.txt file. So, my question is, how can I distinguish the settings.txt file generated by my application from the user's settings.txt file? The names are equal, but not the content. Comparing the content of both files is not the solution I'm looking for, though. I'll try to explain more: let's say the app generates its own settings.txt; every time I launch the program, it asks me to do a backup, but I'd like to do the backup ONLY with the user's settings.txt file, not the one generated by the program.
C# Managing a file backup
Update: Emacs 24.3 has been released with full support for this new setting! In the current trunk of emacs, you can simply customize the variable create-lockfiles: C-h v create-lockfiles Documentation: Non-nil means use lockfiles to avoid editing collisions. In your init file, you can set (setq create-lockfiles nil) Get it via bzr branch bzr://bzr.savannah.gnu.org/emacs/trunk emacs-trunk make src/emacs (I found out about this, because I decided to get active and just add an option like that myself… :) )
When I modify a buffer, Emacs automatically creates a temporary symlink in the same directory as the file being edited (e.g. foo.c): .#foo.c -> user@host.12345:1296583136 where '12345' is Emacs' PID (I don't know what the last number means). Why does Emacs create these links, and how do I prevent it from doing that? Note that I have turned off auto save mode (M-x auto-save-mode) and disabled backup files (M-x set-variable -> make-backup-files -> nil). When I save a modified buffer, or undo the changes to it, the symlink disappears. In particular, I'm trying to prevent Emacs from creating these links because they cause the directory timestamp to be modified, which causes our build system to rebuild an entire module instead of compiling and linking for one changed file :/ Thanks for any input! Update: In order to prevent Emacs from creating interlocking files permanently, you can change src/filelock.c and build a custom binary: void lock_file (fn) Lisp_Object fn; { return; // Unused code below... } Update 2: Arne's answer is correct. It's now possible to disable lock files in the latest Emacs (24.3.1), by adding this to your .emacs file: (setq create-lockfiles nil)
Emacs create broken dot hash filename symlink when editing [duplicate]
Specify the full path in the write_file() function: $backup =& $this->dbutil->backup($prefs); write_file("/home/my_pc/Desktop/NewFolder/backup.sql.gz", $backup); Give write permission to NewFolder. It will work.
I am using this code to write a file in CodeIgniter: $backup =& $this->dbutil->backup($prefs); write_file("application/backup/backup.sql.gz", $backup); But I need to write that file, backup.sql.gz, to another folder on the desktop. How is that possible using write_file() in CodeIgniter?
How to write a file to a folder in Desktop using write_file in CodeIgniter
#!/bin/bash # script to send simple email # Email To ? EMAIL="sending_to_address" # Email text/message EMAILMESSAGE="/mailmessage.txt" /bin/mail -s "SUBJECT" "$EMAIL" < $EMAILMESSAGE -- -f from_email_address Are you sending the email on your LAN? Are you port forwarded if outside? Unless this is for a school project I'd go with using PHP. Follow-up from the comments: the asker runs BackupPC and uses this script to send the backup status to public addresses (Gmail, Exchange, etc.), so port forwarding on the modem is needed. They can send via sendmail, but a here-document such as /usr/sbin/sendmail destination_email_address <<EOF subject:$SUBJECT from:source_email_address echo "$EMAILMESSAGE" EOF only prints the file's path, not its contents; the suggestion was to read the contents of /tmp/message.txt into a variable and use that as the body, or to try something like mail -s "mail subject" recipient@example.com -a "Reply-To: " <<< "/mailmessage.txt".
I have a script to send out email from a particular email address. Still mails are going as username@hostname of the server. #!/bin/bash # script to send simple email # Email To ? EMAIL="sending_to_address" # Email text/message EMAILMESSAGE="/mailmessage.txt" /bin/mail -s "SUBJECT" "$EMAIL" < $EMAILMESSAGE -- -f from_email_address Please correct it so that I can send mails as from_email_address rather than the hostname.
Unable to send email from a bash script
Then you'll most likely want to use this https://www.openshift.com/blogs/introducing-a-new-backup-cartridge.
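If you prefer to script it yourself rather than use the cartridge, a rough sketch of a nightly cron job run from a separate machine might look like the following. It assumes rhc is already installed and configured for your account, that the app is called myapp, and that the AWS CLI has credentials for the target S3 bucket; the exact rhc flags and default snapshot filename can vary between client versions, so treat this as an outline rather than a recipe.
#!/bin/bash
# nightly OpenShift snapshot pushed to S3 (illustrative names throughout)
set -e
APP=myapp
STAMP=$(date +%F)
cd /var/backups/openshift
# rhc writes the snapshot to ./myapp.tar.gz by default
rhc snapshot save "$APP"
mv "$APP.tar.gz" "$APP-$STAMP.tar.gz"
aws s3 cp "$APP-$STAMP.tar.gz" "s3://my-backup-bucket/openshift/"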
I want to configure daily snapshots for my OpenShift instance and save those snapshots to Amazon S3. When I tried to accomplish this task, I faced several difficulties: an OpenShift instance can't create a snapshot of itself, so you have to have a separate instance to create the snapshot for you. When I created a separate instance, I didn't manage to set up rhc properly. When I ran rhc setup (after gem install rhc), it threw me an error: `mkdir': Permission denied - /var/lib/openshift/530...0132/.openshift (Errno::EACCES) I think it would be simpler to create backups for the database, as described here, but I want to try snapshots first. Can you provide me some hints on doing them in a daily manner? Thank you.
Periodic snapshots/backups for openshift instance
To clarify ... I want to make a backup of each directory separately; for example, for /home/user the file would be named backup-user-2014.02.02.tar.gz and placed in the directory /home/backups. I'm currently backing up the entire /home directory with the following script: #!/bin/bash today=$(date '+%Y.%m.%d') tar czf /var/backup/backup_"$today".tar.gz /home That works, but I want the backups done the following way: if the directory is /home/user, the file backup-user-2014.02.04.tar.gz should go into the directory /home/backups.
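A small sketch of that per-directory loop, assuming every entry directly under /home should get its own dated archive in /home/backups:
#!/bin/bash
# one tar.gz per home directory, e.g. /home/backups/backup-user-2014.02.04.tar.gz
today=$(date '+%Y.%m.%d')
dest=/home/backups
mkdir -p "$dest"
for dir in /home/*/; do
    name=$(basename "$dir")
    [ "$name" = "backups" ] && continue   # skip the destination folder itself
    tar czf "$dest/backup-$name-$today.tar.gz" -C /home "$name"
done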
I want to make a backup of each directory in /home separately, and have each directory's tar (backup) file go into a specified directory. This is under Ubuntu Linux.
Backup directories in home with tar
This should work - exec('mysqldump -u foo --password=bar --host=localhost foobar > backup.sql');
I have used the following to check if exec() is enabled on my server: public function exec_enabled() { $disabled = explode(',', ini_get('disable_functions')); return !in_array('exec', $disabled); } And I found out that it was enabled on my server. Now I am trying to run the following, and the database isn't getting backed up: exec('mysqldump --user=foo --password=bar --host=localhost foobar > backup.sql'); I have also tried exec('mysqldump -u foo -p bar foobar > backup.sql'); But it didn't work either. Also, I am not getting any errors (error reporting is on). Can anyone please tell me what am I doing wrong here?
MySQL database backup using PHP
You should use a utility to do this, something like mysqldump. Your way of copying the files will require manually correcting the config files and possibly missing something. Look here for mysqldump.
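For reference, a minimal dump-and-restore sketch with mysqldump; credentials and paths are placeholders:
# on the source server
mysqldump -u root -p --single-transaction --routines --all-databases > /tmp/all-databases.sql
# copy the file over, then on the destination server
mysql -u root -p < /tmp/all-databases.sql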
If I copy mysql binary log files (/var/lib/mysql/mydb the .frm and .ibd files) from one mysql instance to another, will the databases be copied over correctly (assuming using the same mysql server version)?
Copying of mysql binary log files
Before jumping on full server backups, please clarify these questions: Backup software is either agent-based or agentless — which do you want to use? Are you interested in open source or proprietary software? Determine whether your source and destination are in the same LAN or across the Internet. Try to get a picture of the bandwidth between source and destination and the volume of data being backed up. If you are interested, also look into GUI requirements and OS platform support for the backup software, and importantly at how mail notification is configured. Presently I am setting one up for my project and so far have installed bacula v7.0.5 with webmin as the GUI, trying the same config in the Amazon cloud using S3 as storage by mounting s3fs into the EC2 instance. My Bacula is the free community version; I haven't explored mail notification yet.
I'm evaluating the features of a full-fledged backup server for my NAS (Synology). I need: FTP access (backup remote sites); SSH/SCP access (backup remote server); a web interface (in order to monitor each backup job); automatic mail alerting if jobs fail; lightweight software (no MySQL, SQLite ok); optional: S3/Glacier support (as target); optional: automatic long-term storage after a given time (i.e. local disk for 3 months, Glacier after that). It seems like the biggest players are Amanda, Bacula and duplicity (and the like). Any suggestion? Thanks a lot
Backup server for a NAS with web interface
I typically export the rows from the production server, and import into a database on a non-production server (like my local machine), then delete the existing rows from the production server. Also run an optimize on the production server table so the size is recalculated. This is somewhat manual but it keeps the production server table size down, and the export/import process is rather quick.
I have an SQL database which has a main Orders table taking 2-5 new rows per day. Other table which has daily records is Log table. It receives new data every time a user accesses the login page of the web site including time and the IP address of the user. It gets 10-15 new rows per day for now. As I monitor the daily backup of SQL, I realized that it is growing like 2-3MB per day. I have enough storage but it makes me worried. Is it the Log table causing this growth? I deleted like 150 rows but it didn't cause the .bak file size reduce. It increased! I didn't shrink database and I don't want to do it. I'm not sure what to do about it. Is there any other decent way of Logging user accesses?
How to control growing SQL database day by day
Posts are stored in the database. There is no way to restore them from the forum files.
I have downloaded my phpBB forum folder to my local system, so I have all the files locally. I didn't take a backup in any conventional way. Now I want to restore my forum using these files. I tried to install phpBB again and afterwards tried to replace the forum folder with my local folder. Problem: I deleted my website from my hosting account, so the database was also deleted, and now I am not able to restore my forum to its previous status. Fortunately I have old backups and can restore it to two weeks back, but I am hoping I may have some luck since I downloaded all the files while deleting my hosting account. I don't know where the post and user data is stored; if it is stored in the forum folder then I may be in luck. I recently made a big post to my forum which took my entire day, and I want that post back. It will be embarrassing to tell new members that their ids are not available. I am using Plesk on Windows for hosting. Please help in this regard. phpBB version: 3.0.12
Restoring phpbb forum from locally stored(not backup) files
These are hints for you to search for more information, not a full solution. 1) If you are copying a single file then use the copy command. 2) You have foreign characters in your path and you will need to deal with them, as code pages and Unicode may be the issue. 3) Rar should have a switch to use explicit paths or not.
I'm trying to create a backup batch script. Here's what I have so far: @echo off set drive=E:\zaloha\Backup set backupcmd=xcopy /s /e /h /y /q /i echo ### Copying the files... %backupcmd% "%APPDATA%\Mozilla\Firefox\Profiles\6jmi87vr.default\jetpack\jid1-xUfzOsOFlzSOXg@jetpack\simple-storage\store.json" "%drive%\RES_Settings" %backupcmd% "h:\downloads\アニメ\descript.ion" "%drive%\TC\アニメ" echo ### Packing the archive... "d:\programy\winrar\rar.exe" a -r -m5 -agYYYY-MM-DD_HH-MM-SS -df "%drive%\backup_.rar" "%drive%" echo ### Done! @pause Though I've got several problems with it: 1) At the first file, it always asks Does ... specify a file name or directory name on the target ( F = file, D = directory)?. I tried searching in documentation for xcopy, and I thought /i switch should suppress it, but it doesn't. How do I make xfile copy it without questions? 2) At the second file, it states File not found - descript.ion 0 File(s) copied. The file is definitely there. It has archived and hidden attributes, but I thought that /h switch should cover that. 3) When creating an archive, it archives the entire file path from the root folder of my disk. How do I tell rar.exe to create folder path only locally?
Creating a backup batch script
Windows has an FTP.EXE which uses passive file transfers and can be scripted with the -s:file switch. If you use 7-Zip instead of WinRAR then you will probably get superior compression on the highest settings, and it can create ZIP as well as 7z files. Another advantage for you is that 7-Zip has a switch, 7z l -slt file.zip, with which you can get the checksum from the archive without another calculation. You can download the backup again to do another checksum. I put this in an answer block so it's easier to read the points.
I have a specific task to accomplish and doing it manually takes many hours, so I'd like an automated way to do it. Relevant Info: - DB is 80GB (35GB compressed with best compression in WinRAR) - DB is across a VPN connection in the cloud - Want to compress DB, copy back to Enterprise - Copying via SMB almost always will result in corruption, FTP is preferred method across the VPN to prevent the corruption - Would like to verify MD5 checksums before copying, and after to make sure it is not corrupted. Manual Steps: - MD5 checksum on DB backup - Use WinRAR to compress latest backup with best compression - MD5 checksum on rar archive (takes about an hour) - FTP the rar file over the VPN to Enterprise center - MD5 checksum on rar archive - Uncompress - MD5 checksum on DB backup In all reality I can probably skip doing the checksum on the rar archive. Call me anal retentive if you will. I think the best thing would be to find an MD5 checksum command line utility, and do the MD5 checksum and WinRAR compression via a batch script. I am unsure how to do the FTP part though. Suggestions? Thanks guys. Cheers.
Automating DB Backup Copies
Since you are already using dastardly com objects you could try the following for zipping up: $zipFileName = "c:\temp\logs.zip" $shell = New-Object -Com Shell.Application New-Item $zipFileName -Type f $zipItem = $shell.NameSpace($zipFileName) $zipItem.CopyHere( "PathToYourLogFile",16) So if you have a folder full of log files you can run this $pathToYourLogsFolder = "c:\somepath" ls $pathToYourLogsFolder | % { $zipItem.CopyHere( $_.FullName,16) } and then deal with zip and leftover files as you please. You can read more about CopyHere function on MSDN and this will not work on Win2012 http://msdn.microsoft.com/en-us/library/windows/desktop/bb787866(v=vs.85).aspx
I am new to PowerShell and I need to create a script (that will also work through the scheduler) that will: -mount the network path as a drive (I think I did this with the code below) #Machine hostname - needed for archive creation and identification $hname = hostname #Map network drive $net = $(New-Object -Com WScript.Network) $net.MapNetworkDrive("X:", "\\your network share\your folder", $false, "domain\user", "password") #Network folder where archive will be moved and stored $newdir = "X:\your folder\$hname" -Create a zip file with yesterday's log that has a name in the format: $today = (Get-Date).AddDays(-1).ToString('yyyyMMdd') something_$today_something.w3c -Save that zip in a temporary local folder -Move the zip to the network path configured -Delete the original log file Any help finishing and optimizing this script would be greatly appreciated. Thanks.
PowerShell script: Zip yesterday's log and move it to a network path
This is because it is looking for a variable named NOW_website_files, which does not exist, and thus the resulting file name evaluates to .tar.gz. To solve it, put braces around the variable name: #!/bin/sh NOW=$(date +"%F") FILE="${NOW}_website_files.tar.gz" instead of FILE="$NOW_website_files.tar.gz" This way it will concatenate the variable $NOW with the _website_files.tar.gz text.
Hello I am trying to write a simple shell script to use in a cronjob to copy a backup archive of website files to a remote server via FTP. The script below works when I type the file name in by hand manually, but with the date and filename specified as a variable it returns that it can't find ".tar.gz" as if it is ignoring the first part of the filename. I would be grateful if someone could tell me where I am going wrong. #!/bin/sh NOW=$(date +"%F") FILE="$NOW_website_files.tar.gz" # set the local backup dir cd "/home/localserver/backup_files/" # login to remote server ftp -n "**HOST HIDDEN**" <<END user "**USER HIDDEN**" "**PASSWORD HIDDEN**" cd "/backup_files" put $FILE quit END
Shell Script Not Finding File
You have to use the Microsoft.SqlServer.Management.Smo namespace and the Backup method it provides. It has various backup options. Please see this link.
I am developing a Windows application for a small shop to generate invoices. Now I want to give the user the ability to make a backup of each day's database at the click of a button in the application, and also to restore the database from these backups. Please help me; I have searched many topics but nothing worked. Thanks.
Sql Server 2008 R2 Database backup and restore functionality by C# Windows Application
You can't just move files around under the data directory and expect InnoDB to notice them. InnoDB maintains a "data dictionary" which can be thought of like a table of contents for a book. This is how InnoDB knows what tables exist. The data dictionary is updated by DDL statements such as CREATE, ALTER, DROP, RENAME. But it's not updated if you physically move a file into the datadir. A method to import a table from a MySQL Enterprise Backup is documented here: Backing Up and Restoring a Single .ibd File. MySQL 5.6 introduced some new flexibility with their transportable tablespace feature. Note that both of these methods are for importing one table at a time. There is no solution for importing a whole database. Ultimately, it would be a lot easier for you to restore your backup to a separate instance of MySQL, and then access the few things you need to. A good tool to help you set up a temporary instance of MySQL is MySQL Sandbox.
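To make the 5.6 transportable-tablespace route concrete, here is a rough sketch for bringing a single table's .ibd file into a new schema. The database name (newdb), table name (t1) and paths are made-up examples, and the CREATE TABLE must match the original table's definition exactly:
# create an empty copy of the table, then swap its tablespace for the backed-up one
mysql -u root -p -D newdb -e "CREATE TABLE t1 (id INT PRIMARY KEY) ENGINE=InnoDB;"
mysql -u root -p -D newdb -e "ALTER TABLE t1 DISCARD TABLESPACE;"
cp /path/to/restored-backup/olddb/t1.ibd /var/lib/mysql/newdb/
chown mysql:mysql /var/lib/mysql/newdb/t1.ibd
mysql -u root -p -D newdb -e "ALTER TABLE t1 IMPORT TABLESPACE;"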
I used MySQL Enterprise Backup but now I have to restore one table. I don't want to touch my existing tables; I want to restore one database into a new database so I can view the old values and take only a couple of things out of it. I have run apply-log on the backup that I want to restore, then I created a new database called XY and placed all the files that were generated after the "apply-log" operation into the XY folder in the MySQL data directory. I do see all the tables, but when I try to query any of them I get the error "table does not exist". How can I get MySQL to read the restored files into a new table without causing more issues?
How to create a new database from a backup in mysql
First of all this question should be on SuperUser. Nevertheless, your strategy is pretty solid. I would use disks in RAID for added protection. If you want to make sure the ISOs haven't changed you can take their md5sum when you store them and compare it to their md5sum when you retrieve them. (Follow-up from the comments: the asker's "RAID" is simply storing the data twice in different locations — a USB hard disk kept elsewhere and a NAS at home each hold one copy of every ISO image, so there is some redundancy.)
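A minimal sketch of that checksum habit, assuming the images live on an archive disk mounted at /mnt/archive:
# record checksums when the images are written
cd /mnt/archive && md5sum *.iso > checksums.md5
# later, verify that nothing has silently changed or rotted
cd /mnt/archive && md5sum -c checksums.md5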
I want to create a long-term data archive of old stuff I don't need daily, but don't want to throw away either (e.g. all raw data of my thesis work). Optical media have failed me too often in the past, so now I am using an external USB disk and - to protect against accidental modification of the archive - I create ISO images of data batches and store these (and mount them on demand). The harddisk is NTFS formatted for portability (read/write for Linux and Windows, and at least readable for Macs). My question is: Are ISO images on external harddisks a good idea for long-term archiving data? How about bad disk sectors? It sure sounds easier for the OS to fsck a disk with 200 ISO images instead of 2,000,000 separate files, but is it? Should bad disk sectors be my primary worry when thinking about long term archives? Any ideas - or alternatives - for an affordable long-term data storage concept would be appreciated.
How do I ensure data integrity of ISO images?
You would have to either: keep track of the last write date somewhere, persistently across app restarts; query the last write date of the log file itself using the Win32 API GetFileTime() function; or put the current date on each log entry that you write, then you can seek to the end of the log file and read the date from the last log entry that was written. Each time you want to write a new log entry, compare the month+year of the last known date against the current date and then zip+reset the log file if the current date is higher.
I have a program that writes to a log file and zips it. I want to set it up so that it will take the log file and zip it after a month and clear the file and reset it to do it again if another month has passed procedure SendToLog(Const MType : twcMTypes; Const callProgram, callPas, callProssecs, EMessage, zipName : String; AddlStr : String = '' ); Const MTValues = 'EDS'; var LogFile : TextFile; LogName : String; EString : String; begin logName := WebLogPath; // þ for delimeter EString := MTValues[ Ord( MType )+1] + PC + FormatDateTime( 'mm/dd/yyyy hh:nn:ss.zzz', Now ) + PC + callProgram + PC + callpas + PC + callProssecs + PC + EMessage; Assign( LogFile, LogName ); if FileExists(LogName) then Append( LogFile ) { Open to Append } else begin Rewrite( LogFile ); { Create file } end; Writeln( LogFile, EString ); Close( LogFile ); ArchiveFiles('C:', 'mytest.log', 'C:', zipName + '.zip', 'M'); I want to know how I make so that every time the program logs something it checks if the a month has passed then it will zip everything into a new file and reset the log.
Delphi checking date to make backup files on a scheduled
Alternatively you can use the SQL Database Import Export Service - http://msdn.microsoft.com/en-us/library/windowsazure/jj650016.aspx NOTE: There are many other ways, as mentioned here - http://blogs.msdn.com/b/wats/archive/2013/03/04/different-ways-to-backup-your-windows-azure-sql-database.aspx (Caveat from the comments: an export is not a transactionally consistent backup, which is why Microsoft advises doing a "copy database" first and then exporting the copy. If the system is down during the release that is fine; however, using "copy database" also means you can roll back in seconds just by renaming databases, which is ideal in a deployment scenario.)
We are using Windows Azure for a piece of software, and when we release a new version of the system we usually take the site down and take a backup with the code below. (CREATE DATABASE databaseCopy AS COPY OF Database;) The backup is taken to ensure that nothing goes wrong and that we can roll back to the previous version if we have introduced a bigger bug. However this takes a lot of time (hours). Is there a way to do a copy faster? We don't have any active users at the time of the release, so maybe it can be done faster another way? If not, how do you usually do your database upgrades?
Quicker backup in windows azure
xcopy /d — from the xcopy help: /D:mm-dd-yyyy Copy files changed on or after the specified date. If no date is given, copy only files whose source date/time is newer than the destination time.
I change my project files on live by copying only the changed files with one xcopy command. Is it possible to back-up the target files (only the changing ones) into another location with xcopy? Or with a batch script? Sorry, my question is not clear enough, here are some further explanation: I have files in folder A that I xcopy to folder B. But I need to backup the files in folder B that are overwritten. How can I do this the easiest way? Thanks.
How can I backup target files with xcopy?
You need to develop an application with the Google Drive SDK — check the linked documentation and example.
My phone fills up my Google Drive with pictures and videos. I would like to automatically move new files from Google Drive to local storage instead, to free up space on the drive. What would be the best practice using Windows as the local machine? I'm open to some alternatives as well :).
Best way to automatically move files from google drive to local storage?
To detect changes, you can compute a hash code (such as MD5) for the original and the modified versions of the file; if they are identical, no changes were made. I think Dropbox has its own protocol to detect which part of the file was modified. You can work out your own way, for example: divide the file into fixed-size parts and store their hash codes; when the client downloads the file, send this information to the client; after modifying the file, recalculate the hash codes for the parts, compare them with the original hash codes, upload only the parts that were modified, and rebuild the file from the original parts plus the modified parts. rsync is an open source tool that synchronizes files using delta encoding. EDIT: the idea above is very simple and not efficient — if a small word is added near the beginning of an extremely large file, every fixed-size block after the change shifts and is seen as modified. Have a look at VCDIFF, which is described in a research paper and implemented in many languages (including C#), and at the rsync technical report (rsync.samba.org/tech_report) for an algorithm that handles insertions and deletions efficiently.
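As a concrete illustration of delta encoding, here is a small sketch using the rdiff tool from librsync (assuming it is installed); the file names are placeholders:
# 1. signature of the version that was already backed up
rdiff signature backups/report.doc report.sig
# 2. delta between that signature and the current file -- this is all that needs transferring
rdiff delta report.sig current/report.doc report.delta
# 3. rebuild the current version from the old copy plus the delta
rdiff patch backups/report.doc report.delta rebuilt/report.doc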
With a backup application, a good and space-efficient way to back up is to detect changes in files. Some online services such as Dropbox do this as well since Dropbox includes version history. How do backup applications detect changes in files and store them? If you have a monumentally large file which has already been backed up, and you make a small change (such as in a Microsoft Word document), how can an application detect a change and process it? If the file has changes made often, there must be an efficient algorithm to only process changes and not the entire file. Is there an algorithm to do this in C# .NET? Edit: I'm trying to figure out how to encode two files as the original and the changes (in a VCDIFF format or etc.) I know how to use the format and decode it just fine.
How are delta file backups encoded?
If you use the "-T" option with "tar" you can specify the name of a file that contains the names of the files you want to back up. tar -cv -T filelist -f tarball.tar
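Putting that together with the find from the question, a minimal sketch (assuming a single list file named .file_with_backup_register somewhere under /home):
#!/bin/bash
# locate the register file, then archive every file listed in it
list=$(find /home -name .file_with_backup_register | head -n 1)
tar -czv -T "$list" -f "/var/backup/backup_$(date +%m_%d_%Y).tar.gz"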
Hi, this is my first time trying to write a bash script, so I'm not very good at it yet. I need help locating a file in a home directory (this file contains a list of files I want to back up). After that I want to loop over all the lines in the file and make copies of the listed files. This is what I have tried, but it doesn't seem to work: find /home$n -name .file_with_backup_register | while read line ; do cp "$line" /var/backup/temp; done When I then try to make my tar backup in the script with tar -czvf $(date +%m_%d_%Y) I get: tar: Cowardly refusing to create an empty archive. So I'm guessing it's the find /home$n... line that doesn't work.
Bash backup script. Need help finding a file and reading it
This should work: set SourceFolder to POSIX file "/Users/alex/Desktop/test/" set TargetFolder to POSIX file "/Volumes/myusb/" tell application "Finder" if exists SourceFolder then try duplicate SourceFolder to TargetFolder replacing yes on error the error_message number the error_number display dialog "Error: " & the error_number & ". " & the error_message buttons {"OK"} default button 1 end try end if end tell ("POSIX file" is from the "StandardAdditions")
I'm a new user of AppleScript and I'm trying to set up a script to back up a folder from my Mac to a folder on a USB stick. I started to create this script but it doesn't work: tell application "Finder" duplicate folder "/Users/alex/Desktop/test/" to "/Volumes/myusb/test/" replacing yes end tell Thanks for your help.
Simple backup of folder on usb stick with AppleScript
First go to localhost/phpmyadmin and create a database with the same name as before, then import your database file through Browse. If your dump file is named example.sql, create a database named example and import example.sql into it.
I would like to ask, if it is possible to get my database from an offline (not functioning) xampp ? You see, I have backed up my database earlier but I am not sure whether there are all the data I need now and the DB is pretty big (like 50 tables). I wanted to go for a local implementation of apache, mysql and PHP for my web applications. So I have reinstalled mysql and want to use my own local apache server instead of xampp. I would like to know where can I find some .sql or something that is stored in xampp that could be otherwise accessible via the phpmyadmin? Is it even possible? I have scrolled through the xampp folder and tried to figure out where it can be, but didnt find anything though. Thanks for help. EDIT I am on a mac running mavericks.
get database from xampp (not via phpmyadmin)
So you want to back up images? NSUserDefaults isn't really meant for that. Why not try adding iCloud support instead?
I used NSDefault in my app to backup some images and got rejected because it uses 6mb of storage. Can anyone help me add the donotbackup attribute into it? I would like to keep userdefault directory if possible so old users don't lose their images. Any help would be really appreciated :) My current code is: to save: - (IBAction)d1p:(id)sender { lbl1.text=txt1.text; [txt1 setAlpha:0]; NSString *savestringln1 = lbl1.text; NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults]; [defaults setObject:savestringln1 forKey:@"savedstringlbl1"];[defaults synchronize]; [self.view endEditing:YES]; } To load: NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults]; NSString *loadstringlbl1 = [defaults objectForKey:@"savedstringlbl1"];
Is it possible to not back up some of the files saved in nsdefault?
I would have cron execute this .sh file. Make sure to validate your public ssh key on the external server... #!/bin/bash ## List of databases to backup declare -a arr=(database1 database2) ##use something like this to get full list of databases to loop through #databases=`mysql -B -r -u ${user} --skip-column-names -p${pass} --execute='show databases'` ## mysql user should have SELECT and LOCK privileges on tables to backup user='you' pass='pass' ## scp remote variables url_ext='external url' folder_ext='/' use_ext='external user' pass_ext='external password' ## base dir where backups are saved shdir='/home/to/folder/' ## date and hostname day=$(date | awk '{ print $2"-"$3"-"$6}') hostname=$(hostname) ## now loop through the above array for i in ${arr[@]} do echo $i # current database mysqldump -u $user --password=$pass $i > "$i.sql" #dump db to file wait #if you want you can zip the files zip backup_db_${i}_${day}.zip "$i.sql" #or tar them #tar -zcvf backup_db_${i}_${day}.tar.gz /route/to/file wait ##2 using scp you can send the backups to external server ##read about private and public ssh keys for automation without password scp -v backup_db_${i}_${day}.zip user@ip:/home/to/backup/folder/ wait done #Erase old backups rm -rf $shdir*.sql
I need to back up my MySQL database to a folder under my website root automatically with cron. I searched around and ended up with the cron job below. 0 1 * * * /usr/bin/mysqldump --opt --all-databases -u USERNAME -pPASSWORD | gzip > /backup-folder/db_bckp`date +\%Y-\%m-\%d`.sql.gz However, nothing happens. What may be the reason?
Backup MySQL database to remote server
It can be done (both solutions), but you need to say whether solution (01) needs to be recursive or not. I suppose you know PHP has a time limit to run (the standard is 60 seconds), so the files you need to back up cannot take more than about 55 seconds to copy. You can try the next link; it will back up a site completely, database and files, and put everything into a zip file. It needs to be configured a little, but it can help: http://www.starkinfotech.com/php-script-to-take-a-backup-of-your-site-and-database/ (From the comments: there are about 150 files, each under 50 KB, and the asker wants them copied as-is on the FTP site each morning rather than zipped.)
I have lots of backup files on my FTP site. The file names look like: index.php.bk-2013-12-27 I want to move those files into a folder named /backup/ so the inside of my httpdocs folder looks like this: index.php backup/index.php.bk.2013-12-27 Either of the following methods would be fine for doing this: 01. any file whose name contains .bk should be moved automatically to the backup folder, or 02. create a text file named backup_move.text that contains the paths of the files that need to be copied and place it in the httpdocs folder; the PHP script then extracts those file paths from backup_move.text and moves the files to the backup folder. How can I do this with some PHP code? Any help will be very much appreciated.
Can back up specific files automatically from FTP to a specific folder using PHP?
Just try: $query = "SELECT * INTO OUTFILE '$backupFile' FROM $tableName WHERE Temp>35"; $result = mysql_query($query) or die(mysql_error()); (From the comments: this then failed with "Access denied for user '...'@'%'", which typically means the MySQL account lacks the FILE privilege needed for SELECT ... INTO OUTFILE; note also that the output file is written on the database server, not the web server.)
I was trying to make a backup of my MySQL db called "backup" using a PHP script below, but for some reason, it doesnt work. Any ideas what is wrong? I wanted to create a file called test.sql in the same folder that would contain the data (because the db is quite big I only selected values of Temp>35, but I could change that later). Right now when I run it, I get the echo, but no file created. <?php $dbhost = '...'; $dbuser = '...'; $dbpass = '...'; $dbname = 'backup'; $conn = mysql_connect($dbhost, $dbuser, $dbpass); mysql_select_db($dbname); $tableName = 'backup'; $backupFile = 'test.sql'; $query = "SELECT * WHERE Temp>35 INTO OUTFILE '$backupFile' FROM $tableName"; $result = mysql_query($query); echo "Backed up"; ?>
Backing up a MySQL database using PHP
Saving "snapshots" of your Core Data database to iCloud is not its intended purpose so you are looking to swim upstream. Having said that, to create a snapshot you could create a second persistent store, connect that second store to iCloud and then copy your current data into it. This is in lieu of connecting iCloud to your primary store. I would not recommend doing this. Another option would be to use iCloud document storage instead and store a copy of your SQLite file there instead of using iCloud Core Data syncing. This gives you more control over what and when you deal with this snapshot. However, it would be better to solve your duplicates problem instead and then use iCloud syncing as it was intended.
I am trying to create an iCloud backup of my core data database in my application. I would like to be able to save a 'snapshot' of the database to iCloud and then restore that snapshot to another device that installs the application. On a side note: I've gotten iCloud syncing to work, but was having problems dealing with duplicate entries, something I can't have in my application. So to work around this I was hoping to just backup the database with the option of restoring it later. Thanks!
Backup/Restore Core Data with iCloud iOS7
You could check the answers posted here. Or, if you specify 10 days because that was the date of the LAST backup operation, you can use MySQL Backup's incremental backup operations. If you need to capture some of the DB to synchronize it with a different DB, this SQLyog information might be helpful.
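If every table has a date column you can filter on, one simple approach is mysqldump's --where option. A minimal sketch, assuming a DATETIME column named created_at (adjust the column, tables and credentials to your schema):
# dump only the last ten days of rows from the listed tables of database xyz
mysqldump -u root -p --where="created_at >= NOW() - INTERVAL 10 DAY" xyz table1 table2 > xyz_last10days.sql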
I want to take a backup of my database xyz, but the backed-up tables should contain only the records from the last ten days. Is that possible? If yes, how can I achieve it?
How to take backup of all table's last ten days record?
You could create 3 maintenance plans: one to do the full backup hourly starting at e.g. 08:00, another to do the 1st differential backup repeating hourly starting at e.g. 08:20, and lastly another to do the 2nd differential hourly starting at e.g. 08:40. As these then repeat hourly, you'll get the 3 backups per hour. To make it easier, put them into the same backup folder and include a description in each backup name (e.g. Full_, FirstDiff_, SecondDiff_).
I've set 2 schedules on a maintenance plan (SQL Server) for backups. One of the schedules is set to run each 1 hour for a full database backup, and the other is set to run each 20 minutes for a differential backup. The problem is that they will execute at the same time when the first schedule runs. How can I set the differential backup to avoid running at time X:00 ? Current setup: 00:00 - Full backup + Diff backup (Problem) 00:20 - Diff backup 00:40 - Diff backup 01:00 - Full backup + Diff backup (Problem) I want it to execute like this: 00:00 - Full backup only 00:20 - Diff backup 00:40 - Diff backup 01:00 - Full backup only
Conditional maintenance plan for backups
Try this; it uses the -Wait switch so that your PowerShell script pauses until backup.bat is complete, and the Hidden window style so that it runs invisibly. Start-Process -FilePath 'C:\Backup.bat' -Wait -WindowStyle Hidden
I am using the application LabTech to write scripts for Leo backup. I have a batch file on my local C drive (backup.bat). I need that file to run when a backup fails. How would I do this in powershell with commands? I looked on Google and could not find anything concrete. Any help is appreciated. Please let me know if you need any more information.
Running a batch file in powershell
Yes, files in your app's sandbox are kept during an update. Though it is possible that files in the Library/Caches folder could be purged. Keep in mind that the value for XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX will change during an update. If you are keeping a reference to a file, be sure you only store a relative reference and not an absolute reference. For example, if you know you are storing files in the Documents folder, just keep a reference relative to the Documents folder. At runtime you can get the path to the Documents folder and then append your relative path.
As the question states, I want to know if the plist files in this directory : var/mobile/Applications/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/Documents/sample.plist is saved when an update is released. From this documentation It seems that it does get moved when an update is installed. But I still wanted to confirm the same from you guys! (So that I can avoid NSUserDefaults for maintaining 2 int values)
Are plists in var/mobile/Application/xxxxxxxxxxxxxxxxxxxxxxxxxxxx/Documents/xxxxx.plist saved during app updates?
Well, some more research pointed me toward a web service, so I ended up with the following setup: in my cron job on Server1, after pushing the backed-up files to the FTP server, I call (using curl) a PHP script on Server2; this PHP script then calls a batch file that does the copy/duplication job entirely on Server2.
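For illustration, the trigger at the end of the Server1 cron job could be as simple as the following; the URL, script name and key are made-up placeholders, and the PHP script on Server2 would shell out to the batch file:
# last step of the nightly backup job on Server1, after the FTP upload succeeds
curl -fsS "http://server2.example.com/backup/duplicate.php?key=SECRET" \
  || echo "$(date): remote duplication trigger failed" >> /var/log/backup.log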
I have a server 1 (running Ubuntu), on this server, a website. I have a server 2 (running Win Server 2012), on that server some application are running and I have space for my backups. Server 1 has limited space, so I keep backups of both my MySQL database and Webserver file for 1 week only (daily backups). When doing my daily backup, the script does the following : - backup MySQL to a file (Mysqldump) - Compress the Webserver root folder to a tar.gz - push both generated file to a FTP server (total is 6GB) - clean for files older than retention period Now I want to add a step to have a stronger backup policy on server2 (keep daily for 10 days, have a weekly for 5 weeks, a monthly for a year and keep the yearly forever). Each backup interval is in a folder (i.e. a Daily folder, a weekly folder, a monthly folder and a Yearly folder) I want that every sunday my backup file is copied both in Daily and Weekly folder (each of them being cleaned per policy explained previously and with another schedule task), I do not want to FTP it twice. I want basically from server1 to copy the file from \Server2\Daily to \Server2\Weekly. Is RCP the right thing to use? I could not find how to use it with password.
Remote duplicate on FTP server
Remove the double quotes; you only need them if multiple volumes are to be selected.
I downloaded this EC2 automate backup tool and upload it on ec2-user folder. I used SSH to execute it /home/ec2-user/ec2-automate-backup/ec2-automate-backup.sh -v "vol-XXXXXXXX" and returned: The selection method "volumeid" (which is ec2-automate-backup.sh's default selection_method of operation or requested by using the -s volumeid parameter) requires a volumeid (-v volumeid) for operation. Correct usage is as follows: "-v vol-6d6a0527","-s volumeid -v vol-6d6a0527" or "-v "vol-6d6a0527 vol-636a0112"" if multiple volumes are to be selected.
EC2 automate backup requires volumeid parameter and already provided
I don't know what the linked server will do for you. Are you connected to both servers via a VPN? Are they in different networks (domains)? If you are using a linked server, it means you will probably create a trigger or stored procedure, and you will have to configure MSDTC (for the trigger). Alternatively you can use: replication; log shipping; or a custom replication process. I have twice had to configure replication to move data from one server to another and then manage that data with triggers — it was easier to work on the data locally.
I have a host on a server that contains a SQL Server database. I have another server in another country, and I want to have a backup of the database every 5 minutes, or to insert each new row into the other database after every transaction. After some research I found out I can use linked servers for this goal. Will this approach work for what I am trying to do?
Making backup from database to another server
You may get access using DDMS. (From the comments: the device is not rooted, so the data folder isn't visible that way. Normally the database is not touched when an app is updated — unless you force it with something like an onUpgrade method — so adding a new capability to the app to export the required data would be the usual route; in this case, however, the asker no longer has the original signing key, so Android will not install an updated build over the existing app.)
I recently wrote and installed an Android app on my device. The app wrote data to the local SQL database and uploaded this data to my webserver. But due to network problems, there are a few records that were skipped, so they aren't on my webserver. Now i want to get those rows (or my full SQL database) from my android device. But my question is how? If I write a new version of my app and reinstall it, then all my data will be lost. Isn't there a way to access my SQL database without losing my data? My device is not rooted, so those backup app's won't work...
Get data from installed android app
I am not familiar with LEO, but a quick search shows they have a command line interface. So use PowerShell to handle all the logic of when and what backup to restore and then call the LEO command line utility from PowerShell. You already have a script to check for a successful backup, so add to the logic that if it is not successful to call LEO via command line. I am not sure where LabTech falls into this. But your PS script can be set to run as a scheduled task at the appropriate time. Hope that helps.
I am trying to think of a good way to write a PowerShell script to run a backup if it had failed. I am not primarily a PowerShell programmer, but networking has been complaining about the number of emails they are getting from LabTech when a backup fails. I have the script that checks for a successful backup I am wanting to automatically run the backup again if the script above fails The LEO backup is what backups our databases We use LabTech to create our scripts Any ideas or help would be great. I have not seen anywhere on Google where someone has tried to do this, so I don't even know if it is possible. Again, I am wanting to automatically run a LEO backup using PowerShell.
Using PowerShell to automatically run Library Expansion Option backup
I second the suggestion by petrus4: lftp is way better suited to scripting than ftp. The ...usage open host-name... error and what follows is because you use the variable $ftp_site, which is empty — the variable you actually set is $site. And if you do it with the open command, you must remove the ftp:// prefix. The warning about line 21 is because you use a here document with <<EOF, which means "use everything that follows as input until you find a line which reads EOF", but you don't have a line which reads EOF. By the way, sshfs works very nicely with Hetzner backup space: with sshfs you can mount the backup space as if it were a partition. VERY easy to use.
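A small lftp sketch of the upload step, reusing the variables and paths from the script in the question (they may need adjusting); lftp takes the username and password from -u and reads its commands from the here document:
lftp -u "$username","$passwd" "$site" <<EOF
put /home/hetznerftp/openbravo-erp.tar.gz
put /home/manideep/hetzner/rajedb.backup
bye
EOF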
I am trying to backup of my application directory, database backup and then sending it to my ftp server at hetzner by using the following script and I get few errors My server details: ubuntu12-04 (in hetzner) database: postgresql8.4 my ftp server: hetzner Trying to take backup at ubuntu12.04 server and copying in ftp my server got a sample script at following link bakupscript.sh site=ftp://u***.your-backup.de username=u*** passwd=******************* backupdir=/opt/openbravo-erp filenameob="openbravo-erp.tar.gz" echo "Creating a ob backup file $filenameob of $backupdir." # Make a tar gzipped backup file tar -cvzf /home/hetznerftp/"$filenameob" "$backupdir" echo "creating a db backup file $filenamedb of ob database." export PGPASSWORD="*my db password*" backup_dir="/home/manideep/hetzner/" #String to append to the name of the backup files pg_dump -h localhost -U tad openbravo -Fc $i > $backup_dir$i\rajedb.backup #login into ftp server ftp -in <<EOF open $ftp_site user $username $passwd bin put /home/manideep/hetzner$filenameob put /home/manideep/hetznerftp/pgdump.backup close bye EOF When I try executing that script through command ./backupscript.sh I get following error Creating a backup file openbravo-erp.tar.gz of /opt/openbravo-erp. creating a db backup file of ob database. (to) usage: open host-name [port] Not connected. Not connected. Not connected. Not connected. How do I send those files through script? And will this replace the existing files while I use command put in ftp in ftp server if not, how do I do it?
ubuntu FTP script - hetzner
You can make use of the rotatelogs command to rotate the Apache logs. Try putting the following in a crontab (crontab -e): /usr/local/apache/bin/rotatelogs /path_to_apachelogs.%Y.%m.%d 86400 The path /usr/local/apache/bin/rotatelogs is for a cPanel server; you need to give the full path for it to work. You can find the path with which rotatelogs, and if that shows no output, try locating it with the locate command. See the following link for further details.
This is my first question. I don't know how to configure error.log to behave in the following two ways: 1) the log generated on the current day is written to one log file with a fixed name, e.g. error.log, which contains only the current day's entries; 2) the previous day's log is backed up to its own log file — e.g. if yesterday was 11/22/2013, yesterday's error log is named 11_22_2013.error.log.
apache error log backup auto
Although it should go without saying that backups are something you make before your computer dies, mistakes happen. What you should do is image your drive, copy it to something else, in preparation for your reinstall. I'd recommend copying everything as authentically as you can, even if that takes a while. That feeling you get when you realize you forgot to salvage something important that is now gone forever is not good. MySQL generally stores data in the data directory. It may take some work to get your MySQL back into the same configuration as before, so be sure to grab the appropriate my.cnf file. It can be really difficult to extract data from a bunch of MySQL data files without the MySQL process actually running. This is why tools like mysqldump exist. If you can get it running, even limping along, that'll be good enough to snapshot it.
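As a concrete sketch of "copy the data directory and the config" before wiping the machine; the XAMPP install path and the external drive mount point are placeholders and differ between Windows, Linux and macOS installs:
# copy the whole MySQL data directory plus the server config to the external drive
cp -a /path/to/xampp/mysql/data /Volumes/external/mysql-data-backup/
cp /path/to/xampp/etc/my.cnf /Volumes/external/mysql-data-backup/   # my.cnf/my.ini location varies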
My computer went almost totally down and I now need to get the data out of it before I reinstall. Part of what I need to back up is the data on the MySQL server. However, I can't run it, so I just need to know which files I should copy to an external drive. I installed MySQL with the XAMPP bundle. In the XAMPP main folder there is a mysql folder containing these subfolders: backup, bin, data, include, scripts, share, sql-bench. Which of them contains the actual database data? I'm quite short on space, so I need to back up only what is strictly necessary.
Backup dead MySQL server
That about covers it. The only thing you need to be careful of is if the domain changes, or you plan on re-installing everything to a different directory or subdomain. If the URL changes in any way, that change needs to be reflected under the wp_options table under the site_url option after importing your database. Updating this value to the new URL will allow you to get back into wp-admin, and all bloginfo calls will accurately reflect the new information.
We have a WordPress site installed via cPanel on a HostGator server. We want to remove it from our account because it has not been updated for a long time and we do not need it at the moment, but we want to keep a backup in case we need it in the future. Could you please tell me if it is enough to: 1) export the DB from phpMyAdmin, and 2) compress the files in cPanel and download them to my computer. Are there any more steps needed to back up our WordPress site? And when we need to restore it, the steps should be: 1) upload the files to cPanel, and 2) create a DB in phpMyAdmin and import the dump. Are there any more steps needed to restore it? Thank you.
wordpress backup and restore
Almost ... if I understood your initial request correctly, the invocation should be the other way round (the -f forces the link, saving you the need to delete the old link first):

ln -f /disk/backup/$dirname /disk/backup/test

Follow-up from the asker: wouldn't it be better to use a symlink for this rather than a hard link? The plan would then be a bash script that 1) checks whether the link exists and removes it, 2) gets the path to the newest directory using the command from the question, and 3) creates the link for that directory, so the link can be used from another server to pull the backup with rsync. Reply: hard links shouldn't require any special handling from rsync; symlinks might need extra command-line switches, depending on rsync's default behaviour with symlinks. A sketch of such a script follows.
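Only as an illustration (the directory layout and link name are assumptions, and a symlink is used because ln cannot hard-link directories on most filesystems), the daily cron script could look like this:

#!/bin/bash
# Point /disk/backup/latest at the most recently modified subdirectory.
backup_root="/disk/backup"

# Newest directory first; the trailing slash in the glob restricts the listing to directories.
newest=$(ls -1dt "$backup_root"/*/ | head -n 1)

# -s symbolic, -f replace an existing link, -n do not descend into an existing
# link that points to a directory (so the link itself is replaced).
ln -sfn "$newest" "$backup_root/latest"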
I need to create script which would be ran by cron each day. Purpose of that script would be to create alias for one directory that each day has different name. The directory name changes it's name each day like 2013-11-11, 2013-12-11 etc. Actually, new directory get's created. I figured out that I can list that recent directory using dirname=$(ls -lt --time=ctime | sed -n 3p | sed 's/^.* //' ) This gets the name of most recent created directory. Now, the problem is how to make alias for that directory to something like "backup". I have rsync creating backup from another server so I need to have something that I can "call" for from that server. I can't create cron for directory itself since it changes names each day. How can I create alias, each day, for most recently created directory ?
Automatically create alias in linux
You copied the server-side files, so that structure (hooks, db, conf, etc.) is correct. To obtain your project files, you should use the Export command from TortoiseSVN, or simply copy your working copy from the client. If you don't have a working copy on a client, I suggest you set up another SVN server and restore this backup onto it.
I used FTP to copy all the files and folders that were on an SVN server; I pulled them off via FTP and just copied them onto my external hard drive, thinking I could access the files as normal, but I cannot. All I have is hooks, db, conf, etc., and a rev folder which is large but whose files are all just numbered. How do I get to the actual data files of my projects? I am trying to view them on my Windows machine from the external hard drive. Any help will be appreciated. Thanks in advance, Mo
SVN restore from a backup
Thanks for the responses. I was installing the app using Xcode rather than iTunes, which was causing the issue. I created an ad hoc build, installed it through iTunes, and everything worked fine. Thanks
I have an iPad app which stores some images inside Library directory of app. If I backup my iPad with iTunes and restore the iPad from backup, I can see the images in the previously saved location but My app does not show those images If I try to read the images from same path. Any idea why is this happening? I can not store the images in Documents directory as iCloud will back it up and I don't want this images to be backed up by iCloud. Thanks
Itunes backup and restore for iPad
NAME = StackOverflow.exe
VERSION = 0.1.0
BASE_NAME = $(basename $(NAME))
.....
PKG = $(wildcard $(BIN)/*.dll) $(BIN)/$(NAME)
PKG_SRC = $(SRC) README.org makefile

pkgdir:
	@mkdir -p pkg

pkg: $(PKG) | pkgdir
	tar -jcf pkg/$(BASE_NAME)-$(VERSION).tar.bz2 $^
	zip pkg/$(BASE_NAME)-$(VERSION).zip $^

pkgsrc: $(PKG_SRC) | pkgdir
	tar -jcf pkg/$(BASE_NAME)-$(VERSION)-src.tar.bz2 $^
	zip pkg/$(BASE_NAME)-$(VERSION)-src.zip $^

So the binary backup file gets the name StackOverflow-0.1.0.zip and the source backup file StackOverflow-0.1.0-src.zip.
I want to update few of my Makefiles. I want to add backup recipe. It will just create two zips, one for sources and second for binaries (+headers). For that reson, I want to create file containing list of source files I want to add to archive. I am wondering about filename for that list. Is there any standard for naming such a list? What is your convention?
List of source files - naming convention
Solved - see comment from Deepak Srinivasan.
I'm having the following issue: I backed up my Drupal 7 project on my localhost via

$ drush archive-dump --destination=/var/backup/example.com.tar.gz

using drush 6.1.0 on Ubuntu 12.04 (apache2 / php 5.3.10 / mysql 5.5.34). I then tried to restore it on my Mac (OS X Lion) via

$ sudo drush archive-restore /var/backup/example.com.tar.gz \
    --destination=/var/www/example.com \
    --db-su=root --db-su-pw=password --overwrite

Everything seemed to work (the website is available, and the database and the correct users are created), except that I cannot log in to the transferred Drupal project any more. The path /drupal/user is simply not there. What went wrong? Thanks!
drush: no path to user login after archive-restore
I am not sure that this is the cause, but because each new database is created from the model database, check whether any size restrictions are set on the model database. Also check that you have the necessary space available on that drive (I assume you already checked this). If the limit is 4096 MB, your filesystem may also be FAT32, which supports a maximum file size of 4 GB. Follow-up from the asker: the disk is a 100 GB virtual HD, so disk space is not the problem; the remaining question was which SQL Server edition does not have that database-size limitation.
I've got a 20 GB backup of a SQL Server database and I need to import it into my SQL Server 2012. But when the import process starts, the following error appears:

Error en CREATE DATABASE O ALTER DATABASE. El tamaño de base de datos acumulado superaría el límite de licencia de 4096MB por base de datos.
(CREATE DATABASE or ALTER DATABASE failed: the cumulative database size would exceed the licensed limit of 4096 MB per database.)

I understand it means there is a limit on the size of the database I can create, but how can I increase that limit in order to restore my backup? I'm using Windows Server 2008 (32-bit), and the backup sits in a folder on the Desktop, so I guess the filesystem can handle it.
SQL Server 2012 does not allow to restore 20GB database
From http://linuxcommand.org/man_pages/mysqldump1.html: "The password to use when connecting to the server. If you use the short option form (-p), you cannot have a space between the option and the password. If you omit the password value following the --password or -p option on the command line, you are prompted for one." The system may simply be waiting for you to enter a password. If you want to avoid that, put the password directly in the command. Assuming your password is "FLOWER":

mysqldump -v -u root -pFLOWER -h 127.0.0.1 -P 3308 -x --add-drop-table --add-locks --create-options -K -e -q -A > database.sql

Follow-up from the asker: the message appears after the password has already been entered; both -pPASSWORD and -p with an interactive prompt give the same result, and the dump just sits at "Connecting to 127.0.0.1...".
I'm trying to make a dump with the following command:

mysqldump -v -u root -p -h 127.0.0.1 -P 3308 -x --add-drop-table --add-locks --create-options -K -e -q -A > database.sql

The result (after entering the password) is the message "Connecting to 127.0.0.1...". After that nothing happens (no errors, it just waits), and database.sql is an empty file. Why do I see no activity? Is it a bug?
Mysqldump connecting issue
If they don't need to be maintained across a reboot or between invocations of the program, why not use /tmp? This directory contains mostly files that are required only temporarily; many programs use it to create lock files and for temporary storage of data. Follow-up from the asker: the files do need to survive a reboot in case of a daemon or system crash, so /tmp is out.
I have a daemon that backs up some system files before it does anything else and restores them afterwards. What is the right place to put these backups? I'm thinking somewhere in /var or /var/opt, since I don't want to pollute /etc with a bunch of backup files that aren't really doing anything. If it matters, I'm specifically looking at Ubuntu 10.04+.
Where should a well-behaved daemon store auxiliary files?
To reset the auto-increment counter and empty the table in one step, use:

TRUNCATE TABLE `table_name`;

Note that TRUNCATE removes every row, so it is only appropriate once the rows you want to keep have been copied elsewhere; a sketch of the archive-then-trim flow the question describes follows.
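Purely as an illustration (the database, table, column and archive-table names below are invented, and the cut-off dates would come from your own schedule), the monthly archive-and-trim could be driven from the shell like this; DELETE with a date range is used instead of TRUNCATE because only last month's rows should be removed:

#!/bin/bash
# Archive last month's rows into a dated table, then remove them from the live table.
mysql -u backup_user -p'secret' sensor_db <<'SQL'
CREATE TABLE IF NOT EXISTS readings_2013_09 LIKE readings;
INSERT INTO readings_2013_09
  SELECT * FROM readings
  WHERE created_at >= '2013-09-01' AND created_at < '2013-10-01';
DELETE FROM readings
  WHERE created_at >= '2013-09-01' AND created_at < '2013-10-01';
SQL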
I have a MySQL table with 50 million rows of data. I want to take a backup of last month's data and insert it into a new table; after a successful backup, that portion of the data needs to be removed from the original table. Every second I receive a packet from a device and insert it into this table, so the load has to be taken into account when doing the backup. What is an efficient way of doing this?
Backup and Truncate-Mysql
You can read a .dat file using C#, but it depends on how the data inside the .dat file is structured. Take a look at these links: How-read-data-from-DAT-file-using-C and how-can-i-read-data-from-dat-files.
I have .DAT from SharePoint, to recover some of the data I need to read the .DAT file using C#. Some of the options are StreamReader objInput = new StreamReader(filename, System.Text.Encoding.Default); string contents = objInput.ReadToEnd().Trim(); string[] split = System.Text.RegularExpressions.Regex.Split(contents, "\\s+", RegexOptions.None); foreach (string s in split) { Console.WriteLine(s); } or //ObjectToSerialize objectToSerialize; //Stream stream = File.Open(filename, FileMode.Open); //BinaryFormatter bFormatter = new BinaryFormatter(); //objectToSerialize = (ObjectToSerialize)bFormatter.Deserialize(stream); //stream.Close(); .../ The problem is the DAT file may contain XMl files, Doc files, or PPT or others. I just want list all the data and files inside the .DAT file. Is there is any way I can do this is C#?
Read inside .DAT file using C#
I am facing the same issue. My plan is to automate fetching the database backup from an SFTP location and then restoring it at the destination. There are probably a few intermediate steps needed to make that reliable, like dropping the database if it exists, but I think it will work. Like you, I don't need database synchronisation; my purpose is data warehousing, and the external database comes from a vendor who is willing to share dumps with me weekly. Hope that helps; your question is quite old now. :) A rough sketch of such a fetch-and-restore script is shown below.
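This is only an outline of the idea, not a tested script: the host, paths, credentials and database names are all placeholders, key-based SSH authentication is assumed, and the remote dump is assumed to be a gzipped mysqldump file.

#!/bin/bash
# Pull the latest dump from the vendor and load it into a local copy database.
set -e
remote="vendor@sftp-host:/exports/weekly_dump.sql.gz"   # placeholder location
local_dump="/var/backups/vendor/weekly_dump.sql.gz"

# scp uses the same SSH transport that SFTP does.
scp "$remote" "$local_dump"

# Recreate the local copy database, then load the dump into it.
mysql -u root -p'secret' -e "DROP DATABASE IF EXISTS vendor_copy; CREATE DATABASE vendor_copy;"
gunzip -c "$local_dump" | mysql -u root -p'secret' vendor_copy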
I have several websites on several hosting packages utilizing several MySQL DBs. What I want is to get daily instances of ALL these databases and restore them on a local server. So I will have all my databases locally updated from the online ones. Which is the best way to achieve this. Thank you.
Automated MySQL DB backup and restore on local server
Export/Import are actually MapReduce jobs and hence behave like any other MapReduce job: if the NameNode crashes, the export/import will fail. To be precise, your mappers/reducers will get killed, and you will probably see a java.net.ConnectException.
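For reference, these utilities are normally invoked from the shell roughly as shown below (the table name and HDFS paths are placeholders); because they are ordinary MapReduce jobs, they need a healthy NameNode and JobTracker to run at all:

# Export one table to a directory in HDFS
hbase org.apache.hadoop.hbase.mapreduce.Export my_table /backups/my_table

# Import it back into a table with the same column families (the table must already exist)
hbase org.apache.hadoop.hbase.mapreduce.Import my_table /backups/my_table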
I am working on backing up a ~70TB HBASE datastore. We have decided to go with single table backup to HDFS (for now). I have come across the Java API for export/import here: http://hbase.apache.org/book/ops_mgt.html#export. There is not too much information on Apache's website, and I was wondering if people had any more insight into how the export/import worked? The main question I want answered/confirmed is if the import will work even after a full namenode crash? Thanks in advance, -Aaron
HBASE backup - how does the java api for export/import work?
The problem was that I typed the wrong directory path. I tried to perform the backup not directly on the server but from my local workstation, and I entered my local machine's directory path (and a network path). The correct directory path, however, was a path local to the SERVER machine itself.
Error: "The storage control block address is invalid - unable to determine disk freespace". SQL Anywhere 9, Sybase Central 4.3, Windows XP. How do I resolve this problem? The button to choose a directory is not enabled. I also tried typing local and network directory paths; both give the same result (the error above).
Unable to perform database backup using sybase central
Since you are concerned with images, you should look into using the Flickr API. Flickr will let you store images on their servers, and the images will persist even if your app gets deleted. Here are some links to check out: http://www.flickr.com/help/mobile/ and http://www.flickr.com/services/api/
In my app's Documents directory I have two things: an SQLite database, and some images saved directly into the Documents folder by the user (everything is local; there is no remote database). I insert and update the image information through SQLite. The project requirement is now that this data must survive the app being deleted from the device and reinstalled, and if the user has two devices, she should be able to use the data from the first device on the second one. What should I do? The app has no login functionality to identify user-specific data, and I don't have any server to store data on. How should I handle backing up the data in this situation? Any suggestion or related tutorial link would be appreciated.
How to backup the iPhone app data, permanently
You can use a command-line script that backs up your databases to your S3 account quite easily, and run it as often as you like. I had exactly the same problem a while back and wrote up a tutorial on it; it should be perfect for what you want to do.
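Independently of that tutorial, the general shape of such a script is sketched below; the bucket name, credentials and paths are placeholders, and an S3 client such as the aws CLI (or s3cmd) is assumed to be installed and configured on the host. Scheduled from cron (for example 0 3 * * * /usr/local/bin/db_to_s3.sh), this gives a nightly off-site copy.

#!/bin/bash
# Dump the database, compress it, and copy it to S3 with a date stamp.
set -e
stamp=$(date +%F)
dumpfile="/tmp/mydb-$stamp.sql.gz"

mysqldump -u backup_user -p'secret' mydb | gzip > "$dumpfile"
aws s3 cp "$dumpfile" "s3://my-backup-bucket/mysql/mydb-$stamp.sql.gz"
rm -f "$dumpfile"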
This may be a stupid question, but after hours of googling I can't find a suitable answer. We have a business-critical application running on CloudBees. The source code is backed up properly and we want the same for our DB. The CloudBees documentation says: "CloudBees MySQL databases are backed by EBS volumes on Amazon EC2 which provides a first layer of storage redundancy. EBS volumes are backed up to S3 every 24 hours for disaster recovery and are not generally available for customer use on multi-tenant MySQL clusters. Customers using Dedicated MySQL instances can request rollbacks to previous backup snapshots by filing a support ticket." So basically we are protected out of the box in case of emergencies, but not if an employee accidentally deletes something they should not. My question is: how can we automatically back up a CloudBees MySQL DB every night? We have Amazon S3 storage where it could be put. Any ideas?
Scheduled Cloudbees MySql Backup
Sounds like your backup software isn't doing an Exchange-aware backup. You may need to install or configure a specific agent/component of the software to do an Exchange-specific backup. Only then would Exchange itself know that it has been backed up; if you are just doing a basic disk-based backup, you have no guarantee that the Exchange databases can be restored to an application-consistent state. Look into Exchange-specific features of your backup solution; if they aren't immediately obvious, you may need to install an additional component on the Exchange server so that the software knows the server is running Exchange. Follow-up from the asker: while the backup job is running, Get-MailboxDatabase -Status | Fl *backup* shows BackupInProgress : True, so Exchange does recognise the backup process; why then are the date fields not written once it ends?
I have 3 mail servers: the 1st for the CAS role, the 2nd for the Hub Transport role, and the 3rd for DR/DAG. I am backing up the 1st and 2nd servers with CA ARCserve 16.5 on the following schedule: daily incremental, weekly full, weekly verify. While a backup job is running I always check that the BackupInProgress field is True, and it is. The issue is that when I run:

Get-Mailboxdatabase -status | Fl *backup*

the LastFullBackup, LastIncrementalBackup and LastDifferentialBackup fields are empty/blank:

BackupInProgress : False
SnapshotLastFullBackup :
SnapshotLastIncrementalBackup :
SnapshotLastDifferentialBackup :
SnapshotLastCopyBackup : True
LastFullBackup :
LastIncrementalBackup :
LastDifferentialBackup :
LastCopyBackup : 19/08/2013 21:01:19
RetainDeletedItemsUntilBackup : False

How come those fields are not being updated once the backup job is done? Thanks for the help
Issue with Exchange 2010 - Get-MailboxDatabase -status
This is untested - it should copy the changed files by adding a date and time stamp of when the bat was launched, and also copy files that don't yet exist on the remote side. Wmic, which is used to get a robust date stamp, requires XP Pro and above.

@echo off
cd /d "local folder"
set "remote=\\server\share"
for /f "delims=" %%a in ('wmic OS Get localdatetime ^| find "."') do set "dt=%%a"
set "YYYY=%dt:~0,4%"
set "MM=%dt:~4,2%"
set "DD=%dt:~6,2%"
set "HH=%dt:~8,2%"
set "Min=%dt:~10,2%"
set "Sec=%dt:~12,2%"
set fullstamp=%YYYY%-%MM%-%DD%_%HH%-%Min%-%Sec%
for %%a in (*.*) do (
    if exist "%remote%\%%a" (
        for %%b in ("%remote%\%%a") do if not "%%~ta"=="%%~tb" copy "%%a" "%remote%\%%~na-%fullstamp%%%~xa"
    ) else (
        copy "%%a" "%remote%"
    )
)
I'm trying to create a bat file (xp/7) to copy all files in a local folder to a network drive folder but only if the files have changed. If they have changed I'd like to increment the file name by one or put a date (this seems like people have said it's easier). For example, I have a folder called database which contains 4 or 5 files whos content or names may change occasionally I would like to automatically make copies of them on a network drive, either daily or every hour if they are changing. Not all the files will change every day but if they do change I would like to increment their file name to keep the previous versions. How would I go about doing this, is there a better way to go about this? Thank you
bat file to backup folder files only if changed increment name
Solved it, using : sudo mysqldump --databases DBNAME1 DBNAME2 -u SQLUSERNAME -p > SQLBACKUP.sql
I am backing up some databases, but NOT ALL of them, from my old computer running Ubuntu 10.10 with MySQL installed. So I run:

sudo mysqldump "DBNAME" -u root -p > DBBACKUPNAME.SQL

After entering the password, the backup is successfully stored in the directory I am currently in. Fine. But now I want to back up more than one DB. I tried:

sudo mysqldump "DBNAME1,DBNAME2" -u root -p > DBBACKUPNAME.SQL

But that's not working. How do I do that?
How to backup more than 1 database in once in MYSQL using command line on ubuntu server 10.10?
I solved the problem with the following command:

find /home/* -name "*.*" -not -path "/home/*/www/cache/*" -newer /var/backups/data/flagfile -print0 > files.txt

Then the tar command:

tar czvf /home/backups/incremental/H14/backup.tar.gz --files-from files.txt --quoting-style=shell

The --quoting-style=shell option escapes special characters so that tar doesn't exit with an error.
I'm making a backup system. I have 2 servers: the main one sends data to the backup server, and the backup server then compresses the data. To save disk space, I compress only the files that have changed, thanks to a flag file that stores the last backup time. Sometimes it works fine, but sometimes my script exits with error 123. This error comes from xargs; the description is: "123 if any invocation of the command exited with status 1-125". The command I launch is:

find /home/* -name "*.*" -not -path "/home/*/www/cache/*" -not -path "/home/*/www/cc/cache/*" -not -path "/home/*/www/ot/cache/*" -not -path "/home/backups/*" -newer /var/backups/data/flagfile -print0 | xargs -0 -r tar -czvf /home/backups/incremental/H14/backup.tar.gz

When I run just the find part:

find /home/* -name "*.*" -not -path "/home/*/www/cache/*" -not -path "/home/*/www/cc/cache/*" -not -path "/home/*/www/ot/cache/*" -not -path "/home/backups/*" -newer /var/backups/data/flagfile -print0

it returns the list of files I expect. Thanks for any help!
Incremental backup tar error
Test with this:

<?php
// Open the serial device for writing (adjust the device path to your system).
$fp = fopen("/dev/ttyUSB", "w");

// Check the GET 'action' variable to see if something needs to be done.
if (isset($_GET['action'])) {
    // An action has been requested; send the corresponding byte to the device.
    if ($_GET['action'] == "true") {
        // Turn the LED on - this simple script just sends a 1 or a 0.
        fwrite($fp, chr(1));
    } else if ($_GET['action'] == "false") {
        // Turn the LED off.
        fwrite($fp, chr(0));
    }
    // We're done, so close the serial port again.
    fclose($fp);
}
?>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<title>Arduino LED control</title>
</head>
<body>
<h1>LED Control system - Arduino Interface</h1>
<p><a href="<?=$_SERVER['PHP_SELF'] . "?action=true" ?>">Turn LED On.</a></p>
<p><a href="<?=$_SERVER['PHP_SELF'] . "?action=false" ?>">Turn LED Off.</a></p>
</body>
</html>

Follow-up from the asker: the connection needs to be over a plain serial port, not USB.
I need to connect over a serial port to an APC backup UPS and read values from it. How can I connect to a serial port with PHP? I'm using Ubuntu 13.04, and for this test my PC is connected directly to the APC unit.
Socket connection to serial port
SOLVED: Based on a comment on another question, I ran SQL Profiler to determine what functions MSSMS was using, found it was calling master.dbo.xp_dirtree, and was able to duplicate this in my app.
I'm using a C# application to Backup and Restore DBs on a remote server using the microsoft.sqlserver.smo.dll. Testing with my local machine, I can browse backup files to select the backup to use. Can this be done through code for the remote SQL Server using the SQL credentials similar to the way MSSMS does it? My backups are saved with a certain naming convention (ie. "Ebuy_full_2013_8_7_H13_M40.bak") and I would like to be able to show these in the application so a decision about which backup file to restore can be made. Thanks, Rick
File access using microsoft.sqlserver.smo.dll?
Use && between commands: it will execute the next command only if the previous command execution is a success. The || does the inverse => echo "error" will be displayed if one of the both cp fails. #!/bin/sh cp ~/SURV/plugins/iConomy/accounts.mini ~/backups/ && cp ~/SURV/plugins/CoreProtect/database.db ~/backups/ && echo "Backups creados con éxito!" || echo "error" You can also add -v option to the cp command for more visibility. In order to make your script executable you have to do: chmod +x backup.sh
I'd like to set up a backup.sh file that executes these two commands when run: cp ~/SURV/plugins/iConomy/accounts.mini ~/backups/ cp ~/SURV/plugins/CoreProtect/database.db ~/backups/ I want it to just run these 2 commands and display the text "Backups creados con éxito!"
Run commands through Shell Scripts in linux
If I were in your shoes I would buy an external hard drive that is large enough to hold all your data, then write a Bash script that would: 1) mount the external hard drive, 2) run rsync to back up everything that has changed, 3) unmount the external hard drive, and 4) send a message (e-mail or whatever) saying the backup is complete. You'd plug in the external drive, run the script, then return the drive to a safe deposit box at a bank (or another similarly secure location). A sketch of such a script follows. Follow-up from the asker: two disks would be needed, since one always has to be out of the house, and the growing data size is exactly why single external drives keep becoming too small. Reply: if you insist on using multiple disks, the only thing that comes to mind is some kind of scheme involving LVM and rsync.
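A bare-bones version of that script, only to illustrate its shape: the device node, mount point, source directory and recipient address are all assumptions, and a working local mail setup is assumed for the notification step.

#!/bin/bash
# Mount the external drive, mirror the data onto it, unmount, then notify.
set -e
device="/dev/sdb1"           # assumed device node of the external drive
mountpoint="/mnt/backup"

mount "$device" "$mountpoint"
# -a preserves permissions, ownership and timestamps; --delete mirrors removals.
rsync -a --delete /home/ "$mountpoint/home/"
umount "$mountpoint"

echo "Backup finished on $(date)" | mail -s "Backup complete" [email protected]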
I have for some time being thinking I can save some money on external hard drives by making this backup scheme: If I have 3TB data to backup, where less than 1TB changes from one backup to the next and I always want to have 1 copy out of the house, it should be enough to have 3 2TB external hard drives. The idea is that each time a disk is used for backup it is completely filled - a full backup is however never made as 3TB>>2TB. So the backup starts by taking disk1 filling it with 2TB of data. Then take disk2 filling it with 1TB of data and 1TB of redundant data as it also exist on disk1. Now disk1 and disk2 can be taken out of the house. When the next backup is made disk2 will already contain 2TB of data, where at least 2TB-1TB=1TB is still valid as only 1TB have changes. So by backing up 2TB of data (where some may also exist on disk2) to disk3 we have a complete backup on disk2+disk3. Now disk3 can be moved out of the house and disk1 can be moved back in, deleted and reused for backup. This can of course be made better so we can use different sizes of disks, have different number of disks, have higher requirement for number of copies out of the house etc. In theory it is quite easy to make by having stored checksums of which files is on all disks, so we can check for changes by checking the checksums. However in practice there is a lot of cases to handle: out of disk-space, hardlinks, softlinks, file permissions, file ownership, etc. I've tried to find existing backup programs that can do this but I have not found any. So my question is: How do I most easily do this? Writing it from scratch would probably take too much time. So I was wondering if I could put it on top of something existing. Any ideas?
How to most easily make never ending incremental offline backups
find . -type f -newermt 2013-08-01 ! -newermt 2013-08-02

This lists files modified on 1 August 2013: modified after midnight on the 1st but not after midnight on the 2nd. Note that the two dates must differ, otherwise nothing matches. See here for more info.
In my office I am taking daily and weekly backups. I am just curious whether I can view the list of files backed up on a particular day. Example: if I took a backup on 1st August 2013 (a Thursday), I want to view the list of data backed up on 1st August 2013.
How to view list of backup files on a particular date [closed]
It's an old question, but maybe you still need an answer. In your AppsBackupAgent implementation, remove the onRestore() and onBackup() methods. BackupAgentHelper subclasses rely on the inherited dispatcher to deliver restore/backup events to each registered helper; your empty overrides simply disable that work.
I have problem, else I wouldn't be here. I am making backup service, but so far, it doesn't work at all. Dunno what problem it is, but when testing (via emulator or via phone, explained way), the data won't be restored. Maybe someone can help? MyAppsBackupAgent public class AppsBackupAgent extends BackupAgentHelper { // The name of the SharedPreferences file static final String PREFS = "uidpref"; // A key to uniquely identify the set of backup data static final String PREFS_BACKUP_KEY = "uidpref"; // Allocate a helper and add it to the backup agent @Override public void onCreate() { SharedPreferencesBackupHelper helper = new SharedPreferencesBackupHelper(this, PREFS); addHelper(PREFS_BACKUP_KEY, helper); } @Override public void onBackup(ParcelFileDescriptor oldState, BackupDataOutput data, ParcelFileDescriptor newState) throws IOException { // TODO Auto-generated method stub } @Override public void onRestore(BackupDataInput data, int appVersionCode, ParcelFileDescriptor newState) throws IOException { // TODO Auto-generated method stub } Storing happens in mainactivity: SharedPreferences UIDpref = getSharedPreferences("uidpref", 0); Log.e("CODE", preferences.getBoolean("IDgenerated", false)+""); BackupManager mBackupManager = new BackupManager(getApplicationContext()); if(!preferences.getBoolean("IDgenerated", false)){ SharedPreferences.Editor UIDedit = UIDpref.edit(); String rnd = GenerateUID(30); UIDedit.putString("UID", rnd); UIDedit.putBoolean("IDgenerated", true); UIDedit.commit(); mBackupManager.dataChanged(); } And manifest: <application android:backupAgent="ee.elven.katja.AppsBackupAgent" android:icon="@drawable/ic_launcher" android:label="@string/app_name" android:theme="@style/AppTheme" > ...
Google backup service doesn't work
RMAN is a backup-and-recovery tool; you can't use it for that purpose on its own. In this context you could use it only as part of a "transportable tablespace" process. You could try a logical standby database for this instead, but that is a little bit of overkill.
If I have two DB's having same database structure and every schema has its separate tablespace then can I use RMAN to take tablespace level backups and apply them on other DB's tablespace? Example: say I have DB schema 'scott' which have been assigned tablespace 'scott_ts' (on both databases), I take backup of scott_ts tablespace and restore it on other DB and after that to refresh this schema/tablespace I apply daily incremental level backups on it? (Please note that I've done some research on other options like data pump, golden gate oracle streams etc. I just specifically want to know whether RMAN would help me in this case or not). Oracle Database 10G on Windows Server 2003.
Refreshing tablespace using RMAN incremental backup from one DB to Other
Yes, you can assume that no objects will be created (and items are never updated) in old directories within your content store, although items may be removed by the repository's cleanup jobs after being deleted from Alfresco's trash can. This is the section of org.alfresco.repo.content.filestore.FileContentStore which generates a new content URL; you can easily see that it always uses the current date and time.

/**
 * Creates a new content URL. This must be supported by all
 * stores that are compatible with Alfresco.
 *
 * @return Returns a new and unique content URL
 */
public static String createNewFileStoreUrl()
{
    Calendar calendar = new GregorianCalendar();
    int year = calendar.get(Calendar.YEAR);
    int month = calendar.get(Calendar.MONTH) + 1;  // 0-based
    int day = calendar.get(Calendar.DAY_OF_MONTH);
    int hour = calendar.get(Calendar.HOUR_OF_DAY);
    int minute = calendar.get(Calendar.MINUTE);
    // create the URL
    StringBuilder sb = new StringBuilder(20);
    sb.append(FileContentStore.STORE_PROTOCOL)
      .append(ContentStore.PROTOCOL_DELIMITER)
      .append(year).append('/')
      .append(month).append('/')
      .append(day).append('/')
      .append(hour).append('/')
      .append(minute).append('/')
      .append(GUID.generate()).append(".bin");
    String newContentUrl = sb.toString();
    // done
    return newContentUrl;
}
I am an Alfresco 3.3c user with an instance holding more than 4 million objects. I'm starting to have problems with backups, because backing up the alf_data/contentstore folder, even incrementally, takes too long (every run has to analyse all those files for changes). I've noticed that alf_data/contentstore is organised internally by year; can I assume that the older years (e.g. 2012) are no longer changed? If so, I can just add an exclusion and remove those directories from the backup process (after taking a full backup of them first, obviously). Thanks, kind regards.
Alfresco: unable to backup alf_data
Create a bare repository on the backup AWS machine, set it as a remote on the local repo, and do a "git push --all" every day/hour/whenever you want. As a commenter added, you can make it a cron job or even a git hook on the primary AWS server so you do not have to remember to run it by hand.
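A sketch of that setup; the host name, path and remote name are invented for illustration, and git push --mirror is used here instead of --all because it also carries tags and propagates branch deletions:

# One-time: create a bare repository on the backup machine.
ssh backup-aws 'git init --bare /srv/git-backup/myproject.git'

# One-time: register it as a remote in the repository to be backed up.
git remote add backup ssh://backup-aws/srv/git-backup/myproject.git

# Run periodically (cron, or a post-receive hook on the primary server):
git push --mirror backup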
I need your help in finding the correct approach to sync/back up all my remote repositories, located on one server, to another AWS server. Basically I need to take a backup on a regular basis and host it on the AWS server. This AWS server is going to be used purely for backing up the Git repositories, not for regular pushes/pulls. Let me know how we can achieve this, syncing all data, including remote branches and tags, from the remote server to the other server on a very frequent basis.
GIT remote server backup onto aws server
Not sure whether this will work, but you can try to run the program as administrator through code. There is a good write-up on how to run a program with admin rights: "How to force my C# Winforms program run as administrator on any computer?"
We create an SQL DB backup from the application using the code below:

ServerConnection srvConn = new ServerConnection(HostName);
srvConn.LoginSecure = true;
Server srvSql = new Server(srvConn);
string FileName = Environment.GetFolderPath(Environment.SpecialFolder.Desktop) + "\\Closing Back-Up";
bkpDatabase.Database = database;
BackupDeviceItem bkpDevice = new BackupDeviceItem(FileName, DeviceType.File);
bkpDatabase.Devices.Add(bkpDevice);
bkpDatabase.SqlBackup(srvSql);

It works fine for drives other than the one where the OS is installed; on the OS drive it fails with "Cannot open backup device" because of permission problems. Searching the internet and SO suggests manually changing the service's built-in account from "Network Service" to "Local System" using SQL Server Configuration Manager. But we distribute our software online and on CDs for self-installation, so it is impossible to instruct every client to do that, as they may not be technically versed in it. So the question is: can we change the built-in account via a script? Or is there any other solution that we can apply from a program or script? Thank you. Edit: We distribute SQL Server 2008 Express as a prerequisite, downloaded from the same folder as our application. The prerequisite doesn't ask for any account settings, instance name, etc., which you would find in a manual SQL Server installation.
SQL Server Backup fails in drive where OS is installed
This command worked for me; combine it with a cron job (an example entry is shown below):

rsync -avz username@ipaddress:/path/to/backup /path/to/save
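For instance, a crontab entry along these lines would pull the backup every Sunday at 02:00; the schedule, paths and log file are just examples, and key-based SSH authentication is assumed so the job is not blocked by a password prompt:

# edit with: crontab -e
0 2 * * 0 rsync -avz username@ipaddress:/path/to/backup /path/to/save >> /var/log/backup-rsync.log 2>&1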
I would like to automatically back up my server weekly and monthly. My server is running CentOS 5.5, and while searching the web I found a tool named rsync. I did my first copy manually with this command in a terminal:

sudo rsync -chavzP --stats USERNAME@IPADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP

I am then prompted for that user's password and Bob's your uncle. This copies the necessary files from my remote server to my local machine, but does somebody know how I can automate this, for example running the command automatically every Sunday? EDIT: I forgot to mention that I let DirectAdmin create the backups I need, and then copy those files from the remote server to a local server.
rsync remote to local automatic backup
You can do this:
1. Open the backup database.
2. Create a text table that is a copy of the main table, e.g. CREATE TEXT TABLE yourtable_copy AS (SELECT * FROM yourtable)
3. Set a file for the table: SET TABLE yourtable_copy SOURCE 'filepath'
4. Copy the data to the new table.
5. Set the source off with SET TABLE yourtable_copy SOURCE OFF
6. Shut down the backup database.
7. Open the main database.
8. Do the same text-table creation and source setting in the main database, but do not copy the data, as the backup data is already in the file and will be opened.
9. Do your updates, then turn the text source off in the main database.

Reference: http://www.hsqldb.org/doc/2.0/guide/texttables-chapt.html
I'm trying to create a function in my java application, where the user could select a prior made backup but only import table-rows that aren't in the current database instance. With a MySql database I could dump my tables, rename them inside the .sql to create temporary tables when imported again, and then simply cross query all rows not in the DB. Any idea how I could acomplish something similar in hsqldb from within my java application?
Import only specific rows from hsqldb backup
I'd recommend using proc_open to execute multiple commands asynchronously. If the backup process is itself a PHP script, it can be run using the php binary (e.g. php mybackupscript.php). Follow-up from the asker: how can this be used to cap the number of processes running at a time; is there some kind of callback that can trigger the next one to fire? The asker's own suggestion was to have every worker loop over the queue itself: query the database for the next backup that needs to run, stop if there is none, mark that record as started so the other processes don't pick it up, include the backup script, and repeat.
Hoping you can help! I am currently building and testing a PHP script that ports data from one web system to another (think data backup) that needs to run daily for an indefinite number of users. The script is fairly intensive, depending on the amount of data that needs to be pulled (the longest execution time I have seen thus far has been about 30 minutes). Given that, I obviously don't want to run them one after the other, as the whole job won't complete in a timely fashion. So ideally, I would like to have some way to schedule the job so that it can run up to ten (which I can expand as server capacity increases) backups simultaneously. When one script completes, it picks up the next at the top of the pile (a single pile rather than 10) an executes it, and so on. Now, it is possible (and at this stage probable) that some of the instances are going to fail with a fatal error and die. That is fine, as I am handling that with a custom error handler, but obviously I don't want the failure of one instance to have any bearing on the others. Having read some of the other questions on here, I have seen PHP forking and Supervisord discussed, but to be honest, casting my mind back 7 years to my process scheduling paper has defeated me! It would be really great to get some advise of how to implement something like this, if it is at all possible? Thanks :)
Running a series of daily PHP scripts in multiple processes
I have modified your code a bit; have a look and see if you can spot where you've gone wrong.

Dim Page As Worksheet
Dim lRow As Long, LCol As Long
Dim fullRange As Range
Dim PageName As Variant

For Each Page In Worksheets
    PageName = Split(Page.Name, " ")
    If UBound(PageName) > 0 Then
        ' Worksheets(Page.Name).Activate - this line is most likely not needed
        lRow = Page.Range("A" & Rows.Count).End(xlUp).Row
        LCol = Page.Cells(1, Columns.Count).End(xlToLeft).Column
        ' Qualify Cells with Page so the range refers to this sheet, not the active one.
        Set fullRange = Page.Range(Page.Cells(1, 1), Page.Cells(lRow, LCol))
        accappl.DoCmd.TransferSpreadsheet acImport, acSpreadsheetTypeExcel12Xml, Page.Name, strpathxls, True, fullRange
    End If
Next
For Each Page In Worksheets PageName = Split(Page.Name, " ") If UBound(PageName) > 0 Then Worksheets(Page.Name).Activate lRow = ActiveSheet.Cells(Rows.Count, "A").End(xlUp).Row LCol = ActiveSheet.Cells(1, Columns.Count).End(xlToLeft).Column Fullrange = Worksheets(Page.Name).Range(Worksheets(Page.Name).Cells(1, 1), _ Worksheets(Page.Name).Cells(lRow, LCol)) accappl.DoCmd.TransferSpreadsheet acImport, acSpreadsheetTypeExcel12Xml, _ Page.Name, strpathxls, True, Fullrange End If Next I have written this code in VBA Excel to backup data into access from excel. The code doesn't like the way that I wrote the range in my for each loops. I also tried the 2nd for each loop, but that just backed up the main page repeatedly( with the correct table names though). I think the 1st way is close, but I don't understand what is wrong with FullRange line which is type Range. EDIT: The error is object variable or with block variable not set on the FullRange line Update 6-18, It seems that the fullrange should be in the form string. I have edited a little but the error I am getting now on the transferspreadsheet line is "The Microsoft database engine could not find the object'1301 Array$A$1:J$12'. Make sure that the object exists and you spell its name correctly. I took out fullrange and put in page.name and it gave me the same error. For Each Page In Worksheets PageName = Split(Page.Name, " ") If UBound(PageName) > 0 Then ' Worksheets(Page.Name).Activate - this line is most likely not needed lRow = Page.Range("A" & Rows.Count).End(xlUp).Row LCol = Page.Cells(2, Columns.Count).End(xlToLeft).Column fullRange = Page.Name & Page.Range(Page.Cells(1, 1), _ Page.Cells(lRow, LCol)).Address accappl.DoCmd.TransferSpreadsheet acImport, _ acSpreadsheetTypeExcel12Xml, Page.Name, strpathxls, True, Page.Name End If Next
Setting range properly in DoCmd.TransferSpreadSheet (VBA Access in Excel)?
Can you just use robocopy? This line will copy all files in c:\source and its subfolders that have been modified in the last day to d:\test:

robocopy c:\source d:\test *.* /s /maxage:1

Of course, if you forget to run it one day, you'll miss any files touched that day. So if this is really for backups, the better approach is to use the Archive bit:

robocopy c:\source d:\test *.* /s /m

When a file is created or edited, Windows sets its Archive bit. robocopy with the /m switch copies only files whose Archive bit is set (meaning only the ones that have changed since the last run) and then clears the bit again.
Here is what I want to do: I want to write a "bat" file that will check all the files in a single partition to determine whether any file is revised/created today and if any, I would copy these file to a folder. So, if I run this bat everyday before I leave my office, I can backup all the files I used in a single folder. The bat file I have now copies the folder instead of file, and sometimes it doesn't work at all... Could you help me debug it? You might want to put it in a root directory such as C/D, and then change d:/test to whatever folder you plan to "test copy the targeted file. Here is the code I have for now: @echo off set t=%date% set t=%t:~0,10% echo %t% setlocal ENABLEDELAYEDEXPANSION for /f "tokens=*" %%i in ('dir /b /a-d') do ( set d=%%~ti set d=!d:~0,10! echo !d! if "!d!"=="%t%" (if not "%~nx0"=="%%i" copy "%%i" d:\test)) for /f "tokens=*" %%j in ('dir /b /ad') do ( set d=%%~tj set d=!d:~0,10! echo !d! if "!d!"=="%t%" (echo d|xcopy /e /y "%%j" d:\test\%%j))
bat file debug "back up used files"
Try out this python script here. python backup_tool.py
I have an encrypted iPhone backup on Windows and its encryption password. How can I open the SQLite DB files from the backup manually, without using iTunes? Without encryption I am able to open them using any SQL manager.
I have encrypted iPhone backup in Windows and its Encryption Password. How to open SQLite DB files from the backup manually without using iTunes [closed]
One possible way is to run the following from a daily cron job (note that there must be no space between -p and the password, otherwise mysqldump treats the next word as the database name):

mysqldump -u <db_user> -p<db_password> <db_name> -h <db_host_if_any> > /home/backups/backup_<timestamp>.sql

Then gzip it for storage (just a mechanism to reduce size):

gzip /home/backups/backup_<timestamp>.sql

From the comment thread: mysqldump is the standard tool, and running it from cron makes it automatic; cron is the periodic task scheduler on Linux. The asker's server is Windows 2008 R2 rather than Linux, so the equivalent there is the Windows Task Scheduler rather than cron.
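As one concrete, illustrative way to wire this up on a Linux host (names, times and paths are placeholders; on Windows the same mysqldump command line would instead be registered as a Scheduled Task):

# edit the crontab with: crontab -e
# every night at 03:00: dump, date-stamp and compress the database
0 3 * * * d=$(date +\%F); mysqldump -u backup_user -pSECRET mydb > /home/backups/backup_$d.sql && gzip /home/backups/backup_$d.sql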
How can I create a MySQL job that runs daily to generate a database backup and store it on the server? Also, how can I create a second job that performs maintenance on the database to keep it running without problems? Thanks
How to create a MySql Job to create a database backup and a maintenance plan?
What platform are you on? If a recovery catalog is in use, the CATALOG option will appear on the RMAN command line or in the recovery script files; just "grep" for catalog as a start.
I have been handed an oracle database (10.1.0.5.0) with no documentation and very little rman information and I need to change the existing the backup location drive for rman backups. Before I do that I want to check if the database has a recovery catalog. How do I do this? If no recovery catalog exists how to do I query existing script names and script content?
Oracle and rman recovery catalog
Just zip the whole SQLite database and let the user choose a folder on the SD card to save it in. Follow-up from the asker: is it possible to use that backup on a different phone; is the path to the images always the same on Android?
I've got an application that lets the user make objects of type 'Kind'. In this objects, there are fields like id, date or name. There is also a field called 'bildid' which contains a path to an image on the phone. All these data are written to a SqLite database. What would be the best way to let the user backup data? There are also some private data, a backup on the mobile phone would be the best. So how to backup the data, especially the links to my images that they can be used in a different mobile phone?
Backup my app data
A few things I can think of: Make sure that the path in the command is fully qualified (C:... etc) and wrap it in double quotes to avoid issues with spaces in path names. Paste the path into a command window and make sure Robocopy opens OK. Make sure that the account that the SQL Server instance is running under (probably Network Service) has access to the robocopy executable.
I have created a SQL job with 2 steps to create a DB backup: the 1st step takes the backup and the 2nd step copies the backup file to a network drive. The 1st step succeeds; the 2nd step fails. In step 2 the type is Operating system (CmdExec) and the command line is robocopy <source> <destination>, but it throws this error: "The process could not be created for step 2 of job 0x7847DBA2AFA7D149A5ED24AA8B3B9FA6 (reason: The system cannot find the file specified). The step failed." Quick help is highly appreciated.
Sql backup to network drive [closed]
DataTables take up quite a bit of memory. Be careful with this approach: as the database grows, so does the likelihood that this method will fill up your memory. I would look at using the MySQL command-line tools to back up / restore and call those commands from your code, using the Process.Start method:

Process process = new Process();
// Configure the process using the StartInfo properties.
process.StartInfo.FileName = "process.exe";
process.StartInfo.Arguments = "-n";
process.StartInfo.WindowStyle = ProcessWindowStyle.Maximized;
process.Start();
process.WaitForExit(); // Waits here for the process to exit.

From the comment thread: you could call mysqldump via cmd.exe, or start it directly with Process.Start; WaitForExit blocks the current thread until the mysqldump completes.
I am tasked with writing a data transfer utility and one requirement is that I copy an entire MySQL database from one server to another. The user will simply click a button when they want the database transfer to occur. I am a little inexperienced with databases, but I worked with them enough to know how to do what I need to do. What is the quickest way of doing this? My original idea was do this: Get a list of all tables Foreach table, get all contents of every table and store them in a DataTable in memory Backup all old tables to a CSV file Truncate all old tables Insert the new DataTables into the appropriate database on the appropriate server Is there a better, more efficient way to do this?
Copy entire database from one server to another in .NET
The OpenSuSE tool 'snapper' appears to show diffs between btrfs snapshots: http://en.opensuse.org/Portal:Snapper
I am currently looking into backups and want to speed up the process while staying file-based (as opposed to filesystem-based). I want to use duplicity as the main backup component. The idea would be to use the features of the underlying filesystem to narrow down the files that duplicity has to scan to determine the differences from the last backup. I know that btrfs can make quick diffs between snapshots so that is what I want to use for now but I cannot seem to find userspace tools to actually handle the diffs that btrfs produces. Are there any libraries to interpret btrfs snapshot diffs? Obviously I am not going to hack the kernel with my nonexistand C skills, so the further away from the bare metal I am the more comfortable I am. Python would be great for example... Or am I the only one who would like to quickly have a simple list of the changed files in file system? For reference: a more complete desciption of my idea
Are there any userspace programs to interpret btrfs snapshot diffs?
I experienced the same kind of problem and could move the /home directory to a mounted device in 2 steps after logging in as root (on an Ubuntu 12.04.1 LTS server). Step 1: move /home to /mounteddevice/home. Step 2: update the /etc/passwd file, replacing '/home' with '/mounteddevice/home' in each user's home-directory field. The other directories that consume a lot of space are /lib and /var/lib, and I am still looking for a proper way to move those.
I'm using Ubuntu and Windows in parallel. On my hard disk I left some space for Windows and some for Linux, and now the Linux disk space is full. How can I move some data from the root partition to another drive without affecting any applications? Please suggest the best approach. I'm attaching a screenshot of Disk Usage Analyzer!
root space full How to take backup?
After checking my VM's event log, I figured out that the backup wasn't running as expected due to limited space on the hard disk. After I cleared some space, it started running as expected.
I configured Windows Azure Backup on my VM hosted on Azure. I did manage to create and upload a certificate following this tutorial and this tutorial. I downloaded the server agent to the VM and configured it, I then managed to perform a manual backup and it worked fine. However I scheduled it to run every day at 3am using the wizard provided and it's not running. I check every day, and the last backup that is listed is the one I did manually. The dashboard in the Backup Server Agent shows it's scheduled, but it's not running. I tried leaving the agent open overnight, and it didn't help. Any insight on the situation will be helpful. Thanks,
Azure Recovery Services - Backup not running automatically
Check your Ruby version. I would recommend using RVM and Ruby version 1.9.2. This should help you.
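If RVM is already installed, the switch looks roughly like this; 1.9.2 simply mirrors the version the answer suggests, and knife-essentials is assumed to be packaged as a gem of that name:

rvm install 1.9.2
rvm use 1.9.2 --default
ruby -v                      # confirm knife will now run under this interpreter
gem install knife-essentials # reinstall the plugin into the new Ruby if needed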
I'm trying to use knife-essentials to backup all objects in a Chef 11 server to json files. I created a directory "backup" containing .chef/download.rb:

transfer_repo = File.expand_path('..', File.dirname(__FILE__))
chef_server_url "https://localhost"
node_name 'chef-importer'
client_key "~/.chef/client.pem"
repo_mode 'everything'
versioned_cookbooks true
chef_repo_path transfer_repo
cookbook_path nil

When I try to use "knife download" I get this error:

# /usr/local/rvm/bin/chef_knife download -c .chef/download.rb /
ERROR: TypeError: can't convert nil into String

This is complaining about cookbook_path, so I tried removing that line, but that gives me this:

ERROR: File chef is a directory while file chef is a regular file

What's the correct way to use knife-essentials to download everything in Chef 11? Thanks
How to use knife-essentials to backup Chef 11
I would generally create a copy of the file itself, place it in a "backup" folder, and apply some naming scheme to it to indicate its age. E.g.:

folder/originalFile.xyz ==> folder/backup/originalFile_2013-04-14-12-48.bak

Update/afterthought: I think the efficiency of this will depend on the OS performing the copy operation, but it should in general not be too bad. Unless you have good reason to do so, I would avoid trying to add extra logic to do it more efficiently.

Update in response to comment: I won't provide a detailed implementation here, but I'll try to point you in the right direction: check out System.IO.File, specifically the methods Copy and Exists. (This list of other common IO tasks may also be useful.) With these, you should be able to check if a file exists (e.g. if you already have "backup_1.xyz" in your backup folder), and based on that, generate a new name for your next backup file. Create a loop that replaces 1 with increasing numbers until you find a "free" filename, and then copy the original file to a new file with that name. Good luck! :)
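A minimal C# sketch of that loop, to make the pointers concrete. The "backup" subfolder follows the answer's suggestion, the backup_NN names follow the question, and the helper name itself is made up.

using System.IO;

static string BackupFile(string originalPath)
{
    // Keep backups next to the original, in a "backup" subfolder (created on first use).
    string backupDir = Path.Combine(Path.GetDirectoryName(originalPath), "backup");
    Directory.CreateDirectory(backupDir);

    string ext = Path.GetExtension(originalPath);
    int n = 1;
    string candidate;
    do
    {
        candidate = Path.Combine(backupDir, string.Format("backup_{0:D2}{1}", n, ext));
        n++;
    } while (File.Exists(candidate));

    File.Copy(originalPath, candidate);
    return candidate;
}

Called right after the user picks a file in the open dialog, this produces backup_01, backup_02, and so on without overwriting earlier copies.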
What is the most efficient way to make a backup of a file when it's being opened in the program, so that when the user changes and saves it, there is always a way to go back? Example:

private void open_click(object sender, EventArgs e)
{
    ofd.DefaultExt = "";
    if (ofd.ShowDialog() == System.Windows.Forms.DialogResult.OK)
    {
        fileIn = ofd.FileName;
        fileOut = Path.GetTempFileName();
        string encoded = File.ReadAllText(fileIn);
        // etc. etc. etc.
    }
}

The file that gets loaded into the program needs to get backed up as backup_01 and put in the same folder as the original file. When backup_01 exists, back up as backup_02, and so on. Examples are more than welcome!
How to backup a complete file?
You are confusing two different things:
1. Backup of an individual device to iCloud (Backup)
2. Sharing/syncing of data by an app with other instances of itself through iCloud (Documents and Data)
You want to look into the latter if you expect to upload from your app on one device and download to the same app for the same iCloud account on another device.
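In code, "Documents and Data" means writing the PDFs into the app's ubiquity container rather than its local Documents folder. A hedged Objective-C sketch: it assumes the iCloud Documents entitlement is configured for the app, and localPdfPath and the file name are placeholders.

// Best called off the main thread; URLForUbiquityContainerIdentifier: can block.
NSURL *container = [[NSFileManager defaultManager] URLForUbiquityContainerIdentifier:nil];
if (container) {
    NSURL *docs = [container URLByAppendingPathComponent:@"Documents" isDirectory:YES];
    [[NSFileManager defaultManager] createDirectoryAtURL:docs
                             withIntermediateDirectories:YES
                                              attributes:nil
                                                   error:NULL];
    NSURL *dest = [docs URLByAppendingPathComponent:@"report.pdf"];
    NSError *error = nil;
    // Moves the local file into iCloud; other devices on the same account can then fetch it.
    [[NSFileManager defaultManager] setUbiquitous:YES
                                        itemAtURL:[NSURL fileURLWithPath:localPdfPath]
                                   destinationURL:dest
                                            error:&error];
}

On the receiving device you would typically use an NSMetadataQuery to discover the documents and trigger their download.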
My app builds PDFs unique to each individual. Since they cannot be easily recreated, and would need to be used on multiple devices, I want them to be backed up to iCloud. The ideal situation would be that they are made on one device, and when the app is downloaded with the same Apple ID on another device, all of the documents are already there. How do I accomplish this? I have iCloud enabled on the App ID, but there is nothing to configure there, and everything I have seen so far on this subject is about how to stop things from backing up to iCloud. I thought this would mean that it does it automatically, but when I install on different devices, the documents are not there. What do I need to be doing? Is there some extra step in coding that needs to take place?
Back Up Documents Folder to iCloud
The privileges tables are located in the mysql database. Check if the content is the same, especially in the tables user, db, tables_priv, columns_priv and procs_priv. Another question: did you also upgrade the server to a newer version? If so, check the MySQL Reference Manual to see whether there are any changes in the privilege tables that you need to know about. Run the mysql_fix_privilege_tables program on *nix, or SOURCE scripts/mysql_fix_privilege_tables.sql on Windows.
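A few quick checks that may help narrow this down, run as root on the new server; the account name is only an example, and on newer MySQL versions mysql_upgrade takes over the role of mysql_fix_privilege_tables:

-- See which accounts actually made it across, then inspect one of them.
SELECT user, host FROM mysql.user;
SHOW GRANTS FOR 'someuser'@'localhost';

-- Reload the grant tables after any direct edit or import of the mysql database.
FLUSH PRIVILEGES;

If SHOW GRANTS already lists the expected privileges but logins still fail, the mismatch is often in the host part of the account ('user'@'%' versus 'user'@'localhost').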
I've just reinstalled everything on my machine. I exported all individual databases from my old MySQL server and I'm attempting to reinstall them on the new one. However, even after importing all tables, including the "mysql" table which I believe should house all the privileges, no users have any privileges other than grant. Except for the root which has all privileges as expected. I've flushed the privileges and restarted the database but no privileges are picking up. I looked at the old backup for the entire server and the old INFORMATION_SCHEMA database had far more privileges in it than the current one. However, I know this is a virtual table and should be picking up the data from the MySQL database and others. How do I restore the privileges?
MySQL Privileges not Restoring?
You are missing a space in the where clause (and a $ before name_1). It should be:

where {$_.name -eq $name_1 -or $_.name -eq $Name_2}

To be sure it works correctly, use () so your statement becomes:

where {($_.name -eq $name_1) -or ($_.name -eq $Name_2)}

UPDATE My full (tested) script to calculate the sum of selected directories:

$names = @("IIS", "IIS Express")
$folders = @("C:\Program Files", "C:\Program Files (x86)")
$x = (gci $folders | where {$names -contains $_.name})
$sum = (($x | %{(Get-ChildItem $_.FullName -recurse | Measure-Object -property length -sum)}) | Measure-Object -Property 'Sum' -Sum).sum
if($sum -gt 3000){write-host "success"}

In your case, replace

$folders = @("C:\Program Files", "C:\Program Files (x86)")

with

$folders = @("C:\Users\location_A", "C:\Users\location_B", "C:\Users\location_C")

and put what you need in $names.
I'll start by stating that I'm pretty new to PowerShell, but from what I hear it can be pretty powerful. With that said, I'll specify the problem. I'm trying to write a PowerShell script destined to run daily and check the total size of a number of specific folders; inside each of these folders there are folders sorted by, let's say, names. What I'm aiming for as the final result is a script that checks these folders' size and, if it exceeds the limit I defined beforehand, moves their content to a pre-defined destination. The files will be moved to a folder with the same name as the one they were located in before. Here is where I've gotten so far:

$Folder_A = "C:\Users\location_A"
$Folder_B = "C:\Users\location_B"
$Folder_C = "C:\Users\location_C"
if((get-childitem $Folder_A , $Folder_B , $Folder_C | where {$_.name -eq name_1-or $_.name -eq $Name_2} | measure-object -property length -sum).sum -gt 30000)
{write-host "success"}

OUTPUT?

You must provide a value expression on the right-hand side of the '-eq' operator.
At line:1 char:63
+ if((get-childitem $input , $Temp , $Tiffs | where {$_.name -eq <<<< I001 -or $_.name -eq I002} | measure-object -property length -sum).sum -gt 30000) { write-host "success"}
    + CategoryInfo          : ParserError: (:) [], ParentContainsErrorRecordException
    + FullyQualifiedErrorId : ExpectedValueExpression

If anyone is able to help progress with this thing I would appreciate it a lot! Thanks in advance.
Difficulties creating an automatic file moving script (Powershell)
Try repairing mailbox permissions as below; I hope it will help if you are restoring a backup for an already existing account. WHM > Email > Repair mailbox permissions
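If you would rather do the same thing from the shell (for example while the WHM restore itself is stuck), cPanel has historically shipped a script for this task; the path below is an assumption, so confirm it exists on your server before relying on it:

# Run as root over SSH; only run the script if it is actually present.
ls /scripts/mailperm && /scripts/mailperm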
My restore stops when it gets to the "fixing mail permission" part of restoring a cPanel backup. Any help?
cPanel backup freezing in “fixing mail permission” state
Are you sure your backUP function is receiving .thumbnails files in $list1? If the files are hidden, then Get-ChildItem will only return them if the -Force switch is used. As for other recommendations, Robocopy.exe is a good dedicated tool for performing file synchronization.
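Both suggestions as short, hedged examples; the paths are placeholders and the robocopy switches are only a reasonable starting set:

# 1) Include hidden/system items such as .thumbnails when building the copy list:
$list1 = Get-ChildItem -Path 'D:\Source' -Recurse -Force

# 2) Or let robocopy mirror the tree and write its own log for later error checking:
robocopy D:\Source E:\Backup /E /COPY:DAT /R:2 /W:5 /LOG:E:\Backup\robocopy.log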
I have written a backup script which backs up files and logs errors. It works fine, except for some .thumbnails; many other .thumbnails do get copied! Of 54000 files copied, the same 480 .thumbnails never get copied or logged. I will be checking the attributes; however, I feel the Copy-Item function should have done the job. Any other recommendations are welcome as well, but please stay on topic, thanks! Here is my backUP script:

Function backUP{
    Param ([string]$destination1 ,$list1)
    $destination2 = $destination1
    #extract newly made string for the backup log
    $index = $destination2.LastIndexOf("\")
    $count = $destination2.length - $index
    $source1 = $destination2.Substring($index, $count)
    $finalstr2 = $logdrive + $source1

    Foreach($item in $list1){
        Copy-Item -Container: $true -Recurse -Force -Path $item -Destination $destination1 -erroraction Continue
        if(-not $?)
        {
            write-output "ERROR de copiado : " $error| format-list | out-file -Append "$finalstr2\GCI-ERRORS-backup.txt"
            Foreach($erritem in $error){
                write-output "Error Data:" $erritem.TargetObject | out-file -Append "$finalstr2\GCI-ERRORS-backup.txt"
            }
            $error.Clear()
        }
    }
}
backing up .thumbnails powershell
Create Procedure ArchiveEmployeeTranactions
(
    @SaleNumber int,
    @EmployeeNumber int
)
AS
BEGIN
    IF @SaleNumber IS NULL
    BEGIN
        RAISERROR ('Please enter a valid Sale Number', 16, 1)
        RETURN
    END

    IF @EmployeeNumber IS NULL
    BEGIN
        RAISERROR ('Please enter a valid Employee Number', 16, 1)
        RETURN
    END

    INSERT INTO Archive
    SELECT sale.employeeNumber, employee.FirstName, employee.LastName, sale.saleNumber
    FROM employee
    INNER JOIN sale ON employee.EmployeeNumber = sale.employeeNumber
    WHERE employee.employeeNumber = @EmployeeNumber
      AND sale.saleNumber = @SaleNumber
END
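A hypothetical call, just to show the shape of the parameters (the numbers are made up):

EXEC ArchiveEmployeeTranactions @SaleNumber = 1001, @EmployeeNumber = 42;

Note that this still archives by a specific sale; the "only employees with no sales" rule from the question would probably need a NOT EXISTS check against the sale table instead.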
I've got a question about raising errors and copying the contents from a table into a stored procedure. What I need to do is move employee information to an Archive table for storage and backup, raise error messages when the employee number does not exist, and only move the employee records that have no sales. Currently I'm stuck on what to do after I've made sure both the sale number and employee number are not null. Here is what I have so far:

Create Procedure ArchiveEmployeeTranactions
(
    @SaleNumber int,
    @EmployeeNumber int
)
AS
SELECT sale.employeeNumber, employee.FirstName, employee.LastName, sale.saleNumber
FROM employee
INNER JOIN sale ON employee.EmployeeNumber = sale.employeeNumber

IF @SaleNumber is null
BEGIN
    RAISERROR ('Please enter valid Sale Number',16,1)
END
Else
BEGIN
    IF @EmployeeNumber is null
    RAISERROR ('Please enter Valid Employee Number',16,1)
END
copy content from table into stored procedure for backup and storage
If you already know the name of the backup process shown when you type top over SSH, then I think you can use ps aux | grep your_process_name with Bash over SSH. You may also call these commands from PHP; more info about this: Run Bash Command from PHP
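Put together, the check could look roughly like this in PHP. The process name is a guess (use whatever top actually shows while a backup is running), and the sleep interval is arbitrary:

<?php
// Returns true while a process whose command line contains the given name is running.
function backupIsRunning($processName = 'cpbackup')
{
    $out = shell_exec('ps aux | grep -v grep | grep ' . escapeshellarg($processName));
    return trim((string) $out) !== '';
}

// Pause the feed aggregation loop while the nightly backup is busy.
while (backupIsRunning()) {
    sleep(300); // re-check every five minutes
}
// ... continue processing feeds ...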
I have full automatic backups running fine in WHM, and I'm now implementing a script to automatically download content to a database; however, they're not playing nicely together. There are several dozen feeds I'm trying to aggregate, many of them containing several thousand items, so ideally I'd like this to be a process running round the clock refreshing them all, which alone isn't a problem. But because the feeds and backups both need a lot of disk usage, the load and I/O wait time are getting ridiculous, so I'm hoping there's a way I can detect (preferably in PHP) if a backup is currently running and pause the feed processing while that's happening. I could just play around with backup/feed times so they don't coincide, but the backups seem to take anything from 2 to 10 hours, so that's really not ideal. Can I detect if a backup is currently being run by WHM? Thanks.
Detect if cPanel / WHM is currently running, in PHP?
I figured it out. You can copy your ~/workspace/eclipse/.metadata to somewhere, change a preference, and then sync your workspace against your backup to find out which files changed. You'll find that a lot of settings are in ~/workspace/eclipse/.metadata/.plugins/org.eclipse.core.runtime/.settings/. The JavaScript templates and other JavaScript options are in ~/workspace/eclipse/.metadata/.plugins/org.eclipse.core.runtime/.settings/org.eclipse.wst.jsdt.ui.prefs.

org.eclipse.wst.jsdt.ui.prefs = Code Templates
org.eclipse.jdt.ui.prefs = Syntax Coloring
org.eclipse.ui.editors.prefs = Text Editors
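Once you know which .prefs files you care about, the Dropbox trick works the same way as for any other dotfile. A small example for one file; the paths are only illustrative, and Eclipse should be closed before moving anything:

SRC=~/workspace/eclipse/.metadata/.plugins/org.eclipse.core.runtime/.settings/org.eclipse.wst.jsdt.ui.prefs
mkdir -p ~/Dropbox/eclipse-settings
mv "$SRC" ~/Dropbox/eclipse-settings/
ln -s ~/Dropbox/eclipse-settings/org.eclipse.wst.jsdt.ui.prefs "$SRC"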
I have three common workplaces where I use the Eclipse IDE. A nice trick when using multiple common workplaces is to copy certain configuration files to Dropbox, and link to them in the original configuration location. This way, all settings and changes are instantly available in your other workplaces. You've got your workspace with a whopping 100 megabytes of files. You've got your .eclipse which is closing in on 200 megabytes. I would like to know which specific files contain my custom javascript code templates, and which contains my keyboard shortcuts, so I can share these, and only these, with myself through Dropbox. Ideally, I'd like a list of of certain settings and their locations so I can choose to share more. But I haven't found something like this on Google. Why am I not just sharing my entire workspace and configuration directory? Well, first, it is crazy big. Second, Eclipse is modular. In some places I use certain modules that I don't use elsewhere. And you all know that modules/plugins are a crazy mess of files and configuration from which there is no escape.
What files in Eclipse contains code templates and keyboard bindings?
Use getopts and pass the directory as a named argument: ./myScript.sh -d target_folder file1 file2 file3... fileN
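A sketch of how that getopts variant could look; the copy itself is plain cp, and the option letter -d is an arbitrary choice:

#!/bin/bash
# usage: ./myScript.sh -d target_folder file1 file2 ... fileN
target=""
while getopts "d:" opt; do
  case "$opt" in
    d) target="$OPTARG" ;;
    *) echo "usage: $0 -d target_folder file..." >&2; exit 1 ;;
  esac
done
shift $((OPTIND - 1))

if [ -z "$target" ] || [ "$#" -lt 1 ]; then
  echo "usage: $0 -d target_folder file..." >&2
  exit 1
fi

mkdir -p "$target"        # creates the target folder only if it does not exist yet

for f in "$@"; do
  cp -p "$f" "$target/"
done

If you must keep the original calling convention (target folder last), "${!#}" gives you the last argument in bash and "${@:1:$#-1}" gives the rest.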
So, I need to write a script in bash which will back up the files of a directory. The script gets files (a list of files to back up) as arguments, and the last argument must be the target folder (directory). If the target folder doesn't exist, it must be created by the script. I was planning to use a for loop to move through the list of arguments (files), but I don't know how to use the last argument and check whether it exists. Call of script: ./myScript.sh file1 file2 file3... fileN target_folder Thanks. :) I started this:

#!/bin/bash
#doing backup of files passed as list of arguments.
if [ "$#" lt "2" ]
then
    echo usage: "./myScript.sh <list of arguments -files for backup.>"
    exit
fi
for arg in "$#"
do
    if #last argument exist as folder in directory, just copy all files in
    else #make targer folder and copy all files in
done
Bash script for files in directory (backup)
I don't think there was a need to restore the remote. You could simply push your local repo to a new remote one. That's one of the reasons that made distributed source control so popular, isn't it?
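Concretely, once an empty repository exists on the restored or replacement server, pushing everything back looks something like this; the URL is a placeholder:

# Point origin at the new/restored bare repository (or use `git remote add` for a fresh name).
git remote set-url origin ssh://git@server/path/to/repo.git

git push --all origin     # every local branch
git push --tags origin    # tags are not covered by --all

Only the commits that exist in someone's local clone come back this way; anything pushed by colleagues after the two-day-old backup and never pulled by anyone is gone.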
The machine where our remote repositories are stored crashed. Currently it is restored from a two-day-old backup. Can you give us some advice on what to do? In our opinion, all we need to do is push our local commits in all branches back to the remote repo. Are we overlooking something here? Any advice would be appreciated! Kind regards, Florian
GIT: restoring server with remote git repository - how to act right?
Unzip to a temp location.
Use the <present> selector to make a fileset of files in both original and temp locations (those that would be overwritten).
Use that fileset to <copy> your files from the original location to a backup location.
Copy over all files from the temp location to the original location.
Delete your temp location.
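Expressed as an Ant target, those steps might look roughly like this; the archive name, directory names, and target name are all made up:

<target name="unzip-with-backup">
    <unzip src="archive.zip" dest="tmp-unzip"/>

    <!-- back up only the files the archive is about to overwrite -->
    <copy todir="backup">
        <fileset dir="original">
            <present present="both" targetdir="tmp-unzip"/>
        </fileset>
    </copy>

    <!-- now let the new files replace the old ones -->
    <copy todir="original" overwrite="true">
        <fileset dir="tmp-unzip"/>
    </copy>

    <delete dir="tmp-unzip"/>
</target>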
I have some folders which are zipped together. I want to unzip the folders to the same or a different location, but if a file from the .zip already exists at the destination, I want to make a backup of that existing file somewhere first, and only then let the unzip command overwrite it.
how to backup some of the existing files present before unzip