Columns: Response, Instruction, Prompt
We do this with our Nexus installation - the data is stored on a separate EBS volume that's regularly snapshotted, but the root disk isn't (since we can use Puppet to create a working Nexus instance from the latest base AMI, Java, Tomcat and Nexus versions). The one drawback of this approach (vs. your other approach of backing up to S3) is that you can't retrieve the data outside of AWS if needed - if that is an important use case I'd recommend either uploading a volume bundle or a .tar.gz backup to S3. However, in your case, if you have a single EBS-backed EC2 instance that is your CMS server, you could run it with a large root volume and keep that regularly backed up (either using EBS snapshots or backing up a .tar.gz to S3). If you're not particularly familiar with Linux that'll likely be the easiest way to make sure all your data is backed up, and if you need to extract only the data you can always do so by attaching that volume (or a volume created from a snapshot of it) to another machine - you'd also have access to all the config files, which may be of use. Bear in mind that if you only want to run your server some of the time you can always Stop the instance rather than Terminate it - the EBS volumes will remain. Once you take a snapshot your data is safe - if part of an EBS volume fails but it hasn't been modified since the last snapshot, AWS will transparently restore it from the EBS snapshot data.
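As a rough sketch of both options using the AWS CLI (the volume ID, bucket name and paths here are placeholders, not values from the question):
# snapshot the data volume (placeholder volume ID)
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "nightly CMS data snapshot"
# or archive the CMS directory and copy it to S3, so it can also be retrieved outside AWS
tar czf /tmp/cms-backup-$(date +%F).tar.gz /var/www/cms
aws s3 cp /tmp/cms-backup-$(date +%F).tar.gz s3://my-backup-bucket/cms/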
Is it possible, or even advisable, to use an EBS volume that persists after instance termination to store database/website files, and reattach it to a new Amazon instance in case of failure? OR should I back up a volume bundle to S3? Also, I need an application to accelerate terminal window functions intelligently. Can you tell I'm a Linux noob?
Should I put database and CMS files on a separate EBS or S3?
DDL commands can't be rolled back in MySQL. You need to restore from a backup. If you need to recover data that was committed since the latest backup, perform point in time recovery with binary logs. But this depends on having binary logging enabled, and having a continuous set of binary logs since the date of the last full backup.
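A minimal point-in-time recovery sketch along those lines, assuming binary logging was enabled and using placeholder file names and times (adjust to your own backup and binlog files):
# restore the last full backup first
mysql -u root -p mydb < full_backup.sql
# then replay committed changes from the binary logs, stopping just before the accidental DROP TABLE
mysqlbinlog --start-datetime="2013-01-01 00:00:00" --stop-datetime="2013-01-02 09:59:00" /var/log/mysql/mysql-bin.000123 | mysql -u root -p mydb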
I just dropped my company's table and realized that the SQL backup I made was for STRUCTURE, not DATA. I need to restore the data immediately... is there any way to do this? I'm using phpMyAdmin and all I've done so far is DROP TABLE USEFUL_TABLE, CREATE TABLE IF NOT EXISTS USEFUL_TABLE, and (in desperation) ROLLBACK. Is there any way to get the data records back? Edit: Thanks for the comments, and thank God above that I found an obscure backup somewhere that I was able to restore! Just as a tip for anyone as hasty and careless as myself: BEFORE any backup/export operations, always make sure you've selected the "Dump all rows" option when exporting data for a backup. I didn't, and I didn't even check to confirm that the SQL had the rows dumped.
MySQL data rollback for past 3 queries
How long ago did you delete them? If it was recently and you haven't emptied the trash yet, you might be able to find it there. Just click the Trash icon on the dock and see if the folder is in there. If it is, you can just drag it back into your project and everything should be fine. If it is not there, however, it will be much more difficult to recover it: you will have to find some program that searches for it, and it may not be able to find it. Good luck!
I deleted my whole project folder (all .h and .m files and graphics, etc.). Coming from Windows I thought that folders in Mac OS would merge, but they were replaced :-( I don't have any snapshots. Is there any chance to retrieve any data (reverse engineer any derived data or something)? So far I could: rescue my graphics & sound from the .app file in the build folder; rescue my storyboard files from the "Autosave Information" folder (in ~/Library).
Derive deleted project from Xcode data or caches
It's probably because the file "C:\Documents" does not exist: the unquoted path is split at the first space, so mysqldump only sees the part before "and Settings". You probably want to quote the whole path after -r, i.e. -r "C:\Documents and Settings\Administrator\My Documents\a.sql"
I wrote the line below to create a MySQL backup, but for some reason I'm getting Errcode 13. E:\Xampp\xampp\mysql\bin\\mysqldump -u root --add-drop-database -B project_db -r C:\Documents and Settings\Administrator\My Documents\a.sql Why does the above line fail to execute? I'm trying to create a DB backup with it. Please help.
MySQL backup error
Maybe use the suffix tree data structure to find longest common substrings - possibly even allowing differences, thus giving a similarity measure. Create a new tree that mirrors the existing hierarchy, in the sense of one node in the new tree per file/directory of the hierarchical structure. As you build the tree (likely recursively, for example using a FileFilter and descending into each entry that is a directory), for each node in the new tree create its path from the root down to that node. Make that path a key into a Map, where the key is the path and the value is the node reference in your new tree. Then you can employ a suffix tree algorithm against the keySet of this map to find entries that share common suffixes - which are precisely entries that can be de-duped. That takes care of identical subtrees. The suffix tree also permits identifying "misses" - i.e. cases where one or more links in the path differ between two paths.
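Not the suffix-tree approach described above, but a hedged sketch of a complementary first pass: hash every directory's file contents recursively, so byte-identical subtrees show up as duplicate hashes (the path is a placeholder; this ignores file metadata and can be slow over ten years of data):
cd /path/to/backups
find . -type d | while read -r dir; do
  # hash of all file contents (and relative names) under this directory
  h=$( (cd "$dir" && find . -type f -print0 | sort -z | xargs -0 sha1sum) | sha1sum | cut -d' ' -f1 )
  printf '%s  %s\n' "$h" "$dir"
done | sort | uniq -w 40 -D   # keep only directories whose content hash occurs more than once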
I have an archive of about 10 years' worth of files in a large directory tree structure, with multiple copies of smaller trees at various locations in the larger tree. The tree grew into this structure because of a lack of a consistent backup strategy and filing strategy (all my own fault, basically). I'm looking for a way to find identical copies of trees in the larger tree, so that I can delete the copies I don't need, moving me one step closer to cleaning this big mess up. I thought I could write a script that would build a database of files in the tree, so that I could then write another script that finds trees that are identical, deleting the tree copy that is nested deepest in the tree. However, I'm not sure how best to go about this, in terms of database design and what sort of algorithm to use to efficiently compare these trees to find identical copies. To recap, this is what the tree looks like: backups/folder1/ backups/somecomputer/vault/folder1 backups/othercomputer/folder1 ... There is no guarantee that the trees are "complete" - it could be that the trees are similar but that only one copy contains most files and subdirectories. So it's about finding the most "complete" tree. If anyone has any other ideas on how to solve this problem, or to efficiently clean up cluttered structures like this without going over every individual file, I'd be very grateful! Thanks, B
find directory trees in large tree structure that are identical
Using a cron job to run a .sql file can do what you want. To replace all the data you'll have to add table DROP and CREATE statements to the file being executed. That will force the specific data that you want to be the only data in the database.
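A hedged sketch of what the two crontab entries could look like (paths, credentials and the Joomla directory are placeholders; the dump file is assumed to contain DROP TABLE / CREATE TABLE statements, e.g. one produced with mysqldump --add-drop-table):
# restore the database every 30 minutes
*/30 * * * * mysql -u user -ppassword databasename < /path/to/dump.sql
# reset the site files from a pristine copy every 30 minutes (--delete removes anything visitors added)
*/30 * * * * rsync -a --delete /path/to/pristine-joomla/ /var/www/joomla/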
OK, I read this post: "I need to restore a database (mysql) every 30 minutes using a cron job", and the answer was: mysql -u user -ppassword databasename < /path/to/dump.sql My question is: will that erase the data that is already in the database? I need to overwrite from a stored MySQL file. I also then need to overwrite all the files in one directory with a restore directory using a cron command. This is for a Joomla install... I have tried 2 different Joomla components to set up a demo site that refreshes every 30 minutes, but I could not get either of them working. Does anyone have a solution that will totally restore a directory on a cron command?
Restore a MySQL Database and Files every 30 mins for a demo site
What you can or cannot back up is decided by the host. You should contact their support (it's easier). I don't understand what code you are looking for as an alternative. The surest backup is to keep a copy of your files on your hard disk, and FTP is probably the most reliable way to do it. I might be wrong but... Comments: "They said that I can back up everything. Since I want to back up locally, I don't think I have to use FTP." – misa. "Well, if you can back up everything, you have the backup wizard in cPanel, don't you... I am not sure I am understanding your question though." – tattvamasi.
I have been looking for a way to back up a complete cPanel account, including all the files and databases, locally on the shared hosting. No backup option was provided in cPanel. I've googled it but all I could find was a PHP script for automatic backup over FTP to a remote host. What I'm looking for is a way to back up on the local shared host. I've tried using the code for FTP backup to a remote host, changing the values to those of my local shared host, but it didn't work for me. It sounds useless to keep the backup locally, but that's the only option we have now. Thanks
automatic cpanel backup on shared hosting
The absolute best way of doing this without disturbing the production database is to set up master/slave replication and then do the dump from the slave database. More on MySQL replication here: http://dev.mysql.com/doc/refman/5.1/en/replication-howto.html Comments: "A couple of queries: does replication happen in idle time or in real time? If it's during idle time then sync is not guaranteed; if it's in real time then it's network overhead too (unless there's a dedicated link)." – जलजनक. "The slave would have to be locked anyway during the dump, so delays as small as the network overhead won't matter much. It's just a matter of not having the production database slow down during the backup. But you can set up a slave to replicate either in real time or with a delay, if that was your question." – Alb Dum.
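A hedged sketch of running the same dump against a replica instead of the master (hostname and credentials are placeholders; pausing the SQL thread is optional but gives a quiet, consistent copy):
# optionally pause replication on the slave for the duration of the dump
mysql -h replica.example.com -u backup -p -e "STOP SLAVE SQL_THREAD;"
mysqldump --opt --single-transaction --routines --add-drop-database --no-autocommit -h replica.example.com -u backup -p --databases db > dbbackup.sql
mysql -h replica.example.com -u backup -p -e "START SLAVE SQL_THREAD;"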
Currently I am using the mysqldump program to create backups; below is an example of how I run it: mysqldump --opt --skip-lock-tables --single-transaction --add-drop-database --no-autocommit -u user -ppassword --databases db > dbbackup.sql I perform a lot of inserts and updates on my database throughout the day, but when this process starts it can really slow the inserts and updates down. Does anyone see any flaw in the way I am backing it up (e.g. tables being locked), or is there a way I can improve the backup process so it doesn't affect my inserts and updates as much? Thanks.
MySQL backup process slowing down inserts and updates
This was a problem with folder creation: SELECT @fullPath = @fullPath + N'\Backups\' + @DataBaseName + N'.bak' server1 has a "Backups" folder and SQL Server can save the backup into it, but that folder does not exist on server2.
I have a stored procedure which performs a database backup for a specific database, like this:
ALTER PROC [dbo].[SP_Backup]
    @DataBaseName NVARCHAR(500) = NULL,
    @fullPath NVARCHAR(500) OUTPUT
AS
BEGIN
    DECLARE @dbpath NVARCHAR(500);
    SELECT @DataBaseName = DB_NAME()
    SELECT @dbpath = physical_name FROM sys.database_files WHERE ( name = N'myDb' );
    SELECT @fullPath = SUBSTRING(@dbpath, 0, LEN(@dbpath) - CHARINDEX('\', REVERSE(@dbpath) + '\') + 1);
    SELECT @fullPath = @fullPath + N'\Backups\' + @DataBaseName + N'.bak'
    BACKUP DATABASE @DataBaseName TO DISK = @fullPath;
END;
The problem is, it works on one server and does not work on another server (the servers are on the same machines -- SQL 2008 R2), and I get the exception "BACKUP DATABASE is terminating abnormally." The backup folders have the same permission settings and look like this: C:\Program Files\Microsoft SQL Server\MSSQL10_50.Server1\MSSQL\DATA\Backup C:\Program Files\Microsoft SQL Server\MSSQL10_50.Server2\MSSQL\DATA\Backup and the SQL users are the same too. The log file contains the following information: Error: 18204, Severity: 16, State: 1. BackupDiskFile::CreateMedia: Backup device 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.Server2\MSSQL\DATA\Backups\myDb.bak' failed to create. Operating system error 3(failed to retrieve text for this error. Reason: 1815). Error: 3041, Severity: 16, State: 1. BACKUP failed to complete the command BACKUP DATABASE myDb. Check the backup application log for detailed messages. What could be the reason for that?
BACKUP DATABASE is terminating abnormally
You can use the export tool, that is, hadoop jar hbase-*-SNAPSHOT.jar export, with a filter as the last argument: it is treated as a regular expression if it starts with ^, otherwise it is interpreted as a row key prefix. See the source for details, as it seems not yet to be documented (should work from 0.91.0 on).
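A hedged sketch combining that filter argument with a smaller scanner caching value, which is the usual first step against heap errors on very wide rows (the positional arguments follow the Export usage string - table, output dir, versions, start time, end time, then the prefix/regex - so check the usage output of your own HBase version before relying on this exact form):
# fetch fewer rows per RPC so a batch of huge rows does not blow the client heap
hbase org.apache.hadoop.hbase.mapreduce.Export -D hbase.client.scanner.caching=10 tableName /backup/tableName 1 0 9223372036854775807 ^myRowPrefix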
I want to create a backup of my HBase table using HBase export. The problem is that my rows are very big and I get a Java heap space error. Is there any parameter I can give in order to limit the copied size in each step? I use the following command: hadoop jar /usr/lib/hbase/hbase-0.90.3-cdh3u1.jar export tableName backupPathOnHdfs numberOfColumnFamiliesVersions or hbase org.apache.hadoop.hbase.mapreduce.Export tableName backupPathOnHdfs numberOfColumnFamiliesVersions
hbase export row size limit
I think I figured out the issue. It looks like dump adds information to the file so it knows when the previous level occurred.
Level 0 backup:
# file bkup_tmp_0_20130220
bkup_tmp_0_20130220: new-fs dump file (little endian), This dump Wed Feb 20 14:29:31 2013, Previous dump Wed Dec 31 18:00:00 1969, Volume 1, Level zero, type: tape header, Label my-label, Filesystem /tmp, Device /dev/sda3, Host myhostname, Flags 3
Level 1 backup, after some change:
# file bkup_tmp_1_20130220
bkup_tmp_1_20130220: new-fs dump file (little endian), This dump Wed Feb 20 14:30:48 2013, Previous dump Wed Feb 20 14:29:31 2013, Volume 1, Level 1, type: tape header, Label my-label, Filesystem /tmp, Device /dev/sda3, Host myhostname, Flags 3
How does dump create the incremental backup? It seems I should use the same file name when I create a level 1 dump: Full backup: dump -0aLuf /mnt/bkup/backup.dump / and then for the incremental dump -1aLuf /mnt/bkup/backup.dump / What happens if I dump the level 1 to a different file: dump -1aLuf /mnt/bkup/backup1.dump / I am trying to understand how dump keeps track of the changes. I am using a ext3 file system. This is my /etc/dumpdates: # cat /etc/dumpdates /dev/sda2 0 Wed Feb 13 10:55:42 2013 -0600 /dev/sda2 1 Mon Feb 18 11:41:00 2013 -0600 My level 0 for this system was around 11GB and then I ran level 1 today and I used the same filename and the size was around 5 GB.
Dump incremental file location
My take on it would be that the block size of the tape is greater than your BUFFER_SIZE: you are not filling the tape blocks all the way. Comments: "Add this as a comment, not as an answer." – InnocentKiller. "In that test I was using a single 160 MB file, so block size can't explain it. Thank you anyway :) It looks like the final size is related to the number of tape write errors, and there is some way to reduce this error count on the software side (like Microsoft NTBackup does...)." – HypeZ.
I'm writing a little tape writer application in C#, using the class contained in this article: http://www.codeproject.com/Articles/15487/Magnetic-Tape-Data-Storage-Part-1-Tape-Drive-IO-Co This works very well, but writes a lot more data to tape than the original file data. Practical example: my test file is 160 MB. Writing it to tape results in about 300 MB of space occupied; enabling hardware compression, it takes about 250 MB. If I read the just-written raw data back from the tape I get a file of about 170 MB (which is acceptable), and the backed-up file always works well. I tried with other programs: Microsoft NTBackup uses just 170 MB (!!) with compression enabled, other commercial and free programs use from 200 to 300 MB. But ALL the programs can read the backup correctly (same MD5 and SHA1 on the recovered file!). What's going on? How can I improve my application? I really can't understand this. I add my "write" function, which uses a modified Write in the class (this works only if you write a single file):
private void Write(string path)
{
    int BlockCounter = 0;
    int BytesRead = 0;
    Byte[] Temp = new Byte[BUFFER_SIZE];
    using (System.IO.FileStream InputStream = System.IO.File.OpenRead(path))
    {
        TapeOperator TapeOp = new TapeOperator();
        TapeOp.Load("\\\\.\\Tape0", 0);
        TapeOp.SetTapePosition(0);
        BytesRead = InputStream.Read(Temp, 0, BUFFER_SIZE);
        while (BytesRead > 0)
        {
            TapeOp.Write(BlockCounter, Temp);
            BlockCounter++;
            BytesRead = InputStream.Read(Temp, 0, BUFFER_SIZE);
        }
        TapeOp.TapeMark(1, 1, 1); //TapeMark is a custom function to write a FileMark
        BlockCounter++;
        TapeOp.Close();
    }
}
Modified Write from the class:
public void Write(long startPos, byte[] stream)
{
    m_stream.Write(stream, 0, stream.Length);
    m_stream.Flush();
}
Why does tape backup size change for every program I use?
If I understood the question right, you can just use a revision control tool like Subversion, or alternatively use a tool to synchronize folders between computers; take a look at this article. Comments: "Can you explain in some detail? Thanks." – user1793700. "Please read the article I've just attached; the second idea is explained well there." – CloudyMarble. "Thanks for your reply, it is a good tool, but I want to do this work automatically; with this tool I have to run the SyncToy app on the local computers every day (every 24 hours). How can I do this automatically?" – user1793700. "In fact, I want to write a C# program that copies files from local computers to the server." – user1793700.
We have some computers on a local network whose users create new files or update existing files in a specified folder on those computers. Now I want to copy the files in this folder to a server every 24 hours. How can I do this in C#? I want to write a C# program that does this. And what network configuration must I do on the local computers so that the server can access the specified folder on them? Thanks.
Back up a folder from a PC on the local network to a server PC
In basic terms you cannot do that: a mysqldump is literally an SQL file containing INSERT statements that would re-insert the data, whereas MS Access has an .accdb file which is where the data is actually stored. The import you are running actually takes the SQL file and inserts it into your Access DB.
I am using mysqldump, called from PHP, to back up a MySQL database to SQL dump format, which creates a .sql file. If I want to view/check the backup file, I have to import it first in order to check/verify/view the data, which takes a lot of time. The issue/question is whether there is some way to create the backup copy of the MySQL database in some other format, like MS Access etc., rather than an SQL dump (.sql) file? It would make it easy to open and check the database backup files.
Back up MySQL database in MS Access format using PHP
The easiest way is to use a script: 1) select the records you need, 2) put them in some form of dump, 3) run a DELETE FROM the table with the parameters you need. Other constructions (triggers with stored procedures or the like) will, IMHO, shoot you in the leg eventually. Comment: "The above "script" can be a shell script (batch file on Windows), which might be considered easier than the PHP or Perl the OP worried about." – MvG.
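A hedged sketch of such a shell script for a daily run (the table name, a created_at date column, and the credentials are assumptions, not from the question):
#!/bin/bash
day=$(date -d yesterday +%F)          # GNU date
out=/backups/mytable_$day.sql
# 1-2) select yesterday's records and dump them as INSERT statements
mysqldump -u user -ppassword mydb mytable --no-create-info --where="created_at >= '$day' AND created_at < '$day' + INTERVAL 1 DAY" > "$out"
# 3) delete those records only if the dump succeeded and is non-empty
[ -s "$out" ] && mysql -u user -ppassword mydb -e "DELETE FROM mytable WHERE created_at >= '$day' AND created_at < '$day' + INTERVAL 1 DAY;"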
Is it possible to take a daily backup (only that day's records) of a particular table in the DB? Once the backup is done, I need to delete those records from the table. Will this scenario work without using a scripting language like PHP, Perl, ...?
How to take a MySQL backup without using a script?
As stated in the documentation for the RESTORE DATABASE command, using the TO target-directory option allows you to change the target database directory, but only if you are restoring a database that does not already exist. If the database already exists (as it does in the commands you give above), then specifying this option has no effect. Keep in mind, too, that the database directory only holds database metadata. The rest of the data (tablespace containers, transaction log files, etc.) may be stored elsewhere on the system. If you need to relocate these files when performing a restore, you either need to use a redirected restore or, if your database is using automatic storage, specify new storage paths. You can read more about how to perform a redirected restore.
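A hedged sketch of the redirected-restore route for the backup in the question (the new database name and the generated script name are placeholders; REDIRECT GENERATE SCRIPT is available in recent DB2 versions, so check the RESTORE DATABASE documentation for yours):
db2 "RESTORE DATABASE SAMPLE FROM P:\BAK TAKEN AT 20130127162614 INTO SAMPLE2 REDIRECT GENERATE SCRIPT restore_sample.clp"
db2 -tvf restore_sample.clp
Between the two commands you edit restore_sample.clp by hand so the container and storage paths point at the new location.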
Kindly help me understand the problem below. I took a backup of a SAMPLE db to P:\BAK and the backup was successful:
Backup successful. The timestamp for this backup image is : 20130127162614
Comment: DB2 BACKUP SAMPLE OFFLINE
Start Time: 20130127162614
End Time: 20130127162619
Status: A
EID: 7 Location: P:\BAK
Then I wanted to do a test restore to a destination folder P:\REST, so I used the command C:\Users\Aritra>db2 restore db SAMPLE from P:\BAK taken at 20130127162614 to P:\REST and the restore was successful:
DB20000I The RESTORE DATABASE command completed successfully.
Comment: RESTORE SAMPLE NO RF
Start Time: 20130127165456
End Time: 20130127165512
Status: A
EID: 8 Location:
But I am unable to find the backup image in the destination folder P:\REST after the restore. Kindly help me understand what is wrong in my understanding.
Unable to find the backup image after the test restore completed successfully
Try this tutorial for autoplay from USB. Then, instead of autoplay.exe, I am thinking you can set it to run a batch file instead. In that batch file, use the xcopy command.
I have a huge number of flash drives and I need to back up my files to a specific folder (for example C:\Backup). There are two requirements: 1. I need to back up files with the .doc and .docx extensions only. 2. When a flash drive is inserted, I want the files to be copied automatically (a loop, if I remember correctly). Is there any batch solution for this?
Copy files automatically from usb to specific folder with specific extensions
I would not rely on file size. Use the date stamp and keep a rolling set of backups so you always have the last 5 days' worth of backups. rsync is what I would use. Comment: "I want to use the file size merely to see if the backup has increased in size. Due to space limitations I am replacing the backup on a daily basis, which is why I need to do the check on size." – Leon Claassen.
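A hedged sketch of the size check the question asks for (paths, host and addresses are placeholders; it assumes key-based SSH to the destination server and a working mail command on the source server):
#!/bin/bash
new=/backup/db_$(date +%F).sql.gz
remote_dir=/backup
old=$(ssh backup@destserver "ls -t $remote_dir/*.sql.gz 2>/dev/null | head -1")
newsize=$(stat -c %s "$new")
oldsize=$(ssh backup@destserver "stat -c %s '$old'" 2>/dev/null || echo 0)
if [ "$newsize" -lt "$oldsize" ]; then
    echo "New backup ($newsize bytes) is smaller than previous ($oldsize bytes)" | mail -s "Backup WARNING" [email protected]
else
    scp "$new" backup@destserver:$remote_dir/ && echo "Backup OK: $newsize bytes" | mail -s "Backup successful" [email protected]
fi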
I have written a shell script that creates a backup of my MySQL database. The script performs the following functions:
1. Creates a backup of the MySQL database
2. Compresses the backup
3. Copies the backup to a remote server
4. Sends an e-mail stating the size of the backup
5. Removes any leftover files on the source server that are not needed
What the script doesn't do, but what I need it to do:
Check the newly created backup against the existing backup on the remote server.
If the new backup is smaller than the old backup, send a WARNING notification via e-mail/SMS.
If the new backup is larger than or equal in size to the old backup, replace the old backup on the remote server with the new backup and then send the success notification stated in point 4.
Thanks, any help here is really appreciated. Operating systems being used: Source server: Ubuntu 12.04.1 LTS; Destination server: Fedora release 13 (Goddard)
Compare File & Copy Replace if Successful
Do you have magic_quotes enabled somewhere in your php.ini? http://php.net/manual/en/security.magicquotes.disabling.php
Hi, I am getting the error below when importing a backup into phpMyAdmin. Any ideas why I am getting this?
There seems to be an error in your SQL query. The MySQL server error output below, if there is any, may also help you in diagnosing the problem
ERROR: Unknown Punctuation String @ 593
STR: />
SQL:
-- phpMyAdmin SQL Dump
-- version 2.11.4
-- http://www.phpmyadmin.net
--
-- Host: localhost
-- Generation Time: Jan 12, 2013 at 11:54 AM
-- Server version: 5.1.57
-- PHP Version: 5.2.17
SET SQL_MODE="NO_AUTO_VALUE_ON_ZERO";# MySQL returned an empty result set (i.e. zero rows).
SQL error when importing backup into phpMyAdmin
If the arguments to mysqldump are the same when you run it in the two different environments, the difference must be in the permissions of the users you are using. Check which user the commands run as (by using SHOW PROCESSLIST in another window while the mysqldumps are running); that should show the difference and point the way to the solution: change the user, or change the privileges of the user you're using.
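A hedged way to compare the two environments (hostname and credentials are placeholders): run this once with exactly the connection parameters your Java code uses and once with the ones your interactive shell uses, then compare the output - dumping triggers typically requires the TRIGGER privilege on newer MySQL versions, so that is a likely difference.
mysql -h localhost -u youruser -pyourpass -e "SELECT CURRENT_USER(); SHOW GRANTS FOR CURRENT_USER();"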
This is how I make MySQL dumps from Java:
public static boolean mysqlDump(String destination){
    File back = new File("tempsdfsdf.fdr");
    Runtime rt = Runtime.getRuntime();
    FileWriter fw = null;
    try {
        fw = new FileWriter(back);
    } catch (IOException ex) {
        return false;
    }
    Process child;
    try {
        child = rt.exec("mysqldump -h"+generals.DATABASE_SERVER+" -u"+DATABASE_USER+" -p"+DATABASE_PASS+" --single-transaction --routines databasename -r"+destination);
        InputStream in = child.getInputStream();
        InputStreamReader xx = new InputStreamReader(in,"utf8");
        char[] chars = new char[1024];
        int ibyte = 0;
        while((ibyte = xx.read(chars)) > 0) {
            fw.write(chars);
        }
        fw.close();
        Utils.deleteFile(back);
    } catch (IOException ex) {
        Logger.getLogger(FRMTestare.class.getName()).log(Level.SEVERE, null, ex);
        return false;
    }
    return true;
}
The dump file is "destination", but I must simulate writing the InputStream to a file to ensure that the "destination" file is fully created when the thread ends, so that I can zip it in another thread. Anyway, this is not important! My question is: why, when I run the command in cmd, does it dump the triggers, but when I run the same command using Runtime.exec the triggers are not dumped? Sorry, the code is a mess but I lost all day changing it to dump triggers. Thanks!
Running mysqldump from Java code won't dump triggers
Assuming that you have the necessary field in your table, you could simply change your full backup to something that pulls everything that was created after the last pull:
-- export.sql.template
sql='SELECT * FROM YourTable WHERE SampleID > $lastSampleID'
Then you could store the last SampleID after each backup:
# in pseudo-code - this will not work as-is
cd /path/to/db
lastSampleID=$(cat lastIDFile)
. export.sql.template
echo $sql > export.sql
sqlite db.sqlite --run-script export.sql > backups/$currentDate
lastSampleID=$(get-last-id-from-file backups/$currentDate)
echo $lastSampleID > lastIDFile
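A hedged, runnable variant of that pseudo-code using the sqlite3 command-line shell (the table and column names are the assumptions from the answer above; the -cmd/.mode options may behave slightly differently across sqlite3 versions):
#!/bin/bash
cd /path/to/db
last=$(cat lastIDFile 2>/dev/null || echo 0)
today=$(date +%F)
# dump only rows newer than the last backed-up SampleID, as INSERT statements
sqlite3 -cmd ".mode insert YourTable" db.sqlite "SELECT * FROM YourTable WHERE SampleID > $last;" > "backups/$today.sql"
# remember the highest SampleID we have now backed up
sqlite3 db.sqlite "SELECT IFNULL(MAX(SampleID), $last) FROM YourTable;" > lastIDFile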
I am using SQLite in a project. Currently I have collected 30 million samples and the size of the database file is around 2 GB! I also perform a backup using the sqlite3.exe application automatically every 12 hours. The size of the DB is increasing and I should switch to an incremental-backup solution. I wanted to ask if there is a way to do this for the SQLite DB. If not, I can migrate to another DBMS (like MySQL or Microsoft SQL Server, ...). Which software can I use to automate this incremental backup process? Is there any free software for this?
Incremental Backup Solution (Sqlite, MySQL, SQL Server)
I have a workaround: you can change the session locale before calling duplicity: declare -x LANG="en_US.UTF-8" It works for me; my default LANG is "es_ES.UTF-8" and duplicity fails, but with "en_US.UTF-8" it works. See: https://bugs.launchpad.net/ubuntu/+source/duplicity/+bug/1050509
I'm using Ubuntu 12.04 and Deja Backup. It encounters an error during the preparation of the backup:
Backup failed: unknown reason: File "/usr/lib/python2.7/dist-packages/duplicity/selection.py", line 187, in Iterate log.Debug(_("Selecting %s") % subpath.name) UnicodeDecodeError: 'utf8' codec can't decode byte 0xe9 in position 48: unexpected end of data
See below for the full report. I suspect this is because a filename was not encoded correctly. Do you have an idea how to correct this, or at least make Deja Backup ignore the problem? Thank you!
Traceback (most recent call last):
  File "/usr/bin/duplicity", line 1403, in <module>
    with_tempdir(main)
  File "/usr/bin/duplicity", line 1396, in with_tempdir
    fn()
  File "/usr/bin/duplicity", line 1366, in main
    full_backup(col_stats)
  File "/usr/bin/duplicity", line 491, in full_backup
    bytes_written = dummy_backup(tarblock_iter)
  File "/usr/bin/duplicity", line 197, in dummy_backup
    while tarblock_iter.next():
  File "/usr/lib/python2.7/dist-packages/duplicity/diffdir.py", line 507, in next
    result = self.process(self.input_iter.next(), size)
  File "/usr/lib/python2.7/dist-packages/duplicity/diffdir.py", line 188, in get_delta_iter
    for new_path, sig_path in collated:
  File "/usr/lib/python2.7/dist-packages/duplicity/diffdir.py", line 281, in collate2iters
    for relem1 in riter1:
  File "/usr/lib/python2.7/dist-packages/duplicity/selection.py", line 187, in Iterate
    log.Debug(_("Selecting %s") % subpath.name)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xe9 in position 48: unexpected end of data
deja backup: UnicodeDecodeError "unexpected end of data"
Adding a column to hold an entered date or a last-modified date (or both) could help. You could use a trigger to set the date. Comment: "My problem isn't exactly the date from where I start the process, but the way to connect data from several tables using the foreign keys, to keep the related rows in other tables in order to maintain full integrity of the data. But thanks for your help." – samthgreat_pt.
Hope that someone can give me a hint here. I have a MySQL (InnoDB) database with multiple tables related by foreign keys. My database has several gigabytes of data now. What I need to accomplish is: back up my actual database (OK, mysqldump will take care of that for me), then I want to "clean" the database and add back only the data for the last month (for example), maintaining the integrity of all the rows that I want to restore. That is, imagine tables 1,2,3,4,5 (I have about 50): tables 1 and 2 get new rows every minute, table 3 is a "configuration" table that feeds (via foreign key) tables 1 and 2, and tables 4 and 5 (these are not "configuration/base" tables) are related to tables 1 and 2. OK, I understand that I have to restore all my "configuration/base" tables and their rows in order to assure the integrity of the related "dependent" tables and to make it possible to use those configured values. But how do I keep only the last month of every other table's rows while maintaining full integrity of the restored rows? I thought about scripting something, possibly together with the MySQL binary logs. All of my "id" fields are auto-increment, and at any time I should be able to restore the complete database (in meltdown cases, or for "big picture" study/mining). Thanks a million for any help given!
MySQL backup and partial restored "snapshot" (time defined) with auto-increment id fields
I would assume that you have a server for your website. You could use the server as the endpoint and send the files over via a stream, or create your own set of APIs. Comments: "Wait a moment: my website is geheimerschatz.altervista.org and I do not know if I can create what you are saying. I am not paying for a website..." – Spode. "From your description, it would seem to be a no. And it seems there will be lots of uploads, given that the txt is generated from the application; it would also be better to store them properly for easy retrieval." – LZH. ":( Nooo. Please, make me see the light!! Mine is a free project that I would like to port to different OSes." – Spode.
I want to complete my application with a function that automatically downloads or uploads three .txt files which are generated by my application. How do I do it? Can I do it with one of my web sites? How? Which functions should I use? What I would like to do is similar to Evernote, so I know that what I want to do is possible in C#.
C# for WP7.5: How to do remote backup of a .txt file?
You can always use this: https://github.com/goncalossilva/rails3_acts_as_paranoid It will mark your records as deleted rather than really deleting them, and gives you the possibility of recovering them.
Sometimes I need to delete some data in Rails: Post.where(id: [123, 321]).delete_all And I need to back up the data in case something goes wrong. Are there any gems or code to help me do this? It should export to a YAML file, back up all the data destroyed, and be easy to revert (import the data again). I have already done database-level backups; what I need is a model/object-level backup.
How to delete and backup model in rails?
Read the documentation. Specifically, the second paragraph of the documentation for BackupRead: You must set the variable pointed to by lpContext to NULL before the first call to BackupRead for the specified file or directory. Your code is also in dire need of error handling--you do no checking for errors at all, when in fact many of these APIs may fail (check the documentation for each API to learn how the function may fail and what happens when it fails). You should also implement correct resource handling, e.g. by closing the file handles.
I am writing an application that is used to back up some specified files, using the backup API calls, i.e. CreateFile, BackupRead and WriteFile. I am getting "Access violation reading location" errors. I have attached the code below.
#include <windows.h>
int main()
{
    HANDLE hInput, hOutput;
    //m_filename is a variable holding the file path to read from
    hInput = CreateFile(L"C:\\Key.txt", GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL);
    //strLocation contains the path of the file I want to create.
    hOutput = CreateFile(L"C:\\tmp\\", GENERIC_WRITE, NULL, NULL, CREATE_ALWAYS, NULL, NULL);
    DWORD dwBytesToRead = 1024 * 1024 * 10;
    BYTE *buffer;
    buffer = new BYTE[dwBytesToRead];
    BOOL bReadSuccess = false, bWriteSuccess = false;
    DWORD dwBytesRead, dwBytesWritten;
    LPVOID lpContext;
    //Now comes the important bit:
    do
    {
        bReadSuccess = BackupRead(hInput, buffer, sizeof(BYTE) *dwBytesToRead, &dwBytesRead, false, true, &lpContext);
        bWriteSuccess = WriteFile(hOutput, buffer, sizeof(BYTE) *dwBytesRead, &dwBytesWritten, NULL);
    } while(dwBytesRead == dwBytesToRead);
    return 0;
}
Can anyone suggest how to use these APIs? Thanks.
How to back up using the backup APIs in C++ [closed]
This issue seems to be caused by associating the existing public domain name with the restored SharePoint - 80 application. When accessing all the SharePoint pages (and the problematic functional elements as well) via the http://HOSTNAME_HERE/ address, everything operates properly. When accessing all the SharePoint pages (and the problematic functional elements as well) via the http://PUBLIC_DOMATIN_DAME/ address, this issue occurs. I am going to research this issue separately.
I have restored a SharePoint - 80 application backup on a new server and mapped the existing domain name to this server. The default list view selector does not show the available pre-defined views/pages, but returns the 500 Internal Server Error - http://SERVER_NAME/SUB_SITE/_layouts/vsmenu.aspx?List=LIST_NAME... error instead (captured via Firebug).
Unable to expand list view selector
Since you have not started to use BigCouch yet, and it looks like you need some features that are available out of the box in Couchbase (auto-sharding, administration console, ...), why not go with Couchbase?
I have a few questions about BigCouch that I'm interested in getting answered before starting to use it:
Do I need to choose my shard key carefully, or can I just use an auto-generated GUID? I'm starting with a single server with 1 replica, but I want to be ready when I need to add another shard.
Is there any GUI for managing the cluster like Couchbase has - something similar for administering the DB?
How can I back up the data when hosting BigCouch on EC2 (i.e. snapshots)?
Thanks
BigCouch IDs and Backup data on EC2
This is the wrong approach, as you are open to fraud. On rooted devices people could easily replace or modify your "wallet" file, and without your own records you will not be able to catch that. And BackupManager is, well... for backups, so I don't really see a correlation between your needs and backups.
I'm making an Android application that uses a coin-based in-game currency (which can be bought using in-app billing), and with that virtual currency the user can buy items that can only be bought once. To manage every purchase and how many coins each user has, I initially thought of using a table for every purchase and user on the server where I keep my item list. But since my server is a low-cost one and I think/hope there will be a lot of transactions, the server will not be able to deal with every user request in time (answering update lists, managing purchases, sending the items to the user and so forth). Recently I found out about BackupManager, and I was thinking: what if I used a local file to save the user's coins, the updated list of items and the purchases the user has made, instead of using the server? Then when one of these changes (when purchasing a new item, when updating the list of items, etc.) I would update the local file and the backup using BackupManager, without even contacting my server. Is that doable? Is BackupManager designed to work with very frequent backups like this?
Using BackupManager on a Android PHP-MySQL Server
mysqldump --user=root --password=PASSWORD forums | gzip | ncftpput -c -u root -p PASSWORD 76.121.167.17 backup-$(date +%Y-%m-%d-%s).sql.gz I found a fix: I just opened port 21 on my modem and turned on DMZ hosting :) Thank you @MarcB!!!! Make sure ncftp is installed :)
Okay, I use this script here to make a backup of my database: mysqldump -u root -h localhost -pPASSWORD forums | gzip -9 > backup-$(date +%Y-%m-%d).sql.gz This is used in a daily cron. But I need to download this remotely or through an FTP program every day as well, so I have a physical copy of it on my home hard drive. Is this possible? I know it is; can anyone tell me a quick way to do it?
How to Download my MYSQL Backup?
In addition to the vhosts and MySQL, you would need to replicate the Plesk management DB ("psa"), the DNS configuration, /etc/psa, /etc/httpd and /etc/proftpd*. There may be some other folders as well - you may need to watch for changes for a while.
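A hedged sketch of what that sync could look like (the slave hostname is a placeholder and the paths come from the answer above plus a typical Plesk layout, so verify them against your own installation; the psa database is dumped using the Plesk admin password stored in /etc/psa/.psa.shadow):
# copy the web content and the Plesk/Apache/FTP configuration, preserving full paths (-R)
rsync -aR --delete /var/www/vhosts /etc/psa /etc/httpd /etc/proftpd* root@slave:/
# replicate the Plesk management database so the slave knows about newly added domains
mysqldump -uadmin -p$(cat /etc/psa/.psa.shadow) psa | ssh root@slave "mysql -uadmin -p\$(cat /etc/psa/.psa.shadow) psa"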
I have 2 Linux servers (CentOS) with Plesk 10 and two licenses. I want to make the first server the master and the second a mirror slave, using rsync. I know how to transfer some folders (domains and databases), but I have a question: if I create a new domain on the master server, how can I "update" the slave server? If I rsync the vhosts folder and the mysql folder, the slave Plesk doesn't know that I have added a new domain. Do I have to rsync the psa folder too? Can someone help me find all the folders to rsync from the master server to the slave server? Thanks!
rsync for mirror server with plesk
You can have a cron job set up to back up your database and dump it to a directory of your choosing: mysqldump --all-databases --skip-lock-tables | gzip -9 > /your/backup/dir/here/`date +%Y%m%d`_backup.sql.gz What you do from there is up to you, but I agree that you should not email it due to the size. Perhaps you can use SCP to send it to a different server.
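A hedged sketch of that idea as a single daily crontab entry (the paths, schedule and destination host are placeholders; it assumes key-based SSH, and note that % has to be escaped as \% inside crontab):
0 3 * * * mysqldump --all-databases --skip-lock-tables | gzip -9 > /backup/`date +\%Y\%m\%d`_backup.sql.gz && scp /backup/`date +\%Y\%m\%d`_backup.sql.gz backup@offsite.example.com:/backups/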
I am setting up a new server with WHM/cPanel installed. It is very important that I can take a full MySQL backup once or twice a day. Right now the databases are pretty small (20 MB), but the volume will increase rapidly as soon as we get more customers. I know there is a possibility to create a cron job and have the backup emailed. However, I think it is a shitty solution due to the future size of these backups. What is your best advice regarding daily MySQL backups?
WHM / cPanel: mySQL back-up
It sounds like you never CREATEd the column families in the new keyspace.
I need to copy data from one keyspace to another keyspace; the keyspaces have different names. I made a snapshot of one keyspace and did everything as described in the DataStax docs: http://www.datastax.com/docs/1.1/operations/backup_restore After starting, Cassandra sees the old column families but doesn't see the new column families in the other keyspace. What am I doing wrong?
Cassandra DB doesn't see new column families after backup restore
Answered it myself:
#!/bin/bash
src="/backup/cpbackup/daily"
for dir in `ls "$src/"`
do
    if [ -d "$src/$dir" ]; then
        # look for mysql directory
        mypath="$src/$dir/mysql"
        if [ -d $mypath ]; then
            mysqlfile=$mypath/$dir".sql"
            rsync -vau --progress --stats --rsh=ssh $mysqlfile [email protected]:servername/$dir
        fi
        # look for images directory
        impath="$src/$dir/homedir/siteassets"
        if [ -d $impath ]; then
            rsync -vau --progress --stats --rsh=ssh $impath [email protected]:servername/$dir
        fi
    fi
done
I'm looking to rsync only specific files and directories from a cPanel backup to a remote server. The basic structure is: /backup/cpbackup/daily/USERNAME From within the USERNAME directory I want to back up ../USERNAME/mysql/USERNAME.sql and also a folder (containing files and subfolders) ../USERNAME/homedir/siteassets so I end up on my remote server with: /USERNAME/mysql/USERNAME.sql /USERNAME/homedir/siteassets I could use wildcards: rsync /backup/cpbackup/daily/*/mysql/*.sql [email protected]:servername/ but this won't give me the USERNAME folder remotely and will mean all the files end up getting merged. I assume this is possible by iterating through the folders with bash or something like that, but that's not my strong point.
Loop through directories and rsync contents
Upload the individual files to the server using the same method you used to upload your compressed file. The key is to NOT send the server a compressed file if you are unable to decompress it on the server.
I have a backup of my website in a WinRAR file, and I want to upload it to my new website... So I uploaded that WinRAR file via my new website's panel, but it didn't work. Actually, I'm moving my website to another hosting, so I downloaded the backup as a WinRAR file and uploaded it to the new hosting; it uploaded successfully but it's not working - it shows the default page of that hosting. Check this: http://kownleg.comyr.com I don't know what this error is all about, I'm a newbie... please help. Any suggestions on why this is happening, or how we can upload a WinRAR backup file?
How to Upload WinRAR Backup File
Obviously this is solvable using a two-pass approach: detect and remove corrupt tar files, then remove all but the latest N files as described in the referenced question. The first pass could be performed with something along the lines of:
for t in *.tar; do
    if program_that_checks_tar_file_for_integrity $t; then
        : # OK
    else
        rm $t
    fi
done
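A hedged way to fill in the integrity-check placeholder above and do the second pass (GNU ls/tail/xargs assumed; tar -tf exits non-zero if it cannot list the archive, which catches most corruption - try it with echo in place of rm first):
# pass 1: delete any tar archive that cannot be listed
for t in *.tar; do
    tar -tf "$t" > /dev/null 2>&1 || rm -- "$t"
done
# pass 2: keep only the two newest remaining archives
ls -t *.tar | tail -n +3 | xargs -r rm --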
Possible duplicate: "Delete all but the most recent X files in bash". The scenario is as follows: two tar files will be created in one directory per day, but I need only the latest two files, so how do I delete the other files automatically each day? Is it possible to write this script using pure shell commands, and not a high-level language such as Perl, Python or Ruby? This issue is a bit similar to "FTP - Only want to keep latest 10 files - delete LRU" and "how to delete all files except the latest three in a folder", but mine also needs to test whether a tar file is corrupt: if the newer tar file is corrupt, I would not keep it, but preserve the older ones. So what should the script look like?
shell script to only keep the latest two tar files, but delete all other tar files? [duplicate]
Do you have FTP access to your site? If you access it with an FTP program like FileZilla or SmartFTP you should be able to browse to the uploads folder and right-click to bring up the options and change the permissions. If you don't have your FTP details, or have issues accessing it, then the best bet is probably to email your web host with the above information and ask if they can change the permissions on that folder for you.
When backing up the whole site, this is the error: Backing up with BackupBuddy v2.2.33... 15:10:14: Error #5445589. Invalid backup serial (9au6kfm0rj). Verify backup directory writer permission. Fatal error. A fatal error has been encountered. The backup has halted. The solution from support: adjust permissions to allow write & directory creation access to your uploads folder, i.e. /www/wp-content/uploads/ How can I adjust the permissions? Any ideas please?
BackupBuddy Plugin Error
I see no problem with this; the Mercurial hook in Buildbot works this way, doesn't it?
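A hedged sketch of such a changegroup hook on the RhodeCode side (the remote URL is a placeholder; this goes in the served repository's .hg/hgrc, and the trailing || true keeps a temporarily unreachable client server from failing your own push):
[hooks]
changegroup.mirror = hg push ssh://hg@client-server//srv/hg/project || true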
I have a mercurial server, running RhodeCode, that I commit my code to. My client has a Redmine installation and has requested that code I modify for them be stored on their server (understandable). I would like to still commit to RhodeCode and after a successful commit, push these changes to their repository automatically. They have their code in both an SVN repository and a mercurial repository. I am allowed to commit to either - and they handle the synchronization between the two. My assumption is that it'd be easier to push to a mercurial repository. I have a changegroup hook in mind, but I have a few technical questions on how this should work. What is the best way to handle both receiving and pushing out to an external repository though? User ----> RhodeCode ----> Redmine At the RhodeCode step/changegroup hook, how do I forward on my changes? Can I do it directly from the main repository or am I forced to clone it into another directory and push that to the client? Is there a better way to maintain my master repository and push my client's changes on?
Clone mercurial repository to external repository
For Oracle you achieve this by using SQL*Plus to connect to the database and query the data. The results can be written to a file with the spool command:
sqlplus user/passwd
spool exp.txt
select * from bla;
spool off
quit
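For the SQL Server side, a hedged sketch over SSH (host, credentials, database and query are placeholders; it assumes the sqlcmd tool is installed on the machine you SSH into):
ssh user@dbhost "sqlcmd -S localhost -U sa -P 'password' -d mydb -Q 'SELECT * FROM bla' -o /tmp/exp.txt -s ',' -W"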
How can I export the data in an MSSQL Server or Oracle database, using SSH to connect? Is it possible for me to type some command and export the data into, for example, a text file? Cheers
Export MSSQL and Oracle database data with SSH
You should use rsync. It's not a snapshot, but if you don't want to use LVM, it's a start. Comment: "How should I deal with locked files (for example the DB files of MySQL)?" – The King.
I'm not familiar with Linux and the Debian system; I work mostly with Windows computers. But one of my clients uses a Debian Linux web server and I need to upgrade the server's RAID array. Before I do anything with the server I would need a full system backup. I searched the internet for a solution, including this site, but I haven't found an acceptable answer. I would need something like an LVM snapshot, but I don't want to convert everything to LVM partitions just for a backup. I found dd for making a bit-by-bit copy of the hard drive, but I would have to unmount the drive for it, and I don't want too much service downtime; the reconfiguration of the RAID will be enough downtime. I found solutions like tarring the necessary files and sending them through SSH, but that isn't a full system backup; I already do a backup of the files and settings every month. I need a solution that makes an easily restorable image file of the server for emergencies. If the RAID reconfiguration fails I will need an SOS restoration of the full system to the old configuration.
Full Online System Backup Debian
Use mysqldump: $ mysqldump my_database > my_database_dump.sql
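For the per-account loop the question below describes, a hedged sketch (the destination layout is the one named in the question, everything else is an assumption; also note that tarring /var/lib/mysql under a running server gives inconsistent copies, so dumping each database with mysqldump is usually safer):
#!/bin/bash
today=$(date +%F)
# one tar.gz per database directory found under /var/lib/mysql
for d in /var/lib/mysql/*/; do
    db=$(basename "$d")
    mkdir -p "/mnt/mysql_backup/$today/$db"
    tar czf "/mnt/mysql_backup/$today/$db/$db.tar.gz" -C /var/lib/mysql "$db"
done
# one tar.gz per cPanel account's public_html
for h in /home/*/public_html; do
    [ -d "$h" ] || continue
    acct=$(basename "$(dirname "$h")")
    mkdir -p "/mnt/files_backup/$today/$acct"
    tar czf "/mnt/files_backup/$today/$acct/public_html.tar.gz" -C "/home/$acct" public_html
done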
On my dedicated server I have a 5 GB NAS I want to back up to, eventually backing up the MySQL tables and the public_html of each cPanel account. How could I read the contents of the MySQL data folder (/var/lib/mysql) and, for each sub-folder found, make a tar.gz file of it and copy it to /mnt/mysql_backup/date/dbname? Then go to /home/ and walk through each folder to back up the /public_html/ folder where it exists, copying the files to /mnt/files_backup/date/cpanelfolder. If the script completes successfully, delete the old backups and keep only the one just created. Any help would be appreciated.
Bash mysql / public_html only backup script
If your job operates as you have described, there is a risk that new data could be added to the table after the SELECT used to generate the email but before the DELETE. The simplest way to prevent this might be to run the two statements inside a transaction, assuming the database engine you're using supports transactions. Alternatively, some database engines support returning data from a DELETE statement in a single atomic transaction - Postgres being one, with the RETURNING clause. If that option isn't available to you, another solution would be to drive the DELETE using a high-water date/time or auto-incrementing identity column in the source table. Implementing this might require a schema change to your source table. Pseudo-code would be something like:
SELECT <variable> = max(<identity>) FROM source_table
SELECT <columns> FROM source_table WHERE <identity> <= <variable>
DELETE source_table WHERE <identity> <= <variable>
Any data added to source_table between the first SELECT and the DELETE will have a higher identity value than is stored in <variable>, so it will not be removed.
I am setting up a cron job that will run every 15 minutes and, if there is new data in a particular table, it will back up that table, email it and delete it. Do I have to be concerned that data will be written to the database at the same time as the backup is running, so that it backs up "half" the data and then deletes the rest of it?
Can I lose data that is being written to the database during a cron job backup?
The specific answer depends entirely on the flavor of your database engine. But the general answer is that you need to SELECT the definition from your database's data catalog (metadata). The function and procedure definitions will probably come out intact, but the view definition may come out as just the SELECT statement - you might have to prefix it with the CREATE VIEW XXXXXXX AS part.
Is there any way I can use batch files to get a backup of selected scripts from the SQL database? Say I have one stored procedure, one function and one view in a folder: sp1.sql vie1.sql fn1.sql Before running the batch file I want to take a backup of these objects. Kindly note: I do not want to take an entire database backup, just the provided scripts alone. Help me to achieve this, please...
Batch file to get sql backup scripts
I understand it is an old thread; however, I had some success by looking at the raw SQL file in a text editor. There I found that some of my rows were indeed duplicated, and I was able to remove the duplicates by hand. Luckily for me there were only five or six duplicate lines. It might be a good place to start debugging.
I have some serious problems with the SQL backup of a shop, which I only have in the form of an SQL backup. Every time I try to import the backup with phpMyAdmin (even into a new database), I get this error: INSERT INTO catalog_category_entity_datetime VALUES (1,3,52,0,3,NULL),(2,3,53,0,3,NULL),(9,3,52,0,7,NULL),(10,3,53,0,7,NULL),(11,3,52,0,8,NULL),(12,3,53,0,8,NULL),(13,3,52,0,9,NULL),(14,3,53,0,9,NULL),(15,3,52,0,10,NULL),(16,3,53,0,10,NULL),(25,3,52,0,15,NULL),(26,3,53,0,15,NULL),(33,3,52,0,19,NULL),(34,3,53,0,19,NULL),(35,3,52,0,20,NULL),(36,3,53,0,20,NULL),(37,3,52,0,21,NULL),(38,3,53,0,21,NULL),(203,3,52,0,104,NULL),(204,3,53,0,104,NULL),(207,3,52,0,106,NULL),(208,3,53,0,106,NULL),(209,3,52,0,107,NULL),(210,3,53,0,107,NULL),(211,3,52,0,108,NULL),(212,3,53,0,108,NULL),(213,3,52,0,109,NULL),(214,3,53,0,109,NULL),(215,3,52,0,110,NULL),(216,3,53,0,110,NULL),(217,3,52,0,111,NULL),(218,3,53,0,111,NULL),(219,3,52,0,112,NULL),(220,3,53,0,112,NULL),(221,3,52,0,113,NULL),(222,3,53,0,113,NULL),(223,3,52,0,114,NULL),(224,3,53,0,114,NULL),(225,3,52,0,115,NULL),(226,3,53,0,115,NULL),(227,3,52,0,116,NULL),(228,3,53,0,116,NULL),(229,3,52,0,117,NULL),(230,3,53,0,117,NULL),(231,3,52,0,118,N[...] MySQL responds: #1062 - Duplicate entry '3-249-52-0' for key 'B3A14FF699AA1FA4D4DDA0B048582A7A' There are several thousand products and almost a hundred categories I need from that file! I tried disabling the foreign key checks, but nothing seems to work. I also took just the tables with the prefixes "category_" and "eav_" and put them in a separate SQL file, but the same thing happens there... (I also could not find "3-249-52-0" or the key in the SQL file.) Anyone got an idea how to fix this? Thanks
Magento - Duplicate entry when importing backup
I don't understand how your code relates to your stated goal in the first paragraph of your question. If your goal is really as simple as you stated, then you don't need a batch file. You just need to schedule the following command to run every 10 minutes:

xcopy /d D:\Source\* D:\Target

The above command will copy only new files or files that have been modified since the last backup. If your backup requirements become more complicated then you should probably switch over to ROBOCOPY. It has a wealth of options that would probably meet your needs. Still no batch file required.
answered Jul 25, 2012 by dbenham
I need to create a backup batch script for a directory. It will update every 10 minutes. I would like it to only update files that have been added to the directory or modified after the last backup. I tried to use this script: @ECHO OFF SET srcdir=D:\Source SET tgtdir=D:\Target SET /A topcnt=3 SET /A cnt=0 FOR /F "tokens=*" %%F IN ('DIR /A-D /OD /TW /B "%srcdir%"') DO ( SET /A cnt+=1 SETLOCAL EnableDelayedExpansion IF !cnt! GTR !topcnt! (ENDLOCAL & GOTO :EOF) ENDLOCAL COPY "%srcdir%\%%F" "%tgtdir%" ) The problems I have is that it only works in the directory that the batch file is in, which will return the most recent three files including the batch file itself. Additionally, the copy function is not working. The program is not connecting the srcdir with the file extension, thus the program cannot determine what file to copy. Please advise.
Using Batch Files to update and backup a directory
If your schema tables contain BLOB fields, you can sometimes get this error. I found one solution: restore the backup from the command line as follows:

mysql -u'username' -p'password' < pathOfMyBackup.sql

This solved the problem for me.
I am want to restore mysql backup file . My tables have lots of Blob fields so that its size is approx 3 GB. When I restore 2.5 GB backup file it is restored successfully but I do not understand what is the problem with this. I also tried to increase max_allowed_packet to 100MB to 1024MB but it did not worked... Suggest me solution if anyone had this error and solved it. thanks in advance...
Unknown object in mysql backup file error
These are the relevant documentation topics:
Restore an Application Tier Server
Restore Data To The Same Location
Restore Data To A Different Server
answered Jul 24, 2012 by Dylan Smith
Comment: I will need to add that we will be putting it on a server with a different name (domain requirements). – Ernie, Jul 24, 2012
I have seen a lot of discussions on how to move TFS2010, the application tier and DB tier. We have the DB sitting on the same server as the App Tier. I am going to test the process of recovering the TFS2010 DB in a catastrophic failure. Therefore I will have: SQL backups of all TFS and WSS DB's I will have the install software for all the applications (TFS etc) Is there a document that I can use to outline the process of installing the TFS2010 App tier and make it ready to accept the backup of my SQL TFS2010?
TFS2010 Disaster Recovery from Backups Only
This seems similar to an existing post on SO. The solution described there is to increase max_allowed_packet in the MySQL server configuration.
answered Jul 23, 2012 by jsist
Comments:
I increased the max_allowed_packet size to 1024M but the problem remains the same on Ubuntu. – pbhle
Do you have any BLOB fields in your tables? – jsist
There are many other solutions given in the SO link as well; read it fully, maybe your problem will be solved by one of those. – jsist
Did you restart your MySQL server after changing the value? – jsist
Yes, I have BLOB fields in my tables, and I restarted MySQL after increasing the packet size. – pbhle
I am restoring mysql backup file which is of size 2.6GB...but while restoring I am getting "Unknown object in backup file" exception . How can i solve this problem. and there are no logs in mysql administrator. Thanks,
Unknown object in backup file in MySQL 5.1.41-3 ubuntu12.10
This is only for backing up all databases:

DECLARE @name VARCHAR(50)        -- database name
DECLARE @path VARCHAR(256)       -- path for backup files
DECLARE @fileName VARCHAR(256)   -- filename for backup
DECLARE @fileDate VARCHAR(20)    -- used for file name

SET @path = 'C:\Backup\'
SELECT @fileDate = CONVERT(VARCHAR(20), GETDATE(), 112)

DECLARE db_cursor CURSOR FOR
    SELECT name FROM master.dbo.sysdatabases
    WHERE name NOT IN ('master','model','msdb','tempdb')

OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @name

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @fileName = @path + @name + '_' + @fileDate + '.BAK'
    BACKUP DATABASE @name TO DISK = @fileName
    FETCH NEXT FROM db_cursor INTO @name
END

CLOSE db_cursor
DEALLOCATE db_cursor

answered Oct 13, 2012 by Murugan
I am running into this error when trying to Backup a database: "The media set has 2 media families but only 1 are provided. All members must be provided." Please note this is on BACKUP not on restore. There are a lot of topics on this error for RESTORE, but I didn't find any for BACKUP. I am using this T.SQL on Sql Server 2005: backup database dtplog TO DISK='e:\dtplog.bak' So it looks like SQL Server has some kind of setting specified there are multiple backup devices for this database. For some databases I don't get this error, but for some I do. Any idea what's happening?
SQL Server Backup (not restore!) error: The media set has 2 media families but only 1 are provided. All members must be provided
You can set RETAINDAYS to the number of days you wish to keep that backup. Since you are using NOSKIP (and NOFORMAT), SQL Server will not overwrite that backup until it has expired. At that point, you could also institute a naming standard like you are mentioning, or set a maintenance plan to erase backups older than a certain age.
answered Jul 12, 2012 by Tim Lehner
Comments:
Thanks for your reply, can you show me how to make it backup with incremental filenames like I mentioned? – Guy Cohen
You would probably set up a job to do this. I might recommend starting by creating a maintenance plan, which includes a wizard that walks you through some of these tasks (including customization like you're asking about). You can read a great primer by Brad McGahee. – Tim Lehner
I have this script to backup my sql server 2000 database: BACKUP DATABASE [CRM] TO DISK = N'd:\CRM_BACKUP\crm.bak' WITH NOINIT, NOUNLOAD, NAME = N'GUY_CRM_BACKUP', NOSKIP, STATS = 10, NOFORMAT I want the backup to be for several days. I thought about giving the name of the backup the day of the month e.g. crm01.bak, crm02.bak.... crm30 or crm31.bak. How can I do that please? TIA Guy
sql server 2000 backup with agent
Well, since I don't know for sure, I've checked the Redmine manual. Maybe you can create a repository with everything in there and then try to get all the data files back to the right location on your Redmine server: http://www.redmine.org/projects/redmine/wiki/RedmineRepositories It's a pity that there is no documentation about backups or anything similar. Good luck trying to fix it!
answered Jul 10, 2012 by Sanshine
Comments:
Thanks for the answer, but linking Redmine to a repository won't help me. – dfed
OK, I thought maybe it was at least a way to get your data back onto the server. – Sanshine
We lost server with redmine. Now we have old database dump and email notifications by redmine and svn history with descriptions (descriptions contains num of issue and some comment). Does anybody know any way to restore some of lost information?
Restoring redmine issues by email and svn
Check if all SharePoint Web Applications settings are correct. In the TFS Administration Console, go to SharePoint Web Applications and, for each web app, click Verify Path in the Settings window. This forum post led me to the solution to my problem. In my case the issue wasn't with the web application URL, but with the port for the Central Administration URL. I had recently restored TFS and SharePoint, and the old port number was still in these settings. Once the port number was set correctly, the Backup Plan Wizard was able to launch successfully.
We have TFS 2010 and SP1 and TFS Power Tools December 2011 installed. When I try to run Backup Plan Wizard from TFS Admin Console, the wizard window never gets displayed. Admin Console freezes for a bit and than nothing, but when I close the Admin Console a dialog pops asking do I want to close the wizard. Has anyone had this issue? Is there a way to run the Backup wizard outside TFS Admin Console? EDIT: Reinstall of Power Tools didn't help and I've also tried with other users (same thing happens). There are no errors reported in event log.
Cannot start TFS 2010 Backup Plan Wizard
Here is a wiki page that answers much of your question. I would read the whole page to grasp one concept at a time, but the "snapshot backup" is the rsync-script-to-beat-all-rsync-scripts: it does a TimeMachine-like backup with differential storage going backwards in time, which is quite handy. This is great if you need chronologically-aware but minimally-sized backups. Arch (the distro this wiki covers) does a really nice thing where you can just drop your scripts into a known location; you will have to adapt that to calling a script as a cron job. Here is a fairly comprehensive introduction to cron. I would also like to point out that rsync's compression operates on transmission, not on storage. The file should be identical on your backup disk, but may take less bandwidth to transfer.
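To make the "snapshot backup" idea concrete, here is a minimal sketch of a cron-driven rsync snapshot script. It is not the Arch wiki script itself; the paths, schedule and layout are placeholder assumptions you would adapt:

#!/bin/sh
# Rotating snapshot sketch: unchanged files are hard-linked against the
# previous snapshot, so every snapshot looks complete but only changed
# files consume new space.
SRC=/var/www/files/
DEST=/backup/snapshots
DATE=$(date +%Y-%m-%d)

# Most recent previous snapshot (ISO dates sort correctly by name)
LATEST=$(ls -1d "$DEST"/*/ 2>/dev/null | tail -n 1)

mkdir -p "$DEST/$DATE"
rsync -a --delete ${LATEST:+--link-dest="$LATEST"} "$SRC" "$DEST/$DATE/"

A weekly crontab entry for it could look like:

0 3 * * 0 /usr/local/bin/snapshot.sh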
I'm in need of suggestions for backing up a very large file directory (500+ GB) on a weekly basis. (In a LAMP environment) I've read about rsync, but I'm not sure how to trigger that with CRON. Also, does anyone happen to know how much the compression in rsync shrinks the filesize? (Lets say of a 3MB .jpeg). I only ask because I can't tell how large of a backup server I will need. Just pointing me in the right direction will be enough, I don't expect anyone to do my homework for me. Thanks in advance for any help!
Backing up large file directory
Open the log file, find the date of the backup you want, then use this script:

RESTORE DATABASE [restoretest]
FROM DISK = N'C:\folder\backfile.bak'
WITH FILE = 1494
GO

Hurray, a working answer. Buzz
answered Jun 1, 2012 by Bz Burr
we have a huge (17 gig) MSDE backup file that have discovered has been appending nightly for the last 4 years. We had a database failure last night and now we need to restore the database. We tried a standard restore but it restored the database info all the way back to 2008. Which isnt an accurate reflection of the state of the database as it was last backed up. How do we restore back to just the last backup (or even a date/time) and not all the way back to 2008? cheers Buzz
How to restore MSDE database to last backup
Use a cron job to run a bash script that: mysqldumps the databases, tars the files (tar -cvf), and wputs it all to your remote server. You can also set a variable like now=$(date +"%Y_%m_%d") to use in your file names.
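As a rough illustration of that outline - the database names, paths, credentials and FTP URL below are placeholders, not a tested configuration:

#!/bin/bash
now=$(date +"%Y_%m_%d")

# Dump the databases (credentials are better kept in ~/.my.cnf than inline)
mysqldump -u backupuser -p'secret' --databases db1 db2 db3 > /tmp/db_$now.sql

# Archive the site files together with the dump, then compress
tar -cvf /tmp/site_$now.tar /var/www /tmp/db_$now.sql
gzip /tmp/site_$now.tar

# Push the archive to the remote server
wput /tmp/site_$now.tar.gz ftp://user:pass@backup.example.com/backups/

Scheduled nightly with something like: 0 2 * * * /usr/local/bin/nightly-backup.sh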
I am trying to make a complete file & mySQL backup of my site each night. I was thinking the best way of doing it would be to have a cronjob run each night that would login to a remote server and replicate all of the local files. Then, I need to figure out a way to take backups of all the mysql databases (currently there are three) and upload them all to the remote server as well. This sounds like a huge project and I don't know whether to reinvent the wheel here, or if there is some script out there which basically does the same thing already.
Replicate / Backup Entire Site & SQL Databases To Remote Server With PHP Cronjob
You may want to use the --replace option with mysqldump: http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html#option_mysqldump_replace Something like:

mysqldump --replace -uroot -p123456 mydb > export.sql

answered May 30, 2012 by rkosegi
I have an SQL back up file for two full fields (ID, and COUNT) from a table (CATEGORIES) and I need to import them into another identical table on another backup database where COUNT is currently default to 0. When restoring I get an error message that the IDs do exist, which is true but all I need to restore/update is each COUNT into the corresponding ID field. How can I accomplish this with mysqldump or through phmyadmin. The backup SQL file includes over a Million pairs of values (ID, COUNT).
Conditionally importing a MySQL dump file
1. Store the timestamp of when each row was changed in your tables (call it the change_timestamp column).
2. Don't delete rows - instead, mark them with a "D" (deleted) status. This way you don't have to break your head/back figuring out which rows were deleted since the last backup.
3. Have a "backup times" table.
When you run a backup:
Save the current run time into the "backup times" table.
Retrieve the last 2 rows from the "backup times" table (if there are fewer than 2 rows, do a full backup).
Back up the main-table rows where change_timestamp is between the last 2 backup timestamps.
You can do #1/#2 with judicious use of audit tables, but it's a bit harder.
answered May 29, 2012 by DVK
I have several Sqlite databases, which will be updated daily. So I need to do daily backup, in case if any crash happens so I can restore. However, it is costly to backup the whole database everyday, so I'm thinking of doing incremental backup (only backup those information between two different dates). Currently my database updating process is done in Perl, so I am wondering: does anyone know how can I perform incremental database backup/ recovery in Perl/ Matlab/ Java using script? Thank you! Yours Sincerely, Qiao.
Database Incremental Backup and Restore (how to implement in Perl or Java)
Did you check the size of the log file (database properties, files, check the log file size)? If it is taking most of the total space, you might opt to back up locally (just in case), change the recovery model to simple, compact the database, and back it up; that way the backup might be of a size far more manageable for the transfer.
answered May 24, 2012 by eddo
Comments:
Even if it was reduced to 1 GB it would be slow; it would mean at least stopping the site for hours... – Mazen Abu Taweelih
Well, you can take the time you need to transfer the DB to server B as I suggested, set the recovery model of server A to full, and then update the state of server B by loading the changes from the log file of server A, which would be of far smaller size. If the size is still considerable, you can loop as needed. On the final run, which will take very little time, you bring the web app on server A down before transferring the file so that you are sure no data is lost. – eddo
I am having my sql server which is almost 10 GB of data in server A . and I need to get it through a backup restored in server B . however , the problem is that server A and server B are both in USA and i am in asia , and hosting company is asking me for $100 for that transfer , is there a way that i can make the backup between them quickly ? Note : I have sql server management studio to both servers . Their suggested solution is that i download the database and reupload it , but that seems extremely hard and very long process.
sql server database backup
Using BackupAgentHelper only uses the in-built Google backup system, and won't do anything useful for you if you don't have a Google Account on the device. SharedPreferences files are just regular files in your app's data directory. If you want to copy them to the SD card, or your own server, on command, you can do that using the normal File API. It sounds like for what you're trying to achieve, it might be easiest to just call Context.fileList() to get an array of files in your app's internal storage, and then iterate over that array, copying each file to a folder on the SD card.
answered Mar 21, 2013 by Dan Hulme
I am developing an App for a lot of Devices, so that they aren't registered to a Google Account. But I want to backup my SharedPreferences somewhere. In best case in a folder on the device. So I tried to use the BackupAgentHelper, but this backups only to Google or can it backup somewhere else ? Alternatively is there a way to copy the SharedPreferences to a place on the sdcard? The problem is when I doing a update to my app (not trough the market) I lose all my data. PS: Sorry for my bad englisch
Backup SharedPreferences in Android to local Server or Device
Popular question. You can find a few different ways to copy files on Android devices, and an SQLite database is a simple, self-contained file that you copy/replace like most others. This code works dandy: "How do I backup a database file to the SD card on Android?". You can find other methods in the Related column as well.
answered May 15, 2012 by Sam
I would like to know how I can make a copy of my current sqlite db to a separate folder. Also, I need to know how I can replace a brand new database by the stored db. I can't use the Android backup engine because I will also copy some other files and folders. Any help is appreciatted!
Copy and replace (like backup) android database
Having earned the Tumbleweed badge (= it's a boring question), here is some information to answer it... The way to set the "do not back-up" flag changed between 5.0.1 and 5.1. The release notes for the iOS 5.1 SDK have the following entry under "Backup": iOS 5.1 introduces a new API to mark files or directories that should not be backed up. For NSURL objects, add the NSURLIsExcludedFromBackupKey attribute to prevent the corresponding file from being backed up. For CFURLRef objects, use the corresponding kCFURLIsExcludedFromBackupKey attribute. Apps running on iOS 5.1 and later must use the newer attributes and not add the com.apple.MobileBackup extended attribute directly, as previously documented. The com.apple.MobileBackup extended attribute is deprecated and support for it may be removed in a future release. Note that iCloud was introduced in iOS 5.01 and this change was introduced in 5.1, which means that the app must adapt to the specific iOS version running on the device. One of our developers found the following Gist for code that handles pre- and post-iOS 5.1 devices.
answered May 31, 2012 by Colin
Our applications set the "do not back-up" flag as per Apple's requirements. Or at least we thought so. A recent submission has been rejected because the reviewer found a file without the flag set. We tested, re-tested and tested again and see that all of our files are created with the "do not back-up" flag. Hmmm! This is not our first application using the same code base. We've had many others pass through with no issues even some quite recently. So could it be a sequencing problem? We are copying a database file out of the download bundle that is used as the application's starting content; this content is then updated as the user gets more data. The initial database file can be large - as big as 2MB - depending on the application. We open a new file in the Documents folder, copy the database contents to the new file, close it, and then set the "do not back-up" flag. Instead should we create an empty file and then immediately set the "do not back-up" flag, prior to opening it to overwrite the empty file with the database contents from the bundle? I've asked the Apple reviewers this question but have not received an answer yet. I could simply try the different sequence and see what happens in the re-review, but I'd prefer to know what I should be doing and do it, rather than guess what the problem is and shoot in the dark. So does anyone know of a sure-fire "Apple approved" way to copy out a (database) file from the bundle into the Documents directory and set the "do not back-up" flag? Can anyone shed light on any similar rejections and what they did to please the reviewers?
iPhone: Sequencing issue with creating file with "do not back-up" flag in Documents folder?
The FilenameFilter only matches the names of files or directories. It cannot help you with the decision whether or not to include an item based on existence in a different tree. However, copyDirectory() seems to do the right thing for your needs, just inefficiently - if you want to exclude unchanged files from the copying you need to add that logic yourself. (The solution could still make use of copyDirectory() internally, unless you want the date comparison to happen on every level.)
answered May 9, 2012 by Kilian Foth
Comment: I mean FileFilter not FilenameFilter; maybe I have to create my own FileFilter and check if a file from src exists in the dest folder and check the modify time... I think I can do it, but I have to find the dest file just from the src absolute path... – Tobia
i'm using FileUtils of Commons.IO and i'm trying to create a backup script, the simple rules is to copy from source to dest directory all files (and subdirs) that don't exist in dest or if source has a lastmodified date newer than other. I can not understand if FileUtils.copyDirectory() is the right choice than how can I set the right FileFilter. Thank you.
Java FileUtils copy backup directory
I would suggest going with the CDP 3.0 backup solution. I am backing up all my VPS nodes (Linux & Windows) using CDP 3.0. It is fast, efficient, and takes incremental backups of your VPS.
answered May 4, 2012 by AccuWebHosting.Com
Here's the current solution: http://www.tech-problems.com/backup-mysql-and-files-on-amazon-ec2-to-s3/ I'm currently using a PHP CLI script that runs nightly using cron. The problem is before the script runs for the first time there's 300MB of RAM in use then after it runs RAM doubles to almost 700MB. The web files backup uses the most memory. After the scripts run the RAM says used. Anyone with VPS / low memory setups that can suggest a better - more efficient - alternative to backup web files and MySql database to s3?
What's a RAM efficient solution for web files & MySQL backup to Amazon S3 on a VPS?
The file as written to local storage via FTP will simply reflect the bytes sent to it from the client. It would have to be encrypted after it was received, as FTP has no native encryption that I know of.
answered Apr 30, 2012 by David W
Comment: The file is already stored on my server; I am just wondering if Windows offers the capability to redirect the output stream of gpg to an FTP share. – Gabriel
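One hedged way to approximate "encrypt on the fly" despite that is to pipe gpg's output straight into an FTP upload, so no intermediate encrypted file is written. This assumes gpg and curl are available on the box (for example via Cygwin on Windows Server 2008 R2); the recipient, host and file names are placeholders:

gpg --encrypt --recipient backup@example.com --output - bigfile.img | curl -T - ftp://user:pass@ftp.example.com/backups/bigfile.img.gpg

Note this encrypts the data before it leaves the machine, but the FTP transfer itself is still plain FTP; use FTPS or SFTP if the transport must be protected too.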
Is there a possibility to encrypt a file "on the fly" in windows while copying file via FTP to remote storage? i don't know if that description is good enough but i want to do something in the way of gpg -e file > ftp://xxx or will i just have to rely on cygwin? i'm using windows server 2008 R2 and the file i'm copying is around 750GB in size so it's not possible to encrypt it first and then copy it.
On the fly file encryption in Windows server 2008 R2
A good starting point for this is Android Cloud to Device Messaging (C2DM), introduced in Android 2.2. You should play around with it a little bit. It will give you a first understanding of what a cloud service is all about and what it entails on the server side as well as the client side. Take a look at the blog post here.
I understand that this is an insanely broad topic, but can anyone give me any starting links/documentation on how to integrate cloud storage into my app? As an example (this isn't the actual app, since there's dozens of apps like this), I want the user to write a note and have it automatically saved to some cloud linked to their specific account. Now, I'm aware that I could work with DropBox to achieve this, but that requires the user to have a DropBox account and I would like this to work independently of any third-party services. I know this is a lot to ask, but are there any beginner resources out there that anyone can think of to get me started thinking about this? Thanks! EDIT: For all those downvoting, I understand that this is a broad topic. I just wanted to throw the question out there to the developer community and see if there were any favorite resources people used.
Android Cloud Storage [closed]
BackupExec works with append-periods and protection-periods, so yes I think you're correct. It cannot just overwrite the "oldest" backup set on the tape. Don't forget this is an actual linear tape, so it doesn't have random access quite like a hard-drive. So when it starts to overwrite Job1 at the beginning of the tape with Job9, what happens when it's used all the space that Job1 used, and now it's over-writing Job2. Basically, you are either appending to a tape at the end of the utilised area, or you're splatting the tape and starting at the beginning.
I want to set-up what I believe to be a simple back-up process using Backup Exec. This is how I want it to work: Tuesday: Run a full back up Tuesday morning with tape "A" Tuesday during the day, swap out tape "A" with tape "B" Wednesday: Run a full back up Wednesday morning with tape "B" Wednesday during the day, swap out tape "B" with tape "A" Thursday: Run a full back up Thursday morning with tape "A" Thursday during the day, swap out tape "A" with tape "B" Friday: Run a full back up Friday morning with tape "B" Friday during the day, swap out tape "B" with tape "A" and so on week after week. The problem I am running into is I just want the media (tape "A" and tape "B") to overwrite the earliest back-up once the media is maxed out in capacity. It seems that what I have to do is pick an amount of time that would be close to when the media "should" be maxed out. Then set the AP to be infinite. Is that the closest way I can achieve my goal here? Thanks David
Backup Exec configuration
The best solution for this is not a backup/restore-and-overwrite; the best solution is a failover setup. Combine rsync (for any files on the system plus the cPanel files, /usr/local/cpanel and /var/cpanel/) with MySQL in master-slave mode. In this case you have a clone copy of your system on the other server. Have a nice day.
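As a sketch of the rsync half of that setup (the cPanel paths are the ones mentioned above; the standby hostname is a placeholder, and the MySQL data should go through master-slave replication rather than file copies):

rsync -az --delete /usr/local/cpanel/ root@standby.example.com:/usr/local/cpanel/
rsync -az --delete /var/cpanel/ root@standby.example.com:/var/cpanel/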
Cpanel has a built-in feature that allows automatic backup of accounts. Is there a way to automate restoration of it to another geographically separated server? For example: You setup daily backups from Server A. You restore those daily backups to Server B as they come, overwriting previous restores. Basically it's like having a standby server except the backup server's data is one day late.
How to automate restoration of Cpanel backup to another server
Maybe robocopy will do, if you are only on Windows 7: http://technet.microsoft.com/en-us/library/cc733145%28v=WS.10%29.aspx
answered Apr 13, 2012 by AndersK
Can anyone recommend a program that automatically runs a backup on a given local path to a network destination (just sync, I only need an exact copy of the folder on the network location)? It is important that this program automatically runs the backup process even if the user is not logged in to the windows 7 machine. Rsync or just network destination, both will work fine.
Backup routine from a Windows 7 machine
Have you tried using dbo.backupset instead of dbo.backupmediaset?

SELECT TOP 20 database_name, type, backup_start_date, backup_finish_date,
       compressed_backup_size, backup_size
FROM msdb.dbo.backupset
ORDER BY backup_set_id DESC;

This is what I use. The dbo.backupset table should have a history of all backups ever run on your SQL Server, unless someone is manually purging that table, which is oftentimes overlooked. More details at my blog: http://stevestedman.com/2012/01/latest-backups/ I hope this helps. Have a great day!
answered Apr 8, 2012 by Steve Stedman
Does anyone know any tactics to determine if backup exec is taking backup? We do not have access to the program to determine if it is in charge but are receiving alerts for failed backups. If you run the select * from dbo.backupmediaset for System databases it normally returns what software is being used, however Backup Exec will still return SQL Server. The backupmediaset tactic also does not work for any user databases as it is only included in the msdb System DB. Any help would be greatly appreciated. Thanks
Query to determine if Backup Exec is taking backups
Posted on Superuser...ta Entering more characters
I have various applications/directories on many different servers (Linux). My method of backing these up so far have been to just use scripts/cronjobs etc to back them up onto another server. But that isn't really scalable. But what I want is a single backup solution that will back up all of these applications/directories. I need it to have a management web interface that I can use to restore data when the inevitable happens. I have done alot of research into this but I really don't know where to start, theres hundreds of applications (and they all claim to be the best). So I need some pointers. Can someone in the know suggest anything? Apologies in advance if this is a bad question to ask here, I just know people here will have useful suggestions.
Method of system/file/directory/application backup - Linux
We have a blog on how to back up Subversion repositories here: http://blogs.wandisco.com/2012/03/20/how-to-backup-subversion-repositories/ However, that only backs up the Subversion data and not the uberSVN admin stuff (such as users and teams etc). Currently the only way to back this up is via the uberSVN interface. If you have any further questions gimme a shout here: http://www.svnforum.org/forums/32-uberSVN-Help-and-Support Mand Online Operations, WANdisco
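For the Subversion-data side (not the uberSVN users/teams configuration mentioned above), the usual command-line options look roughly like this; the repository and backup paths are placeholders:

# hotcopy makes a byte-for-byte copy and is safe on a live repository
svnadmin hotcopy /var/svn/myrepo /backups/myrepo-hotcopy

# dump produces a portable dump stream you can compress and archive
svnadmin dump /var/svn/myrepo | gzip > /backups/myrepo.dump.gz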
Currently we can back up uberSVN server data by going to the web interface: log in, go to Administrator, Backup and follow the UI there. My need is to do it via a command-line/batch command.
How to backup and restore `ubersvn` data with command line? [closed]
Here is what I have after utilizing your sample code. (Thanks, BTW.)

$bkup_Counts = @{
    FldrBkup = 5
    txtBkup  = 6
}

$type = $null
# set Backup Types from the array we just defined
foreach ($type in $bkup_Counts.Keys) {
    $bkupTypeArr += $type
}

# nameConv has dt stamp of getdate,
# fArray will have filename in it FldrBkupYYYYMMDD_HHmmSS.zip and
# txtBkupYYYYMMDD_HHmmSS.zip
$type = $null
foreach ($type in $bkup_Counts.Keys) {
    $fArray += $type + $nameConv
}

# Array of extensions txt, asc
foreach ($ea in $extArray) {
    $filelist += Get-ChildItem $ToBeZipped"\*" -Recurse -Include $ea | Select-Object fullname
}

What I need to do, though, is to have a way to associate a set of extensions with a set of backup types, and then associate those backup types with a file or folder set of backups. For example:
f1.txt f2.txt f3.asc f4.asc would all zip into txtBkup....zip
x1.xml x2.xml would all zip into xmlBkup....zip
the folder c:\myfiles would zip into fldrBkup....zip
I need to be able to associate a set of file extensions with a backup type and to loop through those, backing up files, then go forward with the folder backups.
answered Mar 23, 2012 by steve_o
Comment: You can accept your own answer, so that this question is recorded as completed.... – Brian Tompsett - 汤莱恩
I have some files & folders I want to back up using powershell. I'm using arrays to hold the file extensions I want to back up into one backup, and then I'm backing up the entire folder as well. The zip files I create are named TxtFileBkup_yyyymmdd_hhMMss.zip and FldrBkup_yyyymmdd_hhMMss.zip. I build the list of items to back up (by extension) as follows: $extArray = @("*.txt","*.asc") foreach ($ea in $extArray) { $filelist += Get-ChildItem $ToBeZipped"\*" -Recurse -Include $ea |Select-Object fullname } foreach ($fn in $fileList) { $fileName = $fn.FullName create-7zip $filename $zipFolder\$DataFileOut } The folder is then backed up separately in an additional step. Later on, I set the attribute byte of all files in the backup folder, then count the number of files in the folder matching a certain pattern, and if it's over 5, I un-set the attribute byte that is checked (and deleted) in the next run. $delfiles=0 $delfiles= (dir $zipFolder\TargetBackup*.zip).count-5 if ($delfiles -gt 0) # If there are more than 5 zipped backups, we'll turn off the archive bit on them {dir $zipFolder\TargetBackup* | sort-object -property {$_.CreationTime} | select-object -first $delfiles | foreach-object { attrib $_.FULLNAME -A} }} What I would like to do is to store the generic backup names (TxtFileBkup, FldrBkup) in an array along with a number of backups I would want of them, and then use that array & nbr of backups to determine and mark backups to be deleted the next time the script runs. A bonus would be if I could use that array with or without the array of file extensions & the folder backup (so it would be 1 step to create both sets of backups).
Keep set number of backup files
On Request, I repost my comment as an answer: Look at Clonezilla - this is the physical equivalent of a Snapshot in virtual world.
sorry about the long title! I have a windows server 2003. I want to a cheap backup software that will save every single thing in the machine: files, regsitry, user accounts & settings, down to the single byte! I prefer to have dvd storages at the end to restore from. I don't want to even have to worry about Admin setup or rerun software installations or anything like that. So, if the server crashes totally, I will be able to bring it back to exact mirror of how it was before it crashed. I want to be able to insert dvd and reboot to get everything back. Does the Backup utility on server 2003 do that? If not, does a software like this exist?? If not, what is the next closest thing? thanks!
Best Ultimate Full Cheapest Backup Software
Here is a PHP code snippet demonstrating how to serve an image from a PostgreSQL large object to an HTTP connection:

$db = pg_connect("dbname=mydb host=dbserver");
pg_query("BEGIN");
$p = pg_query("SELECT object_id, image_type FROM contents WHERE id=$some_id");
list($oid, $type) = pg_fetch_row($p);
$h = pg_lo_open($db, $oid, "r");
if ($h) {
    header("Content-Type: $type");
    pg_lo_read_all($h);
    pg_lo_close($h);
}
pg_query("END");
I have a PostgreSql db that has several OID fields. The data to these fields has been inserted using the lo_import command. But I am not able to backup the database and restore it in another server. If I try to do so, all the text entries get restored and the OID fields have a OID value, but doing an lo_export gives me an error like the File does not exist. What is it that I am doing wrong here? [Edit:] Ok, I have found that the DB backup actually works but I have another unique problem. My DB server and my HTTP server are on different machines. So now my question is how do I retrieve the image in the HTTP server from the image server?
Accessing the image in HTTP server from a separate image DB server
You can achieve this by using the for command to execute copy for each file. A simple batch file would be:

cd "G:\PMO\Talent Mgt\Data"
for %%A in (*.accdb) do copy %%A ..\Archive\%%A_%date:-=%

answered Feb 14, 2012 by MBu
I have two folders in the same drive. I want to create backup of an access database. I need to copy the main file, append the name with the date and time and store it in a different folder. Source Folder: G:\PMO\Talent Mgt\Data Source file: Talent_Management_Data.accdb Destination File: G:\PMO\Talent Mgt\Archive\Talent_Management_Data.accdb_20120101 Any suggestions?
Batch File to store backup files with timestamps
0 Programs like "diff" or "rsync" solve this problem in their own way. The basic algorithm requires you pick a "modification window" (its size depends on available memory and time, longer windows require longer matching efforts), and when an old and new hash for the same block don't match, you actually try to match with the next blocks within the given window. You need a more generalized algorithm to also handle block removals (you could actually try to match at +/- half-window for instance). Rsync (http://rsync.samba.org/) does this incremental backup job in both a disk and network I/O efficient way, and is much more sophisticated than this simple hash matching. It required the author, Andrew Tridgell, several years and a dedicated master thesis to design the algorithms and the protocol. If you don't have 3 years to spare on this, try reading the papers ! Have fun : http://samba.org/~tridge/phd_thesis.pdf Share Improve this answer Follow answered Apr 20, 2012 at 13:09 zerodeuxzerodeux 3,56311 gold badge2020 silver badges1111 bronze badges Add a comment  | 
I'm developing a backup tool and I can't figure out the most efficient way to do remote backup. I don't want to send the whole file every time there's a small change so I guess incremental backup is the solution. This is all well and good but now I'm stuck with a problem that how can I chunk one file into multiple parts. The problem is that let's say I have a simple text file and one chunk is one line: First line Second line Third line Fourth line Now I have 4 chunks. If I update the second line to let's say "THE second line", now I only need to backup the second chunk. But what if something like this happens: First line First and half line Second line Third line Fourth line Now that I added "First and half line", every line is now in a different place. So if each line is one chunk, it looks like that every chunk after the first has changed even the content is the same. Is there any simple solution for this? First I thought that I could do hash of each chunk and then just create "catalog" that would indicate the correct chunk order. This way I could match easily if the chunk exists already with the hash. However I realized that hash table solution wouldn't work with anything else than with files where chunks can be predicted and fixed. For example with binary files you are pretty much limited with fixed byte sized chunks so if there was more data added in the beginning and you started chopping it down to let's say 100k chunks, you would get different data in the later chunks than before. Any solutions?
Efficient incremental backup with data chunks
I ran into this a while back; the issue is probably the Remote Connection Timeout setting in SQL Server (default 600). Log onto your SQL Server box, go to the properties (for the SQL Server instance your database is hosted in) and update this to something a bit higher; you're probably safe to just double it for now.
answered Feb 4, 2012 by Daniel Morritt
I have scheduled TFS backup using TFS power tool. It was working fine till 10 days back. Now it started this... tried all the ways what am aware of but no luck.
TFS Backup Fails - when it starts taking transactional backup
Since dealing with security issues is often tough and time consuming, I decided to store the file as binary data in the database and load it from the second server instead. Impersonation works, but that's really more a way of bypassing the security.
We're using windows 2008 R2 servers and we need to backup the file to the other server whenever a file gets uploaded. Unfortunately, the client requires that there would be no file/directory sharing between servers via LAN so we are trying to do this via WCF calling another WCF. But now we're having problem calling the other WCF since they're hosted on SSL-secured website. Calling the WCF via silverlight works. Questions: 1) What might be causing the SSL/TLS error when the WCF calls the other but everything works fine for the silverlight calling the WCF? code: public FileUpload(FileUploadClass file) { // store locally ... // call the other wcf if (!fileIsExisting) { ServiceRefClient svcClient = new ServiceRefClient(); svcClient.FileUploadClass(file) } } 2) Any other way to backup the file to the other server securely apart from using WCF and Database (I'm trying database now but hopefully there is a prettier way to do this)? File/Directory/Drive sharing via local network is prohibited.
File Backup/Sync between two servers
For an iso look at remastersys - this will create a full iso with all your files. Shouldn't this be in ask ubuntu or unix/linux not stackexchange?
I would like to know which backup software to use with my Ubuntu Server 11.10. Currently my server uses RAID 1 so that I don't have data loss, but I would like to have a restore ISO because I'm starting to do some configuration that could ruin my current installation. I tried Clonezilla with the nodmraid option activated, but after language selection it loops on "usb devices found" and disconnects without any possibility to go forward.
Backup software Ubuntu server [closed]
This question is lame, because it involves three completely unrelated domains: firefox, sqlite, and batch files. You should have isolated the problem by determining whether this is a firefox issue or an sqlite issue or a batch file issue and then you should have come up with a question regarding the domain in which the issue lies, with absolutely no mention of the other domains. I am going to give you an answer as best as I can regarding batch files: First of all, you need to stop needlessly using the start command and just invoke things directly. So, instead of:

start "%SQLITE_EXE%" "%FF_profile%\%DB_dest%" < "%SQLITE_SQL%"

You need this:

"%SQLITE_EXE%" "%FF_profile%\%DB_dest%" < "%SQLITE_SQL%"

Then, you need to redirect the output of the above command into a file of your choice. For this, you need to make use of the '>' operator. So:

"%SQLITE_EXE%" "%FF_profile%\%DB_dest%" < "%SQLITE_SQL%" > myfile.txt

That should do it, as far as batch files are concerned. If it does not do it, then it is a firefox or sqlite issue.
Not familiar with windows stuffs.. , I'm trying to write a little MS Windows batch in order to backup firefox history but I'm not getting the expected result, eg the firefox history dump into a file (not implemented here), and can't figure out why and how to solve. Instead I get a dump of the database in a new window. Here is what I've done till now : cmd windows terminal start "TEST" sqlite.cmd sqlite.cmd REM backup firefox history setlocal set DB_src=places.sqlite set DB_dest=places1.sqlite set FF_profile=C:\Documents and Settings\User_A\Application Data\Mozilla\Firefox\Profiles\1e6xxxxx.default set SQLITE_EXE=C:\Documents and Settings\Admin_User\SoftWare\sqlite3.exe set SQLITE_SQL=C:\Documents and Settings\Admin_User\Bureau\sqlite.sql copy "%FF_profile%\%DB_src%" "%FF_profile%\%DB_dest%" @echo off start "%SQLITE_EXE%" "%FF_profile%\%DB_dest%" < "%SQLITE_SQL%" endlocal sqlite.sql .dump html .output moz_places.html SELECT moz_places.visit_count, moz_places.url FROM moz_places ORDER by visit_count DESC LIMIT 20; [EDIT]: Worked around : - using the right sqlite query (updated in sqlite.sql below)as for these examples. - using the sql html output "moz_places.html" as I could not get the redirection work. linux stuffs are easier for me...
windows batch sqlite backup firefox history
If you've got backup_migrate you could get the database. If you've got views you could build a view to list the files or anything else you need. This would be difficult but it could work. Edit: From memory, a user can browse their files with IMCE if that is installed. You could get it to browse the same directory that the file field uses.
I broke up with my boyfriend so he changed the FTP password to my Drupal 7 website he was hosting through his host. I still have admin access but that's about it. I can get the database(s). But without FTP I can't get the files. Ran this on a Drupal basic page echo ini_get("disable_functions"); The following functions are disabled exec,system,passthru,shell_exec,escapeshellarg,escapeshellcmd, proc_close,proc_open,dl,popen,show_source Is there anything I can do!?
How to Copy Drupal 7 site with only Admin access No FTP
If there is still a .git folder, it should keep the full version history. You can use git log to find out which version you are on, git status to see the status of the repository, and git diff (branch|tag|HEAD) to check the differences from other commits. Finally, use git revert HEAD to undo the last commit, or git checkout to check out an earlier commit. I also recommend pushing your code to a service like GitHub.
answered Jan 9, 2012 by Kuo Jimmy
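If the "dummy" differences really are just the 100644 <-> 100755 mode changes introduced by the copy (as in the asker's diff), one option is to tell Git to ignore permission bits; core.fileMode is a standard Git setting, but whether ignoring modes is acceptable depends on the project:

git config core.fileMode false   # stop treating mode-only changes as modifications
git status                       # re-check what is actually modified
git diff --stat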
I was having problems with my Mac and decide to format and reinstall everything. I make a copy of my projects to other machine hoping that git will preserve my changes. Now I copy my projects back and the result is that all my files appears as modified product of the copy-paste, yes a fool error. How can I ignore all this dummy differences and keep the old changes? Is there a command that can revert all this differences? Thanks Edit: With "dummy changes" I'm referring to this: $ git diff --stat AuthenticationViewElements/background.png | Bin 6130 -> 6130 bytes AuthenticationViewElements/background_iPhone.png | Bin 5799 -> 5799 bytes ... $ git diff AuthenticationViewElements/background.png diff --git a/AuthenticationViewElements/background.png b/AuthenticationViewElements/background.png old mode 100644 new mode 100755 Here the change is in the file permissions caused for the copy of the file. This is the kind of changes I want to get rid.
How can I get back my changes from a backup?
From what I've seen and used, appliances are released with the ability to restore their default VM, probably from a ghost partition of some kind (I'm thinking about Comrex radio STL units I've worked with). Patches can be applied to the appliance, with the latest patch usually containing all the previous patches (if needed). A new VM means a new appliance - Comrex ACCESS 2.0 or whatever, and 1.0 patches don't work on it. It's never backed up, rather it can just be restored to a factory state. The Comrex units store connection settings, static IP configuration, all that junk, but resetting kills all that and has to be re-entered (which I've had to do before).
My understanding of a virtual appliance is 1+ pre-configured VM(s) designed to work with one another and each with a pre-configured:
Virtual hardware configuration (disks, RAM, CPUs, etc.)
Guest OS
Installed & configured software stack
Is this (essentially) the gist of what an appliance is? If not please correct me and clarify! Assuming that my understanding is correct, it begins to beg the question: what are the best ways to back up an appliance? Obviously an SCM like SVN would not be appropriate because an appliance isn't source code - it's an enormous binary file representing an entire machine or even set of machines. So how does SO keep "backups" of appliances? How does SO imitate version control for appliance configurations? I'm using VBox so I'll use that in the next example, but this is really a generic virtualization question. If I develop/configure an appliance and label it as the "1.0" version, and deploy that appliance to a production server running the VBox hypervisor, then I'll use software terms and call that a "release". What happens if I find a configuration issue with the guest OS of that appliance and need to release a 1.0.1 patch? Thanks in advance!
Version Control for Virtual Appliances [closed]
As you guessed, part of the backup catalog includes the date and time of the backup. The WITH COMPRESSION option compresses the backup to save space, but a small change in the file will cause changes throughout the file because of the way compression algorithms work. If you don't want so many differences then remove the compression option, but comparing backup files isn't the way to go. If you have a database that changes little, then incremental or differential backups may be of more use. However, you seem to have fallen into a classic trap called the XY Problem, as you are asking about your attempted solution rather than your actual problem. What is prompting you to try to compare databases?
answered Dec 15, 2011 by Stephen Turner
Comments:
I've tried to omit compression, it doesn't help - the files still differ. My original intent for this comparison was backup optimization; I've added a detailed description of my backup algorithm to the question. – GreyCat
The metadata is being written to the media header using Microsoft Tape Format, see msdn.microsoft.com/en-us/library/ms178062.aspx - I don't think there is a way to separate the two unless you use a 3rd party backup tool. – Stephen Turner
Is there a way to work around it? For example, to compute some sort of hash that would show that the state of the database has changed? – GreyCat
Always replace the old backup with the most recent. – Stephen Turner
I'm using the following line to backup a Microsoft SQL Server 2008 database: BACKUP DATABASE @name TO DISK = @fileName WITH COMPRESSION Given that database is not changing, repeated execution of this line yields files that are of the same size, but are massively different inside. How do I create repeated SQL Server backups of the same unchanged database that would give same byte-accurate files? I guess that simple BACKUP DATABASE invocations add some timestamps or some other meta information in the backup media, is there a way to disable or strip this addition? Alternatively, if it's not possible, is there a relatively simple way to compare 2 backups and see if they'll restore of the exactly same state of the database? UPDATE: My point for backup comparison is that I'm backing up myriads of databases daily, but most databases don't change that often. It's normal for most of them to change several time per year. So, basically, for all other DBMS (MySQL, PostgreSQL, Mongo), I'm using the following algorithm: Do a new daily backup Diff new backup with the most recent of the old backups If the database wasn't changed (i.e. backups match), delete the new daily backup I've just created This algorithm works with all DBMSes we've encountered before, but, alas, it fails because of non-repeatable MSSQL backups.
Backup of SQL Server database without timestamps
You can use cron jobs to manage the archiving every day (at midnight or whatever). You can look it up using 'man cron'. One way to do this would be to make one cron job that archives the directory and updates it every day, and another cron job to compress that tarball. So for the first, something like:

if [ ! -f directory.tgz ]; then
    tar cf directory.tgz directory_name/
else
    tar uf directory.tgz directory_name/
fi

An example of a cron job that will run each day would look something like this:

58 23 * * * script

Which will run at the 23rd hour on the 58th minute (just prior to midnight). Obviously you will need to adjust this to suit your needs.

EDIT: I should also add how to go about compressing it. It would be a smart idea to look up the advantages and disadvantages of the compression algorithms. For example, gzip will probably take less time to compress than bzip2, but the latter may provide a smaller result (depending on the data being compressed) than the former. Their usage is pretty simple; again, use the 'man' pages for them. Here's an example:

bzip2 -c directory.tar > directory.bz2
I need to setup a bash script which tars and compresses a directory. The tar should be updated every day after that point with the same directory. This should happen for a duration of 14 days until the tar is finally removed and the process is restarted. I could use a hand setting this up. Many thanks
How to create tar that gets updated for 2 weeks
Check the Sybase syntax here. You have to specify the database that you want to load.
answered Dec 12, 2011 by aF.
I get a dump file on my local machine and I want to reload it into a remote server ASE 12.5 (from the same local machine) I tried load database from "/tmp/local.dmp" and got Server 'SYBASE_BS', Procedure 'bs_read_header', Line 0: Backup Server: 4.26.2.1: Volume validation error: failed to obtain device information, device: "/tmp/local.dmp" error: No such file or directory. Server Message: Number 8009, Severity 16 I guess I'm wrong because I'm trying to call (implicitly) local Backup server. How can I set a remote server ?
Remote backup server with Sybase ASE
No one can extract a .tgz file in the Plesk panel. It can only be done on your desktop with WinRAR, or you can upload the .tgz file through the Backup Manager, click on the uploaded file, and then click Restore.
answered Dec 31, 2012 by GWAA (community wiki)
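One more hedged option, since a .tgz is just a gzip-compressed tar archive: if you can get the file onto any machine with a shell (or with tar installed), it can be unpacked directly. The file name here is the one from the question:

mkdir -p extracted
tar -xzf domain.com_mn_1112091008.tgz -C extracted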
I have made a email backup with plesk for a particular domain. Now i like to extract/import the emails in email program local. When i unzip domain.com_mn_1112091008.tgz i get a .discorvery folder and another .zip file. How can i import the emails with their corresponding attachments? Regards
Can I manually extract email files from backups created by Plesk?
First you need a way to determine whether your server is available or not. ping might be sufficient, but is probably not ideal. As a poor man's solution you could prepend your original command like this:

ping -c2 server >/dev/null 2>&1 && rsync --delete...

This means that cron will run rsync only if the ping has been successful.
answered Nov 24, 2011 by u-punkt
Comments:
Thanks for this. Unfortunately, this does not work. Can you do me a favour and explain the code you have provided? I understand that the output of a two-time ping goes into the nirvana of /dev/null, but what does 2>&1 mean? – Kristian Unger
It means you're redirecting file descriptor 2 to the same destination as file descriptor 1; in this case, 2>&1 means send STDERR to the same destination as STDOUT, that is, the bit bucket. – Joao Figueiredo
Joao explained quite nicely what 2>&1 does, but could you give more detail about what is not working for you? – u-punkt
I cannot really tell what goes wrong. I have added the line ping -c2 server >/dev/null 2>&1 && rsync -azvv -e ssh /Users/user/Work/Folder user@server:/home/user/BACKUP/ to my crontab and it just stopped working. – Kristian Unger
Could you just fire a ping -c2 server in a terminal to check whether ping is really working and not blocked by some firewall in between? – u-punkt
I am running a cronjob for rsyncing my hard drive with a folder on a server within my companies network. This happens on my laptop which I also use outside that network. My crontab looks like this: */30 * * * * rsync --delete -azvv -e ssh /Users/user/Work/Folder user@server:/home/user/BACKUP/ How can I make cron running this job only when the server is available? Many thanks!
cronjob for backup only when connected to company network
Install another instance of the SQL Server version the backup was made on (10.50, i.e. SQL Server 2008 R2). Then you can restore your backup into that instance. After that you can use the migration tooling to move the content of the database back to your old server. That is how I did it when I had the same problem once; maybe someone else can post a cleverer way. (answered Nov 18, 2011 by YvesR)

From the comments: the "migration tool" referred to is the Import/Export wizard. Right-click the database, then Tasks (All Tasks in older versions) > Import Data / Export Data. This wizard creates packages that move your data from one database to another; if you need to make conversions you can save such a package and modify it with the BI edition of Visual Studio.
This question already has answers here: Closed 12 years ago. Possible Duplicate: Is it possible to restore Sql Server 2008 backup in sql server 2005 I have a backup that was made on MS SQL Server 100.50 version, and I attempt to restore it on a MS SQL Server 100.0 version. I get an error message "The database was backed up on a server running version 10.50.1617. That version is incompatible with this server, which is running version 10.00.5500. Either restore the database on a server that supports the backup, or use a backup that is compatible with this server.". So how do I restore it? I can see following solutions: Create a backup for my version of database. To do this I've tried to set the compatibility level of the original database on 100.50 server to 90 (MS SQL Server 2005), but it still produces same backup file. Haven't found other ways to do it. Update my SQL Server instance to 100.50 version. Haven't found how to do it yet. Maybe there are other ways to solve this problem about which I am not aware yet. Any advice is welcome!
can't restore a database from a 100.50 backup on a 100.0 server [duplicate]
Did you make sure to set the file permissions back so the SMS app can read the file? Files stored on the sdcard all have ---rwxr-x, so copying your backed-up SMS database to /data/data/com.android.providers.telephony/databases/ will keep those sdcard permissions. Also, if you are copying it as root, root will be the owner of the file, which denies the SMS app write access. On my device the database looks like this:

-rw-rw---- radio radio 972800 2011-12-05 06:40 mmssms.db

So after you copy your backup back into /data/data/com.android.providers.telephony/databases/, fix the mode and the ownership:

chmod 660 mmssms.db
chown radio.radio mmssms.db

(answered Dec 5, 2011 by user1081573, edited by mih)
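Putting those steps together, here is a minimal sketch of the restore done over adb. It assumes a rooted device where su works from an adb shell; the local file name sms-backup.db and the intermediate /sdcard path are placeholders.

# push the backup somewhere world-readable first
adb push sms-backup.db /sdcard/sms-backup.db

# then, as root on the device, copy it into place and fix mode and ownership
adb shell "su -c 'cp /sdcard/sms-backup.db /data/data/com.android.providers.telephony/databases/mmssms.db'"
adb shell "su -c 'chmod 660 /data/data/com.android.providers.telephony/databases/mmssms.db'"
adb shell "su -c 'chown radio.radio /data/data/com.android.providers.telephony/databases/mmssms.db'"

# restart the phone so the telephony provider re-opens the database
adb reboot

A reboot is the blunt but reliable way to make the messaging app pick up the replaced database.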
This question already has answers here: Closed 12 years ago. Possible Duplicate: Backup and restore SQLite database to sdcard I'm writing an application and I need to be able to back up a database to the sdcard and restore it via Java. I first tried just copying it to the sdcard; that seems to work fine, and after browsing the database it all seems to be there. However, I cannot seem to restore it: if I just copy it back and overwrite the existing one, I get force closes. I'm looking to back up the SMS database and restore it. Thank you for any help with this issue.
Android database backup restore [duplicate]
You need to check the account that the SQL Server Agent service runs as. If that account doesn't have permissions on the network share, it won't be able to see that path. Executing the query yourself, outside of a SQL Server Agent job (the maintenance plan runs it through the Agent), doesn't use the Agent service's security context, which is why it can work interactively but fail in the plan. Make the SQL Server Agent service run as a domain account with access to that network share; you can change the service account through SQL Server Configuration Manager. (answered Oct 18, 2011 by user596075)

From the comments: the asker noted that the Agent service (per services.msc) already logs on as MyDomain\ABC.MyUser, and that with this same user they could access the share, invoke a job with the query, and run the query directly, yet the maintenance plan (which uses SSIS) still fails.
I have an SQL Server 2005 Enterprise Edition whose Maintenance plan fails constantly with the error: backup MYSERVER (MYSERVER) Backup Database on MYSERVER Databases that have a compatibility level of 70 (SQL Server version 7.0) will be skipped. Databases: All databases Type: Differential Append existing Task start: 2011-10-18T00:10:09. Task end: 2011-10-18T00:10:09. Failed:(-1073548784) Executing the query "BACKUP DATABASE [model] TO DISK = N'\\myNetworkDrive\\opovo\\BackupSQL\\MYSERVER\\model\\model_backup_201110180010.bkp' WITH DIFFERENTIAL , RETAINDAYS = 13, NOFORMAT, NOINIT, NAME = N'model_backup_20111018001008', SKIP, REWIND, NOUNLOAD, STATS = 10 " failed with the following error: "Cannot open backup device 'C:\\Program Files\\Microsoft SQL Server\\MSSQL.1\\MSSQL\\Backup\\Arca\\opovo\\BackupSQL\\MYSERVER\\model\\model_backup_201110180010.bkp'. Operating system error 3(The system cannot find the path specified.). BACKUP DATABASE is terminating abnormally.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. But the query: BACKUP DATABASE [model] TO DISK = N'\\myNetworkDrive\\opovo\\BackupSQL\\MYSERVER\\model\\model_backup_201110180010.bkp' WITH DIFFERENTIAL , RETAINDAYS = 13, NOFORMAT, NOINIT, NAME = N'model_backup_20111018001008', SKIP, REWIND, NOUNLOAD, STATS = 10 runs normally and gives me the expected results. Is this a bug? What am I missing here? What is the elegant way to backup to a network location?
Maintenance Plan Fails But Query Runs
This example from the PHP manual, posted there by mroerick, might help you:

<?php
$ftp_server = "ftp.example.com";
$conn_id = ftp_connect($ftp_server) or die("Couldn't connect to $ftp_server");
$login_result = ftp_login($conn_id, "user", "pass");
if ((!$conn_id) || (!$login_result)) die("FTP Connection Failed");

ftp_sync(".");
ftp_close($conn_id);

function ftp_sync($dir)
{
    global $conn_id;

    if ($dir != ".") {
        if (ftp_chdir($conn_id, $dir) == false) {
            echo ("Change Dir Failed: $dir<BR>\r\n");
            return;
        }
        if (!(is_dir($dir)))
            mkdir($dir);
        chdir($dir);
    }

    $contents = ftp_nlist($conn_id, ".");
    foreach ($contents as $file) {
        if ($file == '.' || $file == '..')
            continue;

        if (@ftp_chdir($conn_id, $file)) {
            ftp_chdir($conn_id, "..");
            ftp_sync($file);
        } else {
            ftp_get($conn_id, $file, $file, FTP_BINARY);
        }
    }

    ftp_chdir($conn_id, "..");
    chdir("..");
}
?>

(answered Oct 11, 2011 by Jonas m)
I'm building a custom web app that stores the FTP and MySQL settings for the websites I manage for clients. My goal is not only to store the settings for reference, but to create functionality to assist in doing regular backups. I've got the MySQL backup functionality working great, as it connects to the remote databases, creates a dump and sends it to my browser to download locally. BUT... what is the best way to connect to a remote FTP and download all the contents of a specific folder to my local computer? Any suggestions would be amazing!
Use PHP to connect to FTP and backup all folder contents
Just thought I'd answer this for anyone coming along. As far as I can make out, the answer to the question is no, not really, unless I'm doing it the wrong way (and I think not). The best method of backing up a custom theme is to first compress the entire app, skin and media (if relevant) directories into one archive file, move it and expand it in an empty directory, and then carefully delete all the other theme folders, leaving just the files you are using and the directory trees you have created. This preserves any files in your custom theme and also the necessary directory structure. If anyone has a better method, maybe they'd share it.
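If you do know the package and theme name you customised, a more targeted sketch is to archive just those directories. This assumes the standard Magento 1.x layout and uses "mypackage/mytheme" as a placeholder for your actual theme; the media path only matters if the theme stores assets there. Run it from the Magento root:

tar czf mytheme-backup-$(date +%Y%m%d).tgz \
    app/design/frontend/mypackage/mytheme \
    skin/frontend/mypackage/mytheme \
    media/wysiwyg

This captures only your own templates, layout XML and skin files, so restoring onto a fresh install is just a matter of extracting the archive in the new Magento root. Purchased themes sometimes also add code under app/code/local or app/etc/modules; include those paths too if yours did.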
I am making some changes to a purchased magento theme. Is there any simpler way to backup my work other than by copying the relevant folders inside app skin and media and their directory structures.
Backup Magento Theme
The problem was a misunderstanding: I needed the raw contact ID, but I was using the contact ID. So, before searching for the group ID, I first had to obtain the contact's raw contact ID.
Basically things are more like black and white, on one phone (Galaxy S) works fine and on another (Nexus one, my client's of course) it doesn't. First I show a list of Contacts that have phone numbers. The user chooses to backup a contact and I try to load all contact info to store it in a local database cursor = contentResolver.query(ContentUris.withAppendedId(ContactsContract.Contacts.CONTENT_URI, id), null, null, null, null); if (cursor != null && cursor.getCount() >0) { cursor.moveToFirst(); id = cursor.getLong(cursor.getColumnIndex(ContactsContract.Contacts._ID)); //get all the things I need like phones, picture, etc } Using this id I try to get the contact's group id cursor = contentResolver.query(ContactsContract.Data.CONTENT_URI, null, ContactsContract.Data.RAW_CONTACT_ID + "=" + id + " AND " + ContactsContract.Data.MIMETYPE + "='" + ContactsContract.CommonDataKinds.GroupMembership.CONTENT_ITEM_TYPE + "'", null, null); if (cursor != null && cursor.getCount() >0) { cursor.moveToFirst(); groupId= cursor.getString(cursor.getColumnIndex(ContactsContract.Data.DATA1)); cursor.close(); } Well, testing by adding a new contact, on my phone I get groupId=1, meaning System:My Contacts. On the Nexus One, I get null for group id. Of course, restoring in on my phone works fine, and on the other phone, the contact is not visible because it doesn't belong to any visible groups... Any ideas ?
Error retrieving contact group id in Android 2.1+
It should be in msdb..backupmediafamily, in the physical_device_name column. (answered Sep 23, 2011 by gbn)
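To turn that into the path of the most recent transaction-log backup for a given database, you can join backupmediafamily to backupset on media_set_id. Here is a minimal sketch run from a shell with sqlcmd; the server name MYSERVER and database name MyDatabase are placeholders, and -E uses Windows authentication:

sqlcmd -S MYSERVER -E -Q "
SELECT TOP 1 bmf.physical_device_name
FROM msdb..backupset bs
JOIN msdb..backupmediafamily bmf ON bs.media_set_id = bmf.media_set_id
WHERE bs.database_name = 'MyDatabase'
  AND bs.type = 'L'
ORDER BY bs.backup_finish_date DESC;"

Your stored procedure could run the same SELECT directly and use the returned path (minus the file name) as the target directory for the log backup it takes before rebuilding the full-text index.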
We have a product which uses a database and whenever a major release is done one of the upgrade scripts used on the database calls one of our stored procedures whose job is to drop and then recreate the fulltext index thus ensure any changes (i.e. new columns that are indexed) in the new release get installed. The problem is that if our client has set up the database to be full recovery mode then this will fail at the point at which it tries to recreate the fulltext index. To ensure the upgrade instructions are as simple as possible I don't want the IT guy doing it to have to do anything other than run the set of scripts according to there filenames, (i.e. run 001 - xxx.sql, then run 002 xxx.sql etc...). So the idea I thought to try was for the SP that drops/recreates to do the backup of the transaction log if the database is set to full recovery mode for them by backing up to the same location as the last transaction log backup was done. The problem is how do you find out the last location? I've searched and found scripts that indicate the use of sys.sysdatabases and msdb..backupset but those tables don't seem to have the information I need. Any ideas? Is it even possible?
Is it possible to find out the last backup location used for a transaction log file in SQL 2005 or higher
There are two good examples (with working sample code) from Apple that helped me understand how the keychain service works on iOS. I suggest you look at them and hope they help you resolve your issue. Generic Keychain: this sample shows how to add, query for, remove, and update a keychain item of the generic class type, and also demonstrates the use of shared keychain items; all classes exhibit very similar behavior, so the included examples scale to the other classes of keychain item (Internet Password, Certificate, Key, and Identity). AdvancedURLConnections: this sample demonstrates various advanced networking techniques with NSURLConnection, specifically how to respond to authentication challenges, how to modify the default server trust evaluation (for example, to support a server with a self-signed certificate), and how to provide client identities. (answered Sep 22, 2011 by Nekto)
I'm developing an application for an iPad2 that needs to write some items in Keychain but I don't want it replicates in every computer I plug, doing a backup/restore of the device. I'm using kSecAttrAccessible key to select the kind of accesibility I want with kSecAttrAccessibleWhenUnlockedThisDeviceOnly value to be sure that if I do a backup of all things that are in the device, the Keychain is not going to be present in that backup. So I proceed in this way: I reset the Keychain, insert a item in Keychain and dump all the content of Keychain, so I see that the item is there. Then I do a backup of the iPad. I reset the Keychain and restore the backup so no key should be in the Keychain as long as the restore procedure doesn't deal with the Keychain. Next time I run the application, I dump the contents of the Keychain and the key is there, so it's not working as it should. I'm using iphone-lib (http://code.google.com/p/iphone-lib/) to dump and reset credentials in my iPad. My SDK version is 4.3. The code I use to insert the item in the Keychain is the following: NSMutableDictionary *dic = [NSMutableDictionary dictionary]; NSData* identifier = [@"mypassword" dataUsingEncoding: NSASCIIStringEncoding]; [dic setObject:(id)kSecAttrAccessibleWhenUnlockedThisDeviceOnly forKey:(id)kSecAttrAccessible]; [dic setObject:identifier forKey:(id)kSecAttrGeneric]; [dic setObject:@"myaccount" forKey:(id)kSecAttrAccount]; [dic setObject:@"myservice" forKey:(id)kSecAttrService]; [dic setObject:(id)kSecClassGenericPassword forKey:(id)kSecClass]; [dic setObject:identifier forKey:(id)kSecValueData]; OSStatus error = SecItemAdd((CFDictionaryRef)dic, NULL); Thank you!
Do not restore passwords inserted in iOS Keychain issue
Got it: using pm (the Android package manager command-line tool) solved it. (answered Sep 20, 2011 by Julius Canute)
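Presumably this means registering the restored app through the package manager instead of copying its files back by hand. A minimal sketch on a rooted device follows; the apk path is a placeholder, -r reinstalls an existing package while keeping its data, and -s asks for an install to external storage, which matches apps that were originally on the sdcard:

# from the PC, push the backed-up apk to the device
adb push backup/com.example.app.apk /sdcard/restore/com.example.app.apk

# then install it through pm so packages.xml and the launcher are updated immediately
adb shell pm install -r -s /sdcard/restore/com.example.app.apk

Because pm goes through the PackageManager service, the installed app shows up without a reboot; any separately backed-up /data/data files can then be copied back over the freshly created data directory (with the right owner UID and permissions).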
I have a rooted android phone, I am working on an app which does backup of application installed in sdcard to PC. I am able to successfully backup the application from sdcard to PC. While restoring the application from the backup, I restored all files(asec,pkg,cache,data) pertaining to the application in the exact same place they were, including the permissions. When i reboot or restart the launcher the application does not show up as installed. Instead asec or pkg are getting flushed after (reboot)/(restart of launcher). What should be done to make the application show up as installed after restoring?
Problems while restoring application from backup
Check whether backup/restore is available and could be used to preserve game status/data. We can't back up or restore an application on Android with Adobe AIR itself, but there is plenty of third-party software for backup. See http://developer.android.com/guide/topics/data/backup.html
Let's assume I have an Android phone with some applications installed on it, and I make a backup to the cloud using my Google account ID. Then I break my phone and buy a new Android one, which I associate with my Google account ID. 1) Does that mean all my programs will be restored on my new phone? 2) If I have no Google account ID, how can devices be associated? 3) How can I make an application backup with the Adobe AIR SDK for Android? 4) If I make a backup of my phone, is the MAC address saved too? Thanks!
android backup with adobe air
Go to the task manager and kill the home/launcher application; that will usually do it. I also believe https://android.stackexchange.com/ is the better place for this type of question, as this site is meant for programming-related questions. (answered Sep 5, 2011 by Reed; the asker noted the question was moved to android.stackexchange.com/questions/13179/…)
I have rooted android phone. I backed up a application, uninstalled the application from the phone and restored it back the way it was there in the phone(using backup including changes to packages.list and packages.xml). The restored application is not showing up immediately but shows up after reboot of the phone. Is there any service that has to be restarted to recognize the presence of the app immediately?
Android Backup and Restore
I took a quick look. To start, try pastie.org and paste the source of the plugin; you will get far fewer answers if people have to download the plugin to view the source. That said, the changes you made seem reasonable. See line 59 of the plugin, where the backup is scheduled using wp_schedule_event; this basically says "take the value passed in from the admin form and back up on that schedule". If this isn't working, install Fiddler and check the POST data sent by the plugin when you save your changes; you should see your new 'often' option going in. If it is being passed in but the backup still isn't running, look at the hook s3-backup (the third parameter); something must be going wrong in there. Good luck. (answered Sep 2, 2011 by mrtsherman)
I'm currently trying to change the Automatic WordPress Backup script so that it can save files on a more regular basis (about once every 6 hours). At the moment I can't get it to work so any help would be much appreciated. In automatic-wordpress-backup I've added 'often' => 600 in the init function (line 51) as well as adding 'Often' in the following code (line 369-373) : <select name="s3b-schedule"> <?php foreach ( array('Disabled','Often','Hourly','Daily','Weekly','Monthly') as $s ) : ?> <option value="<?php echo strtolower($s) ?>" <?php if ( strtolower($s) == get_option('s3b-schedule') || ((get_option('s3b-schedule') === false || get_option('s3b-schedule') == '') && $s == 'Daily') ) echo 'selected="selected"' ?>><?php echo $s ?></option> <?php endforeach; ?> </select> With the entire code being quite long I'm not sure if I need to upload everything to here or if you'll check the original package in the link provided above but if you need more information please let me know. Thanks
Wordpress Automatic Backup Modification