Columns: Response (string, 8-2k characters), Instruction (string, 18-2k characters), Prompt (string, 14-160 characters)
I'd personally do it slightly differently and make the configuration file more of a "control file". For example:
/path /path2 /laptopBackup
/tmp /test /bigmachine
etc. One line per mount, three fields per line (source, destination, backup folder name). Then use something like:
while read SOURCE DESTINATION BACKUPFOLDERNAME
do
    <stuff>
done < ${configfile}
(removed the cat so as not to shame myself further :( )
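A minimal sketch of how that loop might slot into the asker's script, assuming the rsync options and variable names from the question; the blank-line and comment handling is illustrative, not part of the original answer:
#!/bin/bash
configfile=./backup.conf
# backup.conf: one line per mount, e.g. "/path /path2 laptopBackup"
while read -r SOURCE DESTINATION BACKUPFOLDERNAME
do
    # skip blank lines and comment lines
    [ -z "$SOURCE" ] && continue
    case "$SOURCE" in \#*) continue ;; esac

    FULLPATH="$DESTINATION/$BACKUPFOLDERNAME"
    mkdir -p "$FULLPATH"
    rsync -va --delete --delete-excluded \
        --exclude-from="$EXCLUDES" \
        "$SOURCE" "$FULLPATH"
done < "$configfile"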
I'm having some difficulties with this. Basically, for work I need a bash script that backs up a variable number of directories listed in a config file. I'm sure I need to import the list from the config file and just use a loop to copy all the directories across. I have it working for a single directory; my code is below, cut down to a minimum.
#!/bin/sh
if [ ! -f ./backup.conf ]
then
    echo "Configuration file not found. Exiting!!"
    exit
fi
. ./backup.conf
unset PATH
# make sure we're running as root
if (( `$ID -u` != 0 )) ; then { $ECHO "Sorry, must be root. Exiting..."; exit; } fi ;
# attempt to remount the RW mount point as RW; else abort
$MOUNT -o remount,rw $SOURCEFILE $DESTINATIONFOLDER ;
if (( $? )); then { $ECHO "snapshot: could not remount $DESTINATIONFOLDER readwrite"; exit; } fi ;
# step 2: create new backup folder:
$MKDIR $FULLPATH
**Loop should go here**
# copy source directories to backup folder
$RSYNC \
    -va --delete --delete-excluded \
    --exclude-from="$EXCLUDES" \
    $SOURCEFILE $FULLPATH;
The config file is as follows:
SOURCE=path
DESTINATION=path2
BACKUPFOLDERNAME=/laptopBackup
My question is: what is the best approach to this task, i.e. how should I format the config file to import a variable number of paths into an array? Or is there a better way of doing this?
Bash script to backup multiple directories specified in config file
You could copy it with the command:
select * into new_table1 from primarytable
This will create a table named new_table1 with the data from primarytable. Of course, you should still back up the whole database as well.
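Since the asker wants a daily, date-stamped copy, a hedged sketch of how that SELECT INTO could be wrapped in dynamic SQL (table name is a placeholder; the syntax assumes SQL Server 2008 or later, and on Express you would schedule it with Windows Task Scheduler and sqlcmd, since Express has no SQL Agent):
DECLARE @name sysname = 'primarytable_' + CONVERT(char(8), GETDATE(), 112);  -- e.g. primarytable_20120928
DECLARE @sql  nvarchar(max) = N'SELECT * INTO dbo.' + QUOTENAME(@name) + N' FROM dbo.primarytable;';
EXEC sp_executesql @sql;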
I have one SQL Server database table. I need to take a backup of my table daily, based on the date. For that I need to write a script, but I'm new to SQL Server. Can anyone please help me? Thanks in advance.
How to take table backup in SQL Server Express
I use NSCachesDirectory (Library/Caches) and never had problems with Apple. (From the comments: you don't need to delete Caches data when the user exits the app; the Caches directory can be purged by the system on iOS 5.0.1 and up, and can also be cleared manually. It is meant for data your app needs but can recreate if it no longer exists, i.e. data produced by the developer rather than by the user. The tmp directory, on the other hand, should be cleaned up when the user exits the app.)
I download some data and save it to the Library/PrivateDocuments directory. For every file I download into Library/PrivateDocuments I set the "do not back up" attribute, and Apple still says: "In particular, we found that on launch and/or content download, your app stores too much data (10.3 MB after app launch) in the incorrect location. To check how much data your app is storing: Temporary files used by your app should only be stored in the /tmp directory; please remember to delete the files stored in this location when the user exits the app." P.S. I need those files to stay there: on launch I check for the files, and if some of them don't exist I download them again. So they aren't temp files and I don't want to delete them, and I don't know what to do. If you are familiar with this problem, please give me a hint. Thanks.
Rejected iOS app reason : Data Storage Guidelines
If you make an image of the filesystem, you will get it exactly as it was at the moment you made the image (links and everything; these are part of the filesystem information). You're probably confusing the filesystem itself with the logical entities it represents (files, directories...). I personally used the standard tool dd a long time ago to make/restore partition backups, and it worked perfectly.
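A hedged example of the dd approach (device and destination paths are placeholders; double-check them, since dd will happily overwrite the wrong disk):
# image the whole partition; hard links are preserved because they live inside the filesystem
dd if=/dev/sda1 of=/mnt/backup/sda1.img bs=4M conv=noerror,sync
# restore later by swapping the operands
dd if=/mnt/backup/sda1.img of=/dev/sda1 bs=4M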
I have a Linux filesystem that I would like to make backups of. I want to image the entire filesystem, for later restoration if needed. However, this particular filesystem contains multiple hard links to some files, which must be preserved by the backup and properly re-linked when it is restored, exactly as they are now. Is there a Linux tool that can efficiently image a filesystem, preserving hard links in the process? I would prefer an open source one if possible, although I'm willing to consider all options.
Backup solution for a filesystem containing hard links
You can simply serialize and deserialize your data structure; it certainly looks Serializable. Tip: if you prefer a more robust solution, you can create DB tables with a simple database (running outside the process of your app) so the data survives crashes, and reload your data structure from there. (For example, some simple Java-based DBs: JavaDB, H2, HSQLDB.)
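A minimal Java sketch of the serialization route, using the two maps from the question; the class and file name are made up for illustration:
import java.io.*;
import java.util.HashMap;
import java.util.Map;

public class ScanStateStore {
    // write both maps into one serialized HashMap (HashMap is Serializable)
    public static void save(File f, Map<String, Long> currentFiles, Map<String, Long> prevFiles)
            throws IOException {
        Map<String, Map<String, Long>> state = new HashMap<String, Map<String, Long>>();
        state.put("current", currentFiles);
        state.put("prev", prevFiles);
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(f))) {
            out.writeObject(state);
        }
    }

    @SuppressWarnings("unchecked")
    public static Map<String, Map<String, Long>> load(File f)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(f))) {
            return (Map<String, Map<String, Long>>) in.readObject();
        }
    }
}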
In my Java program I scan folders every hour and check them for changes. If there's a change, I receive a unique number that identifies the file. These data are stored in a field variable, so I need to store the data locally in case of a restart or crash. My data structure: Map<String, Long> currentFiles = new HashMap<String, Long>(); Map<String, Long> prevFiles = new HashMap<String, Long>(); Which kind of backup should I use? The data should be safe against modifications. The data aren't confidential.
backup variable [closed]
To configure s3cmd so it can run in your scripts, run:
s3cmd --configure
Run this command as the user who has to run the backup script, or copy the generated file to that user's home directory. To delete a file with s3cmd you use the del command, so in your script you would call s3cmd like this:
s3cmd del s3://bucketname/file
Of course, you can also use wildcards:
s3cmd del s3://bucketname/file*
You can name your backup files with the date, e.g. db.backup.20130411.tgz, and use the date to decide which previous backups to delete. By the way, the s3ql project ships an expire_backups script that might already do what you want.
I need a script for s3cmd that sends the deletion command for some files in my Amazon S3 bucket. I know there's this method built-in but it's not useuful. Let's see why. At the moment I have a script that backups my database every midnight and my websites files every Friday and Tuesday (still at midnight). The script I'm looking for should delete files in order to have on the bucket only: 1 backup per month (let's say, of the 1st day of every month) of both db and files. And leave all the backups (every day for db and tuesday and friday for files, so the "default" behavior) for the last 7 days. Example, if I'm on june 17th 2012 the situation should be: 1st January 2012: db and files 1st february 2012: db and files 1st march 2912: db and files 1st april 2012: db and files 1st may 2012: db and files 10th june 2012: db 11th june 2012: db 12th june 2012: db and files 13th june 2012: db 14th june 2012: db 15th june 2012: db and files 16th june 2012: db Then on july 2nd 2012 the bucket should contain: 1st of jan, feb, march, apr, may, june: both db and files last 7 days: "default" backups untouched (files backup only on tuesday and friday and db every midnight). The script would automate what I'm doing manually right now (I already have just 1 backup per past month, so I have to delete backups starting from 7 days ago going back to the last 1st-of-the-month backup). Sorry for the convoluted question, I hope it's clear enough. :D
Amazon S3: automatic deletion of certain files with s3cmd
Akeeba Backup has database table exclusion functionality. Refer to its documentation for details: https://www.akeebabackup.com/documentation/akeeba-backup-documentation/database-tables-exclusion.html
I have a database, my_db; and I have a joomla site, my_joomla_site, that stores its tables there. However, my_db also has other tables that are UNrelated to my_joomla_site. When I use Akeeba Backup to backup my_joomla_site, Akeeba packs all of the tables from my_db. I only want it to pack/zip the appropriate tables of course. Is there a way to tell Akeeba to only pack the appropriate tables?
joomla's akeeba is backing up ALL tables on my db
There are many differences:
- In A you are piping through gzip, which compresses the data before writing to disk. B writes plain SQL files, which can be 5-10 times bigger (results from my database). If your performance is disk bound, this could be the explanation.
- -c ("full inserts") is not specified in A.
- -q is not specified in A.
- For large databases, INFORMATION_SCHEMA queries can be a pain with MySQL (try executing SELECT * FROM information_schema.columns). Every dump in B has to run these queries, while A only has to do it once.
I use two different ways to back up my MySQL databases. mysqldump with --all-databases is much faster and performs far better than a loop that dumps every database into its own file. Why? And how can I speed up the looped version?
/usr/bin/mysqldump --single-transaction --all-databases | gzip > /backup/all_databases.sql.gz
versus this loop over 65 databases, even with nice:
nice -n 19 mysqldump --defaults-extra-file="/etc/mysql/conf.d/mysqldump.cnf" --databases -c xxx -q > /backup/mysql/xxx_08.sql
nice -n 19 mysqldump --defaults-extra-file="/etc/mysql/conf.d/mysqldump.cnf" --databases -c dj-xxx -q > /backup/mysql/dj-xxx_08.sql
nice -n 19 mysqldump --defaults-extra-file="/etc/mysql/conf.d/mysqldump.cnf" --databases -c dj-xxx-p -q > /backup/mysql/dj-xxx-p_08.sql
nice -n 19 mysqldump --defaults-extra-file="/etc/mysql/conf.d/mysqldump.cnf" --databases -c dj-foo -q > /backup/mysql/dj-foo_08.sql
mysqldump.cnf is only used for authentication; there are no additional options in it.
mysqldump with single tables much slower than with --all-databases
I think this script will be helpful to you for taking a database backup:
<?php
backup_tables('hostaddress', 'dbusername', 'dbpassword', 'dbname');

/* backup the db OR just a table */
function backup_tables($host, $user, $pass, $name, $tables = '*')
{
    $link = mysql_connect($host, $user, $pass);
    mysql_select_db($name, $link);
    $return = '';

    // get all of the tables
    if ($tables == '*') {
        $tables = array();
        $result = mysql_query('SHOW TABLES');
        while ($row = mysql_fetch_row($result)) {
            $tables[] = $row[0];
        }
    } else {
        $tables = is_array($tables) ? $tables : explode(',', $tables);
    }

    // cycle through
    foreach ($tables as $table) {
        $result = mysql_query('SELECT * FROM ' . $table);
        $num_fields = mysql_num_fields($result);

        $row2 = mysql_fetch_row(mysql_query('SHOW CREATE TABLE ' . $table));
        $return .= "\n\n" . $row2[1] . ";\n\n";

        while ($row = mysql_fetch_row($result)) {
            $return .= 'INSERT INTO ' . $table . ' VALUES(';
            for ($j = 0; $j < $num_fields; $j++) {
                $row[$j] = addslashes($row[$j]);
                $row[$j] = ereg_replace("\n", "\\n", $row[$j]);
                if (isset($row[$j])) {
                    $return .= '"' . $row[$j] . '"';
                } else {
                    $return .= '""';
                }
                if ($j < ($num_fields - 1)) {
                    $return .= ',';
                }
            }
            $return .= ");\n";
        }
        $return .= "\n\n\n";
    }

    // save file
    $handle = fopen('db-backup-' . time() . '-' . (md5(implode(',', $tables))) . '.sql', 'w+');
    fwrite($handle, $return);
    fclose($handle);
}
?>
I'm using the code below to make a backup of a MySQL database, but when I import the backup file into a new database, the file imports successfully and yet the new database is empty (no tables). This is the code I use:
<?php
$dbhost = 'localhost:3036';
$dbuser = 'root';
$dbpass = 'rootpassword';
$backup_file = $dbname . date("Y-m-d-H-i-s") . '.gz';
$command = "mysqldump --opt -h $dbhost -u $dbuser -p $dbpass ". "test_db | gzip > $backup_file";
system($command);
?>
back up mysql database using php
I suppose that you are a Windows user; in that case you can use Notepad to browse the backup file if you just need to check some records.
I want to open a .bak file that was created with SQL Server. Is there some method to open that database with another program? Thanks.
Can I open .bak file without SQL Server?
Commands in a script are run one at a time, in order, unless one of the commands "daemonizes" itself. (From the comments: clamscan will run after cpbackup finishes, unless cpbackup daemonizes itself; the asker later reported seeing the two commands run at the same time, which suggests cpbackup does hand work off to a background process.)
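If cpbackup does background itself, one hedged workaround (using the exact commands from the question) is to wait for the backup processes to disappear before starting the scan; the pgrep pattern is an assumption about the process name:
#!/bin/bash
/usr/local/cpanel/scripts/cpbackup
# wait for any backup workers cpbackup may have spawned in the background
while pgrep -f cpbackup > /dev/null; do
    sleep 60
done
clamscan -i -r --remove /home/
exit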
This is about running script.sh via ssh:
#!/bin/bash
/usr/local/cpanel/scripts/cpbackup
clamscan -i -r --remove /home/
exit
Does that mean it runs /usr/local/cpanel/scripts/cpbackup and, after it finishes, runs clamscan -i -r --remove /home/, or do the two commands run at the same time?
run job file via ssh command order how?
You don't need to modify the method. Convert your string to a URL: NSURL *url = [NSURL URLWithString:@"your string"];
In order to follow the Data Storage Guidelines I must use the below method to add a flag to say to not back it up to iCloud. However, the parameter here is for a NSURL. I need to pass it a NSString like from a line like so return [[self offlineQueuePath] stringByAppendingPathComponent:@"SHKOfflineQueue.plist"]; Here is the method that takes in a URL. - (BOOL)addSkipBackupAttributeToItemAtURL:(NSURL *)URL { if (&NSURLIsExcludedFromBackupKey == nil) { // iOS <= 5.0.1 const char* filePath = [[URL path] fileSystemRepresentation]; const char* attrName = "com.apple.MobileBackup"; u_int8_t attrValue = 1; int result = setxattr(filePath, attrName, &attrValue, sizeof(attrValue), 0, 0); return result == 0; } else { // iOS >= 5.1 NSError *error = nil; [URL setResourceValue:[NSNumber numberWithBool:YES] forKey:NSURLIsExcludedFromBackupKey error:&error]; return error == nil; } } Anyway, how would I modify the method above to achieve the same while taking in a NSString as a parameter? Thanks!
addSkipBackupAttributeToItemAtURL -> NSString parameter?
Assuming that you are storing the ringtone as a content:// Uri value, I would use either openInputStream() or getType() on ContentResolver. getType() is probably "the most lightweight", but it might be prone to false negatives (e.g., ringtone exists but the MIME type cannot be determined for some reason).
In my app, I'm storing the user's choice of a ringtone in a SharedPreference file. When the app is reinstalled and the backup is restored, I want to check if the ringtone still exists on the device, because if it doesn't I would want to use the default ringtone (as opposed to playing nothing). So to do so, I plan on overriding the onRestore method and checking if the ringtone is available on the device. So how can I go about checking if a ringtone exists on the Android device (I would prefer the most lightweight method possible)?
How can I check if a Ringtone exists?
Maybe you can include a copy of the pg_dump.exe binary for MS Windows (and the required DLLs) with your application, then invoke it with the proper parameters from the GUI. (You can use Dependency Walker to find out which libraries pg_dump.exe requires.)
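A rough C# sketch of invoking a bundled pg_dump from the application; the paths, database name and credentials are placeholders, and passing the password via PGPASSWORD is just one option (a .pgpass file works too):
using System;
using System.Diagnostics;

static class BackupRunner
{
    public static void RunPgDump()
    {
        var psi = new ProcessStartInfo
        {
            FileName = @"tools\pg_dump.exe",      // shipped alongside the application
            Arguments = @"-h localhost -U wfr_user -F c -f C:\kopia\wfr.backup WFR",
            UseShellExecute = false,              // required so the environment variable is passed
            RedirectStandardError = true,
            CreateNoWindow = true
        };
        psi.EnvironmentVariables["PGPASSWORD"] = "secret";   // placeholder credential

        using (var proc = Process.Start(psi))
        {
            string err = proc.StandardError.ReadToEnd();
            proc.WaitForExit();
            if (proc.ExitCode != 0)
                Console.Error.WriteLine(err);
        }
    }
}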
I've got application which contains a database based on Postgresql. I wannna get a backup of this database using my application. for ex. I wanna click an option in program menu and wanna get file with database backup. I got all administratior rights for this database. Application is wrtitten in .net 4.0 (C#), windows forms. How could I solve my problem? i' ve tried it but it's not working: string zapytanie = @"pg_dump WFR > C:\kopia"; string pol = Ustawienia.ConnectionString; NpgsqlConnection conn = new NpgsqlConnection(pol); conn.Open(); NpgsqlCommand comm = conn.CreateCommand(); comm.CommandText = zapytanie; comm.ExecuteNonQuery(); conn.Close(); errors: ERROR: 42601: syntax error at or near "pg_dump" stacktrace: w Npgsql.NpgsqlState.<ProcessBackendResponses_Ver_3>d__a.MoveNext() w Npgsql.ForwardsOnlyDataReader.GetNextResponseObject() w Npgsql.ForwardsOnlyDataReader.GetNextRowDescription() w Npgsql.ForwardsOnlyDataReader.NextResult() w Npgsql.ForwardsOnlyDataReader..ctor(IEnumerable`1 dataEnumeration, CommandBehavior behavior, NpgsqlCommand command, NotificationThreadBlock threadBlock, Boolean synchOnReadError) w Npgsql.NpgsqlCommand.GetReader(CommandBehavior cb) w Npgsql.NpgsqlCommand.ExecuteNonQuery() w Faktury_i_Rachunki_2.Forms.FrmKopiaBezp.BtUtworzKopie_Click(Object sender, EventArgs e) w D:\nwfr3\Faktury i Rachunki 2.0\Forms\FrmKopiaBezp.cs:wiersz 38
backup of postgresql database from my application
You want to use cron if you're on Linux, or Task Scheduler if you're on Windows, to schedule a periodic backup script.
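For instance, a hedged crontab entry that copies the upload directory to a backup location once a day (paths are placeholders; install it with crontab -e):
# every day at 02:30, mirror the uploaded files to the backup directory
30 2 * * * rsync -a /var/www/myapp/uploads/ /backups/uploads/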
I have created a web application storing files online. I want to get a backup of those uploaded files on a regular basis, meaning after a day or an hour. What should I do?
How to get a backup of files at specific interval of time?
You can use xcopy to recreate the directory structure with the /T switch. You will also need the /E switch to include empty directories and subdirectories. At a cmd prompt, type xcopy /? for help on all the switches. Hope this helps!
I currently have a batch file which reads a filelist from folder A, if it's over a certain date copies it to folder B and deletes the original. The problem is that I would like the batch file to recreate the file structure inside folder A, and I'm not quite sure how to do this. Here's the code: echo off echo REM :- THE FOLLOWING FILES ARE OLDER THAN 7 DAYS AND CAN BE MOVED TO BACKUP -: pause forfiles /p c:\test\one /s /m *.gif /c "cmd /c dir /b/s/t:w @path" echo. echo REM :- CONTINUE TO MOVE THOSE FILES TO THE BACKUP LOCATION -: pause forfiles /p c:\test\one /s /m *.gif /c "cmd /c mkdir c:\test\two\@relpath" echo. pause forfiles /p c:\test\one /s /m *.gif /c "cmd /c copy /-y @path c:\test\two\@relpath" pause echo. echo REM :- THE FOLLOWING FILES EXIST IN THE BACKUP LOCATION -: echo. pause dir "c:\test\two" /a/s/b /o:gn echo. echo REM :- CONTINUE TO DELETE THOSE BACKUPS -: pause rd /s "c:\test\two" echo. pause The problem is that @relpath seems to take the whole path and filename, so in folder B I end up with each filename inside a folder of its own name (e.g. 'filename.gif' inside a folder of the same name). How can I strip the filename from the path, create a file structure in folder B based on that, and then copy the file to the correct place? Thanks
Batch-file backup in windows, recreating nested folders/files
Avoid using dd; programs like ddrescue have much better status reporting, imply noerror, and conversion is not desired on disk images at all.
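A hedged example of the ddrescue alternative, reusing the paths from the question (the mapfile lets an interrupted or error-ridden run be resumed):
# first pass: copy everything readable from the disk to an image on the USB drive
ddrescue /dev/sda /mnt/sdb1/backups/disk.img /mnt/sdb1/backups/disk.map
# rerun the same command later to retry the bad areas recorded in the mapfile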
I will be backing up a large (750GB) disk to an external USB disk using dd. What is the most appropriate use of notrunc, noerror and sync conversion arguments? It seems some people use them in different ways, or not at all. Also, what is the best block size? USB is likely to be the bottleneck here. dd if=/dev/sda bs=1M | gzip -c > /mnt/sdb1/backups/disk.img.gz gzip -dc /mnt/sdb1/backups/disk.img.gz | dd of=/dev/sda bs=1M Thanks.
dd disk imaging - which conv switches to use and when?
Don't put a space or an equals sign between the -p and the password. Also, you are missing a space before the -p:
$command = 'mysqldump -h' . $dbhost . ' -u ' . $dbuser . ' -p' . $dbpass . ' ' . $dbname . ' > ' . $backupFile;
(If it still produces an empty file after that, echo the command first to make sure the rest of it looks right.)
I am trying to back up a MySQL database using this code:
include ("functions_cp/f_connection.php");
Sqlconnection();
$dbname = "Reservebox";
$dbhost = "localhost";
$dbuser = "root";
$dbpass = "123";
$backupFile = $dbname . date("Y-m-d-H-i-s") . '.sql';
$command = 'mysqldump -h' . $dbhost . ' -u ' . $dbuser . '-p =' . $dbpass . ' '. $dbname . ' > ' . $backupFile ;
system($command);
The script runs fine and generates a .sql file, however the file is empty. How can I fix this problem? Thanks.
backup .sql file is empty
Your best bet is the MySQL Event Scheduler; MySQL 5.1.6 and later come with it. Using the Event Scheduler you can do something like:
CREATE EVENT MyEvent
ON SCHEDULE AT TIMESTAMP '2011-12-30 23:59:00'
DO
SELECT * INTO MY_NEW_TABLE FROM MY_CURRENT_TABLE;
(From the comments: remember to turn the scheduler on with SET GLOBAL event_scheduler = ON; in case it has not been enabled.)
On 31 Dec 2011, I have to copy the data of one of my database tables into another table automatically at 11:59. What steps should I follow?
How to make a backup copy of a table at a specific date/time using MySQL?
A one-liner to remove files more than 7 days old:
find ${path_to_files} -daystart -maxdepth 1 -mtime +7 -exec rm -rf {} \; &>/dev/null
Maybe you could adapt it to your needs by skipping the files whose age is a multiple of 15 days...
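A rough sketch of that adaptation (the backup directory is a placeholder, and the "multiple of 15 days" rule is computed from each file's mtime; this is illustrative, not a built-in Webmin feature):
#!/bin/bash
backup_dir=/var/webmin-backups        # placeholder path
now=$(date +%s)

for f in "$backup_dir"/*; do
    mtime=$(stat -c %Y "$f")
    age_days=$(( (now - mtime) / 86400 ))
    # keep everything from the last 7 days, plus every 15th day after that
    if [ "$age_days" -gt 7 ] && [ $(( age_days % 15 )) -ne 0 ]; then
        rm -f -- "$f"
    fi
done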
I am using Webmin to back up automatically every day. I want to automatically delete all backups older than 7 days, unless they fall on the every-15-days schedule. I guess I need to write some sort of bash script to do this; does anyone know of a way built into Webmin, or a script that does this already? Summary: daily backups are already being made; backups need to be retained for 7 days from the current date and removed otherwise, UNLESS they are one of the every-15-days backups. Thanks
Webmin Automatic Backup Cleanup
"Is there some way to remove workspaces and build controller settings from TFS backup (before it will be restored to the second server)?" I'm fairly confident that the answer is, unfortunately, no. TFS backup (including backup to restore on another server) is at the database level, and that is where all the state is held. To move only part of the data would require moving only part of the database¹. It is quite possible to use the command line to enumerate and delete workspaces (see tf workspace /delete) of other users from an account with sufficient access.
¹ Or databases, if using TFS 2005 or 2008.
There are two TFS servers. We need to move data from the first server to the second server. We need to move all data except workspaces and build controller settings, but a backup of the first TFS server contains that data too. Is there some way to remove workspaces and build controller settings from a TFS backup (before it is restored to the second server)? Thanks to Richard, I will make the question more specific: did somebody manage to find the set of SQL commands to delete the data about workspaces and build controller settings from the backup database (yes, TFS 2010) without breaking anything?
Remove workspaces and build controller settings from TFS backup
I think you are just missing a -c on the gzip line, try: $MYSQLDUMP -u $MYSQLUSER -p$MYSQLPASS --all-databases | $GZIP -c9 > $BACKUP_DIR/$NAME.sql.gz
I am trying to make a bash script to backup my sevrer, however it is creating empty tar archive and empty sql files and I don't know why. Can anyone see the problems here? #!/bin/bash SERVER_DIR="/var/www/vhosts/site.org" DATE=$(date +"%d-%m-%Y") BACKUP_DIR="/backups/$DATE" NAME="full-$DATE" MYSQLUSER="admin" MYSQLPASS="pass" MYSQLDUMP="$(which mysqldump)" GZIP="$(which gzip)" mkdir -p $BACKUP_DIR tar -zcvf $BACKUP_DIR/$NAME.tar.gz $SERVER_DIR $MYSQLDUMP -u $MYSQLUSER -p$MYSQLPASS --all-databases | $GZIP -9 > $BACKUP_DIR/$NAME.sql find /backup/ -mtime +31 -exec rm -rf {} \;
Bash backup script
You could always just dump ALL the databases: mysqldump --all-databases | gzip -9 > /backup/dbs.bak.gz That'd free you from having to keep track of which dbs there are. The downside is that restoring gets a bit more complicated. As for using root, there's no reason you couldn't create another account that has permissions to do backups - you should never use the root account for anything other than initial setup.
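If you go the dedicated-account route, a hedged example of a minimal backup-only user (the privilege list is a common baseline for mysqldump; the username and password are placeholders):
CREATE USER 'backup'@'localhost' IDENTIFIED BY 'choose-a-strong-password';
GRANT SELECT, LOCK TABLES, SHOW VIEW, EVENT, TRIGGER ON *.* TO 'backup'@'localhost';
-- then in the cron script:
--   mysqldump -ubackup -p'choose-a-strong-password' --all-databases | gzip -9 > /backup/dbs.bak.gz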
I have several websites hosted on a VPS and am currently performing database backups by running a shell script via cron that looks something like this: mysqldump -uusername1 -prootpassword dbname1 > /backup/dbname1.bak mysqldump -uusername2 -prootpassword dbname2 > /backup/dbname2.bak mysqldump -uusername3 -prootpassword dbname3 > /backup/dbname3.bak I have a couple of concerns about this process. Firstly, I'm using the root server password to perform mysqldump, and the file is being stored in clear text on the server (not publicly accessible or anything, but there are obviously concerns if I grant other users access to the server for one reason or another). I'm using root because it's simpler than tracking everybody that creates a database down and asking them for their specific db passwords. Secondly, this process only works if people inform me that they've added a database (which is fine for the most part, we're not doing anything super complicated over here). I would prefer to have a backup of everything without worrying that I've overlooked something.
How best could I optimize the way I'm backing up MySQL databases?
I always do this just via ssh:
tar czf - FILES/* | ssh me@someplace "tar xzf -"
This way, the files end up all unpacked on the other machine. Alternatively,
tar czf - FILES/* | ssh me@someplace "cat > foo.tgz"
puts them in an archive on the other machine, which is what you actually wanted.
I have a large number of files which I need to backup, problem is there isn't enough disk space to create a tar file of them and then upload it offsite. Is there a way of using python, php or perl to tar up a set of files and upload them on-the-fly without making a tar file on disk? They are also way too large to store in memory.
is it possible to take a large number of files & tar/gzip and stream them on-the-fly?
Do you have a timestamp field on your tables? Using a TIMESTAMP column with the ON UPDATE CURRENT_TIMESTAMP clause would let you know the modification time of each row. That way, you could easily SELECT the rows WHERE the timestamp is greater than a given value.
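A hedged sketch of that approach (table, column and database names are placeholders; it assumes the snapshot table shares the source's primary key, that MySQL's one-auto-updating-TIMESTAMP-per-table limit on pre-5.6 versions isn't already used, and that deleted rows are handled separately):
-- add a last-modified column to the source table
ALTER TABLE main_db.orders
    ADD COLUMN updated_at TIMESTAMP NOT NULL
    DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;

-- daily incremental copy of rows touched since the last run (@last_run is a placeholder)
INSERT INTO backup_db.orders_snapshot (id, customer_id, total, updated_at)
SELECT id, customer_id, total, updated_at
FROM   main_db.orders
WHERE  updated_at > @last_run
ON DUPLICATE KEY UPDATE
    customer_id = VALUES(customer_id),
    total       = VALUES(total),
    updated_at  = VALUES(updated_at);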
I have 2 MySQL databases on the same Linux box. They aren't super large, but some tables hold around 500,000 rows, increasing by about 40,000 rows per month. What I need to do is to write a partial backup from one database to the other once per day. This partial backup is a snapshot and apart from the backups will not have any fresh data written to it. It contains only some of the tables of the main db, and from those tables only some of the fields. It is easy enough to write a PHP script that deletes the backup database, and then recreates it with the desired data to get a fresh snapshot, however i am wondering if there is a way to do this incrementally with PHP and only write new or changed data.
Incremental MySQL
No, it is not safe to delete the transaction log. But you can shrink it.
SQL Server 2005:
BACKUP LOG XXX WITH TRUNCATE_ONLY
DBCC SHRINKFILE ...
SQL Server 2008:
ALTER DATABASE xxx SET RECOVERY SIMPLE
DBCC SHRINKFILE ...
ALTER DATABASE xxx SET RECOVERY FULL
"Can I restore a transaction log backup to another server?" Yes, you can, but first you need to restore the LAST FULL backup to the other server.
Is it safe to delete the log file? I ask because I have a very large log file. Another question: can I restore a transaction log backup to another server? I just want to be sure about that.
Can file log deletion raise any problem? [closed]
You can create a Symfony task. If you pass in an environment (e.g. dev, prod) or a connection, you get access to the Doctrine connection manager and can create a connection. You can use that connection to make a database dump, or get the connection details from the connection manager. You can use the doctrine:insert-sql task as a template for your own task. I've done something similar in the past.
I'm trying to do a php script to backup my database. Here's what I've tried so far : $command = "mysqldump -u [username] -p [password] [databasename] | gzip > db.sql.gz"; $this->output = system($command); How do I get the password and username from databases.yml ? How can I do a script that sends me the backup file, instead of saving it on the server (à la phpmyadmin) ?
Backup MySQL via php, in a Symfony application
Stage the data to S3. You can then download it directly, or have Amazon send it to you on a physical drive using AWS Import/Export.
I have just inherited a system running on EC2, running several instances and there are some large data volumes - 2TB+. Any suggestions on best (cheapest + quickest) way to back these up and move to local machines ? I think the files will tar and zip efficiently but still be several 100g. Should I just forget about pulling the data down and build new instances based on ec2 backups.
Moving large datasets in AWS
There are lots of devices that are installed using what's called a "no driver" INF. These INFs provide enough information that Device Manager has something to show for the device (thus keeping them out of the "unknown devices" category) but don't actually install any drivers. These devices do not need drivers because they are managed by the O/S itself, the BIOS, or both. Usually these devices are all "installed" using machine.inf, which has a giant list of known no-driver devices. As for any software that claims to back up the drivers for these devices, either it's just copying the INF or it's full of it, because there's nothing but the INF to back up. -scott
I'm stuck. To cut a long story short, the task is to enumerate all driver files for backup. For some drivers, like the display adapter driver, I use SetupScanFileQueue(queueHandle, SPQ_SCAN_USE_CALLBACKEX, NULL, DumpDeviceDriversCallback, &count, &scanResult) from setupapi, and that works fine: in DumpDeviceDriversCallback I can get the source of each device driver file and then copy it to the backup location one by one. However, the same function never invokes the callback for system drivers; for example, for "Direct memory access controller" I cannot get the list of files. Funny thing: Windows Device Manager also cannot find any files for some of the system devices. Some specialised software like DriverMax and Double Driver actually CAN back up those drivers, so this problem is evidently solvable. Can anyone explain to me what is going on here?
Driver Backup using setupapi
You could use the logrotate(8) tool that came with your distro. :) The manpage has an example that looks close to your need:
/var/log/news/* {
    monthly
    rotate 2
    olddir /var/log/news/old
    missingok
    postrotate
        kill -HUP `cat /var/run/inn.pid`
    endscript
    nocompress
}
Well, not the monthly bit, or restarting inn :) but I hope you get the idea that you could easily add a new config file to /etc/logrotate.d/ and not worry about it again. :)
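A hedged sketch of what a config file tailored to the question might look like, e.g. /etc/logrotate.d/project-logs (the retention and option choices are illustrative; note that olddir generally has to be on the same filesystem as the logs):
/opt/project/logs/*.log {
    daily
    rotate 365
    olddir /opt/bkp/logs
    dateext
    compress
    copytruncate
    missingok
    notifempty
}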
Hello, I keep my log files under /opt/project/logs/ and I want to copy these daily to /opt/bkp, compressing them. For this I have written the following, which works well:
#!/bin/bash
getdate(){
    date --date="$1 days ago" "+%Y_%m_%d"
}
rm -rf "/opt/bkp/logs/myapp_log_"$(getdate 365).gz ;
/bin/cat /opt/project/logs/myapp.log | gzip > /opt/bkp/logs/myapp_log_`date +%Y_%m_%d`.gz ;
echo "" > /opt/project/logs/myapp.log ;
However it is neither robust nor general: I will have several applications saving files under their own names, e.g. app1.log and app2.log, in the same /opt/project/logs/ folder. How can I turn this into a "function" where the script walks /opt/project/logs/ and backs up every file ending with the .log extension?
improving my backup bash script
If you can and are allowed to create database links, create one and then copy the data over the database link. That would be, on the destination DB:
create database link db_link connect to <username> identified by <password> using '<connection_string>';
then
insert into projects select * from projects@db_link where ....
Or, alternatively, try the COPY command of SQL*Plus:
SQL> copy from <db_src> to <db_dest> append projects using select * from projects where ....
Could somebody tell me how to do this in Oracle: I have a table named project in which there a multiple projects. I want to copy the data of a particular project from the source database to another database. The project doesn't exist(in the project table) in the destination database. I want something like: copy from sourceDatabase to destinationDatabase create new_table using select * from project where name='Name of the project to be copied'
copy data of a project(from a project table) from one database to other in Oracle
Assembla provides an 'Import/Export' option under the Subversion tab which allows you to get complete historical dumps of your repository. On a repository you host yourself, you can use svnadmin dump.
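A hedged example for the self-hosted case (repository and backup paths are placeholders):
# dump the full history, compressed, with the date in the file name
svnadmin dump /var/svn/myrepo | gzip > /backups/myrepo-$(date +%F).svndump.gz

# later, restore into a fresh repository
svnadmin create /var/svn/myrepo-restored
gunzip -c /backups/myrepo-2011-01-10.svndump.gz | svnadmin load /var/svn/myrepo-restored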
I am still pretty new to VCSs in general. I have an SVN repository hosted at Assembla. I know how to check out from and commit to the repository, but what if I wanted to get a backup of the repository itself to save off-site? How can I do that?
Backup hosted repository to off-site location
Take a look at the source of the DevCon utility, which is included in the Windows Driver Kit (WinDDK) for Windows 2008 R2/Windows 7 (7.1.0). From its description: "DevCon is a command-line tool that displays detailed information about devices, and lets you search for and manipulate devices from the command line. DevCon enables, disables, installs, configures, and removes devices on the local computer and displays detailed information about devices on local and remote computers. DevCon is included in the Windows DDK." This should point you to which APIs you need.
Can someone please help me and tell me how I can back up a Windows driver programmatically using Delphi? Any code samples or links to articles are highly appreciated. Thanks for your time.
How to backup a Windows driver using Delphi
Looks like demas was correct: this isn't possible. I ended up using the description field in my backup to store the extra data I needed. Thanks.
I have a normal SQL backup. Is it possible to use C# and SMO to read information from a table inside my backup file? The backup file is a normal SQL .bak backup (in simple mode). If SMO is not able to do this, is there any other technology that can assist? Thanks.
SQL Server Managment objects to read data out of a backup file
If you check out the assembly directory (GAC) on the Win7 machine, you'll see an entry called Microsoft.SqlServer.ConnectionInfo (browse to %windir%\assembly). In my case I'm using version 10.0.0.0. In your case, you will at least see version 9.0.242.0, as that is what your program is compiled against (I find it unlikely that you're not referencing the DLL from the GAC). If you don't have the same version installed on both machines, you've spotted the problem and you need to update the client library accordingly. I think it's likely that you have a newer version running on the XP machine, since you just installed 2008 there. If you need more help after checking this out, you can comment here.
I'm using the following code to back up a SQL Database : void BackupDatabase(string sConnect, string dbName, string backUpPath) { using (SqlConnection cnn = new SqlConnection(sConnect)) { cnn.Open(); dbName = cnn.Database.ToString(); ServerConnection sc = new ServerConnection(cnn); Server sv = new Server(sc); // Create backup device item for the backup BackupDeviceItem bdi = new BackupDeviceItem(backUpPath, DeviceType.File); // Create the backup informaton Microsoft.SqlServer.Management.Smo.Backup bk = new Backup(); bk.PercentComplete += new PercentCompleteEventHandler(percentComplete); bk.Devices.Add(bdi); bk.Action = BackupActionType.Database; bk.PercentCompleteNotification = 1; bk.BackupSetDescription = dbName; bk.BackupSetName = dbName; bk.Database = dbName; //bk.ExpirationDate = DateTime.Now.AddDays(30); bk.LogTruncation = BackupTruncateLogType.Truncate; bk.FormatMedia = false; bk.Initialize = true; bk.Checksum = true; bk.ContinueAfterError = true; bk.Incremental = false; // Run the backup bk.SqlBackup(sv); } } In my system (Win7 x64) it works fine but in destination system (WinXP SP3 x86) I receive the below error : How can I fix it ? Thanks.
SQL Backing up with SMO & C#?
If you need to dump the entire database, a much simpler solution is to work as a superuser (postgres by default). Isn't that an option?
pg_dump -U postgres my_database > backup.sql
I'm trying to backup my database with: pg_dump my_database > backup.sql unfortunately there are no privileges set for many objects in the database, therefore the command does not work! Furthermore this does not grant privileges as expected: GRANT ALL ON DATABASE my_database TO root Any ideas?
Granting privileges to ALL objects in a database - Postgres
There's no way to rename that directory using standard Mercurial configuration options. If you're on Unix, and I'm guessing you are if .hg sounds hidden, you could use a pre-backup script (or cron job) to snapshot it using cp -al into something with a different name. Using -l gets you hardlinks, so it won't actually take up extra disk. However, most people back up their .hg repositories with a push to a different Mercurial server, which can be easily scripted too.
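A hedged sketch of the pre-backup snapshot (the repository path and snapshot name are placeholders):
#!/bin/sh
# hard-link the hidden .hg directory into a visible name the backup tool will pick up
cd /srv/repos/myproject || exit 1
rm -rf hg-backup
cp -al .hg hg-backup

# or, instead, push to a backup server:
#   hg push ssh://backup-host//srv/hg/myproject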
Our company policy is not to back up hidden folders. Is it possible to change the .hg folder name to something visible?
Mercurial repository backup when hidden files can not be backupped
You are going to want to look into version control. There are many options, the most popular of which are SVN, git and mercurial. The latter two are probably along the lines of what you're looking for. It is generally good practice to use version control for projects of any real size.
Is it possible to backup an entire eclipse PDT project (or parts of it), so that it can be restored to that state later? Ideally this would only save changes from the last backup to save space. If it makes any difference I have Mylyn installed (Maybe you could backup the current task context?) and am using CodeIgniter. Thanks, Lemiant
Create backups of eclipse projects
Make sure of these things: (1) the database actually exists on the server you are connecting to, and (2) the login and user that you use have the rights to back up the database.
I want to backup my database using Linq to SQL: Dim sql As String = "BACKUP DATABASE SeaCowDatabase TO DISK = _ '" + sfd.FileName + "'" db.ExecuteCommand(sql) But instead, I get this error: System.Data.SqlClient.SqlException (0x80131904): Could not locate entry in sysdatabases for database 'SeaCowDatabase'. No entry found with that name. Make sure that the name is entered correctly. BACKUP DATABASE is terminating abnormally. at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning() at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async) at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe) at System.Data.SqlClient.SqlCommand.ExecuteNonQuery() at System.Data.Linq.SqlClient.SqlProvider.Execute(Expression query, QueryInfo queryInfo, IObjectReaderFactory factory, Object[] parentArgs, Object[] userArgs, ICompiledSubQuery[] subQueries, Object lastResult) at System.Data.Linq.SqlClient.SqlProvider.ExecuteAll(Expression query, QueryInfo[] queryInfos, IObjectReaderFactory factory, Object[] userArguments, ICompiledSubQuery[] subQueries) at System.Data.Linq.SqlClient.SqlProvider.System.Data.Linq.Provider.IProvider.Execute(Expression query) at System.Data.Linq.DataContext.ExecuteCommand(String command, Object[] parameters) at SeaCow.Main.Ribbon_Save_Click(Object sender, EventArgs e) in C:\Users\Daniel\My Programs\Visual Basic\SeaCow\SeaCow\SeaCow\Main.vb:line 595 Anyone have any suggestions?
VB.NET - SqlException: Could not locate entry in sysdatabases for database
Portions of the transaction log are only marked inactive when a transaction log backup is performed on the database, and a portion (VLF) is only marked inactive if there are no outstanding transactions within it. A full backup, whether in full recovery mode or bulk-logged mode, will not mark any portion of the transaction log inactive. Paul Randal devoted an entire post to this question: http://www.sqlskills.com/BLOGS/PAUL/post/Misconceptions-around-the-log-and-log-backups-how-to-convince-yourself.aspx
Is the database transaction log automatically truncated after we create a backup and the DB is in full recovery mode? Or do we need to make 2 different backups, let's say 1 in full recovery mode and a different one for the log file.
Is The Transaction Log Truncated When Doing a Backup In Full Recovery Mode?
Either write a udev rule to run the script, or write a D-Bus client that listens to hal.
I want automatic backups of some directories on my pendrive whenever I insert the pendrive into my laptop running Ubuntu 10.04. What would be the simplest solution for that?
Run a script when I put a pendrive in
Maybe instead of writing your own backup script you could use a Python tool called rdiff-backup, which can create incremental backups?
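A hedged example of what the rdiff-backup calls might look like for the paths in the question (the exact Windows invocation depends on how rdiff-backup was installed):
rdiff-backup "C:\Documents and Settings\rgolwalkar\Desktop\Desktop\Dr Py\Final_Py" "C:\Documents and Settings\rgolwalkar\Desktop\Desktop\PyDevResourse"
rem prune increments older than four weeks from the backup repository
rdiff-backup --remove-older-than 4W "C:\Documents and Settings\rgolwalkar\Desktop\Desktop\PyDevResourse"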
#Filename:backup_ver1 import os import time #1 Using list to specify the files and directory to be backed up source = r'C:\Documents and Settings\rgolwalkar\Desktop\Desktop\Dr Py\Final_Py' #2 define backup directory destination = r'C:\Documents and Settings\rgolwalkar\Desktop\Desktop\PyDevResourse' #3 Setting the backup name targetBackup = destination + time.strftime('%Y%m%d%H%M%S') + '.rar' rar_command = "rar.exe a -ag '%s' %s" % (targetBackup, ''.join(source)) ##i am sure i am doing something wrong here - rar command please let me know if os.system(rar_command) == 0: print 'Successful backup to', targetBackup else: print 'Backup FAILED' O/P:- Backup FAILED winrar is added to Path and CLASSPATH under Environment variables as well - anyone else with a suggestion for backing up the directory is most welcome
Python Script to backup a directory
I think this is a case where you'd be better off using something like rsync rather than a home-grown solution.
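For comparison, a hedged sketch of pulling a client's monthly backup over SSH with resume support (host names and paths are placeholders; --partial keeps partially transferred files so an interrupted run can pick up where it left off):
rsync -avz --partial --progress \
    backupuser@client-host:/backups/monthly-2010-02.bak \
    /srv/client-backups/clientA/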
I have clients out there running SQL Server Express 2005, and each of these needs a backup every month; that backup then needs to be moved to our server in case they lose theirs. Our software automatically backs up the database each month, but we have to go in manually and copy it across. Is there any way to automate the copying of files of up to 800 MB from their machine to ours each month, perhaps using FTP? If using FTP, it has to support resume, in case we lose the connection three quarters of the way through, which happens quite often. I would like to write this functionality into our VB.NET application so that it only requires the .NET Framework and no third-party controls.
I need to pull large backup files from my clients' servers to my server every month
I've used rsync just fine for moving files around and using them on other boxes. I've never done it with MySQL running live, but I've restored from the files in /var/lib/mysql before with no problems. It's a good way to "copy" databases over to your development box. I suggest shutting down MySQL, moving the files over, then starting it back up again; that is how I've done it when necessary. mysqldump gives you nice, neat SQL code though, which is good if you ever need to "tweak" something with sed along the way. I'd have no worries about using rsync, though; I use it for many purposes, including pushing code updates out to client machines.
I usually use mysqldump to export a database. However, when the database is really large, it seems much faster and less intensive to just gzip the database files directly, without involving the MySQL daemon, and copy that to the other database server, e.g.:
tar -czvf {db_name}.sql.tgz /var/lib/mysql/{db_name}
Is this a good method? What are the (dis)advantages? I also read another post here that mentioned:
rsync /var/lib/mysql/ ...
Would it be a good option to just use rsync to keep a backup DB in sync with the development DB?
Best way to do a MySQL large database export
If you did a full recovery, then that is the result I would expect: the DROP of SCOTT.DEPT was applied to the database when you recovered and fed RMAN all the outstanding archived logs. You want to do a point-in-time recovery, to a time before you issued the DROP statement:
rman target sys/manager@db
RUN {
  SET UNTIL TIME 'Feb 3 2010 08:30:00';
  RESTORE CONTROLFILE;
  ALTER DATABASE MOUNT;
  RESTORE DATABASE;
  RECOVER DATABASE;
}
More info here: Oracle 10.2 Backup and Recovery Basics - Performing Database Point-In-Time Recovery. Alternately, you could leave the RECOVER DATABASE step off and just RESTORE the database followed by an OPEN RESETLOGS; that would allow you to skip applying any changes in the archived logs.
I want to backup via RMAN and delete scott.dept and again restore everything. (this is for testing RMAN mechanism) I wrote like this : 1)rman target sys/manager@db 2)in sql*plus shutdown immediate; startup mount exclusive; ALTER DATABASE ARCHIVELOG; 2)CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'g:\db\db_cf%F'; 3)BACKUP DATABASE PLUS ARCHIVELOG; 4)alter database open; 5)drop scott.dept 6)in sql*plus shutdown immediate; startup mount exclusive; ALTER DATABASE ARCHIVELOG; 7)Restore Database; 8)Recover Database; At the end it shows me : successfully completed . but scott.dept not restore yet; why? Thanks ...
How to restore via RMAN?
Snapshotting, or 'Shadow Copy' as Microsoft calls it; see Shadow Copy on Wikipedia.
I created a backup disk image of my disk yesterday and the software told me to close all Windows programs to make sure the process finishes successfully. I did that, but I was wondering what happens when some program does write to the disk nevertheless during the process. Windows 7 is a complex system and surely various log files and such are written continuously (the disk has one partition which contains the Windows install too). How does the backup software handle it when the disk content is changed during image creation? What is the algorithm in this case?
How do backup apps which create a system image handle disk changes during the image creation process?
If you are using SQL Server, you could use SQL Server Management Objects (SMO). An example can be found here. Kindness, Dan
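A rough C# sketch of step 1 (restoring the v1.0 backup) with SMO; the server name, database name and backup path are placeholders, and the SMO assemblies (Microsoft.SqlServer.Smo, Microsoft.SqlServer.SmoExtended, Microsoft.SqlServer.ConnectionInfo) must be referenced:
using Microsoft.SqlServer.Management.Common;
using Microsoft.SqlServer.Management.Smo;

static class DbFixture
{
    public static void RestoreV10Backup()
    {
        var server = new Server(new ServerConnection(@".\SQLEXPRESS"));

        var restore = new Restore
        {
            Database = "AppDb",
            Action = RestoreActionType.Database,
            ReplaceDatabase = true        // overwrite whatever the last test run left behind
        };
        restore.Devices.AddDevice(@"C:\backups\AppDb_v1.0.bak", DeviceType.File);
        restore.SqlRestore(server);
    }
}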
I have an application extension which I need to test. Part of the extension applies some updates to the application database schema (via the applications API). i want to test that given version 1 of the application when my class is run the schema becomes v1.1 and that certain queries for items which should exist in 1.1 return correctly. I have backup of the database at schema v1.0 and what I would like to do in my tests is: 1/ restore the database from a backup 2/ call the code which does the update 3/ call several methods which verify that the schema updates have succeeded 4/restore the database from a backup Are there classes I can use to do this restore in my c# code, or do I have to execute the command in shell process?
Restore Database backup from code?
It's interpreting your dots as separate prefixes, while in fact, I'm guessing, they're just part of your schema name...? In that case, increasing the number of prefixes allowed is not what you want; instead, bracket-quote the whole name, something like this:
ALTER SCHEMA dbo TRANSFER [[email protected]].tablename
I am transferring a database from one hosting provider to another. The current provider uses the domain name as part of the user name, and the domain name is a .co.nz domain, so some objects in the database have a fully qualified name of [email protected]. I'm trying to alter the schema of these objects to put them into the dbo schema using:
ALTER SCHEMA dbo TRANSFER [email protected]
But I get Error Message 117: The object name '[email protected]' contains more than the maximum number of prefixes. The maximum is 1. In another database I get the same error message but the maximum number is 2. So obviously the maximum number of prefixes can be set... somewhere. How do I increase the maximum number of prefixes so I can transfer securables out of [email protected] and into the dbo schema?
How to resolve Sql Server Error Message 117 - Too many prefixes for object name?
I'm using the same technique to make a backup of a database. I've created a stored procedure as follows:
Create Procedure [dbo].[CreateBackup]
As
Begin
    Declare @path nvarchar(256), @filename nvarchar(256), @currentDateTime datetime
    Set @currentDateTime = GetDate()
    Set @path = 'C:\DBBackup\'
    Set @filename = @path + Cast(DatePart(month, @currentDateTime) As nvarchar) + Cast(DatePart(day, @currentDateTime) As nvarchar) + '.bak'
    Backup Database Foo To Disk = @filename
    Set @currentDateTime = DateAdd(day, -3, @currentDateTime)
    Set @filename = 'del ' + @path + Cast(DatePart(month, @currentDateTime) As nvarchar) + Cast(DatePart(day, @currentDateTime) As nvarchar) + '.bak'
    Exec xp_cmdshell @filename
End
To use xp_cmdshell, you have to enable it first: http://weblogs.sqlteam.com/tarad/archive/2006/09/14/12103.aspx
Using the osql command, a SQL backup of a database is created and saved to disk, then renamed to include the date of the day the backup was taken. All these files are saved in a single folder. For example, Batch1.bat does the following: 1) creates backup.bak; 2) renames it to backup 12-13-2009.bak (this is done by a combination of %, ~, -, etc. to get the date parameter). This is now automated to take a backup every day via Task Scheduler in Windows. Can the batch file also be modified to delete backup files older than 7 days? If so, how? If it is not possible via a batch file, are there any alternatives to automate the deletion, other than deleting the files manually? Thanks in advance, Balaji S
Delete backup files older than 7 days
There's a solution already on StackOverflow - here.
I have one stored procedure to back up the database. It backs up metadata as well as data. Is there any option to back up the database without data, i.e. back up only the schema (empty tables)? I don't want to script the database.
backup SQL Server 2005 database without data
All the objects in our databases are maintained in code - tables, view, triggers, stored procedures, everything - if we expect to find it in the database then it should be in DDL in code that we can run. Actual schema changes are versioned - so there's a table in the database that says this is schema version "n" and if this is not the current version (according to the update code) then we make the necessary changes. We endeavour to separate out triggers and views - don't, although we probably should, do much with SP and FN - with drop and re-create code that is valid for the current schema version. Accordingly it should be "safe" to drop and recreate anything that isn't a table, although there will be sequencing issues with both the drop and the create if there are dependencies between objects. The nice thing about this generally is that we can confidently bring a schema from new to current and have confidence that any instance of the schema is consistent. Expanding to your case, if you have the ability to run the schema update code including the code to recreate all the database objects according to the current definitions then your problem should substantially go away... backup, restore, run schema maint logic. This would have the further benefit that you can introduce schema (table) changes in the dev servers and still keep the same update logic. I know this isn't a completely generic solution. And its worth noting that it probably works better with database per developer (I'm an old fashioned programmer, so I see all problems as having code based solutions (-:) but as a general approach I think it has considerable merit because it gives you a consistent mechanism to address a number of problems.
We want to have our test servers databases updated from our production server databases on a nightly basis to ensure we're developing on the most recent data. We, however, want to ensure that any fn, sp, etc that we're currently working on in the development environment doesn't get overwritten by the backup process. What we were thinking of doing was having a prebackup program that saves objects selected by our developers and a postbackup program to add them back in after the backup process is complete. I was wondering what other developers have been doing in a situation like this. Is there an existing tool to do this for us that can run automatically on a daily basis and allow us to set objects not to overwrite (without requiring the attention of a sysadmin to run it daily).
SQL Server 2008 Auto Backup
BackupRead is not a magical function that lets you read any file. For starters, you need to run as a user with the backup privilege. Secondly, you must respect the FilesNotToBackup registry key. You also have to call CreateFile with FILE_FLAG_BACKUP_SEMANTICS. But even with that in mind, I'm not sure it's the best way to back up the registry.
Using http://support.microsoft.com/kb/240184 I'm able to open the SAM, SOFTWARE and SYSTEM registry hives, but I do not know the next step to get them copied to a different backup folder. I get Access Denied when I try to use the BackupRead API. Any help is much appreciated!
backup reg hives
So you're not using the default aspnetdb? The membership tables are contained in your database? These would have names with the prefix aspnet_. Perhaps you also need to back up aspnetdb?
I am backing up a SQL Server Express database and then restoring it on another machine. For some reason the ASP.NET membership data is not getting transferred. Do I need to do something different? Does ASP.NET membership data not get backed up?
SQL Server Express backup restore not working for asp.net membership
I'd do nothing. Or stay in SIMPLE permanently. Changing the recovery model back to full requires a full (or differential) backup anyway to restart the log chain; otherwise you'll have a gap in your LSN chain. You've mentioned differential backups, so I assume your full backup is not taken each night. Putting this together, you'll use more disk space for the extra full backups than for the LDF file.
I have a nightly job that does a bunch of inserts. Since I have a full recovery model, this increases my transaction log size. Currently I have my log file big enough to accommodate these transactions, but the issue is that the transaction log is mostly empty throughout the day. Is it an issue (besides disk space) to have a huge (mostly empty) transaction log? I'm thinking about switching the database to simple recovery before the job, running the job and then switching it back to full recovery. I can have the transaction logs just not be backed up until our nightly differential backup comes around, and then I can start the transaction log backup again. Suggestions?
SQL Server nightly job log management strategy
One simple solution for backup: call mysqldump from PHP (http://dev.mysql.com/doc/refman/5.1/en/mysqldump.html) and save the backup file somewhere convenient. This is a safe solution, because mysqldump is present in any MySQL installation and it's a reliable way to generate a standard SQL script. It's also safe to save the whole database. Be careful with blobs, like images saved inside the database, and be careful with UTF-8 data. One simple solution for restore: the restore can be done with the saved script. First disable access to the site/app and take down the database, then restore it by feeding the script to the mysql client (note that mysqlimport, http://dev.mysql.com/doc/refman/5.0/en/mysqlimport.html, imports tab-delimited data files rather than running SQL scripts). Calling external applications from PHP: http://php.net/manual/en/book.exec.php
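As a rough sketch of the shell commands such a PHP script might call via exec (the database name, user, password and file paths here are made-up examples, not taken from the question):

# Backup: dump the database to a compressed SQL script
mysqldump --single-transaction --default-character-set=utf8 -u backupuser -p'secret' mydb | gzip > /var/backups/mydb.sql.gz

# Restore: feed the saved script back to the mysql client
gunzip -c /var/backups/mydb.sql.gz | mysql -u backupuser -p'secret' mydb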
I've built a simple CMS with an admin section. I want to create something like a backup system which would take a backup of the information on the website, and also a restore system which would restore the backups taken. I'm assuming the backups would be SQL files generated from the tables used. I'm using PHP and MySQL here - any idea how I can do this?
Want to create a script that takes backup of entire database and downloads it
For the database, do a regular Microsoft SQL Server backup and restore (or whatever your hosting company lets you do). Beyond the database, you probably want to make a copy of the Web.config, because you will be transferring some of those settings to a new machine, and you want a copy of any files that you have customized, like in the "custom" folder.
I want to take a backup of BugTracker.NET hosted on my local machine and restore it to a new machine. Has anyone got a clue about this? Thanks
How to take backup of BugTracker.net?
No, assemblies in the GAC will not be backed up by an IIS backup. You will need to re-install your assemblies into the GAC.
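For example, assuming the .NET SDK's gacutil tool is available on the new box (the assembly path and name below are just placeholders), re-installing an assembly into the GAC can look like this:

rem Hypothetical example - substitute your own assembly path and name
gacutil /i C:\deploy\MyCompany.MyAssembly.dll
rem List what is registered, to verify it was installed
gacutil /l MyCompany.MyAssembly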
I was wondering: if I have some DLLs in the GAC - will they get restored if I restore a backup of IIS6 on a fresh Windows 2003 box? Or will I need to backup/rebuild the GAC separately from IIS?
Does restoring a backup of IIS6 restore the GAC?
Setting up a cron job to run a script that does a mysqldump and stores the dump on a separate disk from the database itself (or on a remote server) is quite an easy and efficient way to back up a database, in my opinion. You could even have it dump every database with the --all-databases switch. If you have more than one MySQL server, you could also use replication. The frequency of backups depends on how much data you are willing to lose in case of a failure.
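A minimal sketch of such a cron job (the paths, user, credentials and schedule are illustrative assumptions, not part of the answer above; note that % must be escaped in crontab entries):

# /etc/cron.d/mysql-backup - nightly dump of all databases at 02:30
30 2 * * * root mysqldump --all-databases --single-transaction -u backup -p'secret' | gzip > /mnt/backupdisk/all-databases-$(date +\%F).sql.gz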
I am using MySQL. How often do you back up your database? How do you normally back up your database? Export all data into SQL or CSV format and keep it in a folder?
In MySQL, what are the practices to backup databases?
SSARC is not really a backup tool. I wouldn't recommend it. It's more like a way to cut-n-paste segments of a VSS repository between different databases. It's also not without side effects. At minimum, items in the source database get marked as archived. At worst, they get deleted. The best way is to take your DB offline, zip it up (or RAR, whatever), and copy the *.zip file somewhere safe. VSS was designed back in the days when file sharing was the only cross-platform protocol reliably available on PC LANs, so the filesystem is the database. Compared to modern client-server systems, VSS architecture has many flaws which I'm sure you're aware of -- but you may as well use its convenience to your advantage.
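For instance, a minimal PowerShell sketch of the zip-and-copy approach described above (it assumes PowerShell 5+ for Compress-Archive, the paths are made up, and the repository should be taken offline first):

# Zip the (offline) VSS database folder and copy the archive to the backup share
$stamp = Get-Date -Format 'yyyy-MM-dd'
Compress-Archive -Path 'D:\VSS\MyRepo\*' -DestinationPath "D:\Backups\vss-$stamp.zip"
Copy-Item "D:\Backups\vss-$stamp.zip" -Destination '\\backupserver\vss-backups\'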
I was wondering what the best approach might be for creating a backup of my organisation's SourceSafe database, and moving it to a share on another server? Currently we have a scheduled job which runs a batch file, which in turn executes a PowerShell script. This Powershell script creates a backup file (using SourceSafe command-line arguments), moves it to a new server (via a drive that has been mapped on the SourceSafe server), and sends the output of the SourceSafe backup to our administrator via e-mail. This process works for the most part, but I can't help but feel there are more streamlined approaches or tools we should be utilizing. Any advice is welcomed!
SourceSafe backup script - best approach?
I use the following script to send a small dump to a dedicated mail account. This of course assumes you can send mails from your machine using the mail command.

#!/bin/bash
gzdate=`/bin/date +%Y-%m-%d_%H%M`;
gzfile=dump_${gzdate}.sql.gz
mailrecpt=[email protected]
dumpuser=username
dbname=mydb
mysqldump --single-transaction --opt -u ${dumpuser} ${dbname} | gzip > ${gzfile}
if [ $? == 0 ]; then
    ( echo "Database Backup from ${gzdate}:"; uuencode ${gzfile} ${gzfile} ) | mail -s "Database Backup ${gzdate}" ${mailrecpt};
else
    ( echo "Database Backup from ${gzdate} failed." ) | mail -s "FAILED: Database Backup ${gzdate}" ${mailrecpt};
fi

You just need to adapt the variables at the top.
We're running a CentOS server with a lot of MySQL databases at the moment, and what I need is a really easy way for us to back those up, since many of them are under a couple of megs. Dumping them, zipping them up and then sending them to a secure Google Apps account sounds like a pretty good idea. So what I need is: a script that will dump and zip the database, then email it somewhere, and if it fails, email somewhere else.
backup MySql databases and email them somewhere at a certain time
I'll try the 'soft-upgrade' way described in the documentation but do you think this will work? I don't know enough about GEOS to say for sure, but it sounds like a good thing to try. From the docs you linked to: If a soft upgrade is not possible the script will abort and you will be warned about HARD UPGRADE being required, so do not hesitate to try a soft upgrade first. Otherwise, I'd just follow their "hard upgrade" directions, which appear to be functionally equivalent to the usual pg_dump/pg_restore approach used to upgrade to a new major version of PostgreSQL. There's plenty more information in the Postgres documentation about how to do that; it's a very safe procedure and, as the official migration method, is extremely well supported. One thing you may wish to consider is upgrading to the PostgreSQL 8.4 beta while you're doing all of this work. It's beta software, true, but that might be acceptable for your environment, and if it is suitable, then you get the new features of 8.4 plus the ability to do a soft upgrade to 8.4 final (as the on-disk formats are not expected to change after the start of beta).
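As a rough sketch of the dump-and-restore ("hard upgrade") route mentioned above - the database name, cluster ports and file names are placeholders, and PostGIS also ships a postgis_restore.pl helper for filtering old PostGIS internals out of the dump during a hard upgrade, so check the docs of the version you install:

# Dump from the old cluster (custom format keeps large databases manageable)
pg_dump -Fc -p 5432 mygisdb > mygisdb.dump

# Create a fresh database on the new cluster, install the new PostGIS there,
# then restore the data into it
createdb -p 5433 mygisdb_new
pg_restore -p 5433 -d mygisdb_new mygisdb.dump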
Current situation: Ubuntu 8.04 server edition (live server) Postgresql 8.3.7 (from standard repositories) Postgis 1.3.3 (from standard repositories) GEOS 2.3.4 (from standard repositories) Problem: GEOS contains bugs which are fixed in the 3.0 release. I have encountered these and need to upgrade GEOS/Postgis to include the GEOS fixes. Where i'm standing now: On a test machine with nearly identical setup, i removed the postgis-packages and tried to recompile Geos 3.1.0 against Postgis 1.3.5 and Postgresql 8.3.7. After fixing some linking and path problems this works. My specific question: What is the best way to migrate my databases (tables, functions, triggers, gist indexes, data...) from the 'based on older geos/postgis' version to the 'newer' one? I'll try the 'soft-upgrade' way described in the documentation but do you think this will work? What's the best way to make a full backup of this postgis-enabled database so i can completely restore it on the 'newer postgis version' i'm compiling?
postgresql/postgis backup strategy to restore after geos/postgis recompile?
Dropbox does everything you're asking: http://www.getdropbox.com/ Plus it's fully cross-platform - Windows, Mac, Linux - and free up to 2 GB.
I'm working on some documents on a laptop which is sometimes offline (it runs Windows XP). I'd like to automatically back up the documents to a remote location, with the backup running in the background. I want to edit the documents and forget about backing up, and once online, have it all backed up to a remote location - or even better, to an SVN server or something that supports versioning. I want something which is: 1. free, 2. does not overload the network too much but only sends the diff, 3. works 100%. Thanks in advance
sync automatic background offline
I would say just back up the entire database. That will be an easier process, and you'll more easily maintain identity integrity in case you need to restore a backup.
I am required to back up certain rows from certain tables in a database. Often the criteria require joins and such. What's a good way to accomplish this? (I don't have to use mysqldump.)
Complex Database Backup
It doesn't work that way. What you need to do is set up a mini chroot jail for each backup host. It just needs to be able to run sh and scp (/dev only needs a /dev/null entry). Use jailsh as the login shell for each account. Jailsh is a suid-root login shell that sets the chroot jail to the directory marked by two consecutive slashes, drops root privileges, and execs /bin/sh.
I need to run backups from multiple servers to a single account on another server. If one of the public servers is compromised, I don't want the other servers' files on the backup account compromised. What I need to do is only allow SCP to a specific directory, based on the SSH key of the incoming connection. I know that I can set the shell, and several options on a per-key basis, in the authorized_keys file. http://www.manpagez.com/man/8/sshd/ (Scroll down to "AuthorizedKeysFile") What I don't know is how to set the internal-sftp command to only use a certain directory. I don't have root on the machine, so I can't do the normal internal-sftp + chroot.
Restricting OpenSSH to allow uploads only to certain directories
Depending on how up-to-date your data needs to be, snapshot replication seems like the best fit for you. It's not that difficult to set up and I believe that it's fairly common in scenarios like yours.
We recently moved from a simple DB recovery model (with daily full database dumps) on our SQL Server 2000 Standard database to full recovery -- combined with weekly full database backups, daily incremental, and transaction dumps every 10 minutes. Our previous reporting DB instance (SQL Server 2005) was built from the daily backups which no longer exist. I can re-build the reporting database by loading the weekly dump, leaving it in recovery mode, and then restore the incremental backups. Unfortunately, this is not easily scriptable and doing this by hand sucks. Taking additional full backups of the 2000 production database will ruin the incrementals (which are desirable for a number of reasons). Is there a better way to do this? We can't do log shipping since we're only SQL Server 2000 Standard (eventually we'll upgrade to 2K5).
Set up SQL Server 2005 Reporting DB from SQL Server 2000
You have a couple of options with SBS: you can have additional servers in the SBS domain, such as Windows 2003 servers. You can only have an additional SBS server on the same domain for a limit of seven days - this is for migration situations. In your situation I would have an additional Windows 2003 server connected to the domain to act as a backup domain controller; then if your SBS system goes down you can still do Active Directory authentication and access any resources still on your network. We create a base Acronis image of the SBS server; if anything goes wrong you restore the Acronis image and then apply the backup file (we use NTBackup, provided in SBS) to bring you up to date. Using the combination of Acronis and backups cuts down the time to recover significantly. Hope this helps.
I have just joined a company with Server 2003 Small Business Server. The company contains only an handful of staff and needs a backup system. I would like to restore a tape backup (including system state, Exchange server, etc) to a second server. The aim is to have a verified set of backups and be able to swap the servers if necessary. Am I right in thinking that the second server could not be on the SBS network?
Backup Strategy
I think I found a way to do a complete migration... Install a fresh version of MOSS 2007 on the new server (Server_B). Install the features and solutions you have on Server_A. Then use the SPContentDeploymentWizard, which can be downloaded for free from CodePlex, to export all site content and import it on Server_B. Also back up any custom databases needed by features and create these on Server_B. I have an almost completely identical server running now; some funky errors pop up now and then, so I don't think it's the best way to do it... Also, custom-developed web parts need to be deployed manually to the new server - I didn't find a way to migrate these.
The situation at the moment is that we have a SharePoint server which started out as a pilot but now actually runs as the production environment. The server on which SharePoint runs is an old machine which does not conform to the standard requirements, so I want to move the current environment to the shiny new server. I've read a lot about migrating the MOSS services, databases and content and so on, but to be honest I am kinda lost in a sea of information and I can't find the right method to do this. I've tried a clean install of MOSS 2007 on the new server, restored the databases on the new server, and restored the backup (which I made with SharePoint Central Administration) on the new server - alas, it did not work :-( Lots of "Can't find this" and "Can't find that" errors... It should be possible to grab all the data/sites/subsites/databases/content/documents and everything else and restore that to the new server, right? Anyway, I was hoping for some step-by-step information... :-) Regards, Erik404
Complete MOSS 2007 Migration
Looks like gtkpod can do it. Looking over its source code might give you what you're looking for.
Are there any publicly available libraries or APIs out there on Ubuntu that allow me to programmatically archive the contents of my iPod? If no library or API exists, what alternative options do I have for saving the contents of my iPod?
Programmatic iPod Backup on Linux
Cloning is perfectly acceptable. You don't have to back up to tape... it can be done to a NAS, for example, and with the proper security and setup, backups cannot be deleted by unauthorized people.
I am an application developer and don't know much about virtual machines (VMs). However, our application resides on a VM, and frequent patches need to be applied to fix/update it. For disaster recovery, it was suggested to back up everything on the server, so that once the server is restored, no application needs to be re-installed and configured. Our network administrator thinks it can be done by cloning the VM. But if we want to back up the clone to a tape, it would expose the VM to the backup drive: anyone who can access it could erase the VM and everything would be gone, which is very risky. I would appreciate it if you could let me know what you think about this, or any suggestion.
Can clone VM be application backup plan?
Basically what F. Hauri said in the comments above.

#!/bin/bash -x
SERVIDOR="$(hostname)"
PASSWORD='PASSWORD'
FECHA=$(/bin/date +\%Y\%m\%d)
NOTIFICADOS=[email protected]
STATUSFILE="/tmp/statusfile.$FECHA"
echo "Backup report from $FECHA" > "$STATUSFILE"
mysqldump --databases prueba --no-tablespaces --skip-comments --single-transaction --default-character-set=UTF8 --insert-ignore --complete-insert --add-locks --triggers --routines --events --disable-keys --lock-tables=false --set-gtid-purged=OFF --user=backup -p$PASSWORD > /tmp/"$SERVIDOR"-DBFLBASEPRUEBA-"$FECHA".sql
RESULTADO=$?
echo "Muestra la salida de mysqldump: ${RESULTADO}" >> "$STATUSFILE"
if [ ${RESULTADO} -eq 0 ]
then
    echo "El respaldo diario de la base de datos 'prueba' en el $SERVIDOR se ha realizado correctamente" >> "$STATUSFILE"
else
    echo "El respaldo diario de la base de datos 'prueba' en el $SERVIDOR no se ha realizado correctamente" >> "$STATUSFILE"
fi
gzip -9 /tmp/"$SERVIDOR"-DBFLBASEPRUEBA-"$FECHA".sql

Added a few more smaller cleanups, and you don't seem to use $NOTIFICADOS anywhere.
The next script apparently executes successfully cause I get the backup but when I modify some parameters for checking the output of mysqldump all time send an email with the messages "El respaldo diario de la base de datos se ha realizado correctamente" (the backup executed successfully) `#!/bin/bash -x SERVIDOR="$(hostname)" PASSWORD='PASSWORD' FECHA=`/bin/date +\%Y\%m\%d` [email protected] STATUSFILE="/tmp/statusfile.$FECHA" echo "Backup report from $FECHA" > $STATUSFILE mysqldump --databases prueba --no-tablespaces --skip-comments --single-transaction --default-character-set=UTF8 --insert-ignore --complete-insert --add-locks --triggers --routines --events --disable-keys --lock-tables=false --set-gtid-purged=OFF --user=backup -p$PASSWORD > /tmp/$SERVIDOR-DBFLBASEPRUEBA-$FECHA.sql echo "Muestra la salida de mysqldump: $?" >> $STATUSFILE if [ $? -eq 0 ] then echo "El respaldo diario de la base de datos "prueba" en el $SERVIDOR se ha realizado correctamente" >> $STATUSFILE else echo "El respaldo diario de la base de datos "prueba" en el $SERVIDOR no se ha realizado correctamente" >> $STATUSFILE fi gzip -9 /tmp/$SERVIDOR-DBFLBASEPRUEBA-$FECHA.sql` I executed the script with --no-tablespace the value of mysqldump is 2 but the message said that the backup executed successfully. Can you help me about the correct way to detect an issue, please?
Bash script backup full MySQL
The command Get-AzVM -Status does not show any information about the VM backup or the Recovery Services vault. To check all VMs that are backed up in the Recovery Services vault, along with their Backup Schedule, Backup Policy, and Backup Retention (daily, weekly, monthly) details, you can make use of the script below.

$vaults = Get-AzRecoveryServicesVault
foreach ($vault in $vaults) {
    Write-Host "Vault Name: $($vault.Name)"
    # Get all protection policies in the vault
    $policies = Get-AzRecoveryServicesBackupProtectionPolicy -VaultId $vault.Id -BackupManagementType AzureVM -WorkloadType AzureVM
    $containers = Get-AzRecoveryServicesBackupContainer -ContainerType "AzureVM" -VaultId $vault.Id
    foreach ($container in $containers) {
        $containerName = $container.FriendlyName
        Write-Host " VM Name : $containerName"
        $backupItems = Get-AzRecoveryServicesBackupItem -Container $container -VaultId $vault.Id -WorkloadType "AzureVM"
        foreach ($item in $backupItems) {
            $output = @{
                "Vault Name" = $vault.Name
                "Container Name" = $containerName
                "Backup Item Name" = $item.Name
                "Backup Policy Name" = $policies.Name
                "RetentionPolicy Name" = $policies.RetentionPolicy
                "SchedulePolicy Name" = $policies.SchedulePolicy
            }
            New-Object PSObject -Property $output | Format-Table -AutoSize
        }
    }
}
How can I make a list of all VMs inside my subscriptions, including the following information: Backup Policy, Backup Schedule, Backup times, Backup Retention (daily, weekly, monthly) - in addition to the result of Get-AzVM -Status? Thanks. I can get data from VMs and data from Recovery Services vaults, but not together.
Azure powershell list assigned backup policy to VM
Based on your google_client_secret_json_file.json file, it doesn't contain the JSON fields needed, as explained by the error google.auth.exceptions.MalformedError: Service account info was not in the expected format, missing fields client_email, token_uri. You will need to generate a Service Account key instead by following this reference article. Sample format for Service Account credentials:

{
  "type": "service_account",
  "project_id": "[PROJECT ID]",
  "private_key_id": "[PRIVATE KEY ID]",
  "private_key": "-----BEGIN PRIVATE KEY----- [PRIVATE KEY HERE] -----END PRIVATE KEY-----\n",
  "client_email": "[SERVICE ACCOUNT EMAIL]",
  "client_id": "[CLIENT-ID]",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/[EMAIL].iam.gserviceaccount.com",
  "universe_domain": "googleapis.com"
}
I'm trying to use duplicity to backup my files from my Linux desktop. I read the answer to this question How do I backup to google drive using duplicity? dating from 2015 but it might be obsolete ? From the duplicity documentation, https://duplicity.gitlab.io/stable/duplicity.1.html, I understand I have to : Go to https://console.developers.google.com and create a projet, which I did. Name: mybackup-12345 (I changed the name for this question) create an oauth access, and get the secret in a json file. My json file content is as follow (/home/myuser/backups/google_client_secret_json_file.json): { "installed":{ "client_id":"XXXXXXXX.apps.googleusercontent.com", "project_id":"mybackup-12345","auth_uri":"https://accounts.google.com/o/oauth2/auth", "token_uri":"https://oauth2.googleapis.com/token", "auth_provider_x509_cert_url":"https://www.googleapis.com/oauth2/v1/certs", "client_secret":"XXXXXXX", "redirect_uris":["http://localhost"] } } export GOOGLE_SERVICE_JSON_FILE=/home/myuser/backups/google_client_secret_json_file.json export GOOGLE_CREDENTIALS_FILE=/home/myuser/backups/google_credentials_file (this file does not exist yet, I supposed that duplicity would create it after the first login) export GOOGLE_SERVICE_ACCOUNT_URL="[email protected]" And finally launch duplicity: duplicity /home/myuser/Documents gdrive://${GOOGLE_SERVICE_ACCOUNT_URL}/backups/documents?myDriveFolderID=root I tried other values before, but I guess this should not be far from what I should do. But I get this (python) error now : google.auth.exceptions.MalformedError: Service account info was not in the expected format, missing fields client_email, token_uri.
How to backup my files to Google drive using Duplicity on Linux?
I've answered this on IRC, so for completeness: the problem is the index="4" part, which is an output-only attribute and thus not allowed by the schema on input. The rest of the XML is correct.
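In other words, the dump from backup-dumpxml should be edited roughly like this before being passed back to backup-begin (this is simply the question's XML with the index attribute dropped; the paths come from the question):

<domainbackup mode='push'>
  <disks>
    <disk name='vda' backup='yes' type='file' backupmode='full'>
      <driver type='qcow2'/>
      <target file='/home/xxx/.local/share/libvirt/images/vm1.qcow2.1684137281'/>
    </disk>
    <disk name='sda' backup='no'/>
  </disks>
</domainbackup>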
I am trying to create a backup of a given VM in »push mode« as described here. I have tried a lot of variations of the backup-xml but none passes the validation test. That includes the example posted here (the first one). Additionally I just ran that command: sudo virsh backup-begin vm1 && sudo virsh backup-dumpxml vm1 which dumps the autogenerated XML of the backup job with default values. In my case that looks like that: <domainbackup mode='push'> <disks> <disk name='vda' backup='yes' type='file' backupmode='full' index='4'> <driver type='qcow2'/> <target file='/home/xxx/.local/share/libvirt/images/vm1.qcow2.1684137281'/> </disk> <disk name='sda' backup='no'/> </disks> </domainbackup> So I put that output in a file (bg.xml) and ran: sudo virsh backup-begin vm1 ./bg.xml which than again showed the error: error: XML document failed to validate against schema: Unable to validate doc against /usr/share/libvirt/schemas/domainbackup.rng Extra element disks in interleave Element domainbackup failed to validate content Any Idea what is going wrong here - since the auto-generated content fails validation I am getting out of ideas.
virsh begin-backup — Unable to validate doc …/domainbackup.rng
This is not very detailed information to work from, but here are some hints. First: the name of the folder is not IndexDB but IndexedDB - please note the additional "ed". Second: if the screenshot is from somewhere inside the IndexedDB, it must have been a subfolder within the IndexedDB, so the name of that folder is required. The IndexedDB is stored within your user profile. If you didn't change things (created a second user profile, etc.) the profile name is "Default", so the whole path is C:\Users\<username>\AppData\Local\Google\Chrome\User Data\Default\IndexedDB\<extension_identifier>\<your-backuped_files> For Chromium it differs slightly: C:\Users\<username>\AppData\Local\Chromium\User Data\Default\IndexedDB\<extension_identifier>\<your-backuped_files> For anything more, more information is required. Such files can also be found outside the IndexedDB folder within your Chrome profile folder; their names are generic, and the specifier is the name of the folder they are stored in.
I have backup files that hold localStorage data saved in IndexedDB. Due to a system crash, the browser (Chrome) lost these files and I had to recover them from a backup system. Now I need to restore these files. Is there any way to restore, or even just read, these files? Any help will be appreciated. Thanks.
How to restore IndexedDB from backup file?
This is the most obvious answer, of course, but you may need to run the database engine or GUI again, whichever is running you into this problem.
We are using SQL 2019. I wanted to create a Production database from my Dev database. So I took a full backup of the Dev database and restored it as Production - that part completed OK. But after it completed, I noticed that my Dev database is inaccessible and says (Restoring...). Why is this? I did not specifically ask to restore the Dev database. Also, it has been in this Restoring... state for some time now, a few hours, much longer than it took to create the Prod database. How can I get back to normal status?
SQL Database Restore
First, perform a sanity check: connect to the database and run SHOW archive_mode; and SHOW archive_command; to ascertain that the settings are indeed as you think they are. To debug further, you should resort to the log file. If the archive command is executed and encounters problems, those problems will leave a trace in the log. If you cannot see anything there, set log_min_messages to debug3; then you will get messages like executing archive command "..." whenever archive_command is executed.
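For example, a small sketch of that debug step (it assumes superuser access; remember to lower the level again afterwards, since debug3 is very verbose):

-- temporarily raise log verbosity so archive_command executions are logged
ALTER SYSTEM SET log_min_messages = 'debug3';
SELECT pg_reload_conf();
-- ...reproduce the problem and inspect the server log, then revert:
ALTER SYSTEM RESET log_min_messages;
SELECT pg_reload_conf();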
Here is my configuration file:

wal_level = replica
archive_mode = on
archive_command = 'touch /var/backups/test'
archive_timeout = 1min

Permissions are OK. The postgres user can write into the appropriate folder. I also restarted the database to make sure the configuration goes through. When running SELECT pg_switch_wal() I can see data under $PGDATA/pg_wal/, however there is nothing in /var/backup. Any ideas? I saw this older question which is very similar: https://serverfault.com/questions/221791/postgresql-continuous-archiving-not-running-archive-command - in my case permissions are okay. Also, I can't seem to find any errors or warnings in the logs (when running docker logs):

2023-03-13 07:29:36.216 UTC [108] STATEMENT: insert into prueba2 values (165);
2023-03-13 07:33:43.197 UTC [51] LOG: checkpoint starting: time
2023-03-13 07:33:43.418 UTC [51] LOG: checkpoint complete: wrote 2 buffers (0.0%); 0 WAL file(s) added, 0 removed, 1 recycled; write=0.101 s, sync=0.025 s, total=0.221 s; sync files=2, longest=0.017 s, average=0.013 s; distance=6606 kB, estimate=6606 kB
Postgres wal archiving: archive_command not being executed
Take a look at the STANDBY option for the RESTORE statement. From the docs: Specifies a standby file that allows the recovery effects to be undone. NB - differential backups are based on the last full backup taken. So let's say that you take full backups on Sunday, differential backups every other day of the week and you're restoring every day. Sunday, you'd restore only the full backup, bringing it online for read by specifying STANDBY with the restore Monday through Saturday, you'd restore the latest differential backup on top of what you've already restored, again using STANDBY When Sunday rolls around again, you'd need to restore the new full backup. That said, you mentioned that you're on SQL Express. The database size limit is 10 Gb. At that size, how long does the restore of the full backup take? Is it worth your time? And even there, it's really "how much time are you saving the robot?" because you've already (presumably) automated the restore.
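A rough T-SQL sketch of that weekly cycle (the database name, backup file paths and undo file are placeholders, not taken from the question):

-- Sunday: restore the full backup; STANDBY keeps the database readable between restores
RESTORE DATABASE ClientDb
    FROM DISK = N'C:\Backups\ClientDb_full.bak'
    WITH REPLACE, STANDBY = N'C:\Backups\ClientDb_undo.dat';

-- Monday..Saturday: restore the latest differential on top, still readable afterwards
RESTORE DATABASE ClientDb
    FROM DISK = N'C:\Backups\ClientDb_diff.bak'
    WITH STANDBY = N'C:\Backups\ClientDb_undo.dat';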
I'm periodically getting backups of an SQL database which I would like to restore in my machine. I'm able to do that with RESTORE fullDB WITH NORECOVERY; RESTORE differentialDB WITH RECOVERY; However, I need to access the restored database in between, as the interval in which I'm getting the backups can be a few hours. I tried restoring full backup WITH RECOVERY, but in that case I'm getting exception while restoring differential backup. NB: It's not just one differential backup, it's real time backups taken every few hours. I'm using C# for executing the operations. Any help is appreciated to solve the issue. Also, let me know if I'm barking at the wrong tree. Instead of sending SQL database backups from client as .bak files, should I opt any other way to send the data?
Is there a way to keep on restoring SQL database backups periodically, and meanwhile access the database from another application?
Unfortunately, there's no native --pre-snapshot-script option for creating an EBS snapshot. However, you could use a Lambda function that is triggered by a scheduled EventBridge rule to run a script before you then programmatically take the EBS snapshot. You'd need the SSM agent to be installed on your EC2 instance(s). The idea is to use ssm:SendCommand and the AWS-RunShellScript managed document to run Linux shell commands on your EC2 instance(s). You have a couple of options, specifically: whether you want to inline all of your 'special script' in your Lambda function, download it from S3 during your user data script, manually transfer it, or ... and whether you can run the Boto3 create_snapshot() (or the AWS CLI ec2 create-snapshot command) as part of your 'special script'. If not, and you want to do it separately after doing send_command, you will also have to use the Boto3 ssm.get_command_invocation (or the AWS CLI ssm get-command-invocation command) to poll and wait for the command to finish before you create your snapshot. What you decide to do depends on your specific requirements and how much infrastructure you want to manage, but the essence remains the same. This could be a good starting point:

import boto3

def lambda_handler(event, context):
    ssm = boto3.client('ssm')
    commands = ["echo 'Command xxxxx of special script'"]
    commands.append("aws ec2 create-snapshot --volume-id vol-yyyyy")
    ssm.send_command(
        InstanceIds=['i-zzzzz'],
        DocumentName='AWS-RunShellScript',
        Parameters={'commands': commands}
    )
I want to make a backup of several EBS volumes attached to an EC2 instance. There are Snapshots and AWS Backup available. The issue is, before I can do a backup, I must execute a special script to freeze my application. Why: it's somewhat in-memory, so this call forces all writes to disk to complete and also prevents new disk writes for the duration of the backup. Is there a way to execute an arbitrary bash script before the backup job/snapshot?
Callbacks for AWS EBS Backups or Snapshots
You ask about monthly backups. In case you're okay with more frequent backups, you can schedule daily backups using heroku pg:backups:schedule (' quotes converted to " for compatibility with Windows): Scheduled Backups In addition to manually triggered backups, you can schedule regular automatic backups. These run daily against the specified database. Set Up a Backup Schedule heroku pg:backups:schedule DATABASE_URL --at "02:00 America/Los_Angeles" --app example-app The --at option uses a 24-hour clock to indicate the hour of the day that you want the backup taken. It also accepts a timezone in either the full TZ format (America/Los_Angeles) or the abbreviation (PST), but we recommend using the full TZ format. Note that backup retention depends on the database tier, varying from 7 daily backups and 1 weekly backup all the way up to 7 daily backups, 8 weekly backups, and 12 monthly backups. If you need longer retention, you'll have to retrieve your backups from Heroku and store them elsewhere. If you really want monthly backups and not daily ones, you'll have to schedule backups some other way, e.g. using a cron job on another system.
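A minimal sketch of that cron-based route (the app name, schedule and the machine running cron are assumptions - it just needs the Heroku CLI installed and authenticated for non-interactive use):

# crontab entry: capture a backup at 03:00 on the 1st of every month
0 3 1 * * heroku pg:backups:capture --app appname >> /var/log/heroku-backup.log 2>&1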
I want to back up my website's database monthly without having to get involved every time. Each time, I run this command to back up my website: heroku pg:backups:capture --app appname. Is there any way I can schedule this to run monthly?
How can I schedule to backup my database monthly
The process I currently employ across my environments is: Start with a baseline where STAGING and PROD are exactly the same. This you can achieve by restoring the latest backup(s) from your PROD environment to your STAGING environment. As changes occur in DEV and need to be released to STAGING, create the appropriate release scripts to apply to STAGING. When you need to refresh the data in STAGING, restore the latest backup(s) from PROD to STAGING again. Optional: Run any post-refresh STAGING specific scripts that are needed. E.g. if you obfuscate any of the data, or change of the data to signify it's the STAGING environment. Run any release scripts in STAGING, that haven't yet been released to PROD. Over time your release scripts that were used in STAGING, should get used to release to PROD, which is what will help keep the two environments in sync. Repeat steps 2, 3, and 4 at the frequency acceptable to your goals. I have the above in a SQL Job so it's essentially a one click process, but my database is SQL Server on-prem. I also use a schema comparison tool called SQL Examiner to automate generating the release scripts I need, but not sure it's applicable to Azure SQL Database.
I have two databases in Azure SQL Database that are almost identical in structure (unless one of the two is modified). I need to synchronize the data from the production database to the staging database, while being able to make changes in staging without harming production, unless I need to do a production restore (that's another topic). If there is no solution for that, I want at least to be able to make staging equal to production whenever my developers need it. Is there a way to overwrite a database from a backup of another without having to create a new database (since then you would have to modify the server name in the app)?
Synchronize data from the production Azure SQL database with the staging database allowing changes to be made in the second
You can check out the bucket replication documentation here: https://min.io/docs/minio/linux/administration/bucket-replication.html#minio-replication-behavior-resync Also, https://min.io/docs/minio/linux/reference/minio-mc/mc-mirror.html (mc mirror) is useful.
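For illustration, a tiny sketch of the mc mirror approach from the second link (the alias names, endpoints, credentials and bucket names are placeholders):

# register the two deployments with the MinIO client
mc alias set primary https://minio-primary.example.com ACCESS_KEY SECRET_KEY
mc alias set dr https://minio-dr.example.com ACCESS_KEY SECRET_KEY

# continuously mirror a bucket from the primary site to the DR site
mc mirror --watch primary/mybucket dr/mybucket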
We are trying to implement "disaster recovery" on Minio. Is there any documentation about how to implement DRC on Minio? Has anyone done it? Thank you
Is there any way to implement Disaster Recovery on Minio?
If you want to take a backup copy of the database to a network path, the user account that the SQL Server service runs as must have write access to that shared folder on the network path. Otherwise you will get an access denied error.
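For reference, a minimal sketch of such a backup to a UNC path (the server, share and database names are placeholders); the share permission has to be granted to the SQL Server service account, not to the user running the query:

BACKUP DATABASE MyDatabase
    TO DISK = N'\\fileserver\SqlBackups\MyDatabase.bak'
    WITH INIT;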
I want to remotely back up other computers' databases on my network to a network-shared folder on its C:\ drive; the shared folder is only open to its own user. It works fine if I give Everyone permission on the shared folder, but I need to restrict it, and I don't know how to do the code part.
Backup SQL Server database remotely to a specific user shared restricted folder
When using the SaaS service on gitlab.com there isn't really a procedure to "backup" your data. Generally, you rely on GitLab to ensure your data is always available on gitlab.com There is documentation for backup and restore processes but this applies to self-hosted GitLab instances only. (this is probably why you aren't able to figure out using the gitlab backup tools because they are intended for self-hosted instances only). If you still want to backup your data that is on gitlab.com, one option might be to do a file-based export of a group. However, this procedure is deprecated and import support may be removed entirely in the future. Another option may be to "migrate" your groups following the group migration documentation onto a self-hosted GitLab instance, then backup that self-hosted instance using the backup and restore process linked above. Other than this, it is also possible to export projects to create zip archives that can later be imported to GitLab. This will include repository data, issues, etc. You can't do this in a single operation for your whole group, but you can export each project in your group individually, if you want.
I have a gitlab group with some repositories and issues and I would like to make a backup of the whole group. I was looking at the doc. from gitlab but it's confusing for me and I don't know where to start. When I run this command it creates a 300Kb .tar file but the group is larger than that, where do I specify the URL of the gitlab group I want to backup? sudo gitlab-backup create
How to make a backup of a gitlab group?
Let's assume that duplicity works (it's not officially supported on Windows in any way; I've never tried it). Say your backup data exists in the root of your external hard drive mounted as E:, and you want to restore the complete last backup into the folder C:\Users\john\OneDrive\Documents\temp\. Two points: 1) point it to your backup location properly - as an absolute path that would be /cygdrive/e/, or as a URL, file:///cygdrive/e/; 2) point to your target folder as a folder ending with a slash (/) to signal that the backup is to be restored in there. Taking these points into account, a command like duplicity file:///cygdrive/e/ /cygdrive/c/Users/john/OneDrive/Documents/temp/ should work as expected. NOTE: you don't need the action command restore, as the order of arguments (URL before local file system location) already tells duplicity that you want to restore.
My Linux machine recently failed and I am trying to restore my files onto a Windows 11 machine. The files were created using Duplicity (the external HD containing the files has hundreds of .difftar.gz and .sigtar.gz files as well as a '.manifest'). Having installed CGWin and the duplicity package, I traverse to my external HD in cgwin... $ pwd /cygdrive/e ... and attempt to restore the latest snapshot of my lost directories/files to a temp folder on my Windows 11 machine by running: duplicity restore file:/// /cygdrive/c/Users/john/OneDrive/Documents/temp At this juncture, the restoration fails due to a "IsADirectoryError" error. Warning, found the following remote orphaned signature file: duplicity-new-signatures.20211221T070230Z.to.20211224T103806Z.sigtar.gz Warning, found signatures but no corresponding backup files Warning, found incomplete backup sets, probably left from aborted session Synchronizing remote metadata to local cache... Copying duplicity-full-signatures.20211118T103831Z.sigtar to local cache. Attempt of get Nr. 1 failed. IsADirectoryError: Is a directory Attempt of get Nr. 2 failed. IsADirectoryError: Is a directory Attempt of get Nr. 3 failed. IsADirectoryError: Is a directory Attempt of get Nr. 4 failed. IsADirectoryError: Is a directory Giving up after 5 attempts. IsADirectoryError: Is a directory Is there an error in my duplicity command? Do I have corrupted backups? Any assistance in trouble-shooting this would be greatly appreciated!
Duplicity Restore Throwing "IsADirectoryError: Is a directory" Error
You can create another collection, e.g. history, and save a copy of every record into it as a backup at the time your REST API saves the record.
I want to copy a specified collection from the database into a new database. I searched and found there is a trigger technique which will update the copy database whenever any modification happens in the original database, but it costs too much, so I want an alternative solution. I also want rules for copying - for example, I only want a few fields of a particular collection - however that's not very important; the main task is copying the original database's collection into a new database in real time. You could say it's something like a backup.
real time copying mongoDB instance collection into new database
What you have is not a backup (encrypted or otherwise) but the output from the db2move export command execution. Read the db2move documentation to learn how to perform the opposite operation.
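Roughly, the reverse direction looks like this (the database name and DDL file name are placeholders, and it assumes you run the commands from the directory containing the .lst and .ixf files; check the db2move documentation for the exact options for your version):

# create/connect to the target database and recreate the schema from the DDL file
db2 connect to MYDB
db2 -tvf exported_schema.ddl

# then load the data described by db2move.lst and the IXF files in this directory
db2move MYDB import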
I am trying to restore a DB2 database using an encrypted backup file. The backup zip file contains an .lst file, a .ddl file, over 3000 .ixf files, same number of message files and a folder with few .lob files in it. I have tried using bind @ list_file grant public after placing the .lst file and .ixf files in the /bind directory. But the error was that .ixf files could not be opened. Any help appreciated.
DB2 restore from encrypted back-up
Regarding your specific question of: Is there a way to "chunk" the stream so that it writes a new file/stream every X bytes? Your best bet for adapting your current workflow to "chunk" your backup file would be to use GNU Parallel, using --pipe (and --block to specify the size each block of data to pipe through to each instance of aws).
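A rough, untested illustration of that idea - the bucket name, block size and part-naming scheme are assumptions to tune, each block is buffered by parallel before upload (so block size trades memory against object count), and restoring means downloading the parts in order and concatenating them before lzop -d | xbstream -x:

PREFIX="s3://my-backup-bucket/backup-$(date +%F)"
innobackupex --host=db1 --slave-info --stream=xbstream /tmp \
  | lzop \
  | parallel --pipe --block 512M "aws s3 cp - ${PREFIX}/part-{#} --storage-class STANDARD"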
We have a bespoke database backup solution that will be causing us problems in the near future. I'll explain. S3 has a single file limit of ~5TB. Our backup solution utilizes xtrabackup with the xbstream option which is then piped into an 'aws s3 cp' command to store it in S3. The interesting part of the script looks like this: innobackupex {0} --host={4} --slave-info --stream=xbstream /tmp {5} | lzop | /usr/local/bin/aws s3 cp - {1}/{2} --expected-size {6} --storage-class STANDARD Ignore the variables, they're injected in from a different part of the script. The key thing to notice is that the xtrabackup xbstream output is piped-into lzop to compress it, then piped-into the "aws s3 cp" command to store it in our bucket as a multi-part upload. We use the streaming paradigm to avoid local storage costs. By now you can probably guess the issue by now. Our compressed backup size is rapidly approaching the 5TB limit, which will be catastrophic when we reach it. So here's my plea. Is there a way that we can write compressed backups larger than 5TB to S3 from XtraBackup's xbstream to S3 without compromising our preference to not store anything locally? I'm not too well versed in bash pipelines. Is there a way to "chunk" the stream so that it writes a new file/stream every X bytes? Or any other sane options? We'd prefer to avoid writing the raw backup, as it's 4x the size of the compressed backup. Thanks!
Issues with large MySQL XtraBackup stream to S3
You can use the Azure Monitor service to create an alert for backup failure. Search for Monitor in the Azure Portal. Click on Alerts under Monitor. Click on + Create -> Alert rule. Click + Scope and select the required fields. Under the Condition tab, select the option Activity log and select the Signal name "Export an existing database". Select the alert logic. Under the Action tab, create a new action group or use an existing group. In the action group, create alerts like Email, SMS or Voice call and click OK. Follow-up comments on this answer note that the alert logic chosen under the Condition tab is what distinguishes a failed backup from a merely executed one, that the alert's severity could not be customised (it raises as Severity 4), and that this signal appears to apply only to exports to a bacpac.
Does anyone know how to create an alert when, for whatever reason, an Azure SQL Database backup fails to complete?
Is there a way to create an alert when an Azure SQL Database backup fails?
There are no such parameters for BGSAVE. An approximate solution is to use the save config. Configure it in redis.conf or use the CONFIG SET command: config set save "60 1000" However, this config means: after 60 seconds, if at least 1000 changes were performed. It's not exactly what you need (sync every 60 seconds or every 1000 keys).
I would like to run BGSAVE periodically to sync data to disk. I tried BGSAVE 60 1000 to sync every 60 seconds or every 1000 keys added, but it doesn't seem to work.
How do I pass parameters to the redis BGSAVE command?
The problem is that the default output format of COPY, which is text, treats the backslash as an escape character, so it must be doubled. Use the csv format and set quote characters and delimiter to characters that do not occur in your data, for example \copy (SELECT ...) TO '...' (FORMAT 'csv', QUOTE E'\u0007', DELIMITER E'\u0008')
I am trying to copy jsonb data into a file. \copy (select myjsonbcolumn from mytable where time > timestamp '2021-05-01 00:00:00') to '/home/ubuntu/jsobdata.ndjson'; Now this jsonb data has \" within it, e.g., {"ID": "123","Body": "<p><a href=\"https://google.com\">Lorem ipsum</a></p>\n<p>Lorem Ipsum Lorem ipsum </p>"} The above copy command adds an extra \ to it, which transforms it into the following: {"ID": "123","Body": "<p><a href=\\"https://google.com\\">Lorem ipsum</a></p>\\n<p>Lorem Ipsum Lorem ipsum </p>"} Is there a way to tell it not to add the extra \? This is huge data, more than 200 GB, and replacing those extra \ will take a lot of time via file processing.
Postgres copy command adding extra backslash "\"
Firstly, update Elasticsearch, and keep it as up to date as you can. 8.X is now current and 7.X is only patched at the 7.16 level, but you're on 7.10, which was released in (late) 2020. Secondly, you might want more than one SLM policy: one that you use to take shorter-lived snapshots that are discarded more frequently, versus monthly ones that you use for long-term retention requirements. You can then look at making yearly snapshot repositories for the monthly snapshots, which means you can easily drop repositories you don't need.
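For illustration, a sketch of what the long-retention policy might look like (the policy name, repository, schedule and retention numbers are placeholder assumptions to adapt to your own requirements):

PUT _slm/policy/monthly-snapshots
{
  "schedule": "0 30 1 1 * ?",
  "name": "<monthly-snap-{now/d}>",
  "repository": "my_long_term_repository",
  "config": { "indices": ["*"], "include_global_state": false },
  "retention": { "expire_after": "2555d", "min_count": 12, "max_count": 84 }
}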
We are using Elasticsearch v7.10 and want to add a snapshot policy to our cluster. We want to be able to restore specific indexes even after a few years. On the one hand it is recommended to take frequent snapshots (~every 30 minutes - Set up snapshot policy), but on the other hand it is not a best practice to accumulate thousands of snapshots, because it requires more memory on the master node and can destabilize it. It is recommended to include retention rules in the SLM policy (Snapshot retention limits). I need to be able to restore at least 1 snapshot from each month in the last 7 years - is it possible? How should my SLM policy/policies and retention rules look?
Saving the Elasticsearch snapshots for years - best policy and practices
Sure that will work. It has the advantage that you can compress the backup before sending it over the network. And if the transfer fails, you could resume it (using rsync). It has the disadvantage that it stores the full backup on the server. Or you could backup from the "reserve" host using -h to specify the main host. This won't compress the data for transit (until v15 comes out) and can't be resumed if interrupted.
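For example, a minimal sketch of pulling the backup from the reserve host (the host name, replication user and target directory are placeholders; -z does client-side gzip compression after the data has crossed the network, which matches the note about transit compression above):

# run on the reserve host: pull a full base backup from the main server
pg_basebackup -h maindb.example.com -U replicator \
  -D /backups/base_$(date +%F) -Ft -z -P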
I am trying to figure out the correct way to create a full standalone backup which will be stored on a remote host (I should use pg_basebackup). I suppose it should look like this: run pg_basebackup on the main host, then use scp to deliver it to the reserve host. Is that the right way?
Postgresql full standalone backup on remote host
I would suggest following the steps below: Open the notebook file in Jupyter Notebook and copy all the cell contents (see: How to copy multiple input cells in Jupyter Notebook). Copy the content into a single .sql file. In Management Studio, open a new query window, open the file created in step 2, and run the SQL statements. Note: please review the SQL file once to see if everything falls into place. You might need to add GO statements between batches. Also, it is recommended to put a semicolon at the end of each statement so they run without issues.
I have a question regarding an .ipynb file. They sent me a database to replicate its structure; they use SQL Server Management Studio, but I don't know how to import it. I thought it was a simple Python script which could create a SQL database, so I installed Anaconda and used %%sql statements to recreate it, until I realized that it could be imported into SSMS. But there is something I am not doing right to import it correctly - I understand it is a problem of correctly parsing the file. I appreciate any help, thanks! I installed extensions in Visual Studio Code, Anaconda and the necessary libraries for handling SQL in Python, but it all boils down to correctly importing the file created in SSMS.
Import SQL server Management studio backup from an .ipynb file
Have you double-checked the $storagename variable to make sure it isn't actually null? Also, double-check whether the current storage account/container is already tied to an RSV, because if it is, it won't allow you to run the Enable-AzRecoveryServicesBackupProtection cmdlet.
function Set-StorageAccounttoPolicy {
    $storageaccounts = Get-AzStorageAccount | where {$_.StorageAccountName.StartsWith('p')}
    Get-AzRecoveryServicesVault -Name xyztestvault | Set-AzRecoveryServicesVaultContext
    $policy = Get-AzRecoveryServicesBackupProtectionPolicy -Name testpolicy
    foreach ($storage in $storageaccounts)
    {
        $storagename = $storage.StorageAccountName
        $resourcegroup = $storage.ResourceGroupName
        if ($storage.PrimaryEndpoints.File -ne $null)
        {
            $fileshares = Get-AzRmStorageShare -ResourceGroupName $resourcegroup -StorageAccountName $storagename
            foreach ($file in $fileshares)
            {
                Enable-AzRecoveryServicesBackupProtection -StorageAccountName $storagename -Name $file.Name -Policy $policy
            }
        }
    }
}

I keep getting an error "Enable-AzRecoveryServicesBackupProtection : 'containerName' cannot be null.", but this storage account has not been assigned to a recovery vault or policy yet. How can I fix this?
I am trying to write a script that will backup Azure Files for each storage account that starts with a certain letter
1 The backup file generation is for persistence durability. When you don't need durability you can turn it off with this SQL statement: SET FILES LOG FALSE Or the database connection property, hsqldb.log_data hsqldb.log_data=false The data is still consistent with this setting but the changes will be lost in case of process termination. See the guide http://hsqldb.org/doc/2.0/guide/deployment-chapt.html#dec_bulk_operations Share Improve this answer Follow answered Feb 17, 2022 at 12:14 fredtfredt 24.2k33 gold badges4141 silver badges6161 bronze badges Add a comment  | 
We are using HSQLDB as temporary storage to offload data to disk for some calculations. We will be doing bulk operations with merge statements (upsert / insert or update), and the database file in some cases reaches as much as 70+ GB. Is there a way to disable .backup file creation in HSQLDB? We are fine with the risk of DB corruption or a node crash. Are there any properties we can rely on for such use cases? I referred to the Bulk Inserts, Updates and Deletes section of the documentation and will set up my properties accordingly.
Is there a way to disable backup file generation in HSQLDB?
As per the official docs: a custom image contains only the OS, software, and settings for the WorkSpace; a custom bundle is a combination of that custom image and the hardware from which a WorkSpace can be launched. It seems the image does not carry forward personal settings such as wallpaper or browser settings; I experienced this myself. However, if you are worried about losing your configuration if the WorkSpace becomes unhealthy, you can use the Rebuild or Restore option. By default, AWS takes automatic snapshots of the root and user volumes of your WorkSpace every 12 hours; you can read more about this here. As for the "terrible case where my images and my workspaces was accidentally removed": if your WorkSpace is deleted/terminated, no data can be retrieved.
I have a WorkSpace in AWS WorkSpaces with a lot of configuration files, installed software, and files with templates, shell scripts, and code, so it's fully configured. My problem is that when I try to create an image, I lose everything but the installed software. Does anybody know how I can create a backup of my AWS WorkSpace so I don't have to reconfigure the desktop in the terrible case where my images and my WorkSpaces are accidentally removed? Thanks.
Backup AWS Workspace file system
When restoring, by default, SQL Server creates the database files with the same paths and names as the original backed-up database. If you are making a copy of the database on the same SQL instance, this raises an error because the files already exist. So you must move the files to another directory or give them other names, which is done with the MOVE option of the RESTORE statement. Prior to that, you must ask which files constitute the original database by executing the Transact-SQL command: RESTORE FILELISTONLY FROM DISK = 'the path and file of my sql backup' Then you follow the MOVE syntax, which is just: MOVE 'logical file name' TO 'new path and/or new file name'
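Putting that together for your case, it would look something like this (the logical names 'Development' and 'Development_log' and the target paths are assumptions - take the real ones from the FILELISTONLY output):

-- 1. Find the logical file names inside the backup
RESTORE FILELISTONLY
FROM DISK = 'Development15feb2022.bak';

-- 2. Restore as a new database, moving the files to new names/paths
RESTORE DATABASE Testing
FROM DISK = 'Development15feb2022.bak'
WITH MOVE 'Development'     TO 'C:\SQLData\Testing.mdf',
     MOVE 'Development_log' TO 'C:\SQLData\Testing_log.ldf';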
I am trying to copy the 'Development' database and create a new database called 'Testing'. For the backup I used the query below, and it worked fine:
BACKUP DATABASE Development TO DISK = 'Development15feb2022.bak'
For restoring I used the query below, which gives errors like "'Development.mdf' cannot be overwritten. It is being used by database 'Development'.":
RESTORE DATABASE Testing FROM DISK = 'Development15feb2022.bak'
I have done some googling and learned that I need to use MOVE for the logical data and log files, but I am not sure whether that applies to my scenario. I want both Development and Testing working independently, each storing its own logs. Can anyone clarify, please?
Backup and restore database using T-SQL
AWS is (mostly) region-based. This means that if you wish to communicate with a particular AWS service (e.g. Amazon EC2) in a particular region, you must make an API call to that region. It can be done by specifying region_name when creating the client:
ec2_client = boto3.client('ec2', region_name='ap-southeast-2')
Thereafter, any actions performed on that ec2_client will be sent to the region that was specified. For more examples, see: Python boto3 - How to work on cross region. If you wish to make calls to multiple regions, you will need to loop through each region and create a boto3.client() for each region in turn. There are a few exceptions to this requirement for 'global' services: IAM, Route 53, and CloudFront. They replicate configurations between regions, so you can connect to just one region (typically us-east-1) to configure those services globally.
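For a daily-AMI Lambda that has to cover both Ireland and London, one approach is to loop over the region names and create a client per region. This is only a rough sketch - the region list, the Backup tag filter, and the AMI naming are assumptions, not your actual script:

import boto3
from datetime import datetime

# Regions are assumptions: Ireland and London
REGIONS = ["eu-west-1", "eu-west-2"]

def lambda_handler(event, context):
    for region in REGIONS:
        ec2_client = boto3.client("ec2", region_name=region)

        # Example filter: only instances tagged Backup=true (tag is an assumption)
        reservations = ec2_client.describe_instances(
            Filters=[{"Name": "tag:Backup", "Values": ["true"]}]
        )["Reservations"]

        for reservation in reservations:
            for instance in reservation["Instances"]:
                instance_id = instance["InstanceId"]
                # Create the AMI without rebooting the production instance
                ec2_client.create_image(
                    InstanceId=instance_id,
                    Name=f"{instance_id}-{datetime.now():%Y-%m-%d-%H-%M}",
                    NoReboot=True,
                )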
I am using AWS Lambda in the Ireland region to create AMIs on a daily basis for my EC2 prod instances. All my servers are in the Ireland region except one, which is in the London region. For the Ireland region I have the Python script for taking backups; I just need to add code to the same Lambda to take a backup of the London instance as well. Since I am new to both Lambda and Python, I am not sure where or what to add here. Can anyone help me enable backup for the London instance as well? The relevant part of the current Lambda script is provided below.
# ec2_client = boto3.client('ec2',region_name=globalVars['REGION_NAME'])
ec2_client = boto3.client('ec2')
Run AWS Lambda in one region to take backups of EC2 instance AMIs from 2 or more regions
There is a resource you can use for that: aws_db_instance_automated_backups_replication (see the Terraform docs).
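A minimal sketch of how it might be wired up, assuming the source Oracle instance lives in us-east-1 and the backups are replicated to us-west-2 (the ARN, regions, and retention period are placeholders):

# Provider for the region the backups are replicated to
provider "aws" {
  alias  = "replica"
  region = "us-west-2"
}

resource "aws_db_instance_automated_backups_replication" "oracle" {
  provider               = aws.replica
  source_db_instance_arn = "arn:aws:rds:us-east-1:123456789012:db:my-oracle-db"
  retention_period       = 14
}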
I need to enable the backup replication feature in AWS RDS (Oracle) through Terraform. Are there any attributes on the Terraform side for that particular feature?
Need to enable backup replication feature in AWS RDS through Terraform
If you are specifically targeting ignored files: you can list the ignored files in your directory using:
git ls-files --exclude-standard -i -o
You can then use the output of this command to do something with the listed files. For example, you can create a tgz archive:
# create a tgz archive
git ls-files --exclude-standard -i -o | xargs tar -czf ../myenvfiles.tgz
and copy/extract that archive some place else.
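On the other machine, the archive can then be copied over and unpacked inside the clone (the host and paths here are assumptions):

# copy the archive over, then extract it at the root of the other clone
scp backup-host:/path/to/myenvfiles.tgz /tmp/
tar -xzf /tmp/myenvfiles.tgz -C /path/to/other/clone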
My problem is that I often forget to back up, or I need to use some ignored files (like .env files) on other devices, and sometimes it ruins my day because those files should not be committed, but at the same time I need them up to date if for some reason I have to use this repo on another device. Is there any solution so that when I commit to my repo, some selected files can be stored in a cloud service like OneDrive or Google Drive? I tried keeping my repos inside a folder that is synced with my cloud, but the number of files that are not ignored, like venv or node_modules, hinders more than it helps. Thank you!
Solution to store specific files not ignored on .gitignore