If you still face any issue with the backup, you can follow the step-by-step approach (17 points) described here: http://msdn.microsoft.com/en-us/library/ms187510.aspx
I have a small database (500 MB) with 10 tables. I can do a SELECT on all tables, but when I do a backup it never ends. I have tried through the UI and also using the following command: BACKUP DATABASE DB_NAME TO DISK = N'C:\temp\DB_Aug22.bak' WITH NOFORMAT, NOINIT, NAME = N'DB_Aug22.bak', STATS = 10 What am I doing wrong? Regards, Shiyam
SQL Server 2008 R2 - Backup doesn't end
You can use xp_cmdshell to do a dir for your file. Note, however, that xp_cmdshell is normally disabled for good reasons. Given this is UAT, that may not be an issue. See here for more http://www.sqlusa.com/bestpractices2005/dir/
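As a rough sketch of that approach (assuming xp_cmdshell has been enabled and the backup folder from the question, E:\Databases\backup; names and paths are placeholders to adapt): list the matching files with dir, pick the newest name, and build the RESTORE dynamically.

-- /b = bare file names, /o-d = newest first
DECLARE @file NVARCHAR(260), @sql NVARCHAR(1000);
DECLARE @out TABLE (line NVARCHAR(260));

INSERT INTO @out
EXEC master..xp_cmdshell 'dir /b /o-d E:\Databases\backup\MY_LIVE_*.bak';

-- the date is embedded in the file name, so sorting the names descending also gives the newest file
SELECT TOP (1) @file = line
FROM @out
WHERE line LIKE 'MY_LIVE%.bak'
ORDER BY line DESC;

SET @sql = N'RESTORE DATABASE UAT FROM DISK = ''E:\Databases\backup\' + @file + N''' WITH REPLACE';
EXEC (@sql);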
I am doing the below using SQL Server / T-SQL: RESTORE DATABASE UAT FROM DISK = 'E:\Databases\backup\MY_LIVE_20120720_070001.bak' WITH REPLACE But I want to be able to use a file location that ignores the numbers in the file name (which represent the date). There will only ever be one 'MY_LIVE_****.bak', but its number string will change each day. The goal is to restore my UAT instance from live each week, using the latest backup - there will only be one file matching that string prefix, but the numbers/date will change each week.
Restore database from backup file, where file name matches a regular expression
From my comments… Likely the options between mysqldump and PHPMyAdmin export don't match. For example, inclusion of DROP TABLE, extended INSERT, etc. I suggest comparing the two files. I'm sure there is something obvious. Then either adjust the options for mysqldump or in PHPMyAdmin. Either should work as the latter uses mysqldump underneath.
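As an illustration only (the exact flag set is an assumption - compare your two dump files first, as suggested above), a mysqldump invocation that approximates phpMyAdmin's default export might look something like this:

# disable the --opt group, then turn back on the pieces phpMyAdmin's default export uses
mysqldump --skip-opt --create-options --extended-insert --complete-insert \
  --skip-add-drop-table --skip-lock-tables \
  -h SERVERNAME -uUSER -pPASSWORD DBNAME > /home/DIRECTORY/backup.sql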
I have a large mySQL database that I backup each night via a cron job: /usr/bin/mysqldump --opt USERNAME -e -h SERVERNAME -uUSER -pPASSWORD > /home/DIRECTORY/backup.sql It is working well - except when I go to 'restore' the sql file on another server - it takes a long time (about 3 mins) This is in contrast to using phpMyAdmin - if I do "export" and export the same mySQL database, then import that sql file into another server it only takes 10 seconds. Question: how do I make "mysqldump" create the same type of sql file that "phpMyAdmin" does? Example of some FAST version sql (not all of it): CREATE TABLE IF NOT EXISTS `absence_type` ( `absence_type_ID` int(16) NOT NULL AUTO_INCREMENT, `name` varchar(30) COLLATE utf8_unicode_ci NOT NULL, PRIMARY KEY (`absence_type_ID`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=14 ; -- -- Dumping data for table `absence_type` -- INSERT INTO `absence_type` (`absence_type_ID`, `name`) VALUES (1, 'Sick Leave'), (2, 'Personal Carers'), (3, 'Other'); Example of some SLOW version sql (not all of it): -- -- Table structure for table `absence_type` -- DROP TABLE IF EXISTS `absence_type`; /*!40101 SET @saved_cs_client = @@character_set_client */; /*!40101 SET character_set_client = utf8 */; CREATE TABLE `absence_type` ( `absence_type_ID` int(16) NOT NULL AUTO_INCREMENT, `name` varchar(30) COLLATE utf8_unicode_ci NOT NULL, PRIMARY KEY (`absence_type_ID`) ) ENGINE=MyISAM AUTO_INCREMENT=14 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci; /*!40101 SET character_set_client = @saved_cs_client */; -- -- Dumping data for table `absence_type` -- LOCK TABLES `absence_type` WRITE; /*!40000 ALTER TABLE `absence_type` DISABLE KEYS */; INSERT INTO `absence_type` VALUES (1,'Sick Leave'), (2,'Personal Carers'), (3,'Other'); /*!40000 ALTER TABLE `absence_type` ENABLE KEYS */; UNLOCK TABLES;
mysqldump file imports slowly compared to phpmyadmin file
Log in to the hbase shell and run: snapshot 'table_name', 'snapshot_name' You can check whether the snapshot was created from the hbase shell by typing list_snapshots.
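To cover the restore half of the question as well, here is a minimal hbase shell session; the table and snapshot names are placeholders:

hbase shell
snapshot 'my_table', 'my_table_snapshot_20150630'
list_snapshots
# to restore from the snapshot later, the table must be disabled first
disable 'my_table'
restore_snapshot 'my_table_snapshot_20150630'
enable 'my_table'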
I am new to HBase and I am looking for an HBase backup and restore solution. Can you please explain how to take a snapshot of an HBase table and restore it as part of a recovery solution? Thanks in advance!
How to take an HBase table snapshot?
Is there any reason why you cannot use the built-in CopyFiles command? To debug this I would suggest that you add a DetailPrint near the top of the .MoveFolder_Locate_moveFile function. If you see all the file names go by then the problem is the move operation in that function, if not then the problem is in the ${Locate} macro used by this code. Another alternative is to watch the filesystem operations with Process Monitor...
An NSIS installer creates a fairly large folder structure. When the installer starts, it checks the registry to see if there is a current version installed... Then it asks if you want to make a backup of the current folder. It works most of the time, but sometimes when it is backing up older versions, instead of copying over the entire directory, it only copies the icon. !insertmacro un.MoveFolder "$INSTDIR" "${BACKUP_FOLDER}" "*.*" Reference: http://nsis.sourceforge.net/MoveFileFolder !insertmacro MoveFolder "$INSTDIR\[path\]source-folder[\]" "$INSTDIR\[path\]destination-folder[\]" "file-mask" Afterwards, it moves on to the delete section... Could it be that it doesn't have time to do it ? it starts the next process before finishing the move ? What else could be going on so that it does not copy the entire folder ? During the installer, I see Create folder c:\backup_folder Moving files: c:\current_folder\*.* to c:\backup_folder\ Delete file: c:\current_folder\file1......... And at the end, backup_folder has only the icon (not all the files) Edit: The solution - please see my post here NSIS difficulty moving folders - $INSTDIR is indeed a special folder so I had to move the uninstaller to a $TEMP folder.
NSIS doesn't backup all files
You can just copy the entire directory if you want to take a cut of your repos outside of the SVN backup options. There isn't a way to add multiple repos at once to uberSVN at the moment; you would need to do these individually. It is something that has been discussed from time to time, but if it's a feature you deem important I'd recommend raising it via the suggestion area, then other users can see it, comment, and we can assess demand.
I am new to SVN and uberSVN on Windows. I am using uberSVN 12.04 Free edition (not using any uberapp). I would like to know if there is any way I can take a backup of all the repositories at once. I know that I can take backups one by one for every repo. Is there any way that I can take backups and restore them all at once so that it plays well with the uberSVN portal as well (shows in the repositories tab)? Some detail would be appreciated as I am not too familiar with SVN and its configuration.
UberSVN on Windows: Take a backup of all repositories at once and restore them
From personal experience I agree with the other commentators that Git is the way to go, or even Mercurial. The learning curve bends down after a while, especially if the needs are modest. As to the need for a "Poor Man's Version Control", sometimes you do need one. For example, you work at an employer that does not allow downloading and use of non-corporate software, and the centralized VCS is not allowed to be used for ad hoc, experimental, or skunk work. Related post: poor mans source control zip project files on build
Now (6:13 pm, Jun 1, 2012): I resign myself to learning git and GitHub so that I can do version control. I won't need to mail copies of the (compressed) code to myself, but I still don't understand the mechanism after a day of looking at this stuff. I get the SHA1 concept for uniquely identifying a file, and using the first 2 characters of the hash as a directory name. But I'm still confused on the updates, pointers, merge business. Previously: I have multiple versions of programs, so I can regress to an earlier one to solve a problem. I used to like to compress the one I was using and send it to myself via email, but today when I did that the compressed version was too small (49 KB instead of 6 MB). So I guess I am referencing the "workspace" (the extension on the app is ".xcworkspace"). I probably shouldn't waste too much time on this problem, since it is merely a backup, but on the other hand, having the full size is an indication that the whole app is self-contained, instead of pointers elsewhere that may be inadvertently changed or destroyed. Is there any way to "undo" my current version to have all the correct data, or is it really tough?
Compress Workspace for archiving App versions
Have you checked this document? http://www.iis.net/learn/publish/using-web-deploy/web-deploy-automatic-backups It says that a backup is automatically created if the feature is enabled on the server (and then explains how to set it up).
I am using the following command to build the package and deploy the site on a remote server using TeamCity: msbuild /M /P:Configuration=%env.Configuration% /P:DeployOnBuild=True /P:DeployTarget=MSDeployPublish /P:MsDeployServiceUrl=%env.TargetServer%/MsDeployAgentService /P:DeployiisAppPath=%env.IISPath% /P:MSDeployPublishMethod=RemoteAgent /P:CreatePackageOnPublish=True /P:Username=%env.username% /P:Password=%env.password% But I want to take a backup of that site to a particular directory first. I tried doing this with a batch file to compress and take a backup, but it is too time-consuming as it runs on a remote machine. I am looking for a solution using msbuild, or anything else that is efficient.
How to take backup of Site prior to publish
Okay, to be honest I had some difficulty figuring out the question at first. I think we sorted that out in the comments. So you have $backupdir, which contains the absolute path to the backup directory, plus $pathtodir, which is a parent directory of the backup directory. You can now use string substitution in Bash as follows: # Replace $pathtodir with empty string inside $backupdir relativepath=${backupdir//$pathtodir/} # Now remove the leading slash, if any relativepath=${relativepath#/} If I misunderstood something still, let me know and I'll adjust the answer accordingly.
I have a script that gets the input from the user as an absolute path (using the fselect dialog box). backupdir=$(user_select "choose a directory backup destination"); I used tar cvzf backup.tar.gz $backupdir however this includes the absolute directory path (*1) so instead I attempted (out-with the script): tar czvf backup.tar.gz -C $PATH directory-to-backup Therefore, in my script I can use: pathtodir = dirname $backupdir to get the $PATH of the backup directory but I need the name of the directory I wish to backup i.e: dirname = .. tar czvf backup.tar.gz -C $PATH $dirname How do I get the name of $dirname? 1 - "Removing leading `/' from member name"
How do I use tar using an absolute path from user input to backup a directory?
Windows could change the drive letter assigned to your USB drive. The correct way to do this backup is to mount the USB drive in an empty directory. Not only does it add some consistency to swapped storage, it also allows for a persistent shortcut on the Windows desktop. Here's how: 1. Run "diskmgmt.msc" from Windows' Run/Start Search box. 2. Right-click on your plugged-in drive and choose "Change Drive Letter and Paths." 3. Remove the current drive letter assigned to your drive. 4. Click on the Add button. 5. Select "Mount in the following empty NTFS folder" and click Browse. 6. Navigate to the subfolder that you want to assign the USB drive to and confirm the assignment. The USB drive will from now on be accessible from that folder (as long as it is connected to the computer, of course). Now you can change your script to use the folder with the mounted drive as the destination and forget about drive letter persistence.
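A minimal PowerShell sketch of the script after that change; the mount folder C:\Mounts\BackupDrive and the source folders are assumptions to adapt:

$destination = 'C:\Mounts\BackupDrive\Backups'
$sources = 'C:\Users\Me\Documents', 'C:\Users\Me\Pictures'

foreach ($source in $sources) {
    # mirror each source folder into a subfolder named after it on the mounted USB drive
    $name = Split-Path $source -Leaf
    robocopy $source (Join-Path $destination $name) /MIR /R:2 /W:5
}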
I am writing a PowerShell script that will back up several folders from my Vista drive to an external USB drive, using robocopy. Windows does not guarantee that it will always assign the same drive letter to the external drive. What is the best way to get around this problem? How do I code the destination paths? Thanks.
External Drive Letter - Backup
The problem is: your database db.mdf is not attached to the SQL Server Express instance - therefore, you cannot back it up. This whole AttachDbFileName=... story is a bit tricky - and quite a mess, if you ask me. I would recommend forgetting about the AttachDbFileName=... stuff and instead: (1) just attach the db.mdf file to your local SQL Server Express instance (using SQL Server Management Studio Express), and (2) talk to the attached database using its logical database name instead of messing around with a .mdf file. Once you've done that, you can use commands like BACKUP DATABASE ... and everything else! Your connection string would be much simpler, too: <add name="ConnectionString" connectionString="Server=.\SQLEXPRESS;Database=YourDatabaseName;Integrated Security=True;" providerName="System.Data.SqlClient"/>
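For example, once the file is attached under a logical name (YourDatabaseName and the path below are placeholders), the backup is simply:

BACKUP DATABASE YourDatabaseName
TO DISK = 'C:\Backups\YourDatabaseName.bak'
WITH INIT, NAME = 'YourDatabaseName full backup';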
I am trying to create a backup of my SQL Server Database in Visual Studio 2010. I am using the following command to back up the database: BACKUP DATABASE Db TO DISK = 'c:\server.bak' This is connection string that I am using. <add name="ConnectionString" connectionString="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\Db.mdf;Integrated Security=True;User Instance=True" providerName="System.Data.SqlClient"/> However, when I run the command, I get the following error message. Database 'Db' does not exist. Here is a screenshot that describes my error: http://www.cs.purdue.edu/homes/aerfanfa/sqlerror.jpg Does anyone have a solution for my problem? Thanks!
SQL Server Backup Command Error
You could accomplish this by using the Task Scheduler like this: schtasks /create /sc DAILY /tn Backup /tr C:\backup.bat Note: type schtasks /create /? for more options. You can rename the file with the date by using this: ren C:\file.txt *. && ren C:\file. *%date:~-10,2%%date:~-7,2%%date:~-4,4%.txt
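Putting it together, a minimal backup.bat could look like the sketch below; the paths are placeholders, and the %date% substrings assume a dd/mm/yyyy regional format just like the ren example above:

@echo off
rem build a ddmmyyyy_hhmm stamp from the locale-dependent %date% and %time% values
set stamp=%date:~-10,2%%date:~-7,2%%date:~-4,4%_%time:~0,2%%time:~3,2%
rem replace the space padding in early-hour times with a zero
set stamp=%stamp: =0%
copy "C:\data\file.txt" "D:\backup\file_%stamp%.txt"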
I want to create a .bat file to make a daily backup of a file. It should also add the date or time to the file name. How can I create this file? I've tried a lot of different ways (mcopy, xcopy, etc.) but somehow it doesn't work.
Bat file to copy a file everyday with the time stamp
Note that I don't know of a way to specify which filegroup a stored procedure is on (other than the default). So what you may consider, in order to at least keep the script-repository backup small, is: (1) create a filegroup called non_data_objects, and make it the default (instead of PRIMARY); (2) create a filegroup for each set of tables, and create those tables there; (3) back up each set of tables by filegroup, and always include a backup of non_data_objects so that you have the current set of procedures, functions etc. that belong to those tables (even though you'll also get the others). Because (1) will only contain the metadata for non-data objects, it should be relatively small. You might also consider just using a different database for each set of tables. Other than using three-part naming in your scripts that need to reference the different sets, there really is no performance difference. And this makes your backup/recovery plan much simpler.
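A sketch of what that layout and a per-filegroup backup could look like (all database, filegroup and path names here are made up for illustration):

ALTER DATABASE MyDb ADD FILEGROUP non_data_objects;
ALTER DATABASE MyDb ADD FILE
    (NAME = non_data_objects_1, FILENAME = 'C:\Data\MyDb_non_data.ndf')
    TO FILEGROUP non_data_objects;
ALTER DATABASE MyDb MODIFY FILEGROUP non_data_objects DEFAULT;

-- back up one set of tables together with the non-data filegroup
BACKUP DATABASE MyDb
    FILEGROUP = 'SetA_Tables',
    FILEGROUP = 'non_data_objects'
TO DISK = 'C:\Backups\MyDb_SetA.bak';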
I'm looking for a way to get all table creation and alteration queries attached to a database, in SQL Server 2000. Is this stored in a system table, or is there a built in method to remake them? Goal: to extract the schema for customizable backups. My research so far turned up nothing. My Google-Fu is weak...
Query SQL Server 2000 for table creation and alteration queries
If you use a filesystem like btrfs or ZFS you can use filesystem snapshots to capture the state. If you keep a separate partition for the database contents it should be easy to stop mysqld, roll back to the snapshot, and start the server again. (A commenter also suggested something like VirtualBox and creating a linked clone, which is practically instantaneous, depending on how the infrastructure is set up.)
I'm trying to speed up a development task that I'm working on. I'm writing some code that is accessing and analyzing a large-ish MySQL database (about 5GB). I want to test my code as I go along. After making a code change, I need to try it out (which will do some inserts/updates/deletes on the DB). It'll take several iterations of code tweaks to get it working right. But after each iteration I need to restore the DB to the state it was before running the code. It's very time consuming to do a full DB drop/restore after each test, which is what I'm doing now. So, I'm looking for a way to simplify the rewind process - perhaps by logging DB changes with enough information such that the statements that manipulated data (done over the course of about 30 seconds) can be undone in reverse chronological order. Does anyone know of any tools out there that will allow for a more rapid, incremental restore? Basically, is there a way replay the query log in reverse? Or, at the very least do a data diff against a snapshot in order to undo the recent changes? FYI, I'm using MySQL 5.5.x with InnoDB. I'm coding in Ruby on Rails, but there's other non-Ruby code so ideally, I'd be looking for something more of a language independent command line utility that I could run before and after executing a test.
MySQL tools/tricks/scripts for rewinding DB changes
You can set up a remote repo at HostGator and push changes to the remote using git. That doesn't require GitHub; you can just do it from a repo locally on your machine. Here is a tutorial.
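A minimal sketch of that setup, assuming the host allows SSH access (the account, host and paths are placeholders):

# on the HostGator account, over SSH, create a bare repository to push to
ssh user@example.com 'git init --bare ~/repos/myapp.git'

# locally, in the CodeIgniter project folder
git init
git add .
git commit -m "Initial import"
git remote add origin user@example.com:repos/myapp.git
git push -u origin master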
I am using Coda to create a web application using CodeIgniter. I am hosting it live on HostGator and testing it there as well. I want to know if there is a way to use some kind of revision control or backup system like GitHub that would allow me to save my files and keep them updated without having to actually do the folder copy-pasting.
How can I keep a backup of my web application being developed live on a web server?
You're going to have to use expdp and impdp, sorry. You may be able to find a GUI tool that runs impdp like TOAD, but in the end, it'll end up executing expdp/impdp.
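As a rough sketch of the round trip (schema name, credentials and file names are placeholders; DATA_PUMP_DIR is the directory object that usually exists by default on 10g/11g databases):

# on the PROD server
expdp system/password schemas=APP_SCHEMA directory=DATA_PUMP_DIR dumpfile=app_schema.dmp logfile=app_exp.log

# copy app_schema.dmp into the local database's DATA_PUMP_DIR, then
impdp system/password schemas=APP_SCHEMA directory=DATA_PUMP_DIR dumpfile=app_schema.dmp logfile=app_imp.log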
I want to download and run a PROD DB on my local environment. Are there any easy GUI tools I can use to do this? I would rather not get into a command-line headache.
Cloning Oracle Database
I am missing something here and need more context, but I will rant on for a second and see if anything is helpful. Do you mean actually back up the file, not the data? If so, the easy answer is no. The problem is SQL Server will lock the file when it is attached to the database server (SQL Express in this case). You can detach, copy, and then attach, but the application will be down during that time. This can also be done manually. If you want to back up the data, I would consider scheduling it within SQL Server rather than doing it programmatically, unless you cannot do it that way. Backup is more of a maintenance function than a part of the program. As for your database name being empty, that is impossible. In fact, it looks like you are trying to set up a database called Xmain.
I have a database (mdf file) which I'm approaching with the Entity Framework. Is it possible to make a backup of the MDF file. I tried already but SMO but the problem is because I'm using a mdf file the database name is empty. I've read that it's autogenerated. Piece of my backup code: String destinationPath = "C:\\"; Backup sqlBackup = new Backup(); sqlBackup.Action = BackupActionType.Database; sqlBackup.BackupSetDescription = "ArchiveDataBase:" + DateTime.Now.ToShortDateString(); sqlBackup.BackupSetName = "Archive"; BackupDeviceItem deviceItem = new BackupDeviceItem(destinationPath, DeviceType.File); ServerConnection connection = new ServerConnection(".\\SQLEXPRESS"); Server sqlServer = new Server(connection); StringCollection sc = new StringCollection(); sc.Add(Environment.CurrentDirectory + "\\db\\Xmain.mdf"); //Bin directory sc.Add(Environment.CurrentDirectory + "\\db\\Xmain_log.ldf"); sqlServer.AttachDatabase("Xmain", sc); Database db = sqlServer.Databases["Xmain"]; sqlBackup.Initialize = true; sqlBackup.Checksum = true; sqlBackup.ContinueAfterError = true; sqlBackup.Devices.Add(deviceItem); sqlBackup.Incremental = false; sqlBackup.ExpirationDate = DateTime.Now.AddDays(3); sqlBackup.LogTruncation = BackupTruncateLogType.Truncate; sqlBackup.FormatMedia = false; sqlBackup.SqlBackup(sqlServer);
Backup a database mdf & Entity Framework
How about a git repository? Wouldn't that be easier? You could easily track the changes as well. (In the comments: asked whether Git would help gather the files from the different laptops and make sure all the data is copied - yes, you can use git this way. From the git perspective you can have one master branch and many other branches created from it; you then merge the changes from each branch into the master branch.)
I need to improve my method, or even change it completely, for copying files on a private network from multiple Windows machines to a central Linux machine. How this works is that I run the script below as a cron job every 5 minutes to copy data from say 10 Windows machines, all with a shared folder, to the central Linux machine that gets collected each day. So in theory the Linux machine at the end of the day should have all the data that has changed on the Windows machines. #!/bin/sh USER='/home/user/Documents/user.ip' IPADDY=$USER USERNAME=$USER while read IPADDY USERNAME; do mkdir /mnt/$USERNAME mkdir /home/user/Documents/$USERNAME smbmount //$IPADDY/$USERNAME /mnt/$USERNAME -o username=usera,password=password,rw,uid=user rsync -zrv --progress --include='*.pdf' --include='*.txt' --include='issues' --exclude='*' /mnt/$USERNAME/ /home/user/Documents/$USERNAME/ done < $USER The script works fine but it doesn't seem to be the best method as a lot of the time data is not being copied across or not all the data is copied correctly. Do you think that this is the best approach or can someone point me in a better solution?
Linux to Windows copying network script
sudo is available on later versions of OS X, so you can run: sudo rsync ... (The asker notes in the comments that they don't want to run anything on the OS X client, but rather on the Ubuntu server, since that is where the users' home folders live.)
Currently I'm developing a control website for my home server. The server has LDAP set up for Macs to log in. The home directories are also on the server. I want to create a backup tool for my family, so they can back up while I'm away. I don't want to do this on a schedule (at least not always, since they must be able to start a backup right away). I got stuck when trying to find a way to run the rsync commands as a privileged user. I've got some ideas on this but I would like to hear the pros and cons of the options: 1. Create a simple daemon that runs as root and backs up folder -arg1 to -arg2, minding the old backup in -arg3. 2. Run rsync as the logged-in user by remembering the user's password at login to the control panel. (Problem: running ps will reveal the password.) 3. Create a special rsync user. (Problem: the rsync user can read everything.) The project is located at https://github.com/hermanbanken/ldap-control and this issue is also on GitHub at https://github.com/hermanbanken/ldap-control/issues/1.
Backup server permissions
For the first step, take a look at the shutil module, starting with http://docs.python.org/library/shutil.html#shutil.copytree For the second step, filecmp.dircmp is a reasonable choice. For the fifth step, take a look at the archiving options in the tarfile and zipfile modules.
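A minimal Python sketch of that outline built on those modules (the paths are placeholders and a real script would need more error handling; the comparison covers your paragraphs 2-4, the archive is paragraph 5, and the crude mirror is the final step):

import filecmp
import os
import shutil
import zipfile

local_dir = r"C:\local_dir_a"
backup_dir = r"C:\bakup_dir_a1"
archive_path = r"C:\bakup_dir_a2\changed.zip"

def changed_and_removed(dcmp):
    """Walk a dircmp result, collecting backup-side files that differ or exist only in the backup."""
    changed = [os.path.join(dcmp.right, f) for f in dcmp.diff_files]
    removed = [os.path.join(dcmp.right, f) for f in dcmp.right_only]
    for sub in dcmp.subdirs.values():
        c, r = changed_and_removed(sub)
        changed += c
        removed += r
    return changed, removed

changed, removed = changed_and_removed(filecmp.dircmp(local_dir, backup_dir))

# paragraph 5: archive the old copies before they are overwritten or deleted
with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
    for path in changed + removed:
        if os.path.isfile(path):
            zf.write(path)

# final mirror (crude): copytree requires that the destination not exist yet
shutil.rmtree(backup_dir)
shutil.copytree(local_dir, backup_dir)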
I am a python newbie. My question is what approach I should use to set up a file/directory backup routine, as described below (os.walk or filecmp.dircmp, or something else). I want to set up a backup routine as follows: Every night, I want to make "bakup_dir_a1" (and all its subdirectories) into a mirror of "local_dir_a" (and all its subdirectories); But, each night . . . First, I want to compare local_dir_a (and all its subdirectories) to bakup_dir_a1 (and all its subdirectories), to identify differences. Next, I want to create a list of files (full path including filename) in bakup_dir_a1 (and all its subdirectories), that will be replaced by newer files copied from local_dir_a (and all its subdirectories), and the respective last-modified dates of the newer and the older files; Next, I want to create a list of files (full path including filename) in bakup_dir_a1 (and all its subdirectories), that will be simply deleted from bakup_dir_a (and all its subdirectories); Next, I want to create an archive (.rar or .zip) in bakup_dir_a2 containing a copy of all the files identified in paragraphs no. 3 and no. 4 above. Lastly, I will execute the mirroring described in paragraph 1 above. I've spent some time trying to learn how to work with os.walk and filecmp.dircmp. I suspect that os.walk might be the better device to use for my purposes. Any suggestions would be much appreciated. Thanks, Marc
Backup dirs and subdirs using Python; Use os.walk or filecmp.dircmp, or something else
Say you have xyz.bak as a backup and you want to restore it onto your SQL Server 2008 R2 instance. Try running the following query in SSMS: RESTORE DATABASE DatabaseName FROM DISK = 'path of your bak file' This will restore your database.
I'm using MSI Project with InstallShield 2010. I have a .bak file, a backup of a SQL database (I am using SQL Server 2008 R2). How can I restore it using InstallShield? I was looking in SQL Scripts tab and I didn't find anything about restoring backups. Thanks for your time!
How to restore a database from a backup in installshield
Download RootTools (the jar file). You can then run Linux commands like this: RootTools.sendShell(command); For example, to back up, you could do: RootTools.sendShell("cp -f /data/data/com.sec.android.app.twlauncher/databases/launcher.db /sdcard/directory/"); And to restore the file: RootTools.sendShell("cp -f /sdcard/directory/launcher.db /data/data/com.sec.android.app.twlauncher/databases/"); cp is the copy command, and -f is what allows it to overwrite the file if it already exists. RootTools is great, and for the commands, just Google how to do Linux commands and then place them into sendShell. I have no idea how to do it with plain Java; I personally think that using the Linux commands is 10 times easier. Also note that it is better to get the sdcard location like this: Environment.getExternalStorageDirectory(); and then append the rest of the storage location to the end of that.
I'm fairly new to apk development. So far, after a book purchase and with a lot of Googling I've managed to make an application that controls some features of my custom ROM. I'm currently trying to implement 2 backup features. I want to backup /data/system/batterystats.bin to /sdcard and also i want to backup launcher.db of my touchwiz launcher to /sdcard. For the first part i haven't actually found anything. I've searched a lot about how to restore a file, not much has come up. It's mostly about SQL .db files. I've also looked for the possibility to run a shell script via the apk just to perform this backup. With a shell script it's easy work, but doing this via .java, i honestly have no clue. Also, i've tried quite a lot of code to get my sqlite database file to backup, but i was quite unsuccessful. Here's my code for you to look at: public class Backup extends Activity { public void exportDB(){ try { File sd = Environment.getExternalStorageDirectory(); if (sd.canWrite()) { String currentDBPath = "data/data/com.sec.android.app.twlauncher/databases/launcher.db"; String backupDBPath = sd + "/launcher.db"; File currentDB = new File(currentDBPath); File backupDB = new File(backupDBPath); if (currentDB.exists()) { FileChannel src = new FileInputStream(currentDB).getChannel(); FileChannel dst = new FileOutputStream(backupDB).getChannel(); dst.transferFrom(src, 0, src.size()); src.close(); dst.close(); } } } catch (Exception e) { e.printStackTrace(); } } } I have added permissions for external storage write, of course in the androidmanifest, but nothing happens. No FC, it just sits there doing nothing. And when I check my sdcard, there's nothing there. Any help would be greatly appreciated. Thanks
Backup /data/system/xxxx.bin to /sdcard
It is better to schedule a task on the OS to execute a .bat (or .sh) script that does the DB backup.
What is the best way to take a backup of a database every night at a specific time (say 12 am) through Java code? [Oracle database, Windows XP] I actually got asked this question in an interview.
What is the best way to take a backup of a database every day at a specific time?
I've done this with my own code: bu.runBackup(Images.Media.CONTENT_URI); bu.runBackup(Video.Media.CONTENT_URI); bu.runBackup(Audio.Media.CONTENT_URI);
I've developed a backup application. Currently it can take backups of contacts, settings and browser data. I take these backups by content URI, like: Backup bu = new Backup(this); bu.runBackup(Contacts.People.CONTENT_URI); bu.runBackup(Settings.System.CONTENT_URI); bu.runBackup(Browser.SEARCHES_URI); I use the getHost method in the Backup class like this: int count=0; String file = uri.getHost() +"-"+ System.currentTimeMillis(); Cursor cursor = cr.query(uri, null, null, null, null); count = cursorToCSV(cursor, file); cursor.close(); String msg = String.format("Backed up %d records to %s file", count, file); Toast.makeText(context, msg, Toast.LENGTH_SHORT).show(); return count; I would like to take backups of media files (images, videos, music). Is it possible to do that? How can I do it? If anyone knows, please tell me - or is there an alternate way?
Problem when taking a backup of media files?
Unless the phone is rooted, you cannot access other apps' data. For more info look here: http://developer.android.com/guide/topics/providers/content-providers.html http://developer.android.com/guide/topics/data/data-storage.html http://developer.android.com/guide/topics/security/security.html
Where can I get an example program or tutorial for backing up everything on an Android phone to a database as files? Any help is appreciated, and thanks in advance.
Get example program or tutorial for backup everything in android?
Create the batch file, which is usually called a shell script. Enter all the commands that you want to run. Set the executable bit, this is done with chmod +x path-to-the-file in Terminal. Show info for the script and set Terminal to the application which should open it. However, what I've done in similar situations and that I would recommend that you do is that I've created a shell script and instead of using Terminal I've initiated it from an AppleScript application. You can of course embed the entire shell script in the AppleScript as well. Basically it will look something like the following: on run do shell script "rsync -av ~/Pictures /Volume/Backup" end run Repeat the do shell ... line for each folder that you want to copy, or call the shell script itself. Then use AppleScript Editor which is included with Mac OS X and save it as an actual application.
I am from a windows background and trying to help a mac user friend to backup her pictures, docs, etc. onto an external drive. In windows, I would accomplish this by creating a simple batch file with an xcopy command and have a shortcut on the desktop that pointed to that .bat file when double clicked. However, in the mac world I am having significant trouble finding how to do this. I have searched repeatedly to find the mac equivalent, but all I find are sites saying things like "there are so many options on a mac - use one of them." However, none have ever given a specific solution nor pointed to a specific solution. Anyone here know of a specific step by step process to accomplish this? I simply want to be able to have her double click an icon on the desktop and have it copy her personal documents (not application settings or other overhead) to her external hard drive. Any help would be appreciated.
open console (terminal) window and execute command (rsync) on os x
ls -t $backdest/jary_p-*.tgz | tail -n +4 | xargs rm (and repeat with $backupall's glob). ls -t lists newest first, tail -n +4 keeps everything from the fourth entry onwards (i.e. everything except the three newest), and xargs rm deletes those older files. Since the date is in the file name, you could also sort by name in case a file has been touched since it was created: ls $backdest/jary_p-*.tgz | sort -r | tail -n +4 | xargs rm
A backup shell script: #!/bin/bash backdest=/home/backup date=$(date "+%F") backupall="$backdest/arch-full-$date.tgz" backuphome="$backdest/jary_p-$date.tgz" tar -czpvf $backupall / --exclude=/home/* --exclude=/mnt/* --exclude=/media/* \ --exclude=/proc/* --exclude=/sys/* --exclude=/dev/* \ --exclude=/tmp/* --exclude=/lost+found/* tar -czpvf $backuphome /home/jary_p Several (5) runs later there are several (10) files in /home/backup: $ls /home/backup backup.sh arch-full-2011-05-13.tgz arch-full-2011-05-25.tgz arch-full-2011-06-01.tgz arch-full-2011-06-09.tgz arch-full-2011-06-11.tgz jary_p-2011-05-13.tgz jary_p-2011-05-25.tgz jary_p-2011-06-01.tgz jary_p-2011-06-09.tgz jary_p-2011-06-11.tgz How can I keep just the latest 3 files of each type (6 in total) and delete the extra files? Thanks, and apologies for my poor English.
how to delete extra files from backup with shell script?
There is no QuickBooks API call (either qbXML or IDS) to perform an automated backup of QuickBooks.
Does anybody know if there is a Quickbooks API available that can be used to backup the Quickbooks company file?
Quickbooks API for backup
Why not just invoke the Workbook.Save method for all workbooks where Saved is false? Or maybe SaveCopyAs... I looked but didn't see any way to forcibly trigger the "backup" process. But since you can query the AutoRecover object for a path, you could just use SaveCopyAs to do the same thing.
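A minimal VSTO/C# sketch of that idea, calling SaveCopyAs before a risky operation; the folder layout and naming are assumptions, not part of the add-in's real code:

// call this at the top of any operation that might crash the add-in
private void BackupWorkbook(Microsoft.Office.Interop.Excel.Workbook workbook)
{
    string backupDir = System.IO.Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
        "MyAddinBackups");
    System.IO.Directory.CreateDirectory(backupDir);

    string backupPath = System.IO.Path.Combine(
        backupDir,
        string.Format("{0}_{1:yyyyMMdd_HHmmss}{2}",
            System.IO.Path.GetFileNameWithoutExtension(workbook.Name),
            DateTime.Now,
            System.IO.Path.GetExtension(workbook.Name)));

    // SaveCopyAs writes a copy to disk without changing the open workbook's path or dirty state
    workbook.SaveCopyAs(backupPath);
}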
I have a rather involved Excel add-in that's begun exhibiting some bugs after being deployed. This is not unexpected, but one of the bugs is proving really hard to reproduce (and therefore to fix), and it does lock up the application instance, potentially leading to loss of data. So I'd like to trigger an automatic backup right before any function runs that might conceivably crash the application. In time I'll fix all the bugs, of course, but it's proving tricky so I'm looking to use AutoRecover as a stopgap measure in the meantime. Now, VSTO exposes the AutoRecover object which controls automatic backups of open documents, but all it lets you do is enable/disable AutoRecover, control where backups are stored, and set the backup interval in whole minutes (with a minimum value of one minute.) So is there some other way to trigger a backup event?
VSTO Excel: Triggering automatic backups
I got the solution. Before restoring the database I used to delete all the tables, but when I delete the whole database and create a new one, the restore completes successfully.
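A rough sketch of that from the command line (names and paths are placeholders; if the backup was taken with pg_dump in plain-text format, use psql -f instead of pg_restore):

dropdb -U postgres mydb
createdb -U postgres mydb
pg_restore -U postgres -d mydb /path/to/mydb.backup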
When I try to restore my database, it shows the error "could not create large object 515025", in pgAdmin and on the command line as well. It's not working even if I create another backup. Any suggestions?
Error in postgresql 8.3 restore database
inotify with IN_DELETE? (As noted in the comments, that only works if a monitoring process is running while the file is deleted, and the mention of an "archive bit" suggests this is Windows - the asker would prefer reading the NTFS change journal, which records deletions, over keeping a process constantly watching the folder.)
I have an offsite backup solution which runs on C++ to break the files into blocks, and keeps track of the blocks using md5 hashes on a SQLITE3 database. And it transfers the blocks along with the database to a remote site. So, when I want to do a restore, it queries the SQLITE3 database and restores the blocks accordingly. When the first backup runs, it creates a big table called the base_backup. Every subsequent file changes or new files are added as new records in a new table. If I want to do a restore, I query the base_backup table plus all the differences and restore the files. The way the backup runs, it scans for all the files in a given folder for the archive bit, and if it is cleared, then verifies if a record does not already exist in the database and decides whether to back it up or not. Coming to my question, if a file is deleted on the local computer, how do I keep track of it and update the offsite backup accordingly? Because when I do a restore, I don't want to restore all the garbage files. Is there anyway of knowing if files have been deleted from a folder or not? I do not want to run a verify check from the database since it will take too long.
Incremental backups: how to track file deletions
While I have not done an s3sync setup on Windows, I have successfully written a CentOS/Plesk setup. This error looks very familiar, and it looks like your config files are not set up properly. I'd check the file you store the AWS credentials in, and then also check the config file carefully. Make sure there are no leading slashes ('/'), and trailing slashes only where required. Spaces... anything can set this script off with an error. Good luck, let me know how you go.
I am really new to shell script etc. i am trying to get s3sync to work i have installed ruby with the rubyinstaller i am on windows and have been using the cmd prompt that you get after installing ruby. Can someone help me with these error?? C:\Sites\s3sync>s3cmd.rb listbuckets C:/Sites/s3sync/HTTPStreaming.rb:53:in `<module:S3sync>': uninitialized constant S3sync::SimpleDelegator (NameError) from C:/Sites/s3sync/HTTPStreaming.rb:52:in `<top (required)>' from <internal:lib/rubygems/custom_require>:29:in `require' from <internal:lib/rubygems/custom_require>:29:in `require' from C:/Sites/s3sync/s3try.rb:28:in `<module:S3sync>' from C:/Sites/s3sync/s3try.rb:10:in `<top (required)>' from <internal:lib/rubygems/custom_require>:29:in `require' from <internal:lib/rubygems/custom_require>:29:in `require' from C:/Sites/s3sync/s3cmd.rb:16:in `<module:S3sync>' from C:/Sites/s3sync/s3cmd.rb:11:in `<main>' I have been following this article but now i am stuck http://blog.eberly.org/2006/10/09/how-automate-your-backup-to-amazon-s3-using-s3sync/ Any help please
s3sync automate amazon backup
You could mysqldump --all-databases but you'll get only one big SQL dump. Found this bash script: for T in `mysql -N -B -e 'show databases' -pYOUR_ROOT_PASSWORD`; do echo $T; mysqldump -pYOUR_ROOT_PASSWORD $T | gzip -c > $T.sql.gz; done You just have to test it and adapt the path/names to your needs.
Hey all, we have 2 web servers which may go offline on Friday. We have ~90 websites hosted on these servers and I have already found a way to back up each website folder to its own .tar.gz file - what I need now is a way to export each database from our database server as its own SQL backup with one command. Currently the only way I know is to use phpMyAdmin, but for 100+ databases that gets a little tedious. Is there a simple way to export each database as its own SQL backup file over SSH?
How to back up multiple databases to separate files with one command
Looks like he's talking about SQL Server 2008. To do a full database backup to a file/script you can use the 'Generate Scripts...' option on the database: open SQL Server Management Studio, right-click on the database and choose 'Tasks -> Generate Scripts...', then use the wizard to script the database. You can script the whole database or parts of it. Two important options in the 'Advanced' section: you will probably want to ensure the type of data scripted is 'Schema and Data', and that 'Script Statistics' is on. This will produce a *.sql file that you can use as a backup.
I want to take a database backup. Is there any SQL script to do that, like how we use a SQL script to get table content with "SELECT * FROM tblname"? Thanks in advance.
SQL script to get table content as "SELECT * FROM tblname"
rsync is a popular file synchronization tool, best suited to files being added, deleted, or extended. It's been very well debugged and is quite simple to set up. (rsync -avzP username@hostname:/path/to/source/ /path/to/dest/ or rsync -avzP /path/to/source/ username@hostname:/path/to/dest/ are common.) rsync is frequently tunneled over ssh; it does have its own protocol if you don't mind it being publicly open. But if you've got a lot of data that is being slightly moved, or frequent renames, a tool like git can make much better use of bandwidth. It does carry the downside of keeping history on both sides, which might be less disk-efficient than you'd like, but it can more than compensate if your bandwidth is a bit less amazing. git is also frequently tunneled over ssh; it also has its own protocol if you don't mind it being publicly open. I doubt either one has D library bindings, but C bindings ought to be easy to come by. :)
For some web-host issues I have to write a file backup/synchronisation tool for the common OSes in the server sector (Windows/Linux). Most Linux root servers offer an SSH interface for secure communication, so I could use the SSH File Transfer Protocol, but what's the best solution on the Windows side (on the fly)? And are there good D libraries (or C alternatives)? I'm writing here and not in the admin or Windows stack for one reason: it's important that there are existing libraries, so an easy implementation is more important than the existence of an interface or protocol. Simplicity and language features, not raw possibilities, have priority. All in all I am looking for an easy way to implement an OS-independent tool for file exchange. For the synchronisation work it has to be possible to access some file information (last write time, modified time, file size, etc.). Edit: "my version" of a synchronisation tool should work on a new system without extra software installation (maybe some automated installation over an SSH Windows equivalent, if there is one). You only enter your access data and it should work. Furthermore I also need a protocol, and this is the biggest problem, because SSH doesn't work on Windows out of the box - is there an equivalent?
Basic Software-Design Questions about backup-prog. in D
Isn't Bazaar supposed to be a distributed VCS (SCM, whatever)? You should have the entire repository along with that working tree (that's how git/hg work). If I've misunderstood and you really only have the working files, then the only thing you can do is git init in that tree, with no chance to restore the history.
Recently my Bazaar server broke. I'm left with only a working copy of my branch. Is there a possibility to migrate this working copy to a git repository? Do I have a chance to restore the history of commits?
How to migrate from bazaar working copy to git
There is a directory named rdiff-backup-data in your destination backup folder which contains a lot of interesting stuff, including files named like file_statistics.2011-04-08T16:50:20+03:00.data.gz which do have information about the files changed.
I haven't found any obvious answer to this question in the rdiff-backup documentation: is there a simple way to list the changes included in a given increment, i.e. which files/folders have been added, removed, updated, etc.? I'm not necessarily interested in the details of those changes (i.e. what was changed in a given file). When I run the following command: rdiff-backup --list-increments backup I get a list of increments, for example: "Found 3 increments: increment3... increment2... increment1... Current mirror: ..." I can list the changes included in the latest increment (increment3) by running the following command: rdiff-backup --list-changed-since time backup by choosing the adequate "time" value. But what if I want to see what changes are contained in increment2 only? Thanks for your help!
rdiff-backup increment contents (files added, removed, updated, etc.)
Seems the dirset collection was skipping non-empty directories. I overcame it by using a fileset for the entire backup dir: <tstamp> <format pattern="MM/DD/yyyy HH:MM aa" offset="-4" property="backup.deletedate" /> </tstamp> <echo message="Deleting log directories created on or before ${backup.deletedate}" /> <delete verbose="true" includeemptydirs="true"> <fileset dir="${backup.dir}"> <date datetime="${backup.deletedate}" when="before" checkdirs="true" /> </fileset> </delete> Works like a charm!
I have added a target in a build file to delete backups that are older than 4 days by using a timestamp: <tstamp> <format pattern="MM/DD/yyyy HH:MM aa" offset="-4" property="backup.deletedate" /> </tstamp> <echo message="Deleting log directories created on or before ${backup.deletedate}" /> <delete verbose="true"> <dirset dir="${backup.dir}/CI"> <date datetime="${backup.deletedate}" when="before" checkdirs="true" /> </dirset> <dirset dir="${backup.dir}/DEV_MASTER"> <date datetime="${backup.deletedate}" when="before" checkdirs="true" /> </dirset> </delete> However it only deletes from the first directory (CI) and skips the second. How can I set it to remove from BOTH directories?
ANT build- Deleting multiple dirsets
Most of those options are fine, but check Structure -> "Add DROP TABLE..." and "Add CREATE PROCEDURE", then Data -> "Extended inserts" (this decreases loading time when re-inserting the data and isn't essential). Then click "Save as file" and export; the rest of the options are suitable.
For someone who is not used to MySQL, when using the phpMyAdmin administration program, what is the recommended setup to back up the entire database with all tables and data?
if we only have access to phpMyAdmin what's the best way to backup the entire db?
If you are concerned about using up space for the repository on the remote system, then I would suggest not using Git on the remote system at all. Maybe consider using rsync to sync between the local and remote systems. For keeping a history on the local system, you can then commit to a Git repository on the local system after each rsync. This way you have a backup, with complete history, on the local system and no history at all on the remote.
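A minimal sketch of that flow (the host name, paths and schedule are placeholders); run it from cron or by hand:

# pull the remote logs without keeping any history on the device
rsync -az --delete remote-device:/var/log/device/ ~/device-logs/

# snapshot the current state into a local-only Git repository
cd ~/device-logs
git init -q              # harmless after the first run
git add -A
git commit -q -m "Log sync $(date +%F)"   # exits non-zero if nothing changed, which is fine for cron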
I have a remote device which I access over a wireless and intermittent link. The device logs daily data, and I'd like to be able to get all updates in a robust way. I thought of using git for the purpose: I'd have a periodic job which would git commit all the logs on the remote At the local server, I'd git pull any new log, so that the underlying protocol would handle the atomicity and robustness of the connection However, I still have an issue: how do I keep the remote repository "small"? I'd like to purge in some way the revisions which I already got on the local server, but keep the history on the local server. I tried with git filter branch and repack, but it breaks any clones. I believe it is the same with git rebase --interactive, with the added issue of requiring manual editing of the file (i.e. changing pick -> squash). Maybe creating new branches every time and deleting them?
Git as a remote backup and update system
You need a complete backup file to roll back your changes. If you do not have it you cannot revert. Does no one have a more recent backup (placed locally somewhere on a server)? Is there any other way to retrieve the data (maybe development code from repositories on local computers)? If not, then you are learning how important regular backups are - NOW. Sorry. EDIT: Check this recovery info for the possibility of recovering any of your data. (A commenter asks whether operating-system-level recovery might be another option.)
I have a piece of software, and in the instructions I ordered the operators to back up the software's data (SQL Server 2000, locally) regularly. But the operator didn't, and out of curiosity she restored the database from the first backup (7 months earlier). Now we have lost all the software's data (the database) - such a disaster. For more information, the software uses this query for backing up: BACKUP DATABASE databaseName TO DISK=... and this one for restoring: RESTORE DATABASE databaseName FROM DISK=... I appreciate any ideas.
Is there any way to restore a database to its last stable state without having a backup file?
You'll find that if you add Database=Quickstem to your connection string, your backup code will work just fine. Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\quickstem.mdf;Integrated Security=True;User Instance=True;Database=Quickstem
I would like to be able to run an on demand backup of a .Net MVC app's SQL Express 2008 database to eg a flash stick plugged into the machine running the app. I tried QuickstemDataContext db = new QuickstemDataContext(); string quickstem_path = Path.Combine(save_path, "quickstem.backup"); db.ExecuteCommand(string.Format("BACKUP DATABASE {1} TO DISK = '{0}' WITH COMPRESSION;", quickstem_path, db.Mapping.DatabaseName)); But get the exception Database 'quickstem' does not exist. Make sure that the name is entered correctly. BACKUP DATABASE is terminating abnormally I am using the following connection string. connectionString="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\quickstem.mdf;Integrated Security=True;User Instance=True" Do I need to attach the DB using something like Express Management Studio and give it a name etc. Ideally I want to keep the app deploy very simple without having to setup sql management studio etc. Can this attaching be done another way or can a Backup be done with out needing to attach I tried giving it the full path of the .mdf file instead of the database name but got a syntax error on c:
Backup Sql Express
I would have thought the failed backup would be rolled back, so the ongoing transaction log backups, plus a restore of the last successful full database backup taken prior to them, should be sufficient to restore the database. Having said that, I would fix this issue as soon as possible; it's not a good position to be in.
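The restore sequence would look roughly like this (file names and the number of log backups are placeholders):

RESTORE DATABASE MyDb FROM DISK = 'D:\Backups\MyDb_full.bak'
    WITH NORECOVERY, REPLACE;

-- apply every transaction log backup taken after that full backup, in order
RESTORE LOG MyDb FROM DISK = 'D:\Backups\MyDb_log_0830.trn' WITH NORECOVERY;
RESTORE LOG MyDb FROM DISK = 'D:\Backups\MyDb_log_0900.trn' WITH NORECOVERY;

RESTORE DATABASE MyDb WITH RECOVERY;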
I have an SQL Server 2005 instance whose full backup (.BAK) failed due to low disk space. However half hourly transaction log backups continue (.TRN). Assuming I have an older full backup, could these continuing transaction logs be used to restore the database? i.e. do the transaction log backups only run from the last successful backup and ingore any intermediate failed full backups?
SQL Server 2005 failed backup effect on transaction logs
Does it work from the shell? For example, if you do the following, do you get the same error? # Dump global objects, such as users /usr/lib/postgresql/8.4/bin/pg_dumpall -g -U postgres > /backup/global.sql # Dump schema of database /usr/lib/postgresql/8.4/bin/pg_dump -Fp -s -v -f /backup/schema.sql -U postgres dbname # Dump contents of database /usr/lib/postgresql/8.4/bin/pg_dump -Fc -v -f /backup/full.dump -Z4 -U postgres dbname
I am desperately trying to back up my database using pgAdmin III and I receive an error: "geometry contains non-closed rings". How can I get around this?
PostgreSQL Error: geometry contains non-closed rings
I have solved the issue. Although the folder had permissions for the account to copy files across, it did not have share permissions set on the root drive. (Link to a share permissions tutorial.)
I have a job that runs every 15 minutes and uses robocopy to copy a backup of the transaction logs to a different server. This job is failing. USER has full access rights to both the home folder and the destination folder. Job SQL: robocopy "e:\Backup\SQL02$PROD" "\SERVER\DRIVE$\prod\sql\backup\" /MIR /E /Z /NS /NFL /NDL /NJH /NP /R:10 /W:30 if %errorlevel% LSS 8 set errorlevel=0 Error log: Date 22/06/2010 09:05:00 Log Job History (Sync Production backup to app040) Step ID 1 Server NDAHHSQL02\PRODUCTION Job Name Sync Production backup to app040 Step Name robocopy production Duration 00:00:00 Sql Severity 0 Sql Message ID 0 Operator Emailed Operator Net sent Operator Paged Retries Attempted 0 Message Executed as user: DOMAIN\USER. 2010/06/22 09:05:00 ERROR 5 (0x00000005) Getting File System Type of Destination \\SERVER\DRIVE$\prod\sql\backup\ Access is denied. 2010/06/22 09:05:00 ERROR 5 (0x00000005) Creating Destination Directory \\SERVER\DRIVE$\prod\sql\backup\ Access is denied. Process Exit Code 16. The step failed.
Database Backup Robocopy
I faced a similar situation today and used the following workaround: use an "Execute Process Task" to rename the backup. I created a batch file with the following command and executed it after the database backup task: ren DBNAME.bak DBNAME_%date:~-4,4%%date:~-7,2%%date:~4,2%.bak The above command renames DBNAME.bak to DBNAME_yyyymmdd.bak. Keep the batch file in the same folder where you keep the backup file. In the Execute Process Task Editor, specify the batch file name in the Executable property and the location of the batch file in the WorkingDirectory property. Hope it helps.
I have SQL Server 2008 and I would like to change the name of the backup file. I use an SSIS package to perform my backups. The file's name looks like [DATABASE_NAME]_backup_YYYY_MM_DD_XXXXXX_XXXXXX. This is automatically generated by SQL Server, and I want to remove the "_". How can I modify this? Thank you in advance, Andy.
SQLServer 2008 : Name of backup file
You back up an entire database. A table consists of entries in system tables (sys.objects) with permissions assigned (sys.database_permissions), indexes (sys.indexes) plus allocated 8k data pages. What about foreign key consistency, for example? Upshot: there is no "table" to back up as such. If you insist, then bcp the contents out and back up that file. YMMV for restore.
Comment from the asker: My problem with backing up the entire DB is that it's huge, and the table I need to back up is much smaller. The overhead in backing up the entire DB is enormous.
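As a sketch of the copy-rows approach the question below describes (rather than the bcp route named above), the following T-SQL could run as a scheduled job; the linked server name BACKUPSRV, the database BackupDb, and the table and column names are all placeholders, not anything from the original post.
-- Copy today's rows to a copy of the table on another server, tagged with the run date
INSERT INTO BACKUPSRV.BackupDb.dbo.MyTable_Backup (BackupDate, Col1, Col2)
SELECT GETDATE(), Col1, Col2
FROM dbo.MyTable;
-- Keep only the last two weeks of snapshots on the destination
DELETE FROM BACKUPSRV.BackupDb.dbo.MyTable_Backup
WHERE BackupDate < DATEADD(DAY, -14, GETDATE());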
I have a table in a database that I would like to backup daily, and keep the backups of the last two weeks. It's important that only this single table will be backed up. I couldn't find a way of creating a maintenance plan or a job that will backup a single table, so I thought of creating a stored procedure job that will run the logic I mentioned above by copying rows from my table to a database on a different server, and deleting old rows from that destination database. Unfortunately, I'm not sure if that's even possible. Any ideas how can I accomplish what I'm trying to do would be greatly appreciated.
Daily Backups for a single table in Microsoft SQL Server
If you're worried about transports, use SSH. I tend to use replication over an SSH tunnel to keep a MySQL database in sync. A backup of the version control server (which is not the same as the deployment server) is passed by rsync over ssh. If you want to encrypt files locally you could use gpg, which would of course not work in tandem with the database replication, in that case you'd be forced to use a dump or snapshot of your database at regular intervals.
So what would be the best way to take a backup of code and DB? Is it downloading them locally via HTTP? I fear that is a security risk, as a hacker might get access to it. I am looking into compressing and then encrypting the compressed file, but I don't know what encryption I should use and whether a Linux CLI tool is available for password-protected encryption. Thanks, Arshdeep
best way to take Backup of code and DB?
There is no way you can do it with the Express edition. It has no SQL Agent job scheduling service shipped with it. Please see the edition comparison table. You need to create a batch script to accommodate this, or upgrade to a higher edition.
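For the scheduling itself, Windows Task Scheduler plus sqlcmd is the usual substitute on Express. Below is a hedged sketch of the backup script it could run: the database name and base path come from the question, while writing each day's backup to its own dated file is an assumption that makes old backups easy to delete by file date. Note that RETAINDAYS only stops a backup set from being overwritten before it expires; it never deletes anything, which is why the single file keeps growing.
-- Build a dated file name so each day produces its own backup file
DECLARE @file nvarchar(260);
SET @file = N'C:\Temp\DatabaseName_' + CONVERT(nvarchar(8), GETDATE(), 112) + N'.bak';
BACKUP DATABASE [DatabaseName]
TO DISK = @file
WITH INIT, NAME = N'DatabaseName-Full Database Backup', STATS = 10;
Deleting files older than two days would then be one extra line in the batch script (for example with forfiles), outside SQL Server.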
If I make a database backup job where I set the expiry to 2 days and run the job once every day, how can I delete backup sets that are more than 2 days old? I'm using SQL 2005 Express, so everything runs as a script. This is the script running: BACKUP DATABASE [DatabaseName] TO DISK = N'C:\Temp\DatabaseName.bak' WITH RETAINDAYS = 2, NOFORMAT, NOINIT, NAME = N'DatabaseName-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10 GO After 5 days I have 5 backup sets. This will fill up the disk... Thanks
How can I enforce expiry day in SQL backup append?
You could try rsync with the --bwlimit option. Or you could do a one-off sneakernet backup with a USB drive and subsequently maintain the backup with rsync. As a general rule, always use rsync in preference to cp. It has many benefits, including the fact that you don't have to mount remote volumes.
I tried backing up our site today using the Unix 'cp' command and ended up getting our office blocked out by PLESK -> it added my ip to /etc/hosts.deny as it thought I was flooding the server. After Tech support fixed the issue, they suggested I go folder by folder to back it up, but there's about 10,000 folders on the site totaling 1/2 a terabyte, each with multiple sub-folders, so this isn't viable. Basically I want to be able to mirror the domain on another domain we've got set up on the same dedicated server so I can test with live images (the bulk of our content). Any suggestions e.g adding some rules to open_base_dir and getting PHP to recursively copy the folders to the other domain (remember it's on the same dedicated box so it just needs to traverse the directory, not FTP things).
Make backup of large site with 100,000+ files/images
I don't know how it's done, but these are the possible scenarios: DB-per-customer schema-per-customer single-schema Case 1 is trivial in terms of backup/restore (or import/export), case 2 is similar. I would venture a guess that these 2 are the most used approaches. The third option makes export/import difficult, but not impossible. The basic idea is that a table holds data from all companies, but distinguishes the company by a foreign key. Export and import would require the same kind of ETL tool to be used because these actions require filtering by company ID. The export procedure takes a company as a parameter and runs the task for that company only. The dump would take the form of insert statements (like the one you can get with MySQL or PostgreSQL) or XML (like the one created by DDLUtils). There are situations where the single-schema setup comes in handy, but I don't think multi-tenancy is one of them.
How are applications providing import/export (or backups) of data in SaaS-based multi-tenancy applications, particularly single-database designs? Imports: keeping things simple, I think basic imports are useful, i.e. CSV to a spec (or a way of providing a mapping between CSV columns and fields in the database). Exports: in single-database designs I have seen XML exports and HTML (basic generated site) exports of data. I would assume that XML is a better option? How does one cater for relational data? Would you reference various things within the XML and provide documentation of the relationships, or let users figure this out? Are vendors providing an export/backup that can be imported back in/restored? Your comments appreciated.
SaaS Multi-tenancy Applications: How is data import/export/backup being implemented?
It sounds like importing the project also turned it into a working copy.
I'm using NetBeans with SVN. I open a project in NetBeans and then I import it into an SVN repo. It seems that although I'm only importing the project folder, SVN creates .svn folders in all folders within this project folder. Why is that? I thought .svn folders were only created for checked-out projects, not imported ones. Now this folder acts very strangely: when I open it as a project in NetBeans, NetBeans treats it like an SVN folder in some way. Is this normal? Because I want this one to not be under SVN.
importing a project into a svn repository question
No. SQL Server differential backups contain those pages that have been changed since the last full backup. This will be completely unrelated to SQL DML statements that have been run and the data can not be extracted. What are you trying to do though?
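As the answer says, the differential cannot be converted into statements; it can only be applied on top of the full backup it was taken against. A minimal T-SQL sketch of that restore sequence, with hypothetical file names:
RESTORE DATABASE MyDb FROM DISK = N'D:\Backups\MyDb_full.bak' WITH NORECOVERY;
RESTORE DATABASE MyDb FROM DISK = N'D:\Backups\MyDb_diff.bak' WITH RECOVERY;
If statement-level history is the actual goal, the closer SQL Server equivalents to MySQL's binary log are transaction log backups (for point-in-time restore) or change-tracking features, rather than differential backups.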
Is there any way to convert differential backup to SQL statements which will produce identical results when applied? Or any other solution similar to binary log in MySQL?
SQL Server differential backup to SQL statements
The specific error you got on import is because by default it will try to create the table, not just the data in it. You can use the IGNORE=Y flag to avoid that problem. But it will try to insert all the users that existed, not just the one you deleted, which might cause you other problems. Or, it could fail for those rows if there's a unique index.
I have a table in Oracle for users. I am going to install a new schema and want to back up all users with passwords and other fields. I tried the exp and imp utilities, but imp doesn't recover anything. I created a temporary user in the USERS table. Then I did a backup with the command: exp user_owner/password file=file.dmp table=USERS rows=yes indexes=no After that I deleted the temporary username and tried to restore with: imp user_owner/password file=file.dmp table=users fromuser=user_owner Export file created by EXPORT:V10.02.01 via conventional path import done in UTF8 character set and AL16UTF16 NCHAR character set . importing USER_OWNER's objects into USER_OWNER . importing USER_OWNER's objects into USER_OWNER IMP-00015: following statement failed because the object already exists: bla bla bla Import terminated successfully with warnings. In the USERS table the temporary user didn't appear. Please advise how I can perform a task like backing up and restoring rows (with values) of a table in Oracle.
How to backup and restore records in database (Oracle 10)
It depends on the table type. If the tables are InnoDB, then you should be using the --single-transaction flag so that the dumps are coherent. If your tables are MyISAM, you have a small issue: if you run mysqldump as is, the dump will lock the tables (no writing) while performing the dump. This is obviously a huge bottleneck as the databases get larger. You can override this with the --lock-tables=false option, but then you can't guarantee that the backups won't contain some inconsistent data. The ideal solution would be to have a backup replication slave server outside of your production environment to take the dumps from.
Hi, I have multiple databases that need to be backed up daily. Currently, I am using a cron job to run a batch file that backs them up. Here is my situation: I have about 10 databases to back up, 3 of which are growing pretty fast. The current DB sizes are: DB1 = 35 MB, DB2 = 10 MB, DB3 = 9 MB, and the rest: DBx = 5 MB. My batch file code is: mysqldump -u root -pxxxx DB1 > d:/backup/DB1_datetime.sql mysqldump -u root -pxxxx DB2 > d:/backup/DB2_datetime.sql ... and so on for the rest. I have run this for 2 days and it seems quite okay to me. But I wonder whether it will affect my website's performance while the batch file is executing. If this method is not good, how do you back up multiple databases while they are live and their size keeps increasing daily?
Backup multiple Databases[MySQL] at 1 time?
Yes, I would use simple mode. In fact, I do... The data does not require "point in time recovery" so make life easier for yourself. Do you even need a full backup?
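A minimal T-SQL sketch of that setup is below. StagingDb and the log file's logical name are placeholders (check sys.database_files for the real logical name); the shrink is a one-off to reclaim the 80 GB the log has already grown to, not something to schedule.
-- Switch the staging database to the simple recovery model
ALTER DATABASE StagingDb SET RECOVERY SIMPLE;
-- One-off: shrink the oversized log back to a sensible size (target in MB)
USE StagingDb;
DBCC SHRINKFILE (StagingDb_log, 1024);
-- Daily full backup after the bulk import
BACKUP DATABASE StagingDb TO DISK = N'D:\Backups\StagingDb.bak' WITH INIT;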
Is it OK to set the recovery model to simple in a staging DB for an ETL process? The customer is not even doing a regular backup, so what's the point in keeping the transaction logs? I propose to organize a daily backup after the bulk import and that's it. Anything against this plan? Also, the transaction logs were at 80 GB after 3 weeks... cheers
SQL Server ETL process transaction logs
After doing some intensive research, I finally found out that this is a bug in SQL Server 2005. After I installed SP3, everything went fine.
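Independently of the SP3 fix, when one .bak file holds several backup sets, the set to restore is selected with WITH FILE = n. A hedged T-SQL sketch follows; the path, position number and logical file names are placeholders (RESTORE HEADERONLY and RESTORE FILELISTONLY show the real values).
-- List the backup sets in the file; note the Position column
RESTORE HEADERONLY FROM DISK = N'D:\Backups\Multi.bak';
-- Inspect the logical file names of, say, the second set
RESTORE FILELISTONLY FROM DISK = N'D:\Backups\Multi.bak' WITH FILE = 2;
-- Restore that set, moving its files to new physical locations
RESTORE DATABASE DataDB
FROM DISK = N'D:\Backups\Multi.bak'
WITH FILE = 2,
     MOVE N'DataDB_Data' TO N'D:\Data\DataDB.mdf',
     MOVE N'DataDB_Log' TO N'D:\Data\DataDB_log.ldf';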
I have a .bak file which contains backup sets of two different databases. It was made by a SQL Server maintenance plan. Now I have to restore both databases. The problem is that while the first database is restored OK (db_companies), the other database (db_data) gives an error: Restore failed for Server 'SBSERVER'. (Microsoft.SqlServer.Smo) System.Data.SqlClient.SqlError: Logical file 'CompaniesDB' is not part of database 'DataDB'. Use RESTORE FILELISTONLY to list the logical file names. (Microsoft.SqlServer.Smo) The database restore wizard shows both databases, and I select the full and the latest differential sets. The RESTORE FILELISTONLY command shows only the CompaniesDB. What's up with this? I've also tried RESTORE DATABASE WITH MOVE but it doesn't recognize the DataDB logical name. Is there any way to restore the DataDB from the backup set?
Restoring two databases from a single backup file (SQL Server 2005)
The ability to recover a database is the most fundamental responsibility of a DBA -- All else amounts to nothing if data is lost. If you have limited knowledge of the recovery process and you are in charge of recovering a production instance, my first suggestion would be to contact support : you don't want to make a mistake. Trust me, you don't want to practice on a production environment. Once the database is restored, when you have plenty of time, I'd suggest you start by having a thorough look at the documentation. You should be fine with the Backup and Recovery Basics Guide. Nowadays, you can perform recovery through the Entreprise Manager web interface (this is a nice wrapper of RMAN).
Here is the scenario: Oracle 10g database running Windows Vista Business. This is a production/live db. Nightly backups (Whole database, online backup, ARCHIVELOG mode) moved to different machine on the network. Hard disk dies. Setup OS and Oracle 10g on the new hard drive. Oracle does not have any db instances yet. Is there an easy (or at least relatively easy) way to restore the database from the backup? I'm not an Oracle DBA and my Oracle knowledge is very limited. I have seen some "advanced RMAN commands", but I have no clue what the doc is about. Is there a 3rd party utility that simplifies the restore process? If the RMAN scripts are the only way to go, then do I have to create an empty database in Oracle before proceeding? thanks
Oracle 10g Backup & Restore
Assuming you want to re-initialize the file every Sunday (you can change this to your favorite day of the week), you can use the following:
declare @init_option nvarchar(50)
declare @cmd nvarchar(1000)
set @init_option = 'NOINIT'
IF (datename(dw, getdate())) = 'Sunday'
    set @init_option = 'INIT'
set @cmd = 'BACKUP DATABASE [db1] TO DISK = N''D:\SqlServer\Backup\db1.bak'' WITH ' + @init_option + ' , NOUNLOAD , NAME = N''db1backup'', NOSKIP , STATS = 10, NOFORMAT'
EXECUTE(@cmd)
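For actually deleting backup files older than a week, one hedged option (an assumption, not part of the answer above) is the undocumented xp_delete_file procedure that the SQL Server 2005 maintenance plan cleanup task uses; it only removes files it recognizes as backups, and it only helps if each backup goes to its own (e.g. dated) file rather than being appended to a single .bak. A sketch, with the folder path as a placeholder:
DECLARE @cutoff datetime
SET @cutoff = DATEADD(dd, -7, GETDATE())
-- 0 = backup files; folder, extension, cutoff date, 0 = do not recurse into subfolders
EXECUTE master.dbo.xp_delete_file 0, N'D:\SqlServer\Backup\', N'bak', @cutoff, 0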
I have an everyday schedule to back up my database in SQL Server 2005. Right now it looks like this: BACKUP DATABASE [db1] TO DISK = N'D:\SqlServer\Backup\db1.bak' WITH NOINIT , NOUNLOAD , NAME = N'db1backup', NOSKIP , STATS = 10, NOFORMAT But in this case the file will grow infinitely, and I want to store only the last 7 backups in the file. How can I do that (maybe erase old backups somehow)?
How to store certain number of backups in backup file
I must say at first I would have thought like you about the differential backup size, so I had to check it up. From what I could read: "The downside is if you run multiple differential backups after your full backup, you're probably including some files in each differential backup that were already included in earlier differential backups, but haven't been recently modified." I am still trying to find what kind of files could be included that were not in the full backup. I can't seem to find the answer, but from what I could see, that would be the reason the backup is growing by a lot. I'm still going to keep looking; if I find the answer I'll come back and comment.
Follow-up comment: Also, a differential backup doesn't just contain changed data. If you modify one row, SQL will back up the whole extent the row resides in. If that row has a relation to another table, it will back up more extents... so I'm guessing your database may contain a lot of relations, so one change could make the whole backup grow by a lot.
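One way to watch this behaviour from the backup history, rather than guessing from file sizes on disk, is to query msdb; this is a general sketch with the database name as a placeholder (type 'D' = full, 'I' = differential).
SELECT database_name, type, backup_start_date,
       backup_size / 1048576.0 AS size_mb
FROM msdb.dbo.backupset
WHERE database_name = 'myDatabase'
ORDER BY backup_start_date DESC;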
For my backup plan for SQL Server 2005, I want to do a full backup on Sunday: BACKUP DATABASE myDatabase TO DISK = 'D:\Database Backups\myDatabase_full.bak' WITH FORMAT GO Then I want to do a differential nightly the rest of the week: BACKUP DATABASE myDatabase TO DISK = 'D:\Database Backups\myDatabase_Diff.bak' WITH DIFFERENTIAL GO My assumption was that if there was little/no activity in the database, then the differential would not increase in size (or wouldn't increase by much). However, when I run the differential backup above (with little or no activity), I'm seeing the differential backup increase by megs at a time. Why is it increasing like that? Thanks
Database backup - differential file size question
Contact Oracle support right now.. Unless you can afford to lose this data, don't mess about with listening to what people on forums have to say, now is not the time to experiment.
Can anyone show me the step of restore and recovery of below scenario? I have used the differential backup (cumulative) everyday. RUN { RECOVER COPY OF DATABASE WITH TAG "whole_database_copy"; BACKUP INCREMENTAL LEVEL 1 CUMULATIVE FOR RECOVER OF COPY WITH TAG "whole_database_copy" DATABASE; } I have copies of all datafile, all backup sets, all redo log files with all archive logs on different media. My system has crashed and all of my working database files are lost. How can I do to recover my database to another server? Regards, Sarith
Backup and Recovery Scenario
Pretty much any version control system I know of supports binary files. Subversion (SVN for short) is free and pretty popular. If you also download TortoiseSVN you can handle everything from within Explorer. The only requirement I cannot help you with is #1, automatic saving from within your application. But you can of course do this by copying over your old version of the file in the file system and committing your changes via TortoiseSVN. PS: for some reason I cannot connect to the SVN site right now; it might be down at the moment. It is still a great product, though :)
This is not strictly a technical question, however I feel this will be useful for many technical people as well. I'm looking for a version management / backup solution which need not be only for source code. This could be for non-text files e.g. images. The requirement is this - Every time I save the file from within the application, it should create a version. I should be able to add comments for say, major revisions. At any time, there should be only one version current. I should be able to view previous versions without doing a 'restore' I should be able to move back and forth between versions. A calendar feature showing the various versions of a file would be helpful, if I could get to it for a specific file from the Explorer context menu I don't really need to compare different versions or anything like that. Windows solutions only. I've looked at NTI Shadow and it comes a bit close to what I'm looking for. Are there any paid / free / open source solutions for the above requirements?
Version Management / Backup solution
The figures I have seen suggest compression ratios in the range 3 - 4.5 times smaller, which is not as good as the figures you quoted for rar. See Tuning the Performance of Backup Compression in SQL Server 2008. On a side note, creating compressed backups is faster.
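For reference, here is a short T-SQL sketch of taking a compressed backup and then reading the achieved ratio back from the backup history; the database name and path are placeholders. (In SQL Server 2008, backup compression is an Enterprise feature; it reached Standard edition in 2008 R2.)
BACKUP DATABASE MyDb
TO DISK = N'D:\Backups\MyDb_compressed.bak'
WITH COMPRESSION, INIT;

SELECT TOP (5) database_name, backup_start_date,
       backup_size / compressed_backup_size AS compression_ratio
FROM msdb.dbo.backupset
WHERE database_name = 'MyDb'
ORDER BY backup_start_date DESC;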
Does anyone have real-world experience of backup compression in SQL 2008? I'd like to know how it performs compared to SQL 2005 + WinRAR. Currently when I back up a database I get a 14 GB file. Using RAR shrinks it to under 400 MB, a massive saving. Most of the data is similar, being figures from the stock market, so I guess that makes it easy to compress.
Sql Server Backup Sizes
What user account is the MSSQLSERVER service running under? I believe LocalSystem is restricted and cannot access remote resources.
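One common workaround, sketched below under the assumption that xp_cmdshell is permitted on the instance (the local path E:\Backup is a placeholder), is to back up to a local drive first and then copy the file across. Either way, the key point is that it is the SQL Server service account's permissions on \\servername\f$\Backup that matter, not the Windows login you run the query as.
-- Back up locally first
BACKUP DATABASE AbraEmployeeSelfService TO DISK = N'E:\Backup\myDB_backup.bak';
-- Then push the file to the remote share; this copy also runs as the service account
EXEC master.dbo.xp_cmdshell 'copy E:\Backup\myDB_backup.bak \\servername\f$\Backup\';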
SQL Server 2000 on Windows Server 2003. I am trying to run a backup, from within EM, directly over to another server, as follows: backup database AbraEmployeeSelfService to disk = '\\servername\f$\Backup\myDB_backup.bak' I get this error: Cannot open backup device '\\servername\f$\Backup\myDB_backup.bak'. Device error or device off-line. See the SQL Server error log for more details. Msg 3013, Level 16, State 1, Line 1 BACKUP DATABASE is terminating abnormally. The error log shows this message: BackupDiskFile::CreateMedia: Backup device '\\servername\f$\Backup\myDB_backup.bak' failed to create. Operating system error = 5(Access is denied.). I am running the command using a windows login that has full rights on the destination path. At least, from my desktop I can open a run window, enter \\servername\f$\Backup, and in the resulting Explorer window I can add/edit/delete files in that directory. What do I need to do to get past that access denial?
sql server 2000 backup directly to a separate server
OK, first things first. Exactly what version of Oracle was the backup taken from? 9i is a marketing label-- we need the full 4 digit version number (i.e. 9.2.0.4). Is your PC running exactly the same version of Oracle? Is your PC running exactly the same operating system? How was the backup done? Are you looking at a consistent cold backup of a database? Do you not have any control files (normally .ctl)? SNCF[SID].ORA sounds like a parameter file, not a control file. You would need a control file to be able to restore the database. If the database is actually up and running somewhere, I'm going to wager that you'll go through far less pain and agony overall if you work on figuring out why using the export utility didn't work and fixing that problem than in trying to recover the database from a cold backup, particularly if there is any sort of confusion over how the backup was taken.
Comment from the asker: Unfortunately so. We'll have to give it another try. Thanks!
I'm a developer so I'm a little lost in the DBA world. Our systems guys have given me a backup of an Oracle 9i database. I have installed Oracle 9i on my PC and am now trying to 'import' the backup files so I have a normal database to work with. The backup folder has one SNCF[SID].ora file and around 150 [SID]-[Date]-[counter]-[somenumber].ora files. The question is, how do I get this data into Oracle so I can query it through SQL? I have gotten as far as creating a database which matches the SID of the backed-up database. Google tells me that I need to go into RMAN and run "database restore." But how does it know where the backup files are located? Any ideas? Thanks. I...don't...know. Seriously, the Oracle data we backed up is part of a very old application which is being phased out. No one really owns the database, and we have no DBAs on hand... so it falls on me, the only developer. I can try to get more details for you tomorrow. In the meantime, the original database runs on a Windows machine and is some version of 9i. I installed Oracle 9i (9.2.0.1.0) on my Windows XP box (I installed this older version specifically so I could recreate the database from the backed-up files). The backup was done specifically so I could try to recreate it on my PC (so we wouldn't mess with production). We only have one window of a few hours a week, so it is not easy to just redo the backup differently. A DBA friend advised that we export the database rather than doing a full backup; however, the systems guys had a problem with the export, so now I have this full backup. As far as control files are concerned, it looks like there is an SNCF[SID].ORA file, about 2.2 MB, which is apparently the control file. All other files (a little over 150) are around half a gig, also with .ORA endings. I'm guessing those are the actual data files. I'll get the exact Oracle and Windows versions soon.
How to recreate an oracle 9i database from backup files (ora files)
Read the backup and recovery guide. Don't just backup... Make sure you test your backup, regularly!!
Comment: While this is true it doesn't really answer the question.
Oracle database 11g. What is the easiest way to set up a full nightly database backup to a network drive (ie drive on another computer)?
Oracle: How to setup a database backup to a network drive
I'm personally fond of using ImageX to capture the VHD to a WIM file. (This is called file-based imaging, as opposed to sector-based imaging.) WIMs are sort of like an NTFS-specific compression format. It also has a single-instance store, which means that files that appear multiple times are only stored once. The compression is superb and the filesystem is restored perfectly with ACLs and reparse points perfectly intact. You can store multiple VHDs and multiple versions of those VHDs in a WIM. Which means you can backup incremental versions of your VHD and it'll just add a little delta to the end of the WIM each time. As for live images, you can script vshadow.exe to make a copy of your virtual machine before backing it up. You can capture the image to WIM format in one of two ways: Mount the virtual machine you want to capture in Windows PE using Virtual Server. Then run ImageX with the /CAPTURE flag and save the WIM to a network drive. Use a tool like VHDMount to mount the virtual machine as a local drive and then capture with ImageX. (In my experience VHDMount is flaky and I would recommend SmartVDK for this task. VHDMount is better for formatting disks and partitioning.) This only skims the surface of this approach. I've been meaning to write up a more detailed tutorial covering the nuances of all of this.
I am seeking a backup tool to back up virtual OS instances run through Microsoft Virtual Server 2005 R2. According to the MS docs, it should be possible to do it live through the Volume Shadow Copy Service, but I am having trouble finding any tool for it. What are the best solutions to back up MS Virtual Server instances?
Backup tool for Microsoft Virtual Server 2005 R2?
You could use expo-file-system and save info to documentDirectory or cacheDirectory. The content for writeAsStringAsync should be a string, hence you might need custom readParseAsyc and writeStringifyAsyc helpers.
expo-file-system: local storage methods and constants with custom read and write:
import * as FileSystem from "expo-file-system";
const {
  deleteAsync,
  documentDirectory,
  cacheDirectory,
  EncodingType,
  getInfoAsync,
  makeDirectoryAsync,
  readAsStringAsync,
  writeAsStringAsync,
  readDirectoryAsync,
} = FileSystem;

const encodingObj = { encoding: EncodingType.UTF8 };

async function readParseAsyc(fileUri) {
  return JSON.parse(await readAsStringAsync(fileUri, encodingObj));
}

function writeStringifyAsyc(fileUri, content) {
  return writeAsStringAsync(fileUri, JSON.stringify(content), encodingObj);
}

const readWriteSampleAsync = async () => {
  const sampleFileUri = `${documentDirectory}sample.txt`;
  await writeStringifyAsyc(sampleFileUri, { sampleArray: ["sample0", "sample1"], sampleBool: false });
  const sampleRead = await readParseAsyc(sampleFileUri);
  console.log(sampleRead);
};
I created a React Native app for personal purposes and this app is about books. I don't use any back end for my app; I just have a JSON file in which all book info is stored. Now, to avoid losing data, I want to create a backup & restore feature in my app. I need to create a backup file, save it on my mobile device, and use it for restoring data.
How can i create backup/restore in React Native project?
What is the difference between Snapshot and Transfer Data to Vault? Is this the default behaviour, or could there be a specific reason for this duration?
Taking a backup of a VM to a Recovery Services vault involves two stages:
1. Snapshot: ensures the availability of a recovery point stored alongside the disks, making it suitable for instant restores. These snapshots remain accessible for up to five days, depending on the user's configured snapshot retention settings.
2. Transfer data to vault: during this phase, a recovery point is established within the vault for long-term retention. This phase initiates only after the completion of the snapshot phase.
At the backend, two sub-tasks are in progress, one of them being the front-end backup job, which can be viewed in the Backup Job details pane.
The completion time of the Transfer data to vault phase may be lengthy, influenced by factors such as disk size, churn per disk, and various other considerations.
Reference: Back up Azure VMs in a Recovery Services vault
I am taking backups of my Azure VM using an Azure Recovery Services vault. When I checked the backup recovery points, I noticed the backup is taking up to 4 to 5 hours, while the initial snapshot took only 10 minutes. It is a scheduled backup that runs once a day during off-hours. Looking at the details, I noticed the time is spent in "Transfer data to vault". What is the difference between Snapshot and Transfer Data to Vault? Is this the default behavior, or could there be a specific reason for this duration?
Azure VM Backup Taking Longer Than Expected
This sounds like a job for the transaction log or replication. https://blog.pythian.com/monitoring-transaction-logs-in-postgresql/ https://www.cherryservers.com/blog/how-to-set-up-postgresql-database-replication Or triggers might also work: https://www.postgresql.org/docs/current/sql-createtrigger.html And if none of those options work, add a last-modified timestamp to every table and query on it.
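Since the query in the question below only looks at sys.views, a slightly wider T-SQL sketch that lists every user object created or altered since the copy was taken may help; note this only catches schema changes, not data changes, which still need one of the approaches above (timestamps, triggers, or the transaction log). The cutoff value is the copy time from the question.
SELECT name, type_desc, create_date, modify_date
FROM sys.objects
WHERE is_ms_shipped = 0
  AND (create_date > '2023-07-26 15:50:00' OR modify_date > '2023-07-26 15:50:00')
ORDER BY modify_date DESC;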
Database DB_B is a copy of DB_A. DB_B is less than an hour old, but DB_A is live until the users are switched over to DB_B. How do I track the changes made during that interval? I will know the time that the copy was made. I can't be adding extra columns or tables to DB_A unless they are small changes and/or don't affect business. This is my current script. Can I do better than this? SELECT [name] ,create_date ,modify_date FROM sys.views WHERE modify_date > '26 July 2023 15:50:00' Ideally I would like enough information from DB_A to create an insert/update script on DB_B. Is this possible?
Get short-term database changes on database move
You can backup a copy of your folder containing your git project, that's it. Later if you want to have it on a new remote repository you could change your remote: git remote set-url origin https://github.com/OWNER/REPOSITORY.git If you worked only locally you add the remote: git remote add origin https://github.com/OWNER/REPOSITORY.git
I've just started using Git and Git LFS for version control on game projects that I'm making with the Unreal Engine. These projects use a lot of large binary files and so they're expensive to store on an online server for long periods of time. Accordingly, I want to make offline backups of my project that contain all files and assets. This is for general backup purposes during development, and also for longer-term backup, e.g. when I have a project that I've finished working on for the time being, but might want to pick back up later. From what I currently understand, one option is to clone my repository to a new location on my desktop PC, then literally just delete the (hidden) .git folder from that new cloned version of the repo, and then zip the project folder. I can then delete the remote repository. When I want to continue development, I can unzip the folder I zipped up, open Git Bash, and run "git init" to start a new repo. However, that obviously won't restore my old branches or commit history. Is there an alternative way of backing up a repository if I want to keep alive things like the branches and commit history? I've considered whether the correct approach is simply to clone the remote repo straight to the backup drive. However, I don't know what would happen if I then deleted the remote repo. Could I simply create a new remote repo and then associate it with my local backup repo by opening Git Bash in the backup folder and calling e.g. "git remote set-url origin git://new.url.here"?
What is the best way to create an offline backup of a git project (Unreal Engine)?
To answer the question straight: there is no way in Ansible to do something like the above without using the shell module. That comes with all the downsides of the shell module, which shouldn't be used unless there is absolutely no other way around it. However, since my objective here was »backing up KVM VMs«, I found two alternative ways. 1. Mount a network volume (NFS, etc.) and use dd … of=/path/to/network/drive/… This way the data is copied over the network without piping it over SSH. 2. Libvirt offers the ability to create backups (full and incremental) in »pull mode«. Going that way, libvirt exposes a network block device (NBD), which in turn can be read from the target system.
I have set up some VMs on a hypervisor. Each of them uses a raw LVM volume as its disk. This way I can use LVM snapshots to prepare a backup. Up to this point it is working, and the only missing piece of the puzzle boils down to: how can/should I do something like this with Ansible: dd bs=16M if=/dev/h-vg/vm-dev | ssh root@serverB "dd bs=16M of=/path/to/backup.img" I know that I can first use dd to create the image and then ansible.builtin.fetch to pull it down. But then I would need three times as much hard disk space as the volume I want to back up, since I need space for the snapshot and additional space for the image. Update: With regards to the comments, I know that using the shell module is an option (as always), but I want to make sure that there isn't any other, more »ansibleish« way to do that, to avoid all the downsides of using the shell module. Piping data over ssh is some sort of general tool, but since Ansible is managing all the »to and fro« I am curious whether there is an Ansible way to stream data like that.
how to backup raw- / kvm-device directly over the network
Found the steps:
1. Take the backup of the existing DB:
mongodump --host [IP ADDRESS] --port [PORT] --db [DB NAME] --authenticationDatabase [DB NAME] --username [USERNAME] --password [PASSWORD] --out [BACKUP DIRECTORY]
2. Once the backup is ready, copy it to Atlas using this:
mongorestore --uri "[MONGO ATLAS CLUSTER URL]" --db=[DB NAME] [BACKUP DIRECTORY]
The right access needs to be given to the user.
Thanks, Atul
I want to import a DB into the Mongo Atlas from my managed mongo instance(which is managed by us). The backup is in the form of BSON. Want some hints, how can I do that. MongoDB - 6.0 Thanks, Atul
Import export of MongoDb database from local instance to Mongo Atlas
The --target-time option should accept any timestamp allowed by the PostgreSQL recovery_target_time parameter. Your particular --target-time value is using en dashes (–, UTF-8 code: E2 80 93) instead of hyphens (-, UTF-8 code: 2D). If you replace the dashes with hyphens then the timestamp will be accepted, e.g.: 2023-04-22 00:00:00.000000+02:00.
I am trying to recover a barman backup, and I want to use the target-time parameter. But barman does not accept this parameter, even though I use it in the same manner as many sources show. The whole output: barman@BKUP-XXX:~$ barman recover --target-time '2023–04–22 00:00:00.000000+02:00' --remote-ssh-command "ssh postgres@pgdb-XXX" PGDB-XXX 20230416T214125 /var/lib/postgresql/13/main/ Starting remote restore for server PGDB-XXX using backup 20230416T214125 Destination directory: /var/lib/postgresql/13/main/ Remote command: ssh postgres@pgdb-XXX ERROR: Unable to parse the target time parameter '2023–04–22 00:00:00.000000+02:00': Unknown string format: 2023–04–22 00:00:00.000000+02:00 barman@BKUP-XXX:~$ The string format of the time is unknown, but it's exactly the format used in many examples; no matter whether I include milliseconds or the timezone, every variant I try is unknown. Maybe it's a Python issue? The server is used only for barman; I don't think there is other software with interdependencies with barman or Python. How can I use target-time?
barman does not accept parameter 'target-time'
No, not necessary. Maybe consider exporting the tables in the system_auth keyspace if you want to be able to recover your user data. But as long as you have that and your table CQL, there's nothing else you need to save or backup in the system keyspaces. The node will replace/rebuild all of that at startup time, anyway.
I was wondering about the necessity to backup (in order to restore of course) the system keyspaces of Cassandra. My backup/restore procedure is the following: backup: snapshots (nodetool) + dump of keyspaces (cqlsh DESC keyspace) on all cluster nodes restore: create keyspaces (cqlsh) restore schema.cql of all tables of all snapshots (cqlsh) import tables of all snapshots (sstableloader) With this procedure, I was wondering: I read that among system keyspaces, only system_schema could be usefull to be backuped/restored, are there others ? Since I recreate keyspaces, and reimport tables schema (cql included in snapshots for each table), is the backup/restore of the keyspace system_schema necessary ? Thank you
Is it necessary to take a backup of the system_schema keyspace?
You can modify your backup command to include the WITH INIT option:
REM Backup each database, prepending the date to the filename
FOR /F "tokens=*" %%I IN (%DBList%) DO (
ECHO Backing up database: %%I
SqlCmd -E -S ERP2 -Q "BACKUP DATABASE [%%I] TO Disk='D:\SQL ERP2_Backup\%%I.bak' WITH INIT"
ECHO.
)
Any existing backup file with the same name will be overwritten.
I use a bat file to backup databases to destination folder as below. @ECHO OFF SETLOCAL REM Get date in format YYYY-MM-DD (assumes the locale is the United States) FOR /F "tokens=1,2,3,4 delims=/ " %%A IN ('Date /T') DO SET NowDate=%%D-%%B-%%C REM Build a list of databases to backup SET DBList=D:\SQLDBList.txt SqlCmd -E -S EvergrandERP2 -h-1 -W -Q "SET NoCount ON; SELECT Name FROM master.dbo.sysDatabases WHERE [Name] NOT IN ('tempdb')" > "%DBList%" REM Backup each database, prepending the date to the filename FOR /F "tokens=*" %%I IN (%DBList%) DO ( ECHO Backing up database: %%I SqlCmd -E -S ERP2 -Q "BACKUP DATABASE [%%I] TO Disk='D:\SQL ERP2_Backup\%%I.bak'" ECHO.) REM Clean up the temp file IF EXIST "%DBList%" DEL /F /Q "%DBList%" ENDLOCAL I can successfully backup the database to the folder, but when I run the bat file again, I cannot overwrite backup files. What can I do to overwrite the old file when I run the .bat file again?
Question of SQL backup database to destination folder by bat file
CHECK constraints are not yet supported by the latest phpMyAdmin version 5.2.0. This is overdue, because CHECK constraints have been supported since MySQL 8.0.16 (2019-04-25) and MariaDB 10.2.1 (2016-07-04). There are issues showing that they are aware of the feature request. https://github.com/phpmyadmin/phpmyadmin/issues/13592 https://github.com/phpmyadmin/phpmyadmin/issues/16224 They currently schedule it for the phpMyAdmin 5.3.0 milestone, but they have not publicized a due date for that milestone. The intervals between milestones are irregular, between 4 and 13 months. For example, here's the history of recent milestones: 4.8.0: 2018-05-24 4.9.0: 2019-06-04 5.0.0: 2019-12-26 5.1.0: 2022-01-21 5.2.0: 2022-05-11 So your guess is as good as mine when it will be released. In the meantime, you must use mysqldump.
When I export my database from phpMyAdmin (quick method), it does not include CHECK constraints. However, when I run SHOW CREATE TABLE table, I see the constraints. Moreover, foreign constraints are backed up. In addition, when I take the backup using mysqldump, CHECK constraints are included in the file. Is there any way to tell phpMyAdmin to include the CHECK constraints in the backup? I have checked the "custom" method, but I do not see any option.
How to include "CHECK" constraint to backup done via phpMyAdmin?
First you need to build images of the existing containers that you are using. Next, the docker save command can be used to save all your Docker images, but you need to mention each image name manually, which you don't want to do; however, these docker commands can be executed from a Python or shell script. This document explains clearly how to manage your Docker environment using Python code. Refer to it; there are many functions and example code snippets available. Hope this will help you.
I need advice on how to back up Docker/K8s. I am migrating my dev/lab environment from VMware to Docker (which I am not 100% familiar with yet) and I need to set up a backup/restore procedure. It was quite simple with VMware images (copy all VM files). I would like to back up the "whole Docker Desktop", or in other words, I need to make lab environment recovery as simple as possible: e.g. "copy/untar backups to a freshly installed PC", and after recovery all containers, defined networks, volumes, K8s, etc. are recovered as well. After the recovery procedure I am ready to start using the "good old recovered containers". I am aware that the ideal scenario is to keep all container data in volumes and that there are defined procedures for backing up each piece one by one: volumes (https://docs.docker.com/storage/volumes/#back-up-restore-or-migrate-data-volumes), images, containers (https://docs.docker.com/desktop/backup-and-restore/). K8s backup/recovery is probably somehow defined (I am not familiar with it yet). I am wondering if there is any way to back up "Docker Desktop" as a single "tar" file, for example, because until I understand Docker/K8s backup procedures deeply, I would prefer to keep backups as simple as possible. I am using Docker/WSL2 on Windows and my most "valuable" containers are 4+ Oracle Database containers (with different Oracle versions/flavors), but I assume the number of containers will grow and I do not want to update the backup script each time a new container is created (this would definitely end up in a missed backup one day). Is there any "simple" recommended way I missed?
Docker Desktop Backup
Redis persists data to disk based on configuration. There are config options to persist every command or periodically, and in different formats (RDB or AOF). To understand how Redis data gets backed up, please go through this link: https://redis.io/docs/management/persistence/. In your case, since Redis is running inside a Docker container, make sure to mount a volume that lives outside the container. Otherwise data will be lost after a container restart.
When does Docker Redis store data on a volume? Does it store to the volume on every command? I want to know when Redis will be backed up.
When does Docker Redis store on a volume?
If you're on a self-managed instance of GitLab and you have administrator rights (or rather the ability to access the Rails console), then you can follow the instructions in the documentation you linked to. Basically, you need to turn on the feature flag by starting the Rails console and running the command: Feature.enable(:bulk_import_projects) After turning on the feature flag, when you go to migrate a group, it should migrate the projects in the group as well. There is no built-in automation to import all of your groups. If you need to, you might consider looking at the Congregate tool mentioned in the documentation.
I want to migrate the group, all sub-groups and projects from gitlab.com to a self contained gitlab instance(local). I can migrate the group and sub-groups but not the projects. The documentation says this, but I'm not able to 100% understand it: On self-managed GitLab, migrating project resources are not available by default. To make them available, ask an administrator to enable the feature flag named bulk_import_projects. On GitLab.com, migrating project resources are not available. link: https://docs.gitlab.com/ee/user/group/import/index.html#migrated-project-resources This means that I can't migrate from gitlab.com to self contained gitlab? Is there an automated method that allows me to do this?
How to migrate entire gitlab project?
If you need to schedule some operations in Firestore, you can consider using Cloud Scheduler, which allows you to schedule HTTP requests or Cloud Pub/Sub messages to Cloud Functions for Firebase that you deploy. If you need to get the documents that are added/updated in a given period of time, like the last 1 hour, then don't forget to add a timestamp field to your documents. In this way, you can query based on that timestamp field.
Comment from the asker: I already have a Cloud Scheduler + Cloud Pub/Sub backup system. But as I have already said, we have more than 3 million docs and it is backing up all of them. It is costing too much. I want to reduce this cost and, unfortunately, we don't have an updated timestamp field at present. Can we get an update_at detail from some metadata of a Firestore doc?
Reply: You can get some data from a SnapshotMetadata object, but there are no details about the last update. You have to add that, as I mentioned in my answer.
I need to back up my prod server Firestore DB hourly. I know about exportDocuments but it incurs one read operation per document exported. I have more than 3 million and these are increasing day by day. Is it possible to export docs that are added/updated in a given period like the last 1 hour? I already have Cloud Scheduler + Cloud Pub/Sub + function-based backup system. It is backing up all the docs. It is costing too much.
Hourly Backup Firestore Database
While I'm not sure about using one of the backup files, it is possible to export an application schema to a deluge file from one account and create an equivalent application on a different account by uploading the deluge file to the new account. Settings->Application IDE->Export Note, the data won't be included in this operation. To include the data, it will need to be exported also. I think there are several methods to do the data export/import.
Good day all, For the purposes of Business Continuity I have investigated whether I can download a backup file from one Zoho instance / profile, to another brand new Zoho Creator instance / profile. The idea is to mitigate risk if my main profile is somehow ransomed or hijacked. But from what I can see, there is no option to restore a downloaded backup into Zoho. Please can someone give me some advice?
How can I restore a downloaded Zoho Creator backup file on a separate Zoho profile?
There is a possibility that the dump file is empty. Check its size, the presence of data in the file itself, and whether the format is correct. In my case, that was most likely what solved the problem.
I am applying a database backup file to my Rails project, but objects are not created in my database (The dump (backup) file is at the root of the project) My backtrace: sorry if it's too big, cut a lot $ rake db:drop Dropped database 'project_development' Dropped database 'project_test' $ rake db:create Created database 'project_development' Created database 'project_test' $ psql project_development<dump SET SET SET SET SET set_config ------------ (1 row) SET SET SET SET SET SET CREATE TABLE ALTER TABLE CREATE SEQUENCE CREATE SEQUENCE ALTER TABLE ALTER SEQUENCE ALTER TABLE COPY 0 COPY 1 COPY 1 COPY 0 COPY 4 COPY 40 COPY 0 setval -------- 1 (1 row) setval -------- 1 (1 row) ALTER TABLE CREATE INDEX ALTER TABLE Everything is successful, but the data does not appear $ rails c Contact.all Contact Load (0.5ms) SELECT "contacts".* FROM "contacts" LIMIT $1 [["LIMIT", 11]] => #<ActiveRecord::Relation []>
The dump file is not applied to the database (Rails, PostgreSQL)
Rclone can't compress the files, but you can instead use a simple script to zip or rar the files and then use rclone to back them up to AWS. If this is OK, I can explain the details here.
I have been using rclone to back up Google Drive data to AWS S3 cloud storage. I have multiple Google Drive accounts whose backups go to AWS S3. All those Google Drives contain different numbers of documents. I want to compress those documents into a single zip file which is then copied to S3. Is there any way to achieve this? I referred to the link below, but it doesn't have complete steps to accomplish the task. https://rclone.org/compress/ Any suggestion would be appreciated.
Rclone Compression | zip | rar and data transfer [closed]
If you have RealCalc installed, that was the point where my backup would time out. I uninstalled it, and the backup completed.
I tried to do a backup of the storage of my Samsung S10+ with the adb backup command: adb backup -shared. The backup starts and the file size increases until about 4.8 GB. Then there is a message on the phone display (after 10 minutes or so) that says: Timeout. Operation aborted. Does anyone know how and where to increase the timeout? Thanks. I tried several options on the S10+ with no effect. I searched the web but did not find a working solution.
adb backup aborts with timeout on samsung s10+
Using the network path provided by the company, R:\erts..\nnn..., failed. The restore succeeded using a server on the same network with a \\Qdata...\nnn.. path instead.
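A quick way to confirm the path and the backup set from the new instance is a hedged T-SQL sketch like the one below (the UNC path is a placeholder based on the answer's description). If these return rows but the wizard still says "No backupset selected", the usual suspect is the SQL Server service account's permissions on the share rather than your own Windows login.
RESTORE HEADERONLY FROM DISK = N'\\Qdata\backups\NNNN.bak';
RESTORE FILELISTONLY FROM DISK = N'\\Qdata\backups\NNNN.bak';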
From a Windows Server 2012 / SQL Server 2014, I backed up a database to a NNNN.bak file on a network drive. When I log in to my new Windows Server 2019 / SQL Server 2019 and try to restore that, I get the famous No backupset selected to be restored error on the upper-left corner of the screen when I select a NNNN.bak. I tried several things like checking/adding the file extension .BAK, restore from a local drive, different checkboxes in "General", "Files", "Options" menu. Important: with the same exact conditions 5 other databases got restored and the other four are not cooperating within the same week. I checked my user permissions inside databases and both SQL servers. I am an administrator with almost all privileges. I tried restore verifyonly... and restore header... etc
From SQL Server 2014 to SQL Server 2019 - No backupset selected to be restored
After much struggle it turned out to be a mismatch between the PHP version running on the live server machine and the one on the localhost machine. After matching the PHP version on localhost it runs fine on a Windows localhost, but it is still struggling on CentOS 7.
I have a backup of a Drupal website which is running on a live CentOS 7 web server. I have to restore this site on localhost. I have restored the database and the website folder on localhost successfully, but I am getting a "Not Found" error on every page except the home page. I have been trying to resolve this issue for one month without success; please can someone help me restore the Drupal site on localhost. What I have done so far: Enabled mod_rewrite in the .htaccess file at /var/www/html (I have placed my website files/folders in the html folder) #Various rewrite rules. <IfModule mod_rewrite.c> RewriteEngine on Base URL (at /var/www/html/sites/default/settings.php) $config_directories = array(); $base_url = "http://localhost/"; ini_set('session.auto_start', 0); Server configuration (at /etc/httpd/conf/httpd.conf) DocumentRoot "/var/www/html" <Directory "/var/www/html"> AllowOverride All # Allow open access: Require all granted </Directory> Database settings (at /var/www/html/sites/default/settings.php) $databases['default']['default'] = array ( 'database' => 'databasenameofmysite', 'username' => 'root', 'password' => '', 'prefix' => '', 'host' => 'localhost', 'port' => '3306', 'namespace' => 'Drupal\Core\Database\Driver\mysql', 'driver' => 'mysql', ); $databases['default']['myseconddb'] = array ( 'database' => 'seconddatabase', 'username' => 'root', 'password' => '', 'prefix' => '', 'host' => 'localhost', 'port' => '3306', 'namespace' => 'Drupal\Core\Database\Driver\mysql', 'driver' => 'mysql', ); $settings['install_profile'] = 'standard'; $config_directories['sync'] = 'sites/default/files /config_zESQ5GgK5qMrKo8T75ePMTuxTIkbfrbzv3YQ0LEpvL-YeSdRapewGr-pZ0AHyYK2Z71SH-GGMw/sync'; But I am still unable to resolve the issue. Any ideas?
"Not Found" Errors on every page except homepage in Drupla website
Calling commands via code: you can execute an Artisan command outside of the CLI. For example, you may wish to fire an Artisan command from a route or controller. You may use the call method on the Artisan facade to accomplish this. Then in your controller you can do this:
public function createBackup(){
    Artisan::call('backup:run', ['--only-db' => true]);
    // whatever you want to display
}
I am trying to make a backup option for my CRM. I have installed this package https://spatie.be/docs/laravel-backup/v5/taking-backups/overview and I am using Laravel ^6. I can back up my DB and all system configuration with this package if I run backup:run from the terminal, but this is not all I want. What I am looking for is to create an interface where the admin of the site can make a backup manually by clicking on options, for example like this: https://jobclass.laraclassifier.com/admin/backups (email: [email protected] password: 123456). Does anyone know how I can do something like this?
How to create backups from user interface CRM?
It turns out it was the brackets in the install command that were causing the issue: --backup-location-config resourceGroup=resourcegroupname,storageAccount=storageAccount,storageAccountKeyEnvVar=AZURE_STORAGE_ACCOUNT_ACCESS_KEY[,subscriptionId=numbersandlettersandstuff] \ I removed the brackets, like this: --backup-location-config resourceGroup=resourcegroupname,storageAccount=storageAccount,storageAccountKeyEnvVar=AZURE_STORAGE_ACCOUNT_ACCESS_KEY,subscriptionId=numbersandlettersandstuff \ and now it works.
I'm trying to use Velero to back up an AKS cluster, but for some reason I'm unable to set the backup location in Velero; I'm getting the error below. I can confirm the credentials-velero file I have contains the correct storage access key, and the secret (cloud-credentials) reflects it as well. I'm kind of at a loss as to why it's throwing me this error. I have never used Velero before. EDIT: I used the following commands to get the credential file. Obtain the Azure Storage account access key: AZURE_STORAGE_ACCOUNT_ACCESS_KEY=`az storage account keys list --account-name storsmaxdv --query "[?keyName == 'key1'].value" -o tsv` Then I create the credential file: cat << EOF > ./credentials-velero AZURE_STORAGE_ACCOUNT_ACCESS_KEY=${AZURE_STORAGE_ACCOUNT_ACCESS_KEY} AZURE_CLOUD_NAME=AzurePublicCloud EOF Then my install command is: ./velero install \ --provider azure --plugins velero/velero-plugin-for-microsoft-azure:v1.3.0 \ --bucket velero \ --secret-file ./credentials-velero \ --backup-location-config resourceGroup=resourcegroupname,storageAccount=storageAccount,storageAccountKeyEnvVar=AZURE_STORAGE_ACCOUNT_ACCESS_KEY[,subscriptionId=numbersandlettersandstuff] \ --use-volume-snapshots=false I can verify Velero created a secret called cloud-credentials, and when I decode it with base64 I'm able to see what looks like the contents of my credentials-velero file, for example: AZURE_STORAGE_ACCOUNT_ACCESS_KEY=MYAZURESTORAGEACCOUNTKEY AZURE_CLOUD_NAME=AzurePublicCloud
Trouble creating Velero storage location with storage access key
It appears that the poster module is using the Python 2 print "x" syntax instead of the Python 3 print("x") syntax, so you are probably trying to use the wrong version of poster for your interpreter. There's a module called poster3; maybe give that a try.
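As an aside, the same multipart upload can be done without any Python at all. This is only a sketch of an alternative approach, not something from the answer above: it reuses the RebaseData endpoint and the files[] field name from the question, and relies on curl's -F option to send a multipart/form-data POST.

# Upload the .bak file and save the converted archive
curl -F 'files[]=@example.bak' \
     -o output.zip \
     https://www.rebasedata.com/api/v1/convert

# Inspect the result; if the conversion failed, output.zip will
# contain a JSON error message instead of a ZIP archive
file output.zip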
So I am trying to read a .bak file and, from googling, it seems RebaseData is the way to go; however, I keep getting a traceback error:

Traceback (most recent call last):
  File "C:filepath", line 1, in
    from poster.encode import multipart_encode
  File "C:filepath\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\poster_init_.py", line 4, in
    import poster.streaminghttp
  File "C:filepath\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\poster\streaminghttp.py", line 58
    print "send:", repr(value)
    ^
SyntaxError: invalid syntax

The RebaseData code:

from poster.encode import multipart_encode
from poster.streaminghttp import register_openers
import urllib2

# Register the streaming http handlers with urllib2
register_openers()

# Use multipart encoding for the input files
datagen, headers = multipart_encode({'files[]': open('example.bak', 'rb')})

# Create the request object
request = urllib2.Request('https://www.rebasedata.com/api/v1/convert', datagen, headers)

# Do the request and get the response
# Here the BAK file gets converted to CSV
response = urllib2.urlopen(request)

# Check if an error came back
if response.info().getheader('Content-Type') == 'application/json':
    print response.read()
    sys.exit(1)

# Write the response to /tmp/output.zip
with open('/tmp/output.zip', 'wb') as local_file:
    local_file.write(response.read())

print 'Conversion result successfully written to /tmp/output.zip!'

How do I actually get this script to run? Thank you. Note: I replaced my original file path with "filepath".
RebaseData to read backup files
I thought of an alternative that works. The idea is to mount the Azure file storage as a disk, so it's not really "local" but rather a mounted network share, and then use Rclone to copy from that mounted path to S3.
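A rough sketch of that workflow on Linux might look like the following. The storage account, share name, account key, mount point, and the name of the configured Rclone S3 remote are all placeholders, not values from the original posts.

# Mount the Azure file share over SMB (CIFS)
sudo mkdir -p /mnt/azurefiles
sudo mount -t cifs //<storage-account>.file.core.windows.net/<share-name> /mnt/azurefiles \
     -o vers=3.0,username=<storage-account>,password=<storage-account-key>

# Copy the mounted directory tree to an S3 bucket using a
# previously configured Rclone remote named "s3remote"
rclone copy /mnt/azurefiles s3remote:<bucket-name>/<path> --progress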
I would like to copy a directory (with all its subfiles, folders, etc.) from Azure file storage (not Azure blob storage) to an AWS S3 bucket using PowerShell. So: Azure Files -> Amazon Web Services (AWS) S3.

What I tried: using Rclone, but Rclone only supports blob and not file storage for the moment (see here); using azcopy, but azcopy does not allow the following combination: Azure Files (SAS) -> Amazon Web Services (AWS) S3 (Access Key).

The process must not go through a local location (virtual machine). Any ideas? Thanks!
Copy Azure Files to Amazon-S3 bucket
You have to copy in all of the jars from the contrib/s3-repository/lib folder, in addition to the plugin jar from dist/.
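In concrete terms, that could look roughly like the sketch below. The contrib path is taken from the answer above and the /opt/solr install location from the question; the exact folder name can differ between Solr releases, and the systemd service name "solr" is an assumption, so verify both against your installation.

# Copy the S3 repository plugin jar plus all of its dependencies
cp /opt/solr/dist/solr-s3-repository-8.10.1.jar /opt/solr/server/solr/lib/
cp /opt/solr/contrib/s3-repository/lib/*.jar /opt/solr/server/solr/lib/

# Fix ownership and restart Solr so the new jars are picked up
chown solr:solr /opt/solr/server/solr/lib/*.jar
systemctl restart solr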
I'm new to Solr, so let me know in case I'm missing anything here. I'm following this guide but no luck so far - https://solr.apache.org/guide/8_10/making-and-restoring-backups.html

So, what I've done at the moment: in solr.xml I've added a backup section

<backup>
  <repository name="s3" class="org.apache.solr.s3.S3BackupRepository" default="false">
    <str name="s3.bucket.name">solr-backups</str>
    <str name="s3.region">us-east-1</str>
  </repository>
</backup>

After that, I've added the S3 plugin (via Ansible, as I usually do with the rest of the setup):

- name: Copy SOLR S3 module to server
  copy:
    src: /opt/solr/dist/solr-s3-repository-8.10.1.jar
    dest: /opt/solr/server/solr/lib
    owner: solr
    group: solr
    remote_src: yes
  become_user: root

And finally, added S3 credentials to the Solr config:

- name: Add aws credentials
  become_user: root
  lineinfile:
    path: /opt/solr/bin/solr.in.sh
    line: 'SOLR_OPTS="-Daws.accessKeyId=REDACTED -Daws.secretAccessKey=REDACTED"'

After restarting Solr to have all the new settings applied, I'm running the following command to do a backup:

http://my-server.com:8983/solr/schools/replication?command=backup&name=29-10-2021&repository=s3&location=backupfolder

But I'm receiving the following error:

HTTP ERROR 500 java.lang.NoClassDefFoundError: org/apache/solr/core/backup/repository/BackupRepository
URI: /solr/schools/replication
STATUS: 500
MESSAGE: java.lang.NoClassDefFoundError: org/apache/solr/core/backup/repository/BackupRepository
SERVLET: default
CAUSED BY: java.lang.NoClassDefFoundError: org/apache/solr/core/backup/repository/BackupRepository
CAUSED BY: java.lang.ClassNotFoundException: org.apache.solr.core.backup.repository.BackupRepository

Any idea what might be wrong with my config?
How to set up a Solr backup to S3 in version 8.10
You will be told there are umpteen duplicate ways to do this, so, in this 22nd year of the 21st century :-), note that Windows has no native way of returning a sequential ISO date. The primary answer will be to use PowerShell, and for my locale it needs to be called in a suitable format, which introduces a delay:

powershell get-date -format "{yyyy-MMM-ddTHH_mm+01Z}"

Note: colons (:) are not allowed in file names. For me this comes back about 20 seconds later on one machine (it does get faster with use) and 12-5 seconds later on this one; I get 2021-07-07T21_55+01Z when it is actually 2021-Jul-07 21:56.

I have found that the MakeCab method is faster and reliable, but again the format is not pure sequencing, and Jul will NOT appear before Dec in a file list without significant batch-file processing:

2021-Dec-31 23:00:00.txt
2021-Jul-08 21:54:20.txt

So in a .cmd I prefer a more instant result; thus my clock is set to international dates. (You will need to look at your own LOCALE clock setting, bottom right, for your own construction.)

set isodate=%date:~0,10%

instantly returns isodate=2021-07-07, and I can then use that for a filename:

@ECHO OFF
cd E:\PCBackup
set "isodate=%date:~0,10%"
dir /s > %isodate%-dirlist.txt

The dir listing then includes 2021-07-07-dirlist.txt. If you want to run it several times in a day, use:

@ECHO OFF
cd E:\PCBackup
set "isodate=%date:~0,10%"
set "isotime=%time:~0,2%-%time:~3,2%-%time:~6,2%"
dir /s > %isodate%T%isotime%+01Z-dirlist.txt

Amend that any way you wish for your time zone and your own clock, whatever your date format may be (even 31/2021/12). Look at the way I split %time with the :~ substring syntax: the first number is the start position (base 0), the second is the number of characters. One example for an "English" clock date of 31/12/2021 would be to simply reverse the pieces: %date:~6,4%-%date:~3,2%-%date:~0,2%. For an American %date% of Thu 07/08/2021, use %date:~10,4%-%date:~4,2%-%date:~7,2%.
I'm trying to preserve the dates of files that I'm backing up onto an external drive, in the unlikely event that the dates get messed up for whatever reason (I had a previous experience where I lost date information and had no backup). I'm doing this through a batch file containing the following:

@ECHO OFF
cd E:\PCBackup
dir /s > dirlist.txt

I would simply run this batch file after running my backup using FreeFileSync. Then, if I need to, I can search the txt file for the filename and see its corresponding date. However, when this batch file runs, if there is a previous dirlist.txt, then it is overwritten with the new dirlist.txt. So, in a scenario where the dates get messed up and I don't yet realize it, if I run this batch file, it will overwrite the previous dirlist.txt with one that has the messed-up dates, and I'd lose the date information!

So, what I think I want it to do is: if dirlist.txt already exists, then create a new one, say something like dirlist1.txt, so that I can have several "backups" of the text file that I can manually delete if necessary. I've seen that one can instead use >> with something like dir /s >> dirlist.txt to append to an existing file instead of overwriting, but I don't want to append if I don't have to; I'd still like to create a new file.

Is there a way to accomplish this? I'm also open to alternative/simpler ways of preserving the dates, if there are any. Please keep in mind that I know little about CMD commands or programming, outside of a computer science course I took years ago. Thank you.
Batch File DIR Command to Text File Without Overwrite [duplicate]
I found ghorg, which works with GitLab and clones all repos:

ghorg clone <gitlab_username> --clone-type=user --base-url=https://<your.instance.gitlab.com> --scm=gitlab --token=XXXXXXXXXXXXX

I would still like to find a better option, which downloads the tar.gz files rather than cloning the repos. Also, ghorg relies on brew, which is somewhat cumbersome to install on my OS (Ubuntu). It would be better if there were a quicker/easier option.
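As a sketch of that "better option": GitLab's REST API can list your projects and serve repository archives directly, so a small shell loop can fetch one tar.gz per repo without cloning. This is not from the answer above; it assumes a personal access token with the api scope, that you own at most 100 projects (otherwise pagination is needed), and that jq is installed.

GITLAB_URL="https://gitlab.com"   # or your own instance
TOKEN="XXXXXXXXXXXXX"             # personal access token with api scope

# List owned projects, then download an archive of each default branch
curl -s --header "PRIVATE-TOKEN: $TOKEN" \
     "$GITLAB_URL/api/v4/projects?owned=true&per_page=100" |
jq -r '.[] | "\(.id) \(.path_with_namespace)"' |
while read -r id path; do
    echo "Downloading $path ..."
    curl -s --header "PRIVATE-TOKEN: $TOKEN" \
         -o "$(echo "$path" | tr '/' '_').tar.gz" \
         "$GITLAB_URL/api/v4/projects/$id/repository/archive.tar.gz"
done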
Background: I'm looking to make backups of the code that I have hosted on GitLab. I only need the zip/tar.gz files, not the whole code history. (I have many repositories there, so it wouldn't be practical to do this manually.) My OS is Ubuntu 21.04.

Question: How can I download zip/tar.gz files for all my GitLab repos?

What I've found so far: I've tried using gitlabber, but it turns out it only works for GitLab groups and not personal repos (almost all my projects are personal repos). (Also, gitlabber will download the whole repos, not just the tar.gz files.)
How can I download zip/tar.gz files for all my gitlab repos?
The only thing I have found is this, in the scheduling section of the Backup documentation:

"Frequency: Apps must issue a request when there is data that is ready to be backed up. Requests from multiple apps are batched and executed every few hours. Backups happen automatically roughly once a day."

Also, it looks like if you write your own backup agent you can register for backup or restore events. One last thing: when you implement those events, it looks like the backup gives access to the date of the last backup. As far as I can see, a custom agent gets you the info you need, and the documentation has some coding examples.
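For checking this during development (rather than programmatically inside the app), the backup manager's state can also be inspected over adb. This is only a debugging aid and an assumption on my part, not something from the answer above; the package name is a placeholder and the exact field names in the dumpsys output vary between Android versions.

# Force a backup pass for your app
adb shell bmgr backupnow com.example.myapp

# Inspect the backup manager state; look for the lines reporting
# the last backup pass for your package
adb shell dumpsys backup | grep -i "last backup"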
I am using the Android Auto Backup feature in my app. These are my manifest settings:

<application
    android:allowBackup="true"
    android:fullBackupContent="@xml/backup_rules"
    android:fullBackupOnly="true"

Is there a way to know when the last backup for my app was made?
Android auto backup - how to know when the last backup was made?
You can back up your Kubernetes cluster with the etcdctl backup command; here is a complete guide on how to use it. Alternatively, you can make a snapshot of your cluster with etcdctl snapshot save. This approach also allows incremental backups: a full snapshot of etcd is taken first, then a watch is applied and the logs accumulated over a certain period are persisted to the snapshot store. The restore process restores from the full snapshot, starts the embedded etcd, and applies the logged events one by one. You can find more about the incremental backup function here.
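A minimal snapshot sketch against a kubeadm-style cluster might look like the following. The endpoint and certificate paths are the common kubeadm defaults; treat them as assumptions and adjust them to your own cluster.

# Take a point-in-time snapshot of etcd
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Verify the snapshot that was just written
ETCDCTL_API=3 etcdctl snapshot status /var/backups/etcd-snapshot.db --write-out=table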
I am trying to take a backup of the Kubernetes cluster without using any third-party applications. I tried backing up /var/lib/etcd, but etcd only appears to change when there is a change in namespaces; there is no change in etcd when there is a change in pods or replica sets. Is there any other location where Kubernetes stores its data, other than /var/lib/etcd?
Where does kubernetes store the cluster data other than etcd?
According to this site, InfluxDB also has support for incremental backups: "Snapshotting from the server now creates a full backup if one does not exist and creates numbered incremental backups after that." If this is the case but you are still having a problem, perhaps you could downsize your data by running continuous queries and a data retention policy.

Comments: "I was looking into incremental backups, but the link to the readthedocs.io 'DataSource' you provided is now obsolete." – axello. "This is the current link: docs.influxdata.com/influxdb/v2.2/process-data/common-tasks/…" – Portfedh
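Staying on InfluxDB 1.8, the -since flag that the question mentions can be scripted so it does not have to be specified manually. The sketch below is my own suggestion, not part of the answer above; it assumes the backup directory from the question and GNU date (as on Raspbian), and note that -since filters by point timestamp, so data written late with older timestamps would be missed — which is exactly the risk the question worries about.

# One-off full backup (as in the question)
influxd backup -portable /home/pi/influx-backup/full/

# Daily run: only back up points newer than 24 hours ago
SINCE=$(date -u -d "24 hours ago" +%Y-%m-%dT%H:%M:%SZ)
influxd backup -portable -since "$SINCE" "/home/pi/influx-backup/inc-$(date -u +%F)/"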
I have a Raspberry Pi (4B) running Raspbian Linux, collecting IoT data around the house and feeding it into an InfluxDB 1.8.3 (open source) database. This works fine so far. I also have a backup which runs daily like this:

influxd backup -portable /home/pi/influx-backup/

Question: This backup process takes almost 30 minutes, during which InfluxDB is almost unusable, system load climbs to >7, and my Pi cannot collect data. Each backup is a complete backup. Can I somehow create a faster incremental backup daily? The documentation only mentions a -since parameter, but you'd have to specify this manually, which would be risky.

Alternatively, the whole system is backed up daily using borgbackup anyway. Stopping Influx, making an rsync copy of /var/lib/influxdb/data as a backup, and restarting it is much, much faster than influxd backup. Is this a good alternative idea to back up the database?

What other alternatives exist to perform regular, quick (if possible online) backups of Influx databases? Thanks!
Creating efficient, fast, incremental InfluxDB database backups
Starting with version 16.1 there is a way to exclude repositories from the rake backup task. You can do this with the parameter SKIP_REPOSITORIES_PATHS:

sudo gitlab-rake gitlab:backup:create SKIP=uploads,artifacts,builds SKIP_REPOSITORIES_PATHS=toolbox/gitlab-smoke-tests

The repository path is the full path to the repository. If you have a group named toolbox and a project named GitLab Smoke Tests with the path gitlab-smoke-tests, the full repository path will be toolbox/gitlab-smoke-tests. In most cases you can go to the project overview page of your repository and copy the last part of the URL to get the full path. There is also official documentation about this feature with a description.
I have to migrate our old GitLab server to another one. Therefore I tried to create a backup using the following command:

sudo gitlab-rake gitlab:backup:create SKIP=uploads,artifacts,builds

While backing up the repositories I get the following error:

Error No space left on device
rake aborted!
Backup::Error: Backup operation failed: gzip: stdout: No space left on device

Does someone know a way to create a backup if the server doesn't have enough space left? Is there a way to exclude specific repositories from a backup so I could split it into many backups?
GitLab Backup no space left on device
First, we cannot add permissions to the user in the master DB; when we try to do this, it gives the error "it does not exist or you do not have permission". The master DB is usually only used to create the login, so please switch to the user DB. We can use the following T-SQL to create an AAD user; the AAD user can then log in to the UserDB, but by default it doesn't have any permissions to control the UserDB:

USE <UserDB>;
CREATE USER [[email protected]] FROM EXTERNAL PROVIDER;

Then we can execute T-SQL to grant permissions to the AAD user in the UserDB, so that it can perform database DDL operations:

EXEC sp_addrolemember 'db_owner', '[email protected]';
I am having issues creating a backup of a SQL DB with my Azure App Service. In the Azure Portal, when I go to backup, I am seeing this error:

Create Database copy of [database] threw an exception. Could not create Database copy. Make sure to use the admin user in the database connection string.

I understand this error means the user does not have the correct permissions, but I am not able to grant the correct permissions. I have tried the following so far:

Created an Azure Active Directory user and set it as admin for the SQL server in the portal
Added the roles "dbmanager" and "loginmanager" to the user
Created the user in master and tried adding permissions there, but it gives the error that it does not exist or I do not have permission

I am not sure what else to do. It seems like when I create a new SQL instance in Azure it doesn't give the admin user all permissions.
Azure App Service SQL Database Backup Fails
You can use the backup directive on the server line, together with option allbackups in the relevant section. You may also be able to add a weight to each server to influence which backup server is preferred.
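A minimal haproxy.cfg sketch of that idea might look like the following; the backend name, addresses, and check settings are placeholders rather than anything from the question. Note what this configuration actually does: backup servers are only brought into rotation while every non-backup server is down, and with option allbackups the load is then spread across all available backups. Health checks make HAProxy switch back to the main servers automatically as soon as they recover.

backend app_servers
    balance roundrobin
    option  allbackups
    # Main servers
    server  main1   10.0.0.11:80 check
    server  main2   10.0.0.12:80 check
    server  main3   10.0.0.13:80 check
    # Backup servers: used only while all main servers are down;
    # "option allbackups" spreads traffic across all of them
    server  backup1 10.0.1.11:80 check backup
    server  backup2 10.0.1.12:80 check backup
    server  backup3 10.0.1.13:80 check backup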
Let's say I have 3 main servers and 3 backup servers. I want HAproxy to replace a main server with a backup server as soon as it goes down. To elaborate, let's say Main Server 1 goes down, HAproxy will then still continue to use 3 servers in total, where 2 will be main and 1 will be backup. Similarly, if 2 main server goes down, HAproxy will still use a total of 3 servers, 1 from main and 2 from backup. Also, once the main server is active again, HAproxy should stop using the backup and switch back to the main server.
How to set a custom number of backup backend servers in HAproxy?