Response | Instruction | Prompt
git-annex is definitely the tool you want for this job. Great docs, amazing code, awesome community. Quick start guide: install it with sudo apt-get install git-annex, then it's on to managing your data just like you'd do with git:

git init
git annex init "main-backup"
git annex add    # if you don't specify any paths, it adds everything
                 # it'll hash your files and store them inside the .git folder with their hash as their filenames
git commit -m "Add my most important files"
I'd like to keep "snapshots"/versions of a folder containing lots of huge binary files. The folder contains all my backups made via rsync/rclone/scp/whatever. Since the source can be corrupted/hacked/whatever, I'd like to keep all the versions. We all know git/subversion aren't made for files other than text, and I wonder if there is a more general tool for that purpose. I'll exclusively use Linux.
Binary files/folders version control in 2020?
Answering my own question in case somebody has the same problem: I copied the following folders to the new laptop and I got all the projects with analysis results displaying:
- sonarqube folder
- sonar-scanner folder
- .sonar folder (just in case)
- .sonarlint folder (just in case)
- .m2 folder (just in case)
I have SonarQube Community Edition 8.4.1 (build 35646) running locally on Windows 10. My company provided me with a new laptop and I need to move all the Sonar data (and scan results) there. I have spent a few weeks analysing a number of repos and will have to talk to each team about the issues that were found, so it's really important to keep that data. How can I do it? PS: I am new to Sonar and I saw it for the first time in my life 4 weeks ago. I have to move everything to the cloud and I will not have 2 laptops at the same time to experiment.
Move Sonarqube from one computer to another
You can install the same SSL certificate on both servers as long as your DNS server points to the live server. Both servers should be configured with the same domain name. To automate this process, you can configure the two servers as a failover cluster that will move the IP address from one server to the other. https://social.technet.microsoft.com/Forums/en-US/cc825079-b821-4a6c-afb9-3e1aef97859d/clusteringmicrosoft-failover-cluster-virtual-adapter?forum=winserverClustering
I have 2 Windows IIS servers (live and backup) running a host of Wordpress sites. The goal is to be able to switch to the backup server if the live server goes down. I can do this now by putting the live server's IP on the backup server. No problem, EXCEPT the live server has an SSL certificate and the backup does not. So... is it possible to get 2 identical SSL certificates for 2 different servers from the same authority without causing any issues, so that I can truly flip the IP addresses and everything runs smoothly?
Two SSL Certificates
I summarize the solution as below. Windows Task Scheduler doesn't have permissions to the mounted Azure file share drive, so we need to use the path https://[Azure_account].file.core.windows.net/storage/Backup to access the Azure file share. Besides, if you want to implement auto-upload, you can use azcopy. For example:
1. Create a SAS token for the file share
2. Script:

azcopy copy 'C:\Program Files\Microsoft SQL Server\MSSQL14.SQLEXPRESS\MSSQL\Backup\*.bak' 'https://[Azure_account].file.core.windows.net/storage/Backup/SQLBackup/[backup_file].bak?<sas token>'

3. Create a scheduled task
For more details, please refer to here and here
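A minimal PowerShell sketch of step 3, creating the nightly task; the task name, run time, azcopy path and the SAS token placeholder are assumptions, not part of the original answer:

# Assumed paths and names; replace the SAS token and adjust to your environment
$azcopy  = 'C:\Tools\azcopy.exe'
$source  = 'C:\Program Files\Microsoft SQL Server\MSSQL14.SQLEXPRESS\MSSQL\Backup\*.bak'
$dest    = 'https://[Azure_account].file.core.windows.net/storage/Backup/SQLBackup/?<sas token>'

$action    = New-ScheduledTaskAction -Execute $azcopy -Argument "copy `"$source`" `"$dest`""
$trigger   = New-ScheduledTaskTrigger -Daily -At '21:30'
# Run under SYSTEM so the task works whether or not anyone is logged on
$principal = New-ScheduledTaskPrincipal -UserId 'SYSTEM' -LogonType ServiceAccount -RunLevel Highest
Register-ScheduledTask -TaskName 'UploadSqlBackupToAzure' -Action $action -Trigger $trigger -Principal $principal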
I have a server on which I'm running a SQL Server Express DB and an Azure blob to which I upload the SQL Server backup each morning. I've been able to automate the backup via a mix of SQL query + batch file, and I have it scheduled in my Task Scheduler to run each night at 9:00pm, but I would also like to move a copy of the backup from the server to the Azure Storage. I've already tried a batch file in Task Scheduler:

echo off
copy "Z:\Backup\SQLBackup\" "C:\Program Files\Microsoft SQL Server\MSSQL14.SQLEXPRESS\MSSQL\Backup\DailyStep_bck.bak"

But it doesn't work by itself, only if I run it manually. Each day the current copy should replace the older one; I don't need retention of old backups for now. I've also tried robocopy and it doesn't work either... could someone tell me what I am missing? The task is running as administrator with the "Run whether the user is logged on or not" option. Thanks for your help.
Batch file for auto backup
Based on AbraCadaver's suggestion, I found the answer lay in changing the URL parameters in config.php. Thanks.
I uploaded a php script to a subdomain that I own, for testing and customizing purposes before it goes live (I had planned on moving everything over to the root domain when done). Someone then suggested that I work in Xampp instead, as it is all locally installed and therefore much faster, etc. Thing is, I had already customized the script a lot (mostly CSS but also uploaded graphics via the admin panel, etc.) while it was live on the web host, so I would like to run a copy of the most up-to-date version of it in Xampp and continue customizing it from there. I downloaded a copy of all my files by FTP into htdocs > Test folder. I also downloaded a copy of the database via phpmyadmin and imported it via phpmyadmin into my localhost. The big problem I have is that when I try to access the script via localhost, the url immediately reverts back to the live url. How do I set it to link to the localhost copy instead? Thanks.
Having trouble running a backup copy of php on Xampp
retaindays does not do what you are expecting it to do. It prevents backup files from being overwritten, but it does not mean that old backups are automatically deleted. See here. If you want a solution that will do both backup and cleanup, try Ola Hallengren's solution. PS: This question is better suited to https://dba.stackexchange.com/
Comments:
– redsimon: I don't expect automatic deletion, but old backups are never replaced. Ola Hallengren's solution is too complicated to implement in our production environment.
– allmhuran: By specifying NOINIT you are specifying append. But you don't want to specify INIT either! See Aaron Bertrand's comment. Just create separate files, and clean them up with an independent process. If you use Ola's stored procedure, it will handle this for you. I strongly recommend looking into it; yes, it has a lot of arguments that you can use, but you can also use it in a very simple way.
I am using SQL Server 2016. I have set up SQL Agent jobs for a full backup (weekly) and a transaction log backup (daily). I back up the transaction log into a single file using:

BACKUP LOG [XXX] TO DISK = N'E:\SQLDB\Backup\XXX_Trans_Log.bak'
WITH RETAINDAYS = 28, NOFORMAT, NOINIT, NAME = N'XXX-Transaction Log Backup',
NOSKIP, NOREWIND, NOUNLOAD, STATS = 10
GO

What I expected is that only 28 days of transaction log backups would be kept, but I just found that all transaction log backups are kept, so the file grows very large. Is there a syntax/option problem in the backup statement? Or should I store the backups in separate files? What should I do now?
SQL Server Transaction Log Backup File
It is possible to set up backup alerts by email. Please follow these steps:
1. Open your vault and browse to Settings.
2. Click Alerts & Events > Backup Alerts. The Backup Alerts dialog opens; this is where all alerts will be displayed. You can filter the information in this blade based on severity, status, time and date.
3. Click Configure Notifications and enable notifications.
4. Enter the email addresses that you want the alerts to go to. Use a semi-colon (;) to separate multiple addresses.
5. Select what kinds of alerts you want to be notified about; here you can configure receiving backup success alerts.
6. Click Save, and you are done.
Thanks, Manu
I need to get a notification whenever a backup is successfully completed. can anyone provide me a solution for this.
How to configure an alert rule for backup success in azure
Did you check the documentation? That is, if you restore documents to an existing database and collection and existing documents have the same value in the _id field as the to-be-restored documents, mongorestore will not overwrite those documents. So, unless you specify the option --drop, mongorestore will restore only the missing documents - this is the default. In order to run a shard as stand-alone, have a look at Perform Maintenance on Replica Set Members. Then you can use mongodump to get the dump from it.
Comment – Daniel: Thanks. What I ended up doing was running the backup shard on localhost, downloading the documents into a JSON file in MongoDB Compass, and then simply uploading the JSON file into the production db, also in MongoDB Compass.
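For reference, a rough command sketch of that dump-and-restore path; the host, database and collection names are placeholders, not values from this thread:

# Dump only the affected collection from the stand-alone copy of the shard
mongodump --host localhost --port 27017 --db mydb --collection mycollection --out ./dump

# Restore into production; without --drop, documents whose _id already exists are skipped,
# so only the missing (deleted) documents are re-added
mongorestore --host prod.example.com --port 27017 --db mydb --collection mycollection ./dump/mydb/mycollection.bson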
I accidentally deleted some documents from our production database and want to restore them. So, I downloaded a backup shard from MongoDB cloud. I would like to re-add just the deleted documents without restoring the entire database. One suggestion was that I download the documents as a JSON file and upload that JSON file to the production db. I'm not sure how to accomplish this. How can I run the backup shard, download the backup documents as a JSON file, and then upload them back to the production db? Thank you in advance!
MongoDB: Restore documents from backup shard
When you create a snapshot, you can specify a storage location. The location of a snapshot affects its availability and can incur networking costs when creating the snapshot or restoring it to a new disk. You will find the pricing for snapshot storage here. This article describes how to create an image from your VM's root disk and export it to Google Cloud Storage, or you can directly download it to your remote computer.
I've got two VMs created in Compute Engine with hourly snapshots as backup copies. I never created any storage bucket, so I wonder where those snapshots are stored and how they count toward storage space charges. And is there a way I can back up the VMs to on-prem storage? E.g. can I use any API command to download a VM snapshot to my local storage, as a backup of the backup, just in case Google Cloud screws up?
Compute Engine Backup
Try changing the WAMP root folder from C:\wamp46\www to C:\Mirror. Edit the httpd.conf file and/or the vhosts.conf file for the site you wish to change. The Directory directive will let you specify where the files for this site are located. For more info on httpd.conf see: http://httpd.apache.org/docs/2.2/configuring.html and specifically: http://httpd.apache.org/docs/2.2/mod/core.html#directory
Comments:
– Jorz: I see this in httpd.conf and httpd-vhosts.conf: "${INSTALL_DIR}/www", so I'll change it to "C:/Mirror" directly?
– AziMez: Something like this: DocumentRoot c:/Mirror/www
– Jorz: OK, got it. I'll back up the files first, then give it a go. Thanks!
I got Seagate Backup Plus Slim 1TB today. I am planning to do mirror backup of my web projects from pc (C:\wamp46\www) to external drive (E:). The toolkit app created folders on C:\ and E:\ both named "Mirror" as the syncing folder. Tested it and it works well. But Seagate says: The Mirror folders must each be named “Mirror” in order to sync. Do not rename the folders. Now, how can I mirror backup my files under "www" folders if I can't rename "www" folder? Is there any way? Thanks!
Mirror backup WAMP folder into external hard drive
Check your Workspace preference page to have it automatically refresh the workspace for you.
I run 2 computers and sync the data between them using Google Backup and Sync, which effectively stores everything on my Google Drive. I have updated a class on my laptop, and when I go to my desktop to continue it comes up with the old code; however, when I open the class file in Notepad it contains the updated code. How do I get my Eclipse to use the updated files? I thought it should do this automatically since the file has been changed by the sync.
Eclipse IDE not working with Google back up and sync
For me it sounds like a classic ETL job. You could use any programming language (like Python) or KNIME to read from the source db (with an SQL query with a WHERE clause like your_date_column >= CURDATE() - INTERVAL 14 DAY) and write to a sink db. You can then run it as a (cron) job on Windows/Linux and create a backup of the last 14 days each day, but make sure that you also delete/drop the older backups if the total size of the backups gets too big.
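If a scripted dump is enough, a rough alternative to a full ETL job is mysqldump with a WHERE filter; the database and table names below are placeholders, and the date column is assumed to exist in each filtered table:

# Dump only the last 14 days of the big order-related tables (names are hypothetical)
mysqldump -u backup_user -p --where="your_date_column >= CURDATE() - INTERVAL 14 DAY" \
  shopdb orders payments order_items > orders_last14d.sql

# Dump all remaining tables in full, skipping the ones handled above
mysqldump -u backup_user -p --ignore-table=shopdb.orders --ignore-table=shopdb.payments \
  --ignore-table=shopdb.order_items shopdb > other_tables.sql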
I have the live shop database. I need to be able to make copies of this database, but with order data only for the last 14 days. The database can be really big, but almost 80 percent of the data is in the order, payment and related tables. So we want to copy only the last 14 days of data for these tables, and all data from the other tables. How can it be implemented?
Copy MariaDb database with selected data
If you have the debug APK you might have a shot. Have a look at this: Flutter debug Apk decompile to get source code. If you don't have the debug APK you are probably in trouble. The code on your phone is compiled and deliberately obfuscated, since usually you want to prevent someone with your app from getting the source code. Really hope you can get it back. Good luck.
I was almost finished building my first app with Flutter and VS Code when my laptop's hard drive crashed. All my code gone... I know I should have made a backup, and I have really learnt my lesson now. I have the app installed on my phone. Is there a way to get it back on my laptop with the full code?
Download flutter app and code back to laptop
You can use the command Get-AzRecoveryServicesBackupStatus in the if statement like below to check whether the VM is backed up or not:

(Get-AzRecoveryServicesBackupStatus -Name 'vmname' -ResourceGroupName 'rgname' -Type AzureVM).BackedUp

If we update your existing code so that, when backup is not configured, it performs the backup, and otherwise reports that backup is already configured:

if (!(Get-AzRecoveryServicesBackupStatus -Name 'vmname' -ResourceGroupName 'rgname' -Type AzureVM).BackedUp) {
    $vault = Get-AzureRmRecoveryServicesVault -ResourceGroupName "RgName" -Name "VaultName"
    Set-AzureRmRecoveryServicesVaultContext -Vault $vault
    Write-Output "Configuring Azure backup to $($vm.Name)"
    $policy = Get-AzureRmRecoveryServicesBackupProtectionPolicy -Name "PolicyName"
    Enable-AzureRmRecoveryServicesBackupProtection `
        -ResourceGroupName $vm.ResourceGroupName `
        -Name $vm.Name `
        -Policy $policy
}
else {
    Write-Output "$($vm.Name) has already been configured for Azure backup"
}
I'm building an Azure runbook (PowerShell) which checks whether a VM has backup enabled and, if not, enables it. I have a problem building the IF statement to make it better. This is how I do it now and it works, but if the VM already has backup enabled the runbook prints a lot of red, and that's not good. This is part of a bigger runbook and all of this runs inside a foreach.

$vault = Get-AzureRmRecoveryServicesVault -ResourceGroupName "RGName" -Name "VaultName"
Set-AzureRmRecoveryServicesVaultContext -Vault $vault
Write-Output "Configuring Azure backup to $($vm.Name)"
$policy = Get-AzureRmRecoveryServicesBackupProtectionPolicy -Name "PolicyName"
Enable-AzureRmRecoveryServicesBackupProtection `
    -ResourceGroupName $vm.ResourceGroupName `
    -Name $vm.Name `
    -Policy $policy

Then I wanted to add an IF statement so that if backup has already been enabled on the VM, it would just skip it. The command below prints the backup status (true or false), but I don't know how to implement that in an if statement so that if the result is false it runs the script block, and if the result is true it just skips it and prints "$vm.Name has already configured to Azure backup".

Get-AzRecoveryServicesBackupStatus -Name 'VmName' -ResourceGroupName 'RgName' -Type AzureVM

Results of command

if () {
    $vault = Get-AzureRmRecoveryServicesVault -ResourceGroupName "RgName" -Name "VaultName"
    Set-AzureRmRecoveryServicesVaultContext -Vault $vault
    Write-Output "Configuring Azure backup to $($vm.Name)"
    $policy = Get-AzureRmRecoveryServicesBackupProtectionPolicy -Name "PolicyName"
    Enable-AzureRmRecoveryServicesBackupProtection `
        -ResourceGroupName $vm.ResourceGroupName `
        -Name $vm.Name `
        -Policy $policy
}
else {
    Write-Output "$vm.Name has already configured to Azure backup"
}

So any tips how to do it? Can I do it somehow like this: if (Get-AzRecoveryServicesBackupStatus -Name 'vmname' -ResourceGroupName 'rgname' -Type AzureVM backedup -match false)?
Check if vm has backup enabled, if not enable it (Azure)
I don't think so. In order to restore a database, you'd need the same Oracle database version running on the same operating system as when that Oracle 7 database was up and running. As it is an ancient database (~30 years old), it ran on an ancient operating system. If you have a support contract with Oracle, they will probably provide installation files. What about the operating system? Today's machines are 64-bit; what was Oracle 7's database server? If it was e.g. one of the Digital Alpha servers, you should visit a museum. Not a very optimistic scenario, I'm afraid.
I have the dbf and ctl files of an Oracle 7 database and do not have the Oracle 7 installation media. Is it possible to restore these dbf and ctl files into a later version (Oracle 9i or 10g)? Thank you, Hardik
Are Oracle 7 DB files compatible with a later version, Oracle 9 or 10g?
I tried https://github.com/spatie/laravel-backup. It's very good and has many options to manage your backups. If you want to show your users a better report of the backups that have been taken in the software, I suggest creating a table, storing additional information in it, and using this table to display information to the user.
I'm using Laravel and I'm looking for a way to back up my database with PHP code. I want these backup files to be saved to paths such as other drives or USB drives, chosen by the user (the user enters the Windows path in the software, such as "D:\"). I also use XAMPP to set up a server on Windows. It would be better if you can suggest a standard Laravel package for this.
backup mysql database by php on other drive (windows)
UNTESTED. Of course this is dangerous:

for /f %%a in ('dir /ad /b *^| findstr /v 01$') do rd /s /q "%%~a"

I would do echo rd /s /q "%%~a" first to see the results of what you would do with this command.
I have a simple backup script which copies a folder to the backups directory each day, in the format Backup-YYYY-MM-DD. I would like to make another script which will delete all of the backups which aren't from the first day of the month (all except those matching 'Backup-YYYY-MM-01'). I can't seem to find a way to do this; all I've found is the opposite of what I want (only delete those ending in x), which doesn't really help. Or, using a temporary folder: move *-01 to 'temp', then delete all remaining folders; this wouldn't be ideal since the file sizes are massive. Thanks in advance
How do I delete all folders that don't end in a string, using batch?
Assuming you have an SSIS variable named @[User::DatabaseName].

Given that a backup command would look something like:

BACKUP DATABASE [MyDB] TO DISK = N'\\Server\backup\MyDB\MyDB_backup_2020_05_06_020003.bak' WITH NOFORMAT, NOINIT, NAME = N'MyDb-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10

I'd create these SSIS variables:

@[User::BackupTimestamp] (string). Expression something like:

(DT_WSTR, 4) YEAR(@[System::StartTime]) + "_" + RIGHT("0" + (DT_WSTR, 2) MONTH(@[System::StartTime]), 2) + "_" + RIGHT("0" + (DT_WSTR, 2) DAY(@[System::StartTime]), 2) + "_" ... repeat the above pattern to slice out hours, minutes, seconds into 2-digit entities.

The purpose of this is to generate a string representation of the start time of the package with the level of time/date precision we want in our backups.

@[User::BackupCommand]. This is going to be concatenated from other variables to build the correct T-SQL command:

"BACKUP DATABASE [" + @[User::DatabaseName] + "] TO DISK = N'\\\\double\\escape\\slashes\\" + @[User::DatabaseName] + "\\" + @[User::DatabaseName] + "_" + @[User::BackupTimestamp] + ".bak' WITH NOFORMAT, NOINIT, NAME= N'" + @[User::DatabaseName] + "-Full Database Backup', SKIP, NOREWIND, STATS=10;"

As you can see, slashes are a bit of a pain because you have to double them inside expressions. I typically create a variable called BackupPath, which simplifies that bit. Take the generated command, run it via SSMS, and if it works you can simply wire up the SqlCommand/Text (property name approximate) of an Execute SQL Task to that variable.

As for "add it to a backup plan", I don't know what plan your organization uses. I'm a fan of anything that backs up any database it finds, as mechanically adding something to a process is a step that will be missed at the most inopportune time. Ola's backup scripts, Minionware, etc.
I have an SSIS flow that creates a database from a variable (and alters a couple of settings). Now, I need to create an initial backup of that database (and then add it to a backup plan). Ask: How do I use the variable I used to create the database to also run the initial backup? (I am trying to use the Backup task...but if an Execute SQL task is better, I'm open to that.)
Backup in SSIS using a Variable db name
There is no such export out of the box in SQL Server. Your table can get pretty big, since it looks like you add an image of the table every minute. If you want to do it all from inside SQL Server, then I suggest doing the cleanup in chunks. The usual process in SQL to delete by chunks is using DELETE in combination with the OUTPUT clause. The easiest way to archive and remove would then be to have the OUTPUT go to a table in another database created for that sole purpose. So your steps would be:

1. Create a new database (ArchiveDatabase).
2. Create an archive table in ArchiveDatabase (ArchiveTable) with the same structure as the table you want to trim.
3. In a while loop, perform the DELETE/OUTPUT.
4. Back up ArchiveDatabase.
5. TRUNCATE the ArchiveTable table in ArchiveDatabase.

The DELETE/OUTPUT loop will look something like this (note the negative month offset, so that only rows older than 3 months are removed):

declare @RowsToDelete int = 1000
declare @DeletedRowsCNT int = 1000

while @DeletedRowsCNT = @RowsToDelete
begin
    delete top (@RowsToDelete)
    from MyDataTable
    output deleted.* into ArchiveDatabase.dbo.ArchiveTable
    where dt < dateadd(month, -3, getdate())

    set @DeletedRowsCNT = @@ROWCOUNT
end
I have a database in SQL Server. Basically, the table consists of a number of XML documents that represent the same table data at given times (like a backup history). What is the best method to cut off all the old (older than 3 months) backups, remove them from the DB and save them archived?
How do I save archive from SQL Server database
This was an issue with permission to the database. I gave the SQL id permission to the database and now it works.
I am trying to create a backup of a SQL stored procedure using PowerShell, but it produces a blank file. It's not throwing an error. Here is my code:

param([String]$step='exeC dbo.test',[String]$sqlfile='',[String]$servename = 'test',[String]$dbname = 'test')
$step2 = $step
$step3 = $step2.Replace('[','')
$step4 = $step3.Replace(']','')
$step4 = $step4.Split(" ")[1]
$step5 = $step4.Split(".")
Write-Output $step5[0,1]
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
$logfolder = 'C:\Users\fthoma15\Documents\sqlqueries\Logs'
$bkupfolder = 'C:\Users\fthoma15\Documents\sqlqueries\Backup'
$statsfolder = 'C:\Users\fthoma15\Documents\sqlqueries\stats'
$SMOserver = new-object ("Microsoft.SqlServer.Management.Smo.Scripter") #-argumentlist $server
$srv = new-Object Microsoft.SqlServer.Management.Smo.Server("$servename")
#Prompt for user credentials
$srv.ConnectionContext.LoginSecure = $false
$credential = Get-Credential
#Deal with the extra backslash character
$loginName = $credential.UserName -replace("\\","")
#This sets the login name
$srv.ConnectionContext.set_Login($loginName);
#This sets the password
$srv.ConnectionContext.set_SecurePassword($credential.Password)
$srv.ConnectionContext.ApplicationName="MySQLAuthenticationPowerShell"
#$srv.Databases | Select name
$db = New-Object Microsoft.SqlServer.Management.Smo.Database
$db = $srv.Databases.Item("$dbname")
#$db.storedprocedures | Select name
$Objects = $db.storedprocedures[$step5[1,0]]
#Write-Output $step5[1,0]
#Write-Output $Objects
$scripter = new-object ("$SMOserver") $srv
$Scripter.Script($Objects) | Out-File $bkupfolder\backup_$($step5[1]).sql

Please help.
PowerShell creating a backup of a stored procedure results in a blank file
Try this:

$cmd = '& ./cbb editHyperVPlan -n backupname '
Get-VM | Where-Object { $_.Notes -like 'Test' } | ForEach-Object {
    $cmd += "-v $($_.VMName) "
}

The command is stored in the variable $cmd. To execute it, use Invoke-Expression $cmd.
Good morning, I want to print only the VM names so I can use them in the CLI of another program. We use backup plans according to the use of the VM. For this we write things like "Test" into the VM's comments. For example, I have: VM1 Test, VM2 Linux, VM3 Test, VM4 Test. The other command line (from CloudBerry) handles backup plans like this: ./cbb editHyperVPlan -n backupname -v VM1 -v VM3 -v VM4. The thing is, I want to make this automatic, so I don't have to add the VM to the list myself every time someone makes a new VM. BUT the CloudBerry CLI does not seem to support the PowerShell commands (it works over PowerShell). So my idea was to try to print VM1, VM3 and VM4 into the -v -v -v ... Is that somehow possible? Sorry if this is written confusingly, I can't really explain it better. Edit: I use this command to get the VMs I need: Get-VM | Where-Object { $_.Notes -like 'Test' }
Can I "print" the name of a VM in Powershell?
Done now (:

# Initiate the intervals in days by which the folders will be kept
$savedays = @('7', '14', '21', '30', '60', '90', '120', '150', '180', '210', '240', '270', '300', '330', '360')
# Initiate the array of folders to be deleted
$killlist = @()

# Loop until hitting the one-before-last element of $savedays
For ($i = 0; $i -lt ($savedays.Length - 1); $i++) {
    # Get the list of folders in $path_main
    $killlist += @(Get-ChildItem $path_main |
        # Newest first
        Sort-Object -Property LastWriteTime -Descending |
        # Between one element of $savedays and the next, i.e. between 7 days ago and 14 days ago
        Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-$savedays[$i]) -and $_.LastWriteTime -gt (Get-Date).AddDays(-$savedays[$i+1]) } |
        # Exclude the most recent folder from the aforementioned period
        Select-Object -Skip 1)
}

# Delete folders
foreach ($folder in $killlist) {
    # Delete the folder including its contents
    Remove-Item $folder.FullName -Recurse
}
I need a PS script that would delete all the subfolders of a directory except for:
- everything from the last 7 days
- the one newest folder from every week of the past month
- the one newest folder from every month of the past year
I am fairly new to PS and have already spent quite some time struggling. I hate crying for help, but I suppose this is the right moment. Thank you!
Powershell - set up data removal with retention
Microsoft's recommended approach for such scheduled jobs/workflows is Azure Logic Apps. Microsoft have developed an extensive 'drag and drop' web UI to allow you to easily define actions such as moving data amongst storage containers, sending e-mails, etc. and then link these actions into workflows. Here's a link to a useful quickstart guide.
I have a script copying a daily backup file to Blob storage using azcopy. On the Blob objects I want to create a lifecycle policy, like GFS, for the backups. Then I want to move older data (perhaps the yearly backup files) to colder storage automatically. Then I need to charge my client per GB storage and need a monthly report, maybe an e-mail, to our finance department with the month's maximum storage GB value. I will develop this and need guidance on where to start. Please point me in the right direction. As cheap and serverless as possible. I will answer my own question with the scripts etc to share the knowledge. Thanks!
How to run scheduled serverless lifecycle script on objects in Azure Blob storage
Here is what I did.

param([string]$server='test', [string]$dbname='test', [string[]]$sp=('test','test'))

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
$SMOserver = new-object ("Microsoft.SqlServer.Management.Smo.Scripter") #-argumentlist $server
$srv = new-Object Microsoft.SqlServer.Management.Smo.Server("$server")
$db = New-Object Microsoft.SqlServer.Management.Smo.Database
$db = $srv.Databases.Item("$dbname")
$Objects = $db.storedprocedures[$sp[1,3]]
$scripter = new-object ("$SMOserver") $srv
$Scripter.Script($Objects) | Out-File "C:\Users\fthoma15\Documents\backup_03212020.sql"

As suggested by AlwaysLearning, I changed the $sp variable to an array, splitting the schema and the stored procedure name.
I am trying to back up one particular stored procedure from a SQL Server database by passing parameters from a Python program. Here is the code that I have tried, but I keep getting an error.

param([string]$server='dbsed0898', [string]$dbname='global_hub', [string]$sp='dbo.gs_eligibility')
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | out-null
$SMOserver = 'Microsoft.SqlServer.Management.Smo' #-argumentlist $server
$srv = New-Object("$SMOserver.Server") $server
$db = $srv.databases[$dbname]
$Objects = $db.storedprocedures[$sp]
$scripter = new-object ("$SMOserver.Scripter") $srv
$Scripter.Script($Objects) | Out-File " C:\Users\fthoma15\Documents\backup_03212020.sql"
$db = $SMOserver.databases[$dbname]
$Objects = $db.storedprocedures[$sp]
$Scripter.Script($Objects) | Out-File "C:\Users\fthoma15\Documents\backup_03212020.sql"

Error:

Multiple ambiguous overloads found for "Script" and the argument count: "1".
At line:12 char:5
+     $Scripter.Script($Objects) | Out-File "C:\Users\fthoma15\Document ...
+     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [], MethodException
    + FullyQualifiedErrorId : MethodCountCouldNotFindBest

Can someone help me?
Powershell SQL Server stored procedure backup
Yes, an additional machine would be required.
In MABS documentation (https://learn.microsoft.com/en-us/azure/backup/backup-azure-microsoft-azure-backup) It's written that we can't install MABS on a computer which is node of a cluster so I just wanted to make sure that I'll need additional machine for that.
Can I install MABS in a server which is node in a wsfc cluster which I'm using for Availability group? If not then why?
If you are using Hyperledger Composer (now deprecated) and want to restore the backup from scratch, you also have to copy and restore the ENTIRE .composer folder located at ~/.composer. Hope it helps.
I'm trying to do a full backup of a Hyperledger Composer instance, so as many tutorials said, I've saved the orderer and peer production folders, then restored them using Docker volumes. But when I try to run composer rest server (or any other command) I get:

Connection fails: Error: Error trying to ping. Error: transaction returned with failure: Error: The current identity, with the name 'admin' and the identifier '26929e0ec17e93fcb6d22cc057d<>43061962760e7f23ebaf7df527', has not been registered

I haven't found any way to bypass it. I have the admin and network cards used to create the first network and tried re-importing them, with no luck. Any other thing that I can try? Thanks in advance
The current identity, with the name 'admin' and the identifier <ID> has not been registered
I got the same error while trying to create a channel. Turning the "network down" and then "network up" solved my problem.
I'm using Hyperledger Fabric and now I'm trying to make a backup of the current situation and restore it on a different computer. I'm following the procedure found in hyperledger-fabric-backup-and-restore. The main steps being:
- Copy the crypto-config and the channel-artifacts directories
- Copy the content of all peer and orderer containers
- Modify the docker-compose.yaml to link container volumes to the local directory where I have the backup copy.
Yet it's not working properly in my case: when I restart the network with ./byfn.sh up I first have all the containers correctly up and running; then whatever operation I try to execute on the channel (peer channel create, peer channel join, peer channel update) fails with the error:

Error: got unexpected status: BAD_REQUEST -- error applying config update to existing channel 'mychannel': error authorizing update: error validating ReadSet: proposed update requires that key [Group] /Channel/Application be at version 0, but it is currently at version 1

Is there anything I should do which is not mentioned in hyperledger-fabric-backup-and-restore?
Hyperledger Fabric - backup and restore
You have, basically, two options:
1. Implement your own logic with DynamoDB Streams and process your data with your own logic.
2. Use a combination of AWS Glue for ETL processing and, possibly, AWS Athena to query your data from S3. Be careful: use the Apache Parquet format for better query performance, and cache your results somewhere else.
Comment – Vasil Garou: In this case could you shed more light on the DynamoDB Streams that you mentioned? It needs to be something simple, and from what I can see it will pass each data update to Lambda at the moment it happens, which means there will be thousands of files. The goal would be that when information in table "LetsDance" (for example) is updated between the 1st and 30th of the month, it is not exported to S3 until the last day of the month, into its respective folder (automatically creating a folder of database name/month).
The goal that I wish to achieve is to generate a file of the table, so that it can afterwards be checked for data (monthly calculations). What I have done so far is to create a backup using the Pipeline option from DynamoDB to an S3 bucket, but:
- It is taking too long; the pipeline has been running for more than 24h, since the table I am exporting is 7 GB in DynamoDB size (which is compressed, so it will take even more time to finish the backup).
- I will need to do this monthly, which means that I will only need the data between the first and last day of the month; while the pipeline can create a backup, I could not find an option to export only the changes made to the table in a specific time range.
- The files that the pipeline exports are around 10 MB each, which means hundreds of files instead of a couple (for example 100 MB or 1 GB files).
In this case I am interested in whether there is a different way to make a full backup of the current information and afterwards do a month-to-month export of the changes that were performed (something like a monthly incremental), and not to have millions of 10 MB files. Any comments, clarifications, code samples, corrections are appreciated. Thanks for your time.
Q: AWS DynamoDB to S3 [pipeline]
You need to follow this guide. Also, as a tip: keep the execution logs out of your project export to save space in your files.
Comment – Carlos Nascimento: Thanks for the tips and advice, but we have approximately 20000 jobs running and the guide presented by the Rundeck team itself was not very clear to me. The problem is by no means the guide, but me: I was introduced to this system recently, hence my doubts about the best way to perform the backup.
I need your help on a subject. I need to perform a backup of our Rundeck system and send it to a server in the GCP, but there are more than 90GB of information and I don't know how to make this backup. All my attempts to compress using gzip, bzip2, xz and rsync have failed, the error is basically because the file is too big. What would be the best way to perform the backup? Could you give me suggestions? Thanks in advance.
Best way to backup Rundeck
Simplest route I can think of would be (if the two DBs have the same layout, table names, etc.) to download the data, using a DataAdapter, into a DataTable and write it to disk with DataTable.WriteXml(), then at the other end use DataTable.ReadXml() to get it from the file back into a DataTable and write it into the destination DB with a DataAdapter. You'll need even fewer lines of code if you use strongly typed DataTables (create a DataSet).
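A rough PowerShell sketch of that round trip; the connection strings, table and column names are placeholders, and the answer's DataAdapter-based write-back is replaced here by SqlBulkCopy purely for brevity:

# --- Export side (placeholders throughout) ---
$srcConn = New-Object System.Data.SqlClient.SqlConnection('Server=PLCBOX;Database=PlcData;Integrated Security=True')
$query   = "SELECT * FROM dbo.Readings WHERE ReadingTime >= @from AND ReadingTime < @to"
$adapter = New-Object System.Data.SqlClient.SqlDataAdapter($query, $srcConn)
$null = $adapter.SelectCommand.Parameters.AddWithValue('@from', [datetime]'2020-02-01')
$null = $adapter.SelectCommand.Parameters.AddWithValue('@to',   [datetime]'2020-02-08')

$table = New-Object System.Data.DataTable 'Readings'
$null  = $adapter.Fill($table)
# WriteSchema embeds the column definitions so ReadXml can rebuild the table on the other machine
$table.WriteXml('E:\readings.xml', [System.Data.XmlWriteMode]::WriteSchema)

# --- Import side, on the analysis computer ---
$copy = New-Object System.Data.DataTable
$null = $copy.ReadXml('E:\readings.xml')
$bulk = New-Object System.Data.SqlClient.SqlBulkCopy('Server=ANALYSISPC;Database=PlcData;Integrated Security=True')
$bulk.DestinationTableName = 'dbo.Readings'
$bulk.WriteToServer($copy)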
I have a User Interface program written with VB.Net that collects instrumentation data from some PLC's and stores it in a MS SQL database. I need to be able to copy records from the DB based on a range of dates and save them in a file on a thumb drive. Then the file will need to be imported to a DB on another computer for analysis. I know SSMS can do a backup and restore but I don't think it can be based on a date range.
How can I copy MS SQL database data for a date range to be appended to DB on another computer?
Create a full SQL Server backup to disk:

BACKUP DATABASE AdventureWorks
TO DISK = 'C:\AdventureWorks.BAK'
GO

See the SQL Server BACKUP DATABASE command. As @marc_s said, SQL Server should have write permission on the destination address.
Is there a way to backup the SQL Server to an external hard drive daily (automatically) through a procedure (query)? I am using SQL Server 2014
SQL Server - Backup server through
No, there is no such way. Here is a related bug report: https://github.com/h2database/h2database/issues/2390. If you use a persistent database, you can close the connection after execution of the RUNSCRIPT command (make sure that you don't use DB_CLOSE_DELAY, or use the SHUTDOWN command) and re-open it; the views will be initialized properly on startup. If you use an in-memory database, the only workaround is to recompile your views with:

ALTER VIEW VIEW_TEST RECOMPILE;
ALTER VIEW otherView RECOMPILE;
.....
I have an H2 schema with some tables and a view. The view is defined by:

CREATE FORCE VIEW PUBLIC.VIEW_TEST(NAME_,STREET_) AS
SELECT USER.NAME_, ADDRESS.STREET_
FROM PUBLIC.USER
LEFT OUTER JOIN PUBLIC.ADDRESS ON USER.ADDRESS_ = ADDRESS.ID_

After dumping (via "SCRIPT TO ..."), within the dump file the "CREATE FORCE VIEW PUBLIC.VIEW_TEST ..." comes before the "CREATE TABLE ADDRESS ..." clause, and this table is joined within the view. The result is that after restoring the schema (via "RUNSCRIPT FROM ...") the command "SELECT * FROM VIEW_TEST" returns an error that the referenced table "ADDRESS" is unknown:

View "PUBLIC.VIEW_TEST" is invalid: "Tabelle ""ADDRESS"" not found Table ""ADDRESS"" not found [42102-197]"; SQL statement: SELECT * FROM VIEW_TEST [90109-197] 90109/90109

If I drop the view and recreate it, everything works fine, but I want to automate the dumping and restoring process. Is there a way to set the ordering of tables and views? What is the best way to ensure that the view definitions are at the end of the dump? Many thanks
Can I influence the dump/export order of the H2 SCRIPT command?
Use pg_restore from the newer version to create a plain SQL dump file. Then restore to the older version from that dump file. Depending on what features you are using, the dump file may need to be manually edited before it will restore successfully.
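A minimal sketch of that sequence; the file names and database name are placeholders, and the pg_restore binary must come from the version 11 installation:

# Convert the custom-format backup to a plain SQL file using the newer (v11) pg_restore
pg_restore -f plain_dump.sql mydb_v11.backup

# Load the plain SQL into the PostgreSQL 10 server (edit the file first if it uses v11-only features)
psql -U postgres -d mydb -f plain_dump.sql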
I am trying to restore a database from PostgreSQL version 11 into PostgreSQL version 10. I am using Windows 7 (32-bit), so I can't use the latest PostgreSQL version; that's why I am using PostgreSQL version 10. But I am taking the backup from a database on PostgreSQL version 11, because of which I am getting the error:

pg_restore: [archiver] unsupported version (1.14) in file header

So, is there any way I can restore my database into PostgreSQL version 10? It would be really helpful if anyone can show me a way out.
Problem in restoring database from PostgreSQL Version 11
You can use S3 and not grant any users the action s3:DeleteObject. Enabling versioning in S3 will mean that you can recover previous versions in the event of accidental deletion. See S3 IAM. AWS CodeCommit is a Git code repository. You can store code there, including multiple versions. Identity and Access Management (IAM) allows you to deny or allow specific actions on code branches, including pulling, editing or deleting a branch. You could also use something like QLDB, which is an immutable ledger.
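For the S3 route, a small sketch of turning versioning on with the AWS CLI; the bucket name is a placeholder, and the deny-delete IAM or bucket policy still has to be written separately:

# Enable versioning so overwritten or deleted objects keep recoverable prior versions
aws s3api put-bucket-versioning \
  --bucket my-critical-archive \
  --versioning-configuration Status=Enabled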
I am looking for a cloud service where I should be able to store the data but the deletion or manipulation is not allowed by anyone in the firm. The main objective is to store the source code and other data into a storage where no one has write permission over the already present files but atleast one person has permission to store new files. This is to prevent intentional deletion of critical data. Hope I am clear about my query. I would prefer if it can be done in AWS or Hetzner cloud. Thanks
Is there any cloud service that allows storing data but blocks deletion or manipulation of it?
Your question is unclear, but here are a couple of basic technologies that you should consider:
(1) Set up another MySQL server which is a replication slave of the master. The two servers communicate so that the slave is always up to date.
(2) Use version control such as git to manage all the software that is installed on any server, and all versions and changes made to it. Commit the changes as they are made and push them to an independent repository, e.g. (a private repository on) a public service such as GitHub or BitBucket.
(3) Arrange for all asset files to be similarly maintained, this time probably using rsync.
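As a starting point for item (3), a hedged rsync sketch; the directory list is an assumption based on a typical Debian/ISPConfig/Apache/MySQL setup, not something this answer specifies:

# Push web roots and service configuration to the standby machine over SSH
# (example paths only; adjust to what you actually run, and test with --dry-run first)
rsync -avz --delete /var/www /etc/apache2 /etc/mysql /usr/local/ispconfig \
      backup@standby.example.com:/srv/failover-copy/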
I have a primary server where I'm running a couple of websites. A friend of mine has configured everything there. I'm running Debian on my server:
- ISPConfig (where I manage all my domains, mail, FTP)
- Apache
- MySQL
- phpMyAdmin
Now, I have very important websites which need to be up and running all the time, and I want to purchase another server so that if this one fails the other one takes over. I'm planning to use the DNSMadeEasy service. I know I can use rsync to clone all of this, but my question is: how do I know what needs to be copied to the other server so I get all the configuration files of all the different services I'm running? Is there a way to clone one server to another, or what is the best approach here? I'm super concerned that this server might go down, and I cannot afford to have my websites going down. Any thoughts and ideas?
Create a failover server, with all configuration files and everything from master server
"...but it is generating error"

Without the exact error, it is hard to infer what the problem could be. Meanwhile, I don't really understand the way you are passing the credentials. Is there a specific reason why you don't use the relevant module parameters? Did you try like this?

- name: Run cURL commands
  hosts: localhost
  tasks:
    - name: First task
      uri:
        url: http://IP ADDRESS:9200/_snapshot/TEST_backup/backupname
        method: PUT
        url_username: "username"
        url_password: "password"
        body:
          indices: "index names"
          ignore_unavailable: "true"
          include_global_state: "false"
        body_format: json
        validate_certs: no

Note that I totally dropped the headers, since body_format: json will automatically set Content-Type: application/json (and we are not supposed to need the other two). In case the service on the other end does not correctly return a 401 on the first connection for basic auth, you can try adding force_basic_auth: true and see if it does any better (see the doc link above for more details).
I am trying to set up an automatic backup for my ELK indices. To do it manually I am using the curl command below:

curl -u username:$password -X PUT \
  "http://IP address:9200/_snapshot/TEST_backup./backup_name" \
  -H 'Content-Type: application/json' \
  -d'{ "indices": "index name", "ignore_unavailable": true, "include_global_state": false }'

Could you please let me know how the Ansible structure for the same would pan out? My idea is that it should be like below, but it is generating an error.

- name: Run cURL commands
  hosts: localhost
  tasks:
    - name: First task
      uri:
        url: http://IP ADDRESS:9200/_snapshot/TEST_backup/backupname
        headers:
          Content-Type: "application/json"
          X-Application-Username: "username"
          X-Application-Password: "password"
        method: PUT
        body:
          indices: "index names"
          ignore_unavailable: "true"
          include_global_state: "false"
        body_format: json
        validate_certs: no
Using the curl model in Ansible to run an API command
Not sure about HP-UX, but maybe the device you use auto-rewinds, so you should change the device or use tar this way:

tar cvf /dev/rmt/2m file1 file2

Or you can try to use

tar cvf /dev/rmt/2mn file1
tar cvf /dev/rmt/2mn file2

as this device node is non-rewinding.
Comments:
– endrimi: Both files are too big and have to be taken one by one. I have tried both in one command but the command shows me that the size is too big; the files are .gz archives and contain a lot of data.
– Romeo Ninov: If both files can be stored on the tape, IMO the size is not important. Just try.
I have two big files and I am trying to back them up to a tape drive. The operating system is HP-UX and the device node of the tape drive is /dev/rmt/2m. The command that I used for the backup was tar cvf /dev/rmt/2m file1, and after that the same for file2. But when I use the command to view the contents, tar tvf /dev/rmt/2m, it shows me that I have only 1 file backed up (the last one, file2). Please can you help with this? Where is the problem: in the backup command or in the command to view the file? Thanks in advance
backup command on HP-UX fail
You need to write a program/script based on the Google Drive API https://developers.google.com/drive/api/v3/about-sdk Or, try a third-party Google Enterprise data backup tool, like CubeBackup or Spanning.
In our company we use Google Enterprise, and Shared Drives to store all our documents. After chatting with Google support agents, it seems there is no way to back up some of our folders and files. We have two different needs:
- keeping a regularly synchronized backup of some sensitive files
- backing up some files every month as they are on the backup date, creating a specific copy for each monthly backup
Does anyone have a way to do either of those things? Thanks
Backup Google Shared Drive files and folders
I'll answer myself, with the help of this link: Temporary Files Used By SQLite, which says: "The WAL file is created when the first connection to the database is opened and is normally removed when the last connection to the database closes. However, if the last connection does not shutdown cleanly, the WAL file will remain in the filesystem and will be automatically cleaned up the next time the database is opened." So I close the database just before the backup and open it again afterwards:

protected void backupDB() throws IOException {
    SugarContext.terminate(); // Close the database
    String format = new SimpleDateFormat("_yyyyMMdd-HHmmss").format(new Date());
    FileInputStream indb = new FileInputStream(new File(new File(Environment.getDataDirectory() + File.separator + "//data//ir.shia.mohasebe//databases//"), "sugar_example.db"));
    File file = new File(Environment.getExternalStorageDirectory() + File.separator + "Mohasebe Amal");
    file.mkdirs();
    FileOutputStream outdb = new FileOutputStream(new File(file, "mohasebe_backup" + format + ".mbac"));
    FileChannel channel = indb.getChannel();
    FileChannel channel2 = outdb.getChannel();
    channel2.transferFrom(channel, 0, ((FileChannel) channel).size());
    channel.close();
    channel2.close();
    SugarContext.init(getApplicationContext()); // Open the database again
}
I have a backup activity where the user can back up the app database and restore it later. For backup I just copy my app's database.db file to the SD card. For restore I first delete the database.db-shm and database.db-wal files if they exist (because Android 9 uses them) and then replace database.db with the new one. The restore function works fine on all Android versions. The backup function also works on all Android versions, but on Android 9.0 it's not reliable, because some new data that the user has added is still in the cache files (database.db-shm and database.db-wal) and not applied to database.db. What should I do? Can I forcefully apply the cache files to database.db and then back it up? Or ...
how to backup database in Android 9 (Pie)?
Generally this question should go to Server Fault. That is the reason for the close votes. But, there are only a few questions there about IBM i and no one there on a regular basis answers these questions. So... No, a save file can only contain a save of objects from a single library at a time. So you cannot do go save --> 21 to a save file. But you do not need to put a virtual tape image in a save file to move it off the system. Think of a virtual tape image like an .iso image that you could burn to a CD. It isn't really an .iso image, but it is stored in the IFS as a single file. These images can be FTP'd as is to any other system and added to an image catalog on that system. Or they can be stored on a server somewhere as a backup and then transferred again to a new system to be used in a restore operation. You can find documentation on how to move virtual tape images to a different system in the IBM Knowledge Center.
I would like to make my AS400 save the entire system (GO SAVE, option 21) to a network location instead of a physical tape device. I have no iSCSI devices connected. I have already created a virtual tape (type 63B0) with a catalog file mounted, and I can save and restore from it. I have tried to save the catalog file into a SAVF file and then move that SAVF via FTP to a network location. My question is: is it possible to do a GO SAVE --> option 21, indicating a SAVF as the device instead of a tape?
AS400 save entire System in SAVF [closed]
Currently you cannot restore a BAK (database backup) from SQL Server into Azure SQL DB singletons. You can do this into Azure SQL DB Managed Instance, however. There's not a lot of point in replicating your schema logically from SQL Server only to replace the database from a backup. Please note that "dump files" are actually something different - those usually refer to Watson dumps generated if there is a crash in the SQL process. They contain the memory state of the program when it crashed, to aid in debugging. I assume you meant "binary backup" (aka BAK files). You can read about the SQL MI restore option here: docs page for SQL MI BAK restore.
We have a database on Azure to which we have successfully replicated the schema from our previous server. Now we want to restore data over it. The script is too long; we can't run it in the console or in Management Studio. Furthermore, in our console we didn't find any option for restoring database backups by uploading dump files.
Restoration of a backup from a non-Azure SQL database
Why not use backup/restore and then apply custom scripts to handle the unwanted objects? Having the option to modify the metadata objects (users, ...) would result in a security exposure/risk. More on that: https://dba.stackexchange.com/questions/71608/security-implications-of-restoring-a-backup-from-an-unknown-source
I have a rather large database (130+ tables) that needs to be transferred from a live server to a dev server quite often. Both servers are SQL Server Express (ver. 14). Backup/restore doesn't work because it transfers data, schema and other objects (user privileges), but users on both servers are not the same Tasks/Generate scripts... would produce very large .sql file (400MB+) and the SSMS (ver. 15) on the dev server runs out of memory (win10_64/32GB)!? I have a script that splits the database (on the live server) into several smaller databases, and then I can generate .sql scripts and transfer the data and schemas and combine them in the required database on the dev server. But, the problem is that I have to manually generate those scripts, and it's a time consuming task. I was wondering if there is a more efficient solution to my problem?
Transfer database data and schema only to different server
I would propose trying the Bacula Community open source product (www.bacula.org) and following the guide from here. You only need to create a script that exports the DIT to a single file. Then you just need to create a job that backs up this file with Bacula. The benefit of this solution is that you will be able to back up other parts of your infrastructure with Bacula later on; it's pretty much a universal system.
Comments:
– swetad90: This is for automating the backup of LDAP. My question was more about the method of backup - like in your script, you used ldapsearch. Also, how do you restore this backup?
– (answerer): There are some other resources that may be helpful, like this info on using bpipe to make the backups run directly to the SD (bacula.lat/db2-bpipe-backup-with-community-bacula/?lang=en), and this blog post showing how to integrate it all into Bacula (karellen.blogspot.com/2012/02/ldap-backup-with-bacula.html).
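For the "script that exports the DIT" step, a minimal sketch using slapcat (one of the two export tools the question mentions); the database number, path and file naming are assumptions for a typical slapd setup:

# Dump the full DIT of the first backend to a dated LDIF file; run this on a provider node
slapcat -n 1 -l /var/backups/ldap/dit-$(date +%Y%m%d).ldif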
We have an OpenLDAP cluster running with 2 masters (producers) and 1 consumer. I read the guides below and got a good enough idea about using slapcat/ldapsearch with slapadd/ldapadd to back up and restore the data: How do I clone an OpenLDAP database, https://serverfault.com/questions/577356/ldap-backup-with-slapcat-vs-ldapsearch. Using ldapsearch with ldapadd worked for taking a backup and restoring it; however, I ended up changing the entryUUID, contextCSN, createTimestamp and modifyTimestamp of the entries.
ldapsearch -x -H ldaps://ldap.server.net -D "dc=mycompany,dc=net" -W -b "dc=admin,dc=mycompany,dc=net" -LLL > ldapd-"`date +%Y%m%d`".ldif
ldapadd -x -c -H ldapi:/// -D "dc=admin,dc=mycompany,dc=net" -y "${PASSWORD_FILE}" -f "ldapd-"`date +%Y%m%d`".ldif
I wanted to check whether this is a preferred way of doing backup and restore operations, or whether there are better practices.
Backup & Restore OpenLDAP 2.4 with multi-master replication enabled
It looks like you are using -NotAfter (Get-Date).AddHours(8). This will make your certificate expire after 8 hours; the default is 1 year.
I'm trying to register Windows client machine to a Azure Recovery Services Vault with a powershell script. I'm having this error: WARNING: Vault credentials validation failed. Start-OBRegistration : Vault credentials file provided has expired. We recommend you download a new vault credentials file from the portal and use it within 2 days. These are my commands: $cert = New-SelfSignedCertificate -certstorelocation cert:\localmachine\my -dnsname aly20-srv.xxx.onmicrosoft.com -NotAfter (Get-Date).AddHours(8) $certificate =[System.Convert]::ToBase64String($cert.RawData) $Vault1 = Get-AzRecoveryServicesVault –Name "rsvault-staging" $CredsPath = "C:\temp" $CredsFilename = Get-AzRecoveryServicesVaultSettingsFile -Backup -Vault $Vault1 -Path $CredsPath -Certificate $certificate Import-Module -Name 'C:\Program Files\Microsoft Azure Recovery Services Agent\bin\Modules\MSOnlineBackup' Start-OBRegistration -VaultCredentials $CredsFilename.FilePath -Confirm:$false It seems that the vault credentials file created in "C:\temp" is not valid. If I try to get it directly from azure portal and run "Start-OBRegistration" command it works. What's the problem? How can I solve? Thank you.
Start-OBRegistration Vault credentials validation failed
The AWS tool that you can use to build infrastructure again and again is CloudFormation. We call this technique Infrastructure as Code (IaC). You can also use Terraform if you don't want to use an AWS-specific tool. You can use either YAML or JSON to define the template for your infrastructure, and you'll be using Git to do template change management. Watch this re:Invent video to get the whole picture.
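As a rough illustration of how one template can stamp out a per-customer stack - the template file name, stack name and parameter name below are made up for this sketch, not taken from the question:
#!/bin/bash
# Deploy (or update) one CloudFormation stack per customer from the same template.
set -euo pipefail
CUSTOMER="$1"   # e.g. ./deploy-customer.sh mycustomer
aws cloudformation deploy \
  --template-file customer-stack.yaml \
  --stack-name "customer-${CUSTOMER}" \
  --parameter-overrides "CustomerName=${CUSTOMER}" \
  --capabilities CAPABILITY_NAMED_IAM
The template itself would declare the Elastic Beanstalk environment, the S3 buckets, the Route 53 records and so on, parameterized by CustomerName.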
As I'm following a multi-instance deployment strategy as opposed to multi-tenant, I'm deploying my entire infrastructure again for every new customer. This results in a lot of work, as I have to:
- Deploy a new API instance on Elastic Beanstalk + env variables
- Deploy a new webapp instance via S3
- Deploy a new file storage via S3
- Deploy a new backup file storage via S3
- Set up a new data pipeline backing up the file storage to the backup bucket
- Map the API and web app instances to new customer-specific URLs (e.g. mycustomer.api.mycompany.com and mycustomer.app.mycompany.com) via Route 53 + CloudFront
- ...
Is there a way to automate all of this deployment? I've looked into CodeDeploy by AWS but that doesn't seem to fit my needs.
Automate AWS deployment for new customers
Found a cure to this: use the option /XA:SH, which stops copying system and hidden files - which is how the special attributes of the Documents directory appear to be copied. Worked for me; I only wanted the data files.
I have a custom script I use for backing up my hard drive to a temporary external drive. It's simply a number of robocopy lines (without /PURGE). I'm having trouble with the Windows Documents folder. If I have a command "robocopy C:\users\me\documents D:\backups\somerandomdirectoryname ..", every time it's done, Windows thinks that directory is a Documents directory and even renames "somerandomdirectoryname" to "Documents". It changes the icon, and then I cannot actually eject the USB drive because Windows will not let it go. What is causing Windows to do this to me? Is there something I have to exclude to make it "just a normal directory" on my external device?
Robocopy "Documents" folder issue?
Your backup will be connected to the Default Policy if you have not connected it to something else. Therefore, you should be able to change the retention of your backup by editing the default policy.
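If you prefer doing that from the command line rather than the portal, a hedged sketch with the Azure CLI could look like the following - the resource group, vault and policy names are placeholders, and the exact JSON you feed back to the set command depends on what the show command returns for your vault:
# Inspect the policy the backup is attached to.
az backup policy show \
  --resource-group my-rg --vault-name my-vault --name DefaultPolicy > policy.json
# Edit the retention settings in policy.json, then push the modified policy back.
az backup policy set \
  --resource-group my-rg --vault-name my-vault --policy @policy.json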
I have made a manual backup of an Azure VM today, which was meant to be kept only for today. But now I am in the situation where it's needed for more than one day. While making the backup I was asked for a retention time, which I didn't change, so it was set to 12.11.2019 - which is either today at 00:00 or tomorrow at 00:00, depending on Microsoft. I would like to change the retention time now and keep this backup for longer. The policies which can be set in Azure seem only to be for backups which were made by the retention policy.
Azure change retention time for manually made backup
Visor CLI shows how many primary and backup partitions each node holds. By default, a cache is split into 1024 partitions. You can change that by configuring the affinity function.
I have started using Apache Ignite for my current project. I have set up an Ignite cluster with 3 server nodes and a backup count of 1. The Ignite client node is able to create a primary cache as well as a backup cache in the cluster. What I want to know is, for a particular cache, which node is the primary and on which node the backup is stored. Is there any tool or Visor command to do so, and to find the size of each cache? Thank you.
Apache Ignite Backup cache Node identification
In Azure Backup, the first backup is always a full backup and subsequent backups are incremental in nature. For example: on day 1, a full backup (100 GB) from on-premises to Azure succeeds. On day 2, only the changes/churn are backed up as an incremental - perhaps a 10 GB churn compared to the previous day. This continues until day 7. On day 8, as per the retention policy, the day-1 recovery point gets deleted and the full backup is merged with the day-2 incremental, which then becomes the oldest recovery point from which you can recover your data. If you restore from that day you will get 100 GB of data, because no additional data was added to the full backup - only the changes within the 100 GB were backed up as the day-2 incremental. The same logic applies in your use case as well. Hope this clarifies!
I saw a blog about incremental backups on Microsoft Azure Backup: Save on long term storage (https://azure.microsoft.com/ja-jp/blog/microsoft-azure-backup-save-on-long-term-storage/), but I have some questions about how the Azure Backup retention range process works. As an example, take a data source A, made up of blocks A1, A2, ... A10, which needs to be backed up monthly. Blocks A2, A3, A4, A9 change in the first month, and A5 changes the next month. If I set the backup retention range to two months, then after I finish the third backup, can I restore block A10? If I can, can you tell me the process? Because as far as I know, the unchanged data-source blocks have been deleted. If they are not deleted, as the picture in the blog shows, will there be two A2 blocks (one from the first backup and one from the second)? If that is right, the total space occupied will continue to increase.
How Azure Backup Retention range process works?
Figured out a way to do this a little differently. Essentially, instead of worrying about naming the backup table to start, I just rename it after it is successfully populated with the data snapshot.
create table backup like ops_weekly_sla;
INSERT INTO backup select * from ops_weekly_sla;
SET @yyyymmdd = DATE_FORMAT(NOW(), '%Y%m%d%H%i');
SET @a = concat('RENAME TABLE `backup` to `ops_weekly_sla.',@yyyymmdd,'.backup`;');
PREPARE stmt from @a;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
Not only did this achieve my goal, a bonus was that it did not disturb my timestamps for original data insertion or row updates. wewt!
I have a table1 that has data uploaded to it every day. The source can be unreliable in format and structure, so to avoid headaches and downtime, I want to take a snapshot of table1 every day before the upload runs. I want to name the backups table1.YYYYMMDDHHMM.backup. Straight up variables weren't working so I tried to use a prepare statement (which is completely new to me). Below is what does not work... Some sage advice would be appreciated. set @a = concat('CREATE TABLE `ops_weekly_sla.',@yyyymmdd,'.backup` like `ops_weekly_sla`;'); set @b = concat('SELECT * from `ops_sla_weekly` INTO `ops_weekly_sla.',@yyyymmdd,'.backup`;'); PREPARE stmt from @a; EXECUTE stmt; PREPARE stmt from @b; EXECUTE stmt; DEALLOCATE PREPARE stmt;
Create backup/snapshot of table1 with timestamp in name, then select * from table1 into table1.YYYYMMDDHHMM.backup
As the error says, it cannot find the credentials file at that path. I changed my approach and specified the required values as environment variables sourced from a secret, rather than the credentials file described in the docs:
env:
  - name: SECRET_USERNAME
    valueFrom:
      secretKeyRef:
        name: mysecret
        key: username
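For completeness, a hedged sketch of how such a secret could be created and checked - the secret name and key mirror the snippet above, while the namespace, deployment name and literal value are assumptions:
# Create the secret that the env/secretKeyRef block above refers to.
kubectl create secret generic mysecret \
  --namespace velero \
  --from-literal=username='my-azure-client-id'
# Confirm the deployment actually picked the variable up after editing its spec.
kubectl -n velero get deployment velero -o yaml | grep -A5 SECRET_USERNAME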
Deployment logs:
An error occurred: some backup storage locations are invalid: error getting backup store for location "default": rpc error: code = Unknown desc = error loading environment from AZURE_CREDENTIALS_FILE (/credentials/cloud): open /credentials/cloud: no such file or directory
Installed Velero following https://velero.io/docs/v1.1.0/azure-config/ with:
velero install \
  --provider azure \
  --bucket $BLOB_CONTAINER \
  --secret-file ./credentials-velero \
Velero deployments failure - Unable to find Azure /credentials/cloud File
Issues related to installation of the Acronis extension should be investigated with the help of the Acronis support team. Just a suggestion: you can try redeploying the VM to see if that lets you successfully install the extension. If the issue persists, reach out to support and share the following with the technical professionals: the error details (with a screenshot if possible) and the detailed steps you followed for the deployment.
I'm trying to back up an Azure VM to Acronis, but whenever I try to deploy the Acronis extension on Azure I receive the following error: At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details. VM has reported a failure when processing extension 'AcronisBackup'. Error message: "Failed to connect to Acronis Backup Cloud. Check specified credentials and firewall settings. For more information please refer to https://kb.acronis.com/content/47189". It basically says that the provisioning state failed. How can I fix this?
How to deploy the Acronis extension on Microsoft Azure?
You could do that using CSOM or SSOM. With CSOM, from code you can access the Web.RecycleBin property, which is a collection of type RecycleBinItemCollection.
This might be an odd question, but is it possible at all to make a backup or copy of my Recycle Bin? If it's not possible, no worries. Reason for asking: some people delete a file and remember 100 days later that they accidentally deleted it, and by then it's a bit harder to get the file back just by restoring it from the Recycle Bin. Thanks.
SharePoint Online Recycle Bin Copy/Backup
I figured it out:
1. Copy the backup file you want from your host to your container with: docker cp LOCAL_FILE CONTAINER_NAME:/etc/NEW_FILE, where LOCAL_FILE is the file on your host that you want to copy over, CONTAINER_NAME is the name of your docker container, /etc/ is a default and already existing directory, and NEW_FILE is just the name of the file that's going to get the data of the LOCAL_FILE.
2. Go inside your Docker container file system with: docker exec -it CONTAINER_NAME /bin/bash and navigate to where you copied your NEW_FILE. Make a new folder (name it "backup" for the sake of clarity) and extract the contents of NEW_FILE into it.
3. Restore the backup to a new database with: influxd restore -portable -newdb NEW_DATABASE_NAME backup
For any alternative options in the last step, go to the documentation here
I've recently backed up my Influx database from a docker container and have now a backup file in the format of .tar.gz. I wanted to import the data from this file into another Influx database that is also running inside of a docker container. What I tried to do was using Chronograf and its "Write Data" feature to import the content of the backup since it supports .gz files, but it seems that Chronograf only supports files that are up to 25MB in size and this backup of mine is 70MB. I've searched for other possible methods to solve this in the "Docker Influx Documentation" and "InfluxDB Shell Documentation". The only thing I found relevant was the "-import" option that is refenced in the Shell Documentation. I tried using it but to no avail. Any command that wasn't a direct query in the InfluxDB shell was rejected and all I got was an error message that said: ERR: error parsing query: found influx, expected SELECT, DELETE, SHOW, CREATE, DROP, EXPLAIN, GRANT, REVOKE, ALTER, SET, KILL at line 1, char 1 Just to be clear, I'm on Windows 10 at the moment.
Having trouble importing InfluxDB backup
For the first error, change exec($command,$output=array(),$worked); to $output = array(); exec($command,$output,$worked); Since the second parameter to exec() is a reference parameter, it has to be a variable, not an expression. See Suppress warning messages using mysql from within Terminal, but password written in bash script for lots of ways to prevent the warning about using a password on the command line.
I am able to run the script from the browser and backs up my mysql database, but when I try to do it with a cron job, I am getting the error: Strict Standards: Only variables should be passed by reference in /main_dir/sub_dir/backup.php Warning: Using a password on the command line interface can be insecure. Any suggestions? And, why the password warning? <?php //Enter your database information here and the name of the backup file $mysqlDatabaseName ='xxxxxxxxxxxxxx'; $mysqlUserName ='xxxxxxxxxxxxxx'; $mysqlPassword ='xxxxxxxxxxxxxx_'; $mysqlHostName ='xxxxxxxxxxxxxx'; $mysqlExportPath ='xxxxxxxxxxxxxx.sql'; //Please do not change the following points //Export of the database and output of the status $command='mysqldump --opt -h' .$mysqlHostName .' -u' .$mysqlUserName .' -p' .$mysqlPassword .' ' .$mysqlDatabaseName .' > ' .$mysqlExportPath; exec($command,$output=array(),$worked); switch($worked){ case 0: echo 'The database <b>' .$mysqlDatabaseName .'</b> was successfully stored in the following path '.getcwd().'/' .$mysqlExportPath .'</b>'; break; case 1: echo 'An error occurred when exporting <b>' .$mysqlDatabaseName .'</b> zu '.getcwd().'/' .$mysqlExportPath .'</b>'; break; case 2: echo 'An export error has occurred, please check the following information: <br/><br/><table><tr><td>MySQL Database Name:</td><td><b>' .$mysqlDatabaseName .'</b></td></tr><tr><td>MySQL User Name:</td><td><b>' .$mysqlUserName .'</b></td></tr><tr><td>MySQL Password:</td><td><b>NOTSHOWN</b></td></tr><tr><td>MySQL Host Name:</td><td><b>' .$mysqlHostName .'</b></td></tr></table>'; break; } ?> Should allow me to make a mysql db backup using cron jobs.
Strict Standards: Only variables should be passed by reference in /main_dir/sub_dir/backup.php
I cannot tell you if the parameters are fine. What I can tell you is that I would definitely prefer to use a real and reliable backup and restore tool. Why is that:
- backup and restore tools are designed for that exact purpose
- they provide more features than the "naming" one, which actually reflects a retention-time need: like being able to find the data to restore based on dates, a search system and so on
- they can come with a support contract that gives you assurance of being helped if needed
There are many backup and restore tools, and as I am an open source guy I would suggest going with Bacula (https://www.bacula.org) or the enterprise version of it (https://www.baculasystems.com), according to your enterprise SLAs, policies or requirements. Best regards, Werlan
I need to make sure I am employing best practices while adhering to my company's requirements:
1. 7 days of data backups, overwriting when the day repeats. I have the naming convention DBNAME_DAYNAME.BAK. I achieved this with a cursor that dynamically builds the name.
2. Transaction log backups occur every minute and are named DBNAME_DAYNAME_MINUTEOFDAY.TRN. Similarly, these should overwrite when day 8 of the cycle starts. A similar cursor is being used.
3. Copy data backups to a network share. I am using CmdExec after each backup completes.
4. Copy log backups to a network share. I am using Backup Log to Disk and Mirror to Disk.
I need to make sure my parameters are correct. For the data file backups in step 1, I am using the following parameters:
WITH NOFORMAT, INIT, SKIP, REWIND, NOUNLOAD, COMPRESSION, STATS = 10
Then I have a second step that copies the files to a network share:
Copy H:\BACKUPS\SQLDataFiles*.* """\192.xxx.xxx.xxx\sharepath\Directory*.*""" /Y
QUESTION 1: All of this appears to work. Are my parameters okay?
For the log file backups, I am using Mirror to Disk, with the following parameters:
WITH FORMAT, INIT, SKIP, REWIND, NOUNLOAD, COMPRESSION, STATS = 10
QUESTION 2: All of this appears to work, but are my parameters okay?
Of course, as it usually happens, the whole process was rushed by leadership, and I have not actually done a restore test. I will soon, but wanted to get a review of the parameters by this wise group. The parameter descriptions are not sinking in. Admittedly, administration is not my forte. The parameters I am using are copied from other code, and some seem to imply tape. There is no tape, only disk. If they are tape parameters and do not cause problems, that is fine. I just want to make sure that the files are overwriting, and that I can recover with this setup.
Setting appropriate parameters for backups
There are many ways of doing that. But if you always work locally and you need a simple way, you may take a look at running a script when a specific USB device is plugged in - meaning a simple backup script with tar would run when you plug in your specific backup HDD. Take a look at udev rules in Linux. udev is a generic device manager running as a daemon on a Linux system and listening (via a netlink socket) to uevents the kernel sends out when a new device is initialized or a device is removed from the system. The udev package comes with an extensive set of rules that match against exported values of the event and properties of the discovered device. A matching rule will possibly name and create a device node and run configured programs to set up and configure the device. Take a look at these posts: https://unix.stackexchange.com/questions/65891/how-to-execute-a-shellscript-when-i-plug-in-a-usb-device & https://askubuntu.com/questions/401390/running-a-script-on-connecting-usb-device
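To make that concrete, a minimal sketch under stated assumptions - the filesystem UUID, rule file name and source/target paths are placeholders you would replace with your own:
#!/bin/bash
# /usr/local/bin/usb-backup.sh - called by udev when the backup disk appears.
set -euo pipefail
MOUNTPOINT=/mnt/backup-disk
SOURCE=/home/me/projects
mount UUID="1234-ABCD" "$MOUNTPOINT"          # assumed UUID of the backup drive
tar -czf "$MOUNTPOINT/projects-$(date +%F).tar.gz" "$SOURCE"
umount "$MOUNTPOINT"

# Matching udev rule, e.g. /etc/udev/rules.d/99-usb-backup.rules:
#   ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_UUID}=="1234-ABCD", RUN+="/usr/local/bin/usb-backup.sh"
# Note: long-running work directly in RUN+= is discouraged; having the rule trigger a systemd unit is the more robust variant.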
First of all, I don't mean version control such as git. I do use git locally but, I'm trying to determine the best way to do back-ups of source code (as well as other app assets) in case of hardware failure or such. I was thinking I could set up a script to tar my project folders, and encrypt them with gpg. I would then save the encrypted tar to external hard drives and to 1 or more off-site locations using a service such as amazon drive or dropbox. Currently, I'm a sole developer so my thinking was that this method should be okay. But I wanted to get some input to make sure I'm doing this the best/most reliable way possible. If there is a better approach to this that may be more applicable to small teams, then please let me know, as I'm more than happy to do the extra work implementing the approach.
How do small teams do secure backups of source code?
For your requirement, I recommend using the OneDrive service to back up your data; you can create, update and download files with the OneDrive API. You could also use the RoamingFolder to store data that roams automatically - for more info please refer to Roaming data. If your data exceeds the roaming storage quota, use the OneDrive service to back it up instead.
I would like to backup an existing localfolder SQLite DB within my UWP app to the user's OneDrive on button click and then when the same UWP app is installed onto another PC by the same user, restore the database back into the UWP app so that they have the same data in all devices where they may install the UWP app. I don't need anything complex. Just sweet and simple C# code.
UWP SQLite DB OneDrive Backup and Restore C#
I solved the problem with the GDPR data by the usual data masking with MySQL functions:
mysql -u %DB_LOGIN% -p%DB_PASSWORD% %GDPR_DB_NAME% -e "DROP DATABASE %GDPR_DB_NAME%"
mysql -u %DB_LOGIN% -p%DB_PASSWORD% -e "CREATE DATABASE %GDPR_DB_NAME%"
mysqldump -u %DB_LOGIN% -p%DB_PASSWORD% %DB_NAME% | mysql -u %DB_LOGIN% -p%DB_PASSWORD% %GDPR_DB_NAME%
mysql -u %DB_LOGIN% -p%DB_PASSWORD% -D %GDPR_DB_NAME% -e "UPDATE `customer_address_entity` SET company = CONCAT(SUBSTR(company, 1, 6), REPEAT('*', CHAR_LENGTH(company) - 6));" -vvv
Tell me how to solve the problem of depersonalizing clients' personal data in a MySQL database. My task is to make clients' personal data - full name, email - anonymized during backup. There is an e-commerce CMS, and I want this data to be changed during backup. How can this be implemented? Are there any examples? I imagine it as data being changed on the fly during the backup. Another option is a copy of the database with the data changed through SQL queries, followed by an anonymized backup. Tell me how to do it right, and if possible give an example. Thanks.
Obfuscating personal data in MySQL backup
I just found that pg_dump and pg_dumpall are .exe files in the bin folder. I guess that explains why I can call them directly. But I'm still confused as to how I'm supposed to call pg_start_backup() and pg_stop_backup() then.
I'm writing a batch file for doing incremental backups of a cluster of Postgres Databases. For doing a full backup, i am using pg_dump and pg_dump_all, which works no problem if i for example do it like this: @echo off && pushd "%~dp0" set PGHOST=localhost set PGPORT=5432 set PGUSER=postgres set PGPASSWORD=SOMEPSW set BACKUPDIR=C:\some\path set CURRENTDATE=%CURRENTDATE:~0,14% set BACKUPPATH=%BACKUPDIR%\%CURRENTDATE% pg_dumpall -r -f %BACKUPPATH%\users.backup <---- THIS IS THE LINE I WANT TO REPLACE popd pause exit /b %RETURNCODE% (I have tried shaving down the example, to make it quick and easy to read, let my know if anything is unclear). As far as i have understood, i can't use pg_dump and pg_dumpall for incremental backups, so i want to use pg_start_backup and pg_stop_backup as part of my incremental script, and inbetween create zip and move the Data folder. My issue right now is that i can't call either of them, because they are not recognized commands. I have tried calling them in the following ways: 1. pg_start_backup('label', false, false); This does not work because pg_start_backup is not recognized as an internal or external command. 2. SELECT pg_start_backup('label', false, false); This does not work because SELECT is not recognized as an internal or external command. 3. psql -h %PGHOST% -p %PGPORT% -U %PGUSER% -d %DATABASES% -c "SELECT pg_start_backup('label', false, false);" psql -h %PGHOST% -p %PGPORT% -U %PGUSER% -d %DATABASES% -c "SELECT * FROM pg_stop_backup(false, true);" This does not work, because it starts the backup in one context, and tries to stop it in another, hence throwing this error: "ERROR: non-exclusive backup is not in progress". 4. psql -h %DB_HOST% -p %DB_PORT% -U %DB_USER% -d %DB_NAME% -f test.sql This DOES work, but i need to put some batch logic between pg_start_backup and pg_stop_backup, and i don't want to have several sql files i need to call. What is causing my issues, and how can i fix it?
Why can i call pg_dump but not pg_start_backup directly in my batch file?
You can write a scheduled Lambda function that queries to find the number of views, then updates the backup table based on that query. Documentation for how to connect to a Redshift cluster from a Lambda function can be found here. Note that there is no way to schedule a query with Redshift on its own, so if you don't want to run the query manually every day, a scheduled Lambda (or some other external scheduler) is the way to go.
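If a Lambda feels heavy, the same daily snapshot can be taken from any machine that can reach the cluster, for example a cron job. The sketch below writes the view definitions to a dated file rather than a table, just to show the scheduling side - the connection details, schema filter and output path are assumptions:
#!/bin/bash
# Dump all view definitions from the public schema into a dated file, once per day via cron.
set -euo pipefail
export PGPASSWORD='...'   # or use ~/.pgpass
OUT=/var/backups/redshift/view-ddl-$(date +%F).tsv
psql -h my-cluster.xxxx.us-east-1.redshift.amazonaws.com -p 5439 -U admin -d mydb \
     -A -F $'\t' -t \
     -c "SELECT schemaname, viewname, definition FROM pg_views WHERE schemaname = 'public';" > "$OUT"
# crontab entry (02:00 daily):  0 2 * * * /usr/local/bin/backup_view_ddl.sh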
We generate a lot of DDL views. I want to create a backup of all the DDL view definitions in pg_catalog or information_schema which updates itself every day. For example: if the number of views yesterday was 10 and I created 5 more views today, the backup table should update itself from 10 to 15 at a specific time.
How to schedule a daily backup of DDL definition view (pg_catalog table) in Redshift
I tried to attach these before. They will help make sense of my problem. (Screenshots of the pgAdmin server tree were attached.)
Immediate problem: When I do a pgAdmin 4 restore I get "Stymied by idle_in_transaction_session_timeout" error. I am on a MacBook Pro running macOS Mojave version 10.14.5, using Java and PostgreSQL. I use the pgAdmin 4 GUI, as I am not proficient in psql, bash, etc. I have a test database named pg2. As you can see from the attachment, PostgreSQL servers 9.4 and 10 have the identical databases. If I make a change in a database on one server, it will show also in the other server’s database. There is a third server, 11, in which there is only the postgres database. I have tried psql and get errors (including timeout errors). I have tried to Delete/Drop server 11, it will disappear but when I sign out of pgAdmin 4 and then go into pgAdmin 4 again the server 11 will be there again. See the attachments for screen shots. I expect the backup/restore to work: backup, then make a change to the database, then correctly restore to previous state. I would like to have just one server, preferably 11 with only pg1 and the test db tempdb running in it. I thought that I could live with the three, for I am aware of my current capabilities and thus did not want to screw things up further. However, I suspect that the two servers 9.4 and 10 are the source of my current problem: receiving the idle_in_transaction_session_timeout error while doing a restore. Note: I did the backup using the server 10’s pg1 backup. Did it create 2 backups, one for 9.4 and one for 10?
Stymied by idle_in_transaction_session_timeout
Can you please specify from where you are trying to do the MongoDB database backup? From a server, from the terminal, or with mongodump?
I am running the command below for a MongoDB database backup but it is not working: mongodump --db mydb --out /var/www/html/'date +"%m-%d-%y"' This command throws an error: 2019-07-15T11:02:50.758+0530 E QUERY [thread1] SyntaxError: missing ; before statement @(shell):1:12 Please let me know what is wrong with this command and what the correct way to back up the database is.
Mongodb Backup of Database command is not working
Well, there was no error in the cron job script - it was a difference in timezone, as pointed out by @parttimeturtle. Checking the logs also helped. Thank you both; this was really informative.
03 20 * * * do_snapshot --digital-ocean-access-token notreallymyaccesstoken96notreallymyaccesstoken3 --only 52713483 -k 3 -c -v
I am running do_snapshot to take a snapshot of my DigitalOcean droplet. I am able to do this manually via this command: do_snapshot --digital-ocean-access-token notreallymyaccesstoken96notreallymyaccesstoken3 --only 52713483 -k 3 -c -v
This works perfectly well and takes a snapshot of my droplet. However, when I try to run a cron job for the same - I set a time 2-3-5 minutes ahead and save the cron job - nothing happens. I've been stuck on this for too long; I tried to read about cron jobs and followed a tutorial word for word, and I am still not able to figure out what I am doing wrong.
What is wrong with this cron job?
I have found this to work for multiple directories:
#!/bin/bash
TIME=`date +%b-%d-%y`      # This command will read the date.
FILENAME=backup-xmltv-www-$TIME.tar.gz      # The filename including the date.
DESDIR=/var/backups      # Destination of backup file.
tar -cpzf $DESDIR/$FILENAME /var/www/html/wp-admin /var/www/html/wp-content /var/www/html/wp-includes /var/www/html/*.php /var/www/html/*.html
I am using the script below to do daily backups. I need to add more folders to the daily backup, and I would like to have them contained in the same compressed file. How would this script be modified to back up multiple folders to the same compressed file?
#!/bin/bash
TIME=`date +%b-%d-%y`      # This command will read the date.
FILENAME=backup-xmltv-www-$TIME.tar.gz      # The filename including the date.
SRCDIR=/var/www/html/wp-admin      # Source backup folder.
DESDIR=/var/backups      # Destination of backup file.
tar -cpzf $DESDIR/$FILENAME $SRCDIR
I have tried to add multiple source directories to the script, but only the last source in the list gets compressed:
SRCDIR=/var/www/html/wp-admin
SRCDIR=/var/www/html/wp-content
SRCDIR=/var/www/html/wp-includes
I have also tried to give the sources different numbers, but tar errors out and tells me to look at the tar help:
SRCDIR1=/var/www/html/wp-admin
SRCDIR2=/var/www/html/wp-content
SRCDIR3=/var/www/html/wp-includes
Need to Backup from multiple sources
You should change your code as follows:
1. Your connection should be open before executing the command.
2. The T-SQL statement to back up the database should contain the WITH INIT keywords; if you execute your code multiple times without it, your BAK file will continue to grow.
So, try this one:
Using con = New SqlConnection("Data Source=.\SQLEXPRESS;User id=sa;password=admin;")
    con.Open()
    Dim str As String = "backup database OFFICEMANAGEMENT to disk='C:\TMP\OM.bak' WITH INIT"
    Using cmd = New SqlCommand(str, con)
        cmd.ExecuteNonQuery()
    End Using
    con.Close()
End Using
I am using VB.NET with SQL Server 2012 Express in a piece of software. I provided a facility to take a backup of a database from the application itself with the following code:
Dim con = New SqlConnection("Data Source=.\SQLEXPRESS;Database=Master;User id=sa;password=admin;")
con.Open()
Dim str as string="backup database OFFICEMANAGEMENT to disk='C:\OM.bak'"
Dim cmd as new SqlCommand(str,con)
cmd.ExecuteNonQuery
con.Close()
When the above code is run, no backup file gets created, and also no error is shown. If I run the backup command with T-SQL in SQL Server Management Studio, the backup file is successfully created. Can anyone help with this? Thanks
SQL Server backup from VB.NET not generating backup file, file not shown in drive
0 You can add "cpbackup-exclude.conf" in a global or user level. Global: /etc/cpbackup-exclude.conf User level: /home/username/cpbackup-exclude.conf The entries would be like this: */backup-*tar.gz */backup-*zip */backup*gz */cpmove* Share Follow answered Jun 7, 2019 at 6:36 Harijith RHarijith R 34922 silver badges88 bronze badges 0 Add a comment  | 
I want to exclude from the cPanel backup any file that has the word "backup" in it and is in zip or tar.gz format. I do not know exactly what I should enter in cpbackup-exclude.conf.
Exclude names like "backup" from cpbackup-exclude.conf in cpanel backup
sudo mkdir -p /var/lib/influxdb/meta /var/lib/influxdb/data Problem solved!
Environment: OS: Linux travis-job-50c7192e-954f-4101-8363-813e067b3b40 4.4.0-101-generic #124~14.04.1-Ubuntu SMP Fri Nov 10 19:05:36 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux influxdb version: influxdb_1.2.2_amd64 travis-CI job log: https://travis-ci.org/y1j2x34/spring-influxdb-orm/builds/540383484 backup files: https://github.com/y1j2x34/spring-influxdb-orm/tree/master/data/backups I was restored the backup data correctly in my cent OS server, but failed in travis-CI, I don't know what caused this error. The following command is how I backup the test database data: influxd backup --database test path/to/project/data/backups and restore backups commands: influxd restore -metadir /var/lib/influxdb/meta ./data/backups influxd restore -database test -datadir /var/lib/influxdb/data ./data/backups The error message: $ influxd restore -metadir /var/lib/influxdb/meta ./data/backups Using metastore snapshot: data/backups/meta.00 restore: open /var/lib/influxdb/meta/node.json: no such file or directory The command "influxd restore -metadir /var/lib/influxdb/meta ./data/backups" failed and exited with 1 during .
restore influxdb backup inside travis-CI failed with an error: restore: open /var/lib/influxdb/meta/node.json: no such file or directory
You can use pv to make a progress bar for every file, as explained here: https://superuser.com/questions/168749/is-there-a-way-to-see-any-tar-progress-per-file
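To tie that into the asker's script (quoted in the question that follows), here is a hedged sketch of how the tar line could be piped through pv. The size estimate is rough - it only counts the root filesystem and ignores the excludes - so the percentage is approximate, and compression moves from tar -z to a separate gzip so pv can sit in the middle of the pipe:
# Rough overall progress bar: estimate the data size, then let pv meter the tar stream.
SIZE=$(du -sx --block-size=1 / 2>/dev/null | awk '{print $1}')   # -x stays on one filesystem
tar -c "${excludes[@]}" / 2>/dev/null | pv -s "$SIZE" | gzip > "$Backup_system"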
I have a script with which I can back up the entire system, but I can't see its progress. I found many answers with progress-bar scripts but I don't know how to implement them in my backup script. My backup script:
#!/bin/bash
Backup_system="Backup_$(date +%Y-%m-%d).tar.gz"
# Record start time by epoch second
start=$(date '+%s')
# List of excludes in a bash array, for easier reading.
excludes=(--exclude=/$Backup_system)
excludes+=(--exclude=/proc)
excludes+=(--exclude=/lost+found)
excludes+=(--exclude=/sys)
excludes+=(--exclude=/mnt)
excludes+=(--exclude=/media)
excludes+=(--exclude=/dev)
if ! tar -czf "$Backup_system" "${excludes[@]}" /; then
    status="tar failed"
elif ! mv "$Backup_system" ~/Backups ; then
    status="mv failed"
else
    status="success: size=$(stat -c%s ~/Backups/$Backup_system) duration=$((`date '+%s'` - $start))"
fi
# Log to system log; handle this using syslog(8).
logger -t Backup_system "$status"
How to implement a progress-bar in a backup script
I think mysqldump is what you're looking for. Export database A to an SQL file to import into database B:
mysqldump --host=localhost --user=dbauser --password=dbapassword dba_name > /path/to/store/dba.sql
Import the database A dump into database B:
mysql --host=localhost --user=dbbuser --password=dbbpassword dbb_name < /path/to/store/dba.sql
You can wrap these commands in a call to system() in a PHP script.
I want to create a backup of my website's MySQL database into another database on a regular basis. Is it possible to do it using PHP? I already tried exporting the database using PHP, but the requirement is something else.
How to create backup of one database into another database using php
There is a way of doing incremental-style backups which is way simpler and faster: create another bat file which renames the backup folder by appending a timestamp, and schedule it for a time (maybe 5-10 minutes) after the backup time, depending on the speed of your backup. I did the following bat file and it worked:
cd C:\Users\delli5\Documents\backups
@ECHO OFF
start cmd.exe /C "ren "backup" "backup%date:/=-% %time::=-%""
and this was after I had scheduled another backup using a bat like the following:
@ECHO OFF
cd C:\Program Files\MongoDB\Server\4.0\bin
@ECHO OFF
start cmd.exe /C "mongodump -d backup -o C:\Users\delli5\Documents\backups"
@ECHO OFF
I didn't create replica sets, just did it the normal mongodump way, so you can try this out. Cheers
I am new to MongoDB and I know there is no direct way to do incremental backups in MongoDB. I have set up the replica set; after this, what are the steps that I need to follow for incremental backups? I tried the approach below but it gives an error: db.fsyncLock() on the secondary member, then mongodump --host <secondary> -d local -c oplog.rs -o /mnt/mongo-test_backup/1 --query '{ "ts" : { $gt : Timestamp(1437725201, 50) } }' on the secondary member. I don't know the exact use of this command, but somehow it is giving the error and I am stuck at this step. The error message is below: 2019-05-17T14:37:18.716+0000 E QUERY [thread1] SyntaxError: missing ; before statement @(shell):1:12 Please help me with this. If anyone can suggest some other process, that would also be helpful for me.
How to take the mongodb incremental backups after setting up the replicasets
I can't think of any risk. However, there may be an even faster way... Plan A: Use a 4th machine to run the backup script. Or, at least, write the dump to a 4th machine. Any dump program is I/O-heavy, probably even I/O-bound. By writing the dump on another machine, you offload the Galera node's I/O. I assume you take the node out of the cluster during the backup? And have gcache big enough so that it does an IST, not an SST, to resync? Plan B: Carrying things a step further, use 4 nodes instead of 3. That way, having a node offline has less impact and less exposure. Plan C: Use a 4th machine as a Slave to one of the nodes. Questions... What is the geographical distribution of the nodes? If all three are currently sitting together, think about tornados, earthquakes, floods, etc. What is the purpose of the backup? Will the backup data be geographically remote? (Note: that will add some delay to the backup method.)
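For the xtrabackup side itself, a minimal hedged sketch of running it on one node - the paths and credentials are assumptions; on a Galera/PXC cluster the --galera-info flag records the cluster position so the backup can later be used to rebuild or seed a node:
#!/bin/bash
# Take a full physical backup on one Galera node and write it to a separate disk/mount.
set -euo pipefail
TARGET=/backups/galera-$(date +%F)
xtrabackup --backup \
  --user=backup_user --password='secret' \
  --galera-info \
  --target-dir="$TARGET"
# Apply the redo log so the copy is consistent and ready to restore.
xtrabackup --prepare --target-dir="$TARGET"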
I have a Galera cluster (MySQL) consisting of three nodes and am currently relying on mysqldump for backup. This is cumbersome, to say the least, as the database has grown over the years and is now approaching 20 GiB and mysqldump takes roughly half an hour to do its thing. Percona xtrabackup seems to be promising and I have tried it out on a copy of the database on a single node with very positive results, where backup time is radically reduced. What I have not been able to find out is if it is OK to install xtrabackup on one of the nodes (preferably the one running mysqldump today) and run that on the cluster database? I have read quite a lot on xtrabackup but nowhere have I found any information about any risks with running it on one of the cluster members.
Can I run Xtrabackup directly on one of the members in a Galera cluster?
For your issue, I can just say that the property is not supported by Terraform. You can see it in the Azure REST API for the recovery policy as the property instantRpRetentionRangeInDays, and use a request body like this:
{
  "properties": {
    "backupManagementType": "AzureIaasVM",
    "schedulePolicy": {
      "schedulePolicyType": "SimpleSchedulePolicy",
      "scheduleRunFrequency": "Weekly",
      "scheduleRunDays": [ "Friday" ],
      "scheduleRunTimes": [ "2018-07-30T18:30:00Z" ],
      "scheduleWeeklyFrequency": 0
    },
    "retentionPolicy": {
      "retentionPolicyType": "LongTermRetentionPolicy",
      "weeklySchedule": {
        "daysOfTheWeek": [ "Friday" ],
        "retentionTimes": [ "2018-07-30T18:30:00Z" ],
        "retentionDuration": { "count": 5, "durationType": "Weeks" }
      }
    },
    "instantRpRetentionRangeInDays": 5,
    "timeZone": "UTC",
    "protectedItemsCount": 0
  }
}
You can also set it with an Azure (ARM) template, where it shows up as well. But you cannot find the property in Terraform, so I suggest you use the Azure REST API or a template to achieve it.
I have successfully created daily and weekly backup policies using Terraform and both work fine. The Azure Portal however shows a red mark under "Instant Restore" on the policy blade saying "Retain instant recovery snapshot(s) for" and the value appears as 2 days. I need to change this value to 5; however, I don't see an option to alter it in Terraform. I was wondering if I should use "azurerm_snapshot" resource type to change it or if there is a workaround available in TF for it. resource "azurerm_recovery_services_protection_policy_vm" "backup_policy_weekly" { name = "${var.RG4VM}-weekly-bkp-policy" resource_group_name = "${var.RG4VM}" recovery_vault_name = "${azurerm_recovery_services_vault.backup_vault.name}" depends_on = ["azurerm_recovery_services_vault.backup_vault"] timezone = "UTC" backup { frequency = "Weekly" time = "18:30" weekdays = ["Friday"] } retention_weekly { count = "2" weekdays = ["Friday"] } retention_monthly { count = "1" weekdays = ["Friday"] weeks = ["Last"] } } Expected: Snapshot set to 5, as it is the minimum value Actual: 2 Thank you/Asghar
Terraform Azurerm azurerm_recovery_services_protected_vm “Set number of instant recovery snapshot(s)”
I ended up setting a symbolic link for the archive directory after creating the stanza, and it works perfectly.
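Roughly, that amounts to something like the following - the destination volume is a placeholder, and this assumes pgBackRest keeps its archive/ and backup/ subdirectories under the configured repo1-path (here /var/lib/pgbackrest) and that archiving can be paused while the files move:
# Briefly pause WAL archiving, relocate the existing archive, and link it back into the repo path.
sudo systemctl stop postgresql          # or pause archiving some other way during the move
mv /var/lib/pgbackrest/archive /mnt/wal-archive-volume/archive
ln -s /mnt/wal-archive-volume/archive /var/lib/pgbackrest/archive
sudo systemctl start postgresql
# pgBackRest still sees everything under repo1-path, but the WAL archive physically lives elsewhere.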
I have installed and started using pgBackRest for PostgreSQL incremental backups. Everything is working fine, but I have a requirement to keep the archived WALs in a separate repository from the base backups. In the pgBackRest documentation I can find only one setting for both: repo1-path=/var/lib/pgbackrest
Configuring Different repo path for pgbackrest wal archive
You will not be able to migrate your Google Cloud instances to AWS without a lot of work. The underlying system drivers are different. You will need to do a bunch of things to remove drivers and software and install new drivers and software, plus change the format of the files. You can do neither while running on Google Cloud; you must complete these changes before you can import into AWS. This means that you will need to convert the Google image into a virtual machine running locally in your network, make the required changes, and then use the AWS tools to import into AWS. My answer will only cover exporting VMs to a file. The format of this file is not compatible with AWS, as mentioned above, but it does make for an offline, out-of-the-cloud backup that can be imported back into Google Cloud. You cannot export Windows VMs at all - only Linux VMs. You can export a Linux VM to Google Cloud Storage, and from there you can use any tool you want to download the file to your desktop; the Google Cloud tool is gsutil. First create an image of your Compute Engine VM using the gcloud compute images create command. Next, export the image to Cloud Storage using the gcloud compute images export command. Exporting a Custom Image to Google Cloud Storage
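To sketch those two steps concretely - the zone, disk, image and bucket names here are placeholders, not values from the question:
# Create an image from the instance's boot disk, then export it as a tar.gz to Cloud Storage.
gcloud compute images create my-backup-image \
  --source-disk=my-instance-disk --source-disk-zone=us-central1-a
gcloud compute images export \
  --image=my-backup-image \
  --destination-uri=gs://my-backup-bucket/my-backup-image.tar.gz
# Download it locally with gsutil if you want an offline copy:
gsutil cp gs://my-backup-bucket/my-backup-image.tar.gz .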
I have 12 instances running on Google Cloud Platform. I want to migrate the instances from Google Cloud to AWS. For this I first want to make backups of all these instances offline on my computer. Is there any way to do this?
Is there any way to make a backup of the VM instances and make it available locally?
Back up the SQL database with a scheduled job, and have a second scheduled step delete the old backups. Note that this relies on the SQL Server Agent service: SQL Server Express installs the Agent binaries but the service cannot be started, so on Express you would need to run the equivalent script from Windows Task Scheduler instead.
How can I back up a SQL database to a destination automatically every day and automatically delete backups older than 5 days?
How to automatically back up a SQL database and then delete any backup older than 5 days with a SQL command
From the configuration you have posted, it doesn't seem that you are specifying what you want to back up. Example:
<?xml version="1.0" encoding="UTF-8"?>
<phpbu xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:noNamespaceSchemaLocation="http://schema.phpbu.de/5.1/phpbu.xsd"
       verbose="true">
  <backups>
    <backup name="BackupDB">
      <!-- source -->
      <source type="mysqldump">
        <option name="databases" value="myDatabase"/>
        <option name="user" value="user.name"/>
        <option name="password" value="topsecret"/>
      </source>
      <!-- where should the backup be stored -->
      <target dirname="/path/to/backup/directory" filename="mysql-%Y%m%d-%H%i.sql" compress="bzip2"/>
      <!-- sync sftp -->
      <sync type="sftp">
        <option name="host" value="backup.example.com"/>
        <option name="port" value="22"/>
        <option name="user" value="user.name"/>
        <option name="password" value="topsecret"/>
        <option name="path" value="backup/someName"/>
        <option name="passive" value="true"/>
      </sync>
    </backup>
  </backups>
</phpbu>
The above will do a MySQL backup and then transfer the backup file to a remote server via SFTP. You can check the documentation and an example of the XML config: http://phpbu.de/manual/current/en/configuration.html#configuration.xml
I have a XAMPP environment with PHP 7.0. I installed PHPBU in my website project by putting the phpbu.phar and phpbu.xml files into the root directory. My configuration:
<?xml version="1.0" encoding="UTF-8"?>
<phpbu xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:noNamespaceSchemaLocation="http://schema.phpbu.de/5.1/phpbu.xsd"
       verbose="true">
  <sync type="sftp">
    <option name="host" value="my-host"/>
    <option name="port" value="22"/>
    <option name="user" value="my-ftp-user"/>
    <option name="password" value="123456"/>
    <option name="path" value="/my/path"/>
    <option name="passive" value="true"/>
  </sync>
</phpbu>
I execute in my terminal: php phpbu.phar
I get the following result:
phpbu 5.1.6 by Sebastian Feldmann and contributors.
Runtime: PHP 7.0.6
Configuration: C:\xampp\htdocs\www\european-business-ecademy\website\main\phpbu.xml
Time: 1 second, Memory: 4.00MB
No backups executed!
Nothing gets backed up. How come?
PHPBU executes no backups on localhost
This specific error has nothing to do with this being a standby server. Rather, you forgot to use the -U option to specify the database user, so pg_dump assumes it is the same as the operating system user. Don't use the root user for anything but administrative activities!
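For illustration, the dump command with an explicit database user could look like this - the role name and output path are assumptions, and note the question's command targets testdb while the error mentions channeldb, so use whichever database you actually mean:
# Run as any OS user; connect as the postgres database role on the standby's port.
pg_dump -U postgres -h localhost -p 5433 -F c -f /var/lib/postgresql/20190306.dump testdb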
I have set up PostgreSQL hot standby replication on Ubuntu. I need to know, if the master DB server is down, how to take a backup from the slave. I have tried this command: pg_dump testdb > /var/lib/postgresql/20190306.bak -p 5433 and I got this error: pg_dump: [archiver (db)] connection to database "channeldb" failed: FATAL: role "root" does not exist
How to take a PostgreSQL DB backup from the slave if the master is down
I remember that back around 2016 you could still access applications through iTunes. You could take the app's .ipa file, change the extension to .zip, and open it directly to get at the app's local resources. Inside the Payload folder you'd find the .app bundle; right-click and choose "Show Package Contents" to browse more of the app's local resources, which generally include the app icon files and the images inside Assets. Unfortunately that functionality was removed by an iTunes update (in 2017, if I remember right). However, on a Mac you can still obtain the app's .ipa package with third-party tools such as PP Assistant or iTools, and then pull the relevant local resources out of it in the same way. Hope this helps.
Is it possible to extract an iOS application's icon from its files within a locally stored iTunes iOS backup? I've searched the backup manifest and app related files and can find the list of installed apps, but no icons. Currently I am using the domain name as an input to the Apple iTunes search API. e.g.: for "com.facebook.Messenger" https://itunes.apple.com/search?term=Messenger&entity=software Ideal would be to extract from the backup files directly.
Finding application icons within an iTunes iOS backup
It is not possible to do the XML backup via bash through the REST API. I was informed by the Atlassian team that this is not possible; they are working on it but the solution is not ready yet. The only way of exporting the data is by backup and restore from the database.
I'm wondering whether it is possible to export data to XML via bash. Of course I could make a dump from the database, but XML is what the migration procedure expects, and I don't want to change that. As far as I have noticed, exporting via the REST API is not possible. Is it possible to do it another way? A plug-in?
Jira export via bash
As @wildpasser mentioned in the comments, the best way to do this is the following:
1. Import the required table under a different table name (the table can be loaded with psql's COPY different_table_name FROM 'file_name').
2. Update the desired columns of your table from different_table_name.
3. Drop different_table_name.
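A hedged sketch of those steps - the table, column and file names are invented for the example, and it assumes the single table's data has already been extracted from the dump in COPY text format:
#!/bin/bash
# Restore one column from a dump by loading the table under a temporary name, then copying the column over.
set -euo pipefail
psql -U postgres -d mydb <<'SQL'
CREATE TABLE mytable_restore (LIKE mytable INCLUDING ALL);
-- data for the single table in COPY text format, however you extracted it from the dump
\copy mytable_restore FROM 'mytable_data.copy'
UPDATE mytable t
SET    some_column = r.some_column
FROM   mytable_restore r
WHERE  t.id = r.id;
DROP TABLE mytable_restore;
SQL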
Is it possible to restore a particular column from a PostgreSQL dump? I know that it is possible to specify the table to restore, but what about a single column from that table?
Restore column from PostgreSQL dump
You should either back up the ${ignite.work.dir}/marshaller directory, or call ignite.binary().type(KeyOrValue.class) for every type you have in the cache to prime the binary marshaller.
I’m trying to come up with a strategy to backup data in my apache ignite cache hosted as a stateful set in google cloud Kubernetes. My ignite deployment uses ignite native persistence and runs a 3 node ignite cluster backed up by persistence volumes in Kubernetes. I’m using a binaryConfiguration to store binary objects in cache. I’m looking for a reliable way to back up my ignite data and be able to restore it. So far I’ve tried backing up just the persistence files and then restoring them back. It hasn’t worked reliably yet. The issue I’m facing is that after restore, the cache data which isn’t binary objects is restored properly, e.g. strings or numbers. I’m able to access numeric or string data just fine. But binary objects are not accessible. It seems the binary objects are restored, but I’m unable to fetch them. The weird part is that after the restore, once I add a new binary object to the cache all the restored data seems to be accessed normally. Can anyone please suggest a reliable way to back up and restore ignite native persistence data?
Backup of ignite stateful set in Kubernetes
Relying on timing is probably a poor idea, because it will vary according to how much data needs to be moved and how congested the machine and/or network might be. I suggest you either make your schedule very generous, or else implement some kind of locking mechanism. Try this for your rsync:
flock /path/to/some/nfs/filename.lock rsync <args>
And this for your tape backup:
flock /path/to/some/nfs/filename.lock <mycmd> <args>
The flock command (f-lock) ensures that only one process can own the lock file at once, and will sit and wait until it owns the lock file before it launches the command you give it. As long as the sync and backup are launched in the right order, the backup will always wait until the sync is done. The main gotcha is that if you ever have a power cut, network outage, or some other interruption that leaves a stale lock file behind, you have to delete it manually before either job can run again (and if you don't notice quickly, there can be a whole bunch queued up).
I have been using rsync for more than a year to sync production data to a folder on an NFS volume; once the sync completes, our NDMP/tape backup schedule starts. Situation: yesterday we observed that rsync was still in the process of syncing files from the production folder to the destination folder when the tape backup completed, so the tape backup data is inconsistent. Question: how do I find out how much time rsync took to generate the list of files that need to be synchronized between the source and destination folders? I used the command below to print timestamps to identify how much time rsync took to generate the file list before the copying starts: rsync -avz --out-format="%t %f" --delete /opt/app_home/shared/data /opt/app_home/shared/plugins /opt/app_home/shared/tape-backup-rsync-shared_new/ However, I am seeking guidance on how to determine the time taken at each stage of the rsync process, so that I can tweak my scheduled cron job execution times.
rsync : how much time rsync takes to build file list before starting sync process b/w source & destination
Excel already does 10-minute AutoRecover backups by default out of the box (the interval and the AutoRecover file location are set under File > Options > Save). That only protects the most recent unsaved state, though; it won't let you go back, say, three versions or give you hourly restore points. For that, take the AutoRecover file location from that dialog and write some PowerShell to copy the files from there to another location on a schedule. PowerShell is perfectly suitable for this, and Excel has nothing built in beyond the AutoRecover behaviour described above.
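As a rough sketch of that approach (the source and destination paths below are placeholders, and the folder you actually protect may be your documents folder rather than the AutoRecover location), a scheduled task could run something like:
# Hypothetical hourly snapshot script: copies current workbooks into a
# timestamped folder so older versions can be restored later.
$source      = 'C:\Users\me\Documents\Reports'   # folder with the workbooks to protect
$destination = 'D:\HourlyBackups'                # where the snapshots accumulate
$stamp       = Get-Date -Format 'yyyy-MM-dd_HH-mm'
$target      = Join-Path $destination $stamp
New-Item -ItemType Directory -Path $target -Force | Out-Null
Copy-Item -Path (Join-Path $source '*.xls*') -Destination $target
# Optionally prune snapshots older than 14 days
Get-ChildItem $destination -Directory |
    Where-Object { $_.CreationTime -lt (Get-Date).AddDays(-14) } |
    Remove-Item -Recurse -Force
Register it with Task Scheduler (for example schtasks /create /sc hourly pointing at the script) and you have simple hourly restore points.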
I want to create hourly restore points of various documents (usually Excel 2010). Is PowerShell suitable for doing this, or is there anything built into Excel, or any off-the-shelf software, that could do a better job?
Hourly version backup of Excel possibly using Powershell
Changing the access tier to Azure Archive Storage (if storing data in blobs) would be your best option. A few notes: the Archive storage tier is only available at the blob level and not at the storage account level. Archive storage is offline and offers the lowest storage costs but also the highest access costs. Hot, Cool, and Archive tiers can be set at the object level. Additional info can be found here: https://learn.microsoft.com/en-us/azure/storage/blobs/storage-blob-storage-tiers
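For blobs you have already uploaded, the tier can be flipped per object with the Azure CLI; the account, container, and blob names below are placeholders for illustration:
# Move an existing blob to the Archive tier (offline, cheapest storage, highest access cost)
az storage blob set-tier \
    --account-name mystorageaccount \
    --container-name old-backups \
    --name websites-2017.tar.gz \
    --tier Archive
For the initial upload from the on-premises VM, azcopy or Azure Storage Explorer works, and you can set the tier on the blobs afterwards.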
We are migrating from an on-premises virtual machine to Azure cloud. The virtual machine will eventually be decommissioned and we have many files and folders that we don't want to lose, like old websites and databases, scripts, programs etc. We use an Azure storage account for storing and retrieving images via blob containers for the live websites. Q: What is the best and most cost effective way to backup large amount of files unused in production, rarely accessed, from an on-premises virtual machine to Azure cloud?
Backup files to Azure Storage
Don't bother with PARTITIONing unless you have over a million rows. A table with 8000 rows is 'tiny'; keep them all in a single table. If there is already a DATE or DATETIME column, you don't even need an extra column to indicate the "year". And you don't need id_table2 (etc). If you want to discuss further, please provide SHOW CREATE TABLE and some of the queries.
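To illustrate the suggestion, assume Table1 has (or gains) a DATETIME column; the created_at name and the index below are made up for the example, and the year boundaries would be whatever your archive cycle uses:
-- "Current year" rows:
SELECT *
FROM   Table1
WHERE  created_at >= '2019-01-01';

-- "Last year, still usable" rows:
SELECT *
FROM   Table1
WHERE  created_at >= '2018-01-01'
  AND  created_at <  '2019-01-01';

-- An index keeps these queries cheap even as years accumulate:
ALTER TABLE Table1 ADD INDEX idx_created_at (created_at);
With the date expressed this way there is nothing to flip at year end: the 0/1/2 status is just a date range in the WHERE clause.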
I have a database that needs to be archived once every year. The data from last year will sometimes need to be used in the following year, so I have a column year that can have 3 values (0 = from the current year, 1 = from last year and still used, 2 = kept but not used). The DB is as follows:
Table1: id, year, ...
Table2: id, id_table1, ...
Table3: id, id_table2, ...
Table4: id, id_table3, ...
These are the tables that need to be archived where table1.year = 2. The archived data needs to be accessible as well and needs to go back at least 2 years. The number of rows per year is around: Table1 - 50 rows; Table2 - 250 (Table1 x 5); Table3 - 2,500 (Table2 x 10); Table4 - 5,000 (Table3 x 2). I looked at partitioning but couldn't figure out how to group the four tables so that they could all move to a separate partition.
Best way to have archive which you can access
You can use external plugins or apps to connect cPanel with Dropbox or Google Drive. As a hosting company, we know that a few of our clients use https://backupcp.com for this purpose. It accepts either cPanel or WHM login details: with a cPanel user only that one account is shown, while a reseller WHM user will see all accounts under the reseller.
I am using cPanel to manage my websites on shared hosting, and I now want to store my backups on my Google Drive. I have checked lots of tutorials and they all say we need to do something in WHM. I don't have that; I only have cPanel. Please help me store cPanel backup files to Google Drive. Thanks.
How to do CPanel Backup to Google Drive
I'm going to assume that you consider the backup incomplete if at least one folder does not contain a file modified in the past 24 hours.
$refdate = (Get-Date).AddDays(-1)
$complete = $true
$results = foreach ($line in Get-Content "C:\Backup\sample.txt") {
    $result = Get-ChildItem $line |
              Where-Object { $_.LastWriteTime -gt $refdate } |
              Select-Object Directoryname, Name, LastWriteTime,
                            @{n="Size (GB)";e={[Math]::Round($_.Length/1GB, 1)}}
    if ($result) {
        $result
    } else {
        $complete = $false
    }
}
if ($complete) {
    Write-Host 'Backup complete'
} else {
    Write-Host 'Backup incomplete'
}
Loop through a text file which contains folder locations:
foreach ($line in Get-Content "C:\Backup\sample.txt") {
    Get-ChildItem $line |
        Where { $_.LastWriteTime -gt (Get-Date).AddDays(-1) } |
        Select Directoryname, Name, LastWriteTime,
               @{Name="Size (GB)"; Expression={[Math]::Round($_.Length/1GB, 1)}}
}
This way I am only able to get the files that were modified yesterday. What I want is to list the folders which do and do not have files modified yesterday. If a folder contains files that were modified yesterday (modified date of yesterday), then list those file names, the folder name and their sizes (GB), and also Write-Host that the backup was successful. The same should go for folders which do not contain files with a modified date of yesterday: Write-Host that the backup is not complete.
Need a PowerShell script to find folders which do not have files modified yesterday
Here is the algorithm and the code for it.
Algorithm:
1. Get the backup directory of the file visited by the current buffer.
2. Open the backup directory.
3. In the dired buffer, go to the end of the buffer.
4. Search backward for the basename of the visited file.
Code:
(defun backup-each-save-dired-jump ()
  (interactive)
  (let* ((filename (buffer-file-name))
         (containing-dir (file-name-directory filename))
         (basename (file-name-nondirectory filename))
         (backup-container
          (format "%s/%s" backup-each-save-mirror-location containing-dir)))
    (when (file-exists-p backup-container)
      (find-file backup-container)
      (goto-char (point-max))
      (search-backward basename))))
Notice: see my answer before answering this question, because I answered this question myself first. But other approaches or any suggestions are welcome. How can I open the most recent backup file created by backup-each-save.el in dired, like dired-jump?
How to open a backup file created by backup-each-save.el in dired, like dired-jump?
You can't exclude a table from a backup; that is kind of the definition of a backup: the whole thing, not part of it. You can, however, find some ways around this. One way is to create a second database, copy all the data from your database except that table into it, and back up the copy. Alternatively, you could use replication to copy your data, exclude the table, and back up the replicated database. The problem is that either of these might actually take longer due to the quantity of data in the rest of the database. Taken from here.
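A minimal sketch of the first workaround might look like the following; the database and table names are made up, and in a real script you would generate the table list from sys.tables rather than writing each SELECT ... INTO by hand:
-- Hypothetical example: copy everything except dbo.AuditLog's data into a staging
-- database, keep AuditLog's structure but not its rows, then back up the copy.
CREATE DATABASE CenterDB_Copy;
GO
SELECT * INTO CenterDB_Copy.dbo.Customers FROM CenterDB.dbo.Customers;
SELECT * INTO CenterDB_Copy.dbo.Orders    FROM CenterDB.dbo.Orders;
-- Structure only (WHERE 1 = 0 copies no rows), so the table exists but is empty:
SELECT * INTO CenterDB_Copy.dbo.AuditLog  FROM CenterDB.dbo.AuditLog WHERE 1 = 0;
GO
BACKUP DATABASE CenterDB_Copy
    TO DISK = N'D:\Backups\CenterDB_Copy.bak'
    WITH INIT;
GO
Note that SELECT ... INTO copies data and column definitions only, not indexes, constraints, or triggers, so the copy is suitable as a data backup rather than a full structural clone.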
I need to take a full backup of my central database, but I want a backup that does not include the data of some tables. Those tables should exist in the backup, but their data should not. If somebody could share a sample query for this, I would be grateful.
Create back up by TSQL by Excluding Data of Some Tables
I reported this as GitHub Issue #9. I then made a minor change to the error reporting (GitHub Pull Request #10) to work out that the error was a "Too many open files" error:
MacBook-Pro:CrashPlanHomeRecovery daniel$ ./plan-c-osx/plan-c --key 07B... --archive ./sg2015/642033544161964565/ --dest ./recovered/ --filename "J:/..." restore
Caching block indexes in memory...
libc++abi.dylib: terminating with uncaught exception of type std::runtime_error: Failed to open block manifest (../../sg2015/642033544161964565/cpbf0000000000017581637/cpbmf) for reading: Too many open files
Abort trap: 6
Just a note that if my pull request (only just submitted) is not merged (and a new binary released), you will need to build from my fork. I then fixed the error with a ulimit change:
MacBook-Pro:PlanC daniel$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 256
pipe size            (512 bytes, -p) 1
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1418
virtual memory          (kbytes, -v) unlimited
by increasing the number of open files for the shell to 1024:
MacBook-Pro:PlanC daniel$ ulimit -S -n 1024
Recording this answer in case others have problems - backups are important after all :)
Background to Plan C: Code42 decided to terminate their "CrashPlan for Home" service. This means that after the shutdown date of October 22, 2018, CrashPlan will delete your backup on their servers, which is to be expected, but much more annoyingly, you will no longer be able to restore CrashPlan backups that you stored locally. Effectively, Code42 is reaching into your computer to break your backups for you. PlanC is an open source project to enable restores from existing CrashPlan Home backups. My problem: when attempting to restore I received an error:
MacBook-Pro:CrashPlanHomeRecovery daniel$ ./plan-c-osx/plan-c --key 07B... --archive ./sg2015/642033544161964565/ --dest ./recovered/ --filename "J:/..." restore
Caching block indexes in memory...
libc++abi.dylib: terminating with uncaught exception of type std::runtime_error: Failed to open block manifest for reading: ./sg2015/642033544161964565/cpbf0000000000017581637/cpbmf
Abort trap: 6
The file referenced in the error appears to read OK, but the reported error provided no more information.
Error attempting CrashPlan Home restore using PlanC - "Failed to open block manifest for reading"
Just a quick answer to some of your questions (for the others, I will update later). Some answers can be found here.
1.1 Where are the snapshots stored? Share snapshots are stored in the same storage account as the file share.
1.2 Will it cost storage capacity? As per this doc (the "Space usage" section): snapshots don't count toward your 5-TB share limit. There is no limit to how much space share snapshots occupy in total. Storage account limits still apply. This means that when you create a file share, there is a Quota option which lets you specify the share's maximum capacity (like 5 GB); even if your total snapshots (like 10 GB) are larger than that maximum capacity, don't worry, you can still keep these snapshots, but the total snapshot capacity must remain less than your storage account's maximum capacity.
2. If my snapshots exceed 200, what happens? If there are more than 200, an error occurs: "Exception calling "Snapshot" with "0" argument(s): "The remote server returned an error: (409) Conflict."". You can test it with the following PowerShell code:
$context = New-AzureStorageContext -StorageAccountName your_account_name -StorageAccountKey your_account_key
$share = Get-AzureStorageShare -Context $context -Name s22
for($i=0;$i -le 201;$i++){
    $share.Snapshot()
    Start-Sleep -Seconds 1
}
3. May I delete a specific snapshot with Azure Automation (using a runbook to schedule it)? This should be possible; I can test it on my side later and then update you. Most of the snapshot operation commands, including delete, can be found here.
Update:
$s = Get-AzureStorageShare -Context $context -SnapshotTime 2018-12-17T06:05:38.0000000Z -Name s33
$s.Delete() #delete the snapshot
Note: for -SnapshotTime, you can pass the snapshot name. As of now, the snapshot name is always an auto-assigned UTC time value, like 2018-12-17T06:05:38.0000000Z. For -Name, pass the Azure file share name.
I have some questions about Azure Files share snapshots; if you know something about this, please let me know. Thanks.
1. Where are the snapshots stored? Do they consume storage capacity, and what is the cost of creating and deleting snapshots?
2. If my snapshots exceed 200, what happens? Are old ones deleted automatically, or can a new one simply not be created?
3. May I delete a specific snapshot with Azure Automation (using a runbook to schedule it)?
4. If I use Azure Automation and Backup (Preview) to take Azure file share snapshots together, which snapshot will I get?
If you know something about this, please share with us (even if you can answer only one of them, I will mark it as an answer). Thanks so much for your help.
Some questions about where Azure Files share snapshots are stored and what they cost
Assuming this is for a replica set deployment with default configurations, backup methods that capture a MongoDB system at an exact moment in time (i.e. atomic methods) would guarantee a backup at a transaction boundary. With MongoDB multi-document transactions, only when a transaction commits are all of its data changes saved and visible outside the transaction. Also worth mentioning: as of MongoDB v4.0.x, there is only a single oplog entry for all the writes within a single transaction. See also: Transactions Atomicity, Transaction Options, and MongoDB Backup Methods for various backup strategies.
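As one concrete example from the documented backup methods, mongodump with oplog capture gives a point-in-time image of a replica set; the host name and output paths below are placeholders, and whether this method suits your deployment still depends on the considerations in the linked backup documentation:
# Hypothetical example: dump a replica set with the oplog included,
# then restore and replay the oplog to reach a single point in time.
mongodump --host rs0/mongo1.example.com:27017 --oplog --out /backups/2018-12-03
mongorestore --oplogReplay /backups/2018-12-03
Because a committed transaction is a single oplog entry in 4.0, replaying the oplog to a point in time does not split a transaction in half.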
If all operations on a community edition of MongoDB 4.x occur in transactions, are any of the methods for backing up the DB guaranteed to produce a snapshot at a transaction boundary, rather than just at some random state of partial transaction?
Can MongoDB be backed up at a transaction boundary?
On Feb. 14, 2019, Google finally released scheduled snapshots in beta: https://cloud.google.com/compute/docs/disks/scheduled-snapshots. Here's how it looks in the Google Cloud Console (screenshot taken from the Google blog): you create a snapshot schedule and attach it to the disk, and for Windows instances you must select Enable VSS. Hope this helps.
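If you prefer gcloud over the console, the equivalent would be roughly the following; the schedule name, region, zone, and disk name are placeholders, and since the feature was in beta at the time the exact command group and flags may differ by gcloud version, so treat this as a sketch:
# Create a daily snapshot schedule (start time in UTC), keeping snapshots for 14 days;
# --guest-flush requests VSS-based, application-consistent snapshots on Windows.
gcloud compute resource-policies create snapshot-schedule daily-win-backup \
    --region=us-central1 \
    --daily-schedule \
    --start-time=04:00 \
    --max-retention-days=14 \
    --guest-flush

# Attach the schedule to the Windows server's boot disk
gcloud compute disks add-resource-policies win-server-disk \
    --resource-policies=daily-win-backup \
    --zone=us-central1-a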
For Linux VM instances that have SSH, I know I can use the approach below: https://github.com/grugnog/google-cloud-auto-snapshot. But for a Windows Server instance that is accessed via RDP, how can I take automatic daily snapshots? Or do I need to use a gcloud command?
How to auto-snapshot Google Cloud Platform VM instances running Windows Server
openStatus = OpenVirtualDisk(
    &storageType,
    virtualDiskPath,
    //VIRTUAL_DISK_ACCESS_GET_INFO,
    VIRTUAL_DISK_ACCESS_ALL,
    OPEN_VIRTUAL_DISK_FLAG_NO_PARENTS,
    openParameters,
    &vhdHandle
);
After testing: once the access mask used to open the virtual disk is changed from VIRTUAL_DISK_ACCESS_GET_INFO to VIRTUAL_DISK_ACCESS_ALL, it works. But here is a new problem: either the virtual machine can't boot, or, if the virtual machine is powered on, QueryChangesVirtualDisk() returns 32 (0x20), ERROR_SHARING_VIOLATION (the file is in use by another process). This is very frustrating.
// QueryChangesVirtualDisk
PCWSTR changeTrackingId = virtualDiskInfo->ChangeTrackingState.MostRecentId;
ULONG64 byteOffset = 0L;
ULONG64 byteLength = virtualDiskInfoSize;
PQUERY_CHANGES_VIRTUAL_DISK_RANGE pQueryChangeRange = NULL;
ULONG rangeCount = 0L;
ULONG64 processedLength = 0L;
openStatus = QueryChangesVirtualDisk(
    vhdHandle,                            // A handle to the open VHD
    changeTrackingId,                     // A pointer to a string that specifies the change tracking identifier
    byteOffset,                           // Specifies the distance from the start of the VHD to the beginning of the area of the VHD
    byteLength,                           // Specifies the length of the area of the VHD that you want to check for changes
    QUERY_CHANGES_VIRTUAL_DISK_FLAG_NONE, // Reserved
    pQueryChangeRange,                    // Indicates the areas of the virtual disk that have changed
    &rangeCount,                          // The number of QUERY_CHANGES_VIRTUAL_DISK_RANGE structures that the array that the Ranges parameter points to can hold
    &processedLength                      // Indicates the total number of bytes that the method processed
);
if (openStatus != ERROR_SUCCESS)
{
    wprintf(L"Failed to call method(QueryChangesVirtualDisk), Error code: %ld\n", openStatus);
    wprintf(L"Virtual disk path: %s\n", virtualDiskPath);
    wprintf(L"%s\n", changeTrackingId);
    wprintf(L"Start offset: %llu\n", byteOffset);
    wprintf(L"End offset: %lu\n", virtualDiskInfoSize);
    getchar();
    return 1;
}
cout << "Succeeded to call method(QueryChangesVirtualDisk)." << endl;
if (vhdHandle != NULL)
{
    CloseHandle(vhdHandle);
}
Recently we started using the new Resilient Change Tracking (RCT 2016) APIs. We are facing an issue with the QueryChangesVirtualDisk API. We are following the steps as described on MSDN. Does anyone have suggestions, or has anyone gotten this working?
QueryChangesVirtualDisk() is returning Access_Denied (5)?
Just keep the .ipa files with you, and install them using iTunes whenever you want. Another way would be to install them using Xcode.
I have created a few apps in Swift with a lot of data. These apps are not in the App Store. Is there a way to save the apps and their data in the iTunes backup? If I do a restore, my own apps are missing.
Backup self-made iOS apps
There are implementations of ActiveStorage::Service that save the image data directly to the database; here is one such implementation: https://github.com/lsylvester/active_storage-postgresql. It is usually not a good idea to store lots of binary data in an RDBMS. However, if the files are relatively few and it simplifies your architecture, it may make sense.
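If you instead stay with the stock setup, the missing images are explained by where Active Storage puts them: the default Disk service keeps uploaded files under the storage/ directory of the Rails app (per config/storage.yml), which pg_dump never sees. A minimal sketch of a combined backup, with made-up paths and database name:
# Hypothetical backup script: database plus the Active Storage files on disk
pg_dump -Fc myapp_production > /backups/myapp_$(date +%F).dump
tar -czf /backups/storage_$(date +%F).tar.gz -C /var/www/myapp storage
Restoring then means pg_restore for the dump plus unpacking the storage/ archive back into the app root.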
I am trying to back up the images handled by Active Storage, but I do not know where the images are stored. I did a pg_dump but I only got the data from the tables. Do you know how I could do it? I am using a local Ubuntu 18 LTS server with PostgreSQL to store the images.
Backup ActiveStorage images in PostgreSQL rails
When Redis restarts, it resets the LASTSAVE time to the current time. Since you actually need the unix timestamp of when the last backup was made, not of when the backup was restored, get the last backup time by checking the dump.rdb file's last modified time.
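As a quick sketch (the RDB path below is a common Debian/Ubuntu default and may differ on your install; redis-cli will tell you the real one):
# Ask Redis where it writes its dump file
redis-cli CONFIG GET dir
redis-cli CONFIG GET dbfilename
# Print the last-modified time of the dump as a unix timestamp (GNU stat)
stat -c %Y /var/lib/redis/dump.rdb
# ...or human-readable
date -d @"$(stat -c %Y /var/lib/redis/dump.rdb)"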
According to the official Redis documentation: "LASTSAVE: Return the UNIX TIME of the last DB save executed with success." However, when I execute LASTSAVE I get the timestamp of the last restored backup instead of the last DB save executed. In other words, if I made a backup yesterday and I restore it today, LASTSAVE will give me a timestamp from today. My problem is that I actually need the unix timestamp of when the last backup was made, not the timestamp of when the backup was restored.
Why does the value of LASTSAVE change when I load a Redis backup?
You can't restore a database that is in use because the restore would put it in an inconsistent state. You need to disconnect all active connections (including all SSMS query windows and other applications) from the database in order to restore it. If it already exists, make sure to check "Overwrite existing database" on the Options tab of the restore window. On a side note, it's up to you, but I would recommend not using a '.' in the database name; it can get confusing when using fully qualified object names that include the database. In the case above, the likely culprit was the Tail-Log backup option on the Options tab ("Take tail-log backup before restore" with "Leave source database in the restoring state") being selected: taking a tail-log backup has to touch the source database, which is why the restore complained that it was in use.
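If the GUI keeps fighting you, the same restore can be scripted; the backup path, destination file paths, and logical file names below are hypothetical (RESTORE FILELISTONLY shows the real logical names inside your .bak):
-- Inspect the logical file names inside the backup first
RESTORE FILELISTONLY FROM DISK = N'D:\Backups\latestUATbackup.bak';

-- Restore the backup as a brand new database under a different name;
-- WITH MOVE relocates the data/log files so they don't collide with the source database's files
RESTORE DATABASE [Dev.Web]
FROM DISK = N'D:\Backups\latestUATbackup.bak'
WITH MOVE N'UAT.Web'     TO N'D:\Data\Dev.Web.mdf',
     MOVE N'UAT.Web_log' TO N'D:\Data\Dev.Web_log.ldf',
     RECOVERY, REPLACE;
Run this way, no tail-log backup of the source database is attempted, so UAT.Web is left alone.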
I have two sets of databases for my different testing environments (internal QA and UAT). I'm trying to bring QA up to date by restoring it from the latest UAT backups. I ran into an issue with the QA DBs getting stuck in "restoring" mode and ended up deleting them, so I'm now trying to create a brand new database by restoring from the UAT backup and changing the name, but it keeps failing.
Restore database
Source: device > latestUATbackup.bak
Destination: database > change name from UAT.Web to Dev.Web
Files > check off Relocate all files to folder
Options: Close existing connections to destination database
"Restore of database Dev.Web failed. Access could not be obtained because the database is in use."
I tried taking UAT.Web offline but then it fails with the error "UAT.Web cannot be opened because it is offline". Why would it matter if the database is in use when I'm restoring from a backup? What do I need to do?
Cannot create a copy of a database with a new name by restoring from backup: "the database is in use"
It is absolutely safe and commendable to run backups from a standby server. If you use pg_dump, you may run into replication conflicts that make pg_dump fail. To avoid that (at the price of delaying replication), set max_standby_streaming_delay = -1 on the standby.
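Concretely, the change and the backup invocation might look like this; the data directory, host names, user, and database name are placeholders for your own A/B servers:
# On the standby (server B), in postgresql.conf:
#   max_standby_streaming_delay = -1   # replication waits rather than cancelling long-running queries such as pg_dump
# then reload the configuration:
pg_ctl reload -D /var/lib/postgresql/data

# Point the existing backup script at B instead of A:
pg_dump -h standby-b.example.com -U backup_user -Fc production_db > /backups/production_$(date +%F).dump
The trade-off is exactly as stated above: while a long dump runs, replay on the standby may fall behind the primary until the dump finishes.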
I have a production database (let's call it database a) running on the production server A, and a replica of it (let's call it database b) running on server B. Right now, I have a backup script running on server A, and it's burdening A with too much processing and networking. I would like to stop backing up database a and back up only b with the same script; I would move the script currently running on A to A0. Theoretically both databases are equal, but I'm not sure if it is good practice and if it is safe enough. What do you think? Is it safe to run backups only against replica databases instead of the production ones?
Is it safe to backup only my replication database?