Since no one seemed to have an answer to this one, I resorted to restarting the SQL Server, and after the restart the transaction log backup started working again! What is interesting is the following entry that appeared in the application event log during the restart. It does seem like there was a thread hanging indefinitely, waiting for a status update that never arrived. The restart seems to have taken care of it by killing this status thread and not restarting it in the erroneous state it had ended up in.
Log Name: Application
Source: Microsoft SQL Server Automated Backup
Date: 1/15/2022 11:16:20 AM
Event ID: 57007
Task Category: None
Level: Warning
Keywords: Classic
User: N/A
Computer: wn-sqlserver1
Description: [Warning] AutomatedBackupStatusMonitorError: System.Exception: Error in auto-backup status monitor thread ---> Microsoft.SqlServer.Management.IaaSAgentSqlQuery.Contract.IaaSAgentSqlQueryException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) --->
I have a database running on an Azure VM with SQL Server. The DB is in full recovery mode. The backup is configured through the web interface. Database and log backups have been working flawlessly for years. But recently the log backup was interrupted halfway through and the log backup process somehow got stuck. The following event has been logged every 5 minutes since then (reading the log with managed_backup.sp_get_backup_diagnostics): [SSMBackup2WAAdminXevent] Database Name = DB, Database ID = 777, Stage = VerifyJobOutcome, Error Code = 0, Error Message = Warning, Additional Info = A progress update hasn't been received from SQL Server in more than 30 minutes for log backup. SSMBackup2WA will continue to wait. SSMBackup2WA seems to be stuck waiting for a progress update that is never received. This has resulted in no log backups being taken. The database backups have continued running without problem. I have trouble finding the job/task used by SSMBackup2WA. I understand it's not in the usual batch of SQL Server Agent jobs but somehow hidden. My idea is to somehow cancel the existing job that is stuck in the waiting loop, but I have not figured out how. I have tried to "reset" the backup process by turning the backup off and then on again, but that did not help. I have no possibility to restart the SQL Server (and I don't know if that would help).
SQL Server Managed Backup for Windows Azure (SSMBackup2WA) stuck waiting for progress update
1 By "through the UI", I presume you mean ... using the OpenStack Dashboard. Backup in OpenStack is a bit complicated. Services like Nova, Cinder and Trove each have their of mechanisms for backup the resources that they manage. In addition, there is a service called Freezer that is designed to orchestrate backups ... in various ways. For example, it can perform a Nova instance snapshot, a Cinder volume backup or a Glance image backup. Furthermore, Freezer has a mechanism for scheduling periodic backups. See the Freezer: Agent User Guide. So to answer your question: Nova doesn't support backup scheduling. The base Horizon Dashboard doesn't support scheduling either. If you set up Freezer, you should be able to use it to create a backup job to do a periodic instance snapshot. You can do other sorts of backup as will ... which might be more appropriate than instance snapshots. There is a Horizon plugin for Freezer. But if it was up to me, AND all I wanted to do was regular instance snapshots for a fixed set of instances, I would just use cron to do the scheduling and either shell or Python scripts to orchestrate the instance snapshots. Share Improve this answer Follow answered Jan 1, 2022 at 14:36 Stephen CStephen C 708k9595 gold badges821821 silver badges1.2k1.2k bronze badges Add a comment  | 
I need to schedule snapshot of OpenStack instance. I'm wondering if this can be done through the UI. Or is it terminal only?
How to: Schedule snapshot of OpenStack instance
How about exporting your applications regularly (e.g. on a daily basis)? You'd use APEXExport. As it is invoked from the operating system command prompt, you can create a batch script (.bat on MS Windows) and schedule it (using Task Scheduler on MS Windows) to run at any time you want, e.g. 02:00 (2 hours past midnight). That's what I do; it works just fine.
Right now I have a "Production App" that changes every day. Every Friday I go to Tasks > Copy this application, and send that copy to another app ID, in order to have (for example) "Production App 2" as a backup (I have the DB backup on another server). Is there any way to make that "backup" automatically? Thanks in advance
APEX 5.1: Automatic Copy of an app in Oracle APEX
As far as I know, there are two methods to create a server backup. The first is the command line, openstack server backup create, which uses the server name if you do not define the --name optional argument:
usage: openstack server backup create [-h] [-f {json,shell,table,value,yaml}] [--name <image-name>] <server>
Create a server backup image
positional arguments:
<server> Server to back up (name or ID)
optional arguments:
--name <image-name> Name of the backup image (default: server name)
The second is the Horizon GUI's "Create snapshot" button, where you have to define the snapshot name. So you can get an instance's backup list through the image list with some search filter: the Type column tells you which method created it, and the Visibility column of a server backup defaults to Private. If you click the image name, you will see the instance_uuid of the original server in the Custom Properties block. In my experience, you should get the whole cluster's list of backed-up servers with a script, since the Horizon GUI functions are limited.
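As a rough illustration of such a script (my own sketch, not from the answer above; the exact column names can vary between client versions): the openstack CLI can emit JSON, which a small Python wrapper can filter, e.g. on the Visibility column or the instance_uuid property.
# list_backups.py - list private images, which is where server snapshots/backups end up
import json
import subprocess

out = subprocess.run(
    ["openstack", "image", "list", "--long", "-f", "json"],
    capture_output=True, text=True, check=True,
).stdout
for image in json.loads(out):
    if image.get("Visibility") == "private":
        print(image.get("Name"), image.get("Status"))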
How do you verify whether OpenStack instances are being backed up or not? Is there a way to do this using the GUI? The instances are running on Centos7
How do you verify whether OpenStack instances are being backed up or not? Is there a way to do this using the GUI?
I don't believe there is a way to tell zip to create the target directory, but this is something you can do yourself using if not os.path.exists(today): os.makedirs(today) Note that makedirs will recursively create all required directories, meaning that you no longer need os.mkdir(target_dir) unless you want to keep it.
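Put together, a corrected version of the script from the question could look roughly like this (paths are the ones from the question; note that backup/ lives inside source there, so older backups end up inside newer archives too):
import os
import subprocess
import time

source = "/Users/bogdan/Downloads/tests/new/"
target_dir = os.path.join(source, "backup")
today = os.path.join(target_dir, time.strftime("%Y%m%d"))
now = time.strftime("%H%M%S")

# create backup/<yyyymmdd>/ in one go; exist_ok avoids an error if it already exists
os.makedirs(today, exist_ok=True)

target = os.path.join(today, now + ".zip")
subprocess.run(["zip", "-qr", target, source], check=True)
print("Created", target)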
Following a python tutorial making backups. The code works only if i manually in python make all subfolders for placing final zip file, but that defeats the purpose. 1)I create subfolder backup. 2) But i want to create %Y%m%d folder in it using zip. I pass all the arguments. # pylint: disable=missing-module-docstring import os import time source = "/Users/bogdan/Downloads/tests/new/" target_dir = os.path.join(source, "backup") today = target_dir+os.sep+time.strftime("%Y%m%d") now = time.strftime("%H%M%S") if not os.path.exists(target_dir): #create a subdirectory for backups os.mkdir(target_dir) print("Created backup dir") target = today+os.sep+now+".zip" #HERE IS THE PROBLEM. the Zip command will work only if folder "today"(variables filled) already exists in parent backup folder. But i want zip command to CREATE it(it contains Year,date etc.. zip_command = r"zip -qr {0} {1}".format(target, source) print(zip_command) zip_command The code above is from book the "byte of python" Here i use the plain zip command in terminal (base) bogdan@MacBook-Air-Bogdan main % zip -qr /Users/bogdan/Downloads/tests/new/backup/20210922/123228.zip /Users/bogdan/Downloads/tests/new zip I/O error: No such file or directory zip error: Could not create output file (/Users/bogdan/Downloads/tests/new/backup/20210922/123228.zip)
zip command in Mac refuses to create folder
You have to remove --databases. From https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html#option_mysqldump_databases:
--databases, -B
Dump several databases. Normally, mysqldump treats the first name argument on the command line as a database name and following names as table names. With this option, it treats all name arguments as database names. CREATE DATABASE and USE statements are included in the output before each new database. This option may be used to dump the performance_schema database, which normally is not dumped even with the --all-databases option. (Also use the --skip-lock-tables option.) Note: See the --add-drop-database description for information about an incompatibility of that option with --databases.
Is there a way to remove/change the USE databasename from the .sql file generated with mysqldump? I'm using the following command line: bin/mysqldump.exe -uName -pPass --single-transaction --routines --triggers --host host_test.com --databases testreporting > backups/testreporting.sql The part I want changed is the start: -- -- Current Database: `testreporting` -- CREATE DATABASE /*!32312 IF NOT EXISTS*/ `testreporting` /*!40100 DEFAULT CHARACTER SET latin1 */; USE `testreporting`;
How to change mysqldump USE DATABASE
You are using dynamic provisioning and then you want to hardcode DiskURIs? With this you also have to bind pods to nodes. This will be a nightmare when you have a disaster recovery case. To be honest, use Velero :) Invest the time to get comfortable with it; your MTTR will thank you. Here is a quick start article with AKS: https://dzone.com/articles/setup-velero-on-aks
I have an (AKS) Kubernetes cluster running a couple of pods. Those pods have dynamic persistent volume claims. An example is: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mysql-pvc namespace: prd spec: accessModes: - ReadWriteOnce storageClassName: custom-azure-disk-retain resources: requests: storage: 50Gi The disks are Azure Managed Disks and are backupped (snapshots) with the Azure backup center. In the backup center I can create a disk from a snapshot. Here is my question: how can I use the new disk in the PVC? Because I don't think I can patch the PV with a new DiskURI. What I figured out myself is how to use the restored dik directly as a volume. But if I'm not mistaken this does not use a PVC anymore meaning I can not benefit from dynamically resizing the disk. I'm using kustomize, here is how I can link the restored disk directoy in the deployment's yaml: - op: remove path: "/spec/template/spec/volumes/0/persistentVolumeClaim" - op: add path: "/spec/template/spec/volumes/0/azureDisk" value: {kind: Managed, diskName: mysql-restored-disk, diskURI: <THE_URI>} Some people will tell me to use Velero but we're not ready for that yet.
Use an existing disk in a persitent volume claim
It is very unusual for a dump to be larger than the database; usually that means that you have bytea columns or large objects in there. But even with these I could not explain a factor of five. Without understanding the cause, the solution is probably to use a compressed custom format dump: pg_dump -F c -p 5432 -f backup.sql db
My English is bad and I used a translator. I am making a backup using: pg_dump -p 5432 db > backup.sql At the moment, the size of the backup has reached 4 GB and continues to grow, while the database itself is only 700 MB. I think it should not be like this; how can I optimize the creation of the backup and reduce its size?
The database backup in postgresql is five times larger than the database itself
You cannot restore the backup directly to a running instance. You need to create a volume from the backup and use it to boot your instance, and that will change the public IP. But you may prefer to use an elastic IP if it's allowed within the free tier.
Comment (DarkerTimes): Thanks a lot for the info about the elastic IP, I didn't know that was a thing. I tried the reserved IP and that's what I needed: I can now just assign and unassign the reserved IP for a newly created instance of a backup. I am still a bit puzzled why it is not possible to restore and replace an existing boot volume or reassign another one to the instance.
Comment (Maxwell Cheng): I am puzzled by this too; it's a bit troublesome to detach and terminate the old boot volume, then create a new boot volume from the backup and attach it back to the instance - four steps just to recover the backup.
I wanted to try out the Oracle Free Cloud, but I have a problem with restoring a boot volume backup to the instance. Is it possible to restore a boot volume backup and replace the current one on the instance? At the moment I was only able to terminate the current instance and boot volume and create a new instance out of the backup. But then the public IP changed...
Oracle Free Cloud restore boot volume backup
It looks like your question was answered in this Github thread. Adding to Ajay's answer- Snapshots contain both the contents of a web app and the web app configuration. Your app files content can be found in the D:\home\site\wwwroot folder of your app.
The Microsoft documentation doesn't say which folder is covered by a snapshot. But it does say that any data outside '/home' is not persisted when you open an SSH terminal on the web app using Kudu. So, if I have an Azure web app with a custom Docker image on the Premium SKU, which runs an application stored under /var/www/html, and this application stores its log file in /home/site/wwwroot (the option websites_enable_app_service_storage is already set to yes), my question is: when using a snapshot to restore the web app, should I find all the content of the web app, or only the files stored in /home?
About snapshot feature in Azure App service
The short answer is: DO NOT use GNU's tar for incremental backups. The long answer is that there is a pretty old bug that prevents restoring incremental archives reliably. The bug still exists and has been reported multiple times since 2004. References: stackexchange 01, stackexchange 02, Ubuntu Launchpad, GNU 01, GNU 02, GNU 03, Debian
I created a Python script to implement an incremental backup strategy over seven days, with a full backup on Sunday, using the tar command. I have no problem generating my different backups. However, I have an issue when trying to restore an incremental backup, with this error message: tar: Cannot rename `./path1' to `./path2': No such file or directory tar: Exiting with failure status due to previous errors My backup strategy runs for a Jenkins service. Do you know why I get this error message, which stops my restore? And do you know how to fix it?
Tar incremental restore : Cannot rename
If you want to force the app to allow backups you need to edit the AndroidManifest.xml from android:allowBackup="false" to android:allowBackup="true". However, if you edit the AndroidManifest.xml the signature of the app will no longer be valid. This means that after rebuilding the apk you need to sign it with your own key and reinstall it. During this all app data will be lost. I found this guide a while back and successfully used it on another app. I'm not sure whether banking apps have any protections against this though. https://dalvikplanet.blogspot.com/2020/05/how-to-force-any-android-app-to-allow.html
may I please ask? Let's assume I have access to android manifest xml. Some applications, like banking apps have option to be backed up disabled. I would like to back them up with adb backup anyway. Can I modify android manifest xml to allow the backup? How? Thank you
Is it possible to modify android manifest xml to allow backup of application that originally could not be backed up?
Not sure if that's doable as of today, but this is definitely a much-requested feature:
Feature request: backup file #754
Export Settings (include Quick Access) #2880
Persisting the Transfer and such Explorer settings at machine level. #4169
Feel free to comment on any/all of the above issues adding more context, or create a new issue for the Storage Explorer Team to evaluate and prioritize.
On MacOS, I'm trying to restore Microsoft Azure File Explorer settings/configurations from an old hard drive backup. I'd like to get all the previous account connections back without having to set them up again manually. Where is this data stored in the MacOS directory structure so I can copy it to the new hard drive?
Where are Azure Storage Explorer configurations stored?
You typically do not need to "backup" Amazon S3 objects because they are replicated in multiple data centers. However, as you point out, you might want a method to handle accidental deletion. One option is to prevent deletion by using Object Locking or a Bucket Policy. If people aren't permitted to delete objects, then there is no need for a backup. Of the two options you present, Versioning is the better option because you are not duplicating objects. This means that you are not paying twice for storage. It is also simpler because there is no need to configure replication. Buckets cannot be deleted unless they are empty.
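If you go with versioning, enabling it is a one-off call. A sketch with boto3 (the bucket name is a placeholder):
import boto3

s3 = boto3.client("s3")

# keep previous versions when objects are overwritten or deleted
s3.put_bucket_versioning(
    Bucket="my-athena-results-bucket",  # hypothetical bucket name
    VersioningConfiguration={"Status": "Enabled"},
)
Restoring an accidentally deleted object is then a matter of removing its delete marker.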
I just want to know the best strategy for backing up the files stored in an S3 bucket. I can think of 2 options - enabling versioning, and (periodically, e.g. once a day) syncing to a new S3 bucket. The files are created by Athena CTAS queries every day and the file names are randomly generated. If I delete the files by accident, I need to restore them from the backup. Some advantages of having another S3 bucket are that it protects against accidental deletion of the original S3 bucket itself, and that the restore process for deleted file(s) is easy. On the other hand, versioning looks simple and most preferred. I could not find any articles talking about the pros and cons of these 2 approaches, hence this question/debate. I just want to know the pros and cons of each approach. Thanks, Sree
s3 backup strategy: versioning vs sync to another s3 bucket
Unlike the data you store in Firestore or Storage, the user profiles in Authentication are fully managed by Firebase. I believe they're quite well globally replicated, but the point is that they're not your/my concern. If you do want to create your own back up of the user data, you can do so through the auth:export command of the CLI or through the Admin SDKs.
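For example, with the Python Admin SDK you can iterate over every account and write your own export (a minimal sketch; the service account path is a placeholder, and the CLI equivalent is firebase auth:export users.json):
import firebase_admin
from firebase_admin import auth, credentials

cred = credentials.Certificate("service-account.json")  # hypothetical path
firebase_admin.initialize_app(cred)

# page through all user accounts and keep the fields you care about
for user in auth.list_users().iterate_all():
    print(user.uid, user.email, [p.provider_id for p in user.provider_data])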
With a Firebase project using GCP resources in a single Region (not dual/multi region), are Firebase Auth Users also only stored somehow in that region and would be lost in case of a disaster in that region? I am backing up Firestore data (that contains additional information for accounts) as well as Storage data to Storage buckets in another region. But I am wondering whether the Firebase Auth Accounts itself (I mean the data from the "Authentication" tab in Firebase Console, e.g. Auth Provider, UID, password-Hash parameters for each user) would be lost in case of a disaster? Let's say a fire destroys the GCP region completely the project has set as default GCP location - I can then of course restore the Firestore and Storage data but will all accounts (="logins") be lost or are they anyway always backed up/replicated across regions by Google.
Are Firebase Auth User accounts lost if using a single GCP Region as default location in case of a disaster in that region?
Why are you prompting the user for the src and dest paths when you already pass them as command-line args? The issue probably comes from the fact that you didn't provide the src arg while running the script, i.e. something like python script.py srcpath dstpath
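In other words, src and dest should come from argv (or from input()), not both. A trimmed sketch of the intended flow, assuming the same file-copy approach as the question:
from os import listdir, path
from os.path import isdir, isfile
from shutil import copy2
from sys import argv

def backup(src, dest):
    # both directories must already exist
    for directory in (src, dest):
        if not isdir(directory):
            raise SystemExit(f"could not find {directory}")
    for name in listdir(src):
        src_path = path.join(src, name)
        if isfile(src_path):  # plain files only, as in the question's version
            copy2(src_path, path.join(dest, name))

if __name__ == "__main__":
    if len(argv) != 3:
        raise SystemExit(f"Usage: {argv[0]} SRC DEST")
    backup(argv[1], argv[2])  # run as: python script.py srcpath dstpath
    print("Backup successful!")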
So I am fairly new with coding in Python and in general and I am trying to write a program that will backup files in a giving folder. However, I continue to get a "NameError: name 'src' is not defined. I see some other questions similar about this error but none have yet to make me understand what I am doing wrong or why I get this error. As far as I understand it I am defining 'src' in the code below. Any help would be greatly appreciated. ERROR: File "/home/student/PycharmProjects/Lab1.py/Lab5.5.py", line 1, in processing backup(src, dest) NameError: name 'src' is not defined def backup(src, dest): #Checking if src and dest directories exist sourceFilePath = input('Enter folder path to be backed up') destFilePath = input('Please choose where you want to place the backup') #found = true for directory in [src, dest]: if not isdir(directory): print(f'could not find {directory}') found = False if not found: exit(1) #for each file in src for sourceFileName in listdir(src): #computing file paths sourceFilePath = path.join(src, sourceFileName) destFilePath = path.join(dest, sourceFileName) #backing up file copy2(sourceFilePath, destFilePath) #entry point if __name__=='__main__': #validating length of command line arguments if len(argv) != 3: print(f'Usage: {argv[0]} SRC DEST') exit(1) #performing backup backup(argv[1], argv[2]) #logging status message print('Backup succesful!')
Issue with NameError
The output from @@VERSION from the Managed Instance is misleading. Because a Managed Instance is an evergreen deployment (it is ALWAYS the latest version) this means that it is always a newer version than any version of SQL Server you will get on a VM. You will need to consider other methods such as BACPACs or replication to get your DB copy on the VM
I am trying to restore a backup from an Azure Managed Instance to a SQL Server running on an Azure VM. When running the backup script, I get this error message: Msg 3169, Level 16, State 1, Line 2 The database was backed up on a server running version 15.00.2000. That version is incompatible with this server, which is running version 15.00.4073. Either restore the database on a server that supports the backup, or use a backup that is compatible with this server. Firstly, when running SELECT @@VERSION the managed instance seems to run on version 12.0.2000.8, and secondly, the new SQL Server is running a newer version than the source DB according to the error message, so it shouldn't be a problem, right? What am I missing? Thanks in advance Justus
Why can't I restore a SQL Backup on a newer Version? Error 3169
The backups only support restoring all of the data as of a point in time. You cannot specify a portion of the data from every table to restore. For now, your requirement cannot be achieved this way.
Could someone please help me achieve this? My requirement is as follows: I have a database in Azure with scheduled backups (point-in-time restore options available), and now I want to restore one of the dated database backups to my local machine. But I don't require all the data from all the tables; I only need some portion of the data from every table. Say, for example, I have 2 tables with 1000 and 2000 records respectively, and they are related; now I want to take only 100 and 200 related records from the backup-and-restore option.
Taking Database Backups with partially data from SQL Server tables
Redirect the output of a find command into a while loop:
while read line;
do
  dattim=${line%%/*};
  filpath=${line#*/};
  echo "zip -r $dattim-backup.zip $filpath";
  # zip -r "$dattim-backup.zip" "$filpath"
done <<< "$(find /path/to/directory -name "*.txt" -printf "%TY-%Tm/%h/%f\n")"
Find all files with the extension txt and print the date of modification in the format outlined along with a "/", the leading directories, "/" and the file name. Process the output through a while loop. Extract the date/time stamp into the variable dattim and the file path into the variable filpath. Use these variables to generate the commands to add the files to the dated zip files. Verify that the echo displays the commands as expected and then remove the comment flag to execute the actual zip command.
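If you'd rather avoid the shell quoting, here is an alternative sketch in Python (my own addition, not part of the answer above) that groups files by modification month and appends them to the matching archive:
import os
import time
import zipfile

src = "/folder"  # directory from the question

for entry in os.scandir(src):
    if not entry.is_file() or entry.name.endswith(".zip"):
        continue  # skip directories and the archives themselves
    stamp = time.strftime("%Y-%m", time.localtime(entry.stat().st_mtime))
    # e.g. 2020-12-backup.zip, created on first use, appended to afterwards
    archive = os.path.join(src, f"{stamp}-backup.zip")
    with zipfile.ZipFile(archive, "a", zipfile.ZIP_DEFLATED) as zf:
        zf.write(entry.path, arcname=entry.name)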
I want to compress and archive (e. g. zip or tar.gz) a lot of files which are all stored in /folder. It should create archive files like yyyy-mm-backup.zip including all original files which were modified in that time range. Sample files: fileA.txt (modified date 2020-12-01) fileB.txt (modified date 2020-12-02) fileC.txt (modified date 2021-01-01) Should create archive files: 2020-12-backup.zip (include fileA.txt and fileB.txt) 2021-01-backup.zip (include fileC.txt) Is this possible with a single zip or tar command? Or do I need a script or using logrotate?
Create compressed archive with filename based on year and month of compressed files on linux system
if=/dev/sda is reading the entire disk and of=/dev/sdd1 is writing to a partition, which doesn't make much sense. You may want to clone the entire disk onto another disk: dd if=/dev/sda conv=sync,noerror status=progress bs=64k of=/dev/sdd Or better yet, clone to a compressed image: dd if=/dev/sda | gzip > /sda.img.gz And restore like so (decompressing to stdout): gzip -dc /sda.img.gz | dd of=/dev/sda
I'd like to back up a SSD which I'm using for CentOS. Trying to learn dd. My drive is a fairly simple GPT partition of 120GB. I run "dd" to copy the image of sda to a USB stick sdd1: [root@localhost ~]# dd if=/dev/sda conv=sync,noerror status=progress bs=64k of=/dev/sdd1 120029118464 bytes (120 GB, 112 GiB) copied, 30810 s, 3.9 MB/s 1831575+1 records in 1831576+0 records out 120034164736 bytes (120 GB, 112 GiB) copied, 30810.8 s, 3.9 MB/s But then when I examine the USB stick, there is nothing to be seen on it and I see no way to mount it this is what appears under the Disks command Question is: How do I access the image? (As a side note, I read a claim that the dd command is like the IBM JCL statement of the same name. I was a mainframe programmer. The IBM DD command is often still called a "DD Card". It doesn't copy files. It just joins your file declaration in your program to some external file. To copy a file the old skool way is to use IEBGENER)
Trying to back up CentOS using the "dd" command
The easiest way to back up a directory to Amazon S3 would be:
Install the AWS Command-Line Interface (CLI)
Provide credentials via the aws configure command
When required, run the aws s3 sync command, for example: aws s3 sync folder1 s3://bucketname/folder1/
This will copy any files from the source to the destination. It will only copy files that have been added or changed since a previous sync. Documentation: sync — AWS CLI Command Reference
If you want to be more fancy and keep multiple backups, you could copy to a different target directory, or create a zip file first and upload the zip file, or even use a backup program like Cloudberry Backup that knows how to use S3 and can do traditional-style backups.
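And if you prefer the "create a zip file first and upload it" variant, a small boto3 sketch along these lines (bucket name and paths are placeholders) could be scheduled from cron or Task Scheduler:
import datetime
import shutil
import boto3

# zip the directory, then upload the archive under a date-stamped key
archive = shutil.make_archive("/tmp/folder1-backup", "zip", "/data/folder1")
key = f"backups/folder1-{datetime.date.today():%Y%m%d}.zip"
boto3.client("s3").upload_file(archive, "my-backup-bucket", key)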
Use case: I have one directory on-premises and I want to back it up, let's say every midnight, and restore it if something goes wrong. It doesn't seem like a complicated task, but reading through the AWS documentation, even this can be cumbersome and costly. Setting up a Storage Gateway locally seems unnecessarily complex for a simple task like this, and setting it up on EC2 is costly too. What I have done: read through this + some other blog posts: https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html https://docs.aws.amazon.com/storagegateway/latest/userguide/WhatIsStorageGateway.html What I have found: 1. Setting up a file gateway (locally or as an EC2 instance): it just mounts the files to an S3 bucket, and that's it. So my on-premises app will constantly write to this S3 bucket. The documentation doesn't mention anything about scheduled backup and recovery. 2. Setting up a volume gateway: here I can make a scheduled synchronization/backup to S3, but using a whole volume for it would be a big overhead. 3. Standalone S3: just use a bare S3 bucket and copy my backup there via the AWS API/SDK with a manually made scheduled job. Solutions: using point 1 from above, enable versioning and the versions of the files will serve as a recovery point; or using point 3. I think I am looking for a mix of the file and volume gateways: working at the file level and making an asynchronous scheduled snapshot of the files. How should this be handled? Isn't there a really easy way which will just send a backup of a directory to AWS?
On-Premise file backup to aws
To get the date replaced with the formatted date, you need to force execution with '$' before the parentheses:
* * * * * rsync -avz -Iu --backup [email protected]:/home/database.sqlite /home/ProdBackups/dbBackup_$(date +\%Y\%m\%d\%H\%M\%S).sqlite
As an alternative, consider creating a small wrapper script backup.sh, and then execute backup.sh from the cron:
# ~/cron/backup.sh
timestamp=$(date '+%Y%m%d%H%M%S')
rsync -avz -Iu --backup [email protected]:/home/database.sqlite /home/ProdBackups/dbBackup_$timestamp.sqlite
And the cron will be:
* * * * * $HOME/cron/backup.sh
The advantage is that you can test the backup script, and then execute it as-is from the cron, without having to worry about quoting the date format, etc.
This command works if I input in shell in order to backup locally a database file from a remote server with the date appended to each new backup file so they are all different: rsync -avz -Iu --backup [email protected]:/home/database.sqlite /home/ProdBackups/dbBackup_(date +\%Y\%m\%d\%H\%M\%S).sqlite But when I try running it as a cronjob inside crontab: * * * * * rsync -avz -Iu --backup [email protected]:/home/database.sqlite /home/ProdBackups/dbBackup_(date +\%Y\%m\%d\%H\%M\%S).sqlite I get this error: Syntax error: "(" unexpected How can I fix this?
rsync command works but not as a cronjob
Percona XtraBackup 8.0.22-15.0 has been released, which supports MySQL 8.0.22. Please find the links below for more details: https://www.percona.com/doc/percona-xtrabackup/LATEST/release-notes/8.0/8.0.22-15.0.html https://jira.percona.com/browse/PXB-2314
I am a beginner in backups and restorations. I want to perform a backup and restoration using Percona XtraBackup 8.0.14 on MySQL 8.0.22. According to https://www.percona.com/blog/2020/10/23/mysql-new-releases-and-percona-xtrabackup-incompatibilities/, it seems that MySQL 8.0.22 is not compatible with percona 8.0.14. Anyone know how can I perform the backups and restoration of my DB ? I thought about downgrading MySQL from 8.0.22 to 8.0.21 but seems blurry in my mind. Notes: MySQL and percona are running in Docker containers.
Perform backup and restoration using percona xtrabackup on mysql 8.0.22
The answer is: it depends. Typically backups are made to some sort of network storage, so losing a node doesn't affect the backups. If for some reason the backups are stored locally to the system, then it would depend on whether you have HA enabled, and whether you are backing up the replica forests along with the primary forests. If HA is enabled, you could lose a node and keep running, giving you time to rebuild the lost node. Alternatively, if you are backing up both the primary and replica forests in your cluster, you would have a complete data set in your backups even if you lose a node.
According to ML 9 doco, a database is backed up onto all nodes in the cluster, but the backup process appears to backup the forests that are only local to each node. So for a database with 6 forests across 3 nodes, I may have 2 forests backup files on each node. If I have a 3 node cluster and lose one node ( so that one node is now 100% unrecoverable ), are all my backups now effectively useless as they will be missing the back up files for 2 forests? Or is ML smart enough to re-create the missing data from the dead node, via parity? Thanks.
What happens to Marklogic database backup if one cluster node is lost?
Short answer: No. Long answer: mysqldump creates a consistent snapshot if applied correctly (--single-transaction). Copying files grabs an inconsistent, possibly corrupted snapshot that could be full of problems. For other options consider innobackupex. Remember, you can make a user with read-only access for backup purposes. You can also run the backup process locally and save an encrypted backup stream somewhere else. Then if someone somehow intercepts this stream and saves it they still have nothing.
I'd like to run a cronjob to back up a database, but I'd rather not expose the credentials. Can I just back up the relevant folder like this: scp /var/lib/mysql/myDatabase [email protected]:/path/to/backup/myDatabase instead of using mysqldump myDatabase > myDatabase.sql -uUser -pPassword; <--- makes me twitch Is there a better way to do this without exposing credentials in a script? Or am I worrying for nothing?
MySQL: can i cron backup folder instead of using mysqldump so as not to expose credentials?
One approach is to utilize a Recovery Services vault. Thorough guidance is available at https://learn.microsoft.com/en-us/azure/backup/backup-afs
I have created Azure file share and used it as a persistent storage for my application deployed in openshift. The data is important and I need to take backups of my azure files also periodically. How to take backup of Azure file shares inside storage account?
How to take backup of Azure fileshares
As the screenshot that you provided shows, when you select Azure Backup, it selects the principal Backup Management Service and grants it the necessary permissions. In Terraform, it should be like this:
resource "azurerm_key_vault_access_policy" "example" {
  key_vault_id = azurerm_key_vault.example.id
  tenant_id    = "tenant_id"
  object_id    = "Backup Management Service object Id"

  key_permissions = [
    "get",
    "list",
    "backup"
  ]

  secret_permissions = [
    "get",
    "list",
    "backup"
  ]
}
Get more details about the key vault access policy.
I am deploying Azure infra using Terraform. I have an encrypted VM, and its backup keeps failing with the reason below: Azure Backup Service does not have sufficient permissions to Key Vault for Backup of Encrypted Virtual Machines I checked the docs and found I have to create a Key Vault access policy for Azure Backup. To set permissions: In the Azure portal, select All services, and search for Key vaults. Select the key vault associated with the encrypted VM you're backing up. Select Access policies > Add Access Policy. In Add access policy > Configure from template (optional), select Azure Backup. The required permissions are prefilled for Key permissions and Secret permissions. If your VM is encrypted using BEK only, remove the selection for Key permissions since you only need permissions for secrets. How do I do this in Terraform? I cannot find an example for this.
key Vault access policies for encrypted vm azure backup - Terraform
There is no direct option to select backup-failed alerts in the Alerts blade as of now; however, as an alternative, you can follow the steps below to achieve backup alerts.
To create the rule, navigate to App Service --> Monitoring --> Alerts and click on New alert rule. The resource will be automatically selected (and can be edited too).
Next, click on Select condition. This will take you to Configure signal logic. Set Signal Type to All and Monitor Service to All, and select the "All Administrative operations" signal name.
Select the chart period and the alert logic: Event Level - Error, Status - Failed. This rule gives all failed administrative alerts, including backup-failed alerts, with a detailed email.
Then select an Action Group if you already have one; otherwise click on Create action group and create one. After adding the action group, click on Email Alert to enter an email address or phone number. Click OK and save the changes.
Then add an alert rule name and select the resource group in Alert rule details.
Activity log rules take 5 minutes to activate. You can manage the alert rules by going to Monitoring --> Alerts --> Manage alert rules. Once the action is triggered, you will receive an email notification with a detailed description; since you have enabled the rule for failed operations, you will receive an email once an operation fails.
Today my scheduled backup failed and I had to go to the portal to find out the reason and monitor the status. Instead, is there a way to configure backup alerts that notify me of the status?
How to create App Service Backup Alerts for Failed Backups? [closed]
The Restore will restore the entire backup to a temporary directory. Once all of the stands/forests have been copied to disk it will swap the current forests/stands with the restored ones. API requests will still be serviced by the active stands, with the existing data. It will not use the restored data until after the restore completes. There will be a very small amount of downtime as the existing forests shut down and the restored forests start up. This is usually just a few seconds, but it does depend on how big the forests are. There is no difference in the behavior if you are restoring a full + incremental/s or only a full backup. Be aware that you will need enough disk space for both the current data and the restored data, as they will coexist for a period of time. References: Backup and Restore Transactions; Phases of Backup or Restore Operation
We are using MarkLogic 9. We have developed an API on top of MarkLogic, and we take a daily full backup, keeping the last 7 days. Now we are upgrading our MarkLogic server to 10, and as part of disaster recovery, if for some reason the upgrade fails and we need to restore yesterday's backup, I want to understand:
How does the restore process work - is the restore done for each stand while the remaining ones keep serving?
Will API requests be served during the restore process?
If API requests are served, which data will be used to serve those requests?
Do we need to plan downtime as part of the restore process?
If we go for incremental backups and then restore, will there be any difference to the above points?
MarkLogic - API Request & Restore Database
In general it's important to take regular snapshots of the metadata storage, as this is the "index" of what's in the Deep Storage. Maybe one snapshot per day, and store them for however long you like. It's good to store them for at least a couple of weeks, in case you need to roll back for some reason. You also need to back up new segments in deep storage when they appear. It isn't important to take consistent snapshots, just to get every file eventually. Also see https://groups.google.com/g/druid-user/c/itfKT5vaDl8 One other note, as you mentioned data loss: Deep Storage is not queried directly - queries execute on the local segment cache in, for example, the Historical process. The Deep Storage is written to at ingestion time, so you might "lose" data that can't be ingested once it's available again, but you will continue to get analytics capability because the already-loaded data is on the historicals. Just a thought! I hope that helps.
Comment (Rahul Vedpathak): But taking a backup of the whole deep storage is a resource-consuming task if you have a huge cluster and the data can be in TBs. It's better to back up data from deep storage incrementally instead of taking a full backup. Is there an existing utility that can help achieve that?
Comment (Peter Marshall): I'm afraid I'm not an S3 expert: maybe something like this is what you need? stackoverflow.com/questions/21479110/s3-incremental-backups
I am new to druid. In our application we use druid for timeseries data and this can go pretty large(10-20TBs). Druid provide you facility of deep storage. But if this deep storage crashes/or not reachable then it will result in data loss and which in turn affect the analytics the application is running. I am thinking of taking an incremental backup druid segment data to some secure location like ftp server. So if deep storage is unavailable, then they can restore the data from this ftp server. Is there any tool/utility available in druid to incrementally backup/restore druid segment?
How to take druid segment data backup?
You need to import your schema into the new keyspace first; this error occurs because the server cannot find a schema label in your dataset. The steps for migrating schema are described in the docs: https://dev.grakn.ai/docs/management/migration-and-backup
I have created a backup of Grakn with the exporter tool like this: ./grakn server export 'old_test' backup.grakn $x isa export, has status "completed", has progress (100.0%), has count (105 / 105); I then wanted to import this into a new keyspace with ./grakn server import 'new_test' backup.grakn But I got this error below: An error has occurred during boot-up. Please run 'grakn server status' or check the logs located under the 'logs' directory. io.grpc.StatusRuntimeException: INTERNAL: java.lang.NullPointerException
NullPointerException on loading data into Grakn
If you are willing to make DESTINATION the first expected argument, then something like this should work for you:
DESTINATION=$1
TABLES=`echo ${@:2}|sed "s/\s/ -t /g"`
/usr/local/pgsql/bin/pg_dump --quote-all-identifiers --username=postgres -p 5432 -t $TABLES -h localhost mydb | gzip -1 > $DESTINATION
I have a script which backups multiple tables in a single line as follow: /usr/local/pgsql/bin/pg_dump --quote-all-identifiers --username=postgres -p 5432 -t schema.table1 -t schema.table2 -t schema.table3 -t schema.table4 -h localhost mydb | gzip -1 > file.dmp.gz I've created a new sh script to be able to re-utilize the command as follow: backup_table.sh $TABLE=$1 $DESTINATION=$2 /usr/local/pgsql/bin/pg_dump --quote-all-identifiers --username=postgres -p 5432 -t $TABLE -h localhost mydb | gzip -1 > $DESTINATION As you can see, this works for only 1 table, I'm not sure how to pass multiple tables to the sh script (-t table1 -t table2 -t table3 etc) I could use arrays, but still, not sure how to code this. Thanks!
backup multiple tables on one single sh script
You have several ways; two of them were already suggested by @John Hanley:
If you have access to the GCP console, you can create a disk snapshot of the VM running WordPress (most convenient and easy - also reliable).
If you just have SSH access to the VM - just manually back up all the files & database.
Use one of the many available backup/migration plugins.
I have a client's website built on WordPress and hosted on Google Cloud. Now I want to back it up before developing the new website for him. Is there any way to download all the files as a zip, or any other option?
How to download WordPress files from Google Cloud Platform
Try the next code, please:
Private Sub Workbook_Open()
    Dim wb As Workbook, shC As Worksheet
    Dim sh As Worksheet, i As Long, strBackup As String, arr As Variant

    Set shC = ThisWorkbook.ActiveSheet 'this should be clear...
    strBackup = Range(ThisWorkbook.Names("BackupPath")).Value 'extract the string from the named range
    Set wb = Workbooks.Add 'open a new workbook
    shC.Copy before:=wb.Worksheets(1) 'copy the active sheet before the existing one
    If wb.Worksheets.Count > 1 Then 'delete all sheets, except the first
        For i = wb.Worksheets.Count To 2 Step -1
            Application.DisplayAlerts = False
            wb.Worksheets(i).Delete
            Application.DisplayAlerts = True 're-enable alerts after the silent delete
        Next i
    End If
    arr = Split(strBackup, ".") 'split the path on the dot "."
    'the last array element will be the extension
    arr(UBound(arr)) = "xlsx" 'change the existing extension to "xlsx"
    strBackup = Join(arr, ".") 'join the processed array and obtain the correct path
    wb.SaveAs strBackup, xlWorkbookDefault 'save the workbook
    wb.Close False 'close it without saving
    MsgBox "A backup has been done, like " & strBackup
End Sub
I need to create a backup copy of the active sheet into a new workbook, so that the new workbook is created with only the active sheet in it (no macros, no VBA). I need it to happen on the "After Opening" workbook event. I am doing the following: Private Sub Workbook_Open() ActiveWorkbook.SaveCopyAs "E:\Projects\FolderName\FileName.xlsm" End Sub It copies the entire workbook, with all the VBA code and macros in it, which is not what I need. Is there a way to only copy the active sheet? Ideally, I would want to use a cell reference (I store the file path in a different sheet, in a separate cell named "BackupPath").
Creating a backup copy of Active Sheet using Excel vba
As @Andre suggested I managed to use FileSystemObject to copy the backend while the frontend is in use.
Function BackUpBE()
On Error GoTo Err_backup
    Dim fso As Object
    Set fso = VBA.CreateObject("Scripting.FileSystemObject")
    Dim strNewBEname As String
    Dim strOldBEname As String
    Dim strDateStamp As String
    strOldBEname = "P:\Access Datenbank\Durament_db_be\Durament_db_be.accdb"
    'strOldBEname = "\\192.XXX.XX.XXX\Daten\Access Datenbank\Durament_db_be\Durament_db_be.accdb"
    strDateStamp = Format(Date, "d.m.yy")
    strNewBEname = "P:\Access Datenbank\Durament_db_be\BackUp\" & "Backup_vom_" & strDateStamp & ".accdb"
    'strNewBEname = "\\192.XXX.XX.XXX\Daten\Access Datenbank\Durament_db_be\BackUp\" & "Backup_vom_" & strDateStamp & ".accdb"
    'copy current BE to Folder
    Call fso.CopyFile(strOldBEname, strNewBEname)
    MsgBox "The back-end database has been backed up!"
Exit_Backup:
    Exit Function
Err_backup:
    MsgBox Err.Number & Err.Description
    Resume Exit_Backup
End Function
I am trying to automatically back up the backend of my split database which is located on a network drive. Unfortunately, I keep getting the error displayed in the title. Code: Function BackUpBE() On Error GoTo Err_backup Dim strNewBEname As String Dim strOldBEname As String Dim strDateStamp As String strOldBEname = "P:\Access Datenbank\Durament_db_be\Durament_db_be.accdb" 'strOldBEname = "\\192.XXX.XX.XXX\Daten\Access Datenbank\Durament_db_be\Durament_db_be.accdb" strDateStamp = Format(Date, "d.m.yy") strNewBEname = "P:\Access Datenbank\Durament_db_be\BackUp\" & "Backup_vom_" & strDateStamp & ".accdb" 'strNewBEname = "\\192.XXX.XX.XXX\Daten\Access Datenbank\Durament_db_be\BackUp\" & "Backup_vom_" & strDateStamp & ".accdb" 'copy database FileCopy strOldBEname, strNewBEname MsgBox "The back-end database has been backed up!" Exit_Backup: Exit Function Err_backup: MsgBox Err.Number & Err.Description Resume Exit_Backup End Function The code simply copies the current backend into another folder. At first I thought it was a server related issue concerning a password that is required. So I mapped the drive and used a local path, however, it still does not work. I have already stepped through the code using f8 and the error occurs upon exiting out of the function which does not make much sense to me. I appreciate any hints that would allow me to find the faulty part within my code, thanks in advance.
Error 70 Permission Denied when attempting to Backup Database Backend
To get the hour, you can use the environment variable %time%, which is perfect for this task. You will need to extract the hour from %time% because it has the format hh:mm:ss,cs:
for /f "tokens=1 delims=:" %%T in ("%time%") do set "hour=%%T"
So the final command (assuming you are running both commands in the same batch file) for SQL Server will be:
sqlcmd -S SRV01 -E -Q "BACKUP DATABASE [databasename] TO DISK = N'C:\Backup\databasename\hourly_%HOUR%.BAK'"
I want to make a backup of 1 database in SQL Server every hour via a Windows batch file. The hour should be included in the filename. sqlcmd -S SRV01 -E -Q "BACKUP DATABASE [databasename] TO DISK = N'C:\Backup\databasename\hourly_%HOUR%.BAK'" How to solve this?
Add the Hour of backup SQL Server command in Batchfile
Read carefully please! The script has a safeguard echo in the rmdir line. Do not remove this until you are 100% sure the script does what you want.
@echo off
for /f "tokens=2 delims=.=" %%i in ('wmic os get localdatetime /value') do set result=%%i
set "mydate=%result:~0,8%"
robocopy "d:\dat" "f:\backup\%mydate%" /MIR /Z
for /f "skip=4 delims=" %%a in ('dir /b /ad /o-d "f:\backup\"') do echo rmdir /S/Q "%%~fa"
pause
So the script will create a folder for each date (yyyymmdd) each time you run it. If the folder already exists, i.e. you ran the backup twice in one day, it will simply update the files and not recreate any folders. The second for loop is the one you have to be careful of: it sorts the folders by descending date, i.e. the latest created folders are listed first. You will see here I have skip=4, meaning it will skip the 4 latest folders and delete the rest. So if you want to keep the two latest backups, use skip=2, etc. To shorten the date to yyyymm only, change to set "mydate=%result:~0,6%". You get the idea.
I have a .bat file that does a backup, but I want to make another .bat file that removes the oldest files. Can someone help me?
set dia=%DATE:~0,2%
echo %dia%
if exist f:\exist.txt goto OK
echo KKKKKKKKKK
pause
exit
:OK
md f:\backup
md f:\backup\%dia%
xcopy d:\dat\*.* f:\backup\%dia%\*.* /s /c /h /r /e /y /j
echo TODO OK
pause
How to do a backup with .bat with windows? [closed]
The best way to get backups for SQL VMs (among other things) is to register your VM with the SQL Server VM resource provider. Once you do, you can manage backups from the Azure portal and configure storage accounts directly using blobs instead of mounted file shares, which I believe mount based on the context in which you mounted them. After following the steps, you should get SQL management blades in the Azure portal for your VM as if you created the VM based on an existing SQL image. https://learn.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/sql-vm-resource-provider-register?tabs=azure-cli%2Cbash
Register the SQL VM resource provider to your subscription:
az provider register --namespace Microsoft.SqlVirtualMachine
Get the existing Compute VM:
$vm = Get-AzVM -Name <vm_name> -ResourceGroupName <resource_group_name>
Register with the SQL VM resource provider in full mode:
New-AzSqlVM -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -SqlManagementType Full
I have a standalone MS SQL Server and an Azure file share mounted on the same Windows server; however, when I try to configure a backup maintenance plan I do not see the Azure file share, which is mapped to Z. Changing the user context of the SQL Server and SQL Server Agent services to the local user does not help. How can I use the mounted Azure file share for direct SQL Server backups?
Azure file share for SQL Server backups
The simple thing you can do is to set an 'expired' date instead of an 'obsolete' flag, and make the maximum backup period shorter than the expiration period. Adapt your delete job to process only 'expired' items. You will have to store files for a longer period, but any non-expired backup can then be restored consistently. Regards.
I'm currently building a backend application that will need to store and serve a lot of images and videos (more than 1 million of each). The files will obviously get uploaded and sometimes deleted, but not updated. I did some reading on the issue of whether to store the files in my database or in the file system and decided to use the file system for file storage (and keep metadata in the DB). How I upload files: I'm uploading files synchronously via a REST endpoint on the file server. How I delete files: I delete files asynchronously by marking them as obsolete in the DB metadata and then letting a cron job on the file server do the actual deleting. Now, at the moment I'm trying to find out how I would back up my database and file server in a synchronized fashion. I'm not much of a DevOps guy and didn't find anything useful during my Google research (probably because I lack the right terminology for a good question). My current plan: my naive idea is to always back up the database metadata first and, once that finishes, back up the files. I suspend my garbage collector for the duration of the file backup. If I restore my DB, there might be orphaned files on the file system, but that's OK (I guess...). Are there logic holes in my current strategy? What better options are there? At the moment I'm not using any cloud storage solution, so please don't suggest stuff that e.g. Amazon offers for services like S3. Thanks!
Synchronize File Server and Database Backups?
Using git, it might not be possible. But using PyCharm, luckily, it is possible. You can use its built-in Local History feature to bring the file back from the dead. Just right-click your project name in the Project section (left panel) > Local History > Show History. Locate your file in that popup window and just click the Revert button. Note: you can verify the file changes; the history will list a "Deleting" entry, and after pressing the Revert button the file will show up in your Project section again. See details here
I was having trouble pushing my file to a repo, so I read to use: 'git checkout filename' and now my script is gone. Is there a backup section in pycharm or github where I can recover this script? How else could I recover this file, I've read some other posts but don't fully understand and don't want to put any more commands in case I make it worse. Thanks
recover deleted script when used 'git checkout filename'
Yes, it's possible to upload your users' data to their respective Google Drive accounts and retrieve it when a user installs your app again. You can follow this link to learn more: https://www.c-sharpcorner.com/article/google-drive-integration-in-flutter-upload-download-list-files
Objective: backing up my Notekeeping app data such that I can sync it back if I lost my data. May I know how am I able to do it? I have tried searching for answers online, but unfortunately I am unable to find a clear approach to it. I have seen some apps that uses googldrive, or zip file approach. May I know what is the best approach? And also how am I able to do it. Thanks! All help is deeply appreciated!
Hi, Is it possible to backup my app data?
I can't work out if restoring the volume and attaching it to an instance will bring it back, or if using ec2 backup and restoring will also bring back the information on the volume. Both will work. The EC2 backup will be a little easier to recover because you won't have to manually connect the restored volume to an instance. The EC2 backup includes an EBS snapshot of any volumes attached to the instance.
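Both paths can also be exercised on demand from the AWS CLI. A hedged sketch, with placeholder IDs and names:

# Snapshot just the root EBS volume
aws ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --description "manual root volume backup"

# Or back up the whole instance as an AMI (includes snapshots of attached volumes)
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "manual-instance-backup" \
  --no-reboot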
I have an ec2 instance with one volume as root drive. I've set a AWS backup of the volume, is this the best procedure or should I ec2 backup not the volume. I can't work out if restoring the volume and attaching it to an instance will bring it back, or if using ec2 backup and restoring will also bring back the information on the volume. Any info would be helpful.
Backup of AWS ec2 root drive
When you switch databases, you get a new schema_migrations table. In the new database, schema_migrations is empty, so Rails thinks you have pending migrations. I think you need to re-run the migrations in your new database, or use something like a database dump to migrate the data from the old database to the new one. answered Jan 9, 2020 at 1:28 Bùi Nhật Duy
I have a Rails app which uses db my_database_development in my config/database.yml: development: <<: *default database: my_database_development Works correctly when I run rails server. Now I want to use another db, so I change my config/database.yml: development: <<: *default database: my_prev_database Now when I run rails server, they give me an ActiveRecord::PendingMigrationError. To resolve this issue, run: bin/rails db:migrate RAILS_ENV=development. When I run that command, my_prev_database gets cleared. I don't want that to happen. I want to use my_prev_database and all the data it has (which I backed up from somewhere) How can I switch database in Rails effectively? Thank you!
How to switch Rails database without having to do migrations?
It sounds like you are trying to work with a dump file that was created on a Mac, and you are trying to restore it on another machine (or the other way around?). brew is a Mac package manager. The equivalent for Ubuntu is apt. To install Postgres on Ubuntu, please follow the steps on the Ubuntu page of the PostgreSQL website. To upgrade, you should be able to do apt-get upgrade. You may also not be able to brew upgrade postgresql on your Mac because Postgres may have been installed in another way (you may want to use Spotlight to find out whether your pg_dump lives in some .../Cellar/... folder or a .../Postgres.app/... folder (or some other folder), and perform your upgrade accordingly. In either case, it seems you are encountering this archiver error message because of a missing security patch
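If you want to verify the mismatch before upgrading anything, comparing the two client versions is enough; on Ubuntu a newer client can then be installed from the PGDG repository. The package version below is an assumption (archive format 1.14 corresponds to a PostgreSQL 12-era pg_dump), so adjust it to whatever actually created the dump:

# Compare the client that made the dump with the one trying to read it
pg_dump --version
pg_restore --version

# On Ubuntu, install a newer client package and retry the restore
sudo apt-get update
sudo apt-get install postgresql-client-12
pg_restore -d db_name -U user_name -C backup.dmp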
I am trying to restore the backup file to PostgreSQL DB by; pg_restore -d db_name -U user_name -C backup.dmp However, I get this error when I executed this command. pg_restore: [archiver] unsupported version (1.14) in file header The version of the PostgreSQL DB I'm using is 11.6 (Ubuntu 11.6-1.pgdg18.04+1) I assumed I need to upgrade PostgreSQL DB so I tried this command on the server. brew upgrade postgresql However, I get this error this time.. Error: postgresql not installed I would appreciate any help or advice. Thank you very much.
How to restore PostgreSQL database from dump file
You can have a look at this project: https://github.com/Zenika/alpine-firestore-backup I'm a contributor on it; don't hesitate to ask if you have questions or want new features. answered Dec 2, 2019 at 13:20 guillaume blaquiere Comments: "Thanks, I will check this library. It seems to nicely automate the backup process." – user8246956 Dec 3, 2019 at 10:33 "It's more than a library: you get a Docker container that you can deploy on Cloud Run and customize with environment variables. Then create a Cloud Scheduler job to trigger the Cloud Run service periodically and perform the backup." – guillaume blaquiere Dec 3, 2019 at 10:59
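If you end up scripting this yourself instead, the managed export/import discussed in the question is driven by these commands (bucket name and dates are placeholders):

# Firestore (native mode): export the whole database to Cloud Storage
gcloud firestore export gs://my-backup-bucket/$(date +%F)

# Datastore-mode equivalent
gcloud datastore export gs://my-backup-bucket/$(date +%F)

# Restore from a given export prefix
gcloud firestore import gs://my-backup-bucket/2019-12-02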
Hello Google Cloud Platform users! I am interested in a solution for a regular (let's say daily) backup of Datastore/Firestore databases. Typical use: for some reason (bad "manual" operation, bug, whatever), a series of entities have been wrongly modified or destroyed, or the database is corrupted; in that case, the database version from the previous day will be restored. I know this has been discussed in previous posts, but mostly through gcloud datastore|firestore import|export through files hosted on Google Cloud Storage. The problem is that for large databases (typically for professional applications with thousands and thousands of entities), this approach can take huge time and resources, even if launched in batch during the night (and it can only get worse when the database increases). A solution that I have thought about would be to copy to another Datastore/Firestore dataset at each upsert, but that seems like overkill, since Datastore/Firestore services already guarantees replica anyway. But most of all: it does not address the issue of unwanted writing or deletion of entities if this second database is 100% synced with the original one... Are there best practices to backup Datastore/Firestore entities for this use case? Any (brilliant) idea is welcome! Thanks.
Backup of Datastore/Firestore without gcloud import/export
--include really means "don't exclude"; everything is implicitly included to begin with, so --include is used to override a previous --exclude, and rules are applied in order, first match wins. You want something like

options=(
  -avz -e 'ssh -p 28'
  --relative
  --exclude website1.com   # skip everything under website1.com ...
  --exclude website2.com   # ... and website2.com
  --include '*/'           # recurse into the remaining directories ...
  --include '*.php'        # ... keeping the .php files ...
  --exclude '*'            # ... and nothing else
  --delete-during
  --backup
  --backup-dir=/mnt/usb/shares/me/ubuntu_backup/
  --suffix=".""201911261032"
)

rsync "${options[@]}" \
  /var/www \
  [email protected]:/mnt/usb/shares/me/ubuntu/
I have a problem. I am trying to use rsync to copy all the .php files from my /var/www to another server, except for website1 and website2, so I created this command: rsync -avz -e 'ssh -p 28' --relative --exclude={'website1.com','website2.com'} --include='*.php' --delete-during --backup --backup-dir=/mnt/usb/shares/me/ubuntu_backup/ --suffix=".""201911261032" /var/www [email protected]:/mnt/usb/shares/me/ubuntu/ I would like to see that all the .php files are being rsynced, but when I run this command, not only the .php files are being coppied, but also .jpg, .csv, etc. How can I make this work?
rsync Include/Exclude specific file type
First, to get a clear understanding of the RETAINDAYS parameter, please go through the following blog link. But if I understand your issue correctly, your concern is to prevent the job from executing on Sunday, and that is configurable in the job's schedule: you can select or deselect any day of the week (under the Frequency section) so the job executes as per your requirement.
I have already defined a SQL Server Agent job, in order to backup 'mydb' in a specific disk location. BACKUP DATABASE [MYDB] TO DISK = N'C:\Dummy.bak' WITH RETAINDAYS = 3; I planned this job to be performed every day of the week, but not Sunday. The question is: can I set to have n different backup instances in my bak file instead of using n retaindays parameter, AND to delete others?
SQL Server set number of Backup to retain
You can use Clonezilla to make a bootable copy of the whole existing SSD with all of its partitions including Windows. The boot menu comes from Grub2 and it gets created from templates in /etc/grub.d and settings from /etc/default/grub. So, if your Clonezilla ISO file lives at /srv/iso/clonezilla-live-disco-amd64.iso and the /srv directory lives on hard drive 0 in partition 13, then create a new executable file in /etc/grub.d, such as 40_clonezilla, and put the following in it:

#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
menuentry "Clonezilla live" {
  set root=(hd0,13)
  set isofile="/iso/clonezilla-live-disco-amd64.iso"
  loopback loop $isofile
  linux (loop)/live/vmlinuz boot=live union=overlay username=user config components quiet noswap nolocales edd=on nomodeset ocs_live_run=\"ocs-live-general\" ocs_live_extra_param=\"\" keyboard-layouts= ocs_live_batch=\"no\" locales= vga=788 ip=frommedia nosplash toram=live,syslinux,EFI findiso=$isofile
  initrd (loop)/live/initrd.img
}

Then run update-grub to regenerate your grub menu. When you reboot, you will have a new boot option that boots from Clonezilla, and, from there, you can make a bootable copy of the existing hard drive onto an external drive and overwrite whatever is already on that external drive. All of this, editing Grub templates and overwriting drives, is quite dangerous and the penalty for getting something wrong is high.
I have an Acer Aspire R laptop with 260GB SSD, UEFI, Ubuntu and Windows 10 dual boot. How can I backup / clone / image the whole drive to be reinstalled on a new drive if the current drive fails? Clonezilla: Will it backup all partitions (EFI, recovery, Ubuntu, swap, Windows) to an external drive, so I can restore it to a new drive, no problem? Which file system should the external drive have? GParted: Or should I partition the external drive like the existing drive and copy the partitions with GParted?
How to backup laptop SSD with UEFI [closed]
See my answer at Kafka connect - string cannot be casted to struct But, essentially, FileStreamSink can only write string values of records, and is not meant for production use, but rather as an example for writing your own sink connectors, which would then require you to implement a source connector to read that binary data back into a topic If you don't want to implement your own Connector, then you'd need to implement some other consumer or look at mirroring your data to a secondary, backup Kafka cluster. As mentioned elsewhere, backup of just a topic does not backup its configurations or any consumer groups associated with that topic
I'm trying to backup Kafka data using FileStreamSink connector. I know there are better options but my company already have file backup infrastructure (based on NetApp), so I'd like to dump Kafka data to a binary file and backup the file. As the data stored in Kafka is encrypted, so we don't have a schema to use or transform. I tried to use this setting, but doesn't seem to work well: key.converter=org.apache.kafka.connect.converters.ByteArrayConverter value.converter=org.apache.kafka.connect.converters.ByteArrayConverter Do you have suggestions for this case? Thanks.
Write Kafka data to binary file using FileStreamSink connector
The simplest option is CTAS (Create Table As Select), i.e. create table my_table_backup as select * from my_table; Or, use Data Pump Export / Import utilities. Or, as it is just a single table, the original EXP / IMP utilities might also work. Or, spool the data into a CSV file and load it back using SQL*Loader (or the external tables feature). Quite a few options; I'd start with option 1 (CTAS). answered Oct 16, 2019 at 22:23 Littlefoot Comment: "The simplest option would be to do nothing and then use flashback table if necessary. But that operation has some caveats and is not always reliable, so you'd want to create a backup anyway, in case flashback doesn't work." – Jon Heller Oct 17, 2019 at 0:21
I want to have a backup of a specific table because a I want to change one of it's field, if changes don't work, apply the backup and restore the initial state. I'm using plsql developer
how to backup a specific table oracle?
If this is an server admin only operation, you may use iseed to generate seeding files from your existing data in database. The restoration would be easy through Laravel's database seeding. If this is a user operation, then you would probably need to program your own import / export feature. You may use a combination of fopen('php://output') with fputcsv for the database to csv export. Then you may reference some available tutorials to write the csv to database import with Eloquent.
I am working on a project in Laravel where I have to take backup of the database (mySQL). I have learned to take backup of entire database or selected tables. But the challenge for me is to take backup of only some rows. Here is the scenario: There are 4 tables users posts tags post_tag Their relations user hasMany posts and each post belongsTo a user post hasMany Tags and a Tag hasMany posts If if initiate backup of a user (where userId = 1), Then I should get a backup file containing all the four tables mentioned above with data related to userId = 1. Also, how to restore the data? Updates It is a role based application. (There are 2 roles editor and author) Editor has the privilege to backup and restore data of author.
How to take backup of partial data in Laravel?
Where metadata is stored differs drastically between file formats (ID3/MP3 and APE: start or end of the file; MP4, FLAC/Vorbis and RIFF: anywhere), especially once you modify it rather than providing it when the file is created. Just try it yourself and watch how file sizes change when you edit metadata in each format, and how the file begins and ends before and after the modification. At the end of the day it's much easier to "just" copy the whole file whenever it differs at all than to first analyze which format it is, which metadata system(s) it uses, and what has changed in there and to what extent. answered May 23, 2020 at 14:19 AmigoJack
I have thousands of audio files in mp3, m4a, ape, flac formats on my PC. I made a copy (backup) of them to an external disk drive. If I changed the metadata (e.g. ID3) of some audio files on my PC, what is the best way to sync the changed metadata to the external disk drive? File syncing program may be a choice, but copying the whole file is much slower. I want to sync the metadata only.
How to sync audio metadata (e.g. ID3) between file copies?
Unless you add the -name test before the filename, find is going to consider "$i" to be the name of a directory to search in. So your find command should be:

find -name "$i" -type f -mmin -1440

which will search in the current directory. Or:

find /path/to/dir -name "$i" -type f -mmin -1440

which will search in a directory named "/path/to/dir". But, based on BashFAQ/099, I would do this to delete all but the newest file for each VM (untested):

#!/bin/bash

declare -A newest   # associative array to store name of newest file for each VM

for f in *
do
    vm=${f%%-*}     # extracts vm name from filename (i.e. vm001 from vm001-2019-08-01.bck)
    if [[ -f $f && $f -nt ${newest["$vm"]} ]]
    then
        newest["$vm"]=$f
    fi
done

for f in *
do
    vm=${f%%-*}
    if [[ -f $f && $f != ${newest["$vm"]} ]]
    then
        rm "$f"
    fi
done

This is set up to run against files in the current directory. It assumes that the files are named as shown in the question (the VM name is separated from the rest of the file name by a hyphen). In order to use an associative array, Bash 4 or higher is required.
I have a remote server that copies 30-some backup files to a local server every day and I want to remove the old backups if and only if a newer backup successfully copied. With different codes I tried, I managed to erase older files, but I got the problem that if it found one new backup, it deleted ALL older ones. I have something like (picture this with 20 virtual machines): vm001-2019-08-01.bck vm001-2019-07-28.bck vm002-2019-08-01.bck vm003-2019-07-29.bck vm004-2019-08-01.bck vm004-2019-07-31.bck vm004-2019-07-30.bck vm004-2019-07-29.bck ... And I'd want to erase all but keep only the most recent ones. i.e.: erase: vm001-2019-07-28.bck vm002-2019-07-29.bck vm004-2019-07-31.bck vm004-2019-07-30.bck vm004-2019-07-29.bck and keep only: vm001-2019-08-01.bck vm002-2019-08-01.bck vm003-2019-07-29.bck vm004-2019-08-01.bck the problem I had is that if I have any recent backup of any machine, files like vm-003-2019-07-29 get deleted, because they are older, even if they are of different machines. I know there are several variants of this question in the site, but I can't quite get this to work. I've been trying variants of this code: #!/bin/bash for i in ./*.bck do echo "found" "$i" if [[ -n $(find "$i" -type f -mmin -1440) ]] then echo "$i" find "$i" -type f -mmin +1440 -exec rm -f "$i" {} + fi done (The echos are for debugging purposes only) At this time, this code finds the newer and the older files, but doesn't delete anything. If I put find "$i" -type f -mmin +1440 -exec echo "$i" {} +, it never prints anything, as if find $i is not finding anything, but when I run it as a solo command in the terminal, it does (minus the -exec part). I've tested this script generating files with different timestamps using touch -d, but I had no success.
How to delete older files but keep recent ones during backup?
Found out that I had to redo the folder permissions. This is done the following way (step 5, "Change permissions for the new data directory", of the linked tutorial): for the new data directory folder, right-click on it and click Properties. Under the Security tab click “Edit...” and then “Add...”. Type “Network Service”, click “Check Names”, make sure it has Modify and Full Control permissions, then click OK. Equally important, PostgreSQL needs to be able to “see” the data directory (see my ServerFault.StackEx question), i.e. it needs read access to the parent directories above it. So right-click on the pg_db folder and, under the Security permissions, add Network Service again, but this time it only needs Read & Execute as well as List folder contents permissions. The full post is a nice checklist to go through for anyone else facing similar issues: https://radumas.info/blog/tutorial/2016/08/08/Migrating-PostgreSQL-Data-Directory-Windows.html
I'm writing some batch scripts for doing incremental backups of a PostgreSQL cluster on a Windows Server. I copied the Data folder to a different folder, ran my backup scripts, stopped the service, deleted the Data folder, and tried recovering the database from the WAL files and such. This didn't work, because i copied the wrong log files, and i couldn't get the service started again, so i tried copying back in the original Data folder, but i still can't start the service. The first script i ran called: pg_basebackup -Fp -D %BACKUPDIR%\full_%CURRENTDATE% This was the only line which actually ended up interacting with the data, but not the original Data folder, which i copied beforehand. When trying to start the service again i get the following error message: The postgresql-x64-10 - PostgreSQL Server 10 service on Local Computer started and then stopped. Some services stop automatically if they are not in use by other services or programs. I have gotten this before, when making a typo in the conf file, so i'm guessing that's just the standard error message for when something is missing.
How do i restart PostgreSQL service after putting back the original Data folder?
This should do the trick, pretty sure the issues you might have had were around the (Get-Date) in the middle of a string and possibly the fact directories can't contain special chars like / or : which are in the date. Also as other people have said, do this as an administrator. Get-Service SERVICENAME | Stop-Service -Force Start-Sleep -seconds 20 $Date = (Get-Date).ToString().Replace("/","-") $Date = $Date.Replace(":","-") Copy-Item "\\remote path\folderderA" -Destination "\\Remoate path\folderA-$Date" -Recurse -Verbose if(Test-Path("\\Remoate path\folderA-$Date")){Remove-Item -Path \\remote path\folderA -Verbose} Start-Sleep -seconds 20 Start-Service SERVICENAME Get-Service SERVICENAME
How to Stop a service Backup a folder/contents in a remote path as per sys date Remove the contents of the original folder Finally to start the service Get-Service SERVICENAME Stop-Service SERVICENAME -Force –PassThru Start-Sleep -s 20 Copy-Item -Path \\remote path\folderderA -Destination \\Remoate path\folderA(Get-Date) -Recurse -Verbose Remove-Item -Path \\remote path\folderA -Verbose Start-Sleep -s 20 Start-Service SERVICENAME -Force –PassThru Get-Service SERVICENAME The above code is throwing an error.
PowerShell script to stop service and backup a folder in remote path
Velero (formerly Heptio Ark) gives you tools to back up and restore your Kubernetes cluster resources and persistent volumes.

Velero lets you:
- Take backups of your cluster and restore in case of loss.
- Copy cluster resources to other clusters.
- Replicate your production environment for development and testing environments.

Velero consists of:
- A server that runs on your cluster
- A command-line client that runs locally

https://github.com/heptio/velero answered Jul 15, 2019 at 10:37 Ijaz Ahmad
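To make that concrete, the client side of a backup/restore cycle looks roughly like this; the names, schedule and TTL here are placeholders rather than anything prescribed by Velero:

# Ad-hoc backup of the whole cluster
velero backup create full-cluster-backup

# Daily scheduled backup, kept for 30 days
velero schedule create daily-backup --schedule="0 2 * * *" --ttl 720h0m0s

# After a disaster, point the client at the rebuilt cluster and restore
velero restore create --from-backup full-cluster-backup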
I'm going to backup the master node using this script: DATA=$(date +"%m-%d-%y-%H-%M") ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key /etc/kubernetes/pki/etcd/healthcheck-client.key snapshot save /opt/backup/etcd/snapshot-$DATA.db In case of a disaster recovery, what's the best practice in order to restore the master node? I've this in my mind: Re-install, if it is possible, the master node with the same IP After the installation of the master node, use a specific command to import the saved database (what's the command in this case?) I think that at this point, all of our slaves will detect the master node, but I've some questions: After this re-installation, the master node is blank, so, is there a way to backup also the pods/jobs/volumes informations to completely restore the cluster? is there a opensource kubernetes backup software?
Kubernetes disaster recovery - Reinstall the master node and import etcd backup
If you choose to back up your images to another ACR, you can use the import command: https://learn.microsoft.com/en-us/azure/container-registry/container-registry-import-images This even works if you want to copy images from any other registry into ACR. answered Jun 21, 2019 at 0:38 Siva G
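A minimal sketch of that import, with a hypothetical backup registry and image name (repeat per tag, or script it over the repository list):

# Copy one image from the production registry into a backup registry
az acr import \
  --name backupregistry \
  --source prodregistry.azurecr.io/myapp:1.0.0 \
  --image myapp:1.0.0

# List repositories to drive a loop over everything you want to archive
az acr repository list --name prodregistry --output tsv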
I need to take back up of images which are store in azure container registry. The container registry is growing large with version of images. And i am planning to automate clean up process to archive images which are older or unused images. we can do save images using docker save command in the cli docker save -o <path for generated tar file> <image name> But i need to automate this process for set of images which are stored in azure container registry or any registry.
How to take backup of images in container registry
It's not terribly difficult to put together your own archiving scripts, but there are a few things you need to keep track of, because when you need your backups you really need them. There are some packaged backup systems for PostgreSQL. You may find these two a good place to start, but others are available. https://www.pgbarman.org/ https://pgbackrest.org/ answered May 13, 2019 at 18:47 Richard Huxton
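For a feel of the day-to-day usage of those two tools, a hedged sketch follows; the stanza and server names are placeholders, and both tools need their own configuration first, which is not shown here:

# pgBackRest: take a full backup of a configured stanza, then list what exists
pgbackrest --stanza=main --type=full backup
pgbackrest --stanza=main info

# Barman: back up a configured server and list its backups
barman backup pg-prod
barman list-backup pg-prod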
i am in no way a db admin, so please don't shoot me if i'm doing it completly wrong ... I have to add some archiving to a productive postgres database (newest version in docker container) and trying to build some scripts to use with WAL. The idea is to have a weekly script, that does a full backup to a new directory and then creates a symlink to this new directory that is used by the WAL script to write it's logs. Also the weekly script will delete old backups older than 30 days. I would be very happy for any comments on this... db settings wal_level = replica archive_mode = on archive_command = '/archive/archive_wal.sh "%p" "%f"' archive_timeout = 300 weekly script: #!/bin/bash #create base archive dir #base_arch_dir=/tmp/archive/ base_arch_dir=/archive/ if [ ! -d "$base_arch_dir" ]; then mkdir "$base_arch_dir" chown -R postgres:postgres "$base_arch_dir" fi #create dir for week dir="$base_arch_dir"$(date '+%Y_%m_%d__%H_%M_%S') if [ ! -d "$dir" ]; then mkdir "$dir" chown -R postgres:postgres "$dir" fi #change/create the symlink newdir="$base_arch_dir"wals ln -fsn "$dir" "$newdir" chown -R postgres:postgres "$newdir" #do the base backup to the wals dir if pg_basebackup -D "$newdir" -F tar -R -X fetch -z -Z 9 -U postgres; then find "$base_arch_dir"* -type d -mtime +31|xargs rm -rf fi crchive script: #!/bin/bash set -e arch_dir=/archive/wals arch_log="$arch_dir/arch.log" if [ ! -d "$arch_dir" ]; then echo arch_dir '"$arch_dir"' does not exist >> "$arch_log" exit -1 fi #get the variables from postgres p=$1 f=$2 if [ -f "$arch_dir"/$f.xz ]; then echo wal file '"$arch_dir"/$f.xz' already exists exit -1 fi pxz -2 -z --keep -c $p > "$arch_dir"/$f.xz Thank you in advance
postgres backup with WAL
The short answer is "no", there's no support for dumping the firmware over WiFi. I've not looked at how WiFi Update is implemented so I'm not saying it can't be done at all - just that you're going to have to implement it yourself. Just like Update, if the sketch doesn't already support it, it is likely not possible at all (so you can't backup from an ESP32 flashed with just any old sketch).
It is widely known that you can update firmware over-the-air using <Update.h> functionality: receive blob size over the network, call Update.begin(blob_size), consecutively call Update.write() until its done, then call Update.end(), and restart the board. But is there a way to do a backup of current firmware binary using WiFi?
Arduino OTA Firmware Backup ESP32
I just realized that you don't even have to edit elasticsearch.yml to set the path.repo setting; you can add it as an environment variable in your StatefulSet like this:

env:
  - name: path.repo
    value: "/mnt/backup"
I have to edit elasticsearch.yml in order to create a backup (setting the path.repo like this is necessary): path.repo: /mnt/backup But I have elasticsearch running on Kubernetes, and I would like to set the path.repo from a statefulset or something similar to all pods at the same time. Can anyone tell me how to do that? Thanks I tried to do this with configmap like this: https://discuss.elastic.co/t/modify-elastic-yml-file-in-kubernetes-pod/103612 but when I restarted the pod it threw an error: /usr/share/elasticsearch/bin/run.sh: line 28: ./config/elasticsearch.yml: Read-only file system
How can I edit elasticsearch.yml on kubernetes pods, with a statefulset, or something similar?
Just slightly alter the databases variable: touch $logfile timeslot=`date +%d%m%y%H%M%S` #databases=`sudo su - postgres -c "psql template1 -c '\l'|tail -n+4|cut -d'|' -f 1|sed -e '/^ *$/d'|sed -e '$ d'"` databases="test_database" for i in $databases; do if [ "$i" != "template0" ] && [ "$i" != "template1" ]; then timeinfo=`date '+%T %x'` echo "Backup and Vacuum started at $timeinfo for time slot $timeslot on database: $i " >> $logfile su - postgres -c "vacuumdb -z -U postgres $i >/dev/null 2>&1" su - postgres -c "pg_dump $i --exclude-table-data=sale_order -U postgres | gzip > \"/tmp/openerp-$i-$timeslot-database.gz\"" cp /tmp/openerp-$i-$timeslot-database.gz $backup_dir/openerp-$i-$timeslot-database.gz chown $user:$user $backup_dir/openerp-$i-$timeslot-database.gz timeinfo=`date '+%T %x'` rm /tmp/openerp-$i-$timeslot-database.gz echo "Backup and Vacuum complete at $timeinfo for time slot $timeslot on database: $i " >> $logfile fi done
so I have this problem, with this script below it goes for each database and creates a backup for that database. this line for i in $databases; do but how could I modify this script to make back up only for 1 database named "test_database"? #!/bin/bash # Location of the backup logfile. logfile="/home/erp/backups/logfile.log" #erp user user="antonp" # Location to place backups. backup_dir="/home/erp/backups" if [ ! -d $backup_dir ]; then mkdir $backup_dir chown $user:$user $backup_dir fi touch $logfile timeslot=`date +%d%m%y%H%M%S` databases=`sudo su - postgres -c "psql template1 -c '\l'|tail -n+4|cut -d'|' -f 1|sed -e '/^ *$/d'|sed -e '$ d'"` for i in $databases; do if [ "$i" != "template0" ] && [ "$i" != "template1" ]; then timeinfo=`date '+%T %x'` echo "Backup and Vacuum started at $timeinfo for time slot $timeslot on database: $i " >> $logfile su - postgres -c "vacuumdb -z -U postgres $i >/dev/null 2>&1" su - postgres -c "pg_dump $i --exclude-table-data=sale_order -U postgres | gzip > \"/tmp/openerp-$i-$timeslot-database.gz\"" cp /tmp/openerp-$i-$timeslot-database.gz $backup_dir/openerp-$i-$timeslot-database.gz chown $user:$user $backup_dir/openerp-$i-$timeslot-database.gz timeinfo=`date '+%T %x'` rm /tmp/openerp-$i-$timeslot-database.gz echo "Backup and Vacuum complete at $timeinfo for time slot $timeslot on database: $i " >> $logfile fi done
Script modification to backup only 1 database
If this game writes one file and only one file, replacing it every few seconds, then filtering on its timestamp is pointless; the file will always look freshly written. Just set up a scheduled task to copy the file every 5 minutes. It does not matter whether you use PowerShell or a batch file to do it. When you copy, of course give the copy a new name in the destination:

$TimeStamp = (Get-Date).ToString('MMddyyyyHHmmss')
Copy-Item -Path 'D:\Game\GameFileName' -Destination "D:\Game\Backup\GameFileName_$TimeStamp.bak"
There is a game that uses one save file. That save file is written to every couple of seconds. I want a skript to create a backup copy of that save file every 5 minutes. The output should look like this: c:\users\...\10000.sl2 <-- Original C:\users\...\backup\10000-15.02.2019_18-34.sl2 C:\users\...\backup\10000-15.02.2019_18-39.sl2 I tried to put something together using LastWriteTime PS C:\Users\...> $source = "C:\Users\...76561198109841889" >> $destination = "C:\Users\...76561198109841889\backups" >> >> Get-ChildItem $source -Recurse -Include *.sl2 | % { >> $name = $_.Name.Split(".")[0] + "_" + ($_.LastWriteTime | Get-Date -Format yyyymmdd) + "_" + ($_.LastWriteTime | Get-Date -Format hhmmss) + ".sl2" >> #$name = "Finished_" + ($_.LastWriteTime | Get-Date -Format yyyymmdd) + "_" + ($_.LastWriteTime | Get-Date -Format hhmmss) + ".sl2" >> #$name = "Finished_" + $_.Name.Split(".")[0] + "_" + ($_.LastWriteTime | Get-Date -Format yyyymmdd) + "_" + ($_.LastWriteTime | Get-Date -Format hhmmss) + ".sl2" >> Rename-Item $_ -NewName $name >> Copy-Item "$($_.Directory)\$name" -Destination $destination I found this code by googling my question. Hitting Enter produces a bunch of files but then nothing happens again.
Create a backup Copy of a file every 5 minutes using PowerShell
1 In short, use CBMANAGER whenever you can. CBMANAGER - "Designed for the Enterprise Edition, it replaces the cbbackup and cbrestore tools as the primary and recommended means of backup and restore for Enterprise customers from version 4.5 and above" https://docs.couchbase.com/server/6.0/backup-restore/backup-restore.html Share Improve this answer Follow answered Mar 19, 2019 at 10:51 deniswsrosadeniswsrosa 2,44111 gold badge1717 silver badges2626 bronze badges 1 Okay, both are open source right? means couchbase enterprise and community? can we download from couchbase.com and installed the same without any cost? – LetsNoSQL Mar 19, 2019 at 12:06 Add a comment  | 
While taking my couchbase backup I found two way in couchbase 1) cbbackup 2) cbmanager utility. What is the deference between both. I have tried both command and working fine but cbbackup is taking more time than cbmanager command. Please help in this case. Thanks
Difference between cbmanager backup and cbbackup command
I'm pretty sure the error is due to you passing an integer, not a string, to apiTimeout; try passing a string:

apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: azure-default
  namespace: velero
spec:
  provider: azure
  config:
    apiTimeout: "30"

The API spec and your error suggest it's looking for a string.
I am trying to install heptio velero (earlier known as Ark) for one of my k8s clusters. I took the following steps A]install prereq. original yaml file here B]install secrets kubectl create secret generic cloud-credentials --namespace velero --from-literal AZURE_SUBSCRIPTION_ID="" --from-literal AZURE_TENANT_ID="" --from-literal AZURE_CLIENT_ID="" --from-literal AZURE_CLIENT_SECRET="" --from-literal AZURE_RESOURCE_GROUP="name-of-resource-group-where-my-vm etc created typically starts with MC_ in azure" C]apply remaining k8s resources these files are the content of volume snapshot location --- apiVersion: velero.io/v1 kind: VolumeSnapshotLocation metadata: name: azure-default namespace: velero spec: provider: azure config: apiTimeout: 30 and backup storage location --- apiVersion: velero.io/v1 kind: BackupStorageLocation metadata: name: default namespace: velero spec: provider: azure objectStorage: bucket: "<blob name for bucket>"" config: resourceGroup: "<resource group name of my azure storage>" storageAccount: "<storage account name >" C]while looking at logs I found following error Failed to list *v1.VolumeSnapshotLocation: v1.VolumeSnapshotLocationList.Items: []v1.VolumeSnapshotLocation: v1.VolumeSnapshotLocation.Spec: v1.VolumeSnapshotLocationSpec.Config: ReadString: expects " or n, but found 3,error found in
error while installing heptio ark (velero) on Azure AKS
1 For SQL in Azure VM you can use the following. $vault = Get-AzureRmRecoveryServicesVault -ResourceGroupName "ressource group name" -Name "vault name" Set-AzureRmRecoveryServicesVaultContext -Vault $vault $containers = Get-AzureRmRecoveryServicesBackupContainer -ContainerType AzureVMAppContainer -Status Registered foreach ($container in $containers) { $items = Get-AzureRmRecoveryServicesBackupItem -Container $container -WorkloadType MSSQL foreach ($item in $items) { Disable-AzureRmRecoveryServicesBackupProtection -item $item -RemoveRecoveryPoints -Force } } Share Improve this answer Follow answered Feb 10, 2021 at 21:18 Charles GroleauCharles Groleau 1111 bronze badge Add a comment  | 
I'm currently running into a bit of a problem with Azure. My organization has several Recovery Service Vaults, one of which contians 6 backup items, within these items there are varying numbers of backups. One that I need to remove contains backups for SQL databases (not VM's, SQL DB backups). The only method through the GUI to remove these is doing them one at a time but we have hundreds that need to go. I have done some research but couldn't find a method in removing a specific backup item, just click methods for removing backups from within the Backup Item one at a time through the point and click method. I have found powershell solutions for removing the entire vault but as there are backup items in there we want to preserve, this won't work. Does anyone know of a powershell method to remove an entire backup item or at least remove all the backups from within a backup item so that I may then manually remove the backup item instead of going through hundreds of these through ye' ol' point-and-click?
Is there a method using Powershell to remove a "Backup Item" from within a vault in Azure?
The correct way to substitute with sed is sed 's/pattern_1/pattern_2/g' file_name. If you want the changes saved back to the file, use sed -i, which is better than redirecting the standard output. Here, since you gave sed no input, it wrote nothing to standard output, and you redirected that nothing into your file, so you ended up with an empty file. I am afraid the file is most likely lost once you overwrite it by redirecting standard output into it, but it is worth searching for recovery options. answered Mar 9, 2019 at 11:09 Loïs Rancilhac
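For the record, the in-place form that also keeps a safety copy looks like this (the .bak suffix is just a convention; both GNU and BSD sed accept it attached to -i):

# Replace in place, writing the original to ex1.f95.bak first
sed -i.bak 's/y_parameter/y/g' ex1.f95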
I had a file named "ex1.f95" in my shell, and wanted to change all the "y_parameter" for "y". Therefore, I used: sed "s/y_parameter/y/g" > ex1.f95 When I opened the file, everything was deleted! There was nothing written. Is there a way I can recover everything I had inside?
How can I recover a file in shell?
The errors are clear and verbose.

Error 1: after you hit Enter you had to provide the password for the DB user, and the password you typed was incorrect.

Error 2: you didn't specify the [-W] parameter, so it didn't ask for any password. That would only work if the Postgres server configuration were set to trust for localhost, but the default configuration is md5 or peer.

To solve this, all you need to do is understand the pg_dump tool. Example:

pg_dump -Fc -h localhost -d adempiere -U adempiere -v -W > file.backup

Understanding the parameters:

-h  the host where the database is located
-d  the database you are trying to back up
-U  the database user with backup privileges
-W  prompts for the database user's password after you run the command
-Fc generates a custom-format .backup file
-v  verbose; lets you see what is happening in the background

You can find more information about this command in the following link: https://www.postgresql.org/docs/9.5/app-pgdump.html
I'm using postgresql9.2 and OS Redhat6.9; While getting backup from postgresql database backup from command line found a error. I'm using two command like [root@clipntouch ~]# pg_dump -h localhost -U adempiere -W -F t live_3001 > database_dump_file.tar or [root@clipntouch ~]# pg_dump -U adempiere live_3001 | gzip > /home/database_dump_file.gz Find 2 error- 1. For first one- pg_dump: [archiver (db)] connection to database "live_3001" failed: FATAL: password authentication failed for user "adempiere" 2. For second one- psql.bin: FATAL: password authentication failed for user "root" Any best solution ?
How to get Postgresql backup by command in Redhat6.9?
Question Will 60-day snapshots (full snapshots with all data) be combined with 59-day snapshots (incremental snapshots)? Yes. The consistency of all snapshots will be maintained when you delete any snapshot including the oldest one. Technically, nothing is combined, each snapshot is just a list of pointers to stored data blocks. When you delete the oldest snapshot any data in that snapshot that has been overwritten in the next newer snapshot will be released (deleted). The list of blocks in the 60th snapshot will be merged into the 59th snapshot. The 59th snapshot now represents the entire disk volume. answered Jan 24, 2019 at 3:19 John Hanley
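If you would rather have GCP enforce the 60-day retention than script the deletions yourself, a snapshot schedule is one option. A hedged sketch, with placeholder region, zone and names:

# Create a daily snapshot schedule that deletes snapshots after 60 days
gcloud compute resource-policies create snapshot-schedule daily-60d \
  --region=us-central1 \
  --daily-schedule \
  --start-time=04:00 \
  --max-retention-days=60

# Attach the schedule to the VM's disk
gcloud compute disks add-resource-policies my-vm-disk \
  --resource-policies=daily-60d \
  --zone=us-central1-a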
I chose a snapshot as a way to backup the VM(google compute engine). I know that snapshots are incremental and automatically compressed. So I will take a snapshot every day at the appointed time. And I want to delete the snapshots that are older than 60 days. Question Will 60-day snapshots (full snapshots with all data) be combined with 59-day snapshots (incremental snapshots)?
How to back up using snapshots
According to your requirements, specified in comments, you can try: @echo off rem Set variables for size count: set total_size_backup=0 set total_size_origin=0 rem Find the most recent BACKUP folder: for /F "delims=" %%A IN ('dir /b /AD /OD') do set "folder_to_search=%%~fA" rem Find the size of all files inside the backup folder: for /R "%folder_to_search%" %%B IN (*.*) do ( set /a "total_size_backup+=%%~zB" ) rem Find the size of the original folder: for /R "full_path_to_folder_with_original_files" %%C IN (*.*) do ( set /a "total_size_origin+=%%~zC" ) rem Compare the two sizes from these two folders. If they are NOT the same include your code there. if %total_size_backup% EQU %total_size_origin% ( echo Well Done! Your newest backup files and your original files are up-to-date! pause>nul exit /b 0 ) else ( echo Ooops! Your newest backup files and your original files are out-of-date! Never worry! Running backup now, please wait... start /min /wait your_backup_file.bat echo Updated files successfully! exit /b %errorlevel% )
I'm writing backup batch file for a particular folder that is located in a shared network folder, to my personal Windows machine. I want to keep track of changes made on this network folder, so I keep a lot of backup folders which have a datestamp+timestamp name like 20181224145231 which is the backup folder created on the 24th of December (12), 2018 at 14h52min31sec. All of my backup folders of datestamp+timestamp are located in a separate folder. To do this I came up with a script that grabs date and time from the system and checks if a particular file in the original folder is different than the one located in the last backup folder using fc and a for loop to grab the last backup folder created in the past. Things have grown and I need to compare the content of the whole folder (with subfolders) and not just a file. And that's where I've hit a wall. I've looked into comp and 240, but can't seem to find a way. 241 syncs folders, but I want to create a new folder each time changes have occured. One thing I'm thinking about is creating a comparison file in both folders like a 7-zip file and running 242 on both of these, but it seems rather extreme. So summing up my question is: How to check if the most recent backup has the same files as with the network shared folder without third party-tools through batch file?
Compare two folders' contents instead of two files contents' through batch file
Yes you can drag and drop those folders (Application, Documents, Music...) into the dropbox folder as a backup. However, I would still recommend you select all the folders you want to copy first (Application, Documents, Music...) with maintaining the cmd key pressed while clicking. Then press alt + cmd + i and you will get the total number of files, and total size. It's a good indicator for how long it will take. Also, I would recommend to drag and drop only one folder at a time, because they can be huge, so if something messes up during the copy, you can start over with only one folder, not everything. Alternatively, for this kind of copy, I like to use the terminal, with the command rsync: it's more reliable, more lightweight and if something goes wrong, it can start back exactly from the last file copied. (But not everyone knows how to use it) Here is what I would enter successively into the terminal: rsync -avh ~/Documents ~/Dropbox/ rsync -avh ~/Downloads ~/Dropbox/ rsync -avh ~/Music ~/Dropbox/ rsync -avh ~/Movies ~/Dropbox/ rsync -avh ~/Pictures ~/Dropbox/ rsync -avh ~/Applications ~/Dropbox/ Also, beware that there are other files needed for your applications to be restored as they are (configuration, settings, cache...). They are mostly located into the ~/Library folder, but this can become tricky to save everything "by hand", and that's why there are specialized backup programs like Time Machine.
Please let me know if this question is better served elsewhere. To the extent that it is technical and potentially helpful to others, I'm hoping this post is okay (even though it is not code-related). I am having my Macbook Pro's battery replaced later today and want to make sure I don't lose too much in the case where they fuck up and have to wipe my computer (they said this is a possibility anytime a battery is replaced). This is a snapshot of my Finder window, clicked on Home: The Documents folder is nearly empty, as most stuff I would normally save there is in Dropbox instead. My question is - can i simply drag my Applications, Downloads, Pictures, Music, etc. folders into my Dropbox folder? I have the space (I have 1TB on Dropbox, with 850GB still available). I'm simply not sure if dragging these entire main folders is okay? They have icons in the folders which (I think) means that they are specialty folders, and I'm worried that they won't behave properly if moved into the Dropbox folder. Thanks in advance with this!! Edit: I dont use iCloud, and only have 5GBs there, but let me know if simply purchasing a larger iCloud storage and backing up there is the more obvious way to go?
Can I move my Downloads & Applications folder into my Dropbox Folder
1 Also look into dbatools.io (PowerShell tool) to do it easily Easier SQL Server Restores using DBATools - Stuart Moore https://dbatools.io/commands/#Backup https://dbatools.io/dr/ # What if you just want to script out your restore? Invoke Backup-DbaDatabase or your Maintenance Solution job # Let's create a FULL, DIFF, LOG, LOG, LOG Start-DbaAgentJob -SqlInstance localhost\sql2016 -Job 'DatabaseBackup - SYSTEM_DATABASES - FULL','DatabaseBackup - USER_DATABASES - FULL' Get-DbaRunningJob -SqlInstance localhost\sql2016 Start-DbaAgentJob -SqlInstance localhost\sql2016 -Job 'DatabaseBackup - USER_DATABASES - DIFF' Get-DbaRunningJob -SqlInstance localhost\sql2016 Start-DbaAgentJob -SqlInstance localhost\sql2016 -Job 'DatabaseBackup - USER_DATABASES - LOG' Get-DbaRunningJob -SqlInstance localhost\sql2016 Start-DbaAgentJob -SqlInstance localhost\sql2016 -Job 'DatabaseBackup - USER_DATABASES - LOG' Get-DbaRunningJob -SqlInstance localhost\sql2016 Start-DbaAgentJob -SqlInstance localhost\sql2016 -Job 'DatabaseBackup - USER_DATABASES - LOG' Get-DbaRunningJob -SqlInstance localhost\sql2016 # Now export the restores to disk Get-ChildItem -Directory '\\localhost\backups\WORKSTATION$SQL2016' | Restore-DbaDatabase -SqlInstance localhost\sql2017 -OutputScriptOnly -WithReplace | Out-File -Filepath c:\temp\restore.sql Invoke-Item c:\temp\restore.sql # Speaking of Ola, use his backup script? We can restore an *ENTIRE INSTANCE* with just one line Get-ChildItem -Directory \\workstation\backups\sql2012 | Restore-DbaDatabase -SqlInstance localhost\sql2017 Share Improve this answer Follow answered Dec 12, 2018 at 2:15 Jerry HungJerry Hung 16055 bronze badges Add a comment  | 
I need to make backup copies of my database and store them on another server I created this stored procedure for that task: USE [master] GO SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[sp_BackUpRecursosHumanos] @backupLocation NVARCHAR(200), @databaseName SYSNAME = NULL AS DECLARE @BackupName VARCHAR(100) DECLARE @BackupFile VARCHAR(100) DECLARE @DBNAME VARCHAR(300) DECLARE @sqlCommand NVARCHAR(1000) DECLARE @dateTime NVARCHAR(20) --DECLARE @Loop INT --DECLARE @backupLocation NVARCHAR(200) SET @DBNAME = @databaseName SET @backupLocation = @backupLocation SET @dateTime = REPLACE(CONVERT(VARCHAR, GETDATE(),101),'/','') + '_' + REPLACE(CONVERT(VARCHAR, GETDATE(),108),':','') SET @BackupFile = @backupLocation+REPLACE(REPLACE(@DBNAME, '[',''),']','')+ '_FULL_'+ @dateTime+ '.BAK' SET @BackupName = REPLACE(REPLACE(@DBNAME,'[',''),']','') +' full backup for '+ @dateTime BEGIN SET @sqlCommand = 'BACKUP DATABASE ' +@DBNAME+ ' TO DISK = '''+@BackupFile+ ''' WITH INIT, NAME= ''' +@BackupName+''', NOSKIP, NOFORMAT' END EXEC(@sqlCommand) Where to create a script: // Sqlbackup.bat /****************************************************************/ backup /***************************************************************/ sqlcmd -S DESKTOP -Q "EXEC sp_BackUpRecursosHumanos @backupLocation='C:\Users\dell\Documents\BackUp\', @databaseName='RecursosHumanos'" Here is saving the copy internally, My problem is how I keep it on another server
Backup SQL Server automatically
Okay I've found my answer with this command: zip -r - directory_to_archive/ | base64 Source: Zip file and print to stdout answered Dec 9, 2018 at 20:33 Waymix
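Since the whole point is avoiding a local archive on a full disk, the natural next step is to send that stream somewhere else and decode it there. A hedged example (host and paths are placeholders; base64 -d is the GNU decode flag and may be -D on BSD/macOS):

# Stream the archive, base64-encoded, straight to another machine over ssh
zip -r - directory_to_archive/ | base64 | ssh user@backup-host 'base64 -d > backup.zip'

# Or skip base64 entirely if you just need the raw bytes on the other side
zip -r - directory_to_archive/ | ssh user@backup-host 'cat > backup.zip'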
I'm on a server that hasn't enough disk space. I'd like to backup all the files by zipping a directory and directly output it in base64 only in bash. I tried to zip this directory, but my server doesn't have enough disk space. Is there a way to create a zip (or a tar) archive and output the base64 in live ?
How to output base64 from a directory zipping
Solved the issue: I passed 127.0.0.1:6362-6372 as the --from value, which is what the official Neo4j documentation gives.
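Putting that fix back into the command from the question, the working invocation looks like this (paths and names unchanged from the question):

neo4j-admin backup \
  --from=127.0.0.1:6362-6372 \
  --backup-dir=/home/ubuntu/neo4jdevdump \
  --name=neodbdump \
  --fallback-to-full=true \
  --check-consistency=true \
  --pagecache=4G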
neo4j-admin backup --from=IP:PortNum --backup-dir=/home/ubuntu/neo4jdevdump --name=neodbdump --fallback-to-full=true --check-consistency=true --pagecache=4G I am running above command to take backup of neo4j DB but facing error command failed: Failed to run a backup using the available strategies. I am using neo4j-admin backup rather than neo4j-backup, because neo4j-backup is deprecated.
command failed: Failed to run a backup using the available strategies. neo4j-admin backup command
What I'm using is something like this:

MAXKEEP=30
ZIPPER_EXT="gz"
find $LOG_DIR -type f -name "*.$ZIPPER_EXT" -mtime +$MAXKEEP -exec rm -rf {} \;

LOG_DIR is self-explanatory, and I compress my files after a certain amount of time (in another part of the script), so I'm looking for the compressed files only. That line erases compressed files older than 30 days, but it can easily be modified to suit your needs, I think. answered Sep 6, 2018 at 20:56 Andre Gelinas
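Adapted to the layout in the question, where each backup is a directory named after its date, the same idea might look like this; the path is a placeholder and -mtime +30 keys off the directory's modification time, not its name:

BACKUP_DIR=/backup/backup_collection
MAXKEEP=30

# Remove top-level backup directories not modified in the last $MAXKEEP days
find "$BACKUP_DIR" -mindepth 1 -maxdepth 1 -type d -mtime +"$MAXKEEP" -exec rm -rf {} +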
I'm very new in bash script and need to write this code. The purpose of this code is to delete older backups depending on how old they are. The folders name is the date they were made. I think I commented everything, so the idea should be easy to get. #!/bin/sh #delte backups automatically cd backup/backup_collection #make sure to be in the right directory todate=$(date +”%Y-%m-%d”) #today count_back=$(ls -l | grep "^d" | wc -l) #counts the number of folders in the current directory back_names=( $( ls . ) ) #array with all filenames for ((i=0; i<count_back; i++ )) do back_days[i]=$(( (todate +%s - todate +%s -d ${back_names[i]}) /86400 )) #this number tells us how many days ago this backup was done #the array with the days is already sorted from small to big y=$(((${back_days[count_back-1]} + 2) / 7)) #y is the newest date, how many weeks ago for ((i=count_back-2; i>=0; i—- )) do x=$(((${back_days[i]} + 2) / 7)) #how many weeks ago is the i-th entry if [ x<8 ] || [ [ x>=8 ] && [ x<=26 ] && [ y-x>=2 ] ] || [ [ x>=26 ] && [ x<52 ] && [ y-x>=4 ] ] || [ [ x>=52 ] && [ y-x>=8 ] ] then y=$x else rmdir backup/backup_collection/${back_names[i]} #we remove the specific folder fi done The code does not work yet. For example this line is not correct I think. back_days[i]=$(( (todate +%s - todate +%s -d ${back_names[i]}) /86400 )) I tried very much. Maybe someone can help me. I would appreciate it!
Code to delete older backups, ordered by date
I solved it on my own like this.

Create a temporary directory and cd there:
cd $(mktemp -d)

Get the bundle (full snapshot of the repository) from your OPs and move it there:
mv /tmp/yourrepo.bundle .

Clone the broken repository from GitLab to BROKEN:
git clone --mirror URL_to_yourrepo BROKEN.git

cd into BROKEN and verify the bundle. It should not report any errors; if so, continue:
cd BROKEN.git
git bundle verify ../yourrepo.bundle

Go back and then clone a new repository from the bundle file:
cd ..
git clone --mirror yourrepo.bundle LASTKNOWNGOOD.git

cd there and verify that all your refs are there as you expect them to be:
cd LASTKNOWNGOOD.git
git show-ref

Now add the local BROKEN clone from GitLab as a remote of this repository:
git remote add BROKEN ../BROKEN.git

Then push the contents of here to the BROKEN remote:
git push --tags --force --mirror BROKEN

By then, the BROKEN repository should be healed. cd into BROKEN and simulate a push to verify that it will do what you expect:
cd ../BROKEN.git
git push --tags --verbose --dry-run --mirror origin

If it looks like you expect it to, run the same command without --dry-run to heal the remote repository. It may report rejections for some refs, but as long as they belong to the hidden refs/merge-requests group you can safely ignore them. You also need to reopen all the automatically closed merge requests in GitLab for that repository.
By accident, I forcefully pushed from my local repository to the GitLab repository having mirroring active which then deleted all the hidden and GitLab related refs (for the merge request 'refs/merge-request') in the remote. I told my coworkers to stop interacting with the remote repository and asked OPs if I could get the daily backup for that repository. I received a repositoryname.bundle file. Now how can I recover the remote with this bundle file?
How to recover a gitlab repository from a git bundle?
Because you know your micro model you know the FLASH memory size and layout. The ST-LINK utility does not provide any method of chip identification, but you can work it around by resetting the target first and saving the output to a file:

ST-LINK_CLI.exe -Rst

STM32 ST-LINK CLI v3.2.0.0
STM32 ST-LINK Command Line Interface
ST-LINK SN : 0670FF485550755187194938
ST-LINK Firmware version : V2J29M18
Connected via SWD.
SWD Frequency = 4000K.
Target voltage = 3.3 V.
Connection mode : Normal.
Device ID:0x449
Device flash Size : 1024 Kbytes
Device family :STM32F74x/F75x
MCU Reset.

Then you can call another program (self written) to parse the result and get the uC model and memory size, and then execute ST-LINK_CLI with the calculated parameters. answered Aug 24, 2018 at 12:57 by 0___________ Comment: "My firmware file sizes are about 20 kilobytes. Flash memory size is 1 megabyte. Does that mean I have to create backups with a redundant constant size (if I use the same model)?" – ilya Aug 26, 2018 at 12:40
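Given the 1 MB flash size reported above, a full dump for this part would use the -Dump parameters from the question roughly as follows (0x08000000 is the usual STM32 flash base address; the size has to cover the whole flash even if the firmware itself is only about 20 KB, unless you deliberately dump less):

ST-LINK_CLI.exe -Dump 0x08000000 0x100000 firmware_backup.bin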
I have a console STM32 ST-LINK utility. It is able to dump firmware to bin file. But the problem is in parameters. GUI version shows address and size in the upper "Memory display" groupbox. But how do I know the memory size parameter without GUI ST-LINK utility? Here is a parameter list for console version: -Dump<Address> <Memory_Size> <File_Path>
Backup STM32 firmware using command line tools
One way would be to restore the backup to a separate database and query the column in question together with each record's primary key. Then transform the result of that query into an update statement that you can execute on the live database. I would advise trying this in a test environment first. answered Aug 21, 2018 at 20:06 Sil
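A rough sketch of that approach from the shell, assuming the nightly dump is a plain SQL file and using made-up database, table and column names:

# Restore the 2 a.m. dump into a scratch database next to the live one
mysql -e "CREATE DATABASE restore_scratch"
mysql restore_scratch < backup_0200.sql

# Copy only the damaged column back into the live table, joined on the primary key
mysql -e "
  UPDATE live_db.mytable AS l
  JOIN restore_scratch.mytable AS b ON b.id = l.id
  SET l.damaged_column = b.damaged_column"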
We back up our mysql database every day around 2.00am. yesterday, we did an accidental update to a column and that's affected the entire database instead of just one record. Question : Is it possible to get a column value from backup and use that to update the live database ?
Mysql retrieve data from back up sql
Look for Admin, System, Services. There is a Backup Service running there. You can edit or delete it there
On my Jira server v7.5.2 (CentOS 7), in /data/atlassian/jira/export, there is a bunch of zipfiles created every 3 hours, each around 200 Mb in size: ... 2018-Aug-14-0000.zip 2018-Aug-14-0300.zip 2018-Aug-14-0600.zip 2018-Aug-14-0900.zip 2018-Aug-14-1200.zip 2018-Aug-14-1500.zip ... Apparently they're automated backups. However, there is neither any Scheduled Job nor any cron job with such a timing. What could be creating these files? Is there any other Jira job scheduling or setting that I should check?
Jira backup files being automatically created every 3 hours
You don't need Terminal. Build the commands into a single string and run them with do shell script, which executes them sequentially and only returns when they have finished:
set yourScript to "command 1; command 2; command 3"
do shell script yourScript
(Apply quoted form of to any path variables you interpolate into the string, rather than to the whole script.)
I want to use AppleScript to do the following: SSH to a remote server and zip a folder, then download the zipped file to a local directory. I can already do both steps; the problem is that I don't know how to make AppleScript run them one after another. The code currently looks like this:
set currentTab to do script ("ssh balahbala")
delay 3
do script ("pswd") in currentTab
do script ("tar -cvzf xxx.tar.gz server_path") in currentTab
set currentTab to do script ("scp server:xxx local")
do script ("pswd") in currentTab
But I can't run scp before the tar completes, and I don't want to use delay for tar either, because then I would have to predict how long tar takes. Is there any way to make AppleScript run scp only after the tar completes?
Applescript run terminal commands one by one?
I am assuming you need to back up the VM on demand. One way to do this is from the Azure Portal: navigate to the VM that needs to be backed up, select Backup under the Operations section, and click Back up now. It gives you the option to specify a date until which you want to preserve the backup copy (a custom retention period).
My daily backup policy for VM's keeps backups for 30 days. Now I want to save a specific backup and keep it for a longer time. As a baseline for work. Is this possible with Azure? How can I configure this?
How to keep / save a specific Azure backup
With the code you posted, $allRestorePoints will only hold the restore points of the last VM in $VMs. To get the restore point count for all VMs, enumerate all restore points for all VMs and group by VM name:
$restorePoints = $VMs | ForEach-Object {
    Get-VBRRestorePoint -Name $_.Name |
        Where-Object { $_.CreationTime -gt $DateToCompare }
} | Select-Object VMName, CreationTime
$restorePoints | Group-Object VMName |
    Select-Object Name, Count |
    Export-Csv 'C:\output.csv' -NoType
Note that $DateToCompare should be an actual date, e.g. (Get-Date).AddDays(-10), rather than the bare number 10, for the CreationTime comparison to filter as intended.
I have written a little script to get the creation time of each restore point in the last x days:
$VMs = $ImportCSV # Names of Virtual Machines
$DateToCompare = 10
foreach ($vm in $VMs) {
    $allRestorePoints = Get-VBRRestorePoint -Name $vm.Name |
        where {$_.CreationTime -gt $DateToCompare} |
        Select-Object VMName, CreationTime
}
This script is showing me all the VM names and the creation time of the restore points in the last $DateToCompare days. But how can I count the restore points for each VM and export it to CSV?
How to Count Veeam Restore Points in PowerShell
In simple terms, the transaction log backup contains activity, but the full backup just contains data. For example, if you have an empty table, insert 100 rows, then delete all 100 rows, the table will still be empty. The full backup will just contain the empty table, but the transaction log backup will record the 100 inserts and deletes.
Comment (khurram shahzad): but in my case I am not seeing any activity on the database.
Comment (Rhys Jones): Can you use SQL Profiler or another monitoring tool to watch all database activity? For 500MB every 15 minutes there is definitely some activity.
The transaction log backup is 450 MB but the full backup size is 10 MB.
BackupType: 2
HasBulkLoggedData: 0
RecoveryModel: Full
BackupTypeDescription: Transaction Log
Auto growth setting: By 512 MB, Limited to 2097152 MB
Correction: the size of the mdf file is 512 MB; the current LDF size is around 450 MB every 15 minutes. Full backup size is 11 MB.
SQL Server Transactional log backup is larger than full backup
Let me try to change the feeling of inconceivable (I like this movie too... :) ). Artifactory backup works as follows:
Incremental - the first run is a full backup and backs up everything. From the second run on, it only applies the diffs between the backup folder and the actual state in Artifactory.
Full backup with retention - say you set the retention to 168 hours (a week); that means that when a backup runs, Artifactory checks whether the last backup finished 168 hours ago AND ran with no errors. The "AND" part here is very important: if a backup had an error message, the retention will not delete the previous backups, so you won't end up with missing data.
I would check the backup logs to make sure you don't see any issues, and if you do, resolve the errors; that should fix the retention issue. Hope this helps make the topic conceivable :)
We purged our backup directory after executing serious cleanup of dead docker images in our artifactory hosts registry. The weekly backup executed without a problem but the next incremental filled the disk. Looking at the docker images within the incremental backup, I see images from at least 4 months ago. Can someone explain this behavior or am I like Vizzini from the Princess Bride and I don't understand that word. Inconceivable. Thanks Peter
Artifactory incremental backup contains items modified months prior to our last weekly backup
Works fine for me: Data Studio 4.1.3 on Win x64, remote Db2 v11 on Linux x6, and I'm using the private key of the Db2 instance owner account inside Data Studio. However, my Linux allows both password authentication and public-key authentication, which may be significant. In Data Studio, which Run Method have you chosen for the backup? The default is jdbc. Did you click "Preview command" to see what Data Studio will submit? For jdbc, it just runs SYSPROC.ADMIN_CMD to perform the backup. Does it make a difference if you choose a run method of 'Db2 server CLP' in Data Studio? There is also a technote advising of a limitation, which may be relevant.
I'm using Data Studio to connect to a DB2 database server (the DB2 server is running on a Linux box). The server has disabled SSH password login and I can only SSH using a private key and a keyphrase. I have configured the SSH connection in Data Studio to use the private key and I'm able to establish a remote SSH connection from Data Studio. But when I try to Back Up the database, it fails with the following authorization error. com.ibm.datatools.cmdexec.RemoteExecutorAuthenticationException: com.ibm.tivoli.remoteaccess.RemoteAccessAuthException: CTGRI0000E Could not establish a connection to the target machine with the authorization credentials that were provided.CTGRI0000E Could not establish a connection to the target machine with the authorization credentials that were provided.CTGRI0000E Could not establish a connection to the target machine with the authorization credentials that were provided. Does anyone know what's causing the issue?
Backup operation fails for DB2
This most probably indicates that the import failed, for whatever reason. have you checked the karaf.log for any ERROR related log entries in general, and specifically anything that looks like it could provide you details about what could have failed during the import? BTW you can get more direct developer support on the mailing list of ODL's daexim project on https://lists.opendaylight.org/mailman/listinfo/daexim-dev.
We are performing online export/import using "Daexim". We are scheduling export job and checking its status using http://<controller-ip>:<restconf-port>/restconf/operations/data-export-import:status-export This URL used to return "complete" status till "Boron", but in Nitrogen we observed the state transitions from "scheduled" -> "in-progress" -> "initial". We are not getting "complete" state after "in-progress". Please let us know how to identify the status of export/import job. Export URL https://<odl-node>/restconf/operations/data-export-import:schedule-export/ Payload { "input": { "data-export-import:run-at": 500 } }
Opendaylight Export/Import using Daexim doesn't transition to "Complete" state
The traditional High-Availability design is: Data stored in Amazon RDS, preferably configured as Multi-AZ in case of failure Objects stored in Amazon S3 At least two Amazon EC2 instances for the application, spread across more than one Availability Zone — preferably created with Auto Scaling A Load Balancer in front of the instances An Amazon Route 53 domain name resolving to the Load Balancer This way, both instances are serving traffic (you can use two smaller instances if you wish). The Load Balancer performs continuous health checks. If an instance fails the health check, the load balancer stops sending it traffic, so users are minimally impacted. If Auto Scaling is configured, it can automatically replace an unhealthy instance. This can be done by providing a fully-configured AMI, or by providing a User Data script that installs and configures the software at startup (or a combination of both). When performing a software update: Update the Auto Scaling Launch Configuration, which defines how new instances should start (eg different User Data or AMI) Tell Auto Scaling to launch a new instance, then terminate an old instance — this is a rolling update If you can't do a rolling update (due to code change), deploy a second Auto Scaling group and test it. If everything is okay, point the Load Balancer to the new Auto Scaling group, then terminate the old one (after a few minutes to allow connection draining). This is very similar to what Elastic Beanstalk offers — it will create the Load Balancer and Auto Scaling group for you, and deploy code updates. The result is a highly-available, resilient architecture that can auto-recover from failure. It will also force you to use code repositories rather than manually updating servers, which leads to greater reliability and reproducibility. See: AWS Design for Web Application Hosting
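To make the rolling-update step above a bit more concrete, here is a rough boto3 sketch. The Auto Scaling group name is a placeholder and the fixed sleep is a crude stand-in for waiting on load balancer health checks, so treat it as an outline rather than a finished deployment script.

import time
import boto3

asg = boto3.client("autoscaling")
GROUP = "my-web-asg"  # placeholder group name

def rolling_replace(pause_seconds=180):
    # terminate old instances one at a time; Auto Scaling launches replacements
    # from the updated launch configuration
    group = asg.describe_auto_scaling_groups(
        AutoScalingGroupNames=[GROUP])["AutoScalingGroups"][0]
    for instance in group["Instances"]:
        asg.terminate_instance_in_auto_scaling_group(
            InstanceId=instance["InstanceId"],
            ShouldDecrementDesiredCapacity=False,  # keep capacity, force a replacement
        )
        time.sleep(pause_seconds)  # crude stand-in for checking ELB health

if __name__ == "__main__":
    rolling_replace()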
I have an AWS EC2 instance called primary. I have another EC2 instance called secondary. The primary instance IP is linked to the domain and contains all the hosted code and services. I want to be able to copy all the data (files/daemons/services etc.) from primary to secondary in real time. Can this be done via some service on AWS? Or if I have to write code, what kind of code/Linux script etc. am I looking at? Edits I am expecting the secondary instance to be able to instantly run the system that is being copied. As soon as a failover is detected, I will change the IP linked to the domain to this secondary machine. For now the system is using a database to store data, but we will be moving it to an RDS instance. The system is a Linux machine. I looked at Load Balancer, Auto Scaling group and EFS, but they don't solve my purpose. I looked at Elastic Beanstalk, but it seemed like overkill for what I am trying to achieve. I can be wrong here too. Any help is greatly appreciated.
Copy files and services in real time from one aws instance to another
By binary backup I assume you're referring to the online backup functionality. Online restore is not supported/implemented. IMO, there's not much point to even try to implement that since you'd likely end up essentially restarting all components/apps anyway to ensure the new state is properly synchronized/converged throughout the system. Hence you might as well just restart the karaf process. That said, if you have a strong requirement for online restore, we always welcome contributions. You also might want to look at the daexim project: https://wiki.opendaylight.org/view/Daexim:Main.
We are currently using the Nitrogen version of OpenDaylight. Our requirement is to restore the binary backup of MD-SAL without a restart of the ODL/Karaf server. Could you please advise on approaches to achieve this?
opendaylight dynamic backup restore
If you want a full backup of your machine, choosing 'files and folders' and 'system state' is your best option. Files and folders will allow you to recover individual files and folders on your machine; imagine a user accidentally deletes a file, you can recover it from the backup. System State will allow you to recover your system state (the configuration of your machine) if your machine becomes corrupted. The other items in there allow you to recover from specific sources (Hyper-V or VMware) or to take application-consistent backups. To recover a full machine, I would enable files/folders and System State backup. With Azure Backup you can restore either on Azure (to a VM on Azure) or on the source server. Make sure to also have a look at Azure Site Recovery: with it you can 'mirror' a machine to Azure, which allows you to very quickly bring the machine up on Azure in case of corruption. If your source is a VPS, you would only be able to restore to Azure with Site Recovery, not go back to the VPS.
I have a windows VPS, not on azure. I'm looking into the Azure backup services. Ideally I'd like to backup the whole VPS to azure. Lets say MY current VPS dies, then I can just use the Azure backup to create a new VPS, with all programs, settings, files, databases everything. I'm not sure which of the azure options to pick for this: Does anyone know any good resources or what each of the options mean, or have any suggestions? I've read lots on the Azure website but it's not particularly clear. Apologies if this is basic stuff or I've missed an obvious resource, I'm new to servers. Many thanks, Phil.
Azure Backup for VPS
A tag is generally used to mark a point where you make a release. Suppose we have released v1.00; to keep track of the code at that release we create a tag, so that we can easily find out what the code looked like at v1.00. In other words, a Git tag tracks a particular commit in a branch. We use branches to separate the main code (master branch) from developing (dev branch) or testing (test branch) code, and finally we merge them back into the main code (master branch). So in your case, commit your latest changes to a branch and tag it (for example: git tag -a before-cleanup -m "code before the 50% removal"). Hope this helps :-)
I need to take a backup at specific point in Github, as we are removing at least 50% of code now. What is the best way to handle this. PS: We dont have plans to use the code we are removing even in future, but just wanted to keep it as backup. Should I create a different branch or have a tag? Thanks.
Can I use tag to take backup of a specific point?
Your <form>'s action attribute is relative: <form action="/main/find-a-writer">. http://www.bwa.org/main/find-a-writer exists, http://www.stagebwa.org/main/find-a-writer does not, and http://www.stagebwa.org/find-a-writer does exist. If you remove the /main from the form action to make it just <form action="/find-a-writer">, it works just fine. I have a feeling that you didn't account for the fact that the live site lives in a subdirectory rather than at the root URL when you cloned it, but it should be as simple as that: just remove /main on the staging site.
So I just migrated my website, bwa.org/main, to the staging site for testing. The staging site is stagebwa.org. I used BackupBuddy and importbuddy.php to migrate the site, and for the most part it was fine. I've had to fix some little things, but I think it's because during the BackupBuddy process they asked me if I wanted to make the site homepage stagebwa.org/main and I chose stagebwa.org. I think this has caused a lot of problems with the search function. Here's how it's supposed to work: if you click 'begin specialized search' without inputting parameters, it will return all users in the results. From the regular site: http://www.bwa.org/main/find-a-writer/ Here's the staging site where it doesn't work; when you click the same button here, it brings you to the site's version of a '404 not found' page: http://www.stagebwa.org/find-a-writer/ The PHP logic is the same, so I know that's not the problem. I think it's because clicking the search button reloads the page as stagebwa.org/main/find-a-writer and that's messing the whole thing up. I'm pretty sure getting the page to reload on stagebwa.org/find-a-writer will solve the problem.
Migrated my website but one link doesn't work (PHP)?
I think there is no way to bring it back unless you saved the procedure definition somewhere beforehand (for example in a script file or source control).
I was replacing a procedure and realized that I need the procedure I overwrote. Is there a way to bring it back? Undo is not an option; I closed the query.
Is there a way to recover a closed query? (SQL)
Possibly off topic, but I got bitten by this calculation as well, and it may be relevant to your question. Say you want a retention period that translates into a maximum of n backups on a storage location. That means your storage should be big enough to hold at least n + 1 backups (not n, as I would naively expect). This is because the backup mechanism does the following: (1) check that the storage has enough space for one more backup, (2) do the actual backup, (3) check whether any stored backup is older than the retention period and remove any backup that is out of date. Because the cleanup happens after the current backup, the size calculation is confusing; for example, keeping three 400 GB backups actually needs at least 4 x 400 GB = 1.6 TB of space at the peak. A small sketch of the pre-backup space check follows below.
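For what it's worth, the space check in step (1) can be approximated outside Artifactory. This is only an illustration of the sizing point, with a placeholder path; Artifactory's own check works on its internal estimate, not on this script, and in the incremental case the current size of the backup folder is used here as a rough proxy for one more full backup.

import shutil
from pathlib import Path

BACKUP_DIR = Path("/mnt/nfs/artifactory-backups/myrepo")  # placeholder path

def folder_size(path: Path) -> int:
    # total bytes currently held by the (incremental) backup folder
    return sum(f.stat().st_size for f in path.rglob("*") if f.is_file())

def room_for_one_more(path: Path) -> bool:
    # the incremental folder holds one full mirror of the repo, so its size
    # roughly approximates the space a further full backup would need
    needed = folder_size(path)
    free = shutil.disk_usage(path).free
    return free > needed

if __name__ == "__main__":
    print("enough space for another backup:", room_for_one_more(BACKUP_DIR))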
I've performed 2 backups of 2 repositories in Artifactory. They are incremental and they check available disk space. The first time the backups went well; they are 400 GB and 150 GB and they are saved on an NFS share. Now when I manually start a second (incremental) backup I get the error: Free space available for backup: 88170561536 Not enough free space to perform backup snapshots. There are still 82 GB available and 550 GB taken. But I started an incremental backup on the smallest repo (150 GB), and there is maybe 100 MB or so of new data (probably less). Incremental should add only newly added artifacts to the existing backup, so why is this not working, or do I have to turn off the available disk space check in this case? Edit: I tried to turn off the available disk space check but I still receive: Not enough free space to perform backup releases. After I deleted all my backups and backup jobs, I recreated new backup jobs without the check-disk-space-before-starting option. The job was still complaining about a full disk until I enabled the option again. So now a backup is running again, but my incremental backup will never work; I always have to delete the full backup first.
Artifactory: incremental backup: Not enough free space while there is enough space left
You can trigger a Lambda function on demand: Using AWS Lambda with Amazon API Gateway (On-Demand Over HTTPS) You can invoke AWS Lambda functions over HTTPS. You can do this by defining a custom REST API and endpoint using Amazon API Gateway, and then mapping individual methods, such as GET and PUT, to specific Lambda functions. Alternatively, you could add a special method named ANY to map all supported methods (GET, POST, PATCH, DELETE) to your Lambda function. When you send an HTTPS request to the API endpoint, the Amazon API Gateway service invokes the corresponding Lambda function. For more information about the ANY method, see Step 3: Create a Simple Microservice using Lambda and API Gateway.
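As a minimal illustration of that setup, here is a sketch of a Python Lambda handler behind API Gateway (Lambda proxy integration). The payload shape and the idea of passing volume IDs are assumptions, and the quiesce/wake-up steps inside the instance would still have to happen separately, e.g. via the SSM Run Command approach mentioned in the question's edit.

import json
import boto3

ec2 = boto3.client("ec2")

def lambda_handler(event, context):
    # API Gateway proxy integration delivers the HTTP body as a JSON string
    body = json.loads(event.get("body") or "{}")
    snapshot_ids = []
    for volume_id in body.get("volume_ids", []):
        snap = ec2.create_snapshot(
            VolumeId=volume_id,
            Description="on-demand backup via API Gateway")
        snapshot_ids.append(snap["SnapshotId"])
    # proxy integration expects statusCode/body in the response
    return {"statusCode": 200, "body": json.dumps({"snapshots": snapshot_ids})}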
I'm currently setting up a small Lambda to take snapshots of all the important volumes of our EC2 instances. To guarantee application consistency I need to trigger actions inside the instances: One to quiesce the application before the snapshot and one to wake it up again after the snapshot is done. So far I have no clue how to do this. I've thought about using SNS or SQS to notify the instances about start and stop of the snapshot, but that has several problems: I'll need to install (and develop) a custom listener inside the instances. I'll not get feedback if the quiescing/wake-up is done. So here's my question: How can I trigger an action inside an instance from an Lambda? But maybe I'm approaching this from the wrong direction. Is there really no simple backup solution? I know azure has a snapshot based backup service that can do application consitent backups. Did I just miss an equivalent AWS service? Edit 1: Ok, it looks like the feature 'Run Command' of AWS Systems Manager is what I really need. It allows me to run scripts, Ansible playbooks and more inside an EC2 instance. When I've got a working solution I'll post the necessary steps.
AWS application consistent snapshots of EC2 instances
I just found this article that has step-by-step instructions for what I'm trying to do. So apparently it is possible using a combination of VMware and Clonezilla. https://www.howtoforge.com/converting-a-vmware-image-to-a-physical-machine Thanks for the comment-less downvotes though.
What I'm imagining is something like this: I spin up a VM or container with the OS and packages that I need, lets say an ansible script runs and provisions the vagrant or docker container exactly the way I want it. After that I use some tool, I'm thinking of tools like Systemback or Clonezilla, to make an iso image off of that vagrant or Docker. Then I would like to be able to take that iso image and install it directly onto a bare metal machine and it's ready to go. Basically an image and restore but the imaging happens in a VM or container. Is this possible? Is there anything I should know about the inner workings of Docker or Vagrant that wouldn't allow an image to be created and/or restored to a physical machine?
Create an image (iso) of a linux machine from VM or Docker then restore to bare metal
Every online physical backup contains a text file, backup_label, that records the log sequence number of the checkpoint at the start of the backup. There is also a file with a name ending in .backup created in the WAL archive that contains this information. Since 9.4, the view pg_stat_archiver contains information about archived WAL segments.
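A small Python sketch of reading both sources described above; the backup directory path and connection string are placeholders.

import psycopg2

def backup_start_info(backup_dir="/backups/base/latest"):
    # parse the key: value lines of the backup_label file shipped with the backup
    info = {}
    with open(f"{backup_dir}/backup_label") as f:
        for line in f:
            key, _, value = line.partition(":")
            info[key.strip()] = value.strip()
    # e.g. info["START WAL LOCATION"], info["CHECKPOINT LOCATION"], info["START TIME"]
    return info

def archiver_status(dsn="dbname=postgres"):
    # pg_stat_archiver (9.4+) reports the archiver's progress
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute("SELECT archived_count, last_archived_wal, last_archived_time "
                    "FROM pg_stat_archiver")
        return cur.fetchone()

if __name__ == "__main__":
    print(backup_start_info())
    print(archiver_status())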
In SQL Server there is an msdb..backupset table which keeps track of which backups were taken. In PostgreSQL, how can I capture which archive log has been archived and what the starting point of a backup is?
How to track postgres archive log files
This may be caused by the edit session crashing, as in item (2) of the message shown when running sudo vim market/resources/views/user.blade.php: the swap file then records the wrong original file path. Note that I really don't have the /root/market/resources/views/user.blade.php file, or even the /root/market/ path; this was the first time I created user.blade.php, the computer was shut down without saving it.
E325: ATTENTION Found a swap file by the name "resources/views/.user.blade.php.swp" owned by: root dated: Thu Nov 9 17:33:23 2017 file name: ~root/market/resources/views/user.blade.php modified: YES user name: root host name: localhost process ID: 128985 While opening file "resources/views/user.blade.php" (1) Another program may be editing the same file. If this is the case, be careful not to end up with two different instances of the same file when making changes. Quit, or continue with caution. (2) An edit session for this file crashed. If this is the case, use ":recover" or "vim -r resources/views/user.blade.php" to recover the changes (see ":help recovery"). If you did this already, delete the swap file "resources/views/.user.blade.php.swp" to avoid this message. Swap file "resources/views/.user.blade.php.swp" already exists! [O]pen Read-Only, (E)dit anyway, (R)ecover, (D)elete it, (Q)uit, (A)bort:
Solution: run sudo vim market/resources/views/user.blade.php. If vim shows the message above, just press R and then save the file. If it does not show the message, use :rec(over) and then save the file. Alternatively, open any other file to create a new buffer, e.g. sudo vim aaa.txt, then use :rec(over) user.blade.php and save; this will save the recovered content to user.blade.php, not to aaa.txt. Make sure aaa.txt and .user.blade.php.swp are in the same directory.
I want to recover /home/lufei/market/resources/views/user.blade.php; here is the location of the swap file:
> lufei@localhost:~/market$ ls resources/views/.user.blade.php.swp
> resources/views/.user.blade.php.swp
I use vim -r resources/views/.user.blade.php.swp to recover the file, and I get this: E306: Cannot open resources/views/.user.blade.php.swp Press ENTER or type command to continue. When I use sudo vim -r resources/views/.user.blade.php.swp I can open the file, but when I use :wq in vim I get "/root/market/resources/views/user.blade.php" "/root/market/resources/views/user.blade.php" E212: Can't open file for writing Press ENTER or type command to continue. From the info above, I think vim wants to save the file under the root directory. From "vi - getting an error E212: Can't open file for writing", I guess that creating the path /root/market/resources/views/ might work. But I want to save the file directly to the right place, /home/lufei/market/resources/views/user.blade.php. So I use su && cd /home/lufei/market, then vim -r resources/views/.user.blade.php.swp. But when I save (:wq) in vim, I again get the same E212 error. From the above, no matter whether I use a normal user or the root user, when I recover the file vim tries to save it to the wrong place, whose path does not exist. Can I recover the file directly to the right place, without creating a temp path and then copying the recovered file back to the right path?
Can't open file for writing when vim recover file with swp file
read-backup-data only kicks in if you read the value on server B itself (i.e. on a cluster member). It does not give you multiple servers as value sources when you are using clients; that would work against the way Hazelcast distributes not only data but also optimizes request latency by sending client requests directly to the record-owning cluster node, if that makes sense.
I use Hazelcast 3.8.4 and IMap. I set this in hazelcast.xml:
<map name="default">
  <backup-count>1</backup-count>
  <async-backup-count>0</async-backup-count>
  <read-backup-data>true</read-backup-data>
and I observe get/s per server in Management Center. Consider this situation: I put keys 3 and 4, where key 3's owner is server A and key 4's owner is server B. Before I set read-backup-data to true, if I get key 3, only server A's get/s goes up in Management Center. After I set read-backup-data to true, I expected not only server A's get/s but also server B's to go up, but it didn't. Why? Thanks in advance.
hazelcast doesn't read from backup data
This seems to be related to a bug in Puppet 4. This workaround applies:
puppet filebucket --local \
  --bucket /opt/puppetlabs/puppet/cache/clientbucket \
  list
UPDATE: piping the output of this command into sort -k 2 will sort entries by date (newest first).
I am exploring Puppet filebuckets with a manifest that contains the following excerpt: file { '/tmp/test' : backup, # ... } When I apply this manifest, Puppet reports that it backed up the old version of /tmp/test into the (local) filebucket puppet: Info: /Stage[main]/<module>/File[/tmp/test]: Filebucketed /tmp/test to puppet with sum <hash> This matches the following description in the documentation: Default value: puppet, which backs up to a filebucket of the same name. (Puppet automatically creates a local filebucket named puppet if one doesn’t already exist.) When I now try to inspect the contents of the filebucket with puppet filebucket --local list (or puppet filebucket --local --bucket puppet list) I get this error message: Error: Could not run: File not found What can explain this behavior and how can I successfully inspect the contents of the (local) filebucket? This is for Puppet version 4.10.5.
Exploring Puppet filebuckets: Error: Could not run: File not found
Does SharePoint 2013 restore only from the database? The short answer is no. A full-fidelity SharePoint farm backup is mostly databases, but there is also configuration information and data stored outside of the databases. The Central Admin backup facility (as well as the Backup-SPFarm PowerShell commands) initiates SQL backups as well as backups of all the stuff that isn't in SQL. That is the only point-and-click (or type a single command) solution. Could you get away with only having some of the databases to recreate your environment? Sure, but then you'd have to have a documented and tested (and ideally automated) process for recreating the farm from the databases.
Does SharePoint 2013 restore only from the database? I have a scheduled script in MS SQL Server that runs all database backups daily, and my SharePoint site also requires a daily differential/weekly full backup, which usually happens in Central Administration. I am aware that running multiple backups would break the log chain in this case. If I stop doing backups in Central Administration and let only the database backups run, would I still be able to restore my SharePoint site (contents and configuration)?
SharePoint 2013 Backup and Restore from database side only
I would do something like this:
env = defined?(RAILS_ENV) ? RAILS_ENV : 'development'
config = YAML.load_file(File.join('config', 'database.yml'))[env]

Model.new(:my_backup, 'My Backup') do
  database MySQL do |db|
    config.each_pair do |key, value|
      db.public_send("#{key}=", value)
    end
    # ...
  end
end
I'm testing out the backup gem http://backup.github.io/backup/v4/utilities/ I understand that I've to create a db_backup.rb with the configuration, for example:
Model.new(:my_backup, 'My Backup') do
  database MySQL do |db|
    # To dump all databases, set `db.name = :all` (or leave blank)
    db.name = "my_database_name"
    db.username = "my_username"
    db.password = "my_password"
    db.host = "localhost"
    db.port = 3306
However I'm not able to find out how to get those details from the Rails database.yml. I've tried something like this:
env = defined?(RAILS_ENV) ? RAILS_ENV : 'development'
@settings = YAML.load(File.read(File.join("config", "database.yml")))
But I guess there should be a better way.
Using the backup gem how can I get database authentications details from rails database.yml